TL;DR
Happy-dom is a popular headless browser, widely used by developers for crawling tasks or for server-side rendering in Node.js. A recent critical vulnerability (CVE-2025-61927) shows that this package is vulnerable to remote code execution when used for rendering untrusted web pages.
Concretely, malicious JavaScript code can bypass the insufficient isolation deployed by happy-dom to escape the sandbox and run arbitrary commands. As a mitigation, the maintainer of the package recommends its users to avoid running untrusted JavaScript altogether or to disable primitives for dynamic code execution, e.g., eval or Function(), to prevent any sandbox escape attempts.
The Endor Labs Research Team reviewed this recommendation and found it insufficient, because the untrusted code can still use the weak sandbox to deploy prototype pollution payloads and tamper with the JavaScript builtin objects in the global scope, subsequently achieving unwanted effects such as DoS or RCE.
We disclosed this finding to the maintainer, who issued a follow-up advisory (CVE-2025-62410) with a recommendation that prevents such attacks. Below, we discuss the initial vulnerability, the background for the attack we proposed, the payload, and the improved recommendation.
The initial finding
In most practical scenarios, ensuring that all the JavaScript code passed to happy-dom is benign is a hard-to-fulfill requirement. Developers often use happy-dom to run crawling tasks involving untrusted code or to server-side render partially-trusted React components. Hence, in such cases, happy-dom operates under an adversarial threat model, i.e., the executed code must be considered untrusted.
However, happy-dom’s implementation relies on Node.js’ builtin module vm, which is clearly marked as “not a security mechanism” and should never be used to run untrusted code. The recently-published advisory contains a simple PoC, illustrating how an attacker can leverage holes in the vm isolation to run arbitrary code:
const { Window } = require('happy-dom');
const window = new Window({ console });
window.document.write(`
<script>
const process = this.constructor.constructor('return process')();
const require = process.mainModule.require;
console.log('Files:', require('fs').readdirSync('.').slice(0,3));
</script>
`);
This snippet shows that, when run with happy-dom, untrusted web code can obtain access to Node.js’ require method and thereby reach the file system and other powerful APIs. To do so, the code abuses a reference to the Function constructor obtained from the “this” object to create and invoke a function outside the vm and subsequently grab a reference to the “process” object. Similar attacks led to the deprecation of vm2, a popular npm sandbox, and multiple other advisories in language-based components that claim to safely isolate untrusted code.
In response to the disclosed vulnerability, the maintainer disabled JavaScript execution for loaded pages by default and recommends running Node.js with the "--disallow-code-generation-from-strings" flag to “make sure that evaluation can't be used at process level to escape the VM”. Essentially, the recommendation prevents the above payload by disabling access to the Function constructor. As mentioned above, we find this mitigation insufficient: it still allows sophisticated prototype pollution payloads that can also lead to arbitrary code execution.
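For reference, here is a minimal sketch of what this flag changes; the exact error message may vary across Node.js/V8 versions:
// Run as: node --disallow-code-generation-from-strings demo.js
try {
  const fn = new Function('return process'); // same trick as in the PoC above
  console.log(fn());
} catch (err) {
  // With the flag set, V8 raises an EvalError along the lines of
  // "Code generation from strings disallowed for this context"
  console.log('Blocked:', err.message);
}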
Isolating untrusted JavaScript
JavaScript is an essential part of web browsers, and its first use case as a language was to allow untrusted web code to run safely on the user's machine without compromising it, e.g., by tampering with the file system. There are multiple security mechanisms used in modern browsers to isolate dangerous scripts downloaded from the internet, from low-level hardware primitives that prevent writes to certain memory locations to software solutions that confine the code’s side effects.
With the advent of Node.js, JavaScript code was given unprecedented powers like directly spawning new processes or writing files on the disk. Since the use cases for Node.js are very different from those of a browser, e.g., building server-side or standalone applications, its developers often assume that the entire code executed in this runtime is trustworthy (see the Node.js threat model).
In practice, there are multiple use cases for running untrusted JavaScript code in Node.js, and the case of happy-dom is but one of them. Other examples are execution of user-provided plugins, evaluation of smart contracts, or AI-generated code. In all these cases, developers need a way to isolate the untrusted code and restrict its access to (most of) the powerful Node.js APIs. Essentially, they need a sandbox with an associated policy that specifies which resources the untrusted script is allowed to access.
Even though Node.js and other server-side runtimes build on top of JavaScript engines that offer powerful isolation primitives like Isolates in V8, they often do not expose these primitives to the JavaScript world. To add insult to injury, Node.js offers a builtin module called “vm” for running code in a separate context/global scope. The documentation of this module uses confusing security-charged terminology like “virtual machine”, “sandbox”, “isolated scope”, even though its developers never intended it to be a security primitive. This led to a lot of confusion in the ecosystem, and open-source packages either treated this module as a security sandbox or tried to build one using the vm module as a building block. Both approaches are discouraged by the Node.js maintainers, who felt compelled to add a warning about this in version 10 of the runtime and highlight it with bold, red font in version 16: “The vm module is not a security mechanism. Do not use it to run untrusted code.”
Primer on prototype pollution
JavaScript allows arbitrary changes to builtin objects like Object, Math, or Array. Most objects in the runtime, including these important builtins, inherit (transitively) from a root prototype referred to as Object.prototype. During a property lookup, the runtime first attempts to access the desired property on the target object, and in case it is not present, it will attempt a lookup in the object’s prototype, its prototype’s prototype, and so on, traversing the so-called prototype chain.
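For illustration, a short snippet showing this lookup in plain JavaScript:
const obj = { a: 1 };
console.log(obj.a);          // 1 — found directly on the object
console.log(obj.toString()); // "[object Object]" — resolved on Object.prototype via the prototype chain
console.log(Object.getPrototypeOf(obj) === Object.prototype); // true — obj inherits from the root prototype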
There are two main ways of accessing an object’s prototype: via the “__proto__” property available on most objects, or via the “prototype” property of constructor functions. In a prototype pollution attack, adversaries attempt to abuse nested property accesses in benign code to modify properties on the (root) prototype, and hence pollute multiple objects in the runtime with undesired properties:
benignObj[prop1][prop2] = value
// prototype pollution payload: prop1="__proto__", prop2="foo", value=23
In this basic example, the attacker can set a property called “foo” with value 23 on all objects inheriting from the root prototype. Attackers can also overwrite important builtin methods like toString, causing the application to malfunction. Since this is one of the simplest examples of a pollution payload, prototype pollution was for a long time considered a medium-severity vulnerability that is hard to exploit for anything other than crashes.
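To make the effect concrete, here is a minimal, runnable sketch of such a payload in plain Node.js; the object and property names are hypothetical:
const benignObj = {};
// attacker-controlled keys and value, e.g., parsed from a crafted JSON payload
const prop1 = '__proto__';
const prop2 = 'foo';
const value = 23;

benignObj[prop1][prop2] = value; // writes to Object.prototype, not to benignObj

console.log({}.foo);                          // 23 — a completely unrelated object now "has" foo
console.log(benignObj.hasOwnProperty('foo')); // false — the property lives on the prototype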
However, recent academic work shows that this can often lead to remote code execution or code reuse attacks via prototype pollution gadgets. The main insight is that attackers can pollute specific properties that trigger unexplored paths in the code:
if (input.foo) { // force this unexplored branch
} else {}
or control parameters that reach injection-prone APIs like:
let filename = input.foo || "foo.txt"
exec(`touch ${filename}`) // control the argument to exec via an undefined property
Considering these risks, prototype pollution is often considered a severe vulnerability today, if attackers can trigger it in remote contexts without restrictions. Moreover, JavaScript runtimes like Node.js attempt to remove prototype pollution gadgets from their code base and hence limit the easiest exploitation techniques.
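Putting the two pieces together, here is a hypothetical sketch of how a polluted property can flow into such a gadget; the function and property names are made up for illustration:
const { exec } = require('child_process');

// Hypothetical application code: "filename" falls back to a default when the
// caller does not provide one, making it reachable through prototype pollution.
function touchReport(options) {
  let filename = options.filename || 'report.txt';
  exec(`touch ${filename}`); // gadget: the value ends up inside a shell command
}

// Somewhere else, attacker-controlled input triggers a pollution primitive...
Object.prototype.filename = 'report.txt; id > /tmp/pwned';
// ...and a later, seemingly harmless call picks up the polluted value.
touchReport({}); // runs: touch report.txt; id > /tmp/pwned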
Exploiting the vm loophole with a pollution payload
Node.js’ vm module was only intended for lightweight isolation of benign code and to prevent unintended side-effects in the global scope. Thus, there are ways to escape this confinement and get access to the root prototype of the underlying context:
const vm = require('vm');
vm.runInNewContext(`
this.constructor.constructor.__proto__.__proto__.foo = 23;
Object.prototype.foo = 12;`);
console.log({}.foo); // will print 23
Explicit attempts to set properties of the root prototype inside the vm will only pollute objects from that vm. However, malicious code running inside the vm can abuse references reachable from the “this” object to escape the isolation and pollute the underlying context/scope.
There are also non-trivial payloads like in the case of CVE-2021-23555 for the vm2 package, in which references to exception objects can be used to escape the vm. Such payloads led to the deprecation of at least two JavaScript sandboxes built on top of the vm module: vm2 and realms-shim.
Deployed patch and lessons learned
Let us now revisit the initial CVE-2025-61927 published for happy-dom. Since the package is built on top of Node.js’ vm module, the example PoC discussed at the beginning of this article shows how malicious code can obtain a reference to the Function constructor in the global scope and further access arbitrary Node.js builtins like “fs”. The proposed fix was to either avoid running JavaScript code inside happy-dom, or disable APIs for dynamic code execution like the Function constructor using a Node.js command line flag. While this prevents the arbitrary code execution payload above, it does not prevent prototype pollution payloads like the following:
import { Browser } from "happy-dom";
const browser = new Browser({settings: {enableJavaScriptEvaluation: true}});
const page = browser.newPage({console: true});
page.url = 'https://example.com';
page.content = `<html>
<script>
this.constructor.constructor.__proto__.__proto__.toString = () => {};
</script>
<body>Hello world!</body></html>`;
await browser.close();
console.log(`The process object is ${process}`);
With this payload, one can overwrite the global toString method and cause a DoS. We reported this payload to happy-dom’s maintainer, who was very responsive and eager to work with us. The resulting advisory CVE-2025-62410 shows a more sophisticated payload, in which the prototype pollution can be used for arbitrary code execution.
The maintainer updated the code and documentation to inform users about this pollution risk, and recommended an additional Node.js command line flag, “--frozen-intrinsics”, that prevents the alteration of builtins like the root prototype. While this fix prevents pollution attempts, we still believe the malicious code might be able to read global values from the underlying context's global scope or abuse side channels. We recommended migrating to a secure sandbox instead, but the maintainer declined this proposal because of the amount of refactoring needed. Thus, as an extra precaution, we advise deploying additional isolation around happy-dom, e.g., running it in a Docker container or a virtual machine, when using it for scraping untrusted content.
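For reference, a hardened invocation combines both flags, e.g., node --disallow-code-generation-from-strings --frozen-intrinsics app.js (where app.js is a placeholder); below is a quick sketch of what freezing the intrinsics changes:
// Run as: node --frozen-intrinsics demo.js (an experimental Node.js flag)
try {
  Object.prototype.toString = () => {}; // the pollution attempt from the payload above
} catch (err) {
  // strict-mode code would throw here, since the root prototype is frozen at startup
}
console.log(typeof {}.toString); // "function" — the builtin was not overwritten
console.log({}.toString());      // "[object Object]" — still the original implementation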
More generally, for anyone out there using Node.js’ vm module to run untrusted code, we recommend migrating to a purpose-built security sandbox like isolated-vm. This package exposes V8’s underlying Isolate primitive, which is also used for sandboxing untrusted scripts in the Chrome browser; hence, it is a more reliable isolation solution, well-tested in billions of daily browser sessions.
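For orientation, here is a minimal sketch of evaluating untrusted code with isolated-vm; the option values below (memory limit, timeout) are illustrative assumptions, not a drop-in replacement for happy-dom:
const ivm = require('isolated-vm');

async function runUntrusted(code) {
  const isolate = new ivm.Isolate({ memoryLimit: 32 }); // separate V8 heap with a 32 MB cap
  const context = await isolate.createContext();        // fresh global scope inside the isolate
  const script = await isolate.compileScript(code);
  return script.run(context, { timeout: 100 });         // bound execution time as well
}

(async () => {
  // The untrusted code runs in its own realm: it cannot reach the host's
  // Object.prototype, require(), or process.
  const result = await runUntrusted('Object.prototype.foo = 23; 1 + 1');
  console.log(result);  // 2
  console.log({}.foo);  // undefined — the host prototype is untouched
})();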
Another worth-mentioning sandboxing effort is Secure ECMAScript (SES), a Stage 1 proposal for adding a primitive for running untrusted code to modern JavaScript runtimes. There is an open-source implementation of this proposal, developed and maintained by Agoric and based on years of research in JavaScript isolation. The main insights of this proposal are to freeze all the intrinsic objects in the global scope and to disallow dangerous language constructs and primitives, similar to the recommendation in happy-dom. Both sandboxing solutions are worth considering: isolated-vm provides stronger isolation but struggles to support arbitrary open-source libraries, while Secure ECMAScript excels at integrating third-party code at the cost of more fragile isolation.
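And a correspondingly minimal sketch with the open-source SES implementation (the "ses" package on npm), assuming its documented lockdown()/Compartment usage:
// Installs lockdown() and Compartment as globals
require('ses');
lockdown(); // freezes all intrinsics, including Object.prototype

const compartment = new Compartment();
// The pollution attempt either throws or silently fails, depending on mode
compartment.evaluate('try { Object.prototype.toString = () => {}; } catch (e) {}');
console.log({}.toString()); // "[object Object]" — the builtin survived the attempt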
Overall, this case study shows once again the difficulty of isolating untrusted JavaScript code and the dangers of false expectations when using a third-party package or built-in API, under a malicious threat model.