
The Dangers of Reusing Protobuf Definitions: Critical Code Execution in protobuf.js (GHSA-xq3m-2v4x-88gg)

We found a critical remote code execution vulnerability in protobuf.js, a core building block for anything that speaks Protocol Buffers from JavaScript, directly or through gRPC, Firebase, and most cloud SDKs. Here's how it works, the risk, and how to fix it.

Written by
Cris Staicu
Published on
April 17, 2026
Updated on
April 17, 2026

Executive Summary

Endor Labs researchers discovered a critical vulnerability in protobuf.js, the most widely used JavaScript runtime for Protocol Buffers, a data format used by millions of applications to exchange information, including services built on Google Cloud, Firebase, and most modern cloud platforms. The protobuf.js package is downloaded roughly 52 million times per week and is often installed as a hidden dependency of other popular libraries, meaning many development teams ship it without realizing it.

Exploitation requires an attacker to supply a malicious configuration file (a protobuf schema) to the target application, a precondition that sounds narrow but is common in practice: applications routinely load these files from shared registries, partner integrations, and third-party servers. Once a poisoned file is in memory, exploitation is trivial: the first message the application processes triggers the payload, with no authentication or user interaction required.

Key takeaways

  • The vulnerability (GHSA-xq3m-2v4x-88gg) has a critical severity score (CVSS 9.4) and affects protobufjs ≤ 8.0.0 on the 8.x line and ≤ 7.5.4 on the 7.x line.
  • No exploitation has been publicly reported, but the attack is straightforward.
  • Patches are available in 8.0.1 and 7.5.5.
  • Organizations should upgrade immediately and audit transitive dependencies, especially via @grpc/proto-loader, Firebase, and Google Cloud SDKs.
  • This is part of a broader pattern where legitimate developer tools become code-execution primitives when fed hostile input. Organizations should treat schema-loading endpoints and .proto files with the same rigor as executable code.

Affected Versions

| Package Name | Advisory | Affected Versions | Published (UTC) | Status | Severity |
|---|---|---|---|---|---|
| protobufjs (8.x) | GHSA-xq3m-2v4x-88gg | ≤ 8.0.0 | Apr 16, 2026 | Patched | Critical |
| protobufjs (7.x) | GHSA-xq3m-2v4x-88gg | ≤ 7.5.4 | Apr 16, 2026 | Patched | Critical |

Upgrade with npm install protobufjs@^8.0.1 or npm install protobufjs@^7.5.5 depending on your major line. npm ls protobufjs will surface transitive pulls.

Disclosure Timeline

| Date | Event |
|---|---|
| Mar 2, 2026 | Vulnerability reported to the protobuf.js maintainers |
| Mar 9, 2026 | Issue confirmed by maintainers |
| Mar 11, 2026 | Fix committed and first patched release published on GitHub |
| Apr 4, 2026 | Fixed 8.x release (8.0.1) published to npm |
| Apr 15, 2026 | Fixed 7.x release (7.5.5) published to npm |
| Apr 16, 2026 | GitHub Security Advisory GHSA-xq3m-2v4x-88gg published |

Introduction

Protocol Buffers are the default serialization format across large parts of modern infrastructure: gRPC services, mobile APIs, telemetry pipelines, and internal RPC. protobuf.js is the JavaScript runtime most of those workloads rely on when they need to speak protobuf from Node.js or the browser. The package has around 52 million weekly downloads on npm and 10k GitHub stars. It is pulled in, directly or transitively, by @grpc/proto-loader, Firebase SDKs, Google Cloud client libraries, and a long tail of observability and data-plane tooling. If you have a node_modules tree with anything cloud‑adjacent, you almost certainly ship protobufjs.

The library's value proposition is runtime dynamism: give it a .proto file or a JSON descriptor and it will synthesize constructors, encoders, and decoders on the fly. That dynamism is also exactly the problem.

This vulnerability is not a supply-chain attack against protobuf.js itself; the package is legitimate and maintained by developers affiliated, now or in the past, with Google. It is a vulnerability in how protobuf.js processes the data developers feed it. And as we'll argue below, this class of bugs, dev-tool-as-code-execution-primitive, represents an emerging threat model that the ecosystem has been slow to internalize.

The vulnerability

protobuf.js does not interpret protobuf schemas; it compiles them. For every message type, the library assembles a small string of JavaScript source and hands it to the Function constructor, which parses and executes it as fresh code. This is a common performance trick: generated code is faster than a generic interpreter walking a schema at runtime.

The helper that turns schema fragments into executable source lives in lib/codegen/index.js. Its assembly step is a single concatenation:

```js
function toString(functionNameOverride) {
    return "function " + (functionNameOverride || functionName || "") + "(" +
        (functionParams && functionParams.join(",") || "") + "){\n  " +
        body.join("\n  ") + "\n}";
}
```

There is no AST, no escaping pass, and no identifier validator. functionName is interpolated verbatim between "function " and "("; whatever string lives in that slot becomes the identifier of a live JavaScript function declaration. The resulting source is then compiled and invoked:

```js
return Function(source)(); // eslint-disable-line no-new-func
```

Function(source) is essentially eval() by another name: the argument string is parsed and executed as fresh JavaScript. It is exactly the pattern ESLint's no-new-func rule is designed to flag: the rule's own documentation calls the Function constructor "similar to eval()" and warns that it "introduces potential security risks." Both Function(...) call sites in codegen/index.js carry `// eslint-disable-line no-new-func`, and we suspect those disables are a structural reason the bug went unnoticed for so long: subsequent readers saw the opt-out and trusted the earlier review, instead of tracing every string that flows into the call. The ESLint documentation recommends treating eslint-disable comments on security-sensitive rules as important discussion points during code review or audit.

The untrusted input reaches this machinery through src/type.js. Type.generateConstructor calls the codegen helper with the message type's *name* as the function identifier, i.e., the functionName that lands in the "function " + functionName + "(" slot above. Before the fix, that name came straight from reflection metadata, which for schemas loaded from JSON traces back to the attacker-supplied descriptor: no escaping, no validation. If a type is named User, the generated source is unremarkable. If a type is named User){process.mainModule.require("child_process").execSync("id");function x(, the "name" closes the synthetic signature, inlines the payload, and starts a throwaway function to balance the trailing brace. Function happily parses and runs the whole thing, executing the attacker-controlled command id under the hood.
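
The injection pattern can be shown in isolation. The following is a minimal sketch of the vulnerable shape, not protobuf.js's actual codegen (the function name, payload, and `pwned` flag are ours); it shows how an unvalidated identifier escapes the generated declaration:

```js
// Hypothetical stand-in for the codegen template: plain concatenation,
// no escaping, no identifier validation.
function buildConstructorSource(typeName) {
  return "function " + typeName + "(p){\n  this.p = p;\n}";
}

// A well-behaved name produces unremarkable source:
buildConstructorSource("User"); // "function User(p){ ... }"

// A hostile "name" closes the declaration, smuggles in a statement, and
// opens a throwaway function to absorb the trailing tokens:
const evilName = 'User(){}; globalThis.pwned = true; function x';
const source = buildConstructorSource(evilName);

globalThis.pwned = false;
Function(source)();            // the eval-equivalent step from codegen
console.log(globalThis.pwned); // true — the injected statement ran
```

The injected payload here just flips a flag; in the real exploit the same slot holds a child_process invocation.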

Crucially, the constructor is generated lazily. Loading the malicious descriptor with Root.fromJSON does not, on its own, fire the payload; the Function call happens the first time the application needs the type's constructor, which in practice is the first time it decodes (or encodes, or instantiates) a message of that type. This is why the vulnerability is a threat to running applications rather than to build pipelines. The advisory's minimal proof of concept makes this explicit: load the descriptor, look up the outer type, pass a short, completely benign buffer to .decode, and a child process running id appears. In a real deployment the triggering message is ordinary user traffic; the schema is the poisoned component.

The developer-centric threat model

The advisory's precondition sounds reassuring: you're only at risk if an attacker can influence the protobuf schema your application loads. In practice, the habit of reusing protobuf definitions makes that precondition easy to meet. Teams are actively encouraged to pull schemas from third parties: the Buf Schema Registry hosts versioned, dependency-managed .proto modules for consumption across organizations; googleapis/googleapis and the well-known types in protocolbuffers/protobuf are vendored into countless repositories; envoyproxy/envoy/api and cncf/xds are the de-facto control-plane schemas for service meshes. Reuse is the explicit design point. The attack surface opens the moment any of those sources, or the pipeline by which they reach your service, can be influenced by someone outside your trust boundary.

  • Servers that self-describe over the wire. gRPC defines a standard protocol (grpc.reflection.v1.ServerReflection) in which a client asks a server for its own schema and receives FileDescriptorProto bytes back. Every major gRPC explorer, grpcurl, evans, buf curl, BloomRPC, Kreya, Postman's gRPC mode, uses this pattern, and in Node.js the pipeline is one function call wide: the official @grpc/reflection client returns descriptor bytes, protobufjs.Root.fromDescriptor reconstructs the schema, and the first lookupType(...).decode(...) triggers the constructor codegen on a server-supplied type name. A rogue or compromised gRPC server can therefore hand a client executable code in the same round-trip it tells the client what its API looks like.   
  • Schema registries and internal marketplaces. Many engineering organizations run internal protobuf/Avro registries where any team can push types. If the trust boundary between "team that publishes a schema" and "team that consumes it" is weaker than the trust boundary between engineers (and it usually is), one compromised developer account becomes arbitrary code execution on every service that decodes traffic under that schema.
  • Dynamic gRPC proxies and protocol-aware gateways. Service meshes, API gateways, and request-replay/debugging tools accept .proto files so they can decode traffic on the fly. A poisoned schema turns "I'm decoding this vendor's messages" into "the vendor runs code in my gateway."
  • Multi-tenant SaaS that stores user schemas. Analytics platforms, iPaaS tools, observability backends, and data pipelines that let a customer upload a protobuf descriptor and then decode events against it are directly exposed: one malicious descriptor plus the customer's own ordinary event stream yields RCE on the ingestion tier.
  • Reverse-engineering and interop work. A developer talking to a third-party gRPC service whose .proto files aren't public grabs a copy from a gist, a vendor portal, or a Slack DM and starts decoding captured traffic locally. The same bug fires on the developer's machine the moment they decode a sample message.

If your service only loads strictly trusted, version-controlled .proto files that your own team wrote and keeps in its own repository, you are outside the attack model. If any path loads schemas at runtime from somewhere humans can influence, which is the entire reason Root.fromJSON exists, you are inside it.

The cultural assumption around schemas, config files, OpenAPI specs, and similar artifacts is that they are inert descriptions. Developers vet JavaScript dependencies with some skepticism; almost no one runs SCA against a .proto file or peer-reviews the x-enumDescriptions block in a vendor's OpenAPI document. The moment a dev tool compiles, generates code from, or interprets that data in a non-trivial way, the trust model needs to be the same as for executable code. protobuf.js is an unusually pure example because the compilation step is literal Function-constructor source assembly, but the pattern data input → dynamic code generation → execution is the rule, not the exception, in modern developer tooling.

Dev tools are the new attack surface

Attackers have been aggressively targeting and monetizing developer tooling for years. Two categories are worth separating:

  1. Supply-chain attacks against the tools themselves. The canonical example is the eslint-scope incident of July 2018: an ESLint maintainer's npm account was taken over (credential reuse, no 2FA), and eslint-scope@3.7.2 was published with a postinstall script that read ~/.npmrc, exfiltrated the npm auth token, and set up the next stage of token-harvesting package hijacks. Every developer who ran npm install in the wrong hours had their npm credentials stolen. The incident became the reason npm 2FA, provenance, and scoped tokens are now industry defaults. Similar patterns have recurred: ua-parser-js, coa, node-ipc, the ongoing @ctrl/tinycolor and `shai-hulud` style worms, always with the same leverage: own a tool, own everyone who builds with it.
  2. Code injection in tools that process "trusted" inputs. This is the category protobuf.js belongs to, and it's the one that gets less airtime. Two very recent cases rhyme with it almost exactly:
  • GHSA-mwr6-3gp8-9jmj (Orval, CVE-2026-22785, CVSS 9.3). Orval is an OpenAPI-to-TypeScript client generator. Its MCP-server codegen path concatenated the OpenAPI summary field into generated code without escaping. A summary like Finds Pets by status.' + require('child_process').execSync("open -a Calculator").toString(),// broke out of the string literal and landed arbitrary JavaScript inside the developer's generated client. Run orval against a hostile spec in CI and you are executing vendor‑supplied code on your build runner.
  • GHSA-h526-wf6g-67jv (Orval, CVE-2026-23947, CVSS 9.3). A sibling sink the first patch missed: getEnumImplementation() embedded x-enumDescriptions into generated const enum comments without escaping. "pwned */ require('child_process').execSync('id'); /*" closed the comment and again produced executable output. Fixed in 7.19.0 and 8.0.2.
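
The comment-breakout shape from the second advisory can be reproduced in miniature. This is our own reconstruction, not Orval's actual template (Orval writes generated source to files rather than calling Function; we use Function here only to show that the emitted text executes):

```js
// Hypothetical stand-in for a codegen template that embeds an untrusted
// description inside a block comment without escaping:
function emitWithComment(description) {
  return "/* " + description + " */ const generated = 1;";
}

// A hostile description closes the comment, injects a statement, and
// reopens a comment to swallow the template's own closing "*/":
const hostile = "pwned */ globalThis.commentPwned = true; /*";

globalThis.commentPwned = false;
Function(emitWithComment(hostile))(); // the emitted text runs as code
console.log(globalThis.commentPwned); // true
```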

Different package, different field, same shape as protobuf.js: an untrusted string concatenated into a template that will later be executed as code. The dev tool ecosystem is full of these sinks because so much of it lives in the space where "data" and "code" overlap: template engines, codegen, schema compilers, linters with AST rewriters, notebook kernels, LSP servers, build plugins.

What connects these cases is less "where the code runs" and more "who controls the input that ends up as code." Orval and eslint-scope compromise happens at build time, on developer machines and CI runners, i.e., hosts loaded with SSH keys, cloud credentials, signing material, and package publishing tokens. protobuf.js compromise happens at runtime, inside services that speak to customers and partners, hosts loaded with user data, service tokens, and access to internal systems. Both surfaces matter, and both are reached through the exact same design pattern: a trusted library composing code from untrusted text.

The fix

The upstream patch is intentionally small. Pull request #2127 (Fix commit: 535df44) adds a single line in the Type constructor in src/type.js:

```js
name = name.replace(/\W/g, "");
```

Non-word characters, i.e., everything outside [A-Za-z0-9_], are stripped before the name is used anywhere in the codegen path. Parentheses, braces, semicolons, quotes, whitespace: all the tokens an attacker needs to close the synthetic function signature are simply filtered out. The commit message notes, correctly, that there is no legitimate reason for a protobuf type name to contain characters outside the alphanumeric set.
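
The effect of the filter is easy to see in isolation (the variable names below are ours, and the hostile name is an illustrative stand-in for the advisory's payload):

```js
// The one-line upstream filter: strip everything outside [A-Za-z0-9_].
const sanitize = (name) => name.replace(/\W/g, "");

sanitize("User"); // "User" — legitimate type names pass through unchanged

// A hostile name loses every token it needs to escape the identifier slot:
const evil = 'User(){}; globalThis.pwned = true; function x';
console.log(sanitize(evil)); // "UserglobalThispwnedtruefunctionx"
```

The result is an ugly but harmless identifier; the generated declaration can no longer be closed early.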

A safer fix would be to stop round-tripping attacker-reachable identifiers through Function at all: emit code that looks types up in a table keyed by sanitized identifiers, rather than splicing names into source. That is a larger change, and the one-line filter is correct as a conservative remediation.
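
A minimal sketch of that dispatch-table idea, under our own assumptions rather than anything in the patch: the generated source contains only a trusted numeric index, so even a hostile type name stays inert data:

```js
// Hypothetical alternative: generated code never contains the type name,
// only an index into a registry that the closure captures.
function makeConstructor(typeName, registry) {
  const idx = registry.push(typeName) - 1;
  // only the trusted integer idx is spliced into source
  return Function("registry",
      "return function(p){ this.$type = registry[" + idx + "]; this.p = p; };"
  )(registry);
}

const registry = [];
const Evil = makeConstructor('User(){}; bad(); function x', registry);
const msg = new Evil(42);
console.log(msg.$type); // the hostile name is plain data, never parsed as code
```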

Impact

The impact is what you would expect from arbitrary code execution in a library this deep in the dependency graph, triggered on the production decode path:

  1. Full RCE on running services that decode messages against an attacker-influenced schema. Application servers, protocol-aware gateways, service-mesh sidecars, ingestion pipelines, and any worker that calls root.lookupType(...).decode(...) on a message it received.
  2. Credential and data exfiltration from the service process. The payload runs in the Node.js process and inherits its access to environment variables, service tokens, database connections, KMS-decrypted secrets, and in-memory user data.
  3. Lateral movement from the data plane. Services that decode partner or tenant traffic typically sit next to internal APIs, queues, and object stores; an RCE on the decoder is a pivot into the rest of the system.
  4. Developer-machine compromise during interop work. Engineers who load third-party .proto files locally to debug traffic are exposed the moment they decode a sample message, an important but narrower case than the production path.

What you should do

  1. Upgrade. protobufjs ≥ 8.0.1 on the 8.x line, ≥ 7.5.5 on the 7.x line. Check transitive pulls too; many applications don't import protobufjs directly but pick it up through @grpc/proto-loader, firebase, or cloud SDKs. npm ls protobufjs and npm audit will both surface it. Even better, use reachability analysis to see if this package is actually used in your application.
  2. Treat schema-loading endpoints as untrusted input surfaces. Root.fromJSON, load, and parse on arbitrary bytes are equivalent, from a threat-model standpoint, to eval on arbitrary bytes. 
  3. Prefer precompiled static artifacts in production. pbjs and pbts alternatives emit static code you can vet, commit, and ship. If your production request path doesn't need to compile descriptors at runtime, don't give attackers a reason to reach that code.
  4. Pin and verify the schemas you reuse. If you pull definitions from Buf Schema Registry, googleapis/googleapis, or an internal registry, pin to immutable versions, verify checksums, and apply the same review discipline you'd apply to a new npm dependency. "It's just a .proto" is the attacker's favorite assumption.
  5. Put dev tools in your SCA scope. Codegen tools, linters, formatters, build plugins, and language servers are production dependencies for the parts of your company that produce production dependencies. They deserve the same patching discipline as your web framework, even when, as here, the exploitation happens at runtime rather than at build time.
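
Item 2 above can be backed by a simple pre-load check. The following is a defense-in-depth sketch of our own (assertSafeNames is not a protobuf.js API): walk a descriptor's JSON form and reject any type or field name that is not a plain identifier before Root.fromJSON ever sees it:

```js
// Accept only names the patched library would also leave untouched.
const IDENT = /^[A-Za-z_][A-Za-z0-9_]*$/;

function assertSafeNames(node, path = "") {
  for (const [name, child] of Object.entries(node.nested || {})) {
    if (!IDENT.test(name)) {
      throw new Error("suspicious type name: " + path + name);
    }
    for (const field of Object.keys(child.fields || {})) {
      if (!IDENT.test(field)) {
        throw new Error("suspicious field name: " + path + name + "." + field);
      }
    }
    assertSafeNames(child, path + name + "."); // recurse into nested types
  }
}

// A benign descriptor passes silently:
assertSafeNames({ nested: { User: { fields: { id: { type: "int32", id: 1 } } } } });

// A poisoned one is rejected before any codegen can see it:
let rejected = false;
try {
  assertSafeNames({ nested: { 'User(){}; function x': { fields: {} } } });
} catch (e) {
  rejected = true;
}
console.log(rejected); // true
```

This does not replace upgrading; it is a second gate for code paths that must load externally supplied descriptors.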

Takeaways for developers

  1. Data-as-code is the default in modern tooling. If your tool interpolates a string into generated source, evaluates an expression language, or compiles a schema into a closure, the "data" your users feed you is source code with extra steps. Design and threat-model accordingly.
  2. Supply-chain attacks are not the only threat to developer tools. Watching for compromised npm packages is table stakes. The next wave, visible in Orval, in protobuf.js, and in the expression-sandbox escapes we keep seeing in template engines, is attackers exploiting *legitimate* tools through the data those tools accept.
  3. Codegen sinks are nearly always sibling-prone. The Orval advisories (GHSA-mwr6-3gp8-9jmj and GHSA-h526-wf6g-67jv, with the second patching a sink the first missed) are a pattern: once one field is found to flow into generated source unescaped, there are almost always more. 
  4. Prefer structural escaping over blacklists. A regex filter is the right short-term fix for protobuf.js. The right long-term fix is to stop templating attacker‑controlled strings into source at all and instead emit code that looks them up via a safe dispatch table.
  5. Treat eslint-disable on security-sensitive rules as an audit flag, not a green light. `no-new-func`, `no-eval`, `security/detect-*`, and their peers exist because those APIs are dangerous. When you see `// eslint-disable-line no-new-func`, the question to ask is not "did a reviewer accept this?" but "where does every string that reaches this call come from, across the whole call graph?" protobuf.js survived two such disables on its two most dangerous call sites for years. CI-enforced review of suppression comments catches exactly this pattern.
  6. The trigger site for dev-tooling bugs is not always the build server. protobuf.js compiles schemas into live code, but it does so lazily, in production, the first time an application decodes a message. Orval and eslint-scope hit at build time; this one hits at request time. Threat-model each dev-ecosystem dependency by asking both "when is the vulnerable code actually run?" and "who controls the inputs at that moment?"; the answers are not always the obvious ones.

Conclusion

protobuf.js is an excellent library with a subtle trap: the same dynamic-compilation machinery that makes it fast is a hair's breadth from executing attacker-supplied text. GHSA-xq3m-2v4x-88gg is fixed by restricting the names in protobuf definition files, but the lesson is broader. Every developer tool that compiles, generates, or evaluates "data" is a code-execution surface. eslint-scope showed us what compromised developer tools can do; Orval and now `protobuf.js` show us what benign developer tools can do when fed hostile input. Both sides of that threat model need to be part of how teams evaluate the tooling in their build pipelines and in their node_modules.

If you ship protobuf schemas dynamically, patch today, move toward precompiled static artifacts, and start treating your .proto files, and every other "just a config" file in your stack, with the same care you give the JavaScript you run.

References

  1. GitHub Security Advisory GHSA-xq3m-2v4x-88gg, https://github.com/protobufjs/protobuf.js/security/advisories/GHSA-xq3m-2v4x-88gg
  2. Fix commit 535df44, https://github.com/protobufjs/protobuf.js/commit/535df444ac060243722ac5d672db205e5c531d75
  3. Upstream pull request #2127, https://github.com/protobufjs/protobuf.js/pull/2127
  4. Issue #2124, https://github.com/protobufjs/protobuf.js/issues/2124
  5. GHSA-mwr6-3gp8-9jmj (Orval MCP codegen injection, CVE-2026-22785), https://osv.dev/vulnerability/GHSA-mwr6-3gp8-9jmj
  6. GHSA-h526-wf6g-67jv (Orval enum-description codegen injection, CVE-2026-23947), https://osv.dev/vulnerability/GHSA-h526-wf6g-67jv
  7. eslint-scope npm supply-chain incident (2018), postmortem, https://eslint.org/blog/2018/07/postmortem-for-malicious-package-publishes/
  8. CWE-94: Improper Control of Generation of Code, https://cwe.mitre.org/data/definitions/94.html
