AI coding agents have evolved from assistants into semi-autonomous systems that write code, install dependencies, and interact with external services. Their reach now extends beyond the code itself into the surrounding systems that support it, including filesystems, packages, databases, and CI pipelines. This shift has expanded the attack surface in ways traditional security tools were never designed to handle.
When something goes wrong in the development environment today (a malicious package installed by an agent, a destructive command executed in an unattended session, a compromised extension redirecting agent behavior), you find out after the fact, if you find out at all. There's no alert, no audit trail, no policy that fired. Source code is not the only attack vector; so is every system that goes into producing it.
To secure this new layer of risk, Endor Labs is launching Agent Governance and Package Firewall, new capabilities built on AURI to protect against the next wave of software supply chain attacks targeting agentic development environments. These capabilities work as an independent, verifiable, and configurable security layer across the different agent harnesses used in the enterprise today, including Cursor, Claude Code, Google Antigravity, and others.
"Enterprise engineering and security leaders are not asking whether to adopt agentic coding; they are asking how to do it at scale with the trust the business requires,” said Brian McCarthy, President, Global Revenue and Field Operations, Cursor. “We have invested heavily in tools, security controls, and governance, along with partnerships, including with Endor Labs, that let security teams see what every agent is doing, enforce policies across workstations, and ship with confidence. The result is developers moving at full speed with the guardrails enterprises need.”
The rise of software supply chain attacks
Our recent research into malware in open source ecosystems found a 14x increase in malware advisories over the last two years. In 2025 alone, 92% of all npm maintainer account takeovers ever recorded took place — a signal that attackers have identified package registries as their highest-leverage target, and AI coding agents as the shortest path from registry to production.
Recent campaigns prove the point. Last week's software supply chain attack on the Bitwarden CLI targeted the CLIs for Claude Code, OpenAI Codex, and Gemini to exfiltrate developer secrets. We saw a similar pattern in last year's compromise of the Nx build system.
Monitoring and guardrails for AI coding agents
Agent Governance gives you visibility into the systems generating code: which coding agents developers are using, which models those agents call, and which MCP tools they interact with. It works across the different AI coding agents used in the enterprise today.
- Agent Layer: see the coding agents in use, along with version details and session counts. It also captures the accounts associated with these tools, making it easy to spot when developers use personal accounts instead of corporate ones, a common but often invisible compliance gap.
- Model Layer: see which models are used in the agent harness, and who is using them. Identify usage that falls outside your approved list of providers.
- Tool Layer: see every MCP server an agent connects to, local or remote, along with usage frequency, actions performed (like reading files or executing queries), and last activity. This makes it easy to understand which systems agents are touching and where to focus review.
- Skills Layer: see the skills in use across agents, how often they run, their associated risk scores, and the primary risk dimension (like instruction integrity or data protection). This makes it easy to identify high-risk patterns before they become an incident.

To ensure complete governance over agentic activity, Agent Governance follows the same flexible policy model as the existing AURI platform, giving you a single, centralized place to define, manage, and enforce policies across all agents. With built-in policies and support for custom regex-based rules, you can detect and control patterns in prompts and model outputs while continuously monitoring agent behavior.
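To make the idea concrete, here is a minimal sketch of how a regex-based rule could classify prompt or output text into block/alert/allow actions. The rule names, patterns, and actions are illustrative assumptions, not Endor Labs' actual policy syntax.

```python
import re

# Hypothetical policy rules; the patterns below are illustrative only.
POLICIES = [
    ("block", re.compile(r"(?i)aws_secret_access_key\s*[:=]\s*\S+")),  # secret leakage
    ("alert", re.compile(r"(?i)\b(ssn|social security number)\b")),    # possible PII
]

def evaluate(text: str) -> str:
    """Return the strictest action triggered by any matching rule."""
    actions = {action for action, pattern in POLICIES if pattern.search(text)}
    if "block" in actions:
        return "block"
    if "alert" in actions:
        return "alert"
    return "allow"

print(evaluate("aws_secret_access_key = AKIA123..."))  # block
print(evaluate("please summarize this SSN report"))    # alert
print(evaluate("refactor the parser module"))          # allow
```

A real engine would also record which rule fired and tie it back to the agent session, so every block or alert is auditable.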
"By leveraging integrations such as Model Context Protocol (MCP) servers, Endor Labs brings its security intelligence directly into the Gemini ecosystem, providing the real-time guardrails and attribution that allow Google Cloud customers to scale AI-native workflows securely,” said Vikas Anand, Director, Product Management, Google Cloud.
You can enforce controls or trigger alerts across key surfaces:
- Shell Commands: block destructive actions like rm -rf, sudo, or reverse shells, and alert on supply chain activity like git push or npm publish
- File Access: block access to sensitive files and directories (e.g., .env, .pem, .ssh/), including credentials, private keys, and configs; alert on changes to source code, configs, or dependencies
- MCP Tools: prevent dangerous queries like DROP or DELETE, and alert on potential data exposure in outputs
- Prompts: block prompt injection or secret leakage (e.g., API keys), and alert on prompts involving PII or sensitive data
- Skills: control which skills agents can use, detect risky behaviors, and audit usage across environments
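The shell-command surface above can be sketched as a pre-execution guard that checks each command an agent wants to run. The deny and alert patterns here are illustrative assumptions, not the product's actual rule set.

```python
import re

# Illustrative patterns only; a real deployment would use centrally managed policies.
DENY = [r"\brm\s+-rf\b", r"\bsudo\b", r"bash\s+-i\s+>&\s*/dev/tcp"]  # destructive / reverse shell
ALERT = [r"\bgit\s+push\b", r"\bnpm\s+publish\b"]                    # supply chain activity

def check_command(cmd: str) -> str:
    """Decide whether an agent-issued shell command runs, runs with an alert, or is blocked."""
    if any(re.search(p, cmd) for p in DENY):
        return "block"
    if any(re.search(p, cmd) for p in ALERT):
        return "alert"
    return "allow"

print(check_command("rm -rf /tmp/build"))      # block
print(check_command("npm publish --access"))   # alert
print(check_command("ls -la src/"))            # allow
```

Because the guard sits between the agent and the shell, it enforces the same policy whether the session is interactive or fully unattended.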
Blocking malicious packages at the source
Package Firewall leverages AURI's real-time analysis of newly uploaded packages across npm, PyPI, NuGet, Maven, and other ecosystems to block malicious and vulnerable open source packages before agents can pull them into workstations or CI pipelines. It works whether a developer or an agent requested the package, whether the request came from npm, PyPI, or another registry, and whether the package is a direct or transitive dependency.
The Hooks integration we shipped in December in partnership with Cursor was the first step. Package Firewall extends that same protection to nearly every agentic development environment, so the same guarantees hold whether an agent is pulling dependencies inside an agentic IDE like Cursor or Claude Code, in a CI pipeline, or from a developer's terminal.
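Conceptually, the firewall is a gate in front of every install request. The sketch below assumes a simple in-memory blocklist; in practice this stand-in would be a real-time malware and advisory feed, and the package names here are invented examples.

```python
# Hypothetical pre-install gate; BLOCKLIST is a stand-in for a real-time
# advisory feed, not an actual Endor Labs API.
BLOCKLIST = {
    ("npm", "evil-left-pad"),   # invented example of a known-malicious package
    ("pypi", "reqeusts"),       # invented example of a typosquat
}

def allow_install(registry: str, name: str) -> bool:
    """Return False if the requested package is flagged, regardless of
    whether a human or an agent made the request."""
    return (registry.lower(), name.lower()) not in BLOCKLIST

print(allow_install("npm", "left-pad"))    # True
print(allow_install("PyPI", "reqeusts"))   # False
```

The same check applies to transitive dependencies: the gate evaluates every package the resolver attempts to fetch, not just the one named on the command line.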

Getting started
Agent Governance and Package Firewall are available today. If you're evaluating how to secure AI coding agents, we'd like to show you what this looks like end-to-end. Book a demo, or start coding with AURI now.
What's next?
When you're ready to take the next step in securing your software supply chain, here are 3 ways Endor Labs can help: