Endor Labs Original Research

State of Dependency Management 2025

AI Coding Agents and Software Supply Chain Risk
By Henrik Plate and Luca Compagna

Download the report

Executive Summary

The State of Dependency Management 2025 reveals how AI coding agents and Model Context Protocol (MCP) servers are introducing a new layer of software supply chain risk as AI becomes an integral part of modern software development.

As developers increasingly rely on AI tools to accelerate coding, Endor Labs researchers found that these agents not only produce insecure code but also import vulnerable or non-existent open-source dependencies at scale.

For this year’s edition, we analyzed 10,663 GitHub repositories implementing MCP servers to build a comprehensive view of the MCP server landscape, and we performed large-scale testing of AI-generated dependency recommendations across major ecosystems, including PyPI, npm, Maven, and NuGet.

  • 49% of dependency versions imported by AI coding agents have known vulnerabilities.
  • 34% of dependency versions are hallucinated: they do not exist in actual package registries.
  • Only 1 in 5 dependency versions recommended by AI coding assistants is safe to use.
  • 3X improvement in dependency safety when AI agents are equipped with tools.

The research also highlights that while extending AI capabilities with tools shows promise, the MCP ecosystem itself remains immature:

  • 75% of MCP servers are built by individuals, often without enterprise-grade safeguards.
  • 41% lack any license information, limiting corporate adoption.
  • 82% of MCP servers use sensitive APIs that require careful security controls to avoid vulnerabilities.

Why This Matters

The report underscores a critical shift in modern AppSec: As AI coding agents become embedded in IDEs and development workflows, new types of dependencies are entering the software supply chain.

Unvetted AI-generated code and MCP integrations now represent new “links” in the dependency chain, expanding the attack surface beyond package managers or build pipelines.

Recommended Actions

Endor Labs recommends that enterprises treat AI-generated code as untrusted third-party input, enforcing the same code review, SAST/SCA scanning, and dependency governance controls applied to human-written code.

Organizations should:

  • Establish a strong prompt culture by training developers in secure, spec-driven prompting practices that align AI instructions with organizational standards and embed security requirements directly into development workflows.
  • Integrate security tools into AI workflows via MCP, enforcing safe dependency selection and vulnerability detection (a minimal verification sketch follows this list).
  • Vet MCP servers as part of the software supply chain, using allowlists and continuous monitoring for typosquatting or brand-jacking risks (see the allowlist sketch below).
  • Reinforce developer training and prompt integrity, ensuring AI-assisted development aligns with secure-by-design principles.
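
As one illustration of the "safe dependency selection and vulnerability detection" item above, the following Python sketch checks whether an AI-suggested package version actually exists on PyPI (catching hallucinated suggestions) and queries the public OSV.dev vulnerability database for known advisories. The function names and the example package are illustrative rather than part of the report's tooling, and equivalent checks would be needed for other ecosystems such as npm, Maven, and NuGet.

    # Illustrative pre-merge check for AI-suggested Python dependencies (sketch).
    # Uses the public PyPI JSON API and the OSV.dev query API; function names
    # and the example below are hypothetical, not Endor Labs tooling.
    import requests

    PYPI_URL = "https://pypi.org/pypi/{name}/{version}/json"
    OSV_QUERY_URL = "https://api.osv.dev/v1/query"

    def exists_on_pypi(name: str, version: str) -> bool:
        """Return True if this exact package version is published on PyPI."""
        resp = requests.get(PYPI_URL.format(name=name, version=version), timeout=10)
        return resp.status_code == 200

    def known_vulnerabilities(name: str, version: str) -> list[str]:
        """Return OSV advisory IDs that affect this PyPI package version."""
        resp = requests.post(
            OSV_QUERY_URL,
            json={"version": version, "package": {"name": name, "ecosystem": "PyPI"}},
            timeout=10,
        )
        resp.raise_for_status()
        return [v["id"] for v in resp.json().get("vulns", [])]

    def vet_suggestion(name: str, version: str) -> str:
        """Classify an AI-suggested dependency as hallucinated, vulnerable, or ok."""
        if not exists_on_pypi(name, version):
            return "hallucinated"  # the version is not in the registry at all
        vulns = known_vulnerabilities(name, version)
        return f"vulnerable ({', '.join(vulns)})" if vulns else "ok"

    if __name__ == "__main__":
        # Example: an old but real version with published advisories.
        print("requests 2.19.1 ->", vet_suggestion("requests", "2.19.1"))

A check like this can run in CI or be exposed to the coding agent itself as an MCP tool, so that unsafe or non-existent suggestions are rejected before they ever reach a lockfile.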

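The MCP vetting item above can likewise be approximated with a simple gate: only servers on a reviewed allowlist are accepted, and names that merely resemble an approved entry are flagged as possible typosquats or brand-jacking attempts. The allowlist entries and similarity threshold below are hypothetical examples; a production gate would also pin versions and verify publishers.

    # Minimal sketch of an MCP server allowlist with typosquat detection.
    # The approved names and the threshold are hypothetical examples.
    from difflib import SequenceMatcher

    APPROVED_MCP_SERVERS = {
        "github-mcp-server",
        "postgres-mcp-server",
        "internal-docs-mcp",
    }

    SIMILARITY_THRESHOLD = 0.85  # higher values mean fewer false positives

    def check_mcp_server(name: str) -> str:
        """Return 'approved', a typosquat warning, or 'unknown' for a candidate server."""
        if name in APPROVED_MCP_SERVERS:
            return "approved"
        for approved in APPROVED_MCP_SERVERS:
            ratio = SequenceMatcher(None, name.lower(), approved.lower()).ratio()
            if ratio >= SIMILARITY_THRESHOLD:
                return f"possible typosquat of '{approved}' (similarity {ratio:.2f})"
        return "unknown - requires manual review before use"

    if __name__ == "__main__":
        for candidate in ("github-mcp-server", "githb-mcp-server", "weather-mcp"):
            print(f"{candidate}: {check_mcp_server(candidate)}")
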
Prefer a printed copy?


Four Years of Dependency Research 

Over the past four years, Endor Labs’ State of Dependency Management reports have shown that dependencies are responsible for most of the code developers ship and most of the vulnerabilities they face.

About Endor Labs Research

Endor Labs’ Security Research team investigates how modern software is built—and how it breaks. Led by Henrik Plate, our team of six PhDs and security researchers is on a mission to help the industry understand and reduce risk across the entire software supply chain. From open-source dependency analysis to the rise of AI-generated code, Endor Labs research combines large-scale empirical data with hands-on experimentation to reveal how real development practices impact security outcomes. Each study aims to translate complex findings into practical guidance for engineering and AppSec teams, empowering organizations to build software that is both faster and safer by design.


This report was led by:

Henrik Plate
Head of Security Research
Luca Compagna
Security Researcher

AppSec for The Software Development Revolution

Old school, we like it!

Let us know where to send a copy and we'll get one to you ASAP.