By clicking “Accept”, you agree to the storing of cookies on your device to enhance site navigation, analyze site usage, and assist in our marketing efforts. View our Privacy Policy for more information.
18px_cookie
e-remove
Blog
Glossary
Customer Story
Video
eBook / Report
Solution Brief

Shadow AI in Your Codebase: A Hidden Supply Chain Risk

Unvetted AI models and services are entering your codebase. Do you have a plan to find and govern them?

Unvetted AI models and services are entering your codebase. Do you have a plan to find and govern them?

Unvetted AI models and services are entering your codebase. Do you have a plan to find and govern them?

Written by
Julien Sobrier
Julien Sobrier
Published on
August 20, 2025

Unvetted AI models and services are entering your codebase. Do you have a plan to find and govern them?

Unvetted AI models and services are entering your codebase. Do you have a plan to find and govern them?

Machine learning (ML) and large language models (LLMs) are becoming foundational to modern application development as part of AI-native applications. Although data science teams have long integrated ML into their workflows, tech-forward companies increasingly expect developers to use LLM tools for code generation and review. As a result LLMs usage is rapidly expanding across the rest of engineering.

At Endor Labs, we’re observing a new trend: open-source AI models and third-party AI services are being integrated directly into applications—often without centralized visibility, review, or guardrails. A single app might include dozens of models; a single organization, thousands. This rise of “shadow AI” introduces hidden risks security teams can’t afford to ignore.

What is shadow AI?

Shadow AI is an emerging class of risk, similar to shadow IT and shadow cloud before it. It refers to the undocumented—and often unauthorized—use of AI tools, models, and services in software development. Developers may adopt tools they use personally, like LLMs or AI services, and integrate them into work projects without formal vetting, threat modeling, or oversight.

Many organizations still lack clear guidelines for how AI should be used in software, including which providers are approved and what data can be sent to LLMs. Even when policies exist, enforcement is rare. Security teams often rely on developers to self-govern, which is unrealistic given the speed and complexity of modern engineering. With thousands of projects in flight, auditing AI usage at scale is a major challenge and a growing blind spot.

Security risks of shadow AI models and third-party services

The security risks tied to LLMs and third-party AI services are just beginning to surface. One of the most prominent is prompt injection, where attackers manipulate model inputs to produce harmful, biased, or malicious outputs. In some cases—especially when combined with agents or action-taking components—prompt injection can lead to full control over local or remote environments. The recent attack on Amazon Q is a prime example.

In addition to these technical vulnerabilities, there are also legal and ethical concerns. For instance, potential violations of the General Data Protection Regulation (GDPR) are a major consideration, as organizations utilizing LLMs must navigate compliance issues when handling personal data. The implications of unauthorized data transfers and data sharing practices are often not fully understood, raising questions about accountability and responsibility in the case of data leaks.

What organizations need to govern shadow AI

To address the risks of shadow AI, organizations must establish a baseline of governance, visibility, and enforcement across the software development lifecycle. That starts with putting the following requirements in place:

  • A policy framework for acceptable AI use:  Organizations should define which AI providers are allowed, what data can be shared with them, and under what conditions integration is approved. Policies should align with legal, privacy, and security requirements.
  • Review and assessment of AI supply chain risks: Like other dependencies, AI components should be reviewed for license compliance, security risks (e.g., malicious code), and quality before integration.
  • Visibility into AI components in your codebase: You can’t govern what you can’t see. Application security teams need automated discovery of AI models and services—whether they’re open source, commercial, or embedded deep in dependencies or plugins.
  • Guardrails for safe AI adoption in the SDLC: The introduction of AI models and services should trigger review—not just by security teams, but also legal and compliance. That means adding policy-based guardrails to trigger review, warn developers, or even block components.
  • SBOM reporting for compliance: Just like OSS dependencies, AI components need to be tracked in software bills of materials (SBOMs). This enables traceability, risk assessment, and compliance reviews.

How Endor Labs can help

Endor Labs offers solutions to assist security leaders and development teams in establishing internal governance around LLM usage:

  1. Build a complete inventory of AI components in your applications (AI-SPM), including: 
    1. Open-source AI models from Hugging Face
    2. Third-party AI services including AWS Bedrock, OpenAI, Azure OpenAI, Anthropic, and others
  2. Report integrated AI components in your SBOM (AI-BOM)
  3. Investigate AI models for supply chain risks ranging from license compliance to malicious code injection from unsafe file formats
  4. Set policies and gatekeeping mechanisms to manage the introduction of AI components

Conclusion

The usage of LLMs poses security risks, even if unintended. A developer might install an autocomplete plugin powered by OpenAI, while a backend team integrates a third-party chatbot. These additions often bypass security review, and the data sent to these models—such as logs, source code, or customer emails—may be sensitive. Security teams must treat AI usage as a data boundary issue, with visibility and governance on par with other critical software components.

To manage this risk, organizations need to inventory their AI integrations just as they do with open source dependencies. This foundation enables teams to define, enforce, and audit policies for safe and responsible AI adoption. A well-documented inventory supports regular reviews, regulatory compliance, and better decision-making about the models and services in use.
Contact us to discuss how Endor Labs can help you setup organization-wide safeguards to manage the onboarding, integration, and governance of open-source AI models and third-party services.

Find out More

The Challenge

The Solution

The Impact

Book a Demo

Book a Demo

Book a Demo

Welcome to the resistance
Oops! Something went wrong while submitting the form.

Book a Demo

Book a Demo

Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.

Book a Demo