Blog

Learn about software supply chain security and Endor Labs

Nx build platform compromised by supply chain attack – How attackers collude with AI code assistants

Nx supply chain attack: malicious npm versions of Nx exfiltrated SSH keys and tokens to GitHub, abusing AI code assistants along the way. Learn how to detect the compromise and remediate it.

How We Cracked SCA for C/C++ Codebases

Endor Labs improves C/C++ SCA by combining cryptographic hashing, code embeddings, and a curated index for accurate dependency and vulnerability detection.

When CodeRabbit became PwnedRabbit: A cautionary tale for every GitHub App vendor (and their customers)

Kudelski Security uncovered an RCE flaw in CodeRabbit exposing 1M+ repos. Here’s what happened, how it was fixed, and key lessons for secure AI apps.

Shadow AI in Your Codebase: A Hidden Supply Chain Risk

Unvetted AI models and services are entering your codebase. Do you have a plan to find and govern them?

Under the Hood: How I Vet Early-Stage Startups for Critical Security Programs

Greg Pettengill, a Principal Product Security Engineer at Five9, is an early adopter of startup technology. In this article, he shares his methodology for picking vendors that can deliver on their promises.

Detect End-of-Life (EOL) Software in Containers with Endor Labs

Endor Labs now detects end-of-life (EOL) software in containers, helping AppSec teams eliminate risk early.

The Most Common Security Vulnerabilities in AI-Generated Code

Learn about the most common and emerging security risks of AI-generated code, from injection flaws to hallucinated dependencies.

The Last Mile of AI Productivity Is Code Review

Developers are generating more code with AI coding assistants, but release velocity isn’t increasing. Here’s how to fix it.

How to Detect LLM Prompt Injection Risks

Learn how to detect prompt injection vulnerabilities in GenAI applications and prevent attackers from exploiting LLM-powered workflows.

Why Your AI Code Assistant Might Be Shipping CVEs

LLMs often recommend outdated or vulnerable open source packages. Here's why it happens, why it matters, and how AppSec and DevOps leaders can stay ahead.

Anti-Pattern Avoidance: A Simple Prompt Pattern for Safer AI-Generated Code

How CWE-specific prompts cut LLM code vulnerabilities by more than half.

Endor Labs now integrates with GitHub Copilot in VS Code

Secure AI-generated code at the source with a new integration for GitHub Copilot powered by the Endor Labs platform.

Securing AI Coding Assistants: A Total Cost Analysis

A CISO’s guide to analyzing and containing the security costs of AI-generated code.

Endor Labs Now Available on Google Cloud Marketplace

Endor Labs is now available on the Google Cloud Marketplace, enabling faster procurement and deployment of software supply chain security for GCP customers and partners.

How to Detect Infrastructure as Code (IaC) Misconfigurations with AI Security Code Review

Learn how to detect misconfigurations in Infrastructure as Code (IaC) files and prevent privilege escalation and unsafe defaults before they reach production.

Endor Labs now integrates with Cursor AI Code Editor

Secure AI-generated code at the source with a new Cursor integration powered by the Endor Labs platform.

Secure-Insecure Diff: A Smarter Way to Prompt for Safer Code

How a multi-step prompt technique can reduce vulnerabilities in AI-generated code.

How to Evaluate Endor Labs SCA for C/C++ Projects

A step-by-step guide to testing the accuracy of Endor Labs SCA on C/C++ projects.

CVE-2025-54313: eslint-config-prettier Compromise — High Severity but Windows-Only

CVE-2025-54313 tracks a supply chain attack on eslint-config-prettier, where four malicious versions of a popular npm library targeted Windows machines with a remote-code execution payload. Learn how it happened and how to stay safe.

Everything You Need To Know About The FedRAMP RFC-0012

The new FedRAMP RFC shifts the standard to require deep context into the reachability and exploitability of vulnerabilities. Here’s what you need to know.

Structuring Prompts for Secure Code Generation

A practical guide to embedding security requirements into AI coding workflows.

Endor Outpost: Deploy Endor Labs Behind Your Firewall

Endor Outpost extends the full capabilities of the Endor Labs AppSec platform to self-hosted SCMs like Bitbucket Data Center and GitLab Self-Managed.

Endor Labs & Oligo: Closing the Loop Between Secure Code and Secure Runtime

Endor Labs and Oligo keep pipelines fast and secure with unified reachability, real-time threat blocking, and safe, automatic fixes.

Struggling to Patch Spring-Web? Try This Instead

Fixing Java deserialization vulnerabilities in Spring-Web is notoriously difficult, but Endor Labs offers patched versions as an alternative.

40+ AI Prompts for More Secure Coding

AI coding assistants make writing code a breeze, but the code they generate often contains security flaws. This free prompt library helps reduce vulnerabilities at the source, with more secure prompting practices and examples tailored to real-world use cases.
