The Last Mile of AI Productivity Is Code Review

Developers are generating more code with AI coding assistants, but release velocity isn’t increasing. Here’s how to fix it.


Written by Varun Badhwar, Founder & CEO, Endor Labs
Published on August 11, 2025


Engineering organizations are rapidly rolling out AI coding assistants like Cursor, Windsurf, and GitHub Copilot to capitalize on the productivity promise of generative AI. GitHub’s own research claims developers code up to 55% faster using Copilot.

Sundar Pichai, CEO of Alphabet, shared during the company's Q3 2024 earnings call that "more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers." In a recent update, he put that figure at 30%.

But here’s the catch. In an appearance on the Lex Fridman Podcast, Pichai clarified:

“Looking at Google, we’ve given various stats around 30% of code now uses AI-generated suggestions or whatever. But the most important metric, and we measure it carefully, is how much has our engineering velocity increased as a company due to AI, right? It’s tough to measure, and we really try to measure it rigorously, and our estimates are that number is now at 10%.”

Velocity is lagging behind code volume

At first glance, this mismatch between output and velocity surprised a lot of people. But it reflects what many engineering leaders are quietly seeing: AI tools are generating more code, but they aren’t accelerating delivery velocity by the same multiples.

Some of this gap is likely the result of external and internal incentives—marketing numbers are meant to impress investors and tell a strong AI adoption story, not to estimate delivery metrics. But even with some inflation, the pattern is consistent. Across multiple studies, senior engineers report productivity gains closer to 10%, while more junior developers sometimes see improvements as high as 40%.

So if more code is being written faster, why aren’t engineering teams able to ship faster?

Code review hasn’t scaled with AI adoption

Even before AI coding assistants went mainstream, code review was a well-known bottleneck. LinearB studied 733,000 pull requests and 3.9 million comments from 26,000 developers and found that on average it takes developers 5 days to review and merge code.

So yes, code is being generated more quickly by AI coding assistants. But just like code written by a developer, AI-generated code has to be scrutinized for security flaws, verified for correctness, tested, and often rewritten to correct any defects. Security risks that aren’t caught during review can disrupt CI when security fails builds, or require additional engineering cycles later.

In addition to the volume of reviews, AI-generated code can also be harder to scrutinize given the complexity of the risks it introduces:

  • Business logic flaws
  • Hallucinated packages
  • Swapped libraries
  • Missing security controls

Unfortunately, as code volume increases, most organizations haven’t rethought how code review should scale. They still rely on overloaded senior engineers to do manual reviews in between meetings and sprints. 

Security can help unblock the bottleneck

That’s where platforms like Endor Labs come in. Our automated secure code review system analyzes every pull request in real time to identify and prioritize potential risks—especially the nuanced logic flaws and architectural issues that static tools weren’t designed to catch.

Endor Labs uses a multi-agent system to deliver deep, context-aware reviews at scale. It generates a summary of the changes made, which helps human reviewers quickly understand what changed. It also:

  • Classifies risk by domain and security impact
  • Prioritizes security issues based on severity and scope
  • Notifies security if risks are accepted and merged

In short: it acts like a security engineer reviewing every PR, but with the speed and scale required for AI-native development. As a result, teams see faster delivery and fewer review bottlenecks.

Deliver a win for the business

During DevOps Connect 2025 at RSA, Anthropic CISO Jason Clinton said the biggest payoff from a solution his team built in-house for automating code review wasn’t a lower risk score; it was unblocking engineers to ship code without waiting for manual review.

Let’s take a conservative scenario: a team of 300 developers, each shipping just one pull request per working day, generates roughly 75,000 PRs annually. At just 15 minutes of review per PR (an industry rule of thumb for medium-sized PRs), that’s 18,750 manual review hours.

Studies have shown that reviewers can’t effectively identify risky code changes and security issues when code reviews involve more than 100 lines of code. As a result, it takes at least 15 reviewers to reach a 95% confidence level that all security vulnerabilities have been found. That kind of redundancy isn’t practical—especially at the pace AI-generated code is moving.
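To see where a figure like "15 reviewers for 95% confidence" can come from, here is a minimal sketch of the underlying independence model: if each reviewer independently catches a given flaw with probability p, the chance that at least one of n reviewers catches it is 1 − (1 − p)^n. The per-reviewer catch rate below (~19%) is purely a hypothetical assumption chosen for illustration, not a number from the studies cited above.

```python
import math

def reviewers_needed(p_catch: float, confidence: float = 0.95) -> int:
    """Smallest n such that 1 - (1 - p_catch)**n >= confidence,
    assuming reviewers catch flaws independently (a simplification)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_catch))

# Hypothetical per-reviewer catch rate of 19% yields 15 reviewers
# for 95% confidence that a given flaw is found by someone.
print(reviewers_needed(0.19))  # 15
```

The independence assumption is generous—real reviewers tend to miss the same subtle flaws—so the true number of reviewers needed is, if anything, higher.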

Security review workload before and after Endor Labs

| | Before automation | After Endor Labs |
| --- | --- | --- |
| Pull requests per year | 75,000 | 75,000 |
| PRs needing human security review | 75,000 | 7,500 (90% reduction)* |
| Manual review hours (15 min / PR) | 18,750 | 1,875 |

* Design partner median results

Endor Labs customers report that fewer than 10% of PRs require human security review after implementing AI Security Code Review. That shift unlocks over 16,000 engineering hours annually.

With the right automation, you don’t have to choose between speed and security. Book a demo to see how Endor Labs can help your team safely unlock the full value of AI-driven development.
