Ask a room of security engineers what they think of SAST, and you’ll mostly get laughter, or groans. Everyone has scars from false positives, endless triage, and tickets that lead nowhere. Most will tell you that their SAST tools have a 90% false positive rate.
They’re not exaggerating. Independent benchmarks like NIST’s SATE V show traditional SAST tools producing 68–78 percent false positives on average across languages including Java, C, and Python.
Somehow, that’s become normal.
SAST was supposed to make secure coding scalable. Instead, it became background noise. Over the years, tools grew more complex, integrations deeper, but the core problem never changed: SAST never evolved alongside modern software development. It scans code the way we built apps a decade ago—file by file, rule by rule—blind to frameworks, middleware, and intent.
This post looks at why that happened: how a technology that promised to catch critical vulnerabilities ended up drowning security and engineering teams in false alarms, and what the next generation must do differently.
A Brief History of Shifting Security Down
Application security has always been a moving target, but the best progress has come from taking problems away from humans. Every major leap in security has been a story of automation and abstraction, of shifting responsibility down the stack to the layer that can enforce it by default.
1. The Early Days: Memory Safety
In the early days of software, the biggest security threats weren’t SQL injection or XSS; they were buffer overflows (CWE-121), dangling pointers (CWE-825), and array bounds violations (CWE-119). Security was a matter of discipline: developers had to remember to allocate safely, validate input sizes, and avoid memory corruption.
The problem? Humans are bad at being consistent.
The solution was to shift application security down into the compiler and runtime so software developers didn’t have to think about it anymore. Languages like Java and later Go, Rust, and others were born from this insight. They made memory safety a property of the language itself. Operating systems introduced address space layout randomization (ASLR) and non-executable stacks to further contain unsafe behavior.
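To make that concrete with a toy example (shown in Python here, though any memory-safe runtime behaves the same way), an out-of-bounds write becomes a visible error instead of silent memory corruption:

```python
buf = bytearray(8)  # fixed-size buffer, the rough analogue of a C stack array

try:
    buf[16] = 0x41  # out-of-bounds write: in C this could clobber adjacent memory
except IndexError as exc:
    # The runtime enforces bounds checking, so the bug surfaces as an exception
    # rather than as an exploitable memory corruption.
    print(f"Rejected by the runtime: {exc}")
```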
In short, we moved from: human → compiler
That shift eliminated entire classes of vulnerabilities that once dominated security advisories.
2. The Web Era: Logic and Input Vulnerabilities
As the web exploded, new kinds of vulnerabilities emerged: SQL injections (CWE-89), cross-site scripting (CWE-79), cross-site request forgery (CWE-352). These weren’t memory problems anymore; they were logic and input validation problems.
The response? Another shift down. Languages alone couldn’t prevent these issues, so we built opinionated frameworks that embedded security best practices by design. Frameworks like Django, Laravel, and Ruby on Rails began escaping HTML by default. ORMs like Hibernate and SQLAlchemy made parameterized queries standard. Cryptography moved into well-tested libraries, reducing the need for developers to implement encryption themselves.
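As a minimal sketch of what that framework-level protection looks like (using Python’s standard-library sqlite3 driver as a stand-in for the ORMs named above), parameterized queries bind untrusted input as data rather than splicing it into the SQL text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "1 OR 1=1"  # hostile input trying to widen the query

# Pre-framework pattern: the input becomes part of the SQL statement itself.
unsafe = conn.execute(f"SELECT name FROM users WHERE id = {user_input}").fetchall()

# Framework/ORM pattern: the input is bound as a value, never parsed as SQL.
safe = conn.execute("SELECT name FROM users WHERE id = ?", (user_input,)).fetchall()

print(unsafe)  # [('alice',)] -- the injected condition matched every row
print(safe)    # []           -- the whole string is treated as one non-matching id
```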
We went from: compiler → framework
Developers could once again build secure applications by default without solving the same problems over and over again.
3. The Infrastructure Shift: Secure Transport By Default
Once frameworks handled the bulk of application-layer security, attention moved outward to the network itself and other infrastructure components. Transport security, for example, was long an opt-in feature. Setting up HTTPS required manual certificates, complex configuration, and often, additional cost. That friction meant many sites simply didn’t bother.
Then came another shift down:
- Cloud platforms began offering managed certificates.
- Let’s Encrypt made TLS free and automatic.
- And browsers started warning users about insecure sites.
Today, HTTPS is enabled by default on nearly every major platform. Security shifted from being developer-optional to infrastructure-guaranteed. Once again, security shifted down into the technology rather than being left to humans to manage.
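You can see the same pattern at the library level. As a small, hedged illustration in Python: the standard library’s default TLS context now verifies certificates and hostnames out of the box, so the secure behavior is what you get without asking:

```python
import ssl

# ssl.create_default_context() ships with secure defaults: certificate
# verification and hostname checking are enabled, so an invalid or
# self-signed certificate fails loudly instead of being silently accepted.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```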
The net result? The class of vulnerabilities around unencrypted communication has virtually disappeared from most modern web stacks.
Where Static Analysis Stumbled
Meanwhile, static application security testing (SAST) tools designed to find these vulnerabilities haven’t evolved at the same pace. Traditional SAST tools rely on syntactic pattern matching, occasionally enhanced with intraprocedural taint analysis. But modern applications are much more complex and often use middleware, frameworks, and infrastructure to address risks.
As responsibility for vulnerabilities shifted down into other parts of the stack thanks to memory safety, frameworks, and infrastructure, SAST tools were left finding false positives at the granular, code level. They’re catching ghosts of technology past.
Modern security problems are rarely about unsafe string concatenation or missing input checks. Instead, risks come from logic flaws, abuse of legitimate features, and contextual misconfigurations. And those aren’t problems a regex-based scanner can meaningfully understand.
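An illustrative, deliberately simplified sketch of both failure modes (function names are hypothetical; `db` stands for any DB-API connection and `session` for the caller’s session): the first function trips a typical “string concatenation into SQL” rule even though the interpolated value is never attacker-controlled, while the second is a genuine broken-object-level-authorization bug that no syntactic rule will flag, because every individual line looks clean:

```python
# (a) A classic SAST false positive: a pattern rule sees an f-string feeding a
# SELECT and raises an alert, even though `column` can only be one of two constants.
def sort_users(db, newest_first: bool):
    column = "created_at" if newest_first else "name"  # never attacker-controlled
    return db.execute(f"SELECT * FROM users ORDER BY {column}").fetchall()

# (b) A real, modern bug the same scanner misses: the query is parameterized and
# syntactically "safe", but nothing checks that the caller owns this invoice.
def get_invoice(db, session, invoice_id: int):
    # Missing check: does invoice.owner_id == session.user_id?  (an IDOR --
    # a logic and authorization flaw, not a syntax flaw)
    return db.execute("SELECT * FROM invoices WHERE id = ?", (invoice_id,)).fetchone()
```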
The Next Frontier
We’re hitting another inflection point in how software, and security, gets built. Just as compilers and frameworks absorbed many of yesterday’s security problems, the next shift down is happening: from frameworks to autonomous coding agents.
These systems can reason about application context, use security best practices by default, and even simulate potential logic abuses before code is shipped. They can automatically:
- Use secure APIs and libraries.
- Implement proper authentication and authorization flows.
- Enforce least privilege across services.
- Detect insecure data flows across layers, not just within a file (see the sketch after this list).
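Here is a hedged sketch of that last point (module and function names are hypothetical). Neither file looks dangerous on its own; only an analysis that follows the value across the module boundary sees the untrusted-input-to-SQL path:

```python
# handlers.py -- looks harmless in isolation: it only forwards a request parameter.
def handle_search(request, reports):
    return reports.find(request.args["name"])

# reports.py -- also looks harmless in isolation: "name" is just a string argument.
class Reports:
    def __init__(self, db):
        self.db = db

    def find(self, name):
        # The sink: only by tracing request.args["name"] from handlers.py into
        # this call does the injection path across layers become visible.
        return self.db.execute(
            f"SELECT * FROM reports WHERE name = '{name}'"
        ).fetchall()
```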
In this world, the “secure coding” responsibility moves yet again from humans writing code to machines generating and verifying it. It’s the same pattern we’ve seen throughout the history of application security: as classes of problems are understood, they’re absorbed by the layer below.
To summarize, we’ve gone from: humans → compilers → frameworks → infrastructure → agents
Each progression makes security less brittle and more accessible to more builders. Static analysis in the new era will look very different, balancing backwards-facing capabilities that secure code from previous eras against a forward view into new classes of security risk.
Conclusion
The story of application security is one of automation and abstraction. Every generation has made the next safer by design, turning yesterday’s hard-won best practices into tomorrow’s defaults. As autonomous coding agents mature, expect SAST to evolve alongside them to effectively perform human-level code review at machine scale.
Security has always been about building systems that optimize towards producing secure code by default, with as few exceptions as possible. “Shifting down” has always been the solution, and will be in this next era of software development too.
What's next?
When you're ready to take the next step in securing your software supply chain, here are 3 ways Endor Labs can help: