Expanding the context of SAST
Static Application Security Testing (SAST) has been around in one form or another for decades.
The premise of analysing code (or binaries) to detect security errors has yielded billions of findings and has undoubtedly resulted in safer applications.
Now, as just about every industry blog will tell you ad nauseam, AI coding assistants are starting to take over writing the code, if not designing the entire application. Since LLMs' coding capabilities result from training on existing open-source codebases, it’s not surprising that the code they initially produce has a similar rate of flaws to human-written code. In my experience, most flaws that SAST tools detect are simple injection flaws, cross-site scripting, or other issues with the improper handling of user input. These flaws are easily detected and usually easily remediated.
I’m also confident that the combination of better prompts, improved models, and routine use of security-focused system prompts (the default instructions that a coding assistant should follow for every prompt) will make these kinds of flaws scarce, eliminating 70% or more of SAST findings. What will be left are the harder-to-detect, harder-to-fix logic flaws, and findings that appear to be flaws but are not relevant because of a process upstream or downstream of the application or service being tested.
I’m categorically not saying that SAST scanning will become irrelevant; the variable nature of LLM output means there will always be a chance that a SQL injection flaw, for instance, will creep in, so you should still scan your code. But to be effective, rather like LLMs themselves, the context in which your SAST tool operates needs to expand too. Simply scanning code, building call graphs and dataflows, and looking for anti-patterns won’t be enough; your tool will need to look holistically at the application, the environment, and the controls outside the specific code under test. Expanding the context of first-party code security to include as much environment-specific information as possible will be an essential area of development for all code security tools.
This context layer exists outside the SAST tool itself, but it needs to be part of the platform, and in line before results are surfaced to the scan initiator. You might think ‘scan initiator’ is an odd choice of term, but it leads me to the other key change for SAST in the age of AI.
Shift Lefter
In the dim-dark days of the past (or about five years ago), SAST tools and their like were the domain of the security team. Developers would write, commit, and merge code, then, somewhere downstream from their work, the security team would run various scans and hand back their report of findings. Typically, the findings would include multiple false positives, or issues that were hard to explain, resulting in frustration all around, plus delays, additional work, and all that fun stuff. Multiple studies have shown that (somewhat obviously) the longer a flaw goes undetected after it is introduced, the more it costs to remediate.
The obvious solution was to move some of the security testing back towards the code development phase, also known as ‘shift-left’. This move somewhat trailed the DevOps movement, but is now an accepted part of the software development lifecycle (SDLC). Security companies have provided integrations to get security testing tools into developers' Integrated Development Environment (IDE) or the build system that turns raw code into packaged applications. This does all the good DevOps things, like shortening feedback loops, driving accountability, and improving software development velocity.
A not-atypical workflow might involve a developer having tools to scan locally before committing. A non-blocking scan runs in the CI/CD system when code is pushed to a development branch, and then a blocking scan runs on a Pull Request into the main/production branch. In general, the developer must manually initiate the scan via an IDE plugin after writing enough code to commit, so they can fix any issues detected before pushing.
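The branch-aware gating policy described above can be sketched in a few lines. This is a hypothetical illustration of the decision logic, not the configuration syntax of any real CI system: `scan_blocks_merge` and the severity names are assumptions made for the example.

```python
# Sketch of the blocking vs. non-blocking scan policy: development
# branches report findings without failing the build, while a PR into
# main fails on any high- or critical-severity finding. All names here
# are hypothetical; real tools express this in CI configuration.

def scan_blocks_merge(branch: str, findings: list[dict]) -> bool:
    """Return True if the pipeline should fail for this branch."""
    if branch != "main":
        return False  # non-blocking: surface findings, let the push through
    return any(f["severity"] in ("high", "critical") for f in findings)

findings = [{"rule": "sql-injection", "severity": "high"}]
print(scan_blocks_merge("dev", findings))   # False: non-blocking on dev
print(scan_blocks_merge("main", findings))  # True: blocks the PR merge
```

The same findings produce different outcomes depending on where they surface, which is exactly the short-feedback-loop trade-off the workflow is designed around.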
Now that an agent is writing the code, we need to give the agent the tools and instructions to run scans, fix issues, and rerun scans iteratively to fix all flaws before completing the job. Since LLMs can’t push plugin buttons, we need to give them tools they can access that produce results in a format that's easy to process. We’re solving this with a combination of Model Context Protocol (MCP) servers and integrations with hooks from the AI coding assistant, generating results in machine-readable formats like JSON. Effectively, we are pushing security scanning tools into the AI assistant, so that the results it delivers to developers (hopefully) for review are free of security defects.
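The scan-fix-rescan loop an agent would run can be sketched as below. The scanner and the fix step are stubs standing in for a real MCP tool call and a coding-assistant invocation; the rule name, severity field, and fix are invented for the example. What matters is the control flow: scan, consume JSON findings, rewrite, and rescan until clean or a round limit is hit.

```python
import json

def run_scanner(code: str) -> list[dict]:
    """Stub SAST scan: flag string-interpolated SQL as injection.
    A real tool would return richer, machine-readable findings (e.g. SARIF)."""
    findings = []
    if 'f"SELECT' in code:
        findings.append({"rule": "sql-injection", "severity": "high",
                         "message": "query built from untrusted input"})
    return findings

def ask_agent_to_fix(code: str, findings: list[dict]) -> str:
    """Stand-in for the assistant rewriting code from the findings;
    here it swaps in a parameterized query."""
    return 'cursor.execute("SELECT * FROM users WHERE name = ?", (name,))'

def scan_fix_loop(code: str, max_rounds: int = 3) -> tuple[str, list[dict]]:
    for _ in range(max_rounds):
        findings = run_scanner(code)
        if not findings:
            break
        code = ask_agent_to_fix(code, findings)  # agent consumes the JSON findings
    return code, run_scanner(code)

code = 'cursor.execute(f"SELECT * FROM users WHERE name = {name}")'
fixed, remaining = scan_fix_loop(code)
print(json.dumps(remaining))  # [] once the loop converges
```

The round limit matters in practice: an agent that cannot fully fix a finding should escalate to a human rather than loop forever.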
SAST doesn’t go away—it grows up
The headline takeaway here isn’t that static analysis is obsolete in an AI-driven world. It’s that the job it’s being asked to do has fundamentally changed.
As AI coding assistants take on more of the responsibility for generating code, the volume of obvious, easily detectable flaws will likely drop. That’s good news—but it also means the remaining issues will be more subtle, more contextual, and more tightly coupled to how an application actually behaves in its environment. A SAST tool that only understands files and functions, without understanding deployment context, data flows, trust boundaries, or compensating controls, will increasingly feel out of step with reality.
At the same time, “shift-left” now needs to shift again. The first consumer of SAST results is no longer just a human developer; it’s an autonomous or semi-autonomous agent that needs machine-readable feedback, clear remediation guidance, and the ability to iterate without friction. Security tools that can’t operate programmatically, inline, and at machine speed will be bypassed, not out of malice, but out of necessity.
The future of SAST isn’t just better analysis or deeper data flows. It’s better context, better integration, and better alignment with how software is actually built today: by humans and machines working together. Tools that embrace that reality will remain indispensable. Those that don’t will still find bugs, but increasingly at the wrong time, in the wrong place, and for the wrong audience.
What's next?
When you're ready to take the next step in securing your software supply chain, here are 3 ways Endor Labs can help: