Under the Hood: People.ai's Proactive Approach to AI Security
Hear how a CISO at an AI-first company is thinking about securing AI, and how AI should improve security programs.

At People.ai, security isn't just a department; it's an integral part of how we build, innovate, and serve our customers. As CISO and Head of Platform, I work with my team to balance enabling our developers and engineers with the critical responsibility of shipping a secure product.
This dual focus has become more vital than ever with the rapid evolution of Artificial Intelligence (AI). The rise of generative AI has fundamentally reshaped our risk landscape and how we approach security. We believe that companies like ours, which integrate AI into their core solutions, must proactively address these new challenges.
The new risks keeping us up at night
One of the most significant shifts we've observed and are actively tackling is the new set of threats associated with AI/LLM adoption. This isn't just about external threats; it's also about managing internal usage and ensuring the responsible application of AI.
AI code assistants…friend or foe?
Tools like Cursor, GitHub Copilot, and Windsurf are making it faster and easier for developers (and non-devs!) to build applications. But these tools generate security flaws at the same rate as human developers, or an even higher one. Traditional application security tools designed for a previous architectural era simply weren't built to handle this kind of risk. That realization has prompted us to rethink our entire approach to application security and look for vendors that are ready for this new reality. We're seeking ways to introduce security best practices to our AI-generated code, and a key concern is establishing guardrails to protect customer and internal data. Customer data is "the currency of choice," more valuable than ever before, making its protection paramount.
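To make the guardrail idea concrete, here is a minimal sketch of the pattern (not any specific vendor's product): scan AI-suggested code for hardcoded secrets and dangerous constructs before it's accepted into a branch. The patterns and the `review_ai_suggestion` helper are illustrative placeholders; real tools use far richer rulesets and data-flow analysis.

```python
import re

# Hypothetical patterns a guardrail might flag in AI-generated code.
RISKY_PATTERNS = {
    "hardcoded credential": re.compile(
        r"(api[_-]?key|secret|password|token)\s*=\s*['\"][^'\"]{8,}['\"]", re.I
    ),
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "dynamic code execution": re.compile(r"\b(eval|exec)\s*\("),
}

def review_ai_suggestion(code: str) -> list[str]:
    """Return a list of findings for an AI-suggested code snippet."""
    findings = []
    for line_no, line in enumerate(code.splitlines(), start=1):
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"line {line_no}: possible {label}")
    return findings

suggestion = 'db_password = "hunter2hunter2"\nresult = eval(user_input)'
for finding in review_ai_suggestion(suggestion):
    print(finding)
```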
If everyone can use AI, how do we know what’s in use?
Another growing challenge we face is shadow AI. Just as we saw with shadow IT, employees are downloading and using AI tools and models without necessarily clearing them through our information security channels. It would be foolish for us to try to block the use of AI entirely. Its adoption is widespread, and organizations risk being left behind if they don't integrate it at scale in a secure manner. Therefore, our focus is on understanding, assessing, and implementing proper controls and policies around these emerging uses.
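One starting point for that visibility is simply inventorying the AI SDKs already declared in your repositories. The sketch below is a simplified illustration that only checks Python requirements files; a real program would also cover other package manifests, network egress, and SaaS usage.

```python
from pathlib import Path

# A few well-known AI SDK package names; a real inventory would be
# much larger and maintained from threat intel and vendor data.
AI_PACKAGES = {"openai", "anthropic", "langchain", "transformers", "llama-cpp-python"}

def find_ai_dependencies(repo_root: str) -> dict[str, set[str]]:
    """Scan every requirements*.txt under repo_root for AI-related packages."""
    hits: dict[str, set[str]] = {}
    for req_file in Path(repo_root).rglob("requirements*.txt"):
        found = set()
        for raw in req_file.read_text().splitlines():
            # Strip comments and version pins like "openai==1.30.0".
            name = raw.split("#")[0].strip().split("==")[0].split(">=")[0].lower()
            if name in AI_PACKAGES:
                found.add(name)
        if found:
            hits[str(req_file)] = found
    return hits

for path, packages in find_ai_dependencies(".").items():
    print(f"{path}: {sorted(packages)}")
```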
How AI is going to help us sleep at night
Beyond using AI in our product, we are keenly interested in how AI can enhance our security programs. Whether solving a new problem introduced by AI, or using AI to solve an old (but worsening challenge), we see a lot of opportunities for vendors to drive good change.
Shine a light on shadow AI
Enabling an organization to manage shadow AI goes far beyond simply knowing it exists or getting alerts about it. My constant question to vendors is, "So what?" It's great to show me where these instances are, but what I truly need is to assess shadow AI for risks and then enforce controls to deal with risky choices. Just as vulnerability management evolved from mere identification to providing solutions, AI security tools must mature to offer concrete safeguards. For example, Endor Labs detects AI models being used in our codebase and can then strategically block problematic models (like DeepSeek) or warn developers if they select models with low scores.
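The general enforcement pattern, sketched below in illustrative Python (this is not Endor Labs' actual policy format), is a blocklist plus a minimum score threshold applied to every model finding. The score source and threshold here are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical policy: block specific model families outright and
# warn on anything below a minimum score.
BLOCKED_MODELS = {"deepseek"}
MIN_SCORE = 7.0

@dataclass
class ModelFinding:
    name: str     # model identifier found in the codebase
    score: float  # 0-10 score from an external evaluation (illustrative)

def evaluate(finding: ModelFinding) -> str:
    family = finding.name.split("/")[0].lower()
    if family in BLOCKED_MODELS:
        return "block"   # fail the pipeline
    if finding.score < MIN_SCORE:
        return "warn"    # surface a warning to the developer
    return "allow"

for f in [ModelFinding("deepseek/deepseek-coder", 8.1),
          ModelFinding("some-org/unvetted-model", 4.2)]:
    print(f.name, "->", evaluate(f))
```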
Shift scanning even further left
Not so long ago, it was acceptable to only discover risks after they shipped and then create a mountain of Jira tickets for engineering. But we know that doesn’t lead to meaningful risk reduction, and People.ai worked hard to shift scanning into the pipeline so we can stop preventable risks from entering the codebase. As tricky as AI code assistants can be, they offer us a new opportunity to shift even further left. Integrating SAST, SCA, and secret detection into AI code assistants means we can help developers write secure code from the beginning. But a word of caution here: shifting this far left comes with great responsibility. It’s even more important to have accurate context and data to guide LLMs in these tools towards secure outcomes…otherwise, why not just use ChatGPT? (This is why we’re excited about the Endor Labs MCP server.)
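A minimal sketch of such a pipeline gate follows; the scanner commands are placeholders for whatever SAST, SCA, and secret-detection tools you actually run, each assumed to exit non-zero when it has findings.

```python
import subprocess
import sys

# Placeholder commands: substitute your real SAST, SCA, and
# secret-detection CLIs here.
SCANNERS = {
    "sast": ["run-sast-scan", "--target", "."],
    "sca": ["run-sca-scan", "--target", "."],
    "secrets": ["run-secret-scan", "--target", "."],
}

def run_gate() -> int:
    """Run every scanner; return non-zero if any scan fails, blocking the merge."""
    exit_code = 0
    for name, cmd in SCANNERS.items():
        try:
            result = subprocess.run(cmd)
        except FileNotFoundError:
            print(f"[gate] {name} scanner not installed; treating as failure")
            exit_code = 1
            continue
        if result.returncode != 0:
            print(f"[gate] {name} scan found issues; blocking merge")
            exit_code = 1
    return exit_code

if __name__ == "__main__":
    sys.exit(run_gate())
```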
AI-ify the SOC
A challenging problem we believe AI is poised to solve is the augmentation of Security Operations Centers (SOCs). The sheer volume and speed of security signals and logs today are overwhelming for human analysts, and with the velocity of threats increasing, that's only getting harder. But imagine if AI agents could rationalize our architecture, understand our diverse technology stack, and provide actionable steps for security incidents. This capability would allow us to maintain a strong security posture without requiring constant "eyes on glass" from human analysts, freeing them to focus on complex, strategic tasks.
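A simplified sketch of that triage pattern: enrich each alert with asset context, then decide whether to escalate. The asset data and decision logic here are hypothetical stand-ins for live CMDB integrations and an LLM-based agent.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: str
    host: str
    summary: str

# Hypothetical asset context an agent would pull from a CMDB or
# cloud inventory; in practice this comes from live integrations.
ASSET_CONTEXT = {
    "prod-db-01": {"tier": "production", "data": "customer"},
    "dev-sandbox": {"tier": "development", "data": "synthetic"},
}

def triage(alert: Alert) -> str:
    """Rationalize an alert against asset context and recommend a next step."""
    context = ASSET_CONTEXT.get(alert.host, {"tier": "unknown", "data": "unknown"})
    if context["tier"] == "production" and context["data"] == "customer":
        # This is where an LLM agent could draft containment steps from
        # the alert details plus the asset's architecture context.
        return f"escalate: {alert.summary} on customer-data host {alert.host}"
    return f"auto-close candidate: {alert.summary} on {context['tier']} host {alert.host}"

print(triage(Alert("edr", "high", "prod-db-01", "suspicious process tree")))
print(triage(Alert("edr", "low", "dev-sandbox", "port scan detected")))
```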
Why we’re pursuing ISO 42001
Building a strategy isn’t just about tools; standards and frameworks are key to the responsible use of AI. As a company that openly uses AI models within our solution and processes customer data, we believe it's essential to not just have a policy, but to measure and govern our AI usage effectively.
For that reason, and because our customers are already asking about our adoption stance, People.ai is pursuing the new ISO 42001 standard for AI. The standard specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. While adoption of ISO 42001 is currently voluntary, we know alignment gives us a business advantage, and we expect some or all of its requirements to become mandatory someday.
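As one illustration of what "measure and govern" can mean in practice, an AIMS typically maintains an inventory of AI systems with owners, purposes, and controls. The record structure below is a hypothetical sketch, not a format prescribed by ISO 42001 itself.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIModelRecord:
    """One illustrative inventory entry an AIMS might keep per AI system.
    Field names are hypothetical, not taken from the standard."""
    name: str
    owner: str
    purpose: str
    data_categories: list[str]      # e.g., customer data, telemetry
    risk_assessment_date: date
    approved: bool = False
    controls: list[str] = field(default_factory=list)

record = AIModelRecord(
    name="opportunity-scoring-llm",
    owner="platform-team",
    purpose="summarize go-to-market activity",
    data_categories=["customer"],
    risk_assessment_date=date(2025, 1, 15),
    approved=True,
    controls=["output filtering", "access logging", "annual review"],
)
print(record.name, "approved:", record.approved)
```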
Picking a vendor that can deliver on AI promises
AI washing is prevalent in the security industry, and it can be hard to tell whether a vendor can truly help us secure AI or leverage AI in our programs. When evaluating tools, we prioritize certain characteristics to ensure they align with our long-term vision.
- Flexible Architecture: As an all-cloud, all-SaaS company with geographically distributed teams, we need tools that are inherently flexible and support multiple cloud providers. We also look for API-first, modern architectures built with generative AI in mind, rather than legacy systems using it to patch their problems. For example, Endor Labs has agents built into the core of the platform that leverage their extensive data.
- Customer Success and Partnership: A crucial factor in our selection process is the vendor's receptiveness to feedback and willingness to act as a true partner. AI is moving fast, and we need vendors who lean in and collaborate on how their tools can scale to meet our enterprise-level demands and auditor questions.
- Developer Experience: A major application security goal is to enable a seamless experience for our developers. We choose tools that reduce friction in the development process, making compliance and risk management an integrated part of daily software development rather than a cumbersome hurdle.
About the Author
Aman Sirohi, currently SVP - Chief Security Officer & Platform at People.ai, is a security technology leader with extensive experience envisioning and delivering a wide range of security solutions globally. He is passionate about building deep levels of trust with customers, employees, and partners, leading high-performing teams to deliver transformational security outcomes. Throughout his career, he has been a trusted advisor and strategic problem solver across Technology, FinTech, and Retail & Supply Chain sectors at companies including Guidewire, Ross Stores, and Accenture Management Consulting.
About People.ai
People.ai is the leading AI data platform for go-to-market teams. Since 2016, People.ai has been transforming how go-to-market teams improve sales effectiveness and win rates through an industry-leading, comprehensive data foundation and generative AI capabilities.