
Fireside Chat: Building an AppSec Program for Cursor

Learn how Cursor is building an AppSec program that's both effective and non-disruptive to the engineering team's workflow.

Written by
Jenn Gile
Travis McPeak
Published on
September 19, 2025

Travis McPeak runs security programs for Cursor (Anysphere). In this video, he sits down with Jenn Gile (Head of Community at Endor Labs) to talk about the program and its goals. Give it a listen to see how AppSec is being built at one of the fastest-growing startups!

Malicious Packages and Industry Trends

[ 0m0s ] Jenn Gile: Why don't we start with what's going on in the world of malicious packages? The last two weeks, at least, have been pretty chaotic for us with all of these attacks. Do you see this being kind of a turning point in how malware gets treated by the industry?

[ 0m25s ] Travis McPeak: Yeah, it's funny. Ten years ago, I was thinking a lot about this problem because I was working on OpenStack, and OpenStack was pulling in 250 open source Python packages. It occurred to me, as somebody responsible for OpenStack security, that if any of those gets compromised, then the whole thing's done. And I was frankly really surprised by the state of Python package security. At the time there was no 2FA to publish packages, there was no brute force protection, and it's like, one of those 250 maintainers at least has reused their password or has a terrible password or something. So I really didn't feel good about it. But nobody else was paying attention to it, so I was just kind of like, okay, that's the way the world works, and I moved on. One thing that I find interesting about the security industry is that it does just go through cycles. The exact same thing is true today that was true 10 years ago, but now everybody cares about it, and 10 years ago nobody cared about it. So I think there's kind of a hive mentality going on.

[ 1m27s ] Jenn Gile: Yeah, I see a lot of parallels with Log4j and CVEs, that kind of more conscious "this is a thing we need to be addressing programmatically" moment. Do you think we're headed that way with malware, where more vulnerability management programs will treat malware detection, prevention, and mitigation as an explicit "we're going to do this thing," as opposed to being focused mostly on CVEs, for example?

[ 2m2s ] Travis McPeak: Probably. I think security's just so driven by budget cycles, and when something gets in the news as much as the npm attacks have in the last two weeks, then executives pay attention to it. They ask the security team, "what are you doing about this," and then the security team all of a sudden gets new budget to go and solve the problem. It starts to take over more of the CISO dinners; everybody talks about malware and how you're solving it. What's funny is that it's going to be basically the exact same thing all over again as we did with antivirus, and then, "oh, it's not antivirus, it's next-gen antivirus, it's EDR." We've gone through this whole cycle before. It's still going to be heuristics based, right? Look at the thing: do we see anything visibly wrong with it? And we've also seen, I think it was the xz package, which had a very sophisticated backdoor. We're not going to find that. And at the end of the day, all of this code that we're pulling in that's not our code is vulnerable, or could be vulnerable, and we're going to have very low-grade detections about that that will hopefully get better.

Developer and Security Team Relationships

[ 3m2s ] Jenn Gile: Yeah, I mean, I've worked previously in national security looking at passport fraud, and you kind of catch the egregious ones, right? The really obvious ones. Setting tooling aside, how do you see engineering organizations and security teams adapting to be more cautious about what they're consuming, and how they consume it?

[ 3m37s ] Travis McPeak: Yeah, I think the industry is going to solve this in very different ways. My view of security in general is that we should be solving those kinds of problems as much as we can for engineers, and not make them worry about what's the latest security thing this week and what they have to do to prevent it. So I think the right way to handle this is good tooling that integrates and notices anytime either a new package has been pulled in, or an update to a package happens that significantly changes some attribute of it. And then the security team looks at it and vets whether that's safe. Obviously, engineers are going to add something sometimes, to build CI or whatever, and we need to have the tooling to catch that and flag it for immediate review. But it has to have the context. We don't really want engineers to have to ramp into this the way I've been paying attention to it for 10 years. We just want them to get the right information that they need and not have to keep up with the security news of the day.
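
As a sketch of what that detection step could look like: diff the dependency set between two lockfile revisions and flag anything new or version-changed for security review. The `name==version` parsing and the review flow here are illustrative assumptions, not any particular product's implementation.

```python
# Minimal sketch: flag new or version-changed dependencies for review.
# Assumes a simple "name==version" lockfile format for illustration.
def parse(lockfile_text: str) -> dict[str, str]:
    deps = {}
    for line in lockfile_text.splitlines():
        if "==" in line:
            name, version = line.strip().split("==", 1)
            deps[name] = version
    return deps

def flag_for_review(old_lock: str, new_lock: str) -> list[str]:
    before, after = parse(old_lock), parse(new_lock)
    flags = []
    for name, version in after.items():
        if name not in before:
            flags.append(f"NEW package: {name}=={version}")
        elif before[name] != version:
            flags.append(f"CHANGED: {name} {before[name]} -> {version}")
    return flags

# A brand-new package and a version bump both get flagged for the security team.
print(flag_for_review("requests==2.31.0\n",
                      "requests==2.32.0\nleftpad==0.1.0\n"))
```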

[ 4m41s ] Jenn Gile: And do you think that might look like something like, "hey, this package literally just came out, let it breathe a little bit," or "we see some potentially sketchy behavior in this package," or "this is a maintainer we're not familiar with"? What kind of context do you think engineers need?

[ 5m6s ] Travis McPeak: Yeah, so one of the concepts I learned a while ago that was really powerful for me is System 1 and System 2 thinking. There's the fast autopilot, where I'm just doing things and not really thinking about them, and that's what phishing relies on: you're just cruising, not really paying a lot of attention. And then there's the slow thinking, and you can use nudges to kick people over into the slow one. So that's probably what it's going to look like: an engineer writes something, they fat-finger it, they've pulled in a package that has 1/100th the adoption that some other package has, or it's brand new and this other one is five years old. With that kind of heuristic, if you just present them with the context, they will catch the mistake themselves and the security team doesn't have to be involved at all. That's probably what good is going to look like. And then at some point it's like, "I don't know, these both look kind of equally good," and security should have some kind of an opinion, or help the engineer decide what the right package to use is in this case.
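
To make the nudge idea concrete, here's a small sketch of the heuristic Travis describes: compare a newly added package against the incumbent on adoption and age, and surface that context to the engineer. The thresholds and the stats source are assumptions for illustration.

```python
# Hypothetical nudge: surface context about a risky-looking package choice
# instead of blocking the engineer or paging the security team.
from dataclasses import dataclass
from datetime import date

@dataclass
class PackageStats:
    name: str
    weekly_downloads: int
    first_released: date

def nudge(candidate: PackageStats, incumbent: PackageStats,
          adoption_ratio: float = 100.0, min_age_days: int = 90) -> str | None:
    """Return a human-readable nudge if the candidate looks risky, else None."""
    age_days = (date.today() - candidate.first_released).days
    if incumbent.weekly_downloads >= adoption_ratio * max(candidate.weekly_downloads, 1):
        return (f"{candidate.name} has at most 1/{adoption_ratio:.0f}th the adoption "
                f"of {incumbent.name}. Did you mean {incumbent.name}?")
    if age_days < min_age_days:
        return f"{candidate.name} is only {age_days} days old. Let it breathe?"
    return None

# A fat-fingered lookalike vs. the established package triggers the nudge.
print(nudge(PackageStats("reqeusts", 900, date(2025, 8, 1)),
            PackageStats("requests", 120_000_000, date(2011, 2, 14))))
```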

The "Why" of a Security Program

[ 6m12s ] Jenn Gile: Yeah. So kind of zooming out a bit, talk to me about Anysphere's, you know, Cursor's security program. The question I like to ask is: why is it funded? What is the raison d'être for the company having a security program, beyond the very obvious fact that people probably wouldn't use your tool if you didn't have something?

[ 6m42s ] Travis McPeak: Yeah, it's a great question. I think the "why" of a security program is always very important, and it has to be specific to a company. For example, Amazon, Google, the big cloud providers: if they get compromised, then all of their downstream customers are compromised, so they have to get security to 99.999%. They have nation-state hackers. At Cursor specifically, users are entrusting us with a few things. One, there's a binary that we run on their system, which is our actual IDE, our CLI. They want to know that when they install that, or update it, it's safe for them to do so. That's standard with any kind of mass-market consumer thing you install. Then there's the AI itself. What our customers use us for is AI software development, and that will use AI to perform actions and create code, and they want to have confidence that when they do that, it's safe. If they don't feel like it's safe, that's going to impact how the company grows. So that's another one. And then finally, we have an increasing amount of data. We have a privacy mode where we retain no code data from any of our customers; we can't even see it ourselves. But there's another mode where there is customer data, and we want to be really good stewards of that data. I think that today we invest relatively overweight for a company at our stage, just because we know it's important, and we want everybody that uses Cursor to feel like that's the safe choice.

[ 8m14s ] Jenn Gile: Yeah. I mean, in a growing technology space, if you mess this up, that could be the end of a company, right?

[ 8m22s ] Travis McPeak: Yep.

Using AI to Accelerate Security Work

[ 17m0s ] Jenn Gile: Love it. So in terms of using AI within Cursor, I would assume you all are. How do you think about using AI for security? And I don't just mean the crop of startup tools that have come out for whatever AI security thing, but how are you thinking about it for making your team more effective? Even though you're willing to invest in more staff, how are you having them use it? Or how are they already using it?

[ 17m40s ] Travis McPeak: It's the greatest gift ever, quite frankly. Ten years ago or so, I heard some loose quote about the NSA's hacking group, TAO, Tailored Access Operations. They would come in and infiltrate a company, and they would understand the way the company worked better than everybody at the company does. And what they meant is that chaos is the enemy of security in an organization. There's just too much stuff, too many things changing, you don't really understand how anything works, especially if you're in a place where security is 1 to 100 or something; things are changing around you faster than you can keep up with. AI is really good at just telling you how something works, or what has changed. So I personally, and everybody on my team, use it that way all the time. If I have to ramp into a system to patch something, for example, I'm going to get up to speed on the code base very quickly by just having a conversation with AI and having it show me things.

[ 18m49s ] Jenn Gile: Yeah. I mean, to your earlier point, if you're hiring a bunch of generalists, presumably they're not experts in all the languages you have, or maybe someone has more AppSec versus CloudSec experience, so using it that way as an acceleration tool. Are you also looking at using it in your own security engineering efforts? For example, I talked to somebody else in the community who's using it to generate WAF rules.

[ 19m25s ] Travis McPeak: Oh, constantly, every single day. I would say everybody in engineering at Cursor, including us in security, uses Cursor constantly. We build Cursor with Cursor, and it's the same thing for security. I've heard metrics about how much more effective people are when they introduce AI coding, and those tend to be 10% or 40%, and honestly, I have no idea what they're talking about. For me, using AI to implement security work, I would guess that I'm five times more effective than I am without it. Projects that would normally take weeks, I can get done in half a week.

[ 20m10s ] Jenn Gile: What are some tips that you would have for using Cursor for security?

[ 20m15s ] Travis McPeak: Yeah. So first of all, anytime you're unfamiliar with, like I said, how something works, go ask the right questions. There are no bad questions with AI; it's not going to ridicule you for not knowing how something works. So that's one. Two, any kind of new syntax. For example, to your point, firewall rules: I have no idea how to write Suricata rules, but I don't have to. I go have a conversation with AI, I try some things, I can try experiments very quickly now. Even writing Terraform, I can get something deployed pretty instantly with AI. So I can try experiments, learn how something works, have a back-and-forth conversation until I understand some new technology. And then finally, any kind of gradual deployment, "how does this thing work," troubleshooting, all of that is so much faster as well. Honestly, it blows my mind when people say 10% more effective. That number is off by orders of magnitude to me.

[ 21m25s ] Jenn Gile: I mean, when I hear those kinds of numbers, and frankly, stories, most of the time what I'm hearing is frustration that they're having to correct things that the AI did: that the AI, whether it's Cursor or some other tool, wrote the code in a way that doesn't fit into their overall architecture, or essentially the context is so big that it's losing track of things. So what do you do to avoid having to rework what the tool did for you?

[ 22m5s ] Travis McPeak: Yeah. So first of all, one personal observation I have is that people do not use these tools correctly. They will come in and try to lovingly handcraft the perfect prompt. They'll spend hours writing a prompt, then expect to one-shot it, and then say the AI either did or didn't do everything correctly. I think that's completely wrong. The way that I use it is very collaborative. I will say, "okay, we're starting off with a blob, let's do this first," and it will go and implement something. I'll say, "oh, let's do this this way instead." And then that thing's done, and I'm like, "great, let's go do this thing next."

[ 22m48s ] Jenn Gile: Sequential.

[ 22m48s ] Travis McPeak: The whole time it's back and forth. And then also, for all of the people that say, "oh, the AI made a mistake": yes, but it also catches a bunch of mistakes that I make. So it's very much like a pair programmer, a rubber-duck kind of situation; when I use AI, the AI and I are bouncing ideas off of each other. And then finally, everything that we do goes through Bugbot, which is our AI review tool, and a bunch of times Bugbot finds bugs that I hadn't caught, and then I'll go and fix those, again with AI, and commit it back through.

MCP Server Security Concerns

[ 23m24s ] Jenn Gile: How do you feel about MCP servers?

[ 23m29s ] Travis McPeak: Mixed. So in general, I think MCP is most useful for putting boundaries around what you want AI systems to do. One question is: why do we even have MCP? Why don't we just have the AI use CLIs? We already have an interface where something can use software and deal with APIs. So that's one view of it. I think where MCP becomes particularly useful is saying, "okay, but I don't want you to use all of the CLI. I want you to have AWS create-an-instance. I don't want you to have, like, AWS EC2 everything."

[ 24m16s ] Jenn Gile: Right.

[ 24m16s ] Travis McPeak: Put the bumpers on it, exactly. So I think that's the most useful part, and then MCP comes bundled with prompts, so you can tell the AI more information about how to use it, give it examples, whereas a CLI has a manual, but nobody reads manuals.
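
A minimal sketch of that narrowing, using the MCP Python SDK (FastMCP) and boto3. The server name, the single exposed tool, and the instance-type default are illustrative assumptions, not Cursor's actual setup.

```python
# Sketch: an MCP server that exposes exactly one narrow AWS action instead of
# the whole EC2 CLI surface. The docstring is the guidance MCP bundles with
# the tool, playing the role of the "prompts" Travis mentions.
import boto3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("aws-ec2-narrow")  # hypothetical server name

@mcp.tool()
def create_instance(ami_id: str, instance_type: str = "t3.micro") -> str:
    """Launch exactly one EC2 instance from an approved AMI.

    Use this when the user asks for a new instance. No other EC2
    operation is available; this server intentionally exposes nothing else.
    """
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(ImageId=ami_id, InstanceType=instance_type,
                             MinCount=1, MaxCount=1)
    return resp["Instances"][0]["InstanceId"]

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```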

[ 24m34s ] Jenn Gile: I talk to a lot of people in the community who are worried about the security risks of using MCP servers. There's a lot of worry about authorization and access to data. We haven't seen a big exposure yet, so at this point I think it's mostly theoretical, but how real of a challenge, slash threat, do you see that being?

[ 24m59s ] Travis McPeak: It's funny, because I would say all of the AI security issues are rehashings of the same issues that we've dealt with before. Once again, we're dealing with authorization that isn't very good, and we tend to give things star permissions and hope for the best. So yes, you can definitely have issues where you have a very over-permissioned system that AI can call, and then you have confused-deputy problems among the callers of that system. That's a whole class of issue. There's prompt injection, which is basically just a rehash of command injection and SQL injection and cross-site scripting and all of the other "we didn't separate the control plane and the data plane" elements of software. And all of that aside, AI can just do wrong stuff sometimes, right? It got it wrong and decided that the best way to unstick me was to, you know, reset my file system or whatever. That kind of stuff can happen too. So I think the concerns about just having AI do a bunch of stuff, those are well-founded. I wouldn't use it that way. I tend to run with broad permissions to systems that I touch, and I would like to review potentially dangerous commands before they get run, every single time.

[ 26m21s ] Jenn Gile: No YOLO mode for you.

[ 26m23s ] Travis McPeak: No YOLO mode for me. We would like to make that safe, and we have some good ideas about how to do it. At least safer. I don't think anybody should ever use YOLO mode for potentially destructive actions; I think those should always be reviewed. But yeah, there's a whole bunch of rehashing of "how do we do authorization" kinds of issues that we're going to go through all over again for AI and giving it tools, and I think these things will get ironed out to the same degree that we did with cross-site scripting. It's still an issue, but there are also secure frameworks and ways to use it that mitigate a lot of the issues.

[ 27m1s ] Jenn Gile: This may be getting beyond what you're willing to talk about, but how much do you feel like vendors like Cursor are somewhat responsible for driving that?

[ 27m9s ] Travis McPeak: I don't really have an opinion. I think the company cares a lot about security and enabling people to use AI safely. And also, there's more opportunity in the market than we can all handle; as much as it feels like a fast-moving and fast-growing company, there's still a ton of opportunity. So, frankly, some of this I would love for other vendors to solve. I think one good way that we can handle this is to give our users basically a flexible system that they can integrate with. So if they want every single command the AI runs to be logged, then in their specific system they can go set that up, right? We give them a hook that says, "every time a command runs, go do this thing." And then similarly with prompt injection, I would love for this to be systemically solved. Quite frankly, I don't think that we are going to solve it; I think this has to be solved at a foundation model level. And so, to that extent, we want to give our customers a way, before commands run, or before certain types of commands run, that they can go check it out and see if it's safe or not.
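
A hedged sketch of that hook pattern: log every AI-proposed command, and hold anything matching a destructive pattern for human review. The patterns and the review flow are illustrative assumptions, not Cursor's actual hook API.

```python
# Illustrative pre-execution gate: log every command, require review for
# potentially destructive ones. Patterns and flow are assumptions.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("command-gate")

DANGEROUS = [            # hypothetical denylist of destructive patterns
    r"\brm\s+-rf\b",
    r"\bdd\s+if=",
    r"\bmkfs\b",
    r"\bterraform\s+destroy\b",
]

def gate(command: str) -> bool:
    """Log the command; return True only if it is safe to auto-run."""
    log.info("AI proposed: %s", command)
    if any(re.search(p, command) for p in DANGEROUS):
        # A real system would open a review request; a prompt stands in here.
        answer = input(f"Review required for {command!r}. Run it? [y/N] ")
        return answer.strip().lower() == "y"
    return True

assert gate("ls -la")          # safe: logged and auto-approved
# gate("rm -rf /tmp/build")    # destructive: logged, then held for review
```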

Selecting an SCA Tool and Using Endor Labs

[ 28m20s ] Jenn Gile: Yeah. Okay, switching to talking more specifically about your software composition analysis program and decisions. When you came in, there was kind of a, we'll say, table-stakes SCA tool in place, and you made a decision to remove it. You talked a little earlier about the driver for that being: if you're getting a bunch of alerts and you can't act on them, what's the point? Can you talk a bit more about what triggered the decision to go look for a paid tool?

[ 29m5s ] Travis McPeak: Yeah, absolutely. So, like I mentioned, if there is going to be patching done in the name of security, then it's going to be our team doing it. I don't want to stop my product engineers from innovating by asking them to go patch something, and especially not by asking them to go patch something where I don't actually know if it's impactful or not. So given that, there's no throwing work over the wall. If engineers want to upgrade a library for functionality reasons, great, they can do that kind of patching. If the patching is strictly for security, then I own it, or my team owns it. And in that case, I really need to know: is this something that we care about or not? Because as much as we'd like to believe that patching is always safe, even for minor versions, sometimes it's not.

[ 29m57s ] Jenn Gile: I mean, our data shows 95% have the potential for breaking changes.

[ 30m2s ] Travis McPeak: Exactly. And also, as a security team, I don't want to be known as somebody who causes instability in the system. I really care about making sure that when I do work, it's well-vetted work and it's as safe as I can make it. So with all of that context, given that it's going to be my work, I want it to be safe, and I have a bunch of other stuff that I need to do as well. The first thing I needed to know was: is this something that we actually care about? The very obvious answer is that we're pulling in tons of libraries, and those libraries pull in tons of libraries as well. So for a lot of these code paths, you get a CVE that's on a library, and then you don't really know: is the vulnerability something we're even dealing with, right?

[ 30m53s ] Jenn Gile: Is it being used?

[ 30m54s ] Travis McPeak: Yeah, because these libraries are huge sometimes. There could be some obscure thing over here that we're not using at all, and so it's completely irrelevant. That's just noise; I don't want to look at it or think about it ever again. And so that's where reachability analysis comes in. That is the Holy Grail, right? It sifts through this giant list of vulnerabilities I have, tells me, "okay, here are the ones that could be relevant for you," and brings the big number down to a manageable number.

[ 31m29s ] Jenn Gile: So what other factors? Because again, your team is the team that's performing these upgrades; it's nobody's full-time job, and I don't think anybody wants that as a full-time job. How are you looking to prioritize what gets upgraded first? Because if you take that cut first, and it's all reachable, then how are you prioritizing what comes first?

[ 31m52s ] Travis McPeak: Yeah, so it has to be business context, right? I use this example from Netflix all the time. I think a lot of the industry has this very binary decision, where either it needs to be patched within 30 days or I'm never going to pay attention to it again, and that's usually a CVSS 7.0 cut line. If it's 7.1, it's in the patch cycle; if it's 6.9, it's dead to us. I just don't think that makes any sense. If there's a 4.5 where the way we're using the package is impactful, and that thing is on our front-door edge proxy, that goes way up the list from a CVSS 10 in some sandboxed application over here that has access to nothing. So it's really the business context at that point: what is this bit of code? What does it have access to?
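
A toy version of that prioritization, just to show the shape of it: reachability gates everything, and business context (internet exposure, data access) can outrank the raw CVSS number. The fields and weights are assumptions, not a standard formula.

```python
# Illustrative priority score: business context outweighs raw CVSS.
from dataclasses import dataclass

@dataclass
class Finding:
    package: str
    cvss: float             # 0-10 base score
    reachable: bool         # does our code hit the vulnerable path?
    internet_facing: bool   # e.g. the front-door edge proxy
    touches_sensitive_data: bool

def priority(f: Finding) -> float:
    if not f.reachable:
        return 0.0          # noise: never think about it again
    score = f.cvss / 10     # CVSS as a 0-1 baseline
    if f.internet_facing:
        score += 1.0
    if f.touches_sensitive_data:
        score += 1.0
    return score

# A CVSS 4.5 on the edge proxy outranks a CVSS 10 in a sandboxed app.
edge = Finding("proxy-lib", 4.5, True, True, True)
sandbox = Finding("toy-lib", 10.0, True, False, False)
assert priority(edge) > priority(sandbox)
```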

[ 32m58s ] Jenn Gile: Are you running kind of your own EPSS type of, not algorithm necessarily, because that is what EPSS is, but looking at both the probability that it can be exploited and then doing customizations to bump up your own CVSS scores? Or is it more of a "it's on this application, so I know it's important"?

[ 33m26s ] Travis McPeak: Yeah, I think EPSS is a good idea; I don't know how much stock I put into it. Something could score very low on EPSS, and then somebody writes an exploit, it gets shipped into Metasploit or whatever, and EPSS jumps all the way up. So I don't think whether something is trendy and being actively exploited is necessarily the best indicator of risk to me. It's really about the things that I care about protecting, my crown jewels, and the relative position of this vulnerability to that access.

[ 34m9s ] Jenn Gile: What other, I guess, related things are important to you in terms of being able to act on alerts?

[ 34m18s ] Travis McPeak: Yeah, it's all data, right? If you look at the really impactful security breaches and incidents, it's all about some kind of data that your company really cared about protecting, and then didn't. And so that's where it all goes for me: its proximity to data, access to data. Is this a system that processes our most sensitive data? Is this thing in a completely separate account with no access to anything? Who's the user base? Are these all trusted people who aren't going to send a malicious payload to it, or are these randos from the internet? All of that is the kind of context where, presently, I'm going to do a kind of manual mapping, but eventually I want to make it more automated. We had a system for doing this that I liked at Netflix, one that would tell us how much security cares about this particular vulnerability.

[ 35m13s ] Jenn Gile: And then as this program ramps up, what kind of data do you care about? Are you looking at things like mean time to remediation, number of vulnerabilities on a given product? What ultimately will tell you whether this program is successful?

[ 35m34s ] Travis McPeak: I think it's those two factors, right? What is our overall exposure area, and how is that changing over time? And then mean time to resolution, especially for things where, if we find an issue that we care about and we end up fixing it, what is that full cycle time? If we didn't have the right visibility, we might not find out about it for a year. I think the visibility is one of the first things I went after, and that's part of what Endor helps me solve. And then the other bit is: okay, now we've got the visibility, and like I said, our team's going to be doing the patching. How good are we at that? That's going to be determined by how risky it is, and how many other systems we've put in place so that we feel comfortable doing a patch. Because, obviously, just bumping the version number, cutting a PR, and deploying it, that bit's easy; it's all the other stuff around it. So patching is important. It's one of those evergreen problems in security; it's never going away, and my job here, and something I care a lot about, is making it a regular fact of life, which means de-risking it.

[ 36m40s ] Jenn Gile: Right.

[ 36m40s ] Travis McPeak: Let's see what I might have missed first. Ah.

[ 36m41s ] Jenn Gile: In terms of ease of use, extensibility, flexibility, being API-first, how much of that factored into your decision-making when you were looking at tools?

[ 36m53s ] Travis McPeak: Yeah, a lot of it. I've got a lot of stuff going on right now, and what I'm looking for are rock-solid vendors where I know it will handle the problem I'm trying to solve, and I'm not going to have to babysit it; it's not going to be broken all the time. And then, in general, I think a good tool produces data, and that data goes into some other system. I don't want to have to go into the UI to figure out what's going on or whether I care about something. So those were the factors: I just want a rock-solid product that can send the data where I need it.

[ 37m29s ] Jenn Gile: So is your plan to centralize all your vulnerability data, regardless of tool, in the same system?

[ 37m36s ] Travis McPeak: Yep. Yeah.

[ 37m37s ] Jenn Gile: And what will that look like for you, process-wise? Checking it out periodically to look at those stats we talked about? Is it about that system being there for your engineers to pull tickets from? What does that look like for you?

[ 37m54s ] Travis McPeak: Well, not the second one. No, I don't want engineers going in there and looking at it at all, pulling tickets, none of that. My goal is to never file a ticket for an engineer at this company.

[ 38m3s ] Jenn Gile: Sorry, I should clarify. Your security engineers.

[ 38m5s ] Travis McPeak: Ah, yeah. So I think in general there are a couple of things I care about. One is topical new stuff: I went to bed one night, I woke up, and there's a new critical in something that I care about. We're going to need some kind of mechanism for letting us know about that. And then also the progress that we're making. So, going back to the metrics: are we burning those down effectively, and how is that improving over time? If we're doing our job, then we're making both of those metrics better over time and not regressing. If we're regressing, it means we're not investing enough and we're falling behind.
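
Those two metrics are simple to compute once the findings data is centralized. A tiny sketch, with a made-up findings table (the opened/resolved timestamps are hypothetical):

```python
# Sketch: overall open exposure plus mean time to resolution (MTTR),
# computed from a hypothetical findings table.
from datetime import datetime
from statistics import mean

findings = [  # (opened, resolved or None if still open)
    (datetime(2025, 9, 1), datetime(2025, 9, 4)),
    (datetime(2025, 9, 2), datetime(2025, 9, 12)),
    (datetime(2025, 9, 10), None),
]

open_count = sum(1 for _, resolved in findings if resolved is None)
mttr_days = mean((resolved - opened).days
                 for opened, resolved in findings if resolved is not None)
print(f"open findings: {open_count}, MTTR: {mttr_days:.1f} days")
```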

[ 38m48s ] Jenn Gile: Okay. Let's see here. You made the decision to select Endor Labs pretty quickly. What happened during the evaluation that gave you the confidence to make that decision quickly?

[ 39m17s ] Travis McPeak: Yeah. So, like I said, the factors I care about are: it's going to be really good, I don't have to babysit it, it gives me the data that I want, it's solid. I was impressed with how easy it is to integrate, and then the deployment model. I want this to have full coverage of our organization and all of the repos that I care about, and I really like the deployment model where we basically run it internally in a system that we control. No offense to Endor, but we don't have to just ship all of our code to you.

[ 39m55s ] Jenn Gile: We don't want your code.

[ 39m56s ] Travis McPeak: Yeah, exactly, right. It's a liability for you; you shouldn't. So I was impressed with that, with how easy it was to set up, and then the team around it: anytime I had a question about anything, they were immediately on top of it. And then, of course, the data it gave me was like, wow, we're going to cut down a ton of these things. These were not-relevant findings that we were looking at before, and now we have 5% of the problem that we used to have. So that whole evaluation cycle was very quick and easy and effective, and it gave me confidence that this is just going to solve the problem for me, and I can check this box and move on to other things.

[ 40m38s ] Jenn Gile: So I've got a little bit of data. You've been onboarded for about two months. We're at 95% dependency resolution and 97.5% noise reduction, meaning only 2.5% of your findings are reachable. What does that mean to you?

[ 40m56s ] Travis McPeak: That's even better than I thought. It's really good. I mean, those are things that, like I said, I just put out of my mind; I never think about them again. That was noise that me or somebody from our team was going to have to look at, and I don't have to look at it anymore. The number cuts down to something so small that, if we had to, we could just go manually look at everything all the time. So that's solving the problem for me. Now I have this class of issue just covered.

[ 41m32s ] Jenn Gile: Now, again, it hasn't been very long, but do you have any examples of a scenario where Endor has helped you solve something really quickly? You mentioned waking up and there's a big vulnerability, something new. Story-wise?

[ 41m54s ] Travis McPeak: Yeah. So despite wanting the API to push the data where I want it, I'm still checking out the UI. I just pop into the UI and search: where is this thing? And it told me exactly where it is. So I think it's quite useful for that quick, ad hoc kind of investigation. It's just nice and effective and always there.

[ 42m14s ] Jenn Gile: Do you want it to send you alerts in the future for new things?

[ 42m19s ] Travis McPeak: Yeah, I'll set it up that way. I'm sure that'll be easy.

[ 42m22s ] Jenn Gile: Yeah. And then, in terms of investigating something that's new, how long does it take you now?

[ 42m31s ] Travis McPeak: Oh, it's very fast. Like, five minutes. With the combination of the Endor product data and AI, I can get to the bottom of something very, very quickly.

[ 42m46s ] Jenn Gile: What would you like in the future? You mentioned your goal is to not be using the UI, so if you're bringing insights from the product and combining them with AI, what do you want that experience to look like?

[ 43m1s ] Travis McPeak: Yeah. Like I mentioned, the next step after "here's the stuff I have" is going to be augmenting it with my business context. In the cloud, that's going to look something like tags: I go and say this thing is connected to this, and then there's cloud-graph kind of stuff going on. So the ability to fold Endor into that graph and say, okay, here's the app, here are the dependencies associated with it, here are any vulnerabilities that are outstanding in it. That kind of a system. That could end up being in Endor, it could end up being in something else, or it could be that I just have really good data feeds, I pull in everything, and I do some lightweight dashboard myself.

[ 43m49s ] Jenn Gile: I'm going to give you a hypothetical. Six months to a year from now, you have enough data to see trends about vulnerabilities, dependency health, things like that, from feature to feature. How do you feel like you might use that to engage with your engineering partners if you're seeing trends that worry you?

[ 44m13s ] Travis McPeak: I will not.

[ 44m15s ] Jenn Gile: No.

[ 44m15s ] Travis McPeak: No. No, I don't want them to have to think about it at all. If there are trends that worry me, I'm going to go figure out why those are problems, and then I'm going to go and build a systemic solution. I really want every engineer at the company to treat us like partners: the security team is never going to create work for you, they're not there to say no. If there's a security thing, you should be very, very comfortable coming and chatting with us, because we're a true partner for you. And I've been very happy; that's what we have going right now, so it's very important for me to keep that going.

[ 44m56s ] Jenn Gile: I'm going to go out on a limb and say it sounds like you don't see a future where you would need a security champions program.

[ 45m2s ] Travis McPeak: No. I mean, all things are possible, you know, when the company grows huge and nobody knows each other anymore. But at present, with the way we're implementing the security program, I don't want engineers to spend their time that way. I want them to go and build awesome product features.

[ 45m19s ] Jenn Gile: Once onboarding is complete, and I kind of understand the role of the product in your day-to-day, what are you looking forward to from us?

[ 45m28s ] Travis McPeak: Just keep doing the thing. Yeah. I really like solutions where I spend a day, get the thing dialed in right, and then it's autopilot forever. It just solves the problem and tells me about the thing. So, yeah, I don't need anything else from you. Just keep doing the awesome thing that you do.

[ 45m49s ] Jenn Gile: Right on. If a colleague or friend asked you about Endor Labs, what would you tell them?

[ 45m56s ] Travis McPeak: I would say that I jumped in, it effectively solved the problem I was trying to solve, and there's a great team behind it.

[ 46m3s ] Jenn Gile: All right. Well, that's been great. Thank you.

[ 46m4s ] Travis McPeak: Awesome. Yeah, it was fun.

