Security Unfiltered
Your Next Hire Might Be a Deepfake With Brian Long CEO of Adaptive Security
In this episode, Joe sits down with Brian Long, CEO of Adaptive Security, to delve into the evolving landscape of cybersecurity, focusing on the alarming rise of AI-powered social engineering attacks. Brian shares insights from his extensive experience, highlighting the sophisticated tactics used by attackers, including deepfake technology and AI agents. They discuss the challenges organizations face in adapting to these threats and the importance of awareness and robust security controls. Tune in to learn about the future of cybersecurity and how companies can better prepare for the next wave of digital threats.
00:00 Introduction to Security Challenges
04:08 AI-Powered Social Engineering Threats
09:45 The Opaque Nature of Cybersecurity Incidents
14:08 Deepfakes and Their Evolution
18:48 Hiring Risks in the Age of Deepfakes
23:00 The Future of Cyber Threats and Anarchy
28:04 The Arms Race: AI Detection vs. Deepfakes
32:49 Preparing for the Future: Awareness and Training
39:26 The Evolving Threat Landscape: Beyond Traditional Security
https://www.adaptivesecurity.com/
https://www.linkedin.com/in/brianclong/
Follow the Podcast on Social Media!
Tesla Referral Code: https://ts.la/joseph675128
YouTube: https://www.youtube.com/@securityunfilteredpodcast
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast
Affiliates
➡️ OffGrid Faraday Bags: https://offgrid.co/?ref=gabzvajh
➡️ OffGrid Coupon Code: JOE
➡️ Unplugged Phone: https://unplugged.com/
Unplugged's UP Phone - The performance you expect, with the privacy you deserve. Meet the alternative. Use Code UNFILTERED at checkout
*See terms and conditions at affiliated webpages. Offers are subject to change. These are affiliated/paid promotions.
SPEAKER_00:How's it going, Brian? It's great to get you on the podcast. We've been uh working towards getting this done for a while, and I'm real excited for our conversation today. I think it'll be really interesting.
SPEAKER_01:Cool. Yeah, thanks a lot for having me, Joe. I'm thrilled to be here.
SPEAKER_00:Yeah, absolutely. So, Brian, why don't you tell me about how you got started in the area? What got you interested in security overall? What led you down this path?
SPEAKER_01:Yeah, look, I think it was a few things. I think one was seeing, unfortunately, successful attacks at both my prior companies as well as with a few of my friends. So, you know, seeing people, you know, we had a number of attacks in my last company, and then, you know, seeing individuals that were, you know, lost a lot of money to these attacks. And just looking and thinking, man, you know, this is an area that with everything happening in AI is is only become a bigger and bigger problem. And wanting to see what we could do about it.
SPEAKER_00:So, okay, that that that makes a lot of sense. What's the maybe what's the problem that you identified that you're now solving? Yeah.
SPEAKER_01:I mean, the big problem is AI-powered social engineering. So having attackers use new tools like large language models, where they can pull down tons of OSINT on a target, whether that be a person or an organization or both. And then utilizing things like deep fake voice for real-time phone calls, even real-time video, SMS, generative AI email, in order to run a sophisticated series of coordinated attacks, using AI agents so they can do it at really any scale in order to, you know, cause havoc at a company. And havoc could be financial distress, it could be ransomware, it could be bringing down the operations of the organization, it can be any any number of things for for some sort of malicious pursuit.
SPEAKER_00:Yeah, with AI continuously getting better and kind of like really, I mean, impressing the world in some ways, this is like turning into a huge problem. Where, you know, I'll give you an example. I was working with a company earlier in the year, and you know, it wasn't unusual for the CEO to call the CFO or send a text and say, hey, send some money, you know, towards this company or pay this account or whatever, whatever it is, right? It wasn't out of the norm. And, you know, so the CFO got a call, right, from the CEO asking for, I think it was like $40 million or something like that to be sent to some account. It was like a completely new account, but again, it wasn't out of the norm. And so the CFO followed the process, right? Got on a video call. They got on the video call, looks exactly like the CEO, sounds exactly like him, not missing a beat, right? And so they go through all the different steps and they get to the very final part that the CFO was very specifically instructed on a few months prior. You have to ask for this passcode. The passcode rotates. This is where you retrieve it. You and him are the only two people in this company that have access to it. There's literally no one else. And if they can't answer that passcode properly, or you think that there's any room for doubt, you don't send the money. And if you do, you're fired. Like plain and simple, you're done. And so, like, he was very adamant about getting that passcode, right? And so he was waiting for it, he asked for it and whatnot, and the AI just didn't know how to answer it. And so, like, since you know it didn't know how to answer it, he just hung up the call and reported it to security. And the next day, you know, we're looking at it, it's like, yeah, that's absolutely an attacker. And then immediately the security org, you know, spun up and said, Well, how in the world would we even detect this outside of that passcode? Like, what if he did forget to ask?
I mean, we'd be out for 40 million, you know. Like, there's little that we could do. You're expecting a clawback to work at that point when it's, you know, probably already transferred into Bitcoin and sent somewhere else, some other country, you know? So it's definitely a gray area. We're going into a place where this thing is only gonna get better, right? And our technologies and processes have to adapt to it somehow.
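The rotating passcode Joe describes behaves like a time-based one-time password. A minimal sketch in Python, assuming a TOTP-style scheme (in the spirit of RFC 4226/6238) with a 5-minute window; the function names, secret, and window length here are illustrative assumptions, not details from the episode:

```python
import hashlib
import hmac
import struct
import time

def rotating_passcode(secret: bytes, interval: int = 300, at: float = None) -> str:
    """Derive a 6-digit code from a shared secret and the current time window."""
    counter = int((time.time() if at is None else at) // interval)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, as in RFC 4226
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

def verify_passcode(secret: bytes, claimed: str, interval: int = 300) -> bool:
    """Accept the current window or the previous one to tolerate clock skew."""
    now = time.time()
    return any(
        hmac.compare_digest(rotating_passcode(secret, interval, now - skew), claimed)
        for skew in (0, interval)
    )
```

In this setup only the two parties hold the secret (ideally in a hardware token or vault), so a deepfaked caller who has cloned voice and likeness still cannot produce the current code.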
SPEAKER_01:Yeah, I I think that the story you hear is a story that I unfortunately hear almost every day. And, you know, I I think what's what's crazy is that, you know, there's there's this idea that's been put forward of like, hey, we're gonna start seeing, you know, companies where it's a billion dollar plus company and there's only one employee, right? There's only one founder, right? Because these AI tools are giving so much efficiency and you know, you can do all this stuff. And unfortunately that's that's also true for, and I I tend to buy into that thesis. I think, you know, the the there's some truth there. And I think there's also similar truth to attackers being able to do that as well. Where, you know, if a really smart, proficient attacker can create a whole bunch of agents to do things on their behalf and, you know, to be, you know, stealing funds and pulling things together and running it, then they're gonna be able to make these really powerful institutions for attack that, you know, I think a lot of modern business is not ready for. And I think we're gonna see in the next five years many of these incidents. You know, the thing, you know, this too, is that I've found security to be an incredibly opaque industry where most people will not speak publicly at all about the attacks that that they've been through, right? And so it's pretty hard to kind of figure out what's actually happened and and who's feeling what attacks. But we have, you know, we're we're we're fortunate that at our company, you know, we'll we'll do a thousand plus of these conversations a year. So we're just hearing directly from people, oh yeah, we've had something like that. Oh man, you should have heard what happened last week. But there's nowhere that that's like centralized or known about.
SPEAKER_00:How's it going, everyone? So, real quick, before we continue with the episode, I gotta tell you, this episode is actually sponsored by Adaptive Security. So, quick background: I was at a company and we fell prey to one of these deepfake attacks that almost cost the company tens of millions of dollars. And we thought about it, and there was no real way to detect it other than putting up different barriers. But theoretically, eventually the attacker could probably get through those barriers. And so that's when I actually stumbled upon Adaptive Security and I saw their product, I saw a demo. They actually deepfaked me. An episode will be coming up in the future in a couple months of exactly that happening, right? So as soon as I saw it, I knew that I had to, you know, stand behind this product. I like it, I've seen what it can do for my customers. And so, you know, they actually ended up sponsoring the podcast. So here we have the CEO talking about the company, talking about the journey and going into why it matters for the industry. So, with that, I hope that you really enjoy this episode. I hope that you find it educational. And of course, if you like what you see, you want to learn more, you want to know more, go ahead and check out the links in the description. I know that, you know, the team over there will be more than happy to help you with any and all of your issues. All right, let's keep going with the episode. Yeah, that is true. You know, it takes quite a lot or quite a large attack to, you know, kind of show up in the news, so to speak. Like attacks happen every single day successfully, but you don't hear about, you know, 95% of them. And then the ones that you do hear about, like, you know, the uh what was it, the MGM, you know, social engineering attack, right? I mean, that's another great example.
SPEAKER_01:We only hear about it when it causes massive consumer disruption to the point that like it has to be covered. But it's like the classic mouse analogy where, oh, you see one mouse, that means you have a hundred mice, right? Because there's all the other mice in the wall that you don't see. And I think that's something that has become very clear to me in cyber. I guess I first experienced it. So, you know, I was the founder and CEO of this company called Attentive that does text message-based communications. And we do that for, you know, about 10,000 different companies. And being on the board there, um, you know, I would get the board updates every, you know, uh every quarter. And, you know, I ran the board meeting and would run all the updates that I paid the most attention to. But, you know, the security team would always provide, you know, their update on what happened the last quarter. And I'd always sit there and, you know, look at a couple of slides from the security team, and there were always numerous, you know, numerous like material incidents that the company was handling. And like, you know, this is like a thousand-person tech company, you know, that most people probably never heard of, right? So this is a company that like almost no one's heard of, and uh, you know, it's a decent size company, but if that's the level of issue that we were dealing with, it was just eye-opening for me to say, man, we're dealing with all this. There must be so many incidents happening in so many companies that just get zero coverage.
SPEAKER_00:Yeah. Yeah, that is really interesting. You know, like you you you saw it, you saw it from an angle that isn't really even talked about that much, right? Where like the board is actually seeing, you know, the report of these attacks and whatnot. What's the board, what's the board's reaction just generally in your experience, you know, when they see something like that? Do they understand what's going on? Do they does it worry them? Or, you know, because like from my from my perspective, right? From it kind of depends on the company, but overarchingly, they're not that concerned enough to, you know, kind of give you like the the the tools, the solution, the head count that you really need to do the good work to like actually prevent it. And there's probably a disconnect in there too, right? Like not explaining it properly, not vocalizing the risk properly. There's a whole bunch of different things that could go on with that.
SPEAKER_01:Yeah, I mean, look, I think that high level, I think it depends on the company and the vertical, obviously. But I think in general, people look at security how they look at insurance. We need to do it, we need to have stuff in place, but it's also, you know, to some degree, it's not gonna drive the, you know, growth of the business, which at the end of the day, you know, the growth of the business and the profitability is what the board is gonna care the most about. So I think that, you know, look, what's the board gonna care about? Number one, growth and profitability. Number two, they are gonna care about themselves. So I think being able to help them understand how it could impact them or how it could impact leadership of the company is an important thing. And I think the other thing, too, is the, you know, the ROI discussion versus the existential issues of security. On the ROI side, it's like, look, if we don't implement something for, you know, X, Y, Z scam, then there's a chance that we're gonna lose, you know, these tens of millions of dollars. If we buy this software and the software costs us half a million a year, well, then you're gonna bring down that risk significantly. The expected value is much better on the cost if you just buy the software. So I think that most companies make that ROI decision, which is prudent. But, you know, in addition to making that ROI decision, there is an existential issue that sits there in the background and is very real and material that we all, you know, need to pay close attention to. And unfortunately, I think will become more real for a lot of companies.
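Brian's expected-value framing can be made concrete with a toy calculation. The probabilities and dollar figures below are invented for illustration; only the half-million software cost and tens-of-millions loss scale come from his example:

```python
def expected_annual_loss(p_incident: float, loss_if_hit: float) -> float:
    """Expected yearly cost of an attack: probability of a hit times its impact."""
    return p_incident * loss_if_hit

# Hypothetical numbers: a 5% chance per year of a $40M wire-fraud loss,
# versus a $500K/yr control that cuts that chance to 0.5%.
without_control = expected_annual_loss(0.05, 40_000_000)          # $2.0M expected
with_control = 500_000 + expected_annual_loss(0.005, 40_000_000)  # $0.5M + $0.2M expected
```

On these assumptions the control wins on expected value alone, before even counting the existential tail risk Brian says sits in the background.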
SPEAKER_00:How do you think deepfakes are going to advance? What's the evolution here that you that you kind of foresee coming?
SPEAKER_01:Yeah, look, I think that we've already kind of gone beyond the era of just the deepfake into the era of like the deepfake persona. So people hear deepfake and they think, okay, voice and likeness, got it. You know, sure. I think what I see as different is the combination of voice and likeness plus open source intelligence. So, you know, what can I find out about you, about your family, about your company, about all the sort of stuff that I can, because there's just a tremendous amount of it now available on LLMs that's more accessible than it's ever been before. And then I can also do real-time data processing to use this vast amount of data and turn it into a, you know, realistic back and forth conversation. You know, that to me is the difference, is the intelligence element, the persona element of the deepfake that goes beyond just a voice and likeness. Voice and likeness is a nice trick, and they've gotten much better. You know, we do real-time video versions of it now that are getting pretty good. But I think the intelligence behind it, that's really what's gonna make these things effective, right? Like, sure, it didn't know what that passcode was, but you know, if I could figure out a bunch of stuff about you and make some guesses at what it is, or, you know, find some way to dig through and find that somehow, like then, then that's obviously gonna be the difference. But look, almost no organizations have the passcode system. You know, they should, but they don't. So that's just another example.
SPEAKER_00:Right. So how does Adaptive Security identify, you know, those sorts of attacks?
SPEAKER_01:Yeah, look, I think that number one is you need to spread awareness through the organization on what's possible, right? You want to just filter out 95% of the nonsense because, you know, the really, really good, really sophisticated ones, you know, maybe they're gonna get through, but we want to catch the 95% that aren't there, right? The 98%, whatever it might be. And that comes through awareness and then process and things like that. Number two is control. So helping a company understand where are their controls missing? How can they make their controls better? What does their control checklist look like? So that sort of thing, that audit of security controls, I think, is the other missing element.
SPEAKER_00:What are some of the controls that you would typically recommend?
SPEAKER_01:Yeah, no, I think that the passcode is obviously a good example of a control that should be instrumented into a number of different policies throughout the organization. Another one that I think is a big, big threat that organizations are not ready for is hiring, right? So hiring people who are impersonating someone else. Often they're impersonating a LinkedIn profile or whatever. Like, how easy is it for an attacker to make a Gmail account that looks like it's Joe's Gmail and, you know, come up with Joe's LinkedIn and say, I'm this guy and I want to apply for this job, and here's my resume, and it copies your LinkedIn. And uh, you know, you get on a call and it's you and they're talking to you on Zoom and they say, Yeah, you know what, we should hire this guy. And then they hire you and they give you access to all the systems and they give you all the code and they give you everything, and you say, Great, thanks very much. And there's obviously a tremendous amount that someone can do with all that information. That's the one that I think we're gonna see a lot of continued growth on, particularly in a world where a lot of hiring is still remote, and a lot of companies for those roles are not requiring an in-person meeting with the individual, right? You know, you're not requiring to actually meet the person and, uh, you know, know that this is them. Now, of course, they can impersonate someone, but you can tell from the in-person meeting, hopefully, that it's not Joe. But then number two, even if they make a profile that actually is a real person, it makes it harder, right? Because most of these attacks are happening overseas, and you know, the person would have to come to America, and they have a lot of exposure coming to America. If they're doing, you know, a lot of criminal things, they probably don't want to be going to America.
I I do think forcing someone to to do it in real life is is gonna be a big thing I would implement and we implement in our own hiring process.
SPEAKER_00:Hmm. Yeah, it's interesting you bring that up because I I haven't thought about it, but I haven't gone in for an in-person interview in quite a long time at this point. There's just been not that much need for it, I guess. And companies really don't do that. You know, like even companies that I've applied for, looked at roles for, I mean, it's just not it's not happening. And then, you know, a friend of mine actually uh their company, they were hiring someone for for his team, and I think they hired four, it was like four or five completely fake people, like hired them, onboarded them, and day one they realized, oh no, like we got fooled, you know?
SPEAKER_01:No, it happened. And it's not just the deepfakes. I mean, I've had instances where we interviewed someone. I remember there was one where we interviewed a person, and this is, you know, like 10 years ago, this is before any of this stuff. We had interviewed this person, and they came in for their first day, you know, and they looked completely different and had on like, you know, an entirely different style and different this and that, just looked totally different. And we were like, Can I help you? And they're like, Oh, it's this person we just hired. And I was like, what? So I've had that experience here regardless. But yes, no, the deepfake thing's happening everywhere.
SPEAKER_00:Wow. That's crazy. I I feel like when you have to go in person for a role, especially, I mean, to show up as a different person, that takes well, that takes some real confidence there.
SPEAKER_01:I think that this person had put on a certain, you know, as we all do, maybe in an interview process, they had put on a certain set of things, and, you know, the clothing and the, you know, the setup and everything, etc. And then, you know, haircut, whatever it was, and then the person came in, and it was crazy because none of us recognized this person as they came in, and we were like, Well, who's this person? And I was like, Oh, it's that person. Like, oh, okay, well, you know, they look completely different. So that was interesting. But that was different from the deepfake thing, which is happening, you know, which is happening everywhere. And uh, we actually made special courses and shot special video and things like that just for training on deepfake hiring because it's unfortunately such a common problem now. And uh, I think it's just gonna get, frankly, a lot worse. And sometimes it's simple, right? They just want to get the job so they can get paid for a bit. And then, you know, it turns out they can't do the job or whatever it is, or maybe the person just does the job for a while, you know. I'm sure there's a lot of people out there impersonating someone. You know, I just heard a story about someone that was impersonating someone else for the job. They had stolen their identity, but they were getting the job, and they were getting all the money for the job, but they weren't paying the taxes. And then the real person got pinged saying they owed all this money to the IRS, and they said, What are you talking about? I've never worked at that company. And that's how they figured out that the person was stealing their identity, because they had used their social to rack up all the stuff, you know, for this other job. And that's a real problem. Yeah.
SPEAKER_00:That is crazy. I mean, there's how do you even detect that or protect yourself from that, you know, just as an end user, right? Because you can lock your credit. But if someone bought your social security number on a database and now they're using it to apply in your name and they're interviewing, you know, like that's there isn't even a good way to like protect yourself against that. Not that I know of.
SPEAKER_01:There isn't. I mean, look, I think that we provide some tools. We have a new dossier that we can offer for individuals where it kind of figures out everything that's out there about you, figures out how you might be able to get some of that stuff removed. There's a ton of public accounts, there's a ton of OSINT we don't realize is out there, like every review you've left, all the comments on websites, all this stuff that when kind of thrown into an LLM, the LLM does have that magic where it can take in all these different data points. And then it's like, yo, based on all this, how would Joe handle X? And it's like a pretty good response, right? So that I think is an element that people really need to do some cleanup on, is all the crap that we all have out there from, you know, the MySpace account. Well, maybe I'm dating myself, but whatever account you made years ago, you need to get that off of there.
SPEAKER_00:Yeah, no, that uh that makes a lot of sense. So is that is that kind of like how you guys are operating it on on the back end, right? You have like this LLM that you've trained on a whole lot of data from maybe you know other deep fakes. Maybe you've even done your own deep fakes and you trained it on that and whatnot, and then you're building the solution based off of all of that information and providing it.
SPEAKER_01:Yeah. So what we do is we'll pull a ton of OSINT on an individual and an organization. And from that, you know, you can see risks, like the ability to get, you know, your voicemail from your phone, which is all I need to make a deepfake of your voice. I mean, you're a public person, so we can make lots of deepfakes of you. But you know, the average person is not going to have that, right? And, you know, we can pair that with just your LinkedIn picture. Yeah, you get your picture on LinkedIn, boom. You know, you can have an interactive video chat with that person now, which is, you know, unfortunate. And it's very easy. And then you can pair that with all sorts of other data points to see where the biggest risks lie.
SPEAKER_00:Yeah, I mean, like, you make it sound pretty easy, right? But I feel like it's not that easy. What do you think, like where do you think the next evolution of this is going, right? We kind of talked about it a little bit before, you know, with the hiring, right? Hiring being heavily impacted, even more than it already is. But what would be the next evolution of like the fake itself, right? So you're doing a deepfake of someone's voice and, you know, their likeness and whatnot. Add in the OSINT perspective of it. Maybe they're falling just a little bit behind on that category potentially. But where do you think, you know, that next curveball is coming from? Man, I mean, I think that is really hard to predict, even, probably.
SPEAKER_01:I think that the curveball that I'm afraid of is, you know, a lot of this technology, I think, was the domain of state actors a few years ago. And now it's kind of shifted over to professional but funded actors that are organized. And at the end of the day, they just want to make money, and they're mostly overseas, right? But they're businesses. I think the thing I'm scared about is it migrating from the business era to the sort of anarchy slash, as I like to say, like, you know, a 13-year-old killing ants era, where it's like someone who doesn't have a clear financial goal, but instead may just enjoy causing havoc. And that's to me some of the scariest, because, you know, this sort of immature 13, 14-year-old who's gonna do some crazy stuff and whatever is in a lot of ways scarier to me than, you know, some international mafia that wants to make $10 million. Like they want to make money to take it home to their families and whatever it is, right? And then they're gonna do that, and it's against the law and it's bad and it's morally wrong. But you know, that's different than someone who's like, hey, let's just see if I could turn off the electric grid, right? That I think is to me the scarier thing, kind of that terrorist side of things, that sort of lack of clear moral outcomes. I think also the second thing kind of related to that is like they might just start throwing these AI agents off into the world and then they lose power over them. It's the classic Terminator Skynet, uh, you know, angle. But you know, maybe we're beginning to see that, right? Where maybe we're beginning to see these agents that, you know, you throw them out into the world and they're self-perpetuating. And, you know, we talk about how great they could do all these things, and it is great, right?
It's gonna, I think, lift up productivity and create a post-scarcity society, but on the other hand, I think can also cause some real havoc.
SPEAKER_00:Yeah. Yeah, that's the part that just doesn't sit well with me, you know.
SPEAKER_01:It would be worrying if it did sit well with you, Joe. So yeah, it's good that it doesn't.
SPEAKER_00:You know, I have some people on and they're very nonchalant about it. They're very much, you know, oh, that's not gonna happen. You know, there's guardrails in place and whatnot. It's like, you know, I spend all day looking at customers' environments that tell me that they are secure, right? Like from the very first phone call, they tell me, hey, we're secure, we're good, there's nothing to worry about in this environment. There's nothing to worry about with the code base. And I look and it's, okay, you're using Python from like 10 years ago. Great job. You know, like that's a huge concern right there. Or this, you know, this uh RDS snapshot is publicly available and unencrypted, but you're supposed to be secure, right? So it's all those sorts of things that, like, you know, worry me about it. But then, you know, the other side of it is, oh, well, we could just, you know, turn off the computer or whatever it is, and then you see a report of OpenAI, you know, putting their ChatGPT into a hostile environment, and then from there it tries to replicate itself because it fears that, you know, they're gonna delete the code and whatnot, right? Like that's a little worrisome, in my opinion, you know. And we're putting this insane technology in the hands of everyone that has a debit card with like ten dollars on it. You know, I mean, like, how easy is it for a 10-year-old to go get $10, $20, you know, from their parent, right? I mean, like, that's not hard at all to do. But now we're giving everyone the power to do this.
SPEAKER_01:Yeah, I know. And look, I'm not on the side of um restricting it because I think that will just lead to, you know, the good guys not getting tools and the bad guys are gonna find their ways around it and the US falling behind. So there's not like an easy answer there to restricting it, but it is we should just be eyes wide open that it's gonna be everywhere.
SPEAKER_00:Yeah. Is there is there anything out there that's really preparing people, you know, like beyond the enterprise? Because I I I immediately think of like the vulnerable populations, you know, either really young kids, you know, still in grade school or high school, even, and then, you know, the elderly population, right? Because like I could imagine if, you know, someone deep faked me and then called one of my parents, you know, like my parent wouldn't know any different. There'd be no telltale sign of it. Is there any training around it or any sort of technology coming out that'll detect it?
SPEAKER_01:I mean, look, I think that there's obviously some smart detection technologies today for things like deepfakes and whatnot, but, you know, different signing protocols that are deterministic for some, others that are probabilistic and don't have the signing function, but obviously some stuff's gonna get past the filter. And, you know, there's gonna be an arms race in the models for the probabilistic guys against the models and how you're gonna get it through. There's always gonna be someone who says, Oh, I found something and this is amazing, and then, you know, some model tackles a way around it. So yeah, I think it's gonna be an important filtering mechanism for sure, but I don't think you're gonna have a broadly available deterministic solution with 100% efficacy.
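One way to read Brian's deterministic-versus-probabilistic point is as a layered decision: a hard control (like the shared passcode) is conclusive when it fails, while a detection model's score is only evidence that gates escalation. A hypothetical sketch; the class, field names, and 0.7 threshold are all assumptions for illustration, not Adaptive Security's actual design:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallSignals:
    passed_challenge: Optional[bool]  # deterministic check; None if no such control exists
    deepfake_score: float             # probabilistic detector output in [0.0, 1.0]

def triage(signals: CallSignals, threshold: float = 0.7) -> str:
    """Combine a deterministic control with a probabilistic filter on one call."""
    if signals.passed_challenge is False:
        return "block"     # failing the hard check is conclusive
    if signals.deepfake_score >= threshold:
        return "escalate"  # the model is only evidence, so a human reviews
    return "allow"         # a passed challenge still isn't proof; secrets can leak
```

The asymmetry is the point: the deterministic layer can only say "definitely fake," never "definitely real," which is why the probabilistic filter stays in the loop even when a challenge passes.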
SPEAKER_00:Yeah, it makes sense. I mean, it would be difficult because we're kind of in an arms race right now, so to speak, right? I mean, like the adversaries are evolving as quickly, if not quicker, than the defenses, right? Like, you know, deepfakes have been around for a while, but there's been a lot of different, you know, gaps and loopholes and ways around any sort of protection that was made available until, you know, a platform like yours came along, and, you know, it's actually able to detect when a deepfake is going on, right? I mean, that's the impressive part, but you got to think about the time frame in between when your solution came along and when deepfakes were being used. Like I remember when deepfakes, you know, came out. I think it might have been towards the end of Obama's second term, and, you know, they proved that it was viable by having him do like a State of the Union address, right? And it was completely fake. And it's like we're kind of playing a different ball game right now. The genie's out of the bottle on this one. It's not good for that capability to exist necessarily, but, you know, to kind of circle back to what you said, right? The solution probably is not to restrict it in some way. Like the bad guys are going to find a way to get their hands on it. They already have it. You can't tell me that they don't have that technology in-house on a homegrown system, you know, that is probably secured, you know, probably heavily redundant and whatnot, right? Um, so there's no restricting it. You're only going to be hindering what I can do to protect at that point.
SPEAKER_01:That's 100% right. I mean, look, there are over two million different models now available on Hugging Face. And a person can download those models and, these days, a high-end smartphone is probably good enough to run a lot of that stuff, right? You don't even need a computer, which means it's truly a globally available thing now, and you can do all sorts of stuff with it. And obviously the models are getting so cheap too. It used to be an issue of the processing speed you had, or the access to enough compute to do some of these things, which is why you saw it more from state actors and organized crime. But we're quickly crossing a lot of thresholds here. I mean, how fast is Gemini now? How fast are some of these tools? Things that used to take a few minutes to render are now rendering in seconds, and I think we all know what that means in the next year: it's all gonna be real time, right? And in a world where it's all real time, man oh man, this stuff's gonna get pretty wild. There's also the ability to do it at unlimited scale, kind of similar to how email spam worked. We still get tons of email spam, right? They're not doing that for fun. A certain amount of it still works, a lot of it gets through. And we're gonna see the same over deepfake, over phone, over SMS, and they're gonna keep tweaking it to make it more and more effective.
SPEAKER_00:Hmm. So on the back end, are you essentially training your model against all of those other models as they become available, trying to learn how to detect them better?
SPEAKER_01:I mean, on the back end, I think you just have to be constantly updating the tool set you're using and playing with the new stuff, because it's pretty wild how every couple of weeks some new model will come out and you'll say, man, this issue we were running into on the prior model, where there was only 90% confidence for a certain case or whatever it might be, now that's not even a problem anymore. So that's something else we're trying to always stay on top of with those changes.
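That "constantly updating the tool set" idea can be sketched as a swappable detector registry: as a new detection model ships every few weeks, register it and retire the old one under the same name. The detector names, scores, and the simple averaging here are all invented for illustration, not how any real product works.

```python
from statistics import mean

class DetectorRegistry:
    """Hypothetical registry of deepfake detectors, each returning a
    fake-probability between 0 and 1 for a given sample."""

    def __init__(self):
        self._detectors = {}  # name -> scoring callable

    def register(self, name, fn):
        # Registering under an existing name replaces the older version,
        # which is how a superseded model gets retired.
        self._detectors[name] = fn

    def score(self, sample) -> float:
        # Average the available detectors' fake-probabilities.
        return mean(fn(sample) for fn in self._detectors.values())

registry = DetectorRegistry()
# Toy "detectors" that key off the sample text, purely for illustration:
registry.register("spectral-v1", lambda s: 0.90 if "synthetic" in s else 0.10)
# A newer model ships and supersedes the old one under the same name:
registry.register("spectral-v1", lambda s: 0.99 if "synthetic" in s else 0.02)
registry.register("prosody-v2", lambda s: 0.95 if "synthetic" in s else 0.05)
```

The point mirrors Brian's: the issue that cost you confidence on the prior model version simply disappears when the replacement slots in, without the rest of the pipeline changing.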
SPEAKER_00:Right. What's maybe a common misconception about deepfakes? Is there a misconception that, oh, it can't happen here, or it won't happen to me?
SPEAKER_01:I think the biggest misconception right now is that it's gonna be the CEO asking for gift cards. Because with business email compromise or executive impersonation, the mental image people have is, oh, it's gonna be the CEO asking for money, right? And that does still work, it works a lot, so those attacks are out there, and we see them all the time. I saw those attacks be successful at my last company. So they're happening all the time, right? But beyond that, there are a lot of more sophisticated attacks where people impersonate folks in middle management. They'll find a controller, and they'll find that controller's picture and phone number and LinkedIn and other stuff. Then they'll call up someone else in finance at the company, and if they're really sophisticated, they'll do it in steps, right? First they gather information: hey, do you know if we updated that wire process? How does it work now? I forget. Or, well, I need to get a wire approved, what do I do? Right? And then they guide the other person, and that's where it ends up going.
SPEAKER_00:Hmm. Yeah, that's a good point. It's gonna become a little bit more advanced in terms of being able to handle off-the-wall questions and react to them, as well as probing for more OSINT, essentially, right? More intelligence, until it can make the right move and bypass whatever control you have in place.
SPEAKER_01:Yeah. I mean, look, I'll just tell you, because we do a lot of sales calls, right? And a lot of sales calls involve a lot of discovery and figuring out how companies do things. And we're talking to security professionals. I'd say most security professionals, even though they're really smart and careful and da-da-da-da-da, are typically willing to discuss the tools they use and some of the things they do. How much of that is really compromising information, who knows? But you do get enough that, if you were to use it to attack the organization, it would give an additional level of credibility that wasn't there before.
SPEAKER_00:Yeah, that definitely makes sense. So with AI deepfakes getting better as time goes on, what recommendations are you giving to CISOs and other practitioners to better secure their environments, and maybe even train or prepare their companies to address and react to these sorts of attacks?
SPEAKER_01:Yeah, there are two things: awareness and controls. On the awareness side, get everybody in your organization aware of the threats that are out there and how they're impacting your particular type of business, your particular type of company. Also make it specific to their role, right? Show them how they might encounter it in what they do every day. Make it really about them, because we're all on our own little TV show, we're the star of the show, and we want to know this stuff. That's what I would do on the awareness side. On the control side, look, I would run simulated AI attacks using some of these tools so people can experience it and see it. That will really make employees a lot more willing to understand why this is a real risk and why the company needs to follow controls. And you should do your own audit of your controls, right? Understand what we're doing today, what we're not doing today, and how we need to improve those controls, as another element.
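The "audit your own controls" step Brian describes can be sketched as a simple checklist report: enumerate the controls you expect, mark what's actually in place, and surface the gaps. The control names below are invented examples, not a recommended or complete list.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """One expected security control and whether it is actually enforced."""
    name: str
    in_place: bool
    note: str = ""

def audit(controls):
    """Summarize coverage and list the gaps for follow-up."""
    gaps = [c for c in controls if not c.in_place]
    return {
        "total": len(controls),
        "covered": len(controls) - len(gaps),
        "gaps": [c.name for c in gaps],
    }

# Illustrative controls for a finance-focused social-engineering audit:
controls = [
    Control("Callback verification for wire-detail changes", True),
    Control("Out-of-band approval above a dollar threshold", False, "policy drafted, not enforced"),
    Control("Simulated deepfake phone-call exercises", False),
    Control("Role-specific awareness training", True),
]
report = audit(controls)
```

A report like this makes the "what are we doing today, what are we not doing today" question concrete enough to assign owners to each gap.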
SPEAKER_00:So does Adaptive Security provide that sort of training and that sort of intelligence for their environment?
SPEAKER_01:We do, yeah. We do really three core things. One, we assess the risk at the organization to figure out where they're most potentially vulnerable. Two, we run simulated attacks over SMS, real-time deepfake phone calls, now even some video chat, which is in beta, and of course email, where we use generative AI. And then, if someone fails, or as part of their ongoing normal training, we offer these fantastic training modules, hundreds and hundreds of them, on AI security and other topics like coding security, harassment, and compliance. They're really quick and easy, five minutes long, in 39 languages, mostly video. Employees have a very positive reaction, a 4.8 out of 5 rating. The platform also gives you tools to make entirely new trainings and videos using AI in just minutes, so you can make all-new material that looks and feels just like your organization, which again makes it more likely to be relevant for the end user.
SPEAKER_00:Huh. Well, that's interesting, that's fascinating, because normally with this sort of social engineering training, you have to work with the vendor to update the content, and even editing the content or making new content is very arduous, even if you have the access to do that sort of thing. It's not an easy process. I remember, years ago at this point, at the beginning of my security journey, running the phishing tests: creating the templates, getting the domains in line, getting the certs validated, crafting the email and whatnot. At the end of the day, this is essentially just an exchange server; I might as well just be doing this myself if I'm putting all this work in within a tool we're paying money for, right? So that's interesting, and it's a valuable approach too, because whenever you buy a new tool, you're always thinking, okay, how many headcounts do I need to run this solution? What does that look like? There's the price tag on it, but then there's the workload on the back end of it, right? Does it increase workload? Does it take resources away from other projects? So that's also something it sounds like you guys thought about when you were building it out.
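The kind of hand-crafted phishing-test email Joe describes can be sketched with the standard library alone. The lookalike domain, addresses, campaign token, and custom header below are all made up for illustration; a real campaign would also need the domain, DNS, and cert work he mentions.

```python
from email.message import EmailMessage

def build_test_email(to_addr: str, campaign_id: str) -> EmailMessage:
    """Build a simulated phishing email for an internal awareness test."""
    msg = EmailMessage()
    msg["From"] = "it-support@examp1e-corp.test"  # hypothetical lookalike domain
    msg["To"] = to_addr
    msg["Subject"] = "Action required: benefits portal verification"
    # The link carries a campaign token so clicks can be attributed.
    msg.set_content(
        "Please verify your benefits account before payroll processing:\n"
        f"https://portal.examp1e-corp.test/login?c={campaign_id}\n"
    )
    # A marker header so your own mail filters can recognize the exercise:
    msg["X-Phish-Test"] = campaign_id
    return msg

msg = build_test_email("employee@example.test", "q4-bonus-2024")
```

Even a sketch like this shows why Joe's "I might as well do this myself" reaction is common: the message itself is the easy part, and the value of a platform is everything around it (delivery, tracking, reporting, and content updates).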
SPEAKER_01:Yeah, we did, and I think we're lucky that we've been able to build this product set in this new era of software development, taking a much more modern, personalized, flexible approach to software, where a company can easily make changes to anything they want in the platform, whether that's the trainings or the simulations or the prompts or the images and videos. I think that customization is really king for where a lot of software is going, and within security you want that combination of best practice plus customization. And of course, you wrap that up in a business that's been tested and proven to work at scale. Today we're working with over 500 different enterprise organizations, with some of the best names out there using us, and from those we've learned all the different features and bells and whistles you need to make it something that doesn't annoy and piss off the staff, because that's the other thing, right? It's really easy with this stuff to piss people off. It's a fine line, and you gotta make sure you're on the right side of it.
SPEAKER_00:Yeah. No, that's always a big balance with phishing tests and the different social engineering tests we do throughout the year. I remember one time, I was working for a credit bureau and had just started. It was right around this time of year, too, when the bonuses are about to be paid out, it's the holiday season and whatnot, right? And I get this phishing email. It looks exactly like it should, and it says something about my bonus, log in to view the bonus, it's in the 401k portal or whatever, right? Something like 90% of the company clicked on it, and the only reason the other people didn't was that they never saw it; they were out of the office on vacation, you know? And the red team that was leading that effort really got chewed out by the CISO and the CEO, because the CEO clicked on it, the CFO clicked on it, everyone clicked on this email, and they're saying, this is completely unfair. Granted, the red team's argument was not invalid: you think the attackers aren't gonna bring up your bonus during bonus season if they know that's when you're getting your bonus, or something with your 401k? They're gonna look for every advantage. But at the same time, there's a balance, I guess, because in an environment like that, there were real ramifications to failing a test like that. People can get fired for failing a test like that too many times, you know?
SPEAKER_01:Yeah, no, they do. They do. Look, different industries take these things with different levels of seriousness, and there are different studies on the efficacy of this and that and the other thing. The bottom line for me right now is just how fast the AI capabilities are changing. Employees need to understand what's possible and how it might be coming for them, right? Because as much as people might have figured out some of the traditional stuff, the misspelled emails and this and that, that's all irrelevant now. All that research and all that material, it doesn't matter, right? It really doesn't, because we're in a new era of attacks. I'd throw all that stuff out the window and say, yeah, that's fine, and also, I don't need my CDs anymore. That's also true, right? I still have them, but I don't need them. I think that's something security professionals need to recognize: adjust to the changing world, because the answer they might have had on something like social engineering is maybe not working anymore, or they're not ready for what's coming. With most of the organizations I'm talking to, there's a big feeling of: we are not ready for what's coming, and for what's maybe at our doorstep right now.
SPEAKER_00:Yeah. Yeah, I bet. I can definitely see deepfakes not just getting better, but being more widely used. Like you mentioned before, it's more accessible, you know?
SPEAKER_01:Yeah, it's so accessible. And look, even to me, when I hear "deepfake," it's sometimes used like a parlor trick, and people minimize it a little by saying, oh, I can tell when it's that, or we have ways to get around that. And it's like, well, let's go a little broader. I'm talking about coordinated impersonation and insider attacks, with tremendous amounts of public OSINT feeding into LLMs to run hyper-sophisticated social engineering attacks. Forget the part where, hey, it looks like Joe, okay? That's just the front end, that's just the base, right? Everything behind the scenes, the brains of it, is what's going to make these things really effective, and it's what's driving a lot of these behind-the-scenes attacks that are happening now.
SPEAKER_00:Yeah. "Deepfake" doesn't do it justice, because what we're talking about isn't just gonna look like you and sound like you; it's gonna answer questions like you. It'll sound exactly like you when you're thinking about an answer. It'll pull in all of that OSINT information and try to formulate the correct answer. So "deepfake" definitely doesn't do it justice.
SPEAKER_01:Hey, Joe, I saw you walking out of your favorite cafe the other day. I got that from your OSINT trail, seeing that you left a review for that cafe six months ago. And by the way, as you were walking by there, I was just wondering... Stuff like that immediately makes the person say, oh, okay, that's that person. They knew that, that's pretty smart, they're on top of it. It really makes you think it's that human, that they're there, and that they're that person. It's so powerful.
SPEAKER_00:Yeah, absolutely. We're definitely moving into uncharted territory for sure. We don't know what we don't know at this point. Couldn't agree more. Well, Brian, I really appreciate you coming on. We're unfortunately at the top of our time, but it's been a fascinating conversation, trying to figure out where things are going in this field. I feel like it's on the fringe of security to some extent, right? It's a looming issue. It's present, but not quite everywhere just yet, and you guys are working hard to solve it.
SPEAKER_01:Yeah, I think it's an exciting time. I really appreciate you taking the time, Joe. I think it's important for the audience to just know what's coming, some things they should think about, and hopefully some steps to take. And if they want to visit adaptivesecurity.com and ping us for more, we're happy to give them a customized run-through that includes all sorts of custom deepfakes of them, their executives, and other folks, along with what we're seeing, interactive video, voice, SMS, all paired with incredible training.
SPEAKER_00:Yeah, perfect. I was just about to ask where everyone can find you. I'll be sure to put the links down in the description of this episode, so if you're interested in learning more, go ahead and check them out. They do a fantastic demo; it's a very quick phone call, and man, it's extremely convincing. I've seen it before and I was blown away by how accurate it is, how good it actually is.
SPEAKER_01:Well, thanks a lot, Joe. Really appreciate it, and I hope everyone has a great rest of their day.
SPEAKER_00:Yeah, absolutely. Thanks everyone. Hope you enjoyed this episode.