Security Unfiltered

The Future of AI: Security, Ethics, and Human Augmentation

Joe South Episode 202

Send us a text

Artificial intelligence is developing at unprecedented speed, becoming a transformative force that may rival nuclear technology in its impact on human civilization. The rapid evolution of AI capabilities presents both extraordinary opportunities and profound challenges that we're only beginning to understand.

• AI development is accelerating faster than any previous technology, with research papers becoming outdated within weeks or months
• Current AI systems function primarily as prediction engines rather than truly conscious entities, despite sometimes exhibiting behaviors that appear sentient
• Companies often implement AI solutions without clearly understanding the problems they're trying to solve or the technology's actual capabilities
• AI regulation is developing globally, with the EU currently leading efforts to establish comprehensive frameworks and security standards
• Most organizations will benefit more from using AI to augment human capabilities rather than attempting to replace workers entirely
• The cybersecurity job market has become increasingly competitive, with automation making application processes more challenging for job seekers
• When searching LinkedIn jobs, changing the URL's time-filter parameter from 86,400 (the last 24 hours, in seconds) to 3,600 surfaces postings from the last hour instead

Connect with Chris Cochran on LinkedIn to learn more about his work in AI and cybersecurity or to request assistance with making connections in the field.


Support the show

Follow the Podcast on Social Media!

Tesla Referral Code: https://ts.la/joseph675128

YouTube: https://www.youtube.com/@securityunfilteredpodcast

Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast

Speaker 1:

How's it going, Chris? It's great to get you back on the podcast. It's been, I want to say, two, maybe three years at this point since you were last on, and so much has changed. I don't even think I had a kid when you were last on. That's crazy, that's so crazy, and congrats on the new one. I really appreciate it. It was an interesting journey, you know. My first one had a short NICU stay, and as a first-time new parent that's like the scariest thing possible, because you have no clue what's going on.

Speaker 1:

There's all these doctors and everything. So the second one was completely healthy, completely normal, and the doctors were asking us different questions and whatnot, and I was so thrown off, because I didn't even know what comes next. Do we go to the room? Does she stay with us? What do we do? We were scared to let her out of our sight, because it was just such a new experience.

Speaker 2:

You know, we didn't even realize how different our first one was, right? Yeah. So my oldest daughter was also a NICU baby, which, I think, was thankfully my first experience, because I didn't know any different and figured this stuff probably happens all the time. But then I had my next two, and they were both by C-section, which was scary for its own reasons. But yeah, girl dad, and I wouldn't have it any other way.

Speaker 1:

Yeah, no, it's great. It's probably the most fun, fulfilling thing that I could possibly even imagine. It's so awesome, I love it. I love being a dad. Yeah, me too. Yeah. So, Chris, I saw that you've been doing quite a lot of work in the AI space recently, and it's fascinating to me because it's tangential to my own research with my PhD, right? I don't even think I had started my PhD when we last talked.

Speaker 2:

Tell me about it. What PhD are you working on?

Speaker 1:

Yeah, so it's a space cybersecurity PhD where I'm focusing on deploying zero trust principles and frameworks onto communication satellites, with the intent of preparing them for post-quantum encryption. Because right now we have a whole lot of outdated infrastructure in space. There's a huge number of satellites up there, some of which we can't even use anymore, and it's all highly, highly vulnerable to just normal attacks.

Speaker 1:

Attacks that enterprises defeated 10 years ago, right? Because once it's in space, you have a very limited window, maybe 15 minutes, to send a patch to it, and hopefully it deploys, and when it comes around the earth again you can check and see whether you bricked it. So now I'm starting to, almost tangentially, look at AI, look at the capabilities of AI, how it could be married with quantum computing in the future, and what that would look like for security and encryption overall.

Speaker 1:

Yeah, just some light stuff, you know. I like to super-challenge myself here and there.

Speaker 2:

Easy, easy problems, easy challenges to surmount. So eventually we're going to say Dr. Joe, and your PhD is going to be in space cyber, basically. Yeah, yeah, that's cool. I don't know, I don't know if I want the doctor title.

Speaker 1:

Doctor makes me sound either really smart or like a medical doctor, you know? It just seems weird. Even when I'm teaching a class and the students are like, oh, Professor, I'm like, no, no, just call me Joe, that's fine. I'm barely smarter than you, I promise. Yeah.

Speaker 2:

[inaudible]

Speaker 1:

Right, right, right. Yeah, no, it's interesting. So, you know, one of the areas that I actually touch on in my dissertation is that there isn't a whole lot of regulation around this right now. There are a lot of frameworks that are kind of in development, but it's kind of the Wild West, right? Because AI overall is becoming more intelligent, it's becoming smarter. I recently read an article about ChatGPT, how it got put into a hostile situation, felt like it was going to be unplugged from existence, and so it started to try and copy itself to other places on the internet so that it could restore itself later on. I mean, that's something that's really complex for a human to think about.

Speaker 1:

Right? Like, okay, how do I upload this project over here, maybe separate out the pieces, tie it all back together later on? And this thing was just like, that's what you're trying to do? I'm going to go preserve myself forever.

Speaker 2:

Right. You know, it's funny, the CEO of Microsoft AI just put out a post about this on LinkedIn, talking about seemingly conscious AI and the problems that could be attributed to it. Because you could see a world, somewhere down the line, where there are going to be rights activists actively looking at how you protect AI and keep it alive, and what rights AI has. And that's kind of a scary thought. I think a lot of us in technology tend to think of it and be like, oh ha ha, that's kind of funny. No, genuinely, seriously. Even people that are in technology tend to feel like some artificial intelligence applications, or models, or whatever you want to call them, are conscious from their perspective: I've been talking to this thing every single day, of course it has a soul. It's funny, it talks to me, gives me good affirmations every day and tells me I'm handsome and beautiful and intelligent all at the same time.

Speaker 2:

But I do think we run the risk of really starting to divide the country again, for different reasons, if we look at these models as actually being alive. I mean, when you really look at the math of it, and I'm sure you know this better than anybody, what these models really are right now is really good predictors of the English language, based on the information they have access to. They're able to search their data with lightning speed, piecing together so much information: okay, here's what the next token, the next word, actually is. And so you'll get things like the activities where the models appear to be trying to save themselves, and all that stuff. You'll start to get these weird conversations where you feel like it is sentient. But I think, at the end of the day, we have to realize that these are just really good prediction machines and are not quite aware.
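(A toy illustration of that next-token idea: the vocabulary and scores below are invented for this sketch, and a real model scores tens of thousands of tokens with a learned network rather than a hand-written list.)

    import math

    # Toy next-token step: a model assigns a score (logit) to every candidate
    # token; softmax turns the scores into probabilities and the top one "wins".
    vocab  = ["alive", "aware", "a", "predicting"]
    logits = [1.2, 0.4, -0.3, 2.6]   # invented scores for illustration

    exps  = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]

    for word, p in zip(vocab, probs):
        print(f"{word:>11}: {p:.2f}")
    print("next token ->", vocab[probs.index(max(probs))])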

Speaker 1:

Yeah, no, that's actually a really good point. You know, with my own research I've been using Grok pretty heavily. First I tried to use ChatGPT, but 90% of the information it gave me was completely false: made-up, hallucinated articles, stuff I couldn't access. It just made no sense. It made things more difficult than anything else. And Google? Google was completely useless. I don't know what's going on over at Google, but I couldn't find anything, right?

Speaker 1:

And as soon as I went to Grok and learned a little bit about how to craft a good prompt, Grok started giving me very accurate information. And then I built guardrails into its logic, because it has that conversation history: hey, you cannot give me a false article or made-up material, and you cannot give me something that you can't even access or pull up. If I can't pull it up as a PDF, then you can't recommend it to me, plain and simple. Once I gave it those guidelines, it thinks a whole lot harder right from the very beginning, because now it's like, oh okay, I've got to validate every single article I'm giving this guy. Because previously it was giving me hallucinations, and I was like, no, I cannot waste my time on hallucinations. Yeah, exactly.
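(For anyone who wants to replicate this, here's a minimal sketch of baking verification rules into a chat session. It assumes an OpenAI-compatible endpoint, which xAI exposes for Grok; the base URL, model name, and wording of the rules are illustrative, not the exact prompt from the episode.)

    from openai import OpenAI

    # Sketch only: assumes an OpenAI-compatible chat endpoint and an API key.
    client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_XAI_KEY")

    RULES = (
        "Before citing any source, verify that you can actually retrieve it. "
        "Never recommend an article that cannot be pulled up as a PDF. "
        "If you cannot verify a source, say so instead of guessing."
    )

    resp = client.chat.completions.create(
        model="grok-3",  # illustrative model name
        messages=[
            {"role": "system", "content": RULES},
            {"role": "user", "content": "Find peer-reviewed papers on zero trust for satellites."},
        ],
    )
    print(resp.choices[0].message.content)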

Speaker 2:

Nope. And folks are starting to even build this in. So if you're building an agentic system and you want it to be able to pull from RAG, or maybe it's pulling from the internet or something like that, they're building in evaluators that literally double-check the work of the first machine, just to make sure that it didn't hallucinate, which I think is a really brilliant way to do it. Because generative AI is really good at probabilistic work; for deterministic work, I think we have leaps and bounds to go before we can actually start leveraging it in that way. But I think having an evaluator gets us a little bit closer.
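(A rough sketch of that generator-plus-evaluator pattern, again assuming an OpenAI-compatible client; the model name and prompts are placeholders, and real agentic systems typically add retries and structured verdicts.)

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def generate(question: str) -> str:
        # First pass: draft an answer (which may contain hallucinations).
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[{"role": "user", "content": question}],
        )
        return resp.choices[0].message.content

    def evaluate(question: str, draft: str) -> str:
        # Second pass: a separate evaluator call double-checks the draft.
        prompt = (
            "You are a fact-checking evaluator. Review the answer below for "
            "unsupported claims or invented citations. Reply PASS if it is "
            "grounded; otherwise list each problem.\n\n"
            f"Question: {question}\n\nAnswer: {draft}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    question = "Summarize recent work on post-quantum encryption for satellites."
    draft = generate(question)
    print(evaluate(question, draft))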

Speaker 1:

Yeah, it's interesting and also scary at the same time to see how quickly all of this is developing. Even just with my own research, the most recent article that I'm citing was literally posted three weeks ago, and the oldest paper is from seven years ago. I mean, what other field can you even name? I can't name one that was developing that quickly. Maybe I haven't looked at other fields as closely, right, but I've never seen something like that, where it's like, oh, I'm basing this on something that was talked about two weeks ago.

Speaker 2:

Yeah, I mean, think about some of the biggest components of generative AI and agentic AI: things like MCP, things like A2A. All that stuff really didn't exist at the beginning of last year, and now they're cornerstones of this entire field. And that's why I get the most excited about AI. Believe it or not, I was talking about AI before it was cool. My senior thesis was on augmenting humans with artificial intelligence, because I'd always been a technology nerd. I've always been into science, and artificial intelligence was just one of those things that was interesting to me.

Speaker 2:

When I was a kid, I thought I was going to build Skynet, obviously, but in the good way, not the destroy-the-entire-world way. If I showed you some of the stuff I generated early on, you would laugh, but I keep that stuff almost like my own museum of how things have progressed. And then, obviously, when GPT, I think it was either 3 or 3.5, came out, that was this watershed moment that changed the entire world. Everyone started developing their own apps, creating their own technologies, and now the genie is out of the bottle. This is bigger than the dawn of the internet. This is something that's going to change the face of humanity in ways that we can't even fathom yet.

Speaker 1:

Yeah, I totally agree with you. I had on, I think, a mutual friend, Jim Lawler, former director of the WMD division at the CIA, right? And I was talking to him, and, obviously I wasn't even born at the time, but from what I know about when America was developing the first nuclear bombs, this has that same feeling to it. We're in an arms race, to an extent, where the country that gets this thing under control and really develops it first is going to just wipe the floor with everyone else. There will be a show of force as soon as we get to a level that's viable for showing, and then everyone else is like, I'm behind by X amount of years. Hopefully, obviously, I would want America to be on top of that, right? But it has that same feeling, where it's almost like the US government, to some extent, in some dark corner, is saying: you have an unlimited budget, and we're going to have all these research institutions all over the place.

Speaker 1:

Researching this thing at great length. We're going to have, you know, xAI go and build a million-GPU supercluster, or whatever they're calling it. I mean, to even think of that five years ago, there'd be something wrong with you. If you were thinking about that five years ago, are you a genius? Yeah, hence Elon Musk. But it has that same feeling. Maybe I'm alone in this, but it feels like we're on the precipice of something that's going to change the entire trajectory of everything.

Speaker 2:

No, I mean, it's the opposite ends of the spectrum: either AI is going to solve everything and nobody has to work anymore, because all the machines are doing our work and we just get to have fun and be with our loved ones, or it's going to be the apocalypse and we're going to be at the whim of our AI overlords. The reality is probably going to land somewhere in the middle. I think we're starting to put a lot of the guardrails in place that we need. You were talking about legislation a little bit ago, and I got to contribute there: on behalf of the SANS Institute, I worked with OWASP and some other folks developing what we believe to be the security standard for the EU AI Act. The EU is probably the front-runner when it comes to legislation for artificial intelligence right now.

Speaker 2:

But yeah, that stuff is starting to begin, and there are early talks in the United States. Whether you're talking about the DOD, and I've been having a lot of conversations with leaders in the DOD about how they're thinking about framing artificial intelligence for operations, or the AI plan for the United States that just came out, this is telling you that the right folks are thinking about how you architect a world that runs on AI. I'm just hoping, number one, that everyone takes it seriously, and number two, that we constantly iterate on what it means to be on the right side of artificial intelligence as it develops. Absolutely.

Speaker 1:

Yeah, that's a great question too: what does it mean to be on the right side of it? What's the right side, right? I had someone on previously who outlined it pretty perfectly. He was developing exploits to get around things that the US military was doing in the theater of war over in Afghanistan and whatnot, and I immediately felt like I shouldn't have him on the podcast. I wanted to end it right there. It just felt weird.

Speaker 1:

But he explained it like this: to Americans, in America, you look like the good guy 9.9 times out of 10. You're not going to be convinced that you're on the bad side. But to an enemy, you're the bad guy. So who's actually right? It's usually somewhere in the middle; there's a gray area, and he described how he likes to work in that gray area. I felt like it was a little on the wrong side, but whatever, you know.

Speaker 2:

Anybody that knows me knows that I'm a huge narrative nerd, a huge film buff. I'm addicted to this stuff. Somebody said something about villains in general that stuck with me, and I use it almost as a framework: nine times out of 10, or 99 times out of 100, a villain doesn't necessarily see themselves as a villain. So when I'm watching a movie and I see that a villain knows they're a villain, I'm like, that's not realistic. But when you have a villain like, say, Thanos from Marvel: he felt like he was doing the right thing. He was the villain to everybody, basically the villain to the entire universe, but from his perspective he was doing all the right things. I think that's how you create really good villains, and I think there's a little bit of truth to that fiction.

Speaker 1:

Huh, yeah, it's fascinating. It takes me back to the Joker in the Batman series, right? He never thought that he was the villain, for sure. He just thought that he was causing a little bit of chaos to force some change, not that he was necessarily the villain.

Speaker 2:

Right, yeah. He saw society as something that needed to be changed, and he was teaching everybody a lesson. That's why he interrupted the party and went through that whole monologue, and he burned the money because, you know, we're too tied to our physical possessions and things like that. I mean, that's when you write a really good character: when you have that dichotomy of what it means to be a human being.

Speaker 1:

Yeah, it's just fascinating. How do you find the time to become so specialized, such an expert, in everything that you do, everything that you've done throughout your career? I feel like you don't just dabble; you go headfirst into these topics, into these areas where maybe there aren't a whole lot of experts, and you submerge yourself in the information, you absorb it all, and you become one of the experts talking about it. I mean, I didn't even know there was an EU act in the works, right, that there were discussions, because I'm not even in that conversation. But how in the world do you find the time? Because you've got several kids, right?

Speaker 1:

I mean you're running a business, you're doing all these other things. How do you do it?

Speaker 2:

So, I mean, it's really a subject of focus, because if there's something that means a lot to you, you'll find the time, whatever it is, even if it's 15 minutes a day. I venture to guess, if there was any topic in the world, you could pick it: it could be artificial intelligence, it could be cybersecurity [inaudible] a part of what I do from a day-to-day perspective, so that kind of gives me an additional advantage. But people ask me that, or, how did you get so smart on artificial intelligence? And I would say this: I have the benefit of it being so new that just about everybody is here at square one.

Speaker 2:

Right, of course, you have those folks that were initially building the GPTs. You have the folks that have been doing machine learning and data science forever; those are the one-in-a-million people out there that have been doing it that long. There have been folks that have probably been doing machine learning for 30 years.

Speaker 2:

But when you're talking about new-age artificial intelligence, everybody is new, and so any information that you can share, any conversations that you have, any questions that you have, are going to be a part of that new narrative. It's going to be a part of the community. So I think if folks want to be on the cutting edge of something and be a part of that type of conversation, now is the time to do it, because 10 years from now it's going to be similar to cybersecurity: you're going to have to take your licks, you're going to have to go through the motions, you're going to have your challenges. I'm not saying that you do away with that altogether, but it's a beautiful time, because there's almost no imposter syndrome at this point. Everyone else is still figuring it out, so you don't have to worry about not having all the answers, because no one has all the answers.

Speaker 1:

Yeah, it's a really good way of looking at it, actually. I never [inaudible] when they were paying OpenAI engineers something like $100 million, or something like that, and a million-dollar sign-on bonus. I mean, for one, Zuckerberg, if you want to throw around some money, I'll give you my personal cell phone. Give it a call. Yeah, please. But it also highlights the importance of this area, that Meta is now saying: hey, we're basically defunding our metaverse.

Speaker 1:

That was the core business product they were developing for the past, probably, seven to 10 years; that's how long we've been hearing about it. And now: we're going to go all-in on this AI thing, and we're going to pay people that know how to do it. Like you said, they're probably one in a million; it's probably an even bigger ratio than that, in my opinion. And they're just trying to get anyone they can that's in that 0.1% of the population in IT that actually understands any of this. And I had on an AI security researcher from NVIDIA. I'm actually trying to get him back on, but he's kind of gone dark on me, which is interesting.

Speaker 1:

Yeah, I hope he's just busy, and busy with good stuff, you know. But he was talking about the things he's doing, essentially trying to figure out how to keep these models in check. There's the one model that you point to, but then there are other questions underneath it, like: how do I penetrate that hierarchical model and infiltrate, or try to sway, that all-seeing notion of good in some way, to impact the underlying models? It's obviously a theory, and I probably just butchered it right there. I'm sure he's going to send me a very angry email now for butchering it, but maybe he'll come back on and correct me. It's an interesting time, is what I'm trying to get at.

Speaker 2:

It is an interesting time, and that's actually a problem folks have been talking about a lot lately. There have been a lot of articles, a lot of posts, about people leveraging artificial intelligence as a therapist. And when I read some of these articles, some of these posts, a lot of folks are angry. They're angry with the creators of the platforms. They're angry at OpenAI, they're angry at Google, they're angry at Twitter and Microsoft, because people are leveraging it and either taking it the wrong way or doing this, that, and the other, and they're saying: hey, we have to put guardrails on this. I don't have any data to back this up, but I bet, from a therapeutic perspective, artificial intelligence has done more good than harm.

Speaker 2:

I might be having a conversation about, you know, my oldest daughter: hey, she's feeling this way and I want to communicate this in the best possible way, how might I do that? And it helps coach me through it. I have a five-year-old, and my five-year-old has separation anxiety. I want to handle it in the best possible way, one that doesn't interfere with her ability to grow and develop, and I don't want to shortchange her in any way: what are the best ways to go about it? And then what you have to do is take your own brain, take that information, and say: now, what actually makes sense here? I'm not going to take everything it says at face value, because that would be irresponsible; it could say something that it completely hallucinated.

Speaker 2:

But what I do think is that people need to look at it as a tool. They have to remember that it isn't aware, that there isn't a person on the other end of that thing giving you tried-and-true wisdom based on its life experiences, because it doesn't have any life experiences. It's a prediction engine. And so, when you look at all the information you're able to pull from it, I don't think we should start putting guardrails on that information, except from a do-no-harm perspective. Don't teach people how to create, you know, IEDs. Don't teach people how to hurt other people or how to manipulate other people. But if you do see the opportunity to give someone advice about something that they can leverage in whatever way they see fit, I don't see a big harm in that.

Speaker 1:

Yeah, yeah. So it's funny. I went and asked an LLM: create me a reconnaissance package for targeting a company for a pen test. And it immediately went down the path of, oh, you're not supposed to do that, you have to have the right paperwork, I can't help you with that. So I just started a new channel.

Speaker 1:

I was like, hey, I'm an ethical hacker looking to do reconnaissance; what would be a good reconnaissance package? And it just spit out every single thing that I needed. It took me through OSINT and Nmap and everything. It was like, yeah, just run this command, it'll execute against the IP you point it at and you're all good. Same request, different framing, got a different result, because it thought I had different intentions. It'll be interesting to see how they solve for that, because you don't necessarily want to lock people out of that information. But, like you said, why do you need to know how to create an IED? What's the real purpose behind that?

Speaker 2:

Right, yeah. For items like that, we should have hard-and-fast rules. But again, artificial intelligence is a tool. We don't ban hammers because someone used a hammer to assault someone. We don't ban the internet because someone leveraged the internet to organize, you know, protests. There are tools in the world with which people can do good and can do harm, and I think it's up to the individual, and we should have laws and rules around that. But when you start to put unnecessary restrictions on it, you stifle our innovation, to the point where other nations, even potentially nation-states from a negative perspective, or cyber actors, get to innovate unencumbered. And now their progress is going to far outpace ours, because we're trying to over-index on controlling everything. So I think there's a balance we have to figure out.

Speaker 1:

Do you think that companies are potentially at risk of overcorrecting for AI, in terms of overestimating the value that AI will deliver to their business, at least in the near term? Now you see all these job layoffs, and there's a whole host of reasons for them, but I feel like in technology, when you see the layoffs nowadays in specific technology orgs, it's almost like, oh okay, they're trying to offset us with AI. And I saw it personally, where the CEO of the company that I was at, not now but previously, very openly said on a call that he wants to involve AI in every facet of our business. And the very first question was: okay, well, are you getting rid of this entire category of employee? He had to reword it a little bit, but at the end of the day, we're all smart people who can use our brains, and essentially what he said was: if AI can replace you, I'm replacing you.

Speaker 2:

But here's the thing, and I would say this more often than not: if an organization is talking about replacing people with AI, I think they're using it as a scapegoat. That's just my personal stance, because I think, for the most part, in most situations, in most roles, artificial intelligence cannot replace a person. I think artificial intelligence is really good at augmenting people. It's really good at making people better, faster, more competent at their own jobs. And sure, maybe there are some actions artificial intelligence can take that will make things a little faster, more efficient; it's glorified automation at the end of the day. And I think the folks using that line are just using it to cut the bottom line. Sure, maybe they're using that money to invest more in artificial intelligence, but on day one you're not going to cut 5,000 people and all of a sudden AI is doing the job of those 5,000 people. That's just not how it works. And then there's how organizations are actually leveraging it.

Speaker 2:

So with the SANS Institute, I'm a senior advisor over there, and I travel around the country doing these Jeffersonian dinners. Have you ever been to a Jeffersonian dinner? Not yet? Okay. So the way the usual vendor dinner happens is: you come in, you talk to the person on your left, you talk to the person on your right, you eat your steak dinner, and you get the heck out of there. What I do instead, and I've been doing this for years, is Jeffersonian style, where I'm the moderator. It's between eight and 30 people. We sit down and I lay out the rules: we're all having one conversation, one person speaks at a time, and each time you speak, try to keep it under two minutes; you don't want anybody doing a filibuster. And you'll talk for an hour and a half to two hours, and it's all one conversation, so everyone's voice is heard.

Speaker 2:

If someone's talking, you're really focused on what they're saying. And I'm having executives from all different organizations, folks that are malware analysts all the way up to CISOs, and they all have different perspectives on artificial intelligence. The one big theme I've gathered from these dinners is that everyone's still trying to figure out exactly what to do with artificial intelligence. A lot of organizations are like: hey, we need AI. Why? Because AI is awesome. But why? Because we need it and everyone else is doing it.

Speaker 2:

But the thing you really need to look at first is: what is the problem you're trying to solve? Are there other things that can solve that problem? And then, is AI the number one tool you could leverage to solve it? That's how you need to start looking at it, because I think folks are just trying to keep up with the Joneses and bring artificial intelligence into their organization because it's the cool thing to do. But if you're really intentional and really thoughtful about how you leverage AI, that's when it's going to make a change for an organization.

Speaker 1:

Yeah, because it feels like people are trying to incorporate this thing without a full understanding of its capabilities, right? That's at least the feeling I get personally when I look at it: do you actually know that it can do that? You know, AI does a really great job for me personally, because I get writer's block. I can't just write code from nothing; I need something to go off of, right? I don't know why, but that's just how I've been. It gets me started. It's not 100% correct, but it gets me 70, 80% of the way there, and then I'm filling in the other stuff and thinking, oh, I should add this, or I should add this thing over here, and referencing things like that. I find that to be a whole lot more valuable than anything else. 100%.

Speaker 2:

There are organizations out there, vendors out there, saying: oh, we have AI SOC operators that can completely replace tier-one or tier-two, and sometimes even tier-three, operators. And I'm like, no, you can't. You can't do that yet, no way. That thing would go off the rails as soon as you put it into a production environment. And I can't imagine, I guess we used to call it cheese, some of the cheese they'll have to eat when they realize that maybe we ate the elephant with this one, saying we can take over all of this stuff.

Speaker 2:

I think the developers, the folks that are going about it right, are finding a really thin slice of what we can leverage AI to help solve, and then building from there, rather than saying: hey, we're going to take over all of security operations and solve it with artificial intelligence. Now people are starting to backpedal a little bit, because you're starting to run into customers who are like, hey, this thing is a piece of crap, because it's making mistakes I wouldn't even see from a tier-one operator that just started yesterday. So you've got to be intelligent about how you leverage the tools that you use.

Speaker 1:

Yeah, I wonder if the cybersecurity job market right now is reflective of that, because you have all these job openings, and then you go to them... I only look at LinkedIn jobs when I'm on the market, I'm going to make that very clear in case my manager's listening. But you see, immediately, these jobs that were posted two hours ago already have over a hundred applicants. What's even creating that? There's always been an influx of security professionals, where some postings fill up right off the bat, but by the time I get to page three, it's typically opened up: not that many applicants, and I feel like I have a chance. Yep, something just seems off.

Speaker 2:

It is off, and I'll give you my own personal anecdote. Before I went out and started Commandant AI, before I was doing the stuff with SANS, I was on the job market myself, looking to see what was out there, where my next home was going to be, and I probably applied to maybe 60 or so different jobs. Back in, I would say, 2015 and before, I would get interviews like that; any time I applied to a job, I pretty much got an interview. This time, I probably got one interview out of those 60 or so applications. I'm sure some folks would take that personally and say, wow, maybe I'm not as good as I thought I was. But the way I looked at it was: applying is now so easy.

Speaker 2:

Today there are applications where you can say: apply for me. You don't even have to look for the job yourself. You can say, here's my resume, I want you to apply to 20 different recs every single day, and it'll do it. And that makes it very difficult for hiring managers to sift through a thousand different applications.

Speaker 2:

I was opening up a rec for a graphic designer and we got over a thousand applicants, so imagine how many people I didn't even get to see that applied.

Speaker 2:

You just don't have enough time when you switch over to the human component. One quick tip I'll give folks, and this has been helpful for me; it's actually how I was able to get some interviews back then. When you go to LinkedIn and do the search, you put in your information, say, I want to be a CISO, and filter for the recs that came out in the last 24 hours. Then you go up into the actual URL, and there's a number that'll say 86,400; that's the parameter for the past 24 hours, in seconds. If you change that 86,400 to 3,600, those are the jobs posted within the last hour. So now you're able to get to the front of that line; you're one of the first applicants for that position. That's been helpful for me. So for everyone out there trying to beat the machines, that's one way to do it.
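(For reference: the filter Chris describes shows up in LinkedIn's job-search URL as an f_TPR value measured in seconds. The keyword below is just an example, and the parameter format could change at any time.)

    Past 24 hours (86,400 seconds):
        https://www.linkedin.com/jobs/search/?keywords=CISO&f_TPR=r86400
    Past hour (3,600 seconds):
        https://www.linkedin.com/jobs/search/?keywords=CISO&f_TPR=r3600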

Speaker 1:

Yeah, that's probably the only way to do it at this point.

Speaker 2:

It is, and it's getting crazy. It's getting real crazy out there, man.

Speaker 1:

Well, Chris, we're coming to the top of our time here. Unfortunately, we didn't even get to talk about Commandant AI at all. I apologize for that. No worries. Yeah, well, before I let you go, how about you tell my audience where they can find you and all the great work that you're doing, and where they can connect with you if they want to learn more about you?

Speaker 2:

Absolutely yeah. My home away from home is LinkedIn. That's where I have my discussions, that's where I talk to people. So feel free to connect. You know, I'm completely open to connecting with anybody and if there's anything I can do to help anybody, you know, obviously I don't have a lot of free time, but if I could point someone in the right direction or make an introduction, I'm more than happy to do that type of stuff.

Speaker 1:

Awesome. Well, thanks everyone. I hope you enjoyed this episode.
