This panel explores strategies for constructing AI threat-resistant infrastructures that surpass traditional multi-factor authentication (MFA) measures. With a focus on the executive perspective, discussions will highlight advanced defense mechanisms, integrating AI-driven security solutions, and the importance of a comprehensive, forward-thinking approach to protect corporate assets against increasingly sophisticated AI phishing threats.
So thank you for joining us here in this track. We're talking about trends and innovations, with a heavier lean on things like deepfakes. How does identity verification play into this? How do we prepare the right people to make the right decisions about this? There are several really interesting topics we'll be discussing today, and there's always an opportunity for you to be involved in the conversation. So please use your app: if you go into the session, you can enter your questions and I'll receive them up here. If there's an abundance of time, I'll scan the room, and if you have a question, you can raise your hand and I'll come to you with a microphone. But on the safer side, send those questions through the app and I'll receive them. The same goes for our virtual attendees: just because you're not here in the room with us doesn't mean you can't participate.
So we'll get started with our first session, which is a panel and really a great treat. We've got a lot of really interesting people in the room to give an executive alert: navigating AI-driven security threats for boards and C-suites. So we'll come over here to the panel and get to know everybody. Would you begin?
Sounds good. I'm Patrick Parker, CEO and co-founder of EmpowerID. Nice to be here.
Absolutely. My name's Joseph Carson. I'm the Chief Security Scientist and Advisory CISO at Delinea.
And I'm Andrew Hughes, VP of Global Standards at FaceTec and Chair of the Board of the Kantara Initiative.
I'm Alexander Koch, VP Sales EMEA at Yubico.
Thank you, all of you, for being here and bringing your different perspectives.
And as Andrew, I think, pointed out first, it's good to make sure that either we're on the same page or we understand where we're coming from when we look at this challenge. So AI-driven threats are here to stay, but how does the nature of AI security threats differ from the familiar ones, and how does that impact the organization's response?
I can jump in on that. I'd say the first thing is that you're not going to be able to stop employees from using AI. I mean, none of us are going to be writing out documents anymore. So if they're going to use AI, provide an approved and hopefully secure and governed channel in the organization that they can use, where you can at least know how they're using AI and implement governance policies. And then the other big thing, which we can get into later, is the dynamic nature of AI: you have new identity types. When you're talking about zero trust and least privilege, there are all these dynamic identity types that might exist for a fraction of a second or a few minutes and then disappear. They may be related to a human identity, they may not be, or it could be a hive of agents. So there are lots of new challenges that we just need to start talking about and make sure we have worked into our governance plans.
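[Editor's note] The ephemeral, least-privilege agent identities described here can be sketched in code. The following is a minimal, hypothetical illustration (not any panelist's product): it mints a short-lived credential scoped to exactly one task, so an agent that lingers past its window, or is hijacked, automatically loses access.

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class AgentCredential:
    """A short-lived, narrowly scoped credential for an ephemeral AI agent."""
    agent_id: str
    scopes: frozenset      # the only actions this agent may perform
    expires_at: float      # epoch seconds; credential is useless afterwards
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def mint_credential(agent_id: str, scopes: set, ttl_seconds: int = 300) -> AgentCredential:
    # Least privilege: grant only the scopes needed for this one task,
    # and expire them automatically so abandoned agents lose access.
    return AgentCredential(agent_id, frozenset(scopes), time.time() + ttl_seconds)

def is_authorized(cred: AgentCredential, action: str) -> bool:
    # Deny by default: must be unexpired AND explicitly scoped for the action.
    return time.time() < cred.expires_at and action in cred.scopes

cred = mint_credential("summarizer-agent-7", {"read:tickets"}, ttl_seconds=60)
assert is_authorized(cred, "read:tickets")        # in scope, not expired
assert not is_authorized(cred, "delete:tickets")  # never granted
```

The design choice worth noting is deny-by-default plus expiry: rather than revoking access when an agent disappears, access simply evaporates unless it is continuously re-justified.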
Yeah, absolutely. I just wanted to add to that as well. We hear a lot about how the attackers are using AI, but they're not using it that much. The biggest risk for most organizations is employees taking company data and putting it into public AI engines in order to create new content, and that means we end up with company data being exposed publicly. That's one of the major risks for organizations. So to your point, absolutely, they need guidelines and policies around what's acceptable use. Now, in the threat landscape, attackers are not using it the way much of the media assumes, with malware being updated in real time and generating these accelerated attacks. The way they're using AI today is primarily around identity compromise: deepfakes, business email compromise, making phishing campaigns much more realistic. So they're using it to complement the existing techniques they use today. Absolutely. Yeah.
Andrew? So, in the last year or two, I've been focused quite heavily on identity assurance, proofing, and onboarding, and now, with the company I'm at, adding biometric matching and liveness verification into that mix. So of course we see the rise of threats in faking real people. If you're doing remote identity proofing where everything looks real, your systems have to be able to detect it, because your humans can't anymore. People are starting to have to shift the way they do remote onboarding, because if they have human agents in the mix, the humans can't cope when everything looks real.
What are your thoughts on the Zscaler attack, where they used clips of the founder's voice and image and talked people into transferring money?
I can't remember which attack that was. There are so many of them. That's sad, isn't it?
Yeah, I can only agree with what was previously said. I think the individual is now getting more into focus, becoming the center of attacks. Because if we look back at phishing attacks maybe two or three years ago, there were grammar mistakes in the emails, there was wrong spelling, so you would immediately look at the email, say, oh, this is a spam email, and delete it. Now it's getting more difficult for individuals to really understand what's going on. Is it a real thing, is it not? And with AI it's getting much more detailed, and it looks more true than it did before. So identity proofing and making individuals safer is much more important now.
Also, looking at voice: a voice can be easily copied now. We've seen different examples in the media where a voice or a quote was fabricated from someone, especially from politicians, and it was AI, so it seems like it's true.
Just to add to that: I'm in Estonia, and for many years the good thing about it was that the Estonian language is so complex and so challenging that it's very difficult to translate. For many years the language protected the country from phishing campaigns and social engineering, because you would need to pay someone to do a proper translation. And it was interesting: I was listening last year to the country's CERT statement, the CERT being the incident response team for the country, and they said that the language is no longer protecting society today, because the advancements in generative AI mean that translations are so perfect they're even better than Estonians' own language capabilities.
So it's interesting how advanced it's become, but it's really focused on those areas like deepfakes, business email compromise, and financial fraud. Those are the specific cases where it is being used, because in most attacks today the basic things still work: password compromise, reusing passwords, credential compromise. The basics still work, so the majority of attacks are still using non-AI techniques. And even though it's not common, everybody loves to hear about a good new hack.
There was a recent one called Black Mamba, where it got you to download an innocuous executable that would pass any scanner, because it didn't contain any malicious code. All it did was call out to a well-known, trusted OpenAI API endpoint, which seems trustworthy, but that would dynamically generate code that could execute locally. And every time it ran, it would regenerate, so it's polymorphic: different code on every run. Literally, you couldn't really get a signature for it. It's not common yet, but it's basically a proof of concept for all the hackers out there to see what they could do. Yeah.
So, to pull in the different ideas and contributions from each of you: it's a question of reality, and whether humans in these settings are a good judge of reality. And with that, how do you make an organization resistant to that?
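[Editor's note] Because Black Mamba-style malware defeats signature matching by regenerating its payload on every run, defenders fall back on behavior: a process that fetches text from a code-generation API and then executes code in memory shortly afterwards is suspicious regardless of what the code says. The sketch below is a toy heuristic over an invented event schema, not a real EDR rule.

```python
# Signature-free heuristic: flag any process that fetches data from a
# code-generation API and then performs in-memory code execution shortly
# after. The event schema and domain list are invented for illustration.
WINDOW_SECONDS = 30
CODEGEN_DOMAINS = {"api.openai.com"}  # example destination; extend as needed

def flag_suspicious(events):
    """events: dicts with 'pid', 'ts', 'type', and optional 'dest'.
    Returns pids that fetched from a code-gen API then ran dynamic code."""
    last_fetch = {}   # pid -> timestamp of last code-gen API call
    flagged = set()
    for ev in sorted(events, key=lambda e: e["ts"]):
        if ev["type"] == "net_request" and ev.get("dest") in CODEGEN_DOMAINS:
            last_fetch[ev["pid"]] = ev["ts"]
        elif ev["type"] == "dynamic_exec":
            t = last_fetch.get(ev["pid"])
            if t is not None and ev["ts"] - t <= WINDOW_SECONDS:
                flagged.add(ev["pid"])
    return flagged

events = [
    {"pid": 101, "ts": 0, "type": "net_request", "dest": "api.openai.com"},
    {"pid": 101, "ts": 5, "type": "dynamic_exec"},   # fetch then exec: flag
    {"pid": 202, "ts": 2, "type": "net_request", "dest": "example.com"},
    {"pid": 202, "ts": 3, "type": "dynamic_exec"},   # benign fetch: ignore
]
print(flag_suspicious(events))  # → {101}
```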
How do you equip the people so that they're a good first line of defense, but also put in safeguards to keep their errant and fallible decisions and behavior from causing too much harm?
I think anyone who has parents on Facebook knows that's impossible. Mine ask me about things, or reshare them, all the time, and I'm like, how did you possibly believe that was true? So it's a tough one. I mean, I had an idea yesterday during Mike's keynote, but he shot it down, and it was something I hadn't thought about. The idea was that maybe everyone could have to digitally sign any content they create, so you could verify the identity of the content producer. But then he rightly pointed out: well, what about dissidents, or people in countries where free speech isn't allowed? That would really just shut all that down. So you don't think about those types of things.
I think the good news for everyone in the room is that the advancements in defensive AI capabilities are accelerating beyond the ones being used for attack. Attacks using AI from the cybercriminal side will come in the future, but the good news is that today it's the defensive side which is accelerating beyond that, which means organizations are looking at ways to use AI to do things quicker and more scalably, to make decisions much faster. So that's the good news: the defensive side is, at this point in time, looking to be ahead of the attacking side, and we're hoping that pace will continue. Absolutely. Yeah.
I think, from our point of view, customers are changing a lot. We come from a more technical point of view, saying, okay, we have phishing-resistant MFA, and it's important to have phishing-resistant MFA. But now, with these AI-based attack vectors, companies are moving to phishing-resistant users. They put the individual in the middle of protection, which is important, because in the end it is the individual who needs to be protected. And if we look at organizations over the last years, they would probably start with the most exposed individuals, like privileged access users. Now they say: we need to protect the entire organization, we need to take care of every individual, because AI especially uses the weakest point of entry. And this is a change. It's also important for organizations to understand that they cannot simply select some populations; they need to protect them all. This is important. Absolutely.
And I've got nothing for that one, but I think you've got something for the next. As Patrick pointed out, everybody loves a good hack, but what's not impossible, just sometimes a little harder, is pulling out good prevention. What organizations are out there? What models are they following? What solutions are they using to have good prevention?
I mean, I think OWASP recently released its Top 10 list for LLM applications, which is good. One of the items ties into least privilege: if you're going to have autonomous agents then, and I hate to use a buzzword, you want to reduce the blast radius, based upon how much damage they could do with the permissions they have at any one point in time. So in your zero trust architecture you're going to have lots of new points where these agents are spawning dynamically and using different types of tools, and you want to make sure that they're authorized to do that, how they're authorized, and that they have the least privileges possible to accomplish just the task they're authorized to do. Yeah.
So it's a lot more to audit and encompass and try to control. Absolutely.
It's all about what we can use today: the ability to quickly make predictions, reducing the amount of data we have down to things we can actually take real action on. That's what's critical. For example, if you're looking at lots of auditing and SIEM information, logs and data, what are the things that matter most? Using AI algorithms, we can work that out, and that's what organizations are going to be provided with: algorithms that will allow you to take your data sets and analyze them in order to fundamentally understand the important things for you to take action on. And that's where we get the real return on investment: reducing wasted time, allowing employees to focus on the things that matter, and getting to the point where you can apply zero trust principles, or the principle of least privilege, so that when you take a specific action, there are other signals indicating that the action, that access or authentication or authorization, is actually justified and approved.
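[Editor's note] The "reduce the data down to what's actionable" idea can be illustrated with a toy triage function. The signals and weights below are invented for illustration; no real SIEM product scores events this simply, but the shape, score then rank then surface the top few, is the same.

```python
# Toy triage: score authentication events by simple risk signals and
# surface only the top few for human action. Signals and weights are
# illustrative assumptions, not taken from any real SIEM product.
RISK_WEIGHTS = {
    "new_device":   3,  # first time this device is seen for the user
    "unusual_hour": 2,  # outside the user's normal working hours
    "new_country":  4,  # geolocation never seen for this user
    "mfa_failed":   5,  # MFA challenge was attempted and failed
}

def triage(events, top_n=2):
    """events: list of dicts with 'user' and a 'signals' list.
    Returns the top_n highest-risk events, most risky first."""
    scored = [
        (sum(RISK_WEIGHTS.get(s, 0) for s in ev["signals"]), ev)
        for ev in events
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Drop zero-risk events entirely: nobody should have to look at them.
    return [ev for score, ev in scored[:top_n] if score > 0]

events = [
    {"user": "alice", "signals": ["unusual_hour"]},
    {"user": "bob",   "signals": ["new_country", "mfa_failed"]},
    {"user": "carol", "signals": []},
]
for ev in triage(events):
    print(ev["user"])  # bob first (score 9), then alice (score 2)
```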
And that's where we're really getting to a point where that can be done in real time today, because of the power we have with AI.
Now, that's a good point. This zero trust brings us back again to the individual, because with zero trust you need to prove that it's a real individual and that it's the right individual. And this is important for getting access to resources.
And, yeah, I was talking with a government recently about their services card applications, and they use human agents to interview applicants to activate their app.
They can't have enough agents, the agents only work business hours, and they're starting to try to figure out how to deal with the onslaught and the scale of fakes, right? Because the agent is intermediated by remote devices and networks and all of that, and they can't tell real from fake anymore. If you can put biometric verification in front, if you can automate say 80% or 90% of the applications, then your human agents can deal with the exceptions, maybe phone applicants up and use other channels to verify the human. It lets them focus their expensive time on the cases that can't be automated.
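[Editor's note] The 80–90% automation pattern described here amounts to a simple routing rule: auto-approve only when both the biometric match and liveness checks clear their thresholds, and send everything else to a human. The thresholds and score names below are illustrative assumptions, not vendor guidance.

```python
# Sketch of routing remote identity-proofing applications. Thresholds
# are illustrative; real deployments tune them against fraud rates.
MATCH_THRESHOLD = 0.95     # face-to-document biometric match confidence
LIVENESS_THRESHOLD = 0.90  # confidence the subject is a live person

def route_application(match_score: float, liveness_score: float) -> str:
    if match_score >= MATCH_THRESHOLD and liveness_score >= LIVENESS_THRESHOLD:
        return "auto-approve"
    # Humans handle the exceptions: low scores, possible deepfakes,
    # damaged documents, anything the automated checks can't decide.
    return "manual-review"

print(route_application(0.98, 0.97))  # → auto-approve
print(route_application(0.98, 0.40))  # → manual-review (liveness failed)
```

The point of the design is that humans never disappear from the loop; they are just reserved for the cases where automation is genuinely uncertain.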
So that's part of the discussion: using new, better tools to offload the burden from the humans so that they can focus on what people are actually doing.
Which actually reminds me of the project with eu-LISA, where they did the passports for the Schengen Area. If you ever go through passport control in the EU, when they scan the passports it allows them to automatically detect whether you have authorization to enter or not. The only thing the border guards now need to do is look for the anomalies. They look for the flags that say something is suspicious, and it allows them to focus on who they should have additional questions for. When you think about border guards, immigration, and customs controls, we can apply those same principles to identity, authentication, access, and authorization.
Did you want to jump in?
Just one thing.
So, I mean, the vision of AI is that you pump all your critical business data, including logs, including authorization and business information, into some vectorized store that you can put a RAG on top of for retrieval augmentation. Then you can talk to the AI, the AI can access all your data, and it gives you insight. Well, if you have a hacker who's trying to map out your network, it's not the old BloodHound approach where they're trying to track through your Active Directory; now they have insight they can query to know who's doing what, to figure out who has the access they need. So it's a goldmine, a treasure trove for hackers: they can instantly map your network with you having done all the legwork. You have to really think about how we're going to control authorization of who can access which data in this big, amorphous, vectorized data landscape.
And there are cases of that, exactly. To the point where, if an organization becomes compromised and the attackers are able to extract that data, it's interesting: I've seen cases where the attackers have analyzed the financial records of the organization to know how much that organization can afford to pay in a ransom.
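[Editor's note] One mitigation for the vectorized-store authorization problem raised a moment ago is to enforce document-level entitlements at retrieval time, before anything reaches the model, so a RAG query can only surface documents the requesting user is already allowed to read. The store schema below is invented for illustration; real systems would attach the same ACL metadata to vector-index entries.

```python
# Sketch: filter RAG retrieval by the requesting user's entitlements.
# Corpus schema and matching are invented for illustration; a real
# system would carry ACLs as metadata on the vector index.
CORPUS = [
    {"id": "hr-001",  "acl": {"hr"},                  "text": "salary bands"},
    {"id": "fin-007", "acl": {"finance"},             "text": "quarterly figures"},
    {"id": "pub-042", "acl": {"hr", "finance", "all"}, "text": "office map"},
]

def retrieve(query_terms, user_groups, corpus=CORPUS):
    """Return ids of matching docs, filtered by the user's groups.
    The ACL check happens BEFORE matching, so unauthorized documents
    never even enter the candidate set the LLM could see."""
    visible = [d for d in corpus if d["acl"] & user_groups]
    return [d["id"] for d in visible
            if any(t in d["text"] for t in query_terms)]

print(retrieve({"salary"}, {"hr"}))       # → ['hr-001']
print(retrieve({"salary"}, {"finance"}))  # → []  (doc exists but is filtered)
```

The key property is that the filter is applied pre-retrieval: a compromised prompt cannot talk the model into revealing a document that was never fetched.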
I've also seen it where they've analyzed the data and determined that the organization had actually committed fraud, and so the attackers filled in the 8-K form themselves and filed it directly with the SEC. So yes, we're creating the data to make it available for ourselves, but that data also becomes an attractive target for the attackers. Absolutely.
Somehow we're approaching the end of our time; it's been such an interesting conversation that we dove right in. So let's wrap it up with something people can hold onto and remember. For the C-suite, for the C-levels, what recommendations do you have? What are their major takeaways?
I'd say: form a team now. Start mapping your vulnerabilities and your technologies, approving what you're going to use and what users are not allowed to use.
And then start thinking through your red team and blue team exercises to try to get prepared for it. Yeah, absolutely.
I think that's a valid, amazing point. In addition to that, while you get your team together, also define your guardrails, your acceptable guidelines for using it within your organization. I think it's really important to set that right now, because it's really difficult to make changes later, once your organization's employees are already used to a particular culture. I think it's important to set the boundaries today.
Okay, I'm a standards guy, so my answer is: fully fund your standards people to go write this, please. No, really: the techniques are changing, the standards are out of date because they're not keeping pace, and there are many of us working on fixing them so that we can have the rising-tide-lifts-all-boats scenario, especially on the proofing side. Yeah, yeah.
So my final note on this one: I think it's very important for the C-level to make it easy to adopt and easy to use for your employees. That's the most important thing. And don't look only at office workers who are used to working with it; also look at production areas, at blue-collar workers. They need to have technology in place that helps them protect their environment, their access, their identity, and that is easy to use and will be accepted. Absolutely.
Thank you so much to all of you. Thank you. Thank you.