Let's just begin with a short round of introductions, where I would like our panelists, and by the way, they asked me to be one as well, to introduce themselves and make a short, punchy statement just to initiate the discussion. What's your view on this whole battle of wits over AI in cybersecurity?
Ladies first.
You want some time to breathe after you? Thanks. Hi everyone. I'm Beverly McCann, I'm a director of analysis in EMEA for Darktrace. What does that mean? I am leading a team of cyber analysts here in EMEA that protect our customers, work in their SOC, and support them when they need it. That's my major role. I look after our roughly 9,000 customers at the moment. Who are we at Darktrace, in case you haven't come across us yet? We are an AI-based cybersecurity company. I know AI is in every vendor's name at the moment; however, we started with this already 10 years ago, and our fundamental approach was to use AI to defend and secure businesses. So maybe as a punchy statement: I know there have been plenty of talks already this morning, and I really enjoyed listening to them. There were all these graphs about how gen AI can be used on the attacker side as well as on the defender side. And it's not just a concept anymore. We really do see those attacks emerging. We can see the use of generative AI, and that's something we've been predicting for many, many years now. And with the rise of gen AI, it really is starting to show quite heavily. I'm more than happy to go into that in more detail.
Cool. I'm also looking forward to that. Sorry, you wanted to say something?
Just a question, please, to remind our viewers, maybe those who will be watching this in a recording: what's your relationship with AI and cybersecurity?
Okay, cool. Yeah, so my background is originally embedded systems engineering and security, and also machine learning. I went into this as a late joiner, I could say, but I have gained a lot of practice now, and I see a lot of potential, of course, for both sides, sure. But my provocative statement is: I'm always puzzled, as I tried to explain in the talk, that it still creates a lot of effort. There's this "no free lunch" theorem, right? Which says, in the end, if you don't have any assumptions, then every machine learning model or approach more or less has the same performance. So it always causes a lot of effort. First you need to get clear about what kind of problem you have.
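The "no free lunch" point can be made concrete with a toy experiment (my own illustration, not from the talk): here are two simple learners and two invented toy problems, and each learner wins on only one of them, because each encodes a different assumption about the problem's structure.

```python
# Toy "no free lunch" illustration: neither simple learner wins on
# every problem; which one works depends on the structure you assume.

def threshold_clf(train, x):
    # Predict 1 if x lies above the mean of the training inputs
    # (assumes the labels follow a monotone cut-off).
    cut = sum(p for p, _ in train) / len(train)
    return 1 if x > cut else 0

def nn_clf(train, x):
    # 1-nearest-neighbour: copy the label of the closest training point
    # (assumes nearby inputs share a label).
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(clf, train, test):
    return sum(clf(train, x) == y for x, y in test) / len(test)

# Problem A: labels follow a simple threshold (monotone structure).
mono = [(x, 1 if x >= 5 else 0) for x in range(10)]
# Problem B: labels come in alternating blocks of two (local structure).
blocky = [(x, 1 if x % 4 < 2 else 0) for x in range(10)]

for name, data in (("monotone", mono), ("blocky", blocky)):
    train, test = data[::2], data[1::2]
    print(name,
          "threshold:", accuracy(threshold_clf, train, test),
          "1-NN:", accuracy(nn_clf, train, test))
```

On the monotone problem the threshold rule is perfect and 1-NN is not; on the blocky problem it is the other way around. Without knowing which structure your problem has, you cannot pick the right model, which is exactly the effort described above.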
And I'm always a bit skeptical when, and I don't mean your company specifically, but when some companies claim, okay, we're doing a lot of AI, and more or less give the impression that it works out of the box. I'm skeptical about that. I want to understand the detail. The potential is there, but there's still a lot of work. It's not just switching ChatGPT on, or a security version of it, and then it solves your problem.
Okay, great. Well, as for me, my name is Alexei Gonsky. I'm a lead analyst at KuppingerCole, the host of this conference. And I am by no means an AI expert in the traditional sense, like you both are. But I do remember, over 30 years ago, my first exposure to this whole problem was reading an article about the Chinese Room argument. And maybe since then I've been, if not an outright AI skeptic, then at least the one who goes around talking to vendors, poking them with a stick and saying: okay, you say you use AI, show me, prove it. So my statement probably would be: there is no such thing as "the AI". It's not magic. In the best case, it's just a lot of math, and in the worst case, it's some stuff outsourced to cheap labor in India. And those use cases really have happened before.
Okay. So let's maybe start by going back to the elephant in the room. Everyone is talking about generative AI, but surely AI did not start with ChatGPT, right? It's been around for years, and what I've seen before does not actually touch that area. So what's AI for you? Is it new? Is it old? How has it been evolving? And was there really a threshold event which suddenly turned AI from an obscure, sophisticated technology into something everyone suddenly wants to have in use?
Yeah, I'm very happy to go over that first. I mean, yes, AI has been around for over 40 years. Large language models have been around for almost 10 years now, before ChatGPT and such. So it's nothing new. We've been using these techniques, these tools, and AI in so many different applications and ways for a very long time. And maybe to build on top of your point: it is something that you as a business, as an industry, as a team really need to challenge when a vendor comes up to you and says, oh, we use AI. Well, what is it? What type of AI? In which context? Which data are you using? So it's really crucial to make sure that you have this understanding of: okay, when we talk AI, what is it?
Are you using machine learning? Is it supervised? Is it unsupervised? Are you working on my data, or are you working on a group of data? What data are you using to feed that AI with, and what are the data sources? And also, how do you store my data? That's, I think, a big discussion, especially in Germany, where data protection is obviously really crucial: what happens to my data, where do you store it, where do you use it? So yes, AI isn't something new. We have been using it for 10 years. And again, we as a vendor don't just use one AI. We're in our sixth generation of using different types of AI techniques. We use a combination of unsupervised machine learning, we use graph theory, we use large language models, we use natural language processing, we use Bayesian statistics. So yeah, it's really important to understand what it is that is being used, how it's being used, and with what data.
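To make the unsupervised idea mentioned here tangible, a minimal sketch: learn a statistical baseline of "normal" behaviour from a device's own data and flag deviations from it. The data, the metric, and the 3-sigma threshold are all invented for illustration; this is not any vendor's actual algorithm.

```python
# Minimal sketch of unsupervised anomaly detection: model a baseline
# of normal behaviour, then score new observations by their deviation.
from statistics import mean, stdev

# Bytes transferred per hour by one device (hypothetical baseline data).
baseline = [120, 135, 110, 128, 140, 125, 130, 118, 122, 133]

mu, sigma = mean(baseline), stdev(baseline)

def anomaly_score(observation):
    # Distance from the learned baseline, in standard deviations (z-score).
    return abs(observation - mu) / sigma

# A value inside the baseline scores low; a large exfiltration-like
# spike scores far above the (illustrative) 3-sigma alert threshold.
for obs in (127, 900):
    print(obs, round(anomaly_score(obs), 1), "alert:", anomaly_score(obs) > 3)
```

The point of the sketch is the workflow, not the math: no labelled attack data is needed, because "unusual for this device" is defined entirely by the device's own history.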
Yeah, absolutely. AI as a term, I think, goes back to the fifties; Turing and people like him framed it first. And of course, back in those days it was all rule-based, like ELIZA, you've probably heard of it. You can program it, and then it chats with you, but the performance was not so good. And then neural networks were invented, in the eighties I think, but all of that couldn't fly, because the computational power was not there, the data was not available, and the bandwidth and the storage were not cheap enough, and so on. So it's an old story. And a lot of this is based on statistics, absolutely; data science and statistics are very close to each other.
And then, yeah, there were some graphs in some previous talks, right, where the growth was more or less linear, and suddenly it went up exponentially with the availability of data. And of course, this in turn has amplified the development of new techniques: deep learning, particularly natural language processing, as you said, and many other things. For instance, Jürgen Schmidhuber, you've probably heard of him, he invented the LSTM networks back in the nineties, and he sometimes says at conferences taking place these days: why is this new? We did this already in the nineties. So much of the background theory was invented back then already. And yeah, it's still open where it goes next.
I don't know. So it's all very exciting, sometimes a bit overhyped maybe, but we are still on the verge of understanding the potential. And some people, like Yann LeCun, he's a Turing Award winner, say we are a bit in a dead end, because what is next? All these models need a lot of data to train. This is also a sustainability problem, not only the power consumption, but it's also not how humans learn. They say we can make a lot of improvement there, but somehow we are also doing something incorrectly, and therefore maybe we need a fundamental change. And the general observation is that deep learning performs so well; however, the theory lags behind. We are not fully clear why it works, to a certain extent at least. I don't know how you see it, but yeah.
So yeah, it's really interesting. I mean, everyone probably remembers that we used to have this thing called big data, which was really expensive, required special hardware to run, came with a lot of limitations, and only the biggest companies could afford it. And now we just don't talk about it anymore, because it's commoditized to such an extent that it just became data, right? And the same, I guess, goes for AI now. Anyone can run AI in the cloud; maybe in a few years you could run ChatGPT on your mobile phone. So yeah, it'll eventually become cheaper. And I wasn't joking, by the way, about this story of outsourcing quote-unquote AI to India, because that was a real thing. A few years ago, there was a camera with object recognition capabilities which actually did not use real AI; it was cheaper and easier, I guess, back then, to use people to do that job. Not anymore. But now, of course, we have lots of other challenges: data protection and compliance, which you already mentioned. But yeah, I guess we have to go back to this question, the battle of wits. Do you really see that the attackers are overtaking us, the defenders, in that regard? Do you see practical usage of AI of any kind in this?
Yes, we've definitely started seeing some indications of the usage of these tools on the attacker side, especially with the rise of gen AI. And if you've been to Sergei's session earlier, you would have seen a really nice overview of where an attacker could actually apply AI in the way of attacking, like automating the running of attacks and code. What we've definitely seen already is a huge increase in the sophistication of phishing emails. And obviously, yes, social engineering techniques have been around before, to really tailor a phishing email towards the target they want to compromise. But now that can be elevated in a huge way. Previously, I had to do it one by one: as an attacker, do my research on the social platforms to find out information about a target, and then craft a very specific email that relates to a social media post that has been made.
Now I can just task the AI to go away, grab that information for me, tailor that email, and send it off. Maybe create a domain that sounds very similar to the company's domain, buy that domain for me, put that into a phishing email, and send it off. And you can do that at scale. So to some extent it's that sophistication of really targeted attacks, but it's also about scaling that, and thereby getting to your goals really quickly. And we've also seen the speed of attacks increasing, and that is due to the usage of automation and of tools that help you automate these steps.
Yeah. And that's where we can see this already emerging, and obviously the next things are to come, so we'll just have to get ready and be prepared for that. A big approach previously was awareness training: trying to train your workers to look for grammar mistakes and spelling mistakes, and not to click any links. But the way emails look these days, they are so convincing that you wouldn't be able to distinguish anymore whether an email has been generated by a human or by an AI. You need to have tools in place that will detect these changes, so that you don't just put all the pressure on your employees to say: yes, of course, we're never going to click on links, and we're never going to open an attachment that contains malicious code. So yeah, it's really about being prepared for these changes to come, and having tools in place that will detect them.
Okay. And by the way, I wanted to follow up on that specifically with a question for you, Sebastian. So, to fight those AI threats, you have to understand them, you have to research them first. Do you think you already have enough tools, enough expertise to do this as an independent researcher, as a scientist? Or do you think you have to collaborate with other stakeholders, like the vendors, or maybe even the authorities and the governments?
Of course, research is always interdisciplinary, and that makes it better. That's also why I always look for collaborating partners, like, for instance, a company where you have the real data. It's not like researching in your dark room and trying to do something on paper anymore; of course you collaborate a lot. And yeah, it would be awesome to also do it with companies. I mean, you do the same, right? As we discussed when we met today, you also started as a research company, and you also do research. So in that sense it's not so different, besides that your researchers just do research, while I also do teaching. But yeah, absolutely, it makes absolute sense. And I see it similarly to what Beverly said, on the other side of it: it speeds things up a lot, right?
So you get this AI productivity boost, which is enormous. And I see it also as an interaction with the hacker: you can use this as a tool, and it's the next step of automation. When you develop, maybe everybody uses GitHub Copilot. It's really amazing: as you write some code, it suggests how it continues. That's awesome, right? And you can also use this to craft some bot, for instance. Or during the capture-the-flag session, one team used ChatGPT to solve the crypto puzzle, because it's pretty good at detecting patterns. These are cool, tangible examples of how easy it will be, of the productivity boost there. But of course, yeah, I'm always looking for partners to collaborate with in research.
Yeah. Okay, awesome. So I guess one, maybe even final, question for today. You, the developers and researchers at the forefront of this AI cybersecurity research, have a lot of things to do, and you do deliver a lot, probably even a little bit too much sometimes. We have this feeling that there is so much AI being thrown around. How do we understand which kind of AI is actually good enough and which isn't? How do we measure it? I mean we as the laymen, or, in this industry, the customers: how do we tell apart the good AI tools from the bad ones? Is there any way to independently measure the efficiency of cybersecurity tools, or is that something you have to research as well?
I mean, you can always measure the performance of a machine learning model for a classification or regression problem. For instance, for classification, you can measure false positives and false negatives, so you have key performance indicators like precision and recall, and you have this curve, the so-called receiver operating characteristic, where, if the area under the curve is big, the performance is better. And okay, this is maybe too specific for a completely uneducated person, but still, this is not complex machine learning stuff. You could compare those. But you still need some people who understand something, right? That would be my guess: to distinguish how well it really works, you need to measure false positives and false negatives. You can also measure it directly, in this approach that I presented, right? You get feedback from the customer support, and if there are a lot of complaints, maybe the tool was not so good. Okay, ideally you want to distinguish it earlier, before that stage. But yeah, you need some people who know how to measure the strength of such an approach, I suppose, which can help you. Probably along the lines of what you said.
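The metrics named here (precision, recall, area under the ROC curve) can be computed from scratch in a few lines. The labels and detector scores below are invented for illustration, not real detection data.

```python
# Evaluation metrics for a binary detector: precision, recall, ROC AUC.

def precision_recall(labels, predictions):
    # Count confusion-matrix cells for the positive ("malicious") class.
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    return tp / (tp + fp), tp / (tp + fn)

def roc_auc(labels, scores):
    # Probability that a random positive outscores a random negative
    # (ties count half) -- equivalent to the area under the ROC curve.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0, 1]                  # 1 = truly malicious
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7]  # detector output
preds = [1 if s >= 0.5 else 0 for s in scores]     # 0.5 alert threshold

prec, rec = precision_recall(labels, preds)
print("precision", prec, "recall", rec, "auc", roc_auc(labels, scores))
```

Precision and recall depend on the chosen alert threshold, while the AUC summarizes ranking quality across all thresholds, which is why vendors often need to report both kinds of numbers.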
Yes, I think it is generally really hard for a layman, and I don't understand the full depth of AI myself either. But I think a big part of it, and that's the case with every new technology that comes out, is that you need a level of trust. You need to trust the technology to deliver what it says it delivers. And I think that's why there are a lot of conversations going on in the world of politics as well, to see whether we need to put some guardrails around it, some regulations, just to be able to say: okay, within the realms of this concept, AI can and should be used in this way, and this is where the limit is. And also to have the vendors showcase the technologies they have.
Yes, these are the technologies they're using. I don't think there is any good AI or bad AI as such. I think it is really about applying the right AI to the right problem, because I don't think ChatGPT can be put on all the problems out there. Maybe a quick final example: look at these chatbots coming out, these prompt-based AI solutions that promise they can help your security teams investigate certain anomalies. If you as a security analyst want to look at a problem, you always look at it from a very malicious, everything-is-bad angle. So when you ask the ChatGPTs to investigate an unusual connection, you ask whether it's malicious, and obviously it will start from a biased angle of trying to find out whether something is malicious.
Whereas if you're an IT engineer, your goal is obviously to keep everything up and running, as good as possible. Your question to that ChatGPT might be: is this legitimate activity? Is it a legitimate connection? So, based on the questions you're asking, you create a bias in your answer. It's really important to keep that in mind: whatever you put in always needs to be looked at with a different, human set of eyes again. So yeah, it's all about trying to apply the right AI to the right problem, I think.
Okay, awesome. So I guess let's finish this panel with one final takeaway per panelist. And I hope you will excuse me this shameless plug: always ignore the labels and look for specific capabilities. And if you cannot understand those capabilities yourself, always look for a neutral second opinion from a company like ours. Thank you very much.
Nice one.
And well, Sebastian, what would be your takeaway from this?
Well, okay, yeah: there's a lot of potential in AI, we are just at the start, but there's also a lot of research going on, so this can evolve in every direction, and we are living in exciting days. But do not trust every AI label. Really try to understand what's really in there.
Right.
Yeah. And for me, I guess, it's just being aware that AI attacks are on the rise. AI attacks are coming; actually, they're already there. It is not just a future concept anymore. And to be able to fight AI, you cannot throw more and more humans at it. You need to have AI in your security tech stack as well, to be able to defend against those attacks. That will be it from me.
Okay. Awesome. Well, thank you very much Beverly and Sebastian.