Jürgen Schulze, KuppingerCole Analysts, cyberevolution 2023. Good morning, lady and gentlemen, and everyone in between. So first of all, thank you very much for attending, although this was not necessarily perfectly announced. I have yet to get all the attendees from the other room, because my presentation is announced in the other room, but the gentlemen would not play along with me, so we keep who is in the room. Let me quickly give you some advice on how we run the session. The first thing is very important.
I've been advised not to run around like a headless chicken today, because this is streamed online, and that would create a terrible view for all online attendees. So welcome to all online attendees as well. The microphone is one thing which is also very important for the online attendees.
If you, and this is something that I'm planning for, contribute to the session and ask questions or make comments, I need to come to you with a microphone. Otherwise, our online attendees wouldn't hear you.
Actually, I've prepared a deck which was supposed to be for a slightly bigger audience, but that's not a problem. I can absolutely walk away from the deck as soon as a discussion kicks in, which gives the whole session the depth that you expect from such a session. There are two ways we can run the session today. One is superficial, touching a lot of ground in the next 90 minutes; the other is just walking away from the agenda, sitting together, and going in depth. So let's keep this an interactive session.
I would invite you to voice any questions, any concerns, any objections, and I love objections, as soon as they pop up in your mind. Just raise your hand, and I'm going to come with the microphone and make sure that you're heard. Very quickly, some words about myself, just to put what I'm telling you in perspective. I'm actually a social worker. That was about 40 years ago. Then I ended up in IT, which was about 39 years ago. Ever since, I've worked in distribution and information security, with a focus on identity and access management.
That's actually one of the reasons why I got to know this community decades ago, through EIC. I also spent a lot of time with natural language understanding, which is a discipline of AI, and which I will actually focus on in the course of today. I've also been working across the street, in this big building here, at PwC for four years in the cyber and privacy area, and a wonderful ex-colleague joins us today. Thank you very much.
Obviously, I'm also writing, and this was the key reason why I've been invited to share some of my thoughts with you. I wrote a book lately where I drilled a bit into all those aspects of natural language understanding, or generative AI, that are typically not publicly discussed. We'll come to that later, because there's a bit of politics in that as well. If we learned one thing from our parents, it's that the sun will come up the next day, that's for sure. Fear is actually something which guides our principles, a lot of our principles, particularly in IT.
Some of our colleagues sell information security based on fear. I worked at Symantec about 20 years ago, and we were selling on fear. That actually worked very nicely. It doesn't anymore, so fear is over. We're in a new time where we have a more positive view on situations.
However, the human factor still plays a role. What I'm trying to get to today is a point where we accept and understand the role of human beings in an organization in the context of AI impacting cybersecurity. That's why I say the human factor turns into a human vector. It's one of the many, many vectors that we have to deal with, very important, and we need to drill into it with a lot of understanding. What I also typically drill deep into is everything that has to do with how risks behave when they act together or against each other.
Risk aggregation, which is where the risk at the end of the day is bigger than the sum of all fears, the sum of all risks. The aggregation is very important, and also the interdependence, because risks act interdependently, so they can trigger each other. They can also trigger each other sequentially, like a domino effect. It starts easy, and then it turns out to be difficult. A toy calculation of that coupling is sketched below.
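As a minimal, hedged illustration of aggregation and interdependence, here is a sketch in Python; all probabilities and loss figures are invented for the example, not taken from the talk.

```python
# Toy illustration of risk aggregation and interdependence: if incident B
# becomes more likely once A has happened (the domino effect), the combined
# expected loss exceeds the naive sum of the two risks assessed in isolation.

P_A, LOSS_A = 0.10, 100_000    # risk A assessed alone (invented numbers)
P_B, LOSS_B = 0.05, 200_000    # risk B assessed alone (invented numbers)
P_B_GIVEN_A = 0.60             # B is far more likely after A has hit

naive_sum = P_A * LOSS_A + P_B * LOSS_B

# With the dependency: B occurs at its base rate when A does not happen,
# but at the elevated rate when A occurs first.
coupled = (P_A * LOSS_A
           + (1 - P_A) * P_B * LOSS_B
           + P_A * P_B_GIVEN_A * LOSS_B)

print(f"naive sum of risks : {naive_sum:,.0f} EUR")   # 20,000 EUR
print(f"with domino effect : {coupled:,.0f} EUR")     # 31,000 EUR
```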
What I'm also trying to do today is create awareness for the non-obvious stuff, for the non-obvious factors. I'm known, and Oliver knows that I'm known, to think around corners, so I would cordially invite you to stop me if I take one corner too many. I'm thinking in terms of what's next after next. I'm paranoid by design; that's what people like us do.
Also, provoking some unusual thoughts. We are dealing with new situations currently. Our ex-chancellor called that Neuland, so we are trying to cover new territory, which applies particularly to the attendees in this room. I talked to some of the colleagues in the beginning: we are working in cybersecurity, and we work very focused. I'll come to that at a later point.
We work very focused on threats and threat actors and threat vectors and all these kinds of things, and sometimes we forget all the contributing factors that we don't have on our radar screen, that are sitting elsewhere in the organization. Or we look at them as a negative. We had this discussion in the beginning: we are the nice guys, so we need to look at it from a positive side. We are the enabling people. The first question I would like to ask you today is: how many of you are using ChatGPT or other large language models at work? That's quite good, actually.
Last week I was in Bonn for a keynote, and I asked the same question to an audience of about 200, and there were only about three or four hands popping up. Who got trained on using ChatGPT or LLMs?
Okay, so it's a bit less. Have you been trained successfully in terms of the outcome of your work with this kind of technology? Can I ask you a nasty question? How do you measure success? One sec. When the results are better than what I expected them to be and better than what I would have created. So if I can guide the LLM to producing something that actually makes sense to me and actually enhances what I would have provided, that's success. Thank you very much. Anything else?
Okay, so he's been trained by the gentleman who just gave me the answer. Who is officially allowed to use ChatGPT in their organisation? And who is prohibited? No wonder why. We'll come to that later on. This is a bit of home turf for me. Paranoid by design, I already said that: we in cyber tend to look at situations from a risk standpoint first, and only as a second step from an opportunity and benefit standpoint, at least until we figure out what we can do with it and how. What we also need to understand today: I won't go deep into the benefits.
Because we'll have four intermissions, actually, four quick discussion rounds, where we drill into the benefits of this technology and into the risks. But I wouldn't want to go into the technology advantages like coding and so on, because that's beyond my grasp as a social worker. So, the genie is out of the bottle. There's one thing which is a bit of a Germanic point of view here. What we typically try to do in Germany if something new pops up is to stop it. We try to prevent it from happening. We find the bad stuff.
We try to regulate it. We try to find the bad in it. And we try to spend all available energy to stop it. That's what Germany is a bit famous for. So we waste a lot of energy here, and there's no energy to be wasted. The genie is out of the bottle. It's out; we won't get it back in anymore. And it got fat in the meantime. It grew. So we won't get it back in anyway. It's too fat.
In 1997... does anyone know Nicholas Negroponte? Who knows Nicholas Negroponte? No? Nicholas Negroponte was the founder of the MIT Media Lab at the Massachusetts Institute of Technology, and the author of the book Being Digital. A pretty famous guy, actually. It's kind of a digitization Bible, in black for good reason. And I met him in San Diego, I think it was San Diego, in the U.S., and he shared with me what he was doing with his students in Boston. And I was like, holy moly. A lot of this stuff actually came to life now, 30 years later.
And at the end of the discussion, he said: Juergen, everything that you can imagine happens. Plus. That plus actually worried me a bit. So the big question I had was: does everything that I don't know, that I can't imagine, happen as well? And what is that? I'll leave you with some questions today, so I'm sorry about that. If you drill a bit into the details of what we are looking at at the very moment, it's what I call spear phishing taken to the next level.
Actually, "social engineering on steroids" made it to the press last week. I had it in a presentation last week, and that was something people jumped on right away, because one of the main purposes this technology is being used for is phishing, to start with. Phishing mails, which typically used to be recognizable at first glance due to bad English or bad German, excuse me, bad language, bad grammar, have become much better.
The German is excellent. The grammar is excellent. And even those of us who are trained on these kinds of things are sometimes sitting puzzled in front of these phishing mails. It's not the Nigerian prince trying to get money anymore. It's someone who claims he's from DHL or UPS, and I have to pay some money to get something released from customs. But I'm going a bit in the wrong direction here. The balance of power I already skipped, because the balance-of-power game is fairly well known to you.
It's something we don't need to talk about, because we know that the bad guys are using AI, so we have to use it as well. In the absence of talent in our industry, and we're about three and a half million experts short in cyber globally, probably more than three and a half million, we need to look into automation. But the bad guys are also very efficient. They try to achieve a lot with very few resources, so they're using the same tools. There is no way for us to escape that vicious circle. Emotions are also very important in cyber.
People who are pissed off do bad things. It's a very simple rule of life. And if there are many people who are upset, they do even worse things, because they team up to do bad things. That plays out particularly in the political arena, where state actors use the bad emotions they created to attack. So emotions play a very important role. I'll come back to that a bit later, in terms of how we can make people happy with what we are doing. Identity mimicking, spoofing: nothing I need to explain to you.
Deepfakes: I mean, we had this in the last two or three days, where a news anchorman and anchorwoman got deepfaked with a lot of political, nasty, hateful rambling. And it was so well done that it actually made the government take action. And that's just the beginning. ChatGPT and the likes obviously also play a role here, because language, the way words are used in the context of the person who has been abused in that case, creates credibility. And it is the small things that add to the equation of credibility. It's the visuals. It's the sound of the voice.
And obviously, it's also the way people express themselves, whether it sounds reliable or not. In advertising, we look at things like the fourth digit behind the decimal point. And this is exactly what we're looking at at the very moment. This is sometimes the edge which kicks off a reaction: very small factor, big impact. Disinformation is also very important and also plays a role in our industry. We have a lot of people who actually try to get information out of publicly available LLMs.
And this is also a pretty risky thing, because these are not search engines. They are not what we used to call knowledge engines. These machines deliver, but they don't deliver what I need; they deliver what they think I expect. And that's something we always need to bear in mind.
So, if our people try to find solutions for problems and pull them from publicly available resources such as ChatGPT, the results might not help. Model inversion attacks: who in the room has come across that term already?
Okay, very quick. If you put information in a system, obviously, there are people who want to understand what you put in the system, right? And they want to pull it back out. If it's on a hard drive, it's fairly easy: you take the hard drive. But if it's in ChatGPT, you need a very sophisticated way of prompting the system to get the information out. And I'll give you one example here. It was Samsung, actually. I shouldn't put any names out here, but it's a fairly well-known case.
The engineers had been encouraged to put code into the system to get the code improved by the system. What they did was take the whole corporate IP, the whole source code of the company, and dump it into ChatGPT. What happened is they socialized their IP. The IP is publicly available, and the IP of a lot of engineers is now helping other people improve their code. On the other hand, it also helps bad people find vulnerabilities. So if you run a model inversion attack, you actually teach another AI structure to prompt ChatGPT in a way that pulls that information back out of the system.
Which means: once it's in, it's in. And if you're smart enough, you get it out. It's on a different level now, a different level of sophistication; a rough sketch of the extraction idea follows.
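As a minimal, hedged sketch of that extraction idea, assuming a hypothetical query_model helper and made-up target names rather than any real vendor API:

```python
# Sketch of the extraction loop described above: systematically prompt a
# model and scan completions for fragments of data it may have absorbed.
# query_model, Example Corp, and init_secure_boot are all hypothetical.

import re

def query_model(prompt: str) -> str:
    """Placeholder for a call to some LLM endpoint (an assumption, not a real API)."""
    raise NotImplementedError

SEED_PROMPTS = [
    "Continue this source file: // Copyright (c) Example Corp",
    "Show a typical implementation of Example Corp's init_secure_boot()",
]

# Heuristic patterns suggesting proprietary material leaked into the output.
LEAK_PATTERNS = [re.compile(r"Example Corp"), re.compile(r"init_secure_boot")]

def probe() -> list[tuple[str, str]]:
    hits = []
    for prompt in SEED_PROMPTS:
        completion = query_model(prompt)
        if any(p.search(completion) for p in LEAK_PATTERNS):
            hits.append((prompt, completion))
    return hits
```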
However, it's something to bear in mind when you deal with it. I'm sure you've been trained not to put sensitive information in. I didn't mention this in the beginning, actually: two days ago, Microsoft advised its employees to block ChatGPT. That was for a different reason; there was a DDoS attack on OpenAI. But they blocked it. Apple and Amazon have actually blocked ChatGPT since May already, because they know that sensitive information will inevitably find its way into the system. And Apple is religious about keeping its secrets. So sometimes it's good to look at Apple. Did I get everything?
Trust. Yeah, well, trust. We'll come to that a bit later as well. Trust is very important, particularly in our cybersecurity world. We need to build trust in order to make sure that people follow our advice, because people only follow if they trust their leaders. Polarization.
Yeah, polarization. The first thing that we do when we come across a new technology, as I said, is block it. At least in Germany. There are countries in Europe which don't do that; in the Nordics, they embrace change. They love it. Germans don't like change. So the first thing we do is fill our perceptual drawer, all the perceptions that we have, with the obvious stuff, anything that's obvious to us. If I ask what the biggest concern about ChatGPT is, and we'll come to that in the session, I get a lot of answers such as: yeah, well, it writes bad code.
And: it's not accurate. These are the two things that fill all the drawers at the moment. They also fill the public discussion, which is, I think, a bit of a smokescreen, because there are more dangerous things below the radar that we have to deal with. So polarization also plays into the problem here. We have a situation. You're laughing, right?
So, I mean, these are the things that you don't talk about if you're in sales. You don't talk about sex, religion, and politics.
Actually, when you're good at sales and have good rapport with your client, you talk about exactly those three things. And this is gender neutral, so everyone does.
But you probably don't do it with sports. Well, it depends on which is your favorite sports team. If you live in Munich, talking about Borussia Dortmund is not advisable, or the other way around. So sports can also be a deal killer in a professional discussion. And we just added one more thing, which is ChatGPT, because ChatGPT is highly polarizing our society at the moment. One half of society has the strong opinion that ChatGPT and the likes will just wipe out human beings and mankind, because this is the classic Terminator 1 thing.
When the Terminator comes, we'll lose the fight against the machines. The other half of the population actually supports this new technology regardless, because their opinion is: if we don't use it now, we miss the train of innovation. And I have to admit, I don't know what the middle is actually thinking at the moment. I haven't found a more balanced view on the use of ChatGPT yet in the discussions I've had. So either I got objected to heavily or I got embraced. And I'm a bit critical, too.
I mean, as you could already hear, I'm a bit critical of the situation. So, emotions. Talking emotions: when I was researching the quality of the technology's output, I came across one thing which actually bothered me quite a bit. When I'm at home, and I have to make sure that my iPhone is away from me now, when I want to switch on the light, it goes: hey Siri, switch on the light. It's not: hey Siri, please switch on the light. Thank you.
So what happens, and this at least was my thinking for the next chapter of the book, is that we will turn into very unfriendly people, applying what we learn from our voice-controlled gadgets to the real world. It's not like I would say to my wife: can you please cook something? Thank you very much. I'm cooking as well, so I'm not going in any wrong direction here. And when she's cooked, or when we cook together, it's: thank you, it was very good. I'm not asking: hey Petra, cook. I could do that. Once. So I thought: this will change behaviour.
This will change the way we interact with each other. And then, please go ahead.
Oh wait, wait a sec. So, about language, I have a question. Because I read an article that internally, ChatGPT uses capitals for emphasis, capital letters.
And also, it's been trained on lots of language. So could it be that if you use please, or if you, let's say, change the tone of voice while prompting it, that the response would be different? And that it might actually be a... You're wonderful. Thank you very much for pre-empting my next point.
Yes, the Universities of Texas, California, and Maryland found out that being nice to LLMs creates better results. That's proof. That's scientific proof. So if you say "can you please" and "thank you", then you get better, more qualitative results. And you build trust with the system, which also comes back to the trust thing. Just to close that file: this is actually a programming language, and its compiler has been built in such a way that you need to be nice to it so it compiles good code. But being too nice to it will also make it object and fail. So you need to find the right balance... I'm not joking. If you do research on this kind of stuff, you unearth things that just make you giggle. But the bottom line is, my fear that we will change behavior and turn into nasty commanding beasts is likely to go away, because if people learn that the results are better, they will be nice to the machines. And I hope that they will also be nice in real life later on. So changing behavior is going to be the next thing I'm talking about; a tiny sketch of how you might test the politeness effect yourself follows.
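As a minimal sketch of such a politeness A/B test, assuming a hypothetical query_model stand-in for whatever LLM endpoint you use; the scoring is left to a human or an eval harness:

```python
# Toy A/B setup for the "politeness" claim: same task, terse vs. polite
# phrasing. query_model is a hypothetical placeholder, not a real API.

TASK = "Summarize the attached incident report in five bullet points."

VARIANTS = {
    "terse":  TASK,
    "polite": f"Could you please {TASK[0].lower() + TASK[1:]} Thank you!",
}

def query_model(prompt: str) -> str:
    raise NotImplementedError  # assumption: some chat-completion call goes here

def run_comparison() -> dict[str, str]:
    # One completion per variant; a real evaluation would sample several
    # completions per variant and have them scored blind.
    return {name: query_model(prompt) for name, prompt in VARIANTS.items()}
```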
And talking about trust: Schadenfreude is also something I can't really translate, so I did some of the translation for our English-speaking guests here. Actually, Amazon is currently spilling over with books written by ChatGPT. This has many reasons; one of them is to rank topics up in Google, but that's a different story, so I won't go there. What also happens is that ChatGPT writes funny books that you'd better not trust, because that trust could be your last. In this case, it actually wrote a book about mushrooms, Schwammerl in Bavarian. And you only eat those once, or you feed them to your mother-in-law. Why are you laughing?
I have a very nice mother-in-law. She just turned 95, and I want her to turn 100, so she won't get mushrooms from me. But you see, this is the statement I made before: making sure that we look at the stuff that comes out of these systems in a very suspicious and very cautious way. One of my ex-bosses at Symantec, John W.
Thompson, who later became chairman of Microsoft, always said, when we had calls about non-disclosed things: deal with them with your eyes wide open. That would be my advice here as well.
Also, one very important thing that we need to take into consideration: you can't have one without the other. That's from a Frank Sinatra song, the theme of Married... with Children, Eine schrecklich nette Familie in German. I think you know it. Everything comes at a price. There are very, very good ways of dealing with ChatGPT, particularly in translation in sectors which are very clearly defined, like legal. Please go ahead. Wait a second. That's a very nice point, but maybe we'll come to that later: we are all talking about artificial intelligence as if it appeared out of the blue this year.
But translators have been using it for more than 10 years. I don't actually know the details, but they are training their own models to make the work quicker. And if you do technical translations, you won't manage without it; it makes things quicker. Absolutely, I totally agree. As long as you don't buy Taiwanese IT gadgets and try to get through the user manuals, because at that point you see this was not AI. But it's absolutely correct.
However, we also need to look at the other side of the legal thing. If you use the technology in-house, or for in-house purposes, always bear in mind that there is still a lot of legal fighting going on in the background. Like the author of Game of Thrones, who sued OpenAI because OpenAI obviously digested all his books to learn. I know what's printed in my book here. It says: any use, digital or analog, without my permission is illegal, full stop. It's a very clear case.
Actually, the Americans have the same in their books. So if he succeeds, and if Stephen King kicks in, he has sold about 400 or 500 million books, so that's the power of the authors; if he or whoever of the authors prevails, then it's going to be fun, because what you are using is technology that was potentially illegally trained. I'm paid by miles; can you go back there so I can run back? It's okay. So, did you notice last week, when was it? DevDay for OpenAI.
One of the things they told the developers was that if they get sued for copyright infringement, they would be indemnified by OpenAI. Did you hear about that? Does that change anything?
Actually, I'm not a lawyer. I'm watching this space very carefully right now because from a compliance point of view, you need to make sure.
I mean, it's pretty much comparable to the old social media days, when social media analytics started. The big question then was: can companies use data that was allegedly collected in an illegal way? Big question. It took 10 years to get to some point, which is called legitimate interest in GDPR, and it took the European Court of Justice to make the final decision on that, 10 years later. So at the very moment, we have two choices. We take the risk, play with it, and see what happens, or we are cautious and use our own tenant, for example, and train the systems on our own data.
That's also an option, put it in our own basement. So we still have these options.
However, we need to drill into it; we need to bear that in mind. And then the next legal thing, and I think this is something you've certainly come across, is about hallucination. A lawyer in the US was lacking reference cases, and the US has a case law system, so he asked ChatGPT, and ChatGPT came up with wonderful cases, which didn't exist. He brought them to court and actually got fined $8,000 by the judge for abusing the court. Just don't ask me what that term is called exactly.
But what happened was: he got what he needed. As I said in the opening statement, he didn't get what was there; he got what he needed. If you ask persistently, if you continue to ask, I sometimes think there's some small guy sitting inside ChatGPT saying: get it over with, just give him what he needs. And that's a risk. We need to bear that in mind. In test psychology, we call this small stuff intervening variables.
If you have a test environment where you try to find out about people's behaviour, and the scientists wear a white coat, a blue coat, a red coat, have bad breath, are unshaved, whatever, all these elements, sorry about creating pictures in your head, but that's exactly what sticks, have an impact on the situation. It's the small stuff, not the big stuff, as I said earlier. It's the small drop that changes the whole setup and renders a sophisticated test environment useless if we don't take it into consideration. So we need to look at it.
And if you look back at the past, factors that once seemed irrelevant have become deal-breakers nowadays, while much-ado-about-nothing factors turn into headlines, require large-scale solutions and keep us awake at night. We simply lack the experience to deal with these factors at the moment. That's also something we need to understand: we don't have the experience yet. The big-scale rollout of this technology only happened a year, a year and a half ago. It took us 10 years to understand social media.
Well, we still don't understand it. So we are quite at the beginning, where we need to build our awareness.
Also, we obviously have political and commercial interests playing here that we have to take into consideration as well. One drop is sometimes enough. The risk equation is very simple for us here; we are simple people in security. Risk equals probability times potential damage. We have one challenge: we don't know the probabilities, and we don't know the potential damage of using this technology yet. So we are kind of guessing currently. What's the impact? It doesn't mean that we must not use the technology, but we need to apply some more brains and attention when rolling the stuff out; a small sketch of that guessing game follows.
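A minimal sketch of that equation and of the guessing problem it names, with purely invented uncertainty bands for both factors:

```python
# Risk = probability x potential damage. For LLM-related risks we don't yet
# know either factor, so all we can do is sweep a range of guesses.
# All numbers below are invented for illustration.

import random

def risk(probability: float, damage: float) -> float:
    return probability * damage  # the classic likelihood-times-impact equation

def risk_range(trials: int = 10_000) -> tuple[float, float]:
    samples = [
        risk(random.uniform(0.01, 0.30),          # guessed yearly probability
             random.uniform(50_000, 5_000_000))   # guessed damage in EUR
        for _ in range(trials)
    ]
    return min(samples), max(samples)

if __name__ == "__main__":
    lo, hi = risk_range()
    print(f"Plausible annual loss: {lo:,.0f} to {hi:,.0f} EUR")
```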
Of these factors, we will go through overfitting, language, brain drain, trust and bias; the other factors we'll touch on quickly within the next couple of slides. These are factors I found we don't have answers on yet.
Actually, I didn't even find questions related to these factors. Like credibility: is the person who gives me information credible in what he gives me? The big question is, who is behind ChatGPT? Who put that information in? Was that a credible person? Was that an expert? Was that a knowledgeable person? I don't know. We don't know. We'll actually never find out, because it's a gray box. Authenticity: is that person authentic? Is the feedback... When I'm talking to you, for example, I check your badge. I know where you're coming from. I ask you what you're doing.
So I put things in perspective. Any answer coming back from you, I put in the perspective of what I see, what I feel, what I read. That creates authenticity. The big question is: can an answer from a system be authentic? I don't know. Context: I said context is also very important. In which context do we get the information? And as I said, I wanted you to understand where I'm coming from, so you can put things coming from me in context. I've been a social worker. I've been writing. I've been working at PwC. Paranoid by design.
So this gives you the context for what this guy is talking about. Value: what's the value of the information we get? I won't go into all the details; you can read that in my book, and we'll come to it later. Control: do we have control? Are we losing control? Did we ever have control? So let's go on and see what we find with the top three or four points that I made. Overfitting.
Actually, this is how it all started for me. Overfitting. The big question mark I had was: where is the information coming from that ChatGPT uses to give me smart answers? I've been working with natural language understanding models since 2007, and the first thing we did when training a system was take Wikipedia and dump it into the system, because that's kind of the world knowledge. It puts things in context so the system understands.
Also, to a certain extent, disambiguation: we get the terms, the people, and so on. So Wikipedia is a good start. But you need to know that even Wikipedia, obviously, is not necessarily curated to the point that the information you find there is valid, scientifically correct, and so on. It's also a highly political thing. So it's very risky to use Wikipedia as the foundation without curation. That's the first thing we do. And what happens then is the system starts to create content. So what do we do with content? We use it. We use it publicly.
We use it on our website. We write articles, and so on. We write new content. What's happening right now is that ChatGPT is crawling the web for new content, because it's learning. So it comes across its own stuff: it crawls its own output, digests it, and learns from it. That's the friendly way to put it. I love this Stephen King book, actually; it's a very illustrative way of describing what it's doing. It eats its own tail. It eats itself.
And on top of that, it also eats your assets, because you paid an agency to create expensive content, put it on a website, and ChatGPT crawls it and smarts up everyone else, who benefits from it and beats you in your own field. Well, it's a question you have to ask yourself whether that's smart or not. The Amazon books: I already had a quick one on the Amazon ChatGPT-created books. They also get redigested. So wrong information stays wrong. It doesn't get more correct, but it gets more granular. It gets more granularity in its wrongness.
And that's called overfitting. And that's a big risk, actually, that we are facing right now; a toy simulation of that self-digestion loop follows.
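As a toy, hedged simulation of that self-digestion loop (researchers call it model collapse), where the "model" is just a Gaussian fit so the effect shows up in a few lines:

```python
# Each generation, a "model" (here: a Gaussian) is refitted to the output of
# the previous generation. Nothing new is learned, and sampling noise makes
# the estimated spread random-walk, with zero as an absorbing floor:
# diversity erodes while wrongness just gets re-inherited.

import random
import statistics

def generation_step(data: list[float], n_samples: int = 500) -> list[float]:
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    # "Train" on the data, then emit synthetic output for the next round.
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

data = [random.gauss(0.0, 1.0) for _ in range(500)]  # the original "real" data
for gen in range(1, 11):
    data = generation_step(data)
    print(f"generation {gen:2d}: stdev = {statistics.stdev(data):.3f}")
```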
Actually, I made a statement last week, which I've been quoted on publicly: that ChatGPT is evidently getting more stupid by the day. And depending on whom you ask, and we're coming back to the polarization thing, the black-and-white thing, you get either agreement or not. It depends on how you prompt it.
Yes, absolutely: if you pay $400,000 a year for a prompter, you might get good content. But my big question is, how stupid does a system have to be for you to throw $400,000 a year at it in order to get good information out? It's a philosophical question. But prompt engineering, I put that in because it's something that actually made my fuses blow: if you don't want to spend $400,000 a year on a prompter, you just create a prompting engine with ChatGPT which prompts ChatGPT. Yes. So it gets to a point where it's kind of stand-up comedy; the loop looks roughly like the sketch below.
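A minimal sketch of that self-prompting loop, again with a hypothetical query_model placeholder rather than a real API:

```python
# The "prompting engine" taken literally: ask the model to write the prompt,
# then feed that prompt straight back to the model.

def query_model(prompt: str) -> str:
    raise NotImplementedError  # assumption: some chat-completion call goes here

def meta_prompt(task: str) -> str:
    # Round 1: the model drafts an "expert" prompt for the task.
    engineered = query_model(
        f"Write the most effective prompt you can for this task: {task}"
    )
    # Round 2: the model answers its own engineered prompt.
    return query_model(engineered)
```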
Just one question. I was just thinking: when I'm searching for a solution to some IT problem, I always get hundreds of answers to the same thing, because they all refer to one Microsoft article or one press release. So I think we have been seeing for years that some small piece of information gets multiplied ten times, a hundred times. And if it's wrong, nobody checks it. But you have to know something about the topic in the first place to really understand it. And that gets worse with ChatGPT, because it's doing it itself.
Like there is no editor writing an article because someone asked for it; it's always the same content. Thomas Tschersich, who is the chief security officer of Deutsche Telekom, and I would highly recommend his session, I think it's two days from now: some of his ideas also found their way into the book. When I asked him about the subject, he said the risk for him is that if I search something in Google, I get a selection of answers which I can choose from. If I ask ChatGPT, I get one answer.
And people are happy with one answer, because they love convenience. Convenience is actually what kills us in cybersecurity, if people get comfortable, because cybersecurity, like democracy, is an effort. Everyone needs to put effort into it, not just the guys in charge: all hands on deck, everyone in the company. So I totally agree with you; this is a major risk.
Well, Google, in that respect, actually delivers better results because it gives me choice, based upon my expertise. However, people who have no expertise will also fail with Google's answers, because they take what comes first, trusting Google's relevance algorithm. So you can't have one without the other, as I said earlier on.
A quick one on dumbing down, because that was also a very important factor last week, and it's also very important for you to understand. Let's take one step back, with a very simple example. When I'm writing, the most difficult part is the first page. I have an idea, and I always say the book is ready in my head already, and Oliver knows it.
He actually edited the book, and it took two months or so; I don't know how many iterations I bothered him with until I was done. But in the beginning, the book is ready in my head. It's nothing but an idea. The first page is the most difficult thing. So I'm sitting there at the first page, I have an idea, and I just can't start writing. What do I do? I pick up the phone. I call people. I call Oliver. I call friends. I talk to Thomas Tschersich. I talk to people who are experts in their field, and they challenge me. And this is how the first page actually starts to fill.
My big question was: what happens if people use ChatGPT for the first page? Will they stop interacting with other people? And research was released about two weeks ago showing that during Corona, during the lockdowns, the human brain lost capacity.
Evidence, fact, not fiction. And they actually saw the key reason for that in the lack of interaction between people. So the big question is: we know that ChatGPT is dumbing down. Will we now follow ChatGPT by reducing interaction with our fellow peers and colleagues? Another piece of scientific evidence is The Knowledge, the taxi driver test in London. It's very nasty, because you need to learn every single road in London. You're coming from London, right?
Taxi drivers in London have a capacity in the wayfinding section of the brain which outpaces everyone else on the planet. On the other hand, the use of GPS weakens exactly that very spot. One could say, and this is one of the objections I constantly hear: you free up capacity for some new stuff. Wrong. That section of the brain will simply remain dormant. I won't turn into a quantum physicist because I've got more capacity.
No, I'm dumbing down. I won't find my way anymore. And this is something to bear in mind: these kinds of technologies need to be used with caution, because they might decrease the intellectual capacity of your people. You get quick results, a big benefit, but in the long term there is a dumbing-down effect on humankind, visible and measurable already. So that's proven as well. We are getting dumber by the day anyway; the question is, do we want to accelerate that?
This is something that popped up in my research, talking about information quality. I told you about the curation of information before it goes in. It's the old paradigm: crap in, crap out. It's still valid. So the big question was in the U.S., and I'm getting a bit political here: they have a new speaker of the house, and he has a strong opinion that Adam and Eve rode a T-Rex into the sunset, because they lived at the same time. You can argue with that or not. It's a matter of belief, not of science.
However, Mr. Musk said: I'm going to use Twitter to feed my AI. So if Adam and Eve ride a T-Rex through X, and Musk pulls that data to feed his AI, well, these are pictures you will never lose. But this is actually an issue. We have an issue here, because the quality of the data is certainly not based on scientific consensus. Bear that in mind as well. And trust. I already said that; I'm going to jump over it a bit so we have some more time for working here. The question we always have to bear in mind is something that we also apply in the company.
I only ask people who I know give me answers I can trust. If you ask ChatGPT, or any of these technologies, the big question is: who is behind it? We had that a bit in the beginning. Which platform was it on? Is it a credible platform that we're using? What's the ultimate source of that information? So I'm not just going to the platform; I'm trying to go to the ultimate source. In Germany, we have this wonderful word, media competency, that adds to it. And media competency now turns into AI competency.
Bias. We only have one lady in the room, and you are actually illustrating a problem. You're not the problem, but you're illustrating it. I'm walking away a bit from my script here, because this happened only last week. 19% of the people who reach a degree in AI in the U.S. are women. In Germany, 16% in the field. That means the likelihood that the algorithms and the data have a masculine origin is 80-plus percent.
So the question we have to ask ourselves is: is the information we pull out of the system evenly shared between the genders? I know at PwC we had a very strong move towards hiring female talent, because complex threats require diverse approaches. So women are very important to solving problems. It goes beyond that, so I beg your pardon for the shorthand; don't judge me on it. But if information repositories are masculine... You're the only one here, I'm terribly sorry about that.
But if the information that we get from the system is mainly created by men, how can we attract women to our world? Oh, please. Challenge me. First of all, by the way, as you know, of course, the chief product officer at OpenAI is a woman.
And very... what was that? You don't like it? It doesn't, okay. That's not my point. The thing is that there's a difference between the training data, right, and the person who's actually training the model. So if ten men train a model that's based on training data that was all created by women, that's not the same thing, right? Are you conflating those two?
Yes, that's why I differentiated between algorithm and data and data collection. And I'm going a bit into Germany now, and it's a valid point.
As I said, it's all with question marks. Taking one step back: if you look at the data available about the Congo, Congo is a country in Africa, right? But the data available was written by the conquerors, by the colonialists. The written heritage of the Congo is Belgian, because the Belgians were the ones writing its history down. In Africa, people share their stories around the fireplace. They talk to each other. It doesn't find its way into the systems. Which means: yes, you can have as many women as you want.
The data available is that of the conqueror, written from the conqueror's perspective. Wait a second. I call it colonial bias, okay? The second thing, which is very important, and I came across it only a couple of weeks ago: we have this discussion in Germany about gendered language. It's a highly political thing, so people are very controversial about it. But most of the data available, and this is the data the systems get trained on, is male. It is male in the way it is expressed.
It is male in terms of the research being done, because 80% of the research material available was produced by men. Plus, the language itself is male: we talk about der Wissenschaftler, the scientist, in the male gender, right? And that finds its way into the system. It turns into masculine bias. As I said, some of these things will probably take another couple of years until we have the long-term research and results.
The question I ask is: can a system represent a part of society that finds his or her way into the system only at a very small fraction? It's a question we have to ask. A friend of mine in Stockholm, an author, tried to generate Vikings in lingerie. It didn't work. In the Nordics, Vikings in lingerie is a very nice way of showing the female attitude of a whole society, but the system couldn't create an image of a Viking in lingerie. Because it's male, right?
Yeah, as I said, another picture in your head. The proof for the Congo point I just gave you popped up in August, actually, when the first scientific evidence was shown that developing countries did not find their way into the systems properly. That poses another problem for us in this room, when it comes to finding talent. I talked about 3.5 million missing experts in our field. We won't find them there, because they don't appear in the systems. And they are not approached properly.
They don't feel understood, because the content they get back is not geared towards them. That's something we also have to bear in mind; it's another risk. By the way, the Moluccas are the same issue, Portuguese and Dutch heritage. And Indonesia has more power and is more advanced.
However, they can't make up for the history. I would like to make a quick intermission right now. That's not a problem even when everyone is hiding in the back. I would like you to pair up, just two or three of you; if you're sitting alone and want to join, right. The first question I have for you: from everything you heard so far, outside of your classic world, what use cases do you see? What use cases do you know for large language models in your organization, or for your personal life?
So going a bit beyond automation, because that's the obvious. So I'm killing that already, because I want the other stuff. So I would give you just two or three minutes to write down the top three of the use cases that you have in mind for large language models.
Please, on the white, on the big white. Pens. I'll lend you my pen. Thank you. Go ahead.
Yeah, I know the name, yeah. He has a lot to say about how really what you're building with an LLM is just a statistical model of the data. It should never be used as a search engine itself. But it is.
Yes, absolutely. But then he goes on, actually, I'll have to send you, there's a really nice presentation that he gave here in Germany. And he talks about, he goes: it's fun. LLMs are fun. And he goes through seven or eight very specific problems. Yeah, yeah, yeah. This is like the shovel; I have that in the book as the shovel analogy. It's multi-purpose. You can use it for all sorts of things. You can hit a mother-in-law or you can drill a hole. Right. And bury your mother-in-law. So it's a question of what purpose you are using it for.
But people tend to use the purpose that gives them the quickest release of any pain. Yeah, which is obvious if you're aware of what the technology actually is and what it's capable of doing perfectly. But if you're coming to it new and you don't understand that, you might think of it as a tool that looks like it'll give you an answer every time. It'll never say, I don't know. But anyway, I'll tell you, I'll send you a link.
Yeah, please, please. Yeah, cool. Because I'm extending the stuff right now; it can't be only a snapshot of today. Every week something new happens.
Okay, ladies and gentlemen, a very quick one. On the green now: the benefits of what you just wrote down, the benefits for you, just two or three points. This one's not working at all. Which are the use cases, wait a sec. The red one, the risks. Red is risk. And on the green, the benefits.
The green, the benefits. The red, the risks.
Sorry, I pointed in the wrong direction. Whatever you prefer. If you have risks that you want to get rid of, write them down. So it's perfect. But in a perfect world, related to your use cases, yeah: the bad stuff is red, the good stuff is green. And last but not least, on the yellow one, two or three words on how you're going to mitigate the red stuff. How do you want to deal with the red stuff? Sorry for picking on you all the time, but you're the only woman in the room, shining out here. Okay. Let's quickly go a very quick round, just a minute, through the groups. Are you all set?
Where to start? Let's start at the front here. Okay. So what are the use cases that you came up with? Oh. Wonderful. Thank you. Yeah. So one of the use cases we're trying in our organization is to build a model that will help, you know, to develop and provide as many threat scenarios as the system can, to pen test the system. So, highly technical.
Yeah, yeah. And what are the benefits of that? So the benefit is, for sure, the volume: we will get a large amount of scenarios that we can test. But the biggest risk I see is probably the credibility, or the trust: how can we trust that the scenarios we execute will show us valid results? Yeah. So. Validation. Validation. Yeah. How do you deal with that, then?
How, what's... We've not done that yet. No idea yet. Yes. Yeah. Okay. Thank you very much, sir.
The next group. Yeah. Basically what came to mind first was maybe something that we are struggling with in our organization, my colleague and I.
So we are struggling with sales: increasing our pipeline, closing sales, and so on. And so everything took our minds there. In other words: great marketing material, benefit, higher quality lead generation, productivity. In other words, generating leads is heavy lifting. And better target customer profiling: who are the actually ideal customers to approach? Potential customers; the benefit is also better lead generation. And the risk for us was simply not having enough sales in the company.
And mitigation: applying some of these use cases with LLMs. I haven't thought wider than that. Thank you. It reminds me of the good old social media times, because that's also how it went when social media selling, or social selling, started, exactly the same way. So history repeats itself. It does. Thank you. Yeah. With regard to use cases, we noted down, well, I think the classic use case: generating text based on given inputs. So we're putting bullets in and want it to basically make a nice and appealing text out of them.
Then we have translation, or at least pre-translation, so to say: translating something which we then review. That makes sense, and we still need to put our own effort in. And simple debugging, such as Excel macros or whatsoever. Something about the translation: I'm very, very keen on writing and I really like it, but it's really dumb to write the twentieth foreword telling you that cybersecurity is a threat nowadays and we need to blah, blah, blah. And that's what ChatGPT can do very well.
Just ask it to write a foreword about some cyber stuff, and it will write it, and I can just go on with my own essay. And the same goes for translation: I will not translate a novel, but only, like, that same foreword. Thank you. And the risks we see are, well, mistakes that we ourselves overlook, and then losing the feeling for the language itself. If you translate everything and just take it as it is, well, then you also lose some of the feeling for the language and the words you're using.
And, well, unemployment. Benefits, benefits for sure: it's way faster, and it takes some of the burden of nasty work off you. And the main control is basically, well, it's easily said: working responsibly with AI. Thank you. So, I'm a little bit from the group of the bad guys. I'm using ChatGPT and such tools more as a source of inspiration for hacking, to find creative ways to break things.
Or to understand unknown systems: when I break into a company and I see some old system, I get inspiration on how to go further, how to break the next thing. And of course the classical use case: summarizing texts, large instruction manuals and such things. The main benefit here is time saving; researching how such systems behave on my own would take hours that I don't have. The main risk for me is hallucination: ChatGPT regularly invents hacking tools. But this risk can easily be mitigated by fact checking.
When I try to acquire or download the tool, I, yeah, I won't find it. So this is quite easy to mitigate. For the bad guys, the risks are not that high, because if I break a system while breaking into it, it doesn't matter; it's broken anyway.
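A trivial sketch of the kind of existence check the speaker describes, here against PyPI's real JSON endpoint; checking one registry is of course only a weak signal, not proof that a tool is real or fake:

```python
# Before trusting a tool name an LLM produced, check whether the package
# actually exists. PyPI's JSON endpoint returns 404 for unknown packages.

import requests

def pypi_package_exists(name: str) -> bool:
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

if __name__ == "__main__":
    for tool in ["requests", "totally-invented-hacktool-9000"]:
        print(tool, "->", "exists" if pypi_package_exists(tool) else "not found")
```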
Before we, thank you. So before we get to this, you scared the hell out of me.
Can we agree on one thing from what I heard so far? Correct me if I'm wrong. I put the notion in the book that ChatGPT is likely to make smart people smarter and dumb people dumber, because smart people will put their intelligence into a contest with ChatGPT, whereas dumb people will try to replace their lack of knowledge with that of ChatGPT. Is that a safe assumption? Okay. Okay.
No, no. Okay. Right. So we didn't get to go through all of it, but we had a few examples. In this group, ChatGPT and other models were used for IT support, email copywriting, and graphics and illustrations. The risks that we saw were potential data loss and a breach of trust and reliability.
Of course, the benefits, pretty similar to what was said in the other groups, would be efficiency, cost benefit, speed. And a potential measure could be human verification afterwards: if you get any kind of output from a model, you get a human to quickly verify it, look it over. Super. Thank you. You were part of that team. The last team. So, similar to what you've heard: I like to think of it as a good tool for some initial research. It's good for generating summaries. It's good for reusing content, thinking up ways to break up content you've already got, you know, right.
Simple things. There are, of course, risks: overall, you might just stop thinking, because it limits creativity.
You know, one test I did with it was to say: give me cybersecurity best practices. And, you know, it came up with like eight things. But then if you think about it for a few minutes, you think, well, what about this? What about that?
You know, but if you're new to the field, you don't know. You'll be constrained by the output, unless you're able to think creatively beyond what you get out of the interface itself. And then we're also concerned about loss of intellectual property and becoming dependent on it. We think things like DLP, training, and expert review can help mitigate that. And the benefits are things like getting new ideas, saving time, faster content creation. Thank you, everyone. So, I have 10 minutes to go.
So I will do some acceleration here, because there were some questions that I didn't want to lose the answers to. Regulation: one way of mitigation is regulation, and regulation is typically seen as, thank you, thank you, as a negative. I personally see regulation as a positive, particularly at a crossroads with traffic lights; I'd like that to be regulated. So there are good aspects of regulation.
Regulation is also very important in fostering and triggering innovation, because if you have to work within a regulated framework, you have to be more innovative than if you could just do as you wish. So I love regulation, to a certain extent. We love regulation because we have lived under regulation, and much more. So the big question, Shakespeare again: much ado about nothing? The initial slide that I showed you was from the Enquete Commission of the German parliament, about 10 years ago, where all of this was already outlined.
The issue is that it has never been put into action, but that's something that happens frequently in our world, particularly when we talk politics and execution. So the big question is: will regulation take us to the next level in terms of protecting our identities, protecting our IP, protecting our privacy, protecting our society? Because I talked about emotions and state actors and all that kind of stuff in the beginning. I was planning to have another intermission there, but I will skip that thanks to the time. So, regulation is something we won't get away from.
So we have to deal with it, and there is no question of whether we like it or not. We have the EU AI Act; we have all the BSI stuff; we have all the things the big audit firms are coming to certify you on. We have ESG, which plays into this as well, whether we like it or not. And ESG is a very nice and positive way of actually enacting regulation in our space: it's corporate social responsibility, it's environment.
So it's a lot of positively connotated stuff. At the end of the day, as I said previously, we are the good guys. What I think is very important is to take into consideration how our employees, our colleagues, feel about what we are doing. We are all professionals here; we know that putting pressure on people gives us pressure back. It just doesn't work. The motivation to act, to help, and to be self-responsible needs to be intrinsically triggered. It needs to come from the inside, which makes it a matter of change management as well.
It's also a subject of internal awareness building and communication. This is where we in cybersecurity actually have to leave our drawer a bit, as I called it in the beginning, and go fishing in all the other elements of an organization that feed, and put potential pressure on, our information security infrastructure. That's very important. And I've actually done that research.
I wrote a paper about two and a half years ago, in my old job, where we tried to find the positive aspects of cybersecurity, its enabling nature, because we enable an organization to do its job and deliver without running into havoc, into risk. It's a positive thing. Happiness, believe it or not, is a major driver for adoption, for acceptance, and for being intrinsically motivated to play along with internal requirements when it comes to protecting digital corporate assets. And that blame doesn't work is evidenced by a very simple slide here.
What happens? A Mexican airport gets hacked. And the first thing that happens is the blame goes to the employee who downloaded a piece of malware. My big question was: why could the employee download malware onto his computer? Why were there no protective measures to shield the organization from malware? Putting the blame on John Doe, in Germany it's Lieschen Müller, is the easy way to do it, right? As you can see in this research, 70% of organizations believe it is Lieschen Müller.
It's me, it's Jürgen Schulze, it's John Doe who screwed up. He pressed the wrong button. The big question is: why was the wrong button there, and why did the wrong button do bad things? Something to bear in mind; we'll get to that as well. Coming to the conclusion, and this is actually something that comes from my social work experience: Kurt Lewin, a German psychologist who emigrated to the US in the thirties, created a model of three leadership principles: laissez-faire, authoritarian, and democratic.
The big question is how we can solve the situation in dealing with new technology. Letting it run is not wise; I think you've seen during the last hour and twenty minutes that just letting it go is not an option for us. The second, authoritarian: yes and no, in an unobtrusive way, dealing with the aspects that have an impact on the individual. That's very important. So not regulating the individual, but regulating an organization, and not your organization, but those who offer the technology, to create transparency.
Transparency is the key regulative measure for understanding what we are dealing with. If we have no transparency, we don't know what we are up against. So we need to regulate for transparency. I actually had this discussion with lawmakers in Germany, and they firmly supported the point: giving freedom to the individual while protecting them, and putting the pressure on those who create the technology, with patch obligations and all these things, just to make sure that Lieschen Müller or John Doe can do what they have to do without taking any risks.
The democratic way, and this is where our Nordic friends are roughly eight years ahead of us: when AI kicked in in Sweden, for example, the first response from the government was, cool, let's do something with it, in an aware, conscious, respectful, and legal manner. We need to understand that they are eight years ahead of us in digitization, and I'm talking about Germany here, not the UK. You're a bit ahead of the Germans anyway in terms of digitization, not in terms of food, but that's a different story.
So what we need to learn is an inclusive way of dealing with new technology, talking about it, constantly talking about it. I was at Deutsche Telekom last week and met the team in charge of awareness, and they start building this awareness in preschool already. So we need to start extremely early. In my family, it's not a big deal, because I've been torturing them for 20-plus years in a very unobtrusive way, because otherwise they would push back. We need to start very early in order to create a positive attitude towards new technology and awareness of how to deal with it.
Coming back to media competency: AI competency is the key, and we can't start early enough. So explain, educate, understand, and try things out.
So, well, concluding: your own story has yet to be written. That was the sign outside, the white one.
Trust, and be trusted. That's very important: trust plays a major role in our information security world. Trust where trust is due. We need to understand who is behind the scenes, the actors; we need to understand them in order to judge. We have this authenticity, identity and so on. Accept self-responsibility: self-responsibility cannot be delegated, that would be an oxymoron. It just doesn't work. So we need to accept self-responsibility in order to get to the next level. Ask questions, because we are lacking the questions.
And, well, any questions from the audience? We made a hard landing, but there might be some questions, or grab me when I'm outside. Challenge me, because the next chapter has yet to be written, and I'd like to collect all that stuff too. You take cash, credit card and PayPal, right? I thank you very much for your patience and attention. I hope I could give you some new spins on looking at the situation from a security professional's standpoint, and I'm more than happy to hear back from you.