Good morning, ladies and gentlemen, and everyone in between. First of all, thank you very much for attending. This session was not perfectly announced — my presentation is listed for the other room, so I have yet to collect the attendees from there, but the gentleman would not play along and swap who is in which room. Let me quickly give you some advice on how we run these sessions. The first thing is very important: I've been advised not to run around like a headless chicken today, because this is streamed online, and that would create a terrible view for the online attendees. So welcome to all our online attendees as well. The microphone is also very important for the online audience. If you contribute to the session and ask questions or make comments — and that is something I'm planning for — I need to come to you with a microphone.
Otherwise, our online attendees wouldn't hear you. Regarding time: I've prepared a deck that was meant for a somewhat bigger audience, but that's not a problem — I can absolutely walk away from the deck as soon as a discussion kicks in, which gives the whole session the depth you expect from such a session. There are two ways we can run the session today: one is superficial, touching a lot of ground in the next 90 minutes; the other is walking away from the agenda, sitting together, and going in depth. So let's keep it interactive. I invite you to voice any questions, any concerns, any objections — and I love objections. As soon as they pop up in your mind, just raise your hand and I'll come over with the microphone to make sure you're heard. A few quick words about myself, just to put what I'm telling you in perspective.
I'm actually a social worker by training — that was about 40 years ago. Then I ended up in IT, which was about 39 years ago, and ever since I've worked in distribution and information security with a focus on identity and access management. That's one of the reasons I got to know KuppingerCole decades ago, with the EIC. I've also spent a lot of time with natural language understanding, which is a discipline of AI and the one I will focus on in the course of today. And I worked for four years across the street, in this big building, at PwC in the cyber and privacy area — a wonderful ex-colleague joins us today. Thank you very much.
And obviously I'm also writing — that was the key reason I've been invited to share some of my thoughts with you. I recently wrote a book in which I drill a bit into all those aspects of natural language understanding and generative AI that are typically not publicly discussed. We'll come to that later, because there's a bit of politics in it as well. If we learned one thing from our parents, it's that the sun will come up the next day — that's for sure. And fear is something that guides a lot of our principles, particularly in IT. Some of our colleagues sell information security based on fear. I worked at Symantec about 20 years ago, and we were selling on fear — and it worked very nicely. It doesn't anymore. Fear is over.
We are in a new time with a more positive view on these situations. However, the human factor still plays a role, and what I'm trying to get you to today is a point where we accept and understand the role of human beings in an organization in the context of AI impacting cybersecurity. That's why I say the human factor turns into a human vector — one of the many, many vectors we have to deal with, and a very important one. We need to drill into it with a lot of understanding. What I also typically drill deep into is how risks behave when they act together or against each other: risk aggregation, where the risk at the end of the day is bigger than the sum of all fears — the sum of all risks.
So aggregation is very important, and so is interdependence, because risks act interdependently: they can trigger each other, and they can trigger each other sequentially, like a domino effect. It starts easy and then turns out to be difficult. What I'm also trying to do today is create awareness for the non-obvious stuff, the non-obvious factors. Oliver knows that I'm known to think around corners, so I would cordially invite you to stop me if I take one corner too many. I think in terms of what's next after next — I'm paranoid by design; that's what people like us do — and I'll provoke some unusual thoughts. We are dealing with new situations; our ex-chancellor called that "Neuland". We are trying to cover new ground, particularly for the attendees in this room — I talked to some of the colleagues at the beginning.
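The aggregation and domino points can be made concrete with a small Monte Carlo sketch. All numbers here (the probabilities, the damages, and the 0.40 "trigger" probability) are invented for illustration; the point is only that interdependent risks aggregate to more than the sum of the independently estimated parts:

```python
import random

random.seed(42)
N = 100_000

# Two hypothetical risks; probabilities and damages are illustrative assumptions
p_a, damage_a = 0.05, 100_000   # e.g. a successful phishing attack
p_b, damage_b = 0.02, 250_000   # e.g. a follow-on data leak

# Naive view: the total expected loss is just the sum of the parts
expected_independent = p_a * damage_a + p_b * damage_b

# Interdependent view: once A has fired, it triggers B far more often --
# the domino effect. The conditional probability 0.40 is a made-up number.
total = 0.0
for _ in range(N):
    a = random.random() < p_a
    b = random.random() < (0.40 if a else p_b)
    total += a * damage_a + b * damage_b
expected_dependent = total / N

print(f"sum of independent risks: {expected_independent:,.0f}")
print(f"aggregated, with domino:  {expected_dependent:,.0f}")
```

With these made-up numbers, the aggregated expectation comes out roughly half again as large as the naive sum — the domino term is what a per-risk register never shows.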
We work in cybersecurity, so we work very focused — and I'll come back to that later — on threats, threat actors, threat vectors, and all these kinds of things. And sometimes we forget all the contributing factors that are not on our radar screen, that sit elsewhere in the organization, or we look at them as a negative. We had this discussion at the beginning: we are the nice guys, so we need to look at it from the positive side — we are the enabling people. So my first question for you today: who of you is using ChatGPT or other large language models at work? Almost everyone — that's quite good, actually. Last week I was in Bonn for a keynote and asked the same question to an audience of about 200, and only three or four hands went up. With or without training — who got trained on using ChatGPT and LLMs? Okay, that's a bit fewer. And have you been trained successfully, in terms of the outcome of your work with this kind of technology?
I believe so.
Can I ask you a nasty question? How do you measure success?
The results are different.
One sec.
Yeah — when the results are better than what I expected them to be, and better than what I would have created. If I can guide the LLM to produce something that actually makes sense to me and enhances what I would have provided, that's success.
Thank you very much. Anything else?
He trained me, so,
Okay. Okay — so he has been trained by the gentleman who just gave me the answer.
Who is allowed to use ChatGPT in the organization officially, and who is prohibited?
Ah, PwC — no wonder. We'll come to that later; this is a bit of home turf for me. Paranoid by design, as I already said: we in cyber tend to look at situations from a risk standpoint first, and only as a second step from an opportunity or benefit standpoint — at least initially, until we figure out what we can do with it and how. What we also need to understand today: I won't go deep into the benefits, because we'll have four intermissions for quick discussion rounds where we drill into the benefits of this technology and into the risks. And I won't go into the technological advantages like coding and so on, because that's beyond my grasp as a social worker.
The genie is out of the bottle. There's a bit of a German point of view here: what we typically do in Germany when something new pops up is try to stop it. We try to prevent it from happening, we try to find the bad in it, we try to regulate it, and we spend all the energy available on stopping it — that's what Germany is a bit famous for. We waste a lot of energy here, and there's no energy to be wasted. The genie is out of the bottle; we won't get it back in anymore — and in the meantime it grew, so it's too fat to fit back in anyway. In 1997 — does anyone know Nicholas Negroponte? No?
Negroponte was the founder of the MIT Media Lab at the Massachusetts Institute of Technology and the author of the book "Being Digital" — a pretty famous guy, actually; that book is a kind of digitization bible, for good reason. I met him in San Diego, I think it was, in the US, and he shared with me what he was doing with his students in Boston. I was like, holy moly — and a lot of that stuff actually came to life now, 30 years later. At the end of the discussion he said: "Juergen, everything that you can imagine will happen — plus." That "plus" actually worried me a bit. The big question I had was: does everything that I can't imagine happen as well? And what is that? So I'll leave you with some questions today — sorry about that.
If we drill a bit into the details of what we are looking at at the very moment: I call it spear phishing taken to the next level — social engineering on steroids; that phrase made it to the press last week. I used it in a presentation, and it was something people jumped on right away, because one of the main purposes technology such as ChatGPT is being used for is, for example, phishing — phishing to start with. The phishing mails we get today used to be recognizable at first glance due to bad English or bad German — bad language, bad grammar. They have become much better: the German is excellent, the grammar is excellent, and even those of us who are trained on these kinds of things sometimes sit in front of these phishing mails and say, well, it's not the prince of Zamunda or the Nigerian prince trying to get money — it's someone who claims to be from DHL or UPS, and I have to pay some money to get something released from customs. Deutsche Telekom is also a pretty popular disguise for these kinds of attacks, because obviously the attackers want to get my credentials and steal my access rights to the platform.
I'm going a bit in the wrong direction — balance of power. I'll spare you that, because the balance-of-power game is fairly well known to you. We know the bad guys are using AI, so we have to use it as well, because of the absence of talent in our industry — we are about three and a half million cyber experts short globally, and it's probably more than that — so we need to look into automation. But the bad guys are also very efficient: they try to achieve a lot with very few resources, so they're using the same tools. There is no way for us to escape that vicious circle. Emotions are also very important in cyber. People who are pissed off do bad things.
It's a very simple rule of life. And if there are many people who are upset, they do even worse things, because they team up to do bad things. That plays a role particularly in the political arena, where state actors are using bad emotions they created in order to attack. So emotions play a very important role — I'll come to that a bit later in terms of how we can make people happy with what we are doing. Identity mimicking and spoofing — nothing I need to explain to you. Deepfakes — we had this in the last two or three days, where the anchorman and anchorwoman of the news got deepfaked with a lot of nasty, political, hateful rambling, and it was so well done that it actually made the government take action. And it's just the beginning. ChatGPT and the likes obviously also play a role here, because language — the way words are used in the context of the person being impersonated — creates credibility.
And it is the small things that add to the equation of credibility: the visuals, the sound of the voice, and obviously the way people express themselves — whether it sounds reliable or not. In advertising, we look at things like the fourth digit behind the decimal point — and that is exactly what we are looking at at the very moment. Sometimes that is the edge which kicks off a reaction: a very small factor, big impact. Disinformation is also very important and plays a role in our industry as well. We have a lot of people who try to get information out of publicly available LLMs, and that is a pretty risky thing, because these are not search engines — not what we used to call knowledge engines. These machines don't deliver what I need; they deliver what they think I expect. That's something we always need to bear in mind: if our people try to find solutions to problems and pull them from publicly available resources such as ChatGPT, the results might not help. Model inversion attacks — who in the room has come across that term already?
Okay, very quickly. If you put information into a system, there are obviously people who want to understand what you put into that system and who want to pull it back out. If it's on a hard drive, it's fairly easy: you take the hard drive. But if it's in ChatGPT, you need a very sophisticated way of prompting the system to get the information out. I'll give you one example — it was Samsung; I probably shouldn't name names, but it's a fairly well-known case. The engineers had been encouraged to put code into the system to get the code improved by it. What they did was take the whole corporate IP — the whole source code of the company — and dump it into ChatGPT. What happened is they socialized the IP: the IP is publicly available, and the IP of a lot of engineers is now helping other people improve their code.
On the other hand, it also helps bad people find vulnerabilities. If you run a model inversion attack, you essentially teach another AI to prompt ChatGPT in a way that pulls that information back out of the system. Which means: once it's in, it's in — and if you're smart enough, you get it out. It's a different level of sophistication now, but it's something to bear in mind when you deal with it. I'm sure you've been trained not to put sensitive information in — and I didn't mention this at the beginning: actually, two days ago Microsoft blocked ChatGPT for its employees. That was for a different reason — there was a DDoS attack on OpenAI — but they blocked it. Apple and Amazon actually blocked ChatGPT early on, because they know that sensitive information will inevitably find its way into the system. And Apple is religious about keeping its secrets, so sometimes it's good to look at Apple.
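As a toy illustration of the model inversion idea — an attacker who can only query a model, yet climbs its output gradient to synthesize a "training-like" input — here is a minimal sketch against a hypothetical logistic-regression victim. Everything here (the data, the "secret" pattern, the step sizes) is made up for demonstration; real attacks on LLMs are far more involved, but the principle is the same: what the model learned can be coaxed back out.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "secret" pattern defines class 1 -- a stand-in for sensitive training data
secret = rng.normal(size=20)
secret /= np.linalg.norm(secret)

# Synthetic training set: class 1 clusters around the secret pattern
X0 = rng.normal(size=(200, 20))
X1 = rng.normal(size=(200, 20)) * 0.3 + secret
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# Train a simple logistic-regression "victim" model by gradient descent
w = np.zeros(20)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Inversion: starting from noise, follow the gradient of the model's
# class-1 score to reconstruct an input resembling the secret pattern
x = rng.normal(size=20) * 0.01
for _ in range(200):
    p = 1 / (1 + np.exp(-x @ w))
    x += 0.5 * (1 - p) * w          # gradient of log p(class 1) w.r.t. x
x /= np.linalg.norm(x)

print("similarity of reconstruction to secret:", float(x @ secret))
```

The reconstructed input ends up strongly aligned with the secret pattern even though the attacker never saw the training data — only the model's outputs.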
Did I get everything? Trust — yes. Trust is also something we'll come to a bit later. Trust is very important, particularly in our cybersecurity world: we need to build trust to make sure people follow our advice, because people only follow leaders they trust. Polarization — yes. The first thing we do when we come across a new technology, as I said, is block it — at least in Germany. There are countries in Europe that don't do that; in the Nordics they embrace change, they love it. Germans don't like change. So the first thing we do is fill the drawers of our perception with the obvious stuff — anything that's obvious to us. If I ask what the biggest concern about ChatGPT is — and we'll come to that in the session — I get a lot of answers such as: well, it writes bad code, and it's not accurate.
These are the two things that fill all the drawers at the moment, and they also fill the public discussion, which I think is a bit of a smokescreen, because there are more dangerous things under the radar that we have to deal with. So polarization also plays into that problem. You're laughing — I mean, these are the things you don't talk about in sales: sex, religion, and politics. Actually, when you're good at sales and have good rapport with your client, you talk about exactly those three things — and that's gender neutral; everyone does it. Sports as well — it depends on which is your favorite team, but if you live in Munich, talking about Borussia Dortmund is not advisable, right?
Or the other way around. So sports can also be a deal killer in a professional discussion. We've just added one more thing to that list: ChatGPT. Because ChatGPT is highly polarizing our society at the moment. One half of society has the strong opinion that ChatGPT and the likes will just wipe out human beings and mankind — the classic Terminator thing, the world losing the fight against the machines. The other half of the population supports the new technology regardless, because their opinion is: if we don't use it now, we miss the train of innovation. And I have to admit, I don't know what the middle is actually thinking at the moment — I haven't found a more balanced view on the use of ChatGPT yet in the discussions I've had.
Either I got objected to heavily, or I got embraced. There's nothing in between — and I'm a bit critical of the situation myself, as you could already hear. So — emotions. Talking about emotions: when I was researching the quality of the output of this technology, I came across one thing that bothered me quite a bit. When I'm at home — and I have to make sure my iPhone is away from me now — and I want to switch on the light, it goes: "Hey Siri, switch on the light." Not: "Hey Siri, please switch on the light. Thank you." What happens — and this at least was my thinking for the next chapter of the book — is that we will turn into very unfriendly people, applying what we learn from our voice-controlled gadgets to the real world. It's not like I would say to my wife, "Cook something!" — I'm cooking as well, so I'm not one to go in any wrong direction here. And when she has cooked, or when we cook together, it's "Thank you, that was very good." I'm not commanding her: "Hey, cook!"
What,
What? So, so, and I I thought, you know, this will change behavior, this will change the way we interact with each other. And then please go ahead. Oh wait, wait a sec, wait a sec, wait a second.
About language — I have a question, because I read an article that internally ChatGPT uses capital letters for emphasis, and it's also trained on lots of language. So could it be that if you use "please", or if you, let's say, change the tone of voice while prompting it, the response would be different — that it might actually be better?
You're wonderful — thank you very much for preempting my next point. Yes: researchers at universities in Texas, California, and Maryland found that being nice to LLMs creates better results. That's scientific proof. If you say "can you please" and "thank you", you get better, more qualitative results — and you build trust with the system, which comes back to the trust thing. And just to close that file: there is actually a programming language whose compiler has been built in such a way that you need to be nice to it so it compiles your code — but being too nice to it will also make it object and fail. You need to find the right balance. I'm not joking — if you do research on this kind of stuff, you unearth things that just make you giggle.
But the bottom line is: my fear that we will change behavior and turn into nasty, commanding beasts is likely unfounded, because if people learn that the results are better, they will be nice to the machines — and I hope they will also be nice in real life later on. Changing behavior is the next thing I'm going to talk about. And talking about trust: there's a German term on the slide here I can't really translate — I did translate some of it for our English-speaking guests. Currently Amazon is spilling over with books written by ChatGPT. This has many reasons; one of them is to rank topics up in Google, but that's a different story, so I won't go there. What also happens is that ChatGPT writes funny books that you'd better not trust, because that could be the final abuse of trust. In this case it actually wrote a book about mushrooms — Schwammerl, in Bavarian — and you only eat those once, or you feed them to your mother-in-law.
Why are you laughing? I have a very nice mother-in-law. She just turned 95, and I want her to turn 100, so she won't get mushrooms from me. But you see, this is the statement I made before: make sure you look at the stuff that comes out of these systems in a very suspicious, very cautious way. One of my former bosses at Symantec, John Thompson, who later became chairman of Microsoft, always said, when we had calls about undisclosed things: "Deal with them with your eyes wide open." That would be my advice here as well. One very important thing we need to take into consideration: you can't have one without the other — that's a Frank Sinatra song, or the "Married with Children" theme; I think you know it. Everything comes at a price. There are very good ways of using ChatGPT, particularly in translation, in sectors which are very clearly defined, like legal. — Please go ahead, wait a sec.
Yeah, that's a very nice point, but maybe we'll come to it later, because we all talk about artificial intelligence as if it appeared out of the blue this year. But translators have been using it for more than ten years. They train their own models to make the work quicker, and if you do technical translations, you won't manage without it — it makes it quicker.
Absolutely, I totally agree — as long as you don't buy Taiwanese IT gadgets and try to get through the user manuals, because at that point you see this was not AI. But it's absolutely correct. However, we also need to look at the other side of the legal thing. If you use the technology in-house, or for in-house purposes, always bear in mind that there is a lot of litigation in the background still fighting the whole thing. The author of "Game of Thrones" sued OpenAI, because OpenAI obviously digested all his books for training. I know what I put in my book here — it says every use, digital or analog, without my permission is illegal, full stop. It's a very clear case, and the Americans have the same in their books. So if he succeeds, and if Stephen King kicks in — he's sold about 400 or 500 million books, so that's power — or Rowling, or whoever of the authors, then it's going to be fun, because the technology you are using is potentially illegally trained. Oh — I'm paid by the mile, walking over here. Yeah?
Can you go back there and run back?
Gimme a laugh. That's okay.
So did you notice — last week, was it last week? When was DevDay for OpenAI? One of the things they told the developers was that if they get sued for copyright infringement, they would be indemnified by OpenAI. Did you hear about that? Does that change anything?
Actually, I'm not a lawyer. I'm watching the space very carefully right now, because from a compliance point of view, you need to make sure — I mean, it's pretty comparable to the old social media days, when social media analytics started. The big question then was: can companies use data that has been collected in an illegal way? Big question. It took ten years to get to some point, which is called legitimate interest in the GDPR, and it took the European high court to make the final decision — now, ten years later. So at the very moment we have two choices: we take the risk, play with it, and see what happens; or we are cautious and use our own tenant, for example — we train the systems on our own data and put it in our own basement. We still have these options; however, we need to drill into it, we need to bear that in mind. And the next legal thing — I think it's something you've certainly come across — concerns a lawyer in the US, and it's about hallucination.
A lawyer was lacking reference cases — we have a case-law system in the US — so he asked ChatGPT, and ChatGPT came up with wonderful cases that didn't exist. He brought them to court and actually got fined by the judge, $8,000, for abusing the courtroom — don't ask me what that term is called exactly. What happened was he got what he needed — not, as I said in the opening statement, what was there, but what he needed. If you ask persistently, if you continue to ask, I sometimes picture some small guy sitting inside saying, "Get it over with, just give him what he needs" — and that's a risk we need to bear in mind. In test psychology, we call the small stuff intervening variables.
If you have a test environment where you try to find out something about people's behavior, and the scientists wear a white coat, a blue coat, a red coat, have bad breath, are unshaved — whatever; sorry about creating pictures in your head, but that's exactly what sticks — it has an impact on the situation. It's the small stuff, not the big stuff, as I said earlier. It's the small drop that changes the whole setup and renders a sophisticated test environment useless if we don't take it into consideration. So we need to look at it. In the past, there were factors that once seemed irrelevant and have become deal breakers nowadays, while much-ado-about-nothing factors turn into things that make headlines, require large-scale solutions, and keep us awake at night. We simply lack the experience to deal with these factors at the moment — that's also something we need to understand. The big-scale rollout of this technology has happened within the last year, year and a half. It took us ten years to understand social media — and we still don't understand it. So we are at the very beginning, where we need to build our awareness. And obviously we have political and commercial interests in play that we have to take into consideration as well.
One drop is sometimes enough. The risk equation — it's very simple; we are simple people in security: risk equals probability times potential damage. We have one challenge here: we don't know the probabilities, and we don't know the potential damage of using this technology yet. So we are currently guessing at the impact. That doesn't mean we must not use the technology, but we need to apply some more brains and attention when rolling the stuff out. Of these factors, we will actually go through overfitting, language, brain drain, trust, and bias; all the other factors we'll cover within the next couple of slides.
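That risk equation can be sketched in a couple of lines, with the honest twist the speaker points out: since neither input is known for this technology, the only defensible output is a band, not a point estimate. All four bounds below are invented purely for illustration:

```python
# risk = probability x potential damage. With this technology we know
# neither side of the equation well, so compute a band, not a number.
def risk(probability: float, damage: float) -> float:
    return probability * damage

# Hypothetical bounds for an AI-related incident (invented for illustration)
p_low, p_high = 0.01, 0.15
d_low, d_high = 50_000, 2_000_000   # potential damage in euros

best_case = risk(p_low, d_low)
worst_case = risk(p_high, d_high)
print(f"expected-loss band: {best_case:,.0f} .. {worst_case:,.0f} EUR")
```

The width of that band — here spanning nearly three orders of magnitude — is itself the message: until we have real incident data, any single risk number for AI use is a guess.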
These are factors that I found we don't have answers for yet — actually, I didn't even find the questions related to them. Like credibility: is the person who gives me information credible in terms of what he gives me? The big question is: who is behind ChatGPT? Who put that information in? Was that a credible person, an expert, a knowledgeable person? I don't know — we don't know, and we'll never find out, because it's a black box, right? Authenticity: is that person authentic? When I'm talking to you, for example, I check your card, I know where you're coming from, I ask what you're doing — I put things in perspective. Any answer coming back from you, I put in the perspective of what I see, what I feel, what I read. That creates authenticity. The big question is: can an answer from a system be authentic?
I don't know. Context, as I said, is also very important: in which context do we get the information? I wanted you to understand where I'm coming from so you can put the things coming from me in context — a social worker, has written books, worked at PwC, paranoid by design. That gives you the context of what this guy is talking about. Value: what's the value of the information we get? I won't go into all the details — you can read that in my book; we'll come to that later. Control: do we have control? Are we losing control? Did we ever have control? So let's go on and see what we find with the top three or four points I made. Overfitting — this is actually how it all started for me. The big question mark I had was: where is the information coming from that ChatGPT uses in order to give me smart answers?
I've been working with natural language understanding models since 2007, and the first thing we did when training a system was take Wikipedia and dump it into the system, because that's kind of the world knowledge: it puts things in context, so the system understands, to a certain extent, the terms, the people, and so on. Wikipedia is a good start. But you need to know that even Wikipedia is not necessarily curated to the point where the information you find there is valid and scientifically correct — it's also a highly political thing, right? So it's very risky to use Wikipedia as the foundation without curation. That's the first thing we do. What happens then is the system starts to create content. And what do we do with content? We use it — publicly, on our websites; we write articles and so on.
So we write new content. What's happening right now is that ChatGPT is crawling the web for new content, because it's learning — and it comes across its own stuff. It crawls its own stuff, digests it, and learns from its own output; that's the friendly way to put it. It's like something out of a Stephen King book, actually — a very illustrative way of describing what it's doing: it eats its own tail, it eats itself. And on top of that, it also eats your assets: you paid an agency to create expensive content and put it on your website, ChatGPT crawls it and smarts up everyone else who benefits from it — beating you in your own field.
Well, you know, that's a question you have to ask yourself, whether it's smart or not. A quick one on Amazon: Amazon is full of ChatGPT-created books, and they also get ingested. So what happens is that wrong information stays wrong. It doesn't get more correct; it just gets more granular, more granularity in its wrongness, and that's called overfitting. And that's a big risk we are facing right now. Actually, I made a statement last week, which I've been quoted on publicly, that ChatGPT is evidently getting more stupid by the day. And actually it is, depending on whom you ask; are we coming to the polarization thing, the black-and-white thing? Depending on whom you ask, you get either agreement or not. And then people say it depends on how you prompt it. Yes, absolutely. If you pay 400K a year for a prompt engineer, you might get good content. But my big question is: how stupid does a system have to be that you throw 400K a year after it in order to get good information? It's a philosophical question. But prompt breeding, I put that in because this is something that made my fuses blow: if you don't want to spend 400K a year on a prompt engineer, you just create a prompting engine, a ChatGPT that prompts ChatGPT.
Yes. So it gets to a point where it's kind of stand-up comedy. Sorry.
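The "prompting engine that prompts ChatGPT" idea can be sketched like this. `call_llm` is a hypothetical stand-in; its name and echo-style behavior are assumptions for illustration, not any real API:

```python
# Hypothetical stand-in for a real chat-completion call; the name and
# the echo-style behavior are assumptions, not a real API.
def call_llm(prompt: str) -> str:
    return f"[model answer to: {prompt}]"

def prompt_engine(task: str, rounds: int = 3) -> str:
    """Let the model refine its own prompt a few times, then answer it:
    the 'prompting engine that prompts ChatGPT' idea."""
    prompt = task
    for _ in range(rounds):
        prompt = call_llm(f"Rewrite this prompt to be more precise: {prompt}")
    return call_llm(prompt)

answer = prompt_engine("Summarise the EU AI Act")
print(answer)
```

The recursion is the point: each round's output becomes the next round's input, so the system ends up talking to itself, which is exactly the stand-up-comedy quality described above.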
Yeah, just one question. I was just thinking: when I'm searching for a solution to some IT problem, I will always get hundreds of answers to the same thing, because they are all referring to one Microsoft article or one press release. So I think we have been seeing for years that some small piece of information gets duplicated ten times, a hundred times, and if it's wrong, nobody will check it. You have to know something about the topic in the first place to really understand it. And that gets worse with ChatGPT, because it's doing it itself. There is no editor writing an article because someone asked for it; it's always the same content.
Thomas Tschersich, who is the Chief Security Officer of Deutsche Telekom, and I would highly recommend his session, I think it's two days from now. When I asked him about the subject (some of his ideas also found their way into the book), he said the risk for him is this: if I search something on Google, I get a selection of answers which I can choose from. If I ask ChatGPT, I get one answer, and people are happy with one answer because they love convenience. Convenience is actually what kills us in cybersecurity, because cybersecurity, like democracy, is an effort. Everyone needs to put effort into it. Not just the guys in charge, but all hands on deck; everyone in the company needs to.
So I totally agree with you, and this is a major risk. Google in that respect actually delivers better results, because it gives me choice based upon my expertise. However, people who have no expertise will also fail with Google's answers, because they take what's at the top, trusting Google's relevance algorithm. So you can't have one without the other. As I said earlier, a quick one on dumbing down, because that was also a very important factor last week, and it's very important for you to understand. Let's take one step back, a very simple example. When I'm writing, the most difficult part is the first page. I have an idea, and I always say the book is ready in my head already. And Oliver knows that; he actually edited the book, and it took two months or so, I don't know how many iterations I bothered him with until I was done.
But in the beginning the book is ready in my head; it's nothing but an idea. The first page is the most difficult thing. So I'm sitting there at the first page, I have an idea, and I just can't start writing. What do I do? I pick up the phone, I call people, I call Oliver, I call friends, I talk to people who are experts in their field, and they challenge me. And this is how the first page actually starts to fill. My big question was: what happens if people use ChatGPT for the first page? Will they stop interacting with other people? And research was released about a week or two ago showing that during Corona, during the lockdowns, the human brain lost capacity. Evidence, fact, not fiction. And they actually saw the key reason for that in the lack of interaction between people. So the big question is: we know that ChatGPT is dumbing down; will we follow ChatGPT by reducing interaction with our fellow peers and colleagues? Another scientific example is The Knowledge, the taxi driver test in London. It's very nasty, because you need to learn every single road in London. You're coming from London, right?
You're coming from London. So: taxi drivers in London have a brain capacity in the wayfinding section of the brain which is outpacing everyone else on the planet. On the other hand, the use of GPS that we are all doing weakens exactly that very spot, one could say. And this is one of the objections I constantly hear: "you free up capacity for some new stuff." Wrong. That section of the brain will remain dormant. I won't turn into a quantum physicist because I've got more capacity; no, I'm dumbing down, I won't find my way anymore. And this is something to bear in mind: these kinds of technologies need to be used with caution, because they might decrease the talent and the intellectual capacity of your people.
You get quick results, big benefit. But in the long term there's a dumbing-down effect on humankind, visible and measurable already. So that's a proof as well; we are getting dumber by the day anyway. The question is: do we want to accelerate that? This is something that popped up when I was talking about information quality, and I told you about the curation of information before it gets in. It's like the old paradigm, crap in, crap out. It's still valid. And so the big question was: in the US, and I'm getting a bit political here, they have a new Speaker of the House, and he has a strong opinion that Adam and Eve rode a T-Rex into the sunset, because they lived at the same time. You can argue with that or not.
It's a thing of belief, not of science. However, Mr. Musk said: I'm going to use Twitter to feed my AI. So if Adam and Eve ride a T-Rex into X and Musk pulls that data to feed his AI, these are pictures you will never lose. But this is actually an issue. We have an issue here, because the quality of the data is certainly not based on scientific consensus. So bear that in mind as well. And trust: I already said that, and I'm going to jump over it a bit so we have some more time for working here.
The question we always have to bear in mind, and this is something we do in the company too: I only ask people who I know will give me answers I can trust. But if you ask ChatGPT or any of these technologies, the big question is: who is behind it? And we had that a bit in the beginning. Which platform was it on? Is it a credible platform we are using? What's the ultimate source of that information? So I'm not just going to the platform; I'm trying to go to the ultimate source. In Germany we have this wonderful word, "media competency," that adds to it. So media competency turns into AI competency.
Bias. We only have one lady in the room, and you are actually illustrating the problem; you're not the problem, but you're illustrating it, okay? And I'm walking away a bit from my script here, because this happened only last week: 19% of the people who reach a degree in AI in the US are women; in Germany it's 16%. That means the likelihood that the algorithms and the data have a masculine origin is 80-plus percent. So the question we have to ask ourselves is: is the information we pull out of the system evenly shared between the genders? I know in PwC we had a very strong move towards hiring female talent, because complex threats require diverse approaches. So women are, very pragmatically, very important to solve problems; it goes beyond that, so I beg your pardon, don't judge me on that. But if information repositories are masculine (and you're the only one here, I'm terribly sorry about that), if the information we get from the system is mainly created by men, how can we attract women to our world? Actually, we have a question. Oh, please, challenge me.
Okay, first of all, by the way, as you know, the Chief Product Officer at OpenAI is a woman and very... wait, what was that? You don't like her?
Okay. Yeah, that's...
Not my point. But the thing is, there's a difference between the training data and the person who's actually training the model, right? So if ten men train a model on training data that was all created by women, that's not the same thing, right? Are you conflating those two?
Yes. That's why I differentiated between the algorithm, the data, and the data collection. And I'm going a bit into Germany now; it's a valid point, and as I said, it's all with question marks. Taking one step back: if you look at the data available about the Congo (the Congo is a country in Africa, right?), the data available was written by the conquerors, by the colonialists. So the recorded heritage of the Congo is Belgian, because they were literate about writing down history, while in Africa people share their stories at the fireplace; they talk to each other, and it doesn't find its way into the systems. Which means: yes, you can have as many women as you want, but the data available is that of the conquerors, right? Wait a sec. I called it colonial bias.
Okay. The second thing, which is very important, and I came across it only a couple of weeks ago: we have this discussion in Germany about gendered language, gendered speech. It's a highly political thing, so people are very controversial about it. But most of the data available, and this is the data the systems get trained on, is male. It's male in the way it's expressed, and it's male in terms of the research being made, because 80% of the research material available is produced by men, plus the language is male: we talk about the scientist in terms of a male gender, right? And that finds its way into the system. It turns into masculine bias. And as I said, some of this will probably take another couple of years until you have the long-term research and results. The question I ask is: can a system represent a part of society that finds its way, his way or her way, into the system only at a very small fraction of that society? It's a question we have to ask. I know a friend of mine in Stockholm, an author; she tried to find Vikings in lingerie.
It didn't work, because Vikings in lingerie would be a very nice way of showing the female side of a whole Nordic society. But the system, in that case, found nothing; it couldn't create such an image, because it's male, right? As I said, another pick in your head: the proof for the Congo point I just gave you popped up in August, actually, when the first scientific evidence was shown that developing countries did not find their way into the systems properly. Which points to another problem for us here in the room, because when it comes to finding talent (I talked about three and a half million missing experts in our field), we won't find them there, because they don't appear in the systems. And they're not approached properly; they don't feel understood, because the content they get back is not geared towards them. That's something we also have to bear in mind; it's another risk. The Moluccas, by the way, are the same issue: Portuguese and Dutch heritage. Indonesia has more power and is more advanced now; however, they can't make up for the history.
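One crude way to make this bias point measurable is simply to count gender-coded words in a text sample. The word lists below are illustrative assumptions; a real corpus audit would use curated lexicons and far more careful methodology than raw pronoun counting:

```python
import re
from collections import Counter

# Illustrative word lists only; a real audit would use curated lexicons.
MALE = {"he", "him", "his", "man", "men"}
FEMALE = {"she", "her", "hers", "woman", "women"}

def gender_counts(text: str) -> tuple[int, int]:
    """Count male- vs female-coded words in a corpus sample."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    return (sum(words[w] for w in MALE), sum(words[w] for w in FEMALE))

sample = ("The scientist presented his results. He said the men in "
          "his lab reviewed the data before she joined the team.")
male, female = gender_counts(sample)
print(male, female)
```

A heavily skewed ratio in training text is one concrete, inspectable symptom of the masculine-origin problem described above, even if it is far from the whole story.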
I would like to make a quick intermission right now, that's not a problem. Now that everyone is hiding in the back, I would like you to just pair up, two or three of you, with the white piece of paper; if you're sitting alone, join a group if you want. So the first question I have for you: from everything you heard so far, outside of your classic world, what use cases do you see or know for large language models in your organization, or also in your personal life? Going a bit beyond automation, because that's the obvious one; I'm killing that already because I want the other stuff. I'll give you just two or three minutes to write down the top three use cases you have in mind for large language models.
Speaker 10 00:58:49 So we're supposed to use these?
Please, on the white one, the big white one. I need two pens; I lent you my pen.
Speaker 11 00:59:33 You don't have extra pens? You know, actually, one thing I was going to ask you...
Go ahead.
Speaker 11 00:59:39 You know Yann, right? LeCun, the head of AI at Meta.
Yann LeCun. Yeah, I know the name. Yeah.
Speaker 11 00:59:50 He has a lot to say about how what you're really building with an LLM is just a statistical model of the data. It should never be used as a search engine itself. Never.
But it's...
Speaker 11 01:00:01 People use it that way.
Yes, absolutely.
Speaker 11 01:00:03 But then he goes on... actually, I'll have to send it to you. There's a really good presentation that he gave, actually here in Germany, and he says: it's fun, LLMs are fun, but... and he goes through seven or eight very specific problems, things that they are terrible at doing.
Yeah, yeah, yeah.
Speaker 11 01:00:20 And then he was saying: don't use it for this. And actually he says LLMs are shit, basically.
And yeah,
Speaker 11 01:00:28 Yeah. And so anyway, the problem is that if you don't know the science behind it, you might assume that this is a replacement for Google or something.
This is like the shovel; I have that in the book as the shovel analogy. It's multipurpose, you can use it for all sorts of things: you can carry your mother-in-law, or you can drill a hole and bury your mother-in-law. So it's a question of what purpose you are using it for. But people tend to use it for the purpose that gives them the quickest release of any pain. Terrible idea.
Speaker 11 01:01:01 Yeah, which is obvious if you're aware of what the technology actually is and what it's capable of doing.
Speaker 10 01:01:07 Perfectly
Yeah. Yeah. But
Speaker 11 01:01:08 But if you're coming to it new and you don't understand that, you might think of it as a tool that will give you a right answer every time; it'll never say "I don't know." Yeah. You know, but anyway, I'll send
you a link.
Yeah, please. Really good presentation. Yeah, cool. Because I'm extending this stuff right now; it can only be a snapshot today, every week something new happens. Okay, ladies and gentlemen, a very quick one on the green now: the benefits of what you just wrote down, the benefits for you, just two or three points.
Speaker 10 01:02:26 What was the question?
Which use case? Wait a sec.
Speaker 10 01:02:32 There we go. So that's the green one, right?
The red one, the risks. Red is risk.
Speaker 10 01:02:39 Oh, okay.
And on the green, the benefits: the green the benefits, the red the risks. Sorry, I did it the wrong way around.
Speaker 10 01:03:09 The risks associated with our use case, or the risks...
Whatever you prefer. If you have risks that you want to get rid of, write them down; that's perfect. In a perfect world, related to your use cases here: the bad stuff is red, the good stuff is green. And last but not least, on the yellow one, two or three words on how you're going to mitigate the red stuff. How do you want to deal with the red stuff?
Speaker 10 01:03:56 It's mostly
Sorry for picking on you all the time, but you're the only soul shining in the room here. I'm doing good.
Speaker 10 01:04:14 Is it based on those? Yeah.
Okay. Let's quickly do a very quick round, just a minute per group. Are you all set? We'll start at the beginning here. So what are the use cases that you came up with? Oh
Speaker 10 01:05:15 Yeah. So
Speaker 12 01:05:17 One use case, I will start with mine. One use case that we're trying to do in our organization is to build a model that will help identify and develop threat scenarios, as much as the system can, to pen-test the system. Yeah, okay. So,
So, highly technical.
Speaker 12 01:05:44 Yeah, yeah, yeah.
And what are the benefits of that?
Speaker 12 01:05:51 So the benefits, for sure, are the volume: we will get a valuable amount of scenarios that we can test. But the biggest risk that I see is probably credibility, or trust: how can we trust that the scenarios we execute will show us valid results? Yeah. So
Validation, yeah.
Speaker 12 01:06:25 The validation, yeah.
How do you deal with that then? What's your...
Speaker 12 01:06:28 We've not done that yet. No idea yet. Yes. Yeah.
Okay. Thank you very much, sir. The next group.
Speaker 13 01:06:42 Yeah, basically what came to mind first was something we are struggling with in our organization, my colleague and I (we had to leave the room): we are struggling with sales.
Okay.
Speaker 13 01:07:00 Increasing our pipeline, closing sales and so on. So everything took our minds there. In other words: great marketing material as a benefit; higher-quality lead generation and productivity, since generating leads is heavy lifting; and better target customer profiling, so who are actually the ideal potential customers to approach. The benefit is also better lead generation. And the risk for us was simply not having enough sales in the company, and the mitigation is applying these use cases with LLMs or ChatGPT. So I haven't thought wider than that.
Thank you. It reminds me of the good old social media times, because that's also the way social media selling, or social selling, started: exactly the same way. So history repeats itself. It does. Thank you.
Speaker 14 01:08:09 Yeah. With regard to use cases, we noted down, well, the classic use case: generating text based on given inputs. So we put bullets in and want it to make a nice, well-appealing text out of them. Then we have translation on the list, pre-translation, so to say: translating something and then reviewing whether it makes sense, so we still need to put our own effort in. And simple debugging, such as Excel macros or whatsoever.
Let me add something about the translation, and also about boilerplate text. I am very keen on writing and I really like it, but it's really dumb to write the twentieth foreword telling you that cybersecurity is a threat nowadays and we need to blah, blah, blah. And that's what ChatGPT can do very well: just ask it to write a foreword about some cyber stuff and it will write it, and I can get on with my own essay. And the same goes for translation: I will not translate a novel, but only that same kind of foreword.
Speaker 15 01:09:34 Thank you.
Speaker 14 01:09:36 Yeah. And the risks we see are, well, mistakes that we overlook ourselves, and then losing the feeling for the language itself. If you translate everything and just take it as is, well, then you also lose some of the feeling for the language and the words you're using. And, well, unemployment. Benefits, for sure: it's way faster, and it takes some burden off you for nasty work. And the main control is basically, well, it's easily said: working responsibly with AI.
Speaker 16 01:10:28 AI competency, like media competency, I think you already said.
Speaker 15 01:10:33 Thank you.
Speaker 15 01:10:39 So I'm a little bit from the group of the bad guys. I'm using ChatGPT and such tools more as a source of inspiration for hacking: finding creative ways to break things, or to understand unknown systems. When I break into a company and I see some old system, I get inspiration on how to go further, how to break the next thing. And of course the classical use case: summarizing texts, large instruction manuals and such things. The main benefit here is time saving: researching how such systems behave on my own would take hours that I don't have. And the main risk for me is hallucination: ChatGPT regularly invents hacking tools. But this risk can easily be mitigated by fact-checking: when I try to acquire or download the tool, I simply won't find it. So this is quite easy to mitigate. For the bad guys the risks are not that high, because if I break a system while breaking into it, it doesn't matter; it's broken anyway.
Before we go on: thank you. You scare the hell out of me. Can we agree on one thing? From what I heard so far, correct me if I'm wrong: I put the notion in the book that ChatGPT is likely to make smart people smarter and dumb people dumber, because the smart people will put their intelligence into a contest with ChatGPT, whereas the dumb people try to replace their lack of knowledge with that of ChatGPT. Is that a safe assumption? Okay. No? I wouldn't know. Okay.
Speaker 18 01:13:12 Right. So we didn't get to go through all of it, but we had a few examples. In this group, ChatGPT and other models were used for IT support, email, copywriting, and graphics and illustrations. The risks we saw with this were potential data loss, breach of trust, and reliability. The benefits, pretty similar to what was said in the other groups, would be efficiency, cost benefit and speed. And a possible measure could be human verification afterwards: if you get any kind of output from a model, you get a human to quickly verify it, look it over.
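The human-verification measure the group describes can be sketched as a simple routing rule. The confidence threshold and the marker list below are assumptions to be tuned to your own risk appetite, not a recommendation:

```python
def needs_human_review(output: str, confidence: float,
                       threshold: float = 0.9) -> bool:
    """Route model output: auto-accept only high-confidence answers;
    everything else goes to a human reviewer."""
    if confidence < threshold:
        return True
    # Even confident answers get flagged if they touch risky topics.
    risky_markers = ("password", "wire transfer", "invoice")
    return any(marker in output.lower() for marker in risky_markers)

print(needs_human_review("Restart the service via systemctl.", 0.95))
print(needs_human_review("Please process this wire transfer.", 0.99))
```

The design choice is that review is the default and auto-acceptance is the exception, which matches the group's point: the human look-over comes after every model output unless there is a positive reason to skip it.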
Super, thank you. You were part of that team, the last team, so.
Speaker 19 01:14:08 Similar to what you've heard. I like to think of it as a good tool for some initial research. It's good for generating summaries, good for reusing content and thinking up ways to break up content you've already got, and for writing simple things. There are of course risks: overall, you might just stop thinking, because it limits creativity. One test I did with it was to say, "give me cybersecurity best practices," and it cranked out like eight things. But then if you think about it for a few minutes, you think: well, what about this and that? But if you're new to the field, you don't know; you'll be constrained by the output unless you're able to think creatively beyond what you get out of the interface itself. And then we're also concerned about loss of intellectual property and becoming dependent on it. We think things like DLP, training and expert review can help mitigate that. And the benefits are things like getting new ideas, saving time, faster content creation.
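The DLP mitigation mentioned here can be sketched as a redaction filter that runs before a prompt leaves the organization. The patterns are minimal illustrative assumptions (the `sk-` key shape is just an assumed example); real DLP tooling uses far richer rule sets:

```python
import re

# Minimal illustrative patterns; real DLP uses far richer rule sets.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{8,}\b"),
}

def redact(prompt: str) -> str:
    """Strip obvious secrets before a prompt leaves the organisation."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact alice@example.com, token sk-abc12345def"))
```

Running a filter like this at the boundary addresses the intellectual-property concern directly: whatever reaches the external model has the obvious secrets already stripped out.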
Thank you, everyone. So I have 10 minutes to go, and I will do some acceleration here, because there were some questions whose answers I didn't want to lose. So regulation: one way of mitigation is regulation. And regulation is typically seen as a negative. I personally see regulation as a positive, particularly at a crossroads with traffic lights; I'd like that to be regulated. So there are good aspects of regulation. Regulation is also very important in fostering and triggering innovation, because if you have to work within a regulated frame, you have to be more innovative than if you could just do as you wish. So I love regulation, to a certain extent. We love regulation because we live under regulation, and much more. So the big question is, Shakespeare: much ado about nothing?
So the initial slide that I showed you was from the Enquete Commission of the German parliament, about 10 years ago, where all of that had already been outlined. The issue is that it has never been put into action, but that's something that happens frequently in our world, particularly when we talk politics and execution. So the big question is: will regulation take us to the next level in terms of protecting our identities, protecting our IP, protecting our privacy, protecting our society? Because I talked about emotion and state actors and all that kind of stuff in the beginning. That's somewhere I was planning to have another intermission, but I will skip that due to the time. Regulation is something we won't get away from, so we have to deal with it; there is no question of whether we like it or not.
So we have the AC4, we have the EU AI Act, we have all the BSI stuff, we have all the things the big audit firms are coming to you with and certifying you on. We have ESG, which plays into that as well, whether we like it or not. ESG is a very nice and positive way of actually enacting regulation in our space: it's corporate social responsibility, it's environment, a lot of positively connoted stuff. So at the end of the day, as I previously said, we are the good guys. What I think is very important is to take into consideration how our employees, our colleagues, feel about what we are doing. And we are all professionals here; we know that putting pressure on people gives us pressure back. It just doesn't work.
So the motivation to act, to help and to be self-responsible needs to be intrinsically triggered; it needs to come from the inside. Which makes it a matter of change management as well, and also a subject of internal awareness building and communication. This is where we in cybersecurity actually have to leave our drawer a bit, as I called it in the beginning, and go fishing outside, in all the other elements of an organization that feed into and put potential pressure on our information security infrastructure. That's very important. And I've actually done research on that: I wrote a paper about two and a half years ago, in my old job, where we tried to find the positive aspects of cybersecurity, the enabling nature, because we enable an organization to do its job and deliver without running into havoc, into risks.
So it's a positive thing. Happiness, believe it or not, is a major driver for adoption, for acceptance, and for being intrinsically motivated to play along with internal requirements when it comes to protecting digital corporate assets. And that it doesn't work otherwise is evidenced by a very simple slide here. What happens? A Mexican airport gets hacked, and the first thing that happens is the blame goes to the employee who downloaded a piece of mail. My big question was: why could the employee download malware onto his computer? Why were there no protective measures to protect the organization from malware? Putting the blame on John Doe (in Germany it's Lieschen Müller) is the easy way to do it, right? As you can see in this research, 70% of organizations believe it's Lieschen Müller, it's John Doe: did he press the wrong button?
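The point about protective measures rather than blame can be sketched as the simplest possible technical control: a gate that stops risky mail attachments before any employee can press the "wrong button". The extension list is an assumed example, not a complete policy:

```python
# A minimal sketch: put a technical control in front of the user
# instead of blaming them afterwards. Extension list is an assumption.
BLOCKED_EXTENSIONS = {".exe", ".js", ".scr", ".vbs", ".bat"}

def attachment_allowed(filename: str) -> bool:
    """Reject attachment types that commonly carry malware, so the
    'wrong button' can't do bad things in the first place."""
    name = filename.lower()
    return not any(name.endswith(ext) for ext in BLOCKED_EXTENSIONS)

print(attachment_allowed("invoice.pdf"))
print(attachment_allowed("invoice.pdf.exe"))
```

Note the lower-casing and the double-extension case: a file named `invoice.pdf.exe` is exactly the kind of trap that organizational controls, not employee vigilance, should catch.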
The big question is: why was the wrong button there, and why did the wrong button do bad things? Something to bear in mind. We come to the conclusion, and this is actually something that comes from my social work experience. Kurt Lewin was a German social psychologist who emigrated to the US in the thirties. He created a model of three leadership principles that was taught well into the seventies: laissez-faire, authoritarian, and democratic. So, on the big question of how we can solve the situation in dealing with new technology: "let it run" is not wise. I think you've seen during the last hour and twenty minutes that it's not an option for us to just let it go. The second option is authoritarian: yes and no, in an unobtrusive way, dealing with the aspects that have an impact on the individual.
That's very important: not regulating the individual, but regulating those who offer the technology (not your organization) to create transparency. Transparency is the key regulatory measure to understand what we are dealing with. If we have no transparency, we don't know what we are up against. Excuse me. So we need to regulate for transparency. I actually had that discussion with lawmakers in Germany, and they firmly supported the point: giving freedom to the individual while protecting them, and putting the pressure on those who create the technology, with patch obligations and all these things, just to make sure that Lieschen Müller or John Doe can do what they have to do without taking any risks. The democratic way: and this is where our Nordic friends are roughly eight years ahead of us, because when AI kicked in in Sweden, for example, the first response from the government was: cool, let's do something with it, in an aware, conscious, respectful and legal manner. We need to understand that they are eight years ahead of us in digitization; I'm particularly talking about Germany, not about the UK. You're a bit ahead of the Germans anyway in terms of digitization, though not in terms of food, but that's a different story. Sorry.
So what we need to understand, what we need to learn, is to have an inclusive way of dealing with new technology: talking about it, constantly talking about it. I was at Deutsche Telekom last week and met the team in charge of awareness, and they start building this awareness in preschool already. So we need to start extremely early. In my family it's not a big deal, because I've been torturing them for 20-plus years in a very unobtrusive way, because otherwise they would push back. But we need to start very early in order to create a positive attitude towards new technology, awareness, and knowing how to deal with it. Now, coming back to media competency: AI competency is the key, and we can't start early enough. So: explain, educate, understand, and try things out.
Concluding: your own story has yet to be written. That was the sign outside, which said: trust, be trusted. That's very important. Trust plays a major role in our information security world; trust where trust is due. We need to understand who is behind the scenes, the actors; we need to understand them in order to judge, and we have authenticity, identity and so on. Accept self-responsibility. Self-responsibility cannot be delegated; that would be an oxymoron, it just doesn't work. So we need to accept self-responsibility in order to reach the next level. Ask questions, because we are lacking the questions. And, well, any questions from the audience? We made a hard landing, but there might be some questions. Or grab me when I'm outside, challenge me, because the next chapter has yet to be written and I'd like to collect all that stuff too. You take cash, credit card and PayPal, right? So I thank you very much for your patience and attention. I hope I could give you some new spins on looking at the situation from a security professional's standpoint, and I'm more than happy to hear back from you.