Event Recording

Panel | Misinformation – Disinformation – Malinformation (MDM): The Next Big CISO Challenge?

Even though MDM has a long history in war and times of high tension, the digital era has been increasing the reach and potential impact of weaponized misinformation. Sophisticated tools such as machine learning mechanisms and software bots are opening a huge battlefield for creating and spreading manipulated information at scale, even for those with limited technical skills. From nation-state attacks through organized crime down to that one single customer who feels poorly treated – they all can use such tools. What does this trend mean for your organization, and what is the CISO's role in combating MDM attacks? In this extra-long panel session we will try to find answers on how MDM will affect our organizations and how we can increase anti-MDM resilience.

So we had this fantastic keynote by Ksenia in the morning about the role misinformation, disinformation, and malinformation play – in that case more on a societal and, at the end, global scale. And I think we have seen a lot of things in the past years around elections and other topics where there was quite a bit of malinformation. I think the interesting question for today is why this is something that impacts, or that falls within, the domain of the CISO. Is this something big that is relevant at a societal level as a question of trust, or is it something every CISO needs to care about today? And I think this is basically what we'd like to discuss. Before we dive into the discussion, I'd like to ask all of you to quickly introduce yourself and maybe give a one-sentence statement about how you see this subject. Barbara, do you want to start?
Of course. So my name is Barbara Mandel, and my role is –
Please speak into the microphone here.
Oh, I never do that, sorry. Okay. My name is Barbara Mandel, and my last real job, because I'm sort of retired, was as the CISO of Daimler, globally. How I see this – you want one sentence? Okay.
A short sentence.
So I built up an SOC, at a time when not everybody had an SOC. And I think working on these topics there from a technical point of view would be the right approach; how you do it in an enterprise will come later. Okay. Is that okay?
That's wonderful. Yes. The next one is Jennifer.
Yeah, thank you. My name's Jennifer. I'm managing partner at Berlin Risk Advisors; we're a risk and compliance consulting firm. A large part of the work that we do is due diligence – third-party due diligence, reputational due diligence on M&A transactions. We work a lot, for example, for the World Bank and for the European Investment Bank on their high-risk transactions, so to speak. So the whole topic of disinformation and malinformation is enormous, and it's shifting. I've been in this sector for over 20 years now, and the way that we have to analyze information is changing, and the risks to the reliability of information and to the sources of information are changing. So we have to adapt the way that we assess the risk, and we have to adapt the tools we use to retrieve information. Maybe as a starting point.
Okay. Ksenia, you're next.
Thank you so much. My name is Ksenia, and I work at Detector Media, a Ukrainian civil society organization. I lead research activities on malign information campaigns, not only in the information environment of Ukraine but in various countries around the world. And I think that basically we have so many data points actually proving that what you're asking about is already a necessity, not an open question to think about whether we need it or not. So I think it's just something that we all already have to be doing, rather than just thinking about whether it is something we need.
Okay. And last but not least, Hila.
Thank you. Good morning – or is it? Yeah, it's almost good afternoon. Hila Meller, I am Chief Revenue Officer for BT Security. BT is what used to be called British Telecom in the past. We are a large entity of about 3,000 security experts, and what we basically do is secure BT itself – BT is a massive telco organization, active in 180 countries, a critical national infrastructure in a few countries. So we protect BT itself, and we also protect BT's customers. I've been in the industry for more than 25 years in different roles. And I think that what we'll see today is that when it comes to data accuracy, misinformation, and data integrity, there will be multiple scenarios in which a CISO should be concerned, and it should be a priority. So I'm sorry for giving you the end result already.
No, that's totally fine. Okay, so I see we have different perspectives, which is great. Before we dive into the discussion, maybe just a quick reminder to the online audience: for all these sessions there's always the option to raise questions, and we'll constantly watch the tablet here for questions you send in so that we can pick them up. And surely also for the people in the room: feel free to ask your questions and to influence and participate in this panel discussion. We have plenty of time, which is good, because it's a huge topic. So maybe let's start with the question of whether this is relevant for an enterprise, relevant for a CISO. I think maybe the best point to start would be concrete examples of what misinformation, malinformation, etc. can do to an enterprise. Maybe a little bit of stories from your life, so to speak.
From my life?
Whoever's life – but I think you have experiences where this concretely happened. Yes. And I think that probably makes it much more visible to everyone why we should care.
As an enterprise?
Yeah, exactly. As an enterprise.
Can people hear it better now? It's so loud.
Just tell me every time – we'll work on that.
Okay. So for instance, one topic from the time when I was responsible for security: as you know, Daimler started building automated cars, and also "Mercedes me" – maybe some of you who have a Mercedes know this little thing. I was very worried at that time, because this was done by engineers, and the engineers were not into security – I mean, they don't ask the negative questions. And I had to fight really hard to be heard. So I worked with Mr. Come, who was the developer then, because I was afraid that something would happen and then we would have a very bad reputation. So they sent me – and that's a bit small compared to your topic – to all these automotive journalists, and I sat there with a serious face, you know, talking security, making sure that people knew that Daimler was doing something. Because something can always happen – remember the case, I was not in security then, of the A-Class which tipped over onto its side?
Well, in this example – in this case it's true, it's not a fake story – but how do you get rid of it? I mean, how do you now say, well, we don't have that problem anymore? Because it's in people's heads.
And so I think we had these things. When you look at some of the accidents with automated vehicles – yes, like with Tesla – immediately it was, oh, this entire thing is wrong. And then there's – by the way –
Or – yes, please intervene.
Yeah, I agree that this is a problem to solve.
But I think the question is: is this a problem for the CISO to solve?
I know – and I agree. But he asked me, and yes, it is, insofar as in this case. Let's talk about automated cars. What BMW did – there was a small mistake, and we all knew that. And we had to be prepared: I had to look into the technical solution, I had to sit with the board and discuss the risks with them, and also with the communication people, in case something happened. Now, whether it belongs to me, I don't know. The engineers wanted me to help them with the security. And his question goes further – I'm just not sure. Maybe we should go back to his question.
One or two examples: who wants to add something from concrete examples where you see the CISO needs to take care?
I can share some of what we went through as BT during COVID, and I think there are a few interesting examples. One is around all those silly conspiracies about 5G. Because, as you all know, 5G created COVID – we all know that, it's a fact, it was on the internet, so it's a fact. But that resulted in cell sites of ours and of all the other providers in the UK being burned to the ground and vandalized. The New York Times published a statistic that around the April timeframe about a hundred cell sites were burned or vandalized in the UK because "5G is the cause of COVID." One of those cell sites was actually the one that served, I think it was, the Birmingham hospital with its COVID department.
Not only that, we had staff being attacked verbally and physically, and one engineer was stabbed and ended up in the hospital because of 5G conspiracies. And business continuity, the physical protection of our sites – yes, that is the problem of our CSO, and it's also part of the CISO's responsibilities. So that gives you an example of how misinformation in the real world can impact you. No one was expecting that. You know, as we were rushing to move into the digital workplace and allow more people to connect remotely, no one was expecting that we would have to double down on guards and security at cell sites, because there were people out there burning sites. The peak of those attacks was actually during the Easter holiday, because apparently people had a lot of free time to go around with gasoline and damage sites. So this is one thing that we saw during COVID, and it impacted us.
And it was an area that no one was really expecting – it was a bad surprise, I would say. Another thing that we could see from our kind of privileged position as BT, as a telco provider, during that period was a shift around phishing. And phishing is a type of mis- and malinformation that is leveraged against consumers or enterprises in order to achieve identity theft, credential theft, or malware distribution, right? And if you remember, I think it was again around the April or June timeframe, there was a joint advisory issued by both the NCSC and the US CISA, which is part of Homeland Security. Both of them together issued an advisory alerting the population: be aware, because of COVID, malicious actors are now pushing some serious phishing campaigns, whether SMS or email messages, pretending to be the World Health Organization or other COVID-related sources. The FBI reported an increase of 220% in phishing attacks in that period.
So in fact, using mis- and malinformation in the context of phishing attacks – that is, I think, a very concrete, clear example of where the CISO needs to care about understanding what is happening, because it directly impacts security. We have a question from over there? Yep.
I just wanted to ask: do we need guidance within our own organizations, and policies which determine what to do if our staff are involved in misinformation? They may or may not believe that it's misinformation. Obviously, if they're going around breaking 5G masts, that's a criminal issue. But if they are perpetuating misinformation, how do we respond, and what should we be doing about it? How do we move forward in dealing with the things that our staff say, maybe about our own organization or competitor organizations? How do we deal with that?
And I'm pretty sure that the answer to that question will differ based on the country where you have your employees. Because in some situations you will have highly protected individuals – we sit in a country that is very protective of its employees; in other countries you might have more freedom to apply policies around that. But it will really depend a lot on the specific countries where you have employees.
Jennifer?
I'd just like to add that I think awareness campaigns are enormously important, and also, within the security department, having teams of people who are trained and skilled in the analysis of information. I mean, I've seen courses out there – and I did actually smile when reading it – offering courses now in critical thinking. Which I kind of thought was something one would expect at least university-educated people to bring to the party. But the way information is analyzed is changing, and I heard a statistic, I think it was yesterday morning in one of the panel discussions, that today 50% of adults consume news via social media. So they're not accessing information via the BBC, say, or whatever mainstream newspapers there are.
And that in itself leaves so much room for people to be influenced by disinformation and malinformation. The campaigns are so enormously sophisticated that I think you need a structured operation to deal with them internally, within the security department. And I think it sits nicely within security as opposed to compliance, because compliance is very much – in the business we see it as well – automating KYC, just doing everything really quickly. And through this approach they're really leaving themselves open to a lot of these campaigns being quite successful, actually. I mean, we see it on a very basic level. For example – this is something from just two days ago – in a KYC process there were these kinds of red flags popping up saying, oh, this company is involved in financial crime, in Ukraine actually. Interesting. And then we have this investigation software that we use to check the source of information – so not just the fact-finding, but actually the source of the information. And we found that all of these campaigns were coming from some blogs in California. But the company was completely discredited, right? And if you had an automated system in house where you were basically saying, okay, this company is out because of this red flag, you're missing a business opportunity. You're discrediting somebody who actually shouldn't be discredited. So I don't want to –
So, first you – yeah, and then Hila. Okay. I just wanted her to answer his question; I think everyone is waiting to be able to respond to that. Keep it in mind.
I mean, we have had so many different cases and experiences. We had a case of a very coordinated, targeted disinformation campaign against one of the biggest banks. It actually also included a lot of phishing, a lot of scheming. I don't know the exact numbers – they were not really sharing how much it cost the bank – but the number of their clients decreased, and there was a huge reaction from the public, basically saying that since the bank was not able to protect its brand and its clients from these targeted disinformation campaigns, they were no longer interested in being its clients. And there are a lot of cases like that. We have also had very targeted campaigns against media enterprises, discrediting media enterprises and newspapers, and a lot of damage was done in that direction as well.
And just on the interesting point about employees: there was a private initiative by one of the biggest supermarket chains in Ukraine to test the level of media literacy of their workers. Basically, they just started wondering what was out there, because there were also a lot of fakes evolving around the business, and they asked: are our employees part of that? Do they themselves spread this kind of information? And the results were shocking, because 60% of their employees were sharing that kind of information on their Facebook pages.
Yeah. And I think that brings us to two aspects – I'll come back to you in a minute, and to you both as well. But why is this really CISO-relevant? If you have employees that don't trust your organization, then you have your internal attacker – and then it's a CISO topic: when we talk about internal attackers, about malicious users, we are at exactly that point. And I think the other point – Jennifer, you talked about this distrust in organizations – is also increasingly a CISO topic. We had this in the context of what happened around the BSI boss, etc., with the allegedly Russian-influenced software vendor. And so the challenge always is: what is true? When I look at what I hear about vendors, sometimes someone comes and says, but they have Chinese money in them – and this discredits the company. I think this makes it a challenge, and it also means, in the end, for a CISO – going back to the CISO part – deciding which software to allow, and making the right decisions here based on valid information, not just on mal- or misinformation. But Barbara –
I'll let her speak. Okay. I just want you to hold that point – and then we'll talk about it.
Hold that point, yeah. Hila first, and –
I won't ask my question yet, but I don't want to forget it. I'm old, I forget my stuff. Please, you go.
I'm not trying to be devil's advocate with what I say next, but, you know, what you were saying about social media – I can give you an example. My younger brother was watching things on YouTube, and I said, why are you getting news from YouTube? The fact of the matter is that all of us have to accept that people have a right to ignorance. People have a right to alternative facts, which may be wrong and are not facts, but they have a right to them. And I think we have to, one, understand that, and secondly, think about how we're going to deal with it. Because, I mean, as Trump said on one occasion, we have to stop listening to scientists. And I thought to myself, hold on, why do we have to stop listening to scientists? And the thing is, people do have a right to be stupid, and we can't take that right away from them. And if we do, we are enforcing something, and then they will start to label us as communists, because we are interrupting and interfering with their right and freedom to do what they want. Sorry.
Over here – may I respond to that directly? Yes. Everybody might have the right to be stupid, but I have the right to say that you are stupid if I meet a stupid person. My level of freedom is the level of freedom that I grant to the other individual. And we have to find, or maintain, the ways we have developed over the last couple of centuries for a discourse which is actually goal-oriented, and not oriented at making the other person my enemy and giving me the right to fight them.
Okay. So before we drift too much into the societal aspect of it – I see a lot of hands raised, so I'll go with you there first, then Hila, and then Barbara, before she has finally forgotten what she wanted to say. Then I'll come over to you.
Okay. Simply: when every one of us started work, we had the luxury, at first, of asking the silly questions, the dumb questions, the wrong questions – what is perceived as stupid. However, I believe that as we grow up we become afraid to ask such questions. Maybe someone in the organization has done something wrong that affects all the CISO's work; however, we are afraid to say it – oh, I will look stupid, oh, I will look dumb. So I totally agree that we need to foster such a culture, to encourage frankness: let's say it and discuss it. Everyone has the right to be stupid at some point in his life, no?
Okay, you disagree – you'll have to wait with the disagreement, because next is Hila, and then Barbara. I have so many hands raised that I need to structure this a little. So, Hila.
No, it's good, because it all builds on what I wanted to say. It started with Jenny talking about awareness and the need to really educate people to have critical thinking and awareness around phishing. Then we heard from someone that people have the right to be stupid, and then we heard from someone else that people are actually afraid to ask questions. Now let's go back to what happened during COVID, right? People were all alone in their houses, connecting remotely. So even the ability to have a little chat with a colleague who might be sitting next to them – hey, do you think this is legit? What do you think? Is this something I should reply to? Should I press this link? – even that didn't exist. So there is much more responsibility on the end users now to have this critical-thinking ability, to make decisions on their own, really, because they're completely isolated. Now we've started moving back to the office, but a lot of people still work at least half of the time remotely. So, you know, having the right to be stupid, being unable or unwilling to ask questions – I think it all links, especially today, to what we're seeing.
I think that goes a bit into: there's a difference between the right to be stupid and the non-existing right to cause harm – for instance, to your employer – which are two very different things. Barbara.
Yeah, that's what I wanted – I'm coming back to the employer. So when you ask about this –
The CISO role. Thank you, thank you. The CISO role: we've done this all the time, right? You can hear me now, right? Should I shout louder? No?
We'll just lose customers – or we'll bring some tape.
Yeah, hold it like this. So in the CISO role you were always involved with this kind of stuff – not to that extent, but for instance if we knew that an employee was causing problems. And this is not stupidity; the person caused harm. We had a case like that already 20 years ago: someone put information on the internet from a project that was hard to protect. It was in all the papers, and this idiot put it on some website. Okay? I was not the CISO, but I was in security there. So I saw it and I thought, we have to do something, because it was already in the papers, as we all know – because it was not working, it was all blah blah. So we had to get rid of this guy. Now, that gets very difficult, because when it comes to employees, you cannot do that alone as a CISO.
You know the rules in Germany and in other countries for how you deal with that, so we've learned to work with it. So the question here – I understood you, Martin, and your question was: does the CISO get involved in those cases? We always did get involved, because we had the information. Organizationally, back to your question, I would give the terrible answer: it depends. I'm working for a company at the moment that develops software in this direction, and this software is now used by the government. So there is software for it. But to really use that software, I would come back to my point: it would mean being in the SOC, you know, to use the tool and to understand the tool.
To what you said – I think there's an interesting point: we're thinking a bit in RACI-matrix terms from a CISO perspective. So is the CISO the responsible – the R – or is the CISO the consulted or the informed? But right now I'll finally hand over the microphone.
A comment on stupidity.
You need to comment on that too?
My strong opinion here is that it very much depends on the role someone has, right? I'm obviously also responsible for a company. I will not allow my people to be stupid. I will not allow, for example, my people to misuse their passwords. Neither will I allow them to be stupid about my company or about anything else. This is their responsibility; they have a role to play here, and this is not negotiable. Of course, I have limited influence on my clients, and obviously I have even less influence on society as such. But it depends on the role.
Yeah, that's my point.
Can I answer with a question?
Yeah, you can.
Directly, I have a question for you: how much of this is victim blaming? Because we are all victims of crime or of nation states – how much victim blaming is the right amount to have? And where does the real responsibility lie – isn't it with the offenders?
And before you answer – we had one comment from the online audience that says claiming someone is stupid is in some way unfair, and argues that there is no right to say that. Because at the end of the day, who judges what is correct and what is wrong? None of us has all the data and all the insights and is always right – some probably believe so, but it isn't the case. Could we keep that in mind?
Yeah.
Could we agree on the word "ignorance"?
We can agree on that word, I think.
Correct – I think that would be better, because I don't like the word "stupid" at all.
Ignorance is maybe better. So, Barbara, then you – we won't forget you.
Your question was –
For you: aren't you blaming the victim instead of blaming the real offenders, the real root cause, which are criminal organizations or nation states?
Yeah, it is a difficult question. So I have no final answer to that, to be honest.
Do you understand the question? Could you rephrase?
We put so much emphasis – I mean, rules, regulations, standards, policies, they put a lot of emphasis on the actual victims. The victims are the end users and the enterprises that were compromised. But, for example, in the Ukrainian case, they're dealing with nation-state capabilities that go after commercial, civilian organizations. The real root cause, the real offenders, are cybercriminal organizations or nation states. But we put so much emphasis on the actual victims.
You first – and then Jennifer.
I wanted to step in here, actually. And just to add to that comment from online: we should definitely take out of the debate this idea that, oh, we don't know the truth, oh, there are so many shades of truth out there. No, there are not. We have so much data. I think that we are much more capable, first, of differentiating it; and second, when we identify malign information campaigns, they have two very specific indicators: first, malign intent; second, coordination. And this is something that, honestly, in times of social media, is very easy to prove – much easier than we all think. And I really hate it when people bring up this debate of, oh, but are we actually sure? Let's talk about the offenders, not only the victims.
I think we do have enough capabilities for tracking the sources, naming them and shaming them, putting it out there and holding them accountable. Because that is what is always lacking in all these misinformation and disinformation debates: actually making people accountable, responsible – making them pay, showing one time, two times, three times that this is not something that will be tolerated in democratic societies. Because people have these kinds of stereotypes, you know – oh, there's freedom of speech, that's like, as you said, the right to be stupid and so on. Not really: democracy and freedoms do not exclude responsibility. So I would just really like us to have that. I mean, the Ukrainian case is very extreme for most people here – we had the full-scale war. So we really learned it at a very hard price, on every level: the societal level, the business level, the state level. Propaganda, disinformation, malign information campaigns – these are things that actually kill people. So I just really want all of us to keep that in mind.
And, and I think it's my sometimes really worse to go back to, to all the series about the state a couple of centuries ago, which at the end always are about that a state comes with responsibilities and with certain regulations and is a frame for something. So it's not something where, where you have all your rights on yourself but you give something to the state and got something back. And I think this is probably a bit lost that, that thinking about the role in some areas or got a bit more lost in the, in the, the past time anyway. Jennifer,
I just wanted to come back to your point. As a chief information security officer, or as a risk professional, you have a responsibility – to your organization, to society. You have the responsibility to be as well informed as possible for the decisions that you are entrusted with and the decisions that you have to make as a professional. And of course there's an enormous amount of information out there, and coming back to my initial point, the key challenge is filtering that information. Going back 10 years, I remember people saying, oh, you know, I don't need to do due diligence, I checked the first page of Google, that's my answer, there's my report, finished. And it's not as simple as that. The more complex the decisions are, the more research you've got to do.
And we had one case – and this is an interesting one, just to give some practical examples. We were advising on a transaction in the Middle East, and we do the public records, all of that public information, but disinformation and malinformation can also transcend into the opinions that you collect from human sources. I think we interviewed about 12 people on this particular transaction, and 11 of them said one thing, and one person came with a different argument. And the deadline was approaching – it was in 24 hours, we had to get the report out. And we were having these discussions – and that's why these risk-based discussions, these internal committees, are so important – where some people were saying, well, 11 people are right, that one person must be wrong. But in fact that one person was right, and everybody else was wrong. So you see how disinformation in the press and in the media can also inform the opinions of highly regarded professionals. So critical thinking is everything when you're looking at information.
Okay, Patrick first – he's been waiting for quite a while now.
No problem. I think one thing we have to ask ourselves is: stupid didn't bother us so much in the past, so what is different now? Stupid used to mean uneducated or uninformed. But now stupid is more misled and radicalized, because we're being misled by the platforms, which are algorithmic radicalization engines. So stupid is now a radical stupid, where they're in your face and spouting stupidity all the time. Before, it was just that they didn't know or weren't educated. But now everyone is active – they're not in politics, but everybody's in politics; they're actively, aggressively radicalized by the platforms.
Hold that for a second. I know what you're speaking about, and that's why I asked her the question: why was it possible that 95% of Ukrainians believed that they were in good hands, or whatever you said, so that they were not falling for all that stuff? That I found the most interesting part, because it's information, information, information. You kept the people – not only you, everybody – kept the people informed.
Speaker 11 00:36:03 You can't block the source. Blocking the source and holding them accountable would be nice, but that's like blocking IP addresses as a cybersecurity strategy. It's not one.
But just a very important thing to step in here: naming and shaming and tracking the sources is of course very important for making them accountable, definitely. But even more important, and that partly answers the question, is threat awareness. That's an essential thing to have. I gave this example when we were talking earlier: when you're struggling with an issue personally, the first step of recovery is to acknowledge that you have a problem. The same holds for companies: talk about it, understand that this is a threat. You don't have to have all the solutions when you start talking about the threat. We didn't, and we still don't; we are trying so many things out there, some successful, some not. But just talk about it, be very honest in that conversation, really know who to point to and who to blame. I understand it sounds poetic, but threat awareness really is the key to so many things, for decision making at the responsible higher levels and at the personal level.
So, to come back to the CISO: would that mean the CISO is, so to speak, the advisor to the business decision makers about the trustworthiness or correctness of the information used for these decisions? Maybe first Hila, because she has been waiting for a while.
I'm okay, but I might change the direction of the discussion, so anyone who wants to comment on that should go first. Okay, then
a short one, and then Ksenia, who has been waiting for
half an hour. Yes, I agree: the CISO is part of the decision makers.
I think it depends on the organization and where that function sits. Sometimes information due diligence and third-party research sit within a different function, in compliance, where there's a lot going on; sometimes it sits in security. Wherever it sits, there needs to be interaction with security, because the security function has a different mindset: much more risk-oriented and aware, and much more embedded in the risk management process, increasingly so with digitization. So it definitely needs to be embedded somehow, and the chief information security officer has an important role to play in assessing the risks attached to information.
Okay. Hila, you wanted to take the discussion in a totally different direction, if I got that right. Yeah.
So we are almost 45 minutes into the discussion, and we've been talking a lot about human beings as the victims of misinformation or bad information. But it's not just human beings, it's also machines, for example in OT security scenarios or IoT. You talked about cars: what if we feed those machines the wrong information about the temperature or the air pressure? That can kill. That's dangerous, and that's definitely a CISO responsibility.
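The wrong-sensor-data risk described here can be illustrated with a minimal plausibility check. The thresholds and the tire-pressure scenario below are invented for illustration; a real OT deployment would layer on redundant sensors, rate limiting, and physics-based models.

```python
# Sketch of a plausibility gate for sensor readings (hypothetical
# thresholds): reject values outside the physical range or values
# that jump faster than the process plausibly allows.

def plausible(reading, low, high, last_value, max_step):
    """Return True only if the reading is physically plausible."""
    if not (low <= reading <= high):
        return False  # outside the sensor's physical range
    if last_value is not None and abs(reading - last_value) > max_step:
        return False  # implausibly fast change between samples
    return True

# Tire-pressure example: 1.0-4.0 bar range, max 0.2 bar per sample.
last = 2.5
print(plausible(9.9, 1.0, 4.0, last, 0.2))  # False: spoofed value
print(plausible(2.6, 1.0, 4.0, last, 0.2))  # True: consistent reading
```

A gate like this does not stop a careful attacker who spoofs values inside the plausible envelope, which is why treating falsified machine inputs as a CISO-level risk, as the panel argues, still stands.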
Yeah. And then also in the AI and machine learning world: those systems are trained to make decisions very quickly, to analyze large amounts of data and, based on that, make decisions and maybe act. What if they are fed wrong, false data? There is a lot of research on how to attack AI.
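To make the training-data risk concrete, here is a toy sketch; every number and label is invented, and this is not any real product's pipeline. A nearest-centroid classifier's "benign" class is dragged toward toxic content by a handful of mislabeled training samples.

```python
# Toy data-poisoning demo: a few mislabeled samples shift the
# "benign" centroid so that toxic content is classified as benign.

def centroid(points):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x, centroids):
    """Return the label whose centroid is closest to x."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# One toxicity-score feature: "benign" near 0.0, "toxic" near 1.0.
clean = {"benign": [[0.0], [0.1], [0.2]], "toxic": [[0.9], [1.0], [1.1]]}
cents = {k: centroid(v) for k, v in clean.items()}
print(classify([0.8], cents))  # toxic: the clean model gets it right

# Attacker submits toxic samples mislabeled as benign.
poisoned = {"benign": clean["benign"] + [[1.0]] * 6,
            "toxic": clean["toxic"]}
cents_p = {k: centroid(v) for k, v in poisoned.items()}
print(classify([0.8], cents_p))  # benign: the poisoned model misfires
```

Production models are far more complex, but the failure mode is the same: whoever controls enough of the training input controls the decision boundary, which is exactly why input provenance belongs in the risk discussion.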
Yes, there are concrete examples. Microsoft's AI chatbot, at a relatively early stage, was trained by some users to respond like Hitler. So this happens, and it can happen again. Especially for AI this is a huge challenge, and again we could argue about which part belongs to the CISO role. I think the CISO role is to ensure that nothing wrong goes in, and then the data scientists and the people responsible for the AI need to understand what is happening and whether the results the AI provides are really what it should provide. So there are multiple parties involved. And I also like the OT example, because it is definitely scary: think about a huge number of IIoT devices becoming compromised and spreading wrong information. That can cause disastrous consequences.
And it has happened already, right?
A concrete example?
It was in the news, that stocks thing.
Well, there was this accident with Tesla many years ago, where a person actually died. Very unfortunate, and it is with all these sensors, so I'm emphasizing the story. The problem there, and that's why this is a CISO thing and why critical thinking matters: when the engineers started building all these sensors, I thought, this is not going to work together. But they only thought straight ahead; they never thought this could be hacked. And that's when I got involved.
Okay, don't forget your microphone. Good. First, hand over to York now.
Speaker 12 00:42:23 Most of it has been discussed already, but one thing: as I understand MDM campaigns, they are explicitly aiming at weakening the resilience of structures, and already by that it is a CISO task. He needs to know if the structure of a bank or an energy supplier is being weakened. Because, as Hila for example described, with such an MDM campaign you see waves of phishing attacks and whatever else coming in that you have to fight against, and such waves can easily be used, and have been used, to run targeted attacks underneath. So you need to be
aware. Yeah. And in the end, I like very much what you said: the CISO is also very closely involved in business impact analysis, and what you're actually saying is that these campaigns change the results of a business impact analysis, because they can mean different threats and different impacts you didn't expect. It's important that the CISO understands that, works on it, and takes it into account, because in the end, as you stated, it's part of resilience. Hila, did I get that right, or do you want to comment? No, no,
I have other stuff; I'll leave it at that.
Looks like she wants to say something.
Yeah, I just wanted to say, on business analysis and intelligence: until very recently I was personally hired by private companies, first to teach their business analysts to take malign information campaigns into account when analyzing, and second to do it myself alongside them. We would take the same topic; they analyzed it the way they do, and I analyzed it the way I do, which usually means looking for the bad stuff. I cannot comment on the topic itself, but I will tell you that the results we got were extremely fascinating. The task was to understand whether or not to invest and expand in that area, in that business.
And they had the answer no, and my answer was yes, because part of their decision was based on a malign information campaign that had actually been spread by a competitor and fueled by a foreign state, since the field they work in is very closely connected to what the state is doing. So that was until very recently, and it was a very interesting experience for me as well, because I had never looked at it from that perspective; I mostly work with the societal perspective, the whole-of-society approach. That's when I had this eye-opening experience in terms of business and decision making.
That goes back into Jennifer's domain, I think, this example.
No, that's absolutely right, and interesting to have this comparative approach. What I also wanted to say is that you can get a completely different picture and evaluation of a business's reputation depending on which sources of information you pull together. And apart from disinformation and misinformation, there is a lot of activity on the PR side, and that also needs to be considered: certain reputations can be created. They might not necessarily rest on wrong facts or the intent to harm, but they might be distorted, pushing a certain narrative into the market. If you only check the first 30 hits on Google, you might get a picture of a business or an individual which is very different from what you get if you go back ten years and look at everything they have done between then and now. Obviously you can't do that for every third party or every situation, but on a risk-based decision, depending on the risk exposure, you need to consider how much information you are collecting and how you go about analyzing it, so that at the end of the day you can make a good risk-based decision.
But interestingly, roughly one hour ago we had a presentation in this room about needing more standardization and automation around third-party risk assessments so that we can actually do them. The count, based on a survey done for that company, was that a professional in the risk area has approximately 0.7 to two days of time per year per supplier to do the risk assessment, and that is heavily due to the lack of automation and standardization. So I think we need to make progress, to get away from "we can't do it for everyone" toward having it standardized to a certain degree, so that we can focus on the very high risks.
Absolutely. You definitely need strong, good tools, and I think that's the challenge: so many tools have come onto the market recently for due diligence and KYC, and the question is which tools are really going to help me filter out the high-risk ones.
We still have some way to go here, but in the interest of time, we don't have much left: one question from the audience, then Hila and Ksenia, and then we go into the closing round.
Just a question around due diligence. If we look at advertising, what makes advertising successful, and what makes a misinformation campaign successful? In advertising they often say somebody has to see an advert 20 times before they start to be influenced. With misinformation campaigns, anyone who has taken in information once builds a stronger feeling toward that view every time they see the next piece of information. So if we knew what stage it is at, in our due diligence we would get a better understanding of how strongly the individual we're trying to explain things to might hold that view. That is the sort of technique we need to think about, because we need to understand it. What are your thoughts on that?
Let me cut in, in the interest of the remaining time. I think this is exactly what Ksenia said before about being able to understand whether something is organized; in the end that is exactly the point. So Hila, you had one point, then Ksenia, and then we go into the closing round.
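The audience point about repeated exposure can be put into a back-of-envelope model. The saturation curve and the rate parameter below are assumptions for illustration, not research data, calibrated only so that roughly 20 exposures approach full effect, echoing the advertising rule of thumb just cited.

```python
import math

def belief_after(exposures, k=0.15):
    """Saturating influence curve: each extra exposure adds a
    shrinking increment toward full acceptance (1.0)."""
    return 1.0 - math.exp(-k * exposures)

# A first exposure barely moves the needle; by ~20 it nears saturation.
for n in (1, 5, 20):
    print(n, round(belief_after(n), 2))  # 1 0.14, 5 0.53, 20 0.95
```

The operational idea behind the question: if due diligence could estimate where an audience sits on such a curve, defenders would know how entrenched a narrative already is and how much counter-messaging it may take.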
Yeah, it's more of a thought that I have, and for us in security these are usually scary thoughts, because that's part of our business. We talked about critical thinking and we talked about AI, and the future of phishing is phishing with deepfakes. We already see very sophisticated phishing attacks, ones that even professionals will fall for. If you now think about combining already available deepfake capabilities with phishing, spear phishing specifically, something completely tailored to a specific individual, this is going to be scary. Even critical thinking might not be good enough, and it will require brand-new technologies that are being developed now but are not yet fully available.
But that's the future. The good thing is that AI can be a weapon for deepfaking, but it can also be a weapon against deepfaking. I think that is what we should always keep in mind, and we need to work on that part as well. Ksenia, and then the closing round.
Yeah, I just wanted to add something about the solutions available out there. There are definitely a lot of solutions, but none of them actually takes into account the danger coming from malign information campaigns; they are simply not built for that. I work specifically with technology that helps us identify such campaigns, and that is always the biggest problem. We have tried so many of the solutions out there for brand tracking and the like; name any one and I'm sure at some point we tested it. First, from the technical perspective, none of them is capable of catching this, and second, more importantly, these solutions don't even set themselves the goal of catching it.
Okay. Unfortunately we don't have much time anymore. I think we could continue for quite a while; on the other hand, we don't want to stand between all the people in the room and their lunch outside. It's like when the bell rings in school and someone raises their hand with another question: it doesn't make you popular in your class. So 30-second statements now, please, on a very simple question about the hour we've discussed: what is your number one takeaway from this discussion that is relevant to CISOs? What is the number one thing CISOs should look at, from all the things we've discussed today? Barbara, you start, and then we go along that way.
So I think, as Jennifer said before, this is a difficult general question for the CISO, because we know all companies structure the CISO role differently. You also need the skill, as a CISO, to work with the other departments. If, say, we did something in Indonesia, I need to learn from the politics inside the company that something is going on in Indonesia; then we focus on Indonesia. So you can't do the CISO role by yourself. But I do think we're now starting, in the government here in Germany, with such a project, so I think it will continue. Okay.
Yeah, I just wanted to reiterate that this whole topic of critical thinking really needs to be at the top of the agenda of the CISO and everybody on the team, and that there should be some kind of structure in place that enables the CISO to tap into a skill set capable of critical thinking, combined with the right tools. We use Videris, for example, an investigation software which we think is really state of the art; the data sets need to be as comprehensive as possible, and you need to have that access.
Yeah, thank you.
Just to say that we all have quite different backgrounds, but each of us gave, I think, a very practical example of why this matters. It is now very important to move the discussion to how to actually make it happen, how to incorporate this into the work, and to combine experience as much as possible, because I think we can all finally agree that these things are so multidimensional and combine so many different topics at once that the solutions have to be complex in nature as well.
For sure. Finally, Hila.
Yeah, I think one of the key challenges the CISO faces right now is really the lack of regulation, legislation, standards, and available technologies to address all these issues. A lot of it is related to the pace of change we see right now, and it links to the question you asked: misinformation has always existed, but the amount of change we now go through and consume is happening so quickly. It took years for the radio to be used by everybody, or the TV, and that allowed us to develop cultures, guidelines, regulations, and standards so we could use them safely. What's happening around us now is happening so quickly; social media has been around for only 14 or 15 years. It simply takes longer for us as societies to create those protection mechanisms.
Yeah. Okay, thank you very much. To wrap up, I want to go back to this: mis-, dis-, and malinformation impact the business and business resilience, so this is something CISOs must work on, understand, and also counter; otherwise they are failing in a key part of their role. CISOs must start taking this into account. Thank you very much for this very intense and insightful conversation, and for the participation of many here in the audience as well. I hope to have you back in the afternoon. Enjoy your lunch.
