Okay. All right. So thank you very much. And can you pull up my slides? To kind of echo, to carry on with the conversation that we just had, I think there, I'll give you a story as to why I think it's going to be irrelevant and why things are going to matter more. But to give you a perspective of where I'm going with this, first of all, as I mentioned, I'm currently a CTO at a startup, which I won't tell you about right now. But my other claims to fame is that I created this thing called the Cyber Defense Matrix. For those who were at the workshop earlier, hopefully you enjoyed that.
But I also created this thing called the DIE Triad. So I'm going to kind of quickly talk about both because it gives context around what's to come. So the Cyber Defense Matrix is this thing that I created that helps me organize all these different things that we see in the cybersecurity industry. I blurred out the vendors because what I want you to notice is an obvious gap. And it should be like pretty obvious, right? Why are there so few things over here? And I had a couple of different theories. And one of the theories ended up being what I call the DIE Triad.
So the Cyber Defense Matrix is useful, but my plan isn't to talk about the Cyber Defense Matrix. My plan is also not to talk about the DIE Triad. But the DIE Triad tells us something that's coming. And so the perspective is when I was trying to answer why there are so few things on the right here, I had to look back at history and say, OK, what happened in these different eras of time from the 80s to the 90s to the 2000s to the 2010s? And it turns out there's a nice pattern that emerges.
So in the 80s, we bought a bunch of technologies because it became cheap, but we ran into a problem, which was what did I just buy and what business function did it support? In the 90s, we started seeing viruses and people walking into our networks. In the 2000s, we started to get inundated with logs and we had to find ways to find those intrusions. In the 2010s, we saw breaches everywhere and we needed to be able to kick the attackers out. What was interesting in each of these eras is that map nicely to the NIST cybersecurity framework. So the 80s, we had an identified problem.
The 90s, we had a protect problem. In the 2000s, we had a detect problem. In the 2010s, we had a respond problem. So of course, the 2020s, the current age is a recover problem.
OK, so if you've wondered why we are talking about recovery or resiliency all of a sudden, it's because we are in that era now. OK, so I said, OK, we needed a new solution. And the new solution, in my view, was what I call the D.I.E. triad. The D.I.E. stands for distributed immutable ephemeral. And it gave me a perspective of just how we solve the problems that we see in the future. But this is just one way to think about the future, just a pattern that we see over the past several decades. And I said, OK, this is great. The D.I.E.
triad, if you don't know anything about it, definitely go take a look. This session is not about the D.I.E. triad. This session is about what happens in the 2030s. OK. So the NIST cybersecurity framework ends with recover. There's nothing after recover. Why do I think cybersecurity is going to be irrelevant?
Well, because there's not a sixth function in the NIST cybersecurity framework. And govern doesn't count, by the way, for those who know. All right. So now what? And I would argue that, again, there's a much, much bigger challenge that we're going to have in the 2030s. And we're kind of seeing that right now, which is, of course, all things around A.I. And so let me make sure it's clear to me, to you. There's a lot of experts in the in this space around A.I. I am not one of them. OK. I think I've learned enough. I've done a lot of work in A.I. at Bank of America.
I was on a number of risk communities to try to understand how to make sense of this and to do how to manage it properly. And I'm also my startup is actually all about A.I. But I would definitely not put myself in the category of like the true experts. But I think I've gotten past the peak of Mount Stupid and just barely getting up the slope of enlightenment here.
So, you know, calibrate. But that said, I think there's an opportunity for us to think broadly around what are the opportunities and challenges. And as I think about the challenges, there's tons of challenges. It seems like it's the worst of times. And it's the worst of time from three different angles, from how our employees are using it, from how our developers are building it, and from how attackers are weaponizing it. OK. And each of these challenges, even though we can see it as the worst of times, I think there's an opportunity for us to see it as the best of times as well.
So I'm going to cover each of the worst of times, but also talk about how we can see ourselves differently in this new ecosystem when cybersecurity becomes irrelevant. OK.
So first, the worst of times is represented probably in something like this, where you see Amazon lawyers saying, OK, stop using ChachiBT. Don't send proprietary material there, because it looks just like all our proprietary material that we have internally. Let me tell you, if ChachiBT tells you what looks like your proprietary material, your personal proprietary material, you're not very creative. Your organization is not very creative. OK. It's probably well known, whatever ideas that you have, it's just embodied in other places and you haven't discovered it.
LLMs generate, they don't commemorate, meaning they don't regurgitate the exact information that, even if it got trained on, like, say, your personal material, it would not statistically find it. OK. Furthermore, even if it, but that's not even how it works anyway. The LLMs that, the information that you send to an LLM today isn't baked back into the model such that someone else can use it. OK.
So in the context, we, it's definitely, there is a concern around, like, let's say open AI and from a third party risk standpoint, but not from a, because I use it, you can now, you as a customer of open AI can also grab my content as well. That's not how it works. But anyway, that said, I think we're thinking about this the wrong way in general, though. We're thinking about this, our intellectual property, as if it's something that we have to close, very closely guard.
And I would offer, what if we instead look at intellectual property as if it's currency and we spend money to make money, right? Well, we can spend currency to get better currency.
Now, in that regard, I think we have, again, an opportunity that where the thing was in advance. There it goes. So if you think about what a CFO does, they don't make money. They instead govern the wise and appropriate use of it. And if I actually said, if I were a CFO and I said, you cannot spend any money, you might as well fire me as a CFO, because I'm not really helpful to the business. So think about this from the cybersecurity standpoint.
What if we become, what if we have the opportunity to become like the CFO for intellectual property, where we govern the wise and appropriate use of IP for intellectual property? Again, if you're a CISO, if you are a security team and you say, you cannot spend any of IP, you might as well fire the security team, because they're not really helping the business grow and to be able to spend that currency in a way that helps them build better currency.
Now, here's the thing. Just as much as you have different budgets and different levels of spending within an organization, CFO doesn't care if you spend petty cash, okay? Whatever petty cash might be for your threshold. At Bank of America, my threshold for petty cash was actually pretty high. But in the light, and similarly in many organizations, there are things that you would consider intellectual property that are really the equivalent of petty cash. Do you really need security review for these little snippets of code? Probably not. Do you need security review for the entire base of code?
Probably. Okay. But the perspective here is, what if our role changes to be looked upon as a CFO for intellectual property, where we enable the business to actually use that wisely?
Second, worst of times, is developers building artificial intelligence systems. Now, I want to reference this architecture here. It's from Andreessen Horowitz. There's a lot of detail, but the main detail I want you to kind of make out is the blue lines. The blue lines represent user input, okay? User input.
Now, there is one inviolable rule of security, which is never trust user input. And unfortunately, there's a fundamental flaw with LLMs in that user input is trusted everywhere, okay?
So, we look at the, I highlighted it here, but the blue lines now, the highlighted blue lines show where user input is trusted. It's trusted in pretty much every part of it, nearly every part of the architecture, which is a real problem, a real challenge.
And so, we do have, this is definitely a real concern. We have to deal with this. We're struggling to figure out how to make this workable security, how do we work in security into this type of environment? And there's a bunch of tools out there. I'm not going to go over the specifics, but there's the Berryville Institute of Machine Learning, their taxonomy attacks, which I love. It's a great way to think about a holistic model for attacks against AI systems. There's the OAS Top 10, which I'm sure you guys have seen, or if you've looked at this at all.
There's MITRE's Atlas that talks about different types of attacks against AI systems. But overall, I would say, again, this is an opportunity for us to go past the current set of challenges and think differently about our role. And I hinted at this earlier, but this notion of the difference between safety and security.
Now, in the context of, so in German, it's the same word. In English, it's two words. In cybersecurity, it's the same word again.
So, how can you tell the difference between what is safety versus security? And one way to do it is to think about it from a food context.
So, in the English language, food safety refers to things like hygiene, compliance, good practices, software bill of materials, personal responsibility. When it comes to security, it's dealing with things that are inherently a governmental function. Starvation, poisoning, where did all the baby formula in the US go? Those are things that is a governmental function. And this is important because we think that we're doing cyber security when most of the times, we're actually doing cyber safety.
So, if I think about this from a different context, if I'm Boeing, if I'm Airbus, if I'm Lufthansa, then my job as an aircraft engineer is to make sure that the plane doesn't fall out of the sky. My job is not to dodge Russian and Chinese missiles. The job of the government is to ensure that the airspace is free and clear of Russian and Chinese missiles. If I get hit by one, is it really my fault?
And we, unfortunately, within our industry, we do a lot of victim shaming. But the perspective here at the end of the day is that we practice more cyber safety than we do cyber security. And the reason why this is important is because we have a much bigger problem coming up. And the bigger problem is AI safety. And who better than a group of people who've been doing cyber safety for many years to pivot over to doing AI safety? Who has a stronger claim to say, we understand how to do digital safety. Let's now pivot over to how we can do AI safety. And I'm proud to say, I'm really happy.
So, I I propositioned this many, many months ago. And I'm really happy to hear, for example, that Vijay Bolina, the former CISO at Google DeepMind, is now the chief AI safety officer at DeepMind.
So, this sort of shift is already happening. And I'm super happy to hear that, because folks like Vijay are helping lead the way in terms of defining this field for, this new field that we can take on. And I think it will be us to, it'll be upon us, I think we are going to have the best chances to be able to shape this future for ourselves as well. And the reason why that's important is because we're seeing some really interesting things that are happening that we need to be aware of.
So, what you're seeing is this progression of model development over the past couple years, where for a long time, it was growing, the performance of the model was growing steadily, not very short, it was just a linear curve. But all of a sudden, once the size of the model grew past a certain point, we hit an inflection point. And all of a sudden, now the accuracy of these models are growing much faster.
And so, what's interesting is that, again, before our expectation was that this curve that we see, we had a trade-off curve between performance and the model working against a broad set of use cases. And this curve would gradually be increasing as we throw more resources at it. But what's actually happening is that this curve is bending outward.
And because we don't know what's going to happen when this curve continues to grow outward, the notion of AI safety being a preeminent concern, a much greater concern than cyber security, is causing us to shift our opinion, our thinking towards how do we deal with this problem before it gets out of hand. And we thought, in the curve on the left, we thought we had time. We thought we would have maybe a decade, two decades before it starts to become a big concern. This is an emergent property that we didn't anticipate, that researchers didn't anticipate.
That is why open AI researchers, that's why the top, the frontier model researchers are basically saying, look, we kind of need to stop development of these bigger models until we figure out how we get a better handle of what we're going to do when these bigger models come into play. So, chief AI safety officer.
Oh, and then for AI safety, I came up with a set of guard rails as an example of how we can think about how we as digital safety officers, CISOs, can shift over to becoming a chief AI safety officer. How do we apply these sort of techniques? And if you want to see more details, you can see the article or the RSA talk I gave on this.
All right, then weaponizing. So, the third worst of times is attackers weaponizing.
So, there are four major things that I think we're concerned about that I've heard many times. First is, are we going to see more convincing phishing emails? Or we're going to see lots more malware being developed? As far as I'm concerned, that's not that interesting.
That, for most of those things that we're saying, we should keep doing what we're already doing. Security awareness training, tools that help us find malware and lockdown environments and so on and so forth. But that's nothing really new, okay? That's old stuff that we've been working on for a while.
However, there is something new. And the two new things, as I see it, are the ability to find vulnerabilities faster and more efficiently. Generative AI is really good at being creative. What do you need for finding vulnerabilities? Lots of creativity.
So, over the next six months to a year, we collectively should expect to see lots more zero days, lots more vulnerabilities that are being disclosed by the vendors that produce software, as well as external attackers finding those before we do. And so, that's going to be a big concern over the next couple months.
And then, of course, we saw earlier Biden giving us a welcome that these deepfake attacks, we don't know exactly where that's going to lead, but those are two concerns that are new. And to that end, fortunately, I think there are some interesting things that AI is helping us with that will help us address these challenges. And the first one is around the vulnerability research piece. One of the things that we, I think we know, so going back to William Gibson, the future is already here. It's just unevenly distributed. What's unevenly distributed is our knowledge of how to build secure systems.
It's known. It's just not widely known.
So, imagine now in the future or near future, you have a situation where you can say, hey, I want to build a system with these principles or these sort of features, and it generates for you code in memory safe languages like Rust, or it builds in all the core security principles that we want it to be built with. And so, we have systems with fewer security concerns right from the very beginning, whether it's because you're already using design patterns that are secure or because you're using memory safe languages that don't have those security issues to begin with.
And then, secondly, as far as the deepfakes, what I'm really excited about is that we now have a, we're starting to develop mechanisms to authenticate data. Okay.
Consider, we've had a real issue with email for a long time. We wanted to have authenticated emails for a long time, and we still struggle with it.
But now, there's a bigger societal concern as represented through deepfakes. And so, you have this coalition for content provenance and authenticity. They're starting to bake into our devices, our cameras, whatever it might be, a digital stamp that says, this is authenticated. It came from this device. It was edited in this particular way.
And so, we're starting to see that, and that's starting to get widespread adoption. And my view is, if they can authenticate much, much larger bits of content like video, audio, images, and so on and so forth, how simple would that be for email, right?
So, there's a foundation that's being laid right now to help us address this last issue, which then, I think, will help us address the email issue that we've been working with for a long time. But that said, as we think about challenges that we see in security overall as attackers leveraged in AI, I think there's another model that we can look at to say, okay, how do we anticipate the future and where that's going to go?
And so, one of the ways I think about this is using another mental model called the DIKW Pyramid. So, the DIKW Pyramid, as you go up, it provides more context, gives you a greater understanding. But here's the exercise I want you to go through. Consider the word data and the words that follow it. Data engineering, data lakes, data pipelines, data provenance.
And so, those words, whatever words that you come up with for data, you now apply that to information, okay? Information search, information engineering, information governance, information security, okay? And do that for knowledge now. And I would argue ChatGPT has allowed us to enter into this knowledge economy. And in this knowledge economy, we have a ton of things that the new knowledge-centric enterprise is going to need.
So, we think that we have all these problems right now with data security and information security. Just you wait. There's a whole bunch of other new challenges that we're going to face. And these are all very much driving towards this new cognitive enterprise or this new generative enterprise.
Now, I said the word generative enterprise. And you think I'm using that word generative enterprise because of generative AI, right?
No, actually, I mean generative enterprise according to Restrom, okay? So, Restrom created this typology of organizational culture many, many years ago. And interestingly enough, he used the word generative here. And I'm going to appropriate that and say, huh, it's interesting because in this new knowledge economy, we are actually building towards this generative enterprise. And this generative enterprise is contrasted with a pathological and a bureaucratic one.
So, you heard earlier the Sapphire model and the different aspects of that. Here, and you can see where like Oak and Redwood might fit into the either bureaucratic or pathological. But what's fascinating as a generative enterprise is the enterprise that we all want to work in, right? It's messengers are not shot or they're not tolerated. They're actually trained. We want to share this information. It's all the wonderful things that we saw in the bamboo world. And we have that opportunity here.
But what's interesting is also you're going to have people who are going to feel threatened by the generative enterprise. You're going to see executives within their organization be threatened because their power base is now being removed.
And so, in this new generative enterprise where we all want to live in this world where knowledge is openly shared and all, you're going to have people who are going to say, no, that can't happen. And we need to be able to tightly constrain that.
So, but in this new generative enterprise, what is our role? What is the role of the CISO? Okay.
And so, with that, let me offer another way to think about this. Chief data officer, chief information officer, might we be the chief knowledge officer?
Again, from the standpoint of adjacent roles, might we be able to make a claim that we understand how do we do knowledge security, knowledge governance? Those are things that we are having to deal with with data and information today. Might we also have a claim for the chief knowledge officer role as well?
And so, just to wrap up, I think the future of the CISO or the future of security is going to be fast irrelevant in the age of AI. And we have three opportunities. And they're not necessarily mutually exclusive, but one is to be a CFO for intellectual property, the second is to be the AI safety officer, and the last is to be the chief knowledge officer. And with that, thank you very much.