Okay. All right. So thank you very much. And can you, I pull up my slides, the, to kind of echo to, to carry on with the conversation that we just had. I, I think there, I'll give you a story as to why I think it's gonna be irrelevant and why things are gonna be matter more. But to give you a perspective of where I'm going with this, first of all, as I mentioned, I, I'm, I'm currently a cso, ACTO at a, at a steal startup, which I won't tell you about right now. But my, my other claims to fame is that I created this thing called the Cyber Defense Matrix for those who were at the workshop earlier. Hopefully enjoyed that. But I also created this thing called the DIE triad. So I'm gonna kind of quickly talk about both 'cause it gives context around what's to come.
So the Cyber Defense Matrix is this thing that I created that helps me organize all these different things that we see in the cybersecurity industry. I blurred out the vendors because what I want you to notice is an obvious gap and it should be like pretty, pretty obvious, right? Why are there so few things over here? And I had a couple of different theories and one of the theories ended up being what I call the DIE triad. So the cyber Defense matrix is useful, but I'm not talking, I'm not, my plan isn't to talk about the cyber defense matrix. My plan is also not to talk about the DIE triad, but the DIE triad tells us something that's coming, okay? And so the perspective is when I looked, when I was trying to answer why there were so few things on the right here, I had to look back at history and say, okay, what happened in these different eras of time from the eighties to the nineties to the two thousands to the 2010s?
And it turns out there's a nice pattern that emerges. So in the eighties we had to, we bought a bunch of technologies because it became cheap, but we ran into a problem which was, what did I just buy? And what business function and the support. In the nineties, we started seeing viruses and people walk into our networks. In the two thousands, we started to get inundated with logs and we had to find ways to find those intrusions. In the 2010s we saw breaches everywhere and we needed to be able to find, to kick the attackers out. What was interesting in each of these eras is that map nicely to the NIST cybersecurity framework. So the eighties, we had an identified problem. The nineties we had a protect problem in the two thousands we had a tech problem, 2010s, we had a respond problem. So of course, the 2020s, the current age is a recover problem, okay?
So if we've wondered why we are talking about recovery or resiliency, all of a sudden it's because we are in that era now, okay? So I said, okay, and, and what, so we needed a new solution. And the new solution in my view was what I call the DIE triad. The DIE stands for distributed, immutable, ephemeral. And it gave, it gave me a perspective of just how we solve the problems that we see in the future. But this is just one way to think about the future, just a pattern that we see over the past decade, several decades. And I said, okay, this is great. The DIE triad, if you don't know anything about it, definitely go take a look. This session's not about the DIE triad, the session's about what happens in the 2030s. Okay?
So the NIST cybersecurity framework ends with recover. There's nothing after re recover. Why do I think cybersecurity is gonna be irrelevant? Well, because there's not a sixth function in the n cybersecurity framework. And govern doesn't count, by the way, for those who know. All right? So now what, and I would argue that again, there's a much, much bigger challenge that we're gonna have in the 2030s, and we're kind of seeing that right now, which is of course all things around ai. And so I, let me make sure it's clear to me, to you, there's a lot of experts in the, in this place space around ai. I am not one of them. Okay? I think I've learned enough. I, I've done a lot of work in AI at Bank of America. I was on a number of risk communities to try to understand how to make sense of this and to do, to how to manage it properly.
And as I'm also, my startup is actually all about ai, but I would definitely not put myself in the category of like the true experts. But I think I've gotten past the, the peak of mountain stupid. I'm just barely getting up the slope of enlightenment here. So just, you know, calibrate. But that said, I think there's an opportunity for us to think broadly around what are the opportunities and challenges. And as I think about the, the challenges, there's tons of challenges. It seems like we, it's the worst of times and it's the worst of time. And from three different angles from how our employees are using it, from how our developers are building it, and from how attackers are weaponizing it. Okay? And, and each of these challenges, even though we can see it as the worst of times, I think there's an opportunity for us to see it as the best of times as well.
So I'm gonna cover each of the worst of times, but also talk about how we can see ourselves differently in this new ecosystem when cybersecurity becomes irrelevant. Okay? So first, the worst of times is represented probably in something like this where you see Amazon lawyers saying, Hey, stop using chacha t don't send proprietary material there because it looks just like our, all our proprietary material that we have internally. Lemme tell you if chat GBT tells you what looks like your proprietary material, your personal proprietary material, you're, you're not very creative. Your, your organization is not very creative, okay? It's, it's probably well known. Whatever ideas that you have, it's just embodied in other places and you haven't discovered it.
LLMs generate, they don't commemorate meaning they don't regurgitate the exact information that even, even if it got trained on like say your personal material, it, it would not statistically find it. Okay? Furthermore, even if it, but that's not even how it works anyway, the, the LLMs that the information that you send to an LLM today isn't baked back into the model such that someone else can use it. Okay? So in the context, we, it is definitely a, there is a concern around like, let's say open AI and from a third party risk standpoint, but not from a, because I use it, you can now, you as a customer of OpenAI can also grab my content as well. That's not how it works. But anyway, that's it. I think we're thinking about this the wrong way. In general though, we are thinking about this, the, our intellectual property as if it's something that we have to close very closely guard.
And I would offer, what if we instead looked at intellectual property as if it's currency and we spend money to make money, right? Well, we can spend currency to get better currency. Now in that regard, I think we have again, an opportunity that, or the thing that's in advance, there it goes. So if you think about what ACFO does, they don't make money. They instead govern the whys and appropriate use of it. And if I actually said, if I were ACFO and I said, you cannot spend any money, you might as well fire me as ACFO 'cause I'm not really helpful to the business. So think about this from the cybersecurity standpoint. What if we become, what if we have the opportunity to become like the CFO for intellectual property where we govern the wise and appropriate use of IP for intellectual property. Again, if you're a ciso, if you are a security team and you say you cannot spend any of ip, you might as well fire the security team because they're not really helping the business grow and to be able to spend that currency in a way that helps them build better currency.
Now here's the thing. Just as much as you have different budgets and different levels of spending within an organization, you know, the CFO doesn't care if you spend petty cash, okay? Whatever petty cash might be for your threshold at Bank of America, my threshold for petty cash was actually pretty high. But in the like, and, and similarly in many organizations, there are things that you would consider intellectual property that are really the equivalent of petty cash. Do you really need security review for these little snippets of code? Probably not. Do you need security review for the entire base of code? Probably. Okay. But the perspective here is what if our role changes to be looked upon as ACFO for intellectual property where we en enable the business to actually use that wisely. Second worst of times is developers building artificial intelligence systems. Now, one of the, I wanna reference this architecture here.
It's from Andreessen Horowitz. There's a lot of detail, but the main detail I want you to kind of make out is the blue lines. The blue lines represent user input. Okay? User input. Now, there is one inviable rule rule security, which is never trust user input. And unfortunately, there's a fundamental flaw with LLMs in that user input is trusted everywhere. Okay? So we look at the, i I highlighted it here, but the blue lines now the highlighted blue lines show where user tru user input is trusted. It's trusted in pretty much every part of the, nearly every part of the architecture, which is a real problem, a real challenge. And so we do have, this is definitely a real concern. We have to deal with this, we have to, we're struggling to figure out how to make this a workable solu security. How do we work in security into this type of environment?
And there's a bunch of tools out there. I I'm not gonna go over the specifics, but there's the Berryville Institute, the machine learning, their taxonomy attacks, which I love. It's a great way to think about the, a holistic model for attacks against AI systems. There's the O top 10, which I'm sure you guys have seen, or if, if you've looked at this at all, there's mi miters Atlas that talks about different types of attacks against AI systems. But overall, I would say again, this is an opportunity for us to go past the current set of challenges and think differently about our role. And I, I hinted at this earlier, but this notion of the difference between safety and security. Now in the context of, so in the, in the, in German it's the same word in English, it's two words. In cybersecurity, it's the same word again.
So how do you, how can you tell the difference between what, what a safety versus security? And one way to do it is to think about it from a food context. So in the English language, food safety refers to things like hygiene, compliance, good practices, software, bill of materials, personal responsibility, okay? When it comes to security, it's dealing with things that are inherently a governmental function, starvation poisoning, where did all the baby formulary in the US go? Those are things that we are, that is a governmental function. And this is important because we think that we're doing cyber security when most of the times we're actually doing cyber safety. Okay? So if I think about this from a different context, if I'm Boeing, if I am Airbus, if I'm hanza, then my job as an aircraft engineer is to make sure that the plane doesn't fall outta the sky.
My job is not to dodge Russian and Chinese missiles. The job of the government is to ensure that the airspace is free and clear of Russian and Chinese missiles. If I get hit by one, is it really my fault? And we unfortunately within the, within our industry, we do a lot of victim shaming. But the perspective here at the end of the day is that we practice more cyber safety than we do cybersecurity. And the reason why this is important is because we have a much bigger problem coming up. And the bigger problem is AI safety. And who better than a group of people who've been doing cyber safety for many years to pivot over to doing AI safety? Who, who, who has a stronger claim to say we understand how to do digital safety, let's now pivot over to how we can do AI safety.
And I'm proud to say I'm really happy. So I, I propositioned this many, many months ago, and I'm really happy to hear for example, that Vijay Bolina, the former CISO at Google DeepMind, is now the Chief AI safety officer at DeepMind. So this sort of shift that's already happening, and I'm super happy to hear that because folks like VJ are helping lead the way in terms of defining this field for our, this new field that we can take on. And I think it will be us to, it'll be upon us. I think we again have the best chances to be able to shape this future for ourselves as well. And the reason why that's important is because we're seeing some really interesting things that are happening that we need to be aware of. So what you're seeing is this progression of model development over the past couple years where for a long time it was growing, the performance of the model was growing steadily, not very short, you know, it was just, it was a linear curve.
But all of a sudden, once the size of the model grew past a certain point, we, we hit an inflection point. And all of a sudden now the accuracy of these models are growing much faster. And so what's interesting is that, again, before our expectation was that this curve that we see between, we had a trade-off curve between performance and the model working against a, a broad set of use cases. And this curve would gradually be increasing with as we draw more resources at it. But what's what actually happening is that this curve is bending outward. And it, because we don't know what's gonna happen when this curve continues to grow outward, the notion of AI safety being a preeminent concern, a, a much greater concern than cybersecurity is what's causing a lot of is, is causing us to shift our opinion or our thinking towards how do we deal with this problem before it gets out of hand, okay?
And we thought in the curve on the left, we thought we had time, we thought we would have, you know, maybe a decade, two decades before it starts to become a big concern. This is an emergent property that we didn't anticipate, that researchers didn't anticipate. That is why open AI researchers, that's why the top the, the frontier model researchers are basically saying, look, we kind of need to stop development of these bigger models until we figure out how we get a, a better handle of what we're gonna do when these bigger models come into play. So chief AI safety officer, oh, and then for AI safety, I, I came up with a set of guardrails as an example of how we can think about how we as digital safety officers, CISOs, can shift over to becoming a chief AI safety officer. How do we apply these sort of techniques?
And if you wanna see more details, you can see this, the, the four full, the article or the RSA talk I gave on this. Alright, then weaponizing. So the third worst of times is attackers weaponizing. So there, there are four major things that I think we're concerned about that I've heard many times. First is, are we're gonna see more convincing phishing emails or we're gonna see lots more malware being developed. As far as I'm concerned, that's not that interesting that for most of those things that we we're seeing, we should keep doing what we're already doing. Security awareness training tools that help us find malware and lockdown environments and so on so forth. But that's nothing really new, okay? That's old stuff that we've been working on for a while. However, there is something new. And the two new things as I see it are the ability to find vulnerabilities faster and more efficiently.
Generative AI is really good at being creative. What do you need for finding vulnerabilities? Lots of creativity. So over the next six months to a year, you should ex we collectively should expect to see lots more, zero days, lots more vulnerabilities that are being disclosed by the vendors that produce software as well as external attackers. Finding those before we do. Okay? And so that's, that's gonna be a big concern over the next couple, next couple months. And then of course we saw earlier, Biden giving us a, a welcome that these deepfake attacks, we don't know exactly where that's gonna lead, but those are two concerns that are new. And to that end, fortunately though, I think there's some interesting things that AI's helping us with that will help us address these challenges. And the first one is around the, the vulnerability researching piece piece. One of the things that we, I think we know, so going back to William Gibson, the future's already here.
It's just unevenly distributed. What's unevenly distributed is our knowledge of how to build secure systems. It's known, it's just not widely known. So imagine now in the future or near future, you have a situation where you can say, Hey, I wanna build a system with these principles or these sort of features. And it generates for you code in memory safe languages like rust or it, it, it builds in all the core security principles that we want it to be built with. And so we have systems with fewer security concerns right from the very beginning, whether it's because you're already using design patterns that are secure or 'cause you're using memory safe languages that don't have those security issues to begin with. And then secondly, as far as the DeepFakes, what, what I'm really excited about is that we now have a mechanism, well, we're starting to develop mechanisms to authenticate data.
Okay? Consider, we've had a real issue with email for a long time. We, we've wanted to have authenticated emails for a long time and we still struggle with it, but now there's a bigger societal concern and the as represented through dfas. And so you have this coalition for content providence and author authenticity. They're starting to bake into our devices, our cameras, whatever it might be, a a, a digital stamp that says this is authenticated, it came from this device, it was edited in this particular way. And so we're starting to see that and that's starting to get widespread adoption. And my view is if they can authenticate much, much larger bit bits of content like, like video, audio, images and so on and so forth, how simple would that be for email, right? So there's a foundation that's being laid right now to help us address this last issue, which then I think will help us address the email issue that we've been working with for a long time.
But that said, as we think about challenges that we see in security overall as attackers leverage gen ai, I think there's another model that we can look at to say, okay, how do we anticipate the future and where that's gonna go? And so one of the ways I think about this is using another mental model called the DIKW pyramid. So the DIKW pyramid, as you go up, it provides more context, it gives you a greater understanding. But here's the exercise I want you to go through. Consider the word data and the words that follow it, data engineering, data lakes, data pipelines, data providence. And so those words, whatever words that you come up with for data, you now apply that to information, okay? Information search, information engineering, information governance, information security, okay? And do that for knowledge Now, and I would argue chat, GPT has allowed us to enter into this knowledge economy.
And in this knowledge economy, we have a ton of things that the new, that the, that the, the new knowledge centric enterprise is gonna need. So we think that we have all these problems right now with data security and information security. Just you wait, there's a whole bunch of other new challenges that we're gonna face. And these are all very much driving towards this new cognitive enterprise or this new generative enterprise. Now I said the word generative enterprise and you think I'm using that word generative enterprise because of generative ai, right? No, actually, I mean generative enterprise according to rum. Okay? So rest rum created this typology of organizational culture many, many years ago. And interestingly enough, he used the word generative here. And I'm gonna appropriate that and say, huh, it's interesting because in this new knowledge economy, we are actually building towards this generative enterprise.
And this generative enterprise is contrasted with a pathological and a bureaucratic one. So you heard earlier the the sapphire model and the different aspects of that here. And you can see where like oak and you know, redwood might fit into the either bureaucratic or pathological, but what's fascinating as a generative enterprise is the enterprise that we all want to work in, right? It's messenger are not shot or they're not tolerated, they're actually trained. We wanna share information. It's all the wonderful things that we saw in the bamboo world and we have that opportunity here. But what's interesting is also you're gonna have people who are, who are gonna feel threatened by the generative enterprise. You're gonna see executives within their organization be threatened because that their power base is now being removed. And so in this new generative enterprise where we all want to live in this world where knowledge is openly shared and know you're gonna have people who are gonna say, no, that can't happen.
And we need to be able to tightly constrain that. So, but in this, in this new generative enterprise, what is our role? What is the role of the ciso? Okay? And so with that, let me offer another way to think about this. Chief data officer, chief information officer, might we be the Chief knowledge officer? Again, from the standpoint of adjacent roles, might we be able to make a claim that we understand how, how do we do knowledge security, knowledge governance, those are things that we are having to deal with with data and information. Today. Might we also have a claim for the chief knowledge officer role as well? And so just to wrap up, I think the future of the CISO or the future of security is gonna be fast, irrelevant in the age of ai. And we have three opportunities and they're not necessarily mutually exclusive, but one is to be ACFO for intellectual property. The second is to be the AI safety officer, and the last is to be the chief Knowledge Officer. And with that, thank you very much.