And so in terms of the slide changing, do I just ask for the slide to be changed, or is it a clicker? Oh, a clicker. Let's try it. So thank you very much. I really appreciate the opportunity to speak to everyone here. Again, I'm Scott David. I work at an applied physics laboratory, and I've been an attorney for 30 years. People ask, why are you at an applied physics lab? I say, well, it's because we're doing engineered rhetoric. And that's not rhetoric as florid speech, but rhetoric as persuasive speech. It's the narratives that we're living by. We've already heard discussions about how technology is developed and needs policy. Policy is ultimately a narrative, an enforceable narrative. That's something I'd ask you to think about during the presentation. My slides are not very professional. I'm an academic; I don't have the support of graphic artists and such.
If there are people who are frustrated with the appearance of them, I'd be delighted to ask for some resources to improve the slides for the next time we meet. So let's talk. I'm going to talk about things that you already know, and that sounds like it might be boring, but these are things you hear about every day, and that you've heard about in this conference already. Many of them are very, very exciting, and not in a good way. For instance, look at the red words here. We have unmanaged exponential change; I've already heard people talk about exponential change today. We have anomalies that are increasing risk. We have failed institutional governance and a lack of incentives for cooperation. People are asking for dashboards for situational awareness. And there's a concern with artificial intelligence, as has already been alluded to. These are very, very difficult and exciting problems. In our work at the IRSIRI, the Information Risk and Synthetic Intelligence Research Initiative, we adopt the notion that the source of the problems is the source of the solutions.
So what does that mean? The way we're going to do that in today's presentation is to take those words and make them into characters in a story. And this is just one possible story. For purposes of this conference and this program, there are many different possible stories, and we're going to hear different companies' takes on it. This offering is less about the specifics and more about a possible future for us to think about: given these situations we find ourselves in, how can we do something to extract value from them, for everyone around the world? Today's story is going to have villains, heroes, monsters, and plot twists. It's going to be quite exciting, I assure you. So here's the plot summary of the story. This is a summary of the entire presentation, so for those of you who get tired of hearing this nasal twang of mine, once you've seen the slide, you can leave.
The characters in the story are in red, as I alluded to before. The story starts with exponential change and the risk of failing institutions. But spoiler alert: the plot twist is that anomalies encode untapped value. Also, there are growing shared risks, which we'll talk about in a minute, that provide incentives for all entities, human and organizational, to self-bind to governance. And synthetic intelligence, whose definition we'll get to in a minute, emerges at the climax of the story to help us deal with complexity and AI. So let's get to it. Let's start with everyone's favorite supporting character: the dashboard. Everyone's always talking about dashboards. Many folks call for a dashboard as part of their information risk solution, in order to have better situational awareness and to serve up relevant information for managing future interactions.
The notion of the dashboard is intended to conjure up images of really well designed, dynamic information interfaces, like you see in a traditional sports car dashboard. What they're really proposing is an instrument panel. Instrument panels were located in the dashboards of early vehicles, but by referencing a dashboard, they've inadvertently revealed something about their product and service. The dashboard was originally the board placed between the back end of a horse and the driver, for purposes of blocking "dash": the dirt, mud, and manure thrown up by horse hooves. The dashboard blocked flying manure. So the next time someone suggests that your information system needs a dashboard, it might be worth checking on what manure is being hurled your way. Thank you.
So now that we've clarified the role of that popular supporting character, let's move on to one of the main characters: exponential change. Exponential is another term that's getting lots of use in technology contexts and in the news generally; you read about it in the paper all the time. Exponential change is a major character in our story, and usually seen as a villain. The reality is that exponential change is a big deal, and it's appropriately cast as the bad guy. The Covid pandemic demonstrated that exponential change kills. Exponential change also devalues. Jörg, if you wouldn't mind passing around those bills that I gave you. Jörg? Okay, there we go; he wasn't even paying attention. The hyperinflation experienced in Zimbabwe, Yugoslavia, and the Weimar Republic shows how exponential change destroys value and impoverishes populations. I have the currency being passed around.
You can see all the zeros, and you can enjoy that. Notwithstanding the fact that they're only worth a couple of bucks each, I'd appreciate it if these 2 trillion bills came back up at the end of the presentation. The exponential increase in the global human population strains resources; it results in climate change, starvation, forced human migration, war, conflict, and species extinction. None of those are good things. And exponential change is not limited to physical systems; it can also be devastating in non-physical systems. In physical systems, at least, exponential change is bounded by physical laws; it's limited. But in intangible, abstract information domains such as social networks, there are no physical limits on exponential change, and that's scary. That's what we're all responsible for helping with. Well, why don't we just use exponential metrics for exponential change? The problem is that exponential metrics are confusing.
The Richter scale, for instance: the difference between a four and a five on the Richter scale is 10x, and so it's very difficult to make the public aware of the risks. Sound at high levels also causes harm, and the decibel scale is likewise an exponential scale; it's very difficult to interpret. The interpretation problem is even worse when you're talking about exponential change, or growth. The Richter and decibel scales are instantaneous measures, but what we're talking about here is exponential growth; it's a measure of change. If you don't understand the exponential nature of it, the risk proliferates without an ability to detect it, because humans have a linear bias. Sometimes scientists will think about exponential growth and recognize that the early part of the curve is not as problematic, but at least there you can recognize that it's happening.
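The linear bias described above can be made concrete with a small sketch. The numbers here are purely illustrative (a quantity that doubles each step, and a "linear guess" fit to the first two steps), not anything measured in the talk: the point is only that a linear forecast made early on the curve falls exponentially far behind reality.

```python
# Illustrative only: why linear intuition fails on an exponential curve.
# A hypothetical quantity doubles every step; a linear forecast is fit
# to the first two steps, which is all "linear bias" tends to see.

def exponential(step, start=1.0):
    """True value: doubles at every step."""
    return start * 2 ** step

def linear_forecast(step, start=1.0):
    """Naive straight-line extrapolation from the first two steps."""
    slope = exponential(1, start) - exponential(0, start)
    return start + slope * step

for step in (1, 5, 10, 20):
    actual = exponential(step)
    guessed = linear_forecast(step)
    print(f"step {step:2d}: actual={actual:>10.0f}  linear guess={guessed:>4.0f}")
```

By step 10 the linear guess is already two orders of magnitude low, which is the "proliferation without detection" problem in miniature.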
I was at EIC in 2014, and we were talking about exponential growth, and I said at the time, we're going to pine for these days. Everyone thought things were so difficult back then; you can see how much more challenging they've become. Well, let's talk a little bit about anomalies. An anomaly is a phenomenon beyond measurement. Exponential change results in many anomalies as it blows through our systems of measurement in various business, operating, legal, technical, and social institutions. That's BOLTS: business, operating, legal, technical, social. So it creates anomalies. The institutional metrics actually create the anomalies by being siloed; at the edge of the system, there's an anomaly. Institutions are bundles of metrics that encode risk into shared practices, and there are two big problems with institutions hosting metrics. First, as I alluded to a minute ago, siloed institutional metrics are too narrow.
Anything beyond the known knowns is deemed a potentially risky anomaly. Anomalies arise at the edge of institutional measurement and governance. Second, even narrow siloed metrics are useless, because all institutions are rendered blind by the internet. This is a diagram from Paul Baran's paper for the RAND Corporation in 1964. Imagine for a moment, on the left-hand diagram, that you're at a company and there are 20 people at their desks talking about an HR issue over email, and the CEO wants to know what's going on. The CEO can go to the central email server and see the chatter, see what the talk is. If the same people, sitting at the same desks, talking about the same issue at the same company, go on text or Facebook, the CEO is blind. That's the right-hand diagram. Where does the CEO look to see the chatter?
Paul Baran was dealing with the ability of a communication system to resist nuclear attack, but the same is true of control, and so we need to be really careful about the notion of control. Institutions amplify human risk leverage, but when signals become unmeasurable anomalies, we're ignorant of the risks and opportunities, and that ignorance cultivates a risk imagination. The old maps of the earth signaled risk with monsters. One of the famous maps from far back in the age of exploration had "hic sunt dracones" on it: here be dragons. The same can be said of the information landscape we traverse on the internet. Our new non-physical territory is even harder to de-risk with discovery and exploration, and so we have our own risky sea of anomalous interactions, without shared rules, generating its own information monsters. Some well-known ones are up there.
That's the Matrix down on the left there. So here we're going to shift from despair to hope. All those who are despairing, worry not. Anomalies are usually seen as a failure of measurement. What if we shift and see an anomaly as a signal, and a measurement itself? First, anomalies are a latent signal of the edges of a system, because by definition, if you can't measure anymore, it's anomalous. If you stop measuring, it means you're at the edge of the system. They reveal our silos. Second, anomalies in your system may result from other siloed systems mitigating their own siloed risks. What if another system of measurement, outside your silo, encodes risks that are now relevant to your silo in a very connected world? Adoption of that external metric and its related practices is a shortcut to risk mitigation.
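The "anomaly as signal" idea can be sketched in a few lines of code. Everything here is hypothetical (the calibrated range, the readings): the design point is simply that readings a silo's metrics cannot handle are collected as edge-of-system signals to be examined, rather than discarded as measurement failures.

```python
# Illustrative sketch: treat out-of-range readings as signals, not noise.
# The calibrated range stands in for the limits of one silo's metrics.

CALIBRATED_RANGE = (0.0, 100.0)  # hypothetical limits of "our silo"

def triage(readings):
    """Split readings into what our metrics can measure and what they can't."""
    measured, anomalies = [], []
    lo, hi = CALIBRATED_RANGE
    for r in readings:
        (measured if lo <= r <= hi else anomalies).append(r)
    return measured, anomalies

measured, anomalies = triage([12.5, 99.0, 250.0, -4.0, 57.3])
print("in-range readings:", measured)
print("edge-of-system signals to decode:", anomalies)
```

The anomaly list is the interesting output: it marks exactly where this system of measurement ends, which is where another silo's metrics may already apply.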
If you're outside and you hear a crow in the air, you look up, and the crow is calling because it's warning other crows about a hawk. The crow is speaking crow. I don't speak crow; that's a crow silo, and I'm in the human silo. But I'm decoding the encoded crow risk signal. That's something to keep in mind when you're thinking about where to find other opportunities for de-risking: look for signals in other domains. Let's talk about risk. Exponential risk is destructive and difficult to measure, and anomalies are increasing as institutional risk metrics fail. This is very bad news. But let's look at the positive side and start to build towards solutions. We can see that the exponential increase in risk actually fuels the incentive for folks to self-bind to new governance, with anomalies as inputs into a synthetic intelligence process.
Let's talk about that. Moore's law, finally getting to the title of the presentation, is going to guide us from despair to hope. Moore's law, as we know, describes the exponential increase in transistor density on chips, but it also provides a link from that physical transistor density to exponential change in intangible information space. Here's how. The second-order effect of Moore's law is called Bell's law: an exponential decrease in the size of the computers powered by those chips. Here it's a line, not a curve, because the scale on the left is logarithmic. The third-order effect of that miniaturization is the exponential proliferation of device deployment; the Internet of Things is an example. The fourth-order effect of that ubiquity of deployment is an exponential increase in the digitization of old interactions. Beyond that, the fifth-order effect is an exponential increase in the volume of new interactions: online shopping, markets, generative AI are all examples of new interactions which are themselves increasing exponentially.
Well, here we get to the punchline, the sixth-order effect. Any lawyer, business person, or engineer knows that interactions breed risk, because a certain percentage of interactions in any system is not going to go according to expectations. So an exponential increase in interactions yields an exponential increase in risk. Here's the most important statement in the entire presentation: we have no metrics as humans to adequately measure exponential change. Period. That is bad. Okay, I haven't brought you to hope yet; that's still despair. Our risk-measuring institutions are all blind, and our risk is increasing exponentially. All institutions are artifacts of bundled practices and metrics for risk mitigation, and they're not fit for function. So what do we do? Let's talk about incentives. Information differentials yield a new risk for everyone. Everyone is affected globally, and that drives us together, through mutual self-interest, to de-risk in ways that no one can do alone.
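The sixth-order argument, that a fixed failure fraction applied to exponentially growing interaction volume yields exponentially growing risk, is simple enough to state as arithmetic. The failure rate and growth rate below are assumptions chosen for illustration, not figures from the talk.

```python
# Illustrative arithmetic for "interactions breed risk": if a constant
# fraction of interactions defies expectations, expected failures grow
# at exactly the same exponential rate as interaction volume.

FAILURE_RATE = 0.001      # assumed: 0.1% of interactions go wrong
GROWTH_PER_YEAR = 2.0     # assumed: interaction volume doubles yearly

def expected_failures(initial_interactions, years):
    """Expected failed interactions after `years` of compound growth."""
    volume = initial_interactions * GROWTH_PER_YEAR ** years
    return volume * FAILURE_RATE

for years in (0, 5, 10):
    print(f"year {years:2d}: expected failures = {expected_failures(1_000_000, years):,.0f}")
```

Nothing about the small failure rate saves you: the risk curve is the interaction curve scaled by a constant, so it inherits the exponential shape exactly.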
And that really is the key. It sounds selfish, but it's not; it's self-interested. Organisms and organizations typically don't want to internalize costs, so this is not new. Our world is built on systems called institutions, and they're bundles of rules. Let's look at governance, then, since we're talking about institutions and rules. What is that all about? It brings us to governance: how we create and self-bind to systems of rules to de-risk and leverage together, in the massive and expanding gaps where yesterday's institutions simply don't function. So let's look at how governance emerges simply as a matter of scale. Recall the higher-order effects of Moore's law and the exponential increase in interaction volumes. The failure of existing institutions to de-risk and leverage these new interactions opens up myriad risk measurement gaps, called anomalies, across business, operating, legal, technical, and social domains.
Well, what's a governance system? If we're going to try to build them, we need to know what they are. All governance systems have to have at least three components: rulemaking, operation under the rules, and enforcement of the rules. They can be normatively cross-referenced, but all three need to be present for the system to be a system of governance. Here, the EU buildings that house those functions are illustrated. Governance has always emerged naturally from practices in human society as a matter of scale, the scale of the system's interactions. It starts with an awareness of multiple different practices, and then it develops from practices to best practices to standards to institutions. Let's take a look at how that occurs. Say that he and I each have our own companies: I'm selling jewelry and he's selling groceries. I sell my jewelry in an eight by eight by twelve cardboard box, and he sells his groceries in a nine by nine by fifteen plastic box.
And we each make our own boxes. So we get together, since we know each other, and we say, hey, it'd be easier for shipping and save us costs if we standardized our boxes. So we take our practices, make a best practice of the two, and decide to use the eight by eight by twelve cardboard box. We've moved from practice to best practice, and that's rulemaking: we decided to make a rule. Now let's say 10,000 people say, these guys are really hitting it with this box thing, this is terrific, and they start doing it. But we don't know them; 10,000 people is way beyond Dunbar's number, for those of you familiar with that number. And we say, gee, this is really going to dilute it.
If we don't know what they're doing, they could really mess up our wonderful best practice and make it worse for us. So what we decide to do is create a certification mark. The certification mark lets us signal conformity to the box standard, and if someone uses the mark but is not conformant, we can sue them. That's enforcement; we just added the judicial process. Lastly, let's say everyone loves the box standard, so a million people are doing it. And Jörg and I recognize, wait a minute, we're not in the box-making business. I'm in the jewelry business, he's in the grocery business; why are we still making boxes? So we decide to outsource it and have someone else make the boxes. That's what happened in the eighties with the outsourcing revolution. And we go back to who we existentially are: jewelry and groceries, not box making.
That's executive function. So what we just did was construct governance as a matter of scale, step by step as it went up. The Granges did a similar thing for agricultural practices; what we need now is the Grange system for information practices. Let's talk about that. We end the story with synthetic intelligence. The exponential growth in information differentials from Moore's law created anomalies in the old systems' institutional metrics. The risk of those anomalies incentivizes entities at large scales to share and apply risk mitigation practices in the wide variety of governance processes that we're going to call synthetic intelligence. So what specifically is synthetic intelligence? Well, we hear a lot about AI. I actually coined the phrase "synthetic intelligence" ten-plus years ago to put it next to AI. But it's what humans have always done; it's not new.
We synthesize our combined intelligences, through shared practices, metrics, meaning, and context, into larger units, so we can de-risk and leverage interactions at larger scales. Period. Synthetic intelligence emerges whenever humans and their organizations convene to create or apply governance using the four-step ladder of institution construction that we talked about a minute ago. SI takes place in information engines, which are virtual reaction chambers where stakeholders share metrics and practices to de-risk at larger scales. I won't spend a lot of time on this, but we use the second law of thermodynamics because the math is identical. Some of you are familiar with Claude Shannon's quantitative theory of information; von Neumann told him to use the second law of thermodynamics for his work. But that was Shannon entropy. We're talking here about a kind of market entropy. Again, I won't go into detail there.
We can talk about it offline. As a refresher: in a heat engine, you have combustion, and that reaction is contained in a combustion chamber. What we're talking about here is the virtual equivalent of a combustion chamber in information engines such as the stock market, where it's information differentials rather than temperature differentials that drive the exchange activity. Information differentials cause risk, including valuable entrepreneurial risk, and the virtual reaction vessels of information engines provide the context and meaning to convert inert data into valuable information. So contracts are information engines; I said they're enforceable stories. Humans have historically created all sorts of information engines. Shared language is an information engine. Shared cultures are information engines. Shared identity is an information engine. They let us leverage and de-risk at scales we simply cannot reach alone. In the last part of this presentation, I just want to get back to who we are and where we came from a little bit.
We were once one village in Africa, and 60,000 to 100,000 years ago we migrated around the world and came up with different food, politics, language, culture, and music. The internet brought us together for show and tell, and what that did was create a lot of anomalies. Now remember, we didn't like anomalies before. But in the four-step ladder of institution construction, the anomalies represent new practices. Just like the crow call, we can say, oh, that's another way of doing it. So now we have the ultimate richness in the inputs we need for synthetic intelligence. So these are the next steps; last couple of slides here. What should you do, what should your family do, what should your company do, to be synthetically intelligent in the steep part of this exponential curve? Because this is not a drill that we're in right now.
First, collect anomalies. They're signals that are not yet decoded by your system. Then examine the anomalies: like the crow call, they may encode risks from other systems, and that's a cheap way of getting some de-risking. Then internally evaluate your dependencies. Talk to your accountants. That's not something everyone embraces, but accountants really are risk photographers. The biggest parts of your budget, where the most payments are made, are your biggest dependencies, so the anomalies from those domains may be the most relevant. Then set risk management expectations. Notwithstanding the fact that you can't measure exponential change directly, talk about it, because if people are experiencing it and you're not talking about it, they're going to act incorrectly. And lastly, be curious about risk. Map your BOLTS (business, operating, legal, technical, and social) interaction environment. Identify the local BOLTS risk paradigms and practices of your neighbors. Join existing information engines: the risk clubs, the trade associations, et cetera. Of course, attend EIC and other conferences. And then create new information engines, using the four-step ladder of institution construction, with other people sharing practices. And thank you very much for your time today.
So we have not mentioned it, but of course you can ask questions to the presenters using your app, and you can also raise your hand if you have a question. Unfortunately we don't have much time, but just one thing out of personal interest. I got the impression that you're saying that with this development we're in with regard to AI, we have seen something similar before. What do you think of the proposal, I think Elon Musk made it, to take a break, wait a little bit, and then continue? Yep.
Yeah, the question about whether we should take a break. I think it's the equivalent of bunnies, little rabbits, taking a vote and saying, we would like to take a break from being eaten by hawks. It really is an inevitability. The exponential increase in interactions is a phenomenon that's been happening since the Big Bang, and it is not stopping. Information and interactions are going to continue to increase exponentially, and AI is merely an artifact of that phenomenon. So it is not stoppable, and the best thing we can do is build surfboards and ride the wave. I like this one. Yeah. Thank you. Thank you.