Decentralized AI can enhance transparency and accountability by distributing data, computation, validation, optimization, and execution across multiple nodes, thus preventing a concentration of power that could threaten individual freedoms.
It can also foster innovation by allowing third-party developers to verify the data and algorithms that an AI system uses.
Technologies such as blockchain and federated learning are at the forefront of this revolution. Blockchain's transparency, immutability, and decentralization can enhance trust in data-driven decision-making.
Federated learning, on the other hand, allows for the training of machine learning models on decentralized data, thus preserving privacy and reducing the need for data centralization. In this panel session we will discuss the technical foundations and the pros and cons of decentralized AI.
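The federated learning approach mentioned above — training on decentralized data, with only model updates (never the raw data) leaving each node — can be sketched roughly as follows. This is a minimal illustration of federated averaging on a linear model; the client data, learning rate, and round counts are invented for the example and are not from the session.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local step: plain gradient descent on a linear
    model, using only that client's private data shard."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(weights, clients):
    """One FedAvg round: each client trains locally; only the
    resulting weight vectors are sent back and averaged,
    weighted by each client's data size."""
    updates = [local_update(weights, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three clients, each holding a private shard of the data.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(30):  # 30 communication rounds
    w = federated_average(w, clients)
```

After the rounds, the shared weights approach the true model even though no client ever revealed its data, which is the privacy-preserving property the abstract refers to.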
Yes, we will introduce the panel now, and I will allow your question, because it is actually related as well — we are talking about transparency. Please, we invite you to sit here. Please join me in welcoming Mihaela and Trent again, who were with us this morning, and Richard. In this panel we will talk about decentralization: is decentralization the best way to achieve transparency and verifiability in AI? That's a good question. Thank you, and welcome again.
And while the panelists are coming up to the stage — I think it was interesting that you started with coal mining, because I read recently, though I haven't confirmed it, that when you do an internet query — excuse me, an AI query — about 20% of the energy associated with answering that query currently comes from coal. So it's kind of interesting and ironic that we're moving toward that.
Well, thank you. That was a fantastic presentation. And you know, it's interesting, because in a sense, from our earlier presentations today about the socialization of technology, it feels as if the technology has become so intimate and so connected now that there's almost a mutual socialization going on: technology of us, and us of the technology.
And I'd like to ask the questions, starting with RJ: what can we learn from the history of decentralized knowledge systems that can help us build and effectively manage, or deal with, decentralized AI systems? Do you have a mic, Mihaela? Thank you. Okay. Alright. Well, I could spend 30 minutes answering that.
I've been working on applied knowledge management — thinking about the history of knowledge management not as corporate management of information, but as how we act as a synthetic intelligence in managing information at civilizational scale, down to the organizational, down to the individual. And I've been working on that for 20 years now, which means I either started very young or I am a very deceptive 45. So I think we can see, for one, that requirements have always fallen into the same set of categories regardless of the tools used, right?
We need to be able to collect, set requirements for collection, summarize, disseminate, right? These structures have always been the same, and now we're just pumping the gas really, really hard.
If that makes sense. Sorry — we have a question from the floor. So, the question?
Yes, exactly. Do other folks have notes on that — on the idea that there is insight we can take from earlier knowledge systems and apply to dealing with, again, the mutual socialization that's happening now with AI? Is that a question any of the three of you would like to answer as well?
I can add a maybe very non-tech anecdote. It's a question I think about: what is agency? If we outsource decision-making to another party — no matter what that party is, or how much control you have — you lose quite a bit, sometimes quite a lot, right? With all due respect to doctors and lawyers and everybody: when you delegate, you always lose something.
And this is the dilemma we are in with AI systems. We are delegating a lot of decision-making, or work, to these systems, and they're going to make decisions that are not exactly what we want, because there's a communication barrier even if you have full control. So there's a complexity to delegation: it's never complete, and the cost of completing it is very high. These are really external intelligence systems. They're like aliens, and they'll never quite follow what you want. So we have to learn to manage that.
It's like learning to deal with, say, a business client who doesn't speak your language. In this case, luckily, AI learned our language — maybe too well — but they really did learn our language. So be careful: we should learn their language too, otherwise they will start to beat us. Yeah. And before we take the question, I just wanted to comment on that. I work with the IEEE agentic AI working group, and one of the tasks they gave me was: what are the general agency constraints we should put on agentic AI?
And as a lawyer, I went to the precedents, and there is uniform agency law out there which sets the expectations of human-to-human agency — all sorts of elements, sub-delegations, limitations, cancellations. As you go through it, it's fascinating, because it becomes a catalog of paradoxes, right? Because our expectations from the human constructs don't all map directly. So it's very interesting — it is a mutual socialization that's going to happen.
Trent, do you have a thought on that? I think overall, right, we're on a path for ASI to come as soon as 2027 — improving about 10x a year for the next few years. And maybe just to set context: five years ago, AI was at the level of a kindergartner, right? GPT-2. Three years ago we were at the level of a grade-two student: GPT-3. A year ago we were at the level of a grade-eight or high-school student, right? GPT-4.
And in the next six to twelve months we'll probably be at the level of a first-year university student: GPT-5, roughly. And then it's going to keep going, right?
Every year, 10x, right? Thanks to Moore's Law, thanks to AI algorithm improvements, and thanks to the other sorts of tricks people put in. So pretty soon, 10x, 10x, 10x takes us from university student to wizard-sage programmer or professor, to someone smarter than anyone we've ever met before, and then to a thousand times smarter than anyone we've ever met, right? And this is all happening soon. We're not talking 10 or 30 years from now. It's happening soon: three years, five years, seven years.
It's really hard for us to comprehend — our brains think linearly. In a sense, the conversation today shouldn't just be about how we deal with the GPT-4s of the world; it's how we deal with the GPT-8s of the world, which will have agency and could be a thousand times smarter than us. And then we can ask: how do we control these guys? Right?
Well, who here has watched Avengers one? Raise your hand. Do you remember the scene where they put Loki into the super fancy prison? Loki is Thor's brother, right? He's super sneaky, super smart. Within two minutes he had escaped — he tricked Thor. And if you read books about hackers who are basically making Swiss cheese of security systems: why does that work? Because the weakest link is humans themselves, giving away the keys accidentally, all of this, right?
Phishing, whatever, right? So we can think all we want about how we're going to manage these systems a thousand times smarter than us, but it ain't gonna happen, right? So instead we should ask how we cooperate with these people, these AIs, whatever we want to call them. They may want to be called people; they may want equal rights — I mentioned that, right? So we just have to be realistic about this. Let's not design for today; let's design for tomorrow. But again, before we take the question, let's complete this question. RJ?
Oh, sorry — either way. I haven't said anything so far, so — okay. My human rights, as I was saying.
So yeah — thank you for setting the context, because initially I didn't want to say anything to your initial question about knowledge management.
I was like: what is that? Some decades ago I was at the University of Calgary — I was just telling Trent — and I was working on an expert system with knowledge elicitation from experts, right? And at the time it was a pain: we had to go into the surgery room and ask the surgeon what he was doing, and how, and so on and so forth. And now we have all that knowledge, and we have access to it through GPT and LLMs and so on. So the questions are changing radically, right?
So it's about: okay, a system which can discover things — Nobel-level discoveries, ten of them per day. What do we do then? And so I just wanted to ask you, Scott — sorry to come back to you — to rephrase the question: from that future, is it about management, or is it about co-creation, maybe?
And how do we work with that? How can we benefit from it? How can we stimulate it to make more discoveries, or maybe to stimulate us to co-discover?
Yeah, I might actually answer that. So, I think it's not just — I think sometimes we hear "AI" and it's like one big thing, right?
And I feel that a lot — it's just like this one mega-bot, the Matrix, even in the way we talk about it. And what you're getting at is cooperation between us and them — not just us and agents, but us and a variety of GPT agents, right? So when I say knowledge management, I'm thinking — and that's why I said not in a corporate sense of enterprise knowledge management — of how we engage in trade in information, right?
So, something I think about is that the Roman army, for as much as they're known for engineering prowess, had more information specializations than engineering specializations. And it had to do with structuring, and making sure that different groups that spoke different languages, that could or couldn't write, could all trade in information in a way that allowed record keeping, dissemination, intelligence synthesis, et cetera.
So when I think about that cooperation, what comes to mind is the notion of requisite variety.
That is, the variety in the mechanisms of regulation, control, management, and coordination in a system needs to be roughly proportionate to the variety of states that the system is capable of expressing. And information and knowledge have infinite variety; intelligence has infinite variety — and not just infinite variety, but infinite-to-the-infinite variety, because there are infinite potential valid states, infinite ways to validate those states, and infinite ways to validate the validation, right?
So yeah, I think it very much is about how we cooperate with these structures. And as a last thing — then I'll shut up — I think that cognition is very expensive. We are very rarely at the wheel in terms of our own cognitive security; cognitive control is fleeting and mostly illusory.
So maybe "management" of AI is the wrong word, because we can't manage our own intelligence. How are we going to manage this?
Yeah, fair enough. Let's go to the question from the audience, please.
Yeah, please. Thank you. This is a great philosophical discussion, so I hope you can humor me with a financial question. I come from the blockchain space, and we have been able to distribute the value chain of services across multiple participants through economic incentives: tokens, gas fees, mining. With generative AI, I've been told there are only maybe around four companies that have the resources to train models, and I've even heard that training can cost a hundred million.
I don't know if that's a correct number or not, but if it is, then how can decentralized AI models be designed so that there are incentives for distributing, essentially, all the inputs that go into creating generative AI models today? — I'll answer this in two parts. First of all: anyone know what the largest computer in the world is?
Anyone? Anyone? Bitcoin. It's not just the largest computer in the world; it's the largest computer in the world by far. Yet it was written by, like, one dude — or maybe a team of people — and just released via a mailing list. But with the power of incentives, as you hinted at, it grew into a computer that has more power than anything else in the world. Now, in its case, it's not doing super useful computations — just hashing — but it's securing the network, which is actually very useful.
So in that sense, it takes way less compute and energy and money than the world's banking systems to secure that, and it's ultimately securing hundreds of billions of dollars of value, right? And that was just with the power of incentives. So you can ask yourself: what if we did proof of work, but rather than merely securing a chain, we somehow twisted the incentives to incentivize people to build super powerful models? And there are actually quite a few approaches to this.
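The incentive mechanism Trent refers to can be made concrete with a toy proof-of-work sketch. This is illustrative only: real Bitcoin mining hashes a structured block header with double SHA-256 against a compact difficulty target, not leading hex zeros as here.

```python
import hashlib

def proof_of_work(data: bytes, difficulty: int):
    """Search for a nonce so that sha256(data + nonce) starts with
    `difficulty` zero hex digits. Finding the nonce is costly;
    checking it is cheap -- that asymmetry is what the network's
    incentives reward."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work(b"block header", difficulty=4)
```

Raising `difficulty` by one hex digit multiplies the expected search cost by 16, while verification stays a single hash; the open question the panel raises is whether the work being rewarded could be useful model training instead of plain hashing.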
I had asked this question initially in 2013, and finally went for it in 2017 by founding Ocean Protocol, right? And at the same time helping to kickstart a field called token engineering, which takes mechanism design and incentive design and really casts it as an engineering discipline — working with Michael Z, who he works closely with, actually, and others.
So we really ran with token engineering, and then thought about how to decentralize AI toward these generative models. And in the meantime, the centralized AI folks went super far, super fast — like, impressively fast, right?
So, fast forward seven years: there were three OG projects in blockchain AI in 2017 — Ocean, Fetch, and SingularityNET. And we all started asking ourselves: crap, right? We've moved too slowly compared to what's going on in centralized land — what do we do about it? So we put our money where our mouths are and merged our tokens. Ocean, Fetch, and SingularityNET have merged to become an $8 billion entity, basically, where we're going for it.
We're basically in catch-up mode right now, but we can deploy serious capital to actually have a fighting chance for decentralized AI — decentralized large models, et cetera. And the reason we have a fighting chance is the superpower called incentives. — And you know, it's interesting: I thought about those four companies as a form of colonization of information space, and it's interesting, the notion of a non-colonial power being able to be assembled in a new space.
And so, yeah. It really aligns a lot with the identity notion. For years we've been talking about decentralization — what is that going to look like? What we're seeing in technology may be the liberating force to actually make it happen, because even though we've been talking about it for years, it's been hosted on the apparatus of large, powerful, centralized structures.
Yeah. — We do see in the commercial space that this centralization is also happening. Remember, at the end of the day, all these AI algorithms we are talking about require a physical processor to run. And we now all have a pretty powerful computer in our pocket that keeps getting better. We also see a renewal of interest in more powerful laptops and PCs, because those you can actually own and run yourself. We see a lot of effort to get the know-how of AI into the public domain, so there are a lot of open-source projects.
Llama 2, for example, is wonderful, and there are many others — some in Europe as well. And so, just as the AI software has learned our language — they speak English very well now and can do quite nice things — we should learn their language too.
I don't think we should be waiting for somebody to give us a so-called decentralized thing. I think we need to go acquire it ourselves. — Other folks?
Marina, do we have a question? — Yes, we have a question here, also from our online audience. Once again, thank you for engaging, to our audience online. And I would like to mention that if someone here in the room has a question, please feel free to raise your hand. The question is: with all the data that we are giving to the companies, how can we be sure that the algorithms they're using are transparent and verifiable? — It's a good question, actually.
I can talk a little bit about that, because — if you were at my presentation earlier — there's this little diagram of understanding how the sausage is made: we go through every single step, what information gets into it, et cetera. And so, we do give a lot of information, but not all of that information is very useful for AI training. AI training actually needs higher-quality data.
You've heard about garbage in, garbage out. So some of it is not very useful, at least for foundation model training.
But it is one of the factors: how the data goes in somewhere and becomes a model, how the model is tested — all of that, I think, is a very valid place for us, as a society, to understand how this is made. Is it being made safely, fairly, et cetera, right? So those are all very good. But I want to caution on this notion of controlling input data, because we tend to think of data as the data we type in — so-called structured data.
If we type into a form — oh, I'm releasing data. Now I'm sitting here talking, and there are, you know, I don't know how many gigabytes of data I'm releasing. All of these are data, and all of them are very useful for AI training. So let's keep in mind that, at the end of the day, we cannot control all data; we may be able to control only a very thin slice of it. — Yeah. So, on that caution: I think, first, there was naming — nomenclature — the instability of reference to object, right?
And we solved that with location addressability: the name of the object may change, but we know where it is, and then we can do version addressing, et cetera. And then, okay, what happens when location becomes unstable?
When our organizations are trading in information and we have different locations for the same object, using it redundantly, then it becomes content addressability — hashing, in addition to a variety of other mechanisms. But then we have content instability: we're trying to refer to what you just said in your last response, but from this angle and from that angle, and on somebody's cell-phone video, and we want to recognize that we're actually looking at the same thing.
Now we have to go to — and I think it's question addressability — we need modes of common reference. And I think there are a lot of discussions going on about this right now: how we establish those common reference systems for things where it's subjective whether we decide it's the same thing. A good example of this is record keeping for museums. One case is: we have the same object, but we disagree on fundamental attributes about it.
But we need to have a common reference, even though what the canonical name of the object is may be fully disagreed upon between, say, a British museum and an Indian museum, and there is no reconciliation, because they still need to be able to structure it. So there's that piece. And then, second, there's the fact that truth isn't scale-free; we can pinch and zoom.
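The names-to-locations-to-content progression RJ describes can be sketched minimally. This is a hypothetical in-memory store for illustration; real content-addressed systems such as IPFS add multihash prefixes, chunking, and networking on top of the same idea.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive the identifier from the bytes themselves: the same
    content yields the same address for every party, regardless of
    filename or location."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

store: dict[str, bytes] = {}

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data
    return addr

def get(addr: str) -> bytes:
    data = store[addr]
    # Retrieval is self-verifying: anyone can recheck that the
    # bytes match the address they asked for.
    if content_address(data) != addr:
        raise ValueError("content does not match its address")
    return data

addr = put(b"same object, whatever name each museum gives it")
```

Two institutions that disagree on the object's canonical name still compute the same address from the same bytes, which is exactly the stability content addressability buys; what it cannot resolve is RJ's harder case, where the "same thing" appears as different bytes from different angles.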
So, is gravity real as it's been described? I have some friends who are physicists who say no — but I do not want my architects to be confused about the nature of gravity and how it's applied, right? So I think there is a piece of this where it's not just the verification of the data, but mapping to what context we actually want it as training data, if that makes sense. — Nice. — I think that transparency, verifiability, and unbiasedness are losing games. You can improve things a bit, but you'll never get that far, and it's not going to help that much in the end.
And we have to be honest with ourselves about it. It's very politically incorrect of me to say this, but I have to be blunt and candid about it. So let's give some examples, right? If you have an AI model that's, say, 10 layers deep with, I don't know, 10,000 weights per layer, you can see all you want — you can see the exact floating-point value of every single weight, you can draw pretty pictures of what's going on, and maybe, if it's a CNN, you can see some images and such — but overall, you're not really going to know what's going on, right?
So it's not going to be that transparent. In terms of verifiability: yes, AI is explainable — just not in a language we understand.
— Well, actually, I'd just say: yes, but it's not in a language that matters. — It's not in a language that matters at all, to anyone, in my view, right? So let me elaborate. Overall, that's an example for an AI model. And then in terms of verifiability: sure, you can verify that you have clean data going in, but if it goes through something you don't understand, then it doesn't really matter, right? Clean data going into something you don't understand means you don't understand the output anyway. It's a black box.
And then in terms of bias, too: you'll never get to a system that every human will agree is unbiased. Let me give examples of all three for humans — transparency, verifiability, and bias. For transparency: we don't see what's going on in our brains, yet we trust ourselves, right? We have friends; we have no idea what's going on in their brains, for sure, but we trust them if we know them well. So transparency and trust are two different things. Or, if you're driving a car: maybe someone knows how the motor works, but you don't, and you trust it anyway.
Right? There are lots of things we really don't know, and the human brain is the best example. For verifiability: we definitely have no mathematical proofs or numbers for how our brain works, but it works, right? It's an existence proof. That's it; that's all we need. So we don't need verifiability of neural networks via fancy math theorems.
It just doesn't matter. And in terms of bias: ask one religion to agree with another religion, and they will agree maybe 20%, maybe 50%, right? Or any two humans: we can agree on a few basic things — don't kill people, a few others — but after that it's wide open. We can disagree on 80% of things.
So then imagine that a government stipulates: hey, you need to have an AI that is unbiased — like the European regulation just passed. It's actually impossible to comply with, because one person might say it's good and another might not, right? So I think we need to get off our politically correct high horse and think about what's actually practical and what the real problems are. — There's no bright line between bias and expertise. — Thank you. Yeah.
So, just because this is also the title of our session, right — it is about verifiability. When I read the title of our session, I remembered the keynote yesterday: the first speaker, Mr. Inger, changed his title. So I was wondering — I think we should change our title too.
And it would be: is decentralized AI better than centralized AI? Because what Trent is saying is actually that the title of our session is a moot point — there's no point in the verifiability. However, to the point of verifiability, I have to say something. There are two concepts: verification and validation, right? When it comes to AI: intelligence is the ability to achieve complex goals. And which are the goals?
Kill all humanity, or enable humanity — each and every individual — to flourish according to their abilities? In order to know that the AI is on the path to achieving those goals, we can do validation; that is what relates to goals. Verification is one step below: once I have set the goal, is the system doing the operations needed to achieve it? But validation is about the system as a whole — and again, Max Tegmark has a lot of work on that, and I invite you to read it: verification and validation.
Sure, yeah. I would add — there's this philosopher, I'm trying to remember, Wittgenstein — who said that the limits of my language are the limits of my world, and almost all important problems are terminology problems. And with that notion, I think you can see where the debate is going: we need to readjust what these words mean. What is verifiability? What is understandability? It used to be that people insisted: explain this to me.
Well, I can print out all the parameters for you to read — what does that really explain? Not much. And I would say any intelligent system is fundamentally not explainable; otherwise it wouldn't be intelligent.
It follows from complexity theory: it is not reducible, right? So we have to accept that, and then come to a new notion of explaining — like a person explaining — and this is the work we need to do: make them explainable in that fashion. But there is one thing we can say about centralization, which is the system itself: we don't want a Matrix — I hope we all agree. We don't want the one single system that happens to know everything while no one knows what's going on inside. And that, I think, is very fundamental.
There are two things that I wildly agree with. One — and this maps to what I wildly agree with in what you're saying, which is not wanting the Matrix — there is so much talk about removal of bias. As I said quickly before: there's no bright line between bias and expertise.
And I think we should want bias — just not in the way people are using the term. There needs to be variation in terms of priorities and goals.
And there shouldn't be convergence across AI agents on the same answers. That's bias, right?
I want a certain agent that's biased toward representation in maths; I want another agent that's biased toward representation in, let's say, legal engineering.
So we want the kind of bias that avoids us converging on one giant — you know, we already tried single-vendor lock-in with information during the Middle Ages. It didn't work very well.
And I'd rather we not do it again. So yeah, I think it comes back to an ontology problem, because what we're really looking at is how this thing achieves its goals — intelligence — under what conditions, and what its reachability is, which we're able to do with black boxes: we write requirements.
So why aren't we talking more — I see this in conversations about AI safety — why aren't we talking more about writing requirements around the black box, rather than trying to verify what's in it? That's what we do with any other system that's represented as a black box.
We write and validate requirements. But I guess it's hard with natural language. Yeah.
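RJ's suggestion — validating requirements around the black box instead of inspecting its internals — can be sketched as executable property checks. Everything here is a hypothetical stand-in: the model is an arbitrary opaque function, and the three requirements (bounded output, determinism, monotonicity) are examples, not a real specification.

```python
import math
import random

def check_requirements(model, trials=1000, seed=0):
    """Probe an opaque model against behavioural requirements,
    treating it strictly as a black box: we only call it, never
    look inside."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.uniform(-100.0, 100.0)
        y = model(x)
        assert 0.0 <= y <= 1.0, "requirement: output is a probability"
        assert model(x) == y, "requirement: same input, same output"
        assert model(x + 1.0) >= y, "requirement: monotone in x"
    return True

# The black box under test; its internals are irrelevant to the checks.
def opaque_model(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x / 50.0))

ok = check_requirements(opaque_model)
```

Random probing like this can only falsify requirements, never prove them, which mirrors the panel's point: we validate behavior at the boundary rather than explain the mechanism inside.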
Ah, I'm sorry — we're already at the time to finish the session. I know we could keep talking, going on and on, but I would like to say thank you very much. All your words were very interesting.
No — I think this one is not correct. It is not correct. Yes. Can I add one final sentence? There are great upsides to decentralized AI; verifiability and unbiasedness are just not at the top of the list. Thank you so much. — Please thank the panelists. Thank you so much.
Yes, thank you so much. It was really, really interesting. I believe even this much time is not enough for this topic. Thank you so much.