Event Recording

Mark Stephen Meadows - Why Bots Need License Plates

The consumer experience is changing radically, and globally. Today, AI-powered bots in the form of chatbots, voice assistants, and avatars are responsible for the majority of traffic on the web and in conversational systems. CPA chatbots and accountant avatars are now telling us what to do with our money: what to buy, where to buy it, and where to invest. How can we trust these bots? After all, they are made by humans, and sadly not all humans have your best interest at heart.

These bots need license plates. Bots and AI need authentication. They need regulation, transparency, and systems we can trust. Luckily, blockchain solves many of the problems that AI presents. This keynote looks at how this authentication is implemented and at the implications for the coming decade, from know-your-customer to anti-money laundering to new economic models around the democratization of AI.

Yeah, it's great to hear, and thank you very much. Well, as an American, I'll start with an advertisement, actually, to try to contextualize what's happening up here. Botanic provides the tools for our customers to deploy multimodal bots. This includes chatbots, which do text back and forth. It also includes assistants such as Alexa, Cortana, and Siri, these kinds of voice-driven systems that you interact with via voice. But we also provide video chatbots, conversational avatars. Microsoft is one of our biggest customers, and we've been doing multiple deployments on the Skype bot platform. So just as when you video chat with a person, the bot can see you and respond, and you can see it. And that's great because that gives you a graphical user interface, and, as we all know, a system like that can record.
For myself, it has been 15 years in San Francisco, primarily in the artificial intelligence space, about 20 years in VR, and five or six in blockchain, and we are coming across increasingly advanced problem sets, such as the fact that bots are now able to do what many people can do. And that presents ethical problems. Because if you don't know whether you are speaking with a bot or a person, you don't know how much you can trust it. As these systems manage our financial data, as well as our healthcare data, we need to build trust mechanisms of a range of different sorts. We need to know what these systems are doing with our data and why. So let's first try to clear up and separate a couple of things. A bot is not artificial intelligence, just like a web page is not a database.
A bot is a user interface to artificial intelligence, a conversational user interface specifically. As I mentioned, we build bots, and this includes systems that are omnichannel, so you can move from your car to your home to your mobile device. We've done projects with Logitech where you have multimodal omnichannel systems; it's like a little AI that follows you around, and you can talk with it. We like to divide these into three different categories or modalities: chatbots, which work with written words; assistants, which include an automated speech recognition or voice recognition component and text-to-speech output; and then, as I mentioned, video chatbots. Nobody really wants to talk with a robotic robot. We would all prefer to have some sense of recognition of who we are as individuals, and to know that the system we're speaking with is adaptable enough and contextually relevant enough to help us complete at least a task, if not befriend us while we're lonely.
So here's what we've come across. My aunt is a Tennessee tobacco farmer, and when I started Botanic about seven years ago, she said to me, Mark, you've got two ears and one mouth for a damn good reason. And I thought about that. It seems as though bots, even though they don't have two ears, do have eight microphones, at least in the case of Alexa. That's probably because what these systems are actually doing is collecting massive amounts of data. Now, let's remember, and I'm going to come back to this, that end user data is, as of today, the single most valuable commodity on the planet, more so than oil or wood or whatever else, as we can see by the measurable size of companies like Google and Facebook that are built upon our user data. So that has proven to be valuable. Now, Amazon is evolving that model: they're not just collecting end user data, they're collecting business data too. How many of you have a Facebook account? It's easy, just put your hand up. Now, how many of you have a Google account? Okay. Each hand represents roughly $150 of your value per month. So if you have two hands up, that's about $300, call it 250 euros, of value per month for those systems. Amazon is listening all the time. And the proof of that is that there's a wake word.
So the question is whether Amazon is actually keeping that data, and what they're doing with that data. Now, I myself do not have an Alexa in the house, because they have had multiple class action lawsuits leveled against them for invasion of privacy since 2011. We'll see what happens, but Alexa could perhaps be keeping your data, or it might not. What's also important to recognize is the middleware stack that exists on top of that (excuse me, I need some water here). On top of that we have other issues, such as the Alexa skills store, to which multiple people, entities, businesses, and collaborative communities are contributing skills. And when you interact with Alexa, with over 33,000 different possible tasks that can be completed with that system, you don't know which entity you're actually interacting with and what is happening to your vocal data.
By the way, as a digression: vocal data is a lot richer than text data. We've been typing into web pages for a couple of decades now, and as that private end user data has been collected, it has obviously been valuable. But you can tell from my tone of voice right now that I need lots of water; you can also tell that I'm male, in my late forties, and grew up in the United States. And after about 30 seconds, most voice recognition systems, using over 200 vectors of data, can also identify whether the end user is drunk. So there's a lot of data that these systems are soaking up. Okay? Now, this is part of the reason why the chatbot market is going to grow. I'm going to shut up for a second and let you read some stats. Okay. Roughly three times growth of the chatbot market, not necessarily the assistant market, in the next three years. Right now we have something like 30 million American users, or roughly 10% of the American population, with Amazon Alexas in the home. They're paying about $60 or $70, up to about a hundred bucks, for that system, depending on whether it's the Dot or the Echo. But what Amazon is doing is the same thing Facebook did: giving away tools that are basically instruments for collecting end user data.
So the market is going to continue to grow. Now, there's an issue that Amazon has right now, and I'm only picking on Amazon because they're just one; I definitely prefer Apple and their end user data management policies. But consider the fact that right now Amazon is not able to conduct international money transfers. It is illegal right now for Amazon to send money. About four or five years ago at Botanic, we said, okay, we've got bots, people can speak with bots, they're going to want to buy plane tickets, they're going to want to interface with their bank. We had done some work with USAA and Bank of America. And we began to see that blockchain has great solutions, whether for anti-money laundering compliance or know-your-customer compliance, and we began to look at the implications of blockchain for bots.
So we put in a patent filing for a voice-to-blockchain bridge, and we did some other exploration around IP. We have a strong patent portfolio, but we've always been very defensive; other people build within that space, and we don't make a living by twisting elbows. What we do like to do is be very quick to say, oh, we've got an idea, let's put this down, let's share that idea with other people. I'll come back to this in a second. So we came across trust mechanisms. Authentication is certainly one of these, and we've also found that social trust is important, but where we've been focusing, in terms of our design and our technology implementation, is the psychological components of trust. This is Andy. This is what we did with Microsoft, and I think it was November that we rolled this out.
Andy helps end users who are interviewing for a job. Andy says, oh, I can't see your face because there's too much light behind you, or straighten your tie, or wipe your nose. And the system can talk with the end user in a chatbot mode or in this video mode. We got some press; here's a user flow on the left. The system is actually, though we don't show it there, analyzing emotion history, so it can tell whether you're looking stressed out, or it can tell whether you're pissed off. It's able to identify, from the voice as well as the face as well as the words, how you feel, with gradual redundant data layers, which are great. But that system is actually working with the human limbic system, which hasn't really changed in a very long time. Our brain is like legacy hardware; we all have this stuff between our ears. The very reason we're all in the same room right now is that we trust somebody more when we can see their face, and we can trust more when we can hear their voice. So by having a video chatbot, we're actually building a mechanism of establishing trust on a psychological basis with the end user.
So, you know, back in the day, the farmer would say hi to the shepherd, and they would wave, and they would do this thing. And you could tell: if I'm standing like this, it's different than if I'm standing like this, and that counts. So we have to build that into these artificially intelligent systems. These days, part of the way that we establish that trust is by using these technologies: for the input, on the left, natural language understanding, automated speech recognition, and computer vision, respectively for the words, sounds, and images; and for the output, natural language generation, text-to-speech, and animation. I put this as a preface to authentication of a technical sort, because if we don't have trust built in prior to the technical authentication, then people aren't necessarily going to understand whom they're talking with. And that identification is key to authentication, which is key to trust.
Here we have an input node; this is just a mockup of an example of how computer vision works. Carol, on the left, is trying to stop smoking cigarettes. The bot on the right here is helping her do so. And it can see her via computer vision; that's also part of the data being collected. What I'm trying to do is warn you that these systems are highly surveillant. You'll be talking with these systems; you'll be speaking with software robots via Skype and Signal and Messenger and WhatsApp and Kik and Slack, and pretty soon on YouTube. And you need to recognize that they can see you and they can collect data on you. And if, like Facebook or Google, you have access to that service for free, ask yourself who's giving it to you and why, just as if someone were to walk up to you on the street, hand you a drug, and say, here, take this drug.
You should ask yourself, what is their motivation for giving me the drug? Is it beneficial? Are there side effects? Is it detrimental? Is it addictive? Maybe it's going to heal me; maybe it'll hurt me. The AI systems speaking with you today are highly surveillant, especially when they do things like use computer vision, and especially within the context of Facebook's and Google's massive revenue streams. There are more than 100,000 chatbots on Facebook Messenger. Great. Again, they'll be replacing us. So what we need are trust mechanisms that are not just psychological but also technical. This is the crux of what I'm up here to talk about right now: bots need to have license plates. We need to treat them like cars. They are potentially fantastic; they can take us places very quickly, and, like a carapace or some kind of prosthetic, they offer these psychological extensions of who we are.
That's awesome. But they're also extremely dangerous, like cars, and they can cause big accidents, especially if you're trusting a bot to manage your financial data or your healthcare data. So these license plates are key to identifying each bot as its own unique entity, just like a person. Because if you got a text message from a person saying, hey, go ahead and send me your money, you'd be like, what are you talking about? Right? Those things exist on Facebook today. They run around in the feral wild, and they can spam and scam and phish and spoof as well as any human can, only a lot better, because they don't get tired or psychologically exhausted from it. So they need authentication. Quickly put: I go to the airport in San Francisco and I give them my airplane ticket and my passport, and they say, okay, we've identified you.
We can verify that you're Mark. And then they've authenticated me once I finally go and get onto the airplane. Okay. We can take the same basic approach with bots, and that's not even including things like verification, or certification if a bot gets modified. Now let's consider a couple of ways we can do this. I won't go into great detail, but some of the mechanics of this are important. We just looked at OAuth and OpenID a couple of minutes ago with the previous speaker, so let's also look at some other methods: Keybase.io, UniquID, ShoCard, and Blockstack. These are other methods, some of which rely on blockchain. Now, Evernym is the last one, and that's the one I'd like to draw your attention to in particular. This is what's called self-sovereign identity management: I don't need to rely on a third party for authentication.
I can say, I'm Mark. And as long as I talk with Martin and John and Elsa, I'm able, via that accumulation of interactions, to build my own identity and manage it. Okay. Evernym is also supported by our investor, Outlier Ventures, as a blockchain company. These are all methods by which we can authenticate bots. What they do is allow security assurances, basically ways in which you can say, here's how user data is being protected, here's how the brand identity of the bot that's representing the bank is being protected. It establishes trust in dialogue markets and in social rankings, where we have verified, trusted identity, et cetera. These things are all key to building social trust mechanisms as well. Now, I'm going to run quickly through this, but blockchain is important because, ultimately, it is a social trust mechanism.
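The "license plate" idea described above, checking every bot message against a registered identity, can be sketched minimally as follows. This is my own illustrative toy using a shared-secret HMAC; the registry, bot names, and scheme are assumptions for the sake of example, not Botanic's, Evernym's, or any listed vendor's actual design:

```python
import hashlib
import hmac

# Toy "license plate" authority: maps a bot's ID to its registered signing key.
REGISTRY = {}

def register_bot(bot_id: str, secret: bytes) -> None:
    """Issue a license plate: record the bot's signing key with the authority."""
    REGISTRY[bot_id] = secret

def sign_message(bot_id: str, message: str, secret: bytes) -> str:
    """The bot attaches an HMAC tag to every message it sends."""
    return hmac.new(secret, f"{bot_id}:{message}".encode(), hashlib.sha256).hexdigest()

def verify_message(bot_id: str, message: str, tag: str) -> bool:
    """The recipient checks the tag against the key on file for that bot."""
    secret = REGISTRY.get(bot_id)
    if secret is None:
        return False  # unregistered bot: no license plate, no trust
    expected = hmac.new(secret, f"{bot_id}:{message}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

A recipient would accept only messages whose tag verifies, so a spoofed or tampered message from an unregistered bot fails the check. A real system would use public-key signatures or blockchain-anchored identifiers rather than shared secrets, but the verification flow is the same shape.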
It's a ledger. Blockchain is not a database; it's a distributed ledger that we can read together. That's an important thing to note, because those social components of blockchain are what make it valuable as a technology. What we found is that blockchain is excellent for building not only authenticated, trusted, and socially contextualized bots; about two years ago we also stumbled across an interesting solution to this Facebook problem, or this Alexa problem, let's call it, in which your data is being vacuumed up by an oligopoly operating under what appears to be a feudal system of surveillance capitalism. In other words, we need to democratize artificial intelligence. We need to not just authenticate bots; we need to recognize that all of the data we give to these very powerful AI systems, whether they're Google's or Facebook's, is data that we ourselves should also be remunerated for.
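The point that a blockchain is "a distributed ledger that we can read together" can be illustrated with a minimal hash-chained ledger. This is a deliberately tiny sketch of the tamper-evidence property, with made-up field names, not any real chain's data format:

```python
import hashlib
import json

def add_entry(ledger: list, data: dict) -> None:
    """Append an entry that commits to the previous entry by hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    entry_hash = hashlib.sha256(payload.encode()).hexdigest()
    ledger.append({"data": data, "prev": prev_hash, "hash": entry_hash})

def verify_ledger(ledger: list) -> bool:
    """Anyone holding a copy can recompute every hash and detect tampering."""
    prev_hash = "0" * 64
    for entry in ledger:
        payload = json.dumps({"data": entry["data"], "prev": prev_hash}, sort_keys=True)
        recomputed = hashlib.sha256(payload.encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != recomputed:
            return False
        prev_hash = entry["hash"]
    return True
```

Because each entry's hash covers the previous one, altering any historical record breaks every subsequent link, which is exactly what lets multiple parties "read together" and agree on one history.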
Now, that $300 a month, or let's say $150, just like the Facebook account, is data that we believe you can get paid for. And blockchain now provides a mechanism by which we can do that. So we're going to authenticate the bot. You talk to the bot; you say, oh boy, I can't balance my checking account because I have blown out my credit because I have three kids. That data is all valuable, and now you can be paid via microtransactions when another third party uses that data via blockchain. This system that I mentioned is built on the SEED token; Botanic has been a partner for about a year, developing the system. What the SEED token allows, by the way, is that analysts, developers, deployers, and advertisers are all able to interoperate with end users via bots, with an exchange of authenticated bot identity to manage financial data. It also allows end users to protect the privacy that they choose, or to share their data. It's GDPR compliant, it uses blockchain, and it's so new that we have years of work ahead of us. So, just to wrap it up: the problem is that we need to trust bots, because they manage our financial data, they manage our health data, and they progressively represent who we are. And the solution is trust mechanisms that function on psychological, technical, and social levels. So, I'm Mark Meadows. I'd love to talk with you about building bots and AI systems; the financial use cases for these systems are legion. I'm also going to be here talking later this afternoon on the intersection of artificial intelligence and blockchain. Thank you very much, Melissa. Appreciate it.
You're frightening the life out of me now. I'm afraid that the next speaker has not actually been found, so we are going to have an extended break. There's a gentleman putting their hand up, a question for Mark.
Being here in Europe and with GDPR in place...
Sorry, it's the gentleman over here.
Being here in Europe and with GDPR being, or coming, into place soon: how do you see that with the bots, especially the ones you mention, like Alexa, which are listening all the time and gathering data? How does the future look?
So the question, just to make sure I understand: GDPR is coming up, we're going to be required to abide by this system; how does this interoperate in terms of bots?
Correct, exactly. I mean, listening without consent, without knowing exactly what they're listening to and what data they are capturing, puts them at risk. How do you see that going in the future? One trend is data privacy, and the other is capturing as much personal data as possible.
Yeah. I think we're going to see a range of different systems implemented. China has a very different approach than Germany, for example, and the US is somewhere in between. We're looking at specific options, and we're actually collaborating with Evernym on some of these for several large customers, such as, for example, an English company that has something like 15 separate companies under the same brand. They need to provide end users with the opportunity to say, I'm willing to share my healthcare data but not my financial data, and I'll share my travel data, but I don't want it to know my phone number. That information gets registered into a user state entry, and that user state entry is then, in some cases, used by the bot to build preference data for the end user, and in other cases not. That is an opt-in method.
It's basically a semipermeable membrane of information. In other cases we're using much stricter technical methods, such as obfuscation: you might have seen a person being interviewed whose face is blurred out and whose voice is digitally altered. So there's obfuscation; there's differential privacy, in which you have groups that are sampled to build a general trend while the individual is still not known; and thirdly, my favorite, we're doing a project for Monash University right now using an open source messaging platform called Signal. The end user's data is encrypted, sent over the line, and decrypted on the other end, entirely open source. It's not Telegram, it's Signal, so it's fully open source. On the other side the message is decrypted, the bot can then read what the user is saying, and their identity is protected along the way. So, three different methods. Okay, thank you very much indeed for that.
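The differential privacy method mentioned in this answer, publishing a noisy aggregate so the general trend is visible but no individual's record is, can be sketched like this. The Laplace mechanism via inverse-CDF sampling is the standard approach for counting queries, but the function names and the epsilon value are my own illustration:

```python
import math
import random

def noisy_count(records, predicate, epsilon: float = 1.0) -> float:
    """Count records matching `predicate`, then add Laplace(1/epsilon) noise.

    A counting query has sensitivity 1 (one person changes the count by at
    most 1), so noise scale 1/epsilon gives epsilon-differential privacy
    for this single query.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace noise via the inverse CDF of a uniform in (-0.5, 0.5).
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

A data holder would publish `noisy_count(users, lambda u: u["smoker"])` rather than the raw count: the aggregate trend survives, while any single user can plausibly deny being in the data.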
