Event Recording

Prof. Dr. Christoph von der Malsburg: AI Beyond Deep Learning



So far, AI has relied entirely on human intelligence, in the form of human-written programs in classical AI or the human-provided sample data of deep learning. The pursuit of AI over the last five decades has been caught within a fixed conceptual framework. Given the current level of tremendous attention, investment, technological infrastructure, and application potential, we may be just one fundamental change in perspective away from a tremendous technological explosion.

Thank you. Here is what you can expect from me: I will put AI in the context of the digital revolution, talk about the strengths and fundamental limitations of AI, and discuss a specific idea for how to overcome those limitations. As you know, the world is growing a nervous system in terms of communication, sensing, computing, data handling, storage, and process automation; the previous speaker has just talked about that. This is what we call the digital revolution, and this environment created a perfect storm for AI: hardware support in the form of graphics processing units, originally aimed at the computer-games market and made possible by breathtaking developments in computer graphics, and the World Wide Web, which provides masses of web-based data, which turned out to be the necessary ingredient of modern artificial intelligence.
That made it possible to scale up an idea published in 1960 by Frank Rosenblatt: the perceptron, a simple network that leads up to decision units, deciding for instance on the presence or absence of a particular pattern, a dog, say, in the input. It was scaled up to the system that appeared in a 2012 paper by three authors, including Geoff Hinton: a perceptron-style network, now known as AlexNet, with 650,000 neurons and 60 million parameters, the connections between the neurons. What was exciting about it was how far it reduced the error rate in classifying objects in images. The system has a thousand output units, so it can decide among a thousand different categories, and its error rate was much lower than anything before. This created a feeding frenzy, best illustrated by a chart of the processing power invested in various cutting-edge systems, starting with AlexNet and ending with AlphaGo Zero, DeepMind's version of a system playing the game of Go.
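The perceptron idea the talk describes can be sketched in a few lines. This is a minimal illustration of the classic mistake-driven learning rule, not Rosenblatt's 1960 formulation or AlexNet; the toy data and learning rate are invented for the example.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights only when the decision is wrong."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:          # label is +1 (pattern present) or -1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            predicted = 1 if activation > 0 else -1
            if predicted != label:        # update only on errors
                w = [wi + lr * label * xi for wi, xi in zip(w, x)]
                b += lr * label
    return w, b

# Toy "pattern present / absent" decision on 2-D inputs (linearly separable).
data = [((2.0, 1.0), 1), ((1.5, 2.0), 1), ((-1.0, -1.5), -1), ((-2.0, -0.5), -1)]
w, b = train_perceptron(data)
predictions = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
               for x, _ in data]
print(predictions)
```

On linearly separable data like this the rule provably converges; deep learning's contribution was scaling this decision-unit idea up by many orders of magnitude.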
This is a breathtaking development, because the processing power that went into those systems had to be increased by a factor of 300,000 over just a few years to reach that point. This amounts to a doubling of the computing input into cutting-edge systems every 3.4 months, much faster than Moore's law. Here is one more data point. The newest conversation piece is GPT-3, as I am sure you know, a system developed by OpenAI, a company backed by Microsoft and originally co-founded by Elon Musk. The system is able to take a sentence, a question, and then continue writing text that sounds very reasonable, in very good English, and stays on topic. It is an impressive thing. But look at this number: the newest system took in 300 billion tokens, which are essentially words; a word may be one, two, or three tokens. 300 billion.
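The quoted doubling time can be checked with back-of-the-envelope arithmetic. The roughly 5.2-year span is an assumption here, based on AlexNet (2012) and AlphaGo Zero (late 2017) framing the chart the talk refers to:

```python
import math

growth_factor = 300_000                    # compute growth across the chart
doublings = math.log2(growth_factor)       # ~18.2 doublings
span_months = 5.2 * 12                     # ~2012 to late 2017 (assumption)
months_per_doubling = span_months / doublings
print(round(months_per_doubling, 1))       # consistent with the quoted 3.4

# Moore's law, for comparison, is one doubling roughly every 24 months.
```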
If you read at a speed of 10 words per second, continuously, 24/7, you would have to read for a thousand years to take in this amount of text. The system has 175 billion parameters, and a petaflop machine, a high-performance computer that performs 10^15 floating-point operations per second, would have had to run for 3,640 days to compute the learning task. This is the end of the rope for this field; I do not think it makes sense to go beyond these numbers. The field is now served by a downloadable software base; young people are very familiar with the relevant names, which refer to packages of Python code produced by various organizations. Young people take a few weeks to get trained in this, and they are sucked into the field as into a maelstrom.
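The two big numbers in this passage hang together arithmetically. Here is the check, using only figures from the talk; the total-FLOP count is implied by the 3,640 petaflop/s-days rather than stated:

```python
tokens = 300e9                   # training tokens, read here as words
words_per_second = 10
seconds_per_year = 365 * 24 * 3600
reading_years = tokens / words_per_second / seconds_per_year
print(round(reading_years))      # on the order of a thousand years

petaflops = 1e15                 # FLOP/s of the hypothetical machine
days = 3640
total_flops = petaflops * days * 24 * 3600
print(f"{total_flops:.2e}")      # ~3.14e+23 floating-point operations
```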
Now, what do we get from this type of AI, deep-learning AI, as a tool? I think its greatest significance was to break down the wall that had always stood between the ordinary world, now called the analog world, and the digital world: systems are able to recognize objects, or digits, or anything shown to a camera, and to convert speech to text, something you use every day on your smartphone. A largely untapped and great potential is data analytics, especially in medical diagnostics, but also in industry, in Industry 4.0, for the preventive diagnosis of machines. This is certainly going to develop over the next decade. And then there is natural language processing, translation between different languages, very impressive. But I must say I have mixed feelings about it, because although the language that is produced sounds very reasonable,
it is clear that the system does not understand what it is talking about. Let me say a word about the current research environment. It is totally dominated by benchmarks, narrowly defined goals on which, in order to be a player, you have to beat your competition in terms of numbers, for instance recognition rates. The field is totally concentrated on input-pattern statistics as the only means of structuring a system: apart from the initial setup, your system is structured essentially by the statistics of input patterns. And of course brute-force learning: you must have seen 10,000 dogs in order to recognize one of them; the world, the phenomenon to be analyzed, has to be shown in all detail during learning time. This conceptual fixation has turned the field into a ride in a tunnel.
The tunnel is defined by the highly honed software and hardware environment, the GPUs you will remember, and by education programs; young people are just sucked into this field. And yet I think one can say the field is past the peak of its hype. Gartner, with their famous hype cycle, put deep learning here. If you read business reports, it is becoming clear that the cost of collecting and curating data is a burden, and so there is a certain sobering in the field. Interestingly, Gartner puts autonomous driving here and predicts it will take another 10 years. You may have noticed that Mercedes has just given up on being a player in the race for totally autonomous, level-five cars. This is probably not feasible on the basis of present-day technology. So let us think back to what intelligence is about.
I find this definition very convincing: achieving general goals in varying contexts. How does present technology measure up to it? Both the digital world in general and AI in particular are characterized by systems of narrow scope, pursuing narrow goals and working only in rather narrow contexts. If the input leaves that context, the systems break down. That, I think, is the main limitation of the field. For instance, in autonomous driving the Autobahn or the road across the country is no problem; it is handled with great security. But as soon as what they call a corner case, an unusual situation, crops up, the system is helpless; it cannot cope with it. This is due to some fundamental problems of AI that need to be overcome.
It is of course necessary for a full-blown system to integrate subsystems: different senses, as in our brain, different data sources, memory, motor control, action control, and a goal hierarchy that resides in the system and does not need to be imposed from the outside, application by application. Systems need to get better at generalization, learning from few examples. They need situation awareness in order to cope with varying contexts. And, something that is not even spoken about, they will need a representation of the current situation, something your brain produces in every second of your life. In order to achieve all of this, systems need to be scaled up. Think in terms of numbers of synapses: GPT-3, which I have just discussed, has 10^11 parameters playing the role of synapses; the brain has 10^14. OpenAI has just scaled this system up by a factor of a thousand.
It was 10^8 before, but they cannot repeat this feat; such a jump is simply not feasible again, more for conceptual reasons than for economic ones. So where can we go from here? Let me just cite one of the gurus of the present field, Geoffrey Hinton, with whom I already discussed these matters when he did his PhD in Edinburgh a long time ago. He says about deep learning, the thing based on backpropagation of error: "My view is, throw it all away and start again." How can we start again? I think that in order to find the path forward, we have to take the brain for guidance. You may think: oh God, this is a show-stopper, the brain is totally unattainable. That is not my perspective. I will discuss just one point about it, which I think is key and can open the door to future development.
Let me talk about the information content of the brain. Describing the brain's wiring takes 10^15 bytes, a petabyte, of information. Now, how does this enter the brain? Your genetic information, on the basis of which your whole organism, including the brain, has been created out of a fertilized egg, is one gigabyte: 3.3 billion nucleotides. That is all that goes into the construction of your whole organism. You may say: well, the bulk of the information enters the brain through the senses. Not so. We raise our kids in very stable, confined, and simple environments, which could easily be reproduced in the style of computer games with virtual reality, at a cost of a few gigabytes of program code. And yet kids at the age of three or so are able to represent their immediate surroundings, act in them purposefully, and be conscious of them, and they learn from single examples; they can generalize.
So what about this bottleneck? Gigabytes go as input into the brain, yet describing its wiring takes a petabyte. Where do the remaining 99.999% of the information come from? You may think this is a great mystery, but in reality this question was studied by researchers 40 years ago on a particular example: how the fibers growing from the eye in the developing embryo find their proper locations in a brain structure called the optic tectum. In early stages the connections are a complete jumble, totally dispersed, but after only a few hours or a day you get a perfect image: neighboring points in the eye connect to neighboring points in the tectum. This was an intensive field of research; hundreds of experiments were performed to test various theories.
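The bottleneck claim is pure arithmetic on the two storage figures. With the talk's round numbers the missing fraction works out to 99.9999%; the spoken "99.999%" is the same point one digit short:

```python
genome_bytes = 1e9        # ~3.3 billion nucleotides at ~2 bits each, ~1 GB
wiring_bytes = 1e15       # ~1 PB to describe the brain's connectivity
fraction_from_genome = genome_bytes / wiring_bytes
missing_percent = 100 * (1 - fraction_from_genome)
print(missing_percent)    # share of wiring information not in the genome
```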
I am proud to be the author of the winning theory, together with my friend David Willshaw, also from Edinburgh. It is now a system that can be crisply described by mathematics and is well understood. The general theme is that a network of neurons talks to itself by way of its signals; the signals modify the connections through something called synaptic plasticity, changing the network until an attractor state is reached. So the brain is dominated by network patterns that are attractors of this process, which is called network self-organization. The brain is an overlay of lots of such highly structured network patterns. Let me talk about one application: face recognition. We know that a face comes into the brain in the form of a two-dimensional field of texture elements, neurons that respond to local textures.
We know that these neurons are laterally connected, so they form a network. According to this idea, which I have successfully deployed in two companies, one in Bochum in Germany, the other in Los Angeles, recognizing a face requires a model: another network with the same structure as the input network, except that it is static, does not move, and resides in a part of cortex called the fusiform complex. And there is a mechanism for finding out the similarity, the identity of structure, of these two networks. That mechanism has the form of fibers that, on a fast time scale, form a network connecting corresponding points; one calls this process homeomorphic mapping. So the process is dominated by networks that are fluidly created: the face network moves as quickly as the eye moves, or as quickly as the person moves.
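The correspondence-finding step can be illustrated with a toy graph match: a small grid of feature vectors (the stored model) is located inside a larger feature field (the input) by minimizing total feature distance. Real systems of this lineage, e.g. elastic graph matching with Gabor jets, also penalize geometric distortion; this sketch keeps only the correspondence idea, and all data are fabricated:

```python
import random

random.seed(0)

def feat():
    """Stand-in for a local texture descriptor (e.g. a Gabor jet)."""
    return [random.uniform(-1, 1) for _ in range(4)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Stored face model: a 3x3 grid of feature vectors.
model = [[feat() for _ in range(3)] for _ in range(3)]

# Input field: 6x6 random features with the model pasted in at offset (2, 1).
image = [[feat() for _ in range(6)] for _ in range(6)]
true_offset = (2, 1)
for r in range(3):
    for c in range(3):
        image[true_offset[0] + r][true_offset[1] + c] = list(model[r][c])

def match(model, image):
    """Slide the model over the field; pick the placement of least cost."""
    best, best_offset = float("inf"), None
    for dr in range(len(image) - len(model) + 1):
        for dc in range(len(image[0]) - len(model[0]) + 1):
            cost = sum(dist2(model[r][c], image[dr + r][dc + c])
                       for r in range(3) for c in range(3))
            if cost < best:
                best, best_offset = cost, (dr, dc)
    return best_offset

print(match(model, image))  # recovers the offset where the model was embedded
```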
And the connecting networks move just as quickly. The networks are composed of little network fragments that were created during learning time. So in order to go forward, the field has to transform from thinking in terms of individual neurons that classify individual things to thinking in terms of active networks that self-organize on a fast time scale and organize whatever goes on in the brain. So let me come to my conclusion: AI is caught in a tunnel; a fresh start is needed; the brain has to lead the way, that is my conviction; and the brain is dominated by self-assembled networks. Thank you.
