ALAN WINFIELD: Thank you very much indeed. It's really great to be here. And thank you so much for the invitation. So yes, robot intelligence. I've titled the lecture "The Thinking Robot." But of course, that immediately raises the question: what on earth do we mean by thinking? We could, of course, spend the whole of the next hour debating what we mean by thinking. But I should say that I'm particularly interested in, and will focus on, embodied intelligence. In other words, the kind of intelligence that animals, including humans, have, and that robots have. That slightly differentiates what I'm talking about from AI in general, though I regard robotics as a kind of subset of AI.

One of the things that we discovered in the last 60-odd years of artificial intelligence is that the things we thought were really difficult are actually relatively easy, like playing chess, or Go, for that matter. Whereas the things we originally thought were really easy, like making a cup of tea, are really hard. So it's the opposite of what was expected. Embodied intelligence in the real world is really very difficult indeed, and that's what I'm interested in.

So this is the outline of the talk. I'm going to talk initially about intelligence and offer some ideas for a way of thinking about intelligence, breaking it down into categories or types of intelligence. Then I'm going to choose a particular one that I've been working on for the last three or four years: what I call a generic architecture for a functional imagination, or, in short, robots with internal models. That's really what I want to focus on, because I want to show you some experimental work that we've done over the last couple of years in the lab. I'm an electronics engineer. I'm an experimentalist. And so doing experiments is really important to me.
So the first thing that we ought to realize-- I'm sure we do realize-- is that intelligence is not one thing that animals, humans, and robots all have more or less of. Absolutely not. There are several ways of breaking intelligence down into different categories, different types of intelligence. Here's one that I came up with in the last couple of years. It's certainly not the only way of thinking about intelligence, but it breaks intelligence into four types-- four kinds of minds, you could say.

The most fundamental is what we call morphological intelligence. That's the intelligence that you get just from having a physical body, and there are some interesting questions about how you design morphological intelligence. You've probably all seen pictures or movies of robots that can walk but in fact don't have any computation whatsoever. In other words, the behavior of walking is an emergent property of the mechanics-- the springs and levers and so on in the robot. So that's an example of morphological intelligence.

Individual intelligence is the kind of intelligence that you get from learning individually. Social intelligence, I think, is really interesting and important, and it's the one that I'm going to focus on most in this talk. Social intelligence is the intelligence that you get from learning socially, from each other. And of course, we are a social species. The other one, which I've been working on a lot over the last 20-odd years, is swarm intelligence. This is the kind of intelligence that we see most particularly in social insects. The most interesting properties of swarm intelligence tend to be emergent or self-organizing.
So in other words, the intelligence is typically manifest as a collective behavior that emerges from the micro-interactions between the individuals in that population. Emergence and self-organization are particularly interesting to me.

But as I said, this is absolutely not the only way to think about intelligence, and I'm going to show you another way of thinking about intelligence which I particularly like. This is Dan Dennett's tower of generate and test. In Darwin's Dangerous Idea, and several other books, Dennett suggests that a good way of thinking about intelligence is to start from the fact that all animals, including ourselves, need to decide what actions to take. Choosing the next action is really critically important-- critically important for all of us, including humans, even though for humans the wrong action may not kill us. But for many animals, the wrong action may well kill that animal. And Dennett talks about what he calls the tower of generate and test, which I want to show you here. It's a really neat way of breaking down intelligence.

At the bottom of his tower are Darwinian creatures. The thing about Darwinian creatures is that they have only one way of generating and testing next possible actions, and that is natural selection. Darwinian creatures in his schema cannot learn. They can only try out an action. If it kills them, well, that's the end of that, and by the laws of natural selection that particular action is unlikely to be passed on to descendants. Now, of course, all animals on the planet are Darwinian creatures, including ourselves. But a subset of them are what Dennett calls Skinnerian creatures. Skinnerian creatures are able to generate a next possible candidate action and try it out.
And here's the thing: if it doesn't kill them but it's actually a bad action, then they'll learn from that. Or even if it's a good action, a Skinnerian creature will learn from trying out an action. So Skinnerian creatures are a subset of Darwinians-- actually a small subset-- that are able to learn individually by trial and error.

Now, the third layer, or story, in Dennett's tower he calls Popperian creatures, after, obviously, the philosopher Karl Popper. Popperian creatures have a big advantage over Darwinians and Skinnerians in that they have an internal model of themselves in the world. With an internal model, you can try out a candidate next action by imagining it. It means that you don't actually have to put yourself at risk by trying it out for real, physically, in the world, where it could kill you, or at least harm you. So Popperian creatures have this amazing invention, which is internal modeling. And of course, we are examples of Popperian creatures. But there are plenty of other animals-- again, not a huge proportion, rather a small proportion, in fact, of all animals-- that are capable in some form of modeling their world and imagining actions before trying them out.

And just to complete Dennett's tower, he adds another layer that he calls Gregorian creatures. Here he's naming the layer after Richard Gregory, the British psychologist. What Gregorian creatures have, in addition to internal models, is mind tools like language and mathematics. Especially language, because it means that Gregorian creatures can share their experiences. In fact, a Gregorian creature could, for instance, model in its mind the possible consequences of doing a particular thing, and then pass that knowledge on to you.
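The distinction between Skinnerian and Popperian creatures can be sketched as code. This is purely an illustrative sketch of the generate-and-test idea, not Winfield's or Dennett's implementation; the toy world, the action names, and the scoring are all invented for illustration. The key point is that the Skinnerian agent must take real risks to learn, while the Popperian agent tests candidate actions against an internal model first.

```python
# Illustrative sketch of two layers of Dennett's "tower of generate and test".
# The world dynamics, actions, and values below are invented for illustration.

ACTIONS = ["advance", "retreat", "wait"]

def world_outcome(action, state):
    """Toy 'real world': returns the new state after taking an action.
    A lower state means the action was costly or harmful."""
    effects = {"advance": +2, "retreat": -1, "wait": -0.5}
    return state + effects[action]

class SkinnerianAgent:
    """Learns by trial and error: tries actions for real, then
    remembers their consequences."""
    def __init__(self):
        self.value = {a: 0.0 for a in ACTIONS}

    def act(self, state):
        action = max(ACTIONS, key=lambda a: self.value[a])
        new_state = world_outcome(action, state)   # risk taken in the real world
        self.value[action] += new_state - state    # learn from the consequence
        return action, new_state

class PopperianAgent(SkinnerianAgent):
    """Has an internal model: imagines each candidate action first,
    then executes only the one its model predicts is best."""
    def internal_model(self, action, state):
        # Here the model happens to be a perfect copy of the world dynamics;
        # a real agent's model would be approximate.
        return world_outcome(action, state)

    def act(self, state):
        # Generate and test candidate actions *in imagination* -- no real risk.
        best = max(ACTIONS, key=lambda a: self.internal_model(a, state))
        return best, world_outcome(best, state)

agent = PopperianAgent()
action, state = agent.act(state=10)
print(action, state)   # → prints: advance 12
```

Note the design point: the Popperian agent's only addition is `internal_model`, yet it lets the hypothesis "die in its stead," as Popper put it, because bad candidate actions are rejected in simulation rather than suffered in the world.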