ALAN WINFIELD: Thank you very much indeed. It's really great to be here, and thank you so much for the invitation.

So yes, robot intelligence. I've titled the lecture "The Thinking Robot," but of course that immediately begs the question: what on earth do we mean by thinking? We could, of course, spend the whole of the next hour debating what we mean by thinking. But I should say that I'm particularly interested in, and will focus on, embodied intelligence: in other words, the kind of intelligence that we have, that animals including humans have, and that robots have. That slightly differentiates what I'm talking about from AI, although I regard robotics as a kind of subset of AI.

One of the things that we discovered in the last 60-odd years of artificial intelligence is that the things we thought were really difficult are actually relatively easy, like playing chess, or Go for that matter, whereas the things we originally thought were really easy, like making a cup of tea, are really hard. So it's kind of the opposite of what was expected. Embodied intelligence in the real world is really very difficult indeed, and that's what I'm interested in.

So this is the outline of the talk. I'm going to talk initially about intelligence and offer some ideas, if you like, for a way of thinking about intelligence and breaking it down into categories or types of intelligence. And then I'm going to choose a particular one that I've really been working on for the last three or four years: what I call a generic architecture for a functional imagination, or in short, robots with internal models. That's really what I want to focus on, because I really wanted to show you some experimental work that we've done over the last couple of years in the lab. I mean, I'm an electronics engineer, I'm an experimentalist, and so doing experiments is really important for me.

So the first thing that we ought to realize -- I'm sure we do realize -- is that intelligence is not one thing that we all, animals, humans, and robots, have more or less of. Absolutely not. There are several ways of breaking intelligence down into different categories or types of intelligence, and here's one that I came up with in the last couple of years. It's certainly not the only way of thinking about intelligence, but it breaks intelligence into four types, four kinds of intelligence -- you could say four kinds of minds, I guess.

The most fundamental is what we call morphological intelligence. That's the intelligence that you get just from having a physical body, and there are some interesting questions about how you design morphological intelligence. You've probably all seen pictures or movies of robots that can walk but in fact don't have any computation whatsoever. In other words, the behavior of walking is an emergent property of the mechanics, if you like, the springs and levers and so on in the robot. So that's an example of morphological intelligence.

Individual intelligence is the kind of intelligence that you get from learning individually. Social intelligence, I think, is really interesting and important, and it's the one that I'm going to focus on most in this talk. Social intelligence is the intelligence that you get from learning socially, from each other, and of course, we are a social species. And the other one, which I've been working on a lot in the last 20-odd years, is swarm intelligence. This is the kind of intelligence that we see most particularly in social animals, especially insects. The most interesting properties of swarm intelligence tend to be emergent or self-organizing: in other words, the intelligence is typically manifest as a collective behavior that emerges from the micro interactions between the individuals in that population. So emergence and self-organization are particularly interesting to me.
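
To make that idea of emergence a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not any of the swarm systems from the lab; the agents, the sensing range, and the step size are all invented for illustration. Each simulated agent follows one purely local rule, stepping toward the centre of the neighbours it can sense, and a single cluster emerges at the level of the whole population.

```python
import random

# A minimal, purely illustrative sketch of emergence (not a real swarm
# robotics controller): each agent follows one local rule, namely take a
# small step toward the centre of the neighbours it can sense, and a single
# cluster emerges at the level of the whole population.
# All parameters here are invented for illustration only.

NUM_AGENTS = 30
SENSE_RANGE = 25.0    # an agent only "sees" neighbours closer than this
STEP_SIZE = 0.5       # distance moved per time step

# Agents live on a line from 0 to 100, at random initial positions.
positions = [random.uniform(0.0, 100.0) for _ in range(NUM_AGENTS)]

def spread(pos):
    """Width of the whole swarm: a global property no individual agent computes."""
    return max(pos) - min(pos)

for t in range(300):
    updated = []
    for me in positions:
        # Local sensing: only neighbours within SENSE_RANGE are visible.
        neighbours = [p for p in positions if abs(p - me) <= SENSE_RANGE]
        centre = sum(neighbours) / len(neighbours)   # includes the agent itself
        # Local rule: nudge toward the local centre of mass.
        if centre > me:
            updated.append(me + min(STEP_SIZE, centre - me))
        else:
            updated.append(me - min(STEP_SIZE, me - centre))
    positions = updated
    if t % 100 == 0:
        print(f"t = {t:3d}   swarm spread = {spread(positions):6.1f}")

print(f"final swarm spread = {spread(positions):6.1f}")
```

The point is that no agent computes, or even knows about, the global property here, the shrinking spread of the swarm, and yet that collective behavior reliably emerges from the local rule.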

But as I said, this is absolutely not the only way to think about intelligence, and I'm going to show you another way of thinking about intelligence which I particularly like. This is Dan Dennett's tower of generate and test. In Darwin's Dangerous Idea, and I think several other books, Dan Dennett suggests that a good way of thinking about intelligence is to think about the fact that all animals, including ourselves, need to decide what actions to take. Choosing the next action is really critically important -- critically important for all of us, including humans, even though for humans the wrong action may not kill us. But for many animals, the wrong action may well kill that animal. And Dennett talks about what he calls the tower of generate and test, which I want to show you here. It's a really cool breakdown, if you like, a way of thinking about intelligence.

At the bottom of his tower are Darwinian creatures. The thing about Darwinian creatures is that they have only one way of, as it were, generating and testing next possible actions, and that is natural selection. So Darwinian creatures, in his schema, cannot learn. They can only try out an action, and if it kills them, well, that's the end of that; by the laws of natural selection, that particular action is unlikely to be passed on to descendants. Now, of course, all animals on the planet are Darwinian creatures, including ourselves.

But a subset of them are what Dennett calls Skinnerian creatures. Skinnerian creatures are able to generate a next possible candidate action and try it out. And here's the thing: if it doesn't kill them but it's actually a bad action, then they'll learn from that. Or even if it's a good action, a Skinnerian creature will learn from trying out an action. So Skinnerian creatures are a subset of Darwinians, actually a small subset, that are able to individually learn by trial and error.
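
Just to make that distinction concrete, here is a toy sketch, in Python, of a Skinnerian-style learner. It is not a model of any real animal or robot controller; the actions, the reward values, and the learning rule are all invented for illustration. The key feature is that the learner has to actually execute an action, including the bad one, in order to discover its value.

```python
import random

# A toy illustration of a "Skinnerian" learner: it must try actions for real
# and learn from the outcomes it experiences.  The actions, rewards, and
# learning rule here are all invented purely for illustration.

ACTIONS = ["approach", "retreat", "freeze"]
TRUE_REWARD = {"approach": -1.0, "retreat": 0.2, "freeze": 0.5}  # unknown to the learner

estimates = {a: 0.0 for a in ACTIONS}   # learned value of each action
counts = {a: 0 for a in ACTIONS}        # how often each action has been tried

for trial in range(200):
    # Mostly exploit what has worked so far, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(estimates, key=estimates.get)

    # The action is actually executed: the outcome is experienced, not imagined.
    reward = TRUE_REWARD[action] + random.gauss(0.0, 0.1)

    # Learn from the experienced outcome (incremental average).
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned preferences:", {a: round(v, 2) for a, v in estimates.items()})
```

A Popperian creature, by contrast, would not need to risk executing the bad action at all; it could rule it out in imagination first, which is exactly where internal models come in.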

Now, the third layer, or story if you like, in Dennett's tower, he calls Popperian creatures, after, obviously, the philosopher Karl Popper. Popperian creatures have a big advantage over Darwinians and Skinnerians in that they have an internal model of themselves in the world. With an internal model, you can try out a candidate next possible action by imagining it, which means you don't actually have to put yourself at risk by trying it out for real, physically, in the world, and possibly having it kill you, or at least harm you. So Popperian creatures have this amazing invention, which is internal modeling. And of course, we are examples of Popperian creatures. There are plenty of other animals too -- again, it's not a huge proportion, it's rather a small proportion, in fact, of all animals -- but certainly there are plenty of animals that are capable in some form of modeling their world and, as it were, imagining actions before trying them out.

And just to complete Dennett's tower, he adds another layer that he calls Gregorian creatures. Here he's naming the layer after Richard Gregory, the British psychologist. The thing Gregorian creatures have is that, in addition to internal models, they have mind tools like language and mathematics -- especially language, because it means that Gregorian creatures can share their experiences. In fact, a Gregorian creature could, for instance, model in its brain, in its mind, the possible consequences of doing a particular thing, and then pass that knowledge to you, so you don't even have to model it yourself. So Gregorian creatures really have the kind of social intelligence that we have -- perhaps not uniquely, but there are obviously only a handful of species that are able to communicate, if you like, traditions to each other.

So I think internal models are really, really interesting, and as I say, I've been spending the last couple of years thinking about robots with internal models, and actually doing experiments with robots with internal models. So are robots with internal models self-aware? Well, probably not in the everyday sense that we mean by self-aware, or sentient. But certainly internal models, I think, can provide a minimal level of functional self-awareness, and absolutely enough to allow us to ask what-if questions. So with internal models we have a potentially really powerful technique for robots, because it means that they can actually ask themselves, what if I take this or that next possible action? So there's the action selection, if you like. So really, I'm kind of following Dennett's model. I'm really interested in building Popperian creatures -- actually, I'm interested in building Gregorian creatures, but that's another step in the story. So here I'm focusing primarily on Popperian creatures: robots with internal models.

And what I'm talking about in particular is a robot with a simulation of itself, of its currently perceived environment, and of the actors in that environment, inside itself. It takes a bit of getting your head around, the idea of a robot with a simulation of itself inside itself, but that's really what I'm talking about. And the famous late John Holland, for instance, rather perceptively wrote that an internal model allows a system to look ahead to the future consequences of actions without committing itself to those actions. I don't know whether John Holland was aware of Dennett's tower -- possibly not -- but he was really saying the same kind of thing as Dan Dennett.
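
To make the what-if loop concrete, here is a deliberately simplified sketch in Python. It is not the actual architecture or code from our lab; every name in it (WorldState, simulate_action, evaluate_outcome, select_action) is hypothetical, and the tiny hazard-avoidance example exists only to show the shape of the loop: generate candidate actions, run each one through an internal model, evaluate the imagined consequences, and only then act for real.

```python
# A deliberately simplified sketch of action selection with an internal model,
# a "functional imagination", in the spirit described above.  This is NOT the
# actual architecture from the lab; every function and data structure here is
# hypothetical and exists only to make the what-if loop concrete.

from dataclasses import dataclass

@dataclass
class WorldState:
    robot_pos: tuple    # (x, y) position of the robot
    hazard_pos: tuple   # (x, y) position of a hazard, e.g. a hole in the ground

# Candidate next actions, expressed as (dx, dy) moves.
CANDIDATE_ACTIONS = {
    "ahead": (1.0, 0.0),
    "left":  (0.0, 1.0),
    "right": (0.0, -1.0),
    "stop":  (0.0, 0.0),
}

def simulate_action(state: WorldState, move) -> WorldState:
    """Internal model: predict the next state *without* acting for real."""
    x, y = state.robot_pos
    dx, dy = move
    return WorldState(robot_pos=(x + dx, y + dy), hazard_pos=state.hazard_pos)

def evaluate_outcome(state: WorldState) -> float:
    """Score an imagined outcome: further from the hazard counts as safer."""
    (x, y), (hx, hy) = state.robot_pos, state.hazard_pos
    return ((x - hx) ** 2 + (y - hy) ** 2) ** 0.5

def select_action(state: WorldState) -> str:
    """Try each candidate action in imagination and pick the safest one."""
    scored = {
        name: evaluate_outcome(simulate_action(state, move))
        for name, move in CANDIDATE_ACTIONS.items()
    }
    return max(scored, key=scored.get)

if __name__ == "__main__":
    now = WorldState(robot_pos=(0.0, 0.0), hazard_pos=(2.0, 0.0))
    print("chosen action:", select_action(now))  # avoids moving toward the hazard
```

In a real robot the internal model would be a proper physics-and-robot simulator rather than a one-line state update, but the look-ahead structure John Holland describes is the same: the consequences are evaluated in imagination before the robot commits to anything.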

Now, before I come on to the work that I've been doing, I want to show you a few examples -- there aren't many, in fact -- of robots with self-simulation. The first, as far as I'm aware, was by Richard Vaughan and his team, who used a simulation inside a robot to allow it to plan a safe route with incomplete knowledge. So as far as I'm aware, this is the world's first example of a robot with self-simulation. An example that you might already be familiar with is Josh Bongard and Hod Lipson's work -- very notable, very interesting work. Here we have self-simulation again, but for a different purpose: not self-simulation to choose, as it were, gross actions in the world, but self-simulation to learn how to control your own body. The idea here is that if you have a complex body, then a self-simulation is a really good way of figuring out how to control yourself, including how to repair yourself if parts of you should break or fail or be damaged, for instance. So that's a really interesting example of what you can do with self-simulation.

And a similar idea, really, was tested by my old friend Owen Holland, who built this rather scary-looking robot. Initially it was called Chronos, but then it became known as ECCE-robot. This robot is deliberately designed to be hard to control; in fact, Owen refers to it as anthropomimetic.