
  • ALAN WINFIELD: Thank you very much indeed.

  • It's really great to be here.

  • And thank you so much for the invitation.

  • So yes, robot intelligence.

  • So I've titled the lecture "The Thinking Robot."

  • But of course, that immediately begs the question,

  • what on earth do we mean by thinking?

  • Well we could, of course, spend the whole

  • of the next hour debating what we mean by thinking.

  • But I should say that I'm particularly interested in

  • and will focus on embodied intelligence.

  • So in other words, the kind of intelligence that we have,

  • that animals including humans have, and that robots have.

  • So of course that slightly differentiates

  • what I'm talking about from AI.

  • But I regard robotics as a kind of subset of AI.

  • And of course one of the things that we discovered

  • in the last 60 odd years of artificial intelligence

  • is that the things that we thought were really difficult

  • actually are relatively easy.

  • Like playing chess, or go, for that matter.

  • Whereas the things that we originally thought

  • were really easy, like making a cup of tea, are really hard.

  • So it's kind of the opposite of what was expected.

  • So embodied intelligence in the real world

  • is really very difficult indeed.

  • And that's what I'm interested in.

  • So this is the outline of the talk.

  • I'm going to talk initially about intelligence

  • and offer some ideas, if you like,

  • for a way of thinking about intelligence

  • and breaking it down into categories

  • or types of intelligence.

  • And then I'm going to choose a particular one which

  • I've been really working on the last three or four years.

  • And it's what I call a generic architecture

  • for a functional imagination.

  • Or in short, robots with internal models.

  • So that's really what I want to focus on.

  • Because I really wanted to show you

  • some experimental work that we've

  • done the last couple of years in the lab.

  • I mean, I'm an electronics engineer.

  • I'm an experimentalist.

  • And so doing experiments is really important for me.

  • So the first thing that we ought to realize--

  • I'm sure we do realize-- is that intelligence is not one thing

  • that we all, animals, humans, and robots

  • have more or less of.

  • Absolutely not.

  • And you know, there are several ways of breaking intelligence

  • down into different kind of categories,

  • if you like, of intelligence, different types

  • of intelligence.

  • And here's one that I came up with in the last couple

  • of years.

  • It's certainly not the only way of thinking about intelligence.

  • But this really breaks intelligence into four,

  • if you like, types, four kinds of intelligence.

  • You could say kinds of minds, I guess.

  • The most fundamental is what we call

  • morphological intelligence.

  • And that's the intelligence that you get just

  • from having a physical body.

  • And there are some interesting questions

  • about how you design morphological intelligence.

  • You've probably all seen pictures of or movies of robots

  • that can walk, but in fact don't actually

  • have any computing, any computation whatsoever.

  • In other words, the behavior of walking

  • is an emergent property of the mechanics,

  • if you like, the springs and levers and so on in the robot.

  • So that's an example of morphological intelligence.

  • Individual intelligence is the kind of intelligence

  • that you get from learning individually.

  • Social intelligence, I think, is really

  • interesting and important.

  • And that's the one that I'm going

  • to focus on most in this talk.

  • Social intelligence is the intelligence

  • that you get from learning socially, from each other.

  • And of course, we are a social species.

  • And the other one which I've been

  • working on a lot in the last 20 odd years

  • is swarm intelligence.

  • So this is the kind of intelligence

  • that we see most particularly in social animals, insects.

  • The most interesting properties of swarm intelligence

  • tend to be emergent or self-organizing.

  • So in other words, the intelligence

  • is typically manifest as a collective behavior that

  • emerges from the, if you like, the micro interactions

  • between the individuals in that population.

  • So emergence and self-organization

  • are particularly interesting to me.

  • But I said this is absolutely not the only way

  • to think about intelligence.

  • And I'm going to show you another way

  • of thinking about intelligence which I particularly like.

  • And this is Dan Dennett's tower of generate and test.

  • So in Darwin's Dangerous Idea, and several other books,

  • I think, Dan Dennett suggests that a good way

  • of thinking about intelligence is to think about the fact

  • that all animals, including ourselves, need

  • to decide what actions to take.

  • So choosing the next action is really critically important.

  • I mean it's critically important for all of us, including

  • humans.

  • Even though the wrong action may not kill us,

  • as it were, for humans.

  • But for many animals, the wrong action

  • may well kill that animal.

  • And Dennett talks about what he calls

  • the tower of generate and test which I want to show you here.

  • It's a really cool breakdown, if you like, way

  • of thinking about intelligence.

  • So at the bottom of his tower are Darwinian creatures.

  • And the thing about Darwinian creatures

  • is that they have only one way of,

  • as it were, learning from, if you like,

  • generating and testing next possible actions.

  • And that is natural selection.

  • So Darwinian creatures in his schema cannot learn.

  • They can only try out an action.

  • If it kills them, well that's the end of that.

  • So by the laws of natural selection,

  • that particular action is unlikely to be

  • passed on to descendants.

  • Now, of course, all animals on the planet

  • are Darwinian creatures, including ourselves.

  • But a subset are what Dennett calls Skinnerian creatures.

  • So Skinnerian creatures are able to generate

  • a next possible candidate action, if you like,

  • a next possible action and try it out.

  • And here's the thing, if it doesn't kill them

  • but it's actually a bad action, then they'll learn from that.

  • Or even if it's a good action, a Skinnerian creature

  • will learn from trying out an action.

  • So really, Skinnerian creatures are a subset of Darwinians,

  • actually a small subset that are able to learn

  • by trial and error, individually learn by trial and error.

  • Now, the third layer, or story, if you

  • like, in Dennett's tower, he calls Popperian creatures,

  • after, obviously, the philosopher, Karl Popper.

  • And Popperian creatures have a big advantage

  • over Darwinians and Skinnerians in that they

  • have an internal model of themselves in the world.

  • And with an internal model, it means

  • that you can try out an action, a candidate

  • next possible action, if you like, by imagining it.

  • And it means that you don't have to actually have

  • to put yourself to the risk of trying it out

  • for real physically in the world,

  • and possibly it killing you, or at least harming you.

  • So Popperian creatures have this amazing invention,

  • which is internal modeling.

  • And of course, we are examples of Popperian creatures.

  • But there are plenty of other animals-- again,

  • it's not a huge proportion.

  • It's rather a small proportion, in fact, of all animals.

  • But certainly there are plenty of animals

  • that are capable in some form of modeling their world

  • and, as it were, imagining actions before trying them out.

  • And just to complete Dennett's tower,

  • he adds another layer that he calls Gregorian creatures.

  • Here he's naming this layer after Richard Gregory,

  • the British psychologist.

  • And the thing that Gregorian creatures have

  • is that in addition to internal models,

  • they have mind tools like language and mathematics.

  • Especially language because it means that Gregorian creatures

  • can share their experiences.

  • In fact, a Gregorian creature could, for instance,

  • model in its brain, in its mind, the possible consequences

  • of doing a particular thing, and then actually pass

  • that knowledge to you.

  • So you don't even have to model it yourself.

  • So the Gregorian creatures really

  • have the kind of social intelligence

  • that we probably-- perhaps not uniquely,

  • but there are obviously only a handful

  • of species that are able to communicate, if you like,

  • traditions with each other.

  • So I think internal models are really, really interesting.

  • And as I say, I've been spending the last couple

  • of years thinking about robots with internal models.

  • And actually doing experiments with

  • robots with internal models.

  • So are robots with internal models self-aware?

  • Well probably not in the sense that-- the everyday

  • sense that we mean by self-aware, sentient.

  • But certainly internal models, I think,

  • can provide a minimal level of kind

  • of functional self-awareness.

  • And absolutely enough to allow us to ask what if questions.

  • So with internal models, we have potentially a really powerful

  • technique for robots.

  • Because it means that they can actually ask

  • themselves questions about what if I take this

  • or that next possible action.

  • So there's the action selection, if you like.

  • So really, I'm kind of following Dennett's model.

  • I'm really interested in building Popperian creatures.

  • Actually, I'm interested in building Gregorian creatures.

  • But that's another, if you like, another step in the story.

  • So really, here I'm focusing primarily

  • on Popperian creatures.

  • So robots with internal models.

  • And what I'm talking about in particular

  • is a robot with a simulation of itself

  • and its currently perceived environment and of the actors

  • inside itself.

  • So it takes a bit of getting your head around.

  • The idea of a robot with a simulation of itself

  • inside itself.

  • But that's really what I'm talking about.

  • And the famous, the late John Holland, for instance,

  • rather perceptively wrote that an internal model

  • allows a system to look ahead

  • to the future consequences of actions

  • without committing itself to those actions.

  • I don't know whether John Holland

  • was aware of Dennett's tower.

  • Possibly not.

  • But really saying the same kind of thing as Dan Dennett.

  • Now before I come on to the work that I've been doing,

  • I want to show you some examples of-- a few examples,

  • there aren't many, in fact-- of robots with self-simulation.

  • The first one, as far as I'm aware,

  • was by Richard Vaughan and his team.

  • And he used a simulation inside a robot

  • to allow it to plan a safe route with incomplete knowledge.

  • So as far as I'm aware, this is the world's first example

  • of robots with self-simulation.

  • Perhaps an example that you might already be familiar with,

  • this is Josh Bongard and Hod Lipson's work.

  • Very notable, very interesting work.

  • Here, self-simulation, but for a different purpose.

  • So this is not self-simulation to choose,

  • as it were, gross actions in the world.

  • But instead, self-simulation to learn

  • how to control your own body.

  • So the idea here is that if you have a complex body, then

  • a self-simulation is a really good way of figuring out

  • how to control yourself, including

  • how to repair yourself if parts of you

  • should break or fail or be damaged, for instance.

  • So that's a really interesting example

  • of what you can do with self-simulation.

  • And a similar idea, really, was tested

  • by my old friend, Owen Holland.

  • He built this kind of scary looking robot.

  • Initially it was called Chronos, but then it

  • became known as ECCE-robot.

  • And this robot is deliberately designed to be hard to control.

  • In fact, Owen refers to it as anthropomimetic.

  • Which means human-like from the inside out.

  • So most humanoid robots are only humanoid on the outside.

  • But here, we have a robot that has a skeletal structure,

  • it has tendons, it's very-- and you

  • can see from the little movie clip

  • there, if any part of the robot moves,

  • then the whole of the rest of the robot

  • tends to flex, rather like human bodies or animal bodies.

  • So Owen was particularly interested in a robot

  • that is difficult to control.

  • And the idea then of using an internal simulation of yourself

  • in order to be able to control yourself or learn

  • to control yourself.

  • And he was the first to come up with this phrase,

  • functional imagination.

  • Really interesting work, so do check that out.

  • And the final example I want to give

  • you is from my own lab, where-- this is swarm robotics

  • work-- where in fact we're doing evolutionary swarm robotics

  • here.

  • And we've put a simulation of each robot and the swarm

  • inside each robot.

  • And in fact, we're using those internal simulations

  • as part of a genetic algorithm.

  • So each robot, in fact, is evolving its own controller.

  • And in fact, it actually updates its own controller

  • about once a second.

  • So again, it's a bit of an odd thing to get your head around.

  • So about once a second, each robot

  • becomes its own great, great, great, great grandchild.

  • In other words, its controller is a descendant.

  • But the problem with this is that the internal simulation

  • tends to be wrong.

  • And we have what we call the reality gap.

  • So the gap between the simulation and the real world.

  • And so we got round that-- my student Paul O'Dowd came up

  • with the idea that we could co-evolve

  • the simulators, as well as the controllers in the robots.

  • So we have a population of simulated robots,

  • as it were, inside each individual

  • physical robot.

  • But then you also have a swarm of 10 robots.

  • And therefore, we have a population of 10 simulators.

  • So we actually co-evolve here, the simulators

  • and the robot controllers.

  • So I want to now show you the newer

  • work I've been doing on robots with internal models.

  • And primarily-- I was telling [? Yan ?] earlier

  • that, you know, I'm kind of an old fashioned electronics

  • engineer.

  • Spent much of my career building safety systems,

  • safety critical systems.

  • So safety is something that's very important to me

  • and to robotics.

  • So here's a kind of generic internal modeling

  • architecture for safety.

  • So this is, in fact, Dennett's loop of generate and test.

  • So the idea is that we have an internal model, which

  • is a self-simulation, that is initialized to match

  • the current real world.

  • And then you try out, you run the simulator

  • for each of your next possible actions.

  • To put it very simply, imagine that you're a robot,

  • and you could either turn left, turn right, go straight ahead,

  • or stand still.

  • So you have four possible next actions.

  • And therefore, you'd loop through this internal model

  • for each of those next possible actions.

  • And then moderate the action selection mechanism

  • in your controller.

  • So this is not part of the controller.

  • It's a kind of moderator, if you like.

  • So you could imagine that the regular robot

  • controller, the thing in red, has a set of four

  • next possible actions.

  • But your internal model determines

  • that only two of them are safe.

  • So it would effectively, if you like,

  • moderate or govern the action selection mechanism

  • of the robot's controller, so that the robot

  • controller, in fact, will not choose the unsafe actions.

  • Interestingly, if you have a learning controller,

  • then that's fine because we can effectively extend or copy

  • the learned behaviors into the internal model.

  • That's fine.

  • So in principle-- we haven't done this.

  • But we're starting to do it now-- in principle,

  • we can extend this architecture to, as it were, to adaptive

  • or learning robots.
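A minimal sketch in Python of the generate-and-test loop just described: initialize the internal model, simulate each candidate action, and let only the safe ones through to the controller. The function names (`simulate`, `is_safe`) and the four-action set are illustrative assumptions, not the code actually used in the experiments.

```python
# Minimal sketch of the generic internal-model ("consequence engine") loop.
# The names and the candidate action set are illustrative assumptions.

CANDIDATE_ACTIONS = ["turn_left", "turn_right", "go_ahead", "stand_still"]

def moderate_actions(world_state, simulate, is_safe):
    """Return the subset of candidate actions the internal model judges safe.

    simulate(world_state, action) -> predicted future state
    is_safe(predicted_state)      -> True if no hazard is predicted
    """
    safe_actions = []
    for action in CANDIDATE_ACTIONS:
        predicted = simulate(world_state, action)  # run the self-simulation
        if is_safe(predicted):                     # keep only safe outcomes
            safe_actions.append(action)
    return safe_actions

# The robot's ordinary controller then chooses only from safe_actions, so the
# internal model moderates, rather than replaces, the action selection mechanism.
```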

  • Here's a simple thought experiment.

  • Imagine a robot with several safety hazards facing it.

  • It has four next possible actions.

  • Well, your internal model can figure out

  • what the consequence of each of those actions might be.

  • So two of them-- so either turn right or stay still

  • are safe actions.

  • So that's a very simple thought experiment.

  • And here's a slightly more complicated thought experiment.

  • So imagine that the robot, there's

  • another actor in the environment.

  • It's a human.

  • The human is not looking where they're going.

  • Perhaps walking down the street peering at a smartphone.

  • That never happens, does it, of course.

  • And about to walk into a hole in the pavement.

  • Well, of course, if it were you noticing

  • that human about to walk into a hole in the pavement,

  • you would almost certainly intervene, of course.

  • And it's not just because you're a good person.

  • It's because you have the cognitive machinery

  • to predict the consequences of both your and their actions.

  • And you can figure out that if you

  • were to rush over towards them, you

  • might be able to prevent them from falling into the hole.

  • So here's the same kind of idea.

  • But with the robot.

  • Imagine it's not you, but a robot.

  • And imagine now that you are modeling

  • the consequences of your and the human's

  • actions for each one of your next possible actions.

  • And you can see that now this time,

  • we've given a kind of numerical scale.

  • So 0 is perfectly safe, whereas 10 is seriously dangerous,

  • kind of danger of death, if you like.

  • And you can see that the safest outcome

  • is if the robot turns right.

  • In other words, the safest for the human.

  • I mean, clearly the safest for the robot

  • is either turn left or stay still.

  • But in both cases, the human would fall into the hole.

  • So you can see that we could actually

  • invent a rule which would represent

  • the best outcome for the human.

  • And this is what it looks like.

  • So if, for all robot actions, the human is equally safe,

  • then that means that we don't need to worry about the human,

  • so the internal model will output the safest actions

  • for the robot.

  • Else, output the robot actions

  • for the least unsafe human outcomes.

  • Now remarkably-- and we didn't intend this,

  • this actually is an implementation

  • of Asimov's first law of robotics.

  • So a robot may not injure a human being,

  • or through inaction-- that's important,

  • the or through inaction-- allow a human being to come to harm.

  • So we kind of ended up building an Asimovian robot,

  • a simple Asimovian ethical robot.
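A hedged sketch of that if/else rule, applied to per-action danger scores on the 0-to-10 scale from the thought experiment. The scores and names below are invented for illustration; they are not the experiment's real code.

```python
# Sketch of the "Asimovian" selection rule described above, applied to
# per-action danger scores on the 0 (safe) to 10 (danger of death) scale.

def choose_action(outcomes):
    """outcomes maps action -> (robot_danger, human_danger)."""
    human_dangers = {a: h for a, (_, h) in outcomes.items()}
    if len(set(human_dangers.values())) == 1:
        # Human equally safe whatever we do: pick the safest action for the robot.
        return min(outcomes, key=lambda a: outcomes[a][0])
    # Otherwise pick the action with the least unsafe outcome for the human.
    return min(outcomes, key=lambda a: outcomes[a][1])

# Illustrative scores matching the thought experiment: turning right risks a
# collision for the robot but keeps the human out of the hole.
scores = {
    "turn_left":   (0, 10),   # robot safe, human falls into the hole
    "stand_still": (0, 10),
    "turn_right":  (4, 0),    # robot risks a collision, human is headed off
    "go_ahead":    (10, 10),  # both end up in the hole
}
print(choose_action(scores))  # -> "turn_right"
```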

  • So what does it look like?

  • Well, we've now extended to humanoid robots.

  • But we started with the e-puck robots,

  • these little-- they're about the size of a salt shaker,

  • I guess, about seven centimeters tall.

  • And this is the little arena in the lab.

  • And what we actually have inside the ethical robot is--

  • this is the internal architecture.

  • So you can see that we have the robot controller, which

  • is, in fact, a mirror of the real robot controller,

  • a model of the robot, and a model of the world, which

  • includes others in the world.

  • So this is the simulator.

  • This is more or less a regular robot simulator.

  • So you probably know that robot simulators

  • are quite commonplace.

  • We roboticists use them all the time to test robots in,

  • as it were, in the virtual world,

  • before then trying out the code for real.

  • But what we've done here is we've actually

  • put an off the shelf simulator inside the robot

  • and made it work in real time.

  • So the output of the simulator for each

  • of those next possible actions is evaluated and then goes

  • through a logic layer.

  • Which is essentially the rule, the if then else rule

  • that I showed you a couple of slides ago.

  • And that effectively determines or moderates

  • the action selection mechanism of the real robot.

  • So this is the simulation budget.

  • So we're actually using the open source simulator

  • Stage, a well-known simulator.

  • And in fact, we managed to get Stage to run about 600 times

  • real time.

  • Which means that we're actually cycling

  • through our internal model twice a second.

  • And for each one of those cycles,

  • we're actually modeling not four but 30 next possible actions.

  • And we're modeling about 10 seconds into the future.

  • So every half a second, our robot with an internal model

  • is looking ahead 10 seconds for about 30 next possible actions,

  • 30 of its own next possible actions.

  • But of course, it's also modeling the consequences

  • of each of the other actors, dynamic actors

  • in its environment.

  • So this is quite nice to actually do this in real time.
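One way to see how those numbers fit together, assuming the thirty 10-second look-aheads are simulated one after another within each cycle, is this back-of-the-envelope check (an editorial sketch, not the lab's code):

```python
# Back-of-the-envelope check of the simulation budget described above.
actions_per_cycle = 30
lookahead_s = 10          # seconds of simulated future per candidate action
speedup = 600             # Stage running roughly 600x faster than real time

simulated_seconds = actions_per_cycle * lookahead_s   # 300 s of simulated time
wall_clock_s = simulated_seconds / speedup            # 0.5 s of real time
print(wall_clock_s)  # 0.5, i.e. the consequence engine can re-run twice a second
```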

  • And let me show you some of the results that we got from that.

  • So ignore the kind of football pitch.

  • So what we have here is the ethical robot, which we

  • call the A-robot, after Asimov.

  • And we have a hole in the ground.

  • It's not a real hole, it's a virtual hole in the ground.

  • We don't need to be digging holes into the lab floor.

  • And we're using another e-puck as a proxy human;

  • we call this the H-robot.

  • So let me show you what happened.

  • Well we ran it, first of all, with no H-robot at all,

  • as a kind of baseline.

  • And you can see on the left, in 26 runs,

  • those are the traces of the A-robot.

  • So you can see the A-robot, in fact,

  • is maintaining its own safety.

  • It's avoiding the hole, skirting almost optimally

  • around the edge of the hole in the ground.

  • But then when we introduce the H-robot,

  • you get this wonderful behavior here.

  • Where as soon as the A-robot notices that the H-robot is

  • heading towards the hole, which is about here, then

  • it deflects, it diverts from its original course.

  • And in fact, more or less collides.

  • They don't actually physically collide

  • because they have low level collision avoidance.

  • So they don't actually collide.

  • But nevertheless, the A-robot effectively

  • heads off the H-robot, which then bounces off safely

  • and goes off in another direction.

  • And the A-robot then resumes its course to its target position.

  • Which is really nice.

  • And interestingly, even though our simulator is rather low

  • fidelity, it doesn't matter.

  • Surprisingly, it doesn't matter.

  • Because the closer the A-robot gets to the H-robot,

  • the better its predictions about colliding become.

  • So this is why, even with a rather low fidelity simulator,

  • we can head off the H-robot with really good precision.

  • So let me show you the movies of this trial with a single proxy

  • human.

  • And I think the movie starts in-- so this is real time.

  • And you can see the A-robot nicely heading off

  • the H-robot which then disappears off

  • towards the left.

  • I think then we've speeded it up four times.

  • And this is a whole load of runs.

  • So you can see that it really does work.

  • And also notice that every experiment is a bit different.

  • And of course, that's what typically happens when

  • you have real physical robots.

  • Simply because of the noise in the system,

  • the fact that these are real robots with imperfect motors

  • and sensors and what have you.

  • So we wrote the paper and were about to submit

  • the paper, when we kind of thought, well,

  • this is a bit boring, isn't it?

  • We built this robot and it works.

  • So we had the idea to put a second human in the-- oh sorry.

  • I've forgotten one slide.

  • So before I get to that, I just wanted

  • to show you a little animation of-- these little filaments

  • here are the traces of the A-robot and its prediction

  • of what might happen.

  • So at the point where this turns red,

  • the A-robot then starts to intersect.

  • And each one of those traces is its prediction

  • of the consequences of both itself and the H-robot.

  • This is really nice because you can kind of look into the mind,

  • to put it that way, of the robot,

  • and actually see what it's doing.

  • Which is very nice, very cool.

  • But I was about to say we tried the same experiment, in fact,

  • identical code, with two H-robots.

  • And this is the robot's dilemma.

  • This may be the first time that a real physical robot

  • has faced an ethical dilemma.

  • So you can see the two H-robots are more or less

  • equidistant from the hole.

  • And there is the A-robot which, in fact,

  • fails to save either of them.

  • So what's going on there?

  • We know that it can save one of them every time.

  • But in fact, it's just failed to save either.

  • And oh, yeah, it does actually save one of them.

  • And has a look at the other one, but it's too late.

  • So this is really very interesting.

  • And not at all what we expected.

  • In fact, let me show you the statistics.

  • So in 33 runs, the ethical robot failed

  • to save either of the H-robots just under half the time.

  • So about 14 times, it failed to save either.

  • It saved one of them just over 15, perhaps 16 times.

  • And amazingly, saved both of them twice,

  • which is quite surprising.

  • It really should perform better than that.

  • And in fact, when we started to really look at this,

  • we discovered that the-- so here's

  • a particularly good example of dithering.

  • So we realized that we made a sort

  • of pathologically indecisive ethical robot.

  • So I'm going to save this one-- oh no, no, that

  • one-- oh no, no, this one-- that one.

  • And of course, by the time our ethical robot

  • has changed its mind three or four times,

  • well, it's too late.

  • So this is the problem.

  • The problem, fundamentally, is that our ethical robot

  • doesn't make a decision and stick to it.

  • In fact, it's a consequence of the fact

  • that we are running our consequence engine,

  • as I mentioned, twice a second.

  • So every half a second, our ethical robot

  • has the opportunity to change its mind.

  • That's clearly a bad strategy.

  • But nevertheless, it was an interesting kind

  • of unexpected consequence of the experiment.

  • We've now transferred the work to these humanoid robots.

  • And we get the same thing.

  • So here, there are two red robots

  • both heading toward danger.

  • The blue one, the ethical robot, changes its mind,

  • and goes and saves the one on the left,

  • even though it could have saved the one on the right.

  • So another example of our dithering ethical robot.

  • And as I've just hinted at, the reason

  • that our ethical robot is so indecisive

  • is because it's essentially a memory-less architecture.

  • So you could say that the robot has a-- again, borrowing

  • Owen Holland's description, it has a functional imagination.

  • But it has no autobiographical memory.

  • So it doesn't remember the decision

  • it made half a second ago.

  • Which is clearly not a good strategy.

  • Really, an ethical robot, just like you

  • if you are acting in a similar situation,

  • it's probably a good idea for you

  • to stick to the first decision that you made.

  • But probably not forever.

  • So you know, I think the decisions probably

  • need to be sticky somehow.

  • So decisions like this may need a half life.

  • You know, sticky but not absolutely rigid.
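Purely as a speculative illustration of that idea of sticky decisions with a half-life (this is not something implemented in the talk's memory-less architecture), a sketch might look like this. The class name, half-life, and margin are hypothetical.

```python
# Speculative sketch only: a decision kept "sticky" with a half-life, so the
# robot keeps its last choice unless a clearly better option appears or enough
# time has passed. It illustrates the idea of decaying commitment, nothing more.
import time

class StickyDecision:
    def __init__(self, half_life_s=2.0, margin=1.0):
        self.half_life_s = half_life_s
        self.margin = margin            # how much better a new option must be
        self.choice = None
        self.chosen_at = None

    def commitment(self, now):
        if self.choice is None:
            return 0.0
        elapsed = now - self.chosen_at
        return 0.5 ** (elapsed / self.half_life_s)   # halves every half-life

    def update(self, scored_options, now=None):
        """scored_options maps option -> benefit (higher is better)."""
        now = time.monotonic() if now is None else now
        best = max(scored_options, key=scored_options.get)
        if self.choice is not None:
            gain = scored_options[best] - scored_options.get(self.choice, float("-inf"))
            if gain < self.margin * self.commitment(now):
                return self.choice                    # stick with the earlier decision
        self.choice, self.chosen_at = best, now
        return best
```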

  • So actually, at this point, we decided

  • that we're not going to worry too much about this problem.

  • Because in a sense, this is more of a problem for ethicists

  • than engineers, perhaps.

  • I don't know.

  • But maybe we could talk about that.

  • Before finishing, I want to show you

  • another experiment that we did with the same architecture,

  • exactly the same architecture.

  • And this is what we call the corridor experiment.

  • So here we have a robot with this internal model.

  • And it has to get from the left-hand end to the right-hand end

  • of a crowded corridor without bumping

  • into any of the other robots that are in the same corridor.

  • So imagine you're walking down a corridor in an airport

  • and everybody else is coming in the opposite direction.

  • And you want to try and get to the other end of the corridor

  • without crashing into any of them.

  • But in fact, you have a rather large body space.

  • You don't want to get even close to any of them.

  • So you want to maintain your private body space.

  • And what the blue robot here is doing

  • is, in fact, modeling the consequences of its actions

  • and the other ones within this radius of attention.

  • So this blue circle is a radius of attention.

  • So here, we're looking at a simple attention mechanism.

  • Which is only worry about the other dynamic actors

  • within your radius of attention.

  • In fact, we don't even worry about ones that are behind us.

  • It's only the ones that are more or less in front of us.
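A minimal sketch of that attention mechanism: only consider dynamic actors within a radius of attention and roughly in front of the robot. The radius and field-of-view values are illustrative assumptions, not the experiment's actual parameters.

```python
# Minimal sketch of the simple attention mechanism described above.
import math

def actors_to_model(robot_pos, robot_heading, actors,
                    radius=1.5, fov_deg=180.0):
    """Return actors within `radius` metres and inside the forward field of view.

    robot_pos: (x, y); robot_heading: radians; actors: list of (x, y) positions.
    """
    attended = []
    for ax, ay in actors:
        dx, dy = ax - robot_pos[0], ay - robot_pos[1]
        if math.hypot(dx, dy) > radius:
            continue                                   # too far away to worry about
        bearing = math.atan2(dy, dx) - robot_heading   # angle relative to heading
        bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
        if abs(math.degrees(bearing)) <= fov_deg / 2:  # roughly in front of us
            attended.append((ax, ay))
    return attended
```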

  • And you can see that the robot does eventually make it

  • to the end of the corridor.

  • But with lots of kind of stops and back tracks

  • in order to prevent it from-- because it's really

  • frightened of any kind of contact with the other robots.

  • And here, we're not showing all of the sort

  • of filaments of prediction.

  • Only the ones that are chosen.

  • And here are some results which interestingly show us--

  • so perhaps the best one to look at is this danger ratio.

  • And dumb simply means robots with no internal model at all.

  • And intelligent means robots with internal models.

  • So here, the danger ratio is the number of times

  • that you actually come close to another robot.

  • And of course it's very high.

  • These are results for both simulated and real robots.

  • Very good correlation between the real and simulated.

  • And with the intelligent robot, the robot

  • with the internal model, we get a really very much safer

  • performance.

  • Clearly there is some cost in the sense

  • that, for instance, the runs of the intelligent robot,

  • the robot with internal models, tend to cover more ground.

  • But surprisingly, not that much further distance.

  • It's less than you'd expect.

  • And clearly, there's a computational cost.

  • Because the computational cost of simulating clearly

  • is zero for the dumb robots, whereas it's quite high for

  • the intelligent robot, the robot with internal models.

  • But again, computation is relatively free these days.

  • So actually, we're trading safety

  • for computation, which I think is a good trade off.

  • So really, I want to conclude there.

  • I've not, of course, talked about all aspects

  • of robot intelligence.

  • That would be a three hour seminar.

  • And even then, I wouldn't be able to cover it all.

  • But what I hope I've shown you in the last few minutes

  • is that with internal models, we have

  • a very powerful generic architecture which we could

  • call a functional imagination.

  • And this is where I'm being a little bit speculative.

  • Perhaps this moves us in the direction

  • of artificial theory of mind, perhaps even self-awareness.

  • I'm not going to use the word machine consciousness.

  • Well, I just have.

  • But that's a very much more difficult goal, I think.

  • And I think there is practical value,

  • I think there's real practical value in robotics of robots

  • with self and other simulation.

  • Because as I hope I've demonstrated,

  • at least in a kind of prototype sense, proof of concept,

  • such simulation moves us towards safer and possibly

  • ethical systems in unpredictable environments

  • with other dynamic actors.

  • So thank you very much indeed for listening.

  • I'd obviously be delighted to take any questions.

  • Thank you.

  • [APPLAUSE]

  • HOST: Thank you very much for this very fascinating view

  • on robotics today.

  • We have time for questions.

  • Please wait until you've got a microphone so we have

  • the answer also on the video.

  • AUDIENCE: The game-playing computers-- or perhaps more

  • accurately, game-playing algorithms--

  • predated the examples you listed of computers

  • with internal models.

  • Still, you didn't mention those.

  • Is there a particular reason why you didn't?

  • ALAN WINFIELD: I guess I should have mentioned them.

  • You're quite right.

  • I mean, the-- what I'm thinking of here

  • is particularly robots with explicit simulations

  • of themselves and the world.

  • So I was limiting my examples to simulations of themselves

  • in the world.

  • I mean, you're quite right that of course

  • game-playing algorithms need to have a simulation of the game.

  • And quite likely, of the-- in fact,

  • certainly, of the possible moves of the opponent,

  • as well as the game-playing AI's own moves.

  • So you're quite right.

  • I mean, it's a different kind of simulation.

  • But I should include that.

  • You're right.

  • AUDIENCE: Hi there.

  • In your simulation, you had the H-robot with one goal,

  • and the A-robot with a different goal.

  • And they interacted with each other halfway

  • through the goals.

  • What happens when they have the same goal?

  • ALAN WINFIELD: The same goal.

  • AUDIENCE: Reaching the same spot, for example.

  • ALAN WINFIELD: I don't know is the short answer.

  • It depends on whether that spot is a safe spot or not.

  • I mean, if it's a safe spot, then they'll both go toward it.

  • They'll both reach it, but without crashing

  • into each other.

  • Because the A-robot will make sure

  • that it avoids the H-robot.

  • In fact, that's more or less what's

  • happening in the corridor experiment.

  • That's right.

  • Yeah.

  • But it's a good question, we should try that.

  • AUDIENCE: The simulation that you

  • did for the corridor experiment, the actual real-world

  • experiment-- did the simulation track the other robots'

  • movements as well?

  • Meaning what information did the simulation

  • have that it began with, versus what did it perceive?

  • Because, I mean, the other robots are moving.

  • And in the real world, they might not

  • move as you predict them to.

  • How did the blue robot actually know for each step

  • where the robots were?

  • ALAN WINFIELD: Sure.

  • That's a very good question.

  • In fact we cheated, in the sense that we

  • used-- for the real robot experiments,

  • we used a tracking system.

  • Which means that essentially the robot with an internal model

  • has access to the position.

  • It's like a GPS, internal GPS system.

  • But in a way, that's really just a kind

  • of-- it's kind of cheating, but even a robot with a vision

  • system would be able to track all the robots

  • in its field of vision.

  • And as for the second part of your question,

  • our kind of model of what the other robot would do

  • is very simple.

  • Which is it's kind of a ballistic model.

  • Which is if a robot is moving at a particular speed

  • in a particular direction, then we

  • assume it will continue to do so until it encounters

  • an obstacle.

  • So very simple kind of ballistic model.

  • Which even for humans is useful for very simple behaviors,

  • like moving in a crowded space.
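A minimal sketch of that ballistic assumption: each tracked robot is assumed to keep its current velocity until it would meet an obstacle, then stop. Function and parameter names are hypothetical, not the simulator's actual code.

```python
# Sketch of the "ballistic" motion model described above.
def predict_ballistic(pos, vel, horizon_s, dt=0.1, hits_obstacle=lambda p: False):
    """Return the predicted trajectory [(x, y), ...] over horizon_s seconds."""
    x, y = pos
    vx, vy = vel
    trajectory = []
    for _ in range(int(horizon_s / dt)):
        nxt = (x + vx * dt, y + vy * dt)
        if hits_obstacle(nxt):
            break                      # assume the robot stops at the obstacle
        x, y = nxt
        trajectory.append(nxt)
    return trajectory
```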

  • Oh hi.

  • AUDIENCE: In the same experiment--

  • it's a continuation of the previous question.

  • So in between, some of the red robots

  • changed their direction randomly-- I guess so.

  • Does the internal model of the blue robot consider that?

  • ALAN WINFIELD: Not explicitly.

  • But it does in the sense that because it's

  • pre- or re-initializing its internal model every half

  • a second, then if the positions and directions of the actors

  • in its environment are changed, then the model

  • will reflect the new positions.

  • So--

  • AUDIENCE: Not exactly the positions.

  • But as you said, you have considered the ballistic motion

  • of the objects.

  • So if there is any randomness in the environment-- so

  • does the internal model of the blue robot

  • consider the randomness, and change

  • the view of the red robots?

  • It's like it views the red robot as following a ballistic motion.

  • So does it change its view of the red robot

  • when the red robot does not move in a ballistic way?

  • ALAN WINFIELD: Well, it's a very good question.

  • I think the answer is no.

  • I think we're probably assuming a more or less deterministic

  • model of the world.

  • Deterministic, yes, I think pretty much deterministic.

  • But we're relying on the fact that we

  • are updating and rerunning the model,

  • reinitializing and rerunning the model every half a second,

  • to, if you like, track the stochasticity which is

  • inevitable in the real world.

  • We probably do need to introduce some stochasticity

  • into the internal model, yes.

  • But not yet.

  • AUDIENCE: Thank you.

  • ALAN WINFIELD: But very good question.

  • AUDIENCE: Hello.

  • With real life applications with this technology,

  • like driverless cars, for example,

  • I think it becomes a lot more important how you program

  • the robots in terms of ethics.

  • So I mean, there could be a dilemma, like if the robot has

  • a choice between saving a school bus full of kids

  • versus one driver, that logic needs to be programmed, right?

  • And you made a distinction earlier between being an engineer

  • yourself and being an ethicist.

  • So to what extent is the engineer

  • responsible in that case?

  • And also does a project like this in real life

  • always require the ethicist?

  • How do you see this field in real life applications

  • evolving?

  • ALAN WINFIELD: Sure.

  • That's a really great question.

  • I mean, you're right that driverless cars will-- well,

  • it's debatable whether they will have to make such decisions.

  • But many people think they will have to make such decisions.

  • Which are kind of the driverless car equivalent of the trolley

  • problem, which is a well-known kind of ethical dilemma thought

  • experiment.

  • Now my view is that the rules will

  • need to be decided not by the engineers, but if you like,

  • by the whole of society.

  • So ultimately, the rules that decide

  • how the driverless car should behave

  • under these difficult circumstances, impossible,

  • in fact, circumstances-- and even

  • if we should, in fact, program those rules into the car.

  • So some people argue that the driverless cars should not

  • attempt to, as it were, make a rule driven decision.

  • But just leave it to chance.

  • And again, I think that's an open question.

  • But this is really why I think we need this dialogue and debate,

  • these conversations with regulators, lawyers, ethicists,

  • and the general public, users of driverless cars.

  • Because whatever those rules are, and even

  • whether we have them or not, is something

  • that should be decided, as it were, collectively.

  • I mean, someone asked me last week,

  • should you be able to alter the ethics of your own driverless

  • car?

  • My answer is absolutely not.

  • I mean, that should be illegal.

  • So I think that if driverless cars were

  • to have a set of rules, and especially

  • if those rules had numbers associated with them.

  • I mean, let's think of a less emotive example.

  • Imagine a driverless car and an animal runs into the road.

  • Well, the driverless car can either ignore the animal

  • and definitely kill the animal, or it could try and brake,

  • possibly causing harm to the driver or the passengers.

  • But effectively reducing the probability

  • of killing the animal.

  • So there's an example where you have some numbers

  • to tweak if you like, parameters.

  • So if these rules are built into driverless cars,

  • they'll be parameterized.

  • And I think it should be absolutely

  • illegal to hack those parameters, to change them.

  • In the same way that it's probably illegal right now

  • to hack an aircraft autopilot.

  • I suspect that probably is illegal.

  • If it isn't, it should be.

  • So I think that you don't need to go

  • far down this line of argument before realizing

  • that the regulation and legislation has

  • to come into play.

  • In fact, I saw a piece just this morning in Wired

  • that, I think, in the US, regulation for driverless cars

  • is now on the table.

  • Which is absolutely right.

  • I mean, we need to have regulatory framework,

  • or what I call governance frameworks for driverless cars.

  • And in fact, lots of other autonomous systems.

  • Not just driverless cars.

  • But great question, thank you.

  • AUDIENCE: In the experiment with the corridor,

  • you always assume-- even in the other experiments-- you always

  • assume that the main actor is the most intelligent

  • and the others are not.

  • Like they're dumb, or like they're

  • ballistic models or linear models.

  • Have you tried doing a similar experiment

  • in which still each actor is intelligent but assumes

  • that the others are not, but actually everyone

  • is intelligent?

  • So like everyone is a blue dot in the experiment

  • with the model that you have.

  • And also, have you considered changing the model so that it

  • assumes that the others have the same model

  • that that particular actor has, as well?

  • [INTERPOSING VOICES]

  • ALAN WINFIELD: No, we're doing it right now.

  • So we're doing that experiment right now.

  • And if you ask me back in a year,

  • perhaps I can tell you what happ-- I mean, it's really mad.

  • But it does take us down this direction

  • of artificial theory of mind.

  • So if you have several robots or actors,

  • each of which is modeling the behavior of the other,

  • then you get-- I mean, some of the--

  • I don't even have a movie to show you.

  • But in simulation we've tried this

  • where we have two robots which are kind of like-- imagine,

  • this happens to all of us, you're walking down

  • the pavement and you do the sort of sidestep dance

  • with someone who's coming towards you.

  • And so the research question that we're

  • asking ourselves is do we get the same thing.

  • And it seems to be that we do.

  • So if the robots are symmetrical, in other words,

  • they're each modeling the other, then

  • we can get these kind of little interesting dances, where each

  • is trying to get out of the way of the other, but in fact,

  • choosing in a sense the opposite.

  • So one chooses to step right, the other chooses to step left,

  • and they still can't go past each other.

  • But it's hugely interesting.

  • Yes, hugely interesting.

  • AUDIENCE: Hi.

  • I think it's really interesting how

  • you point out the importance of simulations

  • and internal models.

  • But I feel that one thing that is slightly left out

  • there is a huge gap from going from simulation to real world

  • robots, for example.

  • And I assume that in these simulations

  • you kind of assume that the sensors are 100% reliable.

  • And that's obviously not the case in reality.

  • And especially if we're talking about autonomous cars or robots

  • and safety.

  • How do you calculate the uncertainty

  • that comes with the sensors in the equation?

  • ALAN WINFIELD: Sure.

  • I mean, this is a deeply interesting question.

  • And the short answer is I don't know.

  • I mean, this is all future work.

  • I mean, my instinct is that a robot

  • with a simulation, internal simulation,

  • even if that simulation in a sense is idealized,

  • is still probably going to be safer than a robot that has

  • no internal simulation at all.

  • And you know, I think we humans have multiple simulations

  • running all the time.

  • So I think we have sort of quick and dirty, kind of low fidelity

  • simulations when we need to move fast.

  • But clearly when you need to plan something, plan

  • some complicated action, like where

  • you are going to go on holiday next year,

  • you clearly don't use the same internal model, same simulation

  • as for when you try and stop someone

  • from running into the road.

  • So I think that future intelligent robots

  • will need also to have multiple simulators.

  • And also strategies for choosing which

  • fidelity simulator to use at a particular time.

  • And if a particular situation requires

  • that you need high fidelity, then for instance,

  • one of the things that you could do,

  • which actually I think humans probably do,

  • is that you simply move more slowly to give yourself time.

  • Or even stop to give yourself time

  • to figure out what's going on.

  • And in a sense, plan your strategy.

  • So I think even with the computational power we have,

  • there will still be a limited simulation budget.

  • And I suspect that that simulation budget

  • will mean that in real time, when you're

  • doing this in real time, you probably can't run your highest

  • fidelity simulator.

  • And taking into account all of those probabilistic, noisy

  • sensors and actuators and so on, you probably

  • can't run that simulator all the time.

  • So I think we're going to have to have

  • a nuanced approach where we have perhaps multiple simulators

  • with multiple fidelities.

  • Or maybe a sort of tuning, where you can tune

  • the fidelity of your simulator.

  • So this is kind of a new area of research.

  • I don't know anybody who's thinking about this yet, apart

  • from ourselves.

  • So great.

  • AUDIENCE: [INAUDIBLE].

  • ALAN WINFIELD: It is pretty hard, yes.

  • Please.

  • AUDIENCE: [INAUDIBLE].

  • ALAN WINFIELD: Do you want the microphone?

  • Sorry.

  • AUDIENCE: Have you considered this particular situation

  • where there are two Asimov robots--

  • and that would be an extension of the question that he asked.

  • So for example, if there are two guys walking

  • on a pavement and there could be a possibility

  • of mutual cooperation.

  • As in, one might communicate that I might step out

  • of this place and you might go,

  • and then I'll go after that.

  • So if there are two Asimov robots,

  • will there be a possibility, and have

  • you considered this fact that both will communicate

  • with each other, and they will eventually come to a conclusion

  • that I will probably walk, and the other will get out

  • of the way.

  • And the second part of this question

  • would be what if one of the robots

  • actually does not agree to cooperate?

  • I mean, since they both would have different simulators.

  • They could have different simulators.

  • And one might actually try to communicate that you step out

  • of the way so that I might go.

  • And the other one doesn't agree with that.

  • I mean, what would the [INAUDIBLE].

  • ALAN WINFIELD: Yeah, it's a good question.

  • In fact, we've actually gotten a new paper which

  • we're just writing right now.

  • And the sort of working title is "The Dark Side

  • of Ethical Robots."

  • And one of the things that we discovered-- it's actually not

  • surprising-- is that you only need to change one line of code

  • for a co-operative robot to become a competitive robot,

  • or even an aggressive robot.

  • So it's fairly obvious, when you start to think about it,

  • if your ethical rules are very simply written,

  • and are a kind of layer, if you like,

  • on top of the rest of the architecture,

  • then it's not that difficult to change those rules.

  • And yes, we've done some experiments.

  • And again, I don't have any videos to show you.

  • But they're pretty interesting, showing

  • how easy it is to make a competitive robot, or even

  • an aggressive robot using this approach.

  • In fact, on the BBC six months ago or so,

  • I was asked surely if you can make an ethical robot,

  • doesn't that mean you can make an unethical robot?

  • And the answer, I'm afraid, is yes.

  • It does mean that.

  • But this really goes back to your question earlier,

  • which is that it should be-- we should make

  • sure it's illegal to convert, to turn,

  • if you like, to recode an ethical robot

  • as an unethical robot.

  • Or even it should be illegal to make unethical robots.

  • Something like that.

  • But it's a great question.

  • And short answer, yes.

  • And yes, we have some interesting new results,

  • new paper on, as it were, unethical robots.

  • Yeah.

  • HOST: All right, we are running out of time now.

  • Thanks everyone for coming today.

  • Thanks, Professor Alan Winfield.

  • ALAN WINFIELD: Thank you.

  • [APPLAUSE]
