[MUSIC PLAYING]
Ethics applies in any context in which you introduce robots.
And it really pervades everything we do.
Robots are entering into a lot more areas of our lives
than ever before and taking on different social roles.
I love robots.
I want to build robots.
But I think we can't be naive about the possible harms
and repercussions that we don't even expect,
because that will affect real people's lives.
[MUSIC PLAYING]
I think we have a very powerfully ambivalent
relationship to robotics.
And what I mean by that is, on the one hand
we're drawn to them very strongly.
They're very compelling and captivating as objects.
But they also can be very scary in many cases.
[GUNSHOTS]
There's a host of concerns about robotic systems.
Robots really represent the ability
to have roaming, roving cameras that are capturing
all kinds of information.
All of the outdoor space, public space,
places that were not observed, are going to be observed.
And that data is all going to be captured and recorded.
And we really don't have any laws or regulations
about how that can be used.
The other thing that makes these very different
is the fact that they move around.
These are physically engaged with the world.
And that means they can do a lot of harm in the physical world.
And Amazon is talking about these quadrotors or octorotors
that are going to be delivering packages to your front door.
My first concern is, well, that's eight lawn mower
blades coming down for a landing in front of your house,
especially if it's carrying a box of books.
Then you have a whole host of legal issues
around who's responsible when these things do some harm.
So there's the person who maybe owns the robot.
There's the person who programs it or maybe
tells it what to do.
The idea of who really shares in these kinds of responsibilities
becomes very critical, especially when
we think about technological innovation.
Because a company that manufactures a robot,
if they hold all of the liability,
then that's a really big impediment
to them bringing to market these different technologies.
But if we shift the law so that they don't have any liability
or we greatly limit the kind of liability that they can have,
we could wind up with a lot of really dangerous robots.
So we really have to find a balance between those extremes.
[MUSIC PLAYING]
Asimov's three laws of robotics are pretty straightforward.
A robot shouldn't do any harm to a human being.
A robot should obey a human being.
And a robot should, in effect, engage in self-preservation.
But if you actually read Asimov's stories,
in nearly every story, something goes awry largely because
of these laws themselves.
So what happens if two humans give orders to the robots
and they're conflicting orders?
What should the robot do then?
He illustrated how just a simple rule-based morality does not
work.
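To make that failure mode concrete, here is a minimal sketch, not from the film, of a robot that follows only a literal "obey human beings" rule. The function name and order format are invented for illustration. The moment two humans issue contradictory orders, the rule gives the robot no way to decide:

```python
def second_law(orders):
    """Return the action to take, if the humans' orders agree."""
    actions = {order["action"] for order in orders}
    if len(actions) == 1:
        return actions.pop()  # unanimous: the rule resolves cleanly
    # Conflicting orders: the bare "obey humans" rule has no tie-breaker.
    raise ValueError(f"conflicting orders, no rule applies: {actions}")

orders = [
    {"human": "Alice", "action": "open the airlock"},
    {"human": "Bob", "action": "keep the airlock shut"},
]

try:
    print(second_law(orders))
except ValueError as err:
    print("Robot is stuck:", err)  # the deadlock Asimov's stories dramatize
```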
[CRASHING NOISE]
Robots are going to encounter some hair-raising scenarios.
Take a self-driving car on a narrow bridge with a school bus
headed toward it. What does it do? Does it crash
into the bus?
Or does it drive off the bridge, taking you
and the other passengers in the car to immediate death?
In ancient times, we used to believe that being moral
was to transcend all your emotional responses
and come up with the perfect analysis.
But actually, an awful lot more comes
into play: our ability to read the emotions of others,
our consciousness, our understanding
of habits and rituals and the meaning of different gestures.
It's not clear that we know how to get
that kind of understanding or appreciation
into those systems.
So will their analytical tools be satisfactory?
They may win the game of Jeopardy.
And the danger is that this will make us
liable to attribute levels or kinds of intelligence
to them that they do not have.
That may lead to situations
where we become increasingly reliant on them
to manage tools that they won't really
know how to manage when an idiosyncratic and truly
dangerous situation arises.
I think the revolution in military robotics has been
the widespread adoption and use
of unmanned aerial systems by the US military.
As these things make mistakes, we
don't really know who's necessarily controlling them.
If they've been programmed, again, who's responsible?
So it becomes much easier to distance yourself
from the responsibility.
And I think in the case of autonomous systems,
it's a really big question.
Because these things may accidentally
kill people or do something that,
if a human had done it, we would consider a war crime.
When it's done by a machine,
is that just a technological error?
Or is it a war crime?
In legal terms, that's really about intention
to commit a war crime.
Otherwise, it's just sort of a mistake,
which is very different from product liability.
So if you make a mistake with product liability,
there's a lawsuit and the company still
has to pay even though they didn't intend the harm.
But in war, that's not the case.
I think robots are not a threat in and of themselves.
I think we have to worry about how they're used.
And we have to design them and build them very responsibly.
But I think we can't be naive about the possible harms
and repercussions, the ways that they
may be used that we don't approve of, or the ways that we
don't even expect.
Relieving ourselves of responsibility
for our actions because we can point to the robot
and say, well, it's not my responsibility,
I think that's a kind of abdication of responsibility.
It's a threat to who we are as humans
and how we develop as a society.
What interests me is how humans are starting
to interact with robots in ways that view robots
not so much as objects but as lifelike things,
and the ethical questions that come along with that.
Will you be my friend?
Sure thing, Martin.
Anthropomorphism is our tendency
to project human-like qualities on animals
or life-like qualities on objects.
And the interesting thing about robots, particularly social
robots, is that just having something
moving around in our physical space
that we can't quite anticipate lends itself
to this projection.
And we start to name these objects
or give them a gender or ascribe intent or states of mind
to them.
We'll even feel bad for them if they get stuck under the couch.
And so we start to perceive these objects differently
than we do other objects like toasters.
This can actually get pretty extreme.
So I did a workshop where my friend [INAUDIBLE]
and I gave people little robotic dinosaurs
and had them play with them.
And then we asked them to torture and kill them.
And they had a lot of trouble doing it.
They basically refused to even hit the things.
So anthropomorphism can go a long way
in how we're willing to treat things.
Even with the primitive social robots we have now,
there are YouTube videos of robots being
tortured or treated violently.
And the comments underneath these videos
are very polarized.
And some people already get very upset by this type of behavior.
We're not yet at the stage in robotic development
where robots could actually experience something
that's anywhere near how we imagine our own pain
experience.
So the question is not do the robots actually feel it
when you hurt them?
The question is more, what do we feel when we hurt them?
One could imagine especially a child who doesn't really
understand the difference between a cat
and a robotic object becoming either
traumatized or desensitized to this type of behavior.
If our research shows that violent behavior and hurting
robots translates to treating other humans in a violent way
or turning off parts of us that are empathetic,
then yes, we probably should prevent people
from hurting robots.
If we put a red line in the sand right now,
that doesn't mean that we will never
see Terminator-like creatures.
But it means we're trying to design
in the appropriate ethical considerations
in today's systems.
It's going to be more important what we project onto robots
and how we interact with them and what
roles they play in society.
And that's going to drive what moral status we give them,
essentially.
And I think that's why it's important for us
collectively to also think about who we are as humans
and how we develop as a society.
[MUSIC PLAYING]