There are two visions for self-driving cars today. One of them -- this is a symbol
that the Toyota team uses, which I really love. They call it the Mobility
Teammate Concept. In this particular model, the car is a virtual
copilot that arguably drives even better than you do.
It drives better than you do because it's paying attention all the time. It's
paying attention because it's a computer. It sees all around itself. And if we could
create the technology to make it possible for the car to drive by itself, this virtual
copilot will keep you out of harm's way. I kind of imagine this idea where your car
essentially has a virtual bubble around it, a virtual force field.
You never run into anything around you. It knows exactly where to go, and it
keeps you out of harm's way. The second vision is a car that has no driver at
all. In both of these examples, the computing capability necessary to achieve
autonomy -- self-driving capability -- is going to be far greater than anything that's currently
available. And we'll talk about that in a second and make it clear why that's
so. In both of these visions, if we could realize them, the
contribution we can make to society is really, really wonderful. And so
our vision is to create the computing platform necessary to make this possible.
The self-driving program loop basically works like this. There's a sensing part.
You get all kinds of sensors. You want sensors that can see during the daytime,
during nighttime, during rain, during fog. All of the different types of conditions
that you could possibly imagine
surrounding the car, you would like to be able to sense. It could be radars.
It could be LIDARs. It could be cameras. It could be ultrasonics. You have inertial
measurement units. You have GPS. All of these sensors contribute to sensing
where you are and what's around you.
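As a rough sketch of how those inputs might be grouped in software -- the type and field names below are illustrative assumptions, not any particular platform's API:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class SensorFrame:
    """One synchronized snapshot of everything the car can sense.
    Field names are illustrative, not from any real platform."""
    radar: Any        # works in rain and fog; measures range and relative velocity
    lidar: Any        # precise 3D structure, day or night
    cameras: Any      # rich appearance: lanes, signs, lights
    ultrasonics: Any  # short range, useful for nearby obstacles
    imu: Any          # inertial measurement unit: acceleration and rotation
    gps: Any          # coarse global position
    timestamp: float  # when this snapshot was taken
```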
There are several different blocks that I'll talk about. The top block is the map,
the precision map. That map was generated in advance. It was done through scanning,
probably LIDAR scanning. You see these cars that are running around mapping
the world in 3D, measuring the world and precisely mapping every part.
In the middle, that block is localization. That has to do with where you
are: based on what you measure, based on what you perceive, we have to figure out
where you are within a few centimeters.
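To make that step a little more concrete, here is a toy scan-matching sketch. It assumes a 2D pose, a 10 cm occupancy grid derived from the LIDAR map, and a brute-force search around the previous pose; real localizers are far more sophisticated, and this only illustrates the idea of matching what you measure against the precision map.

```python
import numpy as np

def transform(scan_xy, pose):
    """Move scan points (N x 2, vehicle frame) into the map frame,
    given pose = (x, y, heading)."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return scan_xy @ R.T + np.array([x, y])

def match_score(scan_xy, occupied_cells, pose, cell=0.1):
    """Count scan points that land on occupied 10 cm map cells."""
    pts = np.floor(transform(scan_xy, pose) / cell).astype(int)
    return sum(tuple(p) in occupied_cells for p in pts)

def localize(prior_pose, scan_xy, occupied_cells, radius=0.5, steps=11):
    """Brute-force search small offsets around the prior pose and keep
    the candidate whose scan best matches the precision map."""
    best_pose, best_score = prior_pose, -1
    for dx in np.linspace(-radius, radius, steps):
        for dy in np.linspace(-radius, radius, steps):
            cand = (prior_pose[0] + dx, prior_pose[1] + dy, prior_pose[2])
            score = match_score(scan_xy, occupied_cells, cand)
            if score > best_score:
                best_pose, best_score = cand, score
    return best_pose
```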
The bottom block is perception. Perception -- seeing everything around you,
whether it's during daytime, driving directly into sunlight, in snow, in
fog, in rain. Whatever the conditions, we need to perceive where you are and
what's around you. And based on all of that information, we have to figure out a
way to plan your path. So starting from where you are, you perceive the world. We
figure out a way to calibrate that with a pre-known map. And then, based on
everything that's moving around you --
your movement, where you are in the world, and the movement of all the other
objects around you -- whether it's pedestrians crossing or bicyclists or
motorcyclists or cars that are moving all around you -- you need to find a way to
find your path. That's called path planning. You do this in basically
an infinite loop. You just keep doing it continuously: you're perceiving the
world continuously, you're comparing it against the map continuously, you're
localizing continuously, and you're planning the next step
continuously. And we do this as fast as we can, so we can control the car,
make it drive and plan its path, and keep you out of harm's way.
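To show the shape of that loop, here is a minimal sketch. Every callable is a hypothetical stand-in for an entire subsystem -- the real perception, localization, and planning systems are exactly the hard parts discussed next:

```python
def drive_loop(sense, localize, perceive, plan, control, precision_map):
    """Toy sketch of the self-driving program loop described above.
    Each argument is a hypothetical stand-in, not a real API."""
    pose = None  # no estimate yet; the first localization pass fixes that
    while True:  # basically an infinite loop, run as fast as we can
        readings = sense()                              # radar, LIDAR, cameras, ultrasonics, IMU, GPS
        pose = localize(readings, precision_map, pose)  # where am I, within a few centimeters?
        obstacles = perceive(readings)                  # pedestrians, bicyclists, motorcyclists, cars
        path = plan(pose, obstacles, precision_map)     # path planning: find a safe way forward
        control(path)                                   # steer, brake, and accelerate along that path
```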
It turns out all of this is relatively hard to do. Each and every single one of these
blocks is hard to do.
Perception is hard, localization is hard, path planning is hard.
Figuring out what sensors to use -- the right number of sensors,
but modestly, so we can make it cost-effective -- is hard to do. All of these
components are hard to do, and there's innovation,
R&D, and discovery in every single component of it.
Self-driving cars are hard. It turns out that driving is hard.
When you think about highway driving, we've constrained the problem enough
that highway driving has become relatively easy, to the point
where grandparents can drive. We all largely stay in our lane, we're traveling
about the same speed, and so relative to each other we're hardly moving. Highway
driving is relatively easy. And even then, there are many conditions under which
highway driving hasn't been solved. We have to handle highway driving in all kinds of
conditions. If a truck is carrying some junk and parts of it fall off onto the
road -- that happens, as you know, pretty much all the time. And when that happens,
your car has to do the right thing, take evasive action, and keep you out
of harm's way. So, even highway driving is hard. But we can't realize
the full potential of our vision unless we can also solve city driving. Now, in
city driving, almost none of us are going in the same direction.
You've got cars coming sideways, you've got people walking all over the place, bikers
are of course on the same road as you, and motorcyclists too. And people are
sometimes following the rules and largely not. So, you can't read a manual
from the DMV and figure out exactly what program to write into
your car to cause it to drive properly. And so city driving is very chaotic. It's very,
very hard, and the perception problem, as you can imagine, explodes. You have to
perceive all kinds of cars, all different types of people, all different types of
bicyclists, all different types of environments.
Sometimes the road is being fixed; sometimes there's construction going on.
Sometimes an old lane has been painted over with a new one, and all of a sudden the number of
lanes your car sees is rather confusing for it. And so,
everything around that environment is chaotic.
It's complex, it's unpredictable, and oftentimes it's even hazardous. So, self-driving
cars are hard.