[MUSIC PLAYING]
Maneuvering a vehicle in any type of weather
comes with its own set of challenges and limitations.
Doing so in conditions that limit visibility,
such as mist or fog,
can be even more difficult, or even dangerous.
But now, thanks to a team of researchers out of MIT
and their newly developed system,
there may be a solution to this problem.
MIT researchers have developed a novel imaging system that
can gauge the distance of objects
shrouded by fog so thick that human vision can't penetrate
it.
An inability to handle misty driving conditions
has been one of the main obstacles
to the development of reliable autonomous vehicular navigation
systems.
So the MIT system could be a crucial step
toward self-driving cars.
To test their system, the team placed objects
in an enclosed box approximately one meter long
and then gradually filled the space with thick fog.
Outside the box, a laser fired pulses of light
into the foggy scene, while a camera measured
the time it took their reflections to return.
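As a rough illustration of the time-of-flight principle the setup relies on, here is a minimal sketch in Python. The names and the example timing are illustrative assumptions, not the researchers' code: the distance is simply half the round-trip time multiplied by the speed of light.

```python
# Minimal sketch of the time-of-flight principle behind the setup:
# a pulse's round-trip travel time gives the distance to the reflector.
# Names and numbers here are illustrative, not from the MIT system.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(t_seconds: float) -> float:
    """One-way distance to an object, given the pulse's round-trip time."""
    return SPEED_OF_LIGHT * t_seconds / 2.0

# A reflection returning after roughly 6.7 nanoseconds corresponds to
# about one meter, the scale of the fog chamber used in the experiment.
print(distance_from_round_trip(6.7e-9))  # ~1.0 m
```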
They found that the system could image objects
even when they were indiscernible to the naked eye.
More specifically, in fog so dense
that human vision could only penetrate 36 centimeters,
their system was able to resolve images of objects
and gauge their depth at a range of 57 centimeters.
57 centimeters is not a great distance,
but the fog produced for the study is far denser than any
that a human driver would have to contend
with in the real world.
The vital point is that the system performed far better
than human vision, whereas previous imaging systems
have performed worse than the naked eye.
The system is designed to get around
the issue of light reflecting off water droplets in fog,
which confuses most imaging systems, making
it almost impossible to discern objects ahead.
The MIT researchers developed an algorithm
that uses statistics about the way fog
typically scatters light to separate
the camera's raw data into two parts:
the light reflected from the shrouded object
and the light reflected from the fog itself.
The light reflected from the object
is then used to image the scene and calculate the object's
distance.
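To make that separation step concrete, here is a hedged sketch of how such an algorithm might work, assuming the fog's photon arrival times can be fit with a gamma distribution. The transcript does not specify which statistics the MIT team actually used, so the statistical model, the function names, and the histogram-based approach are all illustrative assumptions.

```python
# Hedged sketch of the separation step described above: fit a statistical
# model of fog-scattered light to a pixel's photon arrival-time histogram,
# subtract it, and treat the residual peak as the object's reflection.
# The gamma-distribution model and all names here are illustrative
# assumptions; the transcript does not specify MIT's exact statistics.

import numpy as np
from scipy.stats import gamma

def separate_fog_and_object(arrival_times: np.ndarray, bins: int = 256):
    """Split a photon arrival-time histogram into fog and object parts."""
    counts, edges = np.histogram(arrival_times, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0

    # Fit the fog background: most detected photons are fog scatter,
    # so a fit over all arrival times approximates the fog component.
    shape, loc, scale = gamma.fit(arrival_times)
    fog_pdf = gamma.pdf(centers, shape, loc=loc, scale=scale)
    fog_counts = fog_pdf * counts.sum() * (edges[1] - edges[0])

    # The object's reflection is whatever the fog model can't explain;
    # its arrival time then feeds the time-of-flight depth calculation.
    residual = np.clip(counts - fog_counts, 0.0, None)
    t_object = centers[np.argmax(residual)]
    return fog_counts, residual, t_object
```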
Of course, visibility is not a well-defined concept,
since objects with different colors or textures
are visible through fog at different distances.
So to assess the system's performance,
the team used a more rigorous metric
called "optical depth," which describes the amount of light
that penetrates the fog.
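The standard definition behind this metric comes from the Beer-Lambert law: a medium of optical depth tau transmits a fraction e^(-tau) of the light entering it. A minimal sketch, with illustrative numbers rather than the study's measurements:

```python
# Standard relation behind the "optical depth" metric (Beer-Lambert law).
# Example values are illustrative, not the study's measurements.

import math

def transmitted_fraction(tau: float) -> float:
    """Fraction of light that makes it through fog of optical depth tau."""
    return math.exp(-tau)

def optical_depth(transmitted: float) -> float:
    """Optical depth inferred from the transmitted fraction of light."""
    return -math.log(transmitted)

print(transmitted_fraction(3.0))  # tau = 3 -> ~0.05, about 5% gets through
```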
Optical depth is independent of distance,
so the performance of the system on fog
that has a particular optical depth at a range of one meter
should be the same as its performance on fog
that has the same optical depth at a range of, say, 50 meters.
In fact, the system may even fare better
at longer distances, as the difference
between light particles' arrival times
will be greater, which could make for more accurate images.
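To see why, compare the round-trip timing at the two ranges; the numbers below are illustrative, not measurements from the study. The same optical depth stretched over 50 meters spreads photon arrival times across a window tens of times wider than in the one-meter chamber, which is easier to resolve in time.

```python
# Illustrative round-trip times only; not measurements from the study.
# A longer physical range spreads photon arrival times over a wider
# window, so echoes are easier to separate in time.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def round_trip_ns(distance_m: float) -> float:
    """Round-trip travel time, in nanoseconds, to an object and back."""
    return 2.0 * distance_m / SPEED_OF_LIGHT * 1e9

print(round_trip_ns(1.0))   # ~6.7 ns: the scale of the lab fog chamber
print(round_trip_ns(50.0))  # ~333.6 ns: a realistic driving range
```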