

  • SUSANNA: Authors at Google is very pleased today

  • to invite David Mindell.

  • David Mindell is the Dibner professor

  • of the history of engineering and manufacturing at MIT.

  • He has 25 years of experience as an engineer

  • in the field of undersea robotic exploration,

  • as a veteran of more than 30 oceanographic expeditions,

  • and more recently, as an airplane pilot and engineer

  • of autonomous aircraft.

  • He is the award-winning author of "Iron Coffin: War,

  • Technology, and Experience aboard the USS Monitor,"

  • and "Digital Apollo: Human and Machine in Spaceflight."

  • And his most recent book, "Our Robots, Ourselves,"

  • was published by Viking on October 13 of this year.

  • Please join me in welcoming David Mindell.

  • [APPLAUSE]

  • DAVID MINDELL: Thank you, Susanna, for that nice introduction.

  • It's a pleasure to be here.

  • And I'm going to talk about my new book,

  • "Our Robots, Ourselves."

  • I come to this book sort of out of the experience

  • of my previous book, which was called "Digital Apollo."

  • And "Digital Apollo" was about the computers

  • and the software that were used inside both the command module

  • and the lunar module for the Apollo lunar landings

  • in the '60s, how they were designed,

  • how they were engineered.

  • It was really the first embedded computer.

  • It was certainly the first computer

  • in which software became central to human life

  • and was life-critical, and one of the first real-time

  • control computers, and the first digital fly-by-wire system.

  • And you can see in this image over on the right, which

  • is the cover image-- it was made actually by John Knoll, whom you

  • may know from Industrial Light & Magic-- and a little bit more

  • clearly presented here.

  • The book focuses on this kind of climactic moment in the Apollo

  • 11 lunar landing where, as the mythology goes,

  • Armstrong reaches up and turns off

  • the computer at the last minute and lands the spacecraft by hand

  • to avoid this crater that you can see there

  • out the window, West crater.

  • And the book sort of takes that moment as a starting point

  • for why would he turn off the computer,

  • and why was that important?

  • And now it turns out that he didn't turn off the computer.

  • He turned it from a fairly highly automated targeting

  • mode that allowed him a kind of cursor

  • control around the moon to a still fairly highly

  • semi-automated mode, attitude hold

  • in his right hand, rate of descent

  • with a switch in his left hand.

  • Still very much a fly-by-wire kind of automated mode.

  • That's actually not so far from how pilots

  • fly Airbus airliners today.

  • He didn't turn off the computer.

  • He moved it to a different level of automation

  • to suit what he felt was the situation at the time.

  • And this was a very interesting moment because I learned,

  • in the course of writing this book, at the time,

  • the Soviet spacecraft were controlled by analog computers.

  • And they were very highly automated.

  • They left very little discretion and role for the astronauts.

  • The American computer was a general purpose

  • digital computer, one of the first

  • uses of integrated circuits.

  • In fact, this computer consumed 60%

  • of the national output of integrated circuits

  • for a couple years during the '60s.

  • So it was a very advanced, forward-looking thing to do,

  • including all the complexities of the software.

  • And yet all that advanced technology

  • did not make the machine more automated.

  • It just made it automated in a more nuanced, sophisticated way

  • that gave the human better control over the spacecraft.

  • And that gave me this idea, which I then

  • pursued throughout this new book,

  • that often the highest level of automation--

  • we talk about levels of automation--

  • is not necessarily full autonomy.

  • And the most difficult challenging

  • technological problem is not full autonomy,

  • but rather what I've come to call a perfect five.

  • If you think about level 1 as fully manual,

  • level 10 as fully automated, the perfect five

  • is really the most complicated, difficult,

  • and I think also socially and financially rewarding

  • place to have automated systems where the human is in control.

  • There's trusted, transparent autonomy.

  • And the system can be moved up and down various levels,

  • turning things on, turning things off,

  • in order to suit the particular situation.

  • And what you see through the Apollo story

  • and many of the stories in the book is that a lot of systems

  • start out in the engineering phase

  • as imagining full autonomy.

  • The engineers on the Apollo computer

  • thought there would only be two buttons on the interface.

  • One would be go to moon, and one would be take me home.

  • And instead of course what you ended up

  • with was this very rich, very carefully designed

  • mix of instruments and controls.

  • As these systems frequently, time and time again,

  • move from laboratory to field, there

  • are human interventions and human controls

  • put in at critical moments.

  • So again, I've been talking about the perfect five.

  • So the subtitle of the book is "The Myths of Autonomy."

  • I want to read you a little bit from chapter one

  • about what those myths are.

  • First there's the myth of linear progress,

  • the idea that technology evolves from direct human involvement

  • to remote presence, and then to fully autonomous robots.

  • Political scientist Peter Singer--

  • you may be familiar with his book "Wired for War"--

  • epitomizes this pathology when he writes, quote,

  • "this concept of keeping the human in the loop is already

  • being eroded by both policymakers and the technology

  • itself, which are both moving rapidly toward pushing humans

  • out of the loop," unquote.

  • But there's no evidence to suggest

  • that this is a natural evolution,

  • that the technology itself, as Singer puts it,

  • does any such thing.

  • In fact, there's good evidence-- a lot of it

  • is presented in this book-- that people are moving into deeper

  • intimacy with their machines.

  • Second is the myth of replacement,

  • the idea that machines take over human jobs one for one.

  • But human factors researchers and cognitive systems engineers

  • have found that rarely does automation simply

  • mechanize a human task.

  • Rather, it tends to make the task more complex,

  • often increases the workload, or at least shifts it around.

  • And finally, as I mentioned, we have the myth

  • of full autonomy, the Utopian idea that

  • robots today and in the future should operate entirely

  • on their own.

  • Yes, automation can certainly take

  • on parts of tasks that were previously

  • accomplished by humans.

  • Machines do act on their own in response

  • to their environments for certain periods of time.

  • But the machine that operates entirely independently

  • of human direction is a useless machine.

  • And I used to say that only a rock is truly autonomous.

  • But then my geologist friends reminded me

  • that even rocks are shaped and formed by their environments.

  • Automation changes the type of human involvement required,

  • transforms it, but does not eliminate it.

  • For any apparently autonomous system,

  • you can always find the wrapper of human control

  • that makes it useful and returns meaningful data.

  • The questions that I'm interested in then

  • are not manned versus unmanned, human control

  • versus autonomous, but rather where are the people?

  • Which people are they?

  • What are they doing, and when are they doing it?

  • And so you can trace through networks of-- in some sense,

  • any programming task is a kind of placing

  • of human intention, and human design,

  • and human views of the world into a piece of software that

  • is later executed at some future date.

  • So the book covers four extreme environments.

  • And the idea is that in these extreme environments like space

  • flight, people have been forced to use robotics for 30

  • or 40 years because, in many cases,

  • it was the only way to do a job where

  • people couldn't physically be.

  • And we can look at those environments

  • and see something about our robotic future

  • in more ordinary environments like automobiles

  • and other aspects of daily life.