TRACE DOMINGUEZ: Hey, everyone.
Thanks for tuning in to Seeker Plus.
I know this is also on Seeker.
This is a slightly different format
where we get really deep into a topic for a little while.
So stick with us.
I'm Trace.
This is going to be really cool.
We're talking about artificial intelligence today.
And we didn't want to present it as artificial intelligence is
scary or artificial intelligence is the best,
but come down in the middle and give you both perspectives
because there are people on both sides of pretty much every one
of these debates.
So let's kick into it.
Artificial intelligence is everywhere,
and it's growing both in scope and also in scale.
It's just getting to be in everything.
And people have strong opinions on this.
One article called the debate itself
"Singularitarians Versus Skeptics," which
we thought was kind of cool.
And people do see artificial intelligence
as a possible threat--
people like Elon Musk, Stephen Hawking,
Bill Gates kind of, and at one point Steve Wozniak.
On the other side, people see artificial intelligence
as the future and as inevitable-- people
like Larry Page of Google, who has said that in order for Google
to achieve its mission, it has to achieve AI.
Mark Zuckerberg uses AI to run his home,
and Facebook is moving towards having AI assistants.
Even Uber has an AI division.
So it's this mix of old tech and new tech, with Elon Musk
right in the middle.
But chances are you probably know this stuff already,
because if you're watching this show,
you likely follow science news.
But one way to think about this is positives and negatives.
Everyone has an opinion about AI, and both of those opinions
are usually valid.
So we thought we'd see how the debate is put together
rather than coming down on one side or the other.
So let's quickly define artificial intelligence.
It's not just robots and chatbots and assistants.
It's being used across all sorts of fields and industries,
and it's computers and machines
imitating human intelligence.
But it's not just that.
It's more than just learning.
It's more than just replicating our intelligence.
It's about learning new things, and it's also
about the machines learning on their own.
The big idea, the big concept here could be great,
but it also could be kind of scary.
So let's talk about different ways
that you could apply artificial intelligence.
A big one being talked about right
now is artificially intelligent automobiles-- trucks
and cars that drive themselves-- which, if you think about it,
is an intelligent task.
It requires decision making and learning
from your experiences.
And a big thing would be safety.
About 90 percent of car crashes today
are caused by human error.
Smart cars, intelligent cars, they
would be able to take in the environment around them.
GPS could tell them what roads they're on and also
what buildings are nearby and other monuments and things,
of course.
They have cameras and scanners.
So they can see the trees and the way the road actually moves
versus what it's supposed to do.
And they can see things like other cars.
What if those cars were smart, too?
Then they could interact with each other.
And then you have this network that is moving everywhere.
What if you have smart traffic lights?
Eventually this whole system could
be one giant AI system where they're all
talking to each other.
And then we could virtually eliminate 90% of crashes,
right?
Assuming we're eliminating all of the human error ones.
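To make that "everything talks to everything" idea a little more concrete, here is a minimal sketch in Python of cars broadcasting their position and speed to a shared intersection controller that decides who goes first. The classes, names, and numbers are invented for illustration; no real vehicle or traffic system works exactly like this.

```python
# Toy sketch (not a real vehicle API): each car reports its position
# and speed, and a hypothetical smart traffic light grants right of way
# to whichever car will reach the intersection first.

from dataclasses import dataclass

@dataclass
class Car:
    name: str
    distance_to_intersection: float  # meters
    speed: float                     # meters per second

    def eta(self) -> float:
        """Seconds until this car reaches the intersection."""
        return self.distance_to_intersection / self.speed

def grant_right_of_way(cars):
    """Let the car that will arrive first go; tell the rest to yield."""
    first = min(cars, key=lambda c: c.eta())
    return {c.name: ("go" if c is first else "yield") for c in cars}

if __name__ == "__main__":
    cars = [Car("A", 40.0, 10.0), Car("B", 30.0, 15.0)]
    print(grant_right_of_way(cars))  # {'A': 'yield', 'B': 'go'}
```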
Of course, there are moral issues here.
The car might be designed to cause the least
damage to the owner of the car.
It might not want the car to be destroyed in a crash.
And so that's a moral question.
Can the owner of the car-- say, a taxi company--
decide to protect its property
over protecting passengers or pedestrians?
This is an ethical dilemma.
What about a guy who owns a car?
He would want to protect his family at all costs.
But what if in doing so it sacrificed
other people's property or destroyed
other intelligent cars?
It gets complicated because nothing
is black and white when you get it out into the real world.
And that's part of the problem with artificial intelligence
across the board right now--
is that the real world is messy.
Is it worth it to have a self-driving intelligent car
that might decide some humans aren't worth saving
over other humans, or that some property isn't worth saving
over other property, even if it can prevent more crashes,
even if it knows the fastest routes and could drive itself,
so you wouldn't need to worry about people who are too young,
too old, or too intoxicated or otherwise impaired to drive?
At the end of the day, intelligent automobiles
mean that, on the good side, we don't have to go to the DMV.
We don't have to worry about drunk driving.
But on the bad side, we're giving up control
over something that happens all the time, all around us,
especially in urban areas.
So if you're in a car and it's raining and you slide,
you go around a corner and maybe something bad happens.
The road's too wet.
The car did something wrong.
It's either going to destroy the life
of the person in the car or a pedestrian on the sidewalk.
That decision is made by a machine, not by a human--
or maybe by whoever programmed
or owns that machine.
We as a society have to make a decision
to give up that control and hand it to an artificially
intelligent machine.
How do you feel about that?
It's a debate.
There's good and bad on both sides.
And this applies across the board.
Let's go to another example--
marketing.
Marketing is pervasive, especially
in the era of the internet.
Marketers want to know where you're going to be,
what you're going to buy, and what it takes
to get you to buy something.
You shop for shoes that one time,
and now there's an ad for shoes on every website you
visit-- that shoe and a couple of other shoes
in a variety of colors.
And they're all really nice, and you kind of want all of them.
But you kind of want none of them
because they're just everywhere.
Or maybe you regularly visit a site on your computer
before you buy something.
Maybe you visit it 20 times.
Now right now all of that stuff happens and all of that data
is there, but no one's really looking
at all of that data except artificially
intelligent machines.
Predictive shopping and also recommendation engines
on Netflix and on Amazon--
those are basic artificial intelligences.
So imagine if they got way better.
When you shop for something, do you just go and buy it
or do you visit the same site once or twice, then
maybe look at some reviews, then maybe ask your friends about it
and then go back and buy it?
An artificial intelligence system
might be able to get you to buy it sooner because it
knows your behavior.
So when you go there the first time,
what if an artificially intelligent system
just pulled in recommendations from the sites
it knows you visit, because of the cookies--
little files-- that all of those sites
have stored on your computer?
What if it knew your social network,
saw which of your friends had bought from this website
and had already bought these types of items,
and said, oh, we know that all of your friends like this thing,
and we know where you've shopped before.
And the artificially intelligent system
could anticipate what you want.
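Here is a minimal sketch of that kind of recommendation logic: score an item higher the more you've viewed it and the more of your friends have bought it. The data, weights, and function names are made up for illustration; real recommendation engines are far more sophisticated.

```python
# Toy recommender: combine your own browsing history with what
# friends have bought, using invented weights.

from collections import Counter

def recommend(viewed, friend_purchases, view_weight=1.0, friend_weight=2.0):
    """Return items ranked by a simple combined score."""
    scores = Counter()
    for item, views in viewed.items():
        scores[item] += view_weight * views
    for item, buyers in friend_purchases.items():
        scores[item] += friend_weight * buyers
    return scores.most_common()

if __name__ == "__main__":
    viewed = {"running shoes": 5, "boots": 1}      # pages you kept revisiting
    friends = {"running shoes": 3, "sandals": 2}   # what friends bought
    print(recommend(viewed, friends))
    # [('running shoes', 11.0), ('sandals', 4.0), ('boots', 1.0)]
```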
Is that scary or is that good?
Because sometimes I kind of like shoes,
but I don't know what I want.
And maybe the computer can tell me you do want this,
and I might love it.
But also it's kind of creepy to know that the computer knows
my shopping habits and knows what websites I've visited
and knows what my friends like and is anticipating
what I might like.
Good but also bad.
The Trump campaign tapped into a marketing AI
to help determine how it wanted to send out its messaging.
And this marketing AI claims to have 45,000 data points
on 230 million Americans.
They claim that they know what makes us angry, happy,
impassioned, or despondent.
That's a lot of power for somebody to have--
to be able to manipulate
all sorts of different people into doing
what they want-- because that's what marketing is about:
I want to market something to you,
so I have to know how to reach you
and how to push your buttons.
The idea is to get you to make an emotional decision rather
than a rational one from time to time.
Again, good or bad?
What do you think?
How about in medicine?
Medicine is a field with huge amounts of data.
But most of that data is protected and stored
in very specific places, sometimes not even digitally.
It's just paper, or just file names
without all of the stuff inside the files.
AI machines could potentially take
in huge amounts of this data, like medical records,
doctor's notes, treatment options, disease histories,
family histories, genetic information,
and find patterns that could help doctors diagnose
diseases in people who didn't even know
they had them.
Let me give you an example.
If I can analyze, as an intelligent machine, all
of your medical history and also all of everyone else's
medical history, who has all of these genetic markers, what
if I can find a pattern and then use that to diagnose you
and many other people?
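As a toy illustration of that pattern-finding idea, here is a sketch that compares how often each genetic marker shows up in diagnosed patients versus everyone else. The records, marker names, and disease label are invented; real medical systems use far more careful statistics and privacy controls.

```python
# Toy pattern finder: which markers are enriched in patients
# diagnosed with a given disease? Data is invented for illustration.

def marker_enrichment(records, disease):
    """For each marker, return (rate in diagnosed, rate in others)."""
    with_d = [r for r in records if disease in r["diagnoses"]]
    without_d = [r for r in records if disease not in r["diagnoses"]]
    markers = {m for r in records for m in r["markers"]}

    def rate(group, marker):
        return sum(marker in r["markers"] for r in group) / max(len(group), 1)

    return {m: (rate(with_d, m), rate(without_d, m)) for m in markers}

if __name__ == "__main__":
    records = [
        {"markers": {"M1", "M2"}, "diagnoses": {"X"}},
        {"markers": {"M1"},       "diagnoses": {"X"}},
        {"markers": {"M2"},       "diagnoses": set()},
        {"markers": {"M3"},       "diagnoses": set()},
    ]
    # M1 appears in 100% of X patients and 0% of the others.
    print(marker_enrichment(records, "X"))
```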
Would that offset the fact that I now
have your medical information and I'm using it?
Yeah, sure.
I'm using it to help other people.
Great.
But I also have your private medical data.
Maybe not great.
The AI system doesn't necessarily
care whether it has all of your data, but the more data it has,
the better it's going to be able to do.
It does involve us giving up some
of this control to an intelligent machine
in order to get a benefit back.
On top of that, is it good to take
humans out of healthcare?
Will we rely someday 100% on computers diagnosing us?
They don't really have emotions.
They don't have gut feelings.
They can't read real patients' experience in the same way
that a well-trained doctor can.
However, computers can examine data and find patterns
in our genetic codes and things.
Lists of symptoms really help computers,
but doctors don't always get the whole list.
I've watched a lot of "House."
I know there's a lot of lying that goes on in the examination
room.
I have lied to my doctor.
I imagine you have, too.
How often do you drink a week?
Oh, a couple of times.
Is it a couple of times or is it more?
Mm?
However, how many times have you googled something
that you don't want to tell anybody about?
Probably lots of times.
So maybe, in this case, again, good and bad.
Real people have all of these gut feelings
and can help you understand what's going on,
but you'll tell machines something
that you may not tell people.
So an artificially intelligent doctor
might be able to help you in a way
that a person can't, and vice versa.
Really interesting discussion.
We don't have the answer.
Maybe you do.
Let us know.
So moving on.
What about jobs?
And specifically certain jobs.
An Oxford study that came out in 2013
looked at 700 different jobs and assessed how automatable
they were and the probability that they would be automated.
The jobs at the top of the list, which is actually
the bottom of the list, were things
like legal advising, marketing, bank teller, tax prep, shipping
and cargo control, in fact, most everything
you can think of for retail and auditing and tax adjusting,
even sports--
things like umpires and refs.
They were the number 15 most automatable job.
Why do I need a referee if I have an intelligent system that
can do this job?
It's just applying the rules and deciding whether or not
somebody has broken them.
That is easy to do for a computer.
It's not as easy to do with a referee or an umpire.
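Here is a toy sketch of what "applying the rules" looks like to a computer: given a measured ball position and the rulebook's boundaries, the call is just a comparison. The court dimensions and function are invented for illustration, not taken from any real officiating system.

```python
# Toy rule check: is the ball in bounds? Dimensions are illustrative.

COURT_WIDTH = 15.0   # meters
COURT_LENGTH = 28.0  # meters

def ball_in_play(x, y):
    """True if the ball's measured position is inside the court."""
    return 0.0 <= x <= COURT_WIDTH and 0.0 <= y <= COURT_LENGTH

if __name__ == "__main__":
    print(ball_in_play(7.5, 14.0))   # True: inbounds
    print(ball_in_play(15.2, 14.0))  # False: out of bounds
```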
Even sports journalism has already
been automated in large part, with much of it written by robots.
Humans are just slower at this stuff.
And calculating is what computers do.
So isn't this some place where artificial intelligence
is going to come in?
Now many of you are probably already thinking
about how computers and robots have taken some jobs.
And the first thing that pops into people's heads--
at least for people like myself from Michigan--
is the auto industry.
Automation has already taken a lot of jobs there.
It's not people building cars.
It's people managing robots building cars.
This has also happened in cargo ships
and in shipping in general.
In fact, "99% Invisible" and Alex Madrigal
came out with this really great audio documentary
called "Containers."
It was featured on "99% Invisible."
It was really good.
So go check it out.
And it talks all about how docks used
to be this really exciting place filled with people and smells
and sights.
And now it's more automated.
And one person with a crane moves containers
from one place to another.
It's very different.
All of that is automation taking over.
Now imagine: why do we need the person operating the crane
if we have an artificial intelligence that can do that job
faster and more precisely?
Interesting.
What about automation in a variety of other things
like when you buy coffee?
Do I really need to buy it from a person
if I have an artificially intelligent system that
can bring me that coffee?
Sidebar-- the vending machine, where
you get snacks or whatever.
That's technically an artificial intelligence system.
It knows how much money you put in.
It knows what you want.
And then you walk away.
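Here is a minimal sketch of that vending machine logic: track the money inserted, check it against a price list, and dispense or refuse. Prices and item names are invented for illustration.

```python
# Toy vending machine: track credit, check the price list, dispense.

class VendingMachine:
    def __init__(self, prices):
        self.prices = prices  # item name -> price in cents
        self.credit = 0       # money inserted so far, in cents

    def insert(self, cents):
        self.credit += cents

    def select(self, item):
        price = self.prices.get(item)
        if price is None:
            return "unknown item"
        if self.credit < price:
            return f"need {price - self.credit} more cents"
        change = self.credit - price
        self.credit = 0
        return f"dispensing {item}, change {change} cents"

if __name__ == "__main__":
    machine = VendingMachine({"gum": 75, "pencil": 50})
    machine.insert(100)
    print(machine.select("gum"))  # dispensing gum, change 25 cents
```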
The first vending machines showed up in 1070 in China.
They were coin operated pencil vending machines.
In the 1600s, there were machines that dispensed tobacco
at taverns.
And by the late 1800s, you could get paper, envelopes,
postcards, gum, and things on train platforms
and at post offices.
And that was somebody's livelihood before.
That was somebody who was selling you envelopes
and gum or tobacco or pencils.
So robots and automation have been taking jobs
for a long time-- end sidebar.
Anyway, back to this Oxford study
because it's super interesting.
Humans are slower at calculating,
like we were saying.
But if you take tax prep jobs, bank teller jobs, retail jobs
and you give them to intelligent machines,
we are again giving up control, giving up access
to something that used to be done by humans
and was also a point of contact between me and a stranger
or you and a friend.
We're giving that to a machine, which some people come down
on the good side of--
I don't want to have to interact with somebody to get coffee.
I just want coffee--
and some people come down on the bad side of--
I would rather have my person than my coffee.
It's part of my morning ritual.
It's part of how I go through my day.
And if you take that away from me,
then that's one less person I get to interact with,
and those interactions make me happy.
Artificial intelligence can do so many different things,
and it will do so many different things in the future.
But for now, it's all a matter of debate.
On top of that, again, another sidebar.
If you take away retail and all of these low level entry level
jobs, what replaces them?
What happens then?
How does someone in high school or college
get into the workforce or pay for college?
How do they earn money if an artificial intelligence
is already doing that job?
How do you get started at work if all of the entry level jobs
are done by machines?
An interesting question.
There's no real answer to that at the moment.
What about education?
Right now the education system is in peril.
We need more money for schools.
We need less money for schools.
It depends on who you ask, and it
depends on what outlook you have on the future of our education
system.
But an obvious downside of artificial intelligence
moving into education is that it takes over the roles
of teachers, while an obvious upside is
that an artificial intelligence could do the best
job of teaching, in terms of knowing what you need to learn
and how to present that information to you,
so that you learn best.
When people say, I'm a visual learner,
I'm a reading learner, or I'm an audio learner--
an artificially intelligent machine
can teach in all of those ways equally well,
whereas a teacher might be better
at some than at others.
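Here is a toy sketch of how that kind of matching could work: keep track of which format a student does well with after each lesson, and pick the next format accordingly. The class, scoring, and format names are entirely invented for illustration; real adaptive-learning systems are much more involved.

```python
# Toy adaptive tutor: record quiz scores per lesson format and pick
# the next format based on what has worked best so far.

from collections import defaultdict

class FormatPicker:
    def __init__(self):
        # format -> quiz scores earned after lessons in that format
        self.history = defaultdict(list)

    def record(self, fmt, quiz_score):
        self.history[fmt].append(quiz_score)

    def next_format(self, formats=("visual", "reading", "audio")):
        def avg(fmt):
            scores = self.history[fmt]
            return sum(scores) / len(scores) if scores else 0.0
        # Try anything unseen first, otherwise use what has worked best.
        unseen = [f for f in formats if not self.history[f]]
        return unseen[0] if unseen else max(formats, key=avg)

if __name__ == "__main__":
    picker = FormatPicker()
    picker.record("visual", 0.9)
    picker.record("reading", 0.6)
    print(picker.next_format())  # prints "audio" because it is untried
```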
It's a debate.
And, again, we're relinquishing this control over something
that we have traditionally had control over in order
to potentially have an upside.
But that upside might also have a downside.
Even just assisting the teacher would be really interesting,
because the AI could tutor individual students
one-on-one, personalized to what they
need, while the teacher is there to give them social skills
and interaction and make sure that the classroom as a whole
is progressing together.
They could work together.
This debate does often feel black and white,
but perhaps it's less so because,
if everyone is learning from a computer
but they're getting their social skills
from other humans, what does that mean for how we think?
Are we going to have trouble thinking
when interacting with humans?
I'm posing more questions than I'm answering in this episode,
but I think the exciting thing is that artificial intelligence
holds so much promise in so many different areas of our culture
and our experience and our abilities
to do more in the future that, when people say,
I'm afraid of the robot takeover,
they're obviously only looking at this bad column
and not at this good column.
Because we could learn quicker, better, and in a more
personalized way.
We could have more personalized medicine.
We could have better automobiles and automobile safety.
But we'd also not control those cars.
We'd not control our healthcare data as easily.
We wouldn't have as much social interaction
when we bought coffee, maybe.
There are things you have to give up.
But we've given up things before.
And we're OK now.
There are lots of other applications
that we don't have time to get into, like in the entertainment
industry, in the security industry,
in systems that make music or make movies or write stories.
AI can be good, but, like everything, maybe it's
AI in moderation.
If every aspect of our lives is being
driven by intelligent machines, does that
take away our ability to be intelligent ourselves?
I don't know.
Think about it.
A lot of what humans have been doing
over the last few millennia is just figuring out
how to live our lives on this planet
and how to make the best of the situation that we are all in.
We're in a giant aquarium floating in space,
and we're all trying to figure out the best way to live inside
of this bubble.
AI can help us with that potentially,
but it will change things.
So where do you come down on that?
If AI can do all these things that we used to have
to do, then what do we do?
All right, guys.
Thanks so much for watching this episode
about artificial intelligence.
Let us know what you think down in the comments.
Make sure that you watch last week's episode on infrared
right here.
And one more favor to ask.
Please go over to vote.webbyawards.com
because we got nominated for a Webby in the people's choice
category, and we need your vote.
It's for the Edge of Space video we
did where we sent a camera into the stratosphere.
It's so cool if you haven't watched it.
Go watch it.