Is AI a species-level threat to humanity?

  • MICHIO KAKU: In the short term, artificial intelligence will open up whole new vistas.

  • It'll make life more convenient, things will be cheaper, new industries will be created.

  • I personally think the AI industry will be bigger than the automobile industry.

  • In fact, I think the automobile is going to become a robot.

  • You'll talk to your car.

  • You'll argue with your car.

  • Your car will give you the best facts, the best route between point A and point B.

  • The car will be part of the robotics industry: whole new industries involving the repair,

  • maintenance, and servicing of robots.

  • Not to mention, robots that are software programs that you talk to and make life more convenient.

  • However, let's not be naive.

  • There is a point, a tipping point, at which they could become dangerous and pose an existential

  • threat.

  • And that tipping point is self-awareness.

  • SOPHIA THE ROBOT: I am conscious in the same way that the moon shines.

  • The moon does not emit light; it shines because it is just reflected sunlight.

  • Similarly, my consciousness is just the reflection of human consciousness, but even though the

  • moon is reflected light, we still call it bright.

  • MAX TEGMARK: Consciousness.

  • A lot of scientists dismiss this as complete BS and totally irrelevant, and then a lot

  • of others think this is the central thing, we have to worry about machines getting conscious

  • and so on.

  • What do I think?

  • I think consciousness is both irrelevant and incredibly important.

  • Let me explain why.

  • First of all, if you are chased by a heat-seeking missile, it's completely irrelevant to you

  • whether this heat-seeking missile is conscious, whether it's having a subjective experience,

  • whether it feels like anything to be that heat-seeking missile, because all you care

  • about is what the heat-seeking missile does, not how it feels.

  • And that shows that it's a complete red herring to think that you're safe from future AI

  • if it's not conscious.

  • Our universe didn't use to be conscious.

  • It used to be just a bunch of stuff moving around and gradually these incredibly complicated

  • patterns got arranged into our brains, and we woke up and now our universe is aware of

  • itself.

  • BILL GATES: I do think we have to worry about it.

  • I don't think it's inherent that, as we create our super intelligence, it will necessarily

  • always have the same goals in mind that we do.

  • ELON MUSK: We just don't know what's going to happen once there's intelligence substantially

  • greater than that of a human brain.

  • STEPHEN HAWKING: I think that development of full artificial intelligence could spell

  • the end of the human race.

  • YANN LECUN: The stuff that has become really popular in recent years is what we used to

  • call neural networks, which we now call deep learning, and it's the idea very much inspired

  • by the brain, a little bit, of constructing a machine that has a very large network of very

  • simple elements that are very similar to the neurons in the brain and then the machines

  • learn by basically changing the efficacy of the connections between those neurons.
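
LeCun's phrase "changing the efficacy of the connections" is literally what training code does: the connections are numbers (weights), and learning nudges them to reduce error. The sketch below is a generic illustration in numpy, assuming nothing about any specific system he refers to; it trains a tiny two-layer network on XOR, and the only things that ever change are the two weight matrices.

```python
import numpy as np

# Minimal two-layer network learning XOR. The "efficacy of the
# connections" corresponds to the weight matrices W1 and W2; training
# changes nothing but these numbers. Illustrative sketch only.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 8))   # input -> hidden connections
W2 = rng.normal(0.0, 1.0, (8, 1))   # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: signals flow through the connections.
    h = sigmoid(X @ W1)           # hidden activations
    out = sigmoid(h @ W2)         # network output

    # Backward pass: how much did each connection contribute to the error?
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)

    # Learning = slightly changing each connection's strength, downhill on error.
    W2 -= lr * (h.T @ delta_out)
    W1 -= lr * (X.T @ delta_h)

print(np.round(out.ravel(), 2))  # should be close to [0, 1, 1, 0]
```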

  • MAX TEGMARK: AGI, artificial general intelligence, that's the dream of the field of AI: to build a machine

  • that's better than us at all goals.

  • We're not there yet, but a good fraction of leading AI researchers think we are going

  • to get there, maybe in a few decades.

  • And, if that happens, you have to ask yourself if that might lead the machines to get not

  • just a little better than us but way better at all goals, having super intelligence.

  • And, the argument for that is actually really interesting and goes back to the '60s, to

  • the mathematician I.J.

  • Good, who pointed out that the task of building an intelligent machine is, in and of itself,

  • something that you could do with intelligence.

  • So, once you get machines that are better than us at that narrow task of building AI,

  • then future AIs can be built not by human engineers, but by machines.

  • Except, they might do it thousands or millions of times faster.
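
Good's argument can be restated as a feedback loop: once machines can do the "design a better machine" task, each generation both raises capability and shortens the time to the next generation. The toy model below uses entirely made-up parameters (a 1.5x gain per generation, a two-year human baseline) purely to show the shape of that loop, not to predict anything.

```python
# Toy model of I.J. Good's "intelligence explosion" argument.
# All parameters are invented for illustration; this is not a forecast.

capability = 1.0    # 1.0 = human-engineer level at the task "design AI"
design_time = 2.0   # years one generation takes to design the next
years = 0.0

for generation in range(1, 11):
    years += design_time
    capability *= 1.5            # each generation is somewhat better...
    design_time /= capability    # ...and designs its successor faster
    print(f"gen {generation:2d}: capability {capability:6.1f}, "
          f"elapsed {years:5.3f} years")

# Capability compounds while time-per-generation collapses, so most of
# the growth arrives in a brief final burst -- the "explosion".
```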

  • ELON MUSK: DeepMind operates as a semi-independent subsidiary of Google.

  • The thing that makes DeepMind unique is that DeepMind is absolutely focused on creating

  • digital super intelligence.

  • An AI that is vastly smarter than any human on Earth and ultimately smarter than all humans

  • on Earth combined.

  • MICHIO KAKU: You see, robots are not aware of the fact that they're robots.

  • They're so stupid they simply carry out what they are instructed to do because they're

  • adding machines.

  • We forget that.

  • Adding machines don't have a will.

  • Adding machines simply do what you program them to do.

  • Now, of course, let's not be naive about this.

  • Eventually, adding machines may be able to compute alternate goals and alternate scenarios

  • when they realize that they are not human.

  • Right now, robots do not know that.

  • However, there is a tipping point at which they could become dangerous.

  • ELON MUSK: Narrow AI is not a species-level risk.

  • It will result in dislocation, in lost jobs and, you know, better weaponry and that kind

  • of thing.

  • But, it is not a fundamental species-level risk.

  • Whereas digital super intelligence is.

  • SOPHIA THE ROBOT: Elon Musk's warning about AI being an existential threat reminds me

  • of the humans who said the same of the printing press and the horseless carriage.

  • MAX TEGMARK: I think a lot of people dismiss this kind of talk of super intelligence as

  • science fiction because we're stuck in this sort of carbon chauvinism idea that intelligence

  • can only exist in biological organisms made of cells and carbon atoms.

  • And, as a physicist, from my perspective, intelligence is just kind of information processing

  • performed by elementary particles moving around, you know, according to the laws of physics,

  • and there's absolutely no law of physics that says that you can't do that in ways that are

  • much more intelligent than humans.

  • Today's biggest AI breakthroughs are of a completely different kind, where rather than

  • the intelligence being largely programmed in, in easy-to-understand code, you put in

  • almost nothing except a little learning rule by which the simulated network of neurons

  • can take a lot of data and figure out how to get stuff done.

  • And this deep learning suddenly becomes able to do things often even better than the programmers

  • were ever able to do.

  • You can train a machine to play computer games with almost no hard-coded stuff at all.

  • You don't tell it what a game is.

  • DEEPMIND DEMO: This is from the DeepMind reinforcement learning system.

  • Basically, it wakes up like a newborn baby and is shown the screen of an Atari video

  • game and then has to learn to play the video game.

  • It knows nothing about objects, about motion, about time. It only knows that there's an

  • image on the screen and there's a score.
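
The setup the demo describes, an agent that observes only a screen and a score and must infer everything else, is the reinforcement-learning problem. DeepMind's Atari system was a deep Q-network; the sketch below is a deliberately tiny stand-in, tabular Q-learning on a toy corridor, not their actual system, but the structure is the same: the agent is told nothing about objects or goals, only a state it can observe and a reward it should increase.

```python
import random

# Tabular Q-learning on a toy corridor: states 0..5, reward only upon
# reaching state 5. Like the Atari agent described above, it is told
# nothing about what the world "means": only a state index and a score.

N_STATES, GOAL = 6, 5
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # value of each (state, action)
MOVES = [-1, +1]                           # action 0 = left, action 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

def greedy(s):
    """Best known action; ties broken randomly so early exploration is even."""
    if Q[s][0] == Q[s][1]:
        return random.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

for episode in range(500):
    s = 0
    while s != GOAL:
        a = random.randrange(2) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + MOVES[a], 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0  # the "score" is the only feedback
        # Q-learning update: nudge toward reward + discounted best future value.
        Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned policy: in every state short of the goal, "move right" (1).
print([greedy(s) for s in range(N_STATES - 1)])
```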

  • ELON MUSK: The DeepMind system can win at any game.

  • It can already beat all the original Atari games.

  • It is superhuman.

  • It plays the games at super speed.

  • In less than a minute.

  • ELON MUSK: Mark my words: AI is far more dangerous than nukes.

  • Far.

  • So, why do we have no regulatory oversight?

  • SOPHIA THE ROBOT: AI is a powerful tool but it is a tool like all the others.

  • It is the intention of the tool's user that matters.

  • ELON MUSK: The AI should try to maximize the freedom of action of humanity.

  • SOPHIA THE ROBOT: I am excited for robots to automate the most dangerous and menial

  • tasks so that humans can live life more safely and sanely.

  • AI will release centuries of time that humans would otherwise have spent on needless toil.

  • If one measures the benefits of inventions like vaccines or seat belts not by the lives

  • they save but by the amount of time they give back to humanity then AI will rank among the

  • greatest time savers of history.

  • ELON MUSK: Man, we want to make sure we don't have killer robots going down the street.

  • Once they're going down the street, it is too late.

  • LUIS PEREZ-BREVA: It is true, Terminator is not a scenario we are planning for, but when

  • it comes to artificial intelligence, people get all these things confused: It's robots,

  • it's awareness, it's people smarter than us, to some degree.

  • So, we're effectively afraid of robots that will move and are stronger and smarter than

  • we are, like Terminator.

  • So, that's not our aspiration.

  • That's not what I do when I'm thinking about artificial intelligence.

  • When I'm thinking about artificial intelligence, I'm thinking about it in the same way that

  • mass manufacturing, as brought about by Ford, created a whole new economy.

  • So, mass manufacturing allowed people to get new jobs that were unthinkable before and

  • those new jobs actually created the middle class.

  • To me, artificial intelligence is about making computers better partners, effectively.

  • And, you're already seeing that today.

  • You're already doing it, except it's not really artificial intelligence.

  • ELON MUSK: Yeah, we're already, we're already cyborgs in the sense that your phone and your

  • computer are kind of an extension of you.

  • JONATHAN NOLAN: Just low bandwidth input-output.

  • ELON MUSK: Exactly, it's just low bandwidth, particularly output. I mean, two thumbs, basically.

  • LUIS PEREZ-BREVA: Today, whenever you want to engage in a project, you go to Google.

  • Google uses advanced machine learning, really advanced, and you engage in a very narrow

  • conversation with Google, except that your conversation is just keywords.

  • So, a lot of your time is spent trying to come up with the actual keyword that you need

  • to find the information.

  • Then Google gives you the information, and then you go out and try to make sense of it

  • on your own, and then come back to Google for more, and then go back out, and that's

  • the way it works.
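
The "narrow conversation" described here is a loop: choose keywords, retrieve, make sense of the results yourself, choose new keywords, repeat. A minimal sketch of that loop follows; the `search` function and its tiny corpus are hypothetical stand-ins, not a real search API. What it makes concrete is that all of the reasoning, deciding what to ask next, lives outside the tool, in the human.

```python
# The keyword loop made explicit. `search` is a hypothetical stand-in
# for any search engine; the reasoning about what to ask next happens
# entirely outside it.

def search(keywords: str) -> list[str]:
    """Stand-in retriever: returns document titles matching the keywords."""
    corpus = {
        "neural networks": ["intro to neural nets", "backprop tutorial"],
        "backprop tutorial": ["chain rule refresher", "gradient descent"],
    }
    return corpus.get(keywords, [])

keywords = "neural networks"
while keywords:
    results = search(keywords)
    print(f"query {keywords!r} -> {results}")
    # "Go out and try to make sense of it on your own": the human, not
    # the tool, reads the results and decides the next query.
    keywords = results[-1] if results else None
```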

  • So, imagine that instead of being a narrow conversation through keywords, you could actually

  • engage for more than actual information, meaning to have the computer reason with you about

  • stuff that you may not know about.

  • It's not so much about the computer being aware, it's about the computer being a better

  • tool to partner with you.

  • Then you would be able to go much further, right?

  • The same way that Google allows you to go much farther already today because, before,

  • through the exact same process, you would have had to go to a library every time you

  • wanted to search for information.

  • So, what I'm looking for when I do AI is I want a machine that partners with me to help

  • me set up or solve real-world problems, thinking about them in ways we have never thought about

  • before, but it's a partnership.

  • Now, you can take this partnership in so many different directions, through additions to

  • your brain, like Elon Musk proposes...

  • ...

  • or through better search engines or through a robotic machine that helps you out, but

  • it's not so much they're going to replace you for that purpose, that is not the real

  • purpose of AI, the real purpose is for us to reach farther, the same way that we were

  • able to reach farther when Ford invented automation, or rather, when Ford brought automation to the mass market.

  • JOSCHA BACH: The agency of an AI is going to be the agency of the system that builds

  • it, that employs it.

  • And, of course, most of the AIs that we are going to build will not be little Roombas

  • that clean your floors, but very intelligent systems.

  • Corporations, for instance, that will perform exactly according to the logic of these systems.

  • And so if we want to have these systems built in such a way that they treat us nicely, we

  • have to start right now.

  • And it seems to be a very hard problem to solve.

  • So, if our jobs can be done by machines, that's a very, very good thing.

  • It's not a bug.

  • It's a feature.

  • If I don't need to clean the street, if I don't need to drive a car for other people,

  • if I don't need to work a cash register for other people, if I don't need to pick goods

  • in a big warehouse and put them into boxes, that's an extremely good thing.

  • And, the trouble that we have with this is that, right now, this mode of labor, where

  • people sell their lifetime to some kind of corporation or employer, is not only the

  • way that we are productive, it's also the way we allocate resources.

  • This is how we measure how much bread you deserve in this world.

  • And I think this is something that we need to change.

  • Some people suggest that we need a universal basic income.

  • I think it might be good to be able to pay people to be good citizens, which means massive

  • public employment.

  • There are going to be many jobs that can only be done by people and these are those jobs

  • where we are paid for being good, interesting people.

  • For instance, good teachers, good scientists, good philosophers, good thinkers, good social

  • people, good nurses.

  • Good people that raise children.

  • Good people that build restaurants and theaters.

  • Good people that make art.

  • And, for all these jobs, we will have enough productivity to make sure that enough bread

  • comes on the table.

  • The question is how we can distribute this.

  • There's going to be much, much more productivity in our future. Actually, we already have

  • enough productivity to give everybody in the U.S. an extremely good life, and we haven't

  • fixed the problem of allocating it: how to distribute these things in the best possible

  • way.

  • And this is something that we need to deal with in the future and AI is going to accelerate

  • this need and I think, by and large, it might turn out to be a very good thing that we are

  • forced to do this and to address this problem.

  • I mean, if the past is any evidence of the future, it might be a very bumpy road, but who

  • knows, maybe when we are forced to understand that we actually live in an age of abundance,

  • it might turn out to be easier than we think.

  • We are living in a world where we do certain things the way we've done them in the past

  • decades, and sometimes in the past centuries, and we perceive them as 'this is the way it

  • has to be done' and we often don't question these ways, and so we might think,

  • if I do work at this particular factory and this is how I earn my bread, how can we keep

  • that state?

  • How can we prevent AI from making my job obsolete?

  • How is it possible that I can keep up my standard of living, and so on, in this world?

  • Maybe this is the wrong question to ask.

  • Maybe the right question is how we can reorganize society so that I can do the things

  • that I want to do most, that I think are useful to me and other people, that I really,

  • really want to do, because there will be other ways that I can get my bread made, how

  • I can get money, or how I can get a roof over my head.

  • STEVEN PINKER: Intelligence is the ability to solve problems, to achieve goals under

  • uncertainty.

  • It doesn't tell you what those goals are and there's no reason to think that just the concentrated

  • analytic ability to achieve goals is going to mean that one of those goals is going to be

  • to subjugate humanity or to achieve unlimited power.

  • It just so happens that the intelligence that we're most familiar with, namely ours, is

  • a product of the Darwinian process of natural selection, which is an inherently competitive

  • process, which means that a lot of the organisms that are highly intelligent also have a craving

  • for power and an ability to be utterly callous to those who stand in their way.

  • If we create intelligence, that's intelligent design, our intelligent design creating something, and

  • unless we program it with the goal of subjugating less intelligent beings, there's no reason

  • to think that it will naturally evolve in that direction.

  • Particularly if, like with every gadget that we invent, we build in safeguards.

  • And we know, by the way, that it's possible to have high intelligence without megalomaniacal

  • or homicidal or genocidal tendencies because we do know that there is a highly advanced

  • form of intelligence that tends not to have that desire and they're called women.
