A.I. Is Making it Easier to Kill (You). Here’s How. | NYT

  • I love that I can unlock my phone with my face,

  • and that Google can predict what I’m thinking.

  • And that Amazon knows exactly what I need.

  • It’s great that I don’t have to hail a cab

  • or go to the grocery store.

  • Actually, I hope I never have to drive again or navigate

  • or use cash or clean or cook or work or learn.

  • But what if all this technology

  • was trying to kill me?

  • The same technology that is making your life easier

  • is being weaponized.

  • That feature that unlocks your phone with your face,

  • here it is attached to a self-learning machine gun.

  • Its manufacturer, Kalashnikov,

  • made this video to show the gun using object-recognition

  • software to identify targets.

  • They say it gets more accurate the more you use it.

  • That drone advertised to get awesome snowboarding shots,

  • here’s one that doesn't require a pilot.

  • This ad shows it with a high-explosive warhead.

  • It hangs out in the sky, until it

  • finds an enemy radar system, then

  • crashes headfirst into it.

  • Oh, and that driverless car you thought was so cool,

  • well, here it is in tank form at a Russian arms fair.

  • It’s called the T-14.

  • Dmitry, here, says he sells them

  • to the Russian government.

  • That contract is part of a trend that’s changing

  • the way wars are waged.

  • Like all good stories, this one

  • starts at a Russian arms fair.

  • We’re a few hours outside of Moscow.

  • Everyone from government officials to gun enthusiasts

  • has come here to see the latest weapons.

  • It’s a family affair.

  • Buyers want to know how the 21st-century technology

  • boom can give their armies a strategic advantage.

  • They want to know: Can technology make war safer?

  • But some fear giving weapons too much power

  • because it brings us closer to machines that could

  • go out and kill on their own.

  • They say, we might not be able to control weapons

  • like these, weapons loaded with artificial intelligence.

  • “So artificial intelligence is a study

  • of how to make machines behave intelligently,

  • which means acting in a way that

  • will achieve the objectives that they’ve been given.

  • And recently, I’ve become concerned about the use of A.I.

  • to kill people.”

  • Stuart Russell.

  • He was an early pioneer in artificial intelligence.

  • He’s also been warning people about its potential danger

  • for years.

  • “So a killer robot is something

  • that locates, selects and attacks human targets.”

  • Stuart isn’t so worried about robots like this.

  • “We’re still pretty far from the ‘Terminator.’”

  • But Stuart says we’re not as far from something

  • like this bee-sized drone.

  • He imagined one, and made a movie that he hopes

  • will freak you out.

  • In Stuart’s movie, we see swarms

  • of them armed with explosives set loose on their targets.

  • “The main issue is you’re creating a class of weapons

  • of mass destruction, which can kill millions of people, just

  • like a nuclear weapon.

  • But in fact, it’s much easier to build,

  • much cheaper, much more scalable,

  • in that you can use 1 or 10 or 100 or 1,000 or 10,000.

  • Whereas with a nuclear weapon, it’s sort of all or nothing.

  • It doesn’t destroy the city and the country

  • that you’re attacking.

  • It just kills all the people you want to kill,

  • all males between 12 and 60 or all males wearing

  • a yarmulke in Israel.”

  • The weapon Stuart is describing is terrifying,

  • if it works perfectly.

  • With the current state of tech,

  • many experts say it wouldn’t, but that

  • could be even scarier.

  • “The way we think about A.I. is we build a machine

  • and we put the objective into the machine.

  • And the machine pursues the objective.

  • So you put in the objective of ‘find a cure for cancer

  • as quickly as possible.’

  • Sounds great, right?

  • O.K. Well, probably the fastest way to do that

  • is to induce tumors in the entire human population,

  • and then try millions

  • of different treatments simultaneously.

  • Then, that’s the quickest way to find a cure.

  • That’s not what you meant, but that’s what you asked for.

  • So we call this the King Midas Problem.

  • King Midas said, ‘I want everything

  • I touch to turn to gold.’

  • And he got his wish.

  • And then, his food turned to gold,

  • and his drink turned to gold and his family turned

  • to gold.

  • He died in misery and starvation.

  • You know, this is a very old story.

  • We are unable to correctly specify the objective.”

  • Machines will always be limited by the minds

  • of those who made them.

  • We aren’t perfect.

  • And neither is our A.I.

  • Facial recognition software has had trouble

  • with dark skin.

  • Self-driving vehicles still need good weather and calm

  • streets to work safely.

  • We don’t know how long it will take for researchers

  • to create weapons with that kind of flexibility.

  • But behind closed doors, defense labs

  • are working on it and they’re not working alone.

  • “Militaries don’t have to invent A.I.

  • It’s already being built; it’s being

  • driven by major tech companies out

  • in the commercial sector.”

  • Paul Scharre, here, led a Department of Defense

  • working group that helped establish

  • D.O.D. policies on A.I. and weapons systems

  • for the U.S. military.

  • “The reality is all of the technology

  • to put this together, to build weapons that

  • can go out on the road, make their own decisions to kill

  • human beings, exists today.”

  • But it’s one thing to assemble a weapon in a lab, and another

  • to have it work in any environment.

  • And war is messy.

  • “Machines are not really at a point

  • today where they’re capable of flexibly

  • adapting to novel situations.

  • And that’s a major vulnerability in war.”

  • Governments around the world see potential advantages

  • in these weapons.

  • After all, human soldiers: they get tired, emotional.

  • They miss targets.

  • Humans get traumatized.

  • Machines do not.

  • They can react at machine speed.

  • If a missile was coming at you,

  • how quickly would you want to know?

  • Autonomous weapons could save lives.

  • “The same technology that will help self-driving cars avoid

  • pedestrians could be used to target civilians or avoid

  • them, intentionally.”

  • The problem is we’ve gotten this wrong before.

  • “To really understand the trend of automation

  • in weapons, which has been growing for decades,

  • you have to go all the way back to the American Civil War,

  • to the Gatling Gun.

  • How do I describe a Gatling Gun?

  • Do I have to describe it?

  • Could you guys show a picture of it?

  • Richard Gatling was looking at all

  • of the horrors that were coming back

  • from the Civil War.

  • And he wanted to find a way to make war more humane,

  • to reduce the number of people that are

  • needed on the battlefield.

  • Wouldn’t that be amazing?”

  • Four people operating Gatling’s gun

  • could fire the equivalent of 100 soldiers.

  • Far fewer people would be needed on the battlefield.

  • It was the precursor to the machine gun.

  • And it was born with the intention to save lives,

  • at least for the army that had the gun.

  • Of course

  • The reality was far, far different.

  • “Gatling’s invention had the very opposite effect

  • of what he intended.

  • And then it magnified the killing and destruction

  • on the battlefield, by orders of magnitude.”

  • Gatling was wrong.

  • Automating weapons didn’t save lives.

  • And Dmitry, here, is saying something eerily familiar

  • over 150 years later.

  • And it wasn’t just Gatling.

  • “Revolutions of warfare have typically not gone well.

  • Before we ever developed usable biological weapons,

  • the biologists said, ‘Stop doing this.’”

  • “All civilized countries today have given up

  • chemical weapons as tools of warfare,

  • but we see that they are still used by some rogue nations.”

  • And then, there are nuclear weapons.

  • Even with multiple treaties in place to police their use,

  • the threat of nuclear obliteration

  • remains a global anxiety.

  • “Now, I am become death, a destroyer of worlds.”

  • “Early in the war in Afghanistan,

  • I was part of a Ranger sniper team

  • that was sent out to the Afghanistan-Pakistan border

  • to watch infiltration routes for foreign fighters coming

  • across the border.

  • We drove all night, and then began

  • to hike up a steep rocky mountain

  • under cover of darkness.

  • From our position on the ridgeline,

  • we could see for dozens of miles in every direction.

  • And by the time the sun came up,

  • we looked down at this compound beneath us.

  • We were basically in someone’s backyard.

  • We were certain that people would be coming

  • to take a closer look at us.

  • What I didn’t anticipate was that they sent a little girl

  • to scout out our position.

  • She wasn’t particularly sneaky, to be honest.

  • She was reporting back our position,

  • and probably how many of us there were.

  • We watched her and she watched us.

  • And then, she left.

  • And pretty soon after, the Taliban fighters came.

  • The gunfight that ensued brought out

  • the whole village.

  • And we knew that many, many more fighters

  • would be coming before long.

  • So we had to leave that position

  • as we were compromised.

  • Later on in the day, we talked about

  • what would we do in a similar situation to that?

  • You know, one of the things that never came up

  • was the idea of shooting this little girl.

  • But here’s the thing: She was a valid enemy combatant,

  • and killing her would’ve been lawful.

  • So if someone deployed an autonomous weapon,

  • a robot that was designed to perfectly follow

  • the laws of war, it would’ve killed this little girl

  • in that situation.

  • Now, I think that would’ve been wrong, maybe not

  • legally, but morally.

  • But how would a robot know the difference between what’s

  • legal and what’s right?”

  • With so much at stake, you’d think

  • a debate would be happening.

  • Well, there is.

  • It’s just that technology moves at a different pace

  • than diplomacy.

  • “We will continue our discussion on Agenda Item 6A,

  • characterisation of the systems

  • under consideration in order to promote

  • a common understanding on concepts and characteristics

  • relevant to the objectives and purposes of the convention.”

  • “One of the things I learned very quickly

  • was that the official proceedings at the

  • United Nations appear to be completely meaningless.”

  • "Thank you, Mr. Chairperson —”

  • “Support continued deliberations —”

  • “We need a normative framework —”

  • “Difference in interpretation —”

  • “The importance of a multi-disciplinary —”

  • “Down the rabbit hole of endless discussions

  • on a subject of —”

  • “Thank you, Mr. President.

  • We are not in a position to make a declaration right now.”

  • “Good morning.”

  • “How are you?”

  • “I’m good.

  • How are you feeling?”

  • “Oh, I’m fine, except for the governments, you know,

  • their do-nothing attitude.”

  • “We’d like to hear about that.”

  • Jody Williams, here, won a Nobel Peace Prize

  • for her work banning land mines.

  • Now, she’s part of the

  • Campaign to Stop Killer Robots.

  • “Academics attacked the campaign

  • in the beginning years, you know, saying robotics

  • and A.I. are inevitable.

  • Maybe they are, but applying them

  • to killing human beings on their own

  • is not inevitable, unless you do nothing.

  • And we refuse to do nothing.”

  • Today, the Campaign to Stop Killer Robots

  • is staging a protest outside of the U.N.

  • The group is made up of activists, nonprofits,

  • and civil society organizations.

  • The campaign’s goal?

  • A ban on all weapons that can target and kill on their own.

  • So far, 30 countries have joined them

  • in supporting a ban, as well as 100

  • nongovernmental organizations, the European Parliament,

  • 21 Nobel laureates, and leading scientists,

  • like Stephen Hawking, Noam Chomsky and Elon Musk,

  • as well as Stuart Russell, and more than 4,500 other

  • A.I. researchers.

  • Protester: “Everyone, you can get up now.”

  • "Yay.”

  • [cheering]

  • Jody’s here with Mary Wareham.

  • “So this is the sixth time that governments

  • have come together since 2014 to talk

  • about what they call lethal autonomous weapons systems.”

  • We’re going to apologize in advance

  • for the obtuse use of acronyms in this portion of the video.

  • “We’re not trying to prohibit the use

  • of artificial intelligence.

  • You know, it can be beneficial to humanity.

  • We’re pro-robots.

  • We’re just anti-killer robots,

  • anti-fully autonomous weapons.”

  • “The C.C.W., the forum of the Convention for Conventional

  • Weapons (which actually has a name this long,

  • and I can never remember it) operates by consensus.

  • Which means you either negotiate the lowest

  • common denominator, which means

  • doing pretty much nothing, or if the entire room

  • of diplomats wants to move forward with a treaty,

  • for example, and one state says no, then it

  • goes nowhere.

  • And that’s really a dictatorship by one.”

  • “Once a bullet leaves a gun, the rifleman

  • ceases to have control over that bullet.

  • Autonomy is a way of extending human control beyond the time

  • a munition is deployed.”

  • That’s the United States arguing

  • that A.I. will save lives.

  • And remember, without their support,

  • any kind of regulation can’t move forward.

  • “Using algorithms and software to determine and engage

  • targets reduces people to objects.”

  • “In the U.S. perspective, there is nothing

  • intrinsically valuable about manually operating a weapon

  • system, as opposed to operating it

  • with an autonomous function.”

  • The United States isn’t alone.

  • The countries working hardest to build autonomous weapons

  • insist we can’t regulate what doesn’t exist yet.

  • And at the same time, their militaries

  • are developing these weapons right now.

  • “The line between a semi-autonomous weapon

  • that has a human in control, and a fully autonomous weapon

  • could simply be a software patch.”

  • “Indeed, some may say it is similar to trying

  • to discuss the internet in the ’80s,

  • ’70s, ’60s at this stage.”

  • “It is not necessary or desirable at this time

  • to define LAWS.”

  • “This so-called difficulty of definitions

  • continues to be willful obfuscation.”

  • The truth is, whether they exist or not just depends

  • on how you define them.

  • We don’t have weapons with artificial general

  • intelligence or A.I. that’s as smart as humans.

  • But we do already have weapons that

  • can use A.I. to search, select and engage targets

  • in specific situations.

  • And the technology is only getting better.

  • “So it could easily take another 10 years

  • before they even agree on a definition of what

  • an autonomous weapon is.

  • And by that time, it will be too late.

  • I think for some countries, that’s the point.”

  • In the ongoing race between technology and diplomacy,

  • technology is winning, because the dual-use nature

  • of technology means software designed to make

  • your life easier clearly has military applications.

  • “The A.I. community, myself included,

  • we were sort of asleep at the wheel for a long time.

  • And we weren’t really thinking about the ways

  • that it could be misused.”

  • Whether we like it or not, we’ve

  • entered the age of the algorithm.

  • And A.I. is changing our place on the battlefield.

  • Is it possible the next generation of soldiers

  • won’t have to kill?

  • “Look, it’s an appealing idea that, someday, robots

  • will just fight other robots and no one will get hurt.

  • I don’t think that’s realistic.”

  • “Unfortunately, if it worked like that,

  • we could just say, ‘Well, why don’t we

  • just play baseball and decide who wins or Tiddlywinks?’

  • No country is going to surrender

  • until the costs that they’ve incurred are unbearable.

  • So even if your robots are defeated,

  • the next stage is that their robots will

  • start killing your people.”

  • “Because the unfortunate reality is that wars

  • will only end when people die.”
