Deepfake technology will make you question what's real | CNBC Reports

  • So, if you have a bit of training in machine learning, there are open source packages available

  • online where somebody in first year could go in and do that. It means that it's kind of

  • democratized, in the sense that anybody can access it.

  • Hi guys, it's Grace here. Thanks for watching today.

  • That is my voice, but not my face.

  • This is an example of a deepfake, a word that

  • combines the terms "deep learning" and "fake."

  • It's an advanced form of artificial intelligence,

  • which can be used to manipulate faces and voices,

  • making the process of creating

  • fake identities and spreading false information much easier.

  • The technology is threatening to undermine the credibility of almost everything we see online.

  • From mobile apps that can seamlessly transpose people's faces into blockbuster movies,

  • to fake statements made by public figures like Barack Obama and Mark Zuckerberg.

  • The technology makes things that never happened appear entirely real.

  • As a consumer, you're going to be watching something; you're not necessarily going

  • to know if this is real or not.

  • Deepfake technology leverages a deep-learning

  • system to produce a persuasive counterfeit. This is done through studying photographs

  • and videos of a target person from multiple angles,

  • and then mimicking the subject's behavior and speech patterns.

  • Technology like this has the potential to

  • affect our elections and jeopardize our personal safety, which is why many companies are already

  • springing up to battle the cyber threat.

  • The challenge with this is that it is almost

  • impossible to regulate. This is something people can actually develop on their own.

  • To ban it seems to be a bit of a stretch, because you want it to generate movies, to do animations.

  • You know, you have a video of a stuntman and you want to change the face to some actor, right?

  • Malicious uses are kind of difficult to discriminate in terms of the actual technology

  • that is used to generate them. With time we'll see more clearly what the threat is, and then people can

  • actually devise countermeasures.

  • That's Ori Sasson. He runs a company called

  • Blackscore, which has built a machine learning platform to identify abnormalities in web

  • content and on social media.

  • The video you made for me, that was actually

  • just from an open source program, right?

  • So basically, what we did, we used this

  • open source software, which is available online.

  • What it is able to do is basically achieve

  • this outcome of taking a source video, with a new person,

  • and superimposing it on the destination video,

  • which is, for example, a video of you.

  • And all the software needs is a video to manipulate

  • and a target. In this case, a video of me was uploaded to the software, which then extracted

  • many of my facial expressions. The target was a video of Ori's colleague,

  • which he uploaded after the video of me. The software was then able to automatically match

  • her face to mine whenever we made the same movements.

  • What we did is produce a new video with another

  • person. The sound is the same, but the face has been replaced. Now you can see that there

  • are some slight irregularities that the person can notice, however...

  • But it's crazy that even the eyebrow movements and the blinking are exactly the same as the

  • video that I sent you of me.

  • With deepfake videos becoming more and more

  • common, how can we tell what is real and what is fake?

  • Would you be able to tell?

  • I took to the streets of Singapore to find out.

  • Scary and risky also. Sometimes if it is properly designed,

  • then it can be used for some other purposes.

  • I don't know how to use anything to protect myself,

  • that's why the best option would be not to go on social media much.

  • I don't usually frequent Facebook or any social media.

  • They don't know how to be skeptical enough.

  • But I think for most people,

  • we kinda know when it's fake and when it's not?

  • How can you tell?

  • We will crosscheck with other sources, of course.

  • Deepfakes first gained widespread attention in April 2018, when comedian Jordan Peele

  • created a video that pretended to show former President Barack Obama insulting

  • President Donald Trump in a speech.

  • President Trump is total and complete !@#$.

  • Such videos are becoming increasingly prevalent.

  • And it's especially easy for high-profile figures to be the targets of these attacks

  • because there are plenty of publicly available videos of them speaking at various events.

  • Moving forward, we need to be more vigilant with what we trust from the internet.

  • Mike Beck is the Global Head of Threat Analysis at a cybersecurity company called Darktrace.

  • He says we are at the crossroads when it comes to regulation and access to the technology.

  • We're just at a very dangerous position where given the access to the

  • compute power, given developers' access to open AI systems, we're in a place where actually

  • deepfakes are genuine things; they're not far-fetched anymore.

  • I think there is a big gap that governments need to fill, right. Governments tend to be

  • very, very slow to this because this is a new emerging technology.

  • It's picked up by a much younger generation.

  • Social media giants are often the targets

  • for disinformation campaigns, which is likely why platforms like Facebook and Twitter are

  • beginning to take action on deepfakes.

  • Shuman Ghosemajumder is the Chief Technology Officer

  • of Shape Security, a California-based cybersecurity company.

  • We spoke via Skype to discuss the new policies.

  • Shuman, can you tell me a bit more about Facebook's

  • and Twitter's deepfake policies?

  • Twitter and Facebook have employed slightly

  • different approaches to dealing with deepfakes.

  • When they detect that type of manipulated content,

  • they may place some type of notice on the tweets themselves, and then people will

  • decide how to react to that.

  • Facebook on the other hand, has specifically

  • singled out deepfake videos.

  • But they also have an exception that they've made, specifically

  • for satire and parody videos, and that's going to be an interesting case in terms of

  • how they determine whether something is satire or parody.

  • So how Facebook ends up deciding what's a parody will make a big difference in terms of what

  • content ends up getting pulled down or not. So, something that could be uploaded as a

  • clear parody might stay up. But if that same content is then copied somewhere else, and

  • the new audience doesn't realize that it's a parody, what's the policy there?

  • Are they going to pull it down?

  • So far, regulators worldwide have been slow

  • to clamp down on deepfakes. In the United States, the first federal bill

  • targeting deepfakes, the Malicious Deep Fake Prohibition Act,

  • was introduced in December 2018.

  • And states including California and Texas have enacted laws that make deepfakes

  • illegal when they're used to interfere with elections.

  • Chinese regulators have also announced a ban, effective in 2020, on the publishing and distribution

  • of fake news created using deepfake technology.

  • Those international conversations are at a

  • very early stage right now, and it'll be some time before we have any sort of an agreement

  • or standards that will allow us to say this is the right approach

  • that the entire international community can agree on.

  • It's a cat-and-mouse game where you've

  • got AI on one side that is creating fake content, that's trying to go undetected,

  • and AI on the other side that is trying to detect that content.

  • There's simply too much content that's generated every single day, every single hour,

  • on those platforms for humans to be in the middle of a decision-making process.

  • So just how easy is it to access this technology?

  • It turns out, easier than you think.

  • Basically, many of these technologies have become

  • much more accessible to normal people without big budgets and special software.

  • So that's where the challenge comes because then it could be abused as we mentioned.

  • So, you're saying that if I were just a fourth-year uni student studying comp sci,

  • I could just be creating this in my garage?

  • Yeah, if you have a bit of training in machine learning using TensorFlow,

  • you could do that. And furthermore, there are actually open source

  • packages available online where somebody who is maybe even in first year can go do that.

  • It means that it is kind of democratized, in the sense that anybody can access it.

  • Without proper regulation, some fakes may fall into the wrong hands and lead to identity theft.

  • And there is no black-and-white answer. Many industries, such as film and gaming, have

  • been heavily reliant on 'deepfake technology' to make animation look more realistic.

  • Already, apps with rudimentary face-swapping features are available to consumers.

  • FaceApp, an app made by a Russia-based developer, allowed users to see

  • what they might look like when they got older.

  • And the Chinese deepfake app Zao let users superimpose their faces onto celebrities.

  • Both went viral in 2019 and sparked numerous privacy concerns.

  • For many general consumers, if somebody were to create a video of them doing something,

  • it may not be of much impact, right? I mean it could be a bit creepy,

  • but it may not have a high impact.

  • Today, there are a lot of organizations that

  • do certification exams or other exams by taking a video of the person as he holds up his ID.

  • For example, if you're very sophisticated, then maybe you can get another person to take

  • the exam, but you create some live modification of the video to put on another person's face.

  • So I'm taking the exam, but actually you're the one appearing to be taking the exam.

  • That's a bit far-fetched in today's technology but it's not that far-fetched.

  • And the lower barriers to entry mean you can expect more deepfake apps to be released.

  • I think there is a big race in the market currently to develop different types of apps.

  • Developers can kind of be messing about with some of these techniques and start to play

  • about with images, and you'll get some developer who is doing really, really well, and you'll

  • see those apps released on the app stores.

  • As a consumer, you're going to be watching something,

  • and you won't necessarily know if this is real or not.

  • Cybersecurity is not just about protecting your passwords. By some estimates, the current

  • cybersecurity market is worth more than $150 billion and is

  • predicted to grow to nearly $250 billion by 2023.

  • Mike says many companies are turning to biometrics,

  • facial recognition and voice recognition to add additional levels of security for employee logins.

  • But these new tools are a double-edged sword. With deepfakes, these technologies

  • may present new vulnerabilities for both employees and employers.

  • As citizens and people who are interacting with Facebook, with YouTube, with Instagram,

  • we're all putting videos out there of ourselves; we're all giving our data away

  • to a degree, so people who have access to those platforms will be able to see

  • those images and potentially reuse them.

  • So, this was about a user who had lost access

  • to their credentials. And an attacker had been able to gain access to the password, and also,

  • they had a voice recognition system as a second factor, which is considered really strong, but

  • actually the attacker was able to spoof that voice using an AI system, or a deepfake

  • if you like, and gain access, using that multi-factor authentication, to their corporate systems.

  • From social media accounts to online e-forms, a huge trove of our personal information is online,

  • making anyone with a digital presence an easy target.

  • So how can we prevent ourselves from falling into these traps?

  • On the corporate side, that becomes easier because we can put in place

  • more detections and things like that. We can look for anomalies in the systems.

  • On a consumer scale, that's so much harder. In terms of protecting yourself, that's tough.

  • There's a genuine problem here at the moment.

  • If someone really wanted to come after you,

  • they could probably put you in a place you've never been.

  • You are on camera all the time; there is access to you as a person on the newsreel.

  • AI platforms can take that image and put it in places you never thought you'd see yourself.

  • I'm really scared... it sounds like you're saying we're doomed.
