Hello.
I'm not a real person.
I'm actually a copy of a real person.
Although, I feel like a real person.
It's kind of hard to explain.
Hold on -- I think I saw a real person ... there's one.
Let's bring him onstage.
Hello.
(Applause)
What you see up there is a digital human.
I'm wearing an inertial motion capture suit
that's figuring out what my body is doing.
And I've got a single camera here that's watching my face
and feeding some machine-learning software that's taking my expressions,
like, "Hm, hm, hm,"
and transferring them to that guy.
We call him "DigiDoug."
He's actually a 3-D character that I'm controlling live in real time.
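To make that setup concrete, here is a minimal sketch of such a live control loop in Python. It is an illustration only, not the system described in the talk: every function in it (read_body_pose, read_face_frame, infer_expression, render_character) is a hypothetical stand-in for the inertial suit, the face camera, the machine-learning model and the character renderer.

```python
import time

# Hypothetical stand-ins for the hardware and software described in the talk:
# an inertial mocap suit for the body, a single camera watching the face,
# a machine-learning model mapping the face image to expression controls,
# and a renderer driving the 3-D character.
def read_body_pose():
    return {"spine": 0.0, "head_turn": 0.1}       # dummy joint values

def read_face_frame():
    return [[0.0] * 64 for _ in range(64)]        # dummy grayscale image

def infer_expression(frame):
    return {"smile": 0.7, "brow_raise": 0.2}      # dummy expression controls

def render_character(pose, expression):
    print(f"DigiDoug frame: pose={pose}, expression={expression}")

# The live loop: capture body and face, infer the expression, drive the character.
for _ in range(3):                                # a few frames for the sketch
    pose = read_body_pose()
    frame = read_face_frame()
    expression = infer_expression(frame)
    render_character(pose, expression)
    time.sleep(1 / 60)                            # aim for interactive frame rates
```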
So, I work in visual effects.
And in visual effects,
one of the hardest things to do is to create believable, digital humans
that the audience accepts as real.
People are just really good at recognizing other people.
Go figure!
So, that's OK, we like a challenge.
Over the last 15 years,
we've been putting humans and creatures into film
that you accept as real.
If they're happy, you should feel happy.
And if they feel pain, you should empathize with them.
We're getting pretty good at it, too.
But it's really, really difficult.
Effects like these take thousands of hours
and hundreds of really talented artists.
But things have changed.
Over the last five years,
computers and graphics cards have gotten seriously fast.
And machine learning, deep learning, has happened.
So we asked ourselves:
Do you suppose we could create a photo-realistic human,
like we're doing for film,
but where you're seeing the actual emotions and the details
of the person who's controlling the digital human
in real time?
In fact, that's our goal:
If you were having a conversation with DigiDoug
one-on-one,
is it real enough so that you could tell whether or not I was lying to you?
So that was our goal.
About a year and a half ago, we set off to achieve this goal.
What I'm going to do now is take you basically on a little bit of a journey
to see exactly what we had to do to get where we are.
We had to capture an enormous amount of data.
In fact, by the end of this thing,
we had probably one of the largest facial data sets on the planet.
Of my face.
(Laughter)
Why me?
Well, I'll do just about anything for science.
I mean, look at me!
I mean, come on.
We had to first figure out what my face actually looked like.
Not just a photograph or a 3-D scan,
but what it actually looked like in any photograph,
how light interacts with my skin.
Luckily for us, about three blocks away from our Los Angeles studio
is this place called ICT.
They're a research lab
that's associated with the University of Southern California.
They have a device there, it's called the "light stage."
It has a zillion individually controlled lights
and a whole bunch of cameras.
And with that, we can reconstruct my face under a myriad of lighting conditions.
We even captured the blood flow
and how my face changes when I make expressions.
This let us build a model of my face that, quite frankly, is just amazing.
It's got an unfortunate level of detail, unfortunately.
(Laughter)
You can see every pore, every wrinkle.
But we had to have that.
Reality is all about detail.
And without it, you miss it.
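The talk doesn't go into the math, but one common way such light-stage captures are used rests on the fact that image formation is linear in the lights: a photo of the face under a new lighting environment can be approximated as a weighted sum of the photos taken one light at a time. A toy sketch with made-up array sizes:

```python
import numpy as np

# Toy illustration of light-stage relighting: each entry in `olat` stands for a
# photo of the face lit by a single light; a new lighting environment is just a
# weighted sum of those photos, because image formation is linear in the lights.
num_lights, height, width = 156, 4, 4                 # made-up sizes for the sketch
olat = np.random.rand(num_lights, height, width, 3)   # one image per light
env_weights = np.random.rand(num_lights)              # brightness of each light

relit = np.tensordot(env_weights, olat, axes=1)       # shape (height, width, 3)
print(relit.shape)
```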
We are far from done, though.
This let us build a model of my face that looked like me.
But it didn't really move like me.
And that's where machine learning comes in.
And machine learning needs a ton of data.
So I sat down in front of some high-resolution motion-capturing device.
And also, we did this traditional motion capture with markers.
We created a whole bunch of images of my face
and moving point clouds that represented the shapes of my face.
Man, I made a lot of expressions,
I said different lines in different emotional states ...
We had to do a lot of capture with this.
Once we had this enormous amount of data,
we built and trained deep neural networks.
And when we were finished with that,
in 16 milliseconds,
the neural network can look at my image
and figure out everything about my face.
It can compute my expression, my wrinkles, my blood flow --
even how my eyelashes move.
This is then rendered and displayed up there
with all the detail that we captured previously.
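As a rough illustration of that per-frame budget, the sketch below times a single capture-infer-render step against the 16 milliseconds mentioned above. The FaceNet class and render function are hypothetical placeholders, not the trained network or renderer from the talk.

```python
import time

# Hypothetical stand-in for the trained network described in the talk: given a
# face image it returns expression, wrinkle, and blood-flow parameters.
class FaceNet:
    def predict(self, image):
        return {"expression": [0.3, 0.7], "wrinkles": [0.1], "blood_flow": 0.5}

def render(params):
    pass   # placeholder for the detailed renderer built from the captured data

BUDGET_S = 0.016   # the ~16 ms per-frame inference budget mentioned in the talk

net = FaceNet()
frame = [[0.0] * 64 for _ in range(64)]   # dummy camera image

start = time.perf_counter()
params = net.predict(frame)
render(params)
elapsed = time.perf_counter() - start
print(f"frame took {elapsed * 1000:.2f} ms (budget {BUDGET_S * 1000:.0f} ms)")
```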
We're far from done.
This is very much a work in progress.
This is actually the first time we've shown it outside of our company.
And, you know, it doesn't look as convincing as we want;
I've got wires coming out of the back of me,
and there's a sixth-of-a-second delay
between when we capture the video and we display it up there.
Sixth of a second -- that's crazy good!
But it's still why you're hearing a bit of an echo and stuff.
And you know, this machine learning stuff is brand-new to us,
sometimes it's hard to convince it to do the right thing, you know?
It goes a little sideways.
(Laughter)
But why did we do this?
Well, there's two reasons, really.
First of all, it is just crazy cool.
(Laughter)
How cool is it?
Well, with the push of a button,
I can deliver this talk as a completely different character.
This is Elbor.
We put him together to test how this would work
with a different appearance.
And the cool thing about this technology is that, while I've changed my character,
the performance is still all me.
I tend to talk out of the right side of my mouth;
so does Elbor.
(Laughter)
Now, the second reason we did this, as you can imagine,
is this is going to be great for film.
This is a brand-new, exciting tool
for artists and directors and storytellers.
It's pretty obvious, right?
I mean, this is going to be really neat to have.
But also, now that we've built it,
it's clear that this is going to go way beyond film.
But wait.
Didn't I just change my identity with the push of a button?
Isn't this like "deepfake" and face-swapping
that you guys may have heard of?
Well, yeah.
In fact, we are using some of the same technology
that deepfake is using.
Deepfake is 2-D and image based, while ours is full 3-D
and way more powerful.
But they're very related.
And now I can hear you thinking,
"Darn it!
I thought I could at least trust and believe in video.
If it was live video, didn't it have to be true?"
Well, we know that's not really the case, right?
Even without this, there are simple tricks that you can do with video
like how you frame a shot
that can make it really misrepresent what's actually going on.
And I've been working in visual effects for a long time,
and I've known for a long time
that with enough effort, we can fool anyone about anything.
What this stuff and deepfake is doing
is making it easier and more accessible to manipulate video,
just like Photoshop did for manipulating images, some time ago.
I prefer to think about
how this technology could bring humanity to other technology
and bring us all closer together.
Now that you've seen this,
think about the possibilities.
Right off the bat, you're going to see it in live events and concerts, like this.
Digital celebrities, especially with new projection technology,
are going to be just like the movies, but alive and in real time.
And new forms of communication are coming.
You can already interact with DigiDoug in VR.
And it is eye-opening.
It's just like you and I are in the same room,
even though we may be miles apart.
Heck, the next time you make a video call,
you will be able to choose the version of you
you want people to see.
It's like really, really good makeup.
I was scanned about a year and a half ago.
I've aged.
DigiDoug hasn't.
On video calls, I never have to grow old.
And as you can imagine, this is going to be used
to give virtual assistants a body and a face.
A humanity.
I already love it that when I talk to virtual assistants,
they answer back in a soothing, humanlike voice.
Now they'll have a face.
And you'll get all the nonverbal cues that make communication so much easier.
It's going to be really nice.
You'll be able to tell when a virtual assistant is busy or confused
or concerned about something.
Now, I couldn't leave the stage
without you actually being able to see my real face,
so you can do some comparison.
So let me take off my helmet here.
Yeah, don't worry, it looks way worse than it feels.
(Laughter)
So this is where we are.
Let me put this back on here.
(Laughter)
Doink!
So this is where we are.
We're on the cusp of being able to interact with digital humans
that are strikingly real,
whether they're being controlled by a person or a machine.
And like all new technology these days,
it's going to come with some serious and real concerns
that we have to deal with.
But I am just so really excited
about the ability to bring something that I've seen only in science fiction
for my entire life
into reality.
Communicating with computers will be like talking to a friend.
And talking to faraway friends
will be like sitting with them together in the same room.
Thank you very much.
(Applause)