TensorFlow.js (TF Dev Summit '20)

[MUSIC PLAYING]

EWA MATEJSKA: Hi, everyone. Welcome. I'm Ewa Matejska, and I'm a technical program manager on the TensorFlow team.

JASON MAYES: And hi, I'm Jason Mayes. I'm a developer advocate on the TensorFlow.js team.

EWA MATEJSKA: So I hear you'll be telling us about TensorFlow.js.

JASON MAYES: Indeed.

EWA MATEJSKA: What is it?

JASON MAYES: Very good question. Essentially, TensorFlow.js allows you to run machine learning anywhere that JavaScript can run. And that's actually in many places, such as in the web browser, on the back end in Node.js, in React Native for mobile apps, and even on Internet of Things devices, such as a Raspberry Pi.
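
To make that concrete, here is a minimal sketch of the usual entry points, assuming the published @tensorflow/tfjs package (or @tensorflow/tfjs-node on the back end); the tensor arithmetic is just a placeholder sanity check, not anything from the talk.

```js
// Browser (as an ES module): bundle @tensorflow/tfjs, or load it via a
// <script> tag from a CDN, which exposes a global `tf` object instead.
import * as tf from '@tensorflow/tfjs';

// Node.js would instead use the package with native CPU bindings:
//   const tf = require('@tensorflow/tfjs-node');

// Trivial sanity check that the library is wired up.
tf.tensor([1, 2, 3]).mul(tf.scalar(2)).print(); // prints [2, 4, 6]
```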

EWA MATEJSKA: So why would I use it instead of, let's say, Python?

JASON MAYES: Good question. So essentially, we have several superpowers depending on where you're executing, be it on the server side or on the client side. Now, on the client side, one of the key features is privacy: you can do all the inference on the client machine without having to send that data to a remote server somewhere, which might be critical for certain types of data that you're dealing with. Linked to that, we also have lower latency, because there's no server-side call; you don't have to go from client to server and back again to get the answer. And then, of course, moving on from there, is cost. Because there's no server involved, you don't have to have a GPU running 24/7 just to do that inference.

The final point there is reach and scale. Because anyone with a web browser can simply go to a link and open it up, it just works out of the box. There's no need to install a complex Linux environment with all the CUDA drivers and everything else. And this is particularly important for researchers, because they can then get everyone in the world to try out the model they've just launched, and see its performance and those edge cases that they may not have detected when testing with just a few people in the lab.
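
As a hypothetical illustration of that client-side story (the package and element id below are assumptions, not something shown in the talk), a pre-trained detector such as @tensorflow-models/coco-ssd downloads its weights once and then runs every prediction locally:

```js
// Object detection that keeps the image on the device.
// Requires @tensorflow/tfjs plus the pre-trained @tensorflow-models/coco-ssd model.
import * as cocoSsd from '@tensorflow-models/coco-ssd';

async function detectLocally(imgElement) {
  // The weights are fetched once; after that, inference is purely local.
  const model = await cocoSsd.load();

  // detect() runs in the browser -- the pixels never leave the machine.
  const predictions = await model.detect(imgElement);
  for (const p of predictions) {
    console.log(`${p.class} (${(p.score * 100).toFixed(1)}%)`, p.bbox);
  }
}

// 'my-image' is a placeholder id for an <img> element on the page.
detectLocally(document.getElementById('my-image'));
```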

EWA MATEJSKA: Do you have any demos for me?

JASON MAYES: Of course.

EWA MATEJSKA: Awesome.

JASON MAYES: So in addition to the existing demos that are open sourced and available online, such as object detection, pose estimation, and body segmentation, we are going to launch three new ones at Dev Summit this year. So here we go. Let's walk through those now.

Now, the first one is face mesh. And as you can see, it's running in real time here in the browser. As I move my face around, it's tracking it pretty well, opening and closing my mouth. And if I bring you in as well at the same time, it can track more than one face.

EWA MATEJSKA: Wow. Even with glasses.

JASON MAYES: Even with glasses. This is actually tracking 468 unique points on each of our faces and then rendering them to the screen. And we're even showing that in 3D on the right-hand side there, in WebGL, so we can actually represent a 3D model of your face in real time, which is pretty cool.
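
For reference, a minimal sketch of driving this model yourself, assuming the @tensorflow-models/facemesh package that was published around this Dev Summit, might look like the following:

```js
import * as facemesh from '@tensorflow-models/facemesh';

async function trackFaces(videoElement) {
  const model = await facemesh.load();

  // estimateFaces() returns one prediction per detected face,
  // which is why the demo can follow both of us at once.
  const faces = await model.estimateFaces(videoElement);
  for (const face of faces) {
    // scaledMesh holds the 468 [x, y, z] keypoints mentioned above,
    // in the coordinate space of the input video.
    console.log(`Face with ${face.scaledMesh.length} keypoints`, face.boundingBox);
  }
}
```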

EWA MATEJSKA: How is it getting such awesome performance?

JASON MAYES: Very good question. One key thing with TensorFlow.js on the client side is that it supports multiple back ends to execute on. If there's no specialist hardware, we can run on the CPU, of course, which is the fail-safe. But if there's a graphics card, we can actually leverage WebGL to do all that mathematics for you on the graphics card at high performance. And we recently released WebAssembly support as well, to get more consistent CPU performance across maybe older mobile devices. So--

[INTERPOSING VOICES]

EWA MATEJSKA: Very cool.
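
A small sketch of those backend knobs, assuming the @tensorflow/tfjs API plus the optional @tensorflow/tfjs-backend-wasm package. TensorFlow.js normally picks the best available backend on its own, so this only shows the mechanism:

```js
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm'; // registers the WebAssembly backend

async function chooseBackend() {
  // Prefer WebGL (GPU), then WebAssembly, then plain CPU as the fail-safe.
  for (const name of ['webgl', 'wasm', 'cpu']) {
    if (await tf.setBackend(name)) break; // resolves to false if unavailable
  }
  await tf.ready();
  console.log('Running on backend:', tf.getBackend());
}
```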

JASON MAYES: Moving on to my next demo, we also have hand pose. You can see here that this can track my hand in much the same way we were tracking the face earlier. It can track 21 unique points on my hand, so you can imagine how you might use this for something like gesture recognition or maybe even sign language. The choice is yours, whatever creative ideas you might have in your mind.
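
A rough sketch of reading those 21 keypoints yourself, assuming the @tensorflow-models/handpose package, for example as input to your own gesture logic:

```js
import * as handpose from '@tensorflow-models/handpose';

async function trackHand(videoElement) {
  const model = await handpose.load();
  const hands = await model.estimateHands(videoElement);

  for (const hand of hands) {
    // hand.landmarks is an array of 21 [x, y, z] keypoints;
    // hand.annotations groups them by finger (thumb, indexFinger, ...).
    const [tipX, tipY] = hand.annotations.indexFinger[3];
    console.log(`Index fingertip at (${tipX.toFixed(0)}, ${tipY.toFixed(0)})`);
  }
}
```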

And then finally, we have MobileBERT. If I just refresh the page here, you can see the model loading, and if I hit Continue, we can now essentially ask questions about this piece of text that we have on the screen. So I could ask something like, "What is TensorFlow?", and you can see it's managed to find the answer to that question for us in the web browser. This could be really useful if you're on a really large website or reading a research paper, and you just want to jump to a specific piece of knowledge without having to read the whole thing. And of course, BERT itself can be used for many other things, but this is just the Q&A model that we've got working today so far.
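
A minimal sketch of that question-and-answer flow, assuming the @tensorflow-models/qna package that wraps this MobileBERT model; the passage and question are placeholders:

```js
import * as qna from '@tensorflow-models/qna';

async function ask() {
  const passage =
    'TensorFlow.js is a library for machine learning in JavaScript. ' +
    'It runs in the browser, in Node.js, in React Native, and on IoT devices.';
  const question = 'What is TensorFlow.js?';

  const model = await qna.load();

  // findAnswers() returns candidate spans from the passage, best first.
  const answers = await model.findAnswers(question, passage);
  for (const a of answers) {
    console.log(`${a.text} (score: ${a.score.toFixed(2)})`);
  }
}

ask();
```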

EWA MATEJSKA: And can you remind me again where I can see the source code for this?

JASON MAYES: Sure. All of this stuff is available in our GitHub repository online. Just search for TensorFlow.js in Google and you'll find our home page with all the links that you need, and maybe even in the description of the video as well.

EWA MATEJSKA: Excellent. Thank you so much.

JASON MAYES: Thank you very much.

EWA MATEJSKA: And thank you for joining us.

[MUSIC PLAYING]

