Unlocking the power of ML for your JavaScript applications with TensorFlow.js (TF World '19)

  • BRIJESH KRISHNASWAMI: Hello, everyone.

  • Again, thank you for your patience.

  • We are super excited to be here and talking

  • to you about TensorFlow.js and how

  • to bring machine learning to JavaScript applications.

  • My name is Brijesh.

  • I am a technical program manager on the Google TensorFlow team.

  • And here is my colleague, Kangyi,

  • who is a software engineer on the TensorFlow team.

  • So here is an outline of what we are

  • going to cover in this talk.

  • We will start with the what and why of TensorFlow.js.

  • We will see some of the ready-to-use models

  • that TF.js supports.

  • We will do a code walk-through of the development workflow,

  • both using one of these pre-trained models

  • and training a custom model.

  • We will delve a little deeper into the tech

  • stack and the roadmap.

  • We will show you what the community

  • is building with TensorFlow.js.

  • There are some very impactful applications

  • that are being built, and we'd love to show you some examples.

  • And finally, we will point to some resources

  • that you can start exploring.

  • All right, so, we have a lot of exciting content to cover,

  • so let's get started.

  • You may have heard an overview of the technology

  • at the TensorFlow.js keynote this morning.

  • We are going to build on that here.

  • But first, a quick recap of the basics of TensorFlow.js.

  • So in a nutshell, TensorFlow.js is

  • a library for machine learning in JavaScript.

  • It is built for JavaScript developers

  • to create and run machine learning models

  • with an intuitive JavaScript-friendly API.

  • This means you can use it to perform training and inference

  • in the browser, browser-based platforms, and in Node.js.

  • ML operations are GPU accelerated,

  • and the library is fully open source and anyone is

  • welcome to contribute.

  • OK, so TensorFlow.js provides multiple starting points

  • for your needs.

  • So, first, you can directly use

  • the off-the-shelf models that the library provides,

  • and we will see a lot of these in a bit.

  • You could also use your existing Python TensorFlow models

  • with or without conversion, depending on the platform

  • that you're running on.

  • Or you can retrain an existing model with transfer learning

  • and then customize it to your data sets.

  • That's the second starting point.

  • Transfer learning typically needs a smaller data

  • set for retraining, so that might fit your needs better.

  • And the third starting point is you

  • can build your model entirely from scratch with a Keras-like

  • Layers API and train it.

  • You can train it either in the browser

  • or on server with Node.js.

  • So, we are going to delve much deeper into some

  • of these workflows today.

  • JavaScript, of course, is a ubiquitous language.

  • So by virtue of that, TensorFlow.js

  • works on a variety of platforms.

  • It lets you write ML code once and run it

  • on multiple surfaces.

  • As you can see, the library runs on any standard browser, so

  • regular web apps and progressive web apps are covered.

  • On mobile, TF.js is integrated with mini-app platforms

  • like WeChat.

  • We have just added first-class support

  • for the React Native framework, so apps can seamlessly

  • integrate with TensorFlow.js.

  • On server, TF.js runs on Node.

  • In addition, it can run on desktop applications

  • using the Electron framework.

  • All right, so, why TensorFlow.js?

  • Why run ML in the browser?

  • So we believe there are compelling reasons

  • to run ML on a browser client, especially

  • for model inferencing.

  • Let's look at some of these reasons.

  • Firstly, there are no drivers and nothing to install.

  • You include the TensorFlow.js library,

  • either at page load time by script sourcing it

  • into your HTML page, or by bundling with a package manager

  • into your client app, and you're good to go.

  • That's it.
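A minimal sketch of that setup, assuming the CDN-hosted bundle (the URL is illustrative):

```js
// In your HTML page, load the library from a hosted CDN (URL illustrative):
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
// The script exposes a global `tf` object you can use immediately:
console.log('TensorFlow.js version:', tf.version.tfjs);
// Alternatively, with a bundler: npm install @tensorflow/tfjs, then
//   import * as tf from '@tensorflow/tfjs';
```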

  • The second advantage is you can utilize a variety of device

  • inputs and sensors, such as the camera, microphone, and GPS,

  • through standard web and HTML APIs

  • and through a simplified set of TF Data APIs,

  • and we are going to see some examples today.

  • TF.js lets you process data entirely

  • on the client, which means it's a great choice

  • for privacy-sensitive applications.

  • It avoids round trip latency to the server.

  • It is also WebGL accelerated.

  • So, these factors combine to make

  • for a more fluid and interactive user experience.

  • Also, running ML on the client helps reduce server-side costs

  • and simplify your serving infrastructure.

  • For example, you don't need online ML serving

  • that has to scale to increasing traffic

  • and so forth, because you're offloading

  • all your compute to the client.

  • You just host an ML model from a static file location

  • and that's it.

  • On the server, there are also benefits

  • to integrating TensorFlow.js into your Node.js environment.

  • If you are using a Node.js serving stack,

  • it lets you bring ML into the stack as opposed

  • to calling out to a Python-based stack.

  • So it lets you unify your serving stack all in Node.js.

  • You can also bring your existing core TensorFlow models

  • into Node.js-- not just the pre-built,

  • off-the-shelf models, but your custom models that were

  • built with Python TensorFlow can be converted.

  • And in an upcoming release, you don't even

  • need the conversion process.

  • You can just use them directly in Node.

  • And finally, you can do all of this

  • without sacrificing performance.

  • You get CPU and GPU acceleration with the underlying TensorFlow

  • C library because that's what Node uses.

  • And we are also working on GPU acceleration via OpenGL,

  • so that removes the need for depending on CUDA drivers

  • as well.

  • So effectively, you get performance that's

  • similar to the Python library.

  • So these attributes of the library

  • enable a variety of use cases across

  • the client-server spectrum.

  • So let's take a look at some of those.

  • So, on the client side, it enables

  • you to build features that need high interactivity,

  • like augmented reality applications,

  • gesture-based interaction, speech recognition,

  • accessibility, and so forth.

  • On the server side of the spectrum,

  • it lets you have your more traditional ML

  • pipelines that solve enterprise-like use cases.

  • And in the middle, that can live either on the server

  • or on the client, are applications

  • that do sentiment analysis, toxicity and abuse reduction,

  • conversational AI, ML-assisted content authoring, and so

  • forth.

  • So you get the flexibility of choosing

  • where you want your ML to run--

  • on the client, on the server-- either one.

  • So whatever your use case is, TensorFlow.js

  • is production-ready-- ready to be leveraged.

  • So with that intro, I'd like to delve deeper

  • into the ready-to-use models available in TensorFlow.js.

  • Our collection of models has grown

  • and is growing to address the use cases that we just

  • mentioned, namely image classification for classifying

  • whole images, detecting and segmenting objects and object

  • boundaries, detecting the human body and estimating pose,

  • recognizing speech commands and common words from audio data,

  • and text models for text classification, toxicity

  • and abuse reduction.

  • You can explore all of these models today on GitHub.

  • You can use them by installing them with npm

  • or by directly including them from our hosted scripts.

  • So let's dive into a few of these models

  • and see more details on each.

  • So this is the PoseNet model.

  • It performs pose estimation by detecting 17 landmark points

  • on the human body.

  • It supports both single person and multi-person

  • detection within an image.

  • Now there are multiple versions of this model.

  • Versions that are backed by MobileNet and those backed

  • by ResNet provide options for balancing accuracy

  • versus model size versus latency,

  • depending on your needs.

  • And it enables use cases like gesture-based interaction,

  • augmented reality animation, and so

  • on-- things that are well-served by running ML on the client.

  • By the way, you can explore a demo of this particular model

  • at our booth in the expo hall.

  • Another human model is the BodyPix model.

  • BodyPix enables person segmentation, again

  • both single and multiple persons in an image.

  • It identifies 24 body parts, such as left arm, right arm,

  • torso, left / right legs, and so forth.

  • It also provides a convenience API to segment and mask

  • each body part in a different color,

  • which is what you're seeing in this particular GIF.

  • And in terms of use cases, it can

  • be used for things like blurring faces in an image,

  • or blurring the background, say, to protect privacy.

  • The COCO-SSD object detection model

  • enables object detection of multiple objects in an image.

  • It detects about 90 classes defined in the COCO data set.

  • And the nice thing is it takes as input

  • any browser-based image element, like an image or a video

  • or a canvas element, and returns an array of bounding boxes

  • with the detected class and the confidence level.

  • In this sample image, you can see

  • that the kite is detected with a high confidence score

  • and you get a bounding box.

  • There is also a DeepLab model which

  • offers semantic segmentation.

  • That's coming soon-- yet to be released.

  • So what I'd like to show you here

  • is how easy it is to use a model like that, like COCO-SSD,

  • in your JavaScript app without dealing with tensors,

  • transformations, or layers.

  • And this pattern really applies to any of our pre-built models.

  • So first, you script source the library

  • in your HTML file from the hosted CDN.

  • Alternatively, you can just npm install the library

  • and use a build tool like webpack

  • to bundle into a client.js file.

  • Then you load the model.

  • That's the .load method.

  • And you call the model .detect method on the image element

  • that you are trying to analyze.

  • And that's it.

  • What you get back is an array of objects

  • that have the bounding boxes and the classes that are detected.

  • So really, four or five lines of code to end up

  • leveraging a powerful ML model in the browser.
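Here is a minimal sketch of those few lines, assuming the page script-sources the hosted COCO-SSD model (which exposes a `cocoSsd` global) and contains an `<img id="img">` element:

```js
// Load the pre-trained COCO-SSD model, then detect objects in an image.
async function run() {
  const img = document.getElementById('img');   // assumed <img id="img">
  const model = await cocoSsd.load();           // the .load method
  const predictions = await model.detect(img);  // the .detect method
  // Each prediction carries a class label, confidence score, and bounding box.
  predictions.forEach(p => console.log(p.class, p.score, p.bbox));
}
run();
```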

  • Text models are another useful set.

  • TF.js has a toxicity detection model

  • and a more general universal sentence encoder model.

  • So let's look at the toxicity model a little closer.

  • I want to show you a live example of this model in a web

  • app.

  • Give me a second.

  • OK.

  • There you go.

  • So here is a super-simple web app that simply loads the model

  • and passes a few sentences to it.

  • And what the model does is classify it on a few dimensions

  • like, does this signify an insult, is there toxicity,

  • is there a threat, and so forth.

  • So let's try an example here.

  • So, I'm going to say something toxic--

  • your-- if I can see it--

  • ignorance is pretty--

  • OK.

  • So, something that I would think is toxic.

  • So, that returns as toxic, as well as returns "is an insult."

  • However, this model, I want to show you,

  • is context-based, not keyword-based.

  • So if you type the same word in a totally non-toxic context,

  • you're going to see a different answer.

  • And that's detected as "not an insult."

  • So this sort of model can be used

  • on the client side in product reviews,

  • or in chat type of situations.
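A minimal sketch of using the toxicity model, assuming it is script-sourced as a `toxicity` global; the 0.9 threshold is illustrative:

```js
// Classify sentences along dimensions like toxicity, insult, and threat.
async function checkComments() {
  const model = await toxicity.load(0.9);  // minimum confidence for a match
  const predictions = await model.classify(['your comment here']);
  // One entry per dimension; `match` is true, false, or null (below threshold).
  predictions.forEach(p => console.log(p.label, p.results[0].match));
}
checkComments();
```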

  • Here's an example I want to show you

  • from a Twilio developer who integrated TensorFlow.js

  • into a chat app.

  • And again, here it's able to detect toxic comments

  • and filter them right before sending.

  • All right, so-- slide.

  • Another interesting model is FaceMesh.

  • This model provides high-resolution tracking

  • of facial features.

  • So it detects about 400 points on a person's face.

  • So we believe that this model has

  • great potential for real-world applications-- for example,

  • detecting facial gestures, emotion, supporting

  • richer accessibility features, and so forth.

  • So now I'd like to show you a couple of cool demos built

  • using FaceMesh.

  • First up-- and you might have seen

  • this in the keynote session this morning if you attended.

  • Here's an application built by one of our partner teams

  • at Google.

  • This app is called Lip Sync.

  • This is a game that tracks how well you

  • are lip-syncing to a song, all real-time in the browser.

  • So let's see a demo.

  • Notice in particular how the display turns gray

  • when the lip sync doesn't match the lyrics,

  • and how the score kind of matches how well the person is

  • lip-syncing.

  • So when the lip-syncing--

  • kind of, when the person is not lip-syncing correctly,

  • then there's feedback given back to the user.

  • And this, you can see, is a type of example

  • that one can build entirely on the client using this library,

  • and sort of let your imagination go from there.

  • So, again, this is available at the TensorFlow.js booth.

  • You're welcome to try and see how you do.

  • The next application we'd like to highlight

  • is a virtual makeup try-on app, again using FaceMesh.

  • So this is a mini-app that the company ModiFace, which is

  • a subsidiary of L'Oréal, has built on the popular WeChat

  • platform.

  • I would like to invite Brendan Duke from ModiFace

  • to come on stage and show you how they use TensorFlow.js

  • to build this app.

  • [APPLAUSE]

  • BRENDAN DUKE: Thank you.

  • Thanks, Brijesh.

  • So, hi, I'm Brendan.

  • I'm a research scientist at ModiFace.

  • And first, let me briefly introduce ModiFace.

  • So ModiFace is an augmented reality for beauty company.

  • It was founded in 2007 in Toronto, Canada,

  • and fully acquired just last year by L'Oréal.

  • So, today, ModiFace collaborates with 20 beauty brands,

  • subsidiaries of L'Oréal, such as L'Oréal Paris and Maybelline.

  • And you'll find our technology in such online retail giants

  • as Macy's, Sephora, or Amazon.

  • So, now I'm going to talk to you about how

  • we make use of TensorFlow.js in our virtual try-on

  • applications.

  • So, in order to introduce why we need

  • a framework like TensorFlow.js, I'm

  • going to use the WeChat mini program for makeup

  • virtual try-on that we developed as an example

  • to showcase the kind of challenges

  • that you run into when you're deploying

  • real-time virtual try-on systems.

  • So, first of all, we want to deploy our applications

  • on the client side.

  • This is for user privacy and to avoid the latency

  • of a round trip to the back-end server every frame.

  • And also, we have a soft requirement

  • of at least 10 frames per second in order

  • for our applications to feel interactive.

  • So this is particularly difficult on the WeChat

  • platform because you have to run your mini program in WeChat's

  • JavaScript environment, and you inherit

  • the limitations of JavaScript.

  • So we needed both to develop a lightweight, spartan model

  • for our face tracking and also to find a framework that

  • runs quickly in JavaScript, and ideally makes use

  • of WebGL to run deep learning operators using the GPU

  • hardware acceleration.

  • So the second challenge we ran into

  • is that WeChat mini-programs have a 2 MB cumulative file

  • size limit.

  • So we need to find a framework that's small

  • and allows us to develop a small model

  • so that we can load it as quickly as possible, as well.

  • And thirdly, because our models have some custom operators,

  • we needed a framework that's extensible with custom

  • operators.

  • And we also needed a framework that is

  • going to support all the different mobile phone

  • models that are supported by WeChat itself.

  • So now, I'm going to tell you about how TensorFlow.js

  • fit the bill and was able to overcome some

  • of these challenges that we ran into in deploying

  • our make-up virtual try-on.

  • So, first of all, TensorFlow.js runs on the client side,

  • and because it makes use of a WebGL back-end,

  • it's able to harness the mobile phone GPU hardware

  • acceleration, which gives it like an order of magnitude

  • speedup over browser-based CPU solutions such as WebAssembly,

  • which are kind of limited right now by lack

  • of SIMD and multi-threading.

  • So, TensorFlow.js really allowed us to hit our frame rate

  • requirement.

  • And second of all, the TensorFlow.js library

  • is small and compact.

  • We were able to package the library in about 700 kilobytes.

  • And combined with our roughly 400-kilobyte model sizes,

  • we were able to fit everything within the WeChat mini program

  • file size limit.

  • And thirdly, TensorFlow.js both has widespread support

  • for built-in deep learning operators

  • and also allowed us to extend it with our custom operators

  • that we need for our face tracking.

  • And finally, TensorFlow.js supports a wide variety

  • of mobile phone models and has continuous support

  • from the TensorFlow.js team.

  • So for these four reasons, we chose TensorFlow.js

  • as the framework to deploy our makeup virtual try-on

  • on the WeChat plug-in.

  • So now I'd like to share with you some of our results.

  • With the help of TensorFlow.js, we

  • were able to successfully deploy to WeChat

  • our easy-to-use real-time system for realistic AR virtual try-on

  • for makeup.

  • Our entire final solution fit into about 1.8 megabytes,

  • including the code and our models.

  • And on an iPhone XS, our rendering and tracking

  • together run at over 25 frames per second.

  • So TensorFlow.js coupled with our tiny CNN

  • face-tracking model allowed us to deploy

  • to the web our fastest, smallest makeup virtual

  • try-on system to date.

  • So now, I'd like to briefly mention a few future research

  • directions that we have going on at ModiFace.

  • So, we've already used TensorFlow.js

  • to create web application demos for our makeup try-on, our hair

  • color try-on, and our nail polish try-on.

  • And in particular for our hair color try-on,

  • we were able to achieve an order of magnitude

  • speed up on a guided filter operator

  • that we use as a post-processing step

  • by just taking our WebAssembly implementation of that operator

  • and re-implementing it in TensorFlow.js.

  • This amounted to about a 20-millisecond reduction

  • in latency for the whole system.

  • We also have a number of other research projects going on

  • at ModiFace, such as hairstyle transfer, virtual aging

  • simulation, and skin analysis that we

  • hope to use TensorFlow.js to deploy in the near future.

  • So, thank you everyone, for listening,

  • and I'll pass the presentation onto Kangyi.

  • [APPLAUSE]

  • KANGYI ZHANG: Thank you, Brendan.

  • Hi, my name is Kangyi.

  • I'm a software engineer on the TensorFlow.js team.

  • Next, I want to walk you through the workflow of developing

  • apps with TensorFlow.js.

  • We just saw the ModiFace lipstick virtual try-on app,

  • and I want to show you the details of building

  • a sunglasses virtual try-on app, which uses augmented reality

  • to allow users to try sunglasses through the camera in a web

  • page.

  • To build a sunglasses virtual try-on app,

  • there are several components.

  • First, we need a model that is trained to find the face.

  • And second, the model needs to be loaded in the app

  • to run inference.

  • And third, the app needs to get video data from the camera.

  • And some pre-processing is required,

  • so the video data is compatible with the model as input.

  • And after the model inference, we

  • need some post-processing to use the model output data.

  • And finally, the sunglasses need to be rendered

  • based on the model output.

  • The first technical challenge is to get input data

  • from the camera.

  • TensorFlow.js provides a Data API

  • which enables developers to easily get data

  • from a web camera, microphone, image, text, and CSV file.

  • This Data API will also prepare the data as tensors,

  • so it's ready for the model with customized configurations.
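A minimal sketch of that Data API, assuming a `<video id="webcam">` element; the resize configuration is illustrative:

```js
// Wrap a video element as a webcam iterator that yields frames as tensors.
async function grabFrame() {
  const video = document.getElementById('webcam');
  const webcam = await tf.data.webcam(video, {
    resizeWidth: 224,   // match your model's expected input size
    resizeHeight: 224,
  });
  const frame = await webcam.capture();  // a tf.Tensor, ready for the model
  console.log(frame.shape);              // e.g. [224, 224, 3]
  frame.dispose();                       // release memory when done
}
```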

  • And the second technical challenge

  • is to detect the face in the camera

  • and find key points on the face.

  • Previously, we saw the FaceMesh model, which

  • is pre-trained to identify up to 400 facial keypoints in 3D

  • coordinates, and it is a great model for this task.

  • The third challenge is to post-process the model output

  • and display the sunglasses.

  • After we get the keypoints on the face,

  • we want to render the sunglasses on the right place.

  • And we find Three.js, which is an open source cross-browser

  • library used to create and display animated 3D graphics

  • in a web page through WebGL.

  • It will be used to render the sunglasses onto a user's face.

  • And so let's work through the workflow diagram here.

  • We take the video frame from web camera

  • through TensorFlow.js Data API and transform it

  • into a tensor that can be consumed by our model.

  • We run FaceMesh model in the TensorFlow.js runtime

  • to detect the facial keypoints, and then

  • use the model output to put the sunglasses graphics

  • on the right place.

  • And finally we use Three.js to render the sunglasses

  • in the browser.

  • And now let's start coding it.

  • In the HTML file, we start by loading the TensorFlow.js

  • library and the FaceMesh model from our hosted scripts.

  • And next, we add a video element to hold the web

  • camera as input and another container

  • to hold the rendered output.

  • And here's how we will use the model.

  • First, we load the model weights asynchronously and then use

  • the TensorFlow.js Data API to get image frames

  • from web camera.

  • And then use the model to do an inference.

  • The inference output is a JSON object

  • containing the facial keypoints in 3D coordinates.
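A minimal sketch of that load-and-infer flow, assuming the hosted FaceMesh script (which exposes a `facemesh` global) and the video element from earlier:

```js
// Load FaceMesh and estimate facial keypoints from the webcam video.
async function detectKeypoints() {
  const video = document.getElementById('webcam');
  const model = await facemesh.load();            // weights load asynchronously
  const predictions = await model.estimateFaces(video);
  if (predictions.length > 0) {
    // scaledMesh is an array of [x, y, z] keypoints in image coordinates.
    console.log('keypoints found:', predictions[0].scaledMesh.length);
  }
}
```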

  • And here we prepare the sunglasses image

  • to be rendered in live web camera video.

  • To display with Three.js, we need

  • to have a scene containing the sunglasses image,

  • a camera containing the video, and a WebGL renderer

  • so that we can render the scene within the camera

  • and display it.

  • And finally, we create a loop through requestAnimationFrame,

  • and in each loop, we get an image from the camera,

  • predict facial keypoints in the frame,

  • and render the sunglasses onto the video.
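A minimal sketch of that loop; `webcam`, `model`, `scene`, `camera`, and `renderer` come from the previous steps, and `positionSunglasses` is a hypothetical helper that moves the sunglasses mesh using the keypoints:

```js
// Per-frame loop: capture, predict, position the glasses, render.
async function renderLoop() {
  const frame = await webcam.capture();             // frame as a tensor
  const predictions = await model.estimateFaces(frame);
  if (predictions.length > 0) {
    positionSunglasses(predictions[0].scaledMesh);  // hypothetical helper
  }
  renderer.render(scene, camera);                   // Three.js draw call
  frame.dispose();
  requestAnimationFrame(renderLoop);
}
requestAnimationFrame(renderLoop);
```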

  • Let's see how the app finally looks.

  • OK, so-- let me first add the FaceMesh render result.

  • Now, just refresh the page and reload the model.

  • And it takes several seconds to load the model and--

  • OK, so you can see it shows the calculated keypoints

  • on my face and also the sunglasses.

  • OK, this demo is built with a pre-trained model we provide.

  • And we also support using pre-trained Python models

  • in JavaScript.

  • We provide a command line tool to bring

  • TensorFlow SavedModels, TFHub models, and Keras models

  • into JavaScript.

  • It supports more than 200 ops and it also

  • supports both TensorFlow 1.x and 2.0.

  • To use a Python model, first use the command line conversion

  • tool we provide to convert the model

  • into a JavaScript-friendly format.

  • And then the converted model can be

  • loaded through a TensorFlow.js API in JavaScript applications.

  • And then finally, you can use the model in the same way

  • as the model we provide.
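As a sketch of those two steps, with illustrative paths; the shell command is the converter CLI and the load call is TensorFlow.js's graph-model loader:

```js
// Step 1 (in a shell, not JavaScript): convert a SavedModel with the CLI tool.
//   tensorflowjs_converter --input_format=tf_saved_model ./saved_model ./web_model
// Step 2: load the converted model in your JavaScript application.
const model = await tf.loadGraphModel('https://example.com/web_model/model.json');
// (Converted Keras models are loaded with tf.loadLayersModel instead.)
const output = model.predict(inputTensor);  // inputTensor: a tf.Tensor you prepared
```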

  • And what about other problems for which

  • no model is available?

  • We provide a Layers API, which is a Keras-compatible

  • API for building models, and the lower-level op-driven Core

  • API if you need fine control of model architecture

  • or execution.

  • Let's see how to build and train a model from scratch

  • with TensorFlow.js.

  • The first step is to import the library,

  • and if you are working with Node.js,

  • you can also use the tfjs-node library,

  • which executes the TensorFlow operations using native

  • compiled C++ code.

  • And if you are on a system that supports CUDA,

  • you can also import the tfjs-node-gpu library

  • to get GPU acceleration when doing training or inference.
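A minimal sketch of those imports in Node.js:

```js
// CPU build, backed by the compiled TensorFlow C library:
const tf = require('@tensorflow/tfjs-node');
// On a CUDA-capable machine, use the GPU build instead:
// const tf = require('@tensorflow/tfjs-node-gpu');
```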

  • And this is what creating a convolution model for our image

  • classification task looks like.

  • As you can see, it is very similar to Keras code

  • in Python.

  • We start by instantiating a sequential model, where

  • the outputs of one layer are the inputs to the next layer.

  • And then we add a 2D convolutional layer

  • and a maxPooling operation with their configurations,

  • and then finish the model definition

  • by adding a flatten operation and dense layer with the number

  • of output classes.
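A minimal sketch of that definition; the input shape, filter counts, and ten output classes are illustrative:

```js
// Sequential model: each layer's output feeds the next layer's input.
const model = tf.sequential();
model.add(tf.layers.conv2d({
  inputShape: [28, 28, 1],       // illustrative image size, single channel
  filters: 8,
  kernelSize: 3,
  activation: 'relu',
}));
model.add(tf.layers.maxPooling2d({poolSize: 2}));
model.add(tf.layers.flatten());  // flatten feature maps for the dense layer
model.add(tf.layers.dense({units: 10, activation: 'softmax'}));
```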

  • Once the model is defined, we compile the model

  • and get it ready for training.

  • Here we select a loss function and an optimizer

  • for the training process.

  • And model.fit is the function that drives the training loop.

  • It is an async function, so we want to wait for the results.

  • Once the model is done training, we can save the model.

  • Here we save it to the browser local storage.

  • We also support saving to a number

  • of different destinations, such as the local file system or a remote URL.

  • Finally, we can use model.predict to get our result

  • from the trained model.
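Putting compile, fit, save, and predict together as a sketch; `xs`, `ys`, and `exampleInput` are tensors assumed to be prepared already:

```js
// Choose a loss function and optimizer, then run the training loop.
model.compile({
  loss: 'categoricalCrossentropy',
  optimizer: 'adam',
  metrics: ['accuracy'],
});
await model.fit(xs, ys, {epochs: 5});          // async, so we await it
await model.save('localstorage://my-model');   // browser local storage
// Other destinations include the file system in Node ('file://...') or a URL.
const result = model.predict(exampleInput);
```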

  • Next, I want to show you the tech stack and upcoming

  • features in TensorFlow.js.

  • We provide three layers of APIs--

  • the top layer is the pre-trained models,

  • ready to use out of the box.

  • In the middle, we provide the Layers API to easily

  • build and train models.

  • And in the lower layer, we provide an op-driven Core API,

  • so users can have fine control of model architecture

  • and do linear algebra calculations.

  • On the client side, including browsers

  • and mobile platforms running hybrid JavaScript applications,

  • our library is using WebGL for acceleration.

  • It detects the WebGL version and automatically uses it.

  • And on server-side in Node.js, we

  • use the TensorFlow CPU and GPU C libraries under the hood.

  • Soon we will add support for a headless GL back-end

  • as well, which will provide GPU acceleration without dependency

  • on CUDA.
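A minimal sketch of inspecting or overriding the back-end selection:

```js
// The library picks the best available back-end automatically.
console.log(tf.getBackend());  // e.g. 'webgl' in the browser, 'tensorflow' in Node
// You can also force one explicitly, e.g. the pure-JS CPU back-end:
await tf.setBackend('cpu');
```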

  • And this is the core architecture of TensorFlow.js.

  • We have multiple acceleration options

  • to make machine-learning operations faster

  • on both the client and server side.

  • You can bring a Python Keras model

  • and load it with Layers API or use TensorFlow SavedModel

  • and execute it with Core API.

  • And this is the performance benchmark on client side.

  • On laptop and iPhone, the MobileNet inference time,

  • you can see, is comparable to TensorFlow Lite.

  • We are working hard to improve the performance on Android.

  • On server-side in Node.js, which is using the TensorFlow C

  • library, you can see the MobileNet inference

  • time is also comparable to TensorFlow Python.

  • We just had an alpha release for React Native support.

  • You can use TensorFlow.js directly

  • inside a React Native app with WebGL acceleration,

  • and load models in the same way as [INAUDIBLE].

  • This is a React Native demo app performing style

  • transfer on an image.

  • First, it takes the content image,

  • and then the style image.

  • And this is the final result.

  • We are very excited to announce that Google Cloud AutoML now

  • supports TensorFlow.js.

  • You can train custom models using AutoML Vision

  • Edge for both image classification and object

  • detection.

  • All you need to do is to upload images and labels

  • in the AutoML page, and then AutoML

  • takes care of creating the best model for your training data

  • and provides evaluation details.

  • You can also choose whether you want

  • higher accuracy, faster prediction, or the best trade-off.

  • And after training is done, you can export

  • a TensorFlow.js-compatible model and use it in your JavaScript

  • application.

  • No model building or training code is necessary.

  • And you can get a model with a few clicks in the Google Cloud

  • Platform.

  • This is the sample code of using a model from Google Cloud

  • AutoML.

  • We provide APIs to load AutoML model

  • and you can use it in the same way as all the other models

  • in TensorFlow.js.
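A minimal sketch of that sample, assuming the tfjs-automl package (exposed under `tf.automl`) and an illustrative model URL:

```js
// Load an exported AutoML Vision Edge classification model and run it.
async function classify() {
  const model = await tf.automl.loadImageClassification(
      'https://example.com/automl/model.json');  // illustrative URL
  const img = document.getElementById('img');    // assumed <img id="img">
  const predictions = await model.classify(img);
  predictions.forEach(p => console.log(p.label, p.prob));
}
```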

  • As we mentioned in our keynote, the beta users

  • have already seen impressive performance improvements.

  • In the future, we will bring more pre-trained models

  • based on real-world use cases, such as auto-reply

  • and conversation understanding.

  • We are also bringing usability improvements

  • for server-side inference.

  • Soon we will support native SavedModel execution

  • without conversion.

  • We are developing new back-ends with WASM and WebGPU

  • to improve in-browser performance,

  • and React Native will have a full release soon.

  • The TensorFlow.js community is also building

  • many inspiring applications using machine

  • learning in the browser.

  • We would like to highlight one such example from Mila.

  • Mila researchers have built a radiology assistant

  • to analyze chest X-rays and make disease predictions, all

  • inside the browser app.

  • To tell us more about this tool and how they use TensorFlow.js,

  • I'd like to invite Dr. Joseph Paul Cohen from Mila

  • to come up on stage.

  • [APPLAUSE]

  • JOSEPH PAUL COHEN: Thanks.

  • Great.

  • So, if we take a look at the traditional diagnostic

  • pipeline, there's a certain area where physicians are already

  • using web-based tools to help them make diagnostic

  • decisions about a patient's future--

  • tools for kidney donor risk or cardiovascular risk

  • were already online as early as 2006.

  • So, with advances in deep learning

  • making diagnostic predictions from chest X-rays possible,

  • the next step is to also put that

  • online in a way that's usable by these doctors.

  • You can imagine such use cases for this in emergency rooms,

  • where humans are time-limited, so we

  • want them to make fewer mistakes, right,

  • especially if they are focusing on something

  • and don't notice something else that

  • may be important, but not their immediate concern.

  • You can also have rural hospitals,

  • all over the world, that can access things through the web,

  • right?

  • Maybe there's no radiologist nearby to help

  • them make that decision.

  • Maybe the country doesn't even have remote resources to aid

  • in them making that decision.

  • So, using these tools could be the closest opportunity

  • that physician has for a second opinion

  • before they make a decision on the course of treatment.

  • We can also imagine non-experts being able to triage cases

  • for a physician to see.

  • So things like pneumonia or pneumothorax

  • are things that should immediately

  • be brought to the attention of a physician.

  • And maybe there's 200 cases to get through in the morning

  • and six of them have really life-threatening results,

  • so they should be looked at first, right?

  • So we can aid in this as well as identifying rare diseases--

  • this is something we're still working on,

  • but this is kind of a nice direction

  • that this tool could take.

  • Good deal.

  • Great.

  • All right, so Yann LeCun said this project was "nice,"

  • so I include a quote here.

  • OK, so there's a few reasons why we need to put a chest X-ray

  • tool in a browser.

  • So we could ship a desktop application.

  • That would take money.

  • We don't have any money for this; we're a university.

  • So, we need to be able to do this in a way that's free.

  • And we also can't pay for the computation of processing

  • all these X-rays, so if we made a free web-based tool,

  • we couldn't run a serving server that actually

  • does the processing in a way that's sustainable forever

  • and not reliant on donations.

  • So in this way, we just want to offload the cost

  • to the user's device.

  • And to have them install software themselves from GitHub

  • is probably not something a physician is going to do,

  • so instead, we can deliver all the code in a web browser.

  • There's absolutely no setup.

  • It can run on any device that has a web browser,

  • essentially-- definitely anything Chrome runs on-- also

  • works in Firefox and Safari--

  • but we're able to deliver this in a very elegant way

  • without any setup.

  • There are some other issues that are

  • interesting to talk about in this case, where we have

  • to give away this tool for free because when we start charging

  • money, we go into this regulatory space, which

  • is kind of the reason we do this project in the first place.

  • Physicians and radiologists are scared of these tools

  • because companies say, oh, they work really well,

  • that, you know, AI is really able to read these images

  • when the performance is not 100 percent.

  • And we should be really honest, as researchers talking

  • to physicians, to make sure they really

  • know the extent of the power of these tools,

  • so they can really see how this can impact them.

  • So to kind of bridge the gap and make sure people

  • aren't afraid of these tools, getting these things

  • in front of these radiologists so they can just

  • play with them is a challenge.

  • There's a lot of stuff in the way.

  • So the best way is just give them a URL they can go to.

  • Nothing stands in the way.

  • There's no IT department.

  • There's no red tape at their hospital.

  • There's no money that needs to be

  • paid to make this thing happen.

  • So it really, really enables that use case,

  • and there's really no other way to do it unless you get

  • the doctor to sit down in your lab and show them

  • on your computer, right?

  • So it's really a game-changer.

  • So we can compute end-to-end in a browser in about one second

  • once it's loaded.

  • We also need to do out-of-distribution detection.

  • So it's an interesting challenge of matching

  • the expectations of the physician

  • or the radiologist with the tool.

  • So, we don't want to process images

  • of cats or regular bones.

  • We want to make sure that only correct X-rays go through

  • so we maintain a certain level of accuracy.

  • We do this with an autoencoder-- autoencoders are great--

  • which we also run in the browser.

  • We also need to compute gradients.

  • So why do we have to do this?

  • We want to show a saliency map.

  • And to do that, we need to compute

  • the gradients of the image pixels to the output.

  • We could ship two models.

  • That would be, like, the simplest approach,

  • code-wise, right?

  • We could ship one that predicts the actual pathology

  • and another one that just computes

  • the gradients for the input image, but that's a lot of work

  • and it's kind of annoying.

  • So what we can do instead is just perform

  • auto diff in the browser to make a new graph, which

  • computes the gradients, which is kind of magical.

  • And then we compute on that graph and we get the gradients.

  • We also do that with TensorFlow.js.
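As a sketch of the idea (not Mila's exact code): automatic differentiation gives the gradient of the top class score with respect to the input pixels, which is the saliency map; `model` and `inputImage` are assumed to exist:

```js
// Build a gradient function: d(max class score) / d(input pixels).
const gradFn = tf.grad(x => model.predict(x).max());
const saliency = gradFn(inputImage);     // same shape as the input image
const heatmap = saliency.abs().sum(-1);  // collapse channels for display
```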

  • So, OK.

  • Thank you.

  • [APPLAUSE]

  • KANGYI ZHANG: Thank you, Joseph.

  • And another example I want to show you

  • is developed by the community.

  • The Cognitive OpenTech group

  • at IBM, who gave a talk on this yesterday,

  • is developing a parasite detection

  • web app, which runs an image classification

  • model in the browser.

  • So it is easy to deploy and to run offline in the field.

  • The library was launched last year.

  • And this March, we released version 1.0.

  • We have seen a huge adoption by the community

  • with impressive download and usage numbers.

  • There are a lot of developers who

  • are building add-on libraries and extensions on top

  • of TensorFlow.js, and these are extending TensorFlow.js

  • in a very useful way.

  • We have a variety of resources to help you get started.

  • One I want to highlight is a book

  • called "Deep Learning with JavaScript,"

  • written by our colleagues on the TensorFlow team;

  • it provides a variety of machine learning examples written

  • with TensorFlow.js.

  • There are some courses available online.

  • We have our official website hosting the guides, demos,

  • tutorials, and our API documentation.

  • The website also lists all the pre-trained models

  • we provide.

  • Our code is totally open source, and you can find it

  • in our GitHub repo.

  • If you have any questions or ideas,

  • you can email us at TensorFlow.js@google.com

  • or join our Google Group, tfjs@tensorflow.org.

  • And you can also try our demo and meet our team

  • in the demo booth.

  • Thank you.

  • [APPLAUSE]
