  • SANDEEP GUPTA: Thank you, Laurence.

  • So Laurence gave us a very nice introduction

  • to what is machine learning and a glimpse

  • of some of the applications that are

  • possible with machine learning.

  • So at Google, we saw the enormous potential

  • that machine learning was beginning

  • to have all around us, and that led us

  • to releasing TensorFlow as an open source platform

  • back in 2015, and the objective of doing

  • that was to give everyone access to this open source library,

  • to develop machine learning solutions,

  • and thereby, our aim was to accelerate

  • the progress and pace of the development of machine

  • learning.

  • So since that time, TensorFlow has

  • become enormously successful.

  • In about these three years, it has

  • grown to become the number 1 repository on GitHub

  • for machine learning, and we are very

  • proud to have been named as the most loved software library

  • or framework in the 2018 Stack Overflow Developer Survey.

  • So all of this success, actually, in large part

  • is due to the very enormous group of users and developers

  • that we have out there who are using TensorFlow and building

  • a lot of these interesting applications.

  • We are blown away by these download numbers.

  • It has been downloaded more than 17.5 million times.

  • The number that we are very proud of

  • is the more than 1,600 contributors,

  • and the vast majority of them are non-Google contributors.

  • And you see very active engagement by the TensorFlow

  • Engineering team with the community in answering issues

  • and fielding pull requests and so on.

  • So it's a very thriving,

  • active ecosystem that TensorFlow

  • has generated around it.

  • If you look at where our users come from,

  • they come from all parts of the globe.

  • So these are self-identified locations from our users

  • on GitHub who have starred TensorFlow,

  • and you'll see that they come from every time zone

  • on the Earth--

  • down in the south, right up to Antarctica,

  • and right up in the Arctic Circle in Norway.

  • And I think I see a dot there in Ireland as well, which

  • might be Laurence, yeah.

  • So it's used everywhere for a wide variety of things.

  • So let me switch gears and talk a little bit

  • about the architecture of TensorFlow-- what

  • it is, what it lets you do.

  • So I'll talk briefly about the APIs that it has,

  • and then describe the platforms and the languages

  • that it supports, and then take a quick look

  • at some of the tooling that really makes it

  • a very useful platform for doing machine

  • learning and letting you do a very versatile set of things.

  • So to look at the API for TensorFlow,

  • fundamentally it's a computation execution engine.

  • And by that, what we mean is that you code your machine

  • learning algorithm-- or your steps,

  • as Laurence was showing in that example--

  • and then TensorFlow automates all the mechanics

  • of the training process and of running it

  • on your device of interest, and lets you build that model

  • and then use it in practical applications.

  • So you do this by using these higher-level APIs

  • as an easy way to get started, and we

  • have two paths of doing this.

  • Some of you might be familiar with Keras

  • as a library for developing machine learning models,

  • and TensorFlow has full tight integration with Keras,

  • and in fact, Keras is the preferred high-level way

  • of building machine learning models

  • using these LEGO bricks, which let you piece together models

  • one layer at a time and build pretty complex architectures.

  • So you can use the Keras library.
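A minimal sketch of this layer-by-layer "LEGO brick" style in Keras (the small image-classifier architecture and shapes here are assumptions for illustration, not taken from the talk):

```python
import tensorflow as tf
from tensorflow import keras

# Stack Keras layers one at a time: a tiny classifier for
# hypothetical 28x28 grayscale images with 10 classes.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),    # image -> flat vector
    keras.layers.Dense(128, activation="relu"),    # hidden layer
    keras.layers.Dense(10, activation="softmax"),  # per-class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

From here, `model.fit(...)` on labeled data would drive the training loop that TensorFlow automates.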

  • In addition, we also package some of these very commonly

  • used models as what we call estimators,

  • and these packaged models are battle tested and hardened,

  • and they let you do your job quickly

  • based on architectures that have already proven to be valuable.

  • And there's also a lot of flexibility

  • in customizing things and changing things

  • and wiring them any way you need to for your application.

  • So these APIs for model building are fed by data,

  • and part of what TensorFlow offers

  • is very flexible libraries for building these data pipelines--

  • for bringing in data into your models,

  • and then doing some of the feature transformation

  • or pre-processing to prepare the data so

  • that it's ready to be ingested by a machine learning model.
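A minimal sketch of such an input pipeline with the tf.data API (the in-memory toy data and the divide-by-4 "pre-processing" step are assumptions for illustration):

```python
import tensorflow as tf

# Build a pipeline from in-memory data: shuffle, transform, batch.
features = tf.constant([[1.0], [2.0], [3.0], [4.0]])
labels = tf.constant([0, 1, 0, 1])

dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=4)          # randomize example order
    .map(lambda x, y: (x / 4.0, y))  # toy feature transformation
    .batch(2)                        # group into model-ready batches
)
```

Iterating over `dataset` then yields (features, labels) batches ready to be ingested by a model.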

  • And then once you have all this set up,

  • then we have this distribution layer,

  • which basically deals with abstracting away your model

  • and distributing your training job to run

  • on a CPU or single GPU or multiple GPUs,

  • or even on custom architectures such as TPUs,

  • which we'll talk a little bit more about as we go forward.
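In current TensorFlow this distribution layer is exposed through tf.distribute; a minimal sketch using MirroredStrategy, which replicates the model across whatever GPUs are visible (on a CPU-only machine it falls back to a single replica; the one-layer model is a placeholder):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("replicas in sync:", strategy.num_replicas_in_sync)

# Variables created inside the strategy's scope are mirrored across
# devices; model.fit would then distribute each training step.
with strategy.scope():
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="sgd", loss="mse")
```

Swapping in a different strategy (for example, one targeting TPUs) changes where the job runs without changing the model code.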

  • So having trained your model, then, you

  • save your model as an object.

  • And this object basically captures

  • the architecture of the model, as well as

  • the weights and the tuning of the knobs

  • that you did during your training phase,

  • and now it's ready for use in your application.
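A sketch of that save-and-reuse step. TensorFlow's native SavedModel format captures the graph plus the trained weights; for brevity this example uses Keras's single-file HDF5 saving, and the untrained toy model is an assumption:

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
x = np.ones((1, 3), dtype=np.float32)

# Save architecture + weights to a single file, then reload it.
path = os.path.join(tempfile.mkdtemp(), "model.h5")
model.save(path)
restored = tf.keras.models.load_model(path)

# The restored model reproduces the original's predictions.
same = np.allclose(model.predict(x), restored.predict(x))
```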

  • And that's where you, again, have

  • a very flexible choice of how to use that trained model.

  • You can use a library such as TensorFlow Serving to manage

  • the whole process of serving.

  • You can deploy it on mobile devices, or, using TensorFlow.js,

  • you can deploy it in the browser.

  • And we'll talk more about all of these later today,

  • as well as the dedicated talks on each

  • of these components, where you can learn more about them.

  • So on the platform side, as I was saying earlier,

  • TensorFlow lets you run your machine learning models

  • on CPUs, GPUs, as well as on these custom

  • hardware such as TPUs, as well as on mobile devices,

  • be it Android or iOS framework, and then

  • going forward on lots of these embedded IoT type devices.

  • So talking a little bit more about the platforms,

  • one platform that we're particularly excited about

  • is Cloud TPUs.

  • So Cloud TPUs were announced by Google actually last year,

  • and then this version 2 of the Cloud TPUs

  • was made generally available earlier this year.

  • And these are pieces of hardware

  • specially designed from the ground up which

  • are really, really optimized for machine

  • learning type of workloads.

  • So they're blazingly fast for that.

  • Some of the specs are they're extremely high performance

  • in terms of compute, as well as a large amount

  • of very high bandwidth memory, which

  • lets you parallelize your training job

  • and take full advantage of this kind of an architecture.

  • And the nice thing is that TensorFlow is the programming

  • environment for Cloud TPUs, so TensorFlow is very tightly

  • coupled with being able to do this seamlessly and easily.

  • So let's see what's possible with these types of devices.

  • Here I'm showing you some numbers

  • from training, what's called the ResNet-50 model.

  • So ResNet-50 is one of the most commonly used models

  • for image classification.

  • It's a very complex deep learning

  • model which has 50 layers--

  • hence the name ResNet-50.

  • And it has more than 20 million tunable parameters,

  • which you have to optimize during the course of training.

  • So this turns out to be a very commonly used benchmark

  • to look at performance of machine

  • learning tools and models to compare how well the system is

  • optimized.

  • So we train the ResNet-50 model on a public data set

  • called ImageNet, which is a data set of tens of millions

  • of images that are labeled for object recognition

  • type of tasks.

  • And this model could be trained on Cloud TPUs for a total cost

  • of less than $40 with an extremely high image throughput

  • rate, which is mentioned there.

  • And that lets you take the entire ImageNet data set

  • and train your model, in a matter

  • of tens of minutes, what used to take hours, if not days,

  • a few months or years ago.

  • So really exciting to see the pace of this development

  • to do machine learning at scale.

  • On the other end of the spectrum,

  • we see enormous growth in the capabilities

  • of these small minicomputer devices

  • that we carry around in our pockets.

  • Smartphones, smart connected devices, IoT devices--

  • these are exploding.

  • And I think by some counts, their estimates

  • are that there will be about 30 billion such devices

  • within the next five years.

  • And there are a lot of machine learning applications

  • that are possible on these types of devices.

  • So we have made it a priority to make sure that TensorFlow

  • runs-- and runs well--

  • on these types of devices by releasing

  • a library called TensorFlow Lite.

  • So TensorFlow Lite is a lightweight version

  • of TensorFlow, and this is how the workflow works.

  • You take a TensorFlow model, and you train it offline--

  • let's say on a workstation or in distributed computing.

  • And you create your saved model.

  • That saved model then goes through a converter process

  • where we convert it into a model that's specifically

  • optimized for mobile devices.

  • We call it the TensorFlow Lite format.

  • So this TensorFlow Lite model can now

  • be installed on a mobile device where we also

  • have a runtime for TensorFlow, which is a TensorFlow Lite

  • interpreter.

  • So it takes this model, and it runs it.

  • It binds to the local hardware acceleration-- custom hardware

  • acceleration-- on that device-- for example, on Android.

  • You have the NNAPI-- the Android Neural Networks API--

  • which takes advantage of whatever might be the hardware

  • configuration, and this gives you a model that's lightweight.

  • It uses less memory, less power consumption,

  • and is fast for use on mobile devices.
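The convert-then-interpret workflow just described can be sketched as follows (the untrained one-layer model is a stand-in for a real model trained offline):

```python
import numpy as np
import tensorflow as tf

# 1. Train (here: just build) a TensorFlow model offline.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# 2. Convert it to the TensorFlow Lite flat format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# 3. Run it with the TensorFlow Lite interpreter, as on-device.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones((1, 4), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```

On a real device the interpreter can additionally delegate work to hardware acceleration such as NNAPI on Android.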

  • So here's an example of how this might look in practice.

  • What you're seeing here is an image classification example

  • running on a mobile phone, and we are holding common office

  • objects in front of it, and it's classifying them in real time--

  • scissors and Post-its, and obviously, a TensorFlow logo.

  • So it's very easy to build these types of models

  • and get these applications up-and-running very quickly,

  • and there are examples and tutorials

  • on our YouTube channel to show you how to do this.

  • So in addition to the flexibility on the platform

  • side, TensorFlow also gives you a lot of flexibility

  • in the programming languages that you can use to call it.

  • So we've always had a large collection of languages

  • that have been supported.

  • Python continues to be the mainstay of machine learning,

  • and a lot of work is being done in that area.

  • But you can see many more languages,

  • and most of these, actually, have been developed

  • through community support.

  • Two languages we are particularly excited about,

  • which we launched earlier this year--

  • one is support for Swift.

  • So Swift gives some unique advantages

  • by combining the benefits of a very intuitive, imperative

  • programming style with the benefits of a compiled language

  • so you get all the performance optimizations

  • that graphs typically bring you, and so you

  • can have the best of both worlds.

  • Another language that's extremely exciting

  • is bringing machine learning to JavaScript.

  • There's a huge JavaScript and web developer community

  • out there, and we believe that TensorFlow.js, which

  • is the JavaScript version of TensorFlow,

  • lets JavaScript developers easily

  • jump into machine learning and develop models

  • in the browser using JavaScript, or run it

  • with a Node.js server backend.

  • So we're beginning to see some really cool applications

  • of this, and I'll show you two examples of that here.

  • So this is a tool which you can try out yourself on that site

  • up there.

  • It's called TensorFlow Playground,