
  • >> Hi, everybody.

  • We have a big show for you today.

  • So, if you have -- now would be a

  • great time to turn -- the exit behind you.

  • [ Applause ]

  • >> Hi, everybody.

  • Welcome.

  • Welcome to the 2018 TensorFlow Dev Summit.

  • We have a good day with lots of cool talks.

  • As you know -- we are embarking on the --

  • and the controllers in Europe are using it

  • to project the trajectory of flights

  • through the airspace of Belgium, Luxembourg, Germany and the

  • Netherlands. This airspace handles more than 1.8 million

  • flights and is one of the most dense airspaces in the world.

  • And dairy farming.

  • We know that a cow's health is vital

  • to the survival of the dairy industry.

  • And Connecterra, a company in the Netherlands, wondered

  • if they could

  • use machine learning to track the health

  • of cows and be able to provide insights to farmers and

  • veterinarians on actions

  • to be taken to ensure we have happy,

  • healthy cows that are high yielding.

  • In California, and also from the Netherlands.

  • And -- music, the machine learning algorithm, the neural networks --

  • >> And changed by machine learning.

  • The popular Google Home, or the Pixel, or Search or YouTube or

  • even Maps. Do you know what is fascinating in all of these

  • examples?

  • TensorFlow is at the forefront of them, making it all

  • possible.

  • A machine learning platform that can solve challenging problems

  • for all of us.

  • Join us on this incredible journey to make TensorFlow

  • powerful, scalable and the best machine learning platform for

  • everybody.

  • I would now like to invite Rajat, from TensorFlow, to tell us more about this. Thank you.

  • >> So, let's take a look at what we have been doing over the last

  • few years. It's been really amazing.

  • There's lots of new -- we have seen the popularity of

  • TensorFlow grow. Especially over the last year,

  • we focused on making TensorFlow easy to

  • use, and the degrees -- and new

  • programming paradigms like eager execution really make that

  • easier. Earlier this year, we hit the

  • milestone of 11 million downloads. We are really

  • excited to see how much users are using this and how much

  • impact it's had in the world.

  • Here's a map showing self-identified

  • locations of folks on GitHub that

  • starred TensorFlow. It goes up and down. In fact,

  • TensorFlow is used in every time zone in the world.

  • An important part of any open source product is the

  • contributors themselves. The people who make this project

  • successful. I'm excited to see over a thousand contributors

  • from outside Google who

  • are making contributions not just by improving code, but

  • also by helping the rest of the community by answering

  • questions,

  • responding to queries and so on.

  • Our commitment to this community is

  • sharing our direction in the roadmap,

  • having open design discussions, and

  • focusing on key needs like TensorBoard. We will be talking

  • about this later this afternoon in detail.

  • Today we are launching a new TensorFlow blog. We'll be

  • sharing work by the team and the community on this blog, and we

  • would like to invite you to participate in this as well.

  • We're also launching a new YouTube channel for TensorFlow

  • that brings together all the great content for TensorFlow.

  • Again, all of these are for the community to really help build

  • and communicate. All day today we will be sharing a number of

  • posts on the blog and videos on the channel.

  • The talks you are hearing here will be made available

  • there as well, along with lots of conversations and interviews

  • with the speakers.

  • To make reuse and sharing easier, today we are launching

  • TensorFlow Hub.

  • This library of components is easily integrated into your

  • models. Now, again, this goes back to really

  • making things easy for you.

  • TensorFlow has had a focus on deep learning and neural

  • networks, but it also includes a

  • rich collection of other machine learning algorithms.

  • It includes items like regressions and

  • decision trees commonly used for many structured data

  • classification problems. There's a broad collection of

  • state of

  • the art tools for stats and

  • Bayesian analysis.

  • You can check out the blog post for details.

  • As I mentioned earlier, one of the big key focus points for us

  • is to make TensorFlow easy to use. And we have been pushing

  • on simpler APIs, and making them more intuitive. The lowest

  • level -- our focus is to consolidate a lot of the APIs we

  • have and make it easier to build these models and train them.

  • At the next level, the TensorFlow APIs are really

  • flexible and let users build anything they want to.

  • But these same APIs are easier to use.

  • TensorFlow contains a full

  • implementation of Keras. It offers lots of layers and ways to

  • train your models as well.

  • Keras works with both eager and graph execution as well. For distributed

  • execution, we provide

  • estimators so you can take models and distribute them

  • across machines.

  • You could also get estimators from the Keras models.

  • And finally, we provide premade estimators. A library of ready

  • to go implementations of common machine learning algorithms.

  • So, let's take a look at how this works. First, you

  • would often start by defining your model. Keras gives you a

  • nice and easy way to define your model. This shows a convolutional

  • model defined in just a few lines.
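The slide itself isn't in the transcript, so the layer sizes below are illustrative, but a convolutional model defined in a few lines with the Keras Sequential API looks roughly like this:

```python
import tensorflow as tf

# A small convolutional model in a few lines of Keras.
# The architecture here is a stand-in, not the one from the slide.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),          # e.g. grayscale images
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
```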

  • Now, once you've defined that, often

  • you want to do some input processing.

  • We have the tf.data API, introduced in TensorFlow

  • 1.4, that makes it easy to process inputs and lets us do lots of

  • optimizations behind the scenes. And you will see a lot more

  • detail on this later today as well. Once you have those, the

  • model and the input data, you can put them

  • together by iterating over the dataset, computing gradients

  • and updating the parameters.

  • You need a few lines to put these together.
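A minimal sketch of those few lines, using a tf.data pipeline and eager-mode gradient computation; the random data and the tiny one-layer model are placeholders, not anything from the talk:

```python
import tensorflow as tf

# Placeholder data standing in for a real input pipeline.
xs = tf.random.normal([256, 4])
ys = tf.random.normal([256, 1])

# tf.data pipeline: shuffle and batch, with optimizations behind the scenes.
dataset = tf.data.Dataset.from_tensor_slices((xs, ys)).shuffle(256).batch(32)

# Any Keras model works here; one Dense layer keeps the sketch short.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

# Iterate the dataset, compute gradients, update the parameters.
for x, y in dataset:
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

Because this runs eagerly, you can step through it with an ordinary Python debugger, as the talk notes next.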

  • And you can use your debugger to debug that and resolve problems

  • as well.

  • And, of course, you can do it in even fewer lines by using the

  • predefined training routines we have in Keras.
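Those predefined routines are Keras's `compile` and `fit`; a minimal sketch with placeholder data (the tiny model and random inputs are stand-ins):

```python
import tensorflow as tf

# Placeholder data; in practice this would come from your tf.data pipeline.
xs = tf.random.normal([256, 4])
ys = tf.random.normal([256, 1])

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])

# compile + fit replace the hand-written training loop in two lines.
model.compile(optimizer='sgd', loss='mse')
history = model.fit(xs, ys, batch_size=32, epochs=2, verbose=0)
```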

  • In this case, it executes the model as a graph with all the

  • optimizations that come with it. This is great for a single

  • machine or a single device.

  • Now, often, given the heavy computation needs of

  • deep learning or machine learning, we

  • want to use more than one accelerator. For this, we have estimators.

  • The same datasets that you had, you

  • can build an estimator and really use

  • that to train across the cluster or multiple devices on a single

  • machine. That's great. But why use a whole cloud cluster

  • when you can do it faster on a single Cloud TPU?

  • This is used for training ML models at scale. And the focus

  • is to take everything you have been doing and build a TPU

  • estimator to allow you to scale the same model.
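The API named in the talk is the TPU estimator; as an illustrative sketch of the same idea in today's TensorFlow (keep the model definition unchanged, scale where it runs), here is a distribution-strategy version. MirroredStrategy is a swapped-in analogue, not the talk's API, and it falls back to a single replica on a CPU-only machine:

```python
import tensorflow as tf

# A strategy replicates training across the available devices;
# on a plain CPU machine it simply runs with one replica.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # The model definition itself is unchanged; only where it runs differs.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer='sgd', loss='mse')
```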

  • And finally, once you have trained

  • the model, use that one line at the

  • bottom to export it for deployment. Deployment is

  • important, and you often do that in data centers. But more and more

  • we are seeing the need to deploy this on the phones, on other

  • devices as well.

  • And so, for that, we have TensorFlow Lite.

  • And we have a custom format that's designed for devices and

  • lightweight and really fast to get started with. And then once

  • you have that format, you can include that in your

  • application, integrate

  • TensorFlow Lite with a few lines, and you have an

  • application to do predictions and include ML. Whatever task

  • you want to perform.
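The conversion step being described can be sketched like this, using the TFLite converter's Keras path; the tiny untrained model is a stand-in for your real trained model:

```python
import tensorflow as tf

# Any trained Keras model; a tiny stand-in keeps the sketch self-contained.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Convert to the lightweight TensorFlow Lite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
```

The resulting bytes are what you bundle into a mobile application alongside the TensorFlow Lite runtime.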

  • So, TensorFlow runs not just on many platforms, but in many

  • languages as well.

  • Today I'm excited to add Swift to the mix. And it brings a

  • fresh approach to machine learning.

  • Don't miss the talk by Chris Lattner this afternoon that

  • covers the exciting

  • details of how we are doing this. JavaScript is a language

  • that's synonymous with the web development community.

  • I'm excited to announce TensorFlow.js, bringing it to

  • the web developers. Let's take a brief look at this.

  • You can write the same TensorFlow operations in JavaScript and

  • call them just as plain JavaScript code.

  • And there's a full-fledged layers API on top. And full support for

  • TensorFlow and

  • Keras models so you can pick the best deployment for you.

  • And under the covers, these APIs are accelerated.

  • And we have NodeJS support coming

  • soon, which will give you the power to

  • accelerate on CPUs and GPUs.

  • And I would like to welcome Megan

  • Kacholia to talk about how TensorFlow does performance.

  • [ Applause ]

  • >> Thank you. All right.

  • Thanks, Rajat. So, performance across all platforms is critical

  • to TensorFlow's success. I want to take a quick step back

  • and talk about some of the things we think about when

  • measuring and assessing TensorFlow's performance. One

  • of the things we want to do is focus on real world data and

  • time to accuracy. We want to have reproducible benchmarks and

  • make sure they're realistic of the workloads and types of

  • things that users like you are doing on a daily basis. Another

  • thing, like Rajat talked about, is we want to make sure we have

  • clean APIs.

  • And we don't want to have a fast version and a pretty version.

  • The fast version is the pretty version.

  • All the APIs that we talked about that we're talking about

  • through various talks, these are the things you can use to get

  • the best performance out of TensorFlow. You don't have to

  • worry about what is fast or pretty, use the pretty one, it

  • is fast.

  • There's a talk on tf.data from Derek after the keynote. As well as

  • distribution strategy from Igor. And these are great examples of

  • things

  • we have been pushing on to ensure good performance and good

  • APIs. We want good performance, whether it's a large data center

  • like here, or maybe you're using something like the image

  • here:

  • a GPU or CPU box under your desk.

  • Making use of a cloud platform or a mobile or embedded device.

  • We want TensorFlow to perform well across all of them. Now

  • the numbers, because what is a performance talk if I don't show

  • you slides and numbers. First, look at things on the mobile

  • side.

  • This is highlighting TensorFlow Lite performance. There's a

  • talk giving a lot more detail how it works and the things we

  • were thinking of when making it later

  • today

  • by Sarah.

  • And we have seen the speedup with Qu -- and it's critical to have

  • strong performance regardless of the platform, and we're really

  • excited to see these gains in mobile. In looking past mobile,

  • just beyond, there are a number of companies in the

  • hardware space which continues to expand. The contributions

  • that come out of the collaborations that we have with

  • these companies, the contributions they give back to

  • TensorFlow and back to the community at large, are critical

  • to making sure that TensorFlow performs well on these specific

  • platforms for the users that each group really cares about.

  • One of the first ones I want to highlight is Intel.

  • So, the Intel MKL-DNN library, open

  • sourced and highly optimized for TensorFlow.

  • We have a 3X inference speedup on Intel platforms, as well as

  • great scaling efficiency on training. And this is one of

  • those things that highlights how important it is to have strong

  • collaborations with different folks in the community. And

  • we're excited to see things like this to go back to all the

  • users.

  • And I want to call out a new collaboration with NVIDIA

  • as well.