[MUSIC PLAYING]
TIM DAVIS: Hi, my name's Tim Davis,
and I'm the product manager for TensorFlow Lite.
Here we are at TensorFlow World, and one
of the really exciting things that we have to demo today
is TensorFlow Lite on microcontrollers.
Microcontrollers are everywhere.
They're in all the things that you use.
They're very, very small circuits,
and now they can run machine learning.
MARK STUBBS: What we're demonstrating here
is TensorFlow Lite micro running on a Cortex-M4 from Ambiq.
So we're simulating an anomaly with an offset weight
on the motor.
When we increase the speed, vibration occurs.
This indicator turns red, which means there's an anomaly.
That's running from the TensorFlow Lite micro engine.
And then on the cloud, this will turn red as well.
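In the demo, that red/green decision comes from a TensorFlow Lite Micro model running on the Cortex-M4. The simplest stand-in for that logic is a threshold on vibration energy, sketched here in plain Python (the sample windows and threshold are illustrative assumptions, not the demo's actual model):

```python
import math

def vibration_rms(samples):
    """Root-mean-square amplitude of a window of accelerometer samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def is_anomaly(samples, threshold=0.5):
    """Flag the window as anomalous (the demo's 'red' state) when the
    vibration energy exceeds a calibrated threshold."""
    return vibration_rms(samples) > threshold

normal = [0.1, -0.1, 0.05, -0.05]   # low vibration: motor balanced
offset = [0.9, -0.8, 1.1, -1.0]     # strong vibration: offset weight
```

On device, the learned model replaces the fixed threshold, which is what lets it distinguish normal high-speed vibration from a genuine fault.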
TIM DAVIS: Check out tensorflow.org/lite.
There's lots of code, documentation and samples
available.
GAL OSHRI: So TensorBoard is TensorFlow's visualization
toolkit.
It enables you to track your training metrics like loss
and accuracy, visualize your model graph,
inspect the model parameters, and a lot more.
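The metric tracking described here takes only a few lines (a minimal sketch assuming TensorFlow 2.x; the log directory name and loss values are placeholders):

```python
# Minimal sketch: write scalar training metrics that TensorBoard can plot.
import tensorflow as tf

writer = tf.summary.create_file_writer("logs/demo")
with writer.as_default():
    for step in range(3):
        # In real training these would be the loss/accuracy at each step.
        tf.summary.scalar("loss", 1.0 / (step + 1), step=step)
writer.flush()
```

Pointing TensorBoard at the log directory (`tensorboard --logdir logs/`) then renders the loss curve.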
At TensorFlow World, we announced TensorBoard.dev,
which lets you easily upload your TensorBoard logs and get
back a link that you can send to anyone so you can include it
in your GitHub issues, your Stack Overflow questions,
or even your research papers.
You can go to TensorBoard.dev to learn more and try out
a Colab notebook that you can get started with really easily.
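The upload flow described above was a single CLI call (sketch; `logs/` is a placeholder log directory, and the command assumes the `tensorboard` package is installed):

```shell
# Upload local TensorBoard logs and get back a shareable link.
tensorboard dev upload --logdir logs/
```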
SANDEEP GUPTA: So TensorFlow JS is
a library for doing machine learning in JavaScript.
And it brings machine learning into the hands of web developers
and other JavaScript developers, who no longer necessarily have
to use Python-based tools.
The library is full featured.
We have packaged a whole bunch of models
which let you bring machine learning into your application
straight out of the box.
And these are pre-trained models that
make it super easy to enhance your web applications.
We are showcasing some new models here,
some faster versions and performance improvements
in some of these models.
And these are great for use cases
like accessibility; recognizing people, gestures, and images;
text classification in web interfaces;
as well as speech command models
to recognize spoken words.
In addition to using these pre-trained models out
of the box that we packaged for you, often
you need to train a custom model for your application
on your own data.
And this can be pretty challenging.
So Google has a service called AutoML
which lets you bring your data to Google Cloud
and train a custom model.
And these models are optimized for your problem,
they give excellent accuracy and performance,
and then they're ready for deployment.
We are really excited to announce that we now
have integration of TensorFlow JS with this AutoML service.
So what that means is after training a custom image
classification model, you can export it
for use in a web application with the click of one button.
And we have early customer testimonials
who are showing the impressive gains
that they get in their workflow by using the service.
We are also showing some of the improvements
that we have on performance and platforms.
One of the things we are really excited about
is React Native integration.
So if you are a React Native developer who
is trying to build cross-platform native
applications, you can now use TensorFlow JS
directly from inside React Native
and you'll get the full power of WebGL acceleration.
We are seeing a lot of our users here
who are giving exciting talks on applications that they are
building with TensorFlow JS.
One of our favorites is from Dr. Joseph Paul
Cohen, who is from the University of Montreal.
And he's showing how they're using
TensorFlow JS for scoring chest X-ray radiology
images in the browser.
And this has huge privacy implications
because patients' medical data stays
client-side in the browser, doesn't go to the server side,
and you can run powerful machine learning models directly
on the client side.
ROBERT CROWE: And we're here looking
at TensorFlow Extended, or TFX, including the great pipeline
that you can create to move your model to production with TFX.
The pipeline starts over here on the left, where
you're ingesting your data,
creating TensorFlow examples with it,
and then running statistics across your data, which
you normally would do.
And then you're looking for problems with your data
with ExampleValidator and doing feature engineering
with Transform.
Eventually, when you've got everything right,
you're going to train your model and then
do deep analysis with Evaluator, looking
not just at the top level metrics,
but at each individual slice of your data to make sure
that the performance is good across the whole data set.
Then if it is good, you're going to use
ModelValidator and Pusher to push your model to production.
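The slice-level gate that Evaluator and ModelValidator apply can be sketched like this (the slices, metric, and 0.9 threshold here are illustrative assumptions, not TFX's actual API):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def passes_all_slices(per_slice_data, threshold=0.9):
    """Return True only if accuracy meets the threshold on EVERY slice,
    not just overall -- the check that gates pushing to production."""
    return all(
        accuracy(preds, labels) >= threshold
        for preds, labels in per_slice_data.values()
    )

slices = {
    "overall": ([1, 0, 1, 1], [1, 0, 1, 1]),  # 100% accurate
    "slice_a": ([1, 1, 0, 0], [1, 1, 0, 1]),  # 75% accurate: fails gate
}
```

Top-level metrics alone would pass this model; the per-slice check catches the weak segment before Pusher ever runs.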
But TFX is actually more than just a pipeline.
TFX is a whole suite of tools, including the What-If tool.
The What-If tool lets you experiment deeply
with your data set to see: if I make changes,
how does that affect the performance of my model?
If you want to learn more, go to tensorflow.org/tfx.
[MUSIC PLAYING]