[MUSIC PLAYING]
GAL OSHRI: Hi, everyone.
My name is Gal Oshri, and I'm product manager
on the TensorFlow team.
I'm here to talk to you about collaborative machine learning
with TensorBoard.dev.
Machine learning often involves collaboration
where you exchange ideas with a team, post questions on Stack
Overflow, share your code on GitHub,
and publish your findings to the broader research community
through papers and conferences.
In all of these, there's something
you're expressing about what you've done,
potentially through visualizations.
For example, it is common to share experiment results
in papers through various charts,
but it is also common to include this type of information
when asking for troubleshooting help.
In this case, someone posted an issue on GitHub asking why
they're not achieving the expected results, alongside
a screenshot of their TensorBoard to show
what's going wrong.
The problem is the pictures don't
convey all the information.
For example, you might want to show what other metrics are
being tracked, how many trials were run,
or let the reader explore the model structure in more depth.
Even when the code is released, it
can require quite a bit of setup before the results can
be viewed.
So how can we make this easier?
Well, we already have TensorBoard,
which is TensorFlow's visualization toolkit.
It is commonly used by researchers and engineers
to understand their experiment results.
It lets them track metrics, visualize their model,
explore model parameters, view their embeddings,
and a lot more.
And we're consistently adding new capabilities to it.
Last year, we launched the HParams Dashboard
to help visualize hyperparameter [INAUDIBLE] results.
This can help you identify which hyperparameters are most
promising for further exploration.
Given that many people are already using TensorBoard,
and some are even sharing screenshots of it,
we launched TensorBoard.dev to make all of this easier.
With TensorBoard.dev, you can upload your experiment results
and get a link that can be shared with everyone for free.
Others can view and interact with your TensorBoard
with no setup or installation.
We started with the Scalars dashboard,
but more dashboards will be added soon.
We're only at the beginning of this journey,
and we want to continue evolving this to help drive
state of the art research.
As inspiration, I want to share a few awesome examples with you
today.
The TensorFlow/models repository on GitHub
provides a collection of officially
supported TensorFlow models on cloud TPUs.
Many of them now provide a link to TensorBoard.dev.
This allows you to quickly view the training dynamics
and gives you a point of reference,
if you're training them all yourself,
in trying to understand if something is going wrong.
In this example, [? Shawn ?] is using
Twitter to share some work on large-scale ML experimentation.
TensorBoard.dev is used to help tell
that story more effectively.
This team of researchers at Google
recently published a paper on optimizer
grafting to help better understand
optimizer performance.
Along with the paper, they've also
uploaded results to TensorBoard.dev
to show 550 grafting curves that help
illustrate their technique.
This would be difficult to convey purely in a paper.
Another project from Google Research
introduces big transfer, a recipe
for scaling up image-based general visual representation
learning.
By transferring with a simple heuristic,
they achieve state of the art results across multiple vision
tasks.
They uploaded their results to TensorBoard.dev
to show how their model's performance reaches state
of the art and compares with the baseline across multiple data
sets.
This paper from NeurIPS 2019 discusses
binarized neural networks.
Their GitHub repository includes several links
to TensorBoard.dev to illustrate model accuracy.
But what's really cool is that, as part of the NeurIPS
Reproducibility Challenge, another group of researchers
published a report on this paper.
In this report, they included links to TensorBoard.dev
as well to provide a more complete picture
of the training process and include
other useful information to help others with debugging.
So all this sounds great, but how do you get started?
You continue using TensorBoard in the same way
that you use it today.
In this example, the TensorBoard Keras callback is shown,
but there are other APIs that can be used.
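As a minimal sketch of the callback approach described here (the toy model, data, and the log directory name logs/fit are placeholders, not from the talk):

```python
import numpy as np
import tensorflow as tf

# Toy data and model so the example is self-contained.
x = np.random.rand(64, 8).astype("float32")
y = np.random.randint(0, 2, size=(64,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# The TensorBoard callback writes event files to log_dir; these are the
# logs that TensorBoard reads and that can later be uploaded.
tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs/fit")
model.fit(x, y, epochs=2, callbacks=[tb_callback], verbose=0)
```

After training, the logs/fit directory contains the event files that TensorBoard visualizes.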
If you're not familiar with TensorBoard,
check out TensorFlow.org/TensorBoard
for some great [INAUDIBLE] tutorials.
You can upload these logs with the tensorboard dev upload
command, providing the log directory.
Just last week, we enabled optionally adding a name
and description to your experiments
to provide more context for those who view them.
This can include links to your paper or GitHub repo,
or simply point out what is really
interesting about your results.
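Putting these steps together, the upload might look like the following sketch (the log directory logs/ and the name and description text are placeholders):

```shell
# Upload a local log directory to TensorBoard.dev; --name and
# --description are the optional context fields mentioned in the talk.
tensorboard dev upload \
  --logdir logs/ \
  --name "My experiment" \
  --description "Baseline run; tracking loss and accuracy"
```

The first upload prompts you to authorize with a Google account, and the command then prints the tensorboard.dev URL for your experiment.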
You'll get a link to your TensorBoard
that you can immediately view, even
if your experiment is still in progress.
Finally, you can share the experiment by copying the URL
or using the Share button in the top right.
As I mentioned, we'll be adding more
of TensorBoard's dashboards over time
as well as enabling more collaboration capabilities.
You can learn more at TensorBoard.dev
and follow a simple example to get started.
Thank you for your time, and next up
is Julia to tell you about Kaggle with TensorFlow 2.0.
[MUSIC PLAYING]