
  • [MUSIC PLAYING]

  • CHRIS LATTNER: Hi, everyone.

  • I'm Chris.

  • And this is Brennan.

  • And we're super excited to tell you about a new approach

  • to machine learning.

  • So here in the TensorFlow team, it

  • is our job to push the state of the art

  • in machine learning forward.

  • And we've learned a lot over the last few years

  • with deep learning.

  • And we've incorporated most of that all into TensorFlow 2.

  • And we're really excited about it.

  • But, here, we're looking a little bit further

  • beyond TensorFlow 2.

  • And what do I mean by further?

  • Well, eager mode makes it really easy to train a dynamic model.

  • But deploying it still requires you to take that and then write

  • a bunch of C++ code to help drive it.

  • And that could be better.

  • Similarly, some researchers are interested in taking machine

  • learning models and integrating them into larger applications.

  • That also often requires writing C++ code.

  • We always want more flexible and expressive autodifferentiation

  • mechanisms.

  • And one of the things we're excited about

  • is being able to define reusable types that

  • then can be put into new places and used

  • with automatic differentiation.

  • And we always love improving your developer workflow.

  • We want to make you more productive

  • by taking errors in your code and bringing them

  • to your source and also by just improving your iteration time.

  • Now, what we're really trying to do here

  • is lift TensorFlow to entirely new heights.

  • And to do that, we need to be able to innovate

  • at all levels of the stack.

  • This includes the compiler and the language.

  • And that's what Swift for TensorFlow is all about.

  • We think that applying new solutions to old problems

  • can help push machine learning even further than before.

  • Well, let's jump into some code.

  • So first, what is Swift?

  • Swift is a modern and cross-platform programming

  • language that's designed to be easy to learn and use.

  • Swift uses types.

  • And types are great, because they can

  • help you catch errors earlier.

  • And also, they encourage good API design.

  • Now, Swift uses type inference, so it's really easy to use

  • and very elegant.

  • But it's also open source and has an open language evolution

  • process, which allows us to change the language

  • and make it better for machine learning, which is really great.
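
As a quick illustration of the point about types and inference (a toy example of ours, not from the talk):

```swift
// Swift infers concrete types from initializers, so you get static
// type checking without spelling the types out.
let epochs = 10              // inferred as Int
let learningRate = 0.01      // inferred as Double
let labels = ["cat", "dog"]  // inferred as [String]

// A mismatch is caught at compile time, before the code ever runs:
// let batchSize: Int = "32"   // error: cannot convert 'String' to 'Int'

print(type(of: epochs), type(of: learningRate), type(of: labels))
```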

  • Let's jump into a more relevant example.

  • This is how you define a simple model in Swift for TensorFlow.

  • As you can see, we're laying out our layers here.

  • And then we define a forward function, which composes them

  • together in a linear sequence.

  • You've probably noticed that this looks a lot like Keras.

  • That's no accident, of course.

  • We want you to be able to take what you know about Keras

  • and bring it forward into this world as well.
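
To make the Keras comparison concrete, here is a plain-Swift sketch of that kind of two-layer model. It deliberately avoids the TensorFlow library, so plain arrays of `Double` stand in for tensors, and the layer sizes and weights are made up; the real API conformed layers to a `Layer` protocol instead.

```swift
// Plain-Swift sketch of a Keras-style two-layer model. Arrays stand
// in for tensors; weights and sizes here are illustrative only.
func dot(_ a: [Double], _ b: [Double]) -> Double {
    var sum = 0.0
    for i in a.indices { sum += a[i] * b[i] }
    return sum
}

func relu(_ x: Double) -> Double { max(0, x) }

struct Dense {
    var weight: [[Double]]               // shape [outputSize][inputSize]
    var bias: [Double]                   // shape [outputSize]
    var activation: (Double) -> Double

    func callAsFunction(_ x: [Double]) -> [Double] {
        weight.indices.map { i in activation(dot(weight[i], x) + bias[i]) }
    }
}

struct Model {
    // Lay out the layers...
    var dense1 = Dense(weight: [[1, -1], [0.5, 0.5]], bias: [0, 0], activation: relu)
    var dense2 = Dense(weight: [[1, 1]], bias: [0], activation: { $0 })

    // ...and a forward function that composes them in a linear sequence.
    func callAsFunction(_ x: [Double]) -> [Double] {
        dense2(dense1(x))
    }
}
```

For example, `Model()([2, 1])` runs the forward pass through both layers.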

  • Now, once we have a simple model, let's train it.

  • How do we do that?

  • All we have to do is instantiate our model,

  • pick an optimizer and some random input data,

  • and then write a training loop.

  • And, here, we'll write it by hand.

  • One of the reasons we like writing by hand

  • is that it gives you the maximum flexibility

  • to play with different kinds of constructs.

  • And you can do whatever you want, which is really great.
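
A hand-written loop in this spirit can be sketched in plain Swift. This toy version fits a one-parameter linear model and writes the gradient out by hand, whereas Swift for TensorFlow would compute it with automatic differentiation; the data and learning rate are invented.

```swift
// Hand-written training loop: fit y = 2x with SGD on toy data.
let xs: [Double] = [1, 2, 3, 4]
let ys: [Double] = [2, 4, 6, 8]   // targets for the true w = 2

var w = 0.0                       // the single model parameter
let learningRate = 0.03

for _ in 0..<300 {
    // Gradient of mean squared error: mean over i of 2 * (w*x - y) * x.
    var grad = 0.0
    for (x, y) in zip(xs, ys) {
        grad += 2 * (w * x - y) * x
    }
    grad /= Double(xs.count)

    w -= learningRate * grad      // the SGD update step
}
// w has now converged very close to 2.
```

Because the loop is just ordinary code, you can restructure it however you like: change the update rule, log the loss, or mix in arbitrary control flow.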

  • But one of the major advantages of Swift for TensorFlow

  • is the workflow.

  • And so instead of telling you about it, what do you think,

  • Brennan, should we show them?

  • BRENNAN SAETA: Let's do it.

  • All right, the team has thought long and hard

  • about what's the easiest way for people to get started

  • using Swift for TensorFlow.

  • And what could be easier than just opening up a browser tab?

  • This is Google Colab: hosted Jupyter notebooks.

  • And it comes with Swift for TensorFlow built right in.

  • Let's see it in action.

  • Here is the layer model, the model

  • that Chris just showed you a couple of slides ago.

  • And we're going to run it using some random training

  • data right here in the browser.

  • So we're going to instantiate the model.

  • We're going to use the stochastic gradient descent (SGD)

  • optimizer.

  • And here we go.

  • We have now just trained a model using

  • Swift for TensorFlow in our browser on some training data

  • right here.

  • Now, we can see the training loss is decreasing over time.

  • So that's great.

  • But if you're anything like me, whenever I try to use

  • machine learning in any application,

  • I start with a simple model.

  • And I've got to iterate.

  • I've got to tweak the model to make it fit better

  • to the task at hand.

  • So since we're trying to show you the workflow,

  • let's actually edit this model.

  • Let's make it more accurate.

  • So here we are.

  • Now, let's think a little for a moment.

  • What changes do we want to make to our model?

  • Well, this is deep learning after all.

  • So the answer is always to go deeper, right?

  • But if you've been following the recent literature,

  • the state of the art suggests that not just sequential layers,

  • but skip connections or residual connections

  • are a really good idea to make sure your model continues

  • to train effectively.

  • So let's go through and actually add an extra layer

  • to our model.

  • Let's add some skip connections.

  • And we're going to do it all right now in under 90 seconds.

  • Are you ready?

  • All right, here we go.

  • So the first thing that we want to do

  • is we need to define our additional layer.

  • So we're going to fill in this dense layer.

  • Whoops.


  • And one thing you can see is that we're

  • using Tab autocomplete to help fill

  • in code as we're trying to develop and modify our model.

  • Now, we're going to fix up the shapes right here really

  • quick, so that the residual connections will all work.

  • If I can type properly, that would go better.

  • All right, great.

  • We have now defined our model with the additional layers.

  • All we need to do is modify the forward pass,

  • so that we add those skip connections.

  • So here we go.

  • The first thing we need to do is we

  • need to store in a temporary variable

  • the output of the flattened layer.

  • Then we're going to feed the output of the flattened layer

  • to our first dense layer.

  • So dense.applied(to: tmp, in: context).

  • Now, for the coup de grâce, here is our residual connection.

  • So dense2.applied(to: tmp + tmp2, in: context).

  • Run that.

  • And, yes, that works.

  • We have now just defined a new model

  • that has residual connections and is

  • one additional layer deeper.
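
The forward pass typed in the demo can be sketched in plain Swift as follows. Arrays of `Double` stand in for tensors, and the layers are stand-in functions with made-up weights; only the wiring (save `tmp`, compute `tmp2`, add them before the last layer) mirrors the demo.

```swift
// Sketch of a residual forward pass: dense2(tmp + tmp2).
func addVectors(_ a: [Double], _ b: [Double]) -> [Double] {
    var out = [Double]()
    for i in a.indices { out.append(a[i] + b[i]) }
    return out
}

// Stand-in layers: each is just a vector-to-vector function.
let flatten: ([Double]) -> [Double] = { $0 }
let dense1: ([Double]) -> [Double] = { v in v.map { 2 * $0 } }  // toy weights
let dense2: ([Double]) -> [Double] = { v in v.map { $0 + 1 } }  // toy weights

func forward(_ input: [Double]) -> [Double] {
    let tmp = flatten(input)               // save the flatten output
    let tmp2 = dense1(tmp)                 // feed it to the first dense layer
    return dense2(addVectors(tmp, tmp2))   // residual: skip connection adds tmp back in
}
```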

  • Let's see how it does.

  • So we're going to reinstantiate our model

  • and rerun the training loop.

  • And if you recall from the loss that we saw before,

  • this one is now substantially lower.

  • This is great.

  • This is an example of what it's like to use Swift

  • for TensorFlow to develop and iterate as you apply models

  • to applications and challenges.

  • Thank you.

  • [APPLAUSE]

  • But Swift for TensorFlow was designed for researchers.

  • And researchers often need to do more than just change models

  • and change the way the architecture fits together.

  • Researchers often need to define entirely

  • new abstractions or layers.

  • And so let's actually see that live right now.

  • Let's define a new custom layer.

  • So let's say we had the brilliant idea

  • that we wanted to modify the standard dense layer that

  • takes weights and biases, and we

  • wanted to add an additional set of bias parameters, OK?
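
That idea, a dense layer carrying a second set of bias parameters, can be sketched in plain Swift like this (our own illustrative version with made-up names; the real layer would conform to the `Layer` protocol and be differentiable):

```swift
// Custom layer sketch: a dense layer with TWO bias vectors, so
// output[i] = dot(weight[i], x) + bias1[i] + bias2[i].
func dot(_ a: [Double], _ b: [Double]) -> Double {
    var sum = 0.0
    for i in a.indices { sum += a[i] * b[i] }
    return sum
}

struct DoubleBiasDense {
    var weight: [[Double]]   // shape [outputSize][inputSize]
    var bias1: [Double]      // the standard bias
    var bias2: [Double]      // the additional bias parameters

    func callAsFunction(_ x: [Double]) -> [Double] {
        weight.indices.map { i in dot(weight[i], x) + bias1[i] + bias2[i] }
    }
}
```

For example, `DoubleBiasDense(weight: [[1, 1]], bias1: [0.5], bias2: [0.25])([1, 2])` applies the weights and then both biases.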