
  • SUJITH RAVI: Hi, everyone.

  • I'm Sujith.

  • I lead a few machine learning teams in Google AI.

  • We work a lot on--

  • how do you do deep networks and build machine learning systems

  • that scale on the cloud with minimal supervision?

  • We work on language understanding, computer vision,

  • multi-modal applications.

  • But also we do things on the edge.

  • That means, how do you take all these algorithms

  • and fit them to compute- and memory-constrained devices

  • on the edge?

  • And I'm here today with my colleague.

  • DA-CHENG JUAN: Hi, I'm Da-Cheng.

  • And I'm working with Sujith on neural structured learning

  • and all the related topics.

  • SUJITH RAVI: So let's get started.

  • You guys have the honor of being here for the last session

  • of the last day.

  • So kudos and bravo--

  • you're really dedicated.

  • So let us begin.

  • So we are very excited to talk to you

  • about neural structured learning, which

  • is a new framework in TensorFlow that

  • allows you to train neural networks

  • with structured signals.

  • But first, let's go over some basics.

  • If you are in this room and you know about deep learning

  • and you care deeply about deep learning,

  • you know how typical neural networks work.

  • So if you want to take, for example, a neural network

  • and train it to recognize images and distinguish

  • between concepts like cats and dogs, what would you do?

  • You feed images like this on the left side, which

  • looks like a dog, and give the label "dog,"

  • and feed it to the network.

  • And the process by which it works

  • is you adjust weights in the network such

  • that the network learns to distinguish and discriminate

  • between different concepts and correctly tag the image

  • and convert the pixels to a category.
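
The training process just described can be sketched as a single gradient-descent step on a toy one-parameter "network": adjust the weight so the output moves toward the correct label. This is purely illustrative (a real image classifier has millions of weights), and all names here are made up for the sketch.

```python
def train_step(w, x, y, lr=0.1):
    """One gradient-descent step on squared error (w*x - y)^2.

    w: the single model weight, x: the input, y: the correct label.
    Returns the adjusted weight, moved to reduce the loss.
    """
    grad = 2.0 * (w * x - y) * x   # d(loss)/dw
    return w - lr * grad
```

Repeating this step over many (x, y) pairs is what "adjusting the weights so the network learns to discriminate between concepts" means in miniature.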

  • This is all great.

  • All of you in this room probably have built a network.

  • How many of you have actually built a neural network?

  • Great!

  • And this is the last session, last day--

  • but still, we're very happy that you're all with me here.

  • So it's all great.

  • We have a lot of fancy algorithms,

  • very fancy networks.

  • What is the one core ingredient that we

  • need when we build a network?

  • So for almost all of the applications that we work on,

  • we require labeled data, annotated data.

  • So it's not one image that you're feeding to this network.

  • You're actually taking a bunch of images paired

  • with their labels, cats and dogs in this case,

  • but of course it could be whatever,

  • depending on the application.

  • But we feed it thousands or hundreds

  • of thousands or even millions of examples into the network

  • to train a good classifier, right?

  • Today, we're going to introduce neural structured learning,

  • which is a framework.

  • We're happy to say it's supported in TensorFlow 2.0 and Keras.

  • And it allows you to train better and more robust

  • neural networks by leveraging structure in the data.

  • So the core idea behind this framework

  • is that we're going to take neural networks

  • and feed them, in addition to feature

  • inputs, structured signals.

  • So think of the abstract image that I showed you earlier.

  • Now in addition to these images paired with labels,

  • you're going to feed it connections or relationships

  • between the samples themselves.

  • I will get to what these relationships might mean.

  • But you have these structured signals and the labels.

  • And you feed both into the network.
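
The core idea of feeding both labels and structured signals can be sketched as a combined training loss: the usual supervised term plus a graph term that pulls the embeddings of connected samples closer together. This is a minimal illustration of the idea, not the actual NSL API; all function names are invented for the sketch.

```python
def supervised_loss(prediction, label):
    # Squared error as a stand-in for any per-example loss.
    return (prediction - label) ** 2

def neighbor_distance(emb_a, emb_b):
    # Squared L2 distance between two embedding vectors.
    return sum((a - b) ** 2 for a, b in zip(emb_a, emb_b))

def structured_loss(predictions, labels, embeddings, edges, alpha=0.1):
    """Total loss = supervised part + alpha * graph part.

    edges is a list of (i, j) index pairs meaning sample i and
    sample j are connected by a structured signal (e.g. similarity).
    alpha controls how strongly the structure is enforced.
    """
    sup = sum(supervised_loss(p, y) for p, y in zip(predictions, labels))
    graph = sum(neighbor_distance(embeddings[i], embeddings[j])
                for i, j in edges)
    return sup + alpha * graph
```

Minimizing this loss trains the network on the labels while also keeping connected samples close in embedding space, which is how the structured signal influences learning.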

  • You might ask, what do you mean by structure, right?

  • Structure is everywhere.

  • In the example that I showed you earlier,

  • if you look at images--

  • just take a look at this graph here.

  • The images that are connected via edges in this picture

  • here basically represent that there's some visual similarity

  • between these.

  • So it is actually pretty easy to construct structured signals

  • from day-to-day sources of data.

  • So in the case of images here, it's visual similarity.

  • But you could think of--

  • what if you tag your images and created

  • albums that represent some specific concepts?

  • So everything within an album or a photo album

  • has some sort of a connection or interaction or relationship

  • between them.

  • So that represents another type of structure.
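
Constructing such a structured signal from raw data can be as simple as connecting any two samples whose embeddings are sufficiently similar. The sketch below uses cosine similarity with a threshold; the names and the thresholding scheme are illustrative assumptions, not the library's actual graph-building tool.

```python
import math

def cosine_similarity(u, v):
    # Standard cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def build_graph(embeddings, threshold=0.8):
    """Return edges (i, j) for all sample pairs whose embeddings
    have cosine similarity at or above the threshold."""
    edges = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if cosine_similarity(embeddings[i], embeddings[j]) >= threshold:
                edges.append((i, j))
    return edges
```

The resulting edge list is exactly the kind of structured signal (visual similarity, shared album, citation link) that gets fed into training alongside the labels.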

  • It's not just for images.

  • We can go to more advanced or completely different

  • applications, like, if you want to take scientific publications

  • or news articles and you want to tag them with their topic--

  • one simple thing.

  • Take biomedical literature.

  • All the papers that are published,

  • whether it's Nature or any of the conferences,

  • they have references and citations to other papers.

  • That represents another type of structure or link.

  • So these are the kind of structures

  • that we're talking about here, relationships

  • that are exhibited or modeled between different types

  • of objects.

  • In the natural language space, this occurs everywhere.

  • If you're talking about doing Search,

  • everybody has heard of Knowledge Graph, which

  • is a rich source of information, which captures relationships

  • between entities.

  • So if I talk about the concept Paris and France,

  • the relationship is one is the capital of the other.

  • So these sort of relationships--

  • it's not typical to capture and feed them

  • into a neural network.

  • But these are the kind of relationships

  • which are already existing in day-to-day data sources.

  • So why not leverage them?

  • So that is what we try to do with neural structured learning.

  • And the key advantages--

  • before we talk about what it does and how we do it,

  • why do you even want to care about it?

  • What is the benefit?

  • So one of them is, as I mentioned earlier,

  • it allows you to take this structure

  • and use it to train neural networks with less

  • labeled data.

  • And that's the costly process, right?

  • So every application that you want to train,

  • if you had to collect a lot of rich annotated data

  • at scale for millions of examples,

  • it's going to be a tedious task.

  • Instead, if you're able to use a framework,

  • like neural structured learning, that automatically captures

  • the structure and relationships in the data,

  • with minimal supervision you're able to train

  • classifiers or prediction systems with the same accuracy.

  • That would be a huge boon.

  • Who wouldn't want that, right?

  • That's one type of a benefit.

  • Another one is that, typically, when you deploy these systems

  • in practice, in real-world applications,

  • you want the systems or networks to be robust.

  • That means, once you train them, you don't want--

  • if the input distribution changes or the data suddenly changes

  • or somebody corrupts the images with adversarial attacks--

  • the network to suddenly flip its predictions and go bonkers.

  • So this is another benefit where,

  • if you use neural structured learning,

  • you can actually improve the quality of your network

  • and also the robustness of the network.
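
The robustness idea can be sketched as adversarial regularization: perturb the input in the direction that most increases the loss (the sign of the input gradient), then also train the network to behave well on that perturbed input. Shown here on a toy one-feature linear model; the function names are illustrative, not the actual NSL API.

```python
def loss(w, x, y):
    # Squared error of a toy linear model y_hat = w * x.
    return (w * x - y) ** 2

def loss_grad_x(w, x, y):
    # Gradient of the loss with respect to the *input* x.
    return 2.0 * (w * x - y) * w

def adversarial_input(w, x, y, epsilon=0.1):
    """FGSM-style perturbation: take a step of size epsilon along
    the sign of the input gradient, i.e. the direction that most
    increases the loss for a small change in x."""
    g = loss_grad_x(w, x, y)
    sign = (g > 0) - (g < 0)
    return x + epsilon * sign
```

Training on both the original input and its perturbed version encourages the network to keep its prediction stable under small corruptions, which is the robustness benefit described above.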

  • So let me dive a little deeper and give

  • you a little more insight into the first scenario.

  • Take document classification as an example.

  • So I'll give you an example that you probably have at your home.

  • Imagine you have a catalog or a library of books in your home.

  • And these are digitized content.

  • And you want to categorize them and neatly arrange them

  • into specific topics or categories.

  • Now one person might want to categorize them

  • based on the genre.

  • A different person might say, oh,

  • I want it to belong to the same period.

  • A third person might say, oh, I want

  • to capture the books that have the same kind of content