
  • [MUSIC PLAYING]

  • ARJUN GOPALAN: Hi.

  • I'm Arjun Gopalan, and welcome to episode 2

  • of Neural Structured Learning.

  • In the previous episode, you learned

  • about what Neural Structured Learning is

  • and how this new learning paradigm can be used to improve

  • model accuracy and robustness.

  • In this episode, we'll discuss how Neural Structured Learning

  • can be used to train neural networks with natural graphs.

  • Before we begin, let's define what a natural graph is.

  • Essentially, it's a set of data points

  • that have an inherent relationship with each other.

  • The nature of this relationship can vary based on the context.

  • Social networks and the World Wide Web

  • are classic examples that we interact with on a daily basis.

  • Beyond these examples, natural graphs also occur

  • in data that is commonly used for many machine learning

  • tasks.

  • For instance, if we are trying to capture user behavior based

  • on their interactions with data, it

  • might make sense to model the data as a co-occurrence graph.

  • Alternatively, if we are working with articles or documents that

  • contain references or citations to other documents or articles,

  • then we can model the data set as a citation graph.

  • Finally, for natural language applications,

  • we can define a text graph where nodes represent

  • entities and edges represent relationships

  • between pairs of entities.
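As a concrete illustration of these definitions, a small citation graph can be represented as a plain adjacency list. The paper IDs and edges below are made up for illustration; this is just one simple in-memory representation, not how the library stores graphs.

```python
# A tiny, made-up citation graph: each key is a paper ID and each
# value lists the papers it cites. For neighbor lookups we treat a
# citation as an undirected relationship.
citations = {
    "paper_a": ["paper_b", "paper_c"],
    "paper_b": ["paper_c"],
    "paper_c": [],
}

def neighbors(graph, node):
    """Return every paper connected to `node` by a citation, in or out."""
    out_edges = set(graph.get(node, []))
    in_edges = {src for src, dsts in graph.items() if node in dsts}
    return sorted(out_edges | in_edges)

print(neighbors(citations, "paper_c"))  # paper_c is cited by paper_a and paper_b
```

The same structure works for co-occurrence graphs or text graphs; only the meaning of an edge changes.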

  • Now that we understand what natural graphs are,

  • let's look at how we can use them to train a neural network.

  • Consider the task of document classification.

  • This is a problem that frequently occurs

  • in a multitude of contexts.

  • As an example, machine learning practitioners

  • might be interested in categorizing machine learning

  • papers based on a specific topic such as computer vision

  • or natural language processing or even reinforcement learning.

  • And often, we have a lot of these documents or papers

  • to classify, but very few of them have labels.

  • So how can we use Neural Structured Learning

  • to accurately classify them?

  • The key idea is to use the citation information,

  • which is what makes this data set a natural graph.

  • What this means is that, if one paper cites another paper,

  • then both papers likely share the same label.

  • Using such relational information

  • from the citation graph leverages both labeled as well

  • as unlabeled examples.

  • This can help compensate for the insufficiency

  • of labels in the training data.
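To make the "cited papers likely share a label" intuition concrete, here is a sketch of plain label propagation over a tiny made-up graph. This is not what Neural Structured Learning does internally (NSL adds a regularization term to the training loss instead), but it shows how graph edges let unlabeled examples benefit from labeled ones.

```python
from collections import Counter

def propagate_labels(labels, graph, rounds=2):
    """Repeatedly assign each unlabeled node the most common label
    among its already-labeled neighbors. A simple illustration of
    exploiting relational information, not NSL's actual mechanism."""
    labels = dict(labels)  # don't mutate the caller's dict
    for _ in range(rounds):
        for node, nbrs in graph.items():
            if labels.get(node) is None:
                nbr_labels = [labels[n] for n in nbrs if labels.get(n) is not None]
                if nbr_labels:
                    labels[node] = Counter(nbr_labels).most_common(1)[0][0]
    return labels

# One labeled paper and two unlabeled papers that (transitively) cite it.
labels = {"paper_a": "vision", "paper_b": None, "paper_c": None}
cites = {"paper_b": ["paper_a"], "paper_c": ["paper_b"]}
result = propagate_labels(labels, cites)
```

After propagation, both unlabeled papers inherit the "vision" label through the citation edges.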

  • You might be wondering, well, all this sounds great,

  • but what does it take to actually build

  • a Neural Structured Learning model for this task?

  • Let's look at a concrete example.

  • Since we are dealing with natural graphs here,

  • we expect the graph to already exist in the input.

  • The first step then is to augment the training data

  • to include graph neighbors.

  • This involves combining the input citation

  • graph and the features of the documents

  • to produce an augmented training data set.

  • The pack neighbors API (pack_nbrs) in Neural Structured Learning

  • handles this.

  • And notice that it allows you to specify the number of neighbors

  • used for augmentation.

  • In this example, we use up to three neighbors.
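In the library itself, this augmentation step is handled by nsl.tools.pack_nbrs, which operates on TF Example files. As a rough stdlib sketch of the idea, with in-memory dicts and hypothetical names, augmenting each example with the features of up to max_nbrs neighbors might look like this:

```python
def pack_neighbors(examples, graph, max_nbrs=3):
    """Augment each example with features copied from up to `max_nbrs`
    of its graph neighbors. A simplified sketch of the pack-neighbors
    idea; the real tool works on serialized TF Example files."""
    augmented = []
    for ex_id, features in examples.items():
        packed = dict(features)  # start with the example's own features
        for i, nbr_id in enumerate(graph.get(ex_id, [])[:max_nbrs]):
            # Neighbor features go under prefixed keys so the trainer can
            # tell them apart from the example's own features.
            for name, value in examples[nbr_id].items():
                packed[f"NL_nbr_{i}_{name}"] = value
        augmented.append(packed)
    return augmented

examples = {
    "paper_a": {"words": [1, 0, 1], "label": 0},
    "paper_b": {"words": [0, 1, 1], "label": 0},
}
graph = {"paper_a": ["paper_b"]}
packed = pack_neighbors(examples, graph, max_nbrs=3)
```

Here paper_a gains its neighbor's features under prefixed keys, while paper_b, which has no neighbors in the graph, is left unchanged.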

  • The next step is to define a base model.

  • In this example, we've used Keras for illustration,

  • but Neural Structured Learning also

  • supports the use of estimators.

  • The base model can be any type of Keras model,

  • whether it's a sequential model, a functional API-based model,

  • or a subclass model.

  • It can also have an arbitrary architecture.

  • Once we have a base model, we define a graph regularization

  • configuration object, which allows you to specify

  • various hyperparameters.

  • In this example, we use three neighbors

  • for graph regularization.

  • Once this configuration object is created,

  • you can wrap the base model with the graph regularization

  • wrapper class.

  • This will create a new graph Keras

  • model whose training loss includes a graph regularization

  • term.
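Conceptually, that extra term penalizes the model when its outputs for an example and for that example's graph neighbors disagree. A minimal numeric sketch, using squared distance and a hypothetical multiplier weight (the actual distance metric and weight are configurable in the library):

```python
def graph_regularized_loss(supervised_loss, example_output,
                           neighbor_outputs, multiplier=0.1):
    """Total loss = supervised loss + multiplier * average squared
    distance between the example's output and each neighbor's output.
    A conceptual sketch of a graph regularization term."""
    if not neighbor_outputs:
        return supervised_loss  # no neighbors: plain supervised loss
    reg = 0.0
    for nbr in neighbor_outputs:
        reg += sum((a - b) ** 2 for a, b in zip(example_output, nbr))
    return supervised_loss + multiplier * reg / len(neighbor_outputs)

loss = graph_regularized_loss(
    supervised_loss=0.5,
    example_output=[0.9, 0.1],      # model output for the example
    neighbor_outputs=[[0.7, 0.3]],  # model output for its one neighbor
    multiplier=0.1,
)
```

Because the term is just added to the training loss, any model that can be trained with a loss function can be graph-regularized this way, which is why the wrapper works with arbitrary base architectures.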

  • You can then compile, train, and evaluate the graph Keras model,

  • just as you would with any other Keras model.

  • As you can see, creating a graph Keras model is really simple.

  • It requires just a few extra lines of code.

  • A Colab-based tutorial that demonstrates

  • document classification also exists on our website.

  • You can find that in the description below.

  • Feel free to check it out and experiment with it.

  • In summary, we looked at how we can use natural graphs

  • for document classification using Neural Structured

  • Learning.

  • The same technique can also be applied to other machine

  • learning tasks.

  • In the next video, we'll see how we

  • can apply the technique of graph regularization

  • when the input data does not form a natural graph.

  • That's it for this video.

  • There's more information in the description below.

  • And before we get to the next video,

  • don't forget to hit the Subscribe button.

  • Thank you.

  • [MUSIC PLAYING]



Neural Structured Learning - Part 2: Training with natural graphs (published January 14, 2021)