
  • SANDEEP GUPTA: Thank you, Laurence.

  • So Laurence gave us a very nice introduction

  • to what is machine learning and a glimpse

  • of some of the applications that are

  • possible with machine learning.

  • So at Google, we saw the enormous potential

  • that machine learning was beginning

  • to have all around us, and that led us

  • to releasing TensorFlow as an open source platform

  • back in 2015, and the objective of doing

  • that was to give everyone access to this open source library,

  • to develop machine learning solutions,

  • and thereby, our aim was to accelerate

  • the progress and pace of the development of machine

  • learning.

  • So since that time, TensorFlow has

  • become enormously successful.

  • In about these three years, it has

  • grown to become the number 1 repository on GitHub

  • for machine learning, and we are very

  • proud to have been named as the most loved software library

  • or framework in the 2018 Stack Overflow Developer Survey.

  • So all of this success, actually, in large part

  • is due to the very enormous group of users and developers

  • that we have out there who are using TensorFlow and building

  • a lot of these interesting applications.

  • We are blown away by these download numbers.

  • It's been used more than 17.5 million times.

  • The number that we are very proud of

  • is the more than 1,600 contributors,

  • and the vast majority of them being non-Google contributors.

  • And you see very active engagement by the TensorFlow

  • Engineering team with the community in answering issues

  • and fielding pull requests and so on.

  • So it's a very thriving--

  • an active ecosystem that TensorFlow

  • has generated around it.

  • If you look at where our users come from,

  • they come from all parts of the globe.

  • So these are self-identified locations from our users

  • on GitHub who have starred TensorFlow,

  • and you'll see that they come from every time zone

  • on the Earth--

  • down in the south, right up to Antarctica,

  • and right up in the Arctic Circle in Norway.

  • And I think I see a dot there in Ireland as well, which

  • might be Laurence, yeah.

  • So it's used everywhere for a wide variety of things.

  • So let me switch gears and talk a little bit

  • about the architecture of TensorFlow-- what

  • it is, what it lets you do.

  • So I'll talk briefly about the APIs that it has,

  • and then describe the platforms and the languages

  • that it supports, and then take a quick look

  • at some of the tooling that really makes it

  • a very useful platform for doing machine

  • learning and lets you do a very versatile set of things.

  • So to look at the API for TensorFlow,

  • fundamentally it's a computation execution engine.

  • And by that, what we mean is that you code your machine

  • learning algorithm-- or your steps,

  • as Laurence was showing in that example--

  • and then TensorFlow automates all the mechanics

  • of the training process and of running it

  • on your device of interest, and lets you build that model

  • and then use it in practical applications.

  • So you do this by using these higher-level APIs

  • as an easy way to get started, and we

  • have two paths of doing this.

  • Some of you might be familiar with Keras

  • as a library for developing machine learning models,

  • and TensorFlow has full tight integration with Keras,

  • and in fact, Keras is the preferred high-level way

  • of building machine learning models

  • using these LEGO bricks, which let you piece together models

  • one layer at a time and build pretty complex architectures.

  • So you can use the Keras library.

  • In addition, we also package some of these very commonly

  • used models as what we call estimators,

  • and these packaged models are battle tested and hardened,

  • and they let you do your job quickly

  • based on architectures that have already proven to be valuable.

  • And there's also a lot of flexibility

  • in customizing things and changing things

  • and wiring them anyway you need to for your application.

  • So these APIs for model building are fed by data,

  • and part of what TensorFlow offers

  • is very flexible libraries for building these data pipelines--

  • for bringing in data into your models,

  • and then doing some of the feature transformation

  • or pre-processing to prepare the data so

  • that it's ready to be ingested by a machine learning model.
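
As a concrete sketch of such an input pipeline, here is what the ingest, pre-process, and batch steps look like with the tf.data API (the toy data and the scaling step are illustrative, not from the talk):

```python
import tensorflow as tf

# Toy in-memory data standing in for a real data source.
features = [[2.0], [4.0], [6.0], [8.0]]
labels = [0, 0, 1, 1]

# Ingest, pre-process (simple feature scaling), and batch.
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .map(lambda x, y: (x / 8.0, y))
    .batch(2)
)

# Each element is now a (features, labels) batch ready for a model.
for batch_x, batch_y in dataset:
    print(batch_x.shape, batch_y.shape)
```

The same pipeline scales from in-memory lists to sharded files on disk without changing the downstream model code.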

  • And then once you have all this set up,

  • then we have this distribution layer,

  • which basically deals with abstracting away your model

  • and distributing your training job to run

  • on a CPU or single GPU or multiple GPUs,

  • or even on custom architectures such as TPUs,

  • which we'll talk a little bit more about as we go forward.
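
In today's TensorFlow that layer surfaces as the tf.distribute API; a minimal sketch, assuming a MirroredStrategy (which replicates across local devices and falls back to the CPU when no GPU is present):

```python
import tensorflow as tf

# One strategy object abstracts away how many devices you have.
strategy = tf.distribute.MirroredStrategy()
print("replicas:", strategy.num_replicas_in_sync)

# Anything created inside the scope is mirrored across replicas,
# so the same model-building code runs on 1 CPU or 8 GPUs unchanged.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")
```

Swapping in a TPU- or multi-worker strategy changes only the strategy line, which is the point of the abstraction.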

  • So having trained your model, then, you

  • save your model as an object.

  • And this object basically captures

  • the architecture of the model, as well as

  • the weights and the tuning of the knobs

  • that you did during your training phase,

  • and now it's ready for use in your application.
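
That saved object can be produced with the SavedModel API; a minimal sketch, with a toy module standing in for a real trained model (the names here are illustrative):

```python
import os
import tempfile
import tensorflow as tf

# A tiny stand-in for a trained model: it captures both the
# computation (the tf.function) and the learned state (the variable).
class Scaler(tf.Module):
    def __init__(self):
        super().__init__()
        self.weight = tf.Variable(2.0)

    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return self.weight * x

export_dir = os.path.join(tempfile.mkdtemp(), "scaler")
tf.saved_model.save(Scaler(), export_dir)

# The directory now holds the graph plus the weights, ready for
# TensorFlow Serving, TensorFlow Lite conversion, or TensorFlow.js.
restored = tf.saved_model.load(export_dir)
print(restored(tf.constant([1.0, 2.0])))
```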

  • And that's where you, again, have

  • a very flexible choice of how to use that trained model.

  • You can use a library such as tf.serving to manage

  • the whole process of serving.

  • You can deploy it on mobile devices, or, using TensorFlow.js,

  • you can deploy it in the browser.

  • And we'll talk more about all of these later today,

  • as well as the dedicated talks on each

  • of these components, where you can learn more about them.

  • So on the platform side, as I was saying earlier,

  • TensorFlow lets you run your machine learning models

  • on CPUs, GPUs, as well as on custom

  • hardware such as TPUs, as well as on mobile devices,

  • be it the Android or iOS framework, and then

  • going forward on lots of these embedded IoT-type devices.

  • So talking a little bit more about the platforms,

  • one platform that we're particularly excited about

  • is Cloud TPUs.

  • So Cloud TPUs were announced by Google actually last year,

  • and then this version 2 of the Cloud TPUs

  • was made generally available earlier this year.

  • And these are specially-designed pieces

  • of hardware, built from the ground up, which

  • are really, really optimized for machine

  • learning type of workloads.

  • So they're blazingly fast for that.

  • Some of the specs are they're extremely high performance

  • in terms of compute, as well as a large amount

  • of very high bandwidth memory, which

  • lets you parallelize your training job

  • and take full advantage of this kind of an architecture.

  • And the nice thing is that TensorFlow is the programming

  • environment for Cloud TPUs, so TensorFlow is very tightly

  • coupled with being able to do this seamlessly and easily.

  • So let's see what's possible with these types of devices.

  • Here I'm showing you some numbers

  • from training, what's called the ResNet-50 model.

  • So ResNet-50 is one of the most commonly used models

  • for image classification.

  • It's a very complex deep learning

  • model which has 50 layers--

  • hence the name ResNet-50.

  • And it has more than 20 million tunable parameters,

  • which you have to optimize during the course of training.

  • So this turns out to be a very commonly used benchmark

  • to look at performance of machine

  • learning tools and models to compare how well the system is

  • optimized.

  • So we train the ResNet-50 model on a public data set

  • called ImageNet, which is a data set of tens of millions

  • of images that are labeled for object recognition

  • type of tasks.

  • And this model could be trained on Cloud TPUs for a total cost

  • of less than $40 with an extremely high image throughput

  • rate, which is mentioned there.

  • And that lets you take the entire ImageNet data set

  • and train your model, in a matter

  • of tens of minutes, what used to take hours, if not days,

  • a few months or years ago.

  • So really exciting to see the pace of this development

  • to do machine learning at scale.

  • On the other end of the spectrum,

  • we see enormous growth in the capabilities

  • of these small minicomputer devices

  • that we carry around in our pockets.

  • Smartphones, smart connected devices, IoT devices--

  • these are exploding.

  • And I think by some counts, their estimates

  • are that there will be about 30 billion such devices

  • within the next five years.

  • And there are a lot of machine learning applications

  • that are possible on these types of devices.

  • So we have made it a priority to make sure that TensorFlow

  • runs-- and runs well--

  • on these types of devices by releasing

  • a library called TensorFlow Lite.

  • So TensorFlow Lite is a lightweight version

  • of TensorFlow, and this is how the workflow works.

  • You take a TensorFlow model, and you train it offline--

  • let's say on a workstation or in distributed computing.

  • And you create your saved model.

  • That saved model then goes through a converter process

  • where we convert it into a model that's specifically

  • optimized for mobile devices.

  • We call it the TensorFlow Lite format.

  • So this TensorFlow Lite model can now

  • be installed on a mobile device where we also

  • have a runtime for TensorFlow, which is a TensorFlow Lite

  • interpreter.

  • So it takes this model, and it runs it.

  • It binds to the local hardware acceleration-- custom hardware

  • acceleration-- on that device-- for example, on Android.

  • You have the NNAPI-- the Android Neural Networks API--

  • which takes advantage of whatever might be the hardware

  • configuration, and this gives you a model that's lightweight.

  • It uses less memory, consumes less power,

  • and is fast on mobile devices.
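
The converter step in that workflow can be sketched with the tf.lite API (the toy Keras model here is illustrative):

```python
import tensorflow as tf

# A toy trained Keras model standing in for the real one.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Convert to the TensorFlow Lite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# These bytes are what you ship to the device, where the
# TensorFlow Lite interpreter loads and runs them.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
print(len(tflite_bytes), "bytes")
```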

  • So here's an example of how this might look in practice.

  • What you're seeing here is an image classification example

  • running on a mobile phone, and we are holding common office

  • objects in front of it, and it's classifying them in real time--

  • scissors and Post-its, and obviously, a TensorFlow logo.

  • So it's very easy to build these types of models

  • and get these applications up-and-running very quickly,

  • and there are examples and tutorials

  • on our YouTube channel to show you how to do this.

  • So in addition to the flexibility on the platform

  • side, TensorFlow also gives you a lot of flexibility

  • in the programming languages that you can use to call it.

  • So we've always had a large collection of languages

  • that have been supported.

  • Python continues to be the mainstay of machine learning,

  • and a lot of work is being done in that area.

  • But you can see many more languages,

  • and most of these, actually, have been developed

  • through community support.

  • Two languages we are particularly excited about,

  • which we launched earlier this year--

  • one is support for Swift.

  • So Swift gives some unique advantages

  • by combining the benefits of a very intuitive, imperative

  • programming style with the benefits of a compiled language

  • so you get all the performance optimizations

  • that graphs typically bring you, and so you

  • can have the best of both worlds.

  • Another language that's extremely exciting

  • is bringing machine learning to JavaScript.

  • There's a huge JavaScript and web developer community

  • out there, and we believe that TensorFlow.js, which

  • is the JavaScript version of TensorFlow,

  • lets JavaScript developers easily

  • jump into machine learning and develop models

  • in the browser using JavaScript, or run it

  • with a node server backend.

  • So we're beginning to see some really cool applications

  • of this, and I'll show you two examples of that here.

  • So this is a tool which you can try out yourself on that site

  • up there.

  • It's called TensorFlow Playground,

  • and this lets you interact with a machine

  • learning model-- a neural network-- very interactively

  • in the browser.

  • So you can define a simple neural network model.

  • You can control how many layers it has,

  • how many nodes it has, and then you

  • can see the effect of changing parameters

  • like learning rates, et cetera, and look

  • at how individual neurons are behaving

  • and what's happening to your model convergence.

  • And these are the types of interactive visualizations

  • that are really possible when you

  • have a browser as a platform for machine learning.
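
The learning-rate behavior the Playground lets you watch can be reproduced in a few lines of plain Python: gradient descent on a one-parameter loss settles for a small step size and blows up for a large one (the loss function and the two rates are illustrative):

```python
def gradient_descent(lr, steps=100, w=0.0):
    """Minimize f(w) = (w - 3)^2 by following its gradient, 2 * (w - 3)."""
    for _ in range(steps):
        w -= lr * 2 * (w - 3)
    return w

print(gradient_descent(0.1))  # settles near the optimum, w = 3
print(gradient_descent(1.1))  # overshoots further every step and diverges
```

Each step scales the distance to the optimum by (1 - 2 * lr), so convergence requires that factor to be smaller than 1 in magnitude.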

  • On the TensorFlow.js website, you

  • can see a lot more really fun examples.

  • My favorite one is the one which lets you drive a "Pac-Man"

  • game by using hand gestures.

  • This one is an example of the PoseNet model, where a machine

  • learning model can be fed webcam data

  • to identify human pose, and then this data

  • can be used to drive or control actions.

  • So again, all the source code for this model is available,

  • and the model is also available in pre-trained

  • form that you can use.

  • So just finally, I want to talk a little bit

  • about some of the tooling that makes TensorFlow

  • very flexible and easy to use for a variety of applications.

  • One example of such a tool is TensorBoard.

  • So TensorBoard is the visualization suite

  • that comes with TensorFlow, and by using TensorBoard

  • you can dive into the complexities of your model,

  • and you can look at the architecture of the model.

  • You can see how the graph is laid out.

  • You can break down your data.

  • You can see how the data is flowing through the network,

  • look at some convergence metrics and accuracy and so on.

  • It's a very powerful way of working with your machine

  • learning models.
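
Wiring a Keras training run into TensorBoard takes one callback; a sketch (the toy model, random data, and log directory are illustrative):

```python
import os
import tempfile
import numpy as np
import tensorflow as tf

log_dir = tempfile.mkdtemp()

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

# The callback writes metric and graph summaries that
# `tensorboard --logdir <log_dir>` then visualizes.
x = np.random.rand(16, 4).astype("float32")
y = np.random.rand(16, 1).astype("float32")
model.fit(x, y, epochs=2, verbose=0,
          callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir)])
```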

  • What you're seeing here is a visualization

  • of the MNIST data set, which is the handwritten digit

  • recognition data set.

  • And you can see how your data set clusters,

  • and whether you have a good distribution

  • of different types of labels in your data,

  • and how they are separated.

  • So we talked a lot about machine learning model development.

  • It turns out that in order to put machine

  • learning into practice and really use it in a production

  • setting, it takes a lot more than just the model itself.

  • So you have to deal with a lot of things related

  • to bringing the data in, and then dealing

  • with model serving, model management,

  • and evaluating the performance of models

  • and keeping them refreshed over time,

  • and the lifecycle management of models.

  • So for this, we recently open sourced

  • an extension to TensorFlow called TensorFlow Extended,

  • which really deals with this process of end-to-end machine

  • learning and gives you a lot of tools

  • that you can use to deal with these system-level aspects

  • of end-to-end machine learning.

  • And my colleague [? Clemens ?] has a talk later this afternoon

  • where he dives into more details about what all

  • you can do with this platform.

  • So with this brief introduction to TensorFlow,

  • let me turn it back to Laurence, who

  • will show how we are making it easier to use and get started.

  • LAURENCE MORONEY: Thanks, Sandeep.

  • My mic still working?

  • There we go.

  • So Sandeep showed TensorFlow and some of the things

  • that we've been able to do in TensorFlow

  • and what it's all about.

  • One of the themes that we've had for this year in TensorFlow

  • is to try and make it easier for developers.

  • So what I want to share is four of the things

  • that we've actually worked on.

  • The first of these is Keras.

  • Sandeep mentioned, if you're familiar with Keras,

  • it was an open source community framework

  • that made it easy for you to design and train neural nets.

  • Francois, the inventor of Keras, now actually works for us

  • at Google, and as a result, we've

  • incorporated Keras into TensorFlow.

  • And to be able to use Keras in TensorFlow,

  • it's as easy as from tensorflow import keras.

  • The second thing that we've been working on

  • is something called Eager Execution.

  • And so I came to TensorFlow as a coder, not as an AI scientist

  • or as a data scientist, and the first thing

  • that I found very difficult to understand

  • was that, for a lot of the things that you do in TensorFlow,

  • you write all your code up front,

  • you load it into a graph, and then you execute it.

  • And it was hard for me, as a developer, to understand,

  • because I don't know about you, I'm not a very good developer.

  • I need to write two or three lines of code,

  • step through them, make sure they work.

  • Write another two or three lines of code, step through them,

  • make sure that they work.

  • And one of the things that we've added

  • is what we've called Eager Execution to make

  • that possible.

  • So instead of loading everything into a graph

  • and then executing on a graph, you

  • can actually do step-by-step execution now

  • with Eager in your TensorFlow application.

  • So you can debug your data pipelines.

  • You can debug your training and all that kind of stuff.

  • I love it.

  • It makes it a lot easier.

  • The third one is something that we've released

  • called TensorFlow Hub.

  • And the idea behind TensorFlow Hub

  • is that it's a repository of pre-trained models

  • that you can then just incorporate directly

  • in your app with one or two lines of code.

  • And not only can you incorporate the entire model,

  • but through transfer learning,

  • you can actually just pull individual layers

  • or individual attributes of a model and use them in your code

  • as well.

  • The idea behind that is really to get you up-and-running,

  • get you started quickly in common scenarios such as image

  • classification, text classification,

  • that type of thing.

  • And then the final one is--

  • really it's been designed to help researchers

  • and to make it a lot easier for researchers,

  • and it's called Tensor2Tensor.

  • And the idea, again, behind Tensor2Tensor

  • is to give you pre-trained models

  • and pre-trained scenarios that you can use, you can reuse,

  • and you can start adapting really quickly.

  • But let me start looking at a couple of these.

  • So first of all is Keras.

  • Now, the idea is Keras-- it's a high-level neural network API.

  • It was built in Python.

  • It's designed to make it easy for you,

  • as I've mentioned, to build a neural network.

  • And one of the things that it does

  • is it includes the ability to access

  • various different namespaces--

  • sorry, it allows you to access various different--

  • sorry, my slide animations are broken--

  • access different data sets.

  • So for example, one data set that's built into it

  • is called Fashion-MNIST.

  • The graph that Sandeep showed earlier for handwriting

  • recognition is MNIST.

  • This similar one, called Fashion-MNIST,

  • allows you to do a classification

  • on items of clothing.

  • And in the next session after this one,

  • I'm actually going to show all the code for that--

  • that you can build a classifier using

  • Keras in TensorFlow to classify items of clothing

  • in about 12 lines of code.

  • So I'm going to show that a little bit later on if you

  • want to stick around for that.

  • But very, very easy for you to use, as you can see here.

  • This is in Keras.

  • If I wanted to use something like Fashion-MNIST,

  • I could just incorporate the data set.

  • And if I want to build a neural network--

  • this neural network is actually what will classify the clothing

  • that I mentioned earlier on, so these three lines of code--

  • these three layers--

  • are what you'll use to define a neural network that

  • can be fed an item of clothing and it

  • will tell you what it is.

  • So Keras-- designed to make all of that simple.
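
Those layers are presumably the standard three-layer Fashion-MNIST starter network; a sketch with the usual tutorial defaults (the layer sizes and optimizer are assumptions, not quoted from the talk):

```python
import tensorflow as tf

# Fashion-MNIST ships with Keras: 28x28 grayscale images in ten
# clothing categories, loadable via
# tf.keras.datasets.fashion_mnist.load_data().

# Three layers: flatten the image, one hidden layer, ten classes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A forward pass on one image yields a 10-way probability vector.
probs = model(tf.zeros([1, 28, 28]))
print(probs.shape)
```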

  • Eager Execution-- very similar thing.

  • The idea is it's an imperative programming environment.

  • It evaluates operations immediately.

  • You don't build graphs, and your operations

  • will return concrete values so you

  • can debug them straight away.

  • And to turn it on in TensorFlow, all you have to do is say

  • tf.enable_eager_execution().

  • And if you use, for example, an IDE like PyCharm,

  • that then enables you to do step-by-step debugging

  • through the Python that you're using to build your model

  • and test your model.
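
In practice that looks like this: each operation returns a concrete value you can print and check immediately (in TensorFlow 1.x you opt in with tf.enable_eager_execution(); in 2.x eager is the default, which this sketch assumes):

```python
import tensorflow as tf

# No separate graph-build/run phases: ops execute as they are called,
# so intermediate results are inspectable like ordinary Python values.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)

print(y.numpy())  # a plain NumPy array, available immediately
```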

  • So TensorFlow Hub-- the idea is it's

  • a library of model components that we've

  • designed to make it easy for you to integrate

  • these pre-trained models into your application,

  • or even pull individual layers through transfer learning

  • into your application, and it's just a single line of code

  • to integrate them.

  • And here's a sample piece of code here for it,

  • where the idea is that I'm now--

  • in the first three lines there, I'm

  • calling hub.Module to bring in this NASNet model,

  • and then with those bottom three lines of code

  • I can just pull things out of that.

  • So for example, features = module(my_images)

  • is going to give me the set of features in that model.

  • So I can inspect them, I can adapt them,

  • and I can maybe build my own models using them.

  • And other things, like the logits and the probabilities,

  • I can actually pull out of that just using method calls.

  • So if you've ever done some simple TensorFlow programming,

  • you might have seen-- for example, on GitHub,

  • there were models, such as ones trained on ImageNet,

  • that you could download and start using.

  • This brings that to the next level.

  • Instead of you downloading a model,

  • putting it somewhere, writing code to read that in,

  • the idea is that there's an API for you to pull it out,

  • and also to inspect the properties of that model

  • and use them yourself.

  • And then finally, Tensor2Tensor-- the idea

  • is that this is similar in that it's a library of deep learning

  • models and data sets that we've designed,

  • really for deep learning research,

  • to make it more accessible.

  • So for example, if I want the latest adaptive learning rate method--

  • I think that's what we're showing here--

  • then the idea is I can actually pull that

  • in using Tensor2Tensor, and in this case,

  • the code is actually-- this is for translation.

  • So I can pull in the latest research on translation.

  • I believe this one is English to German.

  • And I can use that in my own models,

  • or I could build my own models wrapping them.

  • So again, really, the whole idea is to take existing models

  • and to make building your own machine learned applications

  • using those models a lot simpler.

  • So those are four of the advances

  • that we've been working on in this year.

  • The whole idea, like we said-- the theme

  • is to make machine learning easier.

  • First is Keras for building neural networks.

  • Make it very simple.

  • The second is Eager Execution to make life

  • easier for programmers so you've got imperative execution.

  • The third, then, is pre-trained finished models and providing

  • a repository for them so that you can just

  • pull them out using a single line of code

  • and not go through all that friction.

  • And then finally, and more cutting-edge stuff

  • is Tensor2Tensor, where some of the latest research

  • is available, and code that's been written by our researchers

  • is available for you to be able to incorporate their models

  • and their work into your own.

  • So with that, the most important thing for us, of course,

  • is community and to be able to extend TensorFlow

  • through a community, as Sandeep shared.

  • And Edd, who's our specialist in community,

  • will share that with you.

  • Thank you.

  • [APPLAUSE]

  • EDD WILDER-JAMES: Thanks, Laurence.

  • Hey, everybody.

  • My name's Edd Wilder-James, and as Laurence said,

  • I work on growing open source collaboration

  • around TensorFlow.

  • The most important thing is that we

  • want machine learning to be in as many hands as possible.

  • So firstly, thank you for being here.

  • And that's one reason that we've convened these two

  • days of this conference, where you can come and learn

  • about TensorFlow not from people five steps removed

  • from the project, but from the actual developers and product

  • managers who are working on all the features.

  • So please stick with us for this track,

  • because you'll be getting it absolutely from the source.

  • Talking about source-- so TensorFlow does

  • have a massive worldwide community,

  • and we'd love you to be involved in it.

  • There's two places you really need to know about.

  • The first of these is tensorflow.org/community.

  • Anything we talk about today, where it's just

  • a resource or a user group or a mailing list,

  • you can find from there.

  • So if you're a user of TensorFlow

  • and haven't joined up, maybe, to the discuss mailing list,

  • or you don't use Stack Overflow to get your answers,

  • or you don't use GitHub Issues to file when you've

  • got a really gnarly problem, please head over and get

  • familiar with those resources.

  • One of the other unique things about these resources

  • is who's on the end of the issue, who's

  • on the end of the Stack Overflow question.

  • It's actually a member of the TensorFlow team

  • at Google Brain, a lot of the time, who's

  • going to answer your question.

  • We really believe that we should be connected directly to users

  • and be learning from use cases and helping out,

  • so we actually have engineers who basically

  • do a rotation taking turns to answer Stack Overflow questions

  • and address issues.

  • It's really getting right to the heart

  • of the team when you file this.

  • And one of the unique characteristics

  • about TensorFlow as an open source project

  • is you are using the exact same code we use inside Google when

  • you're using TensorFlow.

  • There's no two versions.

  • It's the same code.

  • So you're reaching the same folks,

  • and your issues get the same eyes on them

  • as those of any other person inside Google.

  • The other thing we really want to do

  • is encourage and grow more contributors.

  • So if you're a developer and want

  • to become involved contributing to TensorFlow,

  • there's a developer's mailing list, again,

  • you can find up there.

  • And on that mailing list, we do a lot of things

  • like coordinating releases.

  • We also, now, starting in the middle of this year,

  • publish requests for comments around new designs.

  • As you'll probably hear later, we're

  • in the middle of figuring out what TensorFlow 2.0 is going

  • to look like, and a big part of that for us

  • is public consultation on design.

  • So if you're a heavy user or developer,

  • please play a part in that review.

  • You can join that, again, through the community page.

  • One other great resource-- this is on the other side,

  • if you're new-- is Colabs.

  • Has anyone here used a Colab at all?

  • OK, a few people.

  • So this is a really exciting thing

  • that we brought online this year.

  • Basically, you can use compute resources free, on us,

  • in Colabs where TensorFlow is all set up and ready to use.

  • And one of the easiest ways of getting into this

  • is if you're on the tensorflow.org website

  • and you see code examples, you see the Open in Colab button.

  • And you hit that, boom, you're in a Jupyter notebook running

  • TensorFlow, and you get access--

  • for a limited time, obviously-- to GPUs that we provide.

  • So it really is an amazing way that you don't have

  • to worry about the resource.

  • You don't have to worry about installing.

  • You just don't even have to worry about writing the code.

  • You can actually take an example and then start tweaking it

  • straight online, and it's one of the best learning

  • tools for playing around with TensorFlow.

  • And the other fun attributes of this

  • are you can save your work to your Google Drive account

  • so it's not lost, and you can also

  • start up a Colab from any GitHub repository.

  • So it's a pretty powerful tool.

  • OK, a few other things that are good places to hang out.

  • We have a YouTube channel.

  • And if you enjoyed Laurence talking,

  • you can just literally have hours of Laurence

  • talking and talking about TensorFlow.

  • But one of the fun things he does there, as well, is

  • gets out into the user base and to the developers

  • and interviews people who are using TensorFlow as well.

  • So if you have a useful and interesting application

  • in TensorFlow, come and find us.

  • We'd love to talk to you.

  • We'd love to profile you, because these things aren't

  • just for us to broadcast out.

  • We want to celebrate the community.

  • Likewise, we have a blog, which you

  • can find on Medium or blog.tensorflow.org.

  • And as well as publishing, I think,

  • what is a really great resource for technical articles

  • and important information about TensorFlow there,

  • we're, again, incorporating content from the broader

  • developer community.

  • So if you're interested in guest blogging,

  • again, come talk to us.

  • We'd love to have a lot of content that reflects everybody

  • using the code.

  • And of course, there's this little thing

  • called Twitter, which is our main newswire for everything

  • that ticks over-- so releases, important articles, and things

  • like that.

  • We'd love you to hook into all those things.

  • We'd love you to talk to us over the course of these two days.

  • We also have a presence at the booth over in the expo area--

  • the [INAUDIBLE]--

  • and we're hanging around, and we've made sure

  • that we've got a good rotation of people

  • coming through that you can talk to.

  • So even if we don't know the right answer to your question,

  • we'll probably be able to say, well, so-and-so is over there.

  • Come and talk to him.

  • This is your best chance to get face-to-face information

  • from us, basically.

  • So thank you very much for your attention so far.

  • I know Laurence is going to speak shortly and guide us

  • into the two days.

  • Very grateful for you being here.

  • Thank you.

  • [APPLAUSE]


Building AI with TensorFlow: An Overview (TensorFlow @ O'Reilly AI Conference, San Francisco '18)
