What's up, everybody.
My name is Magnus.
Are you interested in getting started with machine learning in TensorFlow?
Well, in that case, you've come to the right place, because in this screencast, that's exactly what we're going to do.
First of all, let's get to the most important stuff: locating the code we're going to use.
To find it, simply follow the instructions in the description of this video.
I'll give you a couple of seconds to do that.
Great.
Let's now look at what's going to happen in this screencast.
We're going to create a deep neural network model to classify images of clothing.
And to do this, we're going to use the data set called Fashion MNIST. Look in the description below to find out where this data set comes from.
Fashion MNIST contains 28 by 28 pixel images of different clothing, for example, T-shirts and tops, sandals, and ankle boots.
So you can say it's a really fashionable data set. So corny.
All right, here's the deal.
As I said, we're going to train a deep neural network to classify these images, and this is what the network will look like.
So we're taking 28 by 28 pixels as input, which will be an array of 784 input values if we flatten it.
Then we have our first hidden layer with 128 neurons.
So all the input pixel values will be sent to each of these neurons.
And finally we have the output layer, which gives us 10 values specifying the probability that the image is of a certain class, for example, whether it's a T-shirt/top, a sandal, or an ankle boot.
And since we're doing classifications here, these output values will be probabilities.
So if you sum all of these 10 values up, the result will be one.
This is because our neural network will always be 100% certain that the image is one of these classes.
I mean, that's the task will train the model to perform in the first place.
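That sums-to-one property comes from the softmax function on the output layer. Here's a quick numeric sketch in plain NumPy; the raw scores are made up for illustration:

```python
import numpy as np

# Hypothetical raw scores ("logits") for the 10 clothing classes.
logits = np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.5, 3.0, 0.2, -2.0, 4.0])

# Softmax turns arbitrary scores into a probability distribution.
probabilities = np.exp(logits) / np.sum(np.exp(logits))

# The 10 probabilities always sum to one.
print(round(float(probabilities.sum()), 6))  # → 1.0
```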
That's brilliant. All right, but that's enough of my face.
Now let's use the screen space for more important stuff.
That's right.
Let's look at the code.
Okay, so by clicking on this button here, I can actually execute the code directly from the browser through Colab. Colab is a super cool product that gives you a virtual execution environment running in Google Cloud.
The only thing you need to be sure of is that you're logged in with your Google account.
Next, let's expand the licenses.
So this code is licensed under Apache and MIT.
Now we'll actually start executing some code.
So the first step here is to import our libraries.
Observe, your output may show a different TensorFlow version, and that's totally okay.
All right, so let's now load the Fashion MNIST data set.
We do this by calling a convenience function in Keras.
And this will give us two lists.
One that has the images and labels we will use to train the model, as you can see here.
The other list we will use to test how accurate our final model is.
So remember that Fashion MNIST has 10 classes.
Here you can see all their numbers and their mapping to the classes.
Remember the favorites I previously mentioned: the T-shirt/top, the sandal, and the ankle boot.
Here we create a list of these.
So given the number, we can find the textual description of the class.
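As a sketch, that list of class names looks like this, following the standard Fashion MNIST label order from 0 to 9:

```python
# Standard Fashion MNIST class names, indexed by label 0-9.
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Given a numeric label, look up the textual description of the class.
print(class_names[9])  # → Ankle boot
```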
All right, so let's explore our data set a bit more.
Here you can see the shape of our training data set.
It has 60,000 images, each of which is 28 by 28 pixels.
You can also see that we have the same number of labels and that each label is a number between zero and nine, and similarly for our test data set that contains 10,000 images.
Let's take a look at one of these images.
So here we're plotting the first image, and look, it's an ankle boot.
You can also see that each pixel has a grayscale value between 0 and 255.
Let's normalize these values.
So instead of having an integer value between 0 and 255, we will have a float value between 0 and 1.
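That normalization is a single division. A minimal sketch with a tiny made-up stand-in for one grayscale image:

```python
import numpy as np

# A stand-in for one grayscale image: integer pixel values in [0, 255].
image = np.array([[0, 128],
                  [64, 255]], dtype=np.uint8)

# Dividing by 255.0 converts the integers to floats in [0.0, 1.0].
normalized = image / 255.0

print(normalized.min(), normalized.max())  # → 0.0 1.0
```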
Now let's print the first 25 images and also print the corresponding class names for each.
Ah, that's all looking great.
So many nice looking images of fashion items.
And finally, let's do some machine learning stuff.
First, let's define the neural network, which is going to be a sequential model. This means that the layers will be processed in the order they are declared.
Here, as you can see, we declare our first layer to be of the Flatten type, followed by two Dense layers.
Going back to our picture, you can see how these statements match the different layers.
First, we flatten the 28 by 28 pixel image to an array having 784 values.
Then we send each pixel value to all neurons in our first layer.
That's what a dense layer does.
We also apply the nonlinear activation function, ReLU, to the results.
And finally, we calculate our 10 output classes using soft max to create the probability distribution that sums to one.
The only thing remaining is to specify the optimizer and loss function, and that we would like to see the accuracy metric during evaluation.
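Putting the architecture and compile step together, a minimal sketch in tf.keras might look like this (the `adam` optimizer and `sparse_categorical_crossentropy` loss are common choices for this setup, assumed here rather than quoted from the screencast's notebook):

```python
import tensorflow as tf

# Flatten the 28x28 image to 784 values, feed them to a 128-neuron
# ReLU layer, then output 10 class probabilities via softmax.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Specify the optimizer, the loss function, and the accuracy metric.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.summary()
```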
And that's it.
Now we're ready to train the model, and, as you can see, we provide the training images and labels, as well as specify that we want to use five epochs.
One epoch is a full iteration over the training data set. So since we have 60,000 examples, a total of 300,000 images will be used to train our model. And that's it for training.
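The fit call itself is one line. A sketch below uses a small random stand-in for the real 60,000-image data set so it runs anywhere; `epochs=5` is the part that matters:

```python
import numpy as np
import tensorflow as tf

# Small random stand-in for the real training data (100 fake "images").
train_images = np.random.rand(100, 28, 28).astype('float32')
train_labels = np.random.randint(0, 10, size=100)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Five epochs = five full passes over the data. With the real 60,000
# examples, that is 5 * 60,000 = 300,000 images seen during training.
history = model.fit(train_images, train_labels, epochs=5, verbose=0)

print(len(history.history['loss']))  # one loss value per epoch → 5
```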
Let's evaluate our model and see what the accuracy is on the test data set.
And as you can see, we're actually doing pretty, pretty good for a simple model like this.
Now we can use our model to do predictions, and here you can see the prediction for the first image is a probability distribution that indicates it's of class number nine.
In other words, the ankle boot.
When we also print the correct label for the first image, you can see that our model made the correct prediction.
Let's do some more predictions, where we print both the predicted values as well as the correct labels with the images.
And as you can see, our model is doing really, really well.
Finally, let's grab the first image from the test data set.
As you can see, it has resolution 28 by 28.
Then we add an initial dimension because the predict call requires a list of images to be passed to it.
We do the predict call, and as you can see, our model predicts it's class nine, an ankle boot. And finally we pick the highest index from our probability distribution list.
Index number nine, and that's it.
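The batch-dimension trick and the argmax step can be sketched in plain NumPy; the probability values below are made up to stand in for one row of the model's predict output:

```python
import numpy as np

# Made-up probability distribution over the 10 classes, standing in
# for one row of the model's predict output.
prediction = np.array([0.01, 0.01, 0.02, 0.01, 0.02,
                       0.01, 0.01, 0.01, 0.05, 0.85])

# predict expects a list (batch) of images, so a single 28x28 image
# needs a leading dimension added before the call.
image = np.zeros((28, 28))
batch = np.expand_dims(image, 0)
print(batch.shape)  # → (1, 28, 28)

# The predicted class is the index of the highest probability.
print(int(np.argmax(prediction)))  # → 9, i.e. the ankle boot
```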
I really hope you enjoyed this screencast.
Be sure to subscribe to the TensorFlow channel to follow the amazing world of machine learning with TensorFlow.
But now it's your turn.
So go out there and create some great models.
Don't forget to tell us all about it.