  • Hey, welcome to Neural Nets in JavaScript with Brain.js.

  • I am super excited to teach this course.

  • The goal is to give you a practical introduction to problem solving with neural networks.

  • What you're gonna be learning in this course: propagation, both forward and backward; layers; neurons; training; error; what feedforward networks are; what recurrent neural networks are; and a whole lot more.

  • We're gonna build an XOR gate, a counter, a basic math network, an image recognizer, a sentiment analyzer and a children's book creator.

  • And how we're gonna do it is with 17 lectures where we're gonna focus on practice over theory.

  • What that means is you are going to get your hands dirty.

  • But more than that, you're gonna walk away knowing the ideas behind neural networks.

  • There are as well a bunch of interactive challenges along the way, and that brings me to our use of Scrimba.

  • Scrimba is a fantastic platform for learning, and at any point during the entire lecture, you can stop me.

  • It won't hurt my feelings.

  • You can just introduce a brand new script, and if you press Command+S if you're on Mac, or Control+S if you're on Linux or Windows, it will execute exactly your code.

  • That is super important throughout this course, as I'm gonna make regular reference to the console, which is directly below here.

  • So if you see some numbers go down there, like if I go ahead and test that right now, 0.5 just appeared, that's super important.

  • Anytime I start talking about whether the neural net was good because it had a low error rate, or bad because it had a high error rate, just look down there, and that will give you a little bit of reference as to what we're doing.

  • So let's get started.

  • This is gonna be awesome.

  • This is our very first neural net.

  • So the first problem we're gonna tackle is called exclusive or (XOR), and you can do some research on it if you like.

  • But more or less, this is what happens.

  • You have inputs: when they are the same, the result is a zero output; when they differ, the result is a one.

  • There's always two inputs.

  • There's always one output.
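
The rule just described fits in one line of plain JavaScript (this is not Brain.js, just the truth table the net will be learning):

```javascript
// XOR: inputs that are the same -> 0, inputs that differ -> 1.
const xor = (a, b) => (a !== b ? 1 : 0);

console.log(xor(0, 0)); // 0
console.log(xor(0, 1)); // 1
console.log(xor(1, 0)); // 1
console.log(xor(1, 1)); // 0
```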

  • So let's take this very simple comment and let's translate it into something that the neural net, or rather JavaScript, can understand.

  • Let's have a variable.

  • We're gonna call it trainingData, and it's a very simple variable that represents all of our training data.

  • Let's go ahead and import Brain.js.

  • I'm just gonna grab a link that imports it from a CDN, a content delivery network, and add that.

  • And next, we want to instantiate a new instance of brain.

  • And we do that with const net = new brain.NeuralNetwork().

  • And down here, we're going to say net.train, and we're gonna give it our training data.

  • And now, at line 16, by the time we get there, the net will have understood what our inputs and outputs are.

  • So we can here console.log out net.run with one of our inputs.

  • So let's choose the first one, and give our net hidden layers: a hidden layer of three neurons. We'll get more into hidden layers later.

  • And now we're gonna go ahead and run.

  • And now we have an output. That's so awesome.

  • Now, the reason that this number here is not zero is because we're using a neural net, and it's very hard for them to speak specifically zero and one.

  • They can speak close to that.

  • So that's exactly what we want: a number close to zero.

  • Here's a challenge for you.

  • Go ahead and get the net's outputs console.logged out.

  • Just kind of play around with the values and see how the net operates.
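
Putting the dictated pieces together, the whole example looks roughly like this. The Brain.js calls are left as comments because they assume the library has been loaded from the CDN, as in the lecture:

```javascript
// The four XOR cases as Brain.js-style training data:
// same inputs -> output 0, different inputs -> output 1.
const trainingData = [
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] },
];

// Sketch of the usage dictated above (assumes brain.js is loaded via a
// CDN <script> tag or require('brain.js')):
// const net = new brain.NeuralNetwork({ hiddenLayers: [3] });
// net.train(trainingData);
// console.log(net.run([0, 0])); // a number close to 0
```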

  • In our last tutorial, we talked about how to build a neural net, a very simple one, to solve exclusive or, and in this one we're going to discuss how it did it.

  • So the neural net has different stages; you'll notice I used two different methods.

  • Here, the first one is train, and the other one is run. Now, train does something called forward propagation and back propagation.

  • Those terms may seem scary at first, but they're actually quite simple.

  • In fact, we're going to reduce their complexity down to something that even a child can understand.

  • Take a look at my slides here: forward propagation and back propagation.

  • We have a ball; we're gonna take a ball, and we're gonna throw it at a goal.

  • Now, when we do that, we're going to make a prediction as to how far the ball needs to go, how much energy to put behind it, the pathway of the ball, et cetera.

  • I want you to go ahead and pause the video here and think about the different steps that happen when you throw a ball at a goal.

  • The first step is prediction: in prediction, we're going to think about how we're gonna throw the ball, where it needs to land, how much power we need to put behind it. That first step, with my illustration showing us that we did not go far enough with the ball, is forward propagation.

  • We ourselves are making a prediction.

  • From that prediction, we can see how far we were off from the actual goal.

  • We can measure that, and that measuring step is back propagation.

  • Now, the next thing that we want to do is make a determination as to what we're going to do next.

  • That is our second step of back propagation.

  • That is our learning step and you know how the story goes.

  • We throw the ball again; it goes too far.

  • We measure that and make another prediction.

  • We throw the ball again.

  • Third time's the charm. That illustrates this very first method and everything that goes on inside the net.

  • The next stage is running our net.

  • Now, in running our net, we no longer have to measure how far we are from the goal we already know.

  • And because of that, there's no need to back propagate.

  • So that step goes away.

  • Now, throughout this entire training of the neural net, the net is measuring, and that measurement is referred to as error.

  • Check this out.

  • If we go to our net during its training, we can actually enable something really interesting.

  • We're gonna give it a log function: we're gonna console.log the error, and we're gonna set our logPeriod to 200 intervals.

  • And now we can actually see the error, how far off the net was.

  • You can see, for a time, some of these errors may go down or may go up, but eventually the net catches on, and it starts to accelerate its ability to learn until that error rate starts to drop to a ridiculously low number (not zero), until training is completed.

  • Once training is completed, there's no need to continue training; as we discussed, we can then just forward propagate.
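
As a toy illustration of that throw-measure-adjust loop (hand-rolled plain JavaScript, not Brain.js internals), a single "weight" nudged toward a target shows the error shrinking each iteration, just like the error logged to the console:

```javascript
// Forward propagate (predict), measure the error, then learn from it.
let weight = Math.random(); // random starting knowledge
const target = 0.75;        // where the "ball" should land
const errors = [];

for (let i = 0; i < 100; i++) {
  const prediction = weight;          // forward propagation: make a prediction
  const error = target - prediction;  // back propagation, step 1: measure
  errors.push(Math.abs(error));
  weight += 0.1 * error;              // back propagation, step 2: learn
}

console.log(errors[0], errors[99]); // the error drops toward (but not to) zero
```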

  • Now, last tutorial, we talked about how neural networks learn using forward propagation and back propagation.

  • In this tutorial, we're going to understand more of the structure of the programmatic neuron.

  • Neural nets are actually quite simple: they're composed of functions that receive inputs as arguments and produce outputs.

  • If we think of our neural net in this very simplistic way, then really, we can reduce the complexity of it down to one of the simplest functions that you can write.

  • Pause the video here and just look at the structure.

  • Now we're gonna talk about how the network initiates.

  • Think about when you were first born: likely, you didn't know very much. Over time, though, you begin to know more and more.

  • The neural net begins with a bunch of random values, so everything that affects the outputs is just random.

  • At first, you may ask yourself why. The reason is because mathematically, we've proven that this is an effective way to start off with knowledge.

  • The knowledge is very random at first; we don't know the idea.

  • It's not zero and It's not one.

  • It's somewhere in between.

  • Over time, we can shape that random data so that it finally becomes where we store what's going on inside of the neural net.

  • Each neuron is quite literally Math.random().

  • Go ahead and pause the video here and get comfortable with the idea that the net starts out with random data.
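
A minimal sketch of that idea in plain JavaScript (the helper name here is made up for illustration):

```javascript
// Every weight in the net starts as Math.random(): somewhere between 0 and 1.
const randomNeuronWeights = (count) =>
  Array.from({ length: count }, () => Math.random());

const weights = randomNeuronWeights(3);
console.log(weights); // three random values, different every run
```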

  • Next, I want to talk about activation.

  • A really popular and effective activation function that's used nowadays is called ReLU.

  • ReLU looks something like this; were we to put it inside of a function, the function would quite literally look like this.

  • That is our activation function, called ReLU.
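
In JavaScript, ReLU and its derivative each fit in one line. This is a sketch of the idea; see the bonus links for how Brain.js actually implements them:

```javascript
// ReLU passes positive values through and clips negatives to zero.
const relu = (x) => Math.max(0, x);

// Its derivative, used during back propagation: 1 for positive inputs, else 0.
const reluDerivative = (x) => (x > 0 ? 1 : 0);

console.log(relu(-2), relu(3));            // 0 3
console.log(reluDerivative(-2), reluDerivative(3)); // 0 1
```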

  • Now, activation functions are measured in back propagation using what is called their derivative.

  • I'll go ahead and put a link here in our bonus material; I'll go ahead and throw in some links that take you to where ReLU and its derivative are used in Brain.js.

  • Now, last tutorial.

  • We talked about the structure of a neural net, and in this one we're gonna be talking about layers.

  • Take a look at my illustration: this is a neural net.

  • Each circle represents a neuron.

  • The arrows represent a bit of math.

  • Stacked circles are layers, and so here we have what is called an input layer: that's this first one.

  • This next layer would be a hidden layer.

  • It's composed of two neurons.

  • The next is another hidden layer composed of two neurons, and the last is called an output layer.

  • In Brain.js, the input layers and output layers are configured for you kind of automatically; however, our hidden layers can be configured by us.

  • Our first neural net was composed of two input neurons, one hidden layer that had three neurons, and an output layer that had one neuron, just as our illustration has two neurons for the input layer, two hidden layers (the first having two neurons, the second having two neurons), and the last layer having two neurons.

  • What's interesting about hidden layers is that's really where the majority of the storage is.

  • If you liken it to a human, the hidden layers are where the ideas are.

  • You may run into a scenario where your neural net isn't learning.

  • I'll go ahead and recreate that here.

  • I'm going to change the hidden layers from a single hidden layer with three neurons to one with a single neuron, and I'm going to log out the training stats. Watch what happens.

  • We hit 20,000 iterations without fully understanding the problem and how to solve it.

  • We can easily fix that by changing our hidden layers to a single hidden layer of three neurons.

  • You can see we're able to train in a short amount of time.

  • 4800 iterations.

  • We can as well have more than one hidden layer.

  • Our illustration has two hidden layers.

  • We could mimic this exact configuration, two hidden layers of two neurons each, in this way.

  • A note of caution, though: the more hidden layers that you add, the longer it takes for the neural net to train. Let's try it.

  • See, we hit 20,000 iterations without actually fully training.

  • There are no hard and fast rules when it comes to hidden layers.

  • This would be an invitation to experiment.

  • Something I have seen, though, is to treat the hidden layers sort of like a funnel.

  • So if you had, for example, 20 inputs, you could have a hidden layer of 15 neurons, followed by a hidden layer of 10 neurons, followed by two output neurons.

  • That's just an example, though, and many problems take on different configurations.
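
In Brain.js, those hidden-layer shapes are just an options object passed to the constructor. Here is a sketch; the funnel sizes are the example numbers above, not a recommendation:

```javascript
// Hidden-layer configurations discussed above, as Brain.js options objects.
const oneLayerOfThree = { hiddenLayers: [3] };   // our first net
const twoLayersOfTwo = { hiddenLayers: [2, 2] }; // mimics the illustration
const funnel = { hiddenLayers: [15, 10] };       // e.g. for 20 inputs, 2 outputs

// Sketch of usage (assumes brain.js is loaded as in the lecture):
// const net = new brain.NeuralNetwork(funnel);
console.log(funnel.hiddenLayers);
```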

  • Switching gears for a moment.

  • Take a look back at our illustration.

  • You remember that we have these arrows.

  • I said these arrows represent a bit of math, and in a feedforward neural net, that math can be described as this.

  • We have our weights times our inputs, plus biases, activated.

  • Now, this is simple math, but the implications of it are huge.

  • Pause here and just think about how simple that is and let it sink into your brain.
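
That sentence translates almost word for word into plain JavaScript. This is a sketch of the idea, not Brain.js's internal code:

```javascript
// "Weights times our inputs, plus biases, activated."
const relu = (x) => Math.max(0, x);

function feedForward(weights, inputs, bias) {
  let sum = bias;
  for (let i = 0; i < inputs.length; i++) {
    sum += weights[i] * inputs[i]; // weights times inputs
  }
  return relu(sum);                // ...activated
}

console.log(feedForward([0.5, -0.5], [1, 1], 0.25)); // 0.25
```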

  • This tutorial series is really about the practical application of neural nets.

  • But if you're curious like me, you can take a look here.

  • See how Brain.js uses this math.

  • As another bonus, take a look at the additional options that are available for Brain.js; neural nets can be widely configured to solve many different problems.

  • All that takes is experimentation, time and enthusiasm.

  • Up till now, we've concentrated on the basics of how neural networks work, and we've used arrays for our training data.

  • But in this tutorial, we're gonna be talking about sending different shaped data into a neural net.

  • Now, to illustrate what I mean by that, let's take a look at my slides.

  • Here we have an array. Now, it's a very simple array, but it's an array, and what makes arrays incredibly useful is we can reference the values by index.

  • For example, the array's index of zero gives us 0.3.

  • The array's index of one gives us the value 0.1.

  • One arrays are useful in neural nets because they represent a collection generally of the same type of values.

  • And we can use the index to look up those values.

  • If we look at an array beside a neural net, we can see each neuron associates meaning with each of the array's indices.

  • Now, when dealing with a collection of the same type of value, this is perfect.

  • But what about when our data isn't in the form of an array?

  • What about, for example, objects?

  • It just so happens that Brain.js was built for this type of practical use, and we're gonna build a neural net that uses objects for training.

  • To get us started, I've included the browser version of Brain.js here in the index.html file, and next let's define what our training data will look like: our input will be an object that is going to have the properties red, green and blue, and our output will have properties like light, neutral and dark.

  • It just so happens I have some training data.

  • I'll go ahead and paste it in, and you can see our colors and brightnesses.

  • What's really useful with Brain.js is you don't have to define all the properties; you can, but you don't have to.

  • When they're missing, it simply uses a zero in their place.

  • So in this first object, we see that red is missing; red here will simply be zero.

  • You can see a similar practice with the brightnesses.

  • Let's go ahead and turn these two different arrays of objects into our training data.

  • So we're going to define const trainingData equals an array.

  • Now we're going to iterate over the brightnesses and colors and build up our training data.

  • So we'll say for (let i = 0; i < colors.length; i++), or you could use a forEach or even a map, and then trainingData.push.

  • We're going to give it an object, as each one of our training sets is an object, and that training set will have an input, and that input will be one of the colors.

  • And our outputs will be from the brightnesses, and those indexes are equivalent.
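
The loop just dictated can be sketched like this. The sample colors and brightnesses are made-up stand-ins for the data pasted in during the lecture; Brain.js treats missing properties as zero:

```javascript
// Pair two parallel arrays of objects into Brain.js-style training data.
// These sample values are placeholders, not the lecture's actual data.
const colors = [{ red: 0.9 }, { green: 0.8, blue: 0.1 }];
const brightnesses = [{ dark: 1 }, { neutral: 1 }];

const trainingData = [];
for (let i = 0; i < colors.length; i++) {
  // The indexes of colors and brightnesses are equivalent.
  trainingData.push({ input: colors[i], output: brightnesses[i] });
}

console.log(trainingData.length); // 2
```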

  • Next, we'll define our neural net.

  • So const net = new brain.NeuralNetwork(), and we're gonna give it hidden layers.

  • We'll have a single layer with three neurons, and next we can basically just train our neural net.

  • So net.train, and we're gonna give it the training data.

  • Rather than logging out what is happening inside the neural net, let's just get our stats, what happens at the very end, and we'll go ahead and log those to the console.

  • Now let's see what we get.

  • Cool.

  • 12 iterations; it learned it fairly quickly.

  • And let's see what the neural net actually outputs.

  • So net.run with a value of 0.9 for red, and we'll log those values out.

  • Let's see what we get.

  • Very cool.

  • So you can see, in the training set we did not include dark, neutral and light in every single one of the brightnesses.

  • However, Brain.js is smart enough to combine those together, and it tells us, with red being 0.9, that red is dark.

  • Now, as a bonus (if I could spell bonus correctly): what if we inverted the problem?

  • What I mean by that is: what if we were, for example, asking our neural net for colors rather than classifying their brightness? In this scenario, pause here and think about how you would accomplish this.

  • So by inverting the problem, our inputs would then be light, neutral and dark, and our outputs would be a color: red, green or blue.

  • For us to flip the values, let's define our training data again.

  • This will be const invertedTrainingData, and we are going to, with for (let i = 0; i < colors.length; i++), take the invertedTrainingData and push objects to it.

  • They're just like prior: our input and output, but we're going to change the input to accept brightnesses and our output to accept colors.

  • That's our training data.

  • Next, let's define an invertedNet: new brain.NeuralNetwork() with the same hidden layers.

  • And let's train: const invertedStats = invertedNet.train(invertedTrainingData).

  • We'll go ahead and train on this.

  • Cool.

  • Let's look at the stats.

  • There are our stats.

  • We can see it actually didn't do a great job of learning the problem.

  • But that isn't the point of this exercise.

  • It's really just to understand the neural net from a different vantage point.

  • When you flip a neural net like this, you're kind of generating values that could be really useful in predictions.

  • In this tutorial, we're gonna get a bit more adventurous and push the boundaries of what you can do with a neural net, but we'll do so in a safe and easy manner.

  • To help us understand, take a look at my slides.

  • In our previous tutorials, we used numbers directly in the neural net.

  • But numbers are just one of the types that exist in JavaScript and in other languages.

  • And JavaScript is fairly simple in its types: in fact, we have only booleans, numbers, objects, strings, null and undefined.

  • It would be nice if we could feed these other types of values into the neural net so that it can understand context better and solved different and seemingly more complex problems.

  • So are we doomed, then? Do neural nets just speak numbers?

  • Previously, we used an object with a neural net, and we did so using its properties defined with numbers.

  • But the principle of assigning a value to a neuron will provide us the answer for our neural net to speak more than just numbers, and the question in your mind is probably how. Let me illustrate in a way that a child would understand: this is a light switch.

  • It is off now.

  • If you were to ask a child to turn the light switch on, they of course would.

  • But it wouldn't just happen like that; it would happen more like my son does it.

  • He sees that it is off, and he becomes excited, and he's taking gymnastics.

  • And so he will use that to his advantage.

  • He'll perform some amazing gymnastic maneuver over to the light switch, turning it on.

  • Seeing that it's on and that he just did something very useful, he'll let out a cheer and do some gymnastic moves off and away, celebrating that he was a useful kid and that he has the ability to flip switches.

  • This ability to understand both off and on has huge implications.

  • Our computers speak binary: that's just ones and zeros.

  • So it's a very, very similar language to the neural net.

  • We could use this same practice.

  • We could say that zero is off and one is on.

  • Now, we sort of did this previously with objects via their property names, but we fed the inputs directly into the neural net.

  • In this case, we assign a neuron to a specific value that we're training towards either that input or the output.

  • And when that input or output neuron fires, we basically just assigned that value as one.

  • Otherwise, it's zero.

  • So just like our on and off, we're taking these values, like a boolean, a null value, or a string, et cetera, and we're simply assigning each to an input.

  • So here, my potentially null value is being fired upon, so it has the value of one.

  • My string 'one' is as well; the other values are not.

  • And as for output, the net is going to try to learn those values just as before.

  • And so really, the implications here are that we can send just about any value into a neural net, so long as that value is represented by a neuron.

  • Okay, let's go ahead and get coding.

  • And then this will all make sense.

  • First off, in our index.html file, I have included the browser version of Brain.js.

  • Next, we're going to get some data that we're eventually gonna use to train on.

  • This initial data is an object with property names of restaurants whose values are what day we can eat free with kids.

  • And our objective is to find a way to get these string values, represented as ones and zeros, into the neural net.

  • And what we're gonna do is give the neural net a day of the week, and it's gonna tell us where to go on that day of the week so that we can eat free with our kids. Pause here for a moment and think how you would accomplish this using that light switch analogy.

  • Next, let's go in and plan how we're going to input our training data into the neural net.

  • So if we are going to use the day of the week as the question we're gonna ask our neural net, that will be our input.

  • So our input is gonna be a day of the week: Monday, Tuesday, Wednesday, etcetera.

  • Our output is going to be the restaurant name.

  • So that is our input and output for our neural net.

  • Next, we're gonna go ahead and build up our training data.

  • So let's go ahead and build that using a constant; that's gonna be trainingData, and it's gonna be an array.

  • And to put data into the training data from our restaurants, we'll need to iterate over our restaurants.

  • So for (let restaurantName in restaurants), and the day of the week is the value therein, so const dayOfWeek = restaurants[restaurantName].

  • All right, so we've got our day of the week, our restaurants and our training data.

  • We're gonna push a value to it; it has our input and output.

  • Our input, and this is where the rubber meets the road so to speak, is gonna be our day of the week, and we're gonna assign that a value of one.

  • Just think about how simple that is.

  • If you need to pause, please do so, and think about how simple what we just did is: we're giving an input to the neural net of a day of the week, assigned a value of one.

  • Now, all the other days of the week, because of the way that Brain.js is built, are gonna initially be zero.

  • So only this day of the week will be one.

  • Next, let's go ahead and assign our output.

  • That's gonna be our restaurant name.

  • And we have just built training data.

  • If you need a moment to pause and think about what we've just done, please do so.

  • But it's a very simple principle.

  • The principle of represented values.
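
The represented-values principle, assembled: each day-of-week string becomes an input neuron set to one, and each restaurant name becomes an output neuron set to one. The restaurant names and days here are placeholders, not the lecture's full data:

```javascript
// Placeholder restaurants: name -> day you eat free with kids.
const restaurants = {
  'Brilliant Yellow Corral': 'Monday',
  "Ted's Tacos": 'Tuesday',
};

const trainingData = [];
for (const restaurantName in restaurants) {
  const dayOfWeek = restaurants[restaurantName];
  trainingData.push({
    input: { [dayOfWeek]: 1 },        // only this day is 1; the rest default to 0
    output: { [restaurantName]: 1 },  // the restaurant neuron that should fire
  });
}

console.log(trainingData[0]);
```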

  • Next, let's go ahead and define our neural net.

  • It'll be a const net = new brain.NeuralNetwork(), and we're gonna give it the same hidden layers as before, a single hidden layer with three neurons.

  • And all that is left to do is train on our training data.

  • So we'll do const stats = net.train(trainingData).

  • Now we'll console.log our stats out.

  • All right.

  • Are you ready?

  • Here we go. Look at that: under 2,000 iterations.

  • The neural net has deciphered what day of the week to eat for free with our kids.

  • That's nice, but let's look at what comes out of the neural net, so console.log net.run.

  • We're gonna say Monday and see what it says.

  • This is where it gets kind of interesting.

  • All the restaurants are included with our result.

  • We just have a likelihood associated with each one of those restaurants.

  • But really, what we want is to put a string in and to get a string out of our net, and we're gonna do that next.

  • If you can pause it, just think about how you might be able to do that.

  • Okay, so we're gonna create a function, and its name is restaurantForDay.

  • It's gonna get a day of week.

  • We're going to use that value with our neural net.

  • So net.run(dayOfWeek).

  • This is gonna be our result.

  • And from that result, we'll have this collection of restaurants with the likelihood that we should eat there, and the highest likelihood will be the correct prediction for the given day.

  • So what we're gonna do is start out with a highest value and, as well, the highest restaurant name, and we're going to iterate over our result.

  • So for (let restaurantName in result), and then we're gonna say: if the result's value for that restaurant name is higher than our highest value, then we're gonna set our highest value to that value.

  • Likewise, we're gonna save the highest restaurant name as that restaurant, and from here, we'll just return the highest restaurant.

  • And so from this, we're going to accept a day of the week, a string.

  • We're gonna put that into the neural net.

  • The net's gonna give us its predictions.

  • Those predictions are a list of restaurants; we're gonna iterate over those restaurants.

  • Then we're going to save the highest one, and we're gonna return the highest one.
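The function just described is a plain argmax over the result object. The sample result values below are made up for illustration; in the lecture, the object comes from net.run(dayOfWeek):

```javascript
// Return the key with the highest likelihood in a result object.
function highestKey(result) {
  let highestValue = 0;
  let highestName = null;
  for (const name in result) {
    if (result[name] > highestValue) {
      highestValue = result[name];
      highestName = name;
    }
  }
  return highestName;
}

// Stand-in for what the trained net might return for "Monday".
const result = { 'Brilliant Yellow Corral': 0.92, 'Other Place': 0.03 };
console.log(highestKey(result)); // Brilliant Yellow Corral
```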

  • We'll go ahead and log the result out.

  • So this is restaurantForDay, we're gonna say Monday, and what does it log out there? It is Brilliant Yellow Corral.

  • Perfect.

  • Now let's take the rest of the days of the week: Monday, Tuesday, Wednesday, Thursday, Friday, Saturday and Sunday.

  • All right, there you go.

  • We've got all the restaurants for the given days of the week.

  • So now we've got string in and string out from our neural net.

  • Next, as a bonus, try and flip this logic the other way, so that you're inputting a restaurant name and you are getting out a day of the week.

  • I'll leave you to it.

  • In this tutorial, we're gonna learn how to count.

  • And although that sounds kind of like an easy task, at first it actually is a little bit difficult.

  • But when we use the right tools, it becomes easy.

  • Take a look at my slides, and this will give us a background on where to get started.

  • This is exclusive or, our first problem that we solved.

  • Each of the empty squares is a zero, and each of the black squares is a one.

  • Let's take a look at a different input, one that may be a little bit more tricky.

  • It's hard to kind of see what this particular input means, so let's rearrange it so that it's easier for us humans.

  • Okay, see, this is a four.

  • Now, both of these inputs illustrate something that may have occurred to you already, and that is that they have width and height, or length.

  • Now, width and height is just sort of another way of looking at length; width and height don't really change in neural nets, they are constant.

  • But in computers, there are some rules that cannot be bent and others that can be broken.

  • And we'll start illustrating that now by going to the movies. On your trip to the movies, you're gonna bring your best friend along.

  • And they're, of course, thrilled at going to the movies with you because it's the latest and greatest movie that you've been looking forward to and everything is going fantastic.

  • And in fact, it's the cliffhanger scene right there at the middle.

  • But all of a sudden the screen goes black.

  • Why?

  • Why did this happen?

  • "The screen went black! It was right as I was expecting something to happen next."

  • Why were they expecting something to happen next?

  • Well, it's because they built up a memory of what was happening up to that point.

  • That's important.

  • Think about that just for a moment; we'll come back to it.

  • Back to our non-playing movie.

  • Your best friend looks at you, and they're of course, thrilled that they're at the movies.

  • But they are sad that the movie will not continue to play, and then all of a sudden it starts to play again.

  • And they're, of course, as ecstatic as could be when the movie ends, just as you had hoped.

  • So that is our illustration of the movies.

  • Now, if you think back about what we paused in the movie, every frame of the movie was the same height and width.

  • Every frame. No frame was different in size.

  • Think about that over time, though: a frame being one part of a movie, one frozen image of the movie.

  • Each frame has that constant size, but there are hardly any movies that are the same duration.

  • They all have different times that they play out.

  • That duration, that's depth: that's our frames, plural, that's many frames.

  • That is really important with neural nets: the depth, the frames, how long the movie is, what happens on each frame, and what leads to the next one.

  • That gives us a context as to what is happening in the movie.

  • That's the same with neural nets, this context: in a neural net, it recurs.

  • It's something that happens over and over again, something that, in a sense, repeats.

  • This terminology in neural nets is called recurrent.

  • Now, that sounds like a very complex word.

  • Recurrent.

  • Oh, no.

  • What are we gonna do next?

  • But it's actually quite simple, and to illustrate that, let's go simple.

  • Let's go to something that even a child can understand: one.

  • Now, at its very simplest, if I go to a child and I say "one," likely the child will not understand what I'm talking about.

  • Unless we've trained on that in previous sessions.

  • So "one," to a child, is just foreign, and it's the same with a neural net.

  • "I don't know what you mean by one": that's essentially what the neural net will say.

  • However, as soon as we start giving it context, it being a neural net or it being a child, they can start to decipher what we're trying to ask of them.

  • Now, to continue, we talk to our child: we say one, two, three, four, and we pause.

  • What do you think the child will reply with?

  • Likely a response would be five.

  • Now this is where recurrent happens with neural nets.

  • Recurrence is like taking each of these states, 1, 2, 3 and 4, and sort of putting them together, in a sense adding them together into a kind of pool.

  • And that pool says, you know, the most likely thing they're looking for is probably... and then out comes a five.

  • That is depth.

  • It's the same as our movie.

  • It's the same as being able to see each frame.

  • That depth happens over time.

  • If the movie played out of sequence, if the frames were shuffled, in a sense, we would get very little out of the movie.

  • Now, this context is sort of an observer that looks at each of these steps and can guess what comes next.

  • That context, that depth, that time, that recurrence: they all refer to a very similar concept.

  • It's all dynamic.

  • The frames are all the same width and height, or even the same length.

  • But they're not of the same depth, because we can feed in more than one.

  • And the context is dynamic, too, in the sense that we can say 1, 2 and ask for what's next, and it'll say 3; or we can say 3, 4, what's next, and it will give us a 5; or we can even reverse it and say 5, 4, 3, 2, what's next, and it'll give us a 1.

  • That's how dynamic that recurrent concept is in our neural net: the ability to sort of take in those multiple frames is called a recurrent neural net.

  • And in its simplest form, the feeding in of, for example, numbers is like stepping through time, or a time step.

  • And as I said in this tutorial, we're gonna learn how to count.

  • Okay, Now, to get started.

  • Let's go ahead and include the browser version of brain.js.

  • We'll have that here.

  • Our training data.

  • It's gonna have two different outcomes.

  • One is gonna count from 1 to 5, and the next one is gonna count from 5 to 1. We'll define our training data manually.

  • That'll be a constant called trainingData, and it is gonna be an array, and in that array we'll have two arrays.

  • The first one will be 1, 2, 3, 4, and 5.

  • The next one will be 5, 4, 3, 2, and 1. That's it.

  • That's our entire training data.

  • Next will define our neural net.

  • const net equals new brain.recurrent.LSTMTimeStep. Now, this is a new namespace in brain: recurrent, dot, long short term memory (or LSTM) time step.

  • Now, to train the neural net, we'll give it our training data using the train method, net.train(trainingData), and let's see what actually comes out of the neural net while we're training it by logging; we're gonna give it a log function.

  • Let's go in and see what happens.

  • All right?

  • It trained really fast. Very cool.

  • But let's remove the logging now that we know that it can train, and let's see what actually comes out of the neural net: console.log(net.run(...)), and we're gonna give it part of one of the arrays that we defined in the beginning.

  • So 1, 2, 3, 4, and that's it for the first one. And we'll do the same for the second, because we want to count down from five as well: net.run, and we'll do 5, 4, 3, and 2.

  • All right, let's see what comes out.

  • There we go.

  • Awesome.

  • We got exactly what we wanted.

  • And that is how you count.

  • Using a neural net. In our first run, we gave it an array of 1, 2, 3, and 4, expecting a five.

  • And that's what we got.

  • And in the second one, we sent in 5, 4, 3, and 2, and we're expecting a one, just like we have up here in the training data, and we got a 1.05.
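  • Putting the whole counting example together, a minimal sketch might look like this. It assumes the browser build of brain.js has already been loaded (so the global `brain` exists), and the exact outputs will vary slightly from one training run to the next:

```javascript
// Assumes brain.js is loaded, e.g. via:
// <script src="https://unpkg.com/brain.js"></script>

// Two sequences: counting up and counting down.
const trainingData = [
  [1, 2, 3, 4, 5],
  [5, 4, 3, 2, 1],
];

// LSTMTimeStep treats each array as a series of time steps.
const net = new brain.recurrent.LSTMTimeStep();
net.train(trainingData);

// Given the first four steps, predict the fifth.
console.log(net.run([1, 2, 3, 4])); // close to 5 (e.g. 4.98)
console.log(net.run([5, 4, 3, 2])); // close to 1 (e.g. 1.05)
```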

  • That's really exactly what we wanted. Now, as a bonus, try adding another training array that counts from 10 to 5, or even from 5 to 10.

  • In our last tutorial, we used a long short term memory time step neural network to count.

  • It's a mouthful.

  • And in this tutorial, we're going to use that same sort of net, and we're gonna work up to predicting stock market data.

  • So let's take a look at how our data is shaped.

  • First, our raw data is gonna be where all of our values live.

  • Now, this isn't yet training data.

  • We're going to turn it into that.

  • Our values are gonna be from an object, and that object is going to have properties open, high, low, and close; each one of those is a number.

  • Now, let's take a look at our raw data for a moment.

  • Now, if you look for example, at the open property, you'll see it's quite a bit larger than numbers that we've used previously, which were from 0 to 1 or 0 to 10.

  • The values repeat, and you'll see that open, high, low, and close follow a similar pattern.

  • What we want to do, though: because the neural net was instantiated with values between zero and one, that is sort of the language that the neural net speaks.

  • If we just send these values into the neural net, it's gonna take it quite a bit of time to sort of ramp up its understanding of these larger numbers.

  • If you can imagine, it's like walking up to somebody who has only ever heard whispers and then just yelling at them.

  • It would be, first off, rude, and secondly, it would just be loud.

  • It wouldn't be what they were used to.

  • What we want to do is make it easy for the neural net to interpret this data, and this is a very common practice.

  • So let's start out by writing a function, and this function is going to normalize our data.

  • But since normalize isn't the most friendly term, let's just call it scaleDown.

  • That's our function name, and scaleDown is going to accept a single object, and we're gonna call that a step.

  • So that's a step in time.

  • Now, the same object that we have coming in, we just want to turn down those values.

  • And so we're gonna return an object, and that object will have open, and for the time being we'll go ahead and return step.open, or define it, rather, with step.open.

  • Same for high, et cetera.

  • And that is our scaleDown function; however, we're not normalizing anything yet.

  • And if we go over to our training data once more and we look at these values, one of the lowest ones that we can come to is this value, 138 or so.

  • So what we're gonna do to get these values easily into the neural net is simply divide by that value.

  • So 138 there.

  • And that's our normalize function.

  • Quite simple.

  • And to prove that it works, let's go ahead and console.log it out.

  • console.log scaleDown, and let's grab the first item in our raw data here.

  • We can see the values are fairly close to between zero and one.

  • Now, if you'll pause just for a moment, think about how we'd go the other way.

  • Scaling up or, as they say, denormalizing.

  • Okay, so now, before we do any machine learning, we're actually going to write the inverse of this function, which would be the denormalize or, in practical terms, the scaleUp function.

  • So function scaleUp, and it's going to take a step, and we're gonna return that same sort of signature, the same sort of data.

  • But rather than dividing by 138, since the exact inverse of dividing is multiplying, we will multiply by 138.

  • And now we have our scaleUp function, referred to normally as denormalize: normalize brings it down, denormalize brings it up. And to test them both side by side,

  • we'll go ahead and console.log scaleUp (it'll look kind of funny) of scaleDown, and then we'll hand it our raw data.

  • We'll send that in, and what do we get? There you go: our scaleUp and scaleDown functions.
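  • As a plain-JavaScript sketch, the two functions from this lesson might look like this. The divisor 138 is the approximate lowest value observed in the lesson's raw data, and the sample step below is hypothetical:

```javascript
// Normalize by dividing by ~138, the lowest open value in the sample data.
function scaleDown(step) {
  return {
    open: step.open / 138,
    high: step.high / 138,
    low: step.low / 138,
    close: step.close / 138,
  };
}

// Denormalize: multiplying is the exact inverse of dividing.
function scaleUp(step) {
  return {
    open: step.open * 138,
    high: step.high * 138,
    low: step.low * 138,
    close: step.close * 138,
  };
}

// Hypothetical one-step sample shaped like the lesson's raw data.
const sample = { open: 140, high: 147, low: 138, close: 144 };
console.log(scaleDown(sample));          // values near 1
console.log(scaleUp(scaleDown(sample))); // round-trips back to the original
```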

  • We're gonna use these in our next neural net, but this is a very common practice across neural nets in general, not just recurrent neural nets: normalizing our values. A more common approach to normalizing your data would be to subtract the lowest value from all the other values that you're sending into the neural net, and then to divide by the highest value minus the lowest value.

  • That sounds kind of confusing at first.

  • If we were to take one of these lines, for example this first one, scaleDown, and we used it right here, let's say step.open was 140.

  • We're going to subtract 138, if 138 was the lowest, and then we'll divide by, say, the highest value, 147, minus 138.

  • 147 minus 138 equals nine.

  • And so the net result is two divided by nine.

  • And that equals 0.2222…

  • And what's important about this is that the end value is between zero and one.

  • Try and rewrite the scaleDown and scaleUp functions to incorporate the more generalized approach, which is to subtract the lowest value and divide by the highest minus the lowest value.
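  • A sketch of one possible solution to this challenge, assuming the lowest and highest values in the data are 138 and 147, as in the worked arithmetic above:

```javascript
const lowest = 138;  // assumed minimum across the data
const highest = 147; // assumed maximum across the data

// Generalized min-max normalization: (value - lowest) / (highest - lowest).
function scaleDown(step) {
  const scale = v => (v - lowest) / (highest - lowest);
  return {
    open: scale(step.open),
    high: scale(step.high),
    low: scale(step.low),
    close: scale(step.close),
  };
}

// The inverse: value * (highest - lowest) + lowest.
function scaleUp(step) {
  const unscale = v => v * (highest - lowest) + lowest;
  return {
    open: unscale(step.open),
    high: unscale(step.high),
    low: unscale(step.low),
    close: unscale(step.close),
  };
}

console.log(scaleDown({ open: 140, high: 147, low: 138, close: 142 }));
// open: (140 - 138) / 9 = 0.2222…, matching the worked example
```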

  • In our last tutorial, we talked about normalizing data, and in this tutorial, we're gonna write the neural nets and we're gonna put that normalized data into it.

  • So first, let's scale all of our raw data.

  • We're gonna call this scaled data.

  • It's not yet training data, but it's close.

  • We're going to say rawData.map, and we're gonna map over all those values using our scaleDown function.

  • So now our scaledData will have all of those new values that are normalized.

  • Next, rather than feed in one long array of all these objects, where the neural net just memorizes this one long pattern, we want the neural net to understand smaller patterns and to predict off of those.

  • And so how we're gonna do that is we're gonna create, finally, our training data, and our training data is going to be an array of arrays.

  • We're gonna take our scaled data, and we're going to slice it into chunks of five, starting at the first index, and we're gonna progress by five indexes each time.

  • So that is our training data.

  • We can console.log it out, make sure it looks right.

  • Very nice.

  • Okay, so it's an array of arrays.

  • That's an important concept.
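  • The scaling and chunking steps can be sketched in plain JavaScript like this. The raw data here is hypothetical, shaped like the lesson's open/high/low/close objects, and scaleDown is the divide-by-138 version from earlier:

```javascript
const scaleDown = step => ({
  open: step.open / 138,
  high: step.high / 138,
  low: step.low / 138,
  close: step.close / 138,
});

// Hypothetical raw data: ten steps through time.
const rawData = Array.from({ length: 10 }, (_, i) => ({
  open: 138 + i,
  high: 141 + i,
  low: 137 + i,
  close: 140 + i,
}));

// Normalize every step.
const scaledData = rawData.map(scaleDown);

// Slice into chunks of five so the net learns small patterns,
// not one long memorized sequence.
const trainingData = [];
for (let i = 0; i < scaledData.length; i += 5) {
  trainingData.push(scaledData.slice(i, i + 5));
}

console.log(trainingData.length);    // 2
console.log(trainingData[0].length); // 5
```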

  • Okay, so now that we have our data chunked and normalized and ready to go into the neural net, we're gonna write our neural net: const net equals new brain.recurrent.LSTMTimeStep.

  • That's our neural net, and we're going to define it with a few options.

  • Now, this is important: our scaled-down data has four different properties here, open, high, low, and close.

  • That represents one point in time or one step through time.

  • Our neural net is going to have an input size of those properties; there being four properties, our inputSize will be four, and it's very rare to deviate from that with outputSize.

  • So we'll go ahead and put an outputSize of four as well.

  • And so now we want to define our hidden layers, and our hidden layers are simply going to be eight neurons and eight neurons.

  • So an inputSize of four, hiddenLayers of eight and eight, and an outputSize of four.

  • And now we can learn our stock market data: net.train(trainingData).

  • Now we're gonna tweak our training options just a little bit here.

  • We're gonna put our learning rate at 0.5, and the reason we're going to do that is so it doesn't sort of shoot past the values that we're looking for.

  • We want very small increments toward our goal, and the next is our error threshold.

  • Now, the more data that you end up using with your neural net, potentially the longer it's gonna take to train.

  • And so, this being just in the web browser, and since we want to train to a sufficiently good error, I'm gonna turn it down to 0.2, and as well, let's go and log out our values.
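  • Pulled together, the net definition and training options described above might be sketched like this, assuming brain.js is loaded and trainingData is the chunked, scaled data from before (the option values mirror the ones stated in this lesson):

```javascript
// Four properties per time step (open, high, low, close),
// so inputSize and outputSize are both 4.
const net = new brain.recurrent.LSTMTimeStep({
  inputSize: 4,
  hiddenLayers: [8, 8],
  outputSize: 4,
});

net.train(trainingData, {
  learningRate: 0.5, // value as stated in the lesson
  errorThresh: 0.2,  // a looser threshold, good enough for an in-browser demo
  log: stats => console.log(stats),
});
```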

  • Let's clean this up, all right.

  • And now let's learn.

  • Let's see what comes out.

  • Very cool.

  • All right, So it trained.

  • Let's look out what really matters.

  • So that's console.log, and that'll be net.run.

  • And let's just give it the first item in our training data.

  • So our training data and let's see what comes out Very cool.

  • So our net actually learned something and returned something.

  • Now there's a problem here.

  • If we look at our original values, they're all in the mid one-forties or so.

  • But our values here are not; they're close to zero and one.

  • So what do we want to do there?

  • Well, I'll give you a moment to think about it, Okay?

  • This is where we finally use scale up.

  • This is where we de normalize our values.

  • Here we go.

  • I'm gonna go ahead and add that as a wrapper around net.run, and we'll run it again.

  • And there we go.

  • The values look very familiar: our one-forties or so.

  • One thing that would be really useful is to not just look one step into the future, which is what we're doing right here.

  • That's what net.run would be.

  • It's gonna take all of our existing steps and say, Hey, this is the potential next step.

  • What we want to do is actually look and see what the next two or three steps may be.

  • In our last tutorial, we predicted the next step for stock market data, and in this tutorial we're going to predict the next three steps.

  • That's really interesting and cool, and it's actually quite simple.

  • So we're gonna actually just change one little method.

  • So rather than use net.run, I'm gonna comment this out, and I'm gonna say console.log, and we're going to do net.forecast. And let's say we just have a couple numbers to go on.

  • We're going to send in our training data, but we're going to only send in a couple of steps of that data, and then we're going to say, with forecast, that we'd like the next three steps. And it doesn't stop there.

  • It's actually going to return an array.

  • Before, we had wrapped net.run in scaleUp, whereas here with net.forecast, because we're getting an array rather than an object (which would be a single next step), the array will be the next steps, and so we can take this and map it to scaleUp.

  • All right, here we go and there's our data.
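  • The switch from run to forecast can be sketched like this, assuming brain.js is loaded, net is the trained LSTMTimeStep from before, and trainingData and scaleUp are as defined earlier:

```javascript
// Take just a couple of known steps to go on.
const recentSteps = trainingData[0].slice(0, 2);

// forecast returns an ARRAY of predicted next steps (here, three),
// so we map scaleUp over it instead of wrapping a single object.
const predictions = net.forecast(recentSteps, 3).map(scaleUp);

console.log(predictions); // three denormalized open/high/low/close steps
```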

  • That's what the net has learned.

  • And this is its next prediction.

  • So you've got the beginning of one.

  • There's our second and third.

  • I think the console doesn't log everything below the cursor, but the idea is there, and so is the data.

  • In this tutorial, we're going to take a recurrent neural network, and we're gonna learn math, and we're gonna do it character by character.

  • And at first that sounds kind of weird, but you'll notice something interesting: with our training data here, there are no numbers, not directly.

  • These are strings, and even though there are numbers inside of them, we're gonna feed these directly into the neural net; this is where it gets interesting, especially with recurrent neural nets.

  • So the way that a recurrent neural network works is it has an input map, and that map is kind of like an array, and that array maps a value that is coming into the neural net to a neuron, and it does so by index.

  • And this is how it works.

  • By first sending in our training data, the training data is analyzed and we pull out unique values, like zero, the next value being plus, and the next value being equals, because zero is repeating up here.

  • We've already established it; equals would be the next one. Zero is repeating again, and zero and plus repeat again, and the next unique value we have is one, the next item in our array.

  • And that happens throughout our entire training set.

  • So 1, 2, 3, 4, 5, 6, 7, 8, and 9, and so we build sort of an input map, and the reason that's important is that the input map lines up with neurons.

  • So our input map is more or less the same size as our input size, and that's calculated internally inside of the neural net.

  • So it's important, but we don't have to think about it so much.

  • So input map is the same size as input size, and what that means is for each one of these characters, they get their own neuron.

  • And so if we step through a problem, for example zero plus zero equals zero, to the net we feed in a brand new array each time.

  • With that one value activated.

  • And so our input internally may look like this: we've got a bunch of zeros, and the inputs literally look like that. Okay, so input length equals input size.

  • So each one of these values is sort of attached to an input neuron.

  • And this is why that matters.

  • Each time we feed in one of these characters, it goes all the way through the neural net.

  • And then we could do that over and over and over again.

  • So the math problem zero plus zero equals zero literally looks like this to the neural net: it looks like zero, plus, zero (and that's the third one), equals, zero.

  • So that is our first math problem, sort of internally to the neural net.

  • But it does everything sort of automatically for us, and this is a good kind of magic.

  • It's not the bad kind.

  • It's the kind that is predictable.

  • So we'll go ahead and comment that out, and this out too. All right, so we're ready to go ahead and predict our numbers.

  • So this is what happens next.

  • We're going to define our net.

  • const net equals new brain.recurrent.LSTM (long short term memory), and we're gonna have hiddenLayers of 20, and we're gonna net.train on our training data, and we're going to set our error threshold to 0.025, and we're gonna go and log out our values. All right, so we'll go ahead and run it and we'll see what happens.

  • Very cool.
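  • A sketch of the character-level setup just described, assuming brain.js is loaded; the exact training strings here are illustrative stand-ins for the lesson's data:

```javascript
// Strings, not numbers: the LSTM builds its input map from the
// unique characters it sees ('0'-'9', '+', '=').
const trainingData = [
  '0+0=0',
  '0+1=1',
  '1+0=1',
  '1+1=2',
];

const net = new brain.recurrent.LSTM({ hiddenLayers: [20] });

net.train(trainingData, {
  errorThresh: 0.025,
  log: stats => console.log(stats),
});

// Give the net the start of a problem and let it complete it.
console.log(net.run('0+1=')); // expected to complete with '1'
```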

  • Now, a word of caution here.


Neural Networks with JavaScript - Full Course using Brain.js

Published by 林宜悉 on January 14, 2021