
  • What's going on?

  • Everybody, welcome to part 10 of the machine learning / deep learning with Halite III series.

  • Up to this point, we just kind of started talking about our concepts, making our imports, talking about how we're going to structure things.

  • Uh, let's just go ahead and jump into it.

  • So at this point, we've shuffled all of the files and we're ready to separate them out into training and validation files.

  • They're already shuffled, so we can just iterate up to whatever the, you know, validation amount is for the validation stuff.

  • And then after that, do the training.

  • Also, we only needed that once.

  • We don't need it every time through the epochs.

  • And like I said, I'm not going to use the TensorFlow generators stuff, because I think it just overcomplicates things; it's just not really well done for the user, in my opinion.

  • So we want to make sure we have some logic that allows us to just load this data one time.

  • Even if we want to, you know... because basically, we could also set this to True once we have those files, and then we don't have to load them at all.

  • But If we do need to load in the data, we just want to load it in one time.

  • Not every time, per epoch.

  • All right, let's get to it.

  • So, first of all, we want to say: if load_trained_files.

  • If that's not set, it means we have to load them.

  • So now what we want to do, before we even enter the epochs, is load in, um, the validation files.

  • So, if load_trained_files, so if we happen to have them, then we can just say test_x = np.load('test_x.npy'), right.

  • That'll be there eventually.

  • It's not there yet.

  • So we're gonna have to do this, like, because this is False anyways. But if it was, wouldn't that be nice?

  • We could load it, but we can't.

  • Therefore else.

  • So now what we need to do is, um, basically, we'll start with two empty lists: test_x and test_y.

  • And then we want to iterate over all of our files.

  • So we're gonna say for f in tqdm(...).

  • We're just going to use this to, like, let us know where we are in each process.

  • Um, and then that's gonna be training file names up to the validation game count.

  • So the first, what is it?

  • 50, I think we chose.

  • Yeah, the first 50, that'll be our validation data.

  • So then what we're gonna say is data = np.load(f), that file.

  • You know, that data?

  • It's an entire game.

  • So it's a list of lists where the list itself is a bunch of these.

  • You know, it's not really the game visualization, but it sort of is just all the data from each of the coordinates basically on the game map that we're curious about.

  • So that'll be the... so the zeroth element is the 33 by 33 data.

  • And then the first element is the move that was associated with it.

  • You know?

  • What move did the agent make based on that data?
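
  • Just to make that layout concrete, here's a minimal sketch of reading one of those saved game files, assuming each entry is a (33x33 board data, move) pair as described above; the file name and the allow_pickle flag are assumptions, not the exact code from the video.

```python
import numpy as np

# Load one saved game file (the name is illustrative).
# allow_pickle=True is assumed, since each entry holds nested Python lists.
data = np.load("some_game_file.npy", allow_pickle=True)

first_entry = data[0]
board_state = first_entry[0]   # the 33x33 map data around the agent
move_made = first_entry[1]     # the move the agent took for that state
```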

  • Okay, so now, um, data = np.load.

  • So now what we need to do is iterate over that data.

  • So for d in data, we wanna test_x.append(d[0]), and then test_y.append(d[1]).

  • And, uh, that's all good.

  • And also, probably append np.array there.

  • So the Ys don't actually need to be a numpy array, but I'm pretty sure for the X data, I think TensorFlow is gonna bark at us if we don't go with an array for all the X data.

  • Okay, once that's done, np.save, and, um... and I hit my insert key there.

  • What happened?

  • What is happening?

  • There we go.

  • Uh, test_x.npy.

  • This is, like, the same kind of situation as that previous tutorial where I made an egregious mistake.

  • So just go ahead and expect that this is not gonna work.

  • You know, really, in any sentdex video, just really keep your standards very low and everybody stays happy.

  • Okay, so in this case, now, we save those files so that later we can look for them if we believe we can load them.
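
  • As a rough sketch, that whole validation branch could look something like the following; the names training_file_names, VALIDATION_GAME_COUNT, and LOAD_TRAINED_FILES are reconstructed from the narration (training_file_names is assumed to be the shuffled list of game file paths from earlier), so treat this as an approximation rather than the exact source.

```python
import numpy as np
from tqdm import tqdm

VALIDATION_GAME_COUNT = 50   # the first 50 shuffled games become validation data
LOAD_TRAINED_FILES = False   # flip to True once test_x.npy / test_y.npy exist

if LOAD_TRAINED_FILES:
    test_x = np.load("test_x.npy")
    test_y = np.load("test_y.npy")
else:
    test_x = []
    test_y = []
    for f in tqdm(training_file_names[:VALIDATION_GAME_COUNT]):
        data = np.load(f, allow_pickle=True)   # one entire game
        for d in data:
            test_x.append(np.array(d[0]))      # the 33x33 map data
            test_y.append(d[1])                # the move that was taken
    test_x = np.array(test_x)
    np.save("test_x.npy", test_x)
    np.save("test_y.npy", test_y)
```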

  • Okay, so we've done that, and we can get away with that because it's only 50 files.

  • And even when the files were, like 50 megabytes each or whatever, we can still easily load in 50.

  • But what about in this case where, you know, we've got, like, 3,000 of these files that we're going to use? And while they're only one megabyte each...

  • That's no big deal, right?

  • But when they're 50, it's a little bigger of a deal.

  • And then what if we even have even more games?

  • Like, what if we have 10,000 games that we're going to go with at some point?

  • We've got to...

  • We have to chunk these and load them in as batches.

  • So that's what we want to do!

  • Now, the next question is, how do we separate these into batches?

  • Well, um, you know, there are a bunch of different ways: we could slice and then use some sort of counter that keeps adding plus 50 and slices based on that.

  • You really could do that.

  • Or the thing I always do is I go to google dot com and then I search.

  • Um, uh, chunk, split... let's see: split a list into chunks, Python.

  • That should get us where I want to be.

  • How do we... yeah, click on this very first result.

  • And this is a beautiful generator for doing just that.

  • You pass the list, and then you pass how many items you want in each of the lists, and it will just automatically chunk it, and then you can iterate over those chunks very nicely.

  • I just I use this all the time.

  • I find this to be one of the most useful stack overflow copy pastas that you could possibly make.

  • So, uh, the other thing I'll do is just copy that, just in case I post this code somewhere, so the source is there.

  • Okay, so, um great.

  • Well done, everybody good chunking.
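
  • For reference, that generator is essentially the classic Stack Overflow snippet below (reproduced from memory, so treat it as the standard pattern rather than a verbatim copy):

```python
def chunks(l, n):
    """Yield successive n-sized chunks from list l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]

# Example: chunk a list of 7 items into groups of 3.
print(list(chunks([1, 2, 3, 4, 5, 6, 7], 3)))   # [[1, 2, 3], [4, 5, 6], [7]]
```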

  • Okay, now, the next thing that we're gonna do is define our neural network.

  • Now, in this case, you know, you might already have a previous model, just like before.

  • So, if load_prev_model, so if that's true, then you can say model = tf.keras... I think it's .models, yep, .models.load_model(), and then the previous model name.

  • Easy as that. But we don't have a model, so it's else, and then we need to start specifying our model.

  • So model = Sequential(); we'll go with a sequential type of model.

  • Um, I think we're gonna steal from myself, doo doo doo... let's go to the Keras stuff again.

  • And then it was, what, part three? Part of me just wants to just take a copy and paste of that code.

  • Let's go to the bottom.

  • This looks good, huh?

  • We don't want the binary cross entropy.

  • Um, but the rest should be fine.

  • So I think what I'm gonna do, huh?

  • Uh, everything down to the final dense layer, I think, is what we'll take.

  • Uh, so we will do this.

  • Probably just copy that.

  • This is probably not gonna end well, but hey, nothing ventured, nothing gained.

  • That looks like a regular tab.

  • Well, let's fix that.

  • No, no, no.

  • I thought we fix this before.

  • What?

  • Wait...

  • Oh my gosh, it's the whole thing.

  • I could have sworn we, um... oh, this is really painful. I thought I already set this.

  • Why do I gotta do this again?

  • I just... now I gotta fix all this, because it's probably gonna get angry at me.

  • But, ah, why do I gotta do this?

  • This is super annoying.

  • There's got to be, like, a better way to translate all of this stuff, but I'm gonna do all these.

  • That's really weird.

  • I swear... now I've lost my place.

  • It's cool.

  • I swear in one of the previous tutorials I fixed this already.

  • I'm not sure why this one suddenly has real tabs anyways.

  • Definitely want spaces.

  • So we can trans... transfer it to other, other places.

  • Okay, um, where was I?

  • So, uh, we probably don't need this.

  • Let's do a 64... 64.

  • Let's do, let's do a 3x3 by 64; activation can stay rectified linear.

  • That's fine.

  • Flatten to a dense 64.

  • Ah, we'll keep activation sigmoid.

  • We don't have one... now we're going to go with five options, and then we need to choose sparse, uh, sparse categorical crossentropy for the fit.

  • Um, also, wait, let me think here, on X.

  • Well, we just don't have X.

  • Also, by this point, we won't even... we won't have any of the Xs.

  • So, in fact, what we probably need to say is test_x.shape.

  • So we definitely need that thing to be a numpy array at this point.

  • Um, test X.

  • But also, test_x itself is not a numpy array.

  • We need to convert that to probably be a numpy array.

  • We'll just do that here.

  • test_x = np.array(test_x), because we still can't get a shape.

  • It's a list of numpy arrays up until we do this.

  • So we've got that.

  • Then we got this random function in there.

  • That's fine.

  • Well, we can clean this stuff up later.

  • For now, Uh, let's make it work.

  • I think we're I think we're almost done here.

  • Um, so the only other thing that has been coming up recently is padding='same'; for some reason, in the newer versions of TensorFlow I get a lot of these, like, shape issues, so I would recommend adding padding='same' to every convolutional and pooling layer.

  • Hopefully that's all where it's meant to be, um, because you have to specify it, and for some reason the defaults have changed.

  • And then now, if everything doesn't add up perfectly, you'll get an error, and it's really, it's really tedious.

  • That's something that you would want to happen in the lower-level TensorFlow, but not in the higher-level TensorFlow layers API.

  • I think it's really dumb that they've, like, gone back to having that stuff.

  • You know, I used to not have to think about it.

  • If you used the TensorFlow layers... and I think that was a better choice, personally, but it is what it is.

  • Okay, Okay, that's our model.

  • Fantastic.
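
  • Putting those pieces together, the model definition could look roughly like this; the layer sizes, the load_prev_model / prev_model_name names, the input_shape slice, and the softmax on the final five-unit layer are reconstructed or assumed from the narration (and test_x is the validation array built above), so treat it as a sketch rather than the exact code.

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Activation

LOAD_PREV_MODEL = False
PREV_MODEL_NAME = "previous_model.h5"   # illustrative file name

if LOAD_PREV_MODEL:
    model = tf.keras.models.load_model(PREV_MODEL_NAME)
else:
    model = Sequential()

    # padding="same" on every conv/pooling layer to dodge the shape errors mentioned above
    model.add(Conv2D(64, (3, 3), padding="same", input_shape=test_x.shape[1:]))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2), padding="same"))

    model.add(Conv2D(64, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2), padding="same"))

    model.add(Flatten())
    model.add(Dense(64))

    # five output options, one per possible move, paired with sparse categorical crossentropy
    model.add(Dense(5, activation="softmax"))
```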

  • So then we're going to specify our optimizer, and that's gonna be tf.keras.optimizers dot capital-A Adam.

  • And then we'll say the learning rate, learning_rate=0.001. So, anyway, let's just do it this way: 1e-3.

  • Um, we might find that 1e-4 or 1e-5 makes more sense.

  • I think to start we'll get away with 1e-3.

  • But we'll probably end up having to check a few.

  • Also, we can set a decay.

  • I'll also set that to 1e-3.

  • Okay, then model, model.compile, and we will say loss equals sparse_categorical_crossentropy.

  • That's really long.

  • We need, like, an 'scc', ah, shorthand.

  • I know they have it for, like, 'mse' for mean squared error.

  • We need 'scc', please.

  • Maybe that's a thing.

  • Now, someone try it and let me know.

  • Optimizer: optimizer equals the opt that we just defined.

  • And then the metrics that we'll track: metrics equals accuracy.

  • Fantastic.
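
  • As a sketch of that compile step, assuming the 1e-3 learning rate and decay mentioned above (the learning_rate and decay keyword names depend on which TensorFlow/Keras version you're on, so adjust as needed):

```python
# Adam optimizer with the values discussed above; keyword names may vary by TF version.
opt = tf.keras.optimizers.Adam(learning_rate=1e-3, decay=1e-3)

model.compile(loss="sparse_categorical_crossentropy",
              optimizer=opt,
              metrics=["accuracy"])
```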

  • Okay, so step one is Let's see if this compiles, Huh?

  • So I typed python sentdebot...

  • Oh, no, that's not what I want for, uh, python.

  • It was sentde... sentde-train.

  • Not the part nine one, but this one.

  • Okay.

  • Okay, so it looks like our model does compile, and, uh, we are ready to go through epochs, so back to our code.

  • Lovely, lovely code.

  • Okay, so now: for e in range of epochs.

  • Uh, epochs... and then a print.

  • Uh, 'currently working on epoch', epoch... cool.

  • So we'll know which epoch we're on.

  • Um, then what we're gonna say is training_file_chunks, and that is equal to that chunks generator that we built here, and then you pass l and n.

  • So the entire list is all of our files.

  • So that's training_file_names after the validation... whoops, I'm sorry, almost fudged up there.

  • Okay, so, validation game count.

  • Onward.

  • And then, uh, training, training chunks is done.

  • Okay.

  • I said fudged, by the way.

  • Okay.

  • Training file chunks.

  • Um, probably, actually, just define that up here.

  • That should be fine.

  • So, put that up there.

  • We don't need to define that every epoch.

  • Okay.

  • Great.

  • For idx and training_files, uh, in enumerate(training_file_chunks): print working on data chunk, idx plus one, out of, um, this... out of this divided by the training chunk size.

  • So that might wind up giving us quite a long number.

  • Let's say we round that to 2 decimal places.

  • Otherwise, that might be like a super long decimal or something.

  • And really, what we should say is math.ceil, but I'm not going to do that.

  • I just want the general gist of where we are.
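
  • For what it's worth, the difference is just round versus math.ceil on that chunk total; a tiny illustration with hypothetical numbers:

```python
import math

total_chunks = 2950 / 200          # e.g. files after validation divided by chunk size
print(round(total_chunks, 2))      # 14.75  (the "general gist" version)
print(math.ceil(total_chunks))     # 15     (the strictly correct chunk count)
```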

  • Um, okay, so now what we'll say is: if load_trained_files or e is greater than zero.

  • So if we've already done one whole epoch, we have the data, so we can actually say x = np.load.

  • Um, and then we'll call this, um, x-something.npy.

  • And this would be the idx.

  • As if I'm asking you guys.

  • Um, but yeah, that should That should be what you want.

  • Copy that, paste.

  • Otherwise, y = np.load...

  • ...y.

  • That's under the ideal scenario; then, else: uh, x equals an empty list.

  • y equals an empty list, for f in tqdm of training file... you know, training_files.

  • What do we want to do?

  • We're gonna say data = np.load(f)... really.

  • This is all pretty similar to what we had before.
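
  • Taken together, the epoch loop being described could look roughly like this; the constants EPOCHS and TRAINING_CHUNK_SIZE, the x-{idx}.npy / y-{idx}.npy file naming, and the flag names are assumptions pieced together from the narration (chunks(), training_file_names, and VALIDATION_GAME_COUNT come from the sketches above), so treat it as an approximation rather than the exact source.

```python
EPOCHS = 10                 # illustrative number of passes over the training data
TRAINING_CHUNK_SIZE = 200   # illustrative number of games per chunk

# Materialize the chunks once, outside the epoch loop; wrapping the generator in
# list() means it can be reused every epoch instead of being exhausted after one pass.
training_file_chunks = list(chunks(training_file_names[VALIDATION_GAME_COUNT:],
                                   TRAINING_CHUNK_SIZE))

for e in range(EPOCHS):
    print(f"Currently working on epoch {e}")

    for idx, training_files in enumerate(training_file_chunks):
        print(f"Working on data chunk {idx + 1} out of {len(training_file_chunks)}")

        if LOAD_TRAINED_FILES or e > 0:
            # After the first epoch, the per-chunk arrays already exist on disk.
            x = np.load(f"x-{idx}.npy")
            y = np.load(f"y-{idx}.npy")
        else:
            x, y = [], []
            for f in tqdm(training_files):
                data = np.load(f, allow_pickle=True)   # one entire game
                for d in data:
                    x.append(np.array(d[0]))           # 33x33 map data
                    y.append(d[1])                     # the move taken
            x = np.array(x)
            y = np.array(y)
            np.save(f"x-{idx}.npy", x)
            np.save(f"y-{idx}.npy", y)
```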

  • Um, also, we