
  • What is going on, everybody.

  • And welcome to a tutorial slash showcase of the Keras Tuner package.

  • So one of the most common questions I get on deep learning tutorials and content in general

  • is people asking, how did you know to use that number of layers, and why?

  • And that number of neurons?

  • Or why did you use dropout?

  • Why that degree?

  • Why batch norm?

  • All these things, like, why did you do that?

  • And the answer to that question has always been trial and error, and anybody who tries to tell you they knew what neural network model was going to work is a dirty liar.

  • It's trial and error.

  • Now, of course, there are some tasks, like MNIST, for example, where, ah, a paper bag could solve MNIST to 90% accuracy or more.

  • So obviously, there are some problems that are just so simple any network will do.

  • And then there are some neural networks that will solve most problems, like especially image problems, stuff like that.

  • But for real-world problems that aren't solved yet, the solution is trial and error, and this has historically involved me writing for loops, and then in that for loop

  • I just tweak things.

  • And then I run that overnight, and then I save, like, validation accuracy or loss or both.

  • And then in the morning, I just see, okay, these are the attributes, like three layers at 64 nodes per layer.

  • That seems to be the thing.

  • And then, you know, now I'll test with batch norm, because every time you change one little thing, like dropout or not, 50%, 20%, 10%, batch norm or not, all those things, you've got to keep testing.

  • So anyway, it's historically just been ugly for loops, to be honest with you, but I recently came across the Keras Tuner package, which does everything I was doing, but better.

  • And it does a few other things that I wasn't even doing, which is pretty cool.

  • So I thought I would share it with you guys.

  • Basically, the crux of it is you've got a model, and then you can define little hyperparameter objects inside that model, and then you create this tuner object, and it changes those hyperparameters. And you can choose which hyperparameters to specify; not everything has to be a tunable parameter, but the ones that you do make tunable can be anything like an Int or a Float.

  • You can also do a Choice or a Boolean, and the Ints and Floats are like a range with a step size.
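
As a rough illustration of those hyperparameter types (a minimal sketch, not code from the video; names like "units" and "learning_rate" are made up for the example):

```python
from kerastuner.engine.hyperparameters import HyperParameters

hp = HyperParameters()

# An integer range with a step size: 32, 64, 96, ..., 256
units = hp.Int("units", min_value=32, max_value=256, step=32)

# A float range, e.g. for a dropout rate
rate = hp.Float("dropout", min_value=0.1, max_value=0.5, step=0.1)

# A choice from a fixed list of values
lr = hp.Choice("learning_rate", values=[1e-2, 1e-3, 1e-4])

# A boolean flag, e.g. whether to add batch norm
use_bn = hp.Boolean("batch_norm")
```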

  • So anyways, um, that's pretty cool.

  • To get it, you just pip install keras-tuner.

  • Also, I'm using Keras Tuner 1.0.0.

  • Now, as a tutorial person, this throws up tons of red flags for me, but I'm still gonna do it anyways.

  • But 1.0.0 means one of two things: either, one, it will never get updated again, which would be sad, or, two, it's going to be updated a ton in the next year.

  • And this tutorial could be rendered out of date very quickly.

  • If you're hitting errors, check the comment section, Google the errors, or you can pip install keras-tuner at the exact version that I'm using.

  • So once you have that, you're ready to get started.

  • But first, a quick shout-out to the sponsor and supporter of this video, Kite, which is an ML and AI based autocomplete engine for Python, and it works in, like, all the major editors: Sublime Text, VS Code, PyCharm, Atom, Spyder, Vim.

  • Anyway, it's on all the major editors, and it's actually a good autocomplete.

  • Honestly, I hate autocomplete.

  • So when they reached out and were, like, interested in becoming a partner, I was like, well, we'll see; but it's actually pretty good.

  • Um, it took me a little bit to realize how good it was.

  • And in fact, I've used it now for about 100 hours, and the real test was when I removed it; like, I just disabled it, just to see what the difference was.

  • I was like, oh wow, there are so many big differences. The biggest thing is, like, the autocomplete:

  • it's not just variables,

  • it's, like, methods and even, like, snippets of code, and, like, the imports are so much nicer. Huge differences.

  • I'll show some of them here, but I highly encourage you guys:

  • I'll put a link in the description

  • if you want to check it out.

  • It's super cool, and it comes with a Copilot as well.

  • It's kind of a standalone app,

  • um, if you want it. Anyways, basically it's like live-updating documentation for whatever you're working on, which again is super useful.

  • So, yeah, it's a really exciting sponsorship.

  • I don't take, like, any sponsorships, if you haven't noticed.

  • Like, I don't do VPNs, mobile games, all that stuff, because it just doesn't really make any sense.

  • But this is actually a really cool product.

  • So I'm excited for them to be supporting the free education, and, uh, definitely check them out.

  • The autocomplete is the best.

  • Um, so, okay, cool.

  • So let's go ahead and get started.

  • So first we need a data set and we need a model.

  • There's, like, so many things that we need just to even get started with Keras Tuner.

  • So I'm gonna try to, like, truncate this as much as possible.

  • But first we need a data set.

  • The dataset that we're gonna use is Fashion-MNIST, so it's like MNIST, only...

  • like I said, MNIST is, like, too easy, so it's kind of no good for showing this.

  • So we're gonna use Fashion-MNIST.

  • So we'll say from tensorflow.keras, uh, .datasets we're going to import...

  • and just as an example,

  • here are all the datasets: fashion_mnist.

  • So then, ah, fashion_mnist.load_data is what we need to load the data, and then, just to show off the Kite Copilot, come down here.

  • It returns this right here.

  • So I'm actually just gonna copy and paste

  • this, um, since most people probably aren't running the Kite Copilot

  • right now; you can go to the text-based version of this tutorial or download Kite real quick.

  • Uh, anyway, so here's all your data.
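
For reference, the loading step he's describing looks roughly like this (a sketch using the standard tf.keras API; the variable names are the usual convention):

```python
from tensorflow.keras.datasets import fashion_mnist

# load_data returns two (images, labels) tuples: train and test
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

print(x_train.shape)  # (60000, 28, 28)
print(y_train[:10])   # integer class labels 0 through 9
```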

  • Uh, and I think I'm just gonna show, like, a quick visual example of the data, and then we'll just copy and paste a model.

  • Honestly, um, I don't want to waste too much time.

  • So first, let's just quickly display, um, one of the data samples.

  • So we're gonna say, um, import matplotlib.pyplot as plt. Lovely autocomplete.

  • Thank you very much.

  • plt.imshow.

  • Oh, no.

  • It tried to help me: imshow, and that will do.

  • x_train; we'll go with the zeroth one, uh, and then plt.show.

  • And then I'm also gonna do a cmap:

  • cmap equals gray here, only because it's gonna be all colorful if I don't.

  • And people are gonna be like, wow, that's loud; people are gonna be kind of confused.

  • So I'm gonna say, uh, python3.7, Keras Tuner tutorial, and run it.
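
That display step, as a minimal sketch:

```python
import matplotlib.pyplot as plt

# Show the first training image; cmap="gray" so it isn't rendered in color
plt.imshow(x_train[0], cmap="gray")
plt.show()
```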

  • So this is kind of the data.

  • So again, it's a 28 by 28.

  • Like MNIST, it's black and white.

  • It's got 10 classifications.

  • It's just articles of clothing rather than handwritten digits.

  • So in this case, it's like a boot or a shoe of some kind.

  • I don't really know what that is, but, um okay, so let's do a different one.

  • Let's see, hopefully we can get something a little more recognizable, possibly.

  • Okay, so some sort of short sweater thing.

  • Okay, so that's what we're dealing with.

  • So it's just a little more challenging to get, like, 98% accuracy on, as compared to, like, MNIST.

  • So this is a thing that we can practice tuning on.

  • MNIST just doesn't work.

  • So once we have this, we actually need a model.

  • Now, again, I don't really see any benefit in this tutorial for me to, like, write out this entire model for you guys.

  • It would just be a waste of time.

  • So I'm actually just going to copy and paste.

  • I'll either copy and paste this into the description, or I'll put up a gist, or otherwise go to the text-based version of the tutorial.

  • But this should all be...

  • except for this hp here, this should all be totally understandable by you guys.

  • There should be nothing here that is like, what? What's that?

  • So, um, this hp stands for hyperparameters, which is what we're going to use to tune as we go through.

  • This is a comment that I made for the actual tutorial.

  • Um, so this creates our model object and returns the model; pretty simple.

  • So from here, you could define the model and do a dot fit, for example.

  • So the other thing that we need to do is import.

  • Um, so let's go ahead: from tensorflow, er, actually from tensorflow,

  • we're gonna import keras.

  • Uh, then we need to import all that layer information.

  • So from tensorflow.keras.layers import all of these; basically, we're going to need every single one.

  • So Dense; we're actually using Dropout; then Conv2D,

  • yes, uh, and Activation.

  • We've gotta Flatten.

  • And we've got some MaxPooling2D.

  • Cool.

  • Done.
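
The pasted model he's referring to is along these lines (a sketch of a simple convnet reconstructed from the layers he imports, not a verbatim copy of his code; it assumes x_train from the loading step above, reshaped to have a channels dimension as done shortly below, and the hp argument sits unused until the tuning part):

```python
from tensorflow import keras
from tensorflow.keras.layers import (Dense, Dropout, Conv2D,
                                     Activation, Flatten, MaxPooling2D)

def build_model(hp):  # hp: HyperParameters object, used later for tuning
    model = keras.models.Sequential()

    # assumes x_train has shape (N, 28, 28, 1)
    model.add(Conv2D(32, (3, 3), input_shape=x_train.shape[1:]))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Flatten())
    model.add(Dense(10))
    model.add(Activation("softmax"))

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```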

  • The code ran over my face a little bit again.

  • There is the text-based version of the tutorial, and again, I'm really just trying to run through this.

  • There shouldn't really be anything confusing to anybody here.

  • So once we have this, um, you know, real simply, we can test this.

  • I just want to make sure it works.

  • So I'm gonna say, uh, model equals...

  • we're gonna call that build_model, and so then from here

  • we should be able to say, like, model.fit.

  • Uh, yet again,

  • a cool thing from Kite is, like, this entire snippet, boom, done for you, and you can even kind of tab through it.

  • So, for example, batch size:

  • uh, let's go 64. Epochs...

  • so then again, I'm just tabbing through that, which is pretty cool.

  • Validation data is going to be a tuple of information in this case.

  • What we're gonna do, we're gonna say x_train,

  • y_train... so x_train, or x_test rather; it tabbed me over. x_test,

  • y_test.

  • And then actually, in this case, I don't care about verbose.

  • I had it there because I was building this tutorial in a notebook first.

  • So, uh, Okay, so then we can do that.

  • So we need to have our x data,

  • so x_train, and then this will be our y_train, and we need to reshape the data before it complains at us.

  • x_train equals x_train.reshape, and it's -1, 28, 28, 1. Again, if this is confusing to you, I strongly encourage you to go back to one of the deep learning playlists.
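
Put together, the quick smoke test he's describing looks roughly like this (a sketch; batch size and epochs as mentioned):

```python
# Add a channels dimension: (N, 28, 28) -> (N, 28, 28, 1)
x_train = x_train.reshape(-1, 28, 28, 1)
x_test = x_test.reshape(-1, 28, 28, 1)

model = build_model(None)  # hp is unused for this plain test run
model.fit(x_train, y_train,
          batch_size=64,
          epochs=1,
          validation_data=(x_test, y_test))
```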

  • Uh, okay, so that should all work.

  • I do want to test it really quick to make sure it works before we get into the actual tuning aspect of things.

  • So I'm gonna save, uh, run that again, please work.

  • Oh, it's because we have the hp there.

  • Let me just move it real quick.

  • Uh, put this back. Please work.

  • Whoa.

  • Okay, so it's training.

  • We can actually see

  • it's already working.

  • Um, we could just go to one epoch. So, for example, after one epoch it's actually not that accurate, like 77%.

  • Not much.

  • Um, but whatever; I can just tell you it's not above 90%.

  • Okay, so the question would be, okay,

  • what do we do from here?

  • Like, can we actually do better than this model?

  • So the, you know, the Fashion-MNIST dataset is still easily learnable.

  • Uh, what's not easy is being 99% accurate, as opposed to, like, MNIST.

  • So what you would do from this point, like if you're me and you're trying to compete in a Kaggle competition, is now you're just gonna start changing stuff.

  • But what we can do with the Keras Tuner package is, we can, like, automate that entire process.

  • So, uh, what I'd like to do now is I'm gonna move that over, and, um, we're going to import a couple of things again.

  • I'm gonna try to keep this as short as possible.

  • Anyway, I'm going to copy and paste.

  • And so, from kerastuner.tuners we import RandomSearch, and then from kerastuner.engine.hyperparameters we're just importing HyperParameters.

  • And again, these are the things that allow us to do, um, like the Int and the Float in ranges, Choice, Boolean.

  • And I want to say there's another one, but I forget it.

  • But anyway, again, the text-based version has links to, like, all the docs.

  • I'm just trying to show you a general idea of how all this works.

  • The next thing I want to do is I'm going to import time, and then we're going to specify LOG_DIR equals...

  • and for now, yeah, admittedly this is kind of a silly one, and this is a stupid f-string to use, but whatever: an f-string of int of time.time. For now, the LOG_DIR will just be a timestamp.

  • But you could add other things to this if you wanted.

  • Um, but again, trying to keep it short and simple.

  • So, um, okay: so, build_model.

  • Now we will pass the hyperparameters, and so far nothing is actually unique here.

  • So,

  • the next thing that we're gonna do is, I'm gonna come down to the very bottom here, and I'm gonna comment out these two things, because they won't even work any longer.

  • And in fact, I'm just gonna get rid of them, because that build_model call is no longer functional in the way that we have it, as long as we're gonna pass a hyperparameter object there.

  • So, uh, first we're gonna specify the actual tuner that we intend to use, and that's gonna be a RandomSearch.

  • And here we're going to pass a bunch of things. First

  • is the actual function that we intend to use: build_model.

  • Nope;

  • no parens there.

  • The RandomSearch object here will automatically do the hyperparameter stuff for us.

  • So you just pass the function name here for your model. Then we've got objective.

  • This is the thing that we're interested in tracking.

  • So, in our case, val_accuracy; that's what I'm interested in.

  • Um, then we're going to say max_trials.

  • I'm gonna set this, for now...

  • I don't know.

  • For now, one. We have no dynamism here, so it doesn't matter.

  • So I'm gonna set that to one; I'll explain it in a moment. And then we're gonna say executions, whoops, executions_per_trial equals one.

  • And, uh, then we just give it the directory, so directory equals LOG_DIR.

  • Cool.
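
So the tuner definition he just walked through looks roughly like this (a sketch assuming the kerastuner 1.0 import path and the timestamp LOG_DIR from above):

```python
import time
from kerastuner.tuners import RandomSearch

LOG_DIR = f"{int(time.time())}"  # results directory, just a timestamp

tuner = RandomSearch(
    build_model,               # pass the function itself, no parens
    objective="val_accuracy",  # the metric the tuner ranks trials by
    max_trials=1,              # how many random hyperparameter picks to try
    executions_per_trial=1,    # how many times to train each pick
    directory=LOG_DIR)
```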

  • So: max_trials and executions_per_trial.

  • So trials... the thing is, your options can explode very quickly when we're allowing ourselves to have a dynamic number of layers, a dynamic number of nodes per layer, or features per layer, um, and then a bunch of Boolean options, like, do we want dropout?

  • And then if we do, what range of dropout? This can explode to, like, thousands or even millions of combinations pretty quick; like, with a two-layer convnet it's pretty hard to get to millions.

  • But it can get to a huge number; like, 1,000 different combinations of models is not going to be very hard.

  • So here, max_trials is just how many random pickings do we want to have; and then executions_per_trial,

  • this is, like:

  • if you're just trying to search for a model that learns at all, then I would keep this at one.

  • If you're trying to eke out the best performance, I would set this to, like, three or five, or maybe even more.

  • Um, as long as you're, like, shooting in the dark, just trying to find something that works, or just a general idea of what seems to work, I would keep this low.

  • But the point of this is, for each dynamic version, you're gonna train it this many times. So when it randomly says, hey, let's do four layers at 64 nodes per layer, it will train that one time.

  • But you might want to say two or three times, because if you're trying to eke out, like, 1% accuracy, given two different runs of the exact same model, you might find that you get a difference

  • that's higher than 1%.

  • So if you're really trying to eke out performance, one, you might be seeking a model that has a smaller standard deviation of validation accuracy.

  • But also, you might want to run it a few times to figure out what the true average is, because you might have just gotten lucky, or maybe you got unlucky, or whatever. So hopefully that makes sense.

  • If that doesn't make sense, uh, feel free to ask in a comment below.

  • So that's our tuner object.

  • Um, and then now what we're gonna do is actually do a tuner search.

  • So tuner.search, and in this case we're gonna specify our x data, which will be x_train.

  • And then our y data will be

  • y_train.

  • And I think I'll go ahead and tab that over.

  • Actually, I suppose if we want to be all proper, I think it's correct PEP 8

  • to remove those, uh, x and y keywords.

  • Um, and then we need epochs.

  • So, how many epochs do you want to train every single time? Again,

  • this really depends on your needs.

  • I'm gonna set it to one, just because, to iterate over this stuff, it's still gonna take a while.

  • And in fact, we're probably not even gonna run it fully

  • locally. I'm just suggesting maybe start with one. Uh, batch size, again:

  • this is gonna vary by, you know, what

  • you're running this on.

  • I'm gonna say 64, then validation data... that I did...

  • I did...

  • Dang it.

  • I'll type it: validation_data equals... I've been so spoiled.

  • I haven't really had to type too much.

  • Uh, x_test,

  • and then y_test.

  • Okay, so now what this will do is search, given these parameters, right?

  • And then what kind of stuff is it gonna search?

  • It's gonna be based on this information here; for now, we're only gonna run one trial.

  • Again, I'm going to just quickly, uh, run this to do a bug check, and then we'll actually start making the model truly dynamic.

  • So that ran; good.

  • So now what we're gonna do is come to our actual model and start adding things that make it dynamic.

  • So, first of all, uh, for our input layer:

  • let's say we take 32, and we want 32...

  • d'oh!

  • Yeah, sure,

  • maybe it'll be 32.

  • But maybe we want it to be anything from 32 to 256.

  • Um, but we don't want it to step like 32, 33, 34, right?

  • We want to have a step size there.

  • So the way that we do that is hp dot capital-I Int.

  • And then we're gonna give it a name.

  • And so I'm gonna say input...

  • um, input_units, not shape.

  • Uh, and then it's the min value, max value,

  • so 32 to 256, and then the step size; I'm gonna say 32.

  • So, um, in fact, I'll just add the actual argument names to this one.

  • I'm not gonna do it every time, but just to make it super clear: min_value, max_value, and then this is the step.

  • So I'm just making sure it's on screen.

  • I'm not covering it.

  • So now the input units is a dynamic value that will be randomly chosen, but somewhere in that range.
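
Inside build_model, that first layer now looks roughly like this (a sketch; the kernel size and input shape carry over from the earlier model):

```python
model.add(Conv2D(hp.Int("input_units",   # name shown in the results
                        min_value=32,
                        max_value=256,
                        step=32),        # tries 32, 64, 96, ..., 256
                 (3, 3),
                 input_shape=x_train.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
```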

  • Okay, so, uh, yeah, looks good.

  • So again, I'm gonna save that really quickly and just make sure it runs with no bugs.

  • Cool.

  • So in theory, that ran with a different, you know, who knows what number of units.

  • I don't know which one it was in that case; uh, the tuner does save all that information.

  • I'm just, again, bug checking at this stage.

  • So, uh, we ran the search.

  • And so, okay,

  • we got a unique, or a dynamic, number of, uh, units for the input to this convnet, and then we can come down here. So the input layer is a little unique, because it has an input shape that we need to make sure we retain.

  • But then the subsequent layers, um, are all basically the same.

  • And in fact, I'm gonna remove the max pooling, because that's gonna cause trouble for us:

  • we're gonna run out of things to pool in some cases.

  • So I'm gonna remove that.

  • And now what we're gonna say is, um, we're going to do for i in range, and again we're gonna pick hp.Int, and the name in this case will be n_layers, and we will go from 1 to 4.

  • And since we want the step size to be one, we don't need to input anything more.

  • So from here we'll do that.

  • So now, um, model.add Conv2D.

  • And then again, what if we want this 32 again to be unique?

  • So I'm gonna take hp.Int here,

  • copy that,

  • paste that over that 32. Rather than input_units, I'm actually going to say, um, underscore...

  • I'll make this an f-string.

  • Um, it's gonna be conv_{i}_units, and again, 32 to 256.

  • Cool.

  • So looks good to me.

  • model.add, Activation, relu.

  • Okay, I think that's good.
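
Pulling the pieces together, the now-dynamic build_model looks roughly like this (a sketch under the same assumptions as before; the repeated block drops max pooling, as he describes):

```python
def build_model(hp):
    model = keras.models.Sequential()

    model.add(Conv2D(hp.Int("input_units", min_value=32, max_value=256, step=32),
                     (3, 3), input_shape=x_train.shape[1:]))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    # Each trial randomly picks 1 to 4 extra conv blocks (no pooling here,
    # so small inputs don't run out of spatial dimensions to pool)
    for i in range(hp.Int("n_layers", 1, 4)):
        model.add(Conv2D(hp.Int(f"conv_{i}_units",
                                min_value=32, max_value=256, step=32),
                         (3, 3)))
        model.add(Activation("relu"))

    model.add(Flatten())
    model.add(Dense(10))
    model.add(Activation("softmax"))

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```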

  • We can go ahead and save that.

  • Um, let's come down to here.

  • Go ahead and run that.

  • Make sure it runs.

  • It does.

  • Okay.

  • So I could continue to let that run, and in fact... let's see, I think we said one epoch.

  • So hopefully at the end of it, at least one epoch should give us a brief summary, I would hope.

  • Um, oh, that's right; we did set that. So, at the end of it...

  • Okay.

  • So we can see, fine,

  • in this case we had a neural network where, for that input convnet, um, the input units was 32.

  • Then we had two conv layers, so it's a little bit out of order, actually.

  • But the input unit was here, right?

  • So we actually had a 32.

  • So it's 32 by 96 by 32; not 'by' exactly, but per layer.

  • So the input layer is 32, like that

  • convnet has 32 features, and then feeds into 96 features, then 32 features.

  • So it says n_layers is 2.

  • But it's actually that plus one more.

  • It's just that n_layers is the count in this for loop here, uh, here.

  • Right.

  • So the actual number of conv layers is n_layers plus one. I hope that makes sense.

  • Um, coming back down here.

  • So then this tells us what the score is; it also tells us what the best step for that score was, which is useful.

  • But we only did one epoch.

  • Anyways, um okay, so we get the information.

  • But of course, this was relatively meaningless, because we, um, we only tested one thing.

  • So...

  • so the next thing to do would be to make this a much, much bigger test.

  • But each test takes, it says, 19 seconds.

  • So and it's gonna vary depending on how many layers you do and all that.

  • So it's just going to vary.

  • So what I've done is, I went ahead and created...

  • I just went ahead and saved it as a pickle object.

  • So I took tuner; at the end,

  • you get this tuner object.

  • So at this stage, all I did was import pickle,

  • and I saved the tuner object to a pickle.

  • I thought I had saved it, but I made a mistake.

  • So I ended up just retraining

  • everything. So here I am, like, an hour later.

  • I've got my pickle.

  • I've got my directory.
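
The save he's describing is roughly this (a sketch; the filename here is made up for the example):

```python
import pickle

fname = f"tuner_{int(time.time())}.pkl"  # hypothetical filename

# Persist the whole tuner object so its results can be inspected later
with open(fname, "wb") as f:
    pickle.dump(tuner, f)

# Later, load it back instead of re-running the search
with open(fname, "rb") as f:
    tuner = pickle.load(f)
```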

  • Um, and just to really quickly show you: inside this directory, and then inside of untitled_project, we have all of the trials that ran.

  • And then for each of the trials,

  • we have a checkpoint, um, which obviously is the model.

  • And then the trial.json file contains,

  • obviously, your ID,

  • but then, more importantly, the hyperparameters and the score that that model got.

  • So even if you didn't have the pickle, you could still go back in time and get this stuff.

  • But anyways, it would be kind of challenging and tedious and annoying, especially if you've got many iterations per trial.

  • Ah, you'd have to pair them up, I guess, by exact hyperparameters.

  • I don't really know.

  • Anyway, it would be annoying.

  • So it's nice to have the pickle saved instead, just to come back to later.

  • So, um so yeah, so cool.

  • And so you have that.

  • And then also, I want to say, here you've got... is it the oracle?

  • I think it's inside the oracle that has the best information.

  • Or it's definitely tuner0, one of those.

  • Ah, but you can also interact with the tuner object, which is why I saved it as a pickle.

  • So first of all, we can get the best hyper parameters right out of the gate.

  • Uh, just by running that. And in fact, before I do that, let me...

  • let me comment out this and this, and then, we don't want to do the search again,

  • so I'm gonna comment that out.

  • Cool.

  • Uh, okay.

  • So let's just run that real quick, and we get, here, like, the best hyperparameters.

  • Now, I don't really find that very easy to read, but the next thing, which is a little more useful to me, uh, is the results summary.

  • So this should give us, I think by default, the top 10 models.

  • Oh, here they are.

  • And as we can see here, among the top 10, basically the best was 87%.

  • But it's anything from 86 to 87.

  • It's, you know, within, what, half a percent accuracy it looks like, or 0.6, um, so pretty, pretty close there.

  • And also 87 is the very best.

  • Interestingly enough, in the other one that I trained, the one that I lost, the best was 89.6.

  • So I do think we could for sure find something more than 90.

  • And if you run this, it's totally possible that you'll find some that are better than 90.

  • And the reason for this difference is: even though it seems like we made very minimal things dynamic,

  • we actually did a lot.

  • And so, to actually run through all those, I don't actually totally even know how many there are.

  • So you could have up to four layers, each with a unique number of convolution units, and then all of those combinations times all the possible combinations here.

  • I mean, it would just be a huge number.

  • It's gonna be hundreds or even in the thousands.

  • Someone go ahead and do the math if you want.
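
If you do want the math, a quick back-of-the-envelope count under the exact settings above (8 unit choices per conv layer, 1 to 4 loop layers, plus the input layer's 8 choices) comes out even bigger than his guess:

```python
# 32..256 in steps of 32 gives 8 possible unit counts per conv layer
unit_choices = len(range(32, 256 + 1, 32))  # 8

# For each n_layers in 1..4, every loop layer picks its units independently;
# multiply by the 8 choices for the input layer's units
total = unit_choices * sum(unit_choices ** n for n in range(1, 5))
print(total)  # 37440 distinct hyperparameter combinations
```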

  • Uh, anyways, it's a lot, so there's a lot of combinations, but that just goes to show: I already know that somehow, with this exact code, you can actually get 89%.

  • It's just a function of running through all the combinations.

  • So again, it's very nice if something could do it just automatically for you.

  • So finally, the last thing I will show is that you can actually, from here, get the actual model.

  • So, a TensorFlow model: so, print tuner.get_best_models, and then I'm doing a .summary here.

  • But, uh, this is an actual model, so you could actually do a .predict as well.

  • I'm just gonna do a .summary, just so you can see it.
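
Those inspection calls, as a sketch (assuming the tuner object from above):

```python
# Best hyperparameter values as a plain dict
print(tuner.get_best_hyperparameters()[0].values)

# Prints a ranked summary of the top trials (10 by default)
tuner.results_summary()

# get_best_models returns real, trained tf.keras models;
# .predict works on them too
best_model = tuner.get_best_models(num_models=1)[0]
best_model.summary()
```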

  • I think this is the easiest way.

  • Like, with the hyperparameters it's still kind of hard to be like,

  • okay, but how would I build this model again?

  • Whereas a .summary really just makes a lot of sense to me.

  • So anyway, we'll just do that really quick, but mostly it's just to show you it's an actual TensorFlow model.

  • So as we can see, you got the input.

  • Ah, the input layer, activation, pooling.

  • Right.

  • So you could just build this model exactly to these specs.

  • So Okay, I think that's enough information.

  • Hopefully you guys have enjoyed.

  • Like I said, this is like one of the most common questions I get.

  • The answer is probably not as intriguing as you maybe would have hoped, but hopefully Keras Tuner can actually make your lives easier. If you have questions, comments, concerns, whatever,

  • feel free to leave them below. And again, shout-out to Kite; if you want to try them out,

  • it's a totally free plugin, and like I said, it's pretty awesome; like, I'm really enjoying it, so hopefully you guys will enjoy it as well.

  • And thanks to them for supporting the video, um, that's it.

  • I will see you guys in another video.
