
  • What is going on, everybody.

  • And welcome to a tutorial slash showcase of the Keras Tuner package.

  • So one of the most common questions I get on deep learning tutorials and content in general

  • is people asking: how did you know to do that number of layers?

  • Or that number of neurons?

  • Or why did you do dropout?

  • Why to that degree?

  • Why batch norm?

  • All these things, like: why did you do that?

  • And the answer to that question has always been trial and error, and anybody who tries to tell you they knew what neural network model was going to work is a dirty liar.

  • It's trial and error.

  • Now, of course, there are some tasks, like MNIST for example, where, ah, a paper bag could solve MNIST to 90% accuracy or more.

  • So obviously, some problems are just so simple

  • that any network will do.

  • And then there are some neural networks that will solve most problems, especially image problems and stuff like that.

  • But for real-world problems that aren't solved yet, the solution is trial and error, and this has historically involved me writing for loops to solve it.

  • And then in that for loop, I just tweak things.

  • And then I run that overnight, and then I save, like, validation accuracy or loss or both.

  • And then in the morning, I just see: okay, these are the attributes; like, three layers at 64 nodes per layer

  • seems to be the thing.

  • And then, you know, now I'll test with batch norm, because every time you change one little thing, like dropout or not, 50%, 20%, 10%, batch norm or not, all those things, you've gotta keep testing.
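
The overnight for-loop workflow described here is easy to sketch; `evaluate` below is a hypothetical stand-in for actually training a model and returning its validation accuracy, and the layer/node grids are made-up examples:

```python
import itertools
import random

def evaluate(layers, nodes):
    """Hypothetical stand-in for training a model overnight and
    returning its validation accuracy."""
    random.seed(layers * 1000 + nodes)  # deterministic dummy score
    return random.random()

# Grid-search every combination of layer count and nodes per layer,
# saving the score for each, just like the overnight for-loop runs.
results = {}
for layers, nodes in itertools.product([1, 2, 3], [32, 64, 128]):
    results[(layers, nodes)] = evaluate(layers, nodes)

# In the morning: which combination looked best?
best = max(results, key=results.get)
print(best, results[best])
```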

  • So anyway, it's historically just been ugly for loops, to be honest with you, but I recently came across the Keras Tuner package, which does everything I was doing, but better.

  • And it does a few other things that I wasn't even doing, which is pretty cool.

  • So I thought I would share it with you guys.

  • Basically, the crux of it is: you've got a model, and you can define little hyperparameter objects inside that model, and then you create this tuner object, and it changes those hyperparameters for you. You can specify which hyperparameters are tunable; not everything has to be a tunable parameter, but for the ones that are, you can do anything like an int or a float,

  • or you can do a choice or a boolean, and the ints and floats are, like, a range with a step size.
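
Conceptually, an int hyperparameter with a min, max, and step size is just a grid of candidate values the tuner can pick from; a rough pure-Python sketch of that idea (not the Keras Tuner API itself, and the "units" name is my own example):

```python
def int_candidates(min_value, max_value, step):
    """All values an integer hyperparameter with a step size can take,
    including the max when the step lands on it exactly."""
    return list(range(min_value, max_value + 1, step))

# e.g. candidate layer widths for something like hp.Int("units", 32, 256, step=32)
print(int_candidates(32, 256, 32))  # [32, 64, 96, 128, 160, 192, 224, 256]
```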

  • So anyways, um, that's pretty cool. To get it,

  • you just pip install keras-tuner.

  • Also, I'm using Keras Tuner 1.0.

  • Now, as a tutorial person, this throws up tons of red flags for me, but I'm still gonna do it anyways.

  • But 1.0 means one of two things: either one, it will never get updated again, which would be sad, or two, it's going to be updated a ton in the next year.

  • And this tutorial could be rendered out of date very quickly.

  • If you're hitting errors, check the comment section, Google the errors, or you can pip install keras-tuner at the exact version that I'm using.

  • So once you have that, you're ready to get started.

  • But first, a quick shout out to the sponsor and supporter of this video, Kite, which is an ML and AI based autocomplete engine for Python, and it works in, like, all the major editors: Sublime Text, VS Code, PyCharm, Atom, Spyder, Vim.

  • Anyway, all the major editors, and it's actually a good autocomplete.

  • Honestly, I hate autocomplete.

  • So when they reached out and were, like, interested in becoming a partner, I was like, we'll see. But it's actually pretty good.

  • Um, it took me a little bit to

  • realize how good it was.

  • And in fact, I've

  • used it for about 100 hours now, and the real test was when I removed it; like, I just disabled it, just to see what the difference was.

  • I was like, oh wow, there are so many big differences. The biggest thing is, like, the autocomplete:

  • It's not just variables.

  • It's, like, methods, and even, like, snippets of code, and, like, the imports are so much nicer. Huge differences.

  • I'll show some of them here, but I highly encourage you guys,

  • I'll put a link in the description,

  • to check it out.

  • It's super cool, and it comes with a Copilot as well,

  • just kind of a standalone app,

  • um, if you want it. Anyways, basically it's like live-updating documentation for whatever you're working on, which again is super useful.

  • So, yeah, it's, it's a really exciting sponsorship.

  • I don't take, like, any sponsorships, if you haven't noticed.

  • Like, I don't do VPNs, mobile games, all that stuff, because it just doesn't really make any sense.

  • But this is actually a really cool product.

  • So I'm excited for them to be supporting the free education, and, uh, definitely check them out.

  • The autocomplete is the best.

  • Um so Okay, cool.

  • So let's go ahead and get started.

  • So first we need a data set and we need a model.

  • There's, like, so many things that we need just to even get started with Keras Tuner.

  • So I'm gonna try to, like, truncate this as much as possible.

  • But first we need a data set.

  • The data set that we're gonna use is fashion MNIST, so it's like MNIST, only...

  • Like I said, MNIST is, like, too easy, so it's kind of no good for showing

  • this, so we're gonna use fashion MNIST.

  • So we'll say from tensorflow.

  • keras.datasets we're going to import... can't.

  • Just as an example,

  • here are all the data sets: fashion_mnist.

  • So then, ah, fashion_mnist.load_data is what we need to load the data, and then, just to show off the Kite Copilot completely, come down here:

  • It returns this right here.

  • So I'm actually just gonna copy and paste

  • this, um, since most people probably aren't running

  • Kite Copilot right now; you can go to the text-based version of this tutorial or download Kite real quick.

  • Uh, anyway, so here's all your data.

  • Uh, and I think I'm just gonna show, like, a quick visual example of the data, and then we'll just copy and paste a model.

  • Honestly, um, don't wanna waste too much time.

  • So first, let's just quickly display, um, one of the data points.

  • So we're gonna say, um, import matplotlib.pyplot as plt. Lovely autocomplete,

  • Thank you very much.

  • plt.imshow.

  • Oh, no,

  • it tried to help me; imshow. And then we'll do

  • x, er, x_train; we'll go with the zeroth, uh, and then plt.show.

  • And then I'm also gonna do a cmap;

  • cmap equals gray here, only because it's gonna be all colorful if I don't,

  • and people are gonna be like, wow, that's loud; people are gonna be kind of confused.

  • So I'm gonna say, uh, python3.7 keras-tuner-tutorial.
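
The display step above boils down to an `imshow` call with a gray colormap; a self-contained sketch, using a random 28x28 array as a stand-in for `x_train[0]` so it runs without downloading the dataset, and the Agg backend so it works headless:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# A random 28x28 array stands in for x_train[0], one fashion-MNIST image.
image = np.random.rand(28, 28)

plt.imshow(image, cmap="gray")  # cmap="gray" so it isn't rendered in false color
plt.savefig("sample.png")       # stand-in for plt.show() in a script
```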

  • So this is kind of the data.

  • So again, it's 28 by 28;

  • like MNIST, it's black and white.

  • It's got 10 classifications.

  • It's just, it's articles of clothing rather than handwritten digits.

  • So in this case, it's like a boot or a shoe of some kind.

  • I don't really know what that is, but, um okay, so let's do a different one.

  • Let's see, hopefully we can get something a little more recognizable, possibly.

  • Okay, so some sort of short sweater thing.

  • Okay, so that's what we're dealing with.

  • So it's just a little more challenging to get, like, 98% accuracy, as compared to, like, MNIST.

  • So this is a thing that we can practice tuning on.

  • MNIST just doesn't work.

  • So once we have this, we actually need a model.

  • Now, again, I don't really see any benefit in this tutorial for me to, like, write out this entire model for you guys.

  • It would just be a waste of time.

  • So I'm actually just going to copy and paste.

  • I'll either copy and paste this into the description, or I'll put a gist; or otherwise, go to the text-based version of the tutorial.

  • But this should all be,

  • except for this hp here, this should all be totally understandable by you guys.

  • There should be nothing here.

  • That is like, What?

  • What's that?

  • So, um, and this hp stands for hyperparameters, which is what we're going to use as we go through.

  • This is a comment that I made for the actual tutorial.

  • Um, so this creates our model object, returns the model; pretty simple.

  • So from here, you could define the model and do a dot fit, for example.

  • So the other thing that we need to do is imports.

  • Um, so let's go ahead and, from tensorflow, er, actually, from tensorflow

  • we're gonna import keras.

  • Uh, then we need to import all that layer information.

  • So from tensorflow.keras.layers import all of these; basically, we're going to need every single one.

  • So Dense; we're actually using Dropout; Conv2D,

  • yes, uh, we... Activation,

  • we've gotta Flatten,

  • and we've got some MaxPooling2D.

  • Cool.

  • Done.

  • It ran over my face a little bit there again.

  • There is the text-based version of the tutorial, and again, really just trying to run through this.

  • There shouldn't really be anything confusing to anybody here.

  • So once we have this, um, you know, real simply, we contest this.

  • I just want to make sure it works.

  • So I'm gonna say, uh, model equals;

  • we wanna call that build_model. So then, from here,

  • we should be able to say, like, model.fit.

  • Uh, yet again,

  • a cool thing from Kite is, like, this entire snippet, boom, done for you, and you can even kind of tab through it.

  • So, for example, batch size,

  • uh, let's go 64; epochs.

  • So then again, I'm just tabbing through that, which is pretty cool.

  • Validation data is going to be a tuple of information in this case.

  • What we're gonna do, we're gonna say x_train,

  • y_train; so x_train, or x_test rather; it tabbed me over; x_test,

  • y_test.

  • And then actually, in this case, I don't care about verbose.

  • I had it there because I was building this tutorial in a notebook first.

  • So, uh, Okay, so then we can do that.

  • So we need to have our x data.

  • So x_train, and then this will be our y_train, and we need to reshape the data before it complains at us.

  • x_train equals x_train.reshape, and it's negative 1, 28, 28, 1. Again, if this is confusing to you, I strongly encourage you to go back to one of the deep learning playlists.
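
The reshape just adds a single channel axis so the Conv2D layers get the (height, width, channels) shape they expect; with numpy, using a zero array as a stand-in for the actual images:

```python
import numpy as np

# Stand-in for the fashion-MNIST training images: N grayscale 28x28 images.
x_train = np.zeros((60000, 28, 28))

# -1 lets numpy infer the number of samples; the trailing 1 is the channel axis.
x_train = x_train.reshape(-1, 28, 28, 1)
print(x_train.shape)  # (60000, 28, 28, 1)
```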

  • Uh, okay, so that should all work.

  • I do want to test it really quick to make sure it works before we get into the actual tuning aspect of things.

  • So I'm gonna save, uh, run that again, please work.

  • Oh, it's because we have the hp there.

  • Let me just move it real quick.

  • Uh, put this back, please.

  • Work.

  • Whoa.

  • Okay, so it's training.

  • We can actually see.

  • It's already working.

  • Um, we could just go to one epoch; so, for example, after one epoch it's actually not that accurate, like 77%.

  • Not much.

  • Um, but whatever; I can just tell you it's not above 90%.

  • Okay, so the question would be okay.

  • What do we do from here?

  • Look, can we actually do better than this model?

  • So the, you know, the fashion MNIST data set is still easily learnable.

  • Uh, what's not easy is being 99% accurate, as opposed to, like, MNIST.

  • So what you would do from this point, like if you're me and you're trying to compete in a Kaggle competition, is now you're just gonna start changing stuff.

  • But what we can do with the Keras Tuner package is we can, like, automate that entire process.

  • So, uh, what I'd like to do now is I'm gonna move that over, and, um, we're going to import a couple of things again.

  • I'm gonna try to keep this super short, as short as possible.

  • Anyway, I'm going to copy and paste.

  • And so, from kerastuner.tuners we import RandomSearch, and then from kerastuner.engine.hyperparameters we're just importing HyperParameters.

  • And again, these are the things that allow us to do, um, like, the int, the float with ranges, Choice, Boolean.

  • And I want to say there's another one, but I forget it.

  • But anyway, again, the text-based version has links to, like, all the docs.

  • I'm just trying to show you a general idea of how all this works.

  • The next thing I want to do is I'm going to import time, and then we're going to specify LOG_DIR equals,

  • and for now, yeah, eventually... this is kind of a silly one,

  • but anyway, this is a stupid f-string to use, but whatever: time.time. For now, the LOG_DIR will just be a timestamp.

  • But you could add other things to this if you want it.

  • Um, but again, trying to keep it short and simple.
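
The timestamped log directory described above is just an f-string around time.time(); a minimal version:

```python
import time

# Each tuning run gets its own directory, named by the current Unix timestamp.
LOG_DIR = f"{int(time.time())}"
print(LOG_DIR)
```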

  • So, um, okay: so build_model,

  • now we will pass the hyperparameters, and so far nothing is actually unique

  • here. So

  • the next thing that we're gonna do is I'm gonna come down to the very bottom here, and I'm gonna comment out these two things, because they're no longer even gonna work.

  • And in fact, I'm just gonna get rid of them, because that build_model is no longer functional in the way that we had it, as long as we're gonna pass a hyperparameter there.

  • So, uh, so first we're gonna specify the actual tuner that we intend to use, and that's gonna be a RandomSearch.

  • And here we're going to pass a bunch of things first.

  • is the actual function that we intend to use: build_model.

  • Nope,

  • no parens there.

  • The RandomSearch object here will automatically do the hyperparameter stuff for us.

  • So you just pass the function name here for your model. Then we've got objective:

  • This is the thing that we're interested in tracking.

  • So, in our case, val_accuracy; that's what I'm interested in.

  • Um, then we're going to say max_trials.

  • I'm gonna set this to,

  • for now,

  • I don't know;

  • for now, one. We have no, we have no dynamism here, so it doesn't matter.

  • So I'm gonna set that to one; I'll explain that in a moment. And then we're gonna say executions, whoops, executions_per_trial equals one.

  • And, uh, then we just give it the directory, so directory equals LOG_DIR.

  • Cool.

  • So, so: max_trials, executions_per_trial.

  • So trials: so, like, things can get,

  • your options can explode very quickly when we're allowing ourselves to have, ah, a dynamic number of layers, a dynamic number of nodes per layer, or features per layer, um, and then a bunch of boolean operations, like: do we want dropout?

  • And then, if we do, what range of dropout? This can explode to, like, thousands or even millions of combinations pretty quick; like, with a two-layer convnet it's pretty hard to get to millions,

  • but it can get to a huge number; like, 1,000 different combinations of models is not going to be very hard.
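
The explosion is easy to see with back-of-the-envelope arithmetic; here's a hypothetical search space with 1 to 4 layers, 8 width choices per layer, dropout on/off, and 3 candidate dropout rates (all made-up numbers for illustration):

```python
# Each layer's width is chosen independently, so an n-layer model has 8**n
# width combinations; sum over 1..4 layers, then multiply by the dropout options.
width_choices = 8          # e.g. 32..256 in steps of 32
dropout_options = 2 * 3    # dropout on/off, times 3 candidate rates

total = sum(width_choices ** n for n in range(1, 5)) * dropout_options
print(total)  # (8 + 64 + 512 + 4096) * 6 = 28080
```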

  • So here, when we say max_trials, that's just how many random pickings do we wanna have. And then executions_per_trial,

  • this is where, like,

  • if you're just trying to search for a model that learns at all, then I would keep this at one.

  • If you're trying to eke out the best performance, I would set this to, like, three or five, or maybe even more.

  • Um, as long as you're, like, shooting in the dark, just trying to find something that works, or just want a general idea of what seems to work, I would keep this low.

  • But the point of this is, each dynamic version,

  • you're gonna train it this many times. So if you've got some sort of, you know, when it randomly says, hey, let's do four