

  • What is going on, everybody?

  • And welcome to an exciting channel update, both for me and hopefully for some of you.

  • So stay tuned for that.

  • What you see behind me is the BOXX APEXX 4.

  • Although with BOXX, everything is so customizable that there's really no specific model I can give you.

  • But what I can say is the interior of this PC is very special.

  • So let me show you.

  • So this is from a company called BOXX.

  • That's B-O-X-X, two Xs, or boxx.com; they're a company based out of Austin, Texas.

  • And they build, like, high end data analysis, video editing, rendering those kinds of machines.

  • They also have things like GPU servers, anything you could want for your high end, high performance computing tasks.

  • So what you see here are two beautiful RTX 8000 GPUs.

  • There's also a Titan RTX.

  • This I just happened to slap in there.

  • I had one lying around, as one does.

  • I threw that in there.

  • So yeah, the cable here, and also the cables for this, uh, hard drive, that's on me.

  • That's my work, so don't hold that against them.

  • Uh and so, yeah, that's pretty much the heart of this machine, at least to me.

  • Then you've got two Intel Xeon Gold CPUs.

  • I can never keep straight what the model numbers are for the Intel Xeon Gold CPUs.

  • What I can say is that they're 12 cores each, so 24 cores total, which is 48 threads, basically, which is indeed useful for preprocessing.

  • But I will just admit most of my work involves the GPUs, for sure.

  • Now, the RAM here is 256 gigabytes, again very useful for preprocessing, and also very useful for things like reinforcement learning, for, like, memory and such. So yeah, quite the honking machine.

  • There's also a one-terabyte M.2 drive, and the CPUs are water-cooled.

  • I'm trying to think if I'm missing anything; it's a 1300-watt PSU.

  • Fully customizable.

  • Unlike, so, not too long ago I reviewed the Lenovo data science workstation, and my biggest complaint there was you couldn't upgrade or change the GPUs.

  • You couldn't even add GPUs, even though the slots were there to add GPUs.

  • You couldn't do it because the PSU was, like, all hardwired.

  • You couldn't customize it with, like, extra plugs and stuff.

  • Whereas here, there's extra plugs.

  • I could do whatever I wanted to do from the point of buying the machine, so that's really cool.

  • Now, what the heck does somebody use a machine like this for?

  • So a little bit ago, I did a cloud-versus-local-computing comparison, to kind of talk to you guys about at what point you should buy actual local hardware.

  • I'm not here to sell you on a machine like this.

  • I will say, I mean, if you're interested in high-end machines, you can get anywhere from, like, $1,000 to have a decent-ish machine to start off doing deep learning, all the way up to something like this, which is closer to, like, $30,000.

  • Now, the argument of should-you-buy versus should-you-not-buy is the same no matter how much computing power is there.

  • It's all kind of the same argument, and generally it boils down to the same thing.

  • You can check out that video, but I'll summarize it.

  • So in the video, I kind of show you based on the actual prices on websites, you know, like AWS and Linode and Azure and all that.

  • But the gist of it is: after about two months of cumulative use.

  • So either you're training something for two months straight, or you're running a model for two months straight, although I will just remind everybody that to train a model, you definitely need GPUs.

  • To run a model, oftentimes, unless you have just an incredible number of queries per second to this model, you can usually get away with running it on CPU and RAM, so keep that in mind.

  • But if you're training models for two months cumulatively over the course of maybe one or possibly two years, depending on how acceptable it is to you to run on older hardware, then buying starts to make sense, and that also is a bit of a stipulation, because most of the cloud GPU providers are still on K80 GPUs.

  • Some of them have some V100s and nothing more, right?

  • Linode has the most recent; I believe they're still offering RTX 6000 GPUs, and you can get up to four of those on a single machine.

  • Anyway, so that's kind of the argument of whether or not you should buy.
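The breakeven argument above can be sketched with rough arithmetic. The numbers below are just the ballpark figures from this video (roughly $30,000 for the workstation, roughly $10 an hour for a comparable cloud instance), not real quotes, so plug in your own:

```python
# Rough cloud-vs-local breakeven arithmetic. Prices are ballpark
# figures from this video, not quotes from any provider.
LOCAL_PRICE_USD = 30_000        # assumed workstation cost
CLOUD_RATE_USD_PER_HOUR = 10    # assumed hourly cloud rate

def breakeven_hours(local_price, hourly_rate):
    """Hours of cumulative use after which buying beats renting."""
    return local_price / hourly_rate

hours = breakeven_hours(LOCAL_PRICE_USD, CLOUD_RATE_USD_PER_HOUR)
months_24_7 = hours / (24 * 30)
print(f"Breakeven: {hours:.0f} hours (~{months_24_7:.1f} months of 24/7 use)")
```

With these particular numbers the crossover lands a few months of round-the-clock use out; cheaper machines rented against cheaper instances hit it sooner, which is where the roughly-two-months rule of thumb comes from.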

  • But then, you know, what do we even do with 96 gigabytes, or actually, with the Titan RTX I threw in there, 120 gigabytes of VRAM?

  • What the heck do we do with that?

  • Obviously, you know, the clearest thing is to have just, like a monster model.

  • Okay, so a model that you could not have on smaller bits of hardware.

  • So right now, what I'm working on is a chatbot model.

  • This has kind of been the, uh, white whale for me.

  • I say "we" because now Daniel and I are working on this, but the first chatbot model that I ever built was the one that we used for the Twitch stream, and that model is the best one I've ever gotten.

  • And it was very small, and it didn't have very much data, and it's not the greatest model.

  • It's not actually that great.

  • And so trying to make that better has just been like the most impossible thing ever.

  • So what I'm working on right now is this model here, which is currently in training and has had some pretty good results.

  • It's 20 total layers, with 1024 units per layer.

  • I mean, this is a monster of a model.

  • It is maxing out both RTX 8000s.

  • I'm not using the Titan RTX, only because, so, these cards are 48 gigabytes of VRAM, 48 gigabytes of VRAM, and 24 gigabytes of VRAM.

  • So I'm actually not using the Titan RTX just because of the architecture of what we're actually training here; it just makes sense.

  • One of these GPUs becomes the encoder, and one of the GPUs becomes the decoder: 10 layers on this one, 10 layers on that one.

  • It just works out that way, which is why having multiple GPUs isn't necessarily always going to work or be faster.

  • If it is all on the same machine, it does.

  • It does work pretty well.

  • Um, but you do pay a price for sending data from this GPU to that GPU, so if it's just one time, it's okay.

  • But as you scale out GPUs, it does become a little more challenging to keep things fast.

  • But again, you can build models that you simply could not build without multiple GPUs.

  • In this case, especially if you're able to utilize the NVLink, it is effectively pretty close to being one whole GPU.
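For a sense of scale, here's back-of-the-envelope arithmetic for a 20-layer, 1024-unit model like the one described. The LSTM-style cells are an assumption (the video doesn't name the layer type), and parameter memory is only a small slice of the story; activation memory for long sequences and large batches is what actually fills up two 48 GB cards:

```python
# Rough parameter count for 20 recurrent layers of 1024 units.
# Assumes LSTM-style cells (4 gates each); illustrative only.
def lstm_layer_params(input_size, hidden_size):
    # Each of the 4 gates has input weights, recurrent weights, and a bias.
    return 4 * (input_size * hidden_size + hidden_size * hidden_size + hidden_size)

HIDDEN = 1024
LAYERS = 20  # 10 encoder + 10 decoder, as described

total_params = sum(lstm_layer_params(HIDDEN, HIDDEN) for _ in range(LAYERS))
# fp32 weights + gradients + two Adam moments is roughly 16 bytes/parameter
training_bytes = total_params * 16

print(f"~{total_params / 1e6:.0f}M parameters")
print(f"~{training_bytes / 1e9:.1f} GB for weights/grads/optimizer state")
```

The weights themselves come out to only a few gigabytes, which is exactly why "maxing out both RTX 8000s" is mostly about activations and batch size, not parameter storage.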

  • So anyway, yeah, so very exciting.

  • Lots of things to look forward to on the channel for what I can do.

  • And the other thing I was thinking about was, with a machine like this, I know that most people watching this video either won't have a machine like this or won't even have access to one.

  • Even in the cloud, this machine would be quite expensive to run hourly.

  • I think it would probably be closer to, like, $10 an hour to run this machine in the cloud.

  • So you really don't have much time to train a model.

  • And for example, since I've received this machine, I've been running it 24/7 for two months straight, just constantly doing R&D on this chatbot.

  • And that adds up really quick.

  • And like I said, after about the two-month mark, you start having to wonder: is this making much sense to run in the cloud, versus, man, I could just actually own the hardware, and it would be cheaper?

  • So, uh, anyway, so that's what I've been doing.

  • But then I started wondering: what if?

  • I know how it will work, but I don't know if it will work.

  • And that is: I'm considering opening the door to allowing you guys, the community, to make a submission of code.

  • It needs to be code that's on GitHub.

  • Send me a link to the GitHub repo at harrison@pythonprogramming.net; that's about the best thing to do, just email it to me.

  • So, harrison@pythonprogramming.net, but you can come into the Discord, or post a comment or whatever.

  • Make a submission of something you think fits: either your own code or maybe some larger project.

  • So, like the chatbot thing: at least it started off as being, basically, I don't think I made any change to Google's NMT.

  • And that's neural machine translation.

  • And that's really for language, like one language to another language.

  • And that is what the current best chatbot has ever been.

  • It was just straight NMT.

  • And instead of doing, you know, English to German, I did English to English.

  • So an English comment to an English response, and that works.
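The English-to-English trick is really just a data-framing change: feed the NMT pipeline comment/reply pairs where it expects source/target sentences. A minimal sketch, with made-up toy pairs and file names (NMT toolkits typically consume line-aligned parallel files):

```python
# Framing chatbot data the way an NMT pipeline expects: the "source
# language" file holds comments, the "target language" file holds
# replies, aligned line by line. Pairs and file names are made up
# for illustration.
pairs = [
    ("what is going on?", "not much, just training models."),
    ("does inference need a gpu?", "often not, if queries per second are low."),
]

with open("train.from", "w") as src, open("train.to", "w") as tgt:
    for comment, reply in pairs:
        src.write(comment + "\n")
        tgt.write(reply + "\n")
```

From the translation model's point of view, nothing changed; it is still mapping one sequence to another, which is why an unmodified NMT codebase can train on this.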

  • It works pretty well for a deep learning chatbot model, but, um, I want more.

  • And so, yeah, now we, as in Daniel and I, have been trying to slowly tweak things and change things and see if we can do a little better, and recently we actually had amazing output from one of these models.

  • And I know what I kind of want to do in the next few steps to try to figure out whether I can eke out slightly better responses there, but anyway, it just takes a long time.

  • That's why we want a machine like this.

  • But getting back to what I was saying, um, I'm willing to open the door to giving the community the opportunity to submit something, and not ideas.

  • No, code.

  • It has to be running code.

  • But if you have something like that, either your project or someone else's project, that you think makes sense for a machine like this.

  • Don't send me cats versus dogs.

  • Don't send me some MNIST.

  • You know, um, send me something that is either your project or someone else's project that you think would benefit from this machine, and I will at least consider putting it up.

  • If I can't read the code and understand it 100%, it's not being run on the machine.

  • Um, but I just thought, you know, that would be kind of cool if it can work.

  • I don't even know if that will work.

  • I don't know if that is just gonna be too much wasted time or something, but I'm willing to just see what people have, because I know there has to be a separation between people who have really cool ideas, either in, like, reinforcement learning, video stuff, audio stuff, um, all kinds of really cool projects in code, but there's just no access to a machine like this.

  • So I'm very interested to see; like I said, no promises.

  • I can't say that anything will for sure happen there, but I just want to see.

  • So anyway, that opportunity is there.

  • Finally, just as a channel update: the Neural Networks from Scratch series is probably one to two months out; one month is really cutting it close, and I highly doubt the videos will start coming out that soon.

  • My goal there is to finish the draft, which is public, well, public for backers.

  • To finish that draft from cover to cover, so get a full first draft done.

  • And then my plan is to start doing the videos, and as we're doing the videos, keep honing and improving that draft, because my main focus is delivering the book on time to people who have purchased the book.

  • If you have purchased the book, you should already have access to that draft, and that draft just got a huge update.

  • Basically, everything from beginning to training a model is done.

  • So now we're getting into actually tweaking a model, having, like, training and testing data, and what to look for as the model trains.

  • So in terms of actually coding the neural network, that's all done now.

  • So if you're in on that, uh, definitely go check out the draft if you're not aware of that update.

  • Also, if you want access to that draft, if you're just dying to get access to the Neural Networks from Scratch series, you can pre-order the book and you get access to the draft.

  • It's in a Google Doc, so you can not only see other people's comments, you can ask questions and stuff yourself, inline with the text, and that's actually been working pretty well.

  • So, um, there were a couple of hiccups along the way; I didn't expect there to be that many people, and I don't even know what we're up to now, maybe 1,300 people.

  • So getting that many people into a Google Doc was actually quite challenging, which is sort of surprising for a Google product, but anyway, that was interesting, and that's all solved.

  • So, yeah, make sure, if you should have access, that you do have access.

  • Uh, and again, you can either email me or post a comment below, I suppose.

  • But the best thing is probably to email me in that case.

  • So anyway, I think that is everything.

  • If you've got questions, comments, concerns, whatever, on any of this stuff, the chatbot stuff, the submission stuff, the Neural Networks from Scratch stuff.

  • Feel free to leave those below.

  • Otherwise I will see you guys in another video.



120GB of VRAM

Published by 林宜悉 on January 14, 2021