
  • what's going on, everybody.

  • And welcome to a video about kind of an overview of the current high end cloud GPU providers, as well as a bit of a tutorial for how to actually make efficient use of these servers.

  • So you're not wasting time and money doing preprocessing, or even dataset download or upload to a server, which can take a very, very long time and just waste, like, hundreds of dollars.

  • So with that, we've got a lot of information to cover.

  • Let's get into it.

  • So the first thing that we'd want to do is, just as quickly as possible, compare all these providers.

  • There are so many numbers that you've got to think about and look at; it could get super confusing.

  • Hopefully, I can make it really easy for you.

  • First of all, most providers have a cluster of different GPUs that they offer, but the best, the flagship GPU that pretty much everyone is offering, is the V100, the Tesla V100 from NVIDIA. This GPU has 16 gigabytes of VRAM per GPU.

  • There is a 32-gigabyte variant, which you can see an example of here.

  • And I think AWS is the only one offering it.

  • I'm not positive.

  • I'll try to remember to look as we go through the prices, but you'd have to be spending a lot of money.

  • This is a $30-an-hour machine.

  • So I don't think many of you guys are looking for that.

  • But I think AWS also has the most GPU VRAM per singular machine that you could have.

  • So this is before you might think about doing distributed TensorFlow or PyTorch or whatever, which is always a huge mess.

  • So if you want the largest models possible, AWS is still gonna be the victor.

  • But in terms of the price per V100 GPU, it's $3.06. You can sign up for one-year and three-year reserved prices, which lowers it.

  • But you'd be a fool to sign up for a GPU for three years, or even probably one year.
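  • As a quick sanity check on that on-demand rate, here's a back-of-the-envelope sketch of what leaving a single V100 running actually costs (assuming a roughly $3.06/hour rate, the figure this comparison appears to cite; actual prices vary by provider and over time):

```shell
# Back-of-the-envelope cost of one on-demand V100.
# The $3.06/hour rate is an assumption taken from this comparison.
awk 'BEGIN {
    rate = 3.06                                   # dollars per hour
    printf "per day:   $%.2f\n", rate * 24        # 24 hours
    printf "per month: $%.2f\n", rate * 24 * 30   # ~30 days
}'
```

  • At roughly $73 a day, it's easy to see both why reserved pricing exists and why you don't want a GPU instance sitting idle.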

  • So I'm not really gonna look at those prices. Moving on to Azure: also V100 GPUs. One thing to start paying a little bit of attention to is CPU and RAM. For a lot of deep learning tasks, this might be irrelevant to you; you might not actually need very much at all.

  • Like, maybe the only thing you're doing is IO or something like that from a dataset.

  • But if you're doing something like reinforcement learning, the more RAM the better, and often also with reinforcement learning, depending on what sort of environment you're using, the more CPU the better.

  • But it may not always be the case that more CPU is better.

  • So that is going to be a per-use-case scenario that I can't possibly dive into for you guys, but basically, Azure is exactly the same price.

  • But they do have varying amounts of RAM and storage, and how many cores of CPU they have all vary.

  • So if any of those things start to matter to you, pay attention. Moving along.

  • Uh, Google Cloud also has the V100.

  • Also has... uh, well, actually, they're cheaper.

  • From the first numbers in my head, I thought they were $3 as well, but it's $2.48. I wonder if they were always $2.48. Anyways, $2.48 per GPU, so actually cheaper than both AWS and Azure.

  • And they have up to 128 gigabytes... let me see what Azure had... only four GPUs? So, uh, less. Wow, not very many.

  • So actually, Azure is kind of losing in terms of how much is possible, although I wonder if this one's a little more expensive.

  • Oh, also we have the same amount of RAM.

  • Same cores.

  • I wonder if this is the 32-gigabyte-variant V100.

  • Not sure.

  • Uh, anyway, moving on to Paperspace, my longtime favorite cloud GPU provider, simply because you spin it up and it's ready to go.

  • It's just super simple.

  • You've got a virtual desktop already.

  • It's just super easy to get going on Paperspace.

  • The V100 there is $2.30 an hour.

  • So this is even the cheapest so far.

  • Another reason why I really enjoy going with them.

  • Also, they offer other GPUs, like the P6000, which has 24 gigabytes of GPU memory, which is fun to play with.

  • So then moving along... oh, one thing that's kind of a downside for Paperspace, though, is you only have one GPU per machine.

  • So you're maxing out at either 16 gigabytes of VRAM here or 24 on the P6000.

  • But you can't have multiple GPUs per machine.

  • So that's kind of a bummer. Moving on to Linode, which just started their GPU plans: they have, rather than the V100, the RTX 6000, which is kind of an interesting move.

  • Uh, so at first I had to take some time to look into what the differences are.

  • So the V100 GPU, obviously, has 16 gigabytes of VRAM. The RTX 6000 is 24 gigabytes of VRAM, so a bit more VRAM.

  • Also, the other thing that matters is how quickly we can process tensors, which are basically arrays. So the V100 has 112 tensor TFLOPS.

  • The RTX 6000 has 130 tensor TFLOPS.

  • So more operations per second, more memory, for half the price, at least compared to AWS and Azure.

  • It's a little closer to Paperspace, but again, you can have up to four of these RTX 6000s.

  • So, uh, that's a pretty amazing offering.
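  • To make the value comparison concrete, here's a rough dollars-per-tensor-TFLOP calculation using the numbers above (112 TFLOPS for the V100, 130 for the RTX 6000); the hourly rates are assumptions based on prices mentioned in this video ($3.06 for a V100, about $1.50 for Linode's RTX 6000):

```shell
# Rough price-performance comparison (hourly rates are illustrative assumptions).
awk 'BEGIN {
    printf "V100     ($3.06/hr, 112 TFLOPS): $%.4f per TFLOP-hour\n", 3.06 / 112
    printf "RTX 6000 ($1.50/hr, 130 TFLOPS): $%.4f per TFLOP-hour\n", 1.50 / 130
}'
```

  • Under those assumptions, the RTX 6000 works out to less than half the cost per unit of tensor throughput.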

  • I don't know how they're even doing this; they've got to be operating at a loss at this stage.

  • But anyway, super cool.

  • So this is who we're gonna end up using.

  • I do have a referral link for Linode, and sometimes people get a little fidgety when I have referral links.

  • Honestly, I have a relationship with Amazon, Azure, Google, Paperspace, and Linode, all of them.

  • They all want the air time.

  • It just so happens that right now, Linode is offering the most absurd deal possible.

  • So we're gonna go with Linode; that's what I'm going to use.

  • But if you have a different provider, or you're watching this later and someone else has a better offer, you can use the same methodology that we're gonna use on any of these providers.

  • So hopefully this will still be of use, even after maybe Linode is not offering the best deal possible.

  • So first of all, you'll need to go ahead and create an account and log in.

  • You can just use linode.com, but if you go to linode.com/sentdex, you should get here, and you can just sign up for an account there.

  • You'll get a $20 credit, but I think you actually have to spend $20, and then you will get your credit.

  • So keep that in mind, but you'll still, at some point, get a $20 credit.

  • So once you do that... let me go ahead and log into my account.

  • You should be looking at something like this when you get to your dashboard.

  • So how do we want to efficiently do this?

  • So your GPU server is going to be many dollars an hour, as opposed to, like, if we were to look at the other types of servers, we're talking pennies per hour.

  • So the first thing we want to do is have a simple virtual private server that is going to serve as our sort of house of data.

  • So at least on Linode, if you go to Create, we're gonna create a Linode; this is their name for a VPS.

  • So we'll go there.

  • We want Ubuntu for this. We can use 19.04 here, uh, but hopefully I don't forget: what we want is 18.04, at least for our GPU server.

  • And in fact, I'll just get in the habit. So 18... where is 18? Oh, here it is.

  • And we want to do that for the long-term support; it makes updating later down the road much easier.

  • So, Ubuntu 18.04. Pick a region.

  • In my case, I'm gonna go with Dallas, Texas.

  • I haven't checked all the regions, but... actually, not Dallas, Texas, because Texas is the problem: Newark, New Jersey.

  • So with all of these providers, they don't offer GPUs in every region.

  • Like, they have more regions with CPU, RAM, and storage than they have GPUs at those regions.

  • So some of these places have, like, 40 locations around the world that you could choose from, but not for GPUs.

  • And so, in Linode's case, I happen to know they have GPU instances in Newark, New Jersey.

  • So I'm gonna put everything in Newark, New Jersey.

  • So what we're building right now is where we're going to store our data. You could either have a data-storage VPS, a data-preprocessing VPS, and so on.

  • Uh, I wouldn't recommend that, at least on Linode.

  • The thing that makes the most sense would be to do your storage and pre processing, probably on the same machine.

  • But we'll talk about that maybe later, if I remember.

  • But for now, I'm just gonna go with a standard Linode, this two-gigabyte one.

  • You could even go with the Nanode for even cheaper.

  • But I kind of don't like one gigabyte of memory, because sometimes that's not even enough to install certain packages.

  • So I'm gonna go with this one.

  • Um, and then we'll come down here.

  • I'm gonna call this data server.

  • Uh, and then we're gonna make a password.

  • Jellyfish.

  • It's a great password.

  • And you might want to think about backups.

  • I've never regretted having backups, but this is just a quick machine that I'm just using for an example.

  • So I think we're ready.

  • So we're gonna go ahead and spin this one up.

  • So again, if you're following along, I would strongly recommend you probably use Newark, New Jersey, and then later you can look for regions that are maybe closer to you.

  • Although it shouldn't really matter where it is for cloud GPU stuff.

  • I don't think it matters.

  • Unless you're uploading the dataset from your local machine, maybe.

  • But either way, it's not gonna cost you very much money in this case, because this is a $10-a-month server, as opposed to, you know, $1.50 an hour; that's $1.50 times 24, so you're paying more than that per day, more than double that per day.
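  • The arithmetic behind that comparison, sketched out (using the $10-a-month data server and the $1.50-an-hour GPU rate mentioned here):

```shell
# Daily cost: cheap data server vs. GPU instance (rates from this video).
awk 'BEGIN {
    printf "data server per day: $%.2f\n", 10.0 / 30   # $10/month VPS
    printf "GPU server per day:  $%.2f\n", 1.50 * 24   # $1.50/hour GPU
}'
```

  • So one day of GPU time costs more than three months of the data server; that's why the data server does the slow work.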

  • Anyway, let's go ahead and create.

  • Did I forget something? Do I need punctuation?

  • Uh, how about capital-J Jellyfish, exclamation mark? What do you think about that? Is that a good password?

  • If I forget to destroy this machine, you're all gonna be raiding me, because you all now know the password to my lovely IP address.

  • Right here.

  • Uh, okay, So we're gonna wait for this to set up.

  • So basically, what we're gonna do is have a volume that we're going to attach to the server.

  • And again, this is a concept that exists definitely on AWS, definitely on Google Cloud.

  • I haven't used Azure to any serious degree, but I'm willing to bet they do it too. Basically, all these providers have this sort of structure where, yes, you can have a VPS, but you can also spin up totally separate storage-container-type things.

  • Because often you need more storage than is available on some of these kind of pre-packaged virtual private servers.

  • Okay, so this looks like it's good to go.

  • Here's your I P address.

  • We can copy that.

  • And, um, I actually don't think we need to log in just yet.

  • The next thing that we want to do is, we wanna have storage.

  • Like, in this case, I'm actually not going to need more than 50 gigabytes of storage.

  • If you don't need more than 50 gigabytes of storage, then you don't have to do the next step, right?

  • And, like, as you go bigger on your Linodes, you'll get more storage on Linode.

  • Also, your GPU machine probably has a pretty hefty amount of storage.

  • But that machine, we just want to spin it up and use it as quickly as possible.

  • So we're trying to avoid the download or upload of a dataset while that machine is on, because you're being billed so long as those machines exist, whether it's on AWS, Azure, or Google. If those machines exist, you're getting billed for them.

  • So even if you turn it off, that GPU is still being dedicated to you, and you're still paying $1.50 an hour, or $3 an hour, or $30 an hour. So you want to use it as quickly as possible.

  • So we got our data server; I believe it's probably online.

  • Now what we're gonna do is go to Volumes.

  • We're gonna add a volume.

  • I'm gonna call this mlstorage. Size 20 is fine; again, this is just an example.

  • But, you know, you probably want, like, a terabyte or like, 10 terabytes or something.

  • How freaking big can I go? Over 1,000 gigabytes? Yeah, but I don't want 10 terabytes; that's a lot of money per month.

  • Anyway, you know, you might want 500 gigabytes or something reasonable.

  • Anyway, you also could, in theory, have your storage in some other provider.

  • But part of what we're trying to do is, we wanna have our volume and our data server together.

  • We wanna have that in the exact same region.

  • So when we go to transfer that data, it has the fastest transfer rate possible.

  • So hopefully you can upload and download data at 100 plus megabytes per second.

  • If it's, like, from your local machine, you might only get like 5 or 15 megabytes per second or something terrible; it's gonna take forever.

  • So we're trying to get this to be as quick as possible.

  • Um, and at least here, there's an even better method that I'll show you guys near the end, but we'll go with 100. Region: again, we're gonna go with Newark, New Jersey.

  • We can just automatically select a Linode.

  • You can change this later, but I'm gonna say data server.

  • Does that make sense?

  • I don't really care about the tag; I don't even think you have to do it.

  • So let's go ahead and hit Submit there, and we should get our volume.

  • So now what we want to do is we need to run these commands.
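  • Linode's panel shows the exact commands for your specific volume; they generally follow this pattern (a sketch only, with the device path as a placeholder based on the mlstorage label used here; use the exact commands your panel displays):

```shell
# Sketch of formatting and mounting a block-storage volume.
# The device path is a placeholder; copy the exact one from your panel.
mkfs.ext4 /dev/disk/by-id/scsi-0Linode_Volume_mlstorage   # format (first use only!)
mkdir -p /mnt/mlstorage                                   # create a mount point
mount /dev/disk/by-id/scsi-0Linode_Volume_mlstorage /mnt/mlstorage
# Optionally add an /etc/fstab entry so the volume remounts after a reboot.
```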

  • And, uh, I guess I'll just make another window here: linode.com, go log in.

  • What I want is... so there's our volume; click on the server. I'm trying to get the IP address.

  • So here's our IP address.

  • Boop.

  • So now, coming over to... if you're on Mac or Linux, just open your terminal.

  • If you're on Windows, either download a program called PuTTY, or, like me, use Bash on Ubuntu on Windows.

  • You can go to, like, the app store, I think, and get it, or maybe you just enable it?

  • I don't even know.

  • It might even be there by default.

  • Now, I honestly do not know.

  • Um I just know I have it, and it makes it easier to do stuff like this.

  • So again, use whatever; there's a million options for Windows, basically.

  • But if you're on Mac or Linux, just open terminal.

  • If you're on Windows and you're having a problem with this step, feel free to post a comment below or join us on Discord.

  • That's discord.gg/sentdex.

  • We can definitely help you out or go to the text based version of the tutorial.

  • There's instructions there.

  • So we're gonna ssh root@... there, you right-click to paste in that address.

  • And then the first time you connect, you're gonna get a message like this; basically, it's just like, hey, we've never seen this fingerprint before.

  • So if this is the first time you're connected to that server, then this is totally fine.

  • If you don't think this is the first time you've connected to that IP address, uh, something is wrong.

  • That should be a red flag. But for now, yes, it's correct.

  • Cool.

  • Uh, now the password: capital-J Jellyfish, exclamation mark.