

  • My name is Garth Henson, as was mentioned.

  • I do work with the Walt Disney Company.

  • I've been with the company about five years now.

  • Thank you.

  • That's how I feel still every day.

  • Um, I worked out of the Seattle office for about four years.

  • I work with corporate technology there.

  • What we do is basically platform tooling, anything we need to build to enable the different brands to do what they need to do.

  • So right now I'm working with ESPN.

  • We work with ABC News.

  • We work with parks and Resorts.

  • We work with all the different brands, from an infrastructure, architecture, and application standpoint.

  • Basically, whatever we need to build. Right now it's kind of fun, because most of what I build is Node on the back end, and we do full stack all the way up.

  • We do React on the front end.

  • And so most of the stuff we're working on is JavaScript end to end.

  • One of the reasons I am excited about this talk is that this is an ongoing project: we're trying to figure out which applications we should move not only into the cloud, but into a serverless architecture.

  • The nice thing is that in most cases we can keep running JavaScript end to end, with maybe a little bit of Bash in there for some of our Docker containers and entry points, that type of thing.

  • So that's what we're gonna talk about today.

  • Um, serverless is one of those very interesting terms.

  • It's been a buzzword for a long time, and it's one that I really don't like.

  • Honestly, "serverless" sounds like we just don't have servers anymore.

  • Which, of course, we know: the servers are just gonna go away and everything runs on magic, right?

  • At least at Disney.

  • That's how it works.

  • But what is serverless?

  • Serverless computing, based on the Wikipedia definition here, is really a cloud computing execution model.

  • It's a process.

  • It's a way of thinking in which the cloud provider dynamically manages the allocation of machine resources, and then the pricing itself is generally based on the actual consumption of the processing.

  • So whatever you use is what you pay for.

  • So obviously that means I'm not there.

  • Yes, arms up.

  • So this diagram kind of breaks down into a couple of different categories all of the things we would talk about as a cloud offering.

  • And so with most of these, I hope many people in here are familiar with some of the different broad categories.

  • IaaS is pretty common: you're gonna have infrastructure as a service.

  • Whether that's an on-prem data center that you're having to pull resources from, or you're using EC2 instances, or you're using compute instances from whatever cloud provider you use.

  • You basically have to provision what you need, and you have to assemble all of your pieces before you can deploy the code to it, right?

  • That's that.

  • We're familiar with that.

  • And then you've got the SaaS offerings, which I think most of us are familiar with, where I just don't want to take either the time or the money to build it.

  • I'd rather put the money into paying for the service and just use it where it stands, and let them manage the SLAs.

  • Let them manage everything else and the support layers.

  • We don't want to take that on ourselves, so we're gonna buy the service from somebody else, use it in place and continue processing.

  • And then more recently, we're getting more and more capabilities of moving things into the cloud on top of your PaaS offering.

  • So you've got your platform as a service that you're going to start leveraging, and what's really interesting, and we'll talk a little bit more about this, is with things like Direct Connect for AWS; there are similar things with some of the other service offerings as well.

  • You can actually start running some of these platform offerings on your own private networks, and so, even though it's still managed completely remotely by your service provider, you can actually run it on your private network as well.

  • So you're starting to get the best of both worlds, where you can start leveraging these other pieces.

  • But yet keep your connectivity the way it needs to be for your broader application; and then in this diagram as well, you'll notice that towards the top here we have the functions and the databases as a service.

  • Some of your NoSQL things, like Firebase, or you've got your DynamoDB type of thing.

  • Those are typically what we group into what we would really call serverless.

  • Those are the things where basically everybody says, I just use it, it'll scale.

  • And you just pay for what you use, which is true.

  • But we need to make sure we understand what that actually means.

  • So, we're in a smaller room; I would pause for questions, but I will say now: if you have questions, please do come talk to me afterwards.

  • I don't have all the answers.

  • I'm still learning a lot of this, but I do want to share what we've figured out so far.

  • So, of course, once we understand all of these, this brings us to the conclusion I've heard so many times:

  • No more servers.

  • Yeah, everything's awesome.

  • We don't have to worry about anything anymore.

  • Uh, no. Actually, all it means is it's not my servers; the problems, the hiccups, the things we have to be aware of don't go away.

  • They just move to another location.

  • The concerns we have to be aware of just change; they don't go away.

  • Um, why do we care?

  • Why does it matter?

  • Well, if we look at the benefits of serverless, there are several.

  • And then there's other things we have to be aware of.

  • As we move into this architecture.

  • First of all, cost: cost really is almost unanimously cited as the driving reason for serverless adoption.

  • Everyone's like, Oh, it's just cheaper.

  • Everything should be serverless.

  • Everything's cheaper, which can be true: on-demand execution, built-in elasticity, optimized utilization, uptime, reliability.

  • Everything's higher.

  • This is all true.

  • This is great.

  • It's fantastic.

  • But I want to say, there's a specific word right there.

  • The savings can be insane.

  • If they're built correctly for the appropriate applications, they are incredibly cheap to run.

  • However, there is that key word: if we build them properly and we use them for the appropriate application.

  • It's cheaper, so keep that in mind.
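
To make "pay for what you use" concrete, here is a rough back-of-envelope cost sketch. The per-request and per-GB-second prices below are illustrative assumptions, not current AWS pricing, and `monthlyLambdaCost` is a hypothetical helper:

```javascript
// Illustrative serverless pricing model (assumed numbers, not real AWS rates).
const PRICE_PER_MILLION_REQUESTS = 0.20;   // assumption for illustration
const PRICE_PER_GB_SECOND = 0.0000166667;  // assumption for illustration

// Cost = request charges + compute charges (GB-seconds of memory-time used).
function monthlyLambdaCost({ invocations, avgDurationMs, memoryMb }) {
  const requestCost = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS;
  const gbSeconds = invocations * (avgDurationMs / 1000) * (memoryMb / 1024);
  const computeCost = gbSeconds * PRICE_PER_GB_SECOND;
  return requestCost + computeCost;
}

// A bursty, low-volume API: 2M requests/month, 120 ms average, 256 MB memory.
const cost = monthlyLambdaCost({
  invocations: 2_000_000,
  avgDurationMs: 120,
  memoryMb: 256,
});
```

The point of running numbers like this is exactly the speaker's caveat: for the appropriate workload the bill is tiny, but a chatty, always-on workload can invert the math.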

  • Scalability: by definition, we're using service offerings, hosted and managed solutions, and so scalability is built into it.

  • Reliability.

  • It's really based off your provider's SLA.

  • So we have all of these great potential benefits that we can get out of moving applications into a serverless architecture.

  • But we need to be aware. I love this quote:

  • The most common currency to pay for benefits in application architecture is complexity.

  • So how do we get there?

  • I don't know, I don't have the diagram up here.

  • But you've got different levers that you can actually pull on in any application, right?

  • In any architecture, management comes down and says, hey, this has to run less expensively.

  • Okay, great.

  • It's gonna be slower now, or it's gonna be something else.

  • There are only certain things we can pull on.

  • And in order to reduce the cost and increase the performance and the scalability and reliability, the easiest way to do that is to increase complexity.

  • Now, what does that look like, though?

  • Well, in this diagram, we kind of see the stepping stones of moving from our old monolithic applications into what we would call a standard microservice infrastructure or ecosystem.

  • And basically what happens is, from step one to step two, all we've done is broken out all of those bits of logic, all of those sections of the code that were running in one big application.

  • We've broken them out into their own smaller services, which is great.

  • But now we have all this wiring in between.

  • So we introduced some complexity, the wiring, by breaking a monolith into microservices.

  • Did we gain performance?

  • Reliability?

  • Yeah, if it's done right, absolutely.

  • But we've added some type of complexity.

  • So when we're moving from a microservice infrastructure or architecture into a full serverless stack, we're introducing far more wiring in between all of the pieces.

  • So the complexity is potentially much greater for the overall application, even if the individual pieces of code that we as engineers are managing are much, much simpler to maintain.

  • So the overall application architecture we've got to be aware of as we go; and then all of this, of course, is built on top of some type of platform that we're paying for or we've spun up internally.

  • So moving on, here are some of the things that we have to be aware of; as the heading up here says, serverless is cheaper.

  • Not necessarily simpler.

  • Um, anybody played with S3 much and had any issues with eventual consistency?

  • Couple hands.

  • Awesome.

  • Okay, so if you're not aware of what eventual consistency is: it comes up with S3. And I told Miles I was gonna have to apologize to him.

  • I don't know if he's in here.

  • All of my examples are AWS.

  • I am intentionally doing pretty generic examples because these are things you can do on any service provider.

  • I'm not doing a sales pitch for AWS.

  • It's just what I know the best.

  • But eventual consistency basically means you can drop an object into an S3 bucket for storage.

  • And it may or may not be available for a read right away, because it has to replicate across different data centers.

  • It's got to replicate across regions.

  • And when your GET request or your poll request goes back out, if it hasn't replicated all the way across, you might hit a bucket or a region where it hasn't propagated yet.

  • And so even though you just wrote it and you got a success, it may not be available to read yet.

  • And so we actually had a couple of projects where we were like, serverless is awesome: we're gonna do one Lambda here, and it's gonna drop something into S3, and it's gonna fire this thing off over here, and then the other Lambda is gonna pick it up and do more processing with it.

  • And it wasn't there.

  • And the entire flow grinds to a halt.

  • And we realized that we needed to call the AWS guys; we get them on the phone, we're talking with them.

  • And the question, of course: did you read the SLA?

  • No, I didn't.

  • And S3 is eventually consistent; there is a window of time in which they're still within their SLA.

  • And as long as you can read it back out within a certain length of time, they're okay.
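
One defensive pattern for this is a bounded read-retry, instead of assuming read-after-write consistency. A minimal sketch, with a fake in-memory store standing in for S3 (the real behavior and timing depend on the provider), where `FakeStore` and `readWithRetry` are hypothetical names:

```javascript
// FakeStore stands in for S3: reads miss until the write has "replicated".
class FakeStore {
  constructor(replicationDelayReads) {
    this.pendingReads = replicationDelayReads; // reads that miss before the object appears
    this.value = null;
  }
  put(value) { this.value = value; }
  get() {
    if (this.pendingReads > 0) { this.pendingReads--; return null; } // not visible yet
    return this.value;
  }
}

// Retry a read a bounded number of times rather than trusting the first miss.
function readWithRetry(store, maxAttempts) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const value = store.get();
    if (value !== null) return { value, attempts: attempt };
    // In real code you would back off between tries (e.g. attempt * 100 ms).
  }
  return { value: null, attempts: maxAttempts };
}

const store = new FakeStore(2); // object invisible for the first 2 reads
store.put('report.json');
```

The design point is that the downstream Lambda treats "not there yet" as a normal, retryable state instead of a fatal error, which is what let the flow described above grind to a halt.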

  • And so these are things like fault tolerance and asynchronicity: you're doing multiple Lambdas.

  • They're stateless.

  • You've got things like this that you have to start juggling, where before you'd do it at an application level in a monolith, or even in microservices, where each microservice could maintain some type of state.

  • You now are introducing logical chunks that are stateless, that have to be aware, or be able to retrieve or rebuild or rehydrate or do something to be able to continue processing; and so things that before were somewhat intuitive within the application architecture are now becoming things we have to manually think through and deal with, or at least be aware of. Latency is the next one, and the only reason I bring it up here:

  • Every new network hop can introduce latency.

  • So if you have something, let's say we've got three microservices and they're talking nicely together, and every new request hits all three of them, and it's a handoff.

  • You've got the inbound request, and then you've got at least two hops, so you've got three.

  • But if we break that out and it turns into six different Lambdas or whatever, you're doubling the number of network connections, the network hops you're having to make to process something.

  • You've got to be aware of the potentially increased latency.
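
The arithmetic here is simple but worth writing down. Assuming each hop adds a roughly fixed overhead (the 20 ms figure below is purely illustrative, and `addedLatencyMs` is a hypothetical helper):

```javascript
// Each network hop adds roughly fixed overhead; more pieces means more hops.
function addedLatencyMs(hops, perHopMs) {
  return hops * perHopMs;
}

// Three chained microservices: inbound request plus two handoffs = 3 hops.
const threeServices = addedLatencyMs(3, 20);
// The same flow split into six functions doubles the hop count,
// and therefore doubles the added network latency.
const sixFunctions = addedLatencyMs(6, 20);
```

In practice per-hop cost varies (and tail latencies compound worse than averages), but the linear growth in hops is the thing to watch as you slice finer.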

  • Okay, and again, these are all just things to be aware of.

  • I'm not saying don't do it.

  • I'm just saying, be aware of what the potential gotchas are.

  • Um, let's see: fault tolerance.

  • What happens if an asynchronous Lambda spins up and fails and it can't process?

  • And it's responding to something that it pulled off of a queue?

  • Have you thought through what your dead-letter queue looks like?

  • Are you going to actually do some retry logic?

  • What is your fault tolerance for it? Will you retry? All these things we have to start thinking through on a higher level or broader scale.
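
To make the retry and dead-letter question concrete, here is a minimal in-memory sketch. In a real deployment (e.g. SQS feeding Lambda) the retry count and dead-letter queue are configured on the queue itself rather than in code; `processQueue` and the message shape below are hypothetical:

```javascript
// Process queue messages with bounded retries; exhausted messages go to a DLQ.
function processQueue(messages, handler, maxRetries) {
  const deadLetter = [];
  const succeeded = [];
  for (const msg of messages) {
    let done = false;
    for (let attempt = 0; attempt <= maxRetries && !done; attempt++) {
      try {
        handler(msg);
        succeeded.push(msg.id);
        done = true;
      } catch (err) {
        // Last attempt failed: park the message for later inspection.
        if (attempt === maxRetries) deadLetter.push({ id: msg.id, error: err.message });
      }
    }
  }
  return { succeeded, deadLetter };
}

// A handler that always fails for one poison message.
const flaky = (msg) => { if (msg.poison) throw new Error('cannot parse'); };
const result = processQueue(
  [{ id: 1 }, { id: 2, poison: true }, { id: 3 }],
  flaky,
  2
);
```

The key decision the talk is pointing at is the one this sketch forces: what actually happens to message 2 after its retries run out, and who looks at the dead-letter queue.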

  • Um, APIs, messaging schemes: how are you gonna have all of the different pieces talk to each other if they're not in the same process?

  • All of these different things.

  • Rolling upgrades.

  • How do you manage your actual upgrades of the individual Lambdas, or whatever piece you're upgrading? Uh, are you changing the signature of the payload?

  • What are all of the different pieces like that?

  • And there's more.

  • So I've heard from a lot of people: just do everything serverless, it's a silver bullet, a cure for everything.

  • And it's really not.

  • If you approach it as a silver bullet that's gonna fix all of our problems, I'm afraid people are gonna be very disappointed.

  • Now again, I say all of this and I'm a very big proponent of using serverless, so don't get me wrong.

  • It is a fantastic tool.

  • But as with any other tool, you've gotta use the right tool for the right job.

  • So for every application, if you're considering serverless as a potential architecture, you need to evaluate your actual system, what it is you're trying to build, and what the benefits are.

  • What is your ROI?

  • What's the cost gonna be?

  • How many requests are you gonna have?

  • And we'll look at a couple of examples here in a minute where we can actually talk through some of those concerns.

  • Um, so let's build a little bit. On this slide, we're gonna talk through a couple of the options in the AWS tool set that we actually use regularly for serverless applications.

  • So, broader architectures: some of these are arguable; they fall into slightly different categories, and maybe somebody's like, well, that's not pure serverless.

  • That's PaaS or something.

  • Yeah, I get that.

  • But overall, from a serverless architecture standpoint, here are a few things to be aware of.

  • We use Route 53 anytime we can to do DNS resolution, because it ties so seamlessly into some of the other pieces, like the API Gateway.

  • If you have not looked into the API management structures for several of the different cloud providers, API Gateway is great: the way you can define and manage configuration for even authentication and header parsing, different things like that.

  • It's pretty slick. Um, Lambdas themselves.

  • This is pretty standard.

  • This is what every provider has as their cloud functions.

  • You provide enough logic to run as a single logical execution, and you want to break it down and make sure you don't throw too much in there.

  • You're gonna run into restrictions with the different providers, such as with Lambdas: if you're sending more than, I believe it's a five-meg payload, in a POST request into the context of the Lambda, that's your cap.

  • It also has a time limit of five minutes of processing time, so anything beyond that and it will automatically shut it down.

  • So there are things like that you just have to be aware of, which will help you discern where, in your logic, to slice up the pieces to make it affordable and still be able to process.

  • So, Lambdas. Um, let's see, what's next: S3.

  • This is your object store.

  • AWS makes it very distinct that it is not a static asset store.

  • It is an object store.

  • You can basically drop anything you want in there.

  • Um, this is the one that has given us trouble with eventual consistency.

  • You just have to be aware of that.

  • Then we'll look at DynamoDB.

  • This is our NoSQL store.

  • Highly scalable, incredibly fast connection speeds, read speeds, write speeds. Um, and then SQS.