
  • My name is Garth Henson, as was mentioned.

  • I do work with Walt Disney Company.

  • I've been with the company about five years now.

  • Thank you.

  • That's how I feel still every day.

  • Um, I worked out of the Seattle office for about four years.

  • I work with corporate technology there.

  • What we do is basically platform tooling: anything we need to build to enable the different brands to do what they need to do.

  • So right now I'm working with ESPN.

  • We work with ABC News.

  • We work with parks and Resorts.

  • We work with all the different brands from an infrastructure, architecture, and application standpoint.

  • Basically, whatever we need to build. Right now it's kind of fun, because most of what I build is Node on the back end, and we do full stack all the way up.

  • We do React on the front end.

  • And so most of the stuff we're working on is JavaScript end to end.

  • One of the reasons I am excited about this talk is that this is an ongoing project: we're trying to figure out which applications we should move not only into the cloud, but into a serverless architecture.

  • The nice thing is that in most cases we can keep running JavaScript end to end, with maybe a little bit of bash in there for some of our Docker containers and entry points, that type of thing.

  • So that's what we're gonna talk about today.

  • Um, serverless is one of those very interesting terms.

  • It's been a buzzword for a long time, and it's one that I really don't like.

  • Honestly, "serverless" sounds like we just don't have servers anymore.

  • Which, of course, we know: they're just gonna go away and everything runs on magic, right?

  • At least at Disney.

  • That's how it works.

  • But what is serverless?

  • Serverless computing, based on the Wikipedia definition here, is really a cloud computing execution model.

  • It's a process.

  • It's a way of thinking in which the cloud provider dynamically manages allocation of machine resources, and the pricing itself is generally based on the actual consumption of the processing.

  • So whatever you use is what you pay for.

  • So obviously that means the servers just aren't there anymore, right? Yes, arms up.

  • So this diagram kind of breaks down, into a couple different categories, all of the things we would talk about as a cloud offering.

  • I hope many people in here are familiar with some of the different broad categories.

  • IaaS is pretty common: you're gonna have infrastructure as a service.

  • Whether that's an on-prem data center that you're having to pull resources from, or you're using EC2 instances, or you're using compute instances from whatever cloud provider you use.

  • You basically have to provision what you need, and you have to assemble all of your pieces before you can deploy the code to it, right?

  • That's that.

  • We're familiar with that.

  • And then you've got the SaaS offerings, which I think most of us are familiar with, where I just don't want to take either the time or the money to build it.

  • I'd rather put the money into paying for the service and just use it where it stands, and let them manage the SLAs, let them manage everything else and the support layers.

  • We don't want to take that on ourselves, so we're gonna buy the service from somebody else, use it in place and continue processing.

  • And then more recently, we're getting more and more capabilities for moving things into the cloud on top of your PaaS offering.

  • So you've got your platform as a service that you're going to start leveraging, and what's really interesting, and we'll talk a little bit more about this, is things like Direct Connect for AWS; there are similar things with some of the other service offerings as well.

  • You can actually start running some of these platform offerings on your own private networks, and so, even though it's still managed completely remotely by your service provider, you can actually run it on your private network as well.

  • So you're starting to get the best of both worlds, where you can start leveraging these other pieces.

  • But yet keep your connectivity the way it needs to be for your broader application. And on this diagram as well, you'll notice that towards the top here we have the functions and databases as a service.

  • Some of your things like your NoSQL stores: Firebase, or your DynamoDB, that type of thing.

  • Those are typically what we group into what we would really call serverless.

  • Those are the things that basically everybody says, "I just use it, it'll scale."

  • And you just pay for what you use, which is true.

  • But we need to make sure we understand what that actually means.

  • So, we're in a smaller room; I would pause for questions, but I will say now, if you have questions, please do come talk to me afterwards.

  • I don't have all the answers.

  • I'm still learning a lot of this, but I do want to share what we've figured out so far.

  • So, of course, once we understand all of these, this brings us to the conclusion I've heard so many times.

  • No more servers.

  • Yeah, everything's awesome.

  • We don't have to worry about anything anymore.

  • Uh, no. Actually, all it means is they're not my servers. The problems, the hiccups, the things we have to be aware of don't go away; they just move to another location.

  • The concerns we have to be aware of just change; they don't go away.

  • Um, why do we care?

  • Why does it matter?

  • Well, if we look at the benefits of Serverless, there's several.

  • And then there are other things we have to be aware of as we move into this architecture.

  • First of all, cost. Cost really is almost unanimously cited as the driving reason for serverless adoption.

  • Everyone's like, Oh, it's just cheaper.

  • Everything should be serverless.

  • Everything's cheaper, which can be on demand.

  • Execution built in the elasticity, optimizes utilization, up time, reliability.

  • Everything's higher.

  • This is all true.

  • This is great.

  • It's fantastic.

  • But I want to say, there's a specific word right there.

  • The savings can be insane.

  • If they're built correctly for the appropriate applications, they are incredibly cheap to run.

  • However, there is that key word: *if* we build them properly, and *if* we use them for the appropriate application, it's cheaper. So keep that in mind.

  • Scalability: by definition, we're using service offerings, hosted and managed solutions, and so scalability is built into it.

  • Reliability: it's really based off your provider's SLA.

  • So we have all of these great potential benefits that we can get out of moving applications into a serverless architecture.

  • But we need to be aware I love this quote.

  • "The most common currency to pay for benefits in application architecture is complexity." So how do we get there? I don't have the diagram up here.

  • But you've got different levers that you can actually pull in any application, right?

  • In any architecture, management comes down and says, hey, this has to run less expensively. Okay, great: it's gonna be slower now, or it's gonna be... there are only certain things we can pull on.

  • And in order to reduce the cost and increase the performance and the scalability and reliability, the easiest way to do that is to increase complexity.

  • Now, what does that look like, though?

  • Well, in this diagram we kind of see the stepping stones of moving from our old monolithic applications into what we would call a standard microservice infrastructure or ecosystem.

  • And basically what happens is, from step one to step two, all we've done is take all of those bits of logic, all of those sections of code that were running in one big application, and break them out into their own smaller services, which is great.

  • But now we have all this wiring in between.

  • So we introduced some complexity with the wiring, breaking a monolith into microservices.

  • Did we gain performance?

  • Reliability?

  • Yeah, if it's done right, absolutely.

  • But we've added some type of complexity.

  • So when we're moving from a microservice infrastructure or architecture into a full serverless stack, we're introducing far more wiring in between all of the pieces.

  • So the complexity is potentially much greater for the overall application, even if the individual pieces of code that we as engineers are managing are much, much simpler to maintain.

  • So the overall application architecture is what we've got to be aware of as we go, and then all of this, of course, is built on top of some type of platform that we're paying for or we've spun up internally.

  • So moving on, here are some of the things we have to be aware of. As the heading up here says, serverless is cheaper, not necessarily simpler.

  • Um, anybody played with S3 much and had any issues with eventual consistency?

  • Couple hands.

  • Awesome.

  • Okay, so if you're not aware of what eventual consistency is: S3... and I told Miles I was gonna have to apologize to him; I don't know if he's in here.

  • All of my examples are AWS.

  • I am intentionally doing pretty generic examples because these are things you can do on any service provider.

  • I'm not doing a sales pitch for AWS; it's just what I know the best.

  • But eventual consistency basically means you can drop an object into an S3 bucket for storage.

  • And it may or may not be available for a read right away, because it has to replicate across different data centers; it's got to replicate across regions.

  • And when your GET request or your pull request goes back out, if it hasn't replicated all the way across, you might hit a bucket or a region where it hasn't propagated yet.

  • And so even though you just wrote it and you got a success, it may not be available to read yet.

  • And so we actually had a couple projects where we were like, serverless is awesome: we're gonna do one Lambda here, and it's gonna drop something into S3, and it's gonna fire this thing off over here, and then the other Lambda is gonna pick it up and do more processing with it.

  • And it wasn't there.

  • And the entire flow grinds to a halt.

  • And we realized that we needed to call the AWS guys, so we get them on the phone and we're talking with them.

  • And the question, of course: did you read the SLA? No, I didn't.

  • And S3 is eventually consistent: there is a window of time in which they're still within their SLA, and as long as you can read it back out within a certain length of time, they're okay.

  • And so these are things like fault tolerance and asynchronicity: if you're doing multiple Lambdas, they're stateless.

  • You've got things like this that you have to start juggling, where you'd do it at an application level in a monolith, or even in microservices, where each microservice could maintain some type of state.

  • You now are introducing logical chunks that are stateless, that have to be aware, or be able to retrieve or rebuild or saturate or do something to be able to continue processing. And so things that before were somewhat intuitive within the application architecture are now becoming things we have to manually think through and deal with, or at least be aware of.

  • Latency: the only reason I bring it up here is that every new network hop can introduce latency.

  • So if you have something, let's say we've got three microservices and they're talking nicely together, and every new request hits all three of them as a handoff.

  • You've got the inbound request, and then you've got at least two hops, so you've got three.

  • But if we break that out and it turns into six different Lambdas or whatever, you're doubling the number of network connections, the network hops you're having to make to process something, so you've got to be aware of the potentially increased latency.

  • Okay. And again, these are all just things to be aware of; I'm not saying don't do it, I'm just saying be aware of what the potential gotchas are.

  • Um, let's see: fault tolerance.

  • What happens if an asynchronous Lambda spins up and fails and it can't process, and it's responding to something that it pulled off of a queue?

  • Have you thought through what your dead-letter queue looks like? Are you going to actually do some retry logic?

  • What is your fault tolerance for it? Will you retry? All these things we have to start thinking through at a higher level or broader scale.
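To make the dead-letter question concrete, here is a small sketch of the decision a consumer makes per message. Everything is injected (`process`, `sendToDlq`) so nothing depends on a real SQS client; `maxReceives` mirrors the redrive policy you would configure on a queue, and all names here are illustrative, not from the talk.

```javascript
// Sketch of the retry / dead-letter decision for one queued message.
// `msg.receiveCount` plays the role of SQS's ApproximateReceiveCount.
async function handleMessage(msg, { process, sendToDlq, maxReceives = 3 }) {
  try {
    await process(msg.body);
    return 'done';
  } catch (err) {
    if (msg.receiveCount >= maxReceives) {
      await sendToDlq(msg); // give up: park it for a human to inspect
      return 'dead-lettered';
    }
    return 'retry'; // leave it on the queue; it will be redelivered
  }
}
```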

  • Um, APIs, messaging schemas: how are you gonna have all of the different pieces talk to each other if they're not in the same process?

  • All of these different things.

  • Rolling upgrades.

  • How do you manage your actual upgrades of the individual Lambdas, or whatever piece you're upgrading? Are you changing the signature of the payload you're dropping? What are all of the different pieces like that? Let's see.

  • And there's more.

  • So I've heard from a lot of people that we should just do everything serverless; it's a silver bullet, it'll cure everything.

  • And it's really not. If you approach it as a cure-all, if you approach it as a silver bullet that's gonna fix all of our problems, I'm afraid people are gonna be very disappointed. Now again, I say all of this and I'm a very big proponent of using serverless, so don't get me wrong.

  • It is a fantastic tool.

  • But as with any other tool, you've gotta use the right tool for the right job. And so for every application, if you're considering serverless as a potential architecture, you need to evaluate your actual system, what it is you're trying to build, and what the benefits are.

  • What is your ROI?

  • What's the cost gonna be?

  • How many requests are you gonna have?

  • And we'll look at a couple of examples here in a minute where we can actually talk through some of those concerns.

  • Um, so let's build a little bit. On this slide, we're gonna talk through a couple of the options in the AWS tool set that we actually use regularly for serverless applications.

  • So, broader architectures: some of these are arguable; they fall into slightly different categories, and maybe somebody's like, well, that's not pure serverless, that's BaaS or something.

  • Yeah, I get that.

  • But overall, from a serverless architecture standpoint, here's a few things to be aware of.

  • We use Route 53 anytime we can to do DNS resolution, because it ties so seamlessly into some of the other pieces, like the API Gateway.

  • If you have not looked into the API management structures for several of the different cloud providers, API gateways are great; they're the way you can define and manage configuration for even authentication and header parsing, different things like that.

  • It's pretty slick. Um, Lambdas themselves: this is pretty standard; this is what every provider has as their cloud functions.

  • You provide enough logic to run as a single logical execution, and you want to break it down and make sure you don't throw too much in there.

  • You're gonna run into restrictions with the different providers. For example, with Lambdas, if you're sending more than, I believe it's a five-meg payload in a POST request going into the context of the Lambda, that's your cap. It also has a time limit of five minutes of processing time, so anything beyond that and it will automatically shut down.

  • So there are things like that you just have to be aware of, which will help you discern where in your logic to slice up the pieces to make it affordable and still be able to process.
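As a tiny illustration of guarding against that kind of cap, here's a check of a payload's serialized size before invoking a function. The 5 MB constant simply echoes the figure in the talk; the real documented limits vary by provider and change over time, so treat this as a sketch, not a reference.

```javascript
// Illustrative payload-size guard. The constant echoes the ~5 MB figure
// from the talk; check your provider's current documented limits.
const MAX_PAYLOAD_BYTES = 5 * 1024 * 1024;

function fitsInvocationLimit(payload) {
  // Serialize the way it would go over the wire, then measure bytes.
  const size = Buffer.byteLength(JSON.stringify(payload), 'utf8');
  return size <= MAX_PAYLOAD_BYTES;
}
```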

  • So, Lambdas. Um, let's see, what's next? S3: this is your object store.

  • This is basically... AWS makes it very distinct that it is not a static asset store; it is an object store.

  • You basically could drop anything you want in there.

  • Um, this is the one that has given us trouble with eventual consistency; you just have to be aware of that.

  • Then we'll look at DynamoDB; this is our NoSQL store.

  • Highly scalable, incredibly fast connection speeds, read speeds, write speeds. Um, and then SQS.

  • And the reason I threw these up here is that these are the ones we'll actually talk through on a couple of the architectural diagrams we'll look at in a moment.

  • So SQS is just what it sounds like: it's a queuing service, a hosted and managed queuing service.

  • You set it up and you start dropping items into it, and then you can reach out to it and check to see if there's anything on the queue for you to resolve.

  • And it supports both standard queuing as well as FIFO queuing.

  • So you can do a lot of different things with it.
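To illustrate what FIFO semantics buy you, here is an in-memory stand-in (not an SQS client): messages carrying the same deduplication id are accepted once, and delivery preserves insertion order. The class and method names are my own, for illustration only.

```javascript
// In-memory sketch of FIFO-queue semantics: ordered delivery plus
// deduplication by message id. Not an SQS client, just the behavior.
class FifoQueue {
  constructor() {
    this.items = [];
    this.seen = new Set();
  }
  send(body, dedupId) {
    if (this.seen.has(dedupId)) return false; // duplicate: dropped
    this.seen.add(dedupId);
    this.items.push(body);
    return true;
  }
  receive() {
    return this.items.shift(); // oldest message first
  }
}
```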

  • So, one of the examples I want to talk through is a problem space I ran into where we had a couple of APIs internally.

  • These were APIs that had been spun up.

  • And as we all know, there's no such thing as a proof of concept that stays a proof of concept.

  • Any time you show it to somebody, somebody starts using it and now you have to support it.

  • And we had several APIs that had been built internally for different teams, and a security audit came through and realized that we had multiple APIs with access to data that were not fronted with any type of authentication.

  • Good news is they were only on the internal network.

  • However, they still were not fronted with authentication.

  • So we were tasked with figuring out a way to front existing APIs with basically OAuth-style authentication of some sort.

  • Where we could give API keys out to only those who need access to it, and we could do some type of token exchange to grant access into those APIs.

  • So if you've ever done something like that before, where you've had an existing API and you've had to try and put something in front of it, it can be extremely challenging.

  • Um, in this case we came up with something that worked decently well. Let's see if I can point over here.

  • Essentially, the goal is we have an application layer down here.

  • So this is something that was already existing.

  • Okay, the application's there, the API is being hit.

  • Um, there's no security layer in front of it.

  • And so what we were able to do is take, right here, the API Gateway and set up basically just an authorization password exchange: you can take your API key and your secret, and you can hit an endpoint here that will reach out into the thing.

  • This is kind of the application layer, I guess, of the auth thing that we created, and there's a Lambda here, the authenticator, that really just checks your password.

  • If you're in the API keys that have been granted, it checks to make sure you've got the appropriate role, it creates a new token, and it drops the token into another DB table, another Dynamo table.

  • The reason we chose this architecture is that DynamoDB tables allow you to specify a TTL on every item you drop into the table, and at that TTL it will auto-expire that item and remove it from your table.

  • So for this solution it worked really well, because let's say we had five-minute tokens and we only wanted them good for a certain amount of time: we could drop them into our token table and they would auto-expire in that amount of time.

  • We didn't have to do any manual cleanup, which was fantastic in this case.
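The token-with-TTL idea can be sketched like this: DynamoDB's TTL feature expects a numeric attribute holding an epoch-seconds timestamp, and the table's TTL configuration names that attribute. The field names below are made up for illustration, not taken from the demo.

```javascript
// Sketch of a token record for a TTL-enabled table. DynamoDB's TTL
// attribute must hold an epoch-seconds number; `expiresAt` is an
// illustrative attribute name.
function buildTokenItem(token, ttlMinutes, now = Date.now()) {
  return {
    token, // partition key
    expiresAt: Math.floor(now / 1000) + ttlMinutes * 60, // TTL (epoch seconds)
  };
}
```

Once the table's TTL setting points at `expiresAt`, expired tokens disappear on their own, which is exactly the "no manual cleanup" property described above.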

  • Once that's done, the authenticator simply returns the token it just generated, and the user or application, whatever it is that's hitting the endpoint, can then pick up that token, pass it off to the API they were trying to reach anyway, and provide an authorization header with a bearer token on it.

  • That then fires off a secondary Lambda whose only job is to authorize.

  • The only thing that Lambda does is take the header and look in here to see if it exists and has the appropriate scopes for you to execute what you're trying to execute. And API Gateway right here actually has a concept of an authorizer, and the authorizer basically says you can check your token.

  • You can build up, on AWS... uh, what's the term? Lost my train of thought. But you basically build up your policy; that's what it is, and it grants or denies access to what they're wanting.

  • And the API Gateway will receive that policy and actually cache it at that layer, so all subsequent requests to that endpoint will actually use the same policy that's been cached.

  • So you can actually replay tokens without having to pile up executions of your Lambdas, which ends up saving even more money.
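The "build up your policy" step the speaker is reaching for looks roughly like this: a Lambda authorizer returns a small IAM-style policy document that API Gateway can then cache for the endpoint. The document shape follows the documented authorizer response format; the principal and ARN values here are illustrative.

```javascript
// Sketch of a Lambda authorizer's output: an IAM-style policy document
// that API Gateway can cache. Values are illustrative.
function buildAuthPolicy(principalId, allow, methodArn) {
  return {
    principalId,
    policyDocument: {
      Version: '2012-10-17',
      Statement: [
        {
          Action: 'execute-api:Invoke',
          Effect: allow ? 'Allow' : 'Deny',
          Resource: methodArn,
        },
      ],
    },
  };
}
```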

  • So I gotta jump ahead; I just got the signal here.

  • So, application layer: you end up doing the handoff once everything has been granted, and you move forward. Um, this one, I think, is a little more approachable.

  • This is a full one: if we wanted to do something like a mailing list, we could actually have the Route 53 layer do the DNS resolution and hit an API Gateway.

  • We can have the gateway simply drop payloads into, like, a registration bucket.

  • So a Lambda can fire off: oh great, you wanted to register; we're gonna drop your email address into a DynamoDB table. And then you might have an administrator come in later who wants to actually send an email, and they can hit a different endpoint on the API Gateway.

  • And it'll kick over and build up a queue of all the emails that need to be sent.

  • And then you can process them however you like.

  • I want to jump real quick and see if I can do a quick demo or not.

  • So give me one moment, just a couple minutes here. All right, so this is a whirlwind.

  • I will tell you now, I'm gonna probably have to cut this very short.

  • So if you have questions or you want to walk through this with me later, please find me.

  • I'm happy to talk through as much of this as I can.

  • API Gateway essentially allows you to just specify all of your endpoints. It's almost like if you've used Swagger or anything like that, or used hapi.js, or anything where you can do declarative-style APIs or routes; that's basically what we can do here.

  • And then you wire it up directly to Lambda.

  • So up here, you see the type is a Lambda proxy on the integration request.

  • Basically, what that tells API Gateway to do is a little bit of preprocessing, creating a context that it's gonna hand off to your Lambda function.

  • And so then you write a little bit of Node in your Lambdas over here. Slow wifi where we are... here we go.

  • Oh, come on.

  • If that loads, that's great.

  • But here's some Node: it's just a function that accepts a context and some additional parameters, depending on the type of Lambda you want.

  • There's a lot more complexity, and I'm smoothing a little bit of it over, but it gets handed a context.

  • From that context, you can basically look at the payload that was parsed out.

  • You could look at the headers.

  • You can make all kinds of different decisions within your lambda of what to do further.
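A minimal sketch of the handler shape being described: an async function handed the proxied event, which parses the JSON body and returns an API Gateway proxy-style response. The greeting logic is a placeholder, not the demo's actual code.

```javascript
// Sketch of a Lambda-proxy handler: parse the JSON body from the event
// and return a proxy-style response object. Logic is a placeholder.
async function handler(event) {
  const body = event.body ? JSON.parse(event.body) : {};
  const name = body.name || 'world';
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `hello, ${name}` }),
  };
}

// In a real Lambda module you would export it:
// exports.handler = handler;
```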

  • So in this case I'm trying to load the email-sending one.

  • In this case, we would actually grab from DynamoDB our entire list of emails that have been registered.

  • We would then take the body that was sent to us, and we just kind of line them up together and send off an email to each one of the users.
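The "line them up together" step might look like this: pair each registered address (as pulled from the table) with the posted message to produce one send job per recipient, ready to drop onto a queue. Field names are illustrative, not from the demo code.

```javascript
// Sketch of the fan-out step: one send job per registered recipient.
function buildSendJobs(emails, message) {
  return emails.map((to) => ({
    to,
    subject: message.subject,
    text: message.text,
  }));
}
```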

  • So it's a really, really rudimentary email management or marketing type thing.

  • But that's essentially all there is in it: the ability to wire up your API directly into a Lambda that knows how to process and glean the data that's been dropped in from somewhere else.

  • So that never did load, no. All right, well, come see me later and we'll run through it.

  • So again, that's all this diagram is showing: in this case, what I was doing is dropping it into a queue in SQS, and then we'd be able to poll that queue and pull out the generated emails to send off.

  • You could have an IronPort or something like that to actually process major email loads.

  • So, besides that, a little bit of homework: there are multiple, multiple things to consider that we have not talked about.

  • This is very high level; we have things like IAM roles, you've got security models, you've got VPCs for your private cloud.

  • You have Direct Connect, which I did mention, which lets you run things on your internal or business network so that you can actually control your security a little bit better, or have security shared between internal and cloud applications.

  • If you're using MySQL, Postgres, something like that, there are offerings to let you do that with your database and let them scale it or host it for you; and again, pretty much all the providers have this type of thing.

  • You've got caching layers; you've got all kinds of different things you can use to accent or complement the serverless offerings we've looked at, to build out a much broader and more secure application.

  • So, that was the demo, and that's it.

  • Thank you very much.



Where Did All My Servers Go? - Garth Henson | JSConf Hawaii 2019
