
  • It's an interesting one, and it's sort of: are two two-gigahertz processors the same speed?

  • To understand this one, you need to understand what the megahertz or the gigahertz is (or the kilohertz, if you go back far enough, or the terahertz, if processors ever get that fast), and we need to think about what that's actually describing.

  • We tend to think that we can describe the speed of a processor by looking at its gigahertz rating: so a 1.6 gigahertz processor is supposedly faster than a 1.5 gigahertz one, and so on, and to an extent that's true.

  • But it only really fits if you've got the same model of processor.

  • But if you compare different CPUs, even different iterations of Intel's Core i7 architecture, or say a Core i7 compared to a Ryzen chip, an AMD Ryzen chip, then it breaks down, and it's down to what the megahertz is describing and how the CPUs are built inside (there's a rough worked example of this below).
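
A rough way to see why the gigahertz number alone doesn't settle this is the usual time = instructions × cycles-per-instruction ÷ clock-rate relation. The sketch below uses two hypothetical chips and made-up numbers purely for illustration.

```python
# Rough sketch: why clock speed alone doesn't determine performance.
# Both chips and all numbers below are invented for illustration.

def run_time_s(instructions, cycles_per_instruction, clock_hz):
    # Classic relation: time = instructions * CPI / clock rate
    return instructions * cycles_per_instruction / clock_hz

program = 1_000_000_000  # one billion instructions

# A 3.0 GHz chip that averages 1.5 cycles per instruction...
chip_a = run_time_s(program, cycles_per_instruction=1.5, clock_hz=3.0e9)
# ...loses to a 2.5 GHz chip that averages 1.0 cycles per instruction.
chip_b = run_time_s(program, cycles_per_instruction=1.0, clock_hz=2.5e9)

print(f"chip A: {chip_a:.3f} s, chip B: {chip_b:.3f} s")
# chip A: 0.500 s, chip B: 0.400 s
```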

  • So the megahertz or the gigahertz is describing how fast the clock which synchronizes the computer runs.

  • A way to think about this is the conductor of an orchestra: he's keeping time, and the orchestra is playing in time.

  • That's what the clock in the CPU does: it keeps track of the timing of what's happening.

  • If you think back to the previous video I did a couple of years ago on pipelines, a modern CPU has a series of steps that it goes through; at its very simplest we could label these as a fetch step, a decode step and an execute step.

  • This is similar to what the original ARM CPU had.

  • So in the first clock cycle we start fetching the first instruction; then in the second clock cycle we start decoding that instruction and fetching the second instruction; then in the third clock cycle we actually get around to executing the first one, we decode the second and we start fetching the third.

  • And so this goes on: in the fourth cycle we fetch the fourth, we're decoding the third and we execute the second, and so on (sketched in code below).
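
A minimal sketch of that fill pattern, assuming a three-stage pipeline and no stalls; the instruction labels I1, I2, ... are invented.

```python
# Toy model of the three-stage pipeline fill described above: in clock
# cycle c, instruction c is being fetched, c-1 decoded and c-2 executed.
# No real instruction set is modelled.

STAGES = ["fetch", "decode", "execute"]

def pipeline_trace(num_instructions, num_cycles):
    for cycle in range(1, num_cycles + 1):
        row = []
        for depth, stage in enumerate(STAGES):
            instr = cycle - depth          # which instruction occupies this stage
            row.append(f"{stage}: I{instr}" if 1 <= instr <= num_instructions
                       else f"{stage}: -")
        print(f"cycle {cycle}: " + "  ".join(row))

pipeline_trace(num_instructions=4, num_cycles=6)
# From cycle 3 onwards all three stages are busy, e.g.
# cycle 3: fetch: I3  decode: I2  execute: I1
```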

  • This continues, providing that we can actually satisfy things and we don't get any pipeline stalls.

  • This happens as long as we don't require a part of the CPU that's in use here, say, to also be needed over here.

  • So, for example, if the CPU can only access one thing from memory at a time, then we can't fetch and execute a memory access at the same time; in that case we get a stall. Watch the other video for that, and there's a rough sketch of the cost below.
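
Here is a toy model of that kind of stall, assuming a three-stage pipeline and a single memory port, so a load in the execute stage blocks that cycle's fetch; the instruction kinds and timings are invented.

```python
# Toy model: a load in the execute stage uses memory, so that cycle's
# fetch has to wait and a bubble enters the pipeline.

def cycles_taken(program):
    """Count cycles for a list of 'alu' / 'load' instructions."""
    to_fetch = list(program)
    decode = None       # instruction currently in the decode stage
    fetched = None      # instruction fetched in the previous cycle
    cycles = 0
    while to_fetch or fetched or decode:
        cycles += 1
        execute, decode = decode, fetched         # everything moves down a stage
        memory_busy = (execute == "load")         # the executing load needs memory
        if to_fetch and not memory_busy:
            fetched = to_fetch.pop(0)             # fetch can use memory this cycle
        else:
            fetched = None                        # bubble: fetch had to wait
    return cycles

print(cycles_taken(["alu", "alu", "alu"]))    # 5 cycles: pipeline stays full
print(cycles_taken(["load", "alu", "alu"]))   # 6 cycles: the load blocks one fetch
```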

  • Does this mean we're doing more than one thing at a time? The answer is yes, we are.

  • We're trying to speed up the execution of our CPU, and realising that actually, when we're executing things, we're not normally using this bit.

  • So if we spread things out into a pipeline, we can have all the bits of the CPU doing something, and we can think about these being synchronized to the clock.

  • We do this bit here in the first clock cycle; this is the second clock cycle, the third clock cycle, the fourth clock cycle.

  • So if we keep this structure but make the clock cycles shorter, i.e. we have a higher megahertz speed, a higher gigahertz speed, then things get faster.

  • So this works fine, and if we increase the clock speed then we can decrease the amount of time that each of these steps takes.

  • But there's a limit, because these are implemented in digital logic, and after a while the logic itself will take a certain amount of time and we won't be able to reduce it any more, because otherwise the clock would be ticking over before we'd finished running the logic.

  • It's what's called the propagation delay of the digital logic.

  • So we've got a minimum amount of time, and actually it'll probably be governed by one of these steps (a rough calculation below).
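
As a back-of-the-envelope illustration of that limit, with invented per-stage propagation delays, the slowest stage sets the fastest usable clock:

```python
# The clock period must cover the slowest stage's propagation delay,
# so the longest stage sets the maximum clock rate. Delays are invented.

stage_delay_ns = {"fetch": 0.30, "decode": 0.25, "execute": 0.45}

slowest = max(stage_delay_ns.values())          # 0.45 ns, the execute stage
max_clock_ghz = 1.0 / slowest                   # 1 / period in ns gives GHz
print(f"max clock ~ {max_clock_ghz:.2f} GHz")   # ~2.22 GHz
```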

  • So there's going to be a limit on how fast we can get our clock to be, based on the logic, but we can get around that by actually making our pipeline longer.

  • So say we break it down not into three steps but into six smaller steps, and each would do part of what was being done here, and so on.

  • So this one might fetch, this might do part of the decode, this might finish decoding, this might get something from a register, and these two might do parts of the execution; I'm making up the exact split here.

  • There are various ways you can build these things, and in this case we've got one, two, three, four, five, six steps, so it will take longer for our pipeline to get full.

  • But when it does, if we can keep the pipeline full, we'll be able to run faster with a faster clock speed, because each of these steps takes up less time (compared roughly below).
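
A rough comparison of the two designs under those assumptions: the six shorter stages are assumed (hypothetically) to halve the cycle time, at the cost of more cycles to fill the pipeline. All numbers are invented.

```python
# With no stalls, a pipeline needs (depth - 1) cycles to fill and then
# completes one instruction per cycle.

def total_time_ns(instructions, depth, cycle_ns):
    cycles = instructions + depth - 1
    return cycles * cycle_ns

short_pipe = total_time_ns(1000, depth=3, cycle_ns=1.0)   # 1002.0 ns
deep_pipe  = total_time_ns(1000, depth=6, cycle_ns=0.5)   # 502.5 ns
print(short_pipe, deep_pipe)   # the deeper pipeline wins while it stays full
```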

  • Of course, the problem is, if we do get a bubble in our pipeline, then it'll take longer to refill.

  • So here a bubble might cost us a one or two cycle delay; here we'd have a four or five cycle delay, possibly.

  • So if this one was, say, running at one gigahertz and this one was at 1.2, a five-cycle delay here could take longer than a one-cycle delay there, and so on (see the toy numbers below).
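
Continuing the same toy numbers: a bubble is assumed to cost roughly (depth - 1) cycles to refill, so if the deeper pipeline's clock is only a little faster and bubbles are common, it can come out slower overall. The bubble counts and clock speeds here are invented.

```python
# Each bubble is assumed to drain and refill the pipeline, costing
# roughly (depth - 1) extra cycles.

def time_with_bubbles_ns(instructions, depth, cycle_ns, bubbles):
    refill_cycles = bubbles * (depth - 1)
    return (instructions + depth - 1 + refill_cycles) * cycle_ns

# 3-stage at 1 GHz vs 6-stage at 1.25 GHz, one bubble every 10 instructions:
print(time_with_bubbles_ns(1000, depth=3, cycle_ns=1.0, bubbles=100))  # 1202.0 ns
print(time_with_bubbles_ns(1000, depth=6, cycle_ns=0.8, bubbles=100))  # 1204.0 ns
```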

  • What you can see is that the CPU design, the architecture, the way the internal bits are built, has as much influence on how fast a program runs as the clock speed.

  • Increasing the clock speed and changing the design doesn't necessarily mean that it'll be faster, even though it's running quicker; hopefully it will be, but sometimes you have to redesign your program to get the best advantage out of it.

  • So the changes in the architecture have an effect.

  • So what you need to do is design your CPU so that you try and avoid getting those bubbles in the pipeline: even though you've now got a faster CPU, because we're executing these instructions at a much quicker rate, we want to keep the pipeline full.

  • So what you end up doing is designing other bits of things: things like superscalar architectures, which we've talked about before, where we can run more than one instruction at the same time.

  • You have out-of-order execution, where you move things around to try and avoid the bubbles, and to do that you rename registers and have more registers, and all sorts of things going on.

  • You have a branch predictor, which is trying to make sure that we don't get the bubbles in the first place by choosing the right instructions to fetch, and so on.

  • And all of that can have an influence on how fast your CPU actually runs the code, as much as the clock speed does (a toy example of the branch predictor's effect is sketched below).
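
As a rough illustration of the branch predictor's role, here is a toy average-cycles-per-instruction calculation; the branch fraction, accuracy and misprediction penalty are guesses for illustration, not figures from the video.

```python
# Every mispredicted branch is assumed to cost a fixed pipeline-refill
# penalty; all the numbers below are invented.

def average_cpi(base_cpi, branch_fraction, predictor_accuracy, mispredict_penalty):
    misses_per_instruction = branch_fraction * (1 - predictor_accuracy)
    return base_cpi + misses_per_instruction * mispredict_penalty

poor = average_cpi(1.0, branch_fraction=0.2, predictor_accuracy=0.50, mispredict_penalty=5)
good = average_cpi(1.0, branch_fraction=0.2, predictor_accuracy=0.95, mispredict_penalty=5)
print(f"poor predictor: {poor:.2f} cycles/instruction")   # ~1.50
print(f"good predictor: {good:.2f} cycles/instruction")   # ~1.05
```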

  • The clock speed does tell you how fast the clock is ticking, but you can't really use it to compare between different CPUs of different types.

  • We can execute that multiply up there; we think, okay, can we do the other one at the same time? Well, no, because we need the result of that as well.

  • So we can then execute the add down here before it, and it just fits on the paper, so we can actually squash things up and we're going to save some time (a rough sketch of that scheduling is below).
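
A speculative sketch of the scheduling being described: a machine that can issue two instructions per cycle can't pair the add with the multiply it depends on, but it can pull the independent add forward to fill the slot. The register names, instruction tuples and the greedy scheduler are invented for illustration.

```python
program = [
    ("mul", "r3", ("r1", "r2")),   # r3 = r1 * r2
    ("add", "r4", ("r3", "r5")),   # r4 = r3 + r5  (needs the mul's result)
    ("add", "r6", ("r7", "r8")),   # r6 = r7 + r8  (independent)
]

def dual_issue_schedule(instrs):
    """Issue up to two ready instructions per cycle, reordering to fill slots."""
    produced = {dest for _, dest, _ in instrs}   # values some instruction computes
    ready = set()                                # results available so far
    pending, schedule = list(instrs), []
    while pending:
        issued = []
        for instr in list(pending):
            op, dest, srcs = instr
            waiting = [s for s in srcs if s in produced and s not in ready]
            if len(issued) < 2 and not waiting:  # operands ready and a slot free
                issued.append(instr)
                pending.remove(instr)
        ready |= {dest for _, dest, _ in issued} # results usable from next cycle
        schedule.append([op for op, _, _ in issued])
    return schedule

print(dual_issue_schedule(program))
# [['mul', 'add'], ['add']] -- the independent add pairs with the mul,
# and the dependent add waits a cycle for r3
```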


Computer Speeds - Computerphile