Bill Jenkins, Intel | Super Computing 2017

  • >> Narrator: From Denver, Colorado, it's theCUBE.

  • Covering Super Computing 17.

  • Brought to you by Intel.

  • (techno music)

  • Hey, welcome back, everybody.

  • Jeff Frick here with theCUBE.

  • We're in Denver, Colorado at

  • the Super Computing Conference 2017.

  • About 12,000 people, talking

  • about the outer edges of computing.

  • It's pretty amazing.

  • The keynote was huge.

  • The Square Kilometre Array, a new

  • vocabulary word I learned today.

  • It's pretty exciting times,

  • and we're excited to have our next guest.

  • He's Bill Jenkins.

  • He's a Product Line Manager for AI on FPGAs at Intel.

  • Bill, welcome.

  • >> Bill: Thank you very much for having me.

  • Nice to meet you, and nice to talk to you today.

  • >> Jeff: So you're right in the middle of this machine-learning

  • AI storm, which we keep hearing more and more about.

  • Kind of the next generation of big data, if you will.

  • >> Bill: That's right.

  • It's the most dynamic industry I've seen

  • since the telecom industry back in the 90s.

  • It's evolving every day, every month.

  • >> Jeff: Intel's been making some announcements.

  • Using this combination of software programming and FPGAs

  • on the acceleration stack to get more

  • performance out of the data center.

  • Did I get that right?

  • >> Bill: Sure, yeah, yeah.

  • >> Jeff: Pretty exciting.

  • The use of both hardware, as well as software on top of it,

  • to open up the solution stack, open up the ecosystem.

  • Which of those things are you working on specifically?

  • >> Bill: First, I build the enabling technology that brings

  • the FPGA into that Intel ecosystem.

  • Where Intel is trying to provide that solution

  • from top to bottom to deliver AI products.

  • >> Jeff: Right.

  • Into that market.

  • FPGAs are a key piece of that because we provide a different

  • way to accelerate those machine-learning and AI workloads.

  • We can be an offload engine to a CPU,

  • or we can do inline analytics to offload the system,

  • and get higher performance that way.

  • We tie into that overall Intel

  • ecosystem of tools and products.

  • >> Jeff: Right.

  • So that's a pretty interesting piece because

  • the real-time streaming data is all the rage now, right?

  • Not in batch.

  • You want to get it now.

  • So how do you get it in?

  • How do you get it written to the database?

  • How do you get it into the microprocessor?

  • That's a really, really important piece.

  • That's different than even two years ago.

  • You didn't really hear much about real-time.

  • >> Bill: I think, like I said, it's evolving quite a bit.

  • Now, a lot of people deal with training.

  • It's the science behind it.

  • The data scientists work to figure out what topologies

  • they want to deploy and how they want to deploy 'em.

  • But now, people are building products around it.

  • >> Jeff: Right.

  • And once they start deploying these technologies

  • into products, they realize that they don't want to

  • compensate for limitations in hardware.

  • They want to work around them.

  • A lot of this evolution that we're building is to try to

  • find ways to more efficiently do that compute.

  • What we call inferencing: the actual deployed

  • machine-learning scoring, if you will.

  • >> Jeff: Right.

  • In a product, it's all about how

  • quickly can I get the data out.

  • It's not about waiting two seconds to start the processing.

  • You know, in an autonomously driven car where

  • someone's crossing the road, I'm not waiting

  • two seconds to figure out it's a person.

  • >> Jeff: Right, right.

  • I need it right away.

  • So I need to be able to do that with video feeds,

  • right off a disk drive, from the Ethernet data coming in.

  • I want to do that directly in line, so that my processor

  • can do what it's good at, and we offload that processor

  • to get better system performance.

  • >> Jeff: Right.

  • And then on the machine-learning

  • specifically, 'cause that is all the rage.

  • And it is learning.

  • So there is a real-time aspect to it.

  • You talked about autonomous vehicles.

  • But there's also continuous learning over time,

  • that's not necessarily dependent on learning immediately.

  • >> Bill: Right.

  • But continuous improvement over time.

  • What are some of the unique challenges in machine-learning?

  • And what are some of the ways that

  • you guys are trying to address those?

  • >> Bill: Once you've trained the network,

  • people always have to go back and retrain.

  • They say okay, I've got a good accuracy,

  • but I want better performance.

  • Then they start lowering the precision,

  • and they say well, today we're at 32-bit, maybe 16-bit.

  • Then they start looking into eight.

  • But the problem is, their accuracy drops.

  • So they retrain that network at eight bits,

  • to get the performance benefit,

  • but with the higher accuracy.

  • The flexibility of the FPGA actually allows people to take

  • that network at 32-bit, with the 32-bit trained weights,

  • but deploy it in lower precision.

  • So we can abstract that away. The hardware's

  • so flexible, we can do what we call

  • 11-bit floating point.

  • Or even 8-bit floating point.

  • Even here today at the show, we've got a binary and ternary

  • demo, showcasing the flexibility that the FPGA can provide

  • today with that building block piece

  • of hardware that the FPGA can be.

  • And really provide, not only the topologies that

  • people are trying to build today, but tomorrow.

  • >> Jeff: Right.

  • Future-proofing their hardware.

  • But then the precisions that they may want to do.

  • So that they don't have to retrain.

  • They can get less than a 1% accuracy loss, but they can

  • lower that precision to get all the performance benefits

  • of that data scientist's work, without having to

  • come up with a new architecture.
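
For a concrete picture of what Bill is describing, here is a minimal NumPy sketch, with made-up layer sizes and weights (this is not Intel's toolchain), of deploying 32-bit trained weights at a lower precision and checking how far the outputs drift:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization: snap float32 weights
    onto the int8 grid, then dequantize for comparison."""
    scale = np.abs(w).max() / 127.0               # one scale per tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q.astype(np.float32) * scale

def truncate_mantissa(w, keep_bits):
    """Crudely emulate a narrower float by zeroing low mantissa bits
    of float32 values. Sign and the 8-bit exponent are kept, so this
    only approximates a true FP11-style (1-5-5) format."""
    mask = np.uint32((0xFFFFFFFF << (23 - keep_bits)) & 0xFFFFFFFF)
    return (w.view(np.uint32) & mask).view(np.float32)

# Hypothetical trained layer: 256 inputs -> 64 outputs.
rng = np.random.default_rng(0)
w32 = rng.normal(0.0, 0.05, size=(256, 64)).astype(np.float32)
x = rng.normal(size=(1, 256)).astype(np.float32)

y_full = x @ w32                                  # 32-bit reference
for name, w_low in [("int8", quantize_int8(w32)),
                    ("5-bit mantissa", truncate_mantissa(w32, 5))]:
    err = np.abs(y_full - x @ w_low).max() / np.abs(y_full).max()
    print(f"{name}: max relative output error = {err:.4f}")
```

On an FPGA the narrow format is implemented natively in the fabric rather than emulated, which is where the performance comes from; the point of the sketch is only that the same 32-bit trained weights can survive the precision drop with a small output error, no retraining required.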

  • >> Jeff: Right.

  • But it's interesting 'cause there's trade-offs, right?

  • >> Bill: Sure.

  • There's no optimum solution.

  • It's only optimum relative to what you're trying to optimize for.

  • >> Bill: Right.

  • So really, the ability to change, to continue

  • to work on those learning algorithms,

  • to be able to change your priorities, is pretty key.

  • >> Bill: Yeah, a lot of times today, you want this.

  • So this has been the mantra of the FPGA for 30-plus years.

  • You deploy it today, and it works fine.

  • Maybe you build an ASIC out of it.

  • But what you want tomorrow is going to be different.

  • So maybe, if it's not changing so rapidly,

  • you build the ASIC because there's runway to that.

  • But if there isn't, you may just say, I have the FPGA,

  • I can just reprogram it to do what's

  • the next architecture, the next methodology.

  • >> Jeff: Right.

  • So it gives you that future-proofing.

  • That capability to sustain different topologies.

  • Different architectures, different precisions.

  • To kind of keep people going with the same piece of hardware.

  • Without having to say, spin up a new ASIC every year.

  • >> Jeff: Right, right.

  • Which, even then, it's so dynamic it's probably faster

  • than every year, the way things are going today.

  • So the other thing you mentioned is topology,

  • and it's not the same topology you mentioned,

  • but this whole idea of the edge.

  • >> Bill: Sure.

  • So moving more and more compute,

  • and store, and smarts to the edge.

  • 'Cause in a lot of applications, you mentioned

  • autonomous vehicles, there's just not going to be time

  • to get everything back up into the cloud.

  • Back into the data center.

  • You guys are pushing this technology,

  • not only in the data center, but progressively

  • closer and closer to the edge.

  • >> Bill: Absolutely.

  • The data center has a need.

  • It's always going to be there, but they're getting big.

  • The amount of data that we're trying

  • to process every day is growing.

  • I always say that the telecom industry

  • started the Information Age.

  • Well, the Information Age has done

  • a great job of collecting a lot of data.

  • We have to process that.

  • If you think about where this is headed, maybe I'll

  • allude back to autonomous vehicles.

  • You're talking about thousands of

  • gigabytes, per day, of data generated.

  • Smart factories.

  • Exabytes of data generated a day.

  • What are you going to do with all that?

  • It has to be processed.

  • We need that compute in the data center.

  • But we have to start pushing it out into the edge,

  • where I start thinking, well even

  • a show like this, I want security.

  • So, I want to do real-time weapons detection, right?

  • Security prevention.

  • I want to do smart city applications.

  • Just monitoring how traffic moves through a mall,

  • so that I can control lighting and heating.

  • All of these things at the edge, in the camera,

  • that's deployed on the street.

  • In the camera that's deployed in a mall.

  • All of that, we want to make those smarter,

  • so that we can do more compute.

  • To offload the amount of data that needs

  • to be sent back to the data center.

  • >> Jeff: Right.

  • As much as possible.

  • Relevant data gets sent back.

  • >> Jeff: No shortage of demand for compute,

  • store, networking, is there?

  • >> Bill: No, no.

  • It's really a heterogeneous world, right?

  • We need all the different compute.

  • We need all the different aspects of

  • transmission of the data with 5G.

  • We need disk space to store it.

  • >> Jeff: Right.

  • We need cooling to cool it.

  • It's really becoming a heterogeneous world.

  • >> Jeff: All right, well, I'm going to give you the last word.

  • I can't believe we're in November of 2017.

  • >> Bill: Yeah.

  • Which is bananas.

  • What are you working on for 2018?

  • What are some of your priorities?

  • If we talk a year from now, what

  • are we going to be talking about?

  • >> Bill: Intel's acquired a lot of companies

  • over the past couple of years now in AI.

  • You're seeing a lot of merging of

  • the FPGA into that ecosystem.

  • We've got Nervana.

  • We've got Movidius.

  • We've got Mobileye acquisitions.

  • Saffron Technologies.

  • All of these things, where the FPGA is kind of a key piece

  • of that because it gives you that flexibility

  • of the hardware, to extend those pieces.

  • You're going to see a lot more stuff in the cloud.

  • A lot more stuff with partners next year.

  • And really enabling that edge-to-data-center compute,

  • with things like binary neural networks,

  • ternary neural networks.

  • All the different next generation of topologies

  • to kind of keep that leading edge flexibility

  • that the FPGA can provide for people's products tomorrow.
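
As a rough sketch of the binary/ternary idea, using one common ternary formulation (a magnitude threshold plus a single scale, in the spirit of Ternary Weight Networks; not necessarily the scheme in Intel's demo), each weight collapses to -1, 0, or +1, which is why these networks map so cheaply onto FPGA logic:

```python
import numpy as np

def ternarize(w, threshold_ratio=0.7):
    """Ternary quantization: weights below a magnitude cutoff become 0,
    the rest become +/-1, and one float scale recovers magnitude."""
    delta = threshold_ratio * np.abs(w).mean()
    t = np.where(w > delta, 1.0, np.where(w < -delta, -1.0, 0.0))
    kept = np.abs(t) > 0
    alpha = np.abs(w[kept]).mean() if kept.any() else 0.0
    return t.astype(np.float32), np.float32(alpha)

# Hypothetical trained layer: 128 inputs -> 32 outputs.
rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.05, size=(128, 32)).astype(np.float32)
t, alpha = ternarize(w)

x = rng.normal(size=(1, 128)).astype(np.float32)
# The ternary multiply needs only adds and subtracts, plus one
# final scale by alpha, which is cheap to lay down in FPGA fabric.
y_ternary = (x @ t) * alpha
y_full = x @ w
print("nonzero weights:", int(np.abs(t).sum()), "of", t.size)
print("output correlation:",
      np.corrcoef(y_full.ravel(), y_ternary.ravel())[0, 1])
```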

  • >> Jeff: Exciting times.

  • >> Bill: Yeah, great.

  • >> Jeff: All right, Bill Jenkins.

  • There's a lot going on in computing.

  • If you're not getting your computer science

  • degree, kids, think about it again.

  • He's Bill Jenkins.

  • I'm Jeff Frick.

  • You're watching theCUBE from Super Computing 2017.

  • Thanks for watching.

  • >> Bill: Thank you.

  • (techno music)
