How Google Is Improving AI Technology


  • Let's get back to the Google I/O developers conference in Mountain View, California.

  • With the spotlight on hardware this year,

  • Google CEO Sundar Pichai announced a new artificial intelligence supercomputer chip

  • looking to transform the search giant into an AI-first company and a real cloud computing contender.

  • We caught up with Scott Huffman, Google's vice president of engineering, and asked just what this supercomputer chip means for Google.

  • Well, we're really excited to have the computing power to really harness all of the newer machine learning algorithms.

  • One of the things that is exciting about these new algorithms is they're what, in computer science terms, we call "highly parallelizable".

  • So you can do many computations at once and get very high scale and process a lot of data that way.

  • And the new chips are really designed to do that from the ground up.

  • So really designed to do the kinds of machine learning processing that we're using a lot of.
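The "highly parallelizable" point above can be made concrete with a small sketch. This is only an illustration, not Google's code, and the new chip's internals aren't detailed in the interview: the core of most neural-network work is a matrix multiply applied to a whole batch of independent examples, so the same arithmetic can run on many cores at once, which is exactly the kind of workload accelerator chips are designed for.

```python
import numpy as np

# Illustrative sketch only (not Google's code): why neural-network workloads
# are "highly parallelizable". A dense layer applies one shared weight matrix
# to every example in a batch, and each example's result is independent of
# the others, so the work can be spread across many cores or accelerator units.

rng = np.random.default_rng(0)
batch = rng.standard_normal((1024, 256))    # 1024 independent input examples
weights = rng.standard_normal((256, 128))   # one shared weight matrix

# One batched matrix multiply handles all 1024 examples "at once"; hardware
# such as GPUs and TPUs is built to parallelize exactly this operation.
activations = np.maximum(batch @ weights, 0.0)   # dense layer + ReLU

# The same computation written example-by-example, to show the independence:
per_example = np.stack([np.maximum(x @ weights, 0.0) for x in batch])
assert np.allclose(activations, per_example)
```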

  • Digital assistants are all the rage now, but Google Home sales are still dwarfed by Amazon Echo.

  • How is the Google Assistant different from Siri, different from Alexa, or Cortana?

  • So, one thing that we're very excited about with the Google Assistant,

  • is the ability to actually go across all the different devices and contexts in your life.

  • So as you go from your house, to your car and your commute, to out and about on your day,

  • we want the same assistant to really be available to help you in all those different places.

  • And so today, we're really excited to deploy the assistant out to all the iPhones,

  • make it available to iPhone users in the US,

  • and we're in the process, of course, of rolling out across all the Android phones, Google Home, Android Auto, Android TV, Android Wear,

  • so really making that assistant always available to you no matter what you're doing.

  • Now what is it gonna take for voice technology to actually improve?

  • Because, you know, I've used all of these devices, and in my own experience, it's still rather crude.

  • Well, so we think we're making a lot of progress, but one of the big things, and Sundar talked a little about it today,

  • is really using broad-scale data and neural algorithms in order to improve the technology.

  • So we've been actually pretty significantly overhauling, kind of, all of our algorithms under the hood every couple of years

  • to take advantage of the new computing power that we have and new and larger amounts of data.

  • And every time that we do that, we see a pretty big jump in improvement.

  • One of the things that we did as we worked on bringing Google Home to the market that was really exciting,

  • is that, because Google Home needs to work at a distance, I might be standing far away,

  • there's a lot more noise in the microphone signal.

  • And so by adding essentially artificial noise into our training data,

  • we were able to have our neural network actually recognize things from far away.

  • So, these kinds of algorithms are very powerful for, kind of, shaping recognition in different environments.
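The far-field trick Huffman describes, adding artificial noise to the training data, is a standard data-augmentation technique, and a minimal sketch of the idea follows. This is not Google's actual pipeline; the function name and signal-to-noise levels are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not Google's pipeline) of the noise-augmentation idea:
# mix background noise into clean training audio so a speech model also
# learns to recognize far-field, noisy input.

def augment_with_noise(clean, noise, snr_db):
    """Mix `noise` into `clean` at the requested signal-to-noise ratio (dB)."""
    clean = clean.astype(np.float64)
    noise = noise[: len(clean)].astype(np.float64)
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12     # avoid division by zero
    # Scale the noise so that 10 * log10(clean_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10.0)))
    return clean + scale * noise

# Hypothetical usage: each clean utterance yields several noisier copies,
# simulating a speaker standing farther from the microphone.
rng = np.random.default_rng(0)
clean_utterance = rng.standard_normal(16000)      # 1 second of stand-in audio at 16 kHz
room_noise = rng.standard_normal(16000)           # stand-in for recorded background noise
training_examples = [clean_utterance] + [
    augment_with_noise(clean_utterance, room_noise, snr_db=snr)
    for snr in (20, 10, 5)                        # lower SNR = noisier copy
]
```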

  • What is one place that you see the assistant going, where it hasn't gone yet?

  • Today we're enabling a voice conversation with a great set of functionality,

  • but users are doing sort of the obvious things on every device.

  • But I don't think we've fully realized yet the vision of having any kind of conversation you want,

  • having it really be understood, and then having the assistant tap into all the different services in the world in a seamless way.

  • That's really the vision.

  • And so I think we have a long way to go.

  • One example that we showed some beginnings of today that we're really excited about is something we call Google Lens.

  • And this is just the realization that, you know, speaking out loud is great,

  • but when I'm talking with my friend, a lot of times what I do is point at something,

  • and then we talk about that. We talk about what we see.

  • And with Google's advances in computer vision and, kind of, image understanding,

  • the assistant is actually going to begin to have that capability over the next few months,

  • so that I'll be able to open my camera, my viewfinder, and then begin to talk to the assistant about what I see.

  • And so we're really excited about that.
