>> SUNDAR PICHAI: Good morning everyone. Thank you for joining us. As we were preparing for this event, we were all devastated by the news coming out of Las Vegas, as I'm sure all of you were. And that is coming off of a challenging past few weeks, with hurricanes Harvey, Irma, and Maria and other events around the world. It's been hard to see the suffering, but I have been moved and inspired by everyday heroism: people opening up their homes, and first responders literally risking their lives to save other people. Our hearts and prayers are with the victims and families impacted by these terrible events. We are working closely with many relief agencies in the affected areas, and we are committed to doing our part.

It is a true privilege to be at the SFJAZZ Center. It's a great American institution for jazz performance and education, and it is really good to see familiar faces in the audience. As always, I want to give a shout out to the people joining us on the livestream from around the world.

Since last year and since Google I/O, we've been working hard, continuing our shift from a mobile-first to an AI-first world. We are rethinking all of our core products and working hard to solve user problems by applying machine learning and AI.

Let me give you an example. Recently, I visited Lagos in Nigeria. It is a city of twenty-one million people, an incredibly dynamic, vibrant, and ever-growing city. Many people are coming online for the first time.

So it's very exciting, unless you happen to be on the Google Maps team and you have to map this city. It is changing so fast. Normally we map a place by using Street View and doing a lot of the work automatically, but it's difficult to do that in a place like Lagos because the city is changing. You can't always see the signage clearly, and there are variable address conventions; things aren't sequential. So for example, take that house there. If you squint hard, you can see the street number: it is number three, to the left of the gate. That was relatively easy.

Onto a harder problem now. That house, that is what we see from Street View. I think as humans, it's probably pretty hard; maybe one or two of you can spot it. But our computer vision systems, thanks to machine learning, can pick it out, identify the street number, and start mapping the house.
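As a rough sketch of the idea (not Google's actual Maps pipeline), a small convolutional digit classifier over house-number crops, of the kind trained on SVHN-style Street View data, might look like this; the architecture and sizes are assumptions for the sketch:

```python
# Illustrative only: a minimal digit classifier of the kind trained on
# SVHN-style Street View house-number crops. Not Google's Maps pipeline;
# the architecture and hyperparameters are assumptions for this sketch.
import torch
import torch.nn as nn

class DigitClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # 32x32 RGB crop in
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32 x 16 x 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 64 x 8 x 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = DigitClassifier()
crop = torch.randn(1, 3, 32, 32)   # stand-in for one crop around a door number
print(model(crop).argmax(dim=1))   # predicted digit class for the crop
```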

So we approached Lagos completely differently. We deployed machine learning from the ground up, and in just five months the team was able to map five thousand kilometers of new roads, fifty thousand new addresses, and a hundred thousand businesses. It's something which makes a real difference for the millions of users there, as Google Maps is popular. And we think this approach is broadly applicable.

Let's come closer to home: parking in San Francisco. I don't even try it anymore, but for those of you who do, we again use machine learning. We look at location data and try to understand patterns: are cars circling around? The color shows the density of parking, and we can analyze it throughout the day, predict parking difficulty, and give you options in Google Maps. It's a simple example, but it's the kind of everyday use case where we are using machine learning to make a difference.
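As a hedged sketch of the "are cars circling?" signal: given anonymized GPS traces near a destination, one simple proxy for parking difficulty is how long cars dwell in the area before parking. The features, thresholds, and names below are illustrative assumptions, not Google Maps' actual model:

```python
# Sketch only: estimate parking difficulty from how long cars circle near
# a destination. The radius and time thresholds are made-up assumptions.
from dataclasses import dataclass
from math import hypot

@dataclass
class Point:
    x: float      # meters east of the destination
    y: float      # meters north of the destination
    t: float      # seconds since the trace started

def circling_time(trace: list[Point], radius_m: float = 200.0) -> float:
    """Seconds a car spends within `radius_m` of the destination."""
    inside = [p for p in trace if hypot(p.x, p.y) <= radius_m]
    return inside[-1].t - inside[0].t if len(inside) >= 2 else 0.0

def parking_difficulty(traces: list[list[Point]]) -> str:
    """Bucket the average circling time into an easy/medium/hard label."""
    avg = sum(circling_time(t) for t in traces) / max(len(traces), 1)
    if avg < 60:
        return "easy"
    if avg < 300:
        return "medium"
    return "hard"

# Two toy traces: one parks almost immediately, one loops for ten minutes.
quick = [Point(50, 10, 0), Point(5, 5, 30)]
loop = [Point(150, 0, 0), Point(0, 150, 300), Point(-150, 0, 600)]
print(parking_difficulty([quick, loop]))   # -> "hard"
```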

The best example I can think of, one we have talked about before, is Google Translate. I literally remember, many years ago, adding translation to Chrome and making it automatic, so that if you land on a page in a language different from your own, we translate it for you. Fast forward to today: with the power of machine learning and our neural machine translation, we serve over two billion translations, in many, many languages, every single day.

To me, it shows the power of staying with a problem, constantly using computer science to make it better, and seeing users respond to it at scale. This is why we are excited about the shift from a mobile-first to an AI-first world. It is not just about applying machine learning in our products; it's radically rethinking how computing should work. At a higher level, in an AI-first world, I believe computers should adapt to how people live their lives, rather than people having to adapt to computers.

And so we think about four core attributes as part of this experience. First, people should be able to interact with computing in a natural and seamless way. Mobile took us a step in this direction with multi-touch, but increasingly it needs to be conversational and sensory. We need to be able to use our voice, gestures, and vision to make the experience much more seamless.

Second, it is going to be ambient. Computing is going to evolve beyond the phone and be there in many screens around you, when you need it, working for you.

Third, we think it needs to be thoughtfully contextual. Mobile gave us limited context. You know, with identity and your location, we were able to improve the experience significantly. In an AI-first world, we can have a lot more context and apply it thoughtfully. For example, if you're into fitness and you land in a new city, we can suggest running routes, maybe gyms nearby, and healthy eating options. In my case, being a vegetarian and having a weakness for desserts, maybe suggest the right restaurants for me.

Finally, and probably the most important of all, computing needs to learn and adapt constantly over time. It just doesn't work that way today. In mobile, you know, developers write software and constantly ship updates, but let me give a small example. I use Google Calendar all the time. On Sundays, I try to get a weekly view of how my week looks. Once the work week starts, say on a Monday or a Tuesday, I'm trying to get a view into what the next few hours look like, and I have to constantly toggle back and forth. Google Calendar should automatically understand my context and show me the right view. It's a very simple example, but software needs to fundamentally change how it works. It needs to learn and adapt.
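A minimal sketch of that calendar idea, assuming a client that picks its default view from context; the rules below are hand-written stand-ins for what an adaptive system would learn from behavior, not how Google Calendar actually works:

```python
# Sketch only: pick a default calendar view from the current context
# instead of making the user toggle. The rules are illustrative assumptions.
from datetime import datetime

def default_view(now: datetime) -> str:
    """Return the view a context-aware calendar client might open with."""
    if now.weekday() == 6:                 # Sunday: planning the week ahead
        return "week"
    if 8 <= now.hour < 18:                 # working hours: the next few hours
        return "day"
    return "agenda"                        # otherwise: a simple agenda list

print(default_view(datetime(2017, 10, 1, 10)))  # a Sunday -> "week"
print(default_view(datetime(2017, 10, 2, 10)))  # a Monday -> "day"
```

The point is that a truly adaptive client would learn rules like these from the user's own toggling behavior rather than hard-coding them.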

That applies to important things like security and privacy as well. Today, a lot of us deal with security and privacy by putting the onus back on users: we give them many settings and toggles to manage those. But in an AI-first world, we can learn and adapt and do it thoughtfully for our users. For example, if it is a notification for your doctor's appointment, we need to treat it more sensitively, and differently than just telling you when you need to start driving to work.

So we are really excited by this shift, and that is why we are here today. We have been working on software and hardware together, because that is the best way to drive the shifts in computing forward. But we think we are at a unique moment in time where we can bring a combination of AI, software, and hardware to offer a different perspective on solving problems for users. We are very confident about our approach here, because we are at the forefront of driving these shifts with AI.

Three months ago at Google I/O, our Google AI teams announced a new approach called AutoML. AutoML is simply our machines automatically generating machine learning models. Today, these models are handcrafted by machine learning scientists, and literally only a few thousand scientists around the world can do this: designing the number of layers, weighting and connecting the neurons appropriately. It's very hard to do. We want to democratize this. We want to bring this to more people. We want to enable hundreds of thousands of developers to be able to do it.

So we have been working on this technology called AutoML, and just in the past month, for a standard task like image classification, understanding images, our AutoML models are now not only more accurate than the best human-generated models, but also more resource efficient. It is pretty amazing to see.
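In spirit, AutoML is a search over architectures: propose candidates, train them briefly, keep the best. Here is a toy random search over depth and width on a synthetic task, purely as a sketch of the idea; real systems like Google's use reinforcement learning or evolution over far richer search spaces:

```python
# Toy architecture search in the spirit of AutoML: enumerate candidate
# depths/widths, train each briefly, keep the most accurate. This is a
# sketch of the idea only, not Google's AutoML.
import torch
import torch.nn as nn

def build(depth: int, width: int) -> nn.Module:
    layers: list[nn.Module] = []
    in_dim = 10
    for _ in range(depth):                 # the "designed" part: layers/widths
        layers += [nn.Linear(in_dim, width), nn.ReLU()]
        in_dim = width
    layers.append(nn.Linear(in_dim, 2))    # two-class output head
    return nn.Sequential(*layers)

def evaluate(model: nn.Module, x, y, steps: int = 100) -> float:
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):                 # brief training as a fitness proxy
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return (model(x).argmax(1) == y).float().mean().item()

x = torch.randn(256, 10)
y = (x.sum(1) > 0).long()                  # synthetic, linearly separable labels
best = max(
    ((d, w, evaluate(build(d, w), x, y)) for d in (1, 2, 3) for w in (8, 32)),
    key=lambda t: t[2],
)
print(f"best architecture: depth={best[0]}, width={best[1]}, acc={best[2]:.2f}")
```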

We are now taking it a step further. Let me talk about another use case: object detection. When we say object detection, it's just a fancy name for computers trying to delineate and understand images: being able to draw bounding boxes and distinguish between all of the vehicles there (scooters, mopeds, motorcycles) and even pick out the bike in front.
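As a hedged sketch of what object detection looks like in practice, here is an off-the-shelf detector (torchvision's Faster R-CNN, a standard public model, not Google's AutoML-generated one) producing exactly these bounding boxes, class labels, and confidence scores:

```python
# Sketch: off-the-shelf object detection with torchvision's Faster R-CNN.
# Outputs are bounding boxes, class labels, and confidence scores.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()                               # inference mode

image = torch.rand(3, 480, 640)            # stand-in for a street photo in [0, 1]
with torch.no_grad():
    (detections,) = model([image])         # one dict per input image

# Each detection is a box [x1, y1, x2, y2] with a class label and a score.
for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    if score > 0.8:                        # keep confident detections only
        print(f"class {label.item()} at {box.tolist()} ({score:.2f})")
```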