

  • [MUSIC PLAYING]

  • DIANE GREENE: Hello.

  • FEI-FEI LI: Hi.

  • DIANE GREENE: Who's interested in AI?

  • [CHEERING]

  • Me too.

  • Me three.

  • OK.

  • So I'm the moderator today.

  • I'm Diane Greene, and I'm running Google Cloud

  • and on the Alphabet board.

  • And I'm going to briefly introduce

  • our really amazing guests we have here.

  • I also live on the Stanford campus,

  • so I've known one of our guests for a long time,

  • because she's a neighbor.

  • So let me just introduce them.

  • First is Dr. Fei-Fei Li, and she is the Chief Scientist

  • for Google Cloud.

  • She also runs the AI Lab at Stanford University, the Vision Lab,

  • and then she also founded SAILORS,

  • which is now AI4ALL, which you'll

  • hear about a little bit later.

  • And is there anything you want to add to that, Fei-Fei?

  • FEI-FEI LI: I'm your neighbor.

  • [LAUGHTER]

  • That's the best.

  • DIANE GREENE: And so now we have Greg Corrado.

  • And actually there's one amazing coincidence.

  • Both Fei-Fei and Greg were undergraduate physics majors

  • at Princeton together at the same time.

  • And didn't really know each other that well

  • in the 18-person class.

  • FEI-FEI LI: We were studying too hard.

  • GREG CORRADO: No, it was kind of surprising to go

  • to undergrad together, neither of us in computer science,

  • and then rejoin later only once we were here at Google.

  • DIANE GREENE: All paths lead to AI and neural networks

  • and so forth.

  • But anyhow, so Greg is the Principal Scientist

  • in the Google Brain Group.

  • He co-founded it.

  • And more recently, he's been doing

  • a lot of amazing work in health with neural networks

  • and machine learning.

  • He has a PhD in neuroscience from Stanford.

  • And so he came into AI in a very interesting way.

  • And maybe he'll talk about the similarities between the brain

  • and what's going on in AI.

  • Would you like to add anything else?

  • GREG CORRADO: No, sounds good.

  • DIANE GREENE: OK.

  • So since both of them have been involved in the AI field

  • for a while and it's recently become a really big deal,

  • I thought it'd be nice to get a little perspective on the history,

  • yours in vision and yours in neuroscience, about AI

  • and how it was so natural for it to evolve to where it is now

  • and what you're doing.

  • And let's start with Fei-Fei.

  • FEI-FEI LI: I guess I'll start.

  • So first of all, AI is a very nascent field

  • in the history of science and human civilization.

  • This is a field of only 60 years of age.

  • And it started with a very, very simple

  • but fundamental question: can machines think?

  • And we all know thinkers and thought leaders

  • like Alan Turing challenged humanity with that question.

  • Can machines think?

  • So about 60 years ago, a group of very pioneering scientists,

  • computer scientists like Marvin Minsky, John McCarthy,

  • started really this field.

  • In fact, John McCarthy, who founded Stanford's AI lab,

  • coined the very word artificial intelligence.

  • So where do we begin to build machines that think?

  • Humanity is best at looking inward at ourselves

  • and trying to draw inspiration from who we are.

  • So we started thinking about building machines that

  • resemble human thinking.

  • And when you think about human intelligence,

  • you start thinking about different aspects like ability

  • to reason and ability to see and ability

  • to hear, to speak, to move around, make decisions,

  • manipulate.

  • So AI, starting from that very core, foundational dream

  • 60 years ago, began to proliferate

  • into multiple subfields, including robotics,

  • computer vision, natural language processing,

  • and speech recognition.

  • And then a very important development

  • happened around the '80s and '90s, which is

  • that a sister field called machine learning

  • started to blossom.

  • And that's a field combining statistical learning

  • and statistics with computer science.

  • And by combining the quest for machine intelligence, which

  • is what AI was born out of, with the tools and capabilities

  • of machine learning, AI as a field

  • went through an extremely fruitful, productive,

  • blossoming period of time.

  • And fast-forward to the second decade of the 21st century.

  • The latest machine learning boom that we are observing

  • is called deep learning, which has

  • deep roots in neuroscience, which I'll let you talk about.

  • And so we're combining deep learning,

  • a powerful statistical machine learning tool,

  • with the quest of making machines more intelligent.

  • Whether it's to see, to hear, or to speak,

  • we're seeing this blossom.

  • And last, I just want to say that three critical computing

  • factors converged over the last decade,

  • the 2000s and the beginning of the 2010s.

  • One is the advance of hardware that

  • enabled more powerful and capable computing.

  • Second is the emergence of big data,

  • powerful data that can drive the statistical learning

  • algorithms.

  • And I was lucky to be involved myself in some of the effort.

  • And then the third one is the advances of machine learning

  • and deep learning algorithms.

  • So this convergence of three major factors

  • brought us the AI boom that we're seeing today.

  • And Google has been investing in all three areas,

  • honestly, ahead of the curve.

  • Most of the effort started back in the early 2000s.

  • And as a company, we're doing a lot of AI work

  • from research to products.

  • GREG CORRADO: And it's been really interesting to watch

  • the divergence in exploration in various academic fields

  • and then the re-convergence as we see ideas that are aligned.

  • As Fei-Fei says, it wasn't so long ago

  • that fields like cognitive science, neuroscience,

  • artificial intelligence, even things

  • that we don't talk about much anymore, like cybernetics,

  • were really all aligned in a single discipline.

  • And then they've moved apart from each other

  • and explored these ideas independently

  • for a couple of decades.

  • And then with the renaissance in artificial neural networks

  • and deep learning, we're starting

  • to see some re-convergence.

  • So some of these ideas that were popular

  • only in a small community for a couple of decades

  • are now coming back into the mainstream

  • of what artificial intelligence is, what statistical pattern

  • recognition is, and it's really been delightful to see.

  • But it's not just one idea.

  • It's actually multiple ideas that you

  • see that were maintained for a long time in fields

  • like cognitive science that are coming back into the fold.

  • So another example beyond deep learning

  • is actually reinforcement learning.

  • So for the longest time, if you looked at a university

  • catalog of courses and you were looking

  • for any mention of reinforcement learning whatsoever,

  • you were going to find it in a psychology

  • department or a cognitive science department.

  • But today, as we all know, we look

  • at reinforcement learning as a new opportunity

  • for the future of AI, something that might be

  • important for getting machines to really learn

  • in completely dynamic environments,

  • environments where they have to explore entirely

  • new stimuli.
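The reinforcement learning setting Greg alludes to, an agent learning from reward while exploring a dynamic environment, can be sketched in a few lines. This is a minimal, illustrative example: the toy chain environment and all constants are assumptions of this sketch, not anything from the talk.

```python
import random

# Minimal tabular Q-learning sketch: an agent learns, by trial and
# error, to walk right along a 5-state chain to reach a reward at
# the end. Epsilon-greedy action choice is the "exploration" part.

N_STATES = 5          # states 0..4; reaching state 4 gives reward 1.0
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic chain dynamics: reward only at the right end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(200):                       # training episodes
    s = 0
    done = False
    while not done:
        # epsilon-greedy: mostly exploit, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update toward reward + discounted future value
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy in every state is "move right"
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

The same update rule, with the table replaced by a neural network, underlies much of the deep reinforcement learning work discussed around this period.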

  • So I've been really excited to see how this convergence has

  • happened back in the direction from those ideas

  • into mainstream computer science.

  • And I think that there's some hope for exchange

  • back in the other direction.

  • So neuroscientists and cognitive scientists

  • today are starting to ask whether we

  • can take the kind of computer vision models

  • that Fei-Fei helped pioneer and use those as hypotheses for how

  • it is that neural systems actually compute, how

  • our own biological brains see.

  • And I think that that's really exciting

  • to see this kind of exchange between disciplines

  • that have been separated for a little while.

  • DIANE GREENE: You know, one little piece of history I think

  • that's also interesting is what you did, Fei-Fei,

  • with ImageNet, which is a nice way of explaining building

  • these neural networks where you labeled all these images

  • and then people could refine their algorithms by--

  • go ahead and explain that just real quickly.

  • FEI-FEI LI: OK, sure.

  • So about 10 years ago, the whole community of computer vision,

  • which is a subfield of AI, was working on a holy grail

  • problem of object recognition: you open your eyes,

  • and you can see the world full of objects

  • like flowers, chairs, people.

  • And that's a building block of visual intelligence

  • and intelligence in general.

  • And to crack that problem, we were building, as a field,

  • different machine learning models.

  • We're making small progress, but we're hitting a lot of walls.

  • And when my student and I started working on this problem

  • and started thinking deeply about what

  • was missing in the way we were approaching it,

  • we recognized this important interplay

  • between data and statistical machine learning models.

  • They really reinforce each other in very deep mathematical ways

  • whose details we're not going to talk about here.

  • That realization was also inspired by human vision.

  • If you look at how children learn,

  • it's a lot of learning through big data

  • experiences and exploration.

  • So combining that, we decided to put together

  • a pretty epic effort: we wanted

  • to label all the images we could get on the internet.

  • And of course, we Google Searched a lot

  • and we downloaded billions of images

  • and used crowdsourcing technology

  • to label all the images, organizing them

  • into a data set of 15 million images

  • in 22,000 categories of objects, and put that together,

  • and that's the ImageNet project.

  • And we democratized it, releasing it

  • to the research world as open source.

  • And then starting in 2010, we held

  • an international challenge for the whole AI community

  • called the ImageNet Challenge.

  • And one of the teams, from Toronto,

  • which is now at Google, won the ImageNet Challenge

  • with a deep convolutional neural network model.

  • And that was in 2012.

  • And a lot of people think the combination of ImageNet

  • and the deep learning model in 2012

  • was the onset of what Greg--

  • DIANE GREENE: A way to compare how they were doing.

  • And it was really good.

  • So yeah.
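To make the 2012 result a bit more concrete, here is a minimal sketch of the core operation inside a convolutional neural network: sliding a small filter over an image and recording how strongly each patch matches it. The tiny image and hand-written edge filter are illustrative assumptions; in a real network the filter values are learned from data like ImageNet.

```python
# Minimal "valid" 2D convolution (really cross-correlation, as in
# most deep learning libraries): no padding, stride 1.
def conv2d(image, kernel):
    H, W = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            # dot product of the filter with one image patch
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A tiny "image" with a vertical edge: dark left half, bright right half
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

# A hand-written vertical-edge detector; a CNN learns these values
kernel = [
    [-1, 1],
    [-1, 1],
]

result = conv2d(image, kernel)
print(result)  # -> [[0, 2, 0], [0, 2, 0]]: strong response at the edge
```

A deep network stacks many such filter layers, with nonlinearities in between, so later layers respond to object parts rather than raw edges.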

  • And so Greg, you've been doing a lot of brain-inspired research,

  • very interesting research.

  • And I know you've been doing a lot of very impactful research

  • in the health area.

  • Could you tell us a little bit about that?

  • GREG CORRADO: Sure.

  • So I mean, I think the ImageNet example actually

  • sort of sets a playbook for how we

  • can try to approach a problem.

  • The kind of machine learning and AI

  • that is most practical and most useful today

  • is the kind where machines learn through imitation.

  • It's an imitation game where if you have examples

  • of a task being performed correctly,

  • the machine can learn to imitate this.

  • And this is called supervised learning.
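The supervised learning Greg describes, a machine seeing examples of a task done correctly and adjusting itself whenever its imitation is wrong, can be sketched in a few lines. The perceptron and the OR-function examples here are illustrative assumptions of this sketch, not the models discussed in the talk.

```python
# (input features, correct label) pairs: the supervision signal,
# i.e. examples of the task (logical OR) "being performed correctly"
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, adjusted during learning
b = 0.0          # bias

def predict(x):
    """Threshold unit: outputs 1 if the weighted sum clears zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron rule: nudge the weights toward each mislabeled example
for _ in range(10):                 # a few passes over the data
    for x, label in examples:
        error = label - predict(x)  # 0 when the imitation is correct
        w[0] += error * x[0]
        w[1] += error * x[1]
        b += error

print([predict(x) for x, _ in examples])  # -> [0, 1, 1, 1]
```

Deep learning uses the same recipe, labeled examples driving weight updates, just with far more weights and gradient-based updates instead of this simple rule.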

  • And so what happened in the image recognition

  • case is that, because Fei-Fei built an object recognition

  • data set, we could all focus on that problem

  • in a really concrete, tractable way

  • in order to compare different methods.

  • And it turned out that methods like deep learning

  • and artificial neural networks were