AI Experiments: Making AI Accessible through Play (TF Dev Summit '19)

  • [MUSIC PLAYING]

  • IRENE ALVARADO: Hi, everyone.

  • My name is Irene, and I work at a place

  • called the Creative Lab, a team inside of Google.

  • And some of us are interested in creating

  • what we call experiments to showcase

  • and make more accessible some of the machine learning research

  • that's coming out of Google.

  • And a lot of our work goes to this site

  • called the Experiments with Google site.

  • Now, before I talk about some of the projects on the site,

  • let me just say that we're really

  • inspired by pioneering AI researcher Seymour Papert, who

  • wrote a lot about learning theories

  • in humans and essentially kind of how

  • to make learning not suck.

  • So this is one of his great quotes.

  • "Every maker of video games knows something

  • that the makers of curriculum don't seem to understand.

  • You'll never see a video game being advertised as being easy.

  • Kids who do not like school will tell you

  • it's not because it's too hard.

  • It's because it's boring."

  • So if there are some parents in the room,

  • you might be agreeing with this statement.

  • So I'll show you some projects that

  • were inspired by this thinking that learning should

  • be engaging, made in collaboration

  • with the TensorFlow.js team and many other research

  • teams at Google.

  • So this is the first one.

  • It's called Teachable Machine.

  • And essentially it's a KNN classifier that

  • runs entirely in the browser.

  • And it lets you train three classes of images

  • that trigger different kinds of outputs, like GIFs and sound.
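To make that concrete, here is a minimal sketch of this kind of in-browser setup, assuming the @tensorflow-models/mobilenet and @tensorflow-models/knn-classifier packages (illustrative only, not Teachable Machine's actual source): a pretrained MobileNet turns each webcam frame into an embedding, and a KNN classifier matches new frames against the examples the user has recorded.

```js
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

const classifier = knnClassifier.create();
const net = await mobilenet.load(); // pretrained feature extractor

// Call repeatedly while the user holds up examples for a class (0, 1, or 2).
function addExample(video, classId) {
  const embedding = net.infer(video, true); // true => return the embedding
  classifier.addExample(embedding, classId);
  embedding.dispose();
}

// Classify the current webcam frame against the trained classes.
async function predict(video) {
  const embedding = net.infer(video, true);
  const { label, confidences } = await classifier.predictClass(embedding);
  embedding.dispose();
  return { label, confidence: confidences[label] }; // e.g. trigger a GIF or a sound
}
```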

  • So I don't have time to demo it, but I'll

  • show you what happens after you train a model with the tool.

  • So can I get the video?

  • [VIDEO PLAYBACK]

  • [SPOOKY ORGAN MUSIC]

  • [BIRDS CHIRPING]

  • [SPOOKY ORGAN MUSIC]

  • [BIRDS CHIRPING]

  • [SPOOKY ORGAN MUSIC]

  • [BIRDS CHIRPING]

  • [SPOOKY ORGAN MUSIC]

  • [END PLAYBACK]

  • You can see it choosing between two classes.

  • Yeah, so, hopefully, you get how it works.

  • Alex Chen, the creator, trained a class

  • to recognize the bird origami and another class

  • to recognize the spooky person origami.

  • OK, back to the slides.

  • Thank you.

  • So we released the experiment online.

  • All the inference and training is happening in the browser.

  • And we also open sourced the boilerplate code that went along with the experiment.

  • And what happened next was that we were really kind of taken

  • aback by all the stories of teachers around the world,

  • like this one, who started using Teachable Machine to introduce

  • ML into the classroom.

  • Here's another example of kids learning

  • about smart cities and kind of training

  • the computer to recognize handmade stop signs.

  • This was really amazing.

  • And finally, we heard from another renowned and pioneering

  • researcher, Hal Abelson, who teaches at MIT, that he

  • had been using Teachable Machine to introduce ML

  • to policymakers.

  • And for a lot of them, it was the first time

  • that they had ever trained a model.

  • So needless to say, we're really happy that although simple

  • in nature, Teachable Machine ended up

  • being a really good tool for educators and people

  • that were new to machine learning.

  • So here's another example.

  • This one's called Move Mirror.

  • And the concept is really simple.

  • You strike a pose in front of a webcam,

  • and you get an image with a matching pose.

  • And again, this is all happening on the web.
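Under the hood, a matcher like this needs a notion of distance between two poses. Here is a minimal sketch of one plausible matching strategy, assuming PoseNet-style keypoints (illustrative only, not Move Mirror's actual code): center and normalize each pose's keypoints into a vector, then compare vectors with cosine similarity.

```js
// Flatten keypoints ({part, position: {x, y}, score}) into a unit vector,
// centered on the pose's centroid, so matching is roughly invariant to
// where the person stands and how large they appear in the frame.
function poseToVector(keypoints) {
  const cx = keypoints.reduce((s, k) => s + k.position.x, 0) / keypoints.length;
  const cy = keypoints.reduce((s, k) => s + k.position.y, 0) / keypoints.length;
  const v = keypoints.flatMap(k => [k.position.x - cx, k.position.y - cy]);
  const norm = Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return v.map(x => x / norm);
}

// Cosine similarity reduces to a dot product on unit-length vectors.
const similarity = (a, b) => a.reduce((s, x, i) => s + x * b[i], 0);

// Linear scan over a precomputed database of {imageUrl, vector} entries;
// a production system would use a spatial index instead.
function findBestMatch(queryKeypoints, database) {
  const q = poseToVector(queryKeypoints);
  let best = null;
  let bestScore = -Infinity;
  for (const entry of database) {
    const score = similarity(q, entry.vector);
    if (score > bestScore) { bestScore = score; best = entry; }
  }
  return best; // the image whose stored pose most resembles the user's
}
```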

  • So here's another example of people using it

  • in the form of an installation.

  • People do really funny moves.

  • And again, this is happening on a phone,

  • but on the phone's browser.

  • And so the story for this one was

  • that in order to make the experiment really accessible,

  • we had to take the tech to the web,

  • so that we wouldn't require users

  • to have a complicated tech setup or to use IR cameras or depth

  • sensors, which can be expensive.

  • So PoseNet was born.

  • To our knowledge, it's the first pose estimation

  • model for the web.

  • And it's open source.

  • It runs locally in your browser.

  • And it uses good ol' RGB webcams.
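Getting a pose out of it takes only a few lines. A minimal sketch, assuming the @tensorflow-models/posenet package and a <video> element already streaming from the webcam:

```js
import * as posenet from '@tensorflow-models/posenet';

async function run(video) {
  const net = await posenet.load();
  // Estimate one person's pose from the current frame; flipHorizontal
  // mirrors the result so it matches the selfie view the user sees.
  const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
  for (const { part, position, score } of pose.keypoints) {
    console.log(part, position.x, position.y, score); // 'nose', 'leftWrist', ...
  }
}
```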

  • So again, we were really taken aback

  • by all the creative projects that we saw popping up online.

  • Just to give you a sense, the one on the left

  • is a musical interface.

  • The one in the middle is a ping pong game

  • that you can use with your head.

  • I really want to play that one.

  • And the one on the right is a kind of performative motion

  • capture animation.

  • But we also started hearing from people

  • in the accessibility world that they were using PoseNet.

  • So we decided to partner with a bunch of groups

  • that work at the intersection of disability and technology,

  • like the NYU Ability Project, and musicians, artists, makers

  • in the accessibility world.

  • And out of that collaboration came a set of creative tools

  • that we're calling Creatability.

  • And a lot of them use PoseNet so that users

  • who have motor impairments can interface

  • with a computer with their whole bodies

  • instead of through a keyboard and a mouse.
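As a rough illustration of that idea (a sketch under the same PoseNet assumptions as above, not Creatability's actual code), a single tracked body part can stand in for the mouse pointer:

```js
// Drive an on-screen cursor element with the user's nose position.
async function trackNose(net, video, cursor) {
  const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
  const nose = pose.keypoints.find(k => k.part === 'nose');
  if (nose && nose.score > 0.5) { // only move on confident detections
    // Map video coordinates onto the page.
    const x = (nose.position.x / video.videoWidth) * window.innerWidth;
    const y = (nose.position.y / video.videoHeight) * window.innerHeight;
    cursor.style.transform = `translate(${x}px, ${y}px)`;
  }
  requestAnimationFrame(() => trackNose(net, video, cursor));
}
```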

  • So again, I don't have time to demo these.

  • But just to give you a sense, the one on the bottom left

  • is a visualization tool made by a musician named

  • Jay Zimmerman, who's deaf, and the one on the top right

  • is an accessible musical instrument

  • made by a group called Open Up Music.

  • And we just took their designs and kind of moved them

  • to the web.

  • So again, all of the components that

  • make up this project are accessible, and they've been open sourced.

  • So just to step back for a second,

  • if we were to think about what made these projects successful

  • or at least useful for other people,

  • we can see that they were all interactive and accessible

  • through the browser.

  • So it really lowered the barrier to entry for a lot of people.

  • They all had an open-source component,

  • so that people could kind of look under the hood,

  • see what's happening, modify them, play with them.

  • And then, finally, they're all free,

  • because the processing is happening locally

  • in the browser with TensorFlow.js.

  • And that gave us privacy, so that we

  • didn't have to send images of people's bodies

  • and faces to any servers.

  • So again, all the projects that I went through kind of quickly,

  • they're on the Experiments.withGoogle.com site.

  • And even though these were created in-house,

  • we actually feature work by more than 1,700 developers

  • from around the world.

  • So if any of this resonates with you,

  • this is really an open invitation for you

  • to submit your work.

  • And I hope to have shown that you never

  • know who you might inspire or who

  • might take your work and kind of innovate on top of it

  • and use it in really creative ways.

  • Thank you.

  • [APPLAUSE]

  • [MUSIC PLAYING]

