Machine learning & art - Google I/O 2016

  • DAMIEN HENRY: Hello everyone.

  • My name is Damien.

  • And I feel very lucky today, because two great artists,

  • Cyril Diagne and Mario Klingemann

  • will join me on stage in a few minutes

  • so you can see what they do when they use machine learning.

  • So if you want to go to the bathroom or text,

  • please do it while I'm doing the intro.

  • But when they are there, this is a really [INAUDIBLE].

  • So I'm working for the Cultural Institute in Paris.

  • And the mission of the Cultural Institute

  • is to help museums and institutions

  • to digitize and to share their culture, their assets

  • online.

  • We are working with more than 1,000 museums.

  • And it means that if you want to discover

  • a new museum every week, it will take you 20 years to do so.

  • What we do in the Cultural Institute is this app.

  • It's named the Google Arts and Culture App.

  • It's really a beautiful app.

  • And if you have not downloaded it yet, you should.

  • There are a ton of really incredible features.

  • And one of my favorites is named Gigapixel.

  • Gigapixel is done using the art camera.

  • And the art camera is able to catch

  • every detail in a painting-- so every crack, every bit of blue,

  • you can see them in the app.

  • You can zoom in very deeply in the app, to see,

  • for instance, "Starry Night."

  • But I'm not working on this app.

  • I'm working in a space named The Lab.

  • It's in the middle of Paris.

  • It's a beautiful space.

  • I feel lucky every day when I go there.

  • And it's a space dedicated for creativity.

  • And just as a fun fact, it's where the Cardboard was born.

  • David Coz and I used a laser cutter

  • there to create the very first Cardboard-- the one that was

  • unveiled at I/O two years ago.

  • That's what I have to show today.

  • And last year, we also worked at the lab,

  • for instance, on this picture.

  • You can see some early prototypes of the Cardboard

  • that was unveiled at Google I/O last year.

  • But here for the Cardboard today,

  • even if I still have a strange relationship with the VR team,

  • I also have a small team in Paris that is named CILEx.

  • It stands for Cultural Institute Experiment Team.

  • And what we do, we do experiments

  • with creative [INAUDIBLE] and artists.

  • We are very passionate about three different axes.

  • We try to engage more people to enjoy culture.

  • So we try to find fun ways for people to watch more paintings.

  • We try to find also a new way to organize information.

  • So our users can have a journey in our database,

  • and [INAUDIBLE] can learn something out of it.

  • And obviously, because we have seven million assets

  • in the database, we try to analyze

  • them to discover new insights.

  • So this talk is about machine learning, obviously.

  • And let's just take 30 seconds to recall the definition.

  • Just to make things simple, let's imagine

  • that you are writing an algorithm

  • to check if a picture is a cat picture.

  • You can do it yourself, by trying

  • to analyze all the pixels one by one, but obviously,

  • it's difficult. Or what you can do is, using machine learning,

  • having an algorithm that will learn by itself

  • what are the good features to check

  • if a picture is a cat picture.
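
To make that contrast concrete, here is a minimal sketch of the learned-features idea in Python with Keras. The folder layout, image size, and tiny network are illustrative assumptions, not anything from the talk: instead of hand-writing pixel rules, you declare a small network and let training discover the features.

```python
import tensorflow as tf

# Assumption: images/ holds two subfolders, "cat" and "not_cat".
data = tf.keras.utils.image_dataset_from_directory("images", image_size=(64, 64))

# A tiny convolutional network: the conv layers learn the "good features"
# for cat-ness by themselves during training.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(cat)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(data, epochs=5)
```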

  • So the key, I think, is this is happening now.

  • Machine learning is not the future.

  • Machine learning is something that everybody in this audience

  • can do, can try.

  • If you know how to code, you can try machine learning.

  • For example, did you know that you

  • can create some "Mario Brothers" levels just using a [INAUDIBLE]

  • networks?

  • Or you can make a color movie from a black and white.

  • Or you can make a 3D movie from a 2D movie.

  • So things that seem difficult or impossible

  • are something that you can do now using machine learning

  • and neural networks.

  • And as an example, this one is "Inside the Brother."

  • It's by David R from Japan, a designer and artist.

  • And he just decided to make these simple games

  • with two volleyball players.

  • And in fact, they play extremely well just

  • using a very, very simple neural network

  • that he displays on the screen.

  • So because machine learning is so useful and widespread now,

  • there is no doubt that it will have a huge impact

  • on art and on artists.

  • So that's why we decided something like one year

  • ago to create a machine learning residency in the lab.

  • So we asked artists to join us and to create great experiences.

  • So now I will hand over to Mario Klingemann, our latest

  • artist in residence.

  • MARIO KLINGEMANN: Thank you, Damien.

  • [APPLAUSE]

  • Hi, everybody.

  • My name's Mario Klingemann.

  • And I'm a code artist.

  • And that might sound like I'm kind of really good at indentation

  • or write beautiful code.

  • But that's not really what it is.

  • It just says that I'm using code and algorithms to produce

  • things that look interesting.

  • And some of them might even be called art.

  • I don't know-- I'm not the one to decide that.

  • Like any other artist, I have this problem.

  • You look around you, and it looks like everything

  • has already been done.

  • I mean, in times of Google, you come up with a great idea,

  • you Google it, and think, oh, well, OK, done already.

  • So it seems there are no empty spaces anymore--

  • no white spaces where you can make your mark,

  • where you can kind of be original.

  • On the other hand, if you look at it,

  • there are no real-- humans are incapable of having

  • original ideas.

  • Ideas are always just recombination of something

  • some other people have done before.

  • You take concept A and concept B,

  • and the idea is finding a new connection between them.

  • And so this is where, for me, the computer can help

  • me find these connections.

  • So in theory, all I have to do is

  • go through every possible permutation.

  • And the computer will offer me new combinations that,

  • hopefully, have not been done.

  • And all I have to do is sit back,

  • and let whatever it has created pass by,

  • and decide if I like it or not.

  • So in a way, I'm becoming more of a curator than a creator.

  • I'll show you a short example.

  • So this is a tool I call Ernst.

  • It's a kind of an homage to Max Ernst, an artist

  • famous for his surreal collages back in the early 20th century.

  • And what he did, he created these collages from things

  • he found in papers, and catalogs, et cetera.

  • So I decided, well, maybe I'll build my own collage tool.

  • And in this case, I'm using assets

  • found in the vast collection of public domain images

  • by the Internet Archive.

  • And I wrote myself a tool that helps me to automatically

  • cut them out.

  • And then I say, OK, if I give you these five elements, what

  • can you do with them?

  • And then it produces stuff like these.

  • And unlike Max Ernst, I have the possibility

  • to also scale material.

  • And then you get these.

  • If you have like pipes, you get fractal structures,

  • things that look like plants.

  • And the process is very much like this.

  • I have this tool with all the library elements,

  • and then it just starts combining them in random ways.

  • And sometimes, I see something that I like.

  • Very often I see things that are just horrible,

  • or just total chaos.

  • But yet, sometimes there's something that looks like this.

  • I call that, for example, "Run, Hipster, Run."

  • And I must say, coming up with funny titles

  • or very interesting titles is, of course, a nice perk

  • of this way of working.

  • But of course, there's still this problem

  • that I still have to look through a lot of images which

  • are just noise, just chaos.

  • So wouldn't it be nice if the machine could

  • learn what I like, what my tastes are, or, even better,

  • what other people like.

  • And then I can sell them better.

  • So I realized I have to first understand

  • what do humans find interesting in images?

  • What is it that makes one image more artful than another one?

  • And this directed my view to this growing amount

  • of digital archives.

  • And those are now-- there are lots of museums and libraries

  • out there that start digitizing all their old books

  • and paintings, just like the Cultural Institute

  • helps museums doing that.

  • And so I first stumbled upon this about two years ago,

  • when the British Library uploaded one million images

  • that were automatically extracted from books spanning

  • from 1500 to 1899.

  • There was only a little kind of a problem with it,

  • because all these illustrations and photos

  • were cut out automatically.

  • So they had OCR scans, and then they

  • knew there would be an image in a certain area.

  • But unless you looked at the image yourself,

  • you wouldn't know what's actually on it.

  • So if you were looking for, let's

  • say, a portrait of Shakespeare, you

  • would have to manually go through every image

  • until you maybe struck upon it.

  • So I thought that's a bit tricky.

  • Maybe I can help them with classifying their material,

  • and training the computer.

  • OK, this is a portrait.

  • This is a map.

  • So I started figuring out ways how I could do that.

  • And eventually, I was able to tag

  • about 400,000 images for them.

  • I mean, this was a kind of group effort.

  • Everybody could join in.

  • But working with this material, I realized,

  • is such a joyful experience, because in the beginning,

  • I was just interested in the machine learning.

  • But actually, you suddenly realize

  • there's this goldmine, this huge mine of material.

  • And sometimes, really, you go through lots

  • of things that seem to be boring or you're not interested in.

  • And then you strike upon a beautiful illustration

  • or something.

  • And I realized that is actually a huge part

  • of the fun of the process.

  • So for example, take this rock or stone axe.

  • Well, once you go through this material,

  • you start recognizing patterns.

  • And so sometimes this rock comes by.

  • And you say, OK, well, I don't care.

  • But then the second one, and you say, oh, maybe I

  • should start a rock collection or rock category.

  • And then what happens is, suddenly

  • you are happy every time another one of those comes along.

  • And then, well, what I do is I start

  • arranging them, and putting them kind of in a new context.

  • And then you actually start

  • to appreciate the craftsmanship that went into this.

  • And also you can-- once you put lots of very similar things

  • together, you can much better distinguish

  • between the slight differences in there.

  • Yes, so I started doing this.

  • For example, here on the left side,

  • you see a piece called "36 Anonymous Profiles."

  • There are all these hundreds, thousands of geological profiles, which,

  • again, if you're not interested or not

  • a geologist, you probably wouldn't care about.

  • But like this, it becomes a really interesting field

  • or just a way to bring these things that maybe sometimes

  • have been hidden for 100 years in a book,

  • and nobody has looked at them.

  • And now you can bring them back to life.

  • Or on the right side, a piece I call "16 Very Sad Girls."

  • Again, I don't know why they have

  • so many sad girls in there.

  • But of course, again, that makes you question what

  • was happening at that time?

  • So it actually motivates you to search back

  • and, well, what's the story behind this?

  • But all this I started kind of on my own.

  • And this was not-- I wouldn't say it was

  • deep learning, what I was doing.

  • It was more classical machine learning.

  • Because I was always a little bit afraid

  • of going down this path.

  • I heard, oh, you need expensive machines.

  • And it's kind of hard to set up the machine.

  • So I needed something to get me motivated to go

  • through the painful process of installing everything to get

  • a machine learning system.

  • Luckily, about a year ago, this came along.

  • I don't know if you have seen this picture,

  • but when I saw it, I thought, oh, my god.

  • This looks kind of weird, and I have never seen this before.

  • Fortunately, about a week later, after this leaked,

  • it was clear some engineers at Google

  • had come up with this new technique called Deep Dream.

  • And it's based on machine learning.

  • So of course, I wanted to know how can I do this myself?

  • Fortunately, what they did-- they actually

  • shared an IPython Notebook on GitHub,

  • with all the source code, even the trained model.

  • And I was able to dig myself into this.

  • Back in the days, there was no TensorFlow yet,

  • so this was still in Caffe.

  • But this allowed me to finally kind of go by baby steps

  • into learning this technique.

  • And obviously, I started-- like probably a lot of us--

  • I started having a lot of fun with this.

  • You put in an image, and you wonder, oh, my god,

  • what will I get out of this?

  • Because that's what it is.

  • You put something in.

  • You had no idea what you would get.

  • So this was fun for, let's say, a few weeks.

  • But then I started thinking, OK, I

  • think I could make some improvements to this code.

  • And I would call this Lucid Dreaming--

  • so maybe get a little bit more control or change the outcome.

  • So I figured out there are three points

  • where I might be able to turn it in a different direction.

  • So the first one-- I'm not sure if you noticed it--

  • these pictures are all kind of a bit psychedelic, colorful.

  • So I'm from Germany.

  • We are kind of very earnest people,

  • so I like it rather a bit toned down.

  • So very simple thing-- desaturate it a bit.

  • And desaturation is super easy.

  • So all I needed was to add a single line of code, which is

  • the one you see at the bottom.

  • What you do is you take the average between the RGB values.

  • And then you can have a linear interpolation

  • between the gray-scale version and the color version.

  • And depending on how crazy you want it,

  • you pick a factor in the middle.

  • And so as an example here, on the left side

  • psychedelic Mario, on the right, the puppy face gray-scale one.
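
A rough sketch of that desaturation trick in Python (the array shape and names are illustrative): average the RGB channels to get a gray version, then linearly interpolate between the gray-scale version and the color version by a chosen factor.

```python
import numpy as np

def desaturate(img, factor=0.5):
    # img: float array of shape (height, width, 3), values in [0, 1]
    # factor: 0.0 keeps full color, 1.0 is fully gray-scale
    gray = img.mean(axis=2, keepdims=True)        # average of the RGB values
    return (1.0 - factor) * img + factor * gray   # linear interpolation
```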

  • So the second thing-- that issue that you

  • don't know what you will get out,

  • or when you get something out, it

  • will be a selection of slug, puppy, or eye.

  • So can I get a little bit more control?

  • Well, in order to do that, we have

  • to have a look at how this thing works.

  • So this is the classic convolutional network, which

  • is GoogLeNet in this case.

  • That was also the architecture that

  • was used for the early Deep Dream experiments.

  • So what you do, you put something

  • in on the left-- an image-- and it

  • passes through all these convolutional layers,

  • and softmax.

  • Well, we don't go into depth there.

  • In the end, you get out what the machine thinks-- a probability

  • for certain categories.

  • What Deep Dream does is, you put in an image at the beginning.

  • But then, instead of going all through all the network,

  • you stop at a certain layer.

  • And at that layer, there are certain neurons activated.

  • So depending on what the network thinks it sees there,

  • some neurons will get activated and others will not.

  • And what it does is then it emphasizes

  • those activations even more, and sends it back up the chain.

  • And then this will, in a way, kind of

  • emphasize all those elements

  • where it thought it has discovered, let's say, a cat.

  • Then it will improve the cattiness of that image.
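
A minimal sketch of that loop, assuming a TensorFlow 2 model truncated at the chosen layer (the original Deep Dream release used Caffe, so everything here is a modern stand-in): compute the layer's activations, treat their overall strength as a loss, and step the input image along the gradient that increases it.

```python
import tensorflow as tf

def deep_dream_step(truncated_model, image, step_size=0.01):
    # truncated_model: maps the input image to the chosen layer's activations
    with tf.GradientTape() as tape:
        tape.watch(image)
        activations = truncated_model(image)
        loss = tf.reduce_mean(activations)   # how strongly the layer "fires"
    grad = tape.gradient(loss, image)
    grad /= tf.math.reduce_std(grad) + 1e-8  # normalize so steps stay stable
    return image + step_size * grad          # emphasize what the layer saw
```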

  • If you look at this, what actually happens

  • inside this network-- and I'm coming from the kind of,

  • let's say, a pixel arts background,

  • or I'm doing lots of work with bitmaps.

  • So I look at it this way.

  • Then it's actually bitmaps all the way down.

  • So in the top layers, you can still

  • see there is some convolutions going

  • on that reminds you of sharpening

  • filters or something.

  • The deeper you go down the net, actually, it

  • gets more and more abstract.

  • But for me, even those dots are still little bitmaps.

  • And then I can do stuff with them.

  • I can increase the brightness, reduce or enhance the contrast,

  • blur things.

  • Or what I also can do-- I can treat it

  • like a brain surgeon in a way.

  • Like I poke in, and say, OK, when I poke in here,

  • does the finger wiggle or the toe?

  • And what I can do then is, I can just

  • say, well, how about if I turn every neuron off

  • except, maybe, for example, the 10 most activated ones,

  • and send it back up?

  • The code to do that is, again, rather simple.

  • So again, each of these little bitmaps is-- well,

  • for me, it's a bitmap or it's a little array of numbers.

  • So what I can do for each of these cells,

  • I just sum up all the pixels in there.

  • Then I sort them by how much they summed up to.

  • And in the end, I just keep the top 10, for example-- or if I'm

  • just interested in a single category, the most activated

  • one-- and replace all the other values with 0,

  • and send it back up.
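
A small sketch of that filtering step, assuming the layer's activations arrive as a (channels, height, width) NumPy array (an illustrative assumption): sum each channel, keep the k strongest, zero the rest.

```python
import numpy as np

def keep_top_channels(layer, k=10):
    # layer: activations of shape (channels, height, width);
    # each channel is one of the "little bitmaps"
    sums = layer.reshape(layer.shape[0], -1).sum(axis=1)  # sum up all the pixels
    top = np.argsort(sums)[-k:]                           # the k most activated cells
    out = np.zeros_like(layer)
    out[top] = layer[top]                                 # keep top k, zero the others
    return out                                            # send this back up the net
```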

  • What I get then is something that looks like this.

  • So it's kind of just reducing everything

  • to a single category the network thinks it has seen.

  • And I like these, really, because they are, again,

  • totally out of this world.

  • Some remind me of organic patterns.

  • Sometimes you can see, OK, this might have come

  • from an eye detector or so.

  • But definitely, it doesn't contain any traces

  • of slugs or eyes again.

  • But of course, there's still this other issue.

  • And I call it the PuppyLeak, or rather, I kind of

  • reveal why there are so many puppies.

  • Like why does Deep Dream have such a love for puppies?

  • Well, the reason is that their network--

  • the original network-- that was used

  • for the first release of Deep Dream

  • was based on the ImageNet Large Scale Visual Recognition

  • Challenge, which is kind of the Olympics of image

  • recognition.

  • And in 2014, what they did-- they added 150 new categories.

  • And it was all dog breeds.

  • So that old rule applies-- gerbil in, gerbil out.

  • So whatever you train it with, that's what you will get.

  • So then I thought, OK, maybe I just

  • have to train this network with something else.

  • But then kind of new to it, I heard these stories

  • that it takes four weeks on a super-powerful GPU

  • to train a network.

  • And I'm a poor artist.

  • I can't afford an NVIDIA rack with these things.

  • But then I came across this technique

  • which is really astonishing.

  • It's called fine tuning.

  • And what it does is, you can take

  • an already trained network, for example, the Google Net.

  • And all you have to do is to cut off the bottom layer.

  • And then you can retrain it with new material.

  • And what happens is, all the top layers

  • are pretty much the same, no matter what you train it with,

  • because they look for abstract elements like edges,

  • curvature, or things like that.

  • So that doesn't need to be retrained.

  • So doing that, you can take a trained network.

  • You feed in new images.

  • Instead of taking four weeks, you

  • can train a network overnight.
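
A sketch of that fine-tuning in Keras, under stated assumptions: InceptionV3 stands in here for the GoogLeNet-family model, and the 26 classes and the commented training call are illustrative. The pretrained feature layers are frozen and only a new final classifier is trained.

```python
import tensorflow as tf

# Load a network already trained on ImageNet, minus its final classifier.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # the early layers (edges, curvature) transfer as-is

# New final layer: 26 classes, one per decorative initial A-Z.
outputs = tf.keras.layers.Dense(26, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(new_images, new_labels, epochs=...)  # hours, not four weeks
```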

  • And the way I do it-- well, I tried it with what I

  • called MNIST with a twist.

  • In my work with these archives,

  • I come across a lot of these decorative initials.

  • And I thought, is there a way I could actually

  • train it to recognize A, B, C that

  • come in all different kinds of shapes?

  • Well, I tried.

  • And I must admit, there is a manual process involved.

  • Because what I have to do is, the way I do it,

  • I really create folders on my hard drive,

  • and go manually, and drag and drop whatever I find into them.

  • Let's say, oh, another A. I drop it in the A folder.

  • I don't have to do this with thousands of images.

  • Actually, it turns out I can just

  • start with-- it's enough to take 50 for each category.

  • I'm pretty sure there are people who know much more than me

  • about machine learning.

  • They say, oh, my god, that will never work,

  • and it will totally overfit.

  • It doesn't matter.

  • Actually, it works.

  • So I start with just let's say 20 to 50 images per category,

  • and then train the network using the fine-tuning technique.

  • I let it kind of simmer for two hours,

  • so it gets a little bit accustomed to the data.

  • And then I use this to show me what

  • it thinks these letters are.

  • So I train it a bit.

  • I give it a bunch of random images.

  • And it says, I think it's an A. I say, no, it's not an A.

  • And then a B?

  • Oh, yes.

  • And actually, it gets better and better.

  • Because what I do is, whenever it finds something,

  • it gets added to my training set.

  • And I can repeat this process.

  • And in order to kind of help me with this process of saying,

  • yes/no, I realized that kind of left-swipe right-swipe

  • is a real popular way to decide if you are into a specimen

  • or not.

  • And it's actually a super-fast way.

  • So I can go back and forth, and in one hour

  • I think I can go through 1,000 images and say if it's correct

  • or not.

  • And as a result-- so for example here, this

  • is stuff where it has correctly recognized that it's an A.

  • And you can see, it's really surprising.

  • It comes in so many different shapes

  • that, well, it's just amazing how powerful

  • these networks are.

  • Then sometimes, it gives me something like this.

  • And I don't know.

  • It looks like a ruin to me.

  • So the machine said it's a B. And then I said, no, hmm.

  • So I actually went to the original scan in the book

  • to check it out.

  • And indeed, it is a B. So it's really magic.

  • And of course, if all you have is a hammer,

  • everything looks like a nail.

  • It starts seeing letters in everything.

  • So it gives me these things.

  • But again, it's beautiful, so maybe something I

  • have not been looking for.

  • But of course, this was about Deep Dreaming.

  • So this is then what happens if I use this newly

  • trained network on material.

  • And you can definitely see it takes

  • an entirely different twist.

  • It suddenly has this typographic feel to it.

  • And another example-- not sure if you recognize the lady

  • on the left and the right.

  • But it's actually the Mona Lisa in,

  • let's say, a style that has a linocut aspect to me.

  • But yes, there's no more puppies involved at all.

  • But OK, one more thing-- with Deep Dream, I

  • had this great opportunity in the spring of this year,

  • where the Grey Area Foundation located in San Francisco

  • was doing a charity auction to fund their projects.

  • And I was really honored to be invited to contribute

  • some artworks there.

  • So I kind of ate my own dog food and tried

  • to create something that is not obviously Deep Dream.

  • So I created this piece called "The Archimedes

  • Principle," which reminds me of a ball floating in water.

  • But the one thing I didn't mention yet is my residency.

  • And the reason is, it just started.

  • But I can tell you, I feel like a child in a candy store.

  • It's really amazing.

  • I have this huge amount of data to play with.

  • I have all kinds of metadata.

  • I have super-smart people-- much smarter

  • than me-- that I can ask extremely stupid questions.

  • And I've already started working,

  • but it's still something I want to fine-tune.

  • But I also had the privilege to see what Cyril was actually

  • doing there for a while.

  • And so, with no further ado, let's

  • get Cyril onstage so he can show you some awesome things.

  • Thank you.

  • [APPLAUSE]

  • CYRIL DIAGNE: Thank you, Mario.

  • Well, my name is Cyril Diagne.

  • And I'm a digital interaction artist.

  • So a question I get often is, what is a digital interaction

  • artist?

  • So it's actually very simple.

  • I try to create poetic moments for people

  • when they are interacting with digital systems.

  • And the process is also fairly simple.

  • It basically boils down to two steps.

  • One step is just play, really-- just

  • like a kid getting my hands on some technology,

  • and playing without any goal, just

  • for the pleasure of learning new things and having fun.

  • And then sometimes, a poetic outcome appears.

  • And what that lets you do is, for example,

  • swing through the stars, or having your face covered

  • with some generative masks, or creating a video of people

  • dancing together, and so on, and so on,

  • and other things that I've been doing over the last years.

  • But you might be wondering, OK, what about machine learning?

  • It turns out one year ago, I didn't know anything

  • about machine learning.

  • It all started when Damien came to me.

  • He asked me, hi, Cyril, how are you?

  • I'm good, thanks.

  • How are you?

  • Not too bad.

  • And he asked me, what can you do with seven

  • million cultural artifacts?

  • I had to stop for a moment.

  • What do you mean seven million cultural artifacts?

  • What can I do?

  • He said, yeah, well, the Cultural Institute

  • has been working for several years

  • with thousands of partners.

  • We have this great database with amazing content.

  • Well, what can you do with it?

  • Well, I really had to stop, because those

  • are seven million opportunities to learn something new

  • about a culture, about some events--

  • really incredibly high-quality content.

  • So in order not to panic, I did what every coder

  • would do in this situation.

  • I [INAUDIBLE] all the assets and plotted them in a sine wave.

  • Because then, I did not have to think about it anymore.

  • That was done.

  • And from there, well, I did the most obvious things.

  • I started to plot them by time, by color, by medium.

  • And, well, you find some interesting things

  • along the way.

  • But you can't help but think there's

  • got to be more-- all this great material, all

  • this great technology-- there's got to be

  • more that we can get out of it.

  • And so this is where machine learning came in.

  • Especially when you go to Google every day,

  • it doesn't take too long until someone points you

  • to some amazing things.

  • And so one of the first experiments that we did

  • is using a popular technology now,

  • which is the machine learning annotation.

  • So it's the technology that allows

  • you to find the photos that you took in the forest

  • by typing "trees," the photos of my friend Tony, et cetera.
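
The talk doesn't name the exact service, but a sketch of this kind of labeling with the Cloud Vision API Python client (an assumption on my part; the file name is illustrative) would look like this:

```python
from google.cloud import vision

# Assumption: credentials are configured and the "service" mentioned
# in the talk is something like the Cloud Vision API.
client = vision.ImageAnnotatorClient()

with open("artwork.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, label.score)  # e.g. "horse", 0.97
```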

  • But what about the less expected labels-- the labels

  • you would not think to write down in the first place?

  • And what about less-expected images as well--

  • images that you would not expect to find anywhere there?

  • Well, we were curious as well.

  • So we sent it all over to the service.

  • And when we got the results back,

  • well, I fell off my chair.

  • I will show you.

  • Can we get onto the demo?

  • So basically, what we can see here

  • is that we got back around 4,000 different unique labels.

  • And we had to spend hours.

  • It was too amazing.

  • The things that were detected were really incredible.

  • Let's go over one example that you would expect to work,

  • because it does, but in quite an amazing way.

  • So let's look for horses.

  • There.

  • So as you can see, some beautiful artworks of horses.

  • Let's just take this one, for example-- beautiful painting

  • from Xia Xiaowan.

  • It's quite incredible, because I don't

  • know how he got his inspiration to picture horses

  • from this angle, because I didn't even

  • know it was possible, or how the machine managed

  • to detect that it was a horse.

  • But it does.

  • And, well, it's quite amazing.

  • And also, you get examples after examples of, well,

  • really incredible things like this,

  • for example, calligraphic artwork of, again, a horse.

  • But that really goes toward the abstract representation

  • from Yuan Xikun.

  • And again, a beautiful artwork, but it quite blows

  • my mind that now algorithms are capable of distinguishing

  • this much detail in these images.

  • So unfortunately, I can't go over all those examples.

  • But we are going to release this online soon.

  • So please take a day off to go through all of them,

  • because there are some pretty amazing examples.

  • And we said, OK, now that we realize it really works, what

  • about more tricky examples?

  • Like for example, let's put something

  • that leans toward emotion, like "calm," for example.

  • Let's have a look.

  • Calm-- there.

  • Yeah, there is this thing where we didn't know what to expect.

  • When you click on the label and when you look at the content,

  • indeed, yes, I see.

  • I understand your reference, computer.

  • These are indeed calm sceneries, beautiful landscapes-- yeah,

  • it's peaceful.

  • So as we went over and over, we came

  • across labels that we would not have thought to write down.

  • Like for example, one that I quite

  • like-- I didn't know it was a thing--

  • but we found this "lady in waiting" collection,

  • automatically created, and sent back by the algorithms.

  • But look at this beautiful collection

  • of incredible artworks from various centuries of,

  • I don't know, maybe-- I guess it was

  • a thing-- "Lady in Waiting."

  • Oh, well, apart from that one, maybe.

  • And then maybe one last example-- this one I really

  • fell off my chair.

  • I had no idea what to expect.

  • I was, like, that must be some glitch.

  • But then when it appeared, well, it just makes sense.

  • Yeah, I mean-- that's right.

  • So yes we'll release this online soon.

  • We have to polish a few things first.

  • But what you can see is before the neural net is

  • able to label all these images, first

  • it is representing all the images in a highly dimensional

  • world, let's say.

  • But it really qualifies as the world.

  • Basically, the neural net, to be able to apply those labels,

  • it positions every asset at a particular position

  • in this world.

  • Well, I don't know about you, but I want to visit that world.

  • What does it look like?

  • Is there an island of the Impressionists with blue hats,

  • or the highway of Roman statues?

  • I don't know, but what I suggest is you hop on with me,

  • and we just have a quick tour.

  • I will give you a quick tour.

  • OK, so let's start from these two assets here.

  • Well, they seem to have in common

  • that they are landscapes.

  • But let's take a step back.

  • We can see other artworks appearing with, well,

  • it looks like some series of landscapes-- more landscapes.

  • But if you look around, then we can see

  • that we're actually surrounded.

  • We literally woke up in this, I don't know, the Island of Art.

  • And well, let's have a tour, actually.

  • So here we can see a scene with animals, I guess.

  • As we continue, I think it's this direction.

  • So let's give it a little while for it to load.

  • People start to appear in the picture,

  • along with those animals.

  • And if we step back even more, actually,

  • let me take you to one of my favorite spots

  • directly-- this one.

  • I call it "The Shore of Portraits."

  • Look at this.

  • Let's give it a little while to load.

  • And so this-- sorry, I'm going really quick,

  • because we have a lot of things that I want to show.

  • But this is a TSNE Map.

  • So TSNE stands for t-distributed Stochastic Neighbor Embedding,

  • which basically takes all those 128 dimensions

  • and flattens them to just two dimensions,

  • so that you can plot them, and make it interactive, and easier

  • to travel across.

  • So here is a very simplistic diagram

  • that shows how you can create the map.

  • So basically, we take the image.

  • We feed it into the neural net which

  • extracts the raw features, which is a 128-dimensional vector.

  • Then in two steps, we reduce those dimensions to two.

  • And then it becomes-- you choose the way you want to plot them.

  • But this you can do now with eight lines of Python.

  • So it's not something that is reserved

  • for scientists or researchers.

  • Thanks to some amazing open source libraries like sklearn,

  • it is something you can do with eight lines of Python.

  • You just load the CSV of your raw features.

  • Well, I won't go over all those lines,

  • but you do a truncated SVD, which is the first [INAUDIBLE]

  • that reduces to 50 dimensions.

  • And then the TSNE is able to create this nice map

  • of two-dimensional vectors.
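
Here is a sketch of those eight lines with sklearn (the file names are illustrative, not from the talk): truncated SVD reduces the 128-dimensional raw features to 50, then TSNE flattens them to 2.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE

features = np.loadtxt("raw_features.csv", delimiter=",")  # one 128-d row per image

reduced = TruncatedSVD(n_components=50).fit_transform(features)  # 128 -> 50
coords = TSNE(n_components=2).fit_transform(reduced)             # 50 -> 2

np.savetxt("tsne_map.csv", coords, delimiter=",")  # 2-d points, ready to plot
```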

  • And then, as we saw the "Shore of Portraits,"

  • we got an idea, which led to what

  • we call the "Portrait Matcher."

  • So basically, the Cultural Institute--

  • we detected in the Art Project Channel about 40,000 faces.

  • So the question came quite naturally--

  • what if you could browse this database with your own face?

  • And it has to be real time, otherwise it did not happen.

  • So can we switch back to the demo?

  • All right, let's try that.

  • Let's see if it works.

  • All right, so let's see.

  • Oh, wow, OK.

  • I think we might-- OK, so we still

  • have to polish this one a little bit.

  • But if you come to our lab in Paris,

  • we have a much more refined setup, which works much better.

  • But anyway, let's move on.

  • All right, thank you.

  • [APPLAUSE]

  • OK, can we switch back to the slides?

  • All right, but one good thing is that this experiment led us

  • to another iteration.

  • It came from another popular thing that

  • happens is when you're in front of a painting,

  • and you see someone drawn, and you feel like, oh, I

  • feel like I know this person.

  • Or oh, that definitely looks like my neighbor, Tony.

  • And actually, as it turns out, there

  • is this great model that's been done by some researchers

  • at Google which is called FaceNet.

  • And this model is able to achieve

  • 99.63% accuracy on Labeled Faces

  • in the Wild, which is the de facto academic benchmark

  • for this type of research.

  • So basically this neural net embeds faces in a Euclidean space,

  • where faces that are from the same identity are

  • closer together than faces with dissimilar identities.

  • So basically, same people are closer in this space

  • than different people.

  • And what it sends you back, basically,

  • is, again, a 128-dimensional vector

  • that represents the embedding.
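
A small sketch of the matching idea that follows (the embeddings are assumed to come from FaceNet or a similar model; the names are illustrative): since the same identity sits closer in the embedding space, the best painting match is simply the nearest 128-dimensional vector.

```python
import numpy as np

def best_match(query_embedding, painting_embeddings):
    # query_embedding: the visitor's 128-d face embedding
    # painting_embeddings: array of shape (num_paintings, 128)
    dists = np.linalg.norm(painting_embeddings - query_embedding, axis=1)
    return int(np.argmin(dists))  # index of the closest painted face
```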

  • And so we had to give it a try.

  • So who knows these guys?

  • OK, they're very popular in Europe, too.

  • So we took them as a starting point.

  • Let's try to find.

  • Let's see how well this model performs.

  • But let's not include in the mix other pictures of them,

  • because I'm sure it would work, so that would be boring.

  • What if, instead, we forced the system

  • to send us only pictures that are paintings?

  • Well, again, I can tell you when I saw the result,

  • I fell off my chair.

  • And yes, I did spend a lot of time on the ground

  • during this residency.

  • OK, I'm really excited to press this next button.

  • All right, this is what we got back.

  • I mean, Jared in the middle-- even though there

  • is a huge-- I mean, there is a gender mismatch-- that

  • could be his sister, right?

  • Or even Richard Hendricks, just on the right of him, like,

  • it even got the curliness of the hair.

  • It is just amazing.

  • So of course, you can imagine from there

  • how many tries we did.

  • Everyone wants to find his doppelganger in the database.

  • And here, who would have known that Richard Hendricks had

  • a half-naked man painted at the Tate Britain right now?

  • Let's take these guys, for example--

  • some of my personal heroes.

  • Let's see what we get.

  • And there, again-- even though this beautiful artwork

  • from Shepard Fairey for the Obama campaign in 2008

  • is highly stylized with only four colors,

  • the algorithm still managed to find the matching.

  • And yes, that is quite amazing.

  • It would not have been fair not to try it ourselves.

  • So we gave it a try here at I/O. And sorry, Mario,

  • but-- [LAUGHTER] the blue suits you so well.

  • So this is really fun.

  • And thanks again to Damien and everyone

  • at the Cultural Institute for offering us

  • this really great opportunity.

  • And I will hand it over to you again.

  • So thank you very much.

  • [APPLAUSE]

  • DAMIEN HENRY: Thank you, Cyril.

  • Thank you, Mario.

  • I'm very happy to work with these guys.

  • And I'm very lucky, I guess.

  • So what's next?

  • So if you want to stay in touch with us,

  • please download our app.

  • It's a way to keep a link with you.

  • And our goal is to inspire you to try machine learning, too.

  • So if you want to give it a try, I recommend TensorFlow.

  • It's an open-source framework, easy to start with.

  • As a tutorial, the one done by Mr. Karpathy

  • is really, really good.

  • It really helped me to understand what's

  • going on with machine learning.

  • And the Cultural Institute is not the only team within Google

  • that is really passionate about machine learning.

  • So for instance, there is the MEI Initiative.

  • And this link is really good reading.

  • So I encourage you to try it.

  • And the last one is about the team

  • named Magenta, completely dedicated to generating art

  • using machine learning.

  • So thank you, thanks everyone.

  • And that's it.

  • [APPLAUSE]

  • [MUSIC PLAYING]
