CMJ Masterclass | Shaping The Future: Music, Technology and Creative Collaboration

  • My name is Steve Milton, as Lisa just said, and I’d like to welcome you all here. I’d

  • absolutely love to thank CMJ as well as Microsoft for making this all possible.

  • Also we’d like to begin by introducing everybody else up on stage here. We have Charlie Whitney,

  • Dave Rife, and Kamil Nawratil. I almost got it.

  • Listen. This is who we are. I thought it would be great to give you guys a little bit of

  • background on the project. I am the founding partner of Listen. We are a creative and strategic

  • group just down the street.

  • And most of our work, and many of the projects we do, are focused on the

  • intersection of music and marketing. The two projects that we’re going to talk about

  • today are part of an ongoing partnership that we have with Microsoft.

  • Delqa is the first project we’re going to look at, that we worked on with Matthew Dear.

  • And the second, something that’s actually launching this week, was a live show visual

  • system for Neon Indian.

  • When we think about music and technology, we think about how technology has, over

  • history, shaped how musicians are able to express themselves, how audiences receive music,

  • and how we also define what music is.

  • Technology definitely plays a role in that. And this is true now more than ever, and is

  • core to the work that we’re doing with Microsoft.

  • Specifically our work involves developing and concepting ideas and projects like this,

  • so we can bring together musicians with technologists and talented folks like the people who are

  • here on stage today.

  • Delqa is a project here, we’re going to watch a video in just a minute which will

  • help set it up, and then Dave and Charlie are going to, uh, get under

  • the hood a little bit on this project.

  • It was an installation that went up at the New Inc. space, at the

  • New Museum, over the summer. And you know, I think I’m going to let the video speak

  • for itself.

  • Technology’s allowing for this openness. It’s this living, breathing sound sculpture.

  • You can play with it, and tweak it.

  • I wouldn’t make the song that I made for DELQA at any other point in my career or my

  • life because it only works for a space like that.

  • To have that opportunity, and to meet wonderful new people and learn from them and see what

  • they’re doing and get ideas that I wouldn’t have come up with, I mean, that’s the best

  • . . . that’s art.

  • We saw this project as an opportunity to create something that had never been done, pulling

  • together an amazing artist with a team of amazing artists using Microsoft’s technology.

  • Seeing the guys break it apart, I’m seeing the guts of the system. It’s a lot of math

  • and a lot of numbers. These guys are smart; they’re really smart.

  • The big idea that we were exploring was being able to step inside of Matthew’s music.

  • This piece is really about creating a sonic environment. So one of the ways that we’re

  • going to achieve that is by installing 40 loudspeakers throughout the space.

  • Those will all be linked together into a spatial audio system that allows for a three-dimensional

  • field of sound.

  • You’ll be inside of all these different musical ideas that make up one beautiful composition.

  • We were able to leverage the Microsoft Kinect’s ability to track people’s movements spatially

  • through this experience.

  • The Kinect is everywhere in this thing. You’re almost always covered by a Kinect. We specifically

  • sourced this fabric material. It was opaque to the depth cameras, so we could actually read

  • the surface of the fabric, but it was transparent to the color and infrared cameras, so we could

  • actually see through it.

  • As people push on the walls and displace the membranes, they’re actually able to change

  • the quality of the music.

  • Different areas of the installation affect the music in different ways. Sometimes a user

  • will be changing the actual part that’s playing. Sometimes they’ll be introducing

  • new parts into the music.

  • We wanted every interaction to feel very powerful for the audience. So, if we take the drum

  • part for example, as they press in, the rhythmic density of it becomes much more complex, but

  • we’re also changing the timbral qualities of those synthesized drum sounds.

  • All of these parameters allow the audience to have a multidimensional influence across

  • the composition.

  • So you could visit this piece on a number of occasions and you’ll hear something different

  • each time. It would never be the same if you went there twice.

  • Our desire was to create a new type of architecture, influenced by how we experience sound.

  • Doing interactive design is really about creating a magical experience, and having really powerful

  • things like a Kinect that help you create those experiences.

  • There’s so much that we can do with a project like this.

  • The more accessible these technologies become, the more people are able to use them and do

  • surprising things with them.

  • People want to make music, people want to be part of the music making process. Technology

  • allows that.

  • Your mind is totally free to wander. There’s no right or wrong answer. It’s just however

  • far you want to push it.

  • To be able to bend music, make music, and put that in different worlds, that’s what

  • makes this fun.

  • So, uh… that should help to give a little bit of context around the project, what we

  • were up to and how that all came together. Now I think I’ll hand it over to Dave to

  • talk a little bit about the composition and the sound and light and even a bit more.

  • Check, check, check, we’re here. So I’m glad that video played. I can kind of be done

  • now. But no, we’ll go a little bit under the hood – [To Kamil] yeah, you can go ahead.

  • I lead a company with Gabe Liberti, who is out there somewhere. Where are you Gabe? Raise

  • your hand so everyone knows… that’s Gabe Liberti!

  • We have a creative studio called Dave and Gabe. We do a lot of sound and light work

  • with immersive and interactive environments. I have a background in architectural acoustics

  • and how sound behaves in space.

  • Gabe has a really strong background in production, and you can see why we ended up doing a lot

  • of the work on this project.

  • If you go to the next slide, you’ll see some of our work. It all involves interaction;

  • most of it involves sound and lights. Go ahead, go to the next one. Oh good, that

  • was out.

  • I think one of the first things that we talked about as a team, that entire team that you

  • saw in the video, was, what is this interaction going to feel like?

  • Like, how are you going to trigger these different moments in the sound, and how are you going

  • to create this composition based on what Matt already wrote?

  • And so, we really loved the idea of something being physical. Actually like physically touching

  • something, something like an interface, but something more architectural.

  • So we had these two different types of meshes. One of them was kind of spandex-y, like

  • this, and one was more like a net.

  • And as you pushed into it, you would change the composition, you would change different

  • parts of this music, and so based on how you’re pushing like this, and also where you are

  • in the space, like if you’re pushing on this side or that side, you would trigger

  • different sounds and different effects.

  • You saw in the video -- this is how it kind of works. So that’s great. We can go to

  • the next one. But very tactile was what we were going with.

  • And then, at the same time, we were working with Matt on the composition. So Matt totally

  • got it right away. He met with the whole team and totally understood: this would be rad as

  • an environment, I’ll create some music.

  • But how does it work in terms of making that music interactive? Because typically you’re

  • listening just over headphones or out of your speakers and you’re just taking it in as

  • the listener.

  • But what we were all going after was putting you inside of his music, so pulling his music

  • apart and surrounding you with it, but also giving you as a listener agency to change

  • what’s going on, to manipulate the sounds that are happening. [To Kamil] No, not yet,

  • not yet.

  • So, what you see here is the Ableton session. So Matt created his composition in Ableton,

  • and then we all shared that session together.

  • And we worked with him to take specific tracks, so you’re seeing six different tracks here,

  • so six different parts of the music.

  • But we wrote multiple clips per track. Right? So there’s different energies within each

  • one of those tracks.

  • There’s different kinds of sounds happening within that, but it’s all aligned

  • to one type of sound, so like, one might be a hi-hat, one might be a kick drum, one

  • might be arpeggios, that’s all kind of in the vertical there, and then within each one

  • of those are different sounds, so that when you push into the netting, you can trigger

  • those different sounds. [To Kamil] So if you go to the next one we should see that.
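
As a rough sketch of that mapping, not the real system: quantize a normalized push depth to one of several clips on a track and report it over OSC. The port and address space below are invented for illustration; the installation itself used a custom Max for Live device for this.

```python
# Minimal sketch: map a normalized push depth to a clip index per track
# and send it over OSC. Port and address pattern are hypothetical.
from pythonosc.udp_client import SimpleUDPClient

CLIPS_PER_TRACK = 4                          # e.g. four energy levels per instrument
client = SimpleUDPClient("127.0.0.1", 9000)  # hypothetical listener (e.g. a Max patch)

def push_to_clip(depth_norm: float) -> int:
    """0 = untouched, 1 = fully stretched -> clip index on that track."""
    depth_norm = max(0.0, min(1.0, depth_norm))
    return min(int(depth_norm * CLIPS_PER_TRACK), CLIPS_PER_TRACK - 1)

def on_push(track: int, depth_norm: float) -> None:
    client.send_message(f"/delqa/track/{track}/clip", push_to_clip(depth_norm))

on_push(2, 0.65)  # pressing ~65% into zone 2 fires clip 2 on track 2
```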

  • This is kind of the way that we made up the MIDI notes that were synthesizing some of

  • the drum sounds, so you see, as we push in, you get more rhythmically complex, a little

  • bit more dense patterns, and then as you push out all of that goes back away… next slide.

  • And so I don’t know… go ahead and hit it again so it plays the movie… so you

  • can kind of see right in the middle, um, there’s a kick selector, and then this is the Microtonic

  • VST. Right in the middle, you’ll see, right around here… somebody is starting to push

  • into the music and you see that little arrow clicking down on the different tunes, and

  • you hear the different complexity that’s happening within the music.

  • I think it’s a little out of sync with the visuals, but you can also see within the Microtonic,

  • there’s different filters happening, and that’s all based on the real time information

  • about where people are pushing as well.

  • So this is a custom tool within Max for Live that Yotam Mann, who you saw

  • in the video, wrote in order to make this music more interactive, and not

  • just like a static copy of what’s going on.

  • And then so now we have the interaction that we want to do, and we have the music and the

  • music is flexible. So now we wanted to reproduce it over a huge audio system that’s all around

  • the listener.

  • And so we had 44 loudspeakers. And the point of this was not for loudness… it’s not

  • to make the thing really loud… it’s so that we can pull the tracks apart. We can

  • take the stems out, and give each its own place to be. And we can also give all of the

  • stems, or some of the stems, we give them trajectories, so there’s sounds that are

  • moving around the space all around you, and there’s some sounds that are moving based

  • on what you’re doing within the architecture.

  • So we laid it out in six different zones, A through F. A, B, C, and D were kind of

  • four different instruments, and then E and F were when you climbed up into the middle

  • of the netting, you’d get this kind of drone type thing happening around your ears if you

  • were there, and if you weren’t there it wouldn’t be there. Go ahead.

  • And then this is kind of what it looked like in one zone. So we had ten loudspeakers for

  • each of those quadrants, and some of the sounds would be really diffuse and be happening all

  • around you if you weren’t pushing into the net, and then if you pushed into the net it

  • would go local to where you were.
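
A minimal sketch of that diffuse-to-local behavior: with no touch, a stem is spread evenly over a zone’s ten loudspeakers, and as someone pushes in, the per-speaker gains collapse toward the speakers nearest the touch point. The speaker positions and falloff constant are illustrative, not the installation’s real layout.

```python
# Minimal sketch: crossfade a stem between "diffuse everywhere" and
# "localized at the touch point" based on push depth. Layout is made up.
import numpy as np

speakers = np.array([[x, y] for x in range(5) for y in (0.0, 2.5)], float)  # 10 per zone

def zone_gains(touch_xy, push_norm, falloff=1.5):
    """push_norm: 0 = untouched (fully diffuse), 1 = hardest push (fully local)."""
    n = len(speakers)
    diffuse = np.full(n, 1.0 / np.sqrt(n))            # equal power everywhere
    d = np.linalg.norm(speakers - np.asarray(touch_xy), axis=1)
    local = np.exp(-falloff * d)
    local /= np.linalg.norm(local)                    # keep total power constant
    g = (1.0 - push_norm) * diffuse + push_norm * local
    return g / np.linalg.norm(g)

print(zone_gains(touch_xy=(1.0, 0.0), push_norm=0.8).round(2))
```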

  • So you could really like spatialize where the sound was happening just based on what

  • you were doing, on top of all of the compositional stuff that was happening. And you already

  • saw that. I think that this is just a really funny clip because Gabe is in slow mo with

  • a speaker on his head. I had to show that, sorry. Keep going, oh there it is again. Wonderful.

  • And then in terms of the spatialization we used MaxMSP. We had some of the tracks within

  • the piece. These were not so much interactive, but they were always moving around you.

  • So what you’re looking at here on the right side of the screen is like a top-down view

  • of all the loudspeakers in the space, so you’re looking at the floor.

  • And then those green dots are actually sound sources, so that’s like particular parts

  • of his track that are always like slowly rotating in the space, and then we had these hi-hats,

  • which is kind of that crazy one, randomly going all around, and that ended up feeling

  • like it was like a fly kind of going all around your ear. It was super cool.
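
The two trajectory types he describes, slow orbits and the fly-like hi-hat, could be generated with something like the sketch below, streaming source positions to the spatializer (Max/MSP in the actual piece). The OSC addresses, port, and update rate are assumptions.

```python
# Minimal sketch: stream two kinds of source trajectories over OSC,
# a slow circular orbit and a clamped random walk ("fly around your ear").
import math
import random
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9001)   # hypothetical spatializer port

def orbit(t, radius=3.0, period=30.0):
    """Slow circular path: one lap of the room every `period` seconds."""
    a = 2 * math.pi * t / period
    return [radius * math.cos(a), radius * math.sin(a)]

hat = [0.0, 0.0]
def fly_step(step=0.4, bound=3.0):
    """Random walk, clamped to the room, for the fly-like hi-hat."""
    hat[0] = max(-bound, min(bound, hat[0] + random.uniform(-step, step)))
    hat[1] = max(-bound, min(bound, hat[1] + random.uniform(-step, step)))
    return list(hat)

for frame in range(300):                      # ~10 seconds of position updates
    t = frame / 30.0
    client.send_message("/delqa/source/pad", orbit(t))
    client.send_message("/delqa/source/hat", fly_step())
    time.sleep(1 / 30)
```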

  • And this is kind of what it looked like to mix the piece. It’s really interesting to

  • mix a piece over a large spatial system like this, and also interactive, so we had to set

  • up in front of each one of the quadrants and get the local mix right and then sit and listen

  • to the overall piece and get the overall thing right, so it’s a ton of variables, but that’s

  • kind of what it looked like.

  • And then you’ll notice here, go ahead, that there’s lighting in the space too that was

  • also reactive to what you were doing in the space and it was also tied to the music, so

  • Ableton was kind of driving some of the lighting as well.

  • So what you’re seeing here is again a plan view overhead of the lighting system, a whole

  • bunch of DMX-addressable wash lighting that we hit the netting with. Go ahead.

  • And then we used TouchDesigner to drive that. So we had these different color palettes within

  • Touch for each part of the song. There were kind of three parts of the song in the timeline.

  • And then as you pushed in, you’ll see here, so this is the live installation and you’ll

  • see different color palettes happening, that’s based on people actually pushing and pulling.

  • This is the software working in real time. And then you’ll see, this is what it kind

  • of looked like to be there. And this is the arpeggio, as you can hear.

  • The lights go in and out, the lights also change color based on how far in you are,

  • and this particular instrument was an arpeggiator, so as you push in you get higher frequencies.
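
As a rough illustration of push-driven lighting, here is a minimal sketch that picks a wash color from the current song section’s palette, scales it by push depth, and sends it as an Art-Net DMX frame. The real show drove the fixtures from TouchDesigner; the palettes, fixture count, universe, and node IP here are all invented, and the packet layout follows the public Art-Net spec (verify against your node).

```python
# Minimal sketch: push depth -> palette color -> ArtDMX packet over UDP.
import socket
import struct

PALETTES = {  # one palette per song section, RGB 0-255 (illustrative)
    "intro":  [(20, 40, 120), (10, 90, 140)],
    "middle": [(160, 30, 90), (200, 80, 20)],
    "outro":  [(30, 150, 60), (90, 180, 40)],
}

def artnet_dmx(data: bytes, universe: int = 0) -> bytes:
    """Pack an ArtDMX packet (field layout per the published Art-Net spec)."""
    return (b"Art-Net\x00" + struct.pack("<H", 0x5000)  # OpDmx, little-endian
            + struct.pack(">H", 14)                      # protocol version
            + bytes([0, 0])                              # sequence, physical
            + struct.pack("<H", universe)
            + struct.pack(">H", len(data)) + data)

def frame(section: str, push_norm: float, fixtures: int = 8) -> bytes:
    palette = PALETTES[section]
    r, g, b = palette[int(push_norm * (len(palette) - 1) + 0.5)]
    rgb = bytes(int(c * (0.2 + 0.8 * push_norm)) for c in (r, g, b))  # dim when idle
    return rgb * fixtures                                # 3 channels per wash fixture

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(artnet_dmx(frame("middle", 0.7)), ("192.168.1.50", 6454))  # node IP illustrative
```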

  • This is a binaural recording. If you all were listening over headphones right now, as Gabe

  • is pushing with his right hand and soon with his left you’d hear it spatialize to that

  • side and the other side.

  • And then this is what it looked like in the end. I don’t know how many of you saw it.

  • But again, around the perimeter you were pushing and pulling that kind of spandex mesh and

  • then in the middle you were encouraged to climb up in the top and kind of chill out

  • and listen to the entire piece.

  • I should also say that the point of this was so that everybody could manipulate these different

  • tracks around the room, but all contribute to the one composition together, so it wasn’t

  • like you were isolated; if you were listening on one side, you would hear the entire piece

  • happening.

  • And I was just going to end on this to say that we used eight Kinects and Charlie here

  • who’s brilliant wrote a really rad application that he’s going to tell you about, but it

  • was all done over OSC over the network, so the Kinects and Cinder were driving Ableton,

  • Max for Live, and MaxMSP. And that was also driving TouchDesigner for the lights. Charlie,

  • take it away.

  • Alright. So I am Charlie Whitney and sometimes I go by Sharkbox. And I was responsible for

  • doing a lot of the programming that was taking the Kinect and taking this data, and then

  • shipping it out to the other pieces so we could make cool music.

  • I’m going to show you some prototype-y kind of things because obviously we couldn’t

  • build this thing full scale. This was a really big installation process. It was fabricated

  • from scratch by our friends The Principals.

  • And so this is us in their shop. Their very messy shop at the time. And this is just a

  • piece of this fabric. So we tested a bunch of different fabrics, and we needed something

  • that felt really good.

  • So my background is in installation work. So I’m a coder but I do a lot of kind of

  • media art installation, and if you want people to touch something, it’s got to feel really

  • good.

  • So we had a couple other pieces of fabric that were maybe more readable by the Kinect,

  • but they didn’t feel as good. And if you’re somewhere where it’s this completely

  • immersive thing with all of these 40 channels of sound, it’s non-directional, kind of, so

  • you don’t know where it’s coming from. So you really feel like you’re inside something.

  • So we wanted to encourage people to touch and explore this thing that you’re inside

  • of, to make you really comfortable. So this is a single swatch of this fabric that we

  • had. We ended up stitching three of these together for the outer panels.

  • And these deform really easily. They’re really soft, so that we can get these sort

  • of highlighted areas where you’re pushing out.

  • And like I said in the video, the holes in this are a weird size, where the Kinect 2

  • is a time-of-flight camera as opposed to the Kinect 1, which is more like a laser grid. And

  • for whatever reason the size of the fabric is just… the holes are big enough so that

  • the depth camera hits ’em and stops.

  • But all the other cameras see right through them. And we had a really funny experience

  • when we were testing this where if you push too far, if you’re really stretching it

  • out you’re actually stretching the holes out, and there comes a point where if you

  • push it too far the holes actually become see-through and all of a sudden the depth

  • camera would pop through and you would see a person behind there. It was really weird.

  • So we almost had to work around that, but it’s a cool interaction… maybe something in

  • the future.

  • Here’s another pretty dirty prototype. This is one of the first things we did. So that

  • same piece of fabric we were just looking at. And there was even a video that Dave showed

  • earlier, just pushing on this one piece of fabric, and what you can see here is there’s

  • these four blue dots.

  • And this was our first attempt at getting differences in the fabric, like what happens

  • when you press up, down, left, and right. So what we’re literally doing is just sampling

  • the depth at the fabric underneath these blue points.

  • So on this, the top point is fully triggered. So we had a four-channel sound system hooked

  • up just to mess around with this and we were just seeing… can you do up, down, left,

  • right… what is actually possible? I don’t know if anybody had done as fine an articulation

  • of fabric, and it was a lot of testing to figure out what this thing could be.

  • We weren’t sure at the beginning. Cool.
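
That four-point prototype boils down to very little code. A minimal sketch, assuming a Kinect 2-style depth frame as a 512x424 uint16 array in millimeters; the sample coordinates, rest distance, and threshold are invented:

```python
# Minimal sketch: sample the depth image under four fixed points and fire
# a trigger when the fabric there is displaced past a threshold.
import numpy as np

POINTS = {"up": (256, 100), "down": (256, 324), "left": (150, 212), "right": (362, 212)}
REST_MM = 2200      # fabric distance with nobody touching (measured at setup)
THRESH_MM = 120     # how much closer the fabric must come to count as a press

def triggers(depth: np.ndarray) -> dict:
    """Return which of the four sample points is currently pressed."""
    out = {}
    for name, (x, y) in POINTS.items():
        patch = depth[y - 3:y + 4, x - 3:x + 4]   # small window to reject noise
        valid = patch[patch > 0]                   # 0 = no depth reading
        out[name] = len(valid) > 0 and (REST_MM - valid.mean()) > THRESH_MM
    return out

demo = np.full((424, 512), REST_MM, np.uint16)
demo[95:105, 250:262] = REST_MM - 300              # simulate a push at the top point
print(triggers(demo))  # {'up': True, 'down': False, 'left': False, 'right': False}
```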

  • So this is actually, this is Dave from above. This is something we were going to try to

  • do. We had this idea for these kind of cylindrical things that you would go inside of or be on

  • the outside of and we wanted to see if you could test the fabric deforming from up above.

  • These were going to be these really weird percussive-like almost drums we were going

  • to make, but it didn’t really work, so this disappeared, but it was a cool point of the

  • process. Actually could you go back to the blank one? I think that was a video? Alright,

  • no? Alright, keep going.

  • So this is where the Kinects were actually positioned. So this is just the front side

  • of it. There were four on the front and four on the back side, so this is just looking

  • from the front.

  • And you can see that we have two in the far corners, which were looking at that really

  • fine, that soft touchable mesh that I was talking about. And these two kind of in the

  • middle were just looking at the cargo net fabric in the middle.

  • And this is kind of what they were covering. So the blue ones are the ones that were covering

  • the fabric net, and then the pink is covering the cargo. So this is very approximate, I

  • just drew this, but it gives you an idea of how much coverage we really could get.

  • The Kinect 2 can see I think 120 degrees, maybe even a little bit more, so we were able

  • to get close and really wide with these, and get a really good amount of coverage.

  • And then we were able to get like an x, y position of where somebody was touching on

  • this fabric. We can get the x, we can kind of approximate the y, and we were trying to

  • get a z position, how far someone pushes into the fabric.

  • And it was a little bit tricky because we’re not hitting it head on. But what we found

  • was that if you just push a little bit it only deforms the fabric a tiny bit, but when

  • you start pushing huge, the whole fabric moves. So it was a much better metric to kind of

  • see how much fabric was moving instead of how far they were pushing it.

  • Because a bigger push would move more fabric. It’s kind of weird but I’ll show you an

  • example.
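
In code, that “how much fabric moved” metric might look like the following minimal sketch: diff the live depth frame against a rest-state baseline, sum the displacement, and take the weighted centroid as the touch position. Names and the noise threshold are illustrative, and the sign convention depends on which side of the fabric the sensor views.

```python
# Minimal sketch: total displaced fabric (not max depth) as the push metric.
import numpy as np

def touch_from_depth(depth: np.ndarray, baseline: np.ndarray, noise_mm=30):
    """Return (x, y, amount) where amount grows with how much fabric moved."""
    disp = baseline.astype(np.int32) - depth.astype(np.int32)  # + = toward camera
    disp[depth == 0] = 0                     # drop invalid pixels
    disp[disp < noise_mm] = 0                # ignore sensor jitter
    amount = float(disp.sum())               # total displaced "volume"
    if amount == 0:
        return None
    ys, xs = np.nonzero(disp)
    w = disp[ys, xs].astype(float)           # weight centroid by displacement
    return float((xs * w).sum() / w.sum()), float((ys * w).sum() / w.sum()), amount
```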

  • So this is a video that’s maybe playing… all right, so this is actually an application

  • that you can download. We took some of our tools and made it into an app that you can

  • download. So you can see this pink box that’s happening… this is sort of the amount of

  • fabric that is being displaced. And then there’s a dot in the center, and this is sort of our

  • x, y position.

  • So up in the upper right hand corner we have the image from the depth camera. The left

  • side is the debug, what’s happening after all these sliders and filters are messed around

  • with.

  • And that thing down here is what is called our region of interest, so that’s what we’re

  • really looking at and trying to figure out like where is this depth. It’s just a little

  • snapshot. So this will actually output OSC, so if you’re a musician and you want to

  • do something with soft tracking you can just boot this up and it will start sending you

  • messages that you can map to Ableton or Max or whatever.

  • And it’s a little bit blue but this is the address, it’s http://github.com/cwhitney/DelqaTools

  • and it’s free. It’s open source. It’s pretty rad that Microsoft let us put that

  • up. So check it out.
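
On the receiving side, listening for those OSC messages takes only a few lines. A minimal sketch using python-osc; the address pattern, port, and argument layout are assumptions, so check the repo’s README for the real ones:

```python
# Minimal sketch: receive OSC touch data and hand it to your own mapping code.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_touch(address, *args):
    # Assuming (x, y, amount) floats; adjust to the app's actual output.
    x, y, amount = args[:3]
    print(f"{address}: x={x:.2f} y={y:.2f} amount={amount:.2f}")
    # ...forward to Ableton via a Max for Live device, MIDI CC, etc.

dispatcher = Dispatcher()
dispatcher.map("/delqa/*", on_touch)        # hypothetical address space
BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```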

  • Okay cool. Thanks guys. So… yeah. There you go.

  • I think we’re going to open it up to questions but first we’re going to go take a look

  • at this Neon Indian project.

  • I think just worth saying about this one, we’re going to watch a little preview video

  • that’ll give you a sense of what is about to premiere with the Neon Indian show this

  • week, actually tomorrow at Webster Hall.

  • But really again, like with Matthew Dear, Alan from Neon Indian, we sat with him. We

  • had some conversations about how the band could take their show to the next level. We started

  • kicking around some ideas, and really arrived at what I think is a really great output.

  • Let’s play this video and then we can hop into it.

  • Short and sweet. But you get the picture. So, Kamil.

  • Hello? Thank you Steve. Hello everyone. My name is Kamil Nawratil. I’m the Creative

  • Director at VolvoxLabs. We’re based in Brooklyn. We specialize, sort of, in interactive installations,

  • as well as content creation, and kind of pushing all these ideas into the physical

  • world.

  • So what was interesting with this project is, essentially, before, when we worked with

  • other DJs and musicians, we thought about how to recreate the virtual in the physical

  • world and add something on stage.

  • So, you know, digitally fabricate walls that we projection map or use screens that we project

  • through to add another dimension on stage.

  • But this one, we actually took it backwards and sort of used the Kinect sensors to recreate

  • the actual reality, the human geometry, and put it into our digital software.

  • So we called it “Reconstructing the Realities,” and essentially the project consists of three

  • elements, so first we’re going to use the Kinects to scan the environment where the

  • band is. So we’re going to look at the stage, what’s around, and use the Kinect to get

  • the geometry data to put in our software and use it to generate visuals.

  • This will then… obviously we’re also going to look at the band members to look at their

  • movements, get data from that, and project it back behind them as an extension of the

  • reality that they’re in.

  • Also we will be able to use artist content to sort of feed the color schemes and different

  • effects within the virtual system. So again, quickly, about the process. Scanning

  • – I will show in a bit, I have a little demo… but the Kinect has a really cool

  • feature where you can actually point it at something and retrieve geometrical data as

  • an object and sort of use it anywhere in other 3D packages to texture it, or spin around

  • it, shade it, so it’s really interesting.

  • The second part is looking at the musicians through the Kinect sensors. So we have five

  • Kinects on the stage. Each one generates particles and different instances within our software

  • which is TouchDesigner to drive the visuals and represent the music in real time.

  • Thirdly, we can get inputs from the musicians, so MIDI or OSC data, doesn’t matter, we can

  • get it in, and drive colors and shapes and scale of all these objects. That also ties

  • into the style of the band or the DJ that possibly can use this package later on.

  • And then at the end we combine it all together to recreate reality.
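
As a rough sketch of that third input path: normalize incoming MIDI CC values into the 0-to-1 parameters a visual system might consume. The CC numbers and parameter names below are invented for illustration.

```python
# Minimal sketch: MIDI CC (0-127) -> normalized visual parameters (0.0-1.0).
CC_MAP = {1: "glitch_amount", 7: "particle_scale", 74: "hue_shift"}  # illustrative
params = {name: 0.0 for name in CC_MAP.values()}

def on_cc(control: int, value: int) -> None:
    """Route a MIDI control change into the visual parameter table."""
    name = CC_MAP.get(control)
    if name:
        params[name] = value / 127.0

on_cc(74, 96)
print(params)  # {'glitch_amount': 0.0, 'particle_scale': 0.0, 'hue_shift': 0.756...}
```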

  • So as Steve mentioned, we sat down with Neon Indian, with Alan, and he gave us his references,

  • his visual aesthetic to inspire us a little bit and sort of add our style on top of it.

  • So what we came up with is three different styles within the show. He’s a big fan of

  • glitchy things, so there’s a lot of glitchiness. Maybe you saw it on Fallon last

  • night. There was a nice little glitchy river flowing.

  • But we’re also pushing the new technology, so you can see these particle people walking

  • around, it’s going to be the band members, as well as virtual cameras spinning around

  • the stage, so it should be a pretty cool mix of different stuff.

  • So scans. I’ll show you a quick demo of what the Kinect can do.

  • Just to let you know, Kinect, the awesome thing about it is that Microsoft opened up

  • the SDK for it, so you can access pretty much any information that the Kinect provides,

  • so whether it’s the tracking of your face, tracking of your movement, sound tracking,

  • as well as scanning of the 3D environment, the environment where the Kinect sits.

  • What I’m using here is Fusion Explorer, so you can see the environment, there’s

  • the depth camera, the camera pose finder, but what it really does…

  • So this is how we’ll scan the stage, obviously it’s Dave, but we can look at the room, and

  • stage and instruments, and we’ll essentially add that in front of all the dynamic simulation

  • particle people.

  • But the cool thing about this is you can actually walk around an object and get a 360 scan that

  • you can use later on. You can actually texture it, too, there’s a capture color mode.

  • We’re using the depth image to get information from space and generate different reactions

  • within our software to where the musicians are and where their hands are, and where we

  • are in space essentially. So Kinect is really great for that. It can actually tell you a

  • lot about the environment that you’re in.

  • And here you can see the effects based on the depth images and point cloud images. Point

  • cloud image is essentially the position of every pixel that the camera looks at in x,

  • y, z. So here you can see all these instances attached to those points. It’s really interesting

  • how you can actually see the 3D geometry of the shapes that the camera is looking at.
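
The point-cloud step he describes is standard pinhole back-projection: each depth pixel (u, v, z) becomes an x, y, z point via the camera intrinsics. A minimal sketch, with approximate published Kinect 2 depth-camera intrinsics:

```python
# Minimal sketch: depth image (mm) -> point cloud (meters) by back-projection.
import numpy as np

FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0   # approximate Kinect 2 intrinsics

def point_cloud(depth_mm: np.ndarray) -> np.ndarray:
    """(H, W) uint16 depth in mm -> (N, 3) float32 points in meters."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                  # drop pixels with no reading
```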

  • And quickly about the setup on stage. So we have five Kinects. Alan, the main guy, we’re

  • looking at his whole silhouette. He’s very dynamic on stage, so we’re going to try

  • to capture his entire movement and reproject that as dynamic simulations.

  • All the other guys, we’ll focus on their movements, on the instruments, and whatever

  • they’re doing to create the sound, which is a little bit different than what we used

  • to do before with musicians and bands, mainly DJs, because since we’re looking at their

  • movements while they’re creating the music, this is already giving us sound-reactive visuals,

  • which is different than when working with DJs, where you’re trying to really synchronize

  • everything to the beat and the DJ just stands there really.

  • So we’re trying to make up for that with visuals. But this one, we’re actually getting

  • a freebie by looking through these sensors at the musicians and getting these sound-reactive

  • visuals.

  • So here you can see all the Kinects are looking now at a person in space. This is within TouchDesigner

  • software.

  • So we’re able to look at the points in the 3D environment and position them wherever

  • we are, so that during the performance we can actually move the virtual cameras from

  • one guy to another. Let’s say Drew at this point is making the most movements, we’d

  • probably cut to that to sort of show the situation and the action.
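
That “cut to whoever is moving most” idea can be approximated by scoring each sensor’s frame-to-frame depth change and switching the virtual camera to the highest scorer. A minimal sketch, not VolvoxLabs’ actual implementation:

```python
# Minimal sketch: pick the Kinect whose view shows the most motion.
import numpy as np

prev = [None] * 5  # one slot per Kinect

def motion_energy(cam: int, depth: np.ndarray) -> float:
    """Mean absolute depth change since the last frame for this sensor."""
    frame = depth.astype(np.int32)
    if prev[cam] is None:
        prev[cam] = frame
        return 0.0
    energy = float(np.abs(frame - prev[cam]).mean())
    prev[cam] = frame
    return energy

def pick_camera(depth_frames) -> int:
    """Return the index of the Kinect whose performer is most active."""
    return int(np.argmax([motion_energy(i, d) for i, d in enumerate(depth_frames)]))
```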

  • All that is happening on GPU. GPUs are getting super powerful these days and anything that’s

  • real time needs to be sent through that. So we developed a bunch of techniques to look

  • at the depth camera image and generate particles based on that. So we have hundreds of thousands

  • of objects and geometries happening at the same time from five people and running at

  • 60 frames per second, so that’s really amazing.

  • So when Steve approached us he also mentioned that this will be touring and this should

  • be packaged so that anyone can tap into it, like I mentioned before where you can add

  • your content in and drive the visual style of your own show.

  • So we created this DJ interface where you can access everything: change the virtual

  • camera positioning, change the colors, choose your tracks, and actually also play with the

  • Kinect cameras. You can see on top there, there’s five.

  • You can manipulate the depth image and what the camera can see in terms of distance. And

  • on the bottom you can see all the virtual camera switchers, so whoever is interesting

  • at this point you can switch to them super quickly, and you can also set it to automatic

  • mode so it’s going to just switch around on its own.

  • So. Great job Kamil. Round of applause.
