Google I/O 2014 - Cardboard: VR for Android

  • CHRISTIAN PLAGEMANN: Good morning, everyone,

  • to this morning's session on Cardboard,

  • virtual reality for Android.

  • I've got with me here David Coz.

  • He's a software engineer at Google's Cultural Institute

  • in Paris.

  • My name is Christian Plagemann.

  • I'm a research scientist and team

  • lead for physical interaction research here in Mountain View.

  • And we have Boris Smus, who's a prototyper, software engineer,

  • and researcher, also in our group in Mountain View.

  • So this talk is about a piece of cardboard.

  • Or let's say what it actually is about,

  • is about some surprising things that your phone

  • can do when you just add a little bit of extra equipment.

  • And those of you who were here yesterday,

  • after the keynote and in the afternoon,

  • know, of course, exactly what I'm talking about.

  • For those who are watching remotely,

  • let me just replay a few moments from yesterday.

  • [VIDEO PLAYBACK]

  • -We're going to hand each and every one of you a cardboard.

  • -Here you go.

  • -There's a piece of adhesive tape.

  • [CHATTER]

  • -Wow, that's great.

  • -Oh, dang.

  • -Oh, wow.

  • -Wow.

  • -Oh, fancy.

  • -I think I love this.

  • -Wow, that's crazy.

  • -Oh my god!

  • -Oh, this is incredible.

  • -Oh, that's insane.

  • -Oh, cool.

  • [GASP]

  • -Wow.

  • That's awesome.

  • -Cool.

  • That's fantastic.

  • --[GASP] It says "Fly to Space," when you look up into the sky.

  • [LAUGHTER]

  • -This is so [BLEEP] clever.

  • [LAUGHTER]

  • [END VIDEO PLAYBACK]

  • [APPLAUSE]

  • CHRISTIAN PLAGEMANN: OK.

  • Here goes my clicker.

  • So what you got yesterday, all of you,

  • were these simple cardboard boxes.

  • And what that basically does is you insert your phone into it,

  • and it kind of turns your phone into a very basic

  • virtual reality headset.

  • And what you can do with that is you

  • can look around and experience content in another place,

  • whether it's real or artificial, in a totally different way.

  • Like not on a small postcard size,

  • but actually all around you.

  • And we are making the hardware design

  • and an open-source software toolkit

  • openly available to everyone, because we

  • want people to get started.

  • There's a lot of development in this area already,

  • and we want to really kickstart a lot more of that.

  • And that's why we want to get it into people's hands.

  • And just to add a few other surprising things that

  • happened yesterday, so just two hours or three hours

  • after we posted our designs on the website,

  • somebody actually came up already

  • with their own construction of it.

  • It's a little bit rough around the edges.

  • [LAUGHTER]

  • But wait another two hours, and on the first online store,

  • you could actually order Google's Cardboard VR Toolkit.

  • [LAUGHTER AND APPLAUSE]

  • And we actually express ordered one yesterday.

  • So we kind of ran out, so maybe we

  • can get some more to give out at the conference,

  • but they haven't arrived yet.

  • We got the lenses that people can order on the store

  • through our friends at Durovis Dive,

  • and those actually all sold out in 20 minutes.

  • So this is really what we had hoped for,

  • just 100 times faster than we kind of anticipated.

  • And the goal of this talk today, which

  • is obviously brief for a big topic like this,

  • is we want to give you a rough overview of how

  • virtual reality on a mobile phone works in principle.

  • We want to explain to you how this project came about

  • and why we started it.

  • It was really like a side project.

  • And most importantly for you as developers,

  • we want to walk you through our open source software

  • toolkit and a sample app, so that you can start hacking

  • right away.

  • And with this, I hand over to David Coz.

  • He actually started this as a side project

  • six months ago in Paris.

  • This is David.

  • DAVID COZ: Thank you, Christian.

  • [APPLAUSE]

  • Hi, everybody.

  • So the first thing to know is that I'm a big VR fan.

  • I just love, you know, the sense of immersion

  • it gives, the fact that you can just look around you

  • in a very natural way at all this content, be it

  • captured content or pure virtual reality content.

  • And there's been so much progress

  • in the space in the past few years.

  • We have, obviously, Oculus Rift doing a great job

  • at putting VR back into the media

  • attention with an awesome device.

  • And also, a trend of mobile phone-based viewers,

  • like the Durovis Dive or other do-it-yourself projects.

  • And with my friend Damien in Paris,

  • about six months ago, we were really enthusiastic

  • about this progress, and we're thinking,

  • how can we accelerate this even more,

  • so that not just tech addicts can experience VR,

  • but you know, anybody with a smartphone?

  • So our idea was really to build the simplest and cheapest way

  • of turning your smartphone into a VR viewer.

  • And that's how we-- we started to collaborate

  • about six months ago in the Cultural Institute Lab

  • at our office in Paris, which is,

  • you know, a lab space where engineers and artists,

  • thinkers, can collaborate on projects.

  • And we started to really naturally try it

  • with cardboard, because it's so easy.

  • And we, you know, cut some cardboard into a box,

  • and quickly came up with this really rough design.

  • It looks a bit like the one you saw earlier with Christian.

  • That really worked well already, and I

  • decided to work on this as 20% project, which

  • is a side project you can do at Google.

  • And we used the laser cutter to build the first batch

  • of almost identical goggles that we showed around at the office.

  • People got really excited, so I decided to take this with me

  • to our main office here in Mountain View.

  • And that's how I met Christian and his team.

  • And they got really excited and enthusiastic

  • about this product, and together we

  • decided to take it a few steps further,

  • in terms of hardware design-- also, the software

  • toolkit and the experiences.

  • And now actually to run some demos,

  • I'm going to ask Boris to build a headset for us.

  • So Boris, everyone is watching you.

  • You have 30 seconds.

  • No pressure.

  • So while Boris is assembling the viewer,

  • let me emphasize why we actually chose cardboard

  • and why we actually stuck with it.

  • The first reason is we wanted the viewer

  • to feel and look really simple.

  • Everybody's familiar with cardboard.

  • And the idea there is that your phone is really

  • the piece that does the magic here.

  • Your phones are so powerful, it was just

  • great to add this simple viewer on top of it.

  • The second reason is we want you guys to just take

  • scissors, staplers, and just go crazy at it and modify it.

  • And it's just so natural and easy with cardboard.

  • We put the plans online, if you go to our main website.

  • And it's open for grabs.

  • Just go crazy at it and send us your modifications.

  • We'll be really happy to see them.

  • So Boris, a bit slower than the rehearsal.

  • Yeah.

  • OK, great job, Boris.

  • [APPLAUSE]

  • So now that we have this viewer, the question is,

  • what can we do with this?

  • And the first thing that we got really excited about

  • is the possibility to just go anywhere

  • on Earth, leveraging, of course, the geodata we have at Google.

  • So we built this experience with Google Earth,

  • using geodata that we have.

  • And Boris is going to take you on a tour through this.

  • So you can see here the kind of traditional VR screen settings,

  • with two screens, one for each eye.

  • And you can see how the display adapts

  • to the head movements in a really reactive way.

  • Look at how Boris is moving, and this is adapting quite fast.

  • So your phones are doing quite a good job at this, actually.

  • And then using the magnet clicker on the side

  • that we added, Boris can start to fly around the mountains.

  • And it's one thing-- I guess you guys will try it.

  • It's one thing to see it like this.

  • It's totally another thing to try it,

  • because you have this nice 3D effect.

  • And of course, here the cables are just for screen mirroring.

  • Otherwise, you can just walk everywhere.

  • And Boris can also start to look up and fly to space.

  • And from there, you can access a selection of nice places

  • that we selected for you guys-- and precached, actually,

  • so that we don't have Wi-Fi problems at the I/O.

  • And now he just teleports himself to Chicago

  • and can look around.

  • So we really like this idea of virtually visiting any place.

  • And not only can you visit a place,

  • but you can also use VR to learn more about it.

  • And that was really important for us,

  • for my team in the Cultural Institute in Paris.

  • We wanted to build this tour where you can, like,

  • take people, take even kids, to learn more about these places.

  • So we used the Street View data that we

  • acquired for this really iconic place.

  • This is the Palace of Versailles in France.

  • And you have a charming local guide

  • that tells you stories about this place,

  • and you can just really look around.

  • It's a very nice way of discovering content.

  • Another kind of interesting thing we tried

  • is this YouTube integration, where basically, it's

  • a massive-- yes, it's flickering a bit.

  • Yeah.

  • It's a massive, giant home theater room

  • where you have this big screen in front of your eyes

  • so you can play videos.

  • And imagine it's like having a giant screen just, like,

  • two feet from your face.

  • And we also kind of made use of, you know,

  • all this space that you have around you.

  • So it's a really interesting use of VR,

  • where we put video thumbnails, and by looking

  • at them-- whoops, something went down, I think.

  • So by looking at them and using the magnet clicker,

  • you can very naturally select a video and play it.

  • It's a bit shaky.

  • So we're really excited about this kind of integration.

  • We think we can-- we have lots of products

  • at Google, lots of products.

  • We can use VR to kind of enhance existing products.

  • And one similar such integration we're really proud about

  • is Google Maps integration.

  • There's actually, right now, as of today--

  • we have a VR mode for Street View.

  • There's something wrong with the cables.

  • So you can basically go anywhere where

  • we have Street View, pull up the Street View data,

  • and then you double tap.

  • You double-tap on the arrow icon,

  • and you have a side-by-side screen

  • rendering of Street View.

  • So you can put this on your Cardboard device

  • and get a really good feel for the place.

  • And this actually works on any other phone-based VR viewer.

  • So Durovis Dive, Altergaze, any viewer.

  • So we're really happy about this.

  • It's not just a demo anymore.

  • It's actually in a live product.

  • Another thing, as I guess you noticed,

  • these were all Android-based applications.

  • But you can also very simply, very easily

  • build similar applications with a browser.

  • And we have this set of Chrome Experiments

  • that you can find on our website.

  • And this is just using JavaScript and HTML5.

  • It's very easy with libraries like Three.js.

  • So I encourage you to go to our website and check it out.

  • The source is really small and nice.

  • And it has nice music.

  • Here we don't have sound, but you have nice, catchy music,

  • so try it.

  • So we have a couple of more demos.

  • We don't have really all the time

  • in the world to present them.

  • I wanted to emphasize two things.

  • The first thing is the Photo Sphere demo is really cool.

  • You take your Photo Sphere and you can really naturally look

  • around at a place you visited before.

  • I think it totally makes sense with Photo Sphere.

  • The second thing is Windy Day.

  • They actually have a booth right there

  • at I/O. I encourage you to check it out.

  • They produced this really nice storytelling movie

  • in a 3D space, and you can just look around,

  • and depending on where you're looking,

  • the story progresses.

  • So yeah, again, we really want to get

  • you inspired by all these experiences and demos.

  • You can go on g.co/cardboard and learn more about them,

  • get the plans for this viewer.

  • And more importantly, we really want

  • you to build on top of what we provide,

  • in terms of the viewer, the toolkit, and the demos.

  • And Christian is going to tell you more about how you do this.

  • [APPLAUSE]

  • CHRISTIAN PLAGEMANN: So there's really one important thing

  • we want to emphasize, which is this is just a placeholder.

  • Like, this is a piece of cardboard.

  • I think TechCrunch wrote this very nicely yesterday.

  • Is that an Oculus?

  • Of course not.

  • It's cardboard.

  • But we think it can get you started, like, fairly quickly.

  • So that's the point of it.

  • And kind of the principles behind VR

  • are pretty uniformly the same.

  • And I just want to spend a few minutes on describing those

  • to you, before Boris goes into the coding part.

  • So what do you need to actually do

  • to turn a phone into a VR headset?

  • I want to describe three aspects that we put into this viewer.

  • The first one, obviously, is the optics.

  • That's kind of one of the crucial components.

  • The other one is you want to interact with the experience,

  • so we built this little clicker.

  • And the third is the NFC tag, which is a neat little idea that lets a phone

  • go automatically from a phone experience into a VR experience

  • and back, like, seamlessly.

  • So how does it work?

  • So imagine you rip off the top, which you can easily

  • do-- you just need to build another one then-- and you basically

  • see that there are two holes, and then at a certain distance,

  • there's the screen.

  • So what happens is, if the user brings the device really

  • close to the eyes, your eyes look through these holes

  • and you have a fairly wide field of view through them.

  • And then what happens next is when you put lenses

  • into these holes, that basically concentrates

  • your wide field of view onto a small image area on the screen,

  • like for the left eye and for the right eye.

  • And "all," in quotation marks, that you

  • have to do as developers is render

  • images on these small areas, left and right, that

  • make your brain believe you're in a different place.

  • So that has to be very fast, accurate, as high-resolution

  • as possible.

  • It has to be synchronized well with your head motion.

  • So let's look a little bit closer

  • into this rendering pipeline.

  • So imagine you want to put the user into a virtual space.

  • One of the critical things you need to know, of course,

  • is, where is the user looking?

  • And what we can do is, the mobile phones, the recent ones

  • of the last few years, actually give you very nice

  • orientation tracking.

  • They have [INAUDIBLE].

  • And Android provides you-- and all kinds

  • of other mobile operating systems, too,

  • or you write your own libraries-- provide you

  • a way of integrating all of these readings over time

  • and give you a relatively fast stream of 3D orientation

  • estimates, like the [INAUDIBLE] of where the device is headed.
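
As a rough illustration of that orientation stream, here is a minimal sketch using only the standard Android sensor APIs (the toolkit ships its own head tracker, so this is purely to show the idea; the class and method names of the sketch itself are hypothetical):

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    /** Minimal sketch: a stream of 3D orientation estimates from the fused rotation-vector sensor. */
    public class HeadOrientationSketch implements SensorEventListener {
        private final float[] headRotation = new float[16]; // 4x4 rotation matrix

        public void start(SensorManager sensorManager) {
            Sensor rotationVector = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
            sensorManager.registerListener(this, rotationVector, SensorManager.SENSOR_DELAY_GAME);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
                // The system already fuses gyroscope, accelerometer, and magnetometer readings.
                SensorManager.getRotationMatrixFromVector(headRotation, event.values);
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {}

        public float[] getHeadRotation() {
            return headRotation;
        }
    }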

  • So if you have those, you can pick your favorite kind

  • of 3D environment, like library, like OpenGL, for example,

  • and you can define the center of this virtual universe

  • as the head of the user.

  • And then you basically put two cameras,

  • for the left eye and the right eye, at eye distance

  • on a virtual rig, and you orient these cameras

  • based on these orientation estimates from the phone.

  • And then you do just the regular rendering

  • that you do anyways for any kind of 3D application.

  • You render one for the left, one for the right eye,

  • and you get these streams of images out.

  • And obviously, you want to adjust

  • the field of view of the camera and kind of the general set-up

  • to match the geometry of your viewer.
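
A minimal sketch of that two-camera rig, using android.opengl.Matrix; the 0.06 m eye separation is an assumed average, the sign and transpose conventions depend on how you obtain the head rotation, and the toolkit handles all of this for you:

    import android.opengl.Matrix;

    /** Sketch: derive per-eye view matrices from a head rotation matrix. */
    public final class StereoRigSketch {
        private static final float EYE_SEPARATION_M = 0.06f; // assumed average interpupillary distance

        /** headRotation: 4x4 rotation matrix, e.g. from the orientation sketch above. */
        public static void eyeViewMatrices(float[] headRotation,
                                           float[] leftEyeView, float[] rightEyeView) {
            float[] base = new float[16];
            float[] oriented = new float[16];

            // The rig sits at the origin (the user's head), looking down -Z with Y up.
            Matrix.setLookAtM(base, 0, 0f, 0f, 0f, 0f, 0f, -1f, 0f, 1f, 0f);

            // Rotate the whole rig with the head orientation so the cameras follow the phone
            // (depending on your conventions you may need the transpose of the rotation here).
            Matrix.multiplyMM(oriented, 0, base, 0, headRotation, 0);

            // Offset each camera by half the eye separation along the rig's X axis.
            Matrix.translateM(leftEyeView, 0, oriented, 0, -EYE_SEPARATION_M / 2f, 0f, 0f);
            Matrix.translateM(rightEyeView, 0, oriented, 0, EYE_SEPARATION_M / 2f, 0f, 0f);
        }
    }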

  • One last thing that you need to do before you put that actually

  • onto the screen is you have to adjust for lens distortion.

  • So because this lens kind of widens your field of view,

  • you get this distortion effect, which

  • is called the pincushion distortion.

  • Which basically means that outer-image areas

  • are enlarged more than central ones.

  • Luckily, what you can do is you can come up

  • with a model for this.

  • You can kind of fit a function either analytically,

  • because you know how this device is set up

  • and what the lenses are, or you just measure that.

  • Using a camera and a calibration pattern,

  • you can fit a function to this, and then you

  • can invert this function.

  • So this means if you write a shader that basically takes

  • a regularly-rendered image and distorts it with this-- what

  • is called barrel distortion-- this

  • has the effect that if you put that through a lens,

  • the undistorted image arrives at your eyes.

  • And that's how you basically render this entire thing.
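
The radial model described here can be sketched as a simple polynomial; the coefficients below are placeholders for illustration, not the actual Cardboard lens values, and in practice the correction runs in a shader over the rendered image:

    /** Sketch: radial (barrel) pre-distortion that the lens's pincushion effect then undoes. */
    public final class DistortionSketch {
        // Placeholder coefficients; real values come from the lens model or a calibration fit.
        private static final float K1 = 0.22f;
        private static final float K2 = 0.26f;

        /** Maps a radius from the lens center (normalized units) to the radius to render at. */
        public static float distortRadius(float r) {
            float r2 = r * r;
            return r * (1f + K1 * r2 + K2 * r2 * r2);
        }
    }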

  • The next aspect I want to talk about

  • is you have these images on your eye,

  • and they kind of track your head motion,

  • but how do you interact with them?

  • You've kind of hidden the touchscreen in this viewer,

  • and you don't necessarily want to stick your fingers in

  • and kind of obstruct your view.

  • So we kind of had this neat little idea

  • of putting magnets inside.

  • There's a fixed magnet on the inside,

  • and there's this movable magnetic ring

  • on the outside, which if you move this magnet,

  • you can basically detect the change of magnetic field

  • with a magnetometer.

  • So kind of what drives the compass.

  • And I want to show you real quick

  • how beautiful these signals are.

  • We were kind of surprised how clean they are.

  • So what you see here is somebody--

  • you see three lines, which are the X, Y, and Z directions,

  • like three coordinates of the magnetometer, which

  • kind of tells you how the magnetic field is oriented.

  • And you see when I move this magnet, what kind

  • of beautiful signals we get.

  • And so you can write a very simple classifier that picks up

  • the up and down and translates that into an input event.
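
A very crude version of such a classifier, sketched on top of the standard Android sensor APIs; the threshold and smoothing constants are invented for illustration, and the toolkit's own detector, which looks at the full pull-and-release shape, is more robust:

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;

    /** Sketch: detect a magnet pull as a sharp spike in magnetic field strength. */
    public class MagnetPullSketch implements SensorEventListener {
        public interface Listener { void onMagnetPull(); }

        private static final float THRESHOLD_UT = 80f; // microtesla; illustrative value
        private final Listener listener;
        private float baseline = Float.NaN;

        public MagnetPullSketch(Listener listener) { this.listener = listener; }

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_MAGNETIC_FIELD) return;
            float x = event.values[0], y = event.values[1], z = event.values[2];
            float magnitude = (float) Math.sqrt(x * x + y * y + z * z);
            if (Float.isNaN(baseline)) baseline = magnitude;

            if (magnitude - baseline > THRESHOLD_UT) {
                listener.onMagnetPull();   // sharp spike relative to the resting field
                baseline = magnitude;      // don't fire repeatedly on the same pull
            } else {
                // Track the resting field slowly so ordinary head motion doesn't trigger clicks.
                baseline = 0.95f * baseline + 0.05f * magnitude;
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {}
    }

You would register this with SensorManager for Sensor.TYPE_MAGNETIC_FIELD, for example at SENSOR_DELAY_GAME.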

  • The downside of this, of course, is you're losing the compass.

  • So inside this experience, we don't know, anymore,

  • where kind of north is.

  • Because we always think, oh, north is where the magnet is.

  • But we figured, in virtual environments,

  • it's much more important that you have some way of interacting.

  • Like flying, not flying, starting/stopping a video,

  • opening up a box, or whatever you want to do.

  • So that's a trade-off, and you're

  • completely free to come up with other awesome ideas

  • to interact.

  • The last thing I want to mention is the NFC tag.

  • So we have just a regular NFC tag--

  • we used a fairly big one so that it

  • fits different phones-- where we put a special URL on it

  • that our framework handles, that basically tells the phone, hey,

  • I'm now in the 3D viewer.

  • Your app wants to maybe switch into an immersive mode.

  • So the long-term vision for this is,

  • for example, you have a YouTube app

  • and you're watching some kind of talk or a sports game.

  • It's nice, but when you put it into the viewer,

  • you have the exact same sports game, but you can look around.

  • And you can hop from, like, player to player,

  • or all these kind of things.

  • So you can think about all kinds of applications

  • that can go seamlessly back and forth and in and out.

  • That's where kind of also this mobile concept

  • is a bit different from these really head-mounted concepts.

  • But with a stapler, you can, of course,

  • transform this into head-mounted.

  • And this is all nice.

  • It's not that super difficult to do

  • this kind of rendering end to end.

  • But we wanted to make it as easy as possible,

  • so we packaged, basically, the hard pieces up

  • into one open-source library, which

  • you can find on our website.

  • And Boris will walk you through kind of the APIs

  • and how you can start coding.

  • [APPLAUSE]

  • BORIS SMUS: Thanks, Christian.

  • Check, check, check, check, check.

  • All right.

  • So as you saw, we demoed a bunch of things

  • at the beginning of the talk.

  • Some of them were written in Android,

  • but the coin collector example was actually written in Chrome

  • using Three.js, which is a lightweight wrapper

  • around WebGL to make developing 3D

  • applications for the web super easy.

  • And we used, also, JavaScript events

  • to track the position of the user's head.

  • So it's an easy way to get started.

  • But of course, with Android, we can get a little bit

  • closer to the metal.

  • So the toolkit that we are open-sourcing today

  • is an Android-based VR toolkit, as Christian introduced,

  • designed to make developing VR applications as

  • easy as possible.

  • So today, if you go to g.co/cardboard,

  • what you get is the soon-to-be open-sourced--

  • we're working on a couple of tweaks-- VR toolkit,

  • so this is just a JAR file, and a sample that we'll be talking

  • about more throughout this session.

  • So what is the sample we're going to be building?

  • It is a treasure hunt application.

  • So the idea here is there's an object hidden somewhere

  • inside your field of view around you,

  • and your goal is to find it and collect it.

  • The way you collect it is, of course, through the magnet,

  • and the more objects you get, the more points you get.

  • Very simple.

  • So we'll start with our favorite IDE.

  • Get the JAR from the website.

  • Put it in as a regular dependency.

  • And the first thing we do is create our own custom activity.

  • But instead of extending from the default Android activity,

  • we'll extend from the Cardboard activity,

  • which the toolkit provides.

  • This does a few things.

  • Firstly, the screen is forced to stay on.

  • The screen is placed in landscape mode,

  • and we hide all of the UI on the top

  • and on the bottom of the phone screen,

  • typically the navigation bar and the notification bar.

  • We also disable the volume keys, because often you

  • might accidentally trigger them.

  • These are, of course, just defaults and can be overridden,

  • but it's a good starting point.
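
A minimal sketch of that first step, assuming the toolkit package name used by the JAR (com.google.vrtoolkit.cardboard) and the class name given in the talk:

    import android.os.Bundle;
    import com.google.vrtoolkit.cardboard.CardboardActivity;

    public class TreasureHuntActivity extends CardboardActivity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Nothing extra needed yet: the base class keeps the screen on, forces landscape,
            // hides the navigation and notification bars, and disables the volume keys by default.
        }
    }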

  • Still, we have nothing on the screen for now.

  • So what we'll do is we'll create an instance

  • of the CardboardView, which is a view that

  • extends from GLSurfaceView, which

  • is the standard way that you render 3D content for Android.

  • The way we'll do this is in our activity,

  • we override the onCreate method with a series of simple steps.

  • Here we find the view in our layout hierarchy.

  • Next, we tell the Cardboard activity

  • that we should be using this particular view,

  • set the renderer, and we're good to go.
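
Those steps, sketched inside the activity's onCreate; the layout and view IDs (R.layout.common_ui, R.id.cardboard_view) are hypothetical, TreasureHuntRenderer is the renderer built below, and the setter names follow the talk:

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.common_ui);

        // CardboardView extends GLSurfaceView, the standard way to render 3D content on Android.
        CardboardView cardboardView = (CardboardView) findViewById(R.id.cardboard_view);

        // Hand the view our renderer and tell the Cardboard activity to manage this view.
        cardboardView.setRenderer(new TreasureHuntRenderer());
        setCardboardView(cardboardView);
    }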

  • So before I go too deep into the renderer itself,

  • let me quickly recap what Christian just told you

  • about the rendering pipeline.

  • So there's four basic steps.

  • First, we need to estimate eye positions,

  • to figure out where to place the two

  • virtual cameras in our scene.

  • Next, we render the scene.

  • So we take the world space and project it

  • into eye space for both eyes.

  • Then each eye needs to have its lens distortion corrected.

  • And finally, we place the two displays side

  • by side in landscape stereo mode.

  • As far as what the toolkit provides for you,

  • it's actually quite a lot.

  • So the first thing, we have a head tracker

  • that we're releasing.

  • This also includes an eye model.

  • So we have a basic distance estimation

  • between two average human eyes, so we

  • know where to place the cameras.

  • Once the scene is rendered, we correct for lens distortion

  • based on the optical properties of the Cardboard device.

  • And finally, we place the images side by side,

  • leaving you only the rendering.

  • The second step is the only thing you need to worry about.

  • So to do that, we make a custom renderer,

  • the TreasureHuntRenderer here, and you

  • can see it implements an interface

  • that our framework provides.

  • And there's two methods you need to care about here.

  • The first one is onNewFrame.

  • This gets called every time a frame is rendered,

  • and it takes in the HeadTransform, which

  • is roughly the heading of where the user is looking.

  • So here we'll save it for later, because we're

  • going to need to know if the user's looking at the treasure

  • or not.

  • The other thing we can do in onNewFrame is update the model.

  • So typically there's an update/draw cycle

  • in many graphics applications.

  • So we'll just increment a time count here.
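
A sketch of that onNewFrame, assuming a HeadTransform accessor along the lines of getForwardVector (the exact method name and signature are assumptions):

    private final float[] forwardVector = new float[3];
    private float timeSeconds = 0f;

    @Override
    public void onNewFrame(HeadTransform headTransform) {
        // Save the gaze direction so the trigger handler can test whether
        // the user is looking at the treasure.
        headTransform.getForwardVector(forwardVector, 0);

        // The "update" half of the update/draw cycle: advance animation state.
        timeSeconds += 0.016f; // roughly one frame at 60 Hz (assumed)
    }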

  • So that's the first method.

  • The next is we need to implement onDrawEye.

  • And this gets called twice for each frame,

  • once for the left eye and once for the right.

  • And, of course, we pass in the position

  • of each eye, or rather the eye we're currently rendering,

  • which includes a bunch of parameters

  • that you need to draw a scene.

  • So as you expect in a GL application,

  • first we set up our matrices.

  • So we go from View and from Perspective

  • to Model-View-Projection matrix, and then we draw the scene.
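
A sketch of that onDrawEye, using android.opengl.GLES20 and android.opengl.Matrix; the eye parameter's accessors (getEyeView, getPerspective) and the drawCube helper are assumptions used for illustration:

    private final float[] modelCube = new float[16]; // the treasure's model matrix
    private final float[] modelView = new float[16];
    private final float[] mvp = new float[16];

    @Override
    public void onDrawEye(EyeTransform eye) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

        // Model-View: this eye's view (head orientation plus the per-eye offset) times the model.
        Matrix.multiplyMM(modelView, 0, eye.getEyeView(), 0, modelCube, 0);
        // Model-View-Projection: apply this eye's perspective projection.
        Matrix.multiplyMM(mvp, 0, eye.getPerspective(), 0, modelView, 0);

        drawCube(mvp); // hypothetical helper that issues the actual GL draw calls
    }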

  • Now it's important that we do this operation

  • as quickly as possible, because introducing any latency

  • into the system can have undesirable effects, let's say.

  • I'll go more into this later.

  • So at this point, we've rendered our scene.

  • Here you can see our treasure, which is a cube.

  • And it's, you know, it's there.

  • But of course, once we pass it on to the framework,

  • we take the scene, transform it by the head orientation,

  • by each eye-- so each eye has a separate camera--

  • and we apply the lens distortion and place the results side

  • by side.

  • So we have our treasure, teasing us,

  • but so far, we haven't figured out how to collect it.

  • It's very simple.

  • So as Christian mentioned, we have these magnetometer traces.

  • And we've provided a way to detect

  • this pull-and-release gesture very easily.

  • All you need to do is, in the activity,

  • implement onCardboardTrigger.

  • And for our application here, we can

  • see the code is straightforward.

  • If the user's looking at the treasure, then we pick it up.

  • Otherwise, we tell them some instructions.
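
A sketch of that trigger handler; isLookingAtTreasure, hideTreasure, and showMessage are hypothetical helpers standing in for the sample's own logic:

    private int score = 0;

    @Override
    public void onCardboardTrigger() {
        if (isLookingAtTreasure()) {
            score++;
            hideTreasure();                       // respawn the object somewhere else
            showMessage("Found it! Score: " + score);
        } else {
            showMessage("Look around to find the treasure, then pull the magnet.");
        }
    }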

  • But lastly, we want to somehow emphasize--

  • or I want to emphasize-- that we want

  • to provide user feedback here, since we want

  • to know if the magnet pull was actually

  • detected by the framework.

  • And a good way to provide this feedback

  • is through a vibration-- but make

  • sure it's a short one, because long vibrations with the phone

  • so close to your face can be a little bit jarring--

  • or a quick AlphaBlend, just a transparency blend.
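
For example, a short buzz via the standard Vibrator service (this needs the VIBRATE permission in the manifest) could be called from onCardboardTrigger:

    import android.content.Context;
    import android.os.Vibrator;

    // Inside the activity:
    private void confirmTrigger() {
        Vibrator vibrator = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);
        if (vibrator != null) {
            vibrator.vibrate(50); // milliseconds; keep it short with the phone this close to the face
        }
    }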

  • OK, the last piece of input is the NFC tag.

  • And as we mentioned earlier, it's

  • an easy way to make these amphibious applications that

  • can adjust to being inserted or removed from the VR enclosure.

  • So in our example, for the purpose of this illustration,

  • we may want to make our treasure hunt app also

  • work when you take it out of Cardboard.

  • In that case, we want to present not a stereo view,

  • but a single view.

  • So the toolkit provides this very easy flag

  • that you can toggle, which is VRModeEnabled.

  • And all you do is you flip that flag

  • and go between the stereo and regular modes.
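
In code, that toggle is a one-liner on the view (the setter spelling below is an assumption based on the flag name given in the talk):

    // true: side-by-side stereo for the viewer; false: a single full-screen view in the hand.
    private void setInViewer(boolean inViewer) {
        cardboardView.setVRModeEnabled(inViewer);
    }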

  • Of course, the NFC tag has something encoded in it.

  • So you don't need to use our framework to detect it.

  • You can just use a regular intent

  • filter, as you would in an Android application,

  • to know if the tag has been scanned or not.
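
For instance, with a plain NDEF intent filter registered in the manifest for the tag's URI (not shown here, since the exact URI written on the tag isn't given in the talk), the activity could react like this:

    import android.content.Intent;
    import android.nfc.NfcAdapter;

    @Override
    protected void onNewIntent(Intent intent) {
        super.onNewIntent(intent);
        if (NfcAdapter.ACTION_NDEF_DISCOVERED.equals(intent.getAction())) {
            // The phone was just placed into the viewer: switch to stereo rendering.
            cardboardView.setVRModeEnabled(true);
        }
    }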

  • Now another benefit of having this tag in the device

  • is that if you want to make hardware modifications-- which

  • we hope you do-- then you can encode

  • these changes inside the NFC.

  • So for example, you want to increase

  • the field of view of the lenses.

  • You can change the parameters of the distortion

  • that you need to apply.

  • Or you want to increase the interpupillary distance, which

  • is the technical term for the distance between your eyes.

  • You can also encode that in the NFC tag.

  • And this way the toolkit can provide the best

  • possible rendering even for your modified device.

  • OK, so we've talked a bit about using the framework.

  • And we've used this framework extensively

  • over the course of the last couple months.

  • So what we've learned, though, is that quite obviously,

  • building an Android application is

  • very different from building a VR application.

  • So I want to leave you with three principles which we've

  • found from this development process.

  • The first one is very well-known in the VR community,

  • and it is to keep physical motion and visual motion

  • closely coupled.

  • The idea is that the brain has two systems, roughly speaking,

  • to detect motion.

  • One is the vestibular, which is sort of your sense of balance,

  • and the other is your visual, which

  • is, obviously, your eyesight.

  • And if the systems are not in sync,

  • then your illusion of reality in a VR environment can go away,

  • and you can even get motion sick.

  • So we've found that it helps, for example,

  • to relieve this problem, to place

  • the user in a fixed position.

  • Let them look around.

  • Or if you want to create motion, then

  • make them move forward at a constant velocity

  • in the direction of view.

  • There are probably other ways of doing this,

  • but these two things, we've found,

  • make it a pretty good experience that's not very sickening.

  • So the other thing is you need to keep latency down,

  • as I mentioned in the onDrawEye discussion,

  • because having high latency really

  • takes away from this illusion.

  • The second idea is to keep the user's head free.

  • So the YouTube VR adaptation that you saw

  • showed the user positioned inside a movie theater,

  • as opposed to placing the movies strictly in front of their eyes

  • and sticking to them when they move.

  • So the idea here is that we want to create

  • an immersive environment, and this, in practice,

  • seems to work a lot better than just putting things

  • in screen space.

  • Lastly, really take advantage of the giant field of view

  • that Cardboard gives you.

  • And essentially what we're giving you

  • is an 80-degree FOV right off the bat, which is like being a foot

  • or two away from a 50-inch TV screen.

  • Not only that, you can look around in any direction.

  • So you have this infinite virtual screen around you.

  • And it really helps to take advantage of this.

  • So I want to emphasize that it's still

  • very early days for virtual reality.

  • And our toolkit is just the beginning.

  • It's just the tip of the iceberg,

  • and there's a lot more to explore.

  • Obviously smartphones have tons of additional sensors

  • that we haven't tapped into.

  • There's a microphone present.

  • There's an accelerometer.

  • I've seen demos where you can detect jumps.

  • You can, of course, interface with accessories

  • like game pads and steering wheels, et cetera.

  • And also, I'm particularly excited about combining this 3D

  • immersive video experience with a sound experience, as well.

  • So imagine wearing a pair of stereo headphones, in addition.

  • So of course, also, we have a camera in the device.

  • So I want to call David up to show us

  • a demo of something that uses the camera, as well.

  • So David is going to fire up a demonstration in which he

  • can manipulate a virtual object in front

  • of him using the camera inside of the phone.

  • So this is using the Qualcomm Vuforia library,

  • which is a pretty advanced object--

  • DAVID COZ: Whoops.

  • BORIS SMUS: A pretty advanced object-recognition library.

  • And we've augmented our demo.

  • DAVID COZ: Sorry for that.

  • BORIS SMUS: No problem.

  • So anyway, the idea here is David

  • will have-- actually, your Cardboard comes with a QR

  • code, which this demo will be showing you.

  • So essentially, when you look at the QR code,

  • you can manipulate this piece of cardboard in front of you.

  • So give us a second here as we try to get it going.

  • So as you can see, it's just a slightly modified version

  • of the app.

  • So you can see, as I move this piece of cardboard,

  • the Vuforia library is actually tracking the marker.

  • DAVID COZ: And I can still look around in a very natural way.

  • BORIS SMUS: So we can combine-- by just

  • having a simple marker and a camera,

  • we can get this really cool effect.

  • So imagine if you were to combine our technology

  • with a Tango device, which gives you

  • six degrees of freedom tracking in almost any environment.

  • Great.

  • Thanks, David.

  • So with that, definitely check out our website,

  • where you can download the toolkit itself.

  • It'll be open source soon.

  • We have docs.

  • We have a tutorial for you to get

  • started that goes through this sample.

  • Also, tons of samples, including Chrome ones,

  • on chromeexperiments.com, and Android ones, the ones

  • that we showed you today, which are available in the Play Store.

  • And if you want to hack on the physical model,

  • then we have all the plans there for you.

  • So we can't wait to see what you come up with.

  • Thank you all for listening, and we'll

  • take about five minutes of questions.

  • [APPLAUSE]

  • CHRISTIAN PLAGEMANN: So for the questions,

  • if you could walk up to the microphones, that'd

  • probably be easiest.

  • BORIS SMUS: Yes, go ahead.

  • AUDIENCE: Hi.

  • Oh, this is so cool.

  • I really love it.

  • I just had a quick question for you.

  • I hope you can hear me.

  • The trigger that you had, you seem

  • to have kind of like tied it to the magnetic thing.

  • So I was wondering, is it possible to do it

  • through double-clicks and things just

  • by using the one waveform you have?

  • Or does that need the framework to actually support

  • those methods?

  • BORIS SMUS: Right, so thank you for the question.

  • The question was about supporting double-clicks

  • using the magnet clicker.

  • And the answer is we can do a lot.

  • The problem is right now we're using

  • a calibrated magnetometer, which is the thing that's

  • used mostly for the compass.

  • And the thing with the calibrated magnetometer

  • is it calibrates every so often, which is this event,

  • and it drastically messes up the signals.

  • So a lot of modern phones-- I think the Nexus 4 and 5--

  • have an uncalibrated magnetometer, which you can use

  • and it does not have this calibration event.

  • So with that out of the equation, we can do a lot more.

  • We've even thought about, like, having a full joypad

  • on the side of the cardboard device,

  • with the magnet able to move in any direction.

  • So this is something we can do once we've

  • switched to-- once enough phones, I guess,

  • have uncalibrated magnetometers.

  • AUDIENCE: Can you elaborate-- is there

  • any way for users to readily generate pictorial content?

  • For instance, if you're taking a vacation

  • and you want to capture, what do you do?

  • CHRISTIAN PLAGEMANN: Yeah.

  • So one thing we haven't demoed right here,

  • but you should come up to the Sandbox and really try it out,

  • is the Photo Sphere viewer.

  • So there's actually an app that someone in our team

  • wrote that can actually show you photo spheres in this viewer.

  • So you can actually take those with your device,

  • and it actually takes the local files from your device,

  • and that works well.

  • And of course, you could come up with a picture viewer

  • just for regular pictures that are just very large.

  • And actually, the app has an intent filter already,

  • so you can just click on any photo sphere in the web,

  • for example, and it would launch that particular viewer,

  • or give you the option.

  • BORIS SMUS: Yes?

  • Go ahead.

  • AUDIENCE: Yeah, two questions real quick.

  • One is I know there's a lot of 3D content on YouTube.

  • Might this just become a good way

  • to view that stereoscopic content?

  • Right now, it looked like it was 2D content

  • in the examples you had.

  • Is that right?

  • DAVID COZ: Yep.

  • Yeah, so those were 2D videos, just regular videos.

  • But it's true that you have a lot

  • of side-by-side videos on YouTube.

  • That's actually how we kind of proved

  • that this concept was working, by putting up a YouTube

  • side-by-side video and getting the depth effect.

  • So yeah, the YouTube team might want to do this

  • in the future, but we cannot really comment on this,

  • I guess, now.

  • It's a bit too early.

  • AUDIENCE: OK.

  • CHRISTIAN PLAGEMANN: It's a natural thing.

  • Like it's very clearly possible, and there are actually

  • apps on the store that do this.

  • AUDIENCE: And the other question is

  • it seems like a compass would be really

  • great to have in these kinds of situations,

  • especially for navigation-type tools.

  • Might we see a future version of this

  • that just uses voice commands to navigate instead of the magnet

  • to click?

  • BORIS SMUS: So the question was, can we replace the magnet

  • with voice commands in order to free up the compass?

  • AUDIENCE: Right.

  • BORIS SMUS: I think-- so voice commands can

  • be a bit unwieldy for a quick action.

  • I mean, it's certainly something to explore.

  • I mean, there's certainly a microphone in the device.

  • So you can do so much more.

  • Like one of the things that we thought about

  • was combining a tap on the side, the accelerometer impulse,

  • with the microphone signature of a tap.

  • So you can imagine doing all sorts of different input modes

  • that would free up the magnet, or free up the magnetometer,

  • for the compass.

  • AUDIENCE: Cool, thanks.

  • BORIS SMUS: Yeah, there are many ways.

  • One particular reason why we didn't look at speech

  • was if we give it out to 6,000 I/O attendees

  • and everyone tries to speech-control it,

  • then it gets a bit loud.

  • AUDIENCE: Could you manipulate objects with your hands?

  • Have you tried that?

  • CHRISTIAN PLAGEMANN: Yeah, definitely.

  • AUDIENCE: I know you did the QR Code, but--

  • CHRISTIAN PLAGEMANN: I mean, take

  • a gesture-recognition library, like you can recognize

  • the finger, you could do these kind of things.

  • AUDIENCE: But is it possible with the kit

  • that you put out today?

  • CHRISTIAN PLAGEMANN: No, it's not in the kit already,

  • but there are many ways to do this.

  • It's again-- I mean, it's computer vision, right?

  • So you need to detect hands, need to detect gestures,

  • and it's a regular computer vision problem.

  • But actually, the cameras in these are really good.

  • BORIS SMUS: There's also some interesting prototypes

  • with the Oculus, for example, and a Leap attached to it,

  • where you get a full hand-tracker in front of you.

  • So you could look into that, as well.

  • AUDIENCE: So there's no [INAUDIBLE] reference.

  • How do you guys compensate?

  • Do you guys have any compensation for the drift?

  • Because over time, like when you're rotating sometimes,

  • and when you come back, it's not in the same location.

  • Does the experience--

  • DAVID COZ: So the question is about heading reference?

  • Like, yeah--

  • AUDIENCE: Yeah, because over time, your [INAUDIBLE] has--

  • DAVID COZ: Yeah, so it's true that right now, we

  • don't have any reference, because the compass is

  • kind of modified by the magnet.

  • So you can have this problem.

  • It depends a lot on the phones, actually,

  • on the kind of calibration that you have in your sensors.

  • It seems that on, for example, recent phones, like the Nexus 5,

  • it doesn't drift so much.

  • But on all the phones we tried, there

  • was kind of a significant drift.

  • So it's something that we want to work on in the future.

  • BORIS SMUS: Are you guys planning

  • on also working on position tracking

  • using just a 2D camera?

  • That would be a great Tango integration, right?

  • So Tango does 6D tracking, just with the reference

  • of the scene around you.

  • DAVID COZ: We actually built a Tango-compatible Cardboard

  • device.

  • It's very easy.

  • Like you just need to increase the width of the Cardboard.

  • CHRISTIAN PLAGEMANN: And actually, the 2D

  • camera itself can provide pretty good

  • drift compensation.

  • So you can actually-- I mean, you track features over time.

  • And then usually, these drifts are very, very slow,

  • so usually they accumulate over time.

  • Like these are usually not big drifts

  • that are kind of showing--

  • AUDIENCE: Yeah, it's just, like, depending

  • on how fast the camera can detect the features, then?

  • CHRISTIAN PLAGEMANN: Uh, sure.

  • Yes.

  • AUDIENCE: Thank you.

  • AUDIENCE: Windy Day is fantastic on the Cardboard.

  • I was wondering if you expect to get

  • the rights to the other Spotlight Stories?

  • CHRISTIAN PLAGEMANN: Oh, we were absolutely, absolutely amazed

  • by what the-- like we talked directly to the Spotlights

  • team, and it was actually the Spotlights team, like

  • the [INAUDIBLE] team in the previous presentation,

  • that integrated all this tech into the Cardboard,

  • and it works so well.

  • And I'm pretty sure they would be more than happy to integrate

  • their others, as well.

  • DAVID COZ: The limitation here was just the size,

  • because we embed all the assets in the application.

  • So it was just a question of size.

  • AUDIENCE: Thank you.

  • BORIS SMUS: Go ahead.

  • AUDIENCE: One of the things that came to mind as an alternative

  • to your magnetic button-- which I think is very clever,

  • but I'd really like the magnetometer.

  • One of the things I saw at AWE was,

  • what you can buy online for $20, is a little pocket Bluetooth

  • game controller.

  • So I think that would be a good thing.

  • And then I'd like to know when you think you'll have

  • your Cardboard eye tracker ready.

  • Because I think that's a good thing.

  • [LAUGHTER]

  • CHRISTIAN PLAGEMANN: Yeah, that'd be nice, too.

  • It's tricky.

  • We thought about it.

  • Like using the inward-facing cameras.

  • Like one of the major design problems with this

  • is there's so many different phones out there.

  • Like to basically, just to find a hole that kind of fits

  • all the outward-facing cameras, and to kind of find

  • the right form factor that fits most,

  • that was already challenging.

  • And now if we want to come up with the optics that

  • project the inward-facing into your eyes,

  • it's a bit more work.

  • Oh, but totally agree.

  • Would be an amazing input.

  • BORIS SMUS: OK.

  • That's it.

  • Thank you very much, guys.

  • CHRISTIAN PLAGEMANN: Thank you.

  • [APPLAUSE]
