GearVR as CardBoard


Three days in. Light is but an illusion now. Walls are constantly dancing. One is making fun of me. I CAN HEAR THEM SHUT UP SHUT UP SHUT UP. Send help.

I discovered you can use the GearVR as a "CardBoard" (which is becoming the generic term for the smartphone-holder approach, I think) fairly easily. After that I started realizing that the GearVR actually is just a fancy smartphone holder, only with better orientation sensors. Which comes with a side of limited walled-garden sandbox. Do not like.

To use your GearVR as a CardBoard is relatively simple, though the experience varies. It comes down to NOT plugging your Note into the USB connector, but using the USB plug to clip the phone in, so to speak. The GearVR USB port can be angled a little to make it easier to lock the phone in. At its widest angle the phone will fit right under it. Together with the regular lock on the other side, the phone actually stays locked in pretty tight. I'm not even trying to hold it in place anymore (which is a bit scary, too).

The downsides? The touch screen is really sensitive and you'll often trigger random hotspots for no apparent reason while trying to lock it in place. Really annoying since you'll have to start over. And of course you have no direct control of the device. It does tentatively support the joypad as a generic device, though only the back button seems to work for me right now. But that's fine because often all you'll want is to cancel the accidental tap anyways.
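If the touchpad really does show up as a generic gamepad, it can be polled through the standard Gamepad API. A minimal sketch; which button index the back button maps to is a pure guess (index 1 here), so treat the mapping as an assumption to verify on the device:

```javascript
// Poll connected gamepads and fire a callback when the "back" button is
// pressed. Button index 1 is an assumption, not a documented GearVR mapping.
// The gamepad source is injectable so this can run outside a browser too.
function pollBackButton(onBack, getGamepads = () => navigator.getGamepads()) {
  const pads = getGamepads();
  for (const pad of pads) {
    // Gamepad slots can be null; guard before reading buttons.
    if (pad && pad.buttons[1] && pad.buttons[1].pressed) onBack();
  }
}
```

You'd call this from your `requestAnimationFrame` loop, since the Gamepad API is poll-based rather than event-based.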

Another drawback of the GearVR in general is that there's no way to keep charging your device while using it. That means you'll inevitably run out of battery (in a matter of one or two hours, not days), especially under heavy VR use. And then you'll have to wait a few hours for it to charge. My CardBoard solution doesn't fix this, btw, since the USB port will be tightly blocked. They could at least have made the GearVR a hub and allowed a second USB port to charge the connected device. But no, it's really just a CardBoard with sensors. Good sensors, though.

I'm slightly worried that this approach will screw up the USB connector at some point. Or that the connector will damage my Note somehow. But I think it's doable if you're just really careful. On that note, normally removing the phone from the GearVR already feels dangerous. It feels like it wouldn't take much force at the top of the phone to break off the connector while removing it. The system is a little fickle to begin with.

As an aside, I still haven't found a good way of fixing the nose-pressure problem. It really hurts and it seems to be the weight of the Note that's causing it. I find it hard to believe I'm the only one with that problem. I think the fix for me would be a strap that reached over my head all the way to the bottom of the device, rather than just the top, as the current one does. The top strap just won't lift the pressure enough. Enough about that. My CardBoard (and WoodBoard...) should arrive today. Curious to see how that works out.

So why abuse the device like this? Well, because the GearVR store is too limited. I haven't even used it after day one, actually. Using it as a cardboard allows you to use any apk, any app from the Play store, and of course, the web. The web! After that anything's possible ;)

I hope somebody will make it possible in the future to use the good sensors built into the GearVR outside of the sandbox. But I guess it's Samsung's whole point not to let that happen. Well, it's inevitable that somebody will escape it, of course. But whether that's gonna be useful to everybody is another thing.

Yesterday I set out to do some experimenting with canvas. I had a simple proof of concept for a 3D raycasting environment (think Wolfenstein) that I made a while back. With this purpose in mind, actually. It supported the HTML5 orientation API or keyboard. I thought I had also done the joypad API, but I'm not so sure about that anymore. So this app works great on desktop, or on flat screens in general. There's a fisheye correction for the raycasting of course and yeah, it looked nice and sharp.
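For reference, wiring the HTML5 orientation API into a raycaster's camera comes down to something like the sketch below. `alphaToRadians` and `cameraAngle` are made-up names, and the exact alpha-to-heading mapping varies per device, so take the conversion as illustrative:

```javascript
// Feed the deviceorientation event into a raycaster's camera angle.
// `alpha` is the rotation around the vertical axis, in degrees (0..360).
let cameraAngle = 0; // radians, read by the raycasting render loop

function alphaToRadians(alphaDegrees) {
  // alpha increases counter-clockwise; flip and normalize to 0..2*PI
  return ((360 - alphaDegrees) % 360) * Math.PI / 180;
}

function onOrientation(e) {
  if (e.alpha != null) cameraAngle = alphaToRadians(e.alpha);
}

// Guarded so the pure math above can also run outside a browser.
if (typeof window !== 'undefined') {
  window.addEventListener('deviceorientation', onOrientation);
}
```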

For VR it's different. I was taking a look at WebVR, but the links I found were all pointing to ThreeJS. Some descriptions of WebVR made me think that the API was more about getting device telemetry than actual displaying. So I asked Brandon Jones, who's implementing/researching WebVR for Google, about this and his initial response reinforced my thought; you'll get some measurements but you'll have to do the painting yourself. The measurements will be important for tweaking the experience later. But I was hoping for an API that gave me two panels to paint to and which would take care of the rest for you. By "rest" I mean applying the, apparently, so-called "barrel distortion" effect to flat 2D images. That basically means going from this to this. If you're going "but but but!" now, keep reading.

While it seems trivial when looking at it, it's actually not something the 2D canvas API will help you much with. It's pretty much like applying a pixel shader manually. Too slow for comfort. I've tried, but there's no way that'll reach the desired performance if I have to do the distortion "manually" as a post-process step.
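To give an idea of why it's too slow: a manual barrel distortion pass on a 2D canvas means touching every pixel from JavaScript, every frame, for both eyes. A sketch, with a simple made-up radial model and an illustrative coefficient `k`:

```javascript
// Map an output pixel back to its undistorted source position by scaling
// its offset from the center by (1 + k * r^2). The model and k are
// illustrative, not the Oculus/Cardboard distortion profile.
function barrel(x, y, cx, cy, k) {
  const dx = (x - cx) / cx, dy = (y - cy) / cy; // normalized offsets
  const r2 = dx * dx + dy * dy;
  const scale = 1 + k * r2;
  return [cx + dx * scale * cx, cy + dy * scale * cy];
}

// The slow part: one function call and four array writes per pixel,
// per frame, per eye, all on the main JS thread.
function distortCanvas(ctx, w, h, k) {
  const src = ctx.getImageData(0, 0, w, h);
  const dst = ctx.createImageData(w, h);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const [sx, sy] = barrel(x, y, w / 2, h / 2, k);
      const sxi = sx | 0, syi = sy | 0;
      if (sxi >= 0 && sxi < w && syi >= 0 && syi < h) {
        const si = (syi * w + sxi) * 4, di = (y * w + x) * 4;
        dst.data[di] = src.data[si];
        dst.data[di + 1] = src.data[si + 1];
        dst.data[di + 2] = src.data[si + 2];
        dst.data[di + 3] = src.data[si + 3];
      }
    }
  }
  ctx.putImageData(dst, 0, 0);
}
```

On a phone-resolution canvas that's millions of operations per frame, which is exactly the kind of work a GPU fragment shader does for free.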

However. Just at the end of my evening Brandon replied to my disappointment that applying the distortion should in fact be supported by WebVR. So I'll have to dig into the (seemingly non-existent?) API docs a bit deeper and figure out how to get it to play nice without using ThreeJS. Why not Three? Well, I think ThreeJS is great, but I'm not comfortable with WebGL myself. There's a lot going on and I don't want to spend most of my time right now learning Three. I'd also throw in the WebGL-on-mobile-devices argument, but if you're gonna run a WebVR "compliant" browser anyways, I don't think WebGL is a problem anymore.

As far as my proof of concept goes, I was surprised. I ended up using a double flat 2D canvas. In fact, I didn't even get to the point where the camera perspective is different for each eye (though the rendering is already separated; the only thing left is to apply the positional delta). And even without the barrel distortion, it's already working pretty decently. Clearly you lose a lot of screen real estate and get a huge fishbowl effect, but it works and you get that sense of being in another world.
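One way to set up that separated-per-eye rendering (here with a single canvas split into two clipped viewports rather than two separate canvas elements) is sketched below; `drawScene` stands in for the existing raycaster's render function and its signature is an assumption:

```javascript
// Render the same scene twice, side by side, one viewport per eye.
// eyeSeparation is the positional delta between the eyes (0 = identical
// images, which is the state the post describes).
function renderStereo(ctx, width, height, drawScene, eyeSeparation = 0) {
  const half = Math.floor(width / 2);

  ctx.save();                      // left eye: clip to the left half
  ctx.beginPath();
  ctx.rect(0, 0, half, height);
  ctx.clip();
  drawScene(ctx, half, height, -eyeSeparation / 2);
  ctx.restore();

  ctx.save();                      // right eye: translate to the right half
  ctx.translate(half, 0);
  ctx.beginPath();
  ctx.rect(0, 0, half, height);
  ctx.clip();
  drawScene(ctx, half, height, +eyeSeparation / 2);
  ctx.restore();
}
```

Bumping `eyeSeparation` above zero is the "only thing left" step: the raycaster would shift its camera position sideways by the offset before casting.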

What surprises me is that there's any depth perception at all with this approach. I thought you needed two displaced images for the brain to do its depth computations. But I was seeing depth just fine without that. Well, I think. At least I didn't notice the absence of it.

I'll dig deeper into WebVR, find me a proper spec in so far as there is one, hope it doesn't require ThreeJS anyways (I'm unfoundedly fearing it will), and get my proof of concept properly up'n'running. I want the Wolfenstein environment working with a joypad, or with vertical tilt for speed. If you add a bar to indicate your speed, affected by how level you're currently looking, I think you can get far in terms of movement. Combined with head tracking, of course.
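The tilt-for-speed idea could map pitch to a movement speed with a dead zone around level, so standing still just means looking straight ahead. All the thresholds below are guesses to be tuned, and the sign convention (tilt down = forward) is an assumption:

```javascript
// Map vertical head tilt (pitch in degrees) to a speed in [-max, max].
// Within the dead zone you stand still; beyond maxTilt you move at full
// speed. The returned value could also drive the on-screen speed bar.
function tiltToSpeed(pitchDegrees, deadZone = 5, maxTilt = 30, maxSpeed = 1) {
  const tilt = Math.abs(pitchDegrees);
  if (tilt < deadZone) return 0;                        // looking level
  const t = Math.min((tilt - deadZone) / (maxTilt - deadZone), 1);
  return Math.sign(pitchDegrees) * t * maxSpeed;        // sign: fwd/back
}
```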

Fun fun fun :)