Skyspheres Released: 360 Panos in WebVR on all platforms

I’ve been working on getting WebVR projects working across all platforms and in all modes, and I’m declaring cautious success!

You can see the result here: skysphere.flow.gl.

skysphere screenshot

This is a simple app to show 360 photos that I’ve taken around California. There were four aspects I needed to figure out:

  1. 360 image capture and editing
  2. WebVR across all platforms
  3. Control panel interface
  4. Text labeling

360 Image capture and editing

Photography is a serious hobby of mine, and in the last year I’ve used my Samsung Note 4 to capture dozens of 360 photos on my various travels and hikes around California.

I captured these with the photosphere mode of the Samsung Camera app on the Note 4. I found this interface superior to the Google camera app because of the little sphere icon that fills in as you take the 40 or so images required. (Note that 360 capture like this is possible on any Android phone with its built-in camera software.) The result is certainly not perfect, and even though I did some serious tweaking of the original files in Photoshop to fix some stitching errors, there are still issues. But overall, it is exciting to think that this is possible with any phone, today.

Sierra Buttes Ridge, 2015-07-13

Photoshop editing is pretty easy, except for the views looking directly down and up. Getting the stairs below me at Sierra Buttes to line up was impossible in straight Photoshop without jumping through complicated hoops… oh well, good enough!

Sierra Buttes Steps, 2015-07-13

WebVR across all platforms

I used Boris Smus’s WebVR boilerplate: https://github.com/borismus/webvr-boilerplate.

When I started the project a few months ago, getting it working across all platforms was a huge pain, but the situation kept improving. Just as I had it all working pretty well, Boris (at Google) released this boilerplate, so I adopted it. (What’s a little bit of wasted effort when living life on the bleeding edge!)

To enter stereo VR mode, just click the little “cardboard” icon in the lower corner.
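For reference, wiring the boilerplate into a three.js scene looked roughly like this at the time (a sketch from memory of the boilerplate’s API; exact names and signatures varied between versions):

```javascript
// Rough sketch of webvr-boilerplate wiring (from memory; the exact API
// varied between versions). The manager decides whether to render mono
// or stereo and injects the little "cardboard" button into the page.
var renderer = new THREE.WebGLRenderer();
var camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 1000);
var scene = new THREE.Scene();

var effect = new THREE.VREffect(renderer);    // side-by-side stereo rendering
var controls = new THREE.VRControls(camera);  // head tracking
var manager = new WebVRManager(renderer, effect);

function animate(timestamp) {
  controls.update();                          // apply head pose to the camera
  manager.render(scene, camera, timestamp);   // mono or stereo, as appropriate
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);
```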

Control panel interface

I spent way too much time thinking about and experimenting with this control panel.

The design expectations included:

  • no obstructions (control panel or text) of the 360 content
  • ability to navigate to the next photo without going ‘back’ to a menu screen
  • ability to see what the next photo is, avoiding the typical blind forward/back mechanisms. Since these photos are large and will take a while over slower connections, the full 360 photos should only be pulled in with intention.
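That last requirement, loading the big photos only with intention, amounts to a small lazy-loading cache. A minimal sketch (the names are mine, and `loadImage` is a stand-in for whatever loader the app actually uses, e.g. a three.js `TextureLoader`):

```javascript
// Minimal sketch: fetch a full-resolution pano only the first time it is
// selected, and reuse the cached result afterwards. Thumbnails for the
// control panel can be loaded cheaply up front; `loadImage` stands in
// for the app's real loader (e.g. THREE.TextureLoader.load).
function makePanoLoader(loadImage) {
  const cache = new Map();
  return function select(url) {
    if (!cache.has(url)) {
      cache.set(url, loadImage(url)); // the big download happens only here
    }
    return cache.get(url);
  };
}
```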

I liked the idea of finding the control panel by looking down, so I experimented with finding the right pitch of the head to have it appear.
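The look-down trigger boils down to a pitch check on the camera’s forward direction. A minimal sketch (the threshold value and vector shape are my assumptions; in three.js the forward vector would come from `camera.getWorldDirection()`):

```javascript
// Sketch of the look-down trigger. The camera's forward direction is a
// unit vector; y < 0 means the gaze is below the horizon. Past a pitch
// threshold, show the control panel.
const SHOW_PANEL_PITCH = Math.PI / 4; // ~45 degrees down; tune by experiment

function isLookingDown(forward) {
  const pitch = Math.asin(-forward.y); // positive when looking down
  return pitch > SHOW_PANEL_PITCH;
}
```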

skysphere screenshot control panel

I wanted it to always appear in front of the camera, but I wanted to avoid doing complex math, so my trick was to create an invisible object as a child of the camera, which therefore always sits at a fixed z distance in front of it. Then I use that object’s location as the position at which to pop up the control panel. Easy, no math!

Once the control panel pops up, it locks into that position, so that further gaze interactions move a cursor over the particular photo to be selected.
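The invisible-anchor trick can be sketched in three.js roughly like this (a sketch, not the project’s actual code; all names here are mine):

```javascript
// Invisible anchor parented to the camera: the scene graph keeps it a
// fixed distance in front of the view, so no manual math is needed.
var anchor = new THREE.Object3D();
anchor.position.set(0, 0, -2); // 2 units in front of the camera
camera.add(anchor);

var panelPosition = new THREE.Vector3();

function showControlPanel() {
  // Read the anchor's world position once, then lock the panel there, so
  // that later gaze movement drives the cursor instead of the panel.
  anchor.getWorldPosition(panelPosition);
  controlPanel.position.copy(panelPosition);
  controlPanel.lookAt(camera.position); // face the viewer at pop-up time
  controlPanel.visible = true;
}
```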

I haven’t seen a lot of similar control panel techniques, so I’m interested in getting your feedback on Twitter @jmarshworks.

Text labeling

The typical text labeling mechanisms for 360 photos seem to lock the text front and center of the camera, or up in the center of the view, with the text always directly facing the camera. Although this is highly readable, it can also be distracting until it fades. So I adopted a different design, just to see what it felt like (VR interaction design requires experimentation at this point!). I’m obsessed with beautiful letterforms in 3D space, so maybe I’ve gone overboard.

skysphere text example

I actually put the text plane out in world space, where it responds as though it were a real 3D object; in particular, it is not locked into facing the camera. While still clearly an artificial addition to the content, I felt the sense of presence was stronger because the text moves naturally. Do you?
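The difference between the two approaches can be made concrete with a yaw-only sketch (plain numbers rather than real three.js rotations; the function and names are mine, for illustration):

```javascript
// Yaw-only sketch of the two labeling strategies. A billboarded label
// recomputes its facing toward the camera every frame; a world-locked
// label keeps whatever orientation it was given when placed.
function labelYaw(mode, label, cameraPos) {
  if (mode === 'billboard') {
    // turn to face the camera (recomputed every frame)
    return Math.atan2(cameraPos.x - label.x, cameraPos.z - label.z);
  }
  return label.yaw; // 'world-locked': set once, never updated
}
```

As the camera moves, a billboarded label’s yaw changes every frame, while a world-locked label’s never does; that fixed orientation is what makes it read as an object in the scene rather than an overlay.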

Note: I haven’t seen others do this! It makes so much sense to me, and I like it much better.

Conclusion

I will next experiment with adding audio to the experience. For each of the photos taken, I had recorded several minutes of ambient audio in stereo on my phone. (Simple, I know, not binaural, nor with a stereo array to spatialize correctly.) My intent was to just see if I could enhance the experience with the simplest audio that any user could capture with the phone. But I lost the files due to a phone glitch!

So I’ll be more careful with the audio files, and next time will see if a streaming audio loop will provide a significant benefit in presence to still photos.