Odecee experts James Dunwoody and Suzanne Chamberlain won the last Sprout (our internal innovation event) with their work on StageFright, a VR program built for the HTC Vive that allows users to improve their presenting skills by putting them on a virtual stage in front of a virtual audience.
Post-Sprout, Suzy led a small team of developers who worked over several weeks to take StageFright to the next level. Here, she shares some of what she learned…
Walking out across a stage with over 200 pairs of eyes staring up at me, I briefly check that the slide-clicker in my hand is working before clearing my throat. The audience begins to hush. I take a few steps to the left and attempt to move the microphone stand a bit closer, but only manage to knock it over. With a self-composing sigh, I bend down to pick it back up; in the process, I notice something odd about the man second from the left in the front row. His neck doesn’t look right… it’s as though he is straining to read something well above my head. Another thing: his eyes are solid black. With another sigh, I remove the VR headset, tab back to Blender and open the model of the man from the front row.
For the last few weeks our team has been living double lives – spending half our time as software engineers and the other half as the star speakers at a VR conference in the virtual world of StageFright. Working on StageFright has been strange: colleagues constantly tell you how weird you look as you step over invisible furniture and pick up things that aren’t actually there. Of course, they immediately understand when they try it out for themselves. Room-scale VR is incredibly immersive, so much so that the famous “plank walk” examples result in what to outsiders appear to be bizarre breakdowns – you can see a few here. Trust me, their reactions are real. After weeks of walking through virtual chairs and dropping non-existent coffee cups on my feet, I still can’t take one step off the virtual stage without inching my foot forward to check for solid ground first.
But aside from the strange things it makes us do, what else have we learnt from our time developing for virtual reality?
Modelling is time-consuming; it’s an art form that requires you to be prepared to say “it’s good enough!” and move on. The objects in VR can’t all be works of art, or the experience you actually want for your users would be lost amid the distractions of the world. To keep production costs to a minimum, we used Blender, which was more than capable of creating the low-poly models we needed for virtual reality. Blender has a steep learning curve, though, so if you are new to 3D modelling, read or watch plenty of tutorials and make a cheat sheet of the hotkeys before the stress sets in.
Setting the Scene
Arranging the scene is one-part interior design, one-part searching for reference images and one-part constant frustration as objects seem to float slightly off the ground even though you know you set their Y value correctly. Getting the lighting just right so it feels non-threatening but still realistic – all the while trying to hide any mistakes you made in the modelling – can be a serious challenge. We found mob programming and regular user testing were vital for resolving scaling, lighting and modelling issues.
We chose Unity as our engine because it’s easy to use, has a large community and is free for non-profit use. We found the SteamVR assets and plugins for the HTC Vive made basic interaction very straightforward. (Getting gaze detection working was a bit tricky, though: the head-mounted-display object was apparently “inactive”, so it wasn’t returned by our GetComponent call. But I digress…)
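For what it’s worth, the heart of a gaze check like ours boils down to a dot-product test between the headset’s forward vector and the direction to each audience member. The sketch below is an engine-agnostic Python illustration, not StageFright’s actual Unity code; the `gaze_hit` name and the 10° cone half-angle are assumptions for the example:

```python
import math

def gaze_hit(head_pos, head_forward, target_pos, cone_deg=10.0):
    """Return True if the target lies within a cone around the gaze direction.

    head_forward is assumed to be a unit vector; cone_deg is the
    half-angle of the gaze cone.
    """
    to_target = [t - h for t, h in zip(target_pos, head_pos)]
    dist = math.sqrt(sum(c * c for c in to_target))
    if dist == 0:
        return True  # target is exactly at the headset
    to_target = [c / dist for c in to_target]
    # Cosine of the angle between gaze and target direction
    cos_angle = sum(f * t for f, t in zip(head_forward, to_target))
    return cos_angle >= math.cos(math.radians(cone_deg))

# Looking straight down +Z at an audience member five metres ahead:
print(gaze_hit((0, 1.7, 0), (0, 0, 1), (0, 1.7, 5)))  # True
print(gaze_hit((0, 1.7, 0), (0, 0, 1), (4, 1.7, 5)))  # off to the side: False
```

In a real engine you would run this per frame against each audience member and accumulate the hits into the “where haven’t I looked?” feedback.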
Over the weeks we spent on the project, we built a world in which users could pick up objects, see a rolling average of their speaking volume, and see which parts of the audience they hadn’t been looking at enough. They could use a clicker to go back and forth through their slides, highlight those slides with a laser pointer, teleport, control the experience through multiple user menus, silence a noisy crowd by speaking, and select from multiple presentation locations – along with a number of other touches that enhanced the experience.
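The rolling average of speaking volume can be sketched as a fixed-size window over recent loudness samples. This is an illustrative Python sketch, not the project’s Unity implementation; the `RollingVolume` name and the 30-frame default window are invented for the example:

```python
from collections import deque

class RollingVolume:
    """Keep a rolling average of microphone loudness over the last N frames."""

    def __init__(self, window=30):
        # deque with maxlen automatically discards the oldest sample
        self.samples = deque(maxlen=window)

    def add(self, rms_level):
        self.samples.append(rms_level)

    def average(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

vol = RollingVolume(window=3)
for level in (0.2, 0.4, 0.6, 0.8):
    vol.add(level)
print(round(vol.average(), 2))  # 0.6 – the average of the last three samples
```

Feeding the averaged level into the UI (rather than the raw per-frame level) keeps the on-stage volume indicator from flickering.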
People love VR. We were never short of volunteers to ‘play’ in the virtual world. People gave positive but meaningful feedback, and we had plenty of adjustments to make. It was fascinating to see how even small variations in users’ heights completely changed the experience, especially when the lecterns were too high for some players to read from, let alone see over. Another issue was the distance of the pop-up menu we expected people to tap: it was out of reach for some players, but felt “too close” to others. The controllers were also interesting, with some people bumping them on the headset while others thought they had reached items they hadn’t yet touched. We often think of usability in terms of colours and font sizes; rarely do we have to consider the user’s physical attributes to such an extent. I look forward to seeing the various ways people solve this problem in VR over the coming months and years.
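One simple way to tackle that menu-distance problem is to place UI relative to the player’s measured headset height rather than at fixed world coordinates. The sketch below is purely illustrative – the `place_menu` name and both scaling factors are guesses, not values from StageFright:

```python
def place_menu(headset_height_m, arm_reach_factor=0.45, height_factor=0.75):
    """Position a floating menu relative to the user's measured headset height.

    Distance scales with height as a rough proxy for arm reach, and the
    menu sits below eye level so it never blocks the view of the stage.
    Both factors are illustrative assumptions.
    """
    distance = headset_height_m * arm_reach_factor
    menu_height = headset_height_m * height_factor
    return distance, menu_height

# A shorter player gets a closer, lower menu than a taller one:
print(place_menu(1.5))
print(place_menu(1.9))
```

Sampling the headset height once at the start of a session (or continuously, with smoothing) would let the same menu feel reachable to everyone.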
Getting to work full time with room-scale VR has been a childhood dream come true. The pride you feel when you look at something you made in the real world is a thrill, but the feeling when you walk around a room built from your own imagination is something else entirely. It’s not a feeling of power or control; it’s a warm, fuzzy feeling you want to share with as many people as possible. I strongly encourage anyone thinking of dabbling in VR to give it a go. You can learn so much and bring smiles to so many faces.
Tags: Blender, HTC Vive, Object modelling, Room-scale VR, SteamVR, Virtual reality
This post was written by Suzy Chamberlain