Developing for VR has become increasingly popular with the release of more and more headsets. This is great for the VR community: being able to reach more people allows for more growth, new ideas, and new ways to implement cool features. For developers, however, the story is a bit messier.
Developing for Multiple Platforms
Supporting multiple platforms seems ideal for developers to be able to reach the largest audience possible, but there are myriad difficulties to take into account. For example, some platforms take advantage of controllers for input, such as the Valve Index or the Oculus Quest. Others don’t use controllers and instead rely on buttons on the headset, like the Google Cardboard. To make it even wackier, the Quest 2 gives users the option to use controllers or just their hands (with supported apps).
Even just by looking at input across VR platforms, we can see that there are many different implementations, and thus a lot of work is needed to support them all. In this post, I'll focus on how I would (hypothetically) take my VR Escape Room, made for the Quest 2, and port it to Google Cardboard.
Differences Between Platforms
The Quest 2 and Google Cardboard are wildly different platforms, so let's go over some of the key differences.
Firstly, the Google Cardboard is only a headset. Really, it's not even that; it's just something you put your phone in to hold it to your face. This means there are no controllers to receive user input from. The Quest 2, on the other hand, has two controllers that track position and rotation, along with a ton of buttons for input.
Next, the Quest 2 supports six degrees of freedom. This means the headset tracks three degrees (or axes) of rotation (pitching the head up/down, rolling it side to side, and turning to look left/right) and three axes of movement (up/down, left/right, and forward/backward). In contrast, Google Cardboard only supports three degrees of freedom (head rotation). This means that moving around in physical space does not get translated to the virtual world.
Last, let’s talk specs. The Quest 2 is a (relatively) powerful system, designed specifically for VR, whereas the whole idea of the Google Cardboard is to allow anyone to use their phone as a headset. This means that the developer can’t count on the phone having a certain resolution or level of processing power.
Input and Movement
As I mentioned earlier, the Quest 2 and Google Cardboard have different ways of receiving input. A big part of my escape room game is being able to pick up, inspect, and move different objects in the environment. With the Quest 2, this is pretty easy: the controllers report their position in space, so I can map the player's hands (holding the controllers) directly into the game world. The buttons on the controllers let players pick up, move, and inspect items. Lastly, players could move around the virtual world either by using the joystick on the right-hand controller or by pointing the controller while holding a button to teleport.
Unfortunately, with most implementations of Google Cardboard (there is Cardboard support for controllers, but I'm going to assume most people don't have them), I wouldn't have any of these input luxuries. The first issue is not having virtual hands to pick up items and interact with them. To solve it, I would probably use a method similar to the way most apps implement UI interaction with Google Cardboard: a dot is placed in the middle of the screen, which the player uses as a cursor. Personally, I'm not a fan of pressing buttons on the headset for input, so to select something with the cursor, I would have the player stare at the item they wish to interact with. It sounds pretty funny, but having a time indicator (maybe a circle filling in around the cursor) that shows how much longer the player needs to look at an item to pick it up would make it feel much more natural.
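As a rough sketch, that gaze-and-dwell selection could look something like this in Unity C#. The component name, the "Interactable" tag, and the one-second dwell time are all my own assumptions, not from any existing implementation:

```csharp
using UnityEngine;
using UnityEngine.Events;

// Hypothetical gaze-dwell selector: raycast from the center of the
// camera's view each frame; if the same interactable stays under the
// cursor long enough, "select" it. Attach to the main camera.
public class GazeDwellSelector : MonoBehaviour
{
    public float dwellSeconds = 1.0f;   // how long the player must stare
    public float maxDistance = 5.0f;
    public UnityEvent<GameObject> onSelected;

    private GameObject _current;
    private float _timer;

    void Update()
    {
        // Ray straight out of the headset (the cursor sits at the
        // center of the player's view on Cardboard).
        Ray gaze = new Ray(transform.position, transform.forward);

        if (Physics.Raycast(gaze, out RaycastHit hit, maxDistance) &&
            hit.collider.CompareTag("Interactable"))
        {
            if (hit.collider.gameObject != _current)
            {
                // New target: restart the dwell timer.
                _current = hit.collider.gameObject;
                _timer = 0f;
            }

            _timer += Time.deltaTime;
            // (_timer / dwellSeconds) could drive the fill of a
            // circular indicator around the cursor here.
            if (_timer >= dwellSeconds)
            {
                onSelected.Invoke(_current);
                _timer = 0f;
            }
        }
        else
        {
            _current = null;
            _timer = 0f;
        }
    }
}
```

Resetting the timer whenever the gaze leaves the target is what keeps accidental glances from triggering a pickup.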
In order to be able to inspect items (spoiler: an important aspect of the escape room), I’d have the item slowly rotate while the player is “holding” it to reveal all sides. Now, how are we supposed to put the object down? Well, often you’d want to put the item on top of a surface, so I could have the game trigger the timing indicator whenever the player is looking down past a certain angle and then place the object in front of where the player is looking. This would probably take some getting used to, but I think it’s the most natural way to implement interacting with objects given the constraints of the Cardboard.
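Both pieces of that idea are simple to sketch: a slow spin while the item is held, and a check for whether the player is looking down far enough to start the put-down timer. The 30-degrees-per-second spin rate and the 40-degree pitch threshold are placeholder values I'd expect to tune by playtesting:

```csharp
using UnityEngine;

// Hypothetical "inspect" behavior: while an item is held, slowly spin
// it so the player can see every side without any hand controls.
public class HeldItemInspector : MonoBehaviour
{
    public float degreesPerSecond = 30f;

    void Update()
    {
        transform.Rotate(Vector3.up, degreesPerSecond * Time.deltaTime,
                         Space.World);
    }

    // Returns true when the player's gaze pitches down past the
    // threshold, i.e. when the put-down dwell timer should start.
    public static bool LookingDown(Transform head,
                                   float thresholdDegrees = 40f)
    {
        // Angle between the gaze direction and the horizontal plane.
        Vector3 flat = Vector3.ProjectOnPlane(head.forward, Vector3.up);
        float pitch = Vector3.Angle(head.forward, flat);
        return head.forward.y < 0f && pitch > thresholdDegrees;
    }
}
```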
Lastly, we need to be able to move around. This one is tough, so I think the best way to implement it would actually be to modify the way the game is played, just by a little. I think I would create hotspots in the escape room, maybe spots on the floor, maybe translucent cylinders, dispersed around the room that the player could see and use the cursor to select to teleport to. This would take some of the fun out of the escape room, since there would be a limited number of places in the room the player could go, but I could add some decoy hotspots to try to throw the player off.
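A hotspot itself could be a small component like the sketch below, wired up to the same gaze-selection mechanic. The `playerRig` field, the decoy flag, and the method name are all illustrative assumptions:

```csharp
using UnityEngine;

// Hypothetical teleport hotspot: a visible marker on the floor that,
// when gaze-selected, moves the player rig to stand on it.
public class TeleportHotspot : MonoBehaviour
{
    public Transform playerRig;   // root of the camera rig (assumed)
    public bool isDecoy = false;  // decoys look real but do nothing

    // Hook this up to the gaze-selection event.
    public void OnGazeSelected()
    {
        if (isDecoy) return;

        // Move only on the floor plane; keep the player's eye height.
        Vector3 target = transform.position;
        target.y = playerRig.position.y;
        playerRig.position = target;
    }
}
```

Because decoys share the same look and selection behavior as real hotspots, the player can't tell them apart until they try one.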
Performance

The first thing that would need to be addressed is the difference in compute power between the Quest 2 and Google Cardboard. Since the Cardboard can't actually do any computing, I'd need to account for running on almost any phone. This in itself is quite the task, since one phone's power can be very different from the next.
With the Quest 2, I was able to test the game on the actual hardware to make sure everything ran smoothly, but with so many different phones out there, I'm going to need a different approach. Rather than build different versions for different phones, I'd implement a more dynamic way to scale how much is demanded of the phone. Unity (the tool I use to develop VR applications) helps here in two ways. Its Quality Settings let me define a handful of named tiers (by default Very Low, Low, Medium, High, Very High, and Ultra), and my app can pick a tier at runtime based on the phone's specs; I could even trim this down to just three tiers, like low, medium, and high, so there's a finite number of groups to tune for. On top of that, Unity's Level of Detail (LOD) system can swap in simpler versions of a model as it gets farther from the camera. For each quality tier, I would then tune settings such as texture quality and model detail to better match the kind of phone the game is running on.
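Picking a tier at startup could be as simple as the sketch below. The RAM thresholds are arbitrary placeholders; real cutoffs would come from profiling on representative devices, and a more serious version would also look at GPU and CPU info:

```csharp
using UnityEngine;

// Sketch of choosing a quality tier at startup based on phone specs.
// Assumes the project's Quality Settings define three levels:
// 0 = Low, 1 = Medium, 2 = High.
public class QualityTierSelector : MonoBehaviour
{
    void Awake()
    {
        int ramMb = SystemInfo.systemMemorySize; // total system RAM, MB

        int tier;
        if (ramMb < 3000)      tier = 0; // low-end phones
        else if (ramMb < 6000) tier = 1; // mid-range
        else                   tier = 2; // high-end

        QualitySettings.SetQualityLevel(tier,
            applyExpensiveChanges: true);
    }
}
```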
Finally, I'd need to consider resolution differences. This is a relatively simple fix: I would apply some anti-aliasing settings to the Cardboard version so the world doesn't look like a stair-stepped mess. There may still be a screen-door effect on lower-resolution phones, but unfortunately there's nothing I can do in software to fix that, since it comes from the visible gaps between the screen's physical pixels.
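Enabling MSAA in code is a one-liner (it can also be set per quality level in the Quality Settings asset; 4x is just a starting point I'd expect to balance against frame rate on weaker phones):

```csharp
using UnityEngine;

// Turn on 4x MSAA for the Cardboard build.
// Valid values for antiAliasing are 0, 2, 4, and 8.
public class CardboardAntiAliasing : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.antiAliasing = 4;
    }
}
```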
I've already addressed most of the changes I would need to make during development to get the game playable on Google Cardboard, but there's also the bigger-picture question: how do I actually get it running on a Cardboard device?
Thankfully, Unity natively supports building apps for Android, so I wouldn't need a workaround there. Also, Google provides a pretty nice Cardboard SDK package for Unity, so building the Cardboard tools into the app should be fairly straightforward.
However! The dream that I and many other AR/VR developers are waiting on is OpenXR, a standard that would allow ONE implementation to be compatible across many platforms! Of course, this still requires a lot of work from the OpenXR working group at Khronos (but they're getting closer!), and it needs to be supported by development tools such as Unity, but hopefully in the near future, developers will be able to write their app once and have significant support across many, many platforms.
Porting a game like an escape room from Quest 2 to Google Cardboard would be quite the undertaking. The differences between the headsets and the mechanics of the game would require some significant changes and a lot of fine-tuning to be enjoyable on the Cardboard, but I think it's possible (and would make for a fun challenge).
Thank you for getting all the way through (or skipping to the end)! My name is Ben Keener and I am a Full-Stack Software Engineer who loves learning and aspires to bring good ideas to life! If you’d like to give me feedback on this blog post or my bad puns, you can reach/stalk me in a number of ways: LinkedIn, Twitter, and GitHub.