Tracking and Interaction Systems in Virtual Reality
Whenever an environment is designed, be it a video game or a virtual simulation, a major part of its development is deciding how the user will move about inside it and what interaction mechanics can be implemented within it. This is one of the problems that has long challenged developers of virtual reality environments: what is the best way to track the user so that they can move about and interact with the environment?
Movement is an integral part of any virtual ecosystem. Without movement, a user is restricted to a confined space, which can greatly reduce the “immersion factor” that many developers consider a crucial element of their virtual environment. Let us look at the initial hardware offered by two major VR device manufacturers and their takes on solving the interaction problem. The earliest consumer version of the Oculus Rift (the CV1) shipped with Microsoft’s Xbox One controller. For rotational and positional tracking, the CV1 relied on a technology known as the Oculus Constellation system: the headset’s surface carried a series of precisely positioned infrared LEDs, each blinking in a specific pattern. Using this configuration, the Constellation sensor could track the HMD (head-mounted display) with only one camera. However, this setup had issues. Users reported that if an obstruction blocked too many infrared markers from the Rift sensor’s view, tracking would fail to accurately locate the user.
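The role of the blink patterns can be sketched in a few lines of code. This is an illustrative toy, not Oculus’s actual firmware: the codebook of LED IDs and patterns below is entirely made up, but it shows the core idea that each LED blinks a unique on/off sequence over successive camera frames, so matching an observed sequence against a known codebook tells the tracker which physical LED a bright blob in the image corresponds to.

```python
# Hypothetical codebook: LED name -> 4-bit blink pattern, one bit per
# camera frame. Real systems use longer codes and many more LEDs.
LED_CODEBOOK = {
    "top_left":  (1, 0, 1, 0),
    "top_right": (1, 1, 0, 0),
    "bottom":    (0, 1, 1, 0),
}

def identify_led(observed_pattern):
    """Match a blob's observed on/off sequence to a known LED ID."""
    for led_id, pattern in LED_CODEBOOK.items():
        if pattern == tuple(observed_pattern):
            return led_id
    return None  # unknown blob, e.g. a stray reflection

print(identify_led([1, 0, 1, 0]))  # top_left
print(identify_led([0, 0, 0, 0]))  # None
```

Once each blob is labelled with its LED, the known 3D layout of the LEDs on the headset plus their 2D image positions is enough to estimate the headset’s pose, which is why a single camera sufficed, and why occluding too many LEDs breaks tracking.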
It was during this period that other developers realized the need for a better control system, so that interactions and movements could be seamless and more natural. Around the same time, the HTC Vive (along with Valve’s SteamVR Tracking platform) was released, and the controllers that shipped with it represented a great leap forward in user interaction.
The Vive controllers had the same buttons, trackpad, and triggers as a normal gamepad; however, they also included a ring of 24 infrared sensors that could determine the location of each controller. With a few tweaks, developers were able to massively upgrade hand tracking inside the virtual environment, leading to a more fluid interaction system. The Vive also shipped with the Lighthouse tracking system: a setup with two (as opposed to the CV1’s one) Vive Base Stations that could create a 360-degree tracked space as large as 15 × 15 feet. These base stations emit infrared pulses at a constant rate (60 pulses per second) that are then used to track the headset as well as both controllers.
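The timing trick behind this kind of base-station tracking can be sketched as follows. This is a hedged, simplified model rather than Valve’s implementation: assume the station flashes a sync pulse, then sweeps a beam across the room at the fixed 60 Hz rate mentioned above. The delay between the sync flash and the moment a sensor on the headset sees the beam then encodes that sensor’s angular position relative to the station.

```python
# Simplified sketch: convert the time at which a headset sensor is hit
# by the base station's sweep into an angle, given the sweep rate.
SWEEP_RATE_HZ = 60.0              # sweeps per second (the 60-pulse rate above)
SWEEP_PERIOD = 1.0 / SWEEP_RATE_HZ

def hit_time_to_angle(t_sync, t_hit):
    """Angle (degrees) of a sensor, from sync-pulse and hit timestamps."""
    delay = t_hit - t_sync
    return 360.0 * (delay / SWEEP_PERIOD)

# A sensor hit a quarter of the way through a sweep sits at 90 degrees.
print(hit_time_to_angle(0.0, SWEEP_PERIOD / 4))  # 90.0
```

Because every sensor on the headset and controllers yields its own angle from each station, the system collects many such measurements per sweep and combines them to recover full positions and orientations.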
This directly led to Oculus, later that year, releasing standalone controllers for the Rift, called Oculus Touch, with a design refined so that the controls felt more natural than holding a traditional gamepad. Even though a second sensor was included with the Touch package, many felt the tracking was still not up to par with that of their rivals, and that a setup with more sensors was necessary for seamless tracking. This led Oculus to rethink the design of its control and tracking mechanism and move toward a multi-sensor setup (today, a three- or four-sensor setup is recommended).
Much more recently, Valve introduced its Knuckles controllers. These are backward compatible with the Lighthouse tracking system, so the underlying technology is similar to that of the Vive controllers (external trackers locate the HMD and controllers). One major upgrade, however, is that the Knuckles controllers use internal capacitive sensors to track the movement of the user’s individual fingers. To calibrate the whole hand into the virtual environment, the user simply closes their hand so that each finger rests fully on its sensor, and that’s it. As the years go by, tracking technology is becoming not only more advanced but also more user-friendly.
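The close-hand calibration step described above can be sketched like this. This is an illustrative model, not Valve’s actual firmware: assume closing the fist records each finger’s maximum capacitive reading, after which raw readings are normalized against those maxima to estimate a per-finger “curl” between 0.0 (open) and 1.0 (fully closed).

```python
def calibrate(closed_hand_readings):
    """Record per-finger maximum sensor values from a closed fist."""
    return dict(closed_hand_readings)

def finger_curl(raw_readings, calibration):
    """Normalize raw capacitive readings into 0.0 (open) - 1.0 (closed)."""
    curls = {}
    for finger, value in raw_readings.items():
        maximum = calibration[finger]
        # Clamp so noise above/below the calibrated range stays in [0, 1].
        curls[finger] = min(max(value / maximum, 0.0), 1.0)
    return curls

cal = calibrate({"index": 800, "middle": 750})
print(finger_curl({"index": 400, "middle": 750}, cal))
# index half-curled, middle fully curled
```

Normalizing per finger is what makes the one-time fist gesture sufficient: it absorbs differences in hand size and grip without any per-user tuning beyond that single calibration.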
Since then, the general rules for tracking users in virtual environments have changed little: use a multi-sensor setup to define the virtual space, then track the points of interest on the HMD and controllers inside that space to reconstruct the player’s position in the virtual environment. In terms of advancements, however, the devices are getting better with every iteration: sharper screens, better tracking algorithms, and custom built-in sensors have begun appearing even from smaller vendors, making tracking in virtual reality much more seamless.
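The reconstruction step in that general rule can be illustrated with a toy two-dimensional triangulation. This is a deliberately simplified sketch: assume two tracking stations at known positions on a baseline, each measuring only the bearing angle to a single tracked point (say, one LED on the HMD). Intersecting the two rays recovers the point’s position; real systems solve the full 3D, multi-point, multi-sensor version of the same problem.

```python
import math

def triangulate(baseline, alpha_deg, beta_deg):
    """Station A sits at (0, 0), station B at (baseline, 0).

    alpha_deg / beta_deg are the angles (degrees) each station measures
    between the baseline and its ray toward the tracked point. Returns
    the (x, y) position where the two rays intersect.
    """
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    # From tan(a) = y/x and tan(b) = y/(baseline - x):
    y = baseline / (1.0 / math.tan(a) + 1.0 / math.tan(b))
    x = y / math.tan(a)
    return x, y

# Symmetric 45-degree bearings from stations 2 m apart put the point at
# (1, 1): one metre along and one metre out from the baseline.
print(triangulate(2.0, 45.0, 45.0))  # approximately (1.0, 1.0)
```

With more stations and more tracked points, the same geometry becomes an overdetermined system, which is what lets modern setups stay accurate even when some markers are occluded.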