To generate the VE's images according to the movements of the user's body, it is fundamental to rapidly acquire data about the different body positions in the 3-D space, and transmit them to the computer in order to elaborate the environment's modifications to be issued in response to the user's actions. This can be done by dedicated devices typically named Trackers, which are - as we said before - often integrated with HMDs.

The input technologies discussed here fall into two groups:

1. Technologies for tracking

2. Glove and body-suit technologies

1. Technologies for Tracking

Four technologies are used predominantly in tracking.

2. Glove and Body-Suit Technologies

Once an accurate representation of a virtual space is guaranteed, a mandatory requirement is the design of devices and instruments that can enable and accommodate movement. The classic solution is the 3-D mouse for navigation. To assess the benefits and limits of adopting a 3-D mouse, we need a brief review of basic geometry and some elementary mathematics.

An object can be defined mathematically by its position and orientation in space. This definition is achieved with a three-dimensional Cartesian co-ordinate system (or more simply, the X, Y and Z axes). This representation supports dynamic views, since the axes can be oriented in any manner.

Our location along each axis defines our relative position in space at a given time: left and right, also known as horizontal panning (X-axis); up and down, or vertical panning (Y-axis); in and out, or zooming (Z-axis). The angle of rotation about the X, Y and Z axes defines our relative orientation at a given time: pitch, or rotation about the X-axis; yaw, rotation about the Y-axis; roll, rotation about the Z-axis. Taken together, translation along and rotation about the three axes are what is referred to as six degrees of freedom.
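The position-plus-orientation description above can be sketched in a few lines of code. This is a minimal illustration, not any particular tracker's API: it builds a rotation matrix from pitch, yaw and roll (one common axis-order convention among several) and applies the full six-degree-of-freedom pose to a point.

```python
import math

def rotation_matrix(pitch, yaw, roll):
    """Build a 3x3 rotation matrix from Euler angles in radians:
    pitch = rotation about X, yaw = about Y, roll = about Z,
    composed as Rz(roll) @ Ry(yaw) @ Rx(pitch). The composition
    order is an illustrative convention, not a universal one."""
    cx, sx = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    cz, sz = math.cos(roll), math.sin(roll)
    return [
        [cz*cy, cz*sy*sx - sz*cx, cz*sy*cx + sz*sx],
        [sz*cy, sz*sy*sx + cz*cx, sz*sy*cx - cz*sx],
        [-sy,   cy*sx,            cy*cx],
    ]

def apply_pose(position, pitch, yaw, roll, point):
    """Transform a point from object space to world space:
    rotate, then translate -- all six degrees of freedom together."""
    r = rotation_matrix(pitch, yaw, roll)
    rotated = [sum(r[i][j] * point[j] for j in range(3)) for i in range(3)]
    return [rotated[i] + position[i] for i in range(3)]
```

For example, a 90-degree yaw turns a point on the +Z axis to face down the +X axis before the translation is added.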

This abstract technical description is well grounded in everyday life and in our common experience. As a matter of fact, we all live in a three-dimensional world. In this world, and without consciously thinking about it, we position and orient ourselves through all six degrees of freedom. Most importantly, we move about and control objects by simultaneously combining various movements and orientations. This is known as simultaneous six degrees of freedom, or S6DOF.

The combinations vary depending on the planned action, but the ability to control all six degrees of freedom at any point in time is critical, whether you are moving in the real world or in virtual space. Realism in virtual space is key, so to mimic real-world movement the user interface must provide simultaneous six-degrees-of-freedom control.

Given these characteristics, it appears that the mouse, the joystick and many other input devices cannot stand up to the demands of S6DOF. Using a mouse, the most that can be achieved, even with the best-designed mouse-driven interface, is two degrees of freedom simultaneously. To compensate for this two-degree-of-freedom limitation, application developers force their users to switch between various "modes", using a host of techniques such as holding down different mouse buttons, hitting various function keys, or clicking on on-screen icons. This is hardly an advantage: the pointing device becomes unintuitive, if not problematic.

A less demanding use of the 3-D mouse pairs it with software controls resembling buttons or cockpit instruments. For instance, some browsers require you to pick a "zoom" icon and roll the mouse to move in and out (zoom), then pick another icon such as a "rotate" icon to pitch, roll or yaw, then pick the "left/right/up/down" icon to pan to the desired location, and so on. Still other applications require you to hold down the right, left and/or middle mouse buttons to switch between panning, zooming and rotating.
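The modal interaction just described can be sketched as follows. The mode names, axis bindings and gain value here are illustrative assumptions, not a real application's interface; the point is that each mode exposes at most two of the six degrees of freedom.

```python
# A 2-DOF mouse can drive only two of the six degrees of freedom
# at a time, so the interface must switch between "modes".
MODES = {
    "pan":    ("x", "y"),        # translate left/right and up/down
    "zoom":   ("z", None),       # translate in/out only
    "rotate": ("pitch", "yaw"),  # orient about the X and Y axes
}

def apply_mouse_delta(pose, mode, dx, dy, gain=0.01):
    """Update a 6-DOF pose dict from one mouse movement. Only the
    two axes bound to the current mode change; the other four
    degrees of freedom stay frozen until the user switches mode."""
    axis_x, axis_y = MODES[mode]
    if axis_x is not None:
        pose[axis_x] += dx * gain
    if axis_y is not None:
        pose[axis_y] += dy * gain
    return pose
```

Starting from an all-zero pose, a diagonal mouse move in "pan" mode changes x and y while z, pitch, yaw and roll remain untouched, which is exactly the serial, robot-like movement the text criticises.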

The truth is that while most applications can claim support for all six degrees of freedom, it is physically impossible to control more than two simultaneously using a 2D device like a mouse. Instead of moving smoothly and intuitively through the 3D world, you move about in serial steps, like a robot. This takes too much time and, more importantly, requires unnatural, conscious effort to master movement and locomotion.

A true 3D input device, by contrast, can provide simultaneous six degrees of freedom. Its dedicated control mechanism makes it possible to fly effortlessly in any direction, twisting and turning, soaring and diving, all simultaneously. And while it might take a few minutes to grow confident with the device, we are soon navigating through 3D space without consciously thinking about how to do it.

In the past, full simultaneous six-degree-of-freedom 3D input devices were considered specialised. Owing to their relatively high cost, their use was limited to "high-end" VR and to mechanical engineers using workstation-based computer-aided design (CAD) applications. But just as 3D applications such as animation, design, action games and 3D Internet solutions have shifted to the personal computer, so too have 3D input devices. New technologies and manufacturing techniques are bringing the cost of these devices down to just above that of a high-end mouse or specialised 2D joystick, making them far more widely available.

The typical input device used in Virtual Environments is the "data glove", something like a hi-tech, hand-shaped pointing device. The data glove is put on the hand, and can then be 'seen' as a floating hand in the VE. It can be used to initiate commands. For example, in virtual spaces where gravity does not exist, pointing the glove upwards makes the person appear to fly; pointing downwards takes him or her safely back to the ground. In this regard, the virtual hand is like a cursor on a standard PC, able to execute commands by pointing at a particular icon and clicking. Glove devices measure the shape of the hand as the fingers and palm flex. Over the past decade, and especially in the last few years, many researchers have built hand and gesture measuring devices for computer input. Let us take a look at the main technologies that have appeared on the market.
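The fly/land example above amounts to mapping the glove's tracked pointing direction onto commands. A minimal sketch, with entirely hypothetical command names and an assumed Y-up coordinate convention:

```python
def glove_command(point_vector):
    """Map the direction the index finger points (an (x, y, z)
    tuple in world space, +Y up) onto a navigation command.
    Thresholds are illustrative assumptions."""
    x, y, z = point_vector
    if y > 0.7:        # pointing mostly upward: start flying
        return "fly"
    if y < -0.7:       # pointing mostly downward: return to ground
        return "land"
    return "idle"      # roughly level: no navigation command
```

A pose-recognition layer like this sits on top of whatever raw joint angles the glove hardware reports, much as a window system turns raw mouse coordinates into clicks on icons.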

The first possibility is to measure finger position using a mechanical exoskeleton mounted on the hand and connected to each finger. The linkages are such that the angles at their joints vary as the finger joint is bent. This change in angular position is measured by an array of Hall-effect sensors at the mechanical joints. The exoskeleton is mounted to the hand using Velcro strips, and because their position can vary, the glove must be recalibrated at the start of each session. The exoskeleton is also somewhat heavy, causing fatigue if used for extended periods.

Other gloves use a separate optical-fibre loop passed over each joint to be measured. Light shone from an LED passes down the fibre and has its intensity measured by a phototransistor at the other end. At the position of the joint, the wall of the fibre is treated so that no light is lost when the fibre is straight, but as the fibre is bent, increasing amounts of light are refracted away. Because the fibres are mounted on the glove, they can shift as the glove is put on and removed, requiring calibration at the beginning of each session.
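The per-session calibration the text describes can be sketched as recording the phototransistor reading with the finger straight and fully bent, then mapping raw readings linearly onto a bend fraction. The class and its numeric values are illustrative, not a real driver API:

```python
class FibreBendSensor:
    """Toy model of one optical-fibre joint sensor."""

    def calibrate(self, straight_reading, bent_reading):
        # More light reaches the phototransistor when the fibre is
        # straight; bending refracts light away, lowering intensity.
        self.straight = straight_reading
        self.bent = bent_reading

    def bend_fraction(self, reading):
        """Map a raw intensity reading onto 0.0 (straight) .. 1.0
        (fully bent), clamped to the calibrated range."""
        span = self.straight - self.bent
        frac = (self.straight - reading) / span
        return min(1.0, max(0.0, frac))
```

Because the fibres sit differently on the hand each time the glove is worn, `calibrate` must be re-run at the start of every session, exactly as the text notes.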

In a different approach, finger bend is measured at each joint using a pair of strain gauges. Strain gauges are thin-film devices whose resistance varies slightly with applied strain. Two of these are placed at each joint so that as the finger bends, one gauge is in compression and the other in tension. Connecting the strain gauges in a Wheatstone bridge circuit results in an output voltage linearly proportional to joint bend.
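The linear relationship can be made concrete with the standard half-bridge formula. The excitation voltage and gauge factor below are illustrative numbers (a gauge factor around 2 is typical of metal-foil gauges), not values from any particular glove:

```python
def half_bridge_output(excitation_v, gauge_factor, strain):
    """Output voltage of a two-gauge (half) Wheatstone bridge.
    With one gauge in tension and one in compression on opposite
    sides of the joint, their equal and opposite resistance
    changes add, giving Vout = Vex * GF * strain / 2 --
    linearly proportional to joint bend."""
    return excitation_v * gauge_factor * strain / 2.0
```

With 5 V excitation, a gauge factor of 2 and a strain of 0.1% (0.001), the bridge produces 5 mV, which is why the signal normally goes through an amplifier before digitisation.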

Finally, a low-cost technology can be implemented by coating a supportive substrate with a double layer of conductive ink containing many carbon particles. As the finger bends the substrate, the length of its surface increases, spacing out the carbon particles and increasing the sensor's resistance. If the sensor is bent in the opposite direction, the carbon particles are pushed closer together, reducing its resistance. The change in resistance is often much greater than with strain gauges, but the interface is similar, again using a Wheatstone bridge.

One problem with the glove is the need to provide a sense of touch, to increase the haptic experience of the wearer. Consider what happens when a human reaches out to grip a virtual object: although the virtual image shows him or her gripping the object, the human cannot feel any resistance to the tightening of the hand. Work is under way to achieve this illusion by making the glove resist further closure. Gloves are not without their difficulties. They can be tiring and often give users a feeling of artificiality. Indeed, some commentators question the future of the data glove as an input device, though there seems little alternative when it comes to sensory output to the hand.

Thus, the glove may well become more popular than it is now, once VR technology has advanced to the stage where the glove can be used to feel around the virtual world. Although our eyes and ears are important for gathering information about the world around us, there is no doubt that the hands come into their own when we actually come into contact with real-world objects. The tips of the fingers have the highest density of nerve endings anywhere in the body, and without a feeling of touch even the most photorealistic world would seem intangible as the user floated through it.

Commercial devices that add touch to virtual worlds have only the most rudimentary capabilities at present. These devices can be broadly separated into tactile feedback, force feedback and thermo feedback. The dividing line between tactile and force feedback is often blurred. Generally, tactile feedback provides information about the texture or roughness of an object's surface, whereas force feedback allows the user to feel the structure of the object. For example, if you were to hold a virtual tennis ball, tactile feedback would let you feel the fibres of the surface, whereas force feedback would let you feel a solid ball that squeezed a little under pressure. Thermo feedback simply lets you feel the temperature of the object; yet the change in temperature as a virtual finger touches a virtual object can significantly strengthen the feeling of immersion in a virtual world. Let us take a look at the available touch technologies.

Force Feedback

Force feedback systems allow you to feel the forces that a virtual object would exert against you. If you are holding a virtual brick, the force exerted on your hand is the brick's weight, the result of gravity acting on its mass. If you are doing a virtual handstand on a virtual table, the force exerted on your hands is the weight of your own body, and the force feedback system should be able to support you in the same way a real table would. Because all forces have to be generated in relation to something, current force feedback systems require an exoskeleton of mechanical linkages.
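The forces such a system must render can be sketched numerically. The weight of a held object is just mass times gravitational acceleration; a rigid virtual surface is commonly approximated with a stiff virtual spring that pushes back in proportion to how far the hand has penetrated it (the "penalty" method — an assumption here, since the text does not name the rendering scheme; the stiffness value is likewise illustrative):

```python
G = 9.81  # gravitational acceleration, m/s^2

def weight_force(mass_kg):
    """Downward force of a held virtual object, in newtons."""
    return mass_kg * G

def surface_force(penetration_m, stiffness_n_per_m=2000.0):
    """Restoring force of a virtual surface: zero until the hand
    crosses the surface, then spring-like resistance growing with
    penetration depth."""
    if penetration_m <= 0.0:
        return 0.0
    return stiffness_n_per_m * penetration_m
```

A 2 kg virtual brick should press on the hand with about 19.6 N, and pushing 5 mm into a 2000 N/m surface should be met with about 10 N of resistance, forces that the motors and cables of the exoskeleton must actually deliver.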

Hand-specific force feedback systems currently use one of two approaches to this exoskeleton. The first approach uses an exoskeleton mounted on the outside of the hand, similar to the ones used for electromechanical tracking. The linkages consist of a number of pulleys which are attached to small motors using long cables. The motors are mounted away from the hand to reduce weight, but can exert a force on various points of the fingers by pulling the appropriate cable. The second system consists of a set of small pneumatic pistons between the finger tips and a base plate on the palm of the hand. Forces can be applied to the finger tips only, by applying pressure to the pistons.

Because these systems reflect all their forces back to somewhere on the hand or wrist, they can allow you to grasp a virtual object and feel its shape, but cannot stop you from passing your hand through that object. To prevent this, the exoskeleton must be extended to a base mounted on the floor, via more linkages along the arm and body, or via an external system similar to a robot arm.

Tactile Feedback

The purpose of tactile feedback is to convey a sense of the texture of the surface of the object, and may also convey some impression of the surface geometry of the object. A number of technologies can be used to achieve this.


Vibro-Tactile Devices

Either voice coils or piezo-electric vibrators can be used to vibrate a surface against a finger tip at various frequencies. Single-frequency systems are simply activated when the finger makes contact with the surface of a virtual object, in order to indicate the collision. In other systems, the amplitude and/or frequency of the vibrations can be controlled to present impressions of different surfaces.


Electro-Tactile Devices

Similar to vibro-tactile devices, electro-tactile devices create vibrations of various frequencies and amplitudes in the finger tip. Unlike vibro-tactile systems, no moving parts are involved; instead, the sensation is caused by electrical impulses applied to the skin.

Micro Pin Arrays

Unlike the simple vibrations of the previous two methods, micro pin arrays can create the sensation of complex surface textures. These devices consist of a small matrix of miniature pins that can be individually extended onto the finger tip. One technique for actuating these pins is to connect them to memory metals that shrink when heated, pulling the pin up, and extend when cooled, allowing the pin to retract. An advantage these miniature pin arrays have over simple vibro-tactile devices is their ability to create the impression of the edges of object surfaces.
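Driving such an array amounts to mapping a small patch of the surface's texture onto per-pin extensions. A minimal sketch, with the height-map representation and the maximum travel as assumptions of this illustration:

```python
def pin_extensions(height_map, max_extension_mm=1.0):
    """Map a 2-D height map (values in 0..1, one entry per pin
    under the finger tip) onto pin extensions in millimetres.
    Abrupt changes between neighbouring pins are what convey
    the impression of a surface edge."""
    return [[h * max_extension_mm for h in row] for row in height_map]
```

A step edge in the height map, say a row of 0.0 values next to a row of 1.0 values, becomes a row of retracted pins beside a row of fully extended ones, which the finger tip reads as a ridge.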


Pneumatic Systems

The previous systems are designed to create a sense of surface texture at the finger tips, but give no impression of the shape of the object being touched. Pneumatic systems place miniature air pockets at strategic points in the glove. These air pockets can be individually expanded to a specific pressure to generate simple patterns emulating the forces felt when touching the real object. Although this works well when the hand is just touching the surface of the virtual object, there is nothing to stop the user from passing his fingers through the object and spoiling the illusion.

Thermo feedback

For virtual worlds where it is acceptable to touch only objects warmer than the hand, miniature heating coils can be used to create the correct temperature at the finger tips. In most cases, however, both positive and negative temperature differences have to be generated. To accomplish this, a miniature device known as a Peltier-effect heat pump can be used. The Peltier effect occurs when a current flows through the junction of two dissimilar conductors, and can either cool or heat the junction depending on the direction of the current. Practical Peltier-effect devices contain many such couples connected in series, where the dissimilar conductors are p- and n-doped semiconductors.

If a Peltier-effect heat pump is connected to a d.c. power supply, heat is absorbed at one face, making it cold, and rejected at the other, making it hot. The temperature differential between the two faces is proportional to the current flowing through the device, and reversing the current reverses the direction of heat flow. If the current is controlled by the virtual-environment computer, the appropriate temperature, hot or cold, can be generated at the finger tips in response to the virtual hand touching a virtual object.
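Since the temperature differential is proportional to the current and reversing the current reverses heating and cooling, the computer's control loop can be as simple as a proportional controller. The gain and current limit below are illustrative assumptions, not ratings of any real device:

```python
def peltier_current(target_c, measured_c, gain_a_per_c=0.2, max_a=2.0):
    """Proportional drive current (amperes) for the heat pump,
    clamped to the device's maximum rating. Positive current heats
    the contact face; negative current cools it, so a single signed
    value covers both warm and cold virtual objects."""
    current = gain_a_per_c * (target_c - measured_c)
    return max(-max_a, min(max_a, current))
```

When the virtual hand touches a cold virtual object, the target temperature drops below the measured finger-tip temperature and the controller drives a negative (cooling) current; at the setpoint the current falls to zero.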

Thanks to the latest advances in VR devices, users can now wear not just a glove but an entire lightweight body suit. The suit has optical-fibre cables or motion sensors at the major joints, allowing the VR computers to track the user's movements precisely. The more advanced suits, even those used in arcade-style VR games, can cost up to $20 000 (KEcu 16.9).
