September 7, 2009

What makes a virtual environment immersive?

What makes a virtual world or campus, immersive learning environment, or 3D business application immersive? Immersiveness isn't all or nothing, and it isn't determined by whether the software used is a Web browser or a thick client. Instead, it is a continuum determined by 1) the degree to which the user's senses are engaged, and 2) the desirability and meaningfulness of the activity in which the user is participating. Below is a description of the factors that make virtual environments or experiences more or less immersive: visual, tactile, auditory, and collaboration and interactivity (see Fig. 1). A virtual environment doesn't need to score high in all of these areas to be immersive, but the more "highs" it gets, the more immersive it is (see Fig. 2).

Fig 1: What makes a virtual environment immersive?

Fig 2: The Immersiveness Continuum

Each factor below is described at the low end and the high end of the immersiveness continuum.
Visual
Rich graphics
Low: The environment looks cartoony, or avatars look strange or move in a disconcerting way.
High: Realistic-looking lighting, shapes, textures, avatars, plants, etc. At the high end, graphics are photorealistic. Or, for abstract experiences (e.g., chemistry and mathematics), the visuals contain a high level of detailed information.
Avatars
Low: Users do not have graphical representations of themselves in the environment.
High: Users have configurable or customizable avatars with which they identify.
3D environment
Low: Much or all of the environment comprises 2D images.
High: The environment uses three-dimensional representations of geometric data. Avatars and objects occupy, and can move through, 3D space.
Ability to control viewpoint
Low: The user's viewpoint into the environment is static or limited to a few pre-selected perspectives.
High: The user has full control over their visual focus in the environment and can zoom and pan in all directions (see the camera sketch below).
Physics
Low: No physics engine, or a very basic one.
High: A sophisticated physics engine simulates properties like mass, velocity, gravity, friction, and wind resistance. The environment also simulates weather and supports collision detection (see the physics sketch below).
Size of display
Low: The display fills only part of the user's computer screen.
High: The display fills the user's entire computer screen.
Tactile
Haptics
Low: No support for haptic devices.
High: The user experiences the environment through the sense of touch, via a controller or input device. Through a handheld device, glove, etc., the user feels vibrations, forces, pressure, or motion. The Wii controller is one example.
Auditory
Voice
Low: No built-in voice over IP. Or, if the system has VoIP, it is not spatialized; instead, it sounds like a phone call.
High: Spatialized, 3D audio. When an avatar is standing to your avatar's left, you hear that person's voice in your left speaker, and voices of those whose avatars are closer to yours are louder than those farther away (see the audio sketch below). At the high end, voice colorization allows users to modify the way others' voices sound, making it easier to differentiate among speakers.
Non-voice sounds
Low: Sound is mono.
High: Sounds are stereo and spatialized.
Collaboration and Interactivity
Integrated collaboration, communication, and productivity tools
Low: The environment lacks functionality like built-in voice, screen sharing, and collaborative document editing, requiring people to leave the environment (e.g., using the ALT-TAB key combination on a PC to switch applications) to get their work done.
High: Within the environment, participants can communicate with each other via public or private voice chat; local, group, or private text chat; messaging; document and object sharing; screen sharing; etc. The applications and information the user needs to complete a task (e.g., hold a meeting, deliver a presentation, collaborate on a model) are accessible from, and can be displayed within, the virtual environment (e.g., via screen sharing or real-time document editing).
Gesture and emotion
Low: Avatars do not lip sync, the ability to express emotion visually is limited, and gestures are basic.
High: Avatars lip sync while users are talking, and users can express emotion visually through their avatars. Today this is usually done by clicking on a menu of icons, but in the future it will become more natural through the use of cameras, which will project the user's movements and expressions onto an avatar.
Interactivity
Low: Objects in the environment are static.
High: Using the mouse or another input device, the user can click on an object to display an item or change the way an item behaves. The user can flip switches to rev up a turbine, sit in the driver's seat and operate a vehicle, etc. (see the interactivity sketch below).
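
To make the viewpoint-control factor concrete, here is a minimal sketch of an orbit camera that supports zoom, pan, and free rotation. It is illustrative only; the class and method names are hypothetical and not drawn from any particular engine.

```python
import math

class OrbitCamera:
    """A simple orbit camera: it circles a target point and can zoom and pan."""

    def __init__(self, target=(0.0, 0.0, 0.0), distance=10.0):
        self.target = list(target)   # the point the camera looks at
        self.distance = distance     # zoom: distance from the target
        self.yaw = 0.0               # horizontal rotation, in radians
        self.pitch = 0.3             # vertical rotation, in radians

    def zoom(self, amount):
        # Move closer or farther, clamped so the camera never passes the target.
        self.distance = max(0.5, self.distance - amount)

    def pan(self, dx, dy):
        # Slide the focal point within the camera's horizontal plane of view.
        self.target[0] += dx * math.cos(self.yaw) - dy * math.sin(self.yaw)
        self.target[2] += dx * math.sin(self.yaw) + dy * math.cos(self.yaw)

    def orbit(self, d_yaw, d_pitch):
        # Rotate around the target in any direction, clamping the tilt.
        self.yaw = (self.yaw + d_yaw) % (2 * math.pi)
        self.pitch = max(-1.5, min(1.5, self.pitch + d_pitch))

    def position(self):
        # Convert spherical coordinates to a world-space camera position.
        x = self.target[0] + self.distance * math.cos(self.pitch) * math.sin(self.yaw)
        y = self.target[1] + self.distance * math.sin(self.pitch)
        z = self.target[2] + self.distance * math.cos(self.pitch) * math.cos(self.yaw)
        return (x, y, z)
```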
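
The physics factor describes simulating mass, velocity, gravity, friction, and wind resistance. The sketch below shows one crude physics time step using Euler integration; real engines use far more accurate integrators, broad-phase collision detection, and constraint solvers, and every constant here is illustrative.

```python
GRAVITY = -9.81          # m/s^2, along the y axis
AIR_DRAG = 0.02          # crude wind-resistance coefficient
GROUND_FRICTION = 0.15   # crude sliding-friction coefficient

def step(body, dt):
    """Advance one rigid body by dt seconds.

    body is a dict with 'pos' and 'vel' (3-element [x, y, z] lists) and 'mass'.
    """
    # Gravity accelerates every body equally, regardless of mass.
    body["vel"][1] += GRAVITY * dt
    # Drag opposes motion in proportion to velocity.
    for i in range(3):
        body["vel"][i] *= (1.0 - AIR_DRAG * dt)
    # Integrate position from velocity.
    for i in range(3):
        body["pos"][i] += body["vel"][i] * dt
    # Extremely simplified ground collision: stop at y = 0 and apply friction.
    if body["pos"][1] < 0.0:
        body["pos"][1] = 0.0
        body["vel"][1] = 0.0
        body["vel"][0] *= (1.0 - GROUND_FRICTION)
        body["vel"][2] *= (1.0 - GROUND_FRICTION)

ball = {"pos": [0.0, 5.0, 0.0], "vel": [2.0, 0.0, 0.0], "mass": 1.0}
for _ in range(100):
    step(ball, 1.0 / 60.0)   # simulate at 60 frames per second
```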
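
The two spatialization effects in the voice factor, left/right placement and distance-based attenuation, can be approximated with simple stereo panning. The sketch below is a deliberate simplification under those two assumptions; production systems use HRTFs and dedicated 3D audio engines.

```python
import math

def stereo_gains(listener_pos, listener_yaw, speaker_pos):
    """Return (left_gain, right_gain) in 0..1 for one speaking avatar.

    Positions are (x, z) ground-plane coordinates; yaw is the listener's
    facing direction in radians.
    """
    dx = speaker_pos[0] - listener_pos[0]
    dz = speaker_pos[1] - listener_pos[1]
    distance = math.hypot(dx, dz)

    # Louder when closer: simple inverse-distance falloff.
    attenuation = 1.0 / (1.0 + distance)

    # Angle of the speaker relative to where the listener is facing.
    angle = math.atan2(dx, dz) - listener_yaw
    # pan = -1 means fully left, +1 means fully right.
    pan = max(-1.0, min(1.0, math.sin(angle)))

    # Constant-power panning keeps perceived loudness stable across the arc.
    left = attenuation * math.cos((pan + 1.0) * math.pi / 4.0)
    right = attenuation * math.sin((pan + 1.0) * math.pi / 4.0)
    return left, right

# An avatar standing nearby, to the listener's left:
print(stereo_gains((0.0, 0.0), 0.0, (-2.0, 0.0)))  # left gain > right gain
```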
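
Finally, the interactivity factor's switch-and-turbine example might look like this in code, assuming the environment dispatches click events to whatever object sits under the user's pointer. All names are hypothetical.

```python
class InteractiveObject:
    """Base class for anything in the world that responds to a click."""

    def on_click(self, user):
        raise NotImplementedError

class Turbine:
    def __init__(self):
        self.running = False

class TurbineSwitch(InteractiveObject):
    def __init__(self, turbine):
        self.turbine = turbine

    def on_click(self, user):
        # Clicking toggles the switch, which changes the turbine's behavior.
        self.turbine.running = not self.turbine.running
        state = "spinning up" if self.turbine.running else "winding down"
        print(f"{user} flipped the switch; the turbine is {state}.")

turbine = Turbine()
switch = TurbineSwitch(turbine)
switch.on_click("avatar_42")   # -> the turbine is spinning up
```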

Erica Driver

Erica Driver was a co-founder and principal at ThinkBalm. She is a leading industry analyst and consultant with 15 years of experience in the software industry. She is quoted in mainstream and industry trade press including the Boston Globe, The Wall Street Journal, The New York Times, CIO, and Computerworld. Prior to co-founding ThinkBalm, Erica was a principal analyst at Forrester Research, where she launched the company’s Web3D coverage as part of her enterprise collaboration research. She was also the co-conspirator behind Forrester’s Information Workplace concepts and research. Prior to her tenure at Forrester, she was a Director at Giga Information Group (now part of Forrester) and an analyst at Hurwitz & Associates. She began her career in IT as a system administrator and Lotus Notes developer. Erica is a graduate of Harvard University.
