The Situation Engine: a hyper-immersive platform for construction workplace simulation and learning
Newton S, Lowe R, Kember R, Wang R, Davey S
The prospect of placing an individual within an entirely interactive, simulated environment has long been anticipated, but only recently has it been realized. Flight simulators were the first to provide a hyper-immersive experience, using a combination of very detailed and accurate models of aircraft systems, high-resolution visualization and motion platforms. More recently, advanced video game technologies have been coupled with augmented reality systems and sophisticated tracking technologies to provide hyper-immersive experiences of battlefield conditions, crime scenes, operating theatres, industrial processes and the like. A key problem for developers of any hyper-immersive environment is the significant overhead cost of modeling, programming, display technologies and motion simulation. The Situation Engine is an application platform that makes specific, managed building and construction experiences available using low-cost, advanced digital technologies. The same engine can drive a multitude of learning situations. Multiple users collectively occupy the same simulated workplace, but each experiences that situation individually through their own movement through the space. Head tracking, gesture recognition, voice communication, 3D head-mounted displays, location-based sound and embedded learning resources have all been incorporated into the Situation Engine at minimal cost; the total enabling technology cost per participant is currently around AU$600. This paper focuses on the hyper-immersive nature of the Situation Engine. In particular, the distinction between immersion (a quantitative measure of sensory fidelity) and presence (a qualitative perception of ‘being there’) is articulated and clarified. The paper also highlights one of the various ways in which hyper-immersion is manifested in the Situation Engine: gestural control. Gestural control has been implemented using a Microsoft Kinect™ and proprietary gesture detection algorithms to monitor a range of gestures in parallel, including gestures that are context dependent.
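The idea of monitoring several gestures in parallel, with some gestures enabled only in certain contexts, can be sketched as follows. This is a minimal illustration only: the class names, the frame format (a plain dict of joint heights) and the context labels are all assumptions for the sketch, not the paper's actual Kinect-based algorithms.

```python
# Hypothetical sketch: parallel, context-dependent gesture detection.
# A real system would consume Kinect skeleton frames; here a frame is
# simply a dict mapping joint names to heights in metres.

class Gesture:
    def __init__(self, name, predicate, contexts=None):
        self.name = name
        self.predicate = predicate   # frame -> bool
        self.contexts = contexts     # None = active in every context

    def active_in(self, context):
        return self.contexts is None or context in self.contexts


class GestureMonitor:
    """Evaluates every registered gesture against each incoming frame
    (logically in parallel), filtered by the current context."""

    def __init__(self):
        self.gestures = []

    def register(self, gesture):
        self.gestures.append(gesture)

    def process(self, frame, context):
        # Return the names of all gestures that fire on this frame.
        return [g.name for g in self.gestures
                if g.active_in(context) and g.predicate(frame)]


# Two illustrative gestures: one global, one context dependent.
raise_hand = Gesture("raise_hand",
                     lambda f: f["right_hand"] > f["head"])
crouch = Gesture("crouch",
                 lambda f: f["head"] < 1.0,
                 contexts={"site_walkthrough"})

monitor = GestureMonitor()
monitor.register(raise_hand)
monitor.register(crouch)

frame = {"right_hand": 1.9, "head": 0.9}
print(monitor.process(frame, "site_walkthrough"))  # → ['raise_hand', 'crouch']
print(monitor.process(frame, "menu"))              # → ['raise_hand']
```

Because each gesture is an independent predicate plus an optional context set, new gestures can be added without touching the monitoring loop, and the same physical motion can mean different things in different parts of a simulation.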
Year of publication: 2013
Keywords: Simulation, Hyper-Immersion, Cost, Situation Engine, Gestural Control
Newton S, Lowe R, Kember R, Wang R, Davey S (2013). The Situation Engine: a hyper-immersive platform for construction workplace simulation and learning. CONVR 2013.