
Experiences in mixed reality-based collocated after action review

  • Original Article
  • Published in Virtual Reality

Abstract

After action review (AAR) is a widely used training practice in which trainees and trainers review past training experiences and performance for the purpose of learning. AAR has often been conducted with video-based systems whereby a video of the action is reviewed afterward, usually at another location. This paper proposes collocated AAR of training experiences through mixed reality (MR). Collocated AAR allows users to review past training experiences in situ with the user’s current, real-world experience, i.e., the AAR is conducted at the same location where the action being reviewed occurred. MR enables a user-controlled egocentric viewpoint, augmentation such as a visual overlay of virtual information like conceptual visualizations, and playback of recorded training experiences collocated with the user’s current experience or that of an expert. Collocated AAR presents novel challenges for MR, such as collocating time, interactions, and visualizations of previous and current experiences. We created a collocated AAR system for anesthesia education: the augmented anesthesia machine visualization and interactive debriefing system. The system enables collocated AAR in two applications related to anesthesia training: anesthesia machine operation training and skin disinfection training with a mannequin patient simulator. Collocated AAR was evaluated in two informal pilot studies by students (n = 19) and an educator (n = 1) not directly affiliated with the project. We review the anecdotal data collected from the studies and point toward ways to refine and improve collocated AAR.
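To illustrate the "collocating time" challenge the abstract mentions, the sketch below shows one way a recorded training session could be mapped onto the trainee's current, live clock so the replay can be rendered in situ alongside the current experience. This is a minimal, hypothetical Python sketch, not the paper's implementation; all names (RecordedEvent, PlaybackClock, events_up_to) are illustrative assumptions.

```python
# Hypothetical sketch of collocated AAR playback timing: align a previously
# recorded session's timeline with the user's current (live) clock so recorded
# events can be overlaid in the same physical space during review.

from dataclasses import dataclass
from bisect import bisect_right
from typing import Optional
import time

@dataclass
class RecordedEvent:
    t: float        # seconds since the start of the recorded session
    label: str      # e.g. "increased O2 flow", "touched non-sterile zone"
    pose: tuple     # recorded hand/object pose in the shared room frame

@dataclass
class PlaybackClock:
    """Maps live wall-clock time onto the recorded session's timeline."""
    speed: float = 1.0                   # replay speed multiplier
    _offset: float = 0.0                 # recorded time when playback (re)started
    _started_at: Optional[float] = None  # live time when playback (re)started

    def play(self, from_t: float = 0.0) -> None:
        self._offset = from_t
        self._started_at = time.monotonic()

    def pause(self) -> None:
        self._offset = self.now()
        self._started_at = None

    def now(self) -> float:
        if self._started_at is None:
            return self._offset
        return self._offset + self.speed * (time.monotonic() - self._started_at)

def events_up_to(events: list[RecordedEvent], t: float) -> list[RecordedEvent]:
    """All recorded events (sorted by t) that should be visible at recorded time t."""
    times = [e.t for e in events]
    return events[:bisect_right(times, t)]

# Usage: replay a short recorded action collocated with the live session.
session = [
    RecordedEvent(0.5, "grasped O2 flowmeter knob", (0.12, 0.80, 0.45)),
    RecordedEvent(2.0, "set O2 flow to 2 L/min", (0.12, 0.80, 0.45)),
]
clock = PlaybackClock()
clock.play()
time.sleep(1.0)
for e in events_up_to(session, clock.now()):
    print(f"overlay at {e.pose}: {e.label}")  # stand-in for the MR render call
```

A scrub or rewind control would simply call play(from_t=...) with a new recorded-time offset; because recorded events carry poses in the shared room frame, the overlay lands where the original action took place.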



Acknowledgments

This research was supported in part by National Science Foundation Grant IIS-0643557. Special thanks go to Nikolaus Gravenstein, David Lizdas, Andrew Raij, Kyle Johnsen, Cynthia Kaschub, and the study participants. Some of the technology described here is the subject of a pending UF patent application.

Author information


Corresponding author

Correspondence to John Quarles.


About this article

Cite this article

Quarles, J., Lampotang, S., Fischler, I. et al. Experiences in mixed reality-based collocated after action review. Virtual Reality 17, 239–252 (2013). https://doi.org/10.1007/s10055-013-0229-6

