research-article
DOI: 10.1145/3385959.3418448

Evaluating Interaction Cue Purpose and Timing for Learning and Retaining Virtual Reality Training

Published: 30 October 2020

ABSTRACT

Interaction cues inform users about potential actions to take. Tutorials, games, educational systems, and training applications often employ interaction cues to direct users to take specific actions at particular moments. Prior studies have investigated many aspects of interaction cues, such as the feedforward and perceived affordances that often accompany them. However, two less-researched aspects of interaction cues are their purpose (i.e., the type of task conveyed) and their timing (i.e., when they are presented). In this paper, we present a study that evaluates the effects of interaction cue purpose and timing on performance while learning and retaining tasks with a virtual reality (VR) training application. Our results indicate that participants retained manipulation tasks significantly better than travel or selection tasks, despite the travel and selection tasks being significantly easier to complete than the manipulation tasks. Our results also indicate that immediate interaction cues afforded significantly faster learning and better retention than delayed interaction cues.

  • Published in

    SUI '20: Proceedings of the 2020 ACM Symposium on Spatial User Interaction
    October 2020, 188 pages
    ISBN: 9781450379434
    DOI: 10.1145/3385959

    Copyright © 2020 ACM


    Publisher: Association for Computing Machinery, New York, NY, United States

    Qualifiers: research-article, refereed limited

    Acceptance Rates: SUI overall acceptance rate of 86 of 279 submissions, 31%

