DOI: 10.1145/3543174.3545257
Research Article · Public Access

Gesture and Voice Commands to Interact With AR Windshield Display in Automated Vehicle: A Remote Elicitation Study

Published: 17 September 2022

ABSTRACT

Augmented reality (AR) windshield displays (WSDs) offer promising ways to engage in non-driving tasks in automated vehicles. Previous studies have explored how a WSD can present driving-related and other task-related information, and how that presentation affects driving performance, user experience, and secondary-task performance. Our goal in this study was to examine how drivers expect to use gesture and voice commands to interact with a WSD while performing complex, multi-step personal and work-related tasks in an automated vehicle. In this remote, unmoderated online elicitation study, 31 participants proposed 373 gestures and 373 voice commands for performing 24 tasks. We analyzed the elicited interactions, participants' preferred modality of interaction, and the reasons behind this preference. Lastly, we discuss our results and their implications for designing AR WSDs in automated vehicles.


Published in

AutomotiveUI '22: Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
September 2022, 371 pages
ISBN: 9781450394154
DOI: 10.1145/3543174
Copyright © 2022 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

Overall acceptance rate: 248 of 566 submissions, 44%
