A Spiral into the Mind: Gaze Spiral Visualization for Mobile Eye Tracking

Published: 17 May 2022

Abstract

Comparing mobile eye tracking data from multiple participants without information about areas of interest (AOIs) is challenging because of individual timing and coordinate systems. We present a technique, the gaze spiral, that visualizes individual recordings based on the image content of the stimulus. The spiral layout of the slit-scan visualization is used to create a compact representation of scanpaths. The visualization provides an overview of multiple recordings even for long time spans and helps identify and annotate recurring patterns within recordings. The gaze spirals can also serve as glyphs that can be projected to 2D space based on established scanpath metrics in order to interpret the metrics and identify groups of similar viewing behavior. We present examples based on two egocentric datasets to demonstrate the effectiveness of our approach for annotation and comparison tasks. Our examples show that the technique has the potential to let users compare even long-term recordings of pervasive scenarios without manual annotation.
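To make the layout idea concrete, here is a minimal sketch, not the authors' implementation: one gaze-centered image slit per time step is placed along an Archimedean spiral with roughly unit arc-length spacing, so that a long recording condenses into a compact square glyph. The function name `gaze_spiral`, the synthetic `slits` array, and the margin and growth parameters are illustrative assumptions.

```python
import numpy as np
from PIL import Image

def gaze_spiral(slits: np.ndarray, size: int = 512) -> Image.Image:
    """Lay out gaze-centered slits, shape (T, H, 3) uint8, along an
    Archimedean spiral r = b * theta, one slit per time step."""
    T, H, _ = slits.shape
    R = size / 2 - 2 * H            # outermost radius, leaving a margin
    b = R * R / (2 * T)             # growth rate so T unit arc steps span R
    canvas = np.full((size, size, 3), 255, dtype=np.uint8)
    c = size // 2
    offsets = np.arange(H) - H / 2  # radial extent of a single slit
    for t in range(T):
        # theta chosen so consecutive slits are ~1 pixel apart along the arc
        theta = np.sqrt(2 * (t + 1) / b)
        r = b * theta + H           # small inner hole keeps turns separated
        xs = np.clip(np.round(c + (r + offsets) * np.cos(theta)).astype(int),
                     0, size - 1)
        ys = np.clip(np.round(c + (r + offsets) * np.sin(theta)).astype(int),
                     0, size - 1)
        canvas[ys, xs] = slits[t]   # paint the slit along the radial direction
    return Image.fromarray(canvas)

# Toy input: random colors standing in for real gaze-centered slit samples.
demo = gaze_spiral(np.random.randint(0, 256, (5000, 16, 3), dtype=np.uint8))
demo.save("gaze_spiral_demo.png")
```

In the paper's pipeline the slits would instead be cut from the egocentric scene video around the detected gaze point; a full implementation would also have to choose the growth rate so that consecutive windings never overlap for arbitrary recording lengths and slit heights.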


Supplemental Material

• Published in

  Proceedings of the ACM on Computer Graphics and Interactive Techniques, Volume 5, Issue 2
  May 2022
  66 pages
  EISSN: 2577-6193
  DOI: 10.1145/3538410

      Copyright © 2022 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 17 May 2022
      • Published in pacmcgit Volume 5, Issue 2


      Qualifiers

      • research-article
      • Research
      • Refereed
