DOI: 10.1145/1357054.1357179

Wedge: clutter-free visualization of off-screen locations

Published: 06 April 2008

ABSTRACT

To overcome display limitations of small-screen devices, researchers have proposed techniques that point users to objects located off-screen. Arrow-based techniques such as City Lights convey only direction. Halo conveys direction and distance, but is susceptible to clutter resulting from overlapping halos. We present Wedge, a visualization technique that conveys direction and distance, yet avoids overlap and clutter. Wedge represents each off-screen location using an acute isosceles triangle: the tip coincides with the off-screen location, and the two corners are located on-screen. A wedge conveys location awareness primarily by means of its two legs pointing towards the target. Wedges avoid overlap programmatically by repelling each other, causing them to rotate until overlap is resolved. As a result, wedges can be applied to numbers and configurations of targets that would lead to clutter if visualized using halos. We report on a user study comparing Wedge and Halo for three off-screen tasks. Participants were significantly more accurate when using Wedge than when using Halo.
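
The geometry described above can be made concrete with a small sketch. The following Python fragment is an illustration only, not the authors' implementation: the viewport representation, the intrusion and aperture_deg parameters, and the simple pairwise repulsion loop are assumptions made for the example. It computes the two on-screen base corners of a wedge whose tip sits at an off-screen target, and rotates overlapping wedges apart in the spirit of the repulsion behaviour the abstract describes.

    import math

    def wedge_corners(target, screen_rect, intrusion=30.0, aperture_deg=20.0, rotation=0.0):
        """Return the two base corners of a wedge whose tip is at an off-screen target.

        target       -- (x, y) of the off-screen location (the wedge tip)
        screen_rect  -- (xmin, ymin, xmax, ymax) of the visible viewport
        intrusion    -- assumed distance (px) the legs reach into the viewport
        aperture_deg -- assumed opening angle of the isosceles triangle at the tip
        rotation     -- extra rotation (radians) around the tip, set by repel() below
        """
        xmin, ymin, xmax, ymax = screen_rect
        tx, ty = target
        cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0

        # Aim the wedge from its tip toward the viewport centre, then apply any
        # repulsion rotation used to resolve overlap between neighbouring wedges.
        base_angle = math.atan2(cy - ty, cx - tx) + rotation
        half_aperture = math.radians(aperture_deg) / 2.0

        # Leg length: roughly the tip-to-viewport distance plus the intrusion,
        # so both base corners land inside the visible area.
        dist_to_edge = max(xmin - tx, tx - xmax, ymin - ty, ty - ymax, 0.0)
        leg = dist_to_edge + intrusion

        return [(tx + leg * math.cos(base_angle + s * half_aperture),
                 ty + leg * math.sin(base_angle + s * half_aperture))
                for s in (-1.0, 1.0)]

    def repel(rotations, overlapping_pairs, step=math.radians(2.0), iterations=100):
        """Rotate overlapping wedges away from each other until no overlap remains.

        rotations         -- mutable list of per-wedge rotation offsets (radians)
        overlapping_pairs -- callback returning index pairs of wedges that overlap
        """
        for _ in range(iterations):
            pairs = overlapping_pairs(rotations)
            if not pairs:
                break
            for i, j in pairs:
                rotations[i] -= step
                rotations[j] += step
        return rotations

    # Example: a target 120 px to the right of a 320x240 viewport.
    corners = wedge_corners((440.0, 120.0), (0.0, 0.0, 320.0, 240.0))

In a full renderer the legs would be drawn from the two corners toward the (off-screen) tip and their length would also encode distance, as the abstract notes; that rendering step is outside the scope of this sketch.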

References

  1. Baudisch, P. and Rosenholtz, R. (2003). Halo: A technique for visualizing off-screen locations. Proc. CHI 2003, 481--488.
  2. Baudisch, P., Good, N., Bellotti, V., and Schraedley, P. (2002). Keeping things in context: A comparative evaluation of focus plus context screens, overviews, and zooming. Proc. CHI 2002, 259--266.
  3. Bederson, B. B., Hollan, J. D., Perlin, K., Meyer, J., Bacon, D., and Furnas, G. (1996). Pad++: A zoomable graphical sketchpad for exploring alternate interface physics. Journal of Visual Languages and Computing, 7, 3--31.
  4. Burigat, S., Chittaro, L., and Gabrielli, S. (2006). Visualizing locations of off-screen objects on mobile devices: A comparative evaluation of three approaches. Proc. MobileHCI 2006, 239--246.
  5. Carpendale, M. S. T. and Montagnese, C. (2001). A framework for unifying presentation space. Proc. UIST 2001, 61--70.
  6. Elder, J. and Zucker, S. (1993). The effect of contour closure on the rapid discrimination of two-dimensional shapes. Vision Research, 33(7), 981--991.
  7. Gustafson, S. and Irani, P. (2007). Comparing visualizations for tracking off-screen moving targets. Proc. CHI 2007, 2399--2404.
  8. Guttman, S. E. and Kellman, P. J. (2002). Do spatial factors influence the microgenesis of illusory contours? Journal of Vision, 2, 355a.
  9. Hornbæk, K. and Frøkjær, E. (2001). Reading of electronic documents: The usability of linear, fisheye, and overview+detail interfaces. Proc. CHI 2001, 293--300.
  10. Irani, P., Gutwin, C., and Yang, X. D. (2006). Improving selection of off-screen targets with hopping. Proc. CHI 2006, 299--308.
  11. Lam, H. and Baudisch, P. (2005). Summary Thumbnails: Readable overviews for small screen web browsers. Proc. CHI 2005, 681--690.
  12. Mackinlay, J. D., Good, L., Zellweger, P. T., Stefik, M., and Baudisch, P. (2003). City Lights: Contextual views in minimal space. Proc. CHI 2003, 838--839.
  13. Marsh, T. and Wright, P. (2000). Using cinematography conventions to inform guidelines for the design and evaluation of virtual off-screen space. Proc. AAAI 2000 Spring Symp. Ser. Smart Graphics, 123--127.
  14. Murray, R., Sekuler, A., and Bennett, P. (2001). Time course of amodal completion revealed by a shape discrimination task. Psychonomic Bulletin & Review, 8, 713--720.
  15. Nacenta, M., Subramanian, S., Sallam, S., Champoux, B., and Gutwin, C. (2006). Perspective Cursor: Perspective-based interaction for multi-display environments. Proc. CHI 2006, 289--298.
  16. Nekrasovski, D., Bodnar, A., McGrenere, J., Guimbretière, F., and Munzner, T. (2006). An evaluation of pan & zoom and rubber sheet navigation with and without an overview. Proc. CHI 2006, 11--20.
  17. Rohs, M. and Essl, G. (2006). Which one is better? - Information navigation techniques for spatially aware handheld displays. Proc. ICMI 2006, 100--107.
  18. Sarkar, M. and Brown, M. (1992). Graphical fisheye views of graphs. Proc. CHI 1992, 83--91.
  19. Sekuler, A. and Murray, R. (2001). Amodal completion: A case study in grouping. In T. Shipley & P. Kellman (Eds.), From Fragments to Objects: Segmentation and Grouping in Vision. New York: Elsevier, 265--293.
  20. Sekuler, A. and Palmer, S. (1992). Perception of partly occluded objects: A microgenetic analysis. Journal of Experimental Psychology: General, 121, 95--111.
  21. Sekuler, A., Palmer, S., and Flynn, C. (1994). Local and global processes in visual completion. Psychological Science, 5, 260--267.
  22. Shore, D. and Enns, T. (1997). Shape completion time depends on the size of the occluded region. Journal of Experimental Psychology: Human Perception and Performance, 23, 980--998.
  23. Skopik, A. and Gutwin, C. (2005). Improving revisitation in fisheye views with visit wear. Proc. CHI 2005, 771--780.
  24. Ware, C. and Lewis, M. (1995). The DragMag image magnifier. Proc. CHI 1995, 407--408.

    Published in

      CHI '08: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
      April 2008
      1870 pages
      ISBN: 9781605580111
      DOI: 10.1145/1357054

      Copyright © 2008 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Qualifiers

      • research-article

      Acceptance Rates

      CHI '08 Paper Acceptance Rate: 157 of 714 submissions, 22%. Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%.
