
Tracking the Saliency Features in Images Based on Human Observation Statistics

Conference paper: Computational Intelligence for Multimedia Understanding (MUSCLE 2011)

Abstract

We address the statistical inference of saliency features in images based on human eye-tracking measurements. Training videos were recorded with a head-mounted wearable eye-tracker, and the position of each eye fixation relative to the recorded image was annotated. From the same video records, artificial saliency points (SIFT keypoints) were extracted by computer vision algorithms and clustered, so that each image is described by a manageable number of descriptors. The measured human fixation patterns and the estimated saliency points are fused in a statistical model, in which the eye-tracking data supplies the transition probabilities among the possible image feature points. This statistical model of the human visual system (HVS) yields estimates of likely gaze paths and region-of-interest areas. The proposed method may help in image saliency analysis, in better compression of region-of-interest areas, and in the development of more efficient human-computer interaction devices.
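
The abstract outlines a pipeline of keypoint extraction, clustering, and fixation-driven transition estimation. Below is a minimal sketch of that idea, not the authors' implementation: it detects SIFT keypoints with OpenCV, clusters their descriptors with k-means, and estimates a first-order transition matrix over the clusters from a recorded fixation sequence. The function names, parameter values, and the nearest-keypoint assignment rule are all illustrative assumptions.

```python
# Minimal sketch of the pipeline described above -- NOT the authors' code.
# Assumptions (illustrative only): OpenCV SIFT for keypoints, k-means for
# clustering, nearest-keypoint assignment of fixations, and first-order
# (Markov) transition counts between consecutive fixations.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def cluster_sift_keypoints(image_bgr, n_clusters=16):
    """Detect SIFT keypoints and cluster their descriptors into a
    manageable number of groups; return labels and keypoint positions."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = cv2.SIFT_create().detectAndCompute(gray, None)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit(descriptors).labels_
    positions = np.array([kp.pt for kp in keypoints])  # (x, y) per keypoint
    return labels, positions

def fixation_transition_matrix(fixations_xy, positions, labels, n_clusters=16):
    """Assign each fixation to the cluster of its nearest keypoint and
    count transitions between consecutive fixations, row-normalized so
    each row is a probability distribution over the next cluster."""
    dists = np.linalg.norm(fixations_xy[:, None, :] - positions[None, :, :], axis=2)
    fix_labels = labels[np.argmin(dists, axis=1)]
    T = np.zeros((n_clusters, n_clusters))
    for a, b in zip(fix_labels[:-1], fix_labels[1:]):
        T[a, b] += 1.0
    rows = T.sum(axis=1, keepdims=True)
    return np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)
```

Applying the transition matrix repeatedly (or inspecting its stationary distribution) would highlight clusters where gaze tends to dwell, i.e., candidate region-of-interest areas.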

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Szalai, S., Szirányi, T., Vidnyanszky, Z. (2012). Tracking the Saliency Features in Images Based on Human Observation Statistics. In: Salerno, E., Çetin, A.E., Salvetti, O. (eds) Computational Intelligence for Multimedia Understanding. MUSCLE 2011. Lecture Notes in Computer Science, vol 7252. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32436-9_19

  • DOI: https://doi.org/10.1007/978-3-642-32436-9_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-32435-2

  • Online ISBN: 978-3-642-32436-9

  • eBook Packages: Computer Science; Computer Science (R0)
