Advanced Statistical Methods for Eye Movement Analysis and Modelling: A Gentle Introduction

Chapter in Eye Movement Research

Abstract

In this Chapter we consider eye movements and, in particular, the resulting sequence of gaze shifts to be the observable outcome of a stochastic process. Crucially, we show that, under such an assumption, a wide variety of tools become available for analyses and modelling beyond conventional statistical methods. Such tools encompass random walk analyses and more complex techniques borrowed from the Pattern Recognition and Machine Learning fields. After a brief, though critical, probabilistic tour of current computational models of eye movements and visual attention, we lay down the basis for gaze shift pattern analysis. To this end, the concepts of Markov Processes, the Wiener process and related random walks within the Gaussian framework of the Central Limit Theorem will be introduced. Then, we will deliberately violate fundamental assumptions of the Central Limit Theorem to elicit a larger perspective, rooted in statistical physics, for analysing and modelling eye movements in terms of anomalous, non-Gaussian random walks and modern foraging theory. Finally, by resorting to Statistical Machine Learning techniques, we discuss how the analyses of movement patterns can develop into the inference of hidden patterns of the mind: inferring the observer’s task, assessing cognitive impairments, classifying expertise.
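
As a purely illustrative sketch of the contrast between the Gaussian regime and the anomalous, Lévy-like regime mentioned above (not code from the chapter; the step-length distributions and parameter values below are arbitrary assumptions), a Gaussian random walk and a heavy-tailed walk can be simulated in a few lines of Python:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_walk(n_steps, sigma=1.0):
    """Brownian-like gaze-shift sequence: i.i.d. Gaussian step components."""
    steps = rng.normal(0.0, sigma, size=(n_steps, 2))
    return np.cumsum(steps, axis=0)

def levy_like_walk(n_steps, alpha=1.5, scale=1.0):
    """Heavy-tailed ('Levy-like') gaze-shift sequence: power-law step
    lengths (Pareto tail index alpha) with uniformly random directions."""
    lengths = (rng.pareto(alpha, n_steps) + 1.0) * scale
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    steps = np.column_stack((lengths * np.cos(angles), lengths * np.sin(angles)))
    return np.cumsum(steps, axis=0)

brownian_scanpath = gaussian_walk(500)   # mostly short, comparable steps
levy_scanpath = levy_like_walk(500)      # many short steps, occasional long flights
```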


Notes

  1. Actually, the first to note Brownian motion was the Dutch physician Jan Ingen-Housz who, in 1794 at the Austrian court of Empress Maria Theresa, observed that finely powdered charcoal floating on an alcohol surface executed a highly random motion.

  2. Freely available at https://web.math.princeton.edu/~nelson/books/bmotion.pdf.

  3. More precisely, they used the latest version of Itti’s salience algorithm, available at http://www.saliencytoolbox.net (Walther & Koch, 2006), with default parameter settings. One may argue that the methods of saliency computation have developed and improved significantly since then. However, if one compares the predictive power of salience maps obtained within the very complex computational framework of deep networks, e.g., via the PDP system (with fine tuning) (Jetley, Murray, & Vig, 2016), against that of a simple central bias map (salience inversely proportional to the distance from the image centre, blind to image information), one reads an AUC performance of 0.875 against 0.780 on the large VOCA dataset (Jetley et al., 2016); on the same dataset, the Itti et al. model achieves an AUC of 0.533. Note that a central bias map can be computed in a few Matlab lines (Mathe & Sminchisescu, 2013), as sketched below.
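
     A minimal Python/NumPy sketch of such a central bias map (an illustrative re-implementation of the idea, not the Matlab code of Mathe & Sminchisescu, 2013; the image size and the linear fall-off with distance are assumptions made here):

     ```python
     import numpy as np

     def central_bias_map(height, width):
         """Salience inversely related to distance from the image centre,
         blind to image content (illustrative sketch only)."""
         ys, xs = np.mgrid[0:height, 0:width]
         cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
         dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
         sal = 1.0 - dist / dist.max()   # 1 at the centre, 0 at the farthest corner
         return sal / sal.sum()          # normalise to a probability map

     cb = central_bias_map(480, 640)     # e.g., a 480 x 640 central bias map
     ```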

  4. \(\arg \max _{x} f(x)\) is the mathematical shorthand for “find the value of the argument x that maximizes \(f(\cdot )\)”.

  5. Given a posterior distribution \(P(X \mid Y)\), the MAP rule simply chooses the argument \(X=x\) for which \(P(X \mid Y)\) reaches its maximum value (the \(\arg \max \)); thus, if \(P(X \mid Y)\) is a Gaussian distribution, then the \(\arg \max \) corresponds to the mode, which for the Gaussian is also the mean value, as the worked example below makes explicit.
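
     As a worked illustration of this standard fact: if \(P(X=x \mid Y) = \frac{1}{\sqrt{2 \pi \sigma ^2}} \exp \left( -\frac{(x-\mu )^2}{2 \sigma ^2}\right) \), then \(\arg \max _{x} P(X=x \mid Y) = \mu \), since the exponent \(-\frac{(x-\mu )^2}{2 \sigma ^2}\) attains its maximum (zero) exactly at \(x = \mu \), i.e., at the mode, which for the Gaussian coincides with the mean.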

  6. This gives an intuitive insight into the notion of P(1, 2) as a density.

  7. If we have a function of more than one variable, e.g., \(f(x,y, z,\ldots )\), we can calculate the derivative with respect to one of those variables, with the others kept fixed. Thus, if we want to compute \(\frac{\partial f(x,y, z,\ldots )}{\partial x}\), we define the increment \(\Delta f= f(\left[ x + \Delta x\right] , y,z,\ldots ) - f(x,y, z,\ldots )\) and we construct the partial derivative as in the simple derivative case, \(\frac{\partial f(x,y, z,\ldots )}{\partial x}=\lim _{\Delta x \rightarrow 0} \frac{\Delta f}{\Delta x}\). By the same method we can obtain the partial derivative with respect to any of the other variables. A simple worked example is given below.
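
     For instance (an added illustration): for \(f(x,y) = x^2 y + y^3\), keeping \(y\) fixed yields \(\frac{\partial f}{\partial x} = 2 x y\), whereas keeping \(x\) fixed yields \(\frac{\partial f}{\partial y} = x^2 + 3 y^2\).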

  8. This is physicists’ preferred notation, which you are most likely to run into when dealing with these problems. In other, more mathematically inclined papers and books you will find the expectation notation \(E\left[ f(X)\right] \) or Ef(X).

  9. A salience map, and thus the potential field V derived from salience, varies in space (as shown in Fig. 9.32). The map of such variation, namely the rate of change of V in any spatial direction, is captured by the vector field \(\nabla V\). To keep things simple, think of \(\nabla \) as a “vector” of components \((\frac{\partial }{\partial x}, \frac{\partial }{\partial y})\). When \(\nabla \) is applied to the field V, i.e. \(\nabla V=(\frac{\partial V}{\partial x}, \frac{\partial V}{\partial y})\), the gradient of V is obtained. On a discrete salience map, this gradient can be approximated by finite differences, as sketched below.
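
     A minimal NumPy sketch of this finite-difference approximation (the map V below is a random placeholder, not the salience map of Fig. 9.32):

     ```python
     import numpy as np

     # Placeholder potential field V derived from a salience map (random, for illustration)
     V = np.random.rand(480, 640)

     # np.gradient returns the finite-difference derivative along each axis
     # (rows first, then columns), i.e. an approximation of (dV/dy, dV/dx).
     dV_dy, dV_dx = np.gradient(V)

     # The gradient vector at pixel (i, j) is (dV_dx[i, j], dV_dy[i, j]);
     # a gaze shift descending the potential would move against this vector.
     ```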

  10. Matlab software for the simulation is freely downloadable at http://www.mathworks.com/matlabcentral/fileexchange/38512.

  11. http://antoinecoutrot.magix.net/public/code.html.

References

  • Aks, D. J., Zelinsky, G. J., & Sprott, J. C. (2002). Memory across eye-movements: 1/f dynamic in visual search. Nonlinear Dynamics, Psychology, and Life Sciences, 6(1), 1–25.

  • Bachelier, L. (1900). Théorie de la spéculation. Gauthier-Villars.

  • Barber, D., Cemgil, A. T., & Chiappa, S. (2011). Bayesian time series models. Cambridge: Cambridge University Press.

  • Baronchelli, A., & Radicchi, F. (2013). Lévy flights in human behavior and cognition. Chaos, Solitons & Fractals, 56, 101–105.

  • Begum, M., Karray, F., Mann, G., & Gosine, R. (2010). A probabilistic model of overt visual attention for cognitive robots. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 40(5), 1305–1318.

  • Bishop, C. M. (2006). Pattern recognition and machine learning (Information science and statistics). Secaucus, NJ: Springer-Verlag New York, Inc.

  • Boccignone, G., & Ferraro, M. (2004). Modelling gaze shift as a constrained random walk. Physica A: Statistical Mechanics and its Applications, 331(1–2), 207–218.

  • Boccignone, G., & Ferraro, M. (2011). The active sampling of gaze-shifts. In G. Maino & G. Foresti (Eds.), Image analysis and processing ICIAP 2011, Lecture Notes in Computer Science (Vol. 6978, pp. 187–196). Berlin/Heidelberg: Springer.

  • Boccignone, G., & Ferraro, M. (2013a). Feed and fly control of visual scanpaths for foveation image processing. Annals of Telecommunications, 68 (3–4), 201–217.

  • Boccignone, G., & Ferraro, M. (2013b). Gaze shift behavior on video as composite information foraging. Signal Processing: Image Communication, 28(8), 949–966.

  • Boccignone, G., & Ferraro, M. (2014). Ecological sampling of gaze shifts. IEEE Transactions on Cybernetics, 44(2), 266–279.

  • Boccignone, G., Ferraro, M., & Caelli, T. (2001). An information-theoretic approach to active vision. In Proceedings 11th International Conference on Image Analysis and Processing, (ICIAP) (pp. 340–345). New York, NY: IEEE Press.

  • Boccignone, G., Ferraro, M., Crespi, S., Robino, C., & de’Sperati, C. (2014). Detecting expert’s eye using a multiple-kernel relevance vector machine. Journal of Eye Movement Research, 7(2), 1–15.

  • Boccignone, G., Marcelli, A., Napoletano, P., Di Fiore, G., Iacovoni, G., & Morsa, S. (2008). Bayesian integration of face and low-level cues for foveated video coding. IEEE Transactions on Circuits and Systems for Video Technology, 18(12), 1727–1740.

  • Borji, A., & Itti, L. (2013). State-of-the-art in visual attention modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), 185–207.

  • Borji, A., Sihite, D.N., & Itti, L. (2012). An object-based Bayesian framework for top-down visual attention. In Twenty-Sixth AAAI Conference on Artificial Intelligence.

  • Brockmann, D., & Geisel, T. (2000). The ecology of gaze shifts. Neurocomputing, 32(1), 643–650.

  • Bundesen, C. (1998). A computational theory of visual attention. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 353(1373), 1271–1281.

  • Cain, M. S., Vul, E., Clark, K., & Mitroff, S. R. (2012). A bayesian optimal foraging model of human visual search. Psychological Science, 23(9), 1047–1054.

  • Canosa, R. (2009). Real-world vision: Selective perception and task. ACM Transactions on Applied Perception, 6(2), 11.

  • Carpenter, R., & Williams, M. (1995). Neural computation of log likelihood in control of saccadic eye movements. Nature, 377(6544), 59–62.

  • Cerf, M., Frady, E., & Koch, C. (2009). Faces and text attract gaze independent of the task: Experimental data and computer model. Journal of Vision,9(12).

  • Cerf, M., Harel, J., Einhäuser, W., & Koch, C. (2008). Predicting human gaze using low-level saliency combined with face detection. Advances in Neural Information Processing Systems, 20.

  • Chambers, J., Mallows, C., & Stuck, B. (1976). A method for simulating stable random variables. Journal of the American Statistical Association, 71(354), 340–344.

  • Chernyak, D. A., & Stark, L. W. (2001). Top–down guided eye movements. IEEE Transactions on Systems Man Cybernetics - B,31, 514–522.

  • Chikkerur, S., Serre, T., Tan, C., & Poggio, T. (2010). What and where: A Bayesian inference theory of attention. Vision Research, 50(22), 2233–2247.

  • Clavelli, A., Karatzas, D., Lladós, J., Ferraro, M., & Boccignone, G. (2014). Modelling task-dependent eye guidance to objects in pictures. Cognitive Computation, 6(3), 558–584.

  • Codling, E., Plank, M., & Benhamou, S. (2008). Random walk models in biology. Journal of the Royal Society Interface, 5(25), 813.

  • Coen-Cagli, R., Coraggio, P., Napoletano, P., & Boccignone, G. (2008). What the draughtsman’s hand tells the draughtsman’s eye: A sensorimotor account of drawing. International Journal of Pattern Recognition and Artificial Intelligence, 22(05), 1015–1029.

  • Coen-Cagli, R., Coraggio, P., Napoletano, P., Schwartz, O., Ferraro, M., & Boccignone, G. (2009). Visuomotor characterization of eye movements in a drawing task. Vision Research, 49(8), 810–818.

  • Costa, T., Boccignone, G., Cauda, F., & Ferraro, M. (2016). The foraging brain: Evidence of Lévy dynamics in brain networks. PLoS ONE, 11(9), e0161702.

  • Coutrot, A., Binetti, N., Harrison, C., Mareschal, I., & Johnston, A. (2016). Face exploration dynamics differentiate men and women. Journal of Vision, 16(14), 16–16.

  • Coutrot, A., Hsiao, J. H., & Chan, A. B. (2017). Scanpath modeling and classification with hidden markov models. Behavior Research Methods. https://doi.org/10.3758/s13428-017-0876-8.

  • Cowpertwait, P. S., & Metcalfe, A. V. (2009). Introductory time series with R. Dordrecht: Springer.

  • Damoulas, T., & Girolami, M. A. (2009). Combining feature spaces for classification. Pattern Recognition, 42(11), 2671–2683.

  • deCroon, G., Postma, E., & van den Herik, H. J. (2011). Adaptive gaze control for object detection. Cognitive Computation, 3, 264–278.

  • Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18(1), 193–222.

  • Doob, J. L. (1942). The brownian movement and stochastic equations. Annals of Mathematics, pp. 351–369.

  • Dorr, M., Martinetz, T., Gegenfurtner, K., & Barth, E. (2010). Variability of eye movements when viewing dynamic natural scenes. Journal of Vision, 10(10).

  • Dubkov, A. A., Spagnolo, B., & Uchaikin, V. V. (2008). Lévy flight superdiffusion: An introduction. International Journal of Bifurcation and Chaos, 18(09), 2649–2672.

  • Einhäuser, W., Spain, M., & Perona, P. (2008). Objects predict fixations better than early saliency. Journal of Vision, 8(14). https://doi.org/10.1167/8.14.18, http://www.journalofvision.org/content/8/14/18.abstract.

  • Einstein, A. (1905). On the motion required by the molecular kinetic theory of heat of small particles suspended in a stationary liquid. Annalen der Physik, 17, 549–560.

  • Einstein, A. (1906). Zur Theorie der Brownschen Bewegung. Annalen der Physik, 324(2), 371–381.

  • Elazary, L., & Itti, L. (2010). A bayesian model for efficient visual search and recognition. Vision Research, 50(14), 1338–1352.

  • Ellis, S., & Stark, L. (1986). Statistical dependency in visual scanning. Human Factors: The Journal of the Human Factors and Ergonomics Society, 28(4), 421–438.

  • Engbert, R. (2006). Microsaccades: A microcosm for research on oculomotor control, attention, and visual perception. Progress in Brain Research, 154, 177–192.

  • Engbert, R., Mergenthaler, K., Sinn, P., & Pikovsky, A. (2011). An integrated model of fixational eye movements and microsaccades. Proceedings of the National Academy of Sciences, 108(39), E765–E770.

  • Feng, G. (2006). Eye movements as time-series random variables: A stochastic model of eye movement control in reading. Cognitive Systems Research, 7(1), 70–95.

  • Foulsham, T., & Underwood, G. (2008). What can saliency models predict about eye movements? spatial and sequential aspects of fixations during encoding and recognition. Journal of Vision,8(2).

  • Frintrop, S., Rome, E., & Christensen, H. (2010). Computational visual attention systems and their cognitive foundations: A survey. ACM Transactions on Applied Perception,7(1), 6.

  • Gardiner, C. (2009). Stochastic methods: A handbook for the natural and social sciences. Springer series in synergetics. Berlin, Heidelberg: Springer.

  • Gnedenko, B., & Kolmogórov, A. (1954). Limit distributions for sums of independent random variables. Addison-Wesley Pub. Co.

  • Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press. http://www.deeplearningbook.org

  • Hacisalihzade, S., Stark, L., & Allen, J. (1992). Visual perception and sequences of eye movement fixations: A stochastic modeling approach. IEEE Transactions on Systems, Man, and Cybernetics,22(3), 474–481.

  • Haji-Abolhassani, A., & Clark, J. J. (2013). A computational model for task inference in visual search. Journal of Vision, 13(3), 29.

  • Harel, J., Koch, C., & Perona, P. (2007). Graph-based visual saliency. In Advances in neural information processing systems (Vol. 19, pp. 545–552). Cambridge, MA: MIT Press.

  • Heinke, D., & Backhaus, A. (2011). Modelling visual search with the selective attention for identification model (VS-SAIM): A novel explanation for visual search asymmetries. Cognitive Computation, 3(1), 185–205.

  • Heinke, D., & Humphreys, G. W. (2003). Attention, spatial representation, and visual neglect: Simulating emergent attention and spatial memory in the selective attention for identification model (SAIM). Psychological Review, 110(1), 29.

  • Heinke, D., & Humphreys, G. W. (2005). Computational models of visual selective attention: A review. Connectionist Models in Cognitive Psychology, 1(4), 273–312.

  • Henderson, J. M., Shinkareva, S. V., Wang, J., Luke, S. G., & Olejarczyk, J. (2013). Predicting cognitive state from eye movements. PLoS ONE, 8(5), e64937.

  • Higham, D. (2001). An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Review, pp. 525–546.

  • Hills, T. T. (2006). Animal foraging and the evolution of goal-directed cognition. Cognitive Science, 30(1), 3–41.

  • Ho Phuoc, T., Guérin-Dugué, A., & Guyader, N. (2009). A computational saliency model integrating saccade programming. In Proceedings of the International Conference on Bio-inspired Systems and Signal Processing (pp. 57–64). Porto, Portugal.

  • Horowitz, T., & Wolfe, J. (1998). Visual search has no memory. Nature, 394(6693), 575–577.

  • Huang, K. (2001). Introduction to statistical physics. Boca Raton, FL: CRC Press.

  • Humphreys, G. W., & Muller, H. J. (1993). Search via recursive rejection (SERR): A connectionist model of visual search. Cognitive Psychology, 25(1), 43–110.

  • Insua, D., Ruggeri, F., & Wiper, M. (2012). Bayesian analysis of stochastic process models. Hoboken, NJ: Wiley.

  • Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20, 1254–1259.

  • Jarrow, R., & Protter, P. (2004). A short history of stochastic integration and mathematical finance: The early years, 1880–1970. Lecture Notes-Monograph Series, pp. 75–91.

  • Jaynes, E. T. (2003). Probability theory: The logic of science. New York, NY: Cambridge University Press.

  • Jetley, S., Murray, N., & Vig, E. (2016). End-to-end saliency mapping via probability distribution prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5753–5761).

  • Jiang, L., Xu, M., Ye, Z., & Wang, Z. (2015). Image saliency detection with sparse representation of learnt texture atoms. In Proceedings of the IEEE International Conference on Computer Vision Workshops (pp. 54–62).

  • Judd, T., Ehinger, K., Durand, F., & Torralba, A. (2009). Learning to predict where humans look. In IEEE 12th International conference on Computer Vision (pp. 2106–2113). New York, NY: IEEE.

  • Keech, T., & Resca, L. (2010). Eye movements in active visual search: A computable phenomenological model. Attention, Perception, & Psychophysics, 72(2), 285–307.

  • Kienzle, W., Franz, M. O., Schölkopf, B., & Wichmann, F. A. (2009). Center-surround patterns emerge as optimal predictors for human saccade targets. Journal of Vision, 9(5), 7–7.

  • Kienzle, W., Wichmann, F. A., Franz, M. O., & Schölkopf, B. (2006). A nonparametric approach to bottom-up visual saliency. In Advances in neural information processing systems (pp. 689–696).

  • Kimura, A., Pang, D., Takeuchi, T., Yamato, J., & Kashino, K. (2008). Dynamic Markov random fields for stochastic modeling of visual attention. In Proceedings of ICPR ’08 (pp. 1–5). New York, NY: IEEE.

  • Koch, C., & Ullman, S. (1985). Shifts in selective visual attention: Towards the underlying neural circuitry. Human Neurobiology, 4(4), 219–27.

  • Koller, D., & Friedman, N. (2009). Probabilistic graphical models: Principles and techniques. Cambridge, MA: MIT press.

  • Kolmogorov, A., & Gnedenko, B. (1954). Limit distributions for sums of independent random variables. Cambridge, MA: Addison-Wesley.

  • Kolmogorov, A. N. (1941). Dissipation of energy in isotropic turbulence. Doklady Akademii Nauk SSSR, 32, 325–327.

  • Koutrouvelis, I. (1980). Regression-type estimation of the parameters of stable laws. Journal of the American Statistical Association, pp. 918–928.

  • Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, & K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems (Vol. 25, pp. 1097–1105). Curran Associates, Inc.

  • Kruthiventi, S. S., Ayush, K., & Babu, R. V. (2015). Deepfix: A fully convolutional neural network for predicting human eye fixations. arXiv preprint arXiv:1510.02927.

  • Kümmerer, M., Theis, L., & Bethge, M. (2014). Deep gaze I: Boosting saliency prediction with feature maps trained on imagenet. arXiv preprint arXiv:1411.1045.

  • Lagun, D., Manzanares, C., Zola, S. M., Buffalo, E. A., & Agichtein, E. (2011). Detecting cognitive impairment by eye movement analysis using automatic classification algorithms. Journal of Neuroscience Methods, 201(1), 196–203.

  • Laing, C., & Lord, G. J. (2010). Stochastic methods in neuroscience. Oxford: Oxford University Press.

  • Lang, C., Liu, G., Yu, J., & Yan, S. (2012). Saliency detection by multitask sparsity pursuit. IEEE Transactions on Image Processing,21(3), 1327–1338.

  • Langevin, P. (1908). Sur la théorie du mouvement brownien. Comptes Rendus de l'Académie des Sciences (Paris), 146, 530–533.

  • Le Meur, O., & Coutrot, A. (2016). Introducing context-dependent and spatially-variant viewing biases in saccadic models. Vision Research, 121, 72–84.

  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.

  • Liberati, A., Fadda, R., Doneddu, G., Congiu, S., Javarone, M. A., Striano, T., & Chessa, A. (2017). A statistical physics perspective to understand social visual attention in autism spectrum disorder. Perception,46(8), 889–913.

  • Lin, Y., Kong, S., Wang, D., & Zhuang, Y. (2014). Saliency detection within a deep convolutional architecture. In Workshops at the Twenty-Eighth AAAI Conference on Artificial Intelligence.

  • Logan, G. D. (1996). The code theory of visual attention: An integration of space-based and object-based attention. Psychological Review, 103(4), 603.

  • MacKay, D. (2002). Information theory. Inference and learning algorithms. Cambridge, MA: Cambridge University Press.

  • Makarava, N., Bettenbühl, M., Engbert, R., & Holschneider, M. (2012). Bayesian estimation of the scaling parameter of fixational eye movements. EPL, 100(4), 40003.

  • Mandelbrot, B. (1963). The variation of certain speculative prices. The Journal of Business, 36(4), 394–419.

  • Mandelbrot, B. B., & Van Ness, J. W. (1968). Fractional brownian motions, fractional noises and applications. SIAM Review, 10(4), 422–437.

  • Mantegna, R. N., Stanley, H. E., et al. (2000). An introduction to econophysics: Correlations and complexity in finance. Cambridge, MA: Cambridge University Press.

  • Marat, S., Rahman, A., Pellerin, D., Guyader, N., & Houzet, D. (2013). Improving visual saliency by adding 'face feature map' and 'center bias'. Cognitive Computation, 5(1), 63–75.

  • Marr, D. (1982). Vision. New York, NY: W.H. Freeman.

  • Martinez-Conde, S., Otero-Millan, J., & Macknik, S. L. (2013). The impact of microsaccades on vision: Towards a unified theory of saccadic function. Nature Reviews Neuroscience, 14(2), 83–96.

  • Mathe, S., Sminchisescu, C. (2013). Action from still image dataset and inverse optimal control to learn task specific visual scanpaths. In Advances in neural information processing systems (pp. 1923–1931).

  • Mathe, S., & Sminchisescu, C. (2015). Actions in the eye: Dynamic gaze datasets and learnt saliency models for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(7), 1408–1424.

  • Méndez, V., Campos, D., & Bartumeus, F. (2014). Stochastic foundations in movement ecology: Anomalous diffusion. Front propagation and random searches. Springer series in synergetics. Berlin, Heidelberg: Springer.

  • Meyer, P. A. (2009). Stochastic processes from 1950 to the present. Electronic Journal for History of Probability and Statistics, 5(1), 1–42.

  • Mozer, M. C. (1987). Early parallel processing in reading: A connectionist approach. Lawrence Erlbaum Associates, Inc.

  • Murphy, K. P. (2012). Machine learning: A probabilistic perspective. Cambridge, MA: MIT press.

  • Najemnik, J., & Geisler, W. (2005). Optimal eye movement strategies in visual search. Nature, 434(7031), 387–391.

  • Napoletano, P., Boccignone, G., & Tisato, F. (2015). Attentive monitoring of multiple video streams driven by a Bayesian foraging strategy. IEEE Transactions on Image Processing, 24(11), 3266–3281.

  • Nelson, E. (1967). Dynamical theories of Brownian motion. Princeton, NJ: Princeton University Press.

  • Newman, M. E. (2005). Power laws, pareto distributions and zipf’s law. Contemporary Physics, 46(5), 323–351.

  • Nolan, J. (1997). Numerical calculation of stable densities and distribution functions. Communications in Statistics-Stochastic Models, 13(4), 759–774.

  • Noorani, I., & Carpenter, R. (2016). The LATER model of reaction time and decision. Neuroscience & Biobehavioral Reviews, 64, 229–251.

  • Osborne, M. F. (1959). Brownian motion in the stock market. Operations Research, 7(2), 145–173.

  • Otero-Millan, J., Macknik, S. L., Langston, R. E., & Martinez-Conde, S. (2013). An oculomotor continuum from exploration to fixation. Proceedings of the National Academy of Sciences, 110(15), 6175–6180.

  • Over, E., Hooge, I., Vlaskamp, B., & Erkelens, C. (2007). Coarse-to-fine eye movement strategy in visual search. Vision Research, 47, 2272–2280.

  • Ozaki, T. (2012). Time series modeling of neuroscience data. CRC Press.

  • Palmer, J., Verghese, P., & Pavel, M. (2000). The psychophysics of visual search. Vision Research, 40(10), 1227–1268.

  • Papoulis, A., & Pillai, S. U. (2002). Probability, random variables, and stochastic processes. New York, NY: McGraw-Hill.

  • Paul, L. (1954). Théorie de l’addition des variables aléatoires. Paris: Gauthiers-Villars.

  • Paul, W., & Baschnagel, J. (2013). Stochastic processes: From physics to finance. Berlin, Heidelberg: Springer International Publishing.

  • Phaf, R. H., Van der Heijden, A., & Hudson, P. T. (1990). Slam: A connectionist model for attention in visual selection tasks. Cognitive Psychology, 22(3), 273–341.

  • Plank, M., & James, A. (2008). Optimal foraging: Lévy pattern or process? Journal of The Royal Society Interface, 5(26), 1077.

  • Moscoso del Prado Martin, F. (2008). A theory of reaction time distributions. http://cogprints.org/6310/1/recinormal.pdf

  • Psorakis, I., Damoulas, T., & Girolami, M. A. (2010). Multiclass relevance vector machines: Sparsity and accuracy. IEEE Transactions on Neural Networks, 21(10), 1588–1598.

  • Ramos-Fernandez, G., Mateos, J., Miramontes, O., Cocho, G., Larralde, H., & Ayala-Orozco, B. (2004). Lévy walk patterns in the foraging movements of spider monkeys (Ateles geoffroyi). Behavioral Ecology and Sociobiology, 55(3), 223–230.

  • Rao, R. P., Zelinsky, G. J., Hayhoe, M. M., & Ballard, D. H. (2002). Eye movements in iconic visual search. Vision Research, 42(11), 1447–1463.

  • Ratcliff, R., & McKoon, G. (2008). The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation, 20(4), 873–922.

  • Rensink, R. (2000). The dynamic representation of scenes. Visual Cognition, 1(3), 17–42.

  • Reynolds, A. (2008). How many animals really do the Lévy walk? Comment. Ecology, 89(8), 2347–2351.

  • Reynolds, A. (2008). Optimal random Lévy-loop searching: New insights into the searching behaviours of central-place foragers. EPL (Europhysics Letters), 82, 20001.

  • Richardson, L. F. (1926). Atmospheric diffusion shown on a distance-neighbour graph. Proceedings of the Royal Society of London. Series A,110(756), 709–737.

  • Rogers, S., & Girolami, M. (2011). A first course in machine learning. Boca Raton, FL: CRC Press.

  • Rutishauser, U., & Koch, C. (2007). Probabilistic modeling of eye movement data during conjunction search via feature-based attention. Journal of Vision,7(6).

  • Schinckus, C. (2013). How physicists made stable lévy processes physically plausible. Brazilian Journal of Physics, 43(4), 281–293.

  • Scholl, B. (2001). Objects and attention: The state of the art. Cognition, 80(1–2), 1–46.

  • Schuster, P. (2016). Stochasticity in processes. Berlin: Springer.

  • Schütz, A., Braun, D., & Gegenfurtner, K. (2011). Eye movements and perception: A selective review. Journal of Vision,11(5).

  • Seo, H., & Milanfar, P. (2009). Static and space-time visual saliency detection by self-resemblance. Journal of Vision, 9(12), 1–27.

  • Shen, C., & Zhao, Q. (2014). Learning to predict eye fixations for semantic contents using multi-layer sparse network. Neurocomputing, 138, 61–68.

  • Siegert, S., & Friedrich, R. (2001). Modeling of nonlinear Lévy processes by data analysis. Physical Review E, 64(4), 041107.

  • Srinivas, S., Sarvadevabhatla, R. K., Mopuri, K. R., Prabhu, N., Kruthiventi, S., & Radhakrishnan, V. B. (2016). A taxonomy of deep convolutional neural nets for computer vision. Frontiers in Robotics and AI,2(36). https://doi.org/10.3389/frobt.2015.00036, http://www.frontiersin.org/vision_systems_theory,_tools_and_applications/10.3389/frobt.2015.00036/abstract

  • Stephen, D., Mirman, D., Magnuson, J., & Dixon, J. (2009). Lévy-like diffusion in eye movements during spoken-language comprehension. Physical Review E, 79(5), 056114.

  • Stigler, G. J. (1964). Public regulation of the securities markets. The Journal of Business, 37(2), 117–142.

  • Sun, Y., Fisher, R., Wang, F., & Gomes, H. M. (2008). A computer vision model for visual-object-based attention and eye movements. Computer Vision and Image Understanding, 112(2), 126–142.

  • Tatler, B. (2007). The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. Journal of Vision,7(14).

  • Tatler, B., Baddeley, R., & Vincent, B. (2006). The long and the short of it: Spatial statistics at fixation vary with saccade amplitude and task. Vision Research, 46(12), 1857–1862.

  • Tatler, B., Hayhoe, M., Land, M., & Ballard, D. (2011). Eye guidance in natural vision: Reinterpreting salience. Journal of Vision,11(5).

  • Tatler, B., & Vincent, B. (2008). Systematic tendencies in scene viewing. Journal of Eye Movement Research, 2(2), 1–18.

  • Tatler, B., & Vincent, B. (2009). The prominence of behavioural biases in eye guidance. Visual Cognition, 17(6–7), 1029–1054.

  • Torralba, A. (2003). Contextual priming for object detection. International Journal of Computer Vision, 53, 153–167.

  • Treisman, A. (1998). Feature binding, attention and object perception. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences,353(1373), 1295–1306.

  • Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12(1), 97–136.

  • Trillenberg, P., Gross, C., & Shelhamer, M. (2001). Random walks, random sequences, and nonlinear dynamics in human optokinetic nystagmus. Journal of Applied Physiology, 91(4), 1750–1759.

  • Uhlenbeck, G. E., & Ornstein, L. S. (1930). On the theory of the brownian motion. Physical Review, 36(5), 823.

  • Van Der Linde, I., Rajashekar, U., Bovik, A. C., & Cormack, L. K. (2009). Doves: A database of visual eye movements. Spatial Vision, 22(2), 161–177.

  • Van Kampen, N. G. (2001). Stochastic processes in physics and chemistry. Amsterdam, NL: North Holland.

  • Vig, E., Dorr, M., Cox, D. (2014). Large-scale optimization of hierarchical features for saliency prediction in natural images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2798–2805).

  • Viswanathan, G., Raposo, E., & da Luz, M. (2008). Lévy flights and superdiffusion in the context of biological encounters and random searches. Physics of Life Reviews, 5(3), 133–150.

  • Viswanathan, G. M., Da Luz, M. G., Raposo, E. P., & Stanley, H. E. (2011). The physics of foraging: An introduction to random searches and biological encounters. Cambridge, MA: Cambridge University Press.

  • Walther, D., & Koch, C. (2006). Modeling attention to salient proto-objects. Neural Networks, 19(9), 1395–1407.

  • Wang, K., Wang, S., & Ji, Q. (2016). Deep eye fixation map learning for calibration-free eye gaze tracking. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications (pp. 47–55). New York, NY: ACM.

  • Wiener, N. (1930). Generalized harmonic analysis. Acta Mathematica, 55(1), 117–258.

  • Wischnewski, M., Belardinelli, A., Schneider, W., & Steil, J. (2010). Where to look next? Combining static and dynamic proto-objects in a TVA-based model of visual attention. Cognitive Computation, 2(4), 326–343.

  • Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1(2), 202–238.

  • Wolfe, J. M. (2013). When is it time to move to the next raspberry bush? Foraging rules in human visual search. Journal of Vision, 13(3), 10.

  • Yan, J., Zhu, M., Liu, H., & Liu, Y. (2010). Visual saliency detection via sparsity pursuit. Signal Processing Letters, IEEE, 17(8), 739–742.

  • Yang, S. C. H., Wolpert, D. M., & Lengyel, M. (2016). Theoretical perspectives on active sensing. Current Opinion in Behavioral Sciences, 11, 100–108.

  • Yarbus, A. (1967). Eye movements and vision. New York, NY: Plenum Press.

  • Yu, J. G., Zhao, J., Tian, J., & Tan, Y. (2014). Maximal entropy random walk for region-based visual saliency. IEEE Transactions on Cybernetics, 44(9), 1661–1672.

  • Zelinsky, G. J. (2008). A theory of eye movements during target acquisition. Psychological Review, 115(4), 787.

Author information

Corresponding author

Correspondence to Giuseppe Boccignone.

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Boccignone, G. (2019). Advanced Statistical Methods for Eye Movement Analysis and Modelling: A Gentle Introduction. In: Klein, C., Ettinger, U. (eds) Eye Movement Research. Studies in Neuroscience, Psychology and Behavioral Economics. Springer, Cham. https://doi.org/10.1007/978-3-030-20085-5_9
