Review
Contextual cueing of visual attention

https://doi.org/10.1016/S1364-6613(00)01476-5

Abstract

Visual context information constrains what to expect and where to look, facilitating search for and recognition of objects embedded in complex displays. This article reviews a new paradigm called contextual cueing, which presents well-defined, novel visual contexts and aims to understand how contextual information is learned and how it guides the deployment of visual attention. In addition, the contextual cueing task is well suited to the study of the neural substrate of contextual learning. For example, amnesic patients with hippocampal damage are impaired in their learning of novel contextual information, even though learning in the contextual cueing task does not appear to rely on conscious retrieval of contextual memory traces. We argue that contextual information is important because it embodies invariant properties of the visual environment such as stable spatial layout information as well as object covariation information. Sensitivity to these statistical regularities allows us to interact more effectively with the visual world.

Section snippets

The contextual cueing paradigm: spatial context

Why is contextual information helpful? First, contextual information provides useful constraints on the range of objects that can be expected to occur within a given context. For example, the number and variety of visual objects that typically occur within a specific scene, such as a kitchen, is far smaller than the set of all possible visual objects that people can recognize. Such constraints may provide computational benefits to object recognition. Second, contextual information constrains where objects are likely to appear, guiding attention and eye movements toward probable target locations.
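To make the first of these benefits concrete, the toy sketch below (Python) shows how a scene-conditioned prior can prune the set of candidate objects before any detailed recognition is attempted. The scene labels, object names and probabilities are invented for illustration only; they are not taken from the studies reviewed here.

# Toy illustration: a scene-conditioned prior narrows the candidate object set.
# All values below are hypothetical.

context_prior = {
    "kitchen": {"kettle": 0.30, "toaster": 0.25, "mug": 0.35, "traffic light": 0.001},
    "street":  {"kettle": 0.001, "toaster": 0.001, "mug": 0.05, "traffic light": 0.40},
}

def candidate_objects(scene, threshold=0.01):
    """Return only the objects plausible enough in this scene to be worth testing."""
    priors = context_prior[scene]
    return [obj for obj, p in priors.items() if p >= threshold]

# In a kitchen, 'traffic light' is pruned before any detailed shape matching,
# so recognition has fewer hypotheses to evaluate.
print(candidate_objects("kitchen"))   # ['kettle', 'toaster', 'mug']
print(candidate_objects("street"))    # ['mug', 'traffic light']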

Object cueing

Spatial context learning is ecologically significant because major landmarks and the configurations of various objects in the environment (such as your office or kitchen) tend to be stable over time. These provide useful navigation and orienting cues. Thus, sensitivity to such spatial configurations is adaptive.

However, visual contexts are clearly defined by attributes other than spatial layout. In particular, the identities of objects are important. Observers may rely on the identities of objects that regularly co-occur with a target to predict what to look for and where to attend.

Dynamic event cueing

The studies reviewed above show that contextual cueing can be driven by spatial configuration information and by shape identity information. Most visual contexts can be characterized along these two dimensions, but another important property of the visual environment concerns how objects change over time. Let us return to the example of driving. Although both spatial configuration (where is the car that is switching lanes located relative to your car?) and shape identity (a car versus a pedestrian) information are clearly relevant, how such objects move and how events unfold over time also provide critical contextual cues.
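The basic logic of a contextual cueing design can be sketched in a few lines of code. The toy simulation below (Python) illustrates only that logic, not the actual stimuli or procedure of the studies reviewed here: it assumes that a subset of 'repeated' displays recurs across blocks with a fixed pairing between distractor layout and target location, while 'novel' displays are generated fresh, so only the repeated layouts can come to cue the target's location.

# Minimal sketch of a contextual cueing design (simplified toy illustration).
import random

GRID = [(x, y) for x in range(8) for y in range(6)]   # possible item locations

def make_display(n_items=12):
    # Pick item locations; the first is the target, the rest are distractors.
    locs = random.sample(GRID, n_items)
    return {"target": locs[0], "distractors": tuple(sorted(locs[1:]))}

# A fixed set of 'repeated' contexts, reused in every block (same layout -> same target).
repeated_contexts = [make_display() for _ in range(12)]

def run_block(memory):
    # One block = 12 repeated + 12 newly generated ('novel') displays.
    trials = repeated_contexts + [make_display() for _ in range(12)]
    random.shuffle(trials)
    for trial in trials:
        layout = trial["distractors"]
        cued = layout in memory               # a learned layout can cue the target location
        memory[layout] = trial["target"]      # incidental learning of layout -> target pairing
        yield cued

memory = {}
for block in range(5):
    cued_trials = sum(run_block(memory))
    print(f"block {block}: {cued_trials}/24 trials where the context cued the target")

In an actual experiment the dependent measure is search reaction time, which becomes faster for repeated than for novel configurations as learning proceeds.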

Neural basis of contextual (relational) learning

As reviewed above, the contextual cueing paradigm illustrates how visual context information guides visual attention. The use of novel visual contexts also allows us to examine how contextual information is learned, and this forms the basis of investigations into the neural basis of contextual learning.

The hippocampus and associated medial temporal lobe structures (henceforth referred to as the hippocampal system) are likely candidates for contextual learning in the brain [52, 53, 54, 55, 56, 57].

Conclusions

The contextual cueing paradigm illustrates how visual context information is learned to guide visual behavior. Contextual information is important because it embodies invariant properties of the visual environment: stable spatial layout, object identity covariation, and regularities in dynamic visual events as they unfold over time. Sensitivity to such regularities, conveyed by visual context, serves to guide visual attention, object recognition and action.

Outstanding questions

  • How does contextual information constrain and facilitate visual processing beyond its role in guiding visual attention in contextual cueing tasks?

  • How do different types of contextual information guide visual processing? For example, Ingrid Olson and I are investigating how temporal context, defined by sequences of discrete visual events, guides visual processing.

  • Contextual guidance must be sensitive to the observer’s behavioral goals and intentions. For any given scene, different contextual cues are likely to be relevant depending on the task the observer is performing.

Acknowledgements

I thank Yuhong Jiang, Elizabeth Phelps and Ken Nakayama for their collaborations on the work and ideas described here. I also thank Ray Klein and Keith Rayner for very helpful comments on an earlier draft. Preparation of this article was supported by National Science Foundation Grant BCS-9817349.

References (72)

  • R.E. Clark et al. Classical conditioning and brain systems: the role of awareness. Science (1998)

  • J.M. Wolfe. Guided Search 2.0: a revised model of visual search. Psychon. Bull. Rev. (1994)

  • J.M. Wolfe. Visual search

  • M.M. Chun et al. Visual attention

  • S. Yantis. Control of visual attention

  • S. Yantis et al. Stimulus-driven attentional capture: evidence from equiluminant visual objects. J. Exp. Psychol. Hum. Percept. Perform. (1994)

  • H. Intraub. The representation of visual scenes. Trends Cognit. Sci. (1997)

  • D. Simons et al. Change blindness. Trends Cognit. Sci. (1997)

  • J.M. Henderson et al. High-level scene perception. Annu. Rev. Psychol. (1999)

  • K. Rayner. Eye movements in reading and information processing: 20 years of research. Psychol. Bull. (1998)

  • S.P. Liversedge et al. Saccadic eye movements and cognition. Trends Cognit. Sci. (2000)

  • G.R. Loftus et al. Cognitive determinants of fixation location during picture viewing. J. Exp. Psychol. Hum. Percept. Perform. (1978)

  • I. Biederman. Scene perception: detecting and judging objects undergoing relational violations. Cognit. Psychol. (1982)

  • I. Biederman. Perceiving real-world scenes. Science (1972)

  • S.J. Boyce. Effect of background information on object identification. J. Exp. Psychol. Hum. Percept. Perform. (1989)

  • A. Friedman. Framing pictures: the role of knowledge in automatized encoding and memory for gist. J. Exp. Psychol. Gen. (1979)

  • S.E. Palmer. The effects of contextual scenes on the identification of objects. Mem. Cognit. (1975)

  • A. Hollingworth et al. Does consistent scene context facilitate object perception? J. Exp. Psychol. Gen. (1998)

  • S. Ullman. High-level Vision: Object Recognition and Visual Cognition (1996)

  • M.M. Chun et al. Contextual cueing: implicit learning and memory of visual context guides spatial attention. Cognit. Psychol. (1998)

  • M.M. Chun et al. Top-down attentional guidance based on implicit learning of visual covariation. Psychol. Sci. (1999)

  • R.A. Rensink. To see or not to see: the need for attention to perceive changes in scenes. Psychol. Sci. (1997)

  • I. Biederman. On the semantics of a glance at a scene

  • M.C. Potter. Meaning in visual search. Science (1975)

  • M.M. Chun et al. Memory deficits for implicit contextual information in amnesic subjects with hippocampal damage. Nat. Neurosci. (1999)

  • J.J. Gibson. The Senses Considered as Perceptual Systems (1966)