NeuroImage

Volume 86, 1 February 2014, Pages 317–325

Fearful faces heighten the cortical representation of contextual threat

https://doi.org/10.1016/j.neuroimage.2013.10.008

Highlights

  • We investigated mutual effects of faces and affective context on visual processing.

  • Frequency-tagging allows separating cortical processing of faces vs. context scenes.

  • Presence of fearful faces heightens cortical processing of threatening visual context.

  • Fearful faces signal danger and lead to vigilance for threat in the observer.

Abstract

Perception of facial expressions is typically investigated by presenting isolated face stimuli. In everyday life, however, faces are rarely seen without a surrounding visual context that affects perception and interpretation of the facial expression. Conversely, fearful faces may act as a cue, heightening the sensitivity of the visual system to effectively detect potential threat in the environment. In the present study, we used steady-state visually evoked potentials (ssVEPs) to examine the mutual effects of facial expressions (fearful, neutral, happy) and affective visual context (pleasant, neutral, threat). By assigning two different flicker frequencies (12 vs. 15 Hz) to the face and the visual context scene, cortical activity to the concurrent stimuli was separated, which represents a novel approach to independently tracking the cortical processes associated with the face and the context. Twenty healthy students viewed flickering faces overlaid on flickering visual scenes, while performing a simple change-detection task at fixation, and high-density EEG was recorded. Arousing background scenes generally drove larger ssVEP amplitudes than neutral scenes. Importantly, background and expression interacted: When viewing fearful facial expressions, the ssVEP in response to threat context was amplified compared to other backgrounds. Together, these findings suggest that fearful faces elicit vigilance for potential threat in the visual periphery.

Introduction

Viewing fearful facial expressions enhances basic perceptual processes, such as contrast sensitivity, orientation sensitivity, and spatial resolution (Bocanegra and Zeelenberg, 2009b, Bocanegra and Zeelenberg, 2011a, Phelps et al., 2006). These effects are often attributed to the signal character of fearful faces: Fear in another person suggests the presence of a potential threat, but the source of threat is unclear (Whalen et al., 2009). Thus, a fearful face may act as a cue that prompts heightened perceptual sensitivity to threat in the environment. This notion is also in line with several theories of anxiety, which assume that anxiety enhances sensory sensitivity in general (Lang et al., 2000, McNaughton and Gray, 2000).

In this vein, visual search tasks have demonstrated that a fearful face can increase search efficiency for task-relevant objects, even when those objects are non-threatening (Becker, 2009, Olatunji et al., 2011). Facially expressing fear has likewise been shown to enhance sensory sensitivity (Susskind et al., 2008), which concurs with Darwin's assumption that facial expressions modify preparedness for perception and action. On this view, expressing fear alters the sensory response, augmenting or diminishing sensitivity to the environment (Darwin, 1872). Thus, both viewing a depiction of a fearful face and experiencing fear oneself may result in heightened attention and visual processing. This raises the question of the extent to which such changes also affect the processing of the environmental context in which the observer or the face cue is embedded.

Recent evidence suggests that concurrent contextual stimuli impact the processing of facial expressions in a content-specific fashion (for a recent review, see Wieser and Brosch, 2012). The most common finding in this field of research has been that congruent context facilitates and accelerates emotion recognition, whereas incongruent context interferes with emotion recognition (e.g., Aviezer et al., 2008, Carroll and Russell, 1996, de Gelder and Vroomen, 2000). The perception of emotional faces seems to depend on an interaction of facial expression and contextual information (Herring et al., 2011, Neta et al., 2011), and associations between context and faces are routinely established (Aviezer et al., 2011, Barrett and Kensinger, 2010, Hayes et al., 2010). Using the N170 component of the visually evoked brain potential as an index of face perception, it was shown that the presence of a face in a fearful context enhanced the N170 amplitude compared to a face in a neutral context, and that this effect was strongest for fearful faces (Righart and de Gelder, 2006, Righart and de Gelder, 2008a, Righart and de Gelder, 2008b). These findings suggest that the context in which a face appears may influence how it is encoded. In addition, faces without any context showed the largest N170 amplitudes, possibly reflecting competition for attentional resources between visual scene context and facial expressions (Righart and de Gelder, 2006). This effect was replicated in a later study in which N170 amplitudes were increased for fearful faces in fearful scenes as compared to fearful faces in happy scenes (Righart and de Gelder, 2008a). These results show that the information provided by the facial expression is combined with the scene context during the early stages of face processing. However, from a methodological point of view, it should be noted that the larger N170 in response to expressive faces with simultaneously presented scenes reflects the brain response to both stimuli (face and scene), and thus cannot be taken as a pure index of face processing.

Electrophysiological testing of hypotheses concerning the relative amount of cortical processing devoted to concurrent stimuli is typically made difficult by the fact that the neural responses to concurrent stimuli are not distinct. The steady-state visually evoked potential (ssVEP) methodology together with “frequency-tagging” allows researchers to separately quantify responses to multiple visual objects that are simultaneously present in the field of view (e.g., Miskovic and Keil, 2013, Wang et al., 2007, Wieser and Keil, 2011, Wieser et al., 2011, Wieser et al., 2012, Zhang et al., 2011). The ssVEP is an oscillatory response to stimuli periodically modulated in contrast or luminance (i.e., flickered), in which the fundamental frequency of the electrocortical response recorded from the scalp equals that of the driving stimulus (Müller et al., 1998, Regan, 1989). A key advantage is that the driven oscillatory ssVEP is precisely defined in the frequency domain as well as in the time–frequency domain, and can consequently be reliably separated from noise (i.e., all features of the ongoing EEG that do not oscillate at the frequency of the stimulus train). Amplitude modulation of this signal reflects sustained sensory processing modulated both by intrinsic factors (e.g., Keil et al., 2003) and by extrinsic, task-related processes (e.g., Andersen and Müller, 2010). Importantly, because the ssVEP is by definition a stationary and sustained oscillation in sensory neural populations, its modulation by tasks and goals is thought to be effected through sustained re-entrant processes (Keil et al., 2009). The effect of such re-entrant modulation can be observed through phase analyses (Keil et al., 2005) or by measuring the time-varying ssVEP amplitude in response to physically identical stimuli (e.g., Wieser and Keil, 2011). Generators of the flicker-evoked and contrast-reversal ssVEP have been localized to the extended visual cortex (Müller et al., 1997), with strong contributions from retinotopic areas as well as from cortices higher in the visual hierarchy (Di Russo et al., 2007). Similarly, source estimation has indicated an early visual cortical origin of the face-evoked flicker-ssVEP (Wieser and Keil, 2011). Frequency-tagging refers to assigning different flicker frequencies to stimuli simultaneously presented in the visual field, so that their signals can be separated in the frequency domain (Appelbaum et al., 2006, Wang et al., 2007, Wieser and Keil, 2011) and submitted to time–frequency analyses, yielding a continuous measure of the visual resources allocated to a specific stimulus amid competing cues. As a consequence, this method is ideally suited for investigating competition between facial expressions and affective pictures. Recently, ssVEP studies have suggested that affectively engaging stimuli prompt strong competition effects, associated with a reduction of the response amplitude elicited by a concurrent stimulus or task (Hindi Attar et al., 2010a, Hindi Attar et al., 2010b, Müller et al., 2008, Müller et al., 2011). Thus, one may hypothesize that prioritized processing of facial expressions comes at the expense of processing the visual scene and vice versa, a possibility that has been neglected in this line of research so far.
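To make the frequency-domain separation concrete, the following is a minimal sketch in Python, not the authors' analysis pipeline: a synthetic single-channel EEG epoch contains responses driven at the two tag frequencies used in the present study (12 and 15 Hz), and the ssVEP amplitude at each tag is read out from the FFT amplitude spectrum. The sampling rate, epoch length, and signal amplitudes are illustrative assumptions.

```python
import numpy as np

fs = 500.0   # sampling rate in Hz (assumed)
dur = 2.0    # epoch length chosen so both 12 and 15 Hz fall on exact FFT bins
t = np.arange(0, dur, 1.0 / fs)

# Synthetic single-channel EEG: two frequency-tagged responses plus broadband noise.
rng = np.random.default_rng(0)
eeg = (1.5 * np.sin(2 * np.pi * 12 * t)    # response driven by the 12-Hz stimulus
       + 1.0 * np.sin(2 * np.pi * 15 * t)  # response driven by the 15-Hz stimulus
       + rng.standard_normal(t.size))

# One-sided amplitude spectrum; frequency resolution is 1/dur = 0.5 Hz.
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
amp = 2.0 * np.abs(np.fft.rfft(eeg)) / t.size

for tag in (12.0, 15.0):
    idx = np.argmin(np.abs(freqs - tag))
    print(f"ssVEP amplitude at {tag:g} Hz: {amp[idx]:.2f} (arbitrary units)")
```

Because the two tags occupy distinct spectral bins, the 12-Hz and 15-Hz amplitudes can be attributed to the two concurrent stimulus streams independently, which is the core logic of the tagging approach.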

Recently, studies in the cognitive and affective neurosciences have increasingly used the steady-state visually evoked potential (ssVEP) to study different aspects of face processing, including the processing of emotional expression as well as face identification (e.g., Ales et al., 2012, Gruss et al., 2012, McTeague et al., 2011, Rossion and Boremanse, 2011, Rossion et al., 2012). Of note, these studies revealed different sensor locations for the maximal oscillatory response in visual cortex, which was predominantly expressed either over medial–occipital sensors (Gruss et al., 2012, McTeague et al., 2011) or over right temporo-occipital clusters, approximately at the sensor locations where the face-sensitive N170 of the ERP is normally maximal (e.g., Ales et al., 2012, Rossion and Boremanse, 2011, Rossion et al., 2012). These differences are mostly due to differences in stimulus presentation and experimental design, as the ssVEP can be driven in lower-tier visual cortices using high-contrast luminance modulation with square-wave stimulation (e.g., Gruss et al., 2012) or in higher-order cortices such as the fusiform cortex using sinusoidal modulation of face-specific contrast (e.g., Rossion and Boremanse, 2011).
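The two driving regimes contrasted above can be sketched as stimulus time courses. The snippet below is illustrative only; the monitor refresh rate and tag frequency are assumptions, not parameters taken from the studies cited.

```python
import numpy as np

refresh = 60.0                        # monitor refresh rate in Hz (assumed)
f_tag = 12.0                          # driving frequency in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / refresh)  # one second of frame onsets

# Square-wave stimulation: the image is shown at full contrast, then switched off.
square = (np.sin(2 * np.pi * f_tag * t) >= 0).astype(float)

# Sinusoidal stimulation: image contrast is scaled smoothly between 0 and 1.
sinusoid = 0.5 + 0.5 * np.sin(2 * np.pi * f_tag * t)

# Per frame, 'square' would gate the stimulus on and off (strong lower-tier drive),
# whereas 'sinusoid' would modulate face-specific contrast, as in the sinusoidal
# regime attributed above to Rossion and Boremanse (2011).
```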

The main goal of the present study was to examine the effects of viewing facial expressions on the cortical processing of contextual cues and vice versa. To this end, steady-state visually evoked potentials (ssVEPs) together with frequency-tagging were employed, yielding separate continuous estimates of sensory cortical engagement for the face and the context stimulus. We examined the following alternative hypotheses: 1) If competition between faces and visual scenes takes place, the ssVEP signal evoked by faces should be reduced when the face is embedded in affective compared to neutral scenes, whereas at the same time, cortical processing of the background visual scenes should be reduced when emotional compared to neutral facial expressions are presented. 2) If fearful facial expressions enhance attentional sensitivity, then enhanced ssVEP amplitudes for background scenes should be observed when a fearful face is present. Specific enhancement of the threat context during fearful face viewing would indicate that peripheral sensitivity is enhanced selectively for threat features, rather than indiscriminately for any content.
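For illustration, a continuous estimate of this kind can be approximated by narrow band-pass filtering at the tag frequency and taking the analytic-signal envelope. The sketch below is a hedged stand-in for the time–frequency analysis used in such studies, not a reproduction of the authors' method; the sampling rate, filter settings, and synthetic data are assumptions, and the 200–3000 ms window matches the analysis period reported in the Results.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                          # sampling rate in Hz (assumed)
t = np.arange(-0.5, 3.0, 1.0 / fs)  # epoch: -500 to 3000 ms around stimulus onset
rng = np.random.default_rng(1)

# Synthetic epoch: a 15-Hz tagged response that starts at stimulus onset, plus noise.
eeg = np.sin(2 * np.pi * 15 * t) * (t > 0) + 0.5 * rng.standard_normal(t.size)

# Zero-phase band-pass around the 15-Hz tag (4th-order Butterworth, assumed settings).
b, a = butter(4, [14.0 / (fs / 2), 16.0 / (fs / 2)], btype="band")
envelope = np.abs(hilbert(filtfilt(b, a, eeg)))  # time-varying ssVEP amplitude

# Mean amplitude over the 200-3000 ms analysis window.
win = (t >= 0.2) & (t <= 3.0)
print(f"Mean 15-Hz amplitude, 200-3000 ms: {envelope[win].mean():.2f} (arbitrary units)")
```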

Section snippets

Participants

Twenty participants (20–27 years old, M = 22.80, SD = 2.46; 10 female, all right-handed) were recruited from general psychology classes at the University of Würzburg and received course credit for participation. None of the participants had a family history of photic epilepsy, and all reported normal or corrected-to-normal vision. Written consent was obtained from all participants. All procedures were approved by the institutional review board of the University of Würzburg.

Design and procedure

Twenty-four pictures (12 female;

Steady-state visually evoked potentials (ssVEPs)

The analysis over the whole time period (200–3000 ms) revealed a significant interaction of tagged stimulus × visual scene × facial expression, F(4, 82) = 4.19, p = .039, ηp2 = .12. This effect was followed up with separate ANOVAs for faces and visual scenes. For faces, no significant effects of facial expression were observed, except for a trend toward higher activity for happy facial expressions, F(2, 38) = 2.96, p = .064, ηp2 = .14 (Fig. 4). Simple contrasts highlighted this difference for happy

Discussion

The present study investigated the interaction between facial expressions and surrounding context information in visuo-cortical processing using ssVEP frequency tagging. Flickering streams of facial expressions were overlaid on affective visual scenes flickering at a different frequency. This allowed us to obtain continuous electrocortical indices reflecting processing of each stimulus individually. Whereas electrocortical responses to facial expressions were not altered by visual background,

Conclusion

The present study capitalized on frequency-tagging in order to disentangle mutual influences of facial expressions and affective context scenes. This novel approach provides evidence that the presence of fearful faces increases vigilance specifically for threatening visual scenes, which may contain information about potential sources of threat in the environment. Vigilance to threat is reflected in increased large-scale (neural mass) electrocortical activity in visual cortices. Taken together,

References (72)

  • E.A. Phelps et al.

    Contributions of the amygdala to emotion processing: from animal models to human behavior

    Neuron

    (2005)
  • G. Pourtois et al.

    Brain mechanisms for emotional influences on perception and attention: what is magic and what is not

    Biol. Psychol.

    (2013)
  • B. Rossion et al.

    A steady-state visual evoked potential approach to individual face perception: effect of inversion, contrast-reversal and temporal dynamics

    Neuroimage

    (2012)
  • P. Vuilleumier

    How brains beware: neural mechanisms of emotional attention

    Trends Cogn. Sci.

    (2005)
  • J. Wang et al.

    The neural correlates of feature-based selective attention when viewing spatially and temporally overlapping images

    Neuropsychologia

    (2007)
  • P. Zhang et al.

    Binocular rivalry requires visual attention

    Neuron

    (2011)
  • R.B. Adams et al.

    Effects of direct and averted gaze on the perception of facially communicated emotion

    Emotion

    (2005)
  • J.M. Ales et al.

    An objective method for measuring face detection thresholds using the sweep steady-state visual evoked response

    J. Vis.

    (2012)
  • S.K. Andersen et al.

    Behavioral performance follows the time course of neural facilitation and suppression during cued shifts of feature-selective attention

    Proc. Natl. Acad. Sci. U. S. A.

    (2010)
  • L.G. Appelbaum et al.

    Cue-invariant networks for figure and background processing in human visual cortex

    J. Neurosci.

    (2006)
  • H. Aviezer et al.

    Angry, disgusted, or afraid? Studies on the malleability of emotion perception

    Psychol. Sci.

    (2008)
  • H. Aviezer et al.

    The automaticity of emotional face-context integration

    Emotion

    (2011)
  • L.F. Barrett et al.

    Context is routinely encoded during emotion perception

    Psychol. Sci.

    (2010)
  • M.W. Becker

    Panic search: fear produces efficient visual search for nonthreatening objects

    Psychol. Sci.

    (2009)
  • B.R. Bocanegra et al.

    Dissociating emotion-induced blindness and hypervision

    Emotion

    (2009)
  • B.R. Bocanegra et al.

    Emotion improves and impairs early vision

    Psychol. Sci.

    (2009)
  • B.R. Bocanegra et al.

    Emotion-induced trade-offs in spatiotemporal vision

    J. Exp. Psychol. Gen.

    (2011)
  • B.R. Bocanegra et al.

    Emotional cues enhance the attentional effects on spatial and temporal resolution

    Psychon. Bull. Rev.

    (2011)
  • J.M. Carroll et al.

    Do facial expressions signal specific emotions? Judging emotion from the face in context

    J. Pers. Soc. Psychol.

    (1996)
  • M. Catani et al.

    Occipito‐temporal connections in the human brain

    Brain

    (2003)
  • C. Darwin

    The expression of the emotions in man and animals

    (1872)
  • B. de Gelder et al.

    The perception of emotions by ear and by eye

    Cogn. Emot.

    (2000)
  • F. Di Russo et al.

    Spatiotemporal analysis of the cortical sources of the steady-state visual evoked potential

    Hum. Brain Mapp.

    (2007)
  • L.F. Gruss et al.

    Face-evoked steady-state visual potentials: effects of presentation rate and face inversion

    Front. Hum. Neurosci.

    (2012)
  • S.M. Hayes et al.

    Neural mechanisms of context effects on face recognition: automatic binding and context shift decrements

    J. Cogn. Neurosci.

    (2010)
  • D.R. Herring et al.

    Electrophysiological responses to evaluative priming: the LPP is sensitive to incongruity

    Emotion

    (2011)
Cited by (59)

    • Eye gaze direction modulates nonconscious affective contextual effect

      2022, Consciousness and Cognition
      Citation Excerpt:

      This discrepancy might be attributed to the fact that the processing of facial expressions often has divergent evolutionary implications among different emotions. For example, a fearful face often signalizes a potential threat in the environment (Wieser & Keil, 2014), forcing people to make a rapid decision to ‘fight-or-flight’, whereas a happy facial expression may serve as a safety signal that conveys the affiliative intent of others (Mehu & Dunbar, 2008; Mehu et al., 2007). Moreover, it is believed that there is an ‘automatic’ threat-sensitive mechanism to help people survive or cope with danger, even without consciousness or attention (Hedger et al., 2016).

    • Influence of scene-based expectation on facial expression perception: The moderating effect of cognitive load

      2022, Biological Psychology
      Citation Excerpt:

      In their study, scene stimuli were presented with the face stimuli simultaneously, which caused participants to pay more attention to the scene picture and thus, interfered with the structural encoding of facial expression. Wieser and Keil (2014) directly measured individuals’ distribution of attention to compound stimuli of expressions and scenes using steady-state visually evoked potentials and found that individuals devoted more attentional resources to scene pictures when participants observed the compound stimuli of faces and scenes, which supported our speculation above. In the priming paradigm, the enhanced amplitude of the EEG was interpreted as an increase in the engagement of cognitive resources because the target was more difficult to integrate with the context (see review by Kutas & Federmeier, 2011).

    • Reinforcement history shapes primary visual cortical responses: An SSVEP study

      2021, Biological Psychology
      Citation Excerpt:

      There are also preliminary indications that SSVEP may index associability in tasks that did not use conditioning, but rather emotional faces. Wieser and Keil (2014) demonstrated that when a face is shown embedded in a visual context, the expression on the face (fearful, neutral, happy) modulates the electrocortical response to the context in a manner that broadly resembles overshadowing. In a second study, Wieser et al. (2011) simultaneously showed people two faces that differed in their expressions, and found that angry faces biased attentional prioritization, but only in highly anxious individuals.
