Fearful faces heighten the cortical representation of contextual threat
Introduction
Viewing fearful facial expressions enhances basic perceptual processes such as contrast sensitivity, orientation sensitivity, and spatial resolution (Bocanegra and Zeelenberg, 2009b, Bocanegra and Zeelenberg, 2011a, Phelps et al., 2006). These effects are often attributed to the signal value of fearful faces: fear in another person indicates a potential threat whose source is unclear (Whalen et al., 2009). A fearful face may thus act as a cue that prompts heightened perceptual sensitivity to threat in the environment. This notion is also in line with several theories of anxiety, which assume that anxiety enhances sensory sensitivity in general (Lang et al., 2000, McNaughton and Gray, 2000).
In this vein, visual search tasks have demonstrated that a fearful face can increase search efficiency for task-relevant objects, even when those objects are non-threatening (Becker, 2009, Olatunji et al., 2011). Facially expressing fear has likewise been shown to enhance sensory sensitivity (Susskind et al., 2008), consistent with Darwin's assumption that facial expressions modify preparedness for perception and action: expressing fear alters the sensory response, augmenting or diminishing sensitivity to the environment (Darwin, 1872). Thus, both being cued with a depiction of a fearful face and experiencing fear oneself may result in heightened attention and visual processing. This raises the question of the extent to which such changes also affect the processing of the environmental context in which the observer or the face cue is embedded.
Recent evidence suggests that concurrent contextual stimuli impact the processing of facial expressions in a content-specific fashion (for a recent review, see Wieser and Brosch, 2012). The most common finding in this field has been that a congruent context facilitates and accelerates emotion recognition, whereas an incongruent context interferes with it (e.g., Aviezer et al., 2008, Carroll and Russell, 1996, de Gelder and Vroomen, 2000). The perception of emotional faces seems to depend on an interaction of facial expression and contextual information (Herring et al., 2011, Neta et al., 2011), and associations between context and faces are routinely established (Aviezer et al., 2011, Barrett and Kensinger, 2010, Hayes et al., 2010). Using the N170 component of the visually evoked brain potential as an index of face perception, it was shown that faces presented in fearful contexts elicited larger N170 amplitudes than faces in neutral contexts, and that this effect was strongest for fearful faces (Righart and de Gelder, 2006, Righart and de Gelder, 2008a, Righart and de Gelder, 2008b). These findings suggest that the context in which a face appears may influence how it is encoded. In addition, faces without any context showed the largest N170 amplitudes, possibly reflecting competition for attentional resources between visual scene context and facial expressions (Righart and de Gelder, 2006). This effect was replicated in a later study, in which N170 amplitudes were larger for fearful faces in fearful scenes than for fearful faces in happy scenes (Righart and de Gelder, 2008a). These results indicate that the information provided by the facial expression is combined with the scene context during the early stages of face processing.
However, from a methodological point of view, it should be noted that the larger N170 in response to expressive faces with simultaneously presented scenes reflects the brain response to both stimuli (face and scene), and thus cannot be taken as a pure index of face processing.
Electrophysiological tests of hypotheses about the relative amount of cortical processing devoted to concurrent stimuli are typically hampered by the fact that the neural responses to simultaneously presented stimuli cannot be told apart. The steady-state visually evoked potential (ssVEP) methodology, together with “frequency-tagging”, allows researchers to separately quantify responses to multiple visual objects that are simultaneously present in the field of view (e.g., Miskovic and Keil, 2013, Wang et al., 2007, Wieser and Keil, 2011, Wieser et al., 2011, Wieser et al., 2012, Zhang et al., 2011). The ssVEP is an oscillatory response to stimuli periodically modulated in contrast or luminance (i.e., flickered), in which the fundamental frequency of the electrocortical response recorded from the scalp equals that of the driving stimulus (Müller et al., 1998, Regan, 1989). A key advantage is that the driven oscillatory ssVEP is precisely defined in the frequency domain as well as the time–frequency domain, and can consequently be reliably separated from noise (i.e., from all features of the ongoing EEG that do not oscillate at the frequency of the stimulus train). Amplitude modulation of this signal reflects sustained sensory processing modulated both by intrinsic factors (e.g., Keil et al., 2003) and by extrinsic, task-related processes (e.g., Andersen and Müller, 2010). Importantly, because the ssVEP is by definition a stationary and sustained oscillation in sensory neural populations, its modulation by tasks and goals is thought to be effected through sustained re-entrant processes (Keil et al., 2009). The effect of such re-entrant modulation can be observed through phase analyses (Keil et al., 2005) or by measuring the time-varying ssVEP amplitude in response to physically identical stimuli (e.g., Wieser and Keil, 2011).
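The logic of separating an ssVEP from the ongoing EEG in the frequency domain can be illustrated with a brief simulation (a minimal sketch, not the study's recording parameters — the 15 Hz tag, sampling rate, epoch length, and noise level are arbitrary assumptions):

```python
import numpy as np

fs = 500.0                          # sampling rate in Hz (assumed)
tag = 15.0                          # flicker ("tagging") frequency in Hz (assumed)
t = np.arange(0, 3.0, 1.0 / fs)     # one 3-s epoch

# Simulated scalp signal: a small oscillation at the driving frequency
# buried in much larger broadband "EEG" noise.
rng = np.random.default_rng(0)
eeg = 0.5 * np.sin(2 * np.pi * tag * t) + 2.0 * rng.standard_normal(t.size)

# Amplitude spectrum; the epoch spans an integer number of stimulation
# cycles, so the tag frequency falls exactly on an FFT bin.
spec = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

# The largest spectral peak (excluding the lowest-frequency bins) sits
# at the driving frequency, even though the oscillation is invisible in
# the raw time series.
mask = freqs >= 2.0
peak = freqs[mask][np.argmax(spec[mask])]
print(peak)
```

In real recordings the same logic is applied per electrode and condition, with the amplitude at the tagged bin serving as the measure of sustained sensory cortical engagement.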
Generators of the flicker-evoked and contrast-reversal ssVEP have been localized to the extended visual cortex (Müller et al., 1997), with strong contributions from retinotopic areas as well as from cortices higher in the visual hierarchy (Di Russo et al., 2007). Similarly, source estimation has indicated an early visual cortical origin of the face-evoked flicker-ssVEP (Wieser and Keil, 2011). Frequency-tagging refers to assigning different flicker frequencies to stimuli simultaneously presented in the visual field, so that their signals can be separated in the frequency domain (Appelbaum et al., 2006, Wang et al., 2007, Wieser and Keil, 2011) and submitted to time–frequency analyses, yielding a continuous measure of the visual resources allocated to a specific stimulus amid competing cues. As a consequence, this method is ideally suited for investigating competition between facial expressions and affective pictures. Recent ssVEP studies have suggested that affectively engaging stimuli prompt strong competition effects, associated with a reduction of the response amplitude elicited by a concurrent stimulus or task (Hindi Attar et al., 2010a, Hindi Attar et al., 2010b, Müller et al., 2008, Müller et al., 2011). One may therefore hypothesize that prioritized processing of facial expressions comes at the expense of processing the visual scene and vice versa, a possibility that has been neglected in this line of research so far.
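With two tags, a single mixed signal can be decomposed into one amplitude envelope per stimulus. The sketch below uses hypothetical 12 Hz and 15 Hz tags and a simple sliding-window FFT as a stand-in for the time–frequency analyses cited above, with an assumed mid-trial amplitude drop in the 12 Hz response (all parameters are illustrative, not the study's):

```python
import numpy as np

fs = 500.0                                   # sampling rate in Hz (assumed)
f_face, f_scene = 12.0, 15.0                 # hypothetical tagging frequencies
t = np.arange(0, 4.0, 1.0 / fs)

# One mixed "scalp" signal: the face-tagged response loses amplitude
# halfway through the trial, the scene-tagged response stays constant.
rng = np.random.default_rng(1)
face_amp = np.where(t < 2.0, 1.0, 0.3)
eeg = (face_amp * np.sin(2 * np.pi * f_face * t)
       + np.sin(2 * np.pi * f_scene * t)
       + 2.0 * rng.standard_normal(t.size))

def tag_amplitude(x, f, win=1.0):
    """Time-varying amplitude at frequency f via a sliding-window FFT.

    The window length is chosen so that f falls exactly on an FFT bin."""
    n = int(win * fs)
    k = int(round(f * win))                  # bin index of the tag frequency
    starts = range(0, x.size - n + 1, n // 4)
    return np.array([np.abs(np.fft.rfft(x[i:i + n])[k]) / n * 2
                     for i in starts])

face_env = tag_amplitude(eeg, f_face)        # drops after the halfway point
scene_env = tag_amplitude(eeg, f_scene)      # stays roughly flat
```

Each envelope tracks only its own stimulus, which is what allows competition between two overlapping stimuli to be quantified from a single recording.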
Recently, studies in the cognitive and affective neurosciences have increasingly used the steady-state visually evoked potential (ssVEP) to study different aspects of face processing, including the processing of emotional expression as well as face identification (e.g., Ales et al., 2012, Gruss et al., 2012, McTeague et al., 2011, Rossion and Boremanse, 2011, Rossion et al., 2012). Of note, these studies revealed different sensor locations for the maximal oscillatory responses of the visual cortex, which were predominantly expressed either over medial–occipital sensors (Gruss et al., 2012, McTeague et al., 2011) or over right temporo-occipital clusters, approximately at the sensor locations where the face-sensitive N170 of the ERP is normally maximal (e.g., Ales et al., 2012, Rossion and Boremanse, 2011, Rossion et al., 2012). These differences are mostly due to differences in stimulus presentation or experimental design, as the ssVEP can be driven in lower-tier visual cortices using high-contrast luminance modulation with square-wave stimulation (e.g., Gruss et al., 2012) or in higher-order cortices such as the fusiform cortex using sinusoidal modulation of face-specific contrast (e.g., Rossion and Boremanse, 2011).
The main goal of the present study was to examine the effects of viewing facial expressions on the cortical processing of contextual cues and vice versa. To this end, steady-state visually evoked potentials (ssVEPs) together with frequency-tagging were employed, yielding separate continuous estimates of sensory cortical engagement for the face and the context stimulus. We examined the following alternative hypotheses: 1) If competition between faces and visual scenes takes place, the ssVEP signal evoked by faces should be reduced when the face is embedded in affective compared to neutral scenes, whereas at the same time, cortical processing of the background visual scenes should be reduced when emotional compared to neutral facial expressions are presented. 2) If fearful facial expressions enhance attentional sensitivity, then enhanced ssVEP amplitudes for background scenes should be observed when a fearful face is present. Specific enhancement of the threat context during fearful face viewing would indicate that peripheral sensitivity is enhanced to amplify threat features selectively, rather than any content.
Section snippets
Participants
Twenty participants (20–27 years old, M = 22.80, SD = 2.46; 10 females; all right-handed) were recruited from general psychology classes at the University of Würzburg and received course credit for participation. None of the participants had a family history of photic epilepsy, and all reported normal or corrected-to-normal vision. Written consent was obtained from all participants, and all procedures were approved by the institutional review board of the University of Würzburg.
Design and procedure
Twenty-four pictures (12 female;
Steady-state visually evoked potentials (ssVEPs)
The analysis over the whole time period (200–3000 ms) revealed a significant interaction of tagged stimulus × visual scene × facial expression, F(4, 82) = 4.19, p = .039, ηp2 = .12. This effect was followed up with separate ANOVAs for faces and visual scenes. For faces, no significant effects of facial expression were observed, except for a trend toward higher activity for happy facial expressions, F(2, 38) = 2.96, p = .064, ηp2 = .14 (Fig. 4). Simple contrasts highlighted this difference for happy
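As an aside on the reported effect sizes, partial eta squared follows directly from the F ratio and its degrees of freedom, ηp2 = F·df1 / (F·df1 + df2). The sketch below checks this against the expression effect reported above; the small discrepancy from the reported .14 reflects rounding of the F value:

```python
def partial_eta_squared(f_ratio, df_effect, df_error):
    """Recover partial eta squared from an F ratio and its degrees of freedom."""
    return f_ratio * df_effect / (f_ratio * df_effect + df_error)

# Facial-expression effect reported above: F(2, 38) = 2.96
print(round(partial_eta_squared(2.96, 2, 38), 3))  # → 0.135
```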
Discussion
The present study investigated the interaction between facial expressions and surrounding context information in visuo-cortical processing using ssVEP frequency tagging. Flickering streams of facial expressions were overlaid on affective visual scenes flickering at a different frequency. This allowed us to obtain continuous electrocortical indices reflecting processing of each stimulus individually. Whereas electrocortical responses to facial expressions were not altered by visual background,
Conclusion
The present study capitalized on frequency-tagging in order to disentangle mutual influences of facial expressions and affective context scenes. This novel approach provides evidence that the presence of fearful faces increases vigilance specifically for threatening visual scenes, which may contain information about potential sources of threat in the environment. Vigilance to threat is reflected in increased large-scale (neural mass) electro-cortical activity in visual cortices. Taken together,
References (72)
- et al. Topographic organization of projections from the amygdala to the visual cortex in the macaque monkey. Neuroscience (2003)
- et al. Measuring emotion: the self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry (1994)
- et al. Time course of affective bias in visual attention: convergent evidence from steady-state visual evoked potentials and behavioral data. Neuroimage (2010)
- Why visual attention and awareness are different. Trends Cogn. Sci. (2003)
- et al. The distinct modes of vision offered by feedforward and recurrent processing. Trends Neurosci. (2000)
- et al. Fear and anxiety: animal models and human cognitive psychophysiology. J. Affect. Disord. (2000)
- et al. Fluctuations of steady-state VEPs — interaction of driven evoked-potentials and the EEG. Electroencephalogr. Clin. Neurophysiol. (1991)
- et al. Anxiolytic action on the behavioural inhibition system implies multiple types of arousal contribute to anxiety. J. Affect. Disord. (2000)
- et al. Social vision: sustained perceptual enhancement of affective facial cues in social anxiety. Neuroimage (2011)
- et al. Subcortical discrimination of unperceived objects during binocular rivalry. Neuron (2004)
- Contributions of the amygdala to emotion processing: from animal models to human behavior. Neuron
- Brain mechanisms for emotional influences on perception and attention: what is magic and what is not. Biol. Psychol.
- A steady-state visual evoked potential approach to individual face perception: effect of inversion, contrast-reversal and temporal dynamics. Neuroimage
- How brains beware: neural mechanisms of emotional attention. Trends Cogn. Sci.
- The neural correlates of feature-based selective attention when viewing spatially and temporally overlapping images. Neuropsychologia
- Binocular rivalry requires visual attention. Neuron
- Effects of direct and averted gaze on the perception of facially communicated emotion. Emotion
- An objective method for measuring face detection thresholds using the sweep steady-state visual evoked response. J. Vis.
- Behavioral performance follows the time course of neural facilitation and suppression during cued shifts of feature-selective attention. Proc. Natl. Acad. Sci. U. S. A.
- Cue-invariant networks for figure and background processing in human visual cortex. J. Neurosci.
- Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychol. Sci.
- The automaticity of emotional face-context integration. Emotion
- Context is routinely encoded during emotion perception. Psychol. Sci.
- Panic search: fear produces efficient visual search for nonthreatening objects. Psychol. Sci.
- Dissociating emotion-induced blindness and hypervision. Emotion
- Emotion improves and impairs early vision. Psychol. Sci.
- Emotion-induced trade-offs in spatiotemporal vision. J. Exp. Psychol. Gen.
- Emotional cues enhance the attentional effects on spatial and temporal resolution. Psychon. Bull. Rev.
- Do facial expressions signal specific emotions? Judging emotion from the face in context. J. Pers. Soc. Psychol.
- Occipito-temporal connections in the human brain. Brain
- The expression of the emotions in man and animals
- The perception of emotions by ear and by eye. Cogn. Emot.
- Spatiotemporal analysis of the cortical sources of the steady-state visual evoked potential. Hum. Brain Mapp.
- Face-evoked steady-state visual potentials: effects of presentation rate and face inversion. Front. Hum. Neurosci.
- Neural mechanisms of context effects on face recognition: automatic binding and context shift decrements. J. Cogn. Neurosci.
- Electrophysiological responses to evaluative priming: the LPP is sensitive to incongruity. Emotion
Cited by (59)
- Eye gaze direction modulates nonconscious affective contextual effect (2022, Consciousness and Cognition). Citation excerpt: “This discrepancy might be attributed to the fact that the processing of facial expressions often has divergent evolutionary implications among different emotions. For example, a fearful face often signalizes a potential threat in the environment (Wieser & Keil, 2014), forcing people to make a rapid decision to ‘fight-or-flight’, whereas a happy facial expression may serve as a safety signal that conveys the affiliative intent of others (Mehu & Dunbar, 2008; Mehu et al., 2007). Moreover, it is believed that there is an ‘automatic’ threat-sensitive mechanism to help people survive or cope with danger, even without consciousness or attention (Hedger et al., 2016).”
- Influence of scene-based expectation on facial expression perception: The moderating effect of cognitive load (2022, Biological Psychology). Citation excerpt: “In their study, scene stimuli were presented with the face stimuli simultaneously, which caused participants to pay more attention to the scene picture and thus, interfered with the structural encoding of facial expression. Wieser and Keil (2014) directly measured individuals’ distribution of attention to compound stimuli of expressions and scenes using steady-state visually evoked potentials and found that individuals devoted more attentional resources to scene pictures when participants observed the compound stimuli of faces and scenes, which supported our speculation above. In the priming paradigm, the enhanced amplitude of the EEG was interpreted as an increase in the engagement of cognitive resources because the target was more difficult to integrate with the context (see review by Kutas & Federmeier, 2011).”
- Reinforcement history shapes primary visual cortical responses: An SSVEP study (2021, Biological Psychology). Citation excerpt: “There are also preliminary indications that SSVEP may index associability in tasks that did not use conditioning, but rather emotional faces. Wieser and Keil (2014) demonstrated that when a face is shown embedded in a visual context, the expression on the face (fearful, neutral, happy) modulates the electrocortical response to the context in a manner that broadly resembles overshadowing. In a second study, Wieser et al. (2011) simultaneously showed people two faces that differed in their expressions, and found that angry faces biased attentional prioritization, but only in highly anxious individuals.”