An ERP study of category priming: Evidence of early lexical semantic access
Introduction
In the past, two main paradigms have been used to study semantic access. One uses single-word semantic priming (Meyer and Schvaneveldt, 1971, Neely, 1976; see Neely, 1991 for a review), and the other uses words in sentence contexts (Fischler and Bloom, 1979, Schuberth and Eimas, 1977). A common finding from these early behavioral studies is that when a lexical decision is required, the response time to the target word is facilitated if it has been preceded either by a semantically related word or by a semantically congruent sentence, particularly when the target word provides a highly likely completion of the sentence. While these early studies confirmed the effects of semantic context on visual word recognition, it is nonetheless difficult to specify the exact stage or stages of word processing that produce the behavioral effects, and such studies therefore cannot address the question of when semantic access occurs.
With their excellent temporal resolution, electrophysiological methods (mainly event-related potentials, ERPs, and occasionally direct single-cell recordings) have proved more appropriate for this type of question than either behavioral or other brain imaging approaches (e.g., fMRI and PET). In a now classic study, Kutas and Hillyard (1980) recorded ERPs to single words in a sentence in which a semantic context was set up, with the target word at the end. They found a negative brain wave component peaking at around 400 ms (the N400) that was sensitive to semantic congruity. This robust effect suggests that semantic factors influence word processing at least as early as 250 ms, when the N400 component arises. Subsequent to this publication, much of the ERP work on semantic priming in word recognition, using either the single-word priming paradigm or a sentence format, has focused on the N400 component (see Osterhout and Holcomb, 1995 for a review). Despite its consistent replication and its reasonable accordance with the usual reading speed of 200–300 words per minute, there are studies suggesting that lexical semantic access might occur earlier, within 200 ms of stimulus presentation.
For example, as the average fixation duration in reading is about 250 ms, with a large proportion of it used for motor planning and initiation of the next saccadic eye movement, it is likely that a great deal of lexical information, including lexical semantics, is processed within the first 200 ms (Rayner, 1998, Rayner et al., 1983). When saccadic eye movements are removed through rapid serial visual presentation (RSVP), reading speed can be dramatically improved and can even exceed 1000 words per minute with comprehension accuracy as high as 75% (Rubin and Turano, 1992). Although the remarkable reading speed found by Rubin and Turano might be due to the easier materials used in their study (texts for grades 4–9) compared with other studies (Masson, 1983), or with studies in which syntactic structures were specifically probed for comprehension (Kennedy and Murray, 1984), it nonetheless serves as strong evidence that lexical semantics can be accessed very quickly.
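The timing argument above reduces to simple arithmetic. The sketch below uses only figures stated in the text (250 ms fixations, 1000 words per minute under RSVP); the calculation itself is illustrative, not part of the original analysis.

```python
# Back-of-the-envelope timing: why RSVP reading speeds imply fast lexical access.
# Input figures come from the text; the arithmetic is illustrative.

fixation_ms = 250   # average fixation duration in normal reading
rsvp_wpm = 1000     # RSVP reading speed reported by Rubin and Turano (1992)

# At 1000 wpm, each word is available for only 60 ms of presentation time.
ms_per_word_rsvp = 60_000 / rsvp_wpm

# One word per ~250 ms fixation caps normal reading speed at 240 wpm,
# matching the usual 200-300 wpm range cited above.
max_wpm_normal = 60_000 / fixation_ms

print(ms_per_word_rsvp)  # 60.0
print(max_wpm_normal)    # 240.0
```

Even allowing processing to overlap across successive words, sustained comprehension at a 60 ms per-word presentation rate is hard to reconcile with lexical semantic access that only begins after 200 ms.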
This is further supported by some recent ERP studies. For example, Sereno et al. (2003) examined the N1 component for semantically ambiguous words (e.g., bank) that have both a dominant meaning (e.g., related to money) and a subordinate meaning (e.g., related to river). These ambiguous words were presented in two contexts, one neutral and the other biased towards the subordinate meaning. They found that the amplitude of the N1 component, occurring between 132 and 192 ms after stimulus onset, was sensitive to semantic context. In the neutral context, where an ambiguous word is associated with its more frequently used dominant meaning, its N1 amplitude was comparable to that of high-frequency words; in the context biased towards the less frequently used subordinate meaning, the N1 amplitude of the same words became comparable to that of low-frequency words. These results suggest that lexical semantic access must have occurred early enough to influence the N1 component during this period (i.e., between 132 and 192 ms). Consistent with this result, using linear regression analyses Hauk et al., 2006a, Hauk et al., 2006b found that semantic coherence, defined as the degree to which words sharing a root morpheme (e.g., gold, golden, goldsmith) are semantically related to each other, significantly correlated with ERP data in the time window around 160 ms. Similarly, Penolazzi et al. (2007) found that cloze probability (i.e., the probability of a target word appearing as the completion of a sentence) was reflected in ERP data at around 180 ms. Even more surprisingly, some studies have reported semantic effects as early as about 100 ms (Pulvermuller et al., 2001, Skrandies, 1998).
Therefore, when results from both eye movement and ERP studies are considered together, it seems likely that lexical semantic access occurs prior to 200 ms after stimulus onset, although there is some debate as to exactly when (Sereno and Rayner, 2003).
In the current study, we took another approach by asking participants to perform the standard lexical decision task in two conditions, one of which placed more emphasis on the semantic meanings of the words. This was achieved by having all words within each block of the semantic condition belong to the same category. Across subjects, the same stimuli appeared in both the (non-semantically blocked) lexical decision condition and the semantic condition. Using this paradigm, we found in a pilot study (unpublished data) a semantic effect in the N1 component, stronger on the left, between 150 and 200 ms. However, because word length differentiated words and pseudowords (pseudowords were shorter to ease reading), it was not possible to draw straightforward conclusions, as studies have shown that word length may influence early ERP components both on its own (Hauk et al., 2006a, Hauk and Pulvermuller, 2004) and through interactions with other factors, such as word frequency (Assadollahi and Pulvermuller, 2003) and semantics (Penolazzi et al., 2007). We therefore conducted the current study, this time controlling for the length of the stimulus items.
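The blocked-category design described above can be sketched in code. All stimulus words, pseudowords, and function names below are hypothetical placeholders, not the actual study materials; the sketch only illustrates how the same items yield a category-blocked list (semantic condition) versus a fully intermixed list (lexical decision condition).

```python
import random

# Hypothetical stimulus sets: real words grouped by semantic category,
# plus pseudoword foils (all items here are illustrative, not the study's).
categories = {
    "animals": ["horse", "tiger", "sheep"],
    "tools":   ["drill", "wrench", "hammer"],
}
pseudowords = ["blint", "crand", "plome", "trusk", "glave", "snorp"]

random.seed(0)  # fixed seed so the illustration is reproducible

def semantic_blocks(categories, pseudowords):
    """Semantic condition: each block mixes one category's words with foils."""
    pw = iter(pseudowords)
    blocks = []
    for name, words in categories.items():
        block = words + [next(pw) for _ in words]  # equal words and foils
        random.shuffle(block)
        blocks.append((name, block))
    return blocks

def lexical_decision_list(categories, pseudowords):
    """Lexical decision condition: the same items, intermixed across categories."""
    items = [w for ws in categories.values() for w in ws] + pseudowords
    random.shuffle(items)
    return items

ls_blocks = semantic_blocks(categories, pseudowords)
ld_list = lexical_decision_list(categories, pseudowords)
```

The key property, matching the design described above, is that the two conditions draw on an identical item set and differ only in how items are grouped into blocks.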
Subjects
Fourteen university students (12 females, mean age = 20) participated in the current study. No participants reported any language deficits (e.g., dyslexia), attentional problems (e.g., ADHD), motor problems, psychiatric history, or conditions that might affect the nervous system (e.g., severe head injuries and epilepsy). All participants were right-handed native English speakers with normal or corrected to normal vision. After the study, a research participation credit or $15 compensation was
Behavioral data
Response times (RTs) for words were shorter overall than for nonwords, F(1,13) = 52.7, p < .001, and this lexicality effect was further modified by a lexicality × task interaction, F(1,13) = 10.3, p = .007, being larger in the LD task (RT difference = 77.7 ms, t(13) = 7.7, p < .001) than in the LS task (RT difference = 45.2 ms, t(13) = 4.7, p < .001) (Fig. 3). In addition, the main effect of task was also significant, F(1,13) = 56.4, p < .001. While RT was overall shorter for the LS task, the task effect was
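The 2 × 2 within-subjects structure of this analysis (lexicality × task, 14 subjects) can be illustrated as follows. The RT values below are simulated, not the study's data, and the means are only loosely patterned after the reported effects; the point is how the per-task lexicality effects and their interaction reduce to paired t-tests on within-subject difference scores, each with df = 13.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 14  # as in the study

# Simulated per-subject mean RTs (ms); word < nonword, with a larger
# lexicality gap in the LD task than in the LS task (illustrative only).
rt_word_ld = rng.normal(600, 40, n_subjects)
rt_nonword_ld = rt_word_ld + rng.normal(78, 25, n_subjects)
rt_word_ls = rng.normal(560, 40, n_subjects)
rt_nonword_ls = rt_word_ls + rng.normal(45, 25, n_subjects)

def paired_t(x, y):
    """Paired t-test on within-subject differences: returns (t, df)."""
    d = x - y
    t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    return t, len(d) - 1

# Lexicality effect within each task
t_ld, df = paired_t(rt_nonword_ld, rt_word_ld)
t_ls, _ = paired_t(rt_nonword_ls, rt_word_ls)

# Lexicality x task interaction: paired t on the difference of differences
interaction = (rt_nonword_ld - rt_word_ld) - (rt_nonword_ls - rt_word_ls)
t_int, _ = paired_t(interaction, np.zeros(n_subjects))

print(f"LD lexicality: t({df}) = {t_ld:.2f}")
print(f"LS lexicality: t({df}) = {t_ls:.2f}")
print(f"Interaction:   t({df}) = {t_int:.2f}")
```

In a full analysis the omnibus F-tests would come from a repeated-measures ANOVA; for a 2 × 2 within-subjects design, the interaction F(1,13) is equivalent to the square of the difference-of-differences t(13) shown here.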
Effect of lexical status at 100 ms
We found in the current study that the earliest lexicality effect occurred by 100 ms. A similar lexicality effect in the P1 was reported by Sereno et al. (1998), in which the P1 differentiated words from both pseudowords and consonant strings, while the latter two did not differ from each other. While this early differentiation between words and pseudowords could be due to lexical semantic access, it is more likely based on visual processing of orthographic structures at a sensory/perceptual
Summary and conclusions
Using the same words and nonwords in two tasks that differed in emphasis on the ease of semantic retrieval, we examined the early ERP components P1 and N1 for visual word recognition. Several key findings were obtained. First, we observed that words and pseudowords could be differentiated very quickly at about 100 ms. The brain mechanism underlying this early lexicality effect probably depends on neural encoding of orthographic structures in visual cortex, which is specific enough that such
Acknowledgments
Funded in part by grants to SJS from NSERC and CFI of Canada. We would like to thank Jane Dywan and the other members of the Cognitive and Affective Neuroscience Laboratory at Brock University for their discussions and help with data collection, and two anonymous reviewers for their constructive feedback.
References
- et al. Tuning of the human left fusiform gyrus to sublexical orthographic structure. NeuroImage (2006)
- et al. Automatic and attentional processes in the effects of sentence contexts on word recognition. Journal of Verbal Learning and Verbal Behavior (1979)
- et al. The time course of visual word recognition as revealed by linear regression analysis of ERP data. NeuroImage (2006)
- et al. Effects of word length and frequency on the human event-related potential. Clinical Neurophysiology (2004)
- et al. Reference-free identification of components of checkerboard-evoked multichannel potential fields. Electroencephalography and Clinical Neurophysiology (1980)
- The gate for reading: reflections on the recognition potential. Brain Research Reviews (2007)
- et al. Early semantic context integration and lexical access as revealed by event-related brain potentials. Biological Psychology (2007)
- et al. The myth of the visual word form area. NeuroImage (2003)
- et al. Reading without saccadic eye movements. Vision Research (1992)
- et al. Measuring word recognition in reading: eye movements and event-related potentials. Trends in Cognitive Sciences (2003)
- Evoked potential correlates of semantic meaning—a brain mapping study. Cognitive Brain Research
- Subprocesses of performance monitoring: a dissociation of error processing and response competition revealed by event-related fMRI and ERPs. NeuroImage
- Early influences of word length and frequency: a group study using MEG. NeuroReport
- The role of meaning in word recognition
- Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences
- The visual word form area: spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain
- Language-specific tuning of visual cortex? Functional properties of the visual word form area. Brain
- The role of the posterior fusiform gyrus in reading. Journal of Cognitive Neuroscience