Elsevier

Neuropsychologia

Volume 41, Issue 8, 2003, Pages 1008-1019

The dynamic nature of language lateralization: effects of lexical and prosodic factors

https://doi.org/10.1016/S0028-3932(02)00315-9

Abstract

In dichotic listening, a right ear advantage for linguistic tasks reflects left hemisphere specialization, and a left ear advantage for prosodic tasks reflects right hemisphere specialization. Three experiments used a response hand manipulation with a dichotic listening task to distinguish between direct access (relative specialization) and callosal relay (absolute specialization) explanations of perceptual asymmetries for linguistic and prosodic processing. Experiment 1 found evidence for direct access in linguistic processing and callosal relay in prosodic processing. Direct access for linguistic processing was found to depend on lexical status (Experiment 2) and affective prosody (Experiment 3). Results are interpreted in terms of a dynamic model of hemispheric specialization in which right hemisphere contributions to linguistic processing emerge when stimuli are words, and when they are spoken with affective prosody.

Introduction

It has been well-established that the dichotic listening paradigm provides an estimate of hemispheric specialization [21], [38], [45]. Typically, a right ear advantage (REA) is observed for linguistic processing, reflecting left hemisphere specialization, and a left ear advantage (LEA) is observed for some forms of nonlinguistic processing, reflecting right hemisphere specialization. The direction of the ear advantage depends on the task, and not on the nature of the stimulus itself. For example, when messages are presented in different emotional tones of voice, an REA is observed when attending to linguistic content, but an LEA is observed when attending to the tone of voice, or emotional prosody [6], [24].

The ear advantage on a dichotic listening task can arise through one of two possible mechanisms, which Zaidel et al. have termed “direct access” and “callosal relay” ([43], see also Moscovitch [26], [27] who uses the terms “efficiency model” and “strict localization”, respectively). In both models dichotic stimuli are projected from each ear to the opposite hemispheres via the contralateral auditory pathways. Under dichotic competition, the ipsilateral pathways are thought to be suppressed [20]. In the callosal relay model, hemispheric specialization is absolute, and the stimuli from both ears are ultimately processed in the dominant hemisphere. An REA for linguistic processing thus arises because the stimulus from the left ear must be relayed across the callosum from the right hemisphere to the left hemisphere, resulting in a delay and possible degradation of the signal. Similarly, the LEA observed on a prosodic task arises because the right ear signal must be relayed from the left hemisphere to the right hemisphere. In the direct access model, each hemisphere is capable of performing the task, but one is superior to the other. Stimuli are processed in the hemisphere to which they are projected. The REA for linguistic processing now results because the left hemisphere is better (faster and/or more accurate) than the right at performing the task.

If one’s goal is to determine which hemisphere is better at a task, it does not matter which mechanism leads to the ear advantage. However, if one wishes to make inferences about locus of processing, or to evaluate relative versus absolute specialization, this distinction is very important. For example, the left ear score on a linguistic dichotic task might reflect right hemisphere processing of the stimulus (direct access), but it might also reflect the efficiency of callosal transfer. The present experiments empirically determine the mechanisms through which the ear advantages arise in dichotic listening as a function of both task and stimulus parameters. Experiment 1 examines a dichotic listening task that has previously been demonstrated to produce an REA for linguistic processing and an LEA for prosodic processing [6], [7]. Experiments 2 and 3 examine the effect of semantic and prosodic factors on the mechanism underlying the REA.

Many verbal dichotic effects are considered to arise through callosal relay [43]. This conclusion is based primarily on the finding that split-brain patients demonstrate extinction of the left ear signal under dichotic conditions [25], [28], [35], [36], suggesting that the left ear signal cannot be processed in the right hemisphere. However, claims of left ear extinction may be somewhat exaggerated. A careful literature review of dichotic listening in the split-brain patients described above reveals that only 2 of 16 patients demonstrate complete extinction of the left ear. Furthermore, it is possible that left ear attenuation reflects an attentional neglect of the left ear that arises under conditions of bilateral competition, and not right hemisphere incompetence. A similar phenomenon has been demonstrated in the visual modality, in which split-brain patients may be able to respond to words presented in the left visual field on unilateral trials, but not on bilateral trials [39]. Lassonde et al. [23] have suggested that left ear suppression in split-brain patients reflects trauma to the right hemisphere during surgery and not callosal disconnection. They report a split-brain patient (SL) who demonstrated extinction of the left ear immediately after surgery, but demonstrated normal dichotic performance after 5 years. An alternative explanation for left ear performance in split-brain patients is a failure of ipsilateral suppression that may occur as a result of cortical reorganization, so that the left ear stimulus was actually projected to the left hemisphere. We conclude that the data from these patients is equivocal on this issue.

The callosal relay model is based on the strong assumption of absolute hemispheric specialization. However, considerable evidence indicates that the right hemisphere has some competence in speech comprehension [15], [16], [44]. It has been proposed that callosal relay may be necessary for consonant–vowel syllables that have no semantic association, but that direct access might be observed when the stimuli are words [10], [36]. Thus the lexical status of the stimuli may play a key role in the mechanisms that underlie the ear advantage.

Almost nothing is known about the mechanism leading to the LEA for prosodic processing, except that it reflects right hemisphere specialization. Patients with right hemisphere damage may have deficits in both the production and comprehension of affective prosody [4], [18], [33], patients undergoing the Wada procedure lose the ability to express affective prosody during right-side injection [31], and dichotic listening studies in normals consistently report LEAs for comprehension of affective prosody [14], [34], [37]. There is very little data as to the competence of the left hemisphere in prosodic processing, although there is some evidence that linguistic prosody may not be as strongly lateralized as affective prosody, or may even be lateralized to the left hemisphere [2], [18].

Fortunately, it is possible to distinguish between direct access and callosal relay interpretations of the ear advantage in normal subjects. A number of criteria that identify a direct access pattern of processing have been described by Moscovitch [26], [27] and by Zaidel et al. [43]. A simple method relies on the use of a response hand manipulation: callosal relay predicts an ear advantage that does not interact with response hand, direct access predicts the ear advantage will be attenuated when the response is made with the hand contralateral to the dominant ear. The logic for assessing an REA follows: under conditions of callosal relay, stimuli from both ears are processed by the left hemisphere, which then generates the motor response when responding with the right hand, or relays the response to the right hemisphere when responding with the left hand. The REA arises solely from the time required for the left ear signal to cross the callosum from the right hemisphere to the left hemisphere. One therefore expects a main effect of ear (arising from interhemispheric transmission time), possibly a main effect of response hand, reflecting the additional time required to relay the response to the right hemisphere when responding with the left hand (although this effect depends on an equivalence of right and left hands, and is therefore questionable), but importantly, no interaction between ear and hand. The magnitude of the ear advantage should be the same whether the participant is responding with the left hand or the right hand. Under conditions of direct access, however, different predictions arise. Here the basic ear advantage arises because of a difference in processing efficiency between the left and right hemispheres, but the magnitude of that ear advantage is modified by the responding hand. When responding with the right hand, the REA is enhanced, because the left hemisphere has more immediate access than the right hemisphere to the responding right hand. 
However, when responding with the left hand, the REA is attenuated, because now the right hemisphere has more immediate access than the left hemisphere to the responding left hand. Thus an interaction between ear and response hand should be observed such that the REA is attenuated when responding with the left hand. A similar logic holds for right hemisphere processes such as prosody. Callosal relay should result in an LEA that does not interact with response hand. However, direct access should produce an LEA that is attenuated when responding with the right hand. The three experiments presented here used a response hand manipulation to examine the mechanisms producing the ear advantages for linguistic and prosodic processing.
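The contrasting predictions of the two models can be illustrated with a toy reaction-time sketch. All millisecond values below are illustrative assumptions, not data from these experiments; the point is only the pattern of effects (a constant REA under callosal relay, an ear-by-hand interaction under direct access):

```python
# Toy model of predicted mean reaction times (ms) for a linguistic (REA) task.
# Parameter values are arbitrary; only the qualitative pattern matters.
BASE = 500        # baseline response time
CALLOSAL = 30     # assumed cost of one interhemispheric (callosal) transfer
EFFICIENCY = 40   # assumed left-over-right hemisphere efficiency gap

def callosal_relay_rt(ear, hand):
    """Absolute specialization: all stimuli are processed in the left hemisphere."""
    rt = BASE
    if ear == "left":    # left-ear signal must cross the callosum to the left hemisphere
        rt += CALLOSAL
    if hand == "left":   # motor response must be relayed to the right hemisphere
        rt += CALLOSAL
    return rt

def direct_access_rt(ear, hand):
    """Relative specialization: each hemisphere processes the stimulus it receives."""
    processing_hemi = "right" if ear == "left" else "left"   # contralateral projection
    responding_hemi = "right" if hand == "left" else "left"  # contralateral motor control
    rt = BASE
    if processing_hemi == "right":
        rt += EFFICIENCY  # right hemisphere is assumed slower at the linguistic task
    if responding_hemi != processing_hemi:
        rt += CALLOSAL    # response command must cross the callosum
    return rt

def rea(model, hand):
    """Right ear advantage for a given response hand (left-ear RT minus right-ear RT)."""
    return model("left", hand) - model("right", hand)

# Callosal relay: REA is 30 ms with either hand (main effect of ear, no interaction).
# Direct access: REA is 70 ms with the right hand but only 10 ms with the left hand
# (ear-by-hand interaction: the REA is attenuated under left-hand responding).
```

This is why the response hand manipulation is diagnostic: only the direct access model predicts that the magnitude of the ear advantage changes with the responding hand.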

Section snippets

Experiment 1

The dichotic task in Experiment 1 was developed by Bryden and MacRae [6] and consists of dichotically paired words, spoken in different emotional tones of voice. Using this task, a number of studies have reported an REA when subjects are instructed to listen for a target word, and an LEA when instructed to listen for a target tone of voice (e.g. [7], [17]). Direct access can be inferred if there is an interaction between ear and response hand such that the ear advantage is attenuated when the response is made with the hand contralateral to the dominant ear.

Experiment 2

Experiment 2 was designed to test the hypothesis that the direct access pattern observed for linguistic processing in Experiment 1 was a result of the lexical/semantic nature of the stimuli. This experiment was a replication of Experiment 1, except that the stimuli were the nonsense words “baka”, “paka”, “taka”, and “daka”, spoken in tones of voice that were happy, angry, sad, and neutral. If the right hemisphere demonstrates competence for words, but not non-words, then a callosal relay pattern would be expected for these non-word stimuli.

Experiment 3

Experiment 2 suggests that the pattern of direct access for linguistic processing that was observed in Experiment 1 was the result of the semantic/lexical nature of the stimuli. However, the stimuli were also exceptional in that they were spoken in emotional tones of voice. Thus it is possible that the presence of affective prosody interacted with the semantic/lexical nature of the stimuli to facilitate right hemisphere processing. Experiment 3 tested the hypothesis that the affective prosody of the stimuli contributed to the direct access pattern observed for linguistic processing.

References (45)

  • R.G. Ley et al., A dissociation of right and left hemispheric effects for recognizing emotional tone and verbal content, Brain and Cognition (1982)
  • M. Moscovitch, Afferent and efferent models of visual perceptual asymmetries: theoretical and empirical implications, Neuropsychologia (1986)
  • D.S. O’Leary et al., A positron emission tomography study of binaurally and dichotically presented stimuli: effects of level of language and directed attention, Brain and Language (1996)
  • E.D. Ross et al., Acoustic analysis of affective prosody during right-sided Wada Test: a within-subjects verification of the right hemisphere’s role in language, Brain and Language (1988)
  • B.E. Shapiro et al., The role of the right hemisphere in the control of speech prosody in prepositional and affective contexts, Brain and Language (1985)
  • F. Shipley-Brown et al., Hemispheric processing of affective and linguistic intonation contours in normal subjects, Brain and Language (1988)
  • R. Sparks et al., Dichotic listening in man after section of neocortical commissures, Cortex (1968)
  • S.P. Springer et al., Dichotic testing of partial and complete split brain subjects, Neuropsychologia (1975)
  • E. Strauss et al., Performance on a free-recall verbal dichotic listening task and cerebral dominance determined by the carotid amytal test, Neuropsychologia (1987)
  • E.L. Teng et al., Interhemispheric interaction during simultaneous bilateral presentation of letters or digits in commissurotomized patients, Neuropsychologia (1973)
  • C. Umilta et al., Evidence of interhemispheric transmission in laterality effects, Neuropsychologia (1985)
  • E. Zaidel, Auditory vocabulary of the right hemisphere following brain bisection or hemidecortication, Cortex (1976)