Cognition

Volume 85, Issue 3, October 2002, Pages B61-B69

Brief article
Semantic distance effects on object and action naming

https://doi.org/10.1016/S0010-0277(02)00107-5

Abstract

Graded interference effects were tested in a naming task, in parallel for objects and actions. Participants named either object or action pictures presented in the context of other pictures (blocks) that were semantically very similar, somewhat semantically similar, or semantically dissimilar. We found that naming latencies for both object and action words were modulated by the semantic similarity between the exemplars in each block, providing evidence in both domains of graded semantic effects.

Introduction

Miller and Fellbaum (1991) wrote: “When psychologists think about the organization of lexical memory it is nearly always the organization of nouns that they have in mind” (p. 214). Even more specifically, we may add, often it is nouns referring to objects that we have in mind.

Although the object-noun domain is certainly relevant to studies of lexical memory, it only represents part of adults' lexical knowledge; theories and tools developed to investigate semantic organization must generalize beyond words for things. In this article we take up this challenge, addressing the question: can we capture semantic relatedness effects in object-noun and action-verb domains using parallel principles and tools?

Meaning similarity among words affects many tasks involving speech production. For example, speakers are slower in naming a picture when a meaning-related distracter word is presented, relative to an unrelated word (e.g. Glaser & Düngelhoff, 1984). Similar interference effects arise when speakers name pictures in the context of naming other pictures from the same semantic field, relative to naming pictures in the context of other pictures from different semantic fields (semantic context effects; Damian et al., 2001, Kroll and Stewart, 1994).

It is generally agreed that these effects reflect properties of the conceptually-driven lexical retrieval process (Levelt, Roelofs, & Meyer, 1999). During retrieval, a target lexico-semantic representation is activated, along with other meaning-related representations. Picture-word interference and semantic context effects reflect competition between these different representations (Damian et al., 2001). The reliability of semantic interference effects in production tasks, and the observation that these effects are relatively impervious to lexical dimensions such as frequency, length and phonological overlap (Levelt et al., 1999), render them well-suited to investigate the semantic representation of words in memory. Here, we capitalize on semantic context effects to investigate the fine-grained semantic organization of words referring to objects and actions.

Many previous studies investigating words referring to objects (e.g. Schriefers, Meyer, & Levelt, 1990) and one study concerning actions (Roelofs, 1993) have established differences in naming latencies between semantically related and unrelated conditions. However, interference effects in other content domains (numbers and colors) have been shown to be graded: modulated by the degree of semantic similarity between the to-be-named target and the distracter. For example, Klopfer (1996) reported that, in a Stroop-like task, words representing colors perceptually similar to the color to be named produced greater interference than words representing perceptually dissimilar colors. Distance effects also occur in the number domain (Brysbaert, 1995, Moyer and Landauer, 1967, Pavesi and Umiltà, 1998). However, colors and numbers may be special content domains because it is easy to describe their primary conceptual dimensions: hue and saturation for colors, quantity for numbers.

What about more complex domains: objects and actions? Because identifying the primary conceptual dimensions is far more difficult in these domains, can graded effects be observed? To date, little empirical work has addressed this question (beyond studies using the “release from proactive interference” paradigm; Wickens, 1970, Wickens et al., 1963), although a number of different theories of meaning representation predict semantic distance effects, especially those assuming distributed semantic representations and featural overlap between them (e.g. Martin and Chao, 2001, McRae et al., 1997, Plaut, 1995).

Furthermore, semantic representations of objects and actions are different. Within the object domain, category membership has powerful effects, most strikingly in patients who are selectively impaired or spared in one category of knowledge, such as animals (Caramazza & Shelton, 1998), body-parts (Shelton, Fouch, & Caramazza, 1998) and fruits and vegetables (Hart, Berndt, & Caramazza, 1985). These findings have led some researchers to postulate that domains playing a fundamental role for our survival (e.g. animals, plants and body-parts) are represented categorically in semantic memory within dedicated neural substrates (Caramazza & Shelton, 1998). In this view, semantic distance effects may not be observed between evolutionarily motivated categories, which should act as isolable clusters because they are independent of other domains of knowledge. In contrast, graded effects may be observed between categories which are not evolutionarily motivated. This contrasts with proposals according to which featural overlap (regardless of category membership) determines semantic similarity within and between categories (e.g. Martin & Chao, 2001). Furthermore, it remains an empirical question whether “categorical” and “featural overlap” effects for objects may be disentangled in behavioral tasks.

In the action domain, category boundaries are not as well defined. For example, consider “speaking”. Is this verb categorized as “communication”, like “teaching”, or perhaps as “body-noise”, like “snoring”? To account for such differences between objects and actions, Huttenlocher and Lui (1979) proposed that category membership and well-defined hierarchical organization are important organizational principles in the object domain, while the action domain is organized in a matrix-like manner, with exemplars in different fields sharing general properties (e.g. intentionality) crossing semantic fields, and lacking clear hierarchical organization. It is an empirical question whether such differences have consequences for the likelihood of observing graded effects in both domains.

In order to assess graded semantic similarity effects, we operationalized semantic similarity in a way that allows us to select materials in both the object and action domains. We used empirically-based measures of semantic distance for 456 English words (referring to objects and to actions), obtained in the following manner (for detailed descriptions of the methods, see Vinson & Vigliocco, 2002). First, feature norms were obtained by asking speakers of English to generate features that define and describe each word. Second, we used self-organizing maps (SOMs) (Kohonen, 1997) to reduce the dimensionality of the featural space and obtain a semantic space in which each word is represented as the unit best responding to an input vector in the resulting map. Semantic distance is operationalized as the Euclidean distance between two best responding units in the space. This model, hence, serves as an empirical tool to select materials on the basis of semantic distance, without making a priori or arbitrary decisions about the degree of semantic similarity between words.
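To make the distance computation concrete, here is a minimal Python sketch of such a pipeline, assuming a binary word-by-feature matrix derived from feature norms and using the third-party MiniSom library; the toy items, map size, and training settings are illustrative assumptions, not the parameters used by Vinson and Vigliocco (2002).

```python
# Minimal sketch: feature norms -> self-organizing map -> pairwise semantic distance.
# Assumes a word-by-feature matrix (rows = words, columns = speaker-generated
# features); all items and settings below are illustrative only.
import numpy as np
from minisom import MiniSom  # third-party SOM implementation (assumption)

# Toy word-by-feature matrix: 1 if a feature was produced for a word, else 0.
words = ["dog", "cat", "hammer"]
features = np.array([
    [1, 1, 0, 1, 0],   # dog
    [1, 1, 0, 0, 1],   # cat
    [0, 0, 1, 1, 0],   # hammer
], dtype=float)

# Train a small 2-D map; each map unit has a weight vector in feature space.
som = MiniSom(10, 10, input_len=features.shape[1],
              sigma=1.0, learning_rate=0.5, random_seed=42)
som.train_random(features, num_iteration=500)

# Each word is represented by its best-responding unit on the map; semantic
# distance is the Euclidean distance between the coordinates of those units.
best_units = {w: np.array(som.winner(v)) for w, v in zip(words, features)}

def semantic_distance(w1, w2):
    return float(np.linalg.norm(best_units[w1] - best_units[w2]))

print(semantic_distance("dog", "cat"))      # expected to be relatively small
print(semantic_distance("dog", "hammer"))   # expected to be relatively large
```

In the actual study the map was trained on feature norms for 456 object and action words; the sketch only illustrates how distances in the resulting space can be used to select stimuli without a priori decisions about similarity.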


Participants

Ninety-four native English speakers from the UCL community participated in the experiment in exchange for payments of £3. All had normal or corrected-to-normal vision.

Materials

Action and object pictures were selected separately based upon semantic distance. Selected items were picturable objects or actions from separable semantic fields. Items were relatively similar within a semantic field, and separable (in terms of semantic distance) between fields. Finally, between-field distances were such that two

Errors

Each participant's responses were scored for errors, including failure to detect initial word onset, false detections, and erroneous or dysfluent utterances (6.9% of all trials). Analysis of variance (ANOVA) showed no significant effect of semantic context (same, near, and far) on the number of errors, either for object or action naming (all F<1). Error trials were then removed, as were response latencies faster than 250 ms or slower than 1500 ms (1.6% of all trials).
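As a purely illustrative sketch of this trial-exclusion step (not the authors' analysis code), assuming trial-level data with hypothetical columns "error" (Boolean) and "rt" (naming latency in milliseconds):

```python
# Minimal sketch of the trimming procedure described above (assumed column
# names and file name; thresholds taken from the text).
import pandas as pd

trials = pd.read_csv("naming_trials.csv")          # hypothetical file name

clean = trials[~trials["error"]]                    # drop error trials
clean = clean[clean["rt"].between(250, 1500)]       # trim extreme latencies

print(f"excluded {1 - len(clean) / len(trials):.1%} of trials")
```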

Response latencies

Fig. 1 reports the response

Discussion

We obtained parallel, graded patterns of semantic context interference for separate sets of object nouns and action verbs, selected not on the basis of a priori category membership, but based upon semantic distances obtained from a model of lexical-semantic representation derived from speaker-generated features (Vinson & Vigliocco, 2002).

It is important to note here that semantic distance is correlated with visual similarity (especially in tasks using pictures; see Vitkovitch, Humphreys, &

Acknowledgements

The work reported here was supported by a Human Frontier Science Program Grant (RG148/2000) to Gabriella Vigliocco.

References (28)

  • J. Druks et al. (2000). An object and action naming battery.
  • W.R. Glaser et al. (1984). The time course of picture-word interference. Journal of Experimental Psychology: Human Perception and Performance.
  • J. Hart et al. (1985). Category-specific naming following cerebral infarction. Nature.
  • J. Huttenlocher et al. (1979). The semantic organization of some simple nouns and verbs. Journal of Verbal Learning and Verbal Behavior.