Review

Photographs of Actions: What Makes Them Special Cues to Social Perception

by
Leopold Kislinger
Independent Researcher, Cranachstraße 39/3, 4060 Leonding, Austria
Brain Sci. 2021, 11(11), 1382; https://doi.org/10.3390/brainsci11111382
Submission received: 14 July 2021 / Revised: 18 October 2021 / Accepted: 18 October 2021 / Published: 22 October 2021
(This article belongs to the Special Issue Neuromodulation of Language, Cognition and Emotion)

Abstract

I have reviewed studies on neural responses to pictured actions in the action observation network (AON) and on the cognitive functions of these responses. Based on this review, I have analyzed the specific representational characteristics of action photographs. There has been consensus that AON responses provide viewers with knowledge of observed or pictured actions, but there has been controversy about the properties of this knowledge. Is this knowledge causally provided by AON activities or is it dependent on conceptual processing? What elements of actions does it refer to, and how generalized or specific is it? The answers to these questions have come from studies that used transcranial magnetic stimulation (TMS) to stimulate motor or somatosensory cortices. In conjunction with electromyography (EMG), TMS allows researchers to examine changes in the excitability of the corticospinal tract and muscles of people viewing pictured actions. The timing and muscle specificity of these changes enable inferences about the cognitive products of processing pictured actions in the AON. Based on a review of studies using TMS and other neuroscience methods, I have proposed a novel hypothetical account that describes the characteristics of action photographs that make them effective cues to social perception. This account includes predictions that can be tested experimentally.

1. Photographs of Actions: What Makes Them Special Cues to Social Perception

I will introduce action photographs (henceforth, photos) as visual cues that evoke neural activations in the action observation network (AON) of viewers and address controversial explanations of the cognitive products of these activations. In the first section, I will briefly define two concepts that play a central role in many of the studies reviewed in this paper: action representation and knowledge of actions. Then, I will give an overview of findings on the cognitive products of the activations in the AON, coming primarily from studies in which live actions or video clips of actions were used as stimuli. Next, I will address the cognitive responses to action photos and the particular social perception that they convey. In the following section, I will analyze the representational characteristics of action photos that are assumed to be most relevant to this perception. I will illustrate these characteristics with some picture examples, in which action photos have been modified so that only the picture information relevant to processing in the AON during the first 300 milliseconds after picture onset remains visible. I will conclude by addressing questions for future research.
The present review is based on the following assumptions: Action photos convey to viewers a particular social perception similar to that conveyed by observing live actions. This social perception is based on neural activities in the AON and on physiological excitation states in the body outside the brain. Within a few hundred milliseconds of processing, seeing action photos conveys to viewers specific motor and somatosensory knowledge. Only some of the components or elements of the actions depicted are relevant with regard to this knowledge. To examine these assumptions, I have reviewed studies on the AON that provide relevant information.
The AON is a network of regions in the occipito-temporal, temporal, parietal, and prefrontal cortices [1,2]. Neural responses to action photos in the AON have been observed in many studies (for references, see Table 1; this table also takes into account emotional areas or centers that are involved in processing pictured actions). The AON includes the brain regions that contain mirror neurons [1,3]. Researchers first discovered mirror neurons in the premotor cortex of macaque monkeys [4,5]. They are a type of visuomotor neuron that is activated both when monkeys see another individual performing a certain action and when they perform this action themselves. Mirror neurons have also been found in parietal regions; the different regions that contain mirror neurons together form the mirror neuron system [3]. Analogous neurons and mechanisms were subsequently also found in humans observing live or videoed actions [6]; for a review, see [7]. Mirror neurons “vicariously” build the motor activation patterns that are currently present in the motor system of the observed individual [8]. Vicarious neural activations associated with somatosensory components of actions have been observed in somatosensory cortices and the insula [9,10,11]. The AON also includes brain areas that do not contain mirror neurons but are nevertheless specifically involved in the processing of action-related visual information [10,12,13] (see also Table 1).
There is a broad consensus in the research literature that the processing of observed live actions in the AON provides observers with knowledge of what others are doing and that this is a key skill of social cognition [8,23,24,25,38,42,43]. There is, however, controversy about the properties of the knowledge conveyed by neural AON activities: Is this knowledge provided by AON activities alone, or is it dependent on conceptual and semantic processing [15,44,45,46,47,48,49]? Is it primarily related to understanding the goal of actions [3,5,15], or does it relate equally to several elements of actions, such as grips, movements of body parts, somatosensory processes, objects involved, or context [50,51,52]? A key issue is how generalized or specific the action-related information is that comprises this knowledge [7,47,50].
This question is also relevant with regard to action photos. Do the AON responses convey knowledge of the specific properties of the concrete pictured actions [2,8,13,36,53,54,55,56], or do they provide an abstract understanding based on categorization processes [3,5,15]? Decisive answers to this question come from studies that have used the technique of transcranial magnetic stimulation (TMS) to modulate the neural activity of specific brain areas (Table 2). TMS studies suggest that AON responses to action photos represent specific properties and elements of the actions depicted [36,55,56,57,58,59,60,61,62,63]. Studies in which TMS was used in conjunction with electromyography (EMG) also provide findings about the time course of AON responses to action photos (Table 2). Data on the timing of these responses are of central importance for delimiting the picture-related perceptual processes from other cognitive operations that are also related to AON activations. Such cognitive operations include, for example, drawing inferences from action-related picture information [55,64,65,66] and voluntary activities like mentalizing [64,67] and motor imagery [68,69,70].
To the best of my knowledge, no hypothetical account has been proposed to date that has identified the representational core characteristics of action photos and the specific perception they convey. Based on a review of TMS studies on AON responses as well as investigations using a wide range of neuroscientific methods, the present article intends to fill this gap. The proposed characteristics of action photos and cognitive processes correspond to predictions that can be tested experimentally.

2. Definitions of Key Terms and Concepts

In this section, I will briefly define key terms and concepts that many researchers use or refer to when studying the cognitive functions of neural activity in the AON. The first subsection describes properties of neural representations that are relevant with regard to the particular perception conveyed by action photos. The second subsection describes the knowledge of actions that is represented by fast AON responses to action photos and the key elements that this knowledge encompasses.

2.1. Neural Representations of Actions

Little is known about how motor actions are represented in the brain [78,79,80,81]. While performing a goal-directed action, pieces of the processed information are stored for a short time in the motor, somatosensory, visual, corticospinal, and muscular systems of the acting individual [79,80]. Information stored in the brain, called memory content here, is allocated to a neural structure (specific neurons, synapses, or neural circuits) through special mechanisms [82]. Allocation mechanisms determine the locations at which memory content is stored and how much storage space is allocated to it [82,83].
According to the engram theory of memory [83,84], memories are stored in the brain by means of neural engrams. An engram is a specific pattern that is permanently carried by a specific population of neurons and represents a specific memory. When such an engram is activated, the memory that is carried by it is expressed or put into effect. Memory engrams can be composed of widely distributed neural ensembles [84]. In addition, information that is being processed is often assigned to overlapping populations of neurons in the brain. Memories of experiences that share individual components with other memories are interconnected and organized within associative networks [85].
Representations of actions are presumably carried by neural engrams that span widely and are distributed in cortical and subcortical regions [78]. Cortical sensorimotor, premotor, and association regions; cerebellar sensorimotor regions; the basal ganglia; and the spinal cord are likely involved [78,79]. Regarding how generalized or specific the knowledge is that individuals associate with observed actions, it may be relevant that memory engrams are functionally heterogeneous. They contain neural ensembles for both memory discrimination and memory generalization [86]. A memory engram that represents a motor action, therefore, could include neural ensembles that support generalization operations and neural ensembles that support specification. In this case, motor and somatosensory knowledge that is expressed by neural activations in the AON would have a certain degree of specificity, which could be indicated along a continuous scale. At one end of this scale are abstract categories of actions or action elements; at the other end are rich sensorimotor memories that relate to specific action experiences.

2.2. Knowledge of Actions

The word knowledge means “to be in possession of information” [87] (p. 148). The information that is known can relate to different modes, such as concepts, categories, words, movements, emotions, somatosensory processes, or physiological excitation states [88]. Knowledge enables individuals to “act on the information known” [87] (p. 149).
The processing of visual information by the AON provides people with knowledge of actions being seen (for reviews, see [3,7]). According to one influential view, this knowledge is the result of a fast automatic “transformation” [3] (p. 655) of the received visual information into a motor representation of an action. The ability to perform this transformation has developed over the course of evolution, because it was advantageous for individuals living together in groups and supported fast adaptive social behavior. According to this view, observers come into possession of knowledge about an observed action on the basis of evolutionarily inherited brain mechanisms that work automatically. Knowledge of actions can also result from individual past experiences. In this case, the special properties of the neurons of the AON and mirror neuron systems are the result of individual associative learning [60,70,89], which allows individuals to associate an observed action with certain action-related knowledge. If people see an action that they are not familiar with, they can still recognize and understand it if it comprises components that are available in their motor or somatosensory memory [90].
The knowledge of actions comprises neural or mental [88] representations of various action components or elements. The literature on action representations suggests that recognizing and understanding observed actions involves six key elements:
  • Movements of body parts and the spatial and temporal properties of these movements, such as distance, direction or trajectory, speed, acceleration, or duration [50,91,92].
  • Internal models for the control of the muscle activities that generate the movements. Various scholars have described the control of actions by signals from the brain using motor programs [93] or on the basis of models in which individuals select motor commands for a specific context, depending on multiple internal and external factors [92]. Motor programs are representations of rules for the execution of movements, according to which the spatial and temporal activity patterns of certain muscles are organized and controlled [93]. These programs are assumed to be stored in motor brain structures in a generalized or abstract format. “Models for motor control” [94], in contrast, describe action control more in terms of adjustments to specific action contexts and courses. Individuals transform sensory information into motor commands; the resulting movements produce sensory outcomes that provide feedback for further motor control.
  • Somatosensory processes or sensations, for example, in relation to proprioception, the processing of haptic or tactile information, heat, cold, or pain [9,10,74,92,95,96]. A crucial property of the knowledge about movements, internal models, and somatosensory processes and sensations is that this knowledge includes information about changes over time and outcomes of these changes [10,56,92,94]. Individuals can use this change-related information to anticipate the immediate further course of an action that they are performing or observing [56,62,73,97]. Somatosensory anticipation plays a particularly important role in performing actions [10,79,96]. It conveys information about the immediate somatosensory consequences of movements, for example, proprioceptive or tactile stimulation.
  • Objects and contexts associated with actions [92,94]. Actions are often directed towards objects or include the use of objects, for example, food, clothing, tools, vessels with drinks, or weapons [98,99,100].
  • Knowledge of the desired outcomes of movements, that is, of action goals [91]. Knowledge of goals includes goals at different hierarchical levels. The overarching goal of an action is often referred to in the literature as the “intention” [3,50]. Goals of actions are related to motives, needs, or desires and have a certain importance or value. For this reason, mental representations of motor actions fundamentally include an emotional component [11,55,57].
  • Knowledge of the relevance or emotional value of actions or contexts of actions [11,55]. The term emotion refers to a response to an object or event that is important to individuals and requires them to prepare for an appropriate action [88,101].

3. Cognitive Products of Processing Observed Actions in the AON

Before I address findings on the specific cognitive operations that take place in viewers of action photos, I will give a brief overview of explanations of the cognitive correlates or products of the neural activities in the AON, coming primarily from studies in which live actions or video clips of actions were used as stimuli. These studies suggest five cognitive products: (1) action understanding, (2) knowledge of specific properties of the observed actions, (3) changes in motor and somatosensory excitability, (4) activation of a motivational or emotional state, and (5) experiences that are accompanied by conscious awareness.
The word “cognitive” is used in a broad sense here. It denotes processes in the nervous system that are related to the use of information for the selection of adaptive behavior or problem solving. What is generated through cognitive processes is meaning. The term “cognitive function” describes a specific contribution of neural information processing to the well-being, prosperity, survival, or reproductive success of individuals [102].

3.1. Action Understanding

Research on the neural processing of visual information about actions has focused on understanding [3,50,74,103,104]. Understanding actions primarily relates to gaining knowledge about the goals and intentions underlying the observed movements [3,5,15]. An observer, for example, sees another individual grasping an apple and understands that the individual wants to eat the apple. Regarding the timing of processing, the findings suggest that an observed grip is associated with an action goal, including information about an object involved or the action context, from around 250 ms after movement onset [46,105]. Processing in the extrastriate body area (EBA), middle temporal area (MT), and inferior parietal regions takes place in the time window of 120 to 200 ms after the movement onset [19,46,106,107,108].

3.1.1. Generalization and Categorization

In explanations of AON responses that focus on action understanding, generalization and categorization play a central role [3,5,15]. In categorization, the quantity of received information is drastically reduced. The observer assigns the visual information to a specific group of action-related objects or events, like “catch”, “grasp”, “fight”, or “ball.” Such categories correspond to quick hypotheses about the basic meanings of pictured action elements and make corresponding action-related knowledge available [15,97,109].
Fast motor categorization is presumably based on signals from the magnocellular system, a special processing pathway from the retina to the cortex [45,110,111,112]. The magnocellular system processes information that has low spatial frequencies and is wavelength-insensitive [113,114,115]. The system is highly sensitive to contrast, has a low susceptibility to visual illusions, and is relatively fast. It fulfills an important function in the rapid localization of potentially relevant objects and movements in the visual field, which enables rapid motor behavioral responses [45,116,117]. The magnocellular pathway also projects into inferior parietal regions that belong to the AON and contain visuo-somatosensory neurons. Visuo-somatosensory neurons and neural ensembles in the somatosensory cortices could establish somatosensory categorization ([36]; for a review, see [51]). Regarding the topic of this review, it is important to note that the magnocellular system is also activated by seeing static pictures [116,117].
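To make the notion of low-spatial-frequency input concrete, the following minimal Python sketch low-pass filters a grayscale image with a Gaussian blur, roughly approximating the coarse visual content that the magnocellular pathway is thought to favor, in the spirit of the spatial frequency manipulations used in studies such as [44]. This is an illustration only, not a method from the reviewed studies; the function name and the sigma value are my own assumptions, and SciPy is assumed to be available.

```python
import numpy as np
from scipy import ndimage

def low_spatial_frequencies(image, sigma=8.0):
    """Keep only the low-spatial-frequency content of a grayscale image.

    A Gaussian blur acts as a low-pass filter in the spatial frequency
    domain; a larger sigma removes more fine detail, leaving the coarse
    pattern that fast magnocellular processing is thought to rely on.
    The sigma value here is an arbitrary illustration, not a calibrated
    match to magnocellular tuning.
    """
    return ndimage.gaussian_filter(image.astype(float), sigma=sigma)

# Hypothetical example: a random array stands in for a photo; in
# practice, a grayscale action photograph would be loaded instead.
photo = np.random.rand(480, 640)
coarse_view = low_spatial_frequencies(photo, sigma=8.0)
```

Comparing AON responses to such a filtered photo with responses to the unfiltered original could, in principle, probe the magnocellular contribution suggested above.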

3.1.2. Conceptual and Semantic Processes in Action Categorization

Generalization and categorization can be achieved by the activities of AON neurons with motor properties [5]. Such categorization is supported by the organization of the motor cortex into types of movement [3,118]. In the studies included in this review, however, action categorization is linked to conceptual and semantic processes. The researchers who discovered mirror neurons associated these cells with the evolution and use of human language from their first publication onward [4]. Rizzolatti and Craighero [104] spoke of the “semantics” (p. 184) of the mirror neuron system. Action categorization is assumed to involve interactions between the AON and areas of the ventral visual stream [43,44,45,48,49,50]. These interactions run via bi-directional neural connections that give the AON access to conceptual and semantic information. The ventral visual stream, in turn, gets access to action-related information [48].
Explanations of the cognitive products of the processes in the AON or the mirror neuron system in connection with semantic processes are questionable [51]. The activation of mirror neurons based on semantic processes requires that an observed action has already been recognized when the mirror neurons start to work. The explanation of the mirror neuron system as the neural mechanism that conveys the understanding of observed actions would thereby become circular [7].
The question of whether the processing in the AON causally provides specific knowledge about observed actions that is not conveyed through conceptual or semantic processes is difficult to investigate. The processing of conceptual action-related information in the ventral visual stream also leads to activations in the AON [65,119]. In addition, the AON responses to visual action-related information overlap with processes of motor imagery [68,69,70] and mentalizing [64,67], which also involve semantic processes. One group of findings on AON responses is particularly useful in clarifying the causal role of the AON in understanding actions: the specificity of the motor and somatosensory AON responses in relation to the concrete observed actions. I will refer to this in the next subsection.

3.2. Knowledge of Specific Properties of Observed Actions

Categorization involves generalization and a massive reduction of the information that is received and processed. Neural responses in the AON presumably also reflect the formation of motor and/or somatosensory activation patterns that represent specific properties of the observed actions more comprehensively than categories [8,36,44,50,54]. Specific properties of actions, for example, relate to properties of grips, movements, somatosensory activities, body postures, or objects. To my knowledge, there has been no report of a specific neural activation pattern in the AON that exactly and comprehensively expresses an entire concrete observed action. Two sets of evidence, however, suggest that representations of specific elements of observed actions are established in the AON.
The first set is related to features and functions of the brain areas included in the AON. The inferior parietal cortex and the ventral premotor cortex contain a high proportion of visual neurons, as well as many visuomotor and visuo-somatosensory neurons [5,6,42,120]. Such neurons reflect functions in the fast, accurate, and flexible visual guidance of actions in unique environments [121] and in the localization of possibly relevant individuals, movements, or objects in the visual field, which enables adaptive motor responses [43,45,116,117].
The second set of evidence is related to findings of studies in which the neural processes in certain areas of the AON were disturbed by TMS during action observation [10,12,54,63,64,73,74,122]. In this way, lesion-like effects were generated. These effects allowed researchers to investigate the specific contributions of cortical regions to the perception and understanding of actions, as well as causal links between these regions. The impairment of processes in somatosensory cortices provided decisive information about the fundamental role of the somatosensory cortices in the perception and understanding of observed actions [10,12,54,64,74]. Notably, lesion-like effects generated by TMS impaired the perception and recognition of specific properties of postures or movements of body parts [54,63] or of properties of the objects involved in actions [74]. These effects indicate that the neural representations of action elements that are established in the AON are more specific and comprehensive than would be the case with mere categorization. The formation of such action-specific neural representations takes a certain amount of time and begins around 150 ms after stimulus onset [7,26,59,66].

3.3. Changes in Motor and Somatosensory Excitability

Many researchers have used TMS in conjunction with electromyography (EMG) to investigate the effects of action observation on the excitability of the corticospinal tract and muscles (Table 2). For a review, see also [7]. TMS, for example, has been used to stimulate the region of the primary motor cortex (M1) that is involved in preparing a specific grasping movement. This stimulation leads to action potentials along the corticospinal pathway and generates larger or smaller motor-evoked potentials (MEPs) in muscles. The amplitude of the MEPs is measured transcutaneously using EMG and corresponds to the level of motor excitability [7].
If the MEPs are larger than in the baseline condition, this indicates an excitatory process; action observation, in this case, results in a “facilitation” effect [72]. A modulation of excitability is muscle-specific when the MEPs recorded from a muscle involved in the observed action change during action observation compared with the MEPs recorded during a baseline condition. In a systematic review of studies in which modulations of corticospinal excitability were elicited by single-pulse TMS, Naish and colleagues [7] found clear evidence of muscle-specific modulation in 16 of 24 studies. According to Naish et al., muscle specificity occurs from around 200 ms after the onset of observed movements; changes in excitability that occur earlier are not muscle-specific and are likely related to motivated visual selection or attention. When seeing emotionally charged actions, muscle specificity may occur earlier. In a study using TMS in conjunction with EMG, Borgomaneri and colleagues [55] found, in viewers of fearful body expressions, a selective reduction in excitability in a hand muscle involved in grasping. This reduction was measured 70–90 ms after stimulus onset and reflects a muscle-specific modulation of motor excitability during the processing of complex visual input in a very early time window.
An increase in muscle-specific excitability during action observation reflects cognitive functions related to the preparation or effective execution of observed movements, imitation [3,72], or empathy [58,77]. If the MEPs are smaller than in the baseline condition, the observation of the action is associated with an “inhibition” effect [7]. Inhibitory muscle-specific activities play a role in contexts in which it is better not to move [7,55]. This can be the case when it is advantageous for the observer to suppress an approach tendency, involuntary behavioral mimicry, or the imitation of an observed action.
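As a minimal illustration of how facilitation and inhibition can be quantified from such recordings, the following Python sketch computes a baseline-normalized MEP modulation index per muscle. All function and variable names and the amplitude values are hypothetical, chosen for illustration; they are not data or code from the studies reviewed.

```python
import statistics

def mep_modulation(observation_meps, baseline_meps):
    """Baseline-normalized modulation index for one muscle.

    Values > 1.0 indicate facilitation (larger MEPs during action
    observation than at baseline); values < 1.0 indicate inhibition.
    Inputs are peak-to-peak MEP amplitudes, e.g., in millivolts.
    """
    return statistics.mean(observation_meps) / statistics.mean(baseline_meps)

# Hypothetical peak-to-peak amplitudes (mV) recorded across TMS trials.
baseline = [1.10, 0.95, 1.05, 1.00, 0.90]
involved_muscle = [1.45, 1.30, 1.50, 1.40, 1.35]    # muscle used in the observed grasp
uninvolved_muscle = [1.05, 0.98, 1.02, 0.95, 1.00]  # control muscle

for label, trials in [("involved muscle", involved_muscle),
                      ("uninvolved muscle", uninvolved_muscle)]:
    index = mep_modulation(trials, baseline)
    direction = "facilitation" if index > 1.0 else "inhibition"
    print(f"{label}: index = {index:.2f} ({direction})")
```

A muscle-specific effect would appear as an index clearly above or below 1.0 for the involved muscle only, while the uninvolved muscle stays near 1.0.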
Facilitation effects occur not only in muscles but also in the muscle spindles, the proprioceptive receptors that would be involved in actually performing the action [71]. In addition, seeing touch in connection with actions may modulate the excitability of skin receptors through descending projection trajectories from the somatosensory cortex [36]. Via upstream effects, the cortical representation of the proprioceptive elements of an observed action could thus have a physiological basis in the musculature and skin of the observer. Several researchers have reported neural downstream projections into organs outside the brain that are related to the ability to react quickly and appropriately to action-related stimuli [55,56,62,73,123]. The AON may also code for the autonomic correlates of observed actions [124]. Observed actions can affect viewers’ cardiac activity [101,123,125,126,127]. Pictured physical exertion can be as effective as, or even more effective than, the emotional value of a depicted action [128]. There are many findings on genital responses that are caused by pictures of sexual actions, for example, hemodynamic changes in the vaginal epithelium, changes in the skin temperature of the labia minora, or changes in penile erection [31,32,129].

3.4. Activation of a Motivational or Emotional State

Wanting to achieve a goal through body movements includes a motivational component. Facial expressions, as well as body and hand postures, may also provide emotional information to viewers [11,55,57,106,107]. In the brain responses of viewers, there are interactions between the processing of the motor, somatosensory, and emotional elements of observed behaviors. For a review, see [14,94]. The neural basis of the emotional processes involves the regions of the AON, as well as the amygdala, orbitofrontal cortex (OFC), insula, and anterior cingulate cortex (ACC) [130] (see also Table 1).
Actions with a higher emotional value evoke stronger responses in the AON of observers than actions with a smaller emotional value [55,57,58,59,61,131,132]. The emotional value may be related to both unpleasant states or events, like fear or anger, and positive or pleasant events, like happiness [58,61,77,131]. The processing of the emotional value of an observed action is closely related to the social functions of the AON [23,25,38,130]. These functions are related, for example, to the activation of a physical readiness to react appropriately to an observed behavior [55,95] or the activation of a state that reflects the emotional state of the observed individual [11,130].

3.5. Experiences That Are Accompanied by Conscious Awareness

Motor and somatosensory responses to observed actions in the AON and at the corticospinal and muscular level, as well as emotional reactions, can result in experiences or feelings (of movement, exertion, touch, pain, warmth, cold, threat, or pleasure) that are accompanied by awareness. For a review, see [97]. Experiences or feelings arise in a gradual transition between non-conscious and conscious processing [64,133,134]. Conscious experiences related to observed actions are mainly based on somatosensory and emotional processes [8,11,134,135,136]. Participating somatosensory structures that evoke conscious experiences (secondary somatosensory cortex and insula) interact with motor structures, the activities of which do not in themselves correlate with conscious sensation [134,137]. Sensory, motor, emotional, and motivational information is integrated through the insula, anterior cingulate cortex (ACC), and orbitofrontal cortex (OFC) [11,135]. Together with the amygdala, these structures contribute to conscious, emotionally charged experiences of pictured actions [136]. Such experiences or feelings probably play a central role in the attractiveness that video clips or photos of outdoor activities, sports, fighting, or sex have for many people [102].

4. Cognitive Products of Neural Responses to Action Photos in the AON

The AON has evolved as a brain system that processes visual information related to movements of other individuals. Photos are static pictures. Nevertheless, they evoke neural responses in the AON similar to those evoked by live actions or video clips of actions (Table 1 and Table 2). These responses can be related to different cognitive activities.
Photos of events can generate retinal images similar to those generated by events taking place in the real world. Events depicted in photos share a large number of visual stimulus features with the relevant events that took place in real life. Photos are realistic images, yet they represent only a fraction of the multimodal sensory information received by people who observed a real-life event that was photographed. Recording a photo involves stripping away information, that is, abstraction, but it also adds two significant properties to the events depicted: duration and interpretation. In photos, changing visual patterns have been given a duration. A depicted event that, when it actually took place, lasted only 1/1000 of a second can be viewed for any length of time. A photo represents a complex dynamic event through a single image. Viewers of the photo see the visual appearance that the real event conveyed at a certain moment, at a certain point in space, and limited by a certain frame. The meaning suggested by this temporal and spatial extract from the overall information stands for the entire complex event. Photos allow viewers to gradually “unpack” the information contained in such images and to reconstruct the depicted event in their imagination. In this sense, events that are depicted in photos can be processed and understood through different cognitive activities: rapid perceptual processing, step-by-step conceptual and semantic decoding, or imagery. All of these cognitive activities can be linked to neural activations in the AON of people who are looking at action photos. If action photos convey a social perception similar to seeing live actions, the AON responses to photos must primarily be perceptual processes. In the following, I will briefly discuss the different possible cognitive activities with which AON responses to photos can be related.
Viewers can recognize action photos by associating pictured elements with action-related concepts and integrating these concepts into a representation of an action. The processing of the motor and somatosensory aspects of the concepts involved induces top-down activations in the AON [64,65,119]. In this case, recognizing and understanding action photos would be similar to reading sentences or texts [138]. Interpreting the recognition and understanding of action photos as a reading-like cognitive operation, however, is inconsistent with the findings on the specificity and timing of the AON responses to photos. A photo of a boy jumping to catch a ball (as shown in the middle panel of Figure 1) contains a large amount of information that is specific to the boy, his movements, the ball, and the conditions under which the action is performed. The magnocellular system and neurons in the AON of viewers extract and process information relating to the face, hands, body parts, and movement from the entire photo. The findings reviewed so far suggest that a mental representation of the concrete action depicted is established in viewers within 300 ms after picture onset. This representation is fundamentally expressed by visual, motor, and somatosensory neural activation patterns. In order to understand the sentence “the boy jumps to catch the ball,” each individual word must be heard or read and related to the following words. Silently reading the sentence takes about 1.5 s at an average reading speed [139]. A mental representation that includes the information of the entire sentence may therefore be formed only about 1.5 s after reading begins. This representation is still related to highly generalized action-related information.
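The 1.5 s estimate is consistent with simple arithmetic. Assuming, for illustration, a silent reading rate of roughly 280 words per minute (the exact rate underlying [139] may differ), the seven-word sentence takes

$$ t \approx \frac{7\ \text{words}}{280\ \text{words/min}} \times 60\ \tfrac{\text{s}}{\text{min}} = 1.5\ \text{s}. $$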
Another possible reason for AON responses to action photos is that these responses are related to voluntary cognitive operations that occur after the picture has been recognized on the basis of processing in the ventral visual stream. Such cognitive operations may be motor imagery [68,69,70], mentalizing [64,67], or verbalizations based on semantic representations [3,65,119]. These operations are also associated with neural activities in the AON, but if the AON responses to photos were causally related to these operations, then the responses would occur in a later processing time window after picture onset. The responses that occur in the time window up to about 300 ms after picture onset are fundamentally related to the processing of the incoming visual information [46,53,66,70,76,105,141].
The findings on AON responses to specific properties of actions depicted in photos in the early processing time window suggest that these responses are related to perceptual processes [46,53,66,70,76,105,141]. The visual information provided by an action photo is processed by visual cortical areas as incomplete body-related information (in the EBA) and movement-related information (in MT) (Table 1); the EBA and MT then forward their processing output to areas of the AON. Photos provide information that is incomplete compared to live actions. Urgesi and colleagues [56] pointed out that the visual information that people receive from live actions is also often incomplete in natural environments. Moving body parts or objects may be obscured, and obstacles can obstruct the view. For this reason, brain mechanisms have evolved that complete fragmentary movement cues [51,142]. The neural processing of action photos makes use of such mechanisms and generates cognitive products similar to those of the AON responses to observed live or videoed actions [15]. Figure 2 shows the suggested relationships between the representational characteristics of action photos and the outcomes of their cognitive processing.

5. Specific Representational Characteristics of Action Photos

The present review suggests six major representational characteristics of action photos, which influence the strength of the motor and somatosensory neural activations in the AON and the associated corticospinal downstream projections.

5.1. Clarity of the Pictured Movements

A clearly recognizable visual representation of movement is difficult to achieve with static pictures [138,143]. In the case of sharp, detailed photos, viewers often cannot recognize whether the depicted individuals or body parts were moving or in a static pose. Motion blur, i.e., the blurred image of a body part or object along its movement trajectory, provides movement-related information, but it does not provide any information about the direction of the movement, and it reduces the clarity of the picture [143,144]. Certain abstract visual patterns, in connection with expansion or rotation, may also suggest movement [142]. A high degree of clarity in the depiction of movement is achieved when photos provide suggestive clues to the positions that the pictured body parts occupied immediately before and immediately after the moment the photo was taken [22]. Characteristic views of actions support the quick recognition of the immediately following course of the action [109].

5.2. Visibility of Muscle Activities and Skin

Pictured movements are generated by muscle activities. These activities, muscle contractions, or deformations of the skin can be represented visibly in photos [70]. Muscle activities involve some level of exertion. If greater exertion is depicted, this may elicit stronger reactions in the AON than lower exertion [18].

5.3. Visibility of Somatosensory Operations or Sensations

Skin, deformations of skin, muscle contractions, body or hand postures, and facial expressions provide clues about the somatosensory processes involved in an action. The somatic component of the activations that photos evoke in viewers can relate, for example, to touch [36], pain [29,37,39], or interoception [27,32]. The bottom panel of Figure 1 shows an interaction in which somatosensory processing obviously plays a central role. The scuffling boys have their eyes closed to protect them from injury. Their perception and action control are based primarily on information from somatosensory receptors.

5.4. Clarity of the Involved Object or Context

The meanings of pictured graspable, familiar objects are presumably also processed in the areas included in Table 1 [98,99,100,145]. In the case of tools, in addition to familiarity with use, elongation may be a decisive property of form with regard to processing in the areas mentioned [98,100]. The meanings of objects, the recognition of which requires complex knowledge of abstract concepts or sociocultural practices (as is the case with medical syringes, bank notes, or hand mixers), are processed in the ventral visual stream [121]. Recognizing the meanings of observed or pictured body movements or somatic sensations often requires the inclusion of information from the environment of an action [44,46,105,146,147]. The action of a woman with the palm of her hand on the cheek of a boy, for example, is made understandable by a bottle of sunscreen on a table next to her. Integrating information about movements of body parts with contextual information requires substantial involvement of conceptual processing and occurs from around 250 ms after picture onset [46,105].

5.5. Clarity of the Action Goal

If viewers easily recognize the pictured body parts, their movements, and interactions with another individual or object, they can quickly associate the pictured information with an action goal [52,148]: “the child wants to eat the berry”, “the boy wants to catch the ball”, or “the boys want to push each other down” (Figure 1). Viewers do not have to analyze complex contextual information to ascribe a reason to each of these actions. The association of natural, ambiguous scenes with more complex action goals takes longer and occurs from 250 ms after picture onset [46,105].
Recognizing the goal of a pictured action implies anticipating the final state of the action [91]. Photos that clearly represent movements and the somatosensory processes involved in an action contain more or less predictive information about the achievement or non-achievement of the goal of a pictured action [56]. The point in time at which an action is depicted during its course influences the strength of the reactions in the AON. Pictured actions that have not yet reached their final state evoke stronger activations than completed actions [56].
The goal of an action does not have to be the most important information in a photo. What is most relevant to viewers can also relate to special muscle activities [52], intense exertion [18], somatosensory sensations [29], or the emotional value of bodily activities [32,55,57,58,59]. Understanding the goal pursued by a person slicing a cucumber is probably not a primary processing goal when this person is obviously about to cut their thumb [39] (sample picture in Figure 1).

5.6. Emotional Value of the Action or Sensation

Facial expressions [149,150], visible skin [151], and expressive behavior [55,108,152] are effective emotional stimuli [101]. Particularly effective with regard to the strength of the reactions in the AON are photos in which viewers see actions or sensations that would require them to react quickly if they actually perceived the events in their environment [153]. For example, people who see two boys who might get injured while fighting in rough-and-tumble play, as shown in the bottom panel of Figure 1, may want to step in and separate the two. Seeing a child grasping a sweet berry to eat would allow observers to remain passive.
Other factors may also influence the emotional value that viewers assign to pictured actions, such as the view from which an action is represented. Viewers may find an action seen from the first-person perspective more relevant than the same action from the third-person perspective [13,29,34,39,53,54,141,154,155,156]. It is unclear, however, whether this is actually the case, because it is not known how abstract or specific the cognitive products are that correlate with the motor and somatosensory activations in the AON [154]. For the same reason, it is still unclear whether the distance from which an action is pictured in a photo modulates the reactions in the AON.

5.7. Questions for Future Research

Descriptions of the specific characteristics of action photos, the brain responses they evoke, and the cognitive abilities that such pictures convey are highly speculative at the present time. There is still little knowledge about the properties and processing of action representations in the human brain. Only a few data on the responses of individual neurons to action observation in humans are available from single-unit recordings. There are still no data from experiments in which optogenetics was used within the human brain to research the processing of specific action-related representations by neural ensembles [11,84]. TMS combined with EMG, and combinations of neuromodulation techniques with non-invasive functional neuroimaging techniques, make it possible to obtain useful evidence on a number of open questions:
What are the properties of modulations in corticospinal excitability in people who see a photo of an action, compared to modulations in corticospinal excitability in people who observe the same or a similar action while it is actually being performed? What are the time courses of the modulations? Are there similar excitatory or inhibitory effects on the motor activity in both conditions? Is there a similar muscle specificity?
Do visual, motor, and somatosensory neurons in the AON independently categorize elements of complex motor actions depicted in photos taken in natural situations? Do such categorization processes persist if the processing of the visual input through the ventral visual stream is perturbed by the use of TMS or transcranial direct current stimulation (tDCS)?
Does looking at photos activate neural ensembles in viewers that represent specific properties of the concrete actions depicted more comprehensively than through categorization? If so, how high is the degree of specificity that is achieved in the first 150 ms after picture onset, and how high is it after 300 ms? Do the proposed six representational core characteristics of action photos actually correspond to the elements that most strongly influence the strength of the activations in the AON? Is this list complete?
Does the amount of detail in action photos influence the strength of the neural responses in the AON in the first 150 ms after picture onset? Is there an optimal amount of visual detail in action photos for achieving maximum activation of the AON within the processing time window of up to 300 ms?

6. Conclusions

I reviewed studies that investigated AON responses to pictured actions to examine the assumption that action photos convey special socio-cognitive skills to viewers. Findings from studies that used TMS and EMG, as well as a wide variety of other investigation techniques, suggest that seeing action photos conveys processes of social perception similar to those conveyed by observing live actions. Six representational characteristics of action photos are most relevant with regard to this particular social perception: the clarity of the pictured movements, the visibility of muscle activities and skin, the visibility of somatosensory activities or sensations, the clarity of the involved object or context, the clarity of the action goal, and the emotional value of the pictured action or sensation. People generally use photos for social purposes, such as relating to other people, animals, plants, things, and places or making sense of a complex social world [102]. Viewing action photos enables people to relive what others have done and felt and to prepare their own motor behaviors. These cognitive abilities are based on rapid activations of visual, motor, and somatosensory neurons; on activations in brain structures that are involved in processing emotions; and on specific modulations of excitation states in the body outside the brain.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

I thank Jenna Hicken for personal assistance in translating and editing the manuscript.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Cross, E.S.; Kraemer, D.J.M.; Hamilton, A.; Kelley, W.M.; Grafton, S.T. Sensitivity of the Action Observation Network to Physical and Observational Learning. Cereb. Cortex 2009, 19, 315–326. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Feurra, M.; Blagovechtchenski, E.; Nikulin, V.V.; Nazarova, M.; Lebedeva, A.; Pozdeeva, D.; Yurevich, M.; Rossi, S. State-Dependent Effects of Transcranial Oscillatory Currents on the Motor System during Action Observation. Sci. Rep. 2019, 9, 1–11. [Google Scholar] [CrossRef] [Green Version]
  3. Rizzolatti, G.; Cattaneo, L.; Destro, M.F.; Rozzi, S. Cortical Mechanisms Underlying the Organization of Goal-Directed Actions and Mirror Neuron-Based Action Understanding. Physiol. Rev. 2014, 94, 655–706. [Google Scholar] [CrossRef] [PubMed]
  4. Di Pellegrino, G.; Fadiga, L.; Fogassi, L.; Gallese, V.; Rizzolatti, G. Understanding motor events: A neurophysiological study. Exp. Brain Res. 1992, 91, 176–180. [Google Scholar] [CrossRef]
  5. Gallese, V.; Fadiga, L.; Fogassi, L.; Rizzolatti, G. Action recognition in the premotor cortex. Brain 1996, 119, 593–610. [Google Scholar] [CrossRef] [Green Version]
  6. Mukamel, R.; Ekstrom, A.; Kaplan, J.; Iacoboni, M.; Fried, I. Single-Neuron Responses in Humans during Execution and Observation of Actions. Curr. Biol. 2010, 20, 750–756. [Google Scholar] [CrossRef] [Green Version]
  7. Naish, K.R.; Houston-Price, C.; Bremner, A.J.; Holmes, N.P. Effects of action observation on corticospinal excitability: Muscle specificity, direction, and timing of the mirror response. Neuropsychologia 2014, 64, 331–348. [Google Scholar] [CrossRef]
  8. Keysers, C.; Kaas, J.H.; Gazzola, V. Somatosensation in social perception. Nat. Rev. Neurosci. 2010, 11, 417–428. [Google Scholar] [CrossRef]
  9. Gazzola, V.; Keysers, C. The Observation and Execution of Actions Share Motor and Somatosensory Voxels in all Tested Subjects: Single-Subject Analyses of Unsmoothed fMRI Data. Cereb. Cortex 2009, 19, 1239–1255. [Google Scholar] [CrossRef] [Green Version]
  10. Valchev, N.; Gazzola, V.; Avenanti, A.; Keysers, C. Primary somatosensory contribution to action observation brain activity—combining fMRI and cTBS. Soc. Cogn. Affect. Neurosci. 2016, 11, 1205–1217. [Google Scholar] [CrossRef]
  11. Keysers, C.; Paracampo, R.; Gazzola, V. What neuromodulation and lesion studies tell us about the function of the mirror neuron system and embodied cognition. Curr. Opin. Psychol. 2018, 24, 35–40. [Google Scholar] [CrossRef] [Green Version]
  12. Jacquet, P.O.; Avenanti, A. Perturbing the Action Observation Network During Perception and Categorization of Actions’ Goals and Grips: State-Dependency and Virtual Lesion TMS Effects. Cereb. Cortex 2015, 25, 598–608. [Google Scholar] [CrossRef] [Green Version]
  13. Neal, A.; Kilner, J.M. What is simulated in the action observation network when we observe actions? Eur. J. Neurosci. 2010, 32, 1765–1770. [Google Scholar] [CrossRef] [Green Version]
  14. Downing, P.E.; Peelen, M.; Wiggett, A.J.; Tew, B.D. The role of the extrastriate body area in action perception. Soc. Neurosci. 2006, 1, 52–62. [Google Scholar] [CrossRef]
  15. Hafri, A.; Trueswell, J.C.; Epstein, R.A. Neural Representations of Observed Actions Generalize across Static and Dynamic Visual Input. J. Neurosci. 2017, 37, 3056–3071. [Google Scholar] [CrossRef]
  16. Lu, Z.; Li, X.; Meng, M. Encodings of implied motion for animate and inanimate object categories in the two visual pathways. NeuroImage 2016, 125, 668–680. [Google Scholar] [CrossRef]
  17. O’Toole, A.J.; Natu, V.; An, X.; Rice, A.; Ryland, J.; Phillips, P.J. The neural representation of faces and bodies in motion and at rest. NeuroImage 2014, 91, 1–11. [Google Scholar] [CrossRef] [PubMed]
  18. Proverbio, A.M.; Riva, F.; Zani, A. Observation of Static Pictures of Dynamic Actions Enhances the Activity of Movement-Related Brain Areas. PLoS ONE 2009, 4, e5389. [Google Scholar] [CrossRef] [PubMed]
  19. Thierry, G.; Pegna, A.; Dodds, C.; Roberts, M.; Basan, S.; Downing, P. An event-related potential component sensitive to images of the human body. NeuroImage 2006, 32, 871–879. [Google Scholar] [CrossRef] [PubMed]
  20. Hermsdörfer, J.; Goldenberg, G.; Wachsmuth, C.; Conrad, B.; Baumannb, A.-; Bartenstein, P.; Schwaiger, M.; Boecker, H. Cortical Correlates of Gesture Processing: Clues to the Cerebral Mechanisms Underlying Apraxia during the Imitation of Meaningless Gestures. NeuroImage 2001, 14, 149–161. [Google Scholar] [CrossRef]
  21. Kolesar, T.A.; Kornelsen, J.; Smith, S.D. Separating neural activity associated with emotion and implied motion: An fMRI study. Emotion 2017, 17, 131–140. [Google Scholar] [CrossRef]
  22. Kourtzi, Z.; Kanwisher, N. Activation in Human MT/MST by Static Images with Implied Motion. J. Cogn. Neurosci. 2000, 12, 48–55. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Arioli, M.; Perani, D.; Cappa, S.; Proverbio, A.M.; Zani, A.; Falini, A.; Canessa, N. Affective and cooperative social interactions modulate effective connectivity within and between the mirror and mentalizing systems. Hum. Brain Mapp. 2018, 39, 1412–1427. [Google Scholar] [CrossRef] [PubMed]
24. Canessa, N.; Alemanno, F.; Riva, F.; Zani, A.; Proverbio, A.M.; Mannara, N.; Perani, D.; Cappa, S. The Neural Bases of Social Intention Understanding: The Role of Interaction Goals. PLoS ONE 2012, 7, e42347.
25. Pierno, A.C.; Becchio, C.; Turella, L.; Tubaldi, F.; Castiello, U. Observing social interactions: The effect of gaze. Soc. Neurosci. 2008, 3, 51–59.
26. Proverbio, A.M.; Riva, F.; Paganelli, L.; Cappa, S.; Canessa, N.; Perani, D.; Zani, A. Neural Coding of Cooperative vs. Affective Human Interactions: 150 ms to Code the Action’s Purpose. PLoS ONE 2011, 6, e22026.
27. Bühler, M.; Vollstädt-Klein, S.; Klemen, J.; Smolka, M.N. Does erotic stimulus presentation design affect brain activation patterns? Event-related vs. blocked fMRI designs. Behav. Brain Funct. 2008, 4, 30.
28. Ferretti, A.; Caulo, M.; Del Gratta, C.; Di Matteo, R.; Merla, A.; Montorsi, F.; Pizzella, V.; Pompa, P.; Rigatti, P.; Rossini, P.M.; et al. Dynamics of male sexual arousal: Distinct components of brain activation revealed by fMRI. NeuroImage 2005, 26, 1086–1096.
29. Gu, X.; Han, S. Attention and reality constraints on the neural processes of empathy for pain. NeuroImage 2007, 36, 256–267.
30. Ogawa, K.; Inui, T. Neural representation of observed actions in the parietal and premotor cortex. NeuroImage 2011, 56, 728–735.
31. Redouté, J.; Stoléru, S.; Grégoire, M.-C.; Costes, N.; Cinotti, L.; Lavenne, F.; Le Bars, D.; Forest, M.G.; Pujol, J.-F. Brain processing of visual sexual stimuli in human males. Hum. Brain Mapp. 2000, 11, 162–177.
32. Wehrum, S.; Klucken, T.; Kagerer, S.; Walter, B.; Hermann, A.; Vaitl, D.; Stark, R. Gender Commonalities and Differences in the Neural Processing of Visual Sexual Stimuli. J. Sex. Med. 2013, 10, 1328–1342.
33. Johnson-Frey, S.H.; Maloof, F.R.; Newman-Norlund, R.; Farrer, C.; Inati, S.; Grafton, S.T. Actions or Hand-Object Interactions? Human Inferior Frontal Cortex and Action Observation. Neuron 2003, 39, 1053–1058.
34. Mazzarella, E.; Ramsey, R.; Conson, M.; Hamilton, A. Brain systems for visual perspective taking and action perception. Soc. Neurosci. 2013, 8, 248–267.
35. Watson, C.E.; Cardillo, E.R.; Bromberger, B.; Chatterjee, A. The specificity of action knowledge in sensory and motor systems. Front. Psychol. 2014, 5, 494.
36. Bolognini, N.; Rossetti, A.; Convento, S.; Vallar, G. Understanding Others’ Feelings: The Role of the Right Primary Somatosensory Cortex in Encoding the Affective Valence of Others’ Touch. J. Neurosci. 2013, 33, 4201–4205.
37. Cheng, Y.; Yang, C.-Y.; Lin, C.-P.; Lee, P.-L.; Decety, J. The perception of pain in others suppresses somatosensory oscillations: A magnetoencephalography study. NeuroImage 2008, 40, 1833–1840.
38. Deuse, L.; Rademacher, L.; Winkler, L.; Schultz, R.; Gründer, G.; Lammertz, S. Neural correlates of naturalistic social cognition: Brain-behavior relationships in healthy adults. Soc. Cogn. Affect. Neurosci. 2016, 11, 1741–1751.
39. Jackson, P.L.; Meltzoff, A.N.; Decety, J. How do we perceive the pain of others? A window into the neural processes involved in empathy. NeuroImage 2005, 24, 771–779.
40. Hadjikhani, N.; de Gelder, B. Seeing Fearful Body Expressions Activates the Fusiform Cortex and Amygdala. Curr. Biol. 2003, 13, 2201–2205.
41. Poyo Solanas, M.P.; Zhan, M.; Vaessen, M.; Hortensius, R.; Engelen, T.; De Gelder, B. Looking at the face and seeing the whole body. Neural basis of combined face and body expressions. Soc. Cogn. Affect. Neurosci. 2018, 13, 135–144.
42. Keysers, C.; Perrett, D. Demystifying social cognition: A Hebbian perspective. Trends Cogn. Sci. 2004, 8, 501–507.
43. Pitcher, D.; Ungerleider, L.G. Evidence for a Third Visual Pathway Specialized for Social Perception. Trends Cogn. Sci. 2021, 25, 100–110.
44. Amoruso, L.; Finisguerra, A.; Urgesi, C. Spatial frequency tuning of motor responses reveals differential contribution of dorsal and ventral systems to action comprehension. Proc. Natl. Acad. Sci. USA 2020, 117, 13151–13161.
45. Bar, M.; Kassam, K.S.; Ghuman, A.; Boshyan, J.; Schmid, A.M.; Dale, A.M.; Hamalainen, M.S.; Marinkovic, K.; Schacter, D.; Rosen, B.R.; et al. Top-down facilitation of visual recognition. Proc. Natl. Acad. Sci. USA 2006, 103, 449–454.
46. Decroix, J.; Roger, C.; Kalénine, S. Neural dynamics of grip and goal integration during the processing of others’ actions with objects: An ERP study. Sci. Rep. 2020, 10, 1–11.
47. Hickok, G. Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans. J. Cogn. Neurosci. 2009, 21, 1229–1243.
48. Hutchison, R.M.; Gallivan, J.P. Functional coupling between frontoparietal and occipitotemporal pathways during action and perception. Cortex 2018, 98, 8–27.
49. Wurm, M.F.; Caramazza, A. Distinct roles of temporal and frontoparietal cortex in representing actions across vision and language. Nat. Commun. 2019, 10, 1–10.
50. Kilner, J.M. More than one pathway to action understanding. Trends Cogn. Sci. 2011, 15, 352–357.
51. Kislinger, L. Photographs Beyond Concepts: Access to Actions and Sensations. Rev. Gen. Psychol. 2021, 25, 44–59.
52. Mc Cabe, S.I.; Villalta, J.I.; Saunier, G.; Grafton, S.T.; Della-Maggiore, V. The Relative Influence of Goal and Kinematics on Corticospinal Excitability Depends on the Information Provided to the Observer. Cereb. Cortex 2015, 25, 2229–2237.
53. Angelini, M.; Fabbri-Destro, M.; Lopomo, N.F.; Gobbo, M.; Rizzolatti, G.; Avanzini, P. Perspective-dependent reactivity of sensorimotor mu rhythm in alpha and beta ranges during action observation: An EEG study. Sci. Rep. 2018, 8, 1–11.
  55. Borgomaneri, S.; Vitale, F.; Avenanti, A. Early changes in corticospinal excitability when seeing fearful body expressions. Sci. Rep. 2015, 5, srep14122. [Google Scholar] [CrossRef]
  56. Urgesi, C.; Maieron, M.; Avenanti, A.; Tidoni, E.; Fabbro, F.; Aglioti, S.M. Simulating the Future of Actions in the Human Corticospinal System. Cereb. Cortex 2010, 20, 2511–2521. [Google Scholar] [CrossRef] [Green Version]
  57. Borgomaneri, S.; Gazzola, V.; Avenanti, A. Motor mapping of implied actions during perception of emotional body language. Brain Stimul. 2012, 5, 70–76. [Google Scholar] [CrossRef] [PubMed]
  58. Borgomaneri, S.; Gazzola, V.; Avenanti, A. Temporal dynamics of motor cortex excitability during perception of natural emotional scenes. Soc. Cogn. Affect. Neurosci. 2014, 9, 1451–1457. [Google Scholar] [CrossRef] [Green Version]
  59. Borgomaneri, S.; Gazzola, V.; Avenanti, A. Transcranial magnetic stimulation reveals two functionally distinct stages of motor cortex involvement during perception of emotional body language. Brain Struct. Funct. 2015, 220, 2765–2781. [Google Scholar] [CrossRef] [Green Version]
  60. Catmur, C.; Mars, R.B.; Rushworth, M.F.; Heyes, C. Making Mirrors: Premotor Cortex Stimulation Enhances Mirror and Counter-mirror Motor Facilitation. J. Cogn. Neurosci. 2011, 23, 2352–2362. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  61. Hajcak, G.; Molnar, C.; George, M.S.; Bolger, K.; Koola, J.; Nahas, Z. Emotion facilitates action: A transcranial magnetic stimulation study of motor cortex excitability during picture viewing. Psychophysiology 2007, 44, 91–97. [Google Scholar] [CrossRef] [PubMed]
  62. Urgesi, C.; Moro, V.; Candidi, M.; Aglioti, S.M. Mapping Implied Body Actions in the Human Motor System. J. Neurosci. 2006, 26, 7942–7949. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  63. Urgesi, C.; Candidi, M.; Ionta, S.; Aglioti, S.M. Representation of body identity and body actions in extrastriate body area and ventral premotor cortex. Nat. Neurosci. 2007, 10, 30–31. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  64. Avanzini, P.; Destro, M.F.; Campi, C.; Pascarella, A.; Barchiesi, G.; Cattaneo, L.; Rizzolatti, G. Spatiotemporal dynamics in understanding hand--object interactions. Proc. Natl. Acad. Sci. USA 2013, 110, 15878–15885. [Google Scholar] [CrossRef] [Green Version]
  65. Hauk, O.; Shtyrov, Y.; Pulvermuller, F. The time course of action and action-word comprehension in the human brain as revealed by neurophysiology. J. Physiol. 2008, 102, 50–58. [Google Scholar] [CrossRef] [Green Version]
66. Ubaldi, S.; Barchiesi, G.; Cattaneo, L. Bottom-Up and Top-Down Visuomotor Responses to Action Observation. Cereb. Cortex 2015, 25, 1032–1041.
67. Catmur, C. Understanding intentions from actions: Direct perception, inference, and the roles of mirror and mentalizing systems. Conscious. Cogn. 2015, 36, 426–433.
68. Fazekas, P.; Nanay, B.; Pearson, J. Offline perception: An introduction. Philos. Trans. R. Soc. B Biol. Sci. 2021, 376, 20190686.
69. Koenig-Robert, R.; Pearson, J. Why do imagery and perception look and feel so different? Philos. Trans. R. Soc. B Biol. Sci. 2021, 376, 20190703.
70. Orlandi, A.; Arno, E.; Proverbio, A.M. The Effect of Expertise on Kinesthetic Motor Imagery of Complex Actions. Brain Topogr. 2020, 33, 238–254.
71. Urgesi, C.; Candidi, M.; Fabbro, F.; Romani, M.; Aglioti, S.M. Motor facilitation during action observation: Topographic mapping of the target muscle and influence of the onlooker’s posture. Eur. J. Neurosci. 2006, 23, 2522–2530.
72. Fadiga, L.; Fogassi, L.; Pavesi, G.; Rizzolatti, G. Motor facilitation during action observation: A magnetic stimulation study. J. Neurophysiol. 1995, 73, 2608–2611.
73. Avenanti, A.; Paracampo, R.; Annella, L.; Tidoni, E.; Aglioti, S.M. Boosting and Decreasing Action Prediction Abilities Through Excitatory and Inhibitory tDCS of Inferior Frontal Cortex. Cereb. Cortex 2018, 28, 1282–1296.
74. Valchev, N.; Tidoni, E.; Hamilton, A.F.D.C.; Gazzola, V.; Avenanti, A. Primary somatosensory cortex necessary for the perception of weight from other people’s action: A continuous theta-burst TMS experiment. NeuroImage 2017, 152, 195–206.
75. Avikainen, S.; Forss, N.; Hari, R. Modulated Activation of the Human SI and SII Cortices during Observation of Hand Actions. NeuroImage 2002, 15, 640–646.
76. Barchiesi, G.; Cattaneo, L. Early and late motor responses to action observation. Soc. Cogn. Affect. Neurosci. 2013, 8, 711–719.
77. Van Loon, A.M.; van den Wildenberg, W.P.; Van Stegeren, A.H.; Ridderinkhof, K.R.; Hajcak, G. Emotional stimuli modulate readiness for action: A transcranial magnetic stimulation study. Cogn. Affect. Behav. Neurosci. 2010, 10, 174–181.
78. Berlot, E.; Popp, N.J.; Diedrichsen, J. In search of the engram, 2017. Curr. Opin. Behav. Sci. 2018, 20, 56–60.
79. Hamano, Y.H.; Sugawara, S.K.; Yoshimoto, T.; Sadato, N. The motor engram as a dynamic change of the cortical network during early sequence learning: An fMRI study. Neurosci. Res. 2020, 153, 27–39.
80. Huber, D.; Gutnisky, D.; Peron, S.; O’Connor, D.H.; Wiegert, J.S.; Tian, L.; Oertner, T.; Looger, L.L.; Svoboda, K. Multiple dynamic representations in the motor cortex during sensorimotor learning. Nature 2012, 484, 473–478.
81. Schellekens, W.; Petridou, N.; Ramsey, N.F. Detailed somatotopy in primary motor and somatosensory cortex revealed by Gaussian population receptive fields. NeuroImage 2018, 179, 337–347.
82. Rogerson, T.; Cai, D.; Frank, A.; Sano, Y.; Shobe, J.; Lopez-Aranda, M.F.; Silva, A.J. Synaptic tagging during memory allocation. Nat. Rev. Neurosci. 2014, 15, 157–169.
83. Tonegawa, S.; Liu, X.; Ramirez, S.; Redondo, R. Memory Engram Cells Have Come of Age. Neuron 2015, 87, 918–931.
84. Josselyn, S.A.; Kohler, S.; Frankland, P.W. Finding the engram. Nat. Rev. Neurosci. 2015, 16, 521–534.
85. Rashid, A.J.; Yan, C.; Mercaldo, V.; Hsiang, H.-L.; Park, S.; Cole, C.J.; De Cristofaro, A.; Yu, J.; Ramakrishnan, C.; Lee, S.Y.; et al. Competition between engrams influences fear memory formation and recall. Science 2016, 353, 383–387.
86. Sun, X.; Bernstein, M.J.; Meng, M.; Rao, S.; Sorensen, A.; Yao, L.; Zhang, X.; Anikeeva, P.O.; Lin, Y. Functionally Distinct Neuronal Ensembles within the Memory Engram. Cell 2020, 181, 410–423.e17.
87. Bennett, M.R.; Hacker, P.M. Philosophical Foundations of Neuroscience; Blackwell Publishing: Oxford, UK, 2003.
88. Damasio, A.R. Self Comes to Mind: Constructing the Conscious Brain; William Heinemann: London, UK, 2010.
89. Calvo-Merino, B.; Glaser, D.; Grèzes, J.; Passingham, R.; Haggard, P. Action Observation and Acquired Motor Skills: An fMRI Study with Expert Dancers. Cereb. Cortex 2005, 15, 1243–1249.
90. Byrne, R.W. Imitation as behaviour parsing. Philos. Trans. R. Soc. B Biol. Sci. 2003, 358, 529–536.
91. Grafton, S.T.; Hamilton, A. Evidence for a distributed hierarchy of action representation in the brain. Hum. Mov. Sci. 2007, 26, 590–616.
92. Wolpert, D.M.; Ghahramani, Z. Computational principles of movement neuroscience. Nat. Neurosci. 2000, 3, 1212–1217.
93. Schmidt, R.A. A schema theory of discrete motor skill learning. Psychol. Rev. 1975, 82, 225–260.
94. Wolpert, D.; Kawato, M. Multiple paired forward and inverse models for motor control. Neural Netw. 1998, 11, 1317–1329.
95. Del Vecchio, M.; Caruana, F.; Sartori, I.; Pelliccia, V.; Zauli, F.M.; Russo, G.L.; Rizzolatti, G.; Avanzini, P. Action execution and action observation elicit mirror responses with the same temporal profile in human SII. Commun. Biol. 2020, 3, 1–8.
96. Hantman, A.W.; Jessell, T.M. Clarke’s column neurons as the focus of a corticospinal corollary circuit. Nat. Neurosci. 2010, 13, 1233–1239.
97. Kilner, J.M.; Friston, K.; Frith, C. Predictive coding: An account of the mirror neuron system. Cogn. Process. 2007, 8, 159–166.
98. Chen, J.; Snow, J.C.; Culham, J.C.; Goodale, M.A. What Role Does “Elongation” Play in “Tool-Specific” Activation and Connectivity in the Dorsal and Ventral Visual Streams? Cereb. Cortex 2018, 28, 1117–1131.
99. Hebart, M.N.; Hesselmann, G. What Visual Information Is Processed in the Human Dorsal Stream? J. Neurosci. 2012, 32, 8107–8109.
100. Sakuraba, S.; Sakai, S.; Yamanaka, M.; Yokosawa, K.; Hirayama, K. Does the Human Dorsal Stream Really Process a Category for Tools? J. Neurosci. 2012, 32, 3949–3953.
101. Bradley, M.M.; Codispoti, M.; Cuthbert, B.N.; Lang, P.J. Emotion and motivation I: Defensive and appetitive reactions in picture processing. Emotion 2001, 1, 276–298.
102. Kislinger, L.; Kotrschal, K. Hunters and Gatherers of Pictures: Why Photography Has Become a Human Universal. Front. Psychol. 2021, 12, 654474.
103. Nelissen, K.; Borra, E.; Gerbella, M.; Rozzi, S.; Luppino, G.; Vanduffel, W.; Rizzolatti, G.; Orban, G. Action Observation Circuits in the Macaque Monkey Cortex. J. Neurosci. 2011, 31, 3743–3756.
104. Rizzolatti, G.; Craighero, L. The mirror-neuron system. Annu. Rev. Neurosci. 2004, 27, 169–192.
105. Amoruso, L.; Gelormini, C.; Aboitiz, F.A.; González, M.A.; Manes, F.; Cardona, J.F.; Ibanez, A. N400 ERPs for actions: Building meaning in context. Front. Hum. Neurosci. 2013, 7, 57.
106. Espírito Santo, M.G.E.; Maxim, O.S.; Schürmann, M. N1 responses to images of hands in occipito-temporal event-related potentials. Neuropsychologia 2017, 106, 83–89.
107. Moreau, Q.; Parrotta, E.; Era, V.; Martelli, M.L.; Candidi, M. Role of the occipito-temporal theta rhythm in hand visual identification. J. Neurophysiol. 2020, 123, 167–177.
108. Peelen, M.; Downing, P. The neural basis of visual body perception. Nat. Rev. Neurosci. 2007, 8, 636–648.
109. Hafri, A.; Papafragou, A.; Trueswell, J.C. Getting the gist of events: Recognition of two-participant actions from brief displays. J. Exp. Psychol. Gen. 2013, 142, 880–905.
110. Geuzebroek, A.C.; van den Berg, A.V. Eccentricity scale independence for scene perception in the first tens of milliseconds. J. Vis. 2018, 18, 9.
111. Lee, B.B.; Martin, P.; Grünert, U. Retinal connectivity and primate vision. Prog. Retin. Eye Res. 2010, 29, 622–639.
112. Musel, B.; Bordier, C.; Dojat, M.; Pichat, C.; Chokron, S.; Le Bas, J.-F.; Peyrin, C. Retinotopic and Lateralized Processing of Spatial Frequencies in Human Visual Cortex during Scene Categorization. J. Cogn. Neurosci. 2013, 25, 1315–1331.
113. Cohen, M.A.; Rubenstein, J. How much color do we see in the blink of an eye? Cognition 2020, 200, 104268.
114. Edwards, M.; Goodhew, S.C.; Badcock, D.R. Using perceptual tasks to selectively measure magnocellular and parvocellular performance: Rationale and a user’s guide. Psychon. Bull. Rev. 2021, 28, 1029–1050.
115. Masri, R.A.; Grünert, U.; Martin, P.R. Analysis of Parvocellular and Magnocellular Visual Pathways in Human Retina. J. Neurosci. 2020, 40, 8132–8148.
116. Carretié, L.; Kessel, D.; García-Rubio, M.J.; Giménez-Fernández, T.; Hoyos, S.; Hernández-Lorca, M. Magnocellular Bias in Exogenous Attention to Biologically Salient Stimuli as Revealed by Manipulating Their Luminosity and Color. J. Cogn. Neurosci. 2017, 29, 1699–1711.
117. Cheng, A.; Eysel, U.T.; Vidyasagar, T.R. The role of the magnocellular pathway in serial deployment of visual attention. Eur. J. Neurosci. 2004, 20, 2188–2192.
118. Graziano, M.S. New Insights into Motor Cortex. Neuron 2011, 71, 387–388.
119. Binder, J.R.; Desai, R.H. The neurobiology of semantic memory. Trends Cogn. Sci. 2011, 15, 527–536.
120. Gallese, V.; Fadiga, L.; Fogassi, L.; Rizzolatti, G. Action representation and the inferior parietal lobule. In Common Mechanisms in Perception and Action: Attention and Performance XIX; Prinz, W., Hommel, B., Eds.; Oxford University Press: New York, NY, USA, 2002; pp. 334–355.
121. Goodale, M.A.; Milner, A. Separate visual pathways for perception and action. Trends Neurosci. 1992, 15, 20–25.
122. Urgesi, C.; Candidi, M.; Avenanti, A. Neuroanatomical substrates of action perception and understanding: An anatomic likelihood estimation meta-analysis of lesion-symptom mapping studies in brain injured patients. Front. Hum. Neurosci. 2014, 8, 344.
123. Bradley, M.M.; Lang, P.J. The International Affective Picture System (IAPS) in the study of emotion and attention. In Handbook of Emotion Elicitation and Assessment; Coan, J.A., Allen, J.J.B., Eds.; Oxford University Press: New York, NY, USA, 2007; pp. 29–46.
124. Mouras, H.; Stoléru, S.; Moulier, V.; Pélégrini-Issac, M.; Rouxel, R.; Grandjean, B.; Glutron, D.; Bittoun, J. Activation of mirror-neuron system by erotic video clips predicts degree of induced erection: An fMRI study. NeuroImage 2008, 42, 1142–1150.
125. Codispoti, M.; Bradley, M.M.; Lang, P.J. Affective reactions to briefly presented pictures. Psychophysiology 2001, 38, 474–478.
126. Lang, P.J.; Greenwald, M.K.; Bradley, M.M.; Hamm, A.O. Looking at pictures: Affective, facial, visceral, and behavioral reactions. Psychophysiology 1993, 30, 261–273.
127. Strigo, I.A.; Craig, A.D. Interoception, homeostatic emotions and sympathovagal balance. Philos. Trans. R. Soc. B Biol. Sci. 2016, 371, 20160010.
128. Bolliet, O.; Collet, C.; Dittmar, A. Observation of Action and Autonomic Nervous System Responses. Percept. Mot. Skills 2005, 101, 195–202.
129. Chivers, M.L.; Seto, M.; Lalumière, M.L.; Laan, E.; Grimbos, T. Agreement of Self-Reported and Genital Measures of Sexual Arousal in Men and Women: A Meta-Analysis. Arch. Sex. Behav. 2010, 39, 5–56.
130. Bastiaansen, J.; Thioux, M.; Keysers, C. Evidence for mirror systems in emotions. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 2391–2404.
131. Borhani, K.; Ladavas, E.; Maier, M.E.; Avenanti, A.; Bertini, C. Emotional and movement-related body postures modulate visual processing. Soc. Cogn. Affect. Neurosci. 2015, 10, 1092–1101.
132. Goldberg, H.; Preminger, S.; Malach, R. The emotion–action link? Naturalistic emotional stimuli preferentially activate the human dorsal visual stream. NeuroImage 2014, 84, 254–264.
133. Fazekas, P.; Overgaard, M. A Multi-Factor Account of Degrees of Awareness. Cogn. Sci. 2018, 42, 1833–1859.
134. Rudrauf, D.; Lachaux, J.-P.; Damasio, A.; Baillet, S.; Hugueville, L.; Martinerie, J.; Damasio, H.; Renault, B. Enter feelings: Somatosensory responses following early stages of visual induction of emotion. Int. J. Psychophysiol. 2009, 72, 13–23.
135. Barrett, L.F.; Bar, M. See it with feeling: Affective predictions during object perception. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 1325–1334.
136. Pourtois, G.; Schettino, A.; Vuilleumier, P. Brain mechanisms for emotional influences on perception and attention: What is magic and what is not. Biol. Psychol. 2013, 92, 492–512.
137. Dijkerman, H.C.; de Haan, E. Somatosensory processes subserving perception and action. Behav. Brain Sci. 2007, 30, 189–201.
138. Gombrich, E.H. Moment and Movement in Art. J. Warbg. Court. Inst. 1964, 27, 293.
139. Brysbaert, M. How many words do we read per minute? A review and meta-analysis of reading rate. J. Mem. Lang. 2019, 109, 104047.
140. Hayes, T.R.; Henderson, J.M. Center bias outperforms image salience but not semantics in accounting for attention during scene viewing. Atten. Percept. Psychophys. 2020, 82, 985–994.
141. Fan, Y.; Han, S. Temporal dynamic of neural mechanisms involved in empathy for pain: An event-related brain potential study. Neuropsychologia 2008, 46, 160–173.
142. Zeki, S.; Watson, J.; Frackowiak, R. Going beyond the information given: The relation of illusory visual motion to brain activity. Proc. R. Soc. B Biol. Sci. 1993, 252, 215–222.
143. Cutting, J.E. Representing Motion in a Static Image: Constraints and Parallels in Art, Science, and Popular Culture. Perception 2002, 31, 1165–1193.
144. Winawer, J.; Huk, A.C.; Boroditsky, L. A Motion Aftereffect From Still Photographs Depicting Motion. Psychol. Sci. 2008, 19, 276–283.
145. Chao, L.; Martin, A. Representation of Manipulable Man-Made Objects in the Dorsal Stream. NeuroImage 2000, 12, 478–484.
146. Iacoboni, M.; Molnar-Szakacs, I.; Gallese, V.; Buccino, G.; Mazziotta, J.C.; Rizzolatti, G. Grasping the Intentions of Others with One’s Own Mirror Neuron System. PLoS Biol. 2005, 3, e79.
147. Ortigue, S.; Sinigaglia, C.; Rizzolatti, G.; Grafton, S.T. Understanding Actions of Others: The Electrodynamics of the Left and Right Hemispheres. A High-Density EEG Neuroimaging Study. PLoS ONE 2010, 5, e12160.
148. Amoruso, L.; Finisguerra, A. Low or High-Level Motor Coding? The Role of Stimulus Complexity. Front. Hum. Neurosci. 2019, 13, 332.
149. Liu, L.; Ioannides, A.A. Emotion Separation Is Completed Early and It Depends on Visual Field Presentation. PLoS ONE 2010, 5, e9790.
150. Wood, A.; Rychlowska, M.; Korb, S.; Niedenthal, P. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition. Trends Cogn. Sci. 2016, 20, 227–240.
151. Hietanen, J.K.; Nummenmaa, L. The Naked Truth: The Face and Body Sensitive N170 Response Is Enhanced for Nude Bodies. PLoS ONE 2011, 6, e24408.
152. De Gelder, B.; De Borst, A.; Watson, R. The perception of emotion in body expressions. Wiley Interdiscip. Rev. Cogn. Sci. 2015, 6, 149–158.
153. Brosch, T.; Sander, D.; Pourtois, G.; Scherer, K.R. Beyond Fear. Psychol. Sci. 2008, 19, 362–370.
154. Fiave, P.A.; Nelissen, K. Motor resonance in monkey parietal and premotor cortex during action observation: Influence of viewing perspective and effector identity. NeuroImage 2021, 224, 117398.
155. Ge, S.; Liu, H.; Lin, P.; Gao, J.; Xiao, C.; Li, Z. Neural Basis of Action Observation and Understanding from First- and Third-Person Perspectives: An fMRI Study. Front. Behav. Neurosci. 2018, 12, 283.
156. Vogt, S.; Taylor, P.; Hopkins, B. Visuomotor priming by pictures of hand postures: Perspective matters. Neuropsychologia 2003, 41, 941–951.
Figure 1. Visual information in action photos that is assumed to be relevant to rapid responses in the action observation network (AON). Note. The picture on the left is the original photo. The modified picture on the right is a hypothetical illustration of the information from the original photo that is relevant to processing in the AON during the early time window, up to around 300 milliseconds after picture onset. It is assumed that viewers saw the photos on a smartphone they were holding and had their gaze directed at the display when the photos appeared. The photos had a size of 1505 × 1080 pixels and subtended a viewing angle of 10 × 14°. The spatial resolution of the proposed modifications was based on recent findings on the magnocellular visual pathway [110,111,112,114,115]. Because the magnocellular pathway extracts coarse-scale information, a distinction between foveal and peripheral vision is not necessary in the first hundreds of milliseconds after picture onset; this assumption is also supported by Pitcher and Ungerleider’s analysis of how visual information about moving faces and bodies is processed [43]. Viewers, however, do not process picture information uniformly; they preferentially select information at the center of a picture over information along its edges [140]. Picture information that is irrelevant to processing in the AON has been removed from the modifications, which represent only the bodies, body parts, faces, and objects involved in the actions. All changes to the original photos were made in Adobe Photoshop (version 21.1.3, Adobe Systems, San Jose, CA, USA). The original color photos were converted to grayscale. Based on studies of magnocellular performance, the modifications were given a spatial resolution of 4 cycles per degree [114,115]; this low-pass spatial filtering was applied with a Gaussian blur filter using a 9-pixel kernel. To illustrate the center bias of processing, I used a selection mask corresponding to the “Weight Matrix” of Hayes and Henderson [140] (their Figure 2, Panel h); brightness and contrast were reduced as a function of distance from the center of the image, while the overall brightness of the modifications matched the original photos. The photos were taken by the author.
Figure 2. Schematic illustration of the relationship between the representational characteristics of action photos and the outcomes of their cognitive processing in the AON. Note. The small, differently shaped objects stand for action elements, which relate to movements of body parts, the motor control of muscle activities, somatosensory processes or sensations, action-related objects or contexts, goals, and emotional value. An arrow means “evokes”; the color red signifies “activation”.
Table 1. Cortical areas in which responses to photos of body movements or actions have been observed, together with the neuroscientific studies reporting these responses that were included in the present review.
Brain Areas | Studies
Extrastriate body area (EBA) | Downing et al., 2006 [14]; Hafri et al., 2017 [15]; Lu et al., 2016 [16]; O’Toole et al., 2014 [17]; Proverbio et al., 2009 [18]; Thierry et al., 2006 [19]
Middle temporal area (MT) | Hafri et al., 2017 [15]; Hermsdörfer et al., 2001 [20]; Kolesar et al., 2017 [21]; Kourtzi & Kanwisher, 2000 [22]; Lu et al., 2016 [16]; O’Toole et al., 2014 [17]; Proverbio et al., 2009 [18]
Additional regions of the posterior superior temporal sulcus (pSTS) | Arioli et al., 2018 [23]; Canessa et al., 2012 [24]; Hafri et al., 2017 [15]; Hermsdörfer et al., 2001 [20]; Kourtzi & Kanwisher, 2000 [22]; O’Toole et al., 2014 [17]; Pierno et al., 2008 [25]; Proverbio et al., 2011 [26]
Inferior parietal lobule (IPL) and/or intraparietal sulcus (IPS) | Bühler et al., 2008 [27]; Canessa et al., 2012 [24]; Ferretti et al., 2005 [28]; Gu & Han, 2007 [29]; Hafri et al., 2017 [15]; Hermsdörfer et al., 2001 [20]; Kolesar et al., 2017 [21]; Ogawa & Inui, 2011 [30]; Proverbio et al., 2009 [18]; Redouté et al., 2000 [31]; Wehrum et al., 2013 [32]
Premotor cortex (PMC) and/or inferior frontal gyrus (IFG) | Arioli et al., 2018 [23]; Canessa et al., 2012 [24]; Hafri et al., 2017 [15]; Johnson-Frey et al., 2003 [33]; Kolesar et al., 2017 [21]; Mazzarella et al., 2013 [34]; Ogawa & Inui, 2011 [30]; Pierno et al., 2008 [25]; Proverbio et al., 2009 [18]; Watson et al., 2014 [35]
Primary and/or secondary somatosensory cortex (S1, S2) | Bolognini et al., 2013 [36]; Bühler et al., 2008 [27]; Cheng et al., 2008 [37]; Gu & Han, 2007 [29]; Proverbio et al., 2011 [26]
Insula | Arioli et al., 2018 [23]; Bühler et al., 2008 [27]; Deuse et al., 2016 [38]; Gu & Han, 2007 [29]; Jackson et al., 2005 [39]; Kolesar et al., 2017 [21]; Wehrum et al., 2013 [32]
Anterior cingulate cortex (ACC) | Bühler et al., 2008 [27]; Gu & Han, 2007 [29]; Jackson et al., 2005 [39]; Kolesar et al., 2017 [21]; Proverbio et al., 2009 [18], 2011 [26]; Redouté et al., 2000 [31]; Wehrum et al., 2013 [32]
Orbitofrontal cortex (OFC) | Bühler et al., 2008 [27]; Deuse et al., 2016 [38]; Redouté et al., 2000 [31]; Wehrum et al., 2013 [32]
Amygdala | Deuse et al., 2016 [38]; Ferretti et al., 2005 [28]; Hadjikhani & de Gelder, 2003 [40]; Pierno et al., 2008 [25]; Poyo Solanas et al., 2018 [41]
Table 2. Findings on the processing of observed or pictured actions from studies that used transcranial magnetic stimulation (TMS) or non-invasive electrical stimulation methods.
Changes in corticospinal excitability, motor facilitation, and/or downstream modulation
Photos as stimuli | Amoruso et al., 2020 [44]; Borgomaneri et al., 2014 [58], 2015 [59], 2015 [55]; Catmur et al., 2011 [60]; Hajcak et al., 2007 [61]; Urgesi et al., 2006 [62], 2010 [56]
Video clips (or apparent motion cues) as stimuli | Mc Cabe et al., 2015 [52]; Ubaldi et al., 2015 [66]; Urgesi et al., 2006 [71]
Real actions as stimuli | Fadiga et al., 1995 [72]; Feurra et al., 2019 [2]
Muscle specificity in changed motor excitability and facilitation
Photos as stimuli | Amoruso et al., 2020 [44]; Borgomaneri et al., 2015 [55]; Catmur et al., 2011 [60]; Urgesi et al., 2006 [62], 2010 [56]
Real actions as stimuli | Fadiga et al., 1995 [72]
Activations in the AON form a neural simulation of a pictured action
Photos as stimuli | Bolognini et al., 2013 [36]; Borgomaneri et al., 2012 [57], 2015 [59]; Urgesi et al., 2006 [62], 2010 [56]
Video clips as stimuli | Bolognini et al., 2011 [54]; Avenanti et al., 2018 [73] a; Jacquet & Avenanti, 2015 [12]
Specific causal contributions from certain brain areas in action perception
Photos as stimuli | Bolognini et al., 2013 [36]; Catmur et al., 2011 [60]; Urgesi et al., 2007 [63]
Video clips as stimuli | Avenanti et al., 2018 [73] a; Bolognini et al., 2011 [54]; Jacquet & Avenanti, 2015 [12]; Valchev et al., 2016 [10], 2017 [74]
Involvement of somatosensory activations in action perception
Photos as stimuli | Bolognini et al., 2013 [36]
Video clips as stimuli | Bolognini et al., 2011 [54]; Jacquet & Avenanti, 2015 [12]; Valchev et al., 2016 [10], 2017 [74]
Real actions as stimuli | Avikainen et al., 2002 [75] b
Time courses of neural processing stages of pictured actions
Photos as stimuli | Avanzini et al., 2013 [64]; Borgomaneri et al., 2014 [58], 2015 [59], 2015 [55]
Video clips (or apparent motion cues) as stimuli | Barchiesi & Cattaneo, 2013 [76]; Ubaldi et al., 2015 [66]
Close connections between the processing of body postures and emotional value or behavioral relevance
Photos as stimuli | Borgomaneri et al., 2012 [57], 2014 [58], 2015 [59], 2015 [55]; Hajcak et al., 2007 [61]; van Loon et al., 2010 [77]
a The method used in this study was transcranial direct current stimulation (tDCS). b The method used in this study was median nerve stimulation (MNS).