Abstract
It was more than 45 years ago that Gunnar Johansson invented the point-light display technique. This showed for the first time that kinematics is crucial for action recognition, and that humans are very sensitive to their conspecifics’ movements. As a result, many of today’s researchers use point-light displays to better understand the mechanisms behind this recognition ability. In this paper, we propose PLAViMoP, a new database of 3D point-light displays representing everyday human actions (global and fine-motor control movements), sports movements, facial expressions, interactions, and robotic movements. Access to the database is free, at https://plavimop.prd.fr/en/motions. Moreover, it incorporates a search engine to facilitate action retrieval. In this paper, we describe the construction, functioning, and assessment of the PLAViMoP database. Each sequence was analyzed according to four parameters: type of movement, movement label, sex of the actor, and age of the actor. We provide both the mean scores for each assessment of each point-light display, and the comparisons between the different categories of sequences. Our results are discussed in the light of the literature and the suitability of our stimuli for research and applications.
Introduction
Since the discovery of humans’ ability to perceive, recognize, and interpret human movements, several researchers have tried to identify the mechanism behind this ability (see Blake & Shiffrar, 2007; Decatoire et al., 2018; Pavlova, 2012, for a review). One very interesting methodology, which consists of presenting only kinematic information, is the point-light display (PLD), developed by Johansson (1973). In his groundbreaking experiment, Johansson asked observers to watch videos showing an actor performing an action. The actor was represented by point lights that indicated the motion of the actor’s joints (head, shoulders, elbows, wrists, hips, knees, and ankles). Despite the paucity of these stimuli, which showed neither the actor’s body nor the context of the action, observers were able to recognize the represented action quickly and accurately. On the strength of these initial results, this paradigm has been extensively used over the past 40 years. It has shown that humans are not only capable of recognizing the actions that are performed, but can also perceive many characteristics of the actor, including his/her identity (Beardsworth & Buckner, 1981; Cutting & Kozlowski, 1977; Loula, Prasad, Harber, & Shiffrar, 2005; Sevdalis & Keller, 2011; Troje, Westhoff, & Lavrov, 2005; Westhoff & Troje, 2007), sex (Kozlowski & Cutting, 1977; Ma, Paterson, & Pollick, 2006; Mather & Murdoch, 1994; Pollick, Kay, Heim, & Stringer, 2005; Runeson & Frykholm, 1983), emotional state (Atkinson, Dittrich, Gemmell, & Young, 2004; Chouchourelou, Matsuka, Harber, & Shiffrar, 2006; Clarke, Bradshaw, Field, Hampson, & Rose, 2005), and intentions (Chaminade, Meary, Orliaguet, & Decety, 2001; Martel, Bidet-Ildei, & Coello, 2011; Runeson & Frykholm, 1983; Sevdalis & Keller, 2011). This is possible whether the sequences feature one or two actors (Manera, von der Lühe, Schilbach, Verfaillie, & Becchio, 2016; Okruszek & Chrustowicz, 2020).
Using these highly impoverished stimuli, other studies have shown that humans can infer the characteristics of objects with which the actor interacts, such as weight (Runeson & Frykholm, 1981) and size (Jokisch & Troje, 2003).
This incredible ability appears to be related to the use of a specific brain network (Giese & Poggio, 2003) that includes not only brain areas involved in the visual perception of movements, but also areas involved in the production and interpretation of actions (Urgesi, Candidi, & Avenanti, 2014), such as the mirror system, where several areas are activated at the same time when producing or observing actions (Cattaneo & Rizzolatti, 2009; Iacoboni & Mazziotta, 2007; Sale & Franceschini, 2012). Moreover, several authors have shown that the motor repertoire of observers influences the perceptual processing of PLDs: a rich motor repertoire acquired through practice enhances the perceptual processing (Bidet-Ildei, Beauprez, & Badets, 2010; Chary et al., 2004; Louis-Dam, Orliaguet, & Coello, 1999; Pavlova, Bidet-Ildei, Sokolov, Braun, & Krageloh-Mann, 2009), confirming the link between action production and action observation in PLDs (Ulloa & Pineda, 2007). Finally, many researchers have shown that PLDs can help researchers understand social (see Pavlova, 2012, for a review) and cognitive skills such as language (see Bidet-Ildei, Decatoire, & Gil, 2020, for a review) or number processing (Bidet-Ildei, Decatoire, & Gil, 2015), and therefore constitute a valuable tool in the research community.
Despite the number of research groups using PLDs across the world, few have tried to develop PLD databases that can be accessed by the community. Since 2004, to the best of our knowledge, ten databases have been made available to the communityFootnote 1 and described in indexed articles (see Table 1).
Here, we describe a new database (Point-Light Action Visualization and Modification Platform, PLAViMoP; https://plavimop.prd.fr/en/motions) currently featuring 177 human (whole body, upper body, face, with one or two agents) and 21 robotic 3D PLD sequences. The major advantages of this database are that it (1) brings together different types of stimuli (human, robotic) within a single tool, (2) includes a highly intuitive search engine that allows videos to be retrieved quickly and easily, (3) offers free downloads via the Internet and is scalable (anyone can upload a new PLD sequence), and (4) includes an online recognition test that can be performed by all visitors and provides a recognition rate for each PLD. These features make it easy to access the movements of interest and to obtain an up-to-date picture of how well each stimulus is recognized. Moreover, the presence of robotic movements is particularly interesting, either as controls or as a means of better understanding the role of the motor and/or visual systems in the recognition of PLDs. Each PLD sequence is accessible in two formats: MP4, to be directly used in experiments, and C3D, to be modified (see Decatoire et al., 2018, and https://plavimop.prd.fr/en/software for ways of making spatial and kinematic modifications).
After describing how the database was built, and how it works, we explain how we assessed it. We end by identifying its limitations but also its future prospects.
Database creation
The PLAViMoP database contains both human and robotic PLD movements. All of these were captured using either the Qualisys MacReflex motion capture system (Qualisys; Gothenburg, Sweden) consisting of 16 Oqus 7 + cameras, or the Vicon motion capture system (VICON™ Inc., Denver, CO) composed of 20 MX-T40 cameras. Both systems’ frame rates were set at 100 Hz. The resulting videos were then analyzed with Qualisys Track Manager (QTM) or Nexus 1.8 to compute the 3D coordinates of reflective markers during the movements, and finally to build the corresponding PLD sequences.Footnote 2
Human movements
Global movements
Two adults (one male and one female aged 20 years, with no motor, sensory or cognitive disorders) participated in the data collection. Each actor wore 33 reflective spherical markers measuring 14 mm in diameter (see Appendix 1 for detailed location), but after the reconstruction, each PLD only represented 19 points (see Appendix 1 for details). For each actor, we recorded 40 global movements: 31 representing everyday actions (e.g., walking, sitting down), and nine representing sporting gestures (e.g., golf swing, press-up). A full list of global movements is provided in Table 2.
Fine-motor movements
Two adults (one left-handed male aged 25 years and one right-handed female aged 38 years with no motor, sensory or cognitive disorders) participated in the data collection. Each actor wore nine reflective spherical markers measuring 6.4 mm in diameter and 38 reflective hemispherical auto-adhesive markers measuring 4 mm in diameter (see Appendix 2 for detailed locations). For each actor, we recorded 28 movements: 26 representing everyday actions (e.g., writing, drinking), and two representing experimental tasks (e.g., pointing with a tool). A full list of the fine-motor movements is provided in Table 3.
Facial expressions
We recruited two adults (one male aged 50 years and one female aged 30 years with no motor, sensory, or cognitive disorders) who regularly acted in theaters. Each wore 41 markers: 38 4-mm hemispherical facial markers, and three 6.4-mm spherical markers for the sternum and shoulders (see Appendix 3 for details). Importantly, the position of the eyes was not directly recorded, but was calculated a posteriori from the position of dots placed at the outer canthus of each eye. This choice was made to render our PLDs more face-like, as a pretest including 15 adult participants showed that without eyes, our PLD faces seemed strange and ghostly. For each actor, 12 facial expressions were recorded (e.g., happiness, sadness). A full list of facial expressions is provided in Table 4.
Interactions
Four actors (two for global interactions and two for facial interactions) participated. They were the actors we had recruited for the global movements and facial expressions. Using the same markers (i.e., for either global movements or facial expressions), we recorded 14 interactions: ten global and four facial sequences. A full list of the interactions is provided in Table 5.
Robotic movements
Robotic movements were recorded from the humanoid robot Nao (SoftBank Robotics; https://www.ald.softbankrobotics.com). We placed 25 markers measuring 6.4 mm in diameter on Nao (Appendix 4), and recorded 21 actions (for a full list, see Table 6).
Database functioning
The PLAViMoP database comes in two parts: a search engine and a viewing window (see Fig. 1).
The search engine makes it easy to retrieve PLDs from the dataset. Users can select several criteria (e.g., type of movement, sex, age). These criteria are programmed to evolve as the dataset develops. For example, if a movement performed by a child is added to the dataset, Children will appear in the age category.
The viewing window displays the type of movement, the tags (e.g., sex, category, age), and an extract of each movement. All sequences can be downloaded in two formats: MP4, where the motion is seen from a single chosen viewpoint (an angle of 45° for the majority of displays), and C3D. C3D files contain the 3D coordinates of the motion, affording the possibility of constructing point-light versions of the actions seen from any desired perspective (Daems & Verfaillie, 1999; Olofsson, Nyberg, & Nilsson, 1997; Verfaillie, 1993, 2000). With this format, it is also possible to add trajectories or modify the spatial and temporal parameters of the motion (for further details, see PLAViMoP software; https://plavimop.prd.fr/en/software, Decatoire et al., 2018). Users can click on a “Show details” button to obtain details of how a given movement was captured and information about the recognition rate.
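To illustrate how a C3D file’s 3D coordinates can be turned into a 2D point-light view from any perspective, the following is a minimal sketch (not taken from the PLAViMoP software): the 3D marker trajectories are rotated about the vertical axis by a chosen azimuth (e.g., 45°, as in the MP4 renders) and then projected orthographically onto the image plane. The function name and array layout are illustrative assumptions.

```python
import numpy as np

def project_pld(points_3d, azimuth_deg=45.0):
    """Rotate 3D marker trajectories about the vertical (z) axis and
    project them orthographically onto the image plane.

    points_3d: array of shape (n_frames, n_markers, 3), coordinates x, y, z.
    Returns an array of shape (n_frames, n_markers, 2) of 2D dot positions.
    """
    a = np.radians(azimuth_deg)
    # Rotation matrix about the z (vertical) axis.
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    rotated = points_3d @ rot.T
    # Orthographic projection: keep the horizontal (x) and vertical (z) axes.
    return rotated[..., [0, 2]]

# Example: one frame of three markers.
frame = np.array([[[0.0, 1.0, 0.0], [0.0, 2.0, 0.0], [0.0, 3.0, 1.0]]])
print(project_pld(frame).shape)  # (1, 3, 2)
```

A perspective projection (dividing by depth) could be substituted if a camera-like rendering is preferred; the orthographic version suffices for most PLD stimuli.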
Database assessment
The PLAViMoP platform was built to be scalable and autonomous. It includes an online recognition test that can be performed by all visitors. This recognition test lasts 5 min and consists of the recognition of five PLD sequences chosen randomly within the PLAViMoP database. Before starting the test, participants have to indicate their age, sex, and level of sporting activity. These data are stored anonymously. After the test, participants receive feedback about their recognition performance. Movement recognition data are updated as soon as a new person completes the recognition test. The results provided below are based on the data that were available for this online recognition test in June 2021.
Participants
A total of 703 participants (mean age = 25.6 ± 10.5 years; 59% women; 78% amateur athletes, 22% no sport practice) participated in the online recognition test.
Stimuli
The stimuli were extracted from the PLAViMoP database (see above for details of its construction).
Procedure
Participants each had to judge five PLD sequences extracted randomly from the PLAViMoP database. For each sequence, they had to indicate the sex of the actor (male, female, mixed, or none), the type of motion (human, animal, robotic), the label of the action (chosen from a list of 108 labels of the PLD sequences), and the age of the actor: adult (18–64 years), older adult (> 65 years), or child (none at present). No time limit was given to participants. Each sequence was played in a loop until the judgment was completed. At the end of the test, participants were given feedback about their performance (see Appendix 5).
Results
On June 15, 2021, we extracted all the results and divided them into five categories (global human movement, fine human movement, facial expression, interaction, and robotic movement). For each PLD sequence, we calculated the number of observers (male, female, total) who judged it, the mean age of these observers, and the percentages of observers who correctly recognized the type, label, sex, and age. As the evaluation method did not allow us to know precisely which displays were evaluated by the same person, we carried out independent analyses. Moreover, given violations of normality, sphericity, and/or homogeneity of variance, we opted for nonparametric tests: Mann–Whitney tests to compare subcategories within each category, and Kruskal–Wallis tests to compare categories. For all analyses, the effect size was given by eta squared, calculated with Psychometrica (Lenhard & Lenhard, 2017), and power was calculated a posteriori with G*Power (Erdfelder, Faul, & Buchner, 1996).
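The analysis pipeline just described can be sketched in a few lines. This is an illustrative reconstruction, not the authors’ code: the recognition rates below are invented placeholder values, and the eta-squared formula for the Kruskal–Wallis H statistic, η² = (H − k + 1)/(n − k) (Tomczak & Tomczak, 2014), is one common convention that may differ slightly from the Psychometrica calculator used in the paper.

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

# Hypothetical per-sequence recognition rates (%) -- illustrative only.
everyday = [55, 60, 72, 81, 40, 66, 58, 77]
sport = [70, 88, 91, 65, 79, 84]
facial = [20, 35, 28, 41, 15]

# Subcategory comparison (two independent groups): Mann-Whitney U test.
u, p_u = mannwhitneyu(everyday, sport, alternative="two-sided")

# Between-category comparison (k groups): Kruskal-Wallis H test.
h, p_h = kruskal(everyday, sport, facial)

# Eta squared for Kruskal-Wallis: eta2 = (H - k + 1) / (n - k).
k = 3
n = len(everyday) + len(sport) + len(facial)
eta2 = (h - k + 1) / (n - k)
print(round(p_u, 3), round(p_h, 3), round(eta2, 2))
```

With the real data, `everyday`, `sport`, and `facial` would hold the per-sequence percentages of correct responses extracted from the database.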
Human movements
The results were analyzed separately according to action (global, fine, expression, interaction), and were then compared using a nonparametric Kruskal–Wallis test with action type (global, fine, expression, interaction) as a between-participants factor.
Global movements
Results are set out in Table 2. On average, each PLD sequence was judged by 19 observers aged 26 ± 2 years. The mean percentage of correct recognition responses was 62%, with 81% for type, 64% for label, 35% for sex, and 70% for age. The maximum label recognition score (100%) was achieved for “Walk Man”, “Walk Woman”, and “Walked aged Man”, and the minimum recognition score (0%) for “Volleyball Block Man”. Both everyday actions and sports gestures were well recognized concerning the type of movement (mean everyday actions = 77.6%, SD = 20.3%; mean sport actions = 91.27%, SD = 6.42%), the label of the action (mean everyday actions = 62.29%, SD = 25.05%; mean sport actions = 70.55%, SD = 26.49%), and the age of the actor (mean everyday actions = 68.77%, SD = 17.15%; mean sport actions = 74.61%, SD = 16.80%). Concerning the sex of the actor, recognition was relatively low for both types of action (mean everyday actions = 34.43%, SD = 24.47%; mean sport actions = 35.89%, SD = 24.20%). Mann–Whitney comparisons indicated no difference in the percentage of correct recognition responses between everyday actions and sport gestures for label (U62,18 = 429, p = 0.13, ɳ2 = 0.02, 1-β = 0.18), sex (U62,18 = 550, p = 0.94, ɳ2 = 0.001, 1-β = 0.05), or age (U62,18 = 448, p = 0.21, ɳ2 = 0.01, 1-β = 0.11). Concerning the type of movement, sport gestures were better recognized than everyday actions (U62,18 = 323, p < 0.01, ɳ2 = 0.07, 1-β = 0.53).
Fine-motor movements
Results are set out in Table 3. On average, each PLD sequence was judged by 16 observers aged 25.2 ± 2.4 years. The mean percentage of correct recognition responses was 60%, with 87% for type, 52% for label, 23% for sex, and 78% for age. The maximum label recognition score (100%) was achieved for “Drink Woman” and “Light a Match Man”, and the minimum recognition score (0%) for “Sign Man”. Both daily-life actions and experimental tasks were well recognized concerning the type of movement (mean daily-life actions = 89.0%, SD = 14.6%; mean experimental tasks = 73%, SD = 11.2%) and the age of the actor (mean daily-life actions = 79.5%, SD = 11.4%; mean experimental tasks = 59.2%, SD = 7.89%), although in both cases Mann–Whitney comparisons indicated better recognition for daily-life actions (U53,4 = 185, p < 0.05, ɳ2 = 0.11, 1-β = 0.26, and U53,4 = 199, p < 0.01, ɳ2 = 0.15, 1-β = 0.36, respectively). Recognition of the label of the action (mean daily-life actions = 53.9%, SD = 29.2%; mean experimental tasks = 26%, SD = 15.8%) and of the sex of the actor (mean daily-life actions = 24.0%, SD = 16.2%; mean experimental tasks = 23.0%, SD = 8.8%) was relatively low for both subcategories, with no significant difference between them, although there was a tendency toward better label recognition for daily-life actions than for experimental tasks (U53,4 = 153, p = 0.07, ɳ2 = 0.04, 1-β = 0.12 for the label of the action, and U53,4 = 96.5, p = 0.78, ɳ2 = 0.02, 1-β = 0.08 for the sex of the actor).
Facial expressions
Results are set out in Table 4. On average, each PLD sequence was judged by 18 observers aged 25.3 ± 3.3 years. The mean percentage of correct recognition responses was 50%, with 73% for type, 28% for label, 33% for sex, and 65% for age. The highest label recognition score (78%) was for “Laugh Man”, and the lowest recognition score (0%) was for “Doubt Woman”, “Doubt Man” and “Pain Woman”.
Interactions
Results are set out in Table 5. On average, each PLD sequence was judged by 19 observers aged 25.7 ± 3 years. The mean percentage of correct recognition responses was 71%, with 87% for type, 53% for label, 61% for sex, and 82% for age. The highest label recognition score (85%) was for “Dance Duo”, and the lowest recognition score (0%) was for “Tell a Joke Duo”. Both global and facial interactions were well recognized concerning the type of movement (mean global = 93.5%, SD = 4.6%; mean expression = 72.8%, SD = 8.3%), the sex of the actor (mean global = 59.7%, SD = 15.7%; mean expression = 65.2%, SD = 14.4%), and the age of the actor (mean global = 85.3%, SD = 9.2%; mean expression = 76.8%, SD = 6.4%). Concerning the label of the action, recognition was lower for both subcategories (mean global = 62.4%, SD = 26.1%; mean expression = 29.5%, SD = 31.8%). Mann–Whitney comparisons indicated that global and facial interactions did not differ in the percentage of correct recognition responses for label (U10,4 = 8, p = 0.10, ɳ2 = 0.21, 1-β = 0.36), sex (U10,4 = 26, p = 0.43, ɳ2 = 0.05, 1-β = 0.12), or age (U10,4 = 9, p = 0.14, ɳ2 = 0.17, 1-β = 0.29). Concerning the type of movement, global interactions were better recognized than facial interactions (U10,4 = 0, p < 0.01, ɳ2 = 0.57, 1-β = 0.94).
Effect of action type
Concerning the correct recognition of type of movement (see Fig. 2A), a Kruskal–Wallis test indicated a significant effect of action type (W3 = 19.65, p < 0.01, ɳ2 = 0.09, 1-β = 0.86), with better recognition for fine-motor movement (mean = 87.8%, SD = 19.1%) and interaction (mean = 87.6%, SD = 11.2%) sequences than for global (mean = 80.7%, SD = 18.9%) and facial expression (mean = 73%, SD = 19.1%) sequences.
Concerning correct label recognition (see Fig. 2B), a Kruskal–Wallis test indicated an effect of action type (W3 = 25.26, p < 0.01, ɳ2 = 0.13, 1-β = 0.97), with better recognition for global (mean = 64.1%, SD = 25.1%) and interaction (mean = 53%, SD = 30.7%) sequences than for fine (mean = 51.9%, SD = 29.3%) and facial expression (mean = 29%, SD = 26.9%) sequences.
Concerning correct recognition of sex (see Fig. 2C), a Kruskal–Wallis test indicated an effect of action type (W3 = 27.19, p < 0.01, ɳ2 = 0.14, 1-β = 0.98), with better recognition for interaction (mean = 61.3%, SD = 15%) sequences than for global (mean = 34.7%, SD = 23.5%), facial expression (mean = 33.9%, SD = 22.9%) and fine-motor (mean = 23.9%, SD = 15.7%) sequences.
Concerning correct recognition of age (see Fig. 2D), a Kruskal–Wallis test indicated an effect of action type (W3 = 20.7, p < 0.01, ɳ2 = 0.10, 1-β = 0.90), with better recognition for interaction (mean = 82.9%, SD = 9.1%) and fine (mean = 78%, SD = 12.3%) sequences than for global (mean = 70.08%, SD = 17.9%) and facial expression (mean = 65.4%, SD = 12.7%) sequences.
Robotic movements
Results are set out in Table 6. Each PLD sequence was judged by 16 observers aged 25 ± 2.5 years. The mean percentage of correct recognition responses was 56.4%, with 38% for type, 62.4% for label, 62.8% for sex, and 62.8% for age. The best label recognition (92%) was for “Give Robot” and “Tai Chi Robot”, and the worst (4%) for “Clean Robot”.
Effect of movement type
We compared performances on human and robotic PLD sequences with a nonparametric Kruskal–Wallis test, with movement type (human, robotic) as a between-participants factor.
Concerning recognition of type of movement (see Fig. 3A), the Kruskal–Wallis test revealed a significant effect of action type (W1 = 31.47, p < 0.01, ɳ2 = 0.16, 1-β = 0.99), with better recognition for human sequences (mean = 82.5%, SD = 17.8%) than for robotic ones (mean = 38.2%, SD = 29.1%).
Concerning label recognition (see Fig. 3B), the Kruskal–Wallis test did not reveal any significant difference between human and robotic sequences (W1 = 1.08, p = 0.30). Concerning recognition of the actor’s sex (see Fig. 3C), the Kruskal–Wallis test indicated an effect of action type (W1 = 24.7, p < 0.01, ɳ2 = 0.12, 1-β = 0.99), with better recognition for robotic sequences (mean = 62.8%, SD = 19.3%) than for human ones (mean = 33.2%, SD = 22.6%).
Concerning recognition of the actor’s age (see Fig. 3D), the Kruskal–Wallis test indicated an effect of action type (W1 = 4.4, p < 0.05, ɳ2 = 0.02, 1-β = 0.5), with better recognition for human sequences (mean = 73.08%, SD = 15.8%) than for robotic ones (mean = 62.2%, SD = 23.3%).
Discussion
In this paper, we describe the construction, functioning and assessment of the new PLAViMoP database, which currently contains 196 PLD sequences. After discussing the results of the recognition test, we set out the limitations and perspectives.
First of all, our analysis showed that observers were able to correctly label the majority of our stimuli (mean action label recognition = 55%). Most importantly, the level of label recognition was above 70% for 71 PLD sequences (36 representing global human movements, 18 representing fine human movements, three representing facial expressions, five representing human interactions, and nine representing robotic movements), indicating that they could be used without hesitation. Here, we discuss the results for human sequences, then for robotic sequences, and conclude by highlighting the usefulness of the PLAViMoP database for research and application.
Recognition of human movements
Mean label recognition for human PLD sequences was 54% (with a chance level of 0.92%),Footnote 3 with better recognition for global and interaction movements than for fine movements and facial expressions. The level of correct label recognition for global and interaction sequences confirmed previous research highlighting a good ability to recognize these types of stimuli (Lapenta et al., 2017; Manera et al., 2010, 2016; Okruszek & Chrustowicz, 2020). Moreover, it suggests that PLD sequences are better recognized when they show the whole body rather than just part of it (face or upper torso) (Atkinson, Vuong, & Smithson, 2012). The weakness of facial expression recognition observed in our study may be because we used not only prototypical facial expressions such as anger, happiness, disgust, and surprise, but also other facial expressions, such as doubt and boredom. Prototypical facial expressions were fairly well recognized (> 57%), especially happiness (79.5%), in line with what is usually reported in the literature (Bidet-Ildei, Decatoire, & Gil, 2020; Leppänen & Hietanen, 2004). Concerning fine movements, the difficulty may have arisen from the labels we provided, which may have been too similar to be distinctive (e.g., “write” vs. “sign” or “point” vs. “point with a tool”). However, to our knowledge, no previous study has examined the recognition of fine upper-body movements, so we have no means of comparison.
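The comparison against chance can be made explicit with a one-sided binomial test. The sketch below is illustrative only (the observer counts are hypothetical, not taken from the tables): with a forced choice among 108 labels, chance is 1/108 ≈ 0.92%, so even modest correct-response counts are far above chance.

```python
from scipy.stats import binomtest

# Chance level for a forced choice among 108 action labels.
chance = 1 / 108  # ~0.92%

# Hypothetical example: 10 of 19 observers pick the correct label.
result = binomtest(k=10, n=19, p=chance, alternative="greater")
print(result.pvalue < 0.05)  # True: far above chance
```

The same test could be run per sequence using the observer counts reported in Tables 2, 3, 4, 5, and 6.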
Concerning recognition of the actor’s sex, we found relatively weak scores, with mean recognition of 38%, and only 34% for global human movements (see Pollick, Kay, Heim, & Stringer, 2005, for a comparison). These scores may be explained either by the 45° angle of our PLD sequences (Daems & Verfaillie, 1999) or by the ambiguity of the question. The choice of categories (male, female, mixed, none) may have encouraged some participants to answer “mixed” or “none” when they were not completely sure of their response. Recognition of the actor’s sex was particularly difficult for fine and facial expression sequences, in line with the idea that center of moment (Kozlowski & Cutting, 1977; Pollick et al., 2005), and more particularly the sway of the hips and shoulders (Lapenta et al., 2017; Mather & Murdoch, 1994), is crucial for this recognition.
Concerning recognition of type of movement and actor’s age, we obtained good results, with 82.5% correct recognition for type of movement and 73% for actor’s age. This suggests that adults can easily distinguish between human, robotic, and animal movements, as well as between children, young adults, and older ones. These abilities may be related to the level of activation of the motor network, which is known to be stronger for movements that belong to the individual’s motor repertoire (Calvo-Merino, Glaser, Grezes, Passingham, & Haggard, 2005). However, our results need to be confirmed, as we only had two types of movement (robotic and human) and two ages (young and older) for the moment in the database. Moreover, the database currently contains just one PLD sequence representing a movement performed by an older actor.
Robotic movements
Mean label recognition for robotic PLD movements was 62%, which is comparable to what we obtained for human movements. Interestingly, while observers judged human sequences more accurately in terms of type of movement and actor’s age, judgements did not differ concerning label recognition, and were actually better for robotic sequences when it came to recognizing the actor’s sex. The weaker recognition of type of movement and actor’s age for robotic sequences can, as for human sequences, be explained by reduced activation of the motor network. This may also be in line with the fact that results were systematically more variable for robotic movements than for human ones. However, it is important to note that the ability to recognize the action label was equivalent for robotic and human sequences, suggesting that judgements of PLD sequences do not have to rely systematically on motor experience (Chaminade et al., 2001; Pavlova et al., 2009). This finding is consistent with the idea that motor representations can be built from visual experience, as has already been demonstrated in the literature (Beauprez et al., 2019, 2020). Another possibility is that the movements of the humanoid robot Nao can activate motor resonance (Agnew, Bhakoo, & Puri, 2007), insofar as its programs were inspired by human movements. Concerning observers’ ability to recognize the sex of the actor in robotic PLD sequences, this can be explained simply by the lower level of ambiguity: once observers recognize an action as robotic, there is little ambiguity about their response concerning the sex of the actor (i.e., none).
Limitations
First, although the PLAViMoP database contains many different PLD sequences, there is currently no variability (only one repetition) for each proposed sequence. This could be problematic for protocols that require many repetitions of the same stimulus. However, as mentioned above, PLAViMoP is scalable, so several trials of the same action will probably be available in the future.
A second limitation is the a priori label chosen for each sequence. Some of these labels are currently too similar (e.g., “write” vs. “sign” or “drink” vs. “drink from a bottle”) to be reliably distinguished. Consequently, in the majority of cases, actual recognition of the sequences was probably better than the results suggest. It is important to bear in mind that the responses given by each observer for each sequence are accessible to everybody who is registered on the platform (see “Show details” in the viewing window). Consequently, we encourage researchers to regularly consult the latest available data and to access individual responses.
Third, as the test is available online, it is difficult to control the way it is performed (participants’ state, attention paid to the test, reaction time, etc.). For this reason, it is important to have as large a number of responses as possible, and we encourage everybody to share the link to the recognition test (https://plavimop.prd.fr/en/recognition). Although the majority (136/197) of PLD sequences were judged by at least 15 participants in the present study, some were judged by only a few observers, and it will therefore be better to wait for more responses before using these.
Finally, we should be careful with our results, because the procedure adopted in the recognition test does not allow us to determine which displays were judged by the same participant. Consequently, the analyses cannot be treated as purely dependent or independent. In the present experiment, we decided to use independent statistics, but future research should record which displays are associated with each participant. Moreover, the analysis of effect size and power sometimes revealed weak or moderate effects. Consequently, future analyses of the stimuli included in the PLAViMoP database could be carried out with classical experimental procedures, to better control the different parameters of the analysis and to determine a priori the numbers of stimuli and participants needed to obtain significant effects at p < 0.05 with a power of 0.80.
Conclusions and perspectives
Despite these limitations, we think that the PLAViMoP database could be a very useful tool for working with PLD sequences. Accessible to everyone, it allows large numbers of human and robotic PLD sequences to be retrieved quickly and easily. The presence of robotic movements could be of particular interest to researchers in psychology, as a control for biological motion, and to researchers in robotics, to better understand how humans perceive and judge robotic actions. Moreover, the choice of format (MP4 or C3D) means that sequences can either be used directly or be spatially and/or kinematically modified (e.g., by adding masking dots or scrambling the action; see Decatoire et al., 2019, and https://plavimop.prd.fr/en/software for possible transformations). The PLAViMoP database could therefore be useful for researchers by allowing them to directly access normative PLD stimuli to study perceptual, motor, cognitive, and/or social issues. Moreover, PLAViMoP could make PLDs more accessible to practitioners for use in sport or rehabilitation. Many studies have shown that action observation can improve performance in sports (see, for example, Faelli et al., 2019, and Francisco, Decatoire, & Bidet-Ildei, 2022, for recent studies), as well as motor (see Ryan et al., 2021; Sarasso, Gemma, Agosta, Filippi, & Gatti, 2015, for reviews) and cognitive (see Marangolo & Caltagirone, 2014, for a review) abilities, but it has seldom been used until now, mainly because accessing stimuli is difficult.
In conclusion, PLAViMoP constitutes a useful tool for researchers and practitioners who work with PLD stimuli. Moreover, its collaborative and scalable features could open up interesting opportunities in the future. Concerning perspectives, we are working to extend the database by adding more specialist sports movements (e.g., judo, soccer) and movements performed by individuals with motor and/or cognitive disabilities. Moreover, we aim to add several sequences for each action, in order to introduce some variability.
Notes
Some other databases are available on the Internet, but they are not associated with an indexed paper. See, for example: http://mocap.cs.cmu.edu/, http://www.jeroenvanboxtel.com/MocapDatabases.html, https://mocap.cs.sfu.ca/, https://mocap.web.th-koeln.de/, https://fling.seas.upenn.edu/~mocap/cgi-bin/Database.php
Since the writing of this paper, some movements recorded with a markerless motion capture system, consisting of four Microsoft Azure Kinect DK cameras, have been added. The frame rate of this system was lower, fixed at 30 Hz. The resulting videos were then analysed with iPiStudio-pro to compute the 3D coordinates of the unmarked joints, before being transformed into PLD sequences with a MATLAB program. See https://plavimop.prd.fr/en/motions (judo and Goalkeeper dive).
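The transformation into a PLD sequence is done with a MATLAB program; purely as an illustration, the two core steps involved — orthographically projecting C3D-style 3D marker trajectories to 2D dots, and resampling a 30 Hz recording to a higher display rate — can be sketched in Python. The function names and the frame layout (frames × markers × XYZ) are assumptions for this sketch, not the actual PLAViMoP code:

```python
import numpy as np

def project_to_pld(markers_3d, view="frontal"):
    """Orthographically project 3D marker trajectories to 2D point-light frames.

    markers_3d: array of shape (n_frames, n_markers, 3), with axes
    X (left-right), Y (depth), Z (vertical).
    Returns an array of shape (n_frames, n_markers, 2).
    """
    if view == "frontal":      # drop the depth axis
        return markers_3d[:, :, [0, 2]]
    if view == "sagittal":     # drop the left-right axis
        return markers_3d[:, :, [1, 2]]
    raise ValueError(f"unknown view: {view!r}")

def resample_frames(markers_3d, src_hz, dst_hz):
    """Linearly resample trajectories, e.g. from a 30 Hz Kinect recording
    to a higher display rate, preserving total duration."""
    n_src = markers_3d.shape[0]
    duration = (n_src - 1) / src_hz
    n_dst = int(round(duration * dst_hz)) + 1
    t_src = np.linspace(0.0, duration, n_src)
    t_dst = np.linspace(0.0, duration, n_dst)
    out = np.empty((n_dst,) + markers_3d.shape[1:])
    for m in range(markers_3d.shape[1]):       # interpolate each marker and
        for c in range(markers_3d.shape[2]):   # coordinate independently
            out[:, m, c] = np.interp(t_dst, t_src, markers_3d[:, m, c])
    return out
```

For example, a 30-frame sequence captured at 30 Hz resampled to 60 Hz keeps its first and last frames unchanged while doubling the temporal resolution in between.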
Each label must be recognized among 108 possibilities.
References
Agnew, Z. K., Bhakoo, K. K., & Puri, B. K. (2007). The human mirror system: A motor resonance theory of mind-reading. Brain Research Reviews, 54(2), 286–293.
Atkinson, A. P., Dittrich, W. H., Gemmell, A. J., & Young, A. W. (2004). Emotion perception from dynamic and static body expressions in point-light and full-light displays. Perception, 33(6), 717–746.
Atkinson, A. P., Vuong, Q. C., & Smithson, H. E. (2012). Modulation of the face- and body-selective visual regions by the motion and emotion of point-light face and body stimuli. NeuroImage, 59(2), 1700–1712. https://doi.org/10.1016/j.neuroimage.2011.08.073
Badets, A., Bidet-Ildei, C., & Pesenti, M. (2015). Influence of biological kinematics on abstract concept processing. Quarterly Journal of Experimental Psychology (Hove), 68(3), 608–618. https://doi.org/10.1080/17470218.2014.964737
Beardsworth, T., & Buckner, T. (1981). The ability to recognize oneself from a video recording of one’s movements without seeing one’s body. Bulletin of the Psychonomic Society, 18(1), 19–22.
Beauprez, S. A., Bidet-Ildei, C., & Hiraki, K. (2019). Does watching Han Solo or C-3PO similarly influence our language processing? Psychological Research. https://doi.org/10.1007/s00426-019-01169-3
Beauprez, S.-A., Blandin, Y., Almecija, Y., & Bidet-Ildei, C. (2020). Physical and observational practices of unusual actions prime action verb processing. Brain and Cognition, 138, 103630. https://doi.org/10.1016/j.bandc.2019.103630
Bidet-Ildei, C., Beauprez, S. A., & Badets, A. (2020a). A review of literature on the link between action observation and action language: Advancing a shared semantic theory. New Ideas in Psychology, 58, 100777. https://doi.org/10.1016/j.newideapsych.2019.100777
Bidet-Ildei, C., Chauvin, A., & Coello, Y. (2010). Observing or producing a motor action improves later perception of biological motion: Evidence for a gender effect. Acta Psychologica (Amst), 134(2), 215–224. https://doi.org/10.1016/j.actpsy.2010.02.002
Bidet-Ildei, C., Decatoire, A., & Gil, S. (2020b). Recognition of emotions from facial point-light displays. Frontiers in Psychology, 11, 1062. https://doi.org/10.3389/fpsyg.2020.01062
Blake, R., & Shiffrar, M. (2007). Perception of human motion. Annual Review of Psychology, 58, 47–73.
Calvo-Merino, B., Glaser, D. E., Grezes, J., Passingham, R. E., & Haggard, P. (2005). Action observation and acquired motor skills: An FMRI study with expert dancers. Cerebral Cortex, 15(8), 1243–1249.
Cattaneo, L., & Rizzolatti, G. (2009). The mirror neuron system. Archives of Neurology, 66(5), 557–560. https://doi.org/10.1001/archneurol.2009.41
Chaminade, T., Meary, D., Orliaguet, J. P., & Decety, J. (2001). Is perceptual anticipation a motor simulation? A PET study. Neuroreport, 12(17), 3669–3674.
Chary, C., Meary, D., Orliaguet, J. P., David, D., Moreaud, O., & Kandel, S. (2004). Influence of motor disorders on the visual perception of human movements in a case of peripheral dysgraphia. Neurocase, 10(3), 223–232.
Chouchourelou, A., Matsuka, T., Harber, K., & Shiffrar, M. (2006). The visual analysis of emotional actions. Social Neuroscience, 1, 63–74.
Clarke, T. J., Bradshaw, M. F., Field, D. T., Hampson, S. E., & Rose, D. (2005). The perception of emotion from body movement in point-light displays of interpersonal dialogue. Perception, 34(10), 1171–1180.
Cutting, J. E., & Kozlowski, L. (1977). Recognizing friends by their walk: Gait perception without familiarity cues. Bulletin of the Psychonomic Society, 9, 353–356.
Daems, A., & Verfaillie, K. (1999). Viewpoint-dependent priming effects in the perception of human actions and body postures. Visual Cognition, 6, 665–693.
Decatoire, A., Beauprez, S. A., Pylouster, J., Lacouture, P., Blandin, Y., & Bidet-Ildei, C. (2018). PLAViMoP: How to standardize and simplify the use of point-light displays. Behavior Research Methods. https://doi.org/10.3758/s13428-018-1112-x
Decatoire, A., Beauprez, S.-A., Pylouster, J., Lacouture, P., Blandin, Y., & Bidet-Ildei, C. (2019). PLAViMoP: How to standardize and simplify the use of point-light displays. Behavior Research Methods, 51(6), 2573–2596. https://doi.org/10.3758/s13428-018-1112-x
Erdfelder, E., Faul, F., & Buchner, A. (1996). GPOWER: A general power analysis program. Behavior Research Methods, Instruments, & Computers, 28(1), 1–11. https://doi.org/10.3758/BF03203630
Faelli, E., Strassera, L., Pelosin, E., Perasso, L., Ferrando, V., Bisio, A., & Ruggeri, P. (2019). Action observation combined with conventional training improves the rugby lineout throwing performance: A pilot study. Frontiers in Psychology, 10, 889. https://doi.org/10.3389/fpsyg.2019.00889
Francisco, V., Decatoire, A., & Bidet-Ildei, C. (2022). Action observation and motor learning: The role of action observation in learning judo techniques. European Journal of Sport Science, 1–23. https://doi.org/10.1080/17461391.2022.2036816
Giese, M. A., & Poggio, T. (2003). Neural mechanisms for the recognition of biological movements. Nature Review Neuroscience, 4(3), 179–192.
Iacoboni, M., & Mazziotta, J. C. (2007). Mirror neuron system: Basic findings and clinical applications. Annals of Neurology, 62(3), 213–218.
Johansson, G. (1973). Visual perception of biological motion and a model for its analysis. Perception & Psychophysics, 14, 201–211.
Jokisch, D., & Troje, N. F. (2003). Biological motion as a cue for the perception of size. Journal of Vision, 3(4), 1–1. https://doi.org/10.1167/3.4.1
Kozlowski, L., & Cutting, J. E. (1977). Recognizing the sex of a walker from dynamic point-light displays. Perception & Psychophysics, 21, 575–580.
Lapenta, O. M., Xavier, A. P., Côrrea, S. C., & Boggio, P. S. (2017). Human biological and nonbiological point-light movements: Creation and validation of the dataset. Behavior Research Methods, 49(6), 2083–2092. https://doi.org/10.3758/s13428-016-0843-9
Lenhard, W., & Lenhard, A. (2017). Computation of effect sizes. Unpublished. https://doi.org/10.13140/RG.2.2.17823.92329
Leppänen, J. M., & Hietanen, J. K. (2004). Positive facial expressions are recognized faster than negative facial expressions, but why? Psychological Research Psychologische Forschung, 69(1–2), 22–29. https://doi.org/10.1007/s00426-003-0157-2
Louis-Dam, A., Orliaguet, J.-P., & Coello, Y. (1999). Perceptual anticipation in grasping movement: When does it become possible? In M. G. Grealy & J. A. Thomson (Eds.), Studies in Perception and Action. Lawrence Erlbaum Associates.
Loula, F., Prasad, S., Harber, K., & Shiffrar, M. (2005). Recognizing people from their movement. Journal of Experimental Psychology Human Perception and Performance, 31(1), 210–220.
Ma, Y., Paterson, H. M., & Pollick, F. E. (2006). A motion capture library for the study of identity, gender, and emotion perception from biological motion. Behavior Research Methods, 38(1), 134–141. https://doi.org/10.3758/bf03192758
Manera, V., Schouten, B., Becchio, C., Bara, B. G., & Verfaillie, K. (2010). Inferring intentions from biological motion: A stimulus set of point-light communicative interactions. Behavior Research Methods, 42(1), 168–178. https://doi.org/10.3758/BRM.42.1.168
Manera, V., von der Lühe, T., Schilbach, L., Verfaillie, K., & Becchio, C. (2016). Communicative interactions in point-light displays: Choosing among multiple response alternatives. Behavior Research Methods, 48(4), 1580–1590. https://doi.org/10.3758/s13428-015-0669-x
Marangolo, P., & Caltagirone, C. (2014). Options to enhance recovery from aphasia by means of non-invasive brain stimulation and action observation therapy. Expert Review of Neurotherapeutics, 14(1), 75–91. https://doi.org/10.1586/14737175.2014.864555
Martel, L., Bidet-Ildei, C., & Coello, Y. (2011). Anticipating the terminal position of an observed action: Effect of kinematic, structural, and identity information. Visual Cognition, 19(6), 785–798. https://doi.org/10.1080/13506285.2011.587847
Mather, G., & Murdoch, L. (1994). Gender discrimination in biological motion displays based on dynamic cues. Proceedings of the Royal Society of London. Series B: Biological Sciences, 258(1353), 273–279. https://doi.org/10.1098/rspb.1994.0173
Okruszek, Ł., & Chrustowicz, M. (2020). Social perception and interaction database: A novel tool to study social cognitive processes with point-light displays. Frontiers in Psychiatry, 11, 123. https://doi.org/10.3389/fpsyt.2020.00123
Olofsson, U., Nyberg, L., & Nilsson, L. (1997). Priming and recognition of human motion patterns. Visual Cognition, 4, 373–382.
Pavlova, M. (2012). Biological motion processing as a hallmark of social cognition. Cerebral Cortex, 22(5), 981–995. https://doi.org/10.1093/cercor/bhr156
Pavlova, M., Bidet-Ildei, C., Sokolov, A. N., Braun, C., & Krageloh-Mann, I. (2009). Neuromagnetic response to body motion and brain connectivity. Journal of Cognitive Neuroscience, 21(5), 837–846.
Pollick, F. E., Kay, J. W., Heim, K., & Stringer, R. (2005). Gender recognition from point-light walkers. Journal of Experimental Psychology. Human Perception and Performance, 31(6), 1247–1265. https://doi.org/10.1037/0096-1523.31.6.1247
Pollick, F. E., Lestou, V., Ryu, J., & Cho, S. B. (2002). Estimating the efficiency of recognizing gender and affect from biological motion. Vision Research, 42(20), 2345–2355.
Runeson, S., & Frykholm, G. (1981). Visual perception of lifted weight. Journal of Experimental Psychology Human Perception and Performance, 7(4), 733–740.
Runeson, S., & Frykholm, G. (1983). Kinematic specification of dynamics as an informational basis for person and action perception: Expectation, gender recognition, and deceptive intention. Journal of Experimental Psychology: General, 112, 585–615.
Ryan, D., Fullen, B., Rio, E., Segurado, R., Stokes, D., & O’Sullivan, C. (2021). Effect of action observation therapy in the rehabilitation of neurologic and musculoskeletal conditions: A systematic review. Archives of Rehabilitation Research and Clinical Translation, 3(1), 100106. https://doi.org/10.1016/j.arrct.2021.100106
Sale, P., & Franceschini, M. (2012). Action observation and mirror neuron network: A tool for motor stroke rehabilitation. European Journal of Physical and Rehabilitation Medicine, 48(2), 313–318.
Sarasso, E., Gemma, M., Agosta, F., Filippi, M., & Gatti, R. (2015). Action observation training to improve motor function recovery: A systematic review. Archives of Physiotherapy, 5(1), 14. https://doi.org/10.1186/s40945-015-0013-x
Sevdalis, V., & Keller, P. E. (2011). Perceiving performer identity and intended expression intensity in point-light displays of dance. Psychological Research Psychologische Forschung, 75(5), 423–434. https://doi.org/10.1007/s00426-010-0312-5
Shipley, T. F., & Brumberg, J. S. (2004). Markerless motion-capture for point-light displays. Available at http://astro.temple.edu/~tshipley/mocap/MarkerlessMoCap.pdf. http://astro.temple.edu/~tshipley/mocap/dotMovie.html
Troje, N. F., Westhoff, C., & Lavrov, M. (2005). Person identification from biological motion: Effects of structural and kinematic cues. Perception and Psychophysics, 67(4), 667–675.
Ulloa, E., & Pineda, J. (2007). Recognition of point-light biological motion: Mu rhythms and mirror neuron activity. Behavioural Brain Research, 183(2), 188–194. https://doi.org/10.1016/j.bbr.2007.06.007
Vanrie, J., & Verfaillie, K. (2004). Perception of biological motion: A stimulus set of human point-light actions. Behavior Research Methods Instruments & Computers, 36(4), 625–629.
Verfaillie, K. (1993). Orientation-dependent priming effects in the perception of biological motion. Journal of Experimental Psychology Human Perception and Performance, 19(5), 992–1013.
Verfaillie, K. (2000). Perceiving human locomotion: Priming effects in direction discrimination. Brain and Cognition, 44(2), 192–213.
Westhoff, C., & Troje, N. F. (2007). Kinematic cues for person identification from biological motion. Perception & Psychophysics, 69(2), 241–253.
Zaini, H., Fawcett, J. M., White, N. C., & Newman, A. J. (2013). Communicative and noncommunicative point-light actions featuring high-resolution representation of the hands and fingers. Behavior Research Methods, 45(2), 319–328. https://doi.org/10.3758/s13428-012-0273-2
Acknowledgments
Support for this research was provided by Nouvelle Aquitaine Regional Council (P-2020-BAFE-161), in partnership with the European Union (FEDER/ERDF), and by the French Government’s Investments for the Future research program, through the Robotex Equipment of Excellence (ANR-10-EQPX-44). We would like to thank Arnaud Revel, professor at the University of La Rochelle, for the loan of the Nao robot, which allowed us to record robotic movements. We also thank the actors who kindly participated in the motion capture and the students who participated in the analysis.
Open practices statements
The data and materials presented in this paper are available online at: https://plavimop.prd.fr/en/motions
Appendices
Appendix 1
Position of markers for global and global interaction movements
Appendix 2
Position of markers for fine-motor movements
47 markers:
- 9 on the body: Head, R_Shoulder, L_Shoulder, R_Elbow, L_Elbow, R_Wrist_Int, R_Wrist_Ext, L_Wrist_Int, L_Wrist_Ext
Appendix 3
Position of markers for facial expressions and facial interactions
41 markers:
8 on eyebrows: REB1 / REB2 / REB3 / REB4 / LEB1 / LEB2 / LEB3 / LEB4
2 on eyes: ER / EL
5 on nose: N1 / N2 / N3 / NR / NL
8 on mouth: M1 / M2 / M3 / M4 / M5 / M6 / M7 / M8
15 on face: F1 / F2 / F3 / F4 / F5 / F6 / F7 / F8 / F9 / F10 / F11 / F12 / F13 / F14 / F15
+ RSHO, LSHO, STER
Appendix 4
Position of markers for robotic movements
25 markers: 1 Top_Head, 2 Right_Shoulder, 3 Left_Shoulder, 4 Right_Elbow, 5 Left_Elbow, 6 Right_Finger, 7 Left_Finger, 8 Right_Finger_1, 9 Right_Finger_2, 10 Right_Finger_3, 11 Left_Finger_1, 12 Left_Finger_2, 13 Left_Finger_3, 14 Right_Hip, 15 Left_Hip, 16 Right_Knee, 17 Left_Knee, 18 Right_Ankle, 19 Left_Ankle, 20 Right_Toe, 21 Left_Toe, 22 Right_Heel (non-visible), 23 Left_Heel, 24 Right_Wrist, 25 Left_Wrist
Appendix 5
Procedure for the online test
1) The participant was informed about the test. 2) He/she provided personal information. 3) He/she judged five PLD sequences. 4) He/she was given feedback: a mean recognition rate, and, for each PLD sequence, correct responses shown in green and incorrect ones in red.
Cite this article
Bidet-Ildei, C., Francisco, V., Decatoire, A. et al. PLAViMoP database: A new continuously assessed and collaborative 3D point-light display dataset. Behav Res 55, 694–715 (2023). https://doi.org/10.3758/s13428-022-01850-3