Introduction

Ever since humans' ability to perceive, recognize, and interpret human movements was first demonstrated, researchers have tried to identify the mechanism behind this ability (see Blake & Shiffrar, 2007; Decatoire et al., 2018; Pavlova, 2012, for reviews). One particularly interesting methodology, which consists of presenting kinematic information alone, is the point-light display (PLD), developed by Johansson (1973). In his groundbreaking experiment, Johansson asked observers to watch videos showing an actor performing an action. The actor was represented by point lights that indicated the motion of the actor's joints (head, shoulders, elbows, wrists, hips, knees, and ankles). Despite the paucity of these stimuli, which showed neither the actor's body nor the context of the action, observers were able to recognize the represented action quickly and accurately. On the strength of these initial results, this paradigm has been used extensively over the past 40 years. It has shown that humans are not only capable of recognizing the actions that are performed, but can also perceive many characteristics of the actor, including his/her identity (Beardsworth & Buckner, 1981; Cutting & Kozlowski, 1977; Loula, Prasad, Harber, & Shiffrar, 2005; Sevdalis & Keller, 2011; Troje, Westhoff, & Lavrov, 2005; Westhoff & Troje, 2007), sex (Kozlowski & Cutting, 1977; Ma, Paterson, & Pollick, 2006; Mather & Murdoch, 1994; Pollick, Kay, Heim, & Stringer, 2005; Runeson & Frykholm, 1983), emotional state (Atkinson, Dittrich, Gemmell, & Young, 2004; Chouchourelou, Matsuka, Harber, & Shiffrar, 2006; Clarke, Bradshaw, Field, Hampson, & Rose, 2005), and intentions (Chaminade, Meary, Orliaguet, & Decety, 2001; Martel, Bidet-Ildei, & Coello, 2011; Runeson & Frykholm, 1983; Sevdalis & Keller, 2011). This is possible whether the sequences feature one or two actors (Manera, von der Lühe, Schilbach, Verfaillie, & Becchio, 2016; Okruszek & Chrustowicz, 2020). Using these highly impoverished stimuli, other studies have shown that humans can infer the characteristics of objects with which the actor interacts, such as weight (Runeson & Frykholm, 1981) and size (Jokisch & Troje, 2003).

This remarkable ability appears to rely on a specific brain network (Giese & Poggio, 2003) that includes not only brain areas involved in the visual perception of movements, but also areas involved in the production and interpretation of actions (Urgesi, Candidi, & Avenanti, 2014), such as the mirror system, in which the same areas are activated both when producing and when observing actions (Cattaneo & Rizzolatti, 2009; Iacoboni & Mazziotta, 2007; Sale & Franceschini, 2012). Moreover, several authors have shown that observers' motor repertoire influences the perceptual processing of PLDs: a rich motor repertoire acquired through practice enhances perceptual processing (Bidet-Ildei, Beauprez, & Badets, 2010; Chary et al., 2004; Louis-Dam, Orliaguet, & Coello, 1999; Pavlova, Bidet-Ildei, Sokolov, Braun, & Krageloh-Mann, 2009), confirming the link between action production and action observation in PLDs (Ulloa & Pineda, 2007). Finally, many studies have shown that PLDs can help researchers understand social (see Pavlova, 2012, for a review) and cognitive skills such as language (see Bidet-Ildei, Decatoire, & Gil, 2020, for a review) or number processing (Bidet-Ildei, Decatoire, & Gil, 2015), and they therefore constitute a valuable tool for the research community.

Despite the number of research groups using PLDs across the world, few have tried to develop PLD databases that can be accessed by the community. To the best of our knowledge, ten databases have been made available since 2004 (Footnote 1) and described in indexed articles (see Table 1).

Table 1 Main characteristics of PLD databases made available to date and described in indexed articles

Here, we describe a new database (Point-Light Action Visualization and Modification Platform, PLAViMoP; https://plavimop.prd.fr/en/motions) currently featuring 177 human (whole body, upper body, face, with one or two agents) and 21 robotic 3D PLD sequences. The major advantages of this database are that it (1) brings together different types of stimuli (human, robotic) within a single tool, (2) includes a highly intuitive search engine that allows videos to be retrieved quickly and easily, (3) offers free downloads via the Internet and is scalable (anyone can upload a new PLD sequence), and (4) includes an online recognition test that can be performed by all visitors and provides a recognition rate for each PLD. This makes it easier to access the movements of interest and provides an up-to-date picture of how well each stimulus is recognized. Moreover, the presence of robotic movements is particularly valuable, both as control stimuli and for better understanding the roles of the motor and/or visual systems in PLD recognition. Each PLD sequence is accessible in two formats: MP4, to be used directly in experiments, and C3D, to be modified (see Decatoire et al., 2018, and https://plavimop.prd.fr/en/software for ways of making spatial and kinematic modifications).

After describing how the database was built and how it works, we explain how we assessed it. We end by identifying its limitations as well as its future prospects.

Database creation

The PLAViMoP database contains both human and robotic PLD movements. All of these were captured using either the Qualisys MacReflex motion capture system (Qualisys; Gothenburg, Sweden) consisting of 16 Oqus 7+ cameras, or the Vicon motion capture system (VICON™ Inc., Denver, CO) composed of 20 MX-T40 cameras. Both systems' frame rates were set at 100 Hz. The resulting recordings were then processed with Qualisys Track Manager (QTM) or Nexus 1.8 to compute the 3D coordinates of the reflective markers during the movements, and finally to build the corresponding PLD sequences (Footnote 2).
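For readers who want to work directly with such marker coordinates, the sketch below shows one way to turn exported trajectories into a simple on-screen point-light animation. It is a minimal illustration only, assuming the open-source Python libraries ezc3d and matplotlib, a hypothetical file name, and a z-up axis convention; it is not the pipeline used to build the database itself.

```python
import numpy as np
import ezc3d
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Load marker trajectories from a C3D export (hypothetical file name).
c3d = ezc3d.c3d("walk_man.c3d")
pts = c3d["data"]["points"]          # shape (4, n_markers, n_frames)
xs, zs = pts[0], pts[2]              # frontal view; z-up convention assumed

fig, ax = plt.subplots(facecolor="black")
ax.set_facecolor("black")
ax.set_aspect("equal")
ax.axis("off")
ax.set_xlim(np.nanmin(xs), np.nanmax(xs))
ax.set_ylim(np.nanmin(zs), np.nanmax(zs))
dots = ax.scatter(xs[:, 0], zs[:, 0], s=25, c="white")

def update(frame):
    # Move every marker dot to its position in the current frame.
    dots.set_offsets(np.column_stack([xs[:, frame], zs[:, frame]]))
    return dots,

# 100 Hz capture -> 10 ms per frame.
anim = FuncAnimation(fig, update, frames=pts.shape[2], interval=10)
plt.show()
```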

Human movements

Global movements

Two adults (one male and one female, both aged 20 years, with no motor, sensory, or cognitive disorders) participated in the data collection. Each actor wore 33 reflective spherical markers measuring 14 mm in diameter (see Appendix 1 for detailed locations), but after reconstruction, each PLD only represented 19 points (see Appendix 1 for details). For each actor, we recorded 40 global movements: 31 representing everyday actions (e.g., walking, sitting down), and nine representing sporting gestures (e.g., golf swing, press-up). A full list of global movements is provided in Table 2.

Table 2 Data for the assessment of global PLD sequences

Fine-motor movements

Two adults (one left-handed male aged 25 years and one right-handed female aged 38 years, with no motor, sensory, or cognitive disorders) participated in the data collection. Each actor wore nine reflective spherical markers measuring 6.4 mm in diameter and 38 reflective hemispherical self-adhesive markers measuring 4 mm in diameter (see Appendix 2 for detailed locations). For each actor, we recorded 28 movements: 26 representing everyday actions (e.g., writing, drinking), and two representing experimental tasks (e.g., pointing with a tool). A full list of the fine-motor movements is provided in Table 3.

Table 3 Data for the assessment of fine PLD sequences

Facial expressions

We recruited two adults (one male aged 50 years and one female aged 30 years, with no motor, sensory, or cognitive disorders) who acted regularly in theater. Each wore 41 markers: 38 4-mm hemispherical facial markers, and three 6.4-mm spherical markers on the sternum and shoulders (see Appendix 3 for details). Importantly, the position of the eyes was not directly recorded, but was calculated a posteriori from the positions of markers placed at the outer canthus of each eye. This choice was made to render our PLDs more face-like, as a pretest with 15 adult participants showed that without eyes, our PLD faces seemed strange and ghostly. For each actor, 12 facial expressions were recorded (e.g., happiness, sadness). A full list of facial expressions is provided in Table 4.

Table 4 Data for the assessment of facial expression PLD sequences

Interactions

Four actors (two for global interactions and two for facial interactions) participated. They were the actors we had recruited for the global movements and facial expressions. Using the same markers (i.e., for either global movements or facial expressions), we recorded 14 interactions: ten global and four facial sequences. A full list of the interactions is provided in Table 5.

Table 5 Data for the assessment of interaction PLD sequences

Robotic movements

Robotic movements were recorded from the humanoid robot Nao (SoftBank Robotics; https://www.ald.softbankrobotics.com). We placed 25 markers measuring 6.4 mm in diameter on Nao (Appendix 4), and recorded 21 actions (for a full list, see Table 6).

Table 6 Data for the assessment of robotic PLD sequences

Database functioning

The PLAViMoP database comes in two parts: a search engine and a viewing window (see Fig. 1).

Fig. 1

Overview of PLAViMoP database: search engine on the left and viewing window on the right

The search engine makes it easy to retrieve PLDs in the dataset. Users can select several criteria (e.g., type of movement, sex, age). These criteria are programmed to evolve as the dataset develops: for example, if a movement performed by a child is added to the dataset, "Children" will appear in the age category, as illustrated in the sketch below.
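This dynamic behavior can be illustrated with a small sketch: the filter values offered by the search engine are derived from the records actually present, so a new tag value becomes selectable as soon as a matching sequence is uploaded. This is an illustrative reconstruction in Python under our own naming assumptions (build_facets, the tag keys), not the platform's actual implementation.

```python
from collections import defaultdict

def build_facets(records, keys=("type", "sex", "age", "category")):
    """Collect the distinct values present for each filterable tag,
    so that new values (e.g., 'child') appear as soon as a matching
    sequence is added to the dataset."""
    facets = defaultdict(set)
    for record in records:
        for key in keys:
            if key in record:
                facets[key].add(record[key])
    return {key: sorted(values) for key, values in facets.items()}

# Example: adding a child's movement makes 'child' selectable.
records = [
    {"type": "human", "sex": "male", "age": "adult", "category": "global"},
    {"type": "robotic", "sex": "none", "age": "none", "category": "global"},
    {"type": "human", "sex": "female", "age": "child", "category": "fine"},
]
print(build_facets(records)["age"])  # ['adult', 'child', 'none']
```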

The viewing window displays the type of movement, the tags (e.g., sex, category, age), and an extract of each movement. All sequences can be downloaded in two formats: MP4, where the motion is seen from a single chosen viewpoint (an angle of 45° for the majority of displays), and C3D. C3D files contain the 3D coordinates of the motion, affording the possibility of constructing point-light versions of the actions seen from any desired perspective (Daems & Verfaillie, 1999; Olofsson, Nyberg, & Nilsson, 1997; Verfaillie, 1993, 2000). With this format, it is also possible to add trajectories or to modify the spatial and temporal parameters of the motion (for further details, see the PLAViMoP software: https://plavimop.prd.fr/en/software; Decatoire et al., 2018). Users can click on a "Show details" button to obtain details of how a given movement was captured and information about the recognition rate.
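To make the viewpoint flexibility concrete, here is a minimal sketch of how the 3D coordinates stored in a C3D file can be re-projected to any azimuth before rendering. It assumes a z-up coordinate convention and an orthographic projection; axis conventions can differ between capture systems, so this is a sketch rather than a definitive recipe.

```python
import numpy as np

def project_view(points_xyz, azimuth_deg=45.0):
    """Rotate 3D marker coordinates about the vertical axis and return
    2D screen coordinates for a point-light rendering.

    points_xyz: array of shape (n_frames, n_markers, 3), z-up assumed.
    """
    a = np.radians(azimuth_deg)
    rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    rotated = points_xyz @ rz.T   # rotate every frame at once
    # Keep x (horizontal) and z (vertical); y becomes depth and is dropped.
    return rotated[..., [0, 2]]

# Example: render the same action from the 45-degree view used for the
# MP4s, or from a profile view, without recapturing anything.
frames = np.random.rand(200, 19, 3)   # placeholder for real C3D data
front_45 = project_view(frames, 45.0)
profile = project_view(frames, 90.0)
```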

Database assessment

The PLAViMoP platform was built to be scalable and autonomous. It includes an online recognition test that can be performed by all visitors. This test lasts 5 min and consists of judging five PLD sequences chosen at random from the PLAViMoP database. Before starting the test, participants have to indicate their age, sex, and level of sporting activity. These data are stored anonymously. After the test, participants receive feedback about their recognition performance. Movement recognition data are updated as soon as a new person completes the recognition test. The results provided below are based on the data available from this online recognition test in June 2021.
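The flow of the test can be summarized in a few lines of code. The sketch below is a schematic reconstruction under our own naming assumptions (draw_test_items, update_recognition_rate), not the platform's source: five sequences are sampled at random, and each sequence's recognition rate is updated incrementally as new judgments arrive.

```python
import random

def draw_test_items(catalog, n_items=5, seed=None):
    """Sample n distinct PLD sequences for one test session."""
    rng = random.Random(seed)
    return rng.sample(catalog, n_items)

def update_recognition_rate(stats, sequence_id, correct):
    """Update a sequence's running recognition rate after one judgment.
    stats maps sequence_id -> (n_judgments, n_correct)."""
    n, hits = stats.get(sequence_id, (0, 0))
    stats[sequence_id] = (n + 1, hits + int(correct))
    n, hits = stats[sequence_id]
    return 100.0 * hits / n

# Example session: the rate shown under "Show details" evolves with
# every completed test.
stats = {}
catalog = ["walk_man", "drink_woman", "golf_man",
           "laugh_man", "give_robot", "dance_duo"]
for seq in draw_test_items(catalog):
    update_recognition_rate(stats, seq, correct=True)  # judgment stub
```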

Participants

A total of 703 participants (mean age = 25.6 ± 10.5 years; 59% women; 78% amateur athletes, 22% with no sport practice) took part in the online recognition test.

Stimuli

The stimuli were extracted from the PLAViMoP database (see above for details of its construction).

Procedure

Participants each had to judge five PLD sequences extracted randomly from the PLAViMoP database. For each sequence, they had to indicate the sex of the actor (male, female, mixed, or none), the type of motion (human, animal, or robotic), the label of the action (chosen from a list of the 108 PLD sequence labels), and the age of the actor: adult (18–64 years), older adult (> 65 years), or child (none at present). Participants were given no time limit, and each sequence was played in a loop until the judgment was completed. At the end of the test, participants received feedback about their performance (see Appendix 5).

Results

On June 15, 2021, we extracted all the results and divided them into five categories (global human movement, fine human movement, facial expression, interaction, and robotic movement). For each PLD sequence, we calculated the number of observers (male, female, total) who judged it, the mean age of these observers, and the percentages of observers who correctly recognized the type, label, sex, and age. As the evaluation method does not allow us to know precisely which displays were evaluated by the same person, we decided to carry out independent analyses. Moreover, given violations of normality, sphericity, and/or homogeneity of variance, parametric analyses were not appropriate, so we opted for nonparametric tests: Mann–Whitney comparisons for subcategories within each category, and Kruskal–Wallis tests for comparisons between categories. For all analyses, effect size was given by eta squared, calculated with Psychometrica (Lenhard & Lenhard, 2017), and power was calculated a posteriori with G*Power (Erdfelder, Faul, & Buchner, 1996).
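Analyses of this kind can be reproduced along the following lines. This is a sketch with placeholder data, using scipy for the nonparametric tests and a standard H-based eta squared formula, η² = (H − k + 1)/(n − k); it is not the exact script we ran, and the numbers are illustrative only.

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

# Placeholder per-sequence scores (% correct label recognition).
everyday = np.array([62.0, 45.0, 88.0, 71.0, 54.0])
sport    = np.array([70.0, 95.0, 58.0])

# Subcategory comparison within a category: Mann-Whitney U.
u_stat, p_mw = mannwhitneyu(everyday, sport, alternative="two-sided")

# Between-category comparison: Kruskal-Wallis, with eta squared
# derived from the H statistic: eta2 = (H - k + 1) / (n - k).
groups = [everyday, sport, np.array([30.0, 52.0, 41.0, 47.0])]
h_stat, p_kw = kruskal(*groups)
k = len(groups)
n = sum(len(g) for g in groups)
eta2 = (h_stat - k + 1) / (n - k)
print(f"U = {u_stat:.1f}, p = {p_mw:.3f}; "
      f"H = {h_stat:.2f}, p = {p_kw:.3f}, eta2 = {eta2:.3f}")
```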

Human movements

The results were analyzed separately for each action type (global, fine, expression, interaction), and the four types were then compared using a nonparametric Kruskal–Wallis test with action type as a between-participants factor.

Global movements

Results are set out in Table 2. On average, each PLD sequence was judged by 19 observers aged 26 ± 2 years. The mean percentage of correct recognition responses was 62%, with 81% for type, 64% for label, 35% for sex, and 70% for age. The maximum label recognition score (100%) was achieved for "Walk Man", "Walk Woman", and "Walk Aged Man", and the minimum recognition score (0%) for "Volleyball Block Man". Both everyday actions and sport gestures were well recognized in terms of the type of movement (mean everyday actions = 77.6%, SD = 20.3%; mean sport actions = 91.27%, SD = 6.42%), the label of the action (mean everyday actions = 62.29%, SD = 25.05%; mean sport actions = 70.55%, SD = 26.49%), and the age of the actor (mean everyday actions = 68.77%, SD = 17.15%; mean sport actions = 74.61%, SD = 16.80%). Concerning the sex of the actor, recognition was relatively low for both types of action (mean everyday actions = 34.43%, SD = 24.47%; mean sport actions = 35.89%, SD = 24.20%). Mann–Whitney comparisons indicated no difference in the percentage of correct recognition responses between everyday actions and sport gestures for label (U(62, 18) = 429, p = 0.13, η² = 0.02, 1-β = 0.18), sex (U(62, 18) = 550, p = 0.94, η² = 0.001, 1-β = 0.05), or age (U(62, 18) = 448, p = 0.21, η² = 0.01, 1-β = 0.11). Concerning the type of movement, sport gestures were better recognized than everyday actions (U(62, 18) = 323, p < 0.01, η² = 0.07, 1-β = 0.53).

Fine-motor movements

Results are set out in Table 3. On average, each PLD sequence was judged by 16 observers aged 25.2 ± 2.4 years. The mean percentage of correct recognition responses was 60%, with 87% for type, 52% for label, 23% for sex, and 78% for age. The maximum label recognition score (100%) was achieved for "Drink Woman" and "Light a Match Man", and the minimum recognition score (0%) for "Sign Man". Both daily-life actions and experimental tasks were well recognized in terms of the type of movement (mean daily-life actions = 89.0%, SD = 14.6%; mean experimental tasks = 73%, SD = 11.2%) and the age of the actor (mean daily-life actions = 79.5%, SD = 11.4%; mean experimental tasks = 59.2%, SD = 7.89%), although in both cases Mann–Whitney comparisons indicated better recognition for daily-life actions (U(53, 4) = 185, p < 0.05, η² = 0.11, 1-β = 0.26 and U(53, 4) = 199, p < 0.01, η² = 0.15, 1-β = 0.36, respectively). Recognition of the action label (mean daily-life actions = 53.9%, SD = 29.2%; mean experimental tasks = 26%, SD = 15.8%) and of the actor's sex (mean daily-life actions = 24.0%, SD = 16.2%; mean experimental tasks = 23.0%, SD = 8.8%) was relatively low for both subcategories, with no difference between them, although there was a trend toward better label recognition for daily-life actions than for experimental tasks (U(53, 4) = 153, p = 0.07, η² = 0.04, 1-β = 0.12 for the action label; U(53, 4) = 96.5, p = 0.78, η² = 0.02, 1-β = 0.08 for the actor's sex).

Facial expressions

Results are set out in Table 4. On average, each PLD sequence was judged by 18 observers aged 25.3 ± 3.3 years. The mean percentage of correct recognition responses was 50%, with 73% for type, 28% for label, 33% for sex, and 65% for age. The highest label recognition score (78%) was for “Laugh Man”, and the lowest recognition score (0%) was for “Doubt Woman”, “Doubt Man” and “Pain Woman”.

Interactions

Results are set out in Table 5. On average, each PLD sequence was judged by 19 observers aged 25.7 ± 3 years. The mean percentage of correct recognition responses was 71%, with 87% for type, 53% for label, 61% for sex, and 82% for age. The highest label recognition score (85%) was for "Dance Duo", and the lowest recognition score (0%) was for "Tell a Joke Duo". Both global and facial interactions were well recognized in terms of the type of movement (mean global = 93.5%, SD = 4.6%; mean facial = 72.8%, SD = 8.3%), the sex of the actors (mean global = 59.7%, SD = 15.7%; mean facial = 65.2%, SD = 14.4%), and the age of the actors (mean global = 85.3%, SD = 9.2%; mean facial = 76.8%, SD = 6.4%). Concerning the label of the action, recognition was lower for both subcategories (mean global = 62.4%, SD = 26.1%; mean facial = 29.5%, SD = 31.8%). Mann–Whitney comparisons indicated that global and facial interactions did not differ in the percentage of correct recognition responses for label (U(10, 4) = 8, p = 0.10, η² = 0.21, 1-β = 0.36), sex (U(10, 4) = 26, p = 0.43, η² = 0.05, 1-β = 0.12), or age (U(10, 4) = 9, p = 0.14, η² = 0.17, 1-β = 0.29). Concerning the type of movement, global interactions were better recognized than facial interactions (U(10, 4) = 0, p < 0.01, η² = 0.57, 1-β = 0.94).

Effect of action type

Concerning the correct recognition of the type of movement (see Fig. 2A), a Kruskal–Wallis test indicated a significant effect of action type (W(3) = 19.65, p < 0.01, η² = 0.09, 1-β = 0.86), with better recognition for fine-motor movement (mean = 87.8%, SD = 19.1%) and interaction (mean = 87.6%, SD = 11.2%) sequences than for global (mean = 80.7%, SD = 18.9%) and facial expression (mean = 73%, SD = 19.1%) sequences.

Fig. 2

Mean results for the correct recognition of type of movement (A), label (B), sex (C), and age (D) for each type of human action (fine, expression, global, interaction). Error bars represent 95% confidence intervals. The horizontal dotted line represents chance level (33% for the type of movement, 0.92% for the label of action, 25% for the sex and the age of the actor)

Concerning correct label recognition (see Fig. 2B), a Kruskal–Wallis test indicated an effect of action type (W(3) = 25.26, p < 0.01, η² = 0.13, 1-β = 0.97), with better recognition for global (mean = 64.1%, SD = 25.1%) and interaction (mean = 53%, SD = 30.7%) sequences than for fine (mean = 51.9%, SD = 29.3%) and facial expression (mean = 29%, SD = 26.9%) sequences.

Concerning correct recognition of sex (see Fig. 2C), a Kruskal–Wallis test indicated an effect of action type (W(3) = 27.19, p < 0.01, η² = 0.14, 1-β = 0.98), with better recognition for interaction (mean = 61.3%, SD = 15%) sequences than for global (mean = 34.7%, SD = 23.5%), facial expression (mean = 33.9%, SD = 22.9%), and fine-motor (mean = 23.9%, SD = 15.7%) sequences.

Concerning correct recognition of age (see Fig. 2D), a Kruskal–Wallis test indicated an effect of action type (W(3) = 20.7, p < 0.01, η² = 0.10, 1-β = 0.90), with better recognition for interaction (mean = 82.9%, SD = 9.1%) and fine (mean = 78%, SD = 12.3%) sequences than for global (mean = 70.08%, SD = 17.9%) and facial expression (mean = 65.4%, SD = 12.7%) sequences.

Robotic movements

Results are set out in Table 6. On average, each PLD sequence was judged by 16 observers aged 25 ± 2.5 years. The mean percentage of correct recognition responses was 56.4%, with 38% for type, 62.4% for label, 62.8% for sex, and 62.8% for age. The best label recognition (92%) was for "Give Robot" and "Tai Chi Robot", and the worst (4%) for "Clean Robot".

Effect of movement type

We compared performances on human and robotic PLD sequences with a nonparametric Kruskal–Wallis test, with movement type (human, robotic) as a between-participants factor.

Concerning recognition of the type of movement (see Fig. 3A), the Kruskal–Wallis test revealed a significant effect of action type (W(1) = 31.47, p < 0.01, η² = 0.16, 1-β = 0.99), with better recognition for human sequences (mean = 82.5%, SD = 17.8%) than for robotic ones (mean = 38.2%, SD = 29.1%).

Fig. 3

Mean results for the correct recognition of type of movement (A), label (B), sex (C), and age (D) according to movement type (human, robot). Error bars represent 95% confidence intervals. The horizontal dotted line represents chance level

Concerning label recognition (see Fig. 3B), the Kruskal–Wallis test did not reveal any significant difference between human and robotic sequences (W(1) = 1.08, p = 0.30).

Concerning recognition of the actor's sex (see Fig. 3C), the Kruskal–Wallis test indicated an effect of action type (W(1) = 24.7, p < 0.01, η² = 0.12, 1-β = 0.99), with better recognition for robotic sequences (mean = 62.8%, SD = 19.3%) than for human ones (mean = 33.2%, SD = 22.6%).

Concerning recognition of the actor's age (see Fig. 3D), the Kruskal–Wallis test indicated an effect of action type (W(1) = 4.4, p < 0.05, η² = 0.02, 1-β = 0.5), with better recognition for human sequences (mean = 73.08%, SD = 15.8%) than for robotic ones (mean = 62.2%, SD = 23.3%).

Discussion

In this paper, we describe the construction, functioning, and assessment of the new PLAViMoP database, which currently contains 196 PLD sequences. After discussing the results of the recognition test, we set out its limitations and perspectives.

First of all, our analysis showed that observers were able to correctly label the majority of our stimuli (mean action label recognition = 55%). Most importantly, the level of label recognition was above 70% for 71 PLD sequences (36 representing global human movements, 18 representing fine human movements, three representing facial expressions, five representing human interactions, and nine representing robotic movements), indicating that they could be used without hesitation. Here, we discuss the results for human sequences, then for robotic sequences, and conclude by highlighting the usefulness of the PLAViMoP database for research and application.

Recognition of human movements

Mean label recognition for human PLD sequences was 54% (with a chance level of 0.92%; Footnote 3), with better recognition for global and interaction movements than for fine movements and facial expressions. The level of correct label recognition for global and interaction sequences confirms previous research highlighting a good ability to recognize these types of stimuli (Lapenta et al., 2017; Manera et al., 2010, 2016; Okruszek & Chrustowicz, 2020). Moreover, it suggests that PLD sequences are better recognized when they show the whole body rather than just part of it (face or upper torso) (Atkinson, Vuong, & Smithson, 2012). The weakness of facial expression recognition observed in our study may be explained by the fact that we used not only prototypical facial expressions such as anger, happiness, disgust, and surprise, but also other facial expressions, such as doubt and boredom. Prototypical facial expressions were fairly well recognized (> 57%), especially happiness (79.5%), in line with what is usually reported in the literature (Bidet-Ildei, Decatoire, & Gil, 2020; Leppänen & Hietanen, 2004). Concerning fine movements, the difficulty may have arisen from the labels we provided, which may have been too similar to be distinctive (e.g., "write" vs. "sign" or "point" vs. "point with a tool"). However, to our knowledge, no previous study has examined the recognition of fine upper-body movements, so we have no point of comparison.

Concerning recognition of the actor's sex, we found relatively weak scores, with mean recognition of 38%, and only 34% for global human movements (see Pollick, Kay, Heim, & Stringer, 2005, for a comparison). These scores may be explained either by the 45° viewing angle of our PLD sequences (Daems & Verfaillie, 1999) or by the ambiguity of the question: the choice of categories (male, female, mixed, none) may have encouraged some participants to answer "mixed" or "none" when they were not completely sure of their response. Recognition of the actor's sex was particularly difficult for fine and facial expression sequences, in line with the idea that the center of moment (Kozlowski & Cutting, 1977; Pollick et al., 2005), and more particularly the sway of the hips and shoulders (Lapenta et al., 2017; Mather & Murdoch, 1994), is crucial for this recognition.

Concerning recognition of the type of movement and the actor's age, we obtained good results, with 82.5% correct recognition for type of movement and 73% for actor's age. This suggests that adults can easily distinguish between human, robotic, and animal movements, as well as between children, young adults, and older adults. These abilities may be related to the level of activation of the motor network, which is known to be stronger for movements that belong to the individual's motor repertoire (Calvo-Merino, Glaser, Grezes, Passingham, & Haggard, 2005). However, our results need to be confirmed, as the database currently contains only two types of movement (human and robotic) and two age groups (young and older adults), and just one PLD sequence representing a movement performed by an older actor.

Robotic movements

Mean label recognition for robotic PLD movements was 62%, which is comparable to what we obtained for human movements. Interestingly, while observers judged human sequences more accurately in terms of type of movement and actor's age, judgments did not differ for label recognition, and were actually better for robotic sequences when it came to the actor's sex. The weaker recognition of the type of movement and the age of the actor in the robotic sequences can be explained by reduced activation of the motor network, which may also account for the systematically greater variability of results for robotic movements than for human ones. However, it is important to note that the ability to recognize the action label was equivalent for robotic and human sequences, suggesting that judgments of PLD sequences do not have to rely systematically on motor experience (Chaminade et al., 2001; Pavlova et al., 2009). This finding is consistent with the idea that motor representations can be built from visual experience, as has already been demonstrated in the literature (Beauprez et al., 2019, 2020). Another possibility is that the movements of the humanoid robot Nao can activate motor resonance (Agnew, Bhakoo, & Puri, 2007), insofar as its programs were inspired by human movements. Concerning observers' ability to recognize the sex of the actor in robotic PLD sequences, this can be explained simply by the lower level of ambiguity: if observers recognize an action as robotic, there is less ambiguity about their response concerning the sex of the actor (i.e., none).

Limitations

First, although the PLAViMoP database contains many different PLD sequences, there is currently no variability (only one recording) for each proposed sequence. This could be problematic for protocols that require many repetitions of the same stimulus. However, as mentioned above, PLAViMoP is scalable, so several recordings of the same action will probably be available in the future.

A second limitation concerns the a priori label chosen for each sequence. Some of these labels are currently too similar (e.g., "write" vs. "sign" or "drink" vs. "drink from a bottle") to be correctly classified. Consequently, in the majority of cases, actual recognition of the sequences was better than the results suggest. It is important to bear in mind that the responses given by each observer for each sequence are accessible to everybody who is registered on the platform (see "Show details" in the viewing window). Consequently, we encourage researchers to consult the latest available data regularly and to access the individual responses.

Third, as the test is available online, it is difficult to control the way it is performed (participants' state, attention paid to the test, reaction time, etc.). For this reason, it is important to collect as many responses as possible, and we encourage everybody to share the link to the recognition test (https://plavimop.prd.fr/en/recognition). Although the majority (136/197) of PLD sequences were judged by at least 15 participants in the present study, some were judged by only a few observers, and it would therefore be better to wait for more responses before using these.

Finally, we should be cautious with our results, because the procedure adopted in the recognition test does not allow us to determine which displays were judged by the same participant. Consequently, the statistics cannot be treated as purely dependent or independent. In the present experiment, we decided to use independent statistics, but future research should record which displays are associated with each participant. Moreover, the analysis of effect sizes and power sometimes revealed weak or moderate effects. Consequently, future analyses of the stimuli included in the PLAViMoP database could be carried out with classical experimental procedures, to better control the different parameters of the analysis and to determine a priori the number of stimuli and participants needed to obtain significant effects at p < 0.05 with a power of 0.80.

Conclusions and perspectives

Despite these limitations, we think that the PLAViMoP database could be a very useful tool for working with PLD sequences. Accessible to everyone, it allows large numbers of human and robotic PLD sequences to be retrieved quickly and easily. The presence of robotic movements could be of particular interest to researchers in psychology, as controls for biological motion, and to researchers in robotics, for better understanding how humans perceive and judge robotic actions. Moreover, the choice of format (MP4 or C3D) means that sequences can either be used directly or be spatially and/or kinematically modified (e.g., by adding masking dots or scrambling the action; see Decatoire et al., 2019, and https://plavimop.prd.fr/en/software for possible transformations). The PLAViMoP database could therefore be useful for researchers by giving them direct access to normative PLD stimuli for studying perceptual, motor, cognitive, and/or social issues. Moreover, PLAViMoP could make PLDs more accessible to practitioners for use in sport or rehabilitation. Many studies have shown that action observation can improve performance in sports (see, for example, Faelli et al., 2019, and Francisco, Decatoire, & Bidet-Ildei, 2022, for recent studies), as well as motor (see Ryan et al., 2021; Sarasso, Gemma, Agosta, Filippi, & Gatti, 2015, for reviews) and cognitive (see Marangolo & Caltagirone, 2014, for a review) abilities, but it has seldom been used until now, mainly because of the difficulty of accessing stimuli.

In conclusion, PLAViMoP constitutes a useful tool for researchers and practitioners who work with PLD stimuli. Moreover, its collaborative and scalable design could open up interesting opportunities in the future. Looking ahead, we are working to expand the database by adding more specialist sports movements (e.g., judo, soccer) and movements performed by individuals with motor and/or cognitive disabilities. We also aim to add several sequences for each action, in order to introduce variability.