Nonverbal Cues in Human–Robot Interaction: A Communication Studies Perspective

Published: 15 March 2023

Abstract

Communication between people is characterized by a broad range of nonverbal cues. Transferring these cues into the design of robots and other artificial agents that interact with people may foster more natural, inviting, and accessible experiences. In this article, we offer a series of definitive nonverbal codes for human–robot interaction (HRI) that address the five human sensory systems (visual, auditory, haptic, olfactory, and gustatory) drawn from the field of communication studies. We discuss how these codes can be translated into design patterns for HRI using a curated sample of the communication studies and HRI literatures. As nonverbal codes are an essential mode in human communication, we argue that integrating robotic nonverbal codes in HRI will afford robots a feeling of “aliveness” or “social agency” that would otherwise be missing. We end with suggestions for research directions to stimulate work on nonverbal communication within the field of HRI and improve communication between people and robots.

1 INTRODUCTION

Communication studies has a long and rich history, with many of today's ideas being traceable back to scholars of ancient Greece and Rome. Communication is a complex process that can be defined in many ways. Before applying what we know about human communication to human–robot interaction (HRI), we have to start with a useful definition of communication. In its broadest sense, communication is the discriminatory response of an organism to a stimulus [1]. According to Hauser [2], a communication system consists of three features: the signal source, a transmission channel, and a pool of perceivers. Thus, communication involves two or more communication partners who are either sending or receiving messages. In HRI, one of the communication partners is a machine; while often a social robot, the machine may be any kind of human-interfacing agent, interface, or artificial intelligence (AI)–infused environment, such as a voice assistant, smart vehicle, or conversational user interface, to name a few.1 Therefore, any definition of communication in HRI needs to account for non-human (and potentially non-humanlike) message transmission and communication behavior that is intentional or unintentional. Furthermore, definitions of communication differ in being sender or receiver oriented. Since self-agency and conscious intent in sending a message are questionable2 in the case of a robot as the sender, a receiver-based perspective is more useful in HRI. Therefore, we define communication as an interactive process whereby a receiver (i.e., a person) assigns meaning to one or more stimuli that are transmitted intentionally or unintentionally by a sender (i.e., a robot). With this definition in hand, we must now articulate which factors of human–human communication may be applied to human–robot communication, and to what extent.

One key factor is nonverbal communication, which is generally defined as any transfer of messages that does not involve the use of words [3]. This also includes sounds that are not words, such as backchanneling (“uh” and “ah”), as well as the pitch, tone, and intonation of the voice. Between humans, nonverbal communication represents a significant proportion of all face-to-face communication. Human conversation, for instance, is accompanied by a variety of gestures, postures, facial expressions, and eye movements, among other nonverbal codes. Nonverbal codes describe the variety of nonverbal stimuli, or cues, that occur within a communication context. Nonverbal cues can then be described as any information within a nonverbal communication process to which a receiver assigns meaning, regardless of whether this meaning was transmitted intentionally or not by the sender. This “unspoken language” is an integral part of the message and its transmission in human–human communication. As such, nonverbal cues may be useful in cases where robots interact directly with people, as well. Indeed, recent research in HRI (e.g., References [4–8]) has acknowledged the importance of nonverbal cues, especially for novice users in social settings (e.g., Reference [6]). Moreover, designers of robots often consider how a robot's appearance affects human behavior. Examples range from friendly-looking, human-sized social robots like Pepper (SoftBank/Aldebaran Robotics) and small, toylike robots with facial expressions for play and entertainment like Cozmo (Digital Dream Labs) to sturdy, intimidating surveillance robots like the K5 (Knightscope). Considering the growing interest in designing robots to use nonverbal cues when communicating with people, a timely and useful addition to help direct future research and practice would be an overview of common nonverbal codes established by the field of human communication. In particular, we need guidance on how these codes may be, and, where possible, have been, applied in HRI contexts as cues.

To this end, we present an overview of nonverbal communication for HRI, grounded in human communication research and existing work on robots and other artificial agents. The goals of this article are threefold: (1) to present a definition of nonverbal communication in HRI based on human communication models, (2) to provide a framework of nonverbal codes derived from human communication studies, and (3) to demonstrate how these nonverbal codes may be and have been applied in HRI. We introduce a theoretical framework of robotic nonverbal cues based on their perceptibility through the human sensory systems, outlining nonverbal cues that address visual, auditory, haptic, olfactory, and gustatory perception. We begin with an explanation of our rationale for including nonverbal communication in HRI. We then describe the major characteristics of nonverbal communication from communication studies. Following this, we outline and describe the framework of common nonverbal codes that play an important role in human communication and are relevant for HRI. We discuss and critique examples of where these codes have already been studied, as well as provide ideas for future research and development. We end with an outlook on the next steps for explorations of nonverbal codes in HRI.

2 ADAPTING HUMAN NONVERBAL CODES FOR HRI

Many in robotics believe that robots can coexist as social agents alongside humans but not necessarily as their social equals. How robots and other AI-based machines become socially embodied agents, and whether this state is static or circumstantial and unstable, remains an open question [9]. Some Japanese roboticists have argued that to become a social agent, robots should engage the world with sensitivity, sensibility, feeling, aesthetics, emotion, affection, and intuition [10], a capacity called the kokoro function. Kokoro in Japanese has a complex meaning that can be translated as any of heart, spirit, and/or mind, integrating emotion, thinking, and intention, and describing the inner world or essence of a person (or in this case a robot). Thus, robot creators in Japan often refer to robots as belonging to a “third existence,” where a robot is something between a living and non-living creature [11]. A question remains about how this inner world of a robot can be made accessible to human communication partners. Furthermore, people's reactions and internal states are often expressed through physiological changes that are visible to others. Interpretation of physiological cues is highly contextual. Facial flushing can indicate embarrassment, stress, happiness, excitement, anger, and so on. Heavy breathing can signal anxiety, even a panic attack, or be a natural state of exertion after intense exercise. These cues can add to the perceived “aliveness” of a robot and contribute to impressions of its kokoro function. Incorporating physiological cues within HRI contexts can enhance the perception of a robot as being “alive” or humanlike. For example, Lütkebohle et al. [12] simulated blushing in the robot Flobi with an array of red LEDs in its cheeks, whereas Yoshida and Yonezawa [13] developed a stuffed bear-style robot that breathes, has a heartbeat, and exhibits varying temperature and other involuntary bodily movements related to physiological and psychological states.

In this article, we develop the idea of giving robots a feeling of “aliveness” or “social agency” through the use of nonverbal communication modalities. In human communication, nonverbal codes are an essential mode of communication that, among other things, reveal a person's role and status (e.g., through clothing style), provide information about feelings and emotions (e.g., facial expressions, gestures), or offer redundancy in speech (e.g., gestures and body movements). Enriching robot communication with nonverbal codes derived from human communication could help human collaborators perceive robots as social agents, improving mutual understanding and coordination. Explorations in nonverbal communication with robots have a history going back several decades [6, 14]. Roboticists have sought to create intuitive interactions between robots and people and increase human acceptability of robots [6, 15]. Nehaniv et al. [16] developed a taxonomy of gestures related to robotic arm movements. De Santis et al. [17] surveyed the literature on physical HRI, focusing on mechanical and control components. Previous research has looked at affective expressions in robotic body movements that did not include face or verbal components [18] or studied “tactile HRI,” or the ability of robots to detect a person's physical touch [19]. In a social twist, Van Erp and Toet [20] produced 10 guidelines for affective and interpersonal components of robotic touch. Kruse et al. [21] and later Rios-Martinez, Spalanzani, and Laugier [22] each covered social distance and other cues related to robot navigation of social spaces in their surveys. The literature review conducted by McColl et al. [14] was limited to human perception of robot intention through nonverbal gestures in decision-making scenarios. Admoni and Scassellati [23] focused on eye gaze in social HRI contexts. Saunderson and Nejat [6] focused on robotic motion and movement: kinesics, proxemics, haptics, and chronemics. Explorations along these lines are expected to expand as we continue to deploy robots in social contexts with people.

Yet the vast majority of the work so far has focused on a limited range of nonverbal cues, often exploring a single cue in isolation rather than evaluating a range of cues, which is more representative of human models [6]. Additionally, there has been a strong technical focus, with far less work exploring how robots with nonverbal cues influence people's attitudes and behavior [6]. In this article, we take one step toward addressing these gaps and limitations by applying a comprehensive communication studies perspective to the HRI context. We now turn to describing the nature of nonverbal communication among people, including the full range of cues and how they work together.

3 NONVERBAL COMMUNICATION CHARACTERISTICS

Communication, as we defined above, is an interactive process involving a signal source, a transmission channel, and an agent (or perceiver) acting on a stimulus. In HRI, either the agent or the source of the stimuli is a robot. What form this stimulus takes can vary. In the case of nonverbal communication, the stimuli do not take the form of speech. As such, we define nonverbal communication in HRI as any type of communication between people and robots that does not involve words but includes nonverbal utterances and behaviors. We build on a psychological model of communication that distinguishes between a sender who transmits a message and a receiver who decodes the message [24]. Communication is an interactive process where messages are continuously exchanged between sender and receiver. Here, we take a simple approach, focusing on scenarios where the robot is the sender of nonverbal cues and a person is the receiver, having to decode and assign meaning to the cues transmitted by the robot. We define nonverbal cues as any stimuli to which a receiver (person) assigns meaning, independently of whether these nonverbal cues were transmitted intentionally or unintentionally by the sender (robot). Nonverbal communication differs from verbal communication in several important ways. Understanding the features of nonverbal communication that distinguish it from verbal communication is essential for the successful implementation of nonverbal cues in HRI, but it is also one of the greatest challenges. Transferring nonverbal gestures from humans to robots is likely to fail if the prerequisites for nonverbal codes are violated. In general, humans are accustomed to interpreting nonverbal codes in other humans and as such can be sensitive to robots that deviate from learned expectations. Nonverbal communication has several characteristics that distinguish it from other communication systems. We outline seven that we deem relevant for implementing nonverbal codes into robots.

3.1 Multiple Interpretations

Nonverbal cues can be interpreted in different ways. For example, a person moving away from a robot can be a sign of disinterest, fear, or surprise or even signal an end to the interaction. In a similar way, nonverbal cues used by a robot can be interpreted by a person in different ways as well. Changing the eye color of a robot, as done in a study by Koike et al. [25], evoked a variety of different reactions in people. This poses a problem for the designer: how to make sure that the intended message has been received and understood on both sides. Including redundant signals and using a variety of nonverbal codes can help to clarify the message.

3.2 Unintentional Behavior

Nonverbal cues interpreted by a receiver may or may not have been part of the intended message. The robot's body shape and its movement might be constrained by what is technically feasible. Nonetheless, these features might be interpreted by a receiver as intentional nonverbal cues. Also, it is difficult to convey different emotional expressions through the voice of a robot, such as sounding happy, sad, or concerned. An emotionless voice independent of what is said can be confusing for a receiver and is often interpreted as a lack of empathy [26]. Additionally, when the affective quality of the voice does not match expectations, this can cause confusion or otherwise provide a negative experience [5].

3.3 Multiple Cues

There are multiple, separate nonverbal cues that play a role in communication. Clothing, facial expressions, and hand gestures all send out a message that may be interpreted individually or in concert. When the same message is sent via each cue, the combined impact of all cues is intensified. However, if cues are contradictory, then they can cause confusion and thereby undermine the impact of the message: for example, a sender might shake his or her head left and right, a signal of “no” in most cultures, while also saying “yes” out loud. Exceptions exist, such as when a sender uses a certain code or sensory modality to send a message to a certain receiver and not others, or when a sender presents two cues simultaneously that are intended for different receivers. For example, a sender may be absorbed in playing a video game, with eyes glued to the screen, their whole body facing the game, and actively pounding away at a game controller, while also carrying on a conversation about other matters with a receiver who is present but observing. On the technical side, nonverbal response and behavior frameworks will need to be developed for robots or extended from existing, similar offerings, such as Lee and Marsella's [27] nonverbal behavior generator for embodied conversational agents (ECAs). Programming and markup languages developed for ECAs, such as the Behavior Markup Language [28], could be used to develop nonverbal expressions for robots in HRI contexts, as well.
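
To make the coordination of multiple cues concrete, below is a minimal Python sketch of a cue-composition step, loosely inspired by the synchronization idea behind markup approaches such as the Behavior Markup Language. The Cue structure, channel names, and the conflict rule are illustrative assumptions rather than an existing robot API.

```python
# Minimal sketch of composing multiple nonverbal cues into one timed plan.
# The Cue structure, channel names, and the conflict rule are illustrative
# assumptions (loosely inspired by BML-style synchronization), not a real API.
from dataclasses import dataclass

@dataclass
class Cue:
    channel: str      # e.g., "gaze", "head", "arm", "voice"
    action: str       # e.g., "look_at_user", "nod", "wave"
    start: float      # seconds relative to utterance onset
    duration: float

def compose(cues):
    """Group cues by channel and reject overlapping cues on the same channel,
    since contradictory simultaneous signals can undermine the message."""
    plan = {}
    for cue in sorted(cues, key=lambda c: c.start):
        track = plan.setdefault(cue.channel, [])
        if track and cue.start < track[-1].start + track[-1].duration:
            raise ValueError(f"Conflicting cues on channel '{cue.channel}'")
        track.append(cue)
    return plan

if __name__ == "__main__":
    greeting = [
        Cue("gaze", "look_at_user", start=0.0, duration=2.0),
        Cue("head", "nod", start=0.5, duration=0.6),
        Cue("arm", "wave", start=0.3, duration=1.2),
    ]
    for channel, track in compose(greeting).items():
        print(channel, [(c.action, c.start, c.duration) for c in track])
```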

3.4 Immediate Message Transmission

Nonverbal cues are always being sent, intentionally or unintentionally, when we are face-to-face with a communication partner. Basically, we cannot not communicate. As such, the mere presence of a robot involves the transmission of messages that will likely be interpreted by a receiver. Furthermore, the presence or absence of specific behavior in the robot will likely be interpreted by a receiver. For example, mechanical noise generated by the motors that move a robot's arms, fingers, or legs becomes part of the nonverbal message and can create a disturbing effect.

3.5 Continuous

Verbal messages have a clear beginning and end, indicated by syllable, word, and sentence boundaries. Nonverbal cues, however, lack discrete starting and ending points and instead flow into one another. It is difficult to isolate individual body movements and gestures, such as greeting someone or waving a hand. Similarly, facial expressions are not static; rather, they are a series of expressions that shift over time to indicate certain states. Indeed, creating continuous nonverbal cues is probably one of the biggest challenges in robotics. We already have some evidence of this, in that the isolated or separate nonverbal gestures that are often explored in robotics research or deployed in commercial robots tend to be difficult to interpret, leading to feelings of the robot being unnatural and uncanny [29].

3.6 Concrete

Compared to verbal messages, nonverbal cues are often concrete and representational rather than abstract and arbitrary. For example, a motion gesture can represent actual movements of objects or people in the real world, indicating a connection between form and meaning in nonverbal cues [30]. Therefore, nonverbal cues used in HRI need to be synchronized with, and especially match, any speech content. Arbitrary and abstract gestures, which are often preferred by experts when interacting with technical systems [31], should be avoided (as far as the technical limitations of robot motion allow), because they are difficult for novice users to interpret.

3.7 Contextual and Cultural

Even though some nonverbal cues, such as facial expressions for basic emotions in typical contexts (e.g., happiness, anger, fear, sadness, disgust, surprise), are interpreted similarly in most places in the world today [32], many nonverbal cues are only shared among specific cultural groups and within generational cohorts. For example, Reference [33] found that Japanese and Australian participants' evaluations of an android differed in perceived anthropomorphism, animacy, and perceived intelligence. Furthermore, simple cultural cues indicating in-group membership can help reduce unfavorable biases toward robots [34]. Therefore, nonverbal cues for robots need to be designed and/or carefully selected for the intended culture and cohort, as well.

4 THE NONVERBAL CODES

A variety of nonverbal codes have been discussed in communication studies. However, several factors need to be considered when selecting nonverbal codes for HRI. As mentioned above, we focus in this article on a simple communication model whereby the robot is the sender and a human is the receiver. On the one hand, nonverbal codes must take into account the physical and technical limitations of the robot, as well as its available expressive capabilities. These constraints include the size and body shape of the robot, available movement patterns, and facial expressions, among others. On the other hand, nonverbal codes must be able to be perceived by the human sensory system and interpreted in accordance with human experience and expectations. We must avoid nonverbal codes that are not detected by the human sensory system, including meaningless signals and codes that are difficult to interpret because the person has no prior experience with the code, i.e., no existing schema for directing their perception of the stimulus. Therefore, the framework presented here (Figure 1) uses the human sensory channels as a foundation. It shows how a robot could be designed to address each sensory channel with different types of nonverbal codes inspired by human communication studies. The main focus is on the visual and auditory sensory channels. However, we also explore how the haptic and olfactory sensory channels can be addressed and include the gustatory channel for completeness.

Fig. 1. Relation of nonverbal codes for HRI to the five human sensory channels.

4.1 Visual Nonverbal Codes

4.1.1 Physical Appearance and Object Language.

Communication is often preceded by visual observation of the communication partner, including their physical appearance. In HRI, the most relevant examples may be body type, clothing, and object language. Objects or artifacts can transmit powerful messages about the sender. Object language refers to all intentional or unintentional displays of material things [35]. Object properties such as size, shape, form, texture, and color are important communication factors that shape the affective responses of the receiver. For example, symmetrical objects and round shapes with smooth edges stimulate a positive impression, whereas sharp objects can create a formal or negative impression.

4.1.1.1 Body type.

Early research on body type distinguishes between three typical forms: (1) endomorph (short, round, and fat), (2) ectomorph (tall and thin), and (3) mesomorph (muscular and athletic) [36]. People's aesthetic preferences, as well as their expectations of a robot's capabilities based on a specific body type, must be considered. A wealth of research exists on robot appearance and form factors, including body type; see Fink [37] for a review that remains relevant to this day. As Fink [37] found, robots that are more humanlike in shape tend to perform better in social interactions with people. But this can be a double-edged sword: If humanoid robots are placed in social situations but are not equipped with the behaviors needed to meet the expectations their humanlike appearance sets, then user experience will suffer. Additionally, the heads and faces of robots tend to receive the most visual attention, so nonverbal codes involving these areas of the robot body may deserve particular design focus. Another common theme is the relationship between body shape and perceived gender of the robot (e.g., References [38, 39]); notably, cultural context can affect interpretations of gender, as found in the cases of Wakamaru [40] and Pepper [41]. What is largely missing is a link to communication studies. One exception is the work of Jacquemont and colleagues [42], who applied a communication studies perspective to robotic body types so as to match people's impressions of their temperament. Specifically, they created an explicitly endomorphic robot form, with many round and large components to its shape. However, to the best of our knowledge, the actual user experience of this robot has not been explored.

Applying established communication studies frameworks in HRI studies could enhance conceptual rigor and provide a way of doing systematic literature surveys on different body types across different robots, contexts, user groups, and so on. Non-human or proto-human (even alien) forms could be explored, with inspiration drawn from animated characters. For instance, Pokémon, the fictional pocket-holdable monster characters from the series of the same name, are perceived as cute and engaging despite being generally non-humanoid and even non-animaloid in body type.

4.1.1.2 Clothing.

Clothing or dress is another nonverbal code and is especially important in creating a first impression. For people, clothing provides comfort and protection and preserves modesty, but it also serves as a cultural display of social class, gender, age, profession, or status in a hierarchy, to name a few. To our knowledge, research has not yet explored how clothing might change people's perception of a robot. However, we believe clothing could be beneficial, especially for robots working in social contexts. Clothing can be quickly created and changed. As such, it may be one of the quickest and easiest nonverbal cues related to physical appearance to explore. Many robots, such as KASPAR [43], have removable clothing, but the use of clothing, varieties of clothing, and even manner of wear, such as topless versus bottomless, has not been well studied, despite the possibility of important effects according to human models. Future work can start to explore the possibility of designing clothing for robots, i.e., robot-centered robot clothing and robot fashion [44]. Furthermore, clothing can be an easy way to clarify the role of a specific robot in public spaces or its affiliation, similar to how people across a variety of jobs wear a specific type of uniform or clothing that features a company logo.

4.1.1.3 Kinesics: Body Movement and Gestures.

Body movements and gestures are part of the kinesic category of nonverbal cues. These include all nonverbal cues that involve motion of certain parts of the body or of the entire body [45], such as emblems, adaptors, body posture, and body language. Various taxonomies for gestures, as a particular type of body movement, have been described in the communication literature [46–48]. For example, a well-used framework by McNeill [46] categorizes gestures as iconic, metaphoric, or deictic. Gesture taxonomies are provisional working tools tailored to particular research interests [46]. Therefore, classification systems for gestures in human communication can only be partially applied to HRI. Additionally, some are limited to certain parts of the body, such as the hands and arms in the case of McNeill [46]. As such, we chose to anchor this work on the taxonomy of Ekman and Friesen [48], which builds on the earlier work of Efron [47] and offers the following categories: emblems, or gestures that have a direct verbal translation; illustrators, or movements associated with speech; regulators, which manage speech acts; and adaptors, or repetitive movements for self-satisfaction. Efron's categorization system refers to the whole body and thus seems well suited as a basis for HRI.

4.1.1.4 Emblems.

Emblems are gestures that have a direct literal translation shared by the members of a social group. These are often actions or behaviors that carry a certain meaning in certain contexts. Miming gestures, for instance, illustrate an action that is typically taken, such as opening a cupboard door or performing a dance move. As such, emblems are often culturally defined, varying over time and across generational cohorts [49]. In most cases, it is better to avoid using such cues in HRI, because receivers might have difficulty decoding them. Conversely, robots designed for specific cultural contexts may need to use emblems to be accepted and understood. Nevertheless, while some have recognized that emblems may be culturally sensitive, most have not considered this directly within the context of their own work. We now cover some examples. Cabibihan et al. [50] compared the performance of a person and a robot at performing iconic and emblematic gestures, such as hugging, carrying a baby, and nodding; however, these were not distinguished as iconic, emblematic, or another type of nonverbal code. Carter et al. [51] built emblematic gestures into the design of a robot that plays catch with a person (throwing up its arms or shrugging on a near miss and shaking its head back-and-forth if the ball was thrown out of range), whereas Gross et al. [52] explored the use of emblematic gestures in place of speech, e.g., using a thumbs-up gesture for “great.” Zheng, Liu, and Meng [53] explored how people interpret 10 “common” emblematic gestures performed by the Nao robot (Aldebaran/SoftBank Robotics), including clapping and handshaking. We may be able to guess which culture(s) are relevant to such results based on where the work was conducted; however, this should be stated clearly. Indeed, we should not shy away from exploring emblems in HRI. For example, a robot able to demonstrate mastery of the emblems of a specific sub-culture might be more readily accepted by that group, with its demonstration facilitating its integration into the group or work team. Therefore, we must be conscious of cultural rules and justify our design decisions with and for the cultural group involved. We recommend taking a human-centered design approach, where the needs, expectations, and desires of the particular cultural and/or generational group are mapped out and then applied to the robot. Participatory and co-design methodologies, as well as iterative testing with members of the cultural group, are examples of how to achieve this.

4.1.1.5 Illustrators.

Illustrators are gestures that provide a description of what is being related vocally, such as the size or shape of an object demonstrated by drawing its outline (iconic gestures) or an indication of the speed of an object by way of fast or slow hand movements. Illustrators are also used to highlight important points in a speech; see Streeck [30] for examples. They may be the most common type of gesture used in human communication. Salem et al. [54] evaluated how well people perceive illustrators, such as the size and shape of a vase, performed by a Honda robot. Adding pointing gestures to a Nao robot delivering vocal directions improved its performance [55]. To ensure the interpretability of the illustrators used, human gestures can be analyzed in a specific context and then transferred to a robotic hand, as was done by Sheikholeslami et al. [56]. So far, much work in robotics has focused on developing robots that perceive human gestures (e.g., References [57, 58]). More work is needed in the other direction: whether people can accurately perceive illustrative gestures performed by robots and the degree to which such gestures are useful. Illustrators add redundancy to speech by encoding what is being said into gestures, thus providing additional cues for how to interpret a message. Work has also started on two-way interaction between people and robots enabled by other technologies, such as wearables (e.g., Reference [59]). This sets the stage for new forms of illustrator-based bidirectional communication.

4.1.1.6 Regulators.

Regulators are body movements and gestures that manage and direct a conversation. Regulators, such as head nods, eye contact, and hand gestures pointing to a specific communication partner (deictic gestures), are used to manage the conversation flow and turn-taking between speakers. Their absence can make conversations in larger groups especially difficult. Consider the case of Zoom meetings where several people try to speak at the same time. Regulators are especially necessary in HRI when more than one person interacts with a robot simultaneously or when more than one robot interacts with one or more people simultaneously. Some previous research has incorporated regulators into a robot's nonverbal cue set. For example, Sugiyama et al. [60] developed a “facilitation approach” to increasing the effectiveness of deictic pointing gestures with the Robovie humanoid robot. Regulators, such as pointing to direct the actions of a human partner, are especially effective in combination with speech and eye gaze [54], a finding also supported in collaborative tasks [61]. Other examples of regulators used in HRI are hesitation gestures, in the form of the robot slightly withdrawing its hand when it and a person reach for the same object at the same time [62], and the robot nodding while maintaining eye contact during backchanneling (verbal interjections) and turn-taking scenarios in response to more or less extraverted people [63]. In general, regulators as gestures are easy to implement in a robot's gesture repertoire. The challenge is that the timing of such gestures requires a certain situational awareness. The first stages of implementation could start with a robot using regulators to signal the beginning or end of speech. More advanced systems should be able to manage the flow of the conversation by encouraging a person to respond to specific cues or even encouraging one person out of a larger group to join the discussion, making sure everybody contributes evenly. Thus, robots could become mediators in group discussions. They might even be more readily accepted as mediators because of their perceived neutrality toward specific topics.
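
As a first step of the kind just described, a robot could trigger simple regulator cues at the boundaries of its own speaking turns. The sketch below assumes a hypothetical robot interface with gaze, nod, and speech primitives; the class names and cue choices are illustrative, not drawn from any particular platform.

```python
# Sketch of a turn-taking regulator policy: the robot signals the start and
# end of its speaking turn with gaze and head cues. The RobotInterface and
# its primitives are hypothetical placeholders, not an existing robot API.
class RobotInterface:
    def gaze_at(self, target): print(f"[gaze] -> {target}")
    def nod(self): print("[head] nod")
    def tilt_head(self): print("[head] tilt (listening)")
    def speak(self, text): print(f"[voice] {text}")

class TurnTakingRegulator:
    def __init__(self, robot):
        self.robot = robot

    def take_turn(self, listener, utterance):
        # Mutual gaze plus a small nod can signal "I am about to speak".
        self.robot.gaze_at(listener)
        self.robot.nod()
        self.robot.speak(utterance)
        # Sustained gaze at the end of the turn can hand the floor back.
        self.robot.gaze_at(listener)

    def yield_turn(self, listener):
        # A head tilt while holding gaze can invite the listener to respond.
        self.robot.gaze_at(listener)
        self.robot.tilt_head()

if __name__ == "__main__":
    regulator = TurnTakingRegulator(RobotInterface())
    regulator.take_turn("person_A", "Here is the part you asked for.")
    regulator.yield_turn("person_A")
```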

4.1.1.7 Adaptors.

Adaptors (sometimes “adapters”) are repetitive gestures and body motions that satisfy physiological and psychological needs. People use adaptors like touching their hair or bouncing their legs to manage stress. Thus, adaptors reveal information about a person's internal emotional state. Adaptors may be conscious or unconscious, controllable or uncontrollable. As such, they could be useful in HRI for improving humanlikeness and a sense of reciprocity, especially in terms of social emotions or emotional contagion. Very few examples of the use of adaptors in HRI exist. Vocal and body adaptors (an “umm” and touching the chin) have been explored in awkward conversation settings [64] or added to increase social appropriateness at the end of a conversation [65]. However, few studies have explored adaptors as part of a robot's regular gesture repertoire. We can imagine many possibilities that should not be technically difficult to implement given what has so far been achieved with other codes. We can work up to a range of small cues operating in concert to produce a larger impression intended to reveal a robot's internal state, such as a nervous-looking robot that breathes, shifts side-to-side, clutches its fingers, cowers a little, blinks rapidly, and so on. Robots displaying adaptors could also communicate a certain degree of vulnerability and insecurity that might make them more relatable and approachable, especially in first-contact situations or with inexperienced users. A robot development kit made up of the smallest gestures and behaviors may be useful for crafting a range of adaptors.
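
One way to picture such a development kit is as a library of micro-behaviors sampled according to an internal state. The following sketch is hypothetical: the micro-behavior names, their weights, and the arousal-based sampling rule are assumptions for illustration, not an existing toolkit.

```python
# Sketch of an "adaptor kit": small repetitive behaviors sampled according to
# an internal arousal level to suggest a state such as nervousness.
# Behavior names, weights, and the intensity scale are illustrative assumptions.
import random

NERVOUS_MICRO_BEHAVIORS = {
    "shift_weight_side_to_side": 0.4,
    "blink_rapidly": 0.3,
    "clutch_fingers": 0.2,
    "short_breath": 0.1,
}

def sample_adaptors(behaviors, arousal, steps, rng=random.Random(0)):
    """Return a timeline of micro-behaviors; higher arousal yields more
    adaptors. `arousal` is assumed to lie in [0, 1]."""
    names = list(behaviors)
    weights = [behaviors[name] for name in names]
    timeline = []
    for step in range(steps):
        if rng.random() < arousal:  # emit an adaptor at this time step?
            timeline.append((step, rng.choices(names, weights=weights)[0]))
        else:
            timeline.append((step, "idle"))
    return timeline

if __name__ == "__main__":
    for t, behavior in sample_adaptors(NERVOUS_MICRO_BEHAVIORS, arousal=0.7, steps=8):
        print(f"t={t}: {behavior}")
```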

4.1.1.8 Posture and Body Language.

Posture refers to a particular position of the body or the way that a person holds their body. The four basic body postures are standing, sitting, squatting, and lying down [66]. There are many variations in posture, but a basic distinction can be made between dynamic postures (when the body is in motion, e.g., walking) and static postures (when the body is not moving, e.g., sleeping or standing). Position and posture can communicate the attitude and relationship status of a sender [67]. For example, Mehrabian [68] found that when communicators who were women adopted an open posture, they were interpreted as having a more positive attitude than when they adopted a closed posture. Dynamic postures are also a rich means of expressing emotions, e.g., shying away when anxious or raising one's arms in an outburst of happiness. Such “emotional body language” involves the whole body and coordinated movements that facilitate the interpretation of a person's emotions [69]. Using the body language of robots to express emotions appears to be an effective tool, as well. Robotic emotional body language can be interpreted similarly to that of humans, and people can correctly identify emotions displayed in key poses [70]. A review of studies on affective-expressive movements for robots by Karg et al. [71] revealed that work so far has mainly focused on a small set of movement types restricted to the upper half of the body. They suggest that more research on the relationship between movement type and style of expressiveness is needed, e.g., whether walking up and down slowly or quickly will result in completely different impressions of the robot.

Full-body motion, like jumping up and down while waving your hands toward a friend who is trying to find you in a crowd of people, is generally an effective way to attract attention. In HRI, body posture can communicate the activation status of a robot. An upright, straight posture can indicate high activation, while a relaxed, crouched posture can indicate inactivity. Similarly, lifting or lowering the head can indicate activation or inactivation. These behaviors can be seen in the Cozmo and Vector robots (Digital Dream Labs) and the Nao robot. Notably, robotic motion and movement of body parts are often accompanied by the sound of the motors involved in these actions, which can be distracting to the receiver [29]. We can explore whether subtle social human phenomena apply to robots. For instance, a robot can lean toward a speaker or mimic the conversation partner's posture to promote feelings of interest and sympathy [72]. This could have implications for power and social hierarchies, as well as robot acceptance and affect. Therefore, robots taking on different roles could communicate their status via specific postures, such as a robot teacher using power poses [73], e.g., standing upright with legs slightly apart, to create an aura of authority, or a service robot using a submissive pose, e.g., slightly bending its body or leaning toward a customer to communicate respect and attentiveness. Some work has also started to adapt human measures and instruments. In particular, McColl and Nejat [74] applied the Position Accessibility Scale from the Nonverbal Interaction States Scale [75] to an HRI context. Others may be adapted for HRI.

4.1.1.9 Proxemics.

Proxemics refers to the perception and management of space, including distance from others. For robots, proxemics can be treated as part of their kinesic body language. People create boundaries and control areas of space with their body language. Cultural groups have different rules for what kind of behavior is accepted depending on the type of territory (e.g., References [76, 77]). Certain intimate behaviors that are exhibited in the home might not be acceptable in public, and people are usually expected to follow these rules. This applies to dress codes as well. A special type of perceptual or psychological space is personal space, a kind of invisible bubble that surrounds a person. People are sensitive to violations of personal space, which defines the acceptable distance to other people based on many contextual factors, such as gender, relationship status, and social hierarchy. Violations of personal space can trigger strong reactions, such as high arousal, anxiety, and physical aggression [78].

Proxemics has been widely studied within HRI. In an early work, Walters et al. [79] used Eysenck's model of personality traits to explore spatial zones in HRI, finding that people who were more “proactive” kept greater distance compared to others. Stark et al. [80] explored comfort levels when a robot violated the person's personal space during a collaboration activity. People's approach behavior toward robots is influenced by whether they perceive the robot as acting autonomously or as teleoperated [81]. It is also influenced by perceived liking and gaze patterns, such that people keep away from unlikable robots with sharp gazes [82]. Furthermore, previous studies by Kim and Mutlu [83] revealed a relationship between proxemic distance (close versus far), task distance (cooperative versus competitive), and power distance (supervisor versus subordinate), showing that consistency in these various kinds of distancing is key. Peters et al. [84] found that people responded akin to human–human interaction contexts when walking with virtual human characters and a humanoid robot (in this case, Pepper) in terms of social distance.

Many other factors, nonverbal and otherwise, can influence proxemics between people, and this is sure to be true in HRI contexts, as well. The experience of near-future applications of exoskeletons and exorobotics (e.g., Reference [85]), where the robot is intimately close to the human body, will need to be evaluated. While people usually have some understanding of the size of another person's personal bubble and try to avoid intruding into this space, a similar awareness might not exist for or by robots. Clear visualizations of a robot's personal space could prevent violent, inappropriate, or dangerous interactions with or toward robots. Another question is whether humanlike robots are more likely to be perceived as having something like a personal bubble compared to machinelike robots, or whether the size of this perceived space differs. These questions might be especially useful for explaining people's approach and avoidance behaviors toward different types of robots.
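
As a starting point for respecting personal space, a robot's navigation layer could classify its distance to a person into human-derived proxemic zones and adapt its approach speed accordingly. The sketch below uses Hall's commonly cited interpersonal distances only as illustrative defaults; a deployed system would need to tune the boundaries per culture, context, and robot form.

```python
# Sketch of a proxemics-aware approach policy. The zone boundaries are Hall's
# commonly cited interpersonal distances, used here only as illustrative
# defaults; real deployments should tune them per culture, context, and robot.
PROXEMIC_ZONES = [          # (upper bound in meters, zone name)
    (0.45, "intimate"),
    (1.20, "personal"),
    (3.60, "social"),
    (float("inf"), "public"),
]

def classify_distance(distance_m):
    for upper, zone in PROXEMIC_ZONES:
        if distance_m <= upper:
            return zone
    return "public"

def approach_speed(distance_m, max_speed=0.8):
    """Slow the robot as it enters more intimate zones and stop once inside
    the intimate zone, unless explicitly invited closer."""
    zone = classify_distance(distance_m)
    if zone == "intimate":
        return 0.0
    if zone == "personal":
        return 0.1 * max_speed
    if zone == "social":
        return 0.5 * max_speed
    return max_speed

if __name__ == "__main__":
    for d in (0.3, 0.9, 2.0, 5.0):
        print(f"{d:>4} m -> {classify_distance(d):8s} speed={approach_speed(d):.2f} m/s")
```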

4.1.2 Facial Expressions.

Facial expressions can be very complex and are often interpreted as displays of emotions. Facial expressions for basic emotions such as happiness, sadness, surprise, anger, fear, and disgust are believed to be universal [86]. However, they are also psychologically constructed mental events based on factors internal and external to the person [87]. The emotion of “sadness” for one person may not match another person's, or even the same person's at other times or in other situations. Specific cultural rules govern when it is appropriate to show certain facial expressions. People learn early on how to modify facial displays as part of becoming good communicators. Japanese people, for example, try not to show sadness, because it might have a negative effect on others. Similarly, Western cultures often expect women to smile and withhold expressions of anger, while men can go without smiling and their expressions of anger tend to be more accepted [88]. Furthermore, we learn rules for facial expressions that are appropriate for specific professional environments: the so-called professional display rules. Kindergarten teachers learn to appear cheerful and kind, shop assistants smile and are friendly even when confronted with customer complaints, and so on.

A tremendous amount of work has gone into exploring facial expressions in robots. Seminal work includes Breazeal and colleagues' Kismet [89], the eye-centered robot by Tojo et al. [90], Kanoh and colleagues' Ifbot [91], Hashimoto and team's expressive SAYA [92], and Bennewitz and colleagues' Fritz [93], to name a few. More recently, a range of high- and low-fidelity options have been explored. On the high end, the humanoid headbot Sophia by Hanson Robotics has made waves with its ultra-realistic facial expressions. On the low end, Seaborn et al. [94] used simple LED eye animations combined with nonverbal utterances to convey expressions of pleasure and dismay in a semi-humanoid robotic food waste recycling bin. Current efforts on facial displays in robots have largely focused on making robots more humanlike by giving them the appearance of being able to feel emotions. However, the purpose of facial expressions is not limited to revealing one's inner affective state; facial expressions are also an important tool for building relationships with others. Research should explore the purposeful use of facial expressions in different contexts with the goal of creating shared understandings and situational awareness between robots and humans, as well as to communicate feelings of empathy. Robots “pretending” to have emotions are perceived as uncanny by some [95]. Thus, using emotion as a social response to certain situations (e.g., a robot that fails to complete a task highlights this failure to a human partner by expressing visible anger) without the goal of revealing an inner affective state might be more valuable for HRI settings.
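
One way to operationalize this idea of emotion as a social response is to map task events to expressive displays rather than to a model of inner affect. The sketch below is a hypothetical event-to-expression policy; the event names, display parameters (e.g., LED eye color), and mappings are assumptions for illustration.

```python
# Sketch of emotion-as-social-response: facial displays are triggered by task
# events to signal status to a human partner, not to simulate inner feelings.
# Event names, display parameters, and mappings are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FacialDisplay:
    eye_color: str      # e.g., for LED-based eyes
    eye_shape: str      # e.g., "round", "narrowed", "arched"
    mouth_curve: float  # -1.0 (frown) .. +1.0 (smile), if a mouth exists

EVENT_TO_DISPLAY = {
    "task_succeeded": FacialDisplay("green", "arched", 0.8),
    "task_failed": FacialDisplay("red", "narrowed", -0.6),
    "waiting_for_input": FacialDisplay("blue", "round", 0.1),
}

def display_for(event, default=FacialDisplay("white", "round", 0.0)):
    """Choose an expression that communicates task status to a human partner."""
    return EVENT_TO_DISPLAY.get(event, default)

if __name__ == "__main__":
    for event in ("task_failed", "task_succeeded", "unknown_event"):
        print(event, "->", display_for(event))
```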

4.1.3 Eye Behavior (Oculesics).

It is said that eyes are the window to a person's soul. True or not, eye behavior is key to successful social interactions. Eye gaze can promote or hinder contact. Mutual gaze is usually the first step in starting an interaction, signaling awareness of the other person and a willingness to engage with each other [96]. According to Adams and Kleck [97], direct gaze is associated with approach behavior, whereas averted gaze is associated with avoidance behavior. Furthermore, direct gaze can be interpreted either as threatening or friendly, depending on other contextual cues. Recent work suggests that people do not necessarily look into each other's eyes; instead, eye gaze fluctuates between the eye and mouth regions, indicating that face gaze may be a more appropriate moniker than “eye gaze” [98]. Eye behavior might also be associated with certain character traits, as suggested by Grumet [99]. For example, the relationship between eye gaze and extraversion was explored in the ECA Linda [100], where extraversion was defined as gazing at the user 90% of the time and introversion as 30% of the time; however, the effect of differing eye gaze could not be experimentally detected. Another potential source of social cues is pupillary dilation. The constriction and dilation of the pupil serve primarily to control the amount of light admitted into the eye. However, studies suggest that pupil dilation can also be a sign of interest [101]. Furthermore, pupil size can correspond to negative emotional expressions, with smaller pupil sizes indicating higher intensity and sadness of perceived emotions [102, 103].

In HRI, eye gaze is often paired with cues that can be classified as other nonverbal codes, especially regulators; we have already described several (e.g., References [54, 61, 63]). Previous research has revealed positive effects of robotic eye contact. For example, a robot that kept eye contact with one member of a group, to the exclusion of others, was seen as favoring that member [104]. Also, a shared-gaze attention cue during handovers of a bottle of water sped up the handover process [105]. In contrast, van Dijk et al. [106] evaluated eye gaze in combination with verbal messages and gestures but did not find an effect. Eye gaze has also been explored as a control mechanism. For instance, Yu et al. [107] explored eye gaze for object selection by a human operator during control of a teleoperated drone. Eye gaze has long been of interest in HRI. Future work can explore more complicated situations as well as subtleties in gaze, especially gaze switching at certain moments in an interaction. Furthermore, some topics in eye gaze have received almost no attention in HRI. For instance, pupillary dilation has not been well researched yet. Changes in pupil dilation could provide additional cues to a person interacting with a robot, indicating interest (dilated pupils) or focus and intensiveness (smaller pupils). These are technically easy to implement in a robot. Design inspiration can be drawn from human models as well as from fictional representations of human(oid) characters in comics and animation.
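
For screen- or LED-based robot eyes, pupil size can be treated as a single animated parameter driven by internal state. The sketch below is a hypothetical mapping from an “interest” and a “focus” level to a pupil radius; the state variables, their ranges, and the radii are assumptions for illustration.

```python
# Sketch of driving the pupil radius of a screen- or LED-based robot eye from
# internal state: higher interest dilates the pupil, higher focus constricts it.
# The state variables, ranges, and radii are illustrative assumptions.
def pupil_radius(interest, focus, r_min=2.0, r_max=6.0):
    """interest and focus are assumed to lie in [0, 1]; returns a radius in mm
    (or pixels for a screen-based eye)."""
    interest = min(max(interest, 0.0), 1.0)
    focus = min(max(focus, 0.0), 1.0)
    # Interest pushes toward dilation, focus pushes toward constriction.
    drive = 0.5 + 0.5 * (interest - focus)
    return r_min + drive * (r_max - r_min)

if __name__ == "__main__":
    states = [("idle", 0.2, 0.2), ("curious", 0.9, 0.1), ("concentrating", 0.2, 0.9)]
    for name, interest, focus in states:
        print(f"{name:>13}: pupil radius = {pupil_radius(interest, focus):.1f}")
```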

4.2 Auditory Nonverbal Codes

4.2.1 Paralanguage.

The auditory channel is another human sensory channel sensitive to nonverbal codes, and paralanguage is a key element of this. Paralinguistics, also called vocalics, is the study of paralanguage, which refers to the sounds that accompany speech but are not words. This includes vocal qualities, such as the pitch, tone, and intonation patterns of the voice; vocalizations, from groans to giggles to cries to yawns; and vocal segregates, such as vocal fillers like “uh,” “ah,” and “um,” as well as pauses and dragged-out sounds at the start or end of words. Paralanguage can convey affect, such as how laughing out loud conveys being happy and yawning conveys feeling tired or bored. Silence is also an element of paralanguage. Silence can have a positive or negative meaning. It can convey respect, intimacy, or comfort. However, silence when a response is expected can generate a negative impression or signal rejection, such as not laughing when a joke is told or not replying to a question. In general, people are very sensitive to the emotional quality of a voice and can detect whether a person is happy, sad, cynical, or sarcastic. Voice parameters indicating different emotions include pitch, pitch range, pitch change, voice intensity, tone, and contour [108].

Paralanguage has a long and rich history in robotics, stemming from early explorations in computer voice [109]. See Seaborn et al. [5] for a review featuring many examples across the robotic spectrum. Much work has focused on replicating speech, i.e., robots made to speak human languages. Recently, there has been a turn to non-speech, or at least speech that does not involve words (non-linguistic utterances) [110]. In an early paper, Read and Belpaeme [111] described non-linguistic emotive auditory cues for child–robot interaction. Savery et al. [112] created Shimi, a robot that sings without words and moves without humanoid limbs to convey emotionality. Non-linguistic utterances and other forms of vocal but non-speech paralanguage can get around the sticky issue of natural language processing. Another promising direction is robotic regulators for backchanneling, or verbal interjections, in its various forms [113]. Backchanneling can range from very frequent, as in aizuchi (interjections to indicate that the listener is paying attention) during Japanese conversations, to infrequent, and can comprise a mix of verbal and nonverbal cues. Certain cues, such as laughter, are virtually uncharted territory in the design of robots, even though there are robots that detect and respond to laughter (e.g., Reference [114]), as well as much work on using human laughter to measure engagement and enjoyment in HRI contexts (e.g., Reference [115]).
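
To illustrate how non-linguistic utterances can sidestep natural language processing, the sketch below synthesizes a short tone with a rising or falling pitch contour and writes it to a WAV file using only the Python standard library. The contour shapes and the affective readings attached to them (rising as inquisitive or positive, falling as declarative or resigned) are assumptions for illustration, not validated mappings.

```python
# Sketch of a minimal non-linguistic utterance: a short tone whose pitch rises
# or falls over time, written to a WAV file. Contour shapes and their affective
# reading are illustrative assumptions, not validated mappings.
import math
import struct
import wave

SAMPLE_RATE = 16000

def utterance(duration=0.4, f_start=400.0, f_end=800.0, volume=0.5):
    """Generate samples for a tone sweeping linearly from f_start to f_end Hz."""
    n = int(duration * SAMPLE_RATE)
    samples = []
    phase = 0.0
    for i in range(n):
        t = i / n
        freq = f_start + (f_end - f_start) * t
        phase += 2 * math.pi * freq / SAMPLE_RATE
        envelope = math.sin(math.pi * t)          # fade in and out
        samples.append(volume * envelope * math.sin(phase))
    return samples

def write_wav(path, samples):
    with wave.open(path, "w") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)                       # 16-bit PCM
        wav.setframerate(SAMPLE_RATE)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        wav.writeframes(frames)

if __name__ == "__main__":
    write_wav("rising.wav", utterance(f_start=400, f_end=800))   # inquisitive?
    write_wav("falling.wav", utterance(f_start=700, f_end=300))  # resigned?
```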

4.2.2 Chronemics.

Chronemics is the study of the use of time in nonverbal communication. People can have different time orientations, being focused more or less on the past, present, or future [116]. Time orientation can influence the cognitive level of behavioral functioning. Beliefs grounded in past experiences, current appraisals, and reflections on future options affect people's actions and motivation [117]. A person's time orientation can affect how they perceive time as well as their willingness to wait or be punctual. Chronemics, as such, can be expressed through body language and behavioral patterns. Since chronemics has been studied mainly in conversational settings, we have included it in the category of auditory nonverbal codes. Time distribution in conversation is an expression of each person's power, shown through the length of the conversation, turn-taking, speech time, and conversation initiation and closure.

Most of the research on chronemics has focused on conversation scenarios, notably turn-taking (e.g., References [118, 119]), conversational fillers (e.g., Reference [64]), and topical time orientation [120]. Time orientation poses several relevant questions for robots. Do robots have a time orientation? Or are they perceived to have a time orientation by people? Is a robot speaking quickly perceived differently from a robot speaking slowly? Should a robot pause when it is processing information? Should a robot be late in a culture where people are usually not on time? Could robots amplify a person's time orientation by preserving the past and extending one's existence into the future? For example, the Japanese artist Etsuko Ichihara3 created a robot with the printed face of a deceased person, imitating the person's speech and behavior. Inspired by traditional Japanese funeral rites, the robot spent 49 days (called shijukunichi, the time period during which a spirit is expected to reach the next life, according to Shinto traditions) with family members to gradually bid farewell. Taxonomies of specific varieties of chronemics may be created or adapted for developing HRI scenarios. For instance, the taxonomy of conversational interruptions developed for ECAs by Cafaro, Ravenet, and Pelachaud [121] could be translated to HRI contexts. Indeed, their procedure, in which people chose the optimal behaviors (i.e., nonverbal cues in response to interruptions) that were then used to develop the underlying algorithm, could be replicated with robots.
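
Some of these timing questions can be prototyped with nothing more than a handful of parameters. The sketch below defines a hypothetical “chronemic profile” (response latency, inter-sentence pause, speech rate) that a dialogue manager could swap per culture or context; the profile names and values are assumptions for illustration, not empirically derived settings.

```python
# Sketch of a chronemic profile: timing parameters a dialogue manager could use
# to shape when and how quickly a robot responds. Profile names and values are
# illustrative assumptions, not empirically derived settings.
import time
from dataclasses import dataclass

@dataclass
class ChronemicProfile:
    response_latency_s: float       # silence before the robot starts replying
    inter_sentence_pause_s: float
    words_per_minute: int

PROFILES = {
    "brisk": ChronemicProfile(0.2, 0.3, 180),
    "deliberate": ChronemicProfile(1.0, 0.8, 120),
}

def deliver(sentences, profile, speak=print):
    """Deliver sentences with the pacing defined by the chronemic profile."""
    time.sleep(profile.response_latency_s)
    for sentence in sentences:
        seconds = len(sentence.split()) / profile.words_per_minute * 60
        speak(f"{sentence}  (~{seconds:.1f}s at {profile.words_per_minute} wpm)")
        time.sleep(profile.inter_sentence_pause_s)

if __name__ == "__main__":
    deliver(["I have checked the schedule.", "We are on time."], PROFILES["deliberate"])
```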

4.3 Haptic Nonverbal Codes

Haptics refers to somatosensory sensation. Haptic communication is how people and other animals communicate via touch. As such, haptics might be the most primitive but also the most essential form of nonverbal communication. For example, studies have shown that foster children deprived of touch have developmental delays [122]. The affective touch hypothesis proposes that touch has an important effect on emotional well-being [123]. Positive emotions evoked through gentle strokes (e.g., grooming) seem to mediate socializing behavior [124] and appear to play a crucial role in autism [125]. Furthermore, touch deprivation is linked to depression and violence [126, 127]. However, cultural groups have different rules about who, how, and how often members can touch each other. Violations of these rules can have social and legal consequences, such as when touch or body contact is perceived as being inappropriate or unwanted. Touch initiation is a dominant behavior that controls or directs the behavior of others; for instance, people perceived to be of higher status may initiate touch. However, touching also fulfills basic psychological needs. Hugging friends and family members or stroking the warm fur of animals creates positive feelings, is comforting, and reduces stress.

Haptics has long been explored in HRI, as in the seal robot by Shibata and Tanie [128] that can be held and stroked or the pet-alternative robot by Yohanan and MacLean [129]. Other research has looked into surface temperature, finding that a warm surface was important when people held hands with a robot [130]. Hu and Hoffman [131] created a robot that can have goosebumps. Robots that respond to touch might provide an alternative approach to therapy for people with autism, a developmental disorder associated with various sensory-perceptual anomalies [132], as suggested in a recent systematic review on social robots interacting with autistic people [133]. However, touch in human interaction fulfills different needs but also follows social rules. We must then recognize the ethical implications of increasingly realistic touch sensations and situations in HRI [134]. Would people accept being poked by a robot to grab their attention, for instance? There are many questions to explore. Robots should surely respect existing cultural rules about touching. A robot initiating touch might be perceived as intruding into someone's personal space, or even as an aggressor, depending on the context and expectations. However, a robot providing haptic feedback when touched by a person could foster intimacy and support relationship-building, an important aspect for social robots.

4.4 Olfactory Nonverbal Codes

Olfactics is the study of smell in communication, and the sense of smell is probably the least understood of all human sensations. People seem to have universal preferences for scents that may have biological and evolutionary roots. Qualities and thoughts attributed to different scents seem to play an important role. For instance, we may feel that people who smell “bad” are repellent, while people who smell “good” are attractive. Olfactory sensation seems to influence social interaction as well. For example, humans seem to generate specific chemo-sensory signals for specific emotional states, such as fear and happiness, and some people can fairly accurately distinguish these [135]. De Groot et al. [136] compared mood odors for disgust and fear, showing that they produced matching facial expressions in participants who were exposed to these odors. There is also growing evidence that every person has a unique odor signature [137] that can be identified by close relatives [138]. However, olfactics has been all but neglected in HRI so far. Even so, we believe that this is an area ripe for research. Some work in HRI [139] has explored robots that are able to detect and distinguish scents, especially in teleoperation contexts where the robot's “nose” is an extension of the human operator's senses (e.g., Reference [140]). Work in the related field of HCI has explored olfactory displays (e.g., References [141, 142]), providing technical proofs-of-concept. One project combined robots and an olfactory display in a four-dimensional theater experience [143]. Yet HRI work on scent is virtually nonexistent. We can translate this work to HRI. We can explore which scents are suitable for which kinds of robot forms. We can examine whether and how people may be influenced by different scents in subtle ways. For instance, can a robot's scent affect approach distance or delineate its personal bubble? We can evaluate the relationships between scent and other factors, such as personality and humanlikeness. Specific scents could also help to personalize robots and make them unique. There is much work to be done.

4.5 Gustatory Nonverbal Codes

The gustatory sense, or sense of taste, is not a nonverbal code usually considered in the communication studies literature. However, we include it to complete the list of human sensory channels potentially involved in HRI. Taste receptors are mainly found on the human tongue, and research has shown that we can distinguish five taste modalities: sweetness, sourness, saltiness, bitterness, and savoriness (also called umami) [144, 145]. In most HRI contexts, the gustatory sense will not be relevant. However, robots used for feeding people, such as robots that assist young parents, robots in care facilities for older people, or robots for patient care, could stimulate the gustatory senses of those they feed. In general, gustatory sensation weakens with age [146], which can make food seem tasteless and consequently reduce appetite. Certain medical conditions, such as "long COVID," can also involve a loss of taste [147]. Some research has experimented with taste displays that can reproduce the five basic taste modalities using gels [148, 149]. Integrating this technology into feeding robots could enhance or alter the taste of food according to individual preferences, stimulating appetite and helping ensure that people receive the necessary amount and balance of nutritious food.
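As a rough illustration of how such an integration might work, the following Python sketch maps per-modality taste preferences to output intensities for a hypothetical gel-based taste display on a feeding robot's utensil. The channel names follow the five modalities above; the preference weights, the `perceived_loss` compensation factor, and the 0–1 output range are illustrative assumptions, not specifications of the cited displays [148, 149].

```python
# Channels of a hypothetical gel-based taste display; names follow the five
# basic taste modalities discussed above.
TASTE_CHANNELS = ("sweet", "sour", "salty", "bitter", "umami")

def taste_profile(preferences, perceived_loss=0.4):
    """Compute per-channel output intensities for the (hypothetical) display.

    `preferences` maps each modality to a 0-1 preference weight (e.g., from a
    caregiver's intake form); `perceived_loss` is an illustrative estimate of
    how much the eater's taste sensitivity has declined. Each channel is
    boosted proportionally and clamped to the display's 0-1 range.
    """
    boost = 1.0 + perceived_loss
    return {ch: min(1.0, preferences.get(ch, 0.0) * boost) for ch in TASTE_CHANNELS}

if __name__ == "__main__":
    # Example: a resident who favors savory and salty flavors over sweet ones.
    prefs = {"sweet": 0.2, "salty": 0.5, "umami": 0.7}
    print(taste_profile(prefs))
```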


5 CONCLUSION

We have provided a comprehensive overview of the nonverbal codes identified in human–human communication by the field of communication studies and outlined how these codes map onto the human sensory systems involved in HRI: visual, auditory, haptic, olfactory, and gustatory perception. We have translated these codes to the case of robots, offering several suggestions for research and development in robotics and HRI. We believe that the nonverbal codes covered in this article are especially promising for HRI research. While some topics have been explored already, notably body movement and gestures, others, such as haptics, olfactics, and chronemics, are ripe for exploration. Methodologically, this was a difficult article to compile, because much of the HRI literature does not use the terminology of communication studies. For instance, we found HRI work on "deictic" gestures, a sub-category of the nonverbal code "regulators," in which the term "regulators" itself is never mentioned. Additionally, while there is much HRI work to be found under certain cross-disciplinary terms, such as "gestures," these terms are far too broad to allow easy navigation and categorization of that literature. As such, we could only provide a sampling of examples based on the keywords used in HRI papers and the literature that we know as researchers working within or adjacent to HRI. We also aimed to include work from a variety of authors covering different regions, fields of study, and career stages so as to capture a range of ideas from the diversity of people working in this area. The work presented here thus offers a mapping of how a communication studies framework can and does apply to HRI. Let us begin the process of interdisciplinary categorization and chart a path for future research together.


ACKNOWLEDGMENTS

Our sincere gratitude to the reviewers.

Footnotes

1. For the sake of simplicity and with relevance to this venue, we will use the term "robot" from here onward. However, we do not suggest that this material is limited to conventional robot categories. Indeed, the term "robot" can be defined in various ways and applied broadly to technologies ranging from algorithms to autonomous agents, and we use it in this spirit.

2. We fully expect that this view will become outdated with advances in AI, especially around machine agency and self-awareness. For now, however, it is an accurate take on reality.

3. Learn more about Etsuko Ichihara's "Digital Shaman Project" here: https://etsuko-ichihara.com/works/digital_shaman_project/.

REFERENCES

[1] Stevens S. S. 1950. Introduction: A definition of communication. J. Acoust. Soc. Am. 22, 6 (1950), 689–690.
[2] Hauser M. D. 1996. The Evolution of Communication. MIT Press, Cambridge, MA.
[3] Hall J. A., Horgan T. G., and Murphy N. A. 2019. Nonverbal communication. Annu. Rev. Psychol. 70 (2019), 271–294.
[4] Crumpton J. and Bethel C. L. 2016. A survey of using vocal prosody to convey emotion in robot speech. Int. J. Soc. Robot. 8, 2 (2016), 271–285.
[5] Seaborn K. et al. 2021. Voice in human–agent interaction: A survey. ACM Comput. Surv. 54, 4 (2021), 1–43.
[6] Saunderson S. and Nejat G. 2019. How robots influence humans: A survey of nonverbal communication in social human–robot interaction. Int. J. Soc. Robot. 11, 4 (2019), 575–608.
[7] Badr A. A. and Abdul-Hassan A. K. 2020. A review on voice-based interface for human-robot interaction. Iraq. J. Electr. Electr. Eng. 16, 2 (2020), 91–102.
[8] Seaborn K. and Urakami J. 2021. Measuring voice UX quantitatively: A rapid review. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, 1–8.
[9] Seaborn K., Pennefather P., Miyake N., and Otake-Matsuura M. 2021. Crossing the Tepper line: An emerging ontology for describing the dynamic sociality of embodied AI. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, 1–6.
[10] Hashimoto S. 2006. KANSEI Robotics to open a new epoch of human-machine relationship-machine with a heart. In Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (ROMAN'06). IEEE.
[11] Šabanović S. 2014. Inventing Japan's 'robotics culture': The repeated assembly of science, technology, and culture in social robotics. Soc. Stud. Sci. 44, 3 (2014), 342–367.
[12] Lütkebohle I. et al. 2010. The Bielefeld anthropomorphic robot head "Flobi." In Proceedings of the IEEE International Conference on Robotics and Automation. IEEE.
[13] Yoshida N. and Yonezawa T. 2016. Investigating breathing expression of a stuffed-toy robot based on body-emotion model. In Proceedings of the 4th International Conference on Human Agent Interaction.
[14] McColl D. et al. 2016. A survey of autonomous human affect detection methods for social robots engaged in natural HRI. J. Intell. Robot. Syst. 82, 1 (2016), 101–133.
[15] Sidner C. L. et al. 2005. Explorations in engagement for humans and robots. Artif. Intell. 166, 1–2 (2005), 140–164.
[16] Nehaniv C. L. et al. 2005. A methodological approach relating the classification of gesture to identification of human intent in the context of human-robot interaction. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication (ROMAN'05). IEEE.
[17] De Santis A. et al. 2008. An atlas of physical human–robot interaction. Mech. Mach. Theory 43, 3 (2008), 253–270.
[18] Bethel C. L. and Murphy R. R. 2007. Survey of non-facial/non-verbal affective expressions for appearance-constrained robots. IEEE Trans. Syst. Man Cybernet. Part C (Appl. Rev.) 38, 1 (2007), 83–92.
[19] Argall B. D. and Billard A. G. 2010. A survey of tactile human–robot interactions. Robot. Auton. Syst. 58, 10 (2010), 1159–1176.
[20] Van Erp J. B. and Toet A. 2013. How to touch humans: Guidelines for social agents and robots that can touch. In Proceedings of the Humaine Association Conference on Affective Computing and Intelligent Interaction. IEEE.
[21] Kruse T. et al. 2013. Human-aware robot navigation: A survey. Robot. Auton. Syst. 61, 12 (2013), 1726–1743.
[22] Rios-Martinez J., Spalanzani A., and Laugier C. 2015. From proxemics theory to socially-aware navigation: A survey. Int. J. Soc. Robot. 7, 2 (2015), 137–153.
[23] Admoni H. and Scassellati B. 2017. Social eye gaze in human-robot interaction: A review. J. Hum.-Robot Interact. 6, 1 (2017), 25–63.
[24] Krauss R. M. and Fussell S. R. 1996. Social psychological models of interpersonal communication. In Social Psychology: Handbook of Basic Principles (1996), 655–701.
[25] Koike K. et al. 2019. Academic emotions affected by robot eye color: An investigation of manipulability and individual-adaptability. Int. J. Adv. Comput. Sci. Appl. 10, 4 (2019).
[26] Urakami J., Sutthithatip S., and Moore B. A. 2020. The effect of naturalness of voice and empathic responses on enjoyment, attitudes and motivation for interacting with a voice user interface. In Proceedings of the International Conference of Human Computer Interaction (HCI'20).
[27] Lee J. and Marsella S. 2006. Nonverbal behavior generator for embodied conversational agents. In International Workshop on Intelligent Virtual Agents. Springer.
[28] Kopp S. et al. 2006. Towards a common framework for multimodal generation: The behavior markup language. In International Workshop on Intelligent Virtual Agents. Springer.
[29] Bourguet M.-L. et al. 2020. The impact of a social robot public speaker on audience attention. In Proceedings of the 8th International Conference on Human-Agent Interaction.
[30] Streeck J. 2009. Gesturecraft: The Manu-facture of Meaning. Vol. 2. John Benjamins Publishing.
[31] Urakami J. 2012. Developing and testing a human-based gesture vocabulary for tabletop systems. Hum. Factors 54, 4 (2012), 636–653.
[32] Molnár P. 2018. Universal facial expressions of emotion: An old controversy and new findings. In Nonverbal Communication. Routledge, London, 27–46.
[33] Haring K. S. et al. Perception of an android robot in Japan and Australia: A cross-cultural comparison. In International Conference on Social Robotics. Springer.
[34] de Melo C. M. and Terada K. 2019. Cooperation with autonomous machines through culture and emotion. PLoS One 14, 11 (2019), e0224758.
[35] Ruesch J. and Kees W. 1974. Nonverbal Communication. University of California Press.
[36] Kretschmer E. 2013. Physique and Character: An Investigation of the Nature of Constitution and of the Theory. Vol. 142. Routledge.
[37] Fink J. 2012. Anthropomorphism and human likeness in the design of robots and human-robot interaction. In International Conference on Social Robotics. Springer.
[38] Bernotat J., Eyssel F., and Sachse J. 2019. The Female robot: How robot body shape impacts first impressions and trust towards robots. Int. J. Soc. Robot. (2019), 1–13.
[39] Bernotat J., Eyssel F., and Sachse J. 2017. Shape it—The influence of robot body shape on gender perception in robots. In International Conference on Social Robotics. Springer.
[40] Bender S. 2019. Robo Sapiens Japanicus: Robots, Gender, Family, and the Japanese Nation. Oxford University Press, UK.
[41] Seaborn K. and Frank A. 2022. What pronouns for Pepper? A critical review of gender/ing in research. In CHI Conference on Human Factors in Computing Systems (CHI'22).
[42] Jacquemont M. et al. 2016. Human-centric point of view for a robot partner: A cooperative project between France and Japan. In Proceedings of the 11th France-Japan & 9th Europe-Asia Congress on Mechatronics (MECATRONICS'16)/17th International Conference on Research and Education in Mechatronics (REM'16). IEEE.
[43] Dautenhahn K. et al. 2009. KASPAR—A minimally expressive humanoid robot for human–robot interaction research. Appl. Bionics Biomech. 6, 3–4 (2009), 369–397.
[44] Friedman N. et al. 2021. Designing functional clothing for human-robot interaction. In Companion of the ACM/IEEE International Conference on Human-Robot Interaction.
[45] Ekman P. and Friesen W. V. 1969. Nonverbal leakage and clues to deception. Psychiatry 32, 1 (1969), 88–106.
[46] McNeill D. 2008. Gesture and Thought. University of Chicago Press.
[47] Efron D. 1941. Gesture and Environment. King's Crown Press, New York.
[48] Ekman P. and Friesen W. V. 1969. The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica 1, 1 (1969), 49–98.
[49] Molinsky A. L. et al. 2005. Cracking the nonverbal code: Intercultural competence and gesture recognition across cultures. J. Cross-Cultur. Psychol. 36, 3 (2005), 380–395.
[50] Cabibihan J.-J., So W.-C., and Pramanik S. 2012. Human-recognizable robotic gestures. IEEE Trans. Auton. Mental Dev. 4, 4 (2012), 305–314.
[51] Carter E. J. et al. Playing catch with robots: Incorporating social gestures into physical interactions. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication. IEEE.
[52] Gross S., Krenn B., and Scheutz M. 2017. The reliability of non-verbal cues for situated reference resolution and their interplay with language: Implications for human robot interaction. In Proceedings of the 19th ACM International Conference on Multimodal Interaction.
[53] Zheng M., Liu P. X., and Max Q. 2019. Interpretation of human and robot emblematic gestures: How do they differ. Int. J. Robot. Autom. 34, 1 (2019), 55–70.
[54] Salem M. et al. 2011. A friendly gesture: Investigating the effect of multimodal robot behavior in human-robot interaction. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication (ROMAN'11). IEEE.
[55] Lohse M. et al. 2014. Robot gestures make difficult tasks easier: The impact of gestures on perceived workload and task performance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
[56] Sheikholeslami S., Moon A., and Croft E. A. 2017. Cooperative gestures for industry: Exploring the efficacy of robot hand configurations in expression of instructional gestures for human–robot interaction. Int. J. Robot. Res. 36, 5–7 (2017), 699–720.
[57] Matuszek C. et al. 2014. Learning from unscripted deictic gesture and language for human-robot interactions. In Proceedings of the AAAI Conference on Artificial Intelligence.
[58] Sigalas M., Baltzakis H., and Trahanias P. 2010. Gesture recognition based on arm tracking for human-robot interaction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE.
[59] Humayoun S. R. et al. 2020. Space-free gesture interaction with humanoid robot. In Proceedings of the International Conference on Advanced Visual Interfaces.
[60] Sugiyama O. et al. 2007. Natural deictic communication with humanoid robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE.
[61] Häring M., Eichberg J., and André E. 2012. Studies on grounding with gaze and pointing gestures in human-robot-interaction. In International Conference on Social Robotics. Springer.
[62] Moon A. et al. 2013. Design and impact of hesitation gestures during human-robot resource conflicts. J. Hum.-Robot Interact. 2, 3 (2013), 18–40.
[63] Thepsoonthorn C., Ogawa K.-i., and Miyake Y. 2017. Does robot's human-based gaze and head nodding behavior really win over non-human-based behavior in human-robot interaction? In Proceedings of the Companion of the ACM/IEEE International Conference on Human-Robot Interaction.
[64] Ohshima N. et al. 2015. A conversational robot with vocal and bodily fillers for recovering from awkward silence at turn-takings. In Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication (ROMAN'15). IEEE.
[65] Isaka T. et al. 2018. Study of socially appropriate robot behaviors in human-robot conversation closure. In Proceedings of the 30th Australian Conference on Computer-Human Interaction.
[66] Matsumoto D., Hwang H. C., and Frank M. G. 2016. The body: Postures, gait, proxemics, and haptics. In APA Handbook of Nonverbal Communication. American Psychological Association, 387–400.
[67] Mehrabian A. 1969. Significance of posture and position in the communication of attitude and status relationships. Psychol. Bull. 71, 5 (1969), 359.
[68] Mehrabian A. 1968. Inference of attitudes from the posture, orientation, and distance of a communicator. J. Consult. Clin. Psychol. 32, 3 (1968), 296.
[69] De Gelder B. 2006. Towards the neurobiology of emotional body language. Nat. Rev. Neurosci. 7, 3 (2006), 242–249.
[70] Beck A. et al. 2012. Emotional body language displayed by artificial agents. ACM Trans. Interact. Intell. Syst. 2, 1 (2012), 1–29.
[71] Karg M. et al. 2013. Body movements for affective expression: A survey of automatic recognition and generation. IEEE Trans. Affect. Comput. 4, 4 (2013), 341–359.
[72] Chartrand T. L. and Bargh J. A. 1999. The chameleon effect: The perception–behavior link and social interaction. J. Pers. Soc. Psychol. 76, 6 (1999), 893.
[73] Carney D. R., Cuddy A. J., and Yap A. J. 2010. Power posing: Brief nonverbal displays affect neuroendocrine levels and risk tolerance. Psychol. Sci. 21, 10 (2010), 1363–1368.
[74] McColl D. and Nejat G. 2012. Affect detection from body language during social HRI. In Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (ROMAN'12). IEEE.
[75] Davis M. and Hadiks D. 1994. Nonverbal aspects of therapist attunement. J. Clin. Psychol. 50, 3 (1994), 393–405.
[76] Watson O. M. and Graves T. D. 1966. Quantitative research in proxemic behavior. Am. Anthropol. 68, 4 (1966), 971–985.
[77] Shuter R. 1976. Proxemics and tactility in Latin America. J. Commun. 26, 3 (1976), 46–52.
[78] Felipe N. J. and Sommer R. 2017. Invasions of personal space. In People and Buildings. Routledge, 54–64.
[79] Walters M. L. et al. 2005. The influence of subjects' personality traits on personal spatial zones in a human-robot interaction experiment. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication (ROMAN'05). IEEE.
[80] Stark J., Mota R. R., and Sharlin E. 2018. Personal space intrusion in human-robot collaboration. In Companion of the ACM/IEEE International Conference on Human-Robot Interaction.
[81] Takayama L. and Pantofaru C. 2009. Influences on proxemic behaviors in human-robot interaction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE.
[82] Mumm J. and Mutlu B. 2011. Human-robot proxemics: Physical and psychological distancing in human-robot interaction. In Proceedings of the 6th International Conference on Human-Robot Interaction.
[83] Kim Y. and Mutlu B. 2014. How social distance shapes human–robot interaction. Int. J. Hum.-Comput. Stud. 72, 12 (2014), 783–795.
[84] Peters C. et al. 2018. Investigating social distances between humans, virtual humans and virtual robots in mixed reality. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems.
[85] Sasaki T. et al. 2017. MetaLimbs: Multiple arms interaction metamorphism. In ACM SIGGRAPH 2017 Emerging Technologies. 1–2.
[86] Ekman P. 1992. An argument for basic emotions. Cogn. Emot. 6, 3–4 (1992), 169–200.
[87] Feldman Barrett L. 2011. Constructing emotion. Psihol. Teme 20, 3 (2011), 359–380.
[88] Traister R. 2018. Good and Mad: The Revolutionary Power of Women's Anger. Simon & Schuster.
[89] Breazeal C. and Scassellati B. 1999. How to build robots that make friends and influence people. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems: Human and Environment Friendly Robots with High Intelligence and Emotional Quotients. IEEE.
[90] Tojo T. et al. 2000. A conversational robot utilizing facial and body expressions. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics. IEEE.
[91] Kanoh M. et al. 2005. Emotive facial expressions of sensitivity communication robot "Ifbot." Kansei Eng. Int. 5, 3 (2005), 35–42.
[92] Hashimoto T. et al. 2006. Development of the face robot SAYA for rich facial expressions. In Proceedings of the SICE-ICASE International Joint Conference. IEEE.
[93] Bennewitz M. et al. 2007. Fritz-A humanoid communication robot. In Proceedings of the 16th IEEE International Symposium on Robot and Human Interactive Communication (ROMAN'07). IEEE.
[94] Seaborn K., Mähönen J., and Rogers Y. 2020. Scaling up to tackle low levels of urban food waste recycling. In Proceedings of the ACM Designing Interactive Systems Conference.
[95] Urakami J. et al. 2019. Users' perception of empathic expressions by an advanced intelligent system. In Proceedings of the 7th International Conference on Human-Agent Interaction.
[96] Argyle M. and Cook M. 1976. Gaze and Mutual Gaze.
[97] Adams R. B. and Kleck R. E. 2003. Perceived gaze direction and the processing of facial displays of emotion. Psychol. Sci. 14, 6 (2003), 644–647.
[98] Rogers S. L. et al. 2018. Using dual eye tracking to uncover personal gaze patterns during social interaction. Sci. Rep. 8, 1 (2018), 1–9.
[99] Grumet G. W. 1983. Eye contact: The core of interpersonal relatedness. Psychiatry 46, 2 (1983), 172–180.
[100] Sajjadi P. et al. 2018. On the effect of a personality-driven ECA on perceived social presence and game experience in VR. In Proceedings of the 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games'18). IEEE.
[101] Hess E. H. and Petrovich S. B. 1987. Pupillary behavior in communication. Nonverb. Behav. Commun. (1987), 327–348.
[102] Harrison N. A. et al. 2006. Pupillary contagion: Central mechanisms engaged in sadness processing. Soc. Cogn. Affect. Neurosci. 1, 1 (2006), 5–17.
[103] Harrison N. A., Wilson C. E., and Critchley H. D. 2007. Processing of observed pupil size modulates perception of sadness and predicts empathy. Emotion 7, 4 (2007), 724.
[104] Karreman D. et al. 2013. What happens when a robot favors someone? How a tour guide robot uses gaze behavior to address multiple persons while storytelling about art. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI'13). IEEE.
[105] Moon A. et al. 2014. Meet me where I'm gazing: How shared attention gaze affects human-robot handover timing. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction.
[106] Van Dijk E. T., Torta E., and Cuijpers R. H. 2013. Effects of eye contact and iconic gestures on message retention in human-robot interaction. Int. J. Soc. Robot. 5, 4 (2013), 491–501.
[107] Yu M. et al. 2014. Human-robot interaction based on gaze gestures for the drone teleoperation. J. Eye Move. Res. 7, 4 (2014), 1–14.
[108] Cowie R. et al. 2001. Emotion recognition in human-computer interaction. IEEE Sign. Process. Mag. 18, 1 (2001), 32–80.
[109] Nass C. I. and Brave S. 2005. Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship. MIT Press, Cambridge, MA.
[110] Yilmazyildiz S. et al. 2016. Review of semantic-free utterances in social human–robot interaction. Int. J. Hum.-Comput. Interact. 32, 1 (2016), 63–85.
[111] Read R. and Belpaeme T. 2012. How to use non-linguistic utterances to convey emotion in child-robot interaction. In Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI'12). IEEE.
[112] Savery R., Rose R., and Weinberg G. 2019. Establishing human-robot trust through music-driven robotic emotion prosody and gesture. In Proceedings of the 28th IEEE International Conference on Robot and Human Interactive Communication (ROMAN'19). IEEE.
[113] Fujie S., Fukushima K., and Kobayashi T. 2004. A conversation robot with back-channel feedback function based on linguistic and nonlinguistic information. In Proceedings of the ICARA International Conference on Autonomous Robots and Agents. Citeseer.
[114] Devillers L. et al. 2015. Multimodal data collection of human-robot humorous interactions in the Joker project. In Proceedings of the International Conference on Affective Computing and Intelligent Interaction (ACII'15). IEEE.
[115] Yamaguchi K., Nergui M., and Otake M. 2014. A robot presenting reproduced stories among older adults in group conversation. In Applied Mechanics and Materials. Trans Tech Publications.
[116] Zimbardo P. G. and Boyd J. N. 2015. Putting time in perspective: A valid, reliable individual-differences metric. In Time Perspective Theory; Review, Research and Application. Springer, 17–55.
[117] Bandura A. 2010. Self-efficacy. In The Corsini Encyclopedia of Psychology (2010), 1–3.
[118] Chao C. and Thomaz A. L. 2010. Turn taking for human-robot interaction. In Proceedings of the AAAI Fall Symposium Series.
[119] Skantze G., Hjalmarsson A., and Oertel C. 2014. Turn-taking, feedback and joint attention in situated human–robot interaction. Speech Commun. 65 (2014), 50–66.
[120] Otake-Matsuura M. et al. 2021. Cognitive intervention through Photo-Integrated Conversation Moderated by Robots (PICMOR) Program: A randomized controlled trial. Front. Robot. AI 8 (2021).
[121] Cafaro A., Ravenet B., and Pelachaud C. 2019. Exploiting evolutionary algorithms to model nonverbal reactions to conversational interruptions in user-agent interactions. IEEE Trans. Affect. Comput. (2019).
[122] Field T. 2014. Touch. MIT Press.
[123] Vallbo Å., Olausson H., and Wessberg J. 1999. Unmyelinated afferents constitute a second system coding tactile stimuli of the human hairy skin. J. Neurophysiol. 81, 6 (1999), 2753–2763.
[124] McGlone F., Wessberg J., and Olausson H. 2014. Discriminative and affective touch: Sensing and feeling. Neuron 82, 4 (2014), 737–755.
[125] Kaiser M. D. et al. 2016. Brain mechanisms for processing affective (and nonaffective) touch are atypical in autism. Cerebr. Cortex 26, 6 (2016), 2705–2714.
[126] Ardiel E. L. and Rankin C. H. 2010. The importance of touch in development. Paediatr. Child Health 15, 3 (2010), 153–156.
[127] Field T. 2002. Violence and touch deprivation in adolescents. Adolescence 37, 148 (2002), 735.
[128] Shibata T. and Tanie K. 2001. Physical and affective interaction between human and mental commit robot. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'01). IEEE.
[129] Yohanan S. and MacLean K. E. 2012. The role of affective touch in human-robot interaction: Human intent and expectations in touching the haptic creature. Int. J. Soc. Robot. 4, 2 (2012), 163–180.
[130] Nie J. et al. 2012. Can you hold my hand? Physical warmth in human-robot interaction. In Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI'12). IEEE.
[131] Hu Y. and Hoffman G. 2019. Using skin texture change to design emotion expression in social robots. In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI'19). IEEE.
[132] Baranek G. T. et al. 2006. Sensory Experiences Questionnaire: Discriminating sensory features in young children with autism, developmental delays, and typical development. J. Child Psychol. Psychiatr. 47, 6 (2006), 591–601.
[133] Pennisi P. et al. 2016. Autism and social robotics: A systematic review. Autism Res. 9, 2 (2016), 165–183.
[134] Arnold T. and Scheutz M. 2017. The tactile ethics of soft robotics: Designing wisely for human–robot interaction. Soft Robot. 4, 2 (2017), 81–87.
[135] Haviland-Jones J. M. and Wilson P. J. 2008. A "nose" for emotion. Emotions (2008), 235.
[136] De Groot J. H. et al. 2012. Chemosignals communicate human emotions. Psychol. Sci. 23, 11 (2012), 1417–1424.
[137] Kwak J. et al. 2010. In search of the chemical basis for MHC odourtypes. Proc. Roy. Soc. B: Biol. Sci. 277, 1693 (2010), 2417–2425.
[138] Lundström J. N. et al. 2009. The neuronal substrates of human olfactory based kin recognition. Hum. Brain Map. 30, 8 (2009), 2571–2580.
[139] Miwa H. et al. 2001. Human-like robot head that has olfactory sensation and facial color expression. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'01). IEEE.
[140] Gongora A. and Gonzalez-Jimenez J. 2019. Olfactory telerobotics: A feasible solution for teleoperated localization of gas sources? Robot. Auton. Syst. 113 (2019), 1–9.
[141] Matsukura H., Yoneda T., and Ishida H. 2013. Smelling screen: Development and evaluation of an olfactory display system for presenting a virtual odor source. IEEE Trans. Vis. Comput. Graph. 19, 4 (2013), 606–615.
[142] Patnaik B., Batch A., and Elmqvist N. 2018. Information olfactation: Harnessing scent to convey data. IEEE Trans. Vis. Comput. Graph. 25, 1 (2018), 726–736.
[143] Casas S. et al. 2017. On a first evaluation of ROMOT—A Robotic 3D Movie theatre—for driving safety awareness. Multimodal Technol. Interact. 1, 2 (2017), 6.
[144] Trivedi B. P. 2012. Gustatory system: The finer points of taste. Nature 486, 7403 (2012), S2–S3.
[145] Witt M. 2019. Anatomy and development of the human taste system. In Handbook of Clinical Neurology 164 (2019), 147–171.
[146] Methven L. et al. 2012. Ageing and taste. Proc. Nutr. Soc. 71, 4 (2012), 556–565.
[147] Burges Watson D. L. et al. 2021. Altered smell and taste: Anosmia, parosmia and the impact of long Covid-19. PLoS One 16, 9 (2021), e0256998.
[148] Miyashita H. 2020. Taste display that reproduces tastes measured by a taste sensor. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology.
[149] Miyashita H. 2020. Norimaki synthesizer: Taste display using ion electrophoresis in five gels. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems.


Published in: ACM Transactions on Human-Robot Interaction, Volume 12, Issue 2: Special Issue on Designing the Robot Body: Critical Perspectives on Affective Embodied Interaction. June 2023, 316 pages. EISSN: 2573-9522. DOI: 10.1145/3586023.
Publisher: Association for Computing Machinery, New York, NY, United States.

Publication history: Received 31 May 2021; revised 28 February 2022; accepted 8 September 2022; online AM 7 December 2022; published 15 March 2023.
