
In-home and remote use of robotic body surrogates by people with profound motor deficits

  • Phillip M. Grice,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    phillip.grice@gatech.edu

    Affiliation Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, United States of America

  • Charles C. Kemp

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing

    Affiliation Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, United States of America

Abstract

By controlling robots comparable to the human body, people with profound motor deficits could potentially perform a variety of physical tasks for themselves, improving their quality of life. The extent to which this is achievable has been unclear due to the lack of suitable interfaces by which to control robotic body surrogates and a dearth of studies involving substantial numbers of people with profound motor deficits. We developed a novel, web-based augmented reality interface that enables people with profound motor deficits to remotely control a PR2 mobile manipulator from Willow Garage, which is a human-scale, wheeled robot with two arms. We then conducted two studies to investigate the use of robotic body surrogates. In the first study, 15 novice users with profound motor deficits from across the United States controlled a PR2 in Atlanta, GA to perform a modified Action Research Arm Test (ARAT) and a simulated self-care task. Participants achieved clinically meaningful improvements on the ARAT and 12 of 15 participants (80%) successfully completed the simulated self-care task. Participants agreed that the robotic system was easy to use, was useful, and would provide a meaningful improvement in their lives. In the second study, one expert user with profound motor deficits had free use of a PR2 in his home for seven days. He performed a variety of self-care and household tasks, and also used the robot in novel ways. Taking both studies together, our results suggest that people with profound motor deficits can improve their quality of life using robotic body surrogates, and that they can gain benefit with only low-level robot autonomy and without invasive interfaces. However, methods to reduce the rate of errors and increase operational speed merit further investigation.

Introduction

Individuals with profound motor deficits currently require assistance from human caregivers to complete many physical self-care tasks. This care can create financial challenges in providing for professional caregivers, and can place both physical and emotional burdens on informal caregivers. Assistive robots that enable people with profound motor deficits to perform tasks for themselves could be beneficial. For example, the ability to care for oneself (self-care self-efficacy) correlates with improved quality of life and decreased depression in stroke patients [1], and reducing care burden may improve caregiver mortality rates [2].

Robotic body surrogates have the potential to be highly versatile assistive devices for people with profound motor deficits. For this paper, we define a robotic body surrogate to be a human-operated, remote-controlled robot that can navigate and manipulate within an environment in a manner comparable to an able-bodied human. In this way, a robotic body surrogate can serve as a substitute for the human operator being physically present in the environment. A robotic body surrogate can also have physical capabilities that the human operator does not possess, and thereby serve as an assistive device for people with motor impairments.

Controlling robotic body surrogates is a key challenge

While the versatility of robotic body surrogates could enable them to assist with a wide variety of tasks, they are complex devices with many sensors and degrees of freedom. There are many commercially available human-scale robots with wheels and one or two arms (i.e., mobile manipulators) that could potentially serve as robotic body surrogates [3–7]. Additionally, full humanoid robots with arms and legs have been produced by companies and researchers and are common in laboratories [8, 9]. The studies in this paper use a PR2 robot from Willow Garage that is comparable to other commercially available mobile manipulators. The PR2 is a human-scale robot with an omnidirectional wheeled base, a torso that translates vertically, two arms with grippers, a pan/tilt head with cameras, and various other sensors, such as tactile sensors.

Due to this complexity, enabling a single person to effectively control a robotic body surrogate is a critical challenge. In the DARPA Robotics Challenge (DRC), teams relied on an average of two or more able-bodied experts and six or more video displays to control human-scale robots that navigated and manipulated in a manner comparable to able-bodied humans [10]. The DRC was a prominent international competition involving well-regarded teams from around the world, yet the interfaces used to control the robots were generally cumbersome and ill-suited to a single operator.

The challenge of single-person control becomes even greater when the person has profound motor deficits. In this paper, we consider a person who scores nine or fewer points on the Action Research Arm Test (ARAT) with both upper limbs to have profound motor deficits. This corresponds with a person having limited voluntary motion of his or her upper limbs, typically such that the person is unable to lift either hand against gravity. Profound motor deficits limit how users can provide input to computer systems [11], and variations in people's impairments and preferences make it difficult to design a single, broadly accessible input method [12].

Approaches to the control of robotic body surrogates

One approach to overcoming the challenge of interfacing a human with a robotic body surrogate has been to give the robot autonomous capabilities [8, 10, 13, 14]. This has the benefit of reducing the responsibilities of the human operator, which can reduce cognitive load, reduce errors, and increase efficiency. However, to date, autonomous capabilities have tended to be narrow in scope, such as a system for self-care tasks around the head [15], or unproven in broader contexts, such as efforts to enable semi-autonomous grasping and placement of objects [16–18].

Another approach to overcoming this challenge has been through interfaces that more directly connect the human brain to the robot via brain-computer interfaces (BCIs). This approach has the potential to result in higher-bandwidth, lower-latency, and more intuitive interfaces to robotic body surrogates. To date, however, these efforts have focused on control of less complex systems than robotic body surrogates and face significant practical challenges. For example, cortical BCIs [19–24] show promise for the control of robot arms, but the interfaces are highly invasive and remain an immature technology [25]. As another example, researchers have enabled a user in an MRI machine to remotely control a humanoid robot [26], but MRI machines are currently impractical for home use.

Our approach is to provide an augmented-reality (AR) interface running in a standard web browser with only low-level robot autonomy. The AR interface uses state-of-the-art visualization to present the robot’s sensor information and options for controlling the robot in a way that people with profound motor deficits have found easy to use. In order to meet the needs of users with profound motor deficits, we used participatory design from the outset, involving people with disabilities in the iterative development of the interface [27, 28]. The standard web browser enables people with profound motor deficits to use the same methods they already use to access the Internet to control the robot. Many commercially-available assistive input devices, such as head trackers, eye gaze trackers, or voice controls, can provide single-button mouse-type input to a web browser. Identifying the most appropriate assistive input device for an individual is a common challenge in assistive technology, and is based upon each individual’s specific deficits. Due to the great value associated with accessing the Internet, people who have profound motor deficits often learn to use a web browser, either via a commercially-available assistive input device or custom accommodations [29]. By limiting the robot’s autonomy to low-level operations, such as tactile-sensor-driven grasping and moving an arm via inverse kinematics to commanded end-effector poses, the robot performs consistently across diverse situations, allowing the user to attempt to use the robot in diverse and novel ways.

Can people with profound motor deficits benefit?

The extent to which people with profound motor deficits can benefit from robotic body surrogates has been unclear. Relevant published work has primarily involved only a small number of participants, often a single participant, making it uncertain how well the results would generalize to other people with profound motor deficits [12, 15, 17, 30–33]. The great diversity of deficits and causes of deficits increases this uncertainty. In addition, there has been a lack of evaluations based upon clinical assessments.

To address these limitations, we conducted two studies to investigate the use of robotic body surrogates. In the first study (Study 1), 15 novice users with profound motor deficits from across the United States controlled a PR2 in Atlanta, GA to perform a modified ARAT and a simulated self-care task. For this study, our experimental design and web-based interface greatly increased the size of the population from which we could recruit, since participants could be located across the country and did not need to travel to the laboratory. Many participants operated the robot from their homes. This made it feasible for us to meet our recruiting goals as determined by a statistical power analysis.

We designed Study 1 to characterize the benefit or lack of benefit provided by a robotic body surrogate in terms of an established clinical assessment (i.e., the ARAT) and to establish a performance baseline for this assessment. In addition, we used a simulated self-care task to assess performance with respect to a task that required both navigation and manipulation. We also measured perceptions of the system’s ease of use, usefulness, and potential to meaningfully improve people’s lives.

We designed the second study (Study 2) to characterize the use of a robotic body surrogate by an expert user with profound motor deficits in his home over an extended period of time. This study complements Study 1 by providing insights into how robotic body surrogates might be used on a daily basis in a real home, whereas Study 1 examined relatively brief use by novices in a controlled laboratory setting. Robotic systems that have only been tested in laboratory settings often fail when tested under real-world conditions, which can lead to misinterpretation of results [34, 35].

Overview

The next section, Methods, presents descriptions of the robot hardware, the user interface, Study 1, and Study 2. The subsequent section, Results, presents the results obtained from Study 1 and Study 2. Then, we provide a section, Discussion, with commentary on various aspects of the research, including limitations of the robotic system and implications for robotic body surrogates. Finally, we provide a brief Conclusion section that summarizes the main outcomes of the work.

Methods

In this section, we present the methods we used for this research, starting with a brief description of the robot hardware followed by a detailed presentation of the interface and descriptions of Study 1 and Study 2.

Robot hardware

For Study 1 and Study 2, we used a PR2 robot running Ubuntu 14.04 and the open-source Robot Operating System (ROS) Indigo Igloo. We modified a standard PR2 by adding dense foam padding around the metal grippers and the mobile base of the robot to reduce potential negative consequences of contact. We also added fabric-based tactile sensing skin [36, 37] around the mobile base and to both upper arms and forearms (Fig 1B and 1C). We adjusted the control gains on the PR2’s backdrivable arms to increase their compliance, and used low-velocity arm motions for additional safety. We added a Microsoft Kinect sensor to the robot’s head, which provides a 1920x1080 pixel RGB color video and a 512x424 pixel depth video. When used with our interface, the robot has 20 controllable degrees of freedom. Instructions for recreating the pressure sensitive fabric skin are available at http://pwp.gatech.edu/hrl/manipulation-with-whole-arm-tactile-sensing/. The source code for our system and interface is available in S1 Source Code. We are aware of at least one research group that has successfully run the system described (without the custom tactile sensors) on another PR2.

Fig 1. The robotic body surrogate (Willow Garage PR2).

(A) The PR2 robot. (B) One of the robot’s seven DoF arms, including the tactile-sensing fabric skin (gray) and foam padding (black) on the metallic gripper. (C) The base of the robot, including tactile-sensing fabric skin (blue), placed atop foam padding.

https://doi.org/10.1371/journal.pone.0212904.g001

A novel web-based augmented reality interface

We developed a novel, web-based augmented reality (AR) interface that maps single-button mouse-type input to motions of a PR2 robot (Fig 1). Users access the interface using a standard web browser [38], simplifying access and enabling control of a robot away from the user, such as in another room or across the country.

Participatory design.

We developed the interface through an iterative, collaborative participatory research process [27, 28, 39–41] with Henry Evans, an individual with severe quadriplegia from a brain stem stroke, and Jane Evans, his wife and primary caregiver.

Researchers have previously noted the importance of including all interested parties, including caregivers, in the development of assistive technology [42, 43]. While some researchers have included users in the design process [44, 45], development efforts more often focus on technical challenges, with feedback from users obtained only during evaluation. We have worked with Henry, Jane, and others to develop assistive robotic technologies since 2011 [15, 17]. Lessons learned from these efforts led to the development of the system reported here.

During the development of this system, we met with Henry and Jane Evans weekly using video-conferencing software. We conducted remote evaluations of new functionality approximately monthly to explore design ideas and receive user feedback. Approximately two times per year, researchers traveled to conduct in-person evaluations in the Evans’s home, culminating in the seven-day evaluation described below in Study 2. We used the insights gained through these discussions and evaluations to identify both user needs and system improvements to enable effective use by individuals from the target population. We have previously described their involvement and some aspects of the web interface design [46], and present the final hardware and software system used in the subsequent evaluations here.

Single-button mouse-type input.

Because many commercially-available assistive input devices can provide single-button mouse-type input to a web browser, designing assistive technology for use with standard mouse-type input simplifies system development, reduces the need to develop specialized interfaces [12], and promotes accessibility across medical conditions, impairments, and preferences (Fig 2). Brain-computer interfaces and other novel assistive input devices can also provide this type of input, making them a complementary technology [19, 24, 47, 48]. Additionally, while designed for use by individuals with motor impairments, this access method is also applicable to non-motor-impaired operators, and so is representative of universal design [49].

Fig 2. Enabling system operation through single-button mouse-type input simplifies design and provides broad accessibility.

Individuals with diverse disease or injury conditions likely have diverse and possibly changing levels of impairment. These individuals may choose to use a variety of commercially-available, off-the-shelf input devices that enable single-button mouse-type input, which can be used to operate our robotic body surrogate. The many possible combinations of disease/injury, impairment, and usable computer interface are connected here by gray lines. These devices make our system accessible across a range of sources of impairment and personal preferences. Also, system developers only need to support a single mode of interaction, reducing development and support effort. Examples: (Blue line) An individual with ALS may have limited hand function and choose to use a head-tracking mouse; (Orange line) An individual with spinal muscular atrophy (SMA) may experience upper-extremity weakness, and prefer the use of a voice-controlled mouse; (Green line) An individual with a spinal cord injury (SCI) may only retain voluntary eye movement, and use an eye-gaze based mouse. All three of these individuals can operate our system without modification, making it accessible across types and sources of motor impairment.

https://doi.org/10.1371/journal.pone.0212904.g002

Video-centric first-person perspective.

For the interface we developed, the user moves the cursor across a video-centric display [31] of a live video stream from the robot’s head-mounted camera, while AR interface elements overlaid on the video show sensor data [50], provide direct manipulation controls [51], and convey how the robot will move when commanded with the mouse button [52]. The video-centric display uses the camera view from the robot’s head to provide the user with feedback on performed actions and context for both planned actions and sensor data.

Other robotic manipulation interfaces, such as ROS RViz, often rely on 3D-rendered displays of the robot and surrounding environment, and require the user to manipulate both the robot and the virtual camera view. In a prior study, we found that even able-bodied users with experience in virtual 3D modeling had difficulty controlling a robot effectively using this type of interface, despite training and a brief practice session [37]. After using a similar interface, the authors of [53] note that “the operator’s comfort with a general 3D GUI and related operations such as positioning a virtual camera proved to be very important” for effective task performance. In contrast, by restricting the user’s perspective to the view from the robot’s head, our interface eliminates the requirement for the user to position and orient a virtual camera in 3D space, and avoids possible confusion caused by multiple simultaneous views. This consistent first-person perspective also aids the user in assuming the role of the robot, as this is similar to the perspective from which individuals experience their daily lives.

Additional sensory feedback.

In addition to the video stream, the interface displays other sensor data from the robot in an integrated manner. For example, the interface uses joint angle sensing and a kinematic model to display interface elements around the robot’s gripper in the video stream (Fig 3). As another example, the interface displays the output of the robot’s pressure sensitive skin. If the fabric-based tactile sensors on the arms or base detect contact, a red dot or square, respectively, appears in the camera view at the location of contact (Fig 4A and 4B). If contact occurs outside of the camera view, the nearest edge or corner of the screen flashes red (Fig 4C).
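
As an illustrative sketch (not the system's implementation), this overlay placement can be viewed as a standard pinhole-camera projection: a 3D point expressed in the camera frame, such as an estimated contact location or the gripper position from the kinematic model, is mapped to pixel coordinates, and points falling outside the image are redirected to the nearest screen edge, as the contact display does. The camera intrinsics and contact point below are assumed example values.

```python
def project_to_image(p_cam, fx, fy, cx, cy, width, height):
    """Project a 3D point in the camera frame (meters) to pixel coordinates.

    Returns (u, v, visible); `visible` is False when the point lies outside
    the image, in which case the caller can flash the nearest edge or corner
    of the display instead of drawing an overlay at (u, v).
    """
    x, y, z = p_cam
    if z <= 0.0:                     # behind the camera: treat as not visible
        return 0.0, 0.0, False
    u = fx * x / z + cx
    v = fy * y / z + cy
    visible = (0 <= u < width) and (0 <= v < height)
    return u, v, visible

def nearest_edge_point(u, v, width, height):
    """Clamp an off-screen projection to the closest point on the image border."""
    return min(max(u, 0.0), width - 1.0), min(max(v, 0.0), height - 1.0)

# Example: a contact estimated 0.4 m right, 0.1 m down, and 1.2 m ahead of the camera
u, v, visible = project_to_image((0.4, 0.1, 1.2),
                                 fx=525.0, fy=525.0, cx=960.0, cy=540.0,
                                 width=1920, height=1080)
if not visible:
    u, v = nearest_edge_point(u, v, 1920, 1080)   # flash this border location red
```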

Fig 3. The end-effector position control ring augmented reality interface with virtual preview (yellow) and goal (green) gripper displays.

(A) The control ring’s rotation remains aligned with the robot’s body. (B) The control ring appears parallel to the floor to convey vertical height. (C) A yellow virtual gripper ‘previews’ commands by displaying the pose the gripper will attempt to reach if commanded. (D) A green virtual gripper displays the gripper’s current goal, and disappears once it reaches this goal.

https://doi.org/10.1371/journal.pone.0212904.g003

Fig 4. Contact displays overlaid on the video interface based on data from the fabric-based tactile sensors.

(A) Contact on the forearm against the table edge. (B) Contact between the robot’s base and the wheelchair. (C) Contact with the robot’s base behind the current field of view.

https://doi.org/10.1371/journal.pone.0212904.g004

To aid depth perception, the interface provides a “3D Peek” feature, which overlays a down-sampled RGB-D point cloud of the volume around the gripper onto the video, using data from the Kinect sensor. The view then virtually rotates, as if the camera were lowered from the robot’s head to the height of the gripper, providing a simulated view from this height (Fig 5). “3D Peek” is available in the “Right/Left Hand” modes described later.
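
The 3D Peek animation can be understood as interpolating a virtual camera pose: the rendering viewpoint starts at the head camera, descends to the gripper's height, and remains aimed at the gripper while the point cloud is re-rendered from each intermediate pose. The Python sketch below shows only this pose interpolation under those assumptions; it is not the system's rendering code.

```python
import numpy as np

def peek_camera_poses(head_pos, gripper_pos, n_frames=10):
    """Interpolate virtual camera positions from the head camera down to the
    gripper's height, keeping the camera aimed at the gripper.

    Returns a list of (camera_position, look_at_target) pairs; rendering the
    point cloud around the gripper from each pose produces the animated
    'lowering' effect.
    """
    head_pos = np.asarray(head_pos, dtype=float)
    gripper_pos = np.asarray(gripper_pos, dtype=float)
    end_pos = head_pos.copy()
    end_pos[2] = gripper_pos[2]              # drop the viewpoint to gripper height
    poses = []
    for t in np.linspace(0.0, 1.0, n_frames):
        cam = (1.0 - t) * head_pos + t * end_pos
        poses.append((cam, gripper_pos))
    return poses
```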

Fig 5. The 3D Peek feature showing the 3D point cloud over the live camera feed, rotated to provide depth perception.

(A) The view before the 3D Peek. (B) ≈ 0.1s into the 3D Peek. (C) ≈ 0.3s into the 3D Peek. (D) 3D Peek view (holds for 2.8s).

https://doi.org/10.1371/journal.pone.0212904.g005

Modal control.

The interface uses modal control, where the same input has a different output depending upon the active mode (Fig 6). Modal control introduces the opportunity for mode errors, where the correct command, given in the wrong mode, produces an undesired result [54], and can create mode-switching delays [55]. However, dividing control of the robot’s degrees of freedom across multiple control modes allows all of the degrees of freedom to be controlled using only a mouse-type input, and also reduces visual clutter and the number of immediately available input options.

Fig 6. The interface used to operate the robotic body surrogate.

(A) ‘Looking’ mode. (B) ‘Spine’ mode. (C) ‘Driving’ mode. (D) ‘Hand position’ mode. (E) ‘Hand rotation’ mode. (F) ‘3D Peek’ depth display.

https://doi.org/10.1371/journal.pone.0212904.g006

We reduce mode-switching delays by making all primary modes selectable from a top-level menu on the left of the screen, and by allowing the server-side control components to run concurrently, which means that mode switching only requires client-side interface changes. To help avoid mode errors, each mode uses visually distinct AR elements to convey the current mapping from the mouse cursor and single button to robot motions (Fig 6, S1 Video), and to display relevant sensor data in the appropriate context in the camera view.

The robot executes step-wise motions for all modes except driving, but the web browser renders the AR elements on the user’s machine. This allows the interface to provide responsive, real-time feedback to the user’s input at all times, while the robot is able to perform at least small actions independently, without the jitter and lag often associated with streaming commands over unreliable networks.
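
A minimal sketch of this modal, step-wise command handling is shown below. The mode names follow the interface, but the `robot` object, function names, and click payload fields are illustrative, not the deployed system's API.

```python
def handle_click(mode, click, robot):
    """Interpret one click according to the active mode.

    Driving streams motion while the button is held; every other mode issues
    a single discrete, bounded command that the robot executes on its own.
    """
    if mode == "looking":
        robot.look_at(click["pixel"])                    # point the head at the clicked pixel
    elif mode == "driving":
        robot.drive_toward(click["ground_point"])        # streamed while the button is held
    elif mode == "spine":
        robot.set_spine_height(click["slider_value"])    # relative height from the slider
    elif mode in ("left_hand", "right_hand"):
        side = "left" if mode == "left_hand" else "right"
        robot.step_gripper(side,
                           click["disk_point"],          # 3D point on the virtual control disk
                           step_size=click["step_size"])
    else:
        raise ValueError(f"unknown mode: {mode}")
```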

The operation modes within the interface are as follows.

  1. “Looking” mode displays the mouse cursor as a pair of eyeballs, and the robot looks toward any point where the user clicks on the video.
  2. “Driving” mode allows users to drive the robot in any direction without rotating, or to rotate the robot in-place in either direction. The robot drives toward the location on the ground indicated by the cursor over the video when the user holds down the mouse button, and three overlaid traces show the selected movement direction, updating in real time. “Turn Left” and “Turn Right” buttons over the bottom corners of the camera view turn the robot in place.
  3. “Spine” mode displays a vertical slider over the right edge of the image. The slider handle indicates the relative height of the robot’s spine, and moving the handle raises or lowers the spine accordingly. These direct manipulation features use the context provided by the video feed to allow the user to specify their commands with respect to the world, rather than the robot, simplifying operation.
  4. “Left Hand” and “Right Hand” modes allow control of the position (Fig 3) and orientation (Fig 7) of the grippers in separate sub-modes, as well as opening and closing the gripper. In either mode, the head automatically tracks the robot’s fingertips, keeping the gripper centered in the video feed and eliminating the need to switch modes to keep the gripper in the camera view.
Fig 7. The end-effector orientation control augmented reality interface with virtual preview (yellow) and goal (green) gripper displays.

(A) 3D virtual orientation controls around end effector. (B) Hovering over the blue arrow hides other arrows and shows yellow preview. (C) After sending a command, a green virtual gripper shows the active goal. (D) Gripper position after rotating to left from (A). (E) Hovering over green arrow hides arrows, shows preview. (F) Gripper position after rotating upward from (E).

https://doi.org/10.1371/journal.pone.0212904.g007

Details for the “Left Hand” and “Right Hand” modes.

The “Left Hand” and “Right Hand” modes are essential for manipulation, playing a critical role in both Study 1 and Study 2. These modes also include a number of novel attributes. As such, we now provide a more detailed description.

Gripper position sub-mode.

To control the gripper’s position, the user clicks on a yellow virtual disk displayed around the gripper. This moves the gripper one step on a horizontal plane toward the corresponding 3D point on the virtual disk. Step sizes can be selected from XS, S, M, and L (1.5, 4, 11, and 25 cm, respectively). The selected step size remains in effect for all movements of the selected hand until adjusted by the user. Inset up and down arrow buttons move the gripper one step vertically up or down. The disk tilts to appear co-planar to the floor and rotates so the top points parallel with the robot’s base, providing additional situational awareness (Fig 3A and 3B). As a whole, this novel interface simplifies Cartesian motions with respect to the environment. The interface element also provides some of the advantages of direct manipulation interfaces [51]. Clicking a location results in the gripper moving toward both the clicked 2D location in the video and the clicked 3D location in the real world, where the real-world 3D location is defined by the clicked point on the virtual disk. In addition, the rendering of the interface element provides cues to the 3D position of the gripper with respect to the camera, since it is rendered as though it were a horizontal object with a fixed orientation relative to the robot’s base. Notably, the interface element does not obscure the user’s view of the center of manipulation.
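
The step computation can be sketched as follows, using the step sizes listed above. The behavior of moving at most one step (or less, if the clicked point is nearer than one step) is an illustrative assumption, not a description of the deployed controller.

```python
import numpy as np

STEP_SIZES_M = {"XS": 0.015, "S": 0.04, "M": 0.11, "L": 0.25}  # step sizes from the text

def position_step_goal(gripper_pos, clicked_disk_point, step="M"):
    """Compute the next Cartesian goal for the gripper.

    The clicked point lies on the horizontal virtual disk around the gripper,
    so only the horizontal components change; vertical motion uses the
    separate up/down arrow buttons. The gripper moves one step toward the
    clicked point, or less if the point is closer than one step.
    """
    gripper_pos = np.asarray(gripper_pos, dtype=float)
    target = np.asarray(clicked_disk_point, dtype=float)
    delta = target - gripper_pos
    delta[2] = 0.0                                       # stay on the horizontal plane
    dist = np.linalg.norm(delta)
    if dist < 1e-6:
        return gripper_pos
    return gripper_pos + (delta / dist) * min(STEP_SIZES_M[step], dist)
```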

Gripper orientation sub-mode.

To control the gripper’s orientation, six colored, virtual arrows around the robot’s wrist move the wrist in the indicated direction when clicked, rotating the gripper about the fingertips (Fig 7). Step sizes can be selected from XS, S, M, and L, each corresponding to a fixed angular increment in radians. These virtual arrows are rendered to always appear in the same location and orientation relative to the gripper as the gripper moves. This display reduces visual clutter around the gripper [56], while providing a consistent interaction, as the gripper will always rotate in the direction indicated by the arrow, unlike alternative interfaces that provide button-pads of directional arrows, where the user must mentally map each command to the current orientation of the gripper.
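
Rotating about the fingertips rather than the wrist can be expressed as a rigid-body rotation about the fingertip point, which keeps a grasped object roughly in place while the hand re-orients around it. The sketch below assumes the clicked arrow's rotation axis is given in the world frame; that framing is an illustrative simplification, not the system's exact formulation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotate_about_fingertips(wrist_pos, wrist_quat, fingertip_pos, axis, angle_rad):
    """Rotate the gripper about its fingertip point.

    `wrist_quat` is the current orientation as an (x, y, z, w) quaternion and
    `axis` is a unit vector (world frame) for the clicked virtual arrow.
    Returns the new wrist position and orientation.
    """
    delta = R.from_rotvec(np.asarray(axis, dtype=float) * angle_rad)
    new_quat = (delta * R.from_quat(wrist_quat)).as_quat()
    offset = np.asarray(wrist_pos, dtype=float) - np.asarray(fingertip_pos, dtype=float)
    new_pos = np.asarray(fingertip_pos, dtype=float) + delta.apply(offset)
    return new_pos, new_quat
```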

Grasping.

In both the position and orientation sub-modes, the user can open or close the gripper using sliders in the bottom left or right corner of the screen. While closing, the gripper attempts to grasp items gently but securely using the method described in [57].

Gripper previews.

When the cursor hovers over any end effector control, a yellow, semi-transparent, virtual gripper appears where the command being considered would send the gripper, providing a preview [16, 52]. This preview allows users to rapidly evaluate the correctness of their currently selected input to help avoid errors or make adjustments without actually moving the robot and possibly causing unintended interactions (such as knocking over a glass). When a command is sent, a green, semi-transparent, virtual gripper appears at the goal location and disappears once the goal is reached.

Study 1: Remote evaluation using a modified action research arm test and a simulated self-care task

Study 1 characterizes the benefit or lack of benefit provided by a robotic body surrogate in terms of a modified version of the ARAT [58–60] and also establishes a performance baseline with respect to this assessment. In addition, Study 1 assesses performance on a simulated self-care task that consists of using the robot to retrieve a water bottle and then bring the tip of the water bottle’s straw to the mouth of a medical mannequin. Study 1 uses questionnaires to measure perceptions of the robotic system.

Our inclusion criteria required that participants be at least 18 years of age, fluent in written and spoken English, able to operate a computer mouse or equivalent assistive device, and score nine or fewer points on the ARAT with both upper limbs. We prescreened participants verbally before enrollment. All participants were compensated $25.00 USD per hour of participation at the end of the study. The Georgia Institute of Technology Institutional Review Board (IRB) approved this study under protocol H13046. We obtained written informed consent from all participants, and all procedures were carried out according to the approved protocol guidelines.

Session 1: Demographics, technology use, and unassisted ARAT.

During the first session, we collected demographic information (age, gender, ethnicity, marital status, highest level of education completed, dominant hand, cause of motor deficit, and date of accident/injury/diagnosis). We asked participants about their prior use (if any) of robots, video games, and computer aided design software, as well as for details about their computer system, Internet bandwidth, and mouse device. We then remotely assessed motor deficit according to the ARAT by asking participants to report their ability to perform each of the ARAT items. The low ARAT score required for inclusion made this remote assessment feasible. If scoring for an item was unclear, we used a conservative score estimate (recording the highest possible score/least impairment). Participants who passed prescreening, but did not meet the criteria for impairment based on the ARAT, were not advanced to the next session.

Session 2: Guided training and practice.

Before the second session, we provided a link to a 10-minute tutorial video introducing the robot and the control interface, and a link to a Fitts’s-law pointing test adapted from [61]. This test allows estimation of the participant’s throughput with their preferred pointing device. During the second session, participants used the web-based interface to operate a PR2 robot through a guided training session, which introduced all the features of the control interface, and included grasping and placing two plastic bottles from a tabletop. Participants then completed a training evaluation task using the robot without guidance. The task required coordinated use of system features to grasp a box from a nearby shelf and return it to a table in the same room. Participants who failed to complete the training task in under 35 minutes did not proceed further in the study. We specifically avoided using ARAT tasks as part of this training and practice, so that participants would not learn skills specific to the ARAT. We required at least one overnight period after the training session before proceeding to the third session.
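
For reference, throughput from a Fitts's-law pointing test is commonly computed from the Shannon formulation of the index of difficulty. The sketch below uses that common convention with placeholder trial values; it may differ in detail from the specific test in [61].

```python
import numpy as np

def fitts_throughput(distances, widths, movement_times_s):
    """Estimate pointing throughput (bits/s) from Fitts's-law trials.

    Uses ID = log2(D / W + 1) bits for each trial and averages ID / MT
    across trials.
    """
    D = np.asarray(distances, dtype=float)
    W = np.asarray(widths, dtype=float)
    MT = np.asarray(movement_times_s, dtype=float)
    ID = np.log2(D / W + 1.0)          # index of difficulty, in bits
    return float(np.mean(ID / MT))     # bits per second

# Example: three placeholder trials with targets 0.20 m away and 0.02 m wide
tp = fitts_throughput([0.20, 0.20, 0.20], [0.02, 0.02, 0.02], [1.6, 1.4, 1.5])
```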

Session 3: Modified ARAT with the robot.

In the third session, participants remotely operated the arm and gripper of the PR2 corresponding to their own dominant arm to complete the ARAT. The base, spine, and other arm controls were unavailable during this portion of the experiment. This makes the test more comparable to the ARAT as administered directly to people, but also reduces the system to nine controllable degrees of freedom (pan/tilt head, open/close gripper, and six degree of freedom gripper pose). We administered the test as closely as possible to published instructions [59], skipping the 10 cm block, flat washer, and ball bearing, which the robot’s gripper cannot grasp. Skipped items were scored as failures (0 points). We treated all finger combinations aside from Thumb and 1st finger as amputations (0 points), as the robot has only a two-finger gripper. We also allowed up to eight minutes to complete each task, rather than the standard one minute, though we required completion in less than five seconds for full points on each item, per [59]. These considerations result in an expected maximum score using the robotic body surrogate of 22/57 possible points on the ARAT. After each task, an automated script returned the robot to the setup configuration. For gross movement items, we positioned the robot near a mannequin in a wheelchair, such that the mannequin’s head was centered along the mid-line between the robot’s center and the shoulder of the arm being tested, facing perpendicularly to the robot, and pointed in the direction of the arm being tested.

After completing the ARAT, we asked participants to complete a debriefing questionnaire. The questionnaire contained the following items, about which we asked participants to rate their agreement using a seven-point scale:

  1. The robotic system is easy to use for performing manipulation tasks.
  2. The robotic system is useful for performing manipulation tasks.
  3. I would prefer the robotic system to a human caregiver for manipulation tasks.

We then asked participants to complete the following sentence from the provided list of options: “Using the robotic system rather than my own arms would make my ability to perform manipulation tasks…” 1. Much worse, 2. Meaningfully worse, 3. A little worse, but not meaningfully, 4. Neither better nor worse, 5. A little better, but not meaningfully, 6. Meaningfully better, 7. Much better. We structured this sentence and options to correspond to the literature on identifying minimal clinically important differences [60]. Finally, we asked the participant to provide “any additional comments or feedback about the system.” We use a 1-tailed Wilcoxon signed rank test to compare ARAT scores using and not using the robot, and a 1-tailed 1-sample Wilcoxon signed rank test to compare improvement and rating scale responses to comparison values.
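
For illustration, both analyses can be run with standard statistical software; the sketch below uses SciPy with placeholder scores rather than study data.

```python
from scipy.stats import wilcoxon

# Placeholder paired scores (not study data): ARAT with the robot vs. with
# the participant's own body.
arat_with_robot = [17, 15, 19, 16, 18]
arat_own_body = [3, 0, 3, 0, 3]

# 1-tailed paired Wilcoxon signed rank test of with-robot vs. without-robot scores.
w_paired, p_paired = wilcoxon(arat_with_robot, arat_own_body, alternative="greater")

# 1-tailed 1-sample test of improvement against a comparison value (e.g., an
# MCID of 12 points), implemented by testing the differences from that value.
mcid = 12
improvements = [w - o for w, o in zip(arat_with_robot, arat_own_body)]
w_mcid, p_mcid = wilcoxon([imp - mcid for imp in improvements], alternative="greater")
```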

Session 4: Simulated self-care task with the robot.

In the fourth and final session, participants remotely controlled the robot, with all controls and all 22 degrees of freedom available, to simulate getting themselves a drink. Participants drove the robot to grasp a water bottle from a shelf, and then brought the tip of a straw in the water bottle to the mouth of a medical mannequin seated in a wheelchair in the same room. To indicate success, we inserted a small, round neodymium magnet in the center of the mannequin’s mouth, behind the rubber skin. We placed an M4 x 6 mm ferrous socket cap machine screw loosely into the end of the straw during the trial, and declared success when the screw adhered to the magnet. This required the screw in the tip of the straw to be < 1 cm from the center of the mannequin’s mouth. Unlike the constrained ARAT evaluation, this task required the participants to operate the complete robotic system, combining mobility, adjustment of the spine height, and fine manipulation for grasping and reaching in a simulated real-world environment. After completing the task, we asked participants the same debriefing items as above, replacing ‘manipulation tasks’ with ‘self-care tasks’ in all items. We also asked "if you had this robotic system in your home, what tasks would you use the robot for in your daily life?"

Enrollment, demographics, and computer access methods.

We enrolled 37 participants after a brief prescreening process. After we fully evaluated their ARAT scores based upon their own physical capabilities, 14 participants did not meet the full inclusion criteria and did not proceed further in the study. Of the remaining 23 participants, two withdrew before completion citing lack of time, and another withdrew citing personal health. One participant with advanced Amyotrophic Lateral Sclerosis (ALS) became ill and passed away before completing participation. One participant did not have sufficient Internet bandwidth to operate the robot remotely, and two participants failed to schedule beyond the first session before the study was ended. One participant failed to complete the training evaluation task in the required 35 minutes in two attempts, after receiving 180 minutes of guided training and practice time. The remaining 15 participants completed the entire study successfully, and all following results derive from these participants.

Participants’ motor deficits arose from six distinct sources: Spinal Muscular Atrophy (n = 6), Muscular Dystrophy (Duchenne or Becker, n = 3), Spinal Cord Injury (n = 3), ALS (n = 1), Arthrogryposis (n = 1), and Dejerine-Sottas disease (n = 1). These 15 participants had an average age of 36.9 ± 8.7 years, included 8 men, and 13 were right-hand dominant. Participants identified themselves as Caucasian (n = 12), African American (n = 2), and Asian (n = 1). Participants had the following levels of education: high school diploma (n = 5), two-year degree (n = 3), four-year degree (n = 4), Master’s degree (n = 2), and Juris Doctorate (n = 1). Participants reported using a computer for 8.46 ± 3.92 (M ± SD, range: 2.0–16.5) hours per day, and all were experienced using their chosen accessible computer input device. Participants used their own computer input devices to remotely operate the robotic body surrogate in Atlanta, GA from locations across the United States (Fig 8).

Fig 8. Locations of the 15 participants with profound motor deficits who remotely operated a robotic body surrogate in Atlanta, GA (star) across long distances.

This evaluation used participants’ own, existing computer hardware and Internet connections, demonstrating our system’s performance with real-world bandwidth and latency constraints. Darker states indicate two participants from that state; lighter states indicate one participant.

https://doi.org/10.1371/journal.pone.0212904.g008

The study included seven distinct computer access methods used to operate the robot: trackball (n = 4), touchpad (n = 3), head tracking with TrackerPro (n = 1) and HeadMouse Extreme (n = 2) brand devices, standard mouse (n = 2), eye-gaze control via Tobii I-15+ (n = 1), voice control via Dragon MouseGrid (n = 1), and using a touchpad via mouthstick stylus (n = 1). Participants demonstrated a throughput of 2.36 ± 1.00 bits/s (M ± SD, range: 0.71–4.58). Data from this study may be viewed in S1 and S2 Datasets.

Study 2: Seven-day in-home evaluation

Study 2 characterizes the use of a robotic body surrogate by an expert user with profound motor deficits in his home over seven days.

The study took place from September 23rd through September 29th, 2016. This n = 1 case study was performed with Henry Evans and his wife and primary caregiver, Jane Evans, whose collaboration was instrumental in the design and development of this system. The Georgia Institute of Technology IRB approved this study under protocol H15170. We carried out all procedures according to the approved guidelines, and we obtained written informed consent from Henry and Jane Evans, including specific consent to publish identifying information. The individuals in this manuscript have also given written informed consent (as outlined in PLOS consent form) to publish these case details.

The expert user.

At the time of the study, Henry Evans was 54 years old, with motor deficits resulting from a brain stem stroke on August 29th, 2002. Henry is mute; he can move his head through a limited range of motion, voluntarily contract his left elbow to a limited degree, and contract his left thumb. He states that he retains full sensation. Henry receives three points on the ARAT with his left arm, and zero with his right. Henry used a head-tracker and a single mouse button to operate the robot in 16 sessions, for a total of 22:30:50 (hh:mm:ss) of use.

System configuration, participant training, and robot maintenance.

For this evaluation, we configured the robot software to run automatically on power-up, bringing the robot to a state where Henry could operate it via the web interface. We provided Henry a link to access the interface on his local network at the beginning of the trial, which he saved as a bookmark. Before the experiment, we reviewed the official Willow Garage safety instructions for the PR2 with both Henry and Jane. We provided them with both digital and hard copies of an eight-page User Instruction Manual which we wrote for their reference during this study, including the safety instructions, use of the interface, and possible procedures they could follow for error recovery. We trained Jane to power up the PR2, to use the PR2’s emergency run-stops, and to use the PlayStation 3 joystick to drive the robot.

Two researchers lived in the participants’ home during the trial period and observed all use of the robot for data collection and participant safety. During the study, researchers plugged and unplugged the robot’s power cord as necessary to allow the participant to move the robot about the house and to maintain sufficient battery charge, and engaged the robot’s safety stop between sessions of use for additional safety. Researchers also connected a power cable to the head of the electric shaver whenever the participant successfully grasped the shaver head with the robot, so that he could operate the tool via the web interface; connecting this cable requires significant manual dexterity, but the step could be eliminated in the future by using a wireless, battery-powered shaver. Researchers did not provide further instructions for or assistance with the operation of the robotic body surrogate during the study period.

Sessions of robot use.

Use of the robot occurred in sessions, where each session consisted of the time from when the participant started to when he stopped the web control interface. Researchers made themselves available on short notice at any point throughout the week, though the participant preferred to schedule use of the robot in advance, identifying periods during the day when he wished to perform certain tasks. This enabled researchers to prepare for data collection in advance, reducing delays or disruptions. During use sessions, researchers recorded observations and video and monitored safety. At the end of each session, we conducted a short debriefing, asking the participant to identify the top-level tasks he attempted during the session, and to rate his success, the usefulness of the robot, and the ease of using the robot, for each task.

During sessions, the robot logged all commands issued from the web interface, though not mouse movements or clicks which did not send commands. The system logged the position of the robot’s joints at 4 Hz, and recorded a 540x960 full-color image from the robot’s camera at 0.25 Hz. Throughout the week, the system logged the calibration status, battery state, run-stop state, any commands issued via the joystick, and all other data reported via the PR2’s diagnostics. Outside of sessions, we suspended formal observation to allow for a more natural and relaxed environment in the home, with the aim of reducing disruption of the participants’ typical routines.

Hierarchical task analysis.

We identified tasks and subtasks performed during the use sessions using a hierarchical-task-analysis-style breakdown based on direct observation, video recordings, data collected from the robot, and user interviews [62]. As a stopping criterion [63], we identified more fine-grained layers of subtasks until the time-stamped data collected on the robot provided the next level of sub-task data (typically at the level of discrete Cartesian movements by either the user or the robot). For example, we break down labeling of a portion of a feeding task into ‘scoop yogurt,’ ‘bring yogurt to mouth,’ and ‘eat yogurt’ steps. We can then further decompose ‘scoop yogurt’ and ‘bring yogurt to mouth’ based upon the individual motion commands sent to the robot, which were automatically recorded and timestamped on the robot. This typically required only 2-3 subtask levels, and enabled evaluation of the large quantities of automatically collected data according to the higher-level tasks with which they were associated. We adjusted the task labeling until the two researchers who had observed the trial and the participant reached consensus.
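
For illustration, such a breakdown can be represented as nested task records whose leaves reference the timestamped commands logged on the robot; the task names, commands, and timestamps below are invented examples, not study data.

```python
feeding_bite = {
    "task": "eat yogurt (one bite)",
    "subtasks": [
        {"task": "scoop yogurt",
         "commands": [                      # (unix time, logged command) pairs
             (1474650001.2, "r_gripper step (M) toward bowl"),
             (1474650009.8, "r_gripper lower (S)"),
             (1474650015.1, "r_gripper rotate wrist (S)"),
         ]},
        {"task": "bring yogurt to mouth",
         "commands": [
             (1474650031.4, "r_gripper step (L) toward user"),
             (1474650052.7, "r_gripper raise (S)"),
         ]},
        {"task": "eat yogurt", "commands": []},   # no robot commands required
    ],
}

def collect_times(node):
    """Gather all command timestamps in a task, descending into subtasks."""
    times = [t for t, _ in node.get("commands", [])]
    for sub in node.get("subtasks", []):
        times.extend(collect_times(sub))
    return times

def task_duration(node):
    """Time spanned by a task's logged commands, in seconds."""
    times = collect_times(node)
    return (max(times) - min(times)) if times else 0.0
```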

Results

In this section, we present the results of Study 1 and Study 2. Both Study 1 and Study 2 were completed successfully and without incident.

Study 1: Remote evaluation using a modified action research arm test and a simulated self-care task (n = 15)

Fifteen participants completed all aspects of Study 1, meeting our recruiting goals as determined by a statistical power analysis. Below, we present the results from the study, organized into an analysis of the novice users’ training time and results from participants controlling the robot to perform the modified ARAT and the simulated self-care task.

Participant training time.

Participants required 54 ± 16 min (M ± SD, range: 34–90 min) to complete the guided training portion, and 18 ± 8 min (M ± SD, range: 8–33.5 min) to complete the training evaluation task. Overall, participants operated the robot for 72 ± 21 min (M ± SD, range: 44–114 min) before performing the ARAT evaluation. In post-hoc analyses we found no significant association between either training time or time to complete the evaluation task and either ARAT scores when operating the robot or improvement in ARAT score when using the robot vs. the participant’s own body (n = 15, p > .05 for Pearson’s r correlations).

Modified ARAT with the robot.

Participants showed significant improvement on the modified ARAT when remotely operating the robotic body surrogate versus the unassisted remote ARAT assessment (Fig 9). Using their own bodies, participants achieved a median score of 3/57 (range: 0–8), with 13/15 participants scoring either 0 or 3 (unable to raise either hand against gravity). Using the robot remotely, participants’ ARAT scores averaged 17.07 ± 2.87 (M ± SD, median: 17, range: 12–22, Fig 9B, S2 Video), a significant improvement (n = 15, W = 120, p = 0.00035). Participants experienced difficulty grasping the smallest object (1 cm marble), and performed best on larger objects that fit easily into the robot’s gripper (e.g. 5 cm wooden block). The score improvement of participants in our study when using the robot also significantly exceeded a conservative estimate (12 points) of the minimal clinically important difference (MCID) (n = 15, W = 96, p = 0.00147, Fig 9C). Van der Lee et al. [64] and Lang et al. [60] report the MCID for the ARAT as 5.7 and 12 points, respectively.

Fig 9. 15 participants with profound motor deficits operated the robotic body surrogate over long distances to perform the Action Research Arm Test (ARAT).

(A) A participant remotely performing an item from the ARAT with the robotic body surrogate: grasping, lifting, and placing a 7.5 cm wooden block. (B) Comparison of participant ARAT scores without (left) and with (right) the robot (n = 15, W = 120, p = 0.00035). (C) ARAT score improvements vs. minimal clinically important difference (MCID) reported in literature [60] (MCID = 12, n = 15, W = 96, p = 0.00147).

https://doi.org/10.1371/journal.pone.0212904.g009

Participants indicated that the body surrogate would provide a significantly meaningful improvement in ability to perform manipulation tasks (n = 15, W = 105, p = 0.00036) and self-care tasks (n = 15, W = 120, p = 0.00024) as quantified by responses to a seven-point rating scale based on Lang’s questionnaire [60], significantly exceeding a rating of ‘better, but not meaningfully’ in both cases (Fig 10). Participants also significantly agreed that the system was both useful and easy to use for both manipulation and self-care tasks (n = 15, p < 0.003 in all four cases, see Fig 11). Participants did not indicate a preference for the robot over a human caregiver for either manipulation tasks (n = 15, W = 30.5, p = 0.856) or self-care tasks (n = 15, W = 51, p = 0.719). For both types of task, the median response was ‘neither agree nor disagree,’ and the mode response was ‘agree’.

Fig 10. Participants indicated that the robotic body surrogate would provide a significantly meaningful improvement in their ability to perform both manipulation tasks (n = 15, W = 105, p = 0.00036) and self-care tasks (n = 15, W = 120, p = 0.00024).

Participants were asked to complete the sentence “using the robotic system rather than my own arms would make my ability to perform [manipulation tasks / self-care tasks]…” using a seven-point scale, with possible responses (based on [60]): 1. Much worse, 2. Meaningfully worse, 3. A little worse, but not meaningfully, 4. Neither better nor worse, 5. A little better, but not meaningfully, 6. Meaningfully better, and 7. Much better. The charts show the distribution of responses to each form of this question. Significance was evaluated using a 1-tailed, 1-sample Wilcoxon signed rank test vs. a rating of 5–‘A little better, but not meaningfully’.

https://doi.org/10.1371/journal.pone.0212904.g010

Fig 11. Participants significantly agreed that the system was both useful (use) and easy to use (ease) for both manipulation tasks (manip.) and self-care tasks (self).

(A) Use-Manip.: W = 120, p = 0.00026. (B) Use-Self: W = 105, p = 0.0004. (C) Ease-Manip.: W = 74, p = 0.0026. (D) Ease-Self: W = 87.5, p = 0.0014. n = 15 for all. Participants were asked to rate their agreement with the statements “the robotic system is (easy to use / useful) for performing (manipulation / self-care) tasks” using a seven-point scale. Allowed responses were: 1. Strongly disagree, 2. Disagree, 3. Somewhat disagree, 4. Neither agree nor disagree, 5. Somewhat agree, 6. Agree, and 7. Strongly agree. Charts show the distribution of responses to each combination of (useful / easy to use) and (manipulation tasks / self-care tasks). Significance was evaluated using a 1-tailed, 1-sample Wilcoxon signed rank test vs. a rating of 4–‘Neither agree nor disagree.’

https://doi.org/10.1371/journal.pone.0212904.g011

Task completion when using our system was slow compared to able-bodied performance. Participants averaged 4:29 ± 1:55 (m:ss, M ± SD) for each completed ARAT item, whereas able-bodied performance without a robot is approximately five seconds or less per item. For individuals with profound motor deficits, however, even slow task performance would increase independence by enabling them to perform tasks for themselves that would otherwise not be possible without assistance.

In post-hoc analyses, we found no significant relationship between any demographic or experience data and either ARAT scores when operating the robot or improvement in ARAT score when using the robot vs. the participant’s own body (n = 15, p > .05 for Pearson’s r correlations). Additionally, we found no significant effect of source of motor deficit, type of pointing device, or throughput with the chosen pointing device on either ARAT scores when operating the robot or improvement in ARAT score when using the robot vs. the participant’s own body (n = 15, p > .05 for Kruskal-Wallis H tests).

Simulated self-care task with the robot.

12 of the 15 participants (80%) successfully completed the simulated self-care task of getting a drink and bringing it to the mouth of a nearby mannequin (Fig 12). Of the three who did not complete the task successfully, two grasped the bottle, but failed to bring the tip of the straw to the mannequin’s mouth with sufficient accuracy before giving up. The third was unable to successfully grasp the water bottle from the shelf before giving up. A researcher then placed the bottle into the robot’s gripper, at which point the participant successfully brought the tip of the straw to the mannequin’s mouth.

Fig 12. 15 participants with profound motor deficits operated the robotic body surrogate over long distances to simulate getting themselves a drink.

(A) The layout of the task room at the beginning of the tasks. The bottle (left) is placed on a shelf, approximately two meters in front of the robot, and the mannequin in a wheelchair is placed nearby. The observing researcher sits in the back of the room. (B) A participant remotely retrieving the water bottle. (C) A participant reaching and rotating the grasped bottle toward the mannequin’s mouth. (D) The straw in the bottle at the center of the mannequin’s mouth, showing the small screw adhered to the magnet behind the mannequin’s mouth, indicating successful completion of the task.

https://doi.org/10.1371/journal.pone.0212904.g012

When successful (14/15, 93.3% of participants), participants grasped and lifted the bottle from the shelf in 647s ± 589s (M ± SD, range: 195s–2329s). When successful (13/15, 86.7% of participants), participants brought the straw tip to the mannequin’s mouth with sufficient accuracy to secure the magnet in 1194s ± 1491s (M ± SD, range: 217s–4407s) after first grasping and lifting the bottle. Participants who completed the full task without assistance (12/15, 80% of participants) completed the task in 1715s ± 1502s (M ± SD, range: 465s–4602s).

Study 2: Seven-day in-home evaluation

We present notable results from Study 2 in this section. We have also included videos and data from the study that can provide additional insight.

Hierarchical task analysis.

Following the seven-day, in-home evaluation, our hierarchical task analysis identified 59 ‘top-level’ tasks performed by Henry during the study. ‘Top-level’ tasks are not subtasks of any other ongoing tasks, but may themselves be composed of one or more sub-tasks. We identified ten types of self-care tasks and seven types of household tasks that Henry performed, many of which Henry performed multiple times during the week, with each instance counting separately toward the 59 total tasks (Fig 13). The self-care tasks included feeding himself yogurt (S3 Video), wiping his mouth, scratching his head, applying lotion to his legs (S4 Video), and shaving part of his face. Henry used the robot both from his bed and his wheelchair. He operated the robot from his bed for 85.9% of the total use time, but preferred to shave himself while seated in his wheelchair, as this allowed the robot to reach both sides of his face.

Fig 13. Henry Evans performed 59 separate tasks, including ten distinct types of self-care task and seven distinct types of household task, during the seven-day in-home evaluation.

This figure shows Henry Evans performing a selection of these tasks, including: (A) Wiping his face. (B) Shaving his face. (C) Flipping a light switch (Henry visible in background). (D) Feeding himself yogurt. (E) Scratching his head. (F) Applying lotion to his legs.

https://doi.org/10.1371/journal.pone.0212904.g013

Detailed hierarchical task data with associated timing information from the seven-day deployment can be found in S3 Dataset.

Human preparation of the environment.

We provided a number of common items, including an electric toothbrush, electric razor, hairbrush, lotion applicator, and silicone spoon adapted with handles designed for secure grasping by the robot’s gripper. Henry requested assistance with preparing the environment for a number of his tasks. He typically asked for tools or other items to be set out in a nearby location where he could grasp them with the robot, such as placing a spoon and a bowl of yogurt beside his bed. This is comparable to the role a caregiver could play. Henry then controlled the robot to collect the items and perform the intended task, including navigating around his home, grasping the items, and positioning the robot appropriately for performing the selected task.

Examples of anticipated tasks.

Although task performance was slow and of variable quality, Henry was able to perform a variety of tasks of his choosing that he would not have been able to perform without assistance. When feeding himself yogurt, Henry was able to achieve approximately one bite every five minutes after practice. When shaving his face, Henry was able to shave his cheeks and jaw effectively, but found it difficult to properly orient the tool to shave his neck and under his chin, and caused some skin irritation in attempting to do so. Henry was able to wipe his mouth effectively, and often did so between bites of yogurt when feeding himself. When applying lotion to his leg, Henry used the robot to remove a blanket covering his legs, and was able to apply lotion to parts of his shins, though he indicated that the lotion applicator tool did not perform as well as he would have liked, stating that “the robot work[ed] better than [the lotion applicator].”

A novel task.

Henry also discovered an unanticipated use for the robot. He controlled the robot to simultaneously hold out a hairbrush to scratch his head and a towel to wipe his mouth (Fig 13A and 13E, S5 Video). This allowed him to remain comfortable for extended periods of time in bed without requesting human assistance (two sessions, approximately 2.5 hours and 1 hour in length). Henry stated that “it completely obviated the need for a human caregiver once the robot was turned on (always the goal),” and that “once set up, it worked well for hours and kept me comfortable for hours.” This was a task that the designers had not anticipated, and it was the most successful use of the robot in terms of task performance and user satisfaction, as the deployed research system provided a clear, consistent benefit to the user and reduced the need for caregiver assistance during these times. Although trained to do so, Jane did not directly operate the robot using the joystick during the week.

Discussion

In this section, we discuss the results and implications of the research.

Recruiting more broadly via remote participation

By using broadly accessible technologies that allow for remote operation, a comparatively large number of individuals with profound motor deficits were able to participate in our studies. This not only strengthens our results, but also demonstrates how these methods can improve future research by more effectively reaching a traditionally underserved population. The participants in our studies have profound motor deficits, comparable to those of participants in studies of highly-invasive BCIs. Because of the difficulty of recruiting participants from this population, many prior studies have had only one or a few participants [19–22, 24]. Similarly, previous studies in assistive robotics have often had few participants and/or participants with less profound motor deficits [12, 31–33], and in some cases have included unimpaired individuals to evaluate assistive technologies [48, 65, 66].

The modified ARAT

Our use of a standardized evaluation from the medical community has a number of benefits. In administering the modified ARAT, we closely follow [59] so that the reported results are meaningful with respect to ARAT results from other contexts and interpretable to others familiar with the test, especially clinicians. The same test can also be used in future robotics studies, allowing performance improvements enabled by alternative software and/or hardware to be assessed by comparison with the data presented here. Furthermore, normative data for ARAT performance provide a reference for comparing our results with able-bodied performance of the same task.

Application of our modified ARAT assessment needs to be performed carefully in order for the results to be pertinent to real-world use of a robotic system. For example, an autonomous system specialized for the specific objects and tasks in the ARAT might result in a favorable score, yet perform poorly during real-world use with common objects and tasks. We mitigated this risk by only incorporating low-level robot autonomy, coupling our ARAT study with a study in a real home, and being careful not to design the interface specifically for the ARAT.

Depth perception

Even with the provided “3D Peek” view, the lack of effective depth perception with this system presents a challenge to remote operation, especially when manipulating small objects. 3D displays may be beneficial, but the required hardware and software are much less common than conventional flat-panel displays, though this may change with time as consumer virtual reality systems become more common.

Interestingly, Henry Evans spent 80% of his time operating the robot within direct line of sight of it. Self-care tasks inherently require that the user be co-located with the robot, which reduces 3D perception issues. It is unclear whether Henry would have used the system outside of his line of sight more frequently if 3D perception were improved.

Time to complete tasks

The time required for task completion with the system exceeded able-bodied performance by a wide margin, to the extent that we needed to adapt the ARAT time limits. This was due to multiple factors, including the slow speed of the robot, the use of discrete Cartesian movements to perform complex manipulations, and the tendency to perform actions carefully, as errors such as knocking over an object are often difficult to correct. Despite this, participant perceptions of usefulness and ease of use were both significantly positive. Perceived ease of use and perceived usefulness have been found to predict acceptance of technologies [67, 68], including robots [69]. This supports the idea that even limited functionality can have a meaningful impact for individuals with profound motor deficits, and it presents opportunities for future research into methods that might improve operation speed.

Due to the slow speed of the robot and the complexity of operation, participants may have been limited less by their assistive pointing device than by the operation of the system itself. This may account for the absence of detected effects between participants’ performance using the robot and their level of impairment, demographics, daily computer use, choice of pointing device, or Fitts’s law throughput. If the system operated more quickly, effects of the chosen computer interface might become evident.
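For context, the Fitts’s law throughput values referred to above follow the general form described in [61]. The sketch below shows the standard Shannon formulation of throughput for a single pointing condition; it is illustrative only, and the example distance, width, and movement time are not drawn from our data.

import math

def fitts_throughput(distance, width, movement_time):
    """Throughput in bits/s for one pointing condition (Shannon formulation).

    distance and width share the same length unit; movement_time is in seconds.
    """
    index_of_difficulty = math.log2(distance / width + 1.0)  # bits
    return index_of_difficulty / movement_time  # bits per second

# Example: a 256-pixel movement to a 32-pixel-wide target completed in 0.9 s
print(f"{fitts_throughput(256, 32, 0.9):.2f} bits/s")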

Task performance and opportunities for improvement

Task performance presents a notable limitation of this system that merits further investigation, as highlighted by Henry’s variable success across tasks. Based on our hierarchical task analysis, we found that the expert user spent considerable time performing tool acquisition and other ancillary tasks, such as driving the robot to and from the location in the home where it remained between sessions of use. In such cases, limited semi-autonomous capabilities, such as automatic tool-grasping or autonomous navigation, might reduce task completion time. Task-specific automation might also assist with performance quality, such as giving the robot the ability to follow surface contours automatically, which could improve performance on tasks such as shaving or applying lotion. Similarly, as shown in prior work, task-specific applications for common tasks such as feeding assistance and shaving assistance can utilize specialized interfaces and task-specific robot autonomy [15, 30].

Ultimately, a combination of approaches may prove beneficial. For example, a robotic body surrogate could provide task-specific automation for common tasks and an interface similar to what we presented for uncommon and novel tasks. In general, methods to reduce the rate of errors and increase operational speed merit further investigation.

Implications for robotic body surrogates

By using the ARAT, we showed that our system provides improvements exceeding the minimal clinically important difference (MCID) established by [60]. This indicates that our system could produce a meaningful improvement in the ability of members of our target population to perform everyday and self-care tasks independently through the use of a robotic body surrogate.

Because of the constrained nature of the ARAT, we also asked participants to perform a more complex simulated self-care task. As this task required the coordinated use of more of the robot’s degrees of freedom than the ARAT, the general success of the novice participants suggests that the system can be used for more complex, real-life tasks requiring both mobility and manipulation.

The ability of remote participants, who never saw the robot in person, to use the system effectively despite limited training and experience supports the system’s ease of use. The training and experience provided in the study are especially limited compared with the experience an individual might gain if this type of technology were available long-term in an in-home setting. It seems likely that users’ understanding of and comfort with the system, and ultimately their performance, would improve with additional experience.

While Study 1 shows that a variety of users with profound motor deficits were able to operate the robotic body surrogate effectively, it does so only in a limited research setting and for highly constrained tasks. In contrast, Study 2 shows that an experienced user with profound motor deficits can operate the robotic body surrogate effectively in person, for a wide variety of tasks, and in a real-world setting. Henry’s use patterns and choice of tasks reflect his particular needs, preferences, and lifestyle, and were influenced by both his daily routine and his caregivers. Within this context, Henry used the robot for a variety of tasks appropriate to his needs and outside the scope of those tested in Study 1. He was also able to identify new opportunities for use.

The specific robotic hardware we used in our study is not sufficiently robust for longer-term use, nor is it cost-effective for consumers. Despite this, our results support the value of this type of robotic body surrogate for increasing user independence and meeting a variety of user needs, including needs unanticipated by designers.

Our results suggest that this assistive technology can be made widely accessible. The participants in both studies have profound, bilateral motor impairments, with most being unable to lift either arm against gravity. We have shown that members of this target population can operate the complex robotic device effectively using off-the-shelf assistive input devices. We would expect diverse users with lesser motor impairments to also be able to use the system effectively, although they may gain less benefit from it.

Conclusion

Overall, we have shown that people with profound motor deficits can effectively use robotic body surrogates at home and in remote locations. The participants in our studies had a variety of impairments and used a web browser with their preferred off-the-shelf assistive input devices, which suggests that this type of assistive technology could be used by a diverse range of people. One participant also operated the robotic body surrogate in a home setting over a seven-day period with only limited assistance, indicating that this assistive technology can operate effectively outside the context of a laboratory evaluation. Together, these results suggest that robotic body surrogates could provide improved independence and self-care self-efficacy for individuals with profound motor deficits.

Supporting information

S1 Video. The robotic body surrogate and augmented reality interface.

The robotic body surrogate, along with the augmented reality (AR) interface used to operate it via a web browser. The video demonstrates all control modes and sensor displays present in the interface.

https://doi.org/10.1371/journal.pone.0212904.s001

(MP4)

S2 Video. A user with profound motor deficits earns 22 points on the ARAT during long-distance operation of the robotic body surrogate.

A remote user operating the robotic body surrogate to earn all 22 points on the ARAT expected to be possible when using the robot remotely. The observing researcher is visible in the background of the video.

https://doi.org/10.1371/journal.pone.0212904.s002

(MP4)

S3 Video. Henry Evans uses the robotic body surrogate to feed himself yogurt.

Henry Evans, lying in bed, operates the PR2 to feed himself a scoop of yogurt. Henry grasped the spoon and scooped the yogurt from the bowl (not shown).

https://doi.org/10.1371/journal.pone.0212904.s003

(MP4)

S4 Video. Henry Evans uses the robotic body surrogate to apply lotion to his legs.

Henry Evans, lying in bed, controls the PR2 in his own home to grasp a lotion applicator, move a blanket from atop his legs, apply lotion to his shins, replace the lotion applicator, and park the robot in an out-of-the-way location.

https://doi.org/10.1371/journal.pone.0212904.s004

(MP4)

S5 Video. Henry Evans uses the robotic body surrogate to wipe his mouth and scratch his head.

Henry Evans, lying in bed, operates the PR2 to wipe his face with a towel and scratch his head with a hairbrush. Henry conceived of this method of assistance, which kept him comfortable for extended periods without caregiver assistance.

https://doi.org/10.1371/journal.pone.0212904.s005

(MP4)

S1 Dataset. Participant data from the ARAT evaluation.

These are the data from the evaluation of the system with 15 users with profound motor deficits. This includes demographic, impairment, and computer access data, data from training sessions, and the reported results for the ARAT, simulated self-care task, and the associated seven-point rating scale responses.

https://doi.org/10.1371/journal.pone.0212904.s006

(XLS)

S2 Dataset. ARAT item data from the ARAT evaluation.

These are the score and timing data from the evaluation of the system with 15 users with profound motor deficits completing the Action Research Arm Test by operating the robotic body surrogate remotely.

https://doi.org/10.1371/journal.pone.0212904.s007

(XLS)

S3 Dataset. Data from the seven-day in-home evaluation.

These are the identified tasks and associated timing information recorded during the seven-day deployment of the system in the home of Henry Evans.

https://doi.org/10.1371/journal.pone.0212904.s008

(XLS)

S1 Source Code. Source code for the robotic system and interface.

This includes all custom code used in the described system and related evaluations.

https://doi.org/10.1371/journal.pone.0212904.s009

(ZIP)

Acknowledgments

We thank Henry and Jane Evans for their numerous contributions to our research. We thank Henry Clever and Newton Chan for assistance in conducting the experiments. We thank Vincent Dureau for his feedback on the interface. We also thank Dr. Lena Ting, Dr. Karen Feigh, Dr. Randy Trumbower, and Dr. Chethan Pandarinath for their comments on early drafts of this manuscript.

References

  1. Robinson-Smith G, Johnston MV, Allen J. Self-care self-efficacy, quality of life, and depression after stroke. Archives of Physical Medicine and Rehabilitation. 2000;81(4):460–464. pmid:10768536
  2. Schulz R, Beach SR. Caregiving as a risk factor for mortality: the Caregiver Health Effects Study. JAMA. 1999;282(23):2215–2219. pmid:10605972
  3. Yamamoto T, Nishino T, Kajima H, Ohta M, Ikeda K. Human Support Robot (HSR). In: ACM SIGGRAPH 2018 Emerging Technologies. SIGGRAPH’18. New York, NY, USA: ACM; 2018. p. 11:1–11:2. Available from: http://doi.acm.org/10.1145/3214907.3233972.
  4. Wise M, Ferguson M, King D, Diehr E, Dymesich D. Fetch & Freight: Standard Platforms for Service Robot Applications; 2016.
  5. Pages J, Marchionni L, Ferro F. TIAGo: the modular robot that adapts to different research needs. In: International Workshop on Robot Modularity, IROS; 2016.
  6. Kinova Inc. Kinova MOVO Datasheet; 2018. Available from: https://www.kinovarobotics.com/sites/default/files/0008_Kinova_MovoBrochure_LetterSize_vPrint_EN_R-Web.pdf.
  7. Robotnik RB-1 Datasheet; 2017. Available from: https://www.robotnik.eu/web/wp-content/uploads//2018/12/DATASHEET_RB-1_EN-2.pdf.
  8. Spenko M, Buerger S, Iagnemma K. The DARPA Robotics Challenge Finals: Humanoid Robots To The Rescue. vol. 121. Springer; 2018.
  9. Kajita S, Ott C. Limbed Systems. In: Siciliano B, Khatib O, editors. Springer Handbook of Robotics. Springer; 2016.
  10. Yanco HA, Norton A, Ober W, Shane D, Skinner A, Vice J. Analysis of Human-robot Interaction at the DARPA Robotics Challenge Trials. Journal of Field Robotics. 2015;32(3):420–444.
  11. Mankoff J, Dey A, Batra U, Moore M. Web accessibility for low bandwidth input. In: Proceedings of the fifth international ACM conference on Assistive technologies. Assets’02. New York, NY, USA: ACM; 2002. p. 17–24.
  12. Bien Z, Chung MJ, Chang PH, Kwon DS, Kim DJ, Han JS, et al. Integration of a Rehabilitation Robotic System (KARES II) with Human-Friendly Man-Machine Interaction Units. Auton Robot. 2004;16(2):165–191.
  13. Murphy RR. Meta-analysis of Autonomy at the DARPA Robotics Challenge Trials. Journal of Field Robotics. 2015;32(2):189–191.
  14. Marion P, Fallon M, Deits R, Valenzuela A, Pérez D’Arpino C, Izatt G, et al. Director: A user interface designed for robot operation with shared autonomy. Journal of Field Robotics. 2017;34(2):262–280.
  15. Hawkins K, Grice P, Chen T, King CH, Kemp C. Assistive Mobile Manipulation for Self-Care Tasks Around the Head. In: 2014 IEEE Symposium on Computational Intelligence in Robotic Rehabilitation and Assistive Technologies (CIR2AT); 2014. p. 16–25.
  16. Ciocarlie M, Hsiao K, Leeper A, Gossow D. Mobile manipulation through an assistive home robot. In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE; 2012. p. 5313–5320.
  17. Chen TL, Ciocarlie M, Cousins S, Grice P, Hawkins K, Hsiao K, et al. Robots for Humanity: Using Assistive Robotics to Empower People with Disabilities. IEEE Robotics & Automation Magazine. 2013;20(1):30–39.
  18. Tidoni E, Gergondet P, Fusco G, Kheddar A, Aglioti SM. The Role of Audio-Visual Feedback in a Thought-Based Control of a Humanoid Robot: A BCI Study in Healthy and Spinal Cord Injured People. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2017;25(6):772–781. pmid:28113631
  19. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442(7099):164–171. pmid:16838014
  20. Hochberg LR, Bacher D, Jarosiewicz B, Masse NY, Simeral JD, Vogel J, et al. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature. 2012;485(7398):372–375. pmid:22596161
  21. Collinger JL, Wodlinger B, Downey JE, Wang W, Tyler-Kabara EC, Weber DJ, et al. High-performance neuroprosthetic control by an individual with tetraplegia. The Lancet. 2013;381(9866):557–564.
  22. Ajiboye AB, Willett FR, Young DR, Memberg WD, Murphy BA, Miller JP, et al. Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration. The Lancet. 2017.
  23. Wodlinger B, Downey J, Tyler-Kabara E, Schwartz A, Boninger M, Collinger J. Ten-dimensional anthropomorphic arm control in a human brain–machine interface: difficulties, solutions, and limitations. Journal of Neural Engineering. 2014;12(1):016011. pmid:25514320
  24. Pandarinath C, Nuyujukian P, Blabe CH, Sorice BL, Saab J, Willett FR, et al. High performance communication by people with paralysis using an intracortical brain-computer interface. eLife. 2017;6:e18554. pmid:28220753
  25. Ajemian R. Neurosurgery: Gentler alternatives to chips in the brain. Nature. 2017;544(7651):416. pmid:28447637
  26. Cohen O, Druon S, Lengagne S, Mendelsohn A, Malach R, Kheddar A, et al. fMRI robotic embodiment: a pilot study. In: Biomedical Robotics and Biomechatronics (BioRob), 2012 4th IEEE RAS & EMBS International Conference on. IEEE; 2012. p. 314–319.
  27. Schuler D, Namioka A. Participatory design: Principles and practices. CRC Press; 1993.
  28. Sanders EBN. From user-centered to participatory design approaches. In: Design and the social sciences. CRC Press; 2003. p. 18–25.
  29. Simpson R, Koester HH, LoPresti E. Research in computer access assessment and intervention. Physical Medicine and Rehabilitation Clinics of North America. 2010;21(1):15. pmid:19951775
  30. Park D, Kim H, Hoshi Y, Erickson Z, Kapusta A, Kemp CC. A multimodal execution monitor with anomaly classification for robot-assisted feeding. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2017. p. 5406–5413.
  31. Tsui K, Dalphond J, Brooks D, Medvedev M, McCann E, Allspaw J, et al. Accessible Human-Robot Interaction for Telepresence Robots: A Case Study. Paladyn: Journal of Behavioral Robotics. 2015;6(2).
  32. Wang H, Candiotti J, Shino M, Chung CS, Grindle GG, Ding D, et al. Development of an advanced mobile base for personal mobility and manipulation appliance generation II robotic wheelchair. The Journal of Spinal Cord Medicine. 2013;36(4):333–346. pmid:23820149
  33. Soekadar SR, Witkowski M, Gómez C, Opisso E, Medina J, Cortese M, et al. Hybrid EEG/EOG-based brain/neural hand exoskeleton restores fully independent daily living activities after quadriplegia. Science Robotics. 2016;1(1).
  34. Alaiad A, Zhou L, Koru G. An exploratory study of home healthcare robots adoption applying the UTAUT model. International Journal of Healthcare Information Systems and Informatics (IJHISI). 2014;9(4):44–59.
  35. Sung J, Grinter RE, Christensen HI. Domestic robot ecology. International Journal of Social Robotics. 2010;2(4):417–429.
  36. Jain A, Killpack MD, Edsinger A, Kemp CC. Reaching in Clutter with Whole-Arm Tactile Sensing. International Journal of Robotics Research. 2013;32(4):458–482.
  37. Grice PM, Killpack MD, Jain A, Vaish S, Hawke J, Kemp CC. Whole-arm tactile sensing for beneficial and acceptable contact during robotic assistance. In: Rehabilitation Robotics (ICORR), 2013 IEEE International Conference on. IEEE; 2013. p. 1–8.
  38. Toris R, Kammerl J, Lu D, Lee J, Jenkins O, Osentoski S, et al. Robot Web Tools: Efficient Messaging for Cloud Robotics. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2015.
  39. Cornwall A, Jewkes R. What is participatory research? Social Science & Medicine. 1995;41(12):1667–1676.
  40. Viswanathan M, Ammerman A, Eng E, Gartlehner G, Lohr KN, Griffith D, et al. Community-based participatory research: assessing the evidence. Evidence Report/Technology Assessment. 2004;99:1–8.
  41. Cargo M, Mercer SL. The Value and Challenges of Participatory Research: Strengthening Its Practice. Annual Review of Public Health. 2008;29(1):325–350. pmid:18173388
  42. Kintsch A, DePaula R. A framework for the adoption of assistive technology. SWAAAC 2002: Supporting learning through assistive technology. 2002.
  43. Wilkinson CR, Angeli AD. Applying user centred and participatory design approaches to commercial product development. Design Studies. 2014;35(6):614–631.
  44. Wagner J, Van der Loos H, Smaby N, Chang K, Burgar C. ProVAR assistive robot interface. In: Proceedings of ICORR. vol. 99; 1999. p. 250–254.
  45. Green A, Huttenrauch H, Norman M, Oestreicher L, Eklundh KS. User centered design for intelligent service robots. In: Proceedings 9th IEEE International Workshop on Robot and Human Interactive Communication. IEEE RO-MAN 2000 (Cat. No.00TH8499); 2000. p. 161–166.
  46. Grice PM, Kemp CC. Assistive mobile manipulation: Designing for operators with motor impairments. In: RSS 2016 Workshop on Socially and Physically Assistive Robotics for Humanity; 2016.
  47. Townsend G, LaPallo B, Boulay C, Krusienski D, Frye G, Hauser C, et al. A novel P300-based brain–computer interface stimulus presentation paradigm: moving beyond rows and columns. Clinical Neurophysiology. 2010;121(7):1109–1120. pmid:20347387
  48. Jain S, Farshchiansadegh A, Broad A, Abdollahi F, Mussa-Ivaldi F, Argall B. Assistive robotic manipulation through shared autonomy and a Body-Machine Interface. In: Rehabilitation Robotics (ICORR), 2015 IEEE International Conference on; 2015. p. 526–531.
  49. Iwarsson S, Ståhl A. Accessibility, usability and universal design—positioning and definition of concepts describing person-environment relationships. Disability and Rehabilitation. 2003;25(2):57–66. pmid:12554380
  50. Zalud L. ARGOS-system for heterogeneous mobile robot teleoperation. In: Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on. IEEE; 2006. p. 211–216.
  51. Hutchins E, Hollan J, Norman D. Direct Manipulation Interfaces. Human-Computer Interaction. 1985;1(4):311–338.
  52. Chou W, Wang T. The design of multimodal human-machine interface for teleoperation. In: Systems, Man, and Cybernetics, 2001 IEEE International Conference on. vol. 5; 2001. p. 3187–3192.
  53. Leeper A, Hsiao K, Ciocarlie M, Takayama L, Gossow D. Strategies for human-in-the-loop robotic grasping. In: Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction. ACM; 2012. p. 1–8.
  54. Norman DA. The Design of Everyday Things—Revised and Expanded Edition. New York, NY: Basic Books; 2013.
  55. Herlant L, Holladay R, Srinivasa S. Assistive Teleoperation of Robot Arms via Automatic Time-Optimal Mode Switching. In: Human-Robot Interaction; 2016.
  56. Rosenholtz R, Li Y, Nakano L. Measuring visual clutter. Journal of Vision. 2007;7(2):17. pmid:18217832
  57. Romano JM, Hsiao K, Niemeyer G, Chitta S, Kuchenbecker KJ. Human-Inspired Robotic Grasp Control With Tactile Sensing. IEEE Transactions on Robotics. 2011;27(6):1067–1079.
  58. Lyle RC. A performance test for assessment of upper limb function in physical rehabilitation treatment and research. International Journal of Rehabilitation Research. 1981;4(4):483–492. pmid:7333761
  59. Yozbatiran N, Der-Yeghiaian L, Cramer SC. A standardized approach to performing the action research arm test. Neurorehabilitation and Neural Repair. 2008;22(1):78–90. pmid:17704352
  60. Lang CE, Edwards DF, Birkenmeier RL, Dromerick AW. Estimating minimal clinically important differences of upper-extremity measures early after stroke. Archives of Physical Medicine and Rehabilitation. 2008;89(9):1693–1700. pmid:18760153
  61. Soukoreff RW, MacKenzie IS. Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts’ law research in HCI. International Journal of Human-Computer Studies. 2004;61(6):751–789.
  62. Annett J. Hierarchical task analysis. Handbook of Cognitive Task Design. 2003;2:17–35.
  63. Felipe SK, Adams AE, Rogers WA, Fisk AD. Training Novices on Hierarchical Task Analysis. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2010;54(23):2005–2009.
  64. van der Lee JH, de Groot V, Beckerman H, Wagenaar RC, Lankhorst GJ, Bouter LM. The intra- and interrater reliability of the action research arm test: A practical test of upper extremity function in patients with stroke. Archives of Physical Medicine and Rehabilitation. 2001;82(1):14–19. https://doi.org/10.1053/apmr.2001.18668 pmid:11239280
  65. Meng J, Zhang S, Bekyo A, Olsoe J, Baxter B, He B. Noninvasive electroencephalogram based control of a robotic arm for reach and grasp tasks. Scientific Reports. 2016;6:38565. pmid:27966546
  66. Gopinath D, Jain S, Argall BD. Human-in-the-Loop Optimization of Shared Autonomy in Assistive Robotics. IEEE Robotics and Automation Letters. 2017;2(1):247–254. pmid:30662953
  67. Davis FD. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly. 1989;13(3):319–340.
  68. Venkatesh V, Davis FD. A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science. 2000;46(2):186–204.
  69. Ezer N, Fisk A, Rogers W. Attitudinal and intentional acceptance of domestic robots by younger and older adults. Universal Access in Human-Computer Interaction: Intelligent and Ubiquitous Interaction Environments. 2009; p. 39–48.