Article

Human–Robot–Environment Interaction Interface for Smart Walker Assisted Gait: AGoRA Walker

by Sergio D. Sierra M. 1, Mario Garzón 2, Marcela Múnera 1 and Carlos A. Cifuentes 1,*
1 Department of Biomedical Engineering, Colombian School of Engineering Julio Garavito, Bogota 111166, Colombia
2 INRIA, University Grenoble Alpes, Grenoble INP, 38000 Grenoble, France
* Author to whom correspondence should be addressed.
Sensors 2019, 19(13), 2897; https://doi.org/10.3390/s19132897
Submission received: 11 April 2019 / Revised: 29 May 2019 / Accepted: 27 June 2019 / Published: 30 June 2019
(This article belongs to the Special Issue Assistance Robotics and Biosensors 2019)

Abstract

The constant growth of the population with mobility impairments has led to the development of several gait assistance devices. Among these, smart walkers have emerged to provide physical and cognitive interactions during rehabilitation and assistance therapies, by means of robotic and electronic technologies. In this sense, this paper presents the development and implementation of a human–robot–environment interface on a robotic platform that emulates a smart walker, the AGoRA Walker. The interface includes modules such as a navigation system, a human detection system, a safety rules system, a user interaction system, a social interaction system and a set of autonomous and shared control strategies. The interface was validated through several tests on healthy volunteers with no gait impairments. The platform performance and usability were assessed, finding natural and intuitive interaction across the implemented control strategies.

1. Introduction

Human mobility is a complex behavior that involves not only the musculoskeletal system but also dissociable neuronal systems. These systems control gait initiation, planning, and execution, while adapting them to satisfy motivational and environmental demands [1]. However, there are some health conditions and pathologies that affect key components of mobility [2] (e.g., gait balance, control, and stability [3]). Among these pathologies, Spinal Cord Injury (SCI), Cerebral Palsy (CP) and Stroke are found to be strongly related to locomotion impairments [4]. Likewise, the progressive deterioration of cognitive functions [1] (i.e., sensory deficits and coordination difficulties [5]) and the neuromuscular system in the elderly [6] (i.e., loss of muscle strength and reduced effort capacity [5]) are commonly related to the partial or total loss of locomotion capacities.
Moreover, according to the World Health Organization (WHO), the proportion of the mobility-impaired population has been experiencing constant and major growth [7]. Specifically, nearly 15% of the world’s population experience some form of disability [8], and by 2050 the proportion of the world’s population over 60 years will nearly double from 12% to 22% [9,10]. These studies also report that a larger percentage of this growth will take place in developing countries [9]. Although these populations may present different types of disability, mobility impairments have been identified as a common condition in elderly populations and people with functioning and cognitive disabilities [5,11,12]. Considering this, several rehabilitation and assistance devices have been developed to retrain, empower or provide the affected or residual locomotion capacities [13].
Devices such as canes, crutches, walkers, and wheelchairs, as well as ambulatory training devices, are commonly found in assisted gait and rehabilitation scenarios [14] and are intended to improve users’ quality of life. Concretely, mobility assistive devices are aimed at overcoming and compensating physical limitations by maintaining or improving the individual’s functioning and independence in both clinical and everyday scenarios [15]. Regarding conventional walkers, these devices exhibit simple and affordable mechanical structures, as well as partial body weight support and stability. However, natural balance, the user’s energetic costs, fall prevention and security are often compromised with conventional walkers [16]. Moreover, several issues related to sensory and cognitive assistance, often required by people with physical limitations, are not completely addressed by conventional devices [17,18,19]. Accordingly, to overcome such problems, robotic and electronic technologies have been integrated, leading to the emergence of intelligent walkers or Smart Walkers (SWs).
The SWs are often equipped with actuators and sensory modalities that provide biomechanical monitoring mechanisms and estimators of the individual’s intentions for user interaction, as well as several control strategies for movement and assistance level control [16]. Likewise, path following modules are usually included, in addition to safety rules and fall prevention systems [20]. These features enable SWs to interact in dynamic and complex environments. The particular selection and implementation of such features can be referred to as Human–Robot Interaction (HRI) interfaces [21]. Notwithstanding, Human–Robot–Environment Interaction (HREI) interfaces are required, in such a way that they provide natural user interactions, as well as effective environment sensing and adaptation while maintaining safety requirements.
In this context, the design and implementation of a multimodal HREI interface for an SW is presented. This implementation improves upon previous HRI interfaces on SWs by providing safety, natural user interactions and robust environment interactions. The HREI was focused on the development of shared control strategies (i.e., natural and intuitive user interaction while multiple systems are running), as well as on the implementation of a robust Robot–Environment Interaction (REI) interface (i.e., a safety system for collision prevention, a navigation system and a social interaction system). Moreover, the interaction interface was equipped with several strategies for therapy management and supervision by a technical or health care professional. To this end, several robotic and image processing techniques, as well as different control strategies, were implemented. The navigation and human detection systems were aimed at providing the SW with social interaction and social acceptance capabilities. Additionally, the user interaction systems and shared control strategies sought to provide a more natural, intuitive and comfortable interaction.
The remainder of this work is organized as follows. Section 2 describes the existing HRI and REI interfaces implemented on several SWs. Section 3 presents the proposed HREI interface and the platform description. Since the HREI interface is composed of an HRI interface and a REI interface, Section 4 describes the systems and modules for HRI on the AGoRA Walker, and Section 5 presents the systems for environment and social interaction (i.e., the REI interface). Thereafter, Section 6 details the different control strategies implemented on the HREI interface, while Section 7 presents the experimental tests conducted to assess the interface performance. Finally, Section 8 expresses the conclusions and relevant findings of this work and mentions proposals for future research.

2. Related Work

Reviewing literature evidence, several SWs and walker-based robotic platforms have introduced HRI and REI interfaces. Generally, these systems are aimed at assessing the user’s state (i.e., biomechanical and spatiotemporal parameters), the user’s intentions of movement and environment constraints. Likewise, these interfaces and interaction systems are commonly aimed at providing effectiveness, comfort, safety and different control strategies during rehabilitation and assistance tasks. For this purpose, some sensory modalities are frequently implemented, such as potentiometers, joysticks, force sensors, voice recognition modules and scanning sensors [20]. Some of these HRI and REI interfaces are shown in Table 1, where the SWs are characterized by their type (i.e., active for motorized walkers and passive for non-motorized walkers), the sensors used, the internal modules (i.e., main reported functionalities or systems), the reported modes of operation, the implemented shared control strategies and their social interaction capabilities (i.e., specific strategies for people avoidance or interaction).
One of the most notable smart walkers is CO-Operative Locomotion Aide (COOL Aide), which is a three-wheeled passive SW [36] intended to assist the elderly with routine walking tasks. It includes mapping and obstacle detection systems, as well as navigation and guidance algorithms. Additionally, it is equipped with force sensors on its handlebars and a Laser Range Finder (LRF) to estimate the user’s desired direction to turn. Although it is a passive walker, shared control strategies are achieved by granting walker control to the platform or the user.
Other passive walkers, such as those presented in [37,38], include navigation and guidance algorithms in conjunction with shared control systems. These strategies are based on sharing the steering control between the user and the walker.
Different approaches on active SWs have been developed in the past few years regarding HRI and REI interfaces [21,22,23,24,25,26,28,29,30,31,33]. These interfaces are also equipped with navigation and user interaction systems to provide shared control capabilities. Such strategies are based on granting walker steering to the user or the SW, depending on the obstacle detection and navigation systems, as well as on changing the walker responses to user’s commands (i.e., some strategies are based on inducing the user’s actions through haptic communication channels). To this end, user interaction systems are required to manage how user’s intentions of movement are interpreted. The estimation of such intentions is commonly achieved by admittance control systems, gait analysis systems, and rule-based algorithms.
In addition, other robotic walkers have been reported in the literature, including different HRI interfaces [41,42,43,44]. For instance, the approach developed by Ye et al. [42] includes a width changeable walker that adapts to the user’s intentions and environment constraints. Likewise, some REI interfaces have been presented in [45,46,47]. These approaches intend to assess the environment information to adapt their control strategies. Finally, regarding social interaction approaches, the c-Walker [40] includes a social force model that represents pedestrians and desired trajectory paths as repulsive or attractive objects, respectively. Although the c-Walker presents both shared control strategies and social interaction, it is a passive walker and its shared strategy is based on brakes control and shared steering of the platform.
According to the above, this work presents the implementation of an HREI interface in order to join the multiple advantages of the current HRI and REI interfaces on the AGoRA Smart Walker. The AGoRA Walker is equipped with a sensory and actuation interface that enables the implementation of several functionalities for HRI and REI, as well as a set of control strategies for shared control and social interaction. Moreover, the developed interface is equipped with a robust navigation system, a user interaction system (i.e., a gait analyzer module and a user’s intention detector), a low-level safety system, a people detection system for social interaction, and a safe strategy for shared control of the walker.

3. Human–Robot–Environment Interaction (HREI) Interface

3.1. Robotic Platform Description

According to the different motivations and related approaches presented in Section 1 and Section 2, this work covers the design, development, and implementation of a set of control strategies and interaction systems that establish an HREI interface on a robotic walker. Hence, a robotic platform was adapted to emulate the structural frame of a conventional assistance walker, by attaching two forearm support handlebars to the platform’s main deck. Specifically, the Pioneer LX research platform (Omron Adept Technologies, Pleasanton, CA, USA), named the AGoRA Smart Walker, was used to implement and test the interface systems. The platform is equipped with an onboard computer running a Linux operating system distribution providing support for the Robot Operating System (ROS) framework.
As shown in Figure 1a, several sensory modalities, actuators, and processing units were implemented and integrated on the AGoRA Smart Walker. The platform is equipped with: (1) two motorized wheels and two caster wheels for the walker’s propulsion and stability; (2) two encoders and one Inertial Measurement Unit (IMU) to measure the walker’s ego-motion; (3) a 2D Light Detection and Ranging (LiDAR) sensor (S300 Expert, SICK, Waldkirch, Germany) for environment and obstacle sensing; (4) two ultrasonic boards (one in the back and one in the front) for user presence detection and low-rise obstacle detection; (5) two tri-axial load cells (MTA400, FUTEK, Irvine, CA, USA) used to estimate the user’s navigation commands; (6) one HD camera (LifeCam Studio, Microsoft, Redmond, WA, USA) to sense people’s presence in the environment; and (7) a 2D Laser Range Finder (LRF) (Hokuyo URG-04LX-UG01, Osaka, Japan) for the estimation of the user’s gait parameters.
Additionally, to leverage the AGoRA Smart Walker’s processing capabilities, an external computer is used for running several non-critical systems. The communication with the external CPU can be achieved through the walker’s Ethernet and Wi-Fi modules.
As shown in Figure 1b, the position of the force sensors on the platform’s deck is not vertically aligned with the actual supporting points of the user on the handlebars. Essentially, the forces in the y- and z-axes read by the sensors (i.e., $F_{y}^{Right}$, $F_{y}^{Left}$, $F_{z}^{Right}$ and $F_{z}^{Left}$) will be a combination of the forces in the y- and z-axes at the supporting points (i.e., $F_{sp_y}^{Right}$, $F_{sp_y}^{Left}$, $F_{sp_z}^{Right}$ and $F_{sp_z}^{Left}$). The forces in the x-axis (i.e., $F_{x}^{Right}$, $F_{x}^{Left}$, $F_{sp_x}^{Right}$ and $F_{sp_x}^{Left}$) are discarded, as they do not provide additional relevant information.

3.2. Interface Design Criteria

The HREI interface presented in this work takes into account several sensor modalities and control strategies to fulfill several design requirements. The design criteria are grouped in the HRI and REI interfaces that compose the final HREI interface:
  • HRI Interface functions:
    Recognition of user–walker interaction forces. The interaction forces between the user and the platform are required to analyze the physical interaction between them.
    Estimation of user’s navigation commands. To provide a shared control strategy, as well as a natural and intuitive HRI, the walker needs to be compliant to the user’s intentions of movement.
    Detection of user’s presence and support on the walker. To ensure safe HRI, the walker movement should only be allowed when the user is properly interacting with it (i.e., partially supported on the platform and standing behind it).
    Estimation of user’s gait parameters. To adapt the walker’s behavior to each user gait pattern, several gait parameters are computed and analyzed.
    Implementation of control strategies. To provide walker natural response to user’s intentions of movement, it is required to introduce control strategies based on physical HRI between the user and the walker.
  • REI Interface functions:
    Implementation of a robust navigation system. To provide a safe and effective REI, the implementation of navigation capabilities is required. Such functions include: map building and editing, autonomous localization and path planning.
    Walker motion control. The execution of desired movements on the walker relies on the low-level motion control provided by the robotic platform previously described.
    Detection of surrounding people. The navigation system is able to sense obstacles (e.g., people, fixed obstacles and moving obstacles) in the environment as simple physical objects. Therefore, to provide social interaction capabilities between the walker and surrounding people, it is necessary to differentiate among those types of obstacles.
    Path adaptation due to social spacing. To ensure social interaction, the detected surrounding people should modify or adapt the results from the path planning system.
    Security restrictions. A low-level security system is required to ensure safe interaction, even under failure or malfunction of previously described systems.
  • Additional functions:
    Remote control by therapy supervisor. The therapy manager should be able to modify the walker parameters, as well as to set the desired control strategy.
    Emergency braking system. To provide an additional safety system, the platform should be equipped with an emergency system based on an external input that completely stops the walker.
    Session’s data recording. The platform should be equipped with a storage system for data recording, in such a way that the information is available for further analysis.
According to the above, Figure 2a illustrates the most relevant systems provided by the HRI and REI interaction interfaces included in our approach.

3.3. Interface Communication Channels

Relying on the different interface functions, there are some notable communication channels that provide information exchange between them, as shown in Figure 2b. The communication channels involved in the HREI interaction are described as follows:
  • User–Walker physical and cognitive channel. Through this communication channel, the walker’s sensors assess the user’s information (i.e., navigation commands, interaction forces, body weight support and gait parameters). Similarly, the user is able to sense the walker’s behavior through mechanical impedance, safety restrictions, guidance, and response to navigation commands.
  • Walker–Environment sensory and social channel. The walker’s behavior is also a result of the information retrieved from the environment (e.g., obstacles and the presence of people). Such information is used by the walker’s systems to accomplish obstacle avoidance, safety provision, and social interaction.
  • Manager–Walker supervising channel. A therapy manager is able to remotely assess the session data, as well as override or control walker behavior, if required.
  • Manager–Environment supervising channel. The environment is also sensed by the therapy manager through a natural communication channel (i.e., visual supervision). Such natural sensing allows the manager to set and control the walker’s behavior.
  • User–Walker–Environment visual channel. Relying on the visual faculty of the user, the environment and the walker’s behavior are cognitively sensed by the user. This natural communication channel takes place during the HREI loop; however, it is not addressed or included in the HREI control strategies.
The following sections describe the systems that compose each interaction interface (i.e., HRI interface and REI interface), as well as the proposed control strategies.

4. HRI Interface

Based on the physical interaction between the user’s upper limbs and the walker’s handlebars, the HRI interface is composed of two systems: (A) a gait parameters estimator; and (B) a user’s intention detector.

4.1. Gait Parameters Estimator

During gait, the movements of the human trunk and center of mass describe oscillatory displacements in the sagittal plane [48]. Thus, in walker-assisted gait, the interaction forces between the user and the walker handlebars are associated with the movements of the user’s upper body [44].
In this sense, to implement a proper control strategy based on such interaction forces, a filtering and gait parameter extraction process is required. In this way, the user’s intentions of movement and navigation commands can be estimated more easily and are less likely to be misinterpreted.
According to the above, to carry out the filtering processes, a gait cadence estimator (GCE) was implemented. The GCE addresses the gait modeling problem, which is reported in the literature to be solved with several applications of the Kalman filter and adaptive filters [49]. In fact, the Weighted-Fourier Linear Combiner (WFLC) is an adaptive filter for tracking quasi-periodic signals [49], such as gait-related signals (e.g., the interaction force on the walker’s handlebars). Therefore, based on the on-line method proposed by Frizera-Neto et al. [50], a GCE was integrated into the HRI interface. This method uses a WFLC to estimate gait cadence from upper-body interaction forces.
The two vertical forces (i.e., $F_{z}^{Right}$ and $F_{z}^{Left}$) are averaged to obtain a final force, $F_{CAD} = (F_{z}^{Right} + F_{z}^{Left})/2$. The resulting force, $F_{CAD}$, is first passed through a band-pass filter with experimentally obtained cutoff frequencies of 1 Hz and 2 Hz. This filter eliminates the signal’s offset and high-frequency noise (i.e., mainly due to vibrations between the walker structure and the ground). The filtered force $F_{CAD}$ is then fed to the WFLC, in order to estimate the frequency of its first harmonic. This frequency represents the gait cadence, which is the final output of the GCE. The process is illustrated in Figure 3.
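As a concrete illustration of this stage, the sketch below applies the standard WFLC recursion reported in [49,50] to the averaged vertical force. It is a minimal example rather than the actual AGoRA Walker code: the adaptive gains, the harmonic order and the initial frequency estimate are assumed values.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def bandpass_1_2hz(signal, fs):
    """Band-pass filter (1-2 Hz) removing offset and high-frequency vibration noise."""
    sos = butter(2, [1.0, 2.0], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, signal)

def wflc_cadence(f_cad, fs, n_harmonics=1, mu_freq=1e-4, mu_amp=0.01, f0_init=1.5):
    """Weighted-Fourier Linear Combiner: tracks the fundamental frequency (gait
    cadence, in Hz) of a quasi-periodic signal. Gains and f0_init are illustrative."""
    f_cad = np.asarray(f_cad, dtype=float)
    M = n_harmonics
    w0 = 2.0 * np.pi * f0_init / fs          # frequency estimate (rad/sample)
    weights = np.zeros(2 * M)                # sine/cosine amplitude weights
    harmonics = np.arange(1, M + 1)
    phase = 0.0
    cadence = np.zeros_like(f_cad)
    for k, s in enumerate(f_cad):
        phase += w0
        x = np.concatenate([np.sin(harmonics * phase), np.cos(harmonics * phase)])
        err = s - weights @ x                # prediction error
        # Frequency update (standard WFLC recursion)
        grad = np.sum(harmonics * (weights[:M] * x[M:] - weights[M:] * x[:M]))
        w0 += 2.0 * mu_freq * err * grad
        weights += 2.0 * mu_amp * err * x    # LMS update of the amplitude weights
        cadence[k] = w0 * fs / (2.0 * np.pi)
    return cadence

# Usage sketch: f_cad = (Fz_right + Fz_left) / 2, sampled at fs Hz
# cadence_hz = wflc_cadence(bandpass_1_2hz(f_cad, fs), fs)
```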
According to several experimental trials, the users exerted significant forces related to their intentions of movement along the y-axis (i.e., $F_{y}^{Left}$ and $F_{y}^{Right}$; see Figure 1b). It was also observed that the user’s navigation commands were mainly contained within the y-axis forces. Therefore, the x-axis forces (i.e., $F_{x}^{Left}$ and $F_{x}^{Right}$; see Figure 1b) were discarded. As previously stated, the interaction force signals require a filtering process to remove high-frequency noise and signal offset [50]. Thus, a fourth-order Butterworth low-pass filter was used.
To eliminate gait components from the interaction force signals along the y-axis, a Fourier Linear Combiner (FLC) filter was implemented in conjunction with the GCE. This integration is illustrated in the filtering system (FS) diagram shown in Figure 4. The FS is independently applied to both the left and right forces, obtaining the filtered forces $\hat{F}_{y}^{Left}$ and $\hat{F}_{y}^{Right}$. Thus, in Figure 4, $F_{y\Phi}$ denotes either $F_{y}^{Left}$ or $F_{y}^{Right}$, and $\hat{F}_{y\Phi}$ denotes either $\hat{F}_{y}^{Left}$ or $\hat{F}_{y}^{Right}$. The final output $\hat{F}_{y\Phi}$ of the FS is calculated as the difference between the signal resulting from the low-pass filter (i.e., $F_{y\Phi}^{LP}$) and the output of the FLC (i.e., $F_{y\Phi}^{CAD}$, the cadence component obtained from each $F_{y\Phi}$ signal).
As shown in Figure 4, the order M of the FLC filter was experimentally set to 2, and a 0.5 gain was added between the GCE’s output and the FLC’s frequency input. This gain was set to filter any additional harmonics produced by asymmetrical supporting forces [51]. Moreover, an adaptive gain μ of 0.008 was used.
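The FS stage can be sketched as follows, assuming a standard M-harmonic FLC driven by the GCE output. The values M = 2, the 0.5 frequency gain and μ = 0.008 follow the text above, while the low-pass cutoff frequency is an assumed placeholder.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def filtering_system(fy, cadence_hz, fs, M=2, mu=0.008, freq_gain=0.5, lp_cutoff=2.0):
    """Filtering System (FS) sketch: low-pass the y-axis force, then subtract the
    gait-related component estimated by an M-harmonic FLC whose reference
    frequency is the GCE output scaled by 0.5. lp_cutoff is an assumed value."""
    fy = np.asarray(fy, dtype=float)
    sos = butter(4, lp_cutoff, btype="low", fs=fs, output="sos")
    fy_lp = sosfilt(sos, fy)                       # 4th-order Butterworth low-pass
    harmonics = np.arange(1, M + 1)
    weights = np.zeros(2 * M)
    phase = 0.0
    fy_hat = np.zeros_like(fy)
    for k in range(len(fy)):
        phase += 2.0 * np.pi * freq_gain * cadence_hz[k] / fs
        x = np.concatenate([np.sin(harmonics * phase), np.cos(harmonics * phase)])
        fy_cad = weights @ x                       # estimated cadence component
        err = fy_lp[k] - fy_cad
        weights += 2.0 * mu * err * x              # LMS update of the FLC weights
        fy_hat[k] = fy_lp[k] - fy_cad              # filtered force (FS output)
    return fy_hat

# Applied independently to the left and right y-axis forces; F and tau are then
# obtained as their sum and difference, as described below.
```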
The final linear force $F$ and torque $\tau$, applied by the user to the walker, were computed using $\hat{F}_{y}^{Left}$ and $\hat{F}_{y}^{Right}$ (i.e., the y-axis forces resulting from the filtering processes) as follows: $F$ is computed as the sum of $\hat{F}_{y}^{Left}$ and $\hat{F}_{y}^{Right}$, and $\tau$ as the difference between them. For instance, the $\hat{F}_{y}^{Left}$ signal obtained from the left force sensor after the implementation of the different filters is presented in Figure 5. The signal corresponds to the readings of the force sensor during a walk along an L-shaped path. Different zones are illustrated in the figure: (1) the green zones show the start and end of the path; (2) the five gray areas denote straight parts of the path; and (3) the blue zone corresponds to the curve to the right, where a reduction of the signal is observed.

4.2. User’s Intentions Detector

Starting from the linear force signal and the torque signal, two admittance controllers were implemented to generate the walker’s linear and angular velocity responses from the user’s intentions of movement. This type of controller has been reported to provide natural and comfortable interaction in walker-assisted gait [28], as it uses the interaction forces to generate compliant walker behaviors. Specifically, the implemented admittance controllers emulate dynamic systems, providing the user with a sensation of physical interaction during gait assistance. These systems are modeled as two mass–damper–spring second-order systems, whose inputs are the resulting force $F$ and torque $\tau$ (i.e., the force and torque applied to the walker by the user) obtained from the filtered y-axis forces. The outputs of these controllers are the linear ($v$) and angular ($\omega$) velocities, which constitute the user’s navigation commands.
On the one hand, the transfer function of the linear system is described by Equation (1) ($L(s)$ stands for Linear System), where $m$ is the virtual mass of the walker, $b_l$ is the damping ratio and $k_l$ is the elastic constant. On the other hand, Equation (2) ($A(s)$ stands for Angular System) shows the transfer function for the angular system, where $J$ is the virtual moment of inertia of the walker, $b_a$ is the damping ratio, and $k_a$ is the elastic constant for the angular system. According to this, the static and dynamic behavior of the walker (i.e., its mechanical impedance) can be changed by modifying the controllers’ parameters.
$$ L(s) = \frac{v(s)}{F(s)} = \frac{1/m}{s^2 + \frac{b_l}{m}s + \frac{k_l}{m}} \quad (1) $$
$$ A(s) = \frac{\omega(s)}{\tau(s)} = \frac{1/J}{s^2 + \frac{b_a}{J}s + \frac{k_a}{J}} \quad (2) $$
Empirically, the authors found that the values of $m$ = 15 kg, $b_l$ = 5 N·s/m, $J$ = 5 kg·m² and $b_a$ = 4 N·m·s were appropriate for the purposes of the experimental study. Moreover, $k_l$ and $k_a$ were used to modulate the walker’s behavior. Figure 6 shows how the two FSs, the GCE and the user’s intention detector are connected.
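For illustration, the admittance models of Equations (1) and (2) can be realized digitally as in the sketch below, which discretizes the second-order systems with a bilinear transform. The mass, inertia and damping values follow the text; the elastic constants and the 100 Hz loop rate are assumptions.

```python
from scipy.signal import cont2discrete

class AdmittanceController:
    """Discretized mass-damper-spring admittance model mapping an input force
    (or torque) to a velocity command, following Equations (1) and (2)."""
    def __init__(self, inertia, damping, stiffness, dt):
        num = [1.0 / inertia]
        den = [1.0, damping / inertia, stiffness / inertia]
        numd, dend, _ = cont2discrete((num, den), dt, method="bilinear")
        numd, dend = numd.flatten(), dend.flatten()
        self.b = numd / dend[0]          # normalized feed-forward coefficients
        self.a = dend / dend[0]          # normalized feedback coefficients
        self.u_hist = [0.0] * len(self.b)
        self.y_hist = [0.0] * (len(self.a) - 1)

    def step(self, u):
        """One control cycle: returns the velocity command for the input u."""
        self.u_hist = [u] + self.u_hist[:-1]
        y = (sum(bk * uk for bk, uk in zip(self.b, self.u_hist))
             - sum(ak * yk for ak, yk in zip(self.a[1:], self.y_hist)))
        self.y_hist = [y] + self.y_hist[:-1]
        return y

# Example: 100 Hz loop; the stiffness values (k_l, k_a) are illustrative placeholders
linear = AdmittanceController(inertia=15.0, damping=5.0, stiffness=0.5, dt=0.01)
angular = AdmittanceController(inertia=5.0, damping=4.0, stiffness=0.5, dt=0.01)
# v = linear.step(F); omega = angular.step(tau)
```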
The next section describes the implemented systems for REI on the walker.

5. REI Interface

The REI interface is composed of three main systems: (A) a navigation system; (B) a human detection system; and (C) a low-level safety system.

5.1. Navigation System

Navigation during walker-assisted gait is mainly focused on safety provision while guiding the user through different environments. According to the health condition that is being rehabilitated or assisted, the implementation of goal reaching and path following tasks is required. Moreover, such navigation tasks on smart walkers require the consideration of user interaction strategies, obstacle detection and avoidance techniques, as well as social interaction strategies. Particularly, the navigation system presented in this work considers map building, autonomous localization, obstacle avoidance and path following strategies and is based on previous developments of the authors [52].

5.1.1. Map Building and Robot Localization

Relying on the ROS navigation stack, a 2D map-building algorithm that uses a Simultaneous Localization and Mapping (SLAM) technique to learn a map of the unknown environment was integrated. Specifically, the ROS GMapping package was used for map learning [53]. This package is aimed at creating a static map of the complete interaction environment. The static map is built off-line and is focused on defining the main constraints and characteristics of the environment. Figure 7a shows the raw static map obtained at the authors’ research center. This map is also used for the walker’s on-line localization. For this purpose, the Adaptive Monte Carlo Localization (AMCL) approach [54] was configured and integrated.
In general, zones such as stairs, elevator entrances, and corridor railings, among others, are defined as non-interaction zones (i.e., mainly due to the risk of collisions). These restrictions are imposed through an off-line editing process of the resulting static map. Further modifications are also required, since LiDARs are light-based sensors and the presence of reflective objects, such as mirrors, affects their readings. As shown in Figure 7b, the map constitutes a grayscale image; therefore, modifications were made by changing colors in the map.

5.1.2. Path Planning and Obstacle Detection

To achieve path planning, 2D cost-maps are elaborated from the previously edited map. These cost-maps consist of 2D occupancy grids, where every detected obstacle is represented as a cost. These numerical costs represent how close the walker is allowed to approach the obstacles. Specifically, local and global cost-maps are generated. The local cost-map is built using readings from the LiDAR over a portion of the edited map, while the global cost-map uses the whole edited map. Moreover, these cost-maps semantically separate the obstacles into several layers [55]. The navigation system integrated in this work was configured with a static map layer, an obstacle layer, a sonar layer and an inflation layer [55]. During the path planning process, the global cost-map is used for the restriction of global trajectories. The local cost-map restricts the planning of local trajectories, which are affected by variable, moving and sudden obstacles.
The Trajectory Rollout and Dynamic Window Approach (DWA) methods were used to plan local paths, based on environment data and sensory readings [56]. As presented in the work of Rösmann et al. [57], this local planner is optimized using a Timed Elastic Band (TEB) approach. The information of the environment and the global cost-map is used by a global path planner. This planner calculates the shortest collision-free trajectory to a goal point using Dijkstra’s algorithm. Finally, a motion controller takes into account both trajectory plans and generates the linear and angular velocity commands that drive the walker through each plan’s positions.
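As a rough illustration of the global planning step, the sketch below runs Dijkstra’s algorithm over a 2D cost-map grid. The cost handling and the lethal threshold are simplified assumptions and do not reproduce the actual ROS global planner configuration.

```python
import heapq
import numpy as np

def dijkstra_plan(costmap, start, goal, lethal=254):
    """Shortest-path search on a 2D cost grid (8-connected). Cells with cost >=
    lethal are treated as obstacles; traversal cost grows with the cell cost."""
    rows, cols = costmap.shape
    dist = np.full((rows, cols), np.inf)
    parent = {}
    dist[start] = 0.0
    queue = [(0.0, start)]
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist[cell]:
            continue                              # stale queue entry
        for dr, dc in moves:
            nxt = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if costmap[nxt] >= lethal:
                continue                          # lethal obstacle: not traversable
            step = np.hypot(dr, dc) * (1.0 + costmap[nxt] / lethal)
            if dist[cell] + step < dist[nxt]:
                dist[nxt] = dist[cell] + step
                parent[nxt] = cell
                heapq.heappush(queue, (dist[nxt], nxt))
    if goal not in parent and goal != start:
        return []                                 # no collision-free path found
    path, cell = [goal], goal                     # reconstruct path back to start
    while cell != start:
        cell = parent[cell]
        path.append(cell)
    return path[::-1]
```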
Figure 8 shows the trajectories planned by the local and global planners, the position estimates calculated by the AMCL algorithm, a current goal and the cost-map grid.

5.2. People Detection System

The main goal of this module is to complement the performance of the navigation module by distinguishing people from simple obstacles (i.e., stationary or mobile objects). This distinction provides the walker with social acceptance and social interaction skills. To achieve this, the people detection system implemented in this work is based on the techniques proposed by Fotiadis et al. [58] and Garzón et al. [59]. Such approaches exploit the localization information of potential humans provided by the laser, in order to reduce the processing time of the camera data. This sensory fusion requires a proper calibration process. Hence, an extrinsic calibration method was implemented for laser-camera information fusion. Figure 9 illustrates the methodology of the integrated people detection system.

5.2.1. Detection Approach

The people detection system begins with the segmentation of the laser data into clusters, based on Euclidean distance differences. These laser clusters feed a feature extraction process [60]. The resulting features are then fed to a classification algorithm based on Real AdaBoost [61], which is trained off-line with several laser clusters. In parallel, a camera-based detection process starts from the projection of each laser cluster onto the image frames. As previously mentioned, this projection is accomplished thanks to a calibration process that provides a set of rotation and translation matrices. Such matrices allow the transformation of laser points into the camera frame [62]. From the localization of each cluster, a region of interest (ROI) is defined for the calculation of a Histogram of Oriented Gradients (HOG) descriptor [63]. This HOG descriptor is then classified by a Linear Support Vector Machine (SVM).
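The first stage of this pipeline can be sketched as follows; the distance threshold and the minimum cluster size are assumed values, not the tuned parameters of the actual system.

```python
import numpy as np

def cluster_scan(ranges, angle_min, angle_increment, max_gap=0.15, min_points=3):
    """Segment a 2D laser scan into clusters by breaking the point sequence
    wherever the Euclidean distance between consecutive points exceeds max_gap.
    Returns a list of (N, 2) arrays in Cartesian coordinates, later used for
    feature extraction and camera projection."""
    ranges = np.asarray(ranges, dtype=float)
    angles = angle_min + angle_increment * np.arange(len(ranges))
    points = np.column_stack([ranges * np.cos(angles), ranges * np.sin(angles)])
    valid = np.isfinite(ranges)
    clusters, current = [], []
    for point, ok in zip(points, valid):
        if not ok:
            continue                                   # skip invalid returns
        if current and np.linalg.norm(point - current[-1]) > max_gap:
            if len(current) >= min_points:
                clusters.append(np.array(current))     # close the current cluster
            current = []
        current.append(point)
    if len(current) >= min_points:
        clusters.append(np.array(current))
    return clusters
```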
As also proposed in [58], to increase the likelihood of detecting a person, the ROI is defined by several adaptive projections, resulting in a group of ROIs in which a person might be located.
Neither classifier, Real AdaBoost nor the Linear SVM, is a fully probabilistic method, since both produce probability distributions that are typically distorted. Such distortions take place because the classifiers’ outputs constitute signed scores representing a classification decision [64]. To overcome this, a probabilistic calibration method is applied. The calibration of the Real AdaBoost scores is achieved by a logistic correction, and for the Linear SVM a parametric sigmoid function is used [58]. Afterwards, the outputs of both classifiers are passed through an information fusion system, in order to obtain a unique probabilistic value from both detection methods, resulting in a decision about the presence of people in the environment.
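A minimal sketch of this calibration and fusion step is given below. The logistic correction and the Platt-style sigmoid are standard choices consistent with the description above, but the sigmoid parameters and the independence-based fusion rule are assumptions, not the exact formulation of [58].

```python
import numpy as np

def adaboost_probability(score):
    """Logistic correction of a Real AdaBoost margin: converts the signed score
    into an approximate posterior probability."""
    return 1.0 / (1.0 + np.exp(-2.0 * score))

def svm_probability(score, a=-1.5, b=0.0):
    """Platt-style parametric sigmoid for a linear SVM decision value; a and b
    would normally be fitted on a calibration set (placeholder values here)."""
    return 1.0 / (1.0 + np.exp(a * score + b))

def fuse_detections(p_laser, p_camera):
    """Toy fusion of the two calibrated detectors into a single probability,
    assuming independence (naive Bayes combination) for illustration only."""
    num = p_laser * p_camera
    den = num + (1.0 - p_laser) * (1.0 - p_camera)
    return num / den if den > 0 else 0.5
```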
Finally, a tracking process takes into account the previous people observations to generate a final decision about pedestrian locations. As presented by one of the authors, a Kalman filter instance is created for each detection, including those that lie outside the image frame [59]. Based on each person’s current and previous positions, the filter uses a linear model to calculate people’s velocities and consequently achieve the tracking task. A location pairing-and-updating process is carried out, as presented in [59]. This process is aimed at adding new people locations, updating previous locations, scoring them, and removing them.
Figure 10a shows several laser clusters obtained from a LiDAR reading. Figure 10b shows the projection of the clusters onto the image, where possible. In this example, three out of four moving people were detected. The laser cluster related to the non-detected person included additional points belonging to walls; therefore, its detection was not achieved.

5.2.2. Social Interaction

The navigation system and people detection system are integrated to provide the AGoRA Smart Walker with social interaction and social acceptance skills. This is accomplished by adjusting how obstacles are understood by the navigation system, through the modification of the 2D navigation cost-maps. As described for the navigation system, the obstacles detected in the environment, including people, are represented as equal costs in the 2D cost-maps. Therefore, it is necessary to inflate the costs corresponding to a person, in order to avoid invading social interaction zones in the environment. The inflation is made to match the social interaction zone of each person. This is achieved using the information provided by the people detection system, passing people’s locations to the navigation system. The criteria to inflate the costs are defined by strategies of adaptive spacing in walker–human interactions, as described in [65].
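A simplified sketch of this inflation step is shown below. A fixed circular social radius and cost value are assumed for illustration; the actual system adapts this spacing following [65].

```python
import numpy as np

def inflate_person_costs(costmap, people_xy, resolution, origin,
                         social_radius=1.2, person_cost=200):
    """Raise cell costs inside an assumed circular social zone (radius in meters)
    around each detected person so that the planner keeps its distance."""
    rows, cols = costmap.shape
    radius_cells = int(social_radius / resolution)
    for px, py in people_xy:                       # person positions in map frame
        # Convert the person position to grid indices
        col = int((px - origin[0]) / resolution)
        row = int((py - origin[1]) / resolution)
        for dr in range(-radius_cells, radius_cells + 1):
            for dc in range(-radius_cells, radius_cells + 1):
                r, c = row + dr, col + dc
                inside = dr * dr + dc * dc <= radius_cells * radius_cells
                if inside and 0 <= r < rows and 0 <= c < cols:
                    costmap[r, c] = max(costmap[r, c], person_cost)
    return costmap
```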

5.3. Safety Restrictions System

The AGoRA Smart Walker is designed to be remotely supervised by a therapy manager (i.e., medical or technical staff), as well as to be controlled by the user’s intentions of movement. Thus, some security rules were included to constrain the walker’s movement.

5.3.1. User Condition

The walker’s movement is only allowed if the user is supported on the walker’s handlebars and standing behind it within an established distance.

5.3.2. Warning Zone Condition

The maximum allowed velocity of the walker is constrained by its distance to surrounding obstacles. A square-shaped warning zone is defined in front of the walker, and its dimensions are proportionally defined by the walker’s current velocity. If an obstacle lies within the warning zone, the maximum velocity is constrained.
Figure 11 illustrates the warning zone shape and its parameters, which change according to the walker’s velocity. The Stop Distance Parameter (STD) determines the minimum distance of the walker to an obstacle before an absolute stop. The Slow Distance Parameter (SD) determines the distance at which obstacles begin to be taken into account for velocity limitation. Hence, if an obstacle is at distance SD, the walker’s velocity starts to be slowed. The Width Rate (WR) parameter is the multiplying factor of the warning zone width. When an obstacle is detected within the warning zone, the velocity is limited as described in Equation (3).
$$ V_{max} = Slow_{vel} \cdot \frac{D_{obs} - STD}{SD - STD} \quad (3) $$
$D_{obs}$ is the distance to the nearest obstacle and $Slow_{vel}$ is the maximum allowed velocity when an obstacle is within the warning zone. Additionally, $Slow_{vel}$ is continuously adapted according to the walker’s velocity, as shown in Table 2. These values were defined after several experimental trials, in such a way that the warning zone ensures proper stopping of the walker in each velocity range.
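A sketch of this limitation is shown below; the STD and SD distances and the Slow_vel value are placeholders standing in for the experimentally tuned parameters of Table 2.

```python
def limit_velocity(d_obs, slow_vel, v_nominal, std=0.4, sd=1.0):
    """Warning-zone speed limitation following Equation (3). The std (STD) and
    sd (SD) distances, in meters, are assumed values."""
    if d_obs <= std:
        return 0.0                          # obstacle too close: absolute stop
    if d_obs >= sd:
        return v_nominal                    # no limitation outside the warning zone
    return min(v_nominal, slow_vel * (d_obs - std) / (sd - std))   # Equation (3)

# Example: obstacle at 0.7 m, Slow_vel of 0.25 m/s taken from a Table 2-like lookup
v_max = limit_velocity(d_obs=0.7, slow_vel=0.25, v_nominal=0.5)
```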

6. Control Strategies

As previously explained in Section 3, the HREI interface integrates functions from the HRI and REI interfaces, in order to provide efficient, safe and natural interaction. To this end, three control strategies were proposed.

6.1. User Control

Through the HRI interface, the user is able to control the walker’s motion. The gait parameter estimator and the admittance controllers generate velocity commands from the interaction forces, while the security rules keep ensuring a safe interaction with the environment. Additionally, since the therapy manager is able to control the walker’s movement through a wireless joystick, the user’s commands can be revoked or modified.

6.2. Navigation System Control

In this control mode, the REI interface has total control of the walker’s movement to provide secure user guidance (i.e., the user’s intentions of movement are ignored). The guidance goals can be either preprogrammed or modified on-line, while the navigation and social interaction systems ensure safe paths. Additionally, the security rules warrant that the walker moves only if the user is properly supported on the handlebars and standing behind the platform.

6.3. Shared Control

This strategy combines the navigation velocity commands and the user’s intentions of movement to grant control of the walker. The user’s intention is calculated using $F$ and $\tau$, as a vector whose magnitude equals the normalized $F$ and whose orientation is proportional to the exerted $\tau$. Equation (4) shows the calculation of the intention vector’s orientation, where $Max_{angle}$ is the maximum turn angle allowed and $MET$ is the maximum exerted torque.
$$ \theta_{usr}(t) = Max_{angle} \cdot \frac{\tau(t)}{MET} \quad (4) $$
To estimate the control granting (i.e., walker control by the user or by the navigation system), the user’s intention is compared with the navigation path, to obtain the final pose to be followed by the walker. Specifically, as shown in Figure 12, for the nearest path point $(x_{nav}, y_{nav})$ to the current walker position at $(x_{sw}, y_{sw})$, a range of possible user intentions is calculated (i.e., the range in which the control is granted to the user). The positions are calculated in the map coordinate reference frame, since the navigation system generates the path plans in that reference frame.
In Figure 12, the range of possible intentions is calculated as a triangle-shaped window, which is formed by: (1) $\theta_{sw}$, the current orientation of the walker; (2) $\theta_{usr}$, the current user’s intention of movement; (3) $\theta_{nav}$, the orientation of the next and nearest path point; and (4) $d$, the Euclidean distance from the walker position to the next pose. The geometric parameters for the window construction are described in Equations (5)–(8). A window scaling factor $Win_{width}$ is used to adapt the window area. Graphically, the window is formed by two right-angled triangles. These smaller triangles have height $d$, bases $L_a$ and $L_b$, and auxiliary angles $\theta_a$ and $\theta_b$.
$$ L_a = \frac{Win_{width} \cdot (\theta_{nav} - \theta_{sw})}{Max_{angle}} \quad (5) $$
$$ L_b = Win_{width} - L_a \quad (6) $$
$$ \theta_a = \tan^{-1}\left(\frac{L_a}{d}\right) \quad (7) $$
$$ \theta_b = \tan^{-1}\left(\frac{L_b}{d}\right) \quad (8) $$
If the user’s intention of movement lies in the described window, the control is granted to the user. Otherwise, if the user’s objective lies outside the area of possible movements, a new path pose is computed. This new pose is calculated to be within the area of possible movements. To this end, $x_{nav}$ and $y_{nav}$ define the new pose position, and the new pose orientation ($\theta_{nxt}$) is defined as presented in Equation (9):
$$ \theta_{nxt} = \begin{cases} \theta_{nav}, & \text{if } \theta_{diff} - \theta_a \leq \theta_{usr} \leq \theta_{diff} + \theta_b \\ \theta_{diff} - \theta_a, & \text{if } \theta_{usr} < \theta_{diff} - \theta_a \\ \theta_{diff} + \theta_b, & \text{otherwise} \end{cases} \quad (9) $$
where $\theta_{diff}$ is estimated as shown in Equation (10) and represents the relative center of the window of possible movements.
$$ \theta_{diff} = \sin^{-1}\left(\frac{y_{nav} - y_{sw}}{d}\right) \quad (10) $$
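Putting Equations (4)–(10) together, one control-granting step could look like the following sketch; the Max_angle, MET and Win_width values are illustrative assumptions rather than the tuned parameters of the AGoRA Walker.

```python
import numpy as np

def shared_control_step(tau, x_sw, y_sw, theta_sw, x_nav, y_nav, theta_nav,
                        max_angle=np.pi / 3, met=20.0, win_width=1.0):
    """Control-granting sketch following Equations (4)-(10); assumes d > 0."""
    theta_usr = max_angle * tau / met                        # Eq. (4)
    d = np.hypot(x_nav - x_sw, y_nav - y_sw)                 # distance to next pose
    l_a = win_width * (theta_nav - theta_sw) / max_angle     # Eq. (5)
    l_b = win_width - l_a                                    # Eq. (6)
    theta_a = np.arctan(l_a / d)                             # Eq. (7)
    theta_b = np.arctan(l_b / d)                             # Eq. (8)
    theta_diff = np.arcsin((y_nav - y_sw) / d)               # Eq. (10)
    if theta_diff - theta_a <= theta_usr <= theta_diff + theta_b:
        # The user's intention lies inside the window: control granted to the user
        return "user", theta_nav
    # Otherwise, a new pose orientation inside the window is computed, Eq. (9)
    if theta_usr < theta_diff - theta_a:
        return "navigation", theta_diff - theta_a
    return "navigation", theta_diff + theta_b

# Example call with a hypothetical state (angles in radians, positions in meters)
mode, theta_nxt = shared_control_step(tau=3.0, x_sw=0.0, y_sw=0.0, theta_sw=0.1,
                                      x_nav=1.0, y_nav=0.4, theta_nav=0.3)
```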

7. Experimental Tests

To evaluate the described HREI interface, several performance and usability tests were proposed, covering the control strategies previously described. The main goal of these tests was to assess the performance of every module of the AGoRA Smart Walker, both independently and simultaneously. Several healthy subjects were recruited to participate voluntarily in the validation study. Specifically, seven volunteers with no gait assistance requirements (6 males, 1 female; 33.71 ± 16.63 years old; 1.69 ± 0.056 m; 65.42 ± 7.53 kg) formed the validation group and accomplished the tests presented below (see Table 3 for additional information).
The experimental trials took place at the laboratories building of the Colombian School of Engineering. A total of 21 trials divided into 7 sessions were performed. Every session consisted of three different trials, one for each specific control mode (i.e., user control, navigation system control and shared control). At the beginning of each session, the order in which the modes of operation were to be evaluated was randomized. Likewise, before each trial the volunteers were instructed in the behavior of the corresponding control mode and allowed to interact with the platform. During the trials, the researchers stayed out of the session environment to avoid interfering with the achievement of the tasks. At the end of each trial, a data log including user and walker information was stored for further analysis.
According to the above, the obtained results under each control mode are presented in the following sub-sections.

7.1. User Control Tests

The volunteers were asked to complete a square-shaped trajectory by following several landmarks. Figure 13a illustrates the reference trajectory to be followed by the participants and Figure 13b illustrates the trajectories achieved by them. Under this control mode, the only active systems were those corresponding to the HRI interface. The trajectory was aimed at assessing the capabilities of the interface to respond to the users’ intentions of movement and adapt to their gait patterns. Specifically, the gait parameter estimator was responsible for acquiring and filtering the force and torque signals due to the physical interaction between the walker and the user. As an illustrative result, Figure 14a shows the filtered force and torque signals for Subject 1. The user’s intention detector was in charge of generating the linear and angular speed control signals of the walker. Figure 14b shows the speed signals for Subject 1. Similarly, the low-level security system was running in parallel, in such a way that collisions were avoided. Specifically, no collisions took place during these trials.
During the execution of the user control trials, the largest differences between the ideal and achieved paths were found at the trajectory corners. Accordingly, the 90-degree turns were more difficult for the participants to accomplish, as the AGoRA Walker’s axis of rotation is not aligned with the user’s axis of rotation. However, such turns should be avoided anyway, as they compromise the user’s stability and balance. Thus, less steep turns are more natural and safer for the users.

7.2. Navigation System Control Tests

To evaluate the path following and security restriction capabilities alongside the people detection system, a preliminary guidance trial with one subject was performed in the presence of people. The volunteer was guided through a previously programmed random path, while overcoming both regular and people obstacles in the environment. Additionally, the navigation system was configured with: (1) a minimum turning radius of 15 cm, to avoid planning steep curves; (2) a local planner frequency of 25 Hz; (3) a global planner frequency of 5 Hz; and (4) a maximum linear velocity of 0.3 m/s and a maximum angular velocity of 0.2 rad/s.
Figure 15 illustrates the test carried out, in three different states. The first state shows the trajectory planned according to the initial sensing of the environment, as shown in Figure 15a. The second state, in Figure 15b, presents an update of the trajectory due to new people locations. Although the person closest to the walker is not detected by the camera, the laser readings allow the tracking of the person’s position and therefore their detection. Finally, Figure 15c illustrates the avoidance of another person, while continuing with the guidance task.
In addition to the above, the guiding capability of the navigation system was also validated with the seven volunteers who participated in the study. Specifically, the predefined path goals presented in Figure 16 were configured in the navigation system to form a desired trajectory. The reference trajectory was designed to be similar to the reference path used for the user control trials. However, the trajectory corners were designed as soft curves, in such a way that the user’s balance and stability were not compromised. During the seven trials, no significant differences were encountered in the achieved trajectories, no collisions took place and the mean guidance task time was 53.06 ± 2.15 s. The participants were asked to report their perceptions of the interaction with the AGoRA Walker during the guiding task.

7.3. Shared Control Tests

To assess the shared control performance, each volunteer was asked to follow the reference trajectory previously presented in Figure 16. Under this control mode, the participants were partially guided by the navigation system. Likewise, before each trial the volunteers were informed that their intentions of movement would be taken into account. Table 4 summarizes the main findings for each trial.
The results presented in Table 4 suggest that the shared control strategy is capable of effectively guiding the participants through a specific trajectory. Six subjects achieved the full reference path by reaching its ten intermediate goals. The remaining subject did not complete the task, reaching only eight goals. This result was due to a spurious obstacle perceived at the ninth goal, which blocked the path planning module. Regarding the task completion times, the mean task time obtained for all the participants was 67.55 ± 11.25 s. The differences among these times are mainly explained by the fact that the linear speed was totally controlled by the user’s intentions of movement. Accordingly, the obtained mean linear speed was 0.33 ± 0.07 m/s. Finally, to evaluate the control-granting behavior under this mode, the percentage of user control was estimated. This ratio was calculated taking into account the total time of user control and the overall task time. A mean percentage of 66.71 ± 6.26% was obtained. The user control occurred mainly in the straight segments of the trajectory, since at the trajectory curves the users’ intentions of movement did not completely match the planned path.

7.4. Questionnaires Responses

To qualitatively assess the interactions between the participants and the AGoRA Walker, at the end of each trial the volunteers were asked to fill out a usability questionnaire to obtain instant perceptions of the mode of operation. The participants were also encouraged to highlight perceptions regarding the interaction with the smart walker. The acceptance and usability questionnaire was designed based on the UTAUT models in [66,67] and adapted to be relevant to the interaction with the AGoRA Walker (see Table 5 for further details).
The Likert data obtained from the acceptance and usability questionnaires were aimed at assessing the participants’ perceptions of the interaction with the AGoRA Walker. For analysis purposes, the answers to Questions Q1–Q4 were grouped into a single category (C1), since they evaluated the attitude towards the device and the expected performance. Similarly, the answers to Questions Q5–Q7 were grouped into another category (C2), as they evaluated the perceived effort and anxiety of the interaction with the device. Finally, Questions Q8–Q10 were aimed at assessing the behavior perception of each control mode. However, the answers to these questions were independently analyzed, in order to find differences between them. The questionnaire responses are presented in Figure 17, illustrating the percentage of opinions in each category (i.e., C1 and C2), as well as in Questions Q8–Q10, for each Likert item.
Relying on the questionnaire responses for Categories C1 and C2, a direct measure of the interaction perception in the experimental sessions can be obtained. Similar survey answers, with mostly positive distributions, were obtained under each control mode. These results suggest that the volunteers who participated in the study perceived safe, natural and intuitive interactions. Moreover, some participants stated additional comments regarding the navigation control mode. Specifically, these volunteers reported that the device stopped at specific trajectory points, making the path following task less comfortable. These impressions occurred at several trajectory goals, since the navigation system was configured to reach them at specific orientations.
To analyze the participants’ behavior perception under each control mode, the responses to Questions Q8–Q10 were statistically analyzed. As found in [68,69], Mann–Whitney–Wilcoxon (MWW) tests have shown optimal results when comparing Likert data for small sample sizes. Therefore, the MWW test was used to assess differences in the perception of each control mode. Specifically, Table 6 summarizes the p values obtained for each paired test between control modes (i.e., Mode 1, user control; Mode 2, navigation system control; and Mode 3, shared control).
As can be seen in Table 6 and Figure 17, significant differences were found among all participants’ responses for Question Q8. This outcome may suggest that all participants perceived the ability of the interface to respond to their intentions of movement. Likewise, the responses to Question Q9 showed significant differences in two paired tests (i.e., Mode 1 vs. Mode 2 and Mode 1 vs. Mode 3), indicating that participants perceived modifications in the walker’s behavior. Finally, regarding Question Q10, a significant difference was only obtained for the paired test between Mode 2 and Mode 3. Such behavior might be explained by the fact that both the navigation system control and the user control work together under the shared control mode.

8. Conclusions and Future Work

An HREI interface, composed of an HRI interface and a REI interface, was developed and implemented on a robotic platform for walker-assisted gait. The robotic platform was equipped with two handlebars for forearm support and several sensory modalities, in order to emulate the performance and capabilities of an SW. The HREI interface design criteria included the following functions: estimation of the user’s intentions of movement, provision of a safe and natural HRI, implementation of a navigation system alongside a people detection system for social interaction purposes, and the integration of a set of control strategies for intuitive and natural interaction.
To validate the platform performance and interaction capabilities, several preliminary tests were conducted with seven volunteer users with no reported gait assistance requirements. Specifically, data were collected from 21 trials divided into seven sessions, where all participants interacted with each control mode. Regarding the user control mode, a square-shaped trajectory was proposed to be followed by each participant. The trajectories achieved by all the volunteers, as well as the admittance responses for a specific subject, were presented. According to the participants’ performance under this control mode, preferences for less steep curves were found. Concretely, the participants did not strictly execute 90-degree turns at the trajectory corners. Such behavior is mainly explained by the misalignment between the axes of rotation of the walker and the user. Moreover, cutting the path corners allowed the participants to ensure balance and stability during walking.
The validation trials were also aimed at assessing the performance of the navigation system in guidance tasks, as well as at evaluating the performance of the navigation and people detection systems working together. Specifically, an isolated preliminary test with a volunteer was carried out to evaluate the capabilities of the platform for overcoming environments with people, even under sudden changes in obstacle locations. In the preliminary test, both the navigation and people detection systems were executed at a maximum frequency of 4 Hz, due to the on-board computational limitations. To ensure the user’s balance and stability, the trajectory planning was configured to prefer curves with a minimum turning radius of 15 cm. Although no collisions or system blockages occurred, the implementation of the REI interface in clinical or crowded scenarios would require higher computational resources. Regarding the validation trials with the seven volunteer users, a reference trajectory composed of 10 intermediate goals was proposed. All participants experienced the navigation system control, completely achieving the reference path with no collisions.
Regarding the assessment of the shared control mode, a path following task was also proposed. Under this control mode, the participants’ intentions of movement and the navigation system cooperatively controlled the platform. Specifically, the linear speed was totally controlled by the users, while the angular speed was controlled according to the shared control strategy estimations. To ensure the participants’ balance and stability, a minimum turning radius of 15 cm was also configured. Across the participants’ trials, a mean percentage of user control of 66.71 ± 6.26% was obtained. Concretely, the control of the platform was mainly granted to the user at straight segments of the trajectory, since the participants did not have exact information about the reference trajectory. According to the geometric model implemented for the shared control strategy, stricter or more flexible behaviors can be configured by modifying the dimensions of the interaction window. Such modifications can potentially be implemented in rehabilitation scenarios in order to provide different levels of assistance. Specifically, early stages of physical and cognitive rehabilitation processes might benefit from more rigorous interaction windows, ensuring a higher percentage of control by the navigation system.
A qualitative assessment of the platform performance and interaction capabilities, relying on an acceptance and usability questionnaire, was also carried out. The participants’ attitude towards the device, as well as their performance and behavior perceptions, were evaluated. According to the survey responses, the participants perceived a mostly positive interaction with the platform. Specifically, the questionnaires showed natural, safe and intuitive interactions under all the control modes. Regarding the behavior perception, significant differences were statistically found between the control modes. Slightly negative distributions were found for the navigation system control in the C2 questions. These questions were aimed at evaluating effort and anxiety perceptions, which were experienced by some participants. Particularly, two volunteers stated that the navigation system suddenly stopped at specific points of the trajectory. Such behavior was mainly due to the system being configured to reach goals at specific orientations.
Future works will address extensive evaluations of the social interactions between the walker and people in the environment, by implementing several avoidance strategies, as well as algorithms for the recognition of social group interactions. Similarly, the interface proposed here will be assessed in clinical and rehabilitation scenarios. Specifically, validation studies will first be carried out on post-stroke patients, as they require a lower assistance level than SCI and CP patients. These validation studies will be aimed at analyzing specific relationships between the users' characteristics and the interaction performance. Moreover, according to the AGoRA Walker's handlebar configuration, the platform might be classified as an assistance SW; therefore, the HREI interface will also be implemented and validated on a rehabilitation SW. Additional developments will seek to implement feedback strategies for the user under each control mode, in order to pursue better performance and interaction perceptions. Future works will also address the implementation of the presented interface on an SW that cooperates with an exoskeleton for gait assistance and rehabilitation. Finally, the integration of a cloud-based system could leverage processing capabilities, resulting in better performance.

Author Contributions

Conceptualization, S.D.S.M. and C.A.C.; methodology, S.D.S.M. and C.A.C.; software, S.D.S.M. and M.G.; validation, S.D.S.M.; investigation, S.D.S.M.; resources, M.M.; data curation, S.D.S.M.; writing—original draft preparation, S.D.S.M.; writing—review and editing, M.G., M.M. and C.A.C.; supervision, M.M. and C.A.C.; project administration, M.M. and C.A.C.; and funding acquisition, M.M. and C.A.C.

Funding

This work was supported by the Colombian Administrative Department of Science, Technology and Innovation Colciencias (grant ID No. 801-2017) and Colombian School of Engineering Julio Garavito internal funds.

Acknowledgments

This work was supported by the Colombian Department Colciencias (Grant 801-2017) and the Colombian School of Engineering Julio Garavito funds. The authors also wish to acknowledge the support from the project CAMPUS (Connected Automated Mobility Platform for Urban Sustainability), sponsored by the Programme d'Investissements d'Avenir (PIA) of the French Agence de l'Environnement et de la Maîtrise de l'Énergie (ADEME).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SCI: Spinal Cord Injury
CP: Cerebral Palsy
WHO: World Health Organization
SW: Smart Walker
SWs: Smart Walkers
HRI: Human–Robot Interaction
HREI: Human–Robot–Environment Interaction
REI: Robot–Environment Interaction
COOL Aide: CO-Operative Locomotion Aide
ROS: Robotic Operating System
IMU: Inertial Measurement Unit
LiDAR: Light Detection and Ranging Sensor
HD: High Definition
LRF: Laser Range Finder
CPU: Central Processing Unit
$F_{x_{Right}}$: Force along the x-axis on the right load cell
$F_{x_{Left}}$: Force along the x-axis on the left load cell
$F_{y_{Right}}$: Force along the y-axis on the right load cell
$F_{y_{Left}}$: Force along the y-axis on the left load cell
$F_{z_{Right}}$: Force along the z-axis on the right load cell
$F_{z_{Left}}$: Force along the z-axis on the left load cell
$F_{spx_{Right}}$: Force along the x-axis on the right supporting point
$F_{spx_{Left}}$: Force along the x-axis on the left supporting point
$F_{spy_{Right}}$: Force along the y-axis on the right supporting point
$F_{spy_{Left}}$: Force along the y-axis on the left supporting point
$F_{spz_{Right}}$: Force along the z-axis on the right supporting point
$F_{spz_{Left}}$: Force along the z-axis on the left supporting point
GCE: Gait Cadence Estimator
WFLC: Weighted-Fourier Linear Combiner
$F_{CAD}$: Resulting force used to estimate the user's gait cadence
$\overline{F}_{CAD}$: Filtered $F_{CAD}$ force
FLC: Fourier Linear Combiner
FS: Filtering System of forces along the y-axis
$F_{y_{\Phi}}$: Representation of either $F_{y_{Left}}$ or $F_{y_{Right}}$
$\overline{F}_{y_{Left}}$: Filtered $F_{y_{Left}}$
$\overline{F}_{y_{Right}}$: Filtered $F_{y_{Right}}$
$\overline{F}_{y_{\Phi}}$: Representation of either $\overline{F}_{y_{Left}}$ or $\overline{F}_{y_{Right}}$
$F_{y_{\Phi}LP}$: Resulting $F_{y_{\Phi}}$ signals after the low-pass filter
$F_{y_{\Phi}CAD}$: Cadence signals obtained from the FLC
$M$: Order of the FLC filter
$\mu$: Adaptive gain of the FLC filter
$F$: Final force applied to the walker by the user
$\tau$: Final torque applied to the walker by the user
$F_{y_{Left}LP}$: Resulting signal from the low-pass filter for $F_{y_{Left}}$
$F_{y_{Left}CAD}$: Resulting signal from the FLC for $F_{y_{Left}}$
$v$: Linear velocity generated with the admittance controller
$\omega$: Angular velocity generated with the admittance controller
$L(s)$: Second-order system for linear velocity generation
$m$: Virtual mass of the walker
$b_{l}$: Damping ratio for $L(s)$
$k_{l}$: Elastic constant for $L(s)$
$A(s)$: Second-order system for angular velocity generation
$J$: Virtual moment of inertia of the walker
$b_{a}$: Damping ratio for $A(s)$
$k_{a}$: Elastic constant for $A(s)$
SLAM: Simultaneous Localization and Mapping
AMCL: Adaptive Monte Carlo Localization Approach
TEB: Time Elastic Band
ROI: Region of interest in the camera image
HOG: Histogram of Oriented Gradients
SVM: Support Vector Machine
STD: Stop Distance Parameter
SD: Slow Distance Parameter
WR: Width Rate
MET: Maximum Exerted Torque
$x_{nav}$: X position of the nearest path point
$y_{nav}$: Y position of the nearest path point
$x_{sw}$: X position of the walker
$y_{sw}$: Y position of the walker
$\theta_{sw}$: Orientation of the walker
$\theta_{usr}$: Orientation of the user's intention of movement
$\theta_{nav}$: Orientation of the nearest path point
$d$: Euclidean distance from the walker position to the next pose
$L_{a}$: Base of the first right-angled triangle
$L_{b}$: Base of the second right-angled triangle
$\theta_{a}$: Auxiliary angle for the first right-angled triangle
$\theta_{b}$: Auxiliary angle for the second right-angled triangle
$Win_{width}$: Scaling factor of the triangle-shaped window
$\theta_{nxt}$: Orientation of the next path pose
$\theta_{diff}$: Relative orientation of the center of the triangle-shaped window

References

  1. Buchman, A.S.; Boyle, P.A.; Leurgans, S.E.; Barnes, L.L.; Bennett, D.A. Cognitive Function is Associated with the Development of Mobility Impairments in Community-Dwelling Elders. Am. J. Geriatr. Psychiatry 2011, 19, 571–580. [Google Scholar] [CrossRef] [PubMed]
  2. Pirker, W.; Katzenschlager, R. Gait disorders in adults and the elderly. Wien. Klin. Wochenschr. 2017, 129, 81–95. [Google Scholar] [CrossRef] [PubMed]
  3. Mrozowski, J.; Awrejcewicz, J.; Bamberski, P. Analysis of stability of the human gait. J. Theor. Appl. Mech. 2007, 45, 91–98. [Google Scholar]
  4. Cifuentes, C.A.; Frizera, A. Human-Robot Interaction Strategies for Walker-Assisted Locomotion; Springer Tracts in Advanced Robotics; Springer: Cham, Switzerland, 2016; Volume 115, p. 105. [Google Scholar] [CrossRef]
  5. Mikolajczyk, T.; Ciobanu, I.; Badea, D.I.; Iliescu, A.; Pizzamiglio, S.; Schauer, T.; Seel, T.; Seiciu, P.L.; Turner, D.L.; Berteanu, M. Advanced technology for gait rehabilitation: An overview. Adv. Mech. Eng. 2018, 10, 1–19. [Google Scholar] [CrossRef]
  6. Gheno, R.; Cepparo, J.M.; Rosca, C.E.; Cotten, A. Musculoskeletal Disorders in the Elderly. J. Clin. Imaging Sci. 2012, 2, 39. [Google Scholar] [CrossRef]
  7. World Health Organization. Disability and Health. 2018. Available online: https://www.who.int/news-room/fact-sheets/detail/disability-and-health (accessed on 29 June 2019).
  8. World Health Organization. World Report on Disability 2011; World Health Organization: Geneva, Switzerland, 2011. [Google Scholar]
  9. World Health Organization. Ageing and Health; World Health Organization: Geneva, Switzerland, 2018. [Google Scholar]
  10. The World Bank. Disability Inclusion. 2018. Available online: https://www.worldbank.org/en/topic/disability (accessed on 29 June 2019).
  11. Pedersen, M.M.; Holt, N.E.; Grande, L.; Kurlinski, L.A.; Beauchamp, M.K.; Kiely, D.K.; Petersen, J.; Leveille, S.; Bean, J.F. Mild cognitive impairment status and mobility performance: An analysis from the Boston RISE study. J. Gerontol. Ser. Biol. Sci. Med. Sci. 2014, 69, 1511–1518. [Google Scholar] [CrossRef]
  12. Brown, C.J.; Flood, K.L. Mobility limitation in the older patient: A clinical review. JAMA J. Am. Med. Assoc. 2013, 310, 1168–1177. [Google Scholar] [CrossRef]
  13. Chaparro-Cárdenas, S.L.; Lozano-Guzmán, A.A.; Ramirez-Bautista, J.A.; Hernández-Zavala, A. A review in gait rehabilitation devices and applied control techniques. Disabil. Rehabil. Assist. Technol. 2018. [Google Scholar] [CrossRef]
  14. Martins, M.M.; Frizera-Neto, A.; Urendes, E.; dos Santos, C.; Ceres, R.; Bastos-Filho, T. A novel human-machine interface for guiding: The NeoASAS smart walker. In Proceedings of the IEEE 2012 ISSNIP Biosignals and Biorobotics Conference: Biosignals and Robotics for Better and Safer Living (BRC), Manaus, Brazil, 9–11 January 2012; pp. 1–7. [Google Scholar] [CrossRef]
  15. Bateni, H.; Maki, B.E. Assistive devices for balance and mobility: Benefits, demands, and adverse consequences. Arch. Phys. Med. Rehabil. 2005, 86, 134–145. [Google Scholar] [CrossRef]
  16. Neto, A.F.; Elias, A.; Cifuentes, C.; Rodriguez, C.; Bastos, T.; Carelli, R. Smart Walkers: Advanced Robotic Human Walking-Aid Systems. In Springer Tracts in Advanced Robotics 106 Intelligent Assistive Robots Recent Advances in Assistive Robotics; Springer: Cham, Switzerland, 2015; pp. 103–131. [Google Scholar] [CrossRef]
  17. Geravand, M.; Werner, C.; Hauer, K.; Peer, A. An Integrated Decision Making Approach for Adaptive Shared Control of Mobility Assistance Robots. Int. J. Soc. Robot. 2016, 8, 631–648. [Google Scholar] [CrossRef] [Green Version]
  18. Mitzner, T.L.; Chen, T.L.; Kemp, C.C.; Rogers, W.A. Identifying the Potential for Robotics to Assist Older Adults in Different Living Environments. Int. J. Soc. Robot. 2014, 6, 213–227. [Google Scholar] [CrossRef] [PubMed]
  19. Jenkins, S.; Draper, H. Care, Monitoring, and Companionship: Views on Care Robots from Older People and Their Carers. Int. J. Soc. Robot. 2015, 7, 673–683. [Google Scholar] [CrossRef]
  20. Martins, M.; Santos, C.; Frizera, A.; Ceres, R. A review of the functionalities of smart walkers. Med. Eng. Phys. 2015, 37, 917–928. [Google Scholar] [CrossRef] [PubMed]
  21. Martins, M.; Santos, C.; Seabra, E.; Frizera, A.; Ceres, R. Design, implementation and testing of a new user interface for a smart walker. In Proceedings of the 2014 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Espinho, Portugal, 14–15 May 2014; pp. 217–222. [Google Scholar] [CrossRef]
  22. Lacey, G.J.; Rodriguez-Losada, D. The evolution of guido. IEEE Robot. Autom. Mag. 2008, 15, 75–83. [Google Scholar] [CrossRef]
  23. Morris, A.; Donamukkala, R.; Kapuria, A.; Steinfeld, A.; Matthews, J.; Dunbar-Jacob, J.; Thrun, S. A robotic walker that provides guidance. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422), Taipei, Taiwan, 14–19 September 2003; Volume 1, pp. 25–30. [Google Scholar] [CrossRef]
  24. Alves, J.; Seabra, E.; Caetano, I.; Santos, C.P. Overview of the ASBGo++ Smart Walker. In Proceedings of the 2017 IEEE 5th Portuguese Meeting on Bioengineering (ENBENG), Coimbra, Portugal, 16–18 February 2017; pp. 1–4. [Google Scholar] [CrossRef]
  25. Caetano, I.; Alves, J.; Goncalves, J.; Martins, M.; Santos, C.P. Development of a Biofeedback Approach Using Body Tracking with Active Depth Sensor in ASBGo Smart Walker. In Proceedings of the 2016 International Conference on Autonomous Robot Systems and Competitions (ICARSC), Bragança, Portugal, 4–6 May 2016; pp. 241–246. [Google Scholar] [CrossRef]
  26. Lee, G.; Ohnuma, T.; Chong, N.Y. Design and control of JAIST active robotic walker. Intell. Serv. Robot. 2010, 3, 125–135. [Google Scholar] [CrossRef]
  27. Lee, G.; Jung, E.J.; Ohnuma, T.; Chong, N.Y.; Yi, B.J. JAIST Robotic Walker control based on a two-layered Kalman filter. In Proceedings of the IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 3682–3687. [Google Scholar] [CrossRef]
  28. Jiménez, M.F.; Monllor, M.; Frizera, A.; Bastos, T.; Roberti, F.; Carelli, R. Admittance Controller with Spatial Modulation for Assisted Locomotion using a Smart Walker. J. Intell. Robot. Syst. 2019, 94, 621–637. [Google Scholar] [CrossRef]
  29. Spenko, M.; Yu, H.; Dubowsky, S. Robotic personal aids for mobility and monitoring for the elderly. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 344–351. [Google Scholar] [CrossRef]
  30. Efthimiou, E.; Fotinea, S.E.; Goulas, T.; Dimou, A.L.; Koutsombogera, M.; Pitsikalis, V.; Maragos, P.; Tzafestas, C. The MOBOT Platform—Showcasing Multimodality in Human-Assistive Robot Interaction; Springer: Cham, Switzerland, 2016; pp. 382–391. [Google Scholar] [CrossRef]
  31. Efthimiou, E.; Fotinea, S.E.; Goulas, T.; Koutsombogera, M.; Karioris, P.; Vacalopoulou, A.; Rodomagoulakis, I.; Maragos, P.; Tzafestas, C.; Pitsikalis, V.; et al. The MOBOT rollator human-robot interaction model and user evaluation process. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI 2016), Athens, Greece, 6–9 December 2016. [Google Scholar] [CrossRef]
  32. Papageorgiou, X.S.; Chalvatzaki, G.; Lianos, K.N.; Werner, C.; Hauer, K.; Tzafestas, C.S.; Maragos, P. Experimental validation of human pathological gait analysis for an assisted living intelligent robotic walker. In Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics, Pisa, Italy, 20–22 February 2016; pp. 1086–1091. [Google Scholar] [CrossRef]
  33. Mou, W.H.; Chang, M.F.; Liao, C.K.; Hsu, Y.H.; Tseng, S.H.; Fu, L.C. Context-aware assisted interactive robotic walker for Parkinson’s disease patients. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Portugal, 7–12 October 2012; pp. 329–334. [Google Scholar] [CrossRef]
  34. Paulo, J.; Peixoto, P.; Nunes, U.J. ISR-AIWALKER: Robotic Walker for Intuitive and Safe Mobility Assistance and Gait Analysis. IEEE Trans. Hum. Mach. Syst. 2017, 47, 1110–1122. [Google Scholar] [CrossRef]
  35. Garrote, L.; Paulo, J.; Perdiz, J.; Peixoto, P.; Nunes, U.J. Robot-Assisted Navigation for a Robotic Walker with Aided User Intent. In Proceedings of the RO-MAN 2018—27th IEEE International Symposium on Robot and Human Interactive Communication, Nanjing, China, 27 August–1 September 2018; pp. 348–355. [Google Scholar] [CrossRef]
  36. Huang, C.; Wasson, G.; Alwan, M.; Sheth, P. Shared Navigational Control and User Intent Detection in an Intelligent Walker. 2005. Available online: https://www.aaai.org/Papers/Symposia/Fall/2005/FS-05-02/FS05-02-010.pdf (accessed on 29 June 2019).
  37. Wachaja, A.; Agarwal, P.; Zink, M.; Adame, M.R.; Möller, K.; Burgard, W. Navigating blind people with walking impairments using a smart walker. Auton. Robot. 2017, 41, 555–573. [Google Scholar] [CrossRef]
  38. Wasson, G.; Gunderson, J.; Graves, S.; Felder, R. Effective Shared Control in Cooperative Mobility Aids. In Proceedings of the Fourteenth international Florida Artificial intelligence Research Society Conference, Key West, FL, USA, 21–23 May 2001; AAAI Press: Menlo Park, CA, USA, 2001; pp. 509–513. [Google Scholar]
  39. Wasson, G.; Gunderson, J.; Graves, S.; Felder, R. An assistive robotic agent for pedestrian mobility. In Proceedings of the Fifth International Conference on Autonomous Agents—AGENTS’01, Montreal, QC, Canada, 28 May–1 June 2001; ACM Press: New York, NY, USA, 2001; pp. 169–173. [Google Scholar] [CrossRef]
  40. Palopoli, L.; Argyros, A.; Birchbauer, J.; Colombo, A.; Fontanelli, D.; Legay, A.; Garulli, A.; Giannitrapani, A.; Macii, D.; Moro, F.; et al. Navigation assistance and guidance of older adults across complex public spaces: The DALi approach. Intell. Serv. Robot. 2015, 8, 77–92. [Google Scholar] [CrossRef]
  41. Cheng, W.C.; Wu, Y.Z. A user’s intention detection method for smart walker. In Proceedings of the 2017 IEEE 8th International Conference on Awareness Science and Technology (iCAST), Taiwan, China, 8–10 November 2017; pp. 35–39. [Google Scholar] [CrossRef]
  42. Ye, J.; Huang, J.; He, J.; Tao, C.; Wang, X. Development of a width-changeable intelligent walking-aid robot. In Proceedings of the 2012 International Symposium on Micro-NanoMechatronics and Human Science (MHS), Nagoya, Japan, 4–7 November 2012; pp. 358–363. [Google Scholar] [CrossRef]
  43. Hirata, Y.; Hara, A.; Kosuge, K. Passive-type intelligent walking support system “RT Walker”. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), Sendai, Japan, 28 September–2 October 2004; Volume 4, pp. 3871–3876. [Google Scholar] [CrossRef]
  44. Frizera-Neto, A.; Ceres, R.; Rocon, E.; Pons, J.L. Empowering and assisting natural human mobility: The simbiosis walker. Int. J. Adv. Robot. Syst. 2011, 8, 34–50. [Google Scholar] [CrossRef]
  45. Kulyukin, V.; Kutiyanawala, A.; LoPresti, E.; Matthews, J.; Simpson, R. IWalker: Toward a rollator-mounted wayfinding system for the elderly. In Proceedings of the 2008 IEEE International Conference on RFID (Frequency Identification), Amman, Jordan, 20–22 July 2008; pp. 303–311. [Google Scholar]
  46. Lu, C.K.; Huang, Y.C.; Lee, C.J. Adaptive guidance system design for the assistive robotic walker. Neurocomputing 2015, 170, 152–160. [Google Scholar] [CrossRef]
  47. Reyes Adame, M.; Yu, J.; Moeller, K. Mobility Support System for Elderly Blind People with a Smart Walker and a Tactile Map. IFMBE Proc. 2016, 57, 602–607. [Google Scholar] [CrossRef]
  48. Thorstensson, A.; Nilsson, J.; Carlson, H.; Zomlefer, M.R. Trunk movements in human locomotion. Acta Physiol. Scand. 1984, 121, 9–22. [Google Scholar] [CrossRef] [PubMed]
  49. Bonnet, V.; Mazzà, C.; McCamley, J.; Cappozzo, A. Use of weighted Fourier linear combiner filters to estimate lower trunk 3D orientation from gyroscope sensors data. J. Neuroeng. Rehabil. 2013, 10, 29. [Google Scholar] [CrossRef] [PubMed]
  50. Neto, A.F.; Gallego, J.A.; Rocon, E.; Abellanas, A.; Pons, J.L.; Ceres, R. Online Cadence Estimation through Force Interaction in Walker Assisted Gait. In Proceedings of the ISSNIP Biosignals and Biorobotics Conference 2010, Vitoria, Brazil, 4–6 January 2010; pp. 1–5. [Google Scholar]
  51. Frizera Neto, A.; Gallego, J.A.; Rocon, E.; Pons, J.L.; Ceres, R. Extraction of user’s navigation commands from upper body force interaction in walker assisted gait. BioMed. Eng. Online 2010, 9, 1–16. [Google Scholar] [CrossRef] [PubMed]
  52. Sierra, S.D.; Molina, J.F.; Gómez, D.A.; Cifuentes, C.A.; Múnera, M.C. Development of an Interface for Human-Robot Interaction on a Robotic Platform for Gait Assistance: AGoRA Smart Walker. In Proceedings of the 2018 IEEE ANDESCON, Santiago de Cali, Colombia, 22–24 August 2018. [Google Scholar]
  53. Grisetti, G.; Stachniss, C.; Burgard, W. Improved Techniques for Grid Mapping With Rao-Blackwellized Particle Filters. IEEE Trans. Robot. 2007, 23, 34–46. [Google Scholar] [CrossRef] [Green Version]
  54. Fox, D.; Burgard, W.; Dellaert, F.; Thrun, S. Monte Carlo Localization: Efficient Position Estimation for Mobile Robots. In Proceedings of the Sixteenth National Conference on Artificial Intelligence and Eleventh Conference on Innovative Applications of Artificial Intelligence, Orlando, FL, USA, 8–22 July 1999; pp. 343–349. [Google Scholar]
  55. Lu, D.V.; Hershberger, D.; Smart, W.D. Layered costmaps for context-sensitive navigation. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 709–715. [Google Scholar] [CrossRef]
  56. Fox, D.; Burgard, W.; Thrun, S. The Dynamic Window Approach to Collision Avoidance. Robot. Autom. Mag. 1997, 4, 1–23. [Google Scholar] [CrossRef]
  57. Rösmann, C.; Feiten, W.; Wösch, T.; Hoffmann, F.; Bertram, T. Trajectory modification considering dynamic constraints of autonomous robots. In Proceedings of the 7th German Conference on Robotics, Munich, Germany, 21–22 May 2012; pp. 74–79. [Google Scholar]
  58. Fotiadis, E.P.; Garzón, M.; Barrientos, A. Human detection from a mobile robot using fusion of laser and vision information. Sensors 2013, 13, 11603–11635. [Google Scholar] [CrossRef]
  59. Garzon Oviedo, M.A.; Barrientos, A.; Del Cerro, J.; Alacid, A.; Fotiadis, E.; Rodríguez-Canosa, G.R.; Wang, B.C. Tracking and following pedestrian trajectories, an approach for autonomous surveillance of critical infrastructures. Ind. Robot. Int. J. 2015, 42, 429–440. [Google Scholar] [CrossRef]
  60. Arras, K.O.; Lau, B.; Grzonka, S.; Luber, M.; Mozos, O.M.; Meyer-Delius, D.; Burgard, W. Range-Based People Detection and Tracking for Socially Enabled Service Robots. In Towards Service Robots for Everyday Environments; Springer Tracts in Advanced Robotics; Springer: Berlin/Heidelberg, Germany, 2012; Volume 76, pp. 235–280. [Google Scholar] [CrossRef]
  61. Schapire, R.E.; Singer, Y. Improved Boosting Algorithms Using Confidence-rated Predictions. Mach. Learn. 1999, 37, 297–336. [Google Scholar] [CrossRef]
  62. Zhang, Q.; Pless, R. Extrinsic Calibration of a Camera and Laser Range Finder (improves camera calibration). IROS 2004, 3, 2301–2306. [Google Scholar] [CrossRef]
  63. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, 20–25 June 2005; Volume I, pp. 886–893. [Google Scholar] [CrossRef]
  64. Niculescu-Mizil, A.; Caruana, R. Predicting good probabilities with supervised learning. In Proceedings of the 22nd International Conference on Machine Learning (ICML’05), Bonn, Germany, 7–11 August 2005; pp. 625–632. [Google Scholar] [CrossRef]
  65. Papadakis, P.; Rives, P.; Spalanzani, A. Adaptive spacing in human-robot interactions. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 2627–2632. [Google Scholar] [CrossRef]
  66. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425. [Google Scholar] [CrossRef] [Green Version]
  67. Venkatesh, V.; Thong, J.Y.L.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  68. de Winter, J.C.F.; Dodou, D. Five-Point Likert Items: t test versus Mann-Whitney-Wilcoxon. Pract. Assess. Res. Eval. 2010, 15, 1–16. [Google Scholar]
  69. Blair, R.C.; Higgins, J.J. A Comparison of the Power of Wilcoxon’s Rank-Sum Statistic to That of Student’s t Statistic under Various Nonnormal Distributions. J. Educ. Stat. 1980, 5, 309. [Google Scholar] [CrossRef]
Figure 1. (a) The AGoRA Smart Walker is a robotic walker mounted on a commercial robotic platform. Several sensor modalities retrofit the walker with user and environment information. (b) Coordinate reference frames on handlebars and force sensors.
Figure 2. HREI interface model and communication channels. (a) HRI and REI systems: (1) estimation of user interaction forces; (2) low-level security rules; (3) laser-based estimation of the user's gait parameters; (4) laser-camera fusion scheme for people detection; (5) laser-based navigation; (6) motion control for navigation goal reaching; (7) low-rise obstacle avoidance; (8) social spacing for people-type obstacles; and (9) therapy supervision. (b) Communication channels.
Figure 3. The Gait Cadence Estimator system takes the vertical interaction forces through a filtering process, based on a band-pass filter that eliminates high frequency noise due to walker’s vibrations. Finally, the Weighted-Fourier Linear Combiner filter adaptively estimates the user’s gait cadence.
Figure 4. Filter system for y-axis forces ($\Phi$ means left or right). There is an independent FS for each y-axis force (i.e., $F_{y_{Left}}$ and $F_{y_{Right}}$), composed of a low-pass filter and an FLC filter.
Figure 5. (a) Raw $F_{y_{Left}}$ signal from the left force sensor. (b) $F_{y_{Left}LP}$ (Blue), meaning the resulting signal from the low-pass filter, and $F_{y_{Left}CAD}$ (Red), meaning the resulting signal from the FLC. (c) $F_{y_{Left}LP}$ and $F_{y_{Left}CAD}$ were subtracted, obtaining the filtered signal without gait components, $\overline{F}_{y_{Left}}$.
Figure 6. HRI interface system diagram.
Figure 7. (a) Navigation raw static map. (b) Navigation edited static map. White means non-obstacle zones, gray means unknown zones and black means obstacles.
Figure 8. Illustration of a navigation task for the AGoRA Smart Walker reaching a specific goal. Green and orange lines represent local and global trajectories calculated by the path planning system. Light blue and dark blue zones represent the 2D cost-map occupancy grid.
Figure 9. Outline of the people detection system.
Figure 10. (a) Clusters obtained from the segmentation process of laser’s data. (b) Three people detected in stationary position.
Figure 11. Warning zone shape and parameters for velocity limitation in the presence of obstacles.
Figure 12. Estimation of possible user’s intentions area.
Figure 13. (a) Reference path for user control tests based on a square-shaped trajectory. Landmarks and path direction were indicated through reference points at path corners. (b) Trajectories achieved by the nine participants under user control trials.
Figure 14. (a) Force (blue) and torque (orange) signals during the trajectory for the first subject. (b) Linear (blue) and angular (orange) velocities obtained from the admittance controller during the trajectory for the first subject.
Figure 15. Navigation and people detection systems during a guidance task. Yellow and purple squares represent people obstacles detected by both the camera and the laser. Yellow and purple circles represent people obstacles detected only by the laser, as well as the obstacle cost inflations. Gray circles show old obstacles that will be removed once the walker senses such areas again. The green line illustrates the path.
Figure 16. Reference trajectory and goals for the guiding task.
Figure 17. Acceptance and usability questionnaire results: Mode 1, user control; Mode 2, navigation system control; Mode 3, shared control.
Table 1. Related works involving smart walkers with the integration of interfaces for Human–Robot–Environment Interaction.
| Walker | Type | Sensory Interface | Internal Modules | Modes of Operation | Shared Control Strategies | Social Interaction |
|---|---|---|---|---|---|---|
| GUIDO [22] | Active | Force sensors, LRF, sonars, encoders | Autonomous navigation; detection of user's intentions; sound feedback | Supervised; autonomous | - | - |
| XR4000 [23] | Active | Force sensors, LRF, sonars, infrared sensors, encoders | Autonomous navigation; detection of user's intentions | Free; supervised; autonomous | Shared walker steering on active mode | - |
| ASBGo++ [21,24,25] | Active | Force sensors, LRF, sonar, infrared sensors, camera, encoders | Autonomous navigation; detection of user's intentions; gait monitoring; user position feedback | Free; supervised; autonomous | - | - |
| JARoW [26,27] | Active | Infrared sensors, encoders, LRFs | User position estimation and prediction; obstacle avoidance | Free; supervised | - | - |
| NeoASAS [14] | Active | Force sensors | Detection of user's intentions | Free | - | - |
| UFES [16,28] | Active | Force sensors, LRF, IMUs, encoders | Path following; obstacle avoidance; detection of user's intentions; gait monitoring | Free; supervised; feedback | Spatially modulated admittance control, visual feedback | - |
| PAMM [29] | Active | Force sensors, sonars, camera, encoders | Autonomous navigation; health monitoring | User control, path following control | Adaptive and shared admittance controller | - |
| MOBOT [17,30,31,32] | Active | Force sensors, LRFs, cameras, Kinect sensors, microphones | Autonomous navigation; detection of user's intentions; speech and gesture recognition; body pose estimation; gait analyzer | Walking assistance, sit-to-stand assistance, nurse type | Adaptive control based on context | - |
| CAIROW [33] | Active | Force sensors, LRFs | Environment analyzer; force analyzer; gait analyzer | Context-aware mode | Adaptive system parameters | - |
| ISR-AIWALKER [34,35] | Active | Force sensors, Kinect sensor, encoders, Leap Motion sensor, RGB-D camera | Detection of user's intentions; gripping recognition; gait analyzer; autonomous navigation | Supervised; navigation aided | Aided user intent by navigation system | - |
| COOL Aide [36] | Passive | Force sensors, LRF, encoders | Autonomous navigation; detection of user's intentions | Supervised | Shared control based on obstacles and user's intentions | - |
| Wachaja et al. [37] | Passive | LRF, tilting LRF | 3D mapping and localization; obstacle avoidance; vibrotactile feedback | Single feedback; multiple feedback | - | - |
| MARC [38,39] | Passive | Sonars, infrared sensors, encoders | Path following; obstacle avoidance | Warning mode, safety braking mode, and braking and steering mode | Shared walker steering | - |
| c-Walker [40] | Passive | Kinect-like sensor, RFID reader, IMU, camera, encoders | Autonomous navigation; people detection and tracking; guidance | Acoustic feedback, mechanic feedback and haptic feedback | Shared walker steering | Social Force Model for path planning |
Table 2. Warning zone parameters adaptation.

| Walker's Velocity (m/s) | STD (m) | SD (m) | WR |
|---|---|---|---|
| ≤0.3 | 0.3 | 0.6 | 1.0 |
| ≤0.4 | 0.3 | 0.8 | 1.2 |
| ≤0.5 | 0.3 | 1.0 | 1.4 |
| ≤0.6 | 0.3 | 1.2 | 1.5 |
| ≤0.8 | 0.3 | 1.4 | 2.0 |
| >0.8 | 0.3 | 2.0 | 3.0 |
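As a concrete reading of Table 2, the following sketch selects the warning zone parameters corresponding to the current walker velocity and applies a simple speed cap when an obstacle enters the zone. The lookup values are taken from the table, while the proportional slowdown between SD and STD is an illustrative assumption, since the exact velocity-limitation law is defined together with the zone geometry of Figure 11.

```python
# Thresholds taken from Table 2: (max velocity [m/s], STD [m], SD [m], WR).
WARNING_ZONE_TABLE = [
    (0.3, 0.3, 0.6, 1.0),
    (0.4, 0.3, 0.8, 1.2),
    (0.5, 0.3, 1.0, 1.4),
    (0.6, 0.3, 1.2, 1.5),
    (0.8, 0.3, 1.4, 2.0),
]
FASTEST = (0.3, 2.0, 3.0)  # parameters used for velocities above 0.8 m/s


def warning_zone_params(velocity):
    """Return (STD, SD, WR) for the current walker velocity, as in Table 2."""
    for v_max, std, sd, wr in WARNING_ZONE_TABLE:
        if velocity <= v_max:
            return std, sd, wr
    return FASTEST


def limited_velocity(velocity, obstacle_distance):
    """Illustrative speed cap: stop inside STD, slow down linearly inside SD."""
    std, sd, _wr = warning_zone_params(velocity)
    if obstacle_distance <= std:
        return 0.0                                                # obstacle too close: stop
    if obstacle_distance <= sd:
        return velocity * (obstacle_distance - std) / (sd - std)  # proportional slowdown
    return velocity                                               # obstacle outside the zone
```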
Table 3. Summary of volunteers who participated in the study.
| Subject | Age (y.o.) | Height (m) | Weight (kg) | Gender |
|---|---|---|---|---|
| 1 | 23 | 1.76 | 65 | Male |
| 2 | 23 | 1.77 | 72 | Male |
| 3 | 23 | 1.65 | 62 | Female |
| 4 | 61 | 1.67 | 65 | Male |
| 5 | 23 | 1.72 | 69 | Male |
| 6 | 59 | 1.60 | 50 | Male |
| 7 | 24 | 1.70 | 75 | Male |
Table 4. Summary of the results obtained for shared control trials.
| Subject | Achieved Goals | Task Time (s) | Mean Linear Speed (m/s) | Percentage of User Control (%) |
|---|---|---|---|---|
| 1 | 10 | 63.94 | 0.34 | 69.19 |
| 2 | 10 | 71.46 | 0.34 | 71.63 |
| 3 | 10 | 48.38 | 0.46 | 53.66 |
| 4 | 10 | 83.45 | 0.23 | 62.55 |
| 5 | 10 | 64.54 | 0.34 | 68.25 |
| 6 | 8 | 80.80 | 0.21 | 73.99 |
| 7 | 10 | 60.29 | 0.37 | 67.71 |
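The per-subject percentages of user control in Table 4 reproduce the aggregate value reported in the discussion; a quick check with NumPy, using the population standard deviation, recovers 66.71 ± 6.26%.

```python
import numpy as np

# Percentage of user control per subject, copied from Table 4.
user_control = np.array([69.19, 71.63, 53.66, 62.55, 68.25, 73.99, 67.71])
print(round(user_control.mean(), 2), round(user_control.std(), 2))  # 66.71 6.26
```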
Table 5. Acceptance and usability questionnaire used in the study.
| No. | Question |
|---|---|
| Q1 | I think the robotic device makes me feel safe |
| Q2 | I think the robotic device was easy to use |
| Q3 | I think most people would learn to use this device quickly, it is intuitive |
| Q4 | I think the device guides me well |
| Q5 | I think my experience interacting with the device was natural |
| Q6 | I think my experience interacting with the device was intuitive |
| Q7 | I think my experience interacting with the device was stressful |
| Q8 | In this session, I felt that I had control of the device |
| Q9 | In this session, I felt that the device had the control of the path to be followed |
| Q10 | In this session, I felt that the device control was shared with me |
Table 6. Mann–Whitney–Wilcoxon p values for paired tests among Q8, Q9 and Q10. p values in bold illustrate significant differences encountered, meaning p ≤ 0.05.

| Question | Mode 1 vs. Mode 2 | Mode 1 vs. Mode 3 | Mode 2 vs. Mode 3 |
|---|---|---|---|
| Q8 | **0.02** | **0.02** | **0.05** |
| Q9 | **0.02** | **0.02** | 0.08 |
| Q10 | 0.37 | 0.136 | **0.04** |
