Article

Driving Control of a Powered Wheelchair Considering Uncertainty of Gaze Input in an Unknown Environment

1 School of Science for Open and Environmental Systems, Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522, Japan
2 Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522, Japan
3 Department of System Design Engineering, Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(2), 267; https://doi.org/10.3390/app8020267
Submission received: 20 January 2018 / Revised: 3 February 2018 / Accepted: 6 February 2018 / Published: 11 February 2018
(This article belongs to the Section Mechanical Engineering)

Abstract

This paper describes a motion control system for a powered wheelchair that uses eye gaze in an unknown environment. In recent years, new Human–Computer Interfaces (HCIs) that replace joysticks have been developed for people with upper-body disabilities. In this paper, eye movement is used as the HCI. The wheelchair control system proposed in this study aims to realize an operation in which the passenger simply gazes towards the direction in which he or she wants to move, even in an unknown environment. Such an operating method enables easy and accurate movement of the wheelchair in complicated environments that contain several passages on the same side. The proposed system integrates gaze detection and environment recognition in real time using fuzzy set theory. By integrating the passage and gaze information, the wheelchair moves towards the passage at which the passenger gazes, even when several passages are present. The system is also designed to handle uncertain gaze input by using the gaze detection accuracy, and obstacle avoidance is achieved by integrating obstacle information. The motion control system supports safe and smooth movement of the wheelchair by automatically calculating its direction of motion and velocity so as to avoid obstacles while moving in the gaze direction of the passenger. The effectiveness of the proposed system is demonstrated through experiments in a real environment.

Graphical Abstract

1. Introduction

While a powered wheelchair is an important assistive product for people who are physically handicapped, a person with an upper-body disability cannot use a joystick. To address this, Human–Computer Interfaces (HCIs) that replace joysticks have recently been developed. Examples of these HCIs include voice control [1], brain–machine interfaces (BMI) [2], facial muscles [3], and eye blinks [4]. In this study, movement of the eyes is used as the HCI. The muscles around the eyes are known to retain their function for a long time, so this HCI is likely to remain available even when a person cannot move his or her mouth, face, or neck.
There are several conventional methods that use eye movement as an HCI. Al-Haddad et al. proposed an electrooculography (EOG) based control algorithm for target navigation [5,6,7]; however, their technique required surface electrodes to be attached around the operator's eyes. Two modes of wheelchair control were proposed: manual and automatic. In the manual mode, the passenger inputs a turn-right or turn-left signal by looking towards the right or left, and moves forward or stops by looking up or down, respectively. In the automatic mode, the user looks towards a desired destination and blinks (right to start and left to stop) to start navigating the wheelchair to the target position. However, operation in this mode is constrained to a well-defined, known environment. Pingali and colleagues proposed a method, based on head gear with EOG electrodes, that inputs turn-right and turn-left signals by moving the gaze direction to the right and left, and that moves forward and stops when the gaze direction moves up and down [8]. Matsumoto proposed a method in which the gaze direction is detected by processing images from two charge coupled device cameras [9]. The self-position and the environment are recognized using a laser range finder (LRF) and a map created in advance, and the gaze position of the passenger in the environment is then estimated. However, it is difficult for such a wheelchair to move in an unknown environment because a map created in advance is required.
Conventional methods of controlling wheelchair systems in unknown environments also exist. A wheelchair exploring an unknown environment requires real-time map generation and path planning to ensure accurate obstacle-avoiding navigation. This is accomplished by recognizing the surrounding environment through use of electronic sensors. Examples of these systems include those described in [10,11], NavChair developed by Simpson et al. [12], SENARIO (Sensor-Aided Intelligent Wheelchair Navigation) developed by Katevas et al. [13], and the Robchair developed by Pires et al. [14].
This paper proposes an eye-gaze-controlled wheelchair system for navigating unknown environments. A related system was proposed by Eid and colleagues, who controlled a wheelchair by eye gaze in unknown environments [15]. Their system comprised eye-tracking glasses, a depth camera to capture the geometry of the ambient space, and a set of ultrasound and infrared sensors to detect obstacles. The passenger commands the wheelchair to move forward, stop, and turn left or right by looking up, down, left, or right, respectively, along arrows displayed on a laptop placed in front of the passenger.
However, this method was found to be ineffective in complicated environments because of the constraints on the input directions. In this paper, a complicated environment refers to one in which several passages exist on the same side (either left or right), as shown in Figure 1. Such environments make the navigation operation difficult and complicated.
In view of the above difficulties, the wheelchair control system proposed in this study aims to achieve an operation wherein the passenger actually gazes towards the direction in which he or she wishes to move in an unknown environment. Implementation of such an operating method facilitates easy and accurate movement of the wheelchair even in complicated environments comprising passages on the same side.
The proposed system uses an eye tracker and an RGB (Red, Green, Blue) camera to detect the passenger's gaze, and an LRF for environment recognition. All information captured by the sensors is integrated in real time using fuzzy set theory, enabling accurate detection of the passenger's gaze direction along with the obstacles and passages present in the actual environment. By integrating the passage and gaze information with fuzzy set theory, the wheelchair moves towards the passage at which the passenger gazes, even when several passages are present. Moreover, the gaze direction is filtered to suppress the unnatural movement caused by the observation noise of the two eye cameras. In addition, only gaze information with high detection accuracy is used, and the value of the detection accuracy is used in the design of the fuzzy sets. In this way, a system that considers the uncertainty of the gaze input is designed. Obstacle avoidance is achieved by integrating the obstacle information, and the appropriate speed and direction of motion for avoiding obstacles while moving along the gaze direction of the passenger are then determined. This control system is an extension of the conventional method [4] and ensures safe and smooth movement of the wheelchair.
To demonstrate the effectiveness of the proposed method, we performed real-life experiments in a complicated environment, and a long-distance movement experiment was also carried out.

2. Materials and Methods

2.1. Configuration of Control System

2.1.1. Sensors

Figure 2 shows the sensors used in the proposed motion control system of the wheelchair. The eye tracker (Pupil Labs Pupil) [16] detects the passenger’s gaze point in the environment. It has a world camera and two eye cameras. An RGB camera (Microsoft Kinect for Windows v2) (Microsoft Co., Redmond, WA, USA) [17] records the images in the RGB model (defined as RGB camera image). By matching the RGB camera image to that captured by the world camera, the gaze point in the passenger environment is detected. An LRF (Hokuyo UST-10LX) (Hokuyo Automatic Co., Ltd., Osaka, Japan) [18] is used to detect passages and obstacles.
The angle of view of the RGB camera is in the range of −35° to 35°, with the forward direction defined as 0°. The eye tracker obtains images in the RGB model (defined as the world camera image), as shown in Figure 2, and it also obtains the 2D coordinates of the gaze point in the world camera image by detecting the pupil with the two eye cameras. The angle of view of the world camera is in the range of −50° to 50°, with the forward direction defined as 0°. The LRF acquires distance data in the range of −135° to 135°, up to 10 m, with the forward direction defined as 0° and an angular resolution of 0.25°. In this study, we acquire distance data in the range of −90° to 90° at an angular resolution of 1°.
A two-dimensional coordinate system is used in the RGB camera image, with the origin located at the top left corner of the screen; it is defined as the RGB camera coordinate system (x_c, y_c). The image size is 1920 × 1080 pixels. The two-dimensional coordinate system in the world camera image, with the origin at the top left corner of the screen, is defined as the world camera coordinate system (x_e, y_e); the image size is 1280 × 720 pixels. Furthermore, the two-dimensional coordinate system with its origin at the wheelchair in the environment is defined as the wheelchair coordinate system (X_w, Y_w).

2.1.2. System Flow

The control system proposed in this study integrates the input gaze direction and environment information and determines the correct speed and direction of motion in real time. Figure 3 shows the system flow. The control system has four stages: input, environment recognition, integration by the fuzzy set theory, and output.
First, the gaze point (x_g^e, y_g^e) and environmental information are obtained as input. For the eye-gaze input, the passenger is instructed to gaze in the intended direction of motion; as a result, the wheelchair moves in that direction.
Second, the environment is recognized based on the environmental information obtained from the LRF. In this step, obstacles (X_o^w, Y_o^w) and passages (X_p^w, Y_p^w) through which the wheelchair can move are detected at approximately the same time.
Finally, through use of this information, the direction of motion is determined based on the fuzzy set theory with due consideration of the direction in which the passenger wishes to move whilst ensuring safety and avoidance of obstacles.
Section 2.2 describes the method to obtain gaze direction as input, while Section 2.3 describes the passage detection method using LRF. Section 2.4 describes the design of the motion control system based on the fuzzy set theory.
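As a rough orientation, the sketch below outlines one possible shape of this control loop in Python. It is a minimal illustration only, not the authors' implementation: the four stage functions are hypothetical placeholders that would wrap the processing described in Sections 2.2, 2.3 and 2.4.

```python
# Minimal sketch of the control loop in Figure 3; all stage functions are
# hypothetical placeholders passed in by the caller.
import time

CONTROL_PERIOD_S = 0.075  # gaze control cycle of roughly 75 ms (Section 3.1)

def control_step(read_gaze, read_scan, recognize, integrate, command):
    """One cycle: input -> environment recognition -> fuzzy integration -> output."""
    gaze_direction = read_gaze()              # phi_g, or None if unreliable
    scan = read_scan()                        # raw LRF distance data
    passages, obstacles = recognize(scan)     # (X_p^w, Y_p^w), (X_o^w, Y_o^w)
    phi_out, v_out = integrate(gaze_direction, passages, obstacles)
    command(phi_out, v_out)                   # send moving direction and speed

def run(read_gaze, read_scan, recognize, integrate, command):
    while True:
        control_step(read_gaze, read_scan, recognize, integrate, command)
        time.sleep(CONTROL_PERIOD_S)
```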

2.2. Obtaining the Gaze Direction φ_g

Since we wish to integrate information concerning the gaze direction at a later point along with the environmental information acquired by the LRF, this step describes the method to obtain the gaze direction φ_g with the front of the wheelchair coordinate system aligned at 0°. The eye tracker is attached to the head of the passenger and moves freely with respect to the wheelchair. For this reason, not only the information recorded by the eye tracker but also that captured by the RGB camera fixed to the wheelchair is used. First, the gaze point in the world camera coordinate system (x_g^e, y_g^e) is converted to the RGB camera coordinate system (x_g^c, y_g^c). Subsequently, the gaze point in the RGB camera coordinate system is converted into the gaze direction φ_g relative to the wheelchair coordinate system. The gaze direction is then successively obtained and filtered. This makes it possible to acquire gaze information with higher reliability by excluding movements of the passenger that do not qualify as gaze motion.

2.2.1. Obtaining the Gaze Point in the World Camera Coordinate System (x_g^e, y_g^e)

First, the coordinates of the gaze point in the world camera coordinate system are acquired by the eye tracker. The eye tracker obtains the gaze point by image processing of the pupil position in the left- and right-eye images captured by the two eye cameras. The precision with which the pupil can be detected differs depending on how the pupil appears. The pupil detection accuracy S_pupil is represented by a value between 0 and 1. In this study, only gaze points with S_pupil > 0.6 are used, which yields reliable gaze information. This threshold was determined through preliminary verification. S_pupil is one of the two values affecting the gaze detection accuracy.

2.2.2. Obtaining the Gaze Point in the RGB Camera Coordinate System (x_g^c, y_g^c)

The world camera coordinates of the gaze point are converted into the RGB camera coordinate system so that the gaze point can be obtained in the RGB camera image. To this end, an image-matching technique called template matching is performed, and the coordinates of the gaze point are translated using the matching result. Template matching is used because of its fast processing speed.
Template matching checks whether patterns similar to those in a template image are present in a whole image. Examples of template matching are shown in Figure 4. The similarity S_NCC(x_c, y_c) is calculated from the luminance values of the images while sequentially moving the template image over the whole image to identify the region with the highest similarity. The similarity is measured by the Normalized Cross-Correlation (NCC), calculated using Equation (1), where T(i, j) is the luminance value of the template image and I(i, j) is that of the whole image. The closer the NCC value is to 1, the higher the similarity.
$$ S_{NCC}(x_c, y_c) = \frac{\sum_{j=0}^{N-1} \sum_{i=0}^{M-1} I(i,j)\, T(i,j)}{\sqrt{\sum_{j=0}^{N-1} \sum_{i=0}^{M-1} I(i,j)^2 \; \sum_{j=0}^{N-1} \sum_{i=0}^{M-1} T(i,j)^2}} \tag{1} $$
In this study, a part of the world camera image is cut out as the template image, and the RGB camera image is used as the whole image. To speed up the processing, the RGB camera image is compressed from 1920 × 1080 pixels to 384 × 216 pixels, and the world camera image is reduced from 1280 × 720 pixels to 344 × 194 pixels. From the reduced world camera image, a region of 172 × 96 pixels, about half the size in both width and height, is cut out as the template; this region is centered on the gaze point. Figure 4a–c depicts examples of the RGB camera image, the gaze point in the world camera image, and the template image at the crossroad.
Template matching is performed using the above images. The pixel coordinate at which the similarity S_NCC(x_c, y_c) is highest is defined as (x_NCC^c, y_NCC^c). The gaze point in the world camera image is then transformed into the RGB camera image using Equation (2).
$$ (x_g^c, y_g^c) = (x_g^e, y_g^e) + (x_{NCC}^c, y_{NCC}^c) \tag{2} $$
In this study, only matching results for which (x_NCC^c, y_NCC^c) changes by less than 50 pixels from the previous time step are used, as expressed by the following equation.
$$ \begin{cases} |x_{NCC}^c(t) - x_{NCC}^c(t-1)| < 50 \\ |y_{NCC}^c(t) - y_{NCC}^c(t-1)| < 50 \end{cases} \tag{3} $$
Figure 4d–f shows examples of the template matching result and the gaze point in the RGB camera image at the crossroad.
Only matching results with S_NCC(x_NCC^c, y_NCC^c) > 0.9 are used, which yields more reliable gaze information. S_NCC(x_NCC^c, y_NCC^c) is the other of the two values affecting the gaze detection accuracy.
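A minimal sketch of this step using OpenCV is shown below, under assumptions: the image sizes and thresholds follow the text, while the resizing details, the interpretation of the offset in Equation (2), and the function names are ours, not the authors' implementation.

```python
# Hedged sketch of Section 2.2.2: template matching between the world camera
# image and the RGB camera image using normalized cross-correlation.
import cv2

S_NCC_MIN = 0.9    # similarity threshold from this subsection
MAX_JUMP_PX = 50   # Equation (3): maximum allowed jump of the match position

def gaze_point_in_rgb_image(rgb_img, world_img, gaze_e, prev_match=None):
    """Convert a gaze point (x_g^e, y_g^e) in the world camera image into the
    RGB camera image via template matching; returns None if unreliable."""
    # Compress both images as described in the text.
    rgb_small = cv2.resize(rgb_img, (384, 216))      # from 1920 x 1080
    world_small = cv2.resize(world_img, (344, 194))  # from 1280 x 720

    # Cut out a 172 x 96 template centered on the (rescaled) gaze point.
    gx = int(gaze_e[0] * 344 / world_img.shape[1])
    gy = int(gaze_e[1] * 194 / world_img.shape[0])
    x0, y0 = max(gx - 86, 0), max(gy - 48, 0)
    template = world_small[y0:y0 + 96, x0:x0 + 172]

    # Normalized cross-correlation, Equation (1).
    result = cv2.matchTemplate(rgb_small, template, cv2.TM_CCORR_NORMED)
    _, s_ncc, _, match_xy = cv2.minMaxLoc(result)

    # Reject low-similarity matches and large jumps, Equation (3).
    if s_ncc <= S_NCC_MIN:
        return None
    if prev_match is not None and (abs(match_xy[0] - prev_match[0]) >= MAX_JUMP_PX or
                                   abs(match_xy[1] - prev_match[1]) >= MAX_JUMP_PX):
        return None

    # Equation (2): translate the gaze point by the matching result. Here the
    # gaze point's offset inside the template is added to the top-left corner
    # of the best match, then scaled back to the full 1920 x 1080 image.
    x_g_c = (match_xy[0] + (gx - x0)) * 5
    y_g_c = (match_xy[1] + (gy - y0)) * 5
    return (x_g_c, y_g_c), match_xy
```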

2.2.3. Determining the Gaze Direction in the Wheelchair Coordinate System φ_g

The gaze-point coordinates in the RGB camera image (x_g^c, y_g^c) are converted into the gaze direction φ_g in the wheelchair coordinate system. Since the LRF and the RGB camera are located at the same position in the two-dimensional plane, the angle between the RGB camera and the gaze point is identical to that between the LRF and the gaze point; this angle defines the gaze direction φ_g in the wheelchair coordinate system. From the geometrical relationship depicted in Figure 5, the gaze direction φ_g is obtained from Equation (4). For the example illustrated in Figure 5, the gaze direction is φ_g = 8°.
$$ \varphi_g = \arctan\!\left( \frac{x \tan 35^\circ}{960} \right) \tag{4} $$
where x is the horizontal pixel distance of the gaze point from the center of the 1920-pixel-wide RGB camera image.
In this study, the gaze direction is determined in real time, with a total sampling time of approximately 75 ms. Filtering is performed to exclude unnatural gaze movement, using a low-pass filter that blocks signals above a specific frequency (1/6 Hz in this case). Smooth pursuit eye movement, which occurs when tracking a moving visual object, is believed to be capable of following targets moving at speeds of the order of 30°/s [19]. For the purpose of this study, eye movement speeds of the order of 60°/s or higher are considered unnatural and are therefore discarded. In line with this consideration, a fourth-order low-pass filter, described by Equation (5), is designed with a cutoff frequency of 1/6 Hz and a sampling frequency of 1/0.075 Hz. The order of the low-pass filter was determined through preliminary verification.
$$ \hat{\varphi}_g = -0.0158\, \varphi_g(t) + 0.2502\, \varphi_g(t-1) + 0.5319\, \varphi_g(t-2) + 0.2502\, \varphi_g(t-3) - 0.0158\, \varphi_g(t-4) \tag{5} $$
In addition, although the control cycle of the eye gaze detection is 75 ms, a sample is excluded as low-reliability gaze information whenever the pupil detection accuracy S_pupil or the image matching accuracy S_NCC falls below the threshold described in Section 2.2.1 or Section 2.2.2. In that case, filtering is carried out using the previous value, φ_g(t) = φ_g(t−1). The pupil detection accuracy S_pupil falls below the threshold, for example, when the eyes are not opened properly, and the image matching accuracy S_NCC falls below the threshold when the environment in front of the wheelchair has few visual features.
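The sketch below illustrates this subsection: Equation (4) for the gaze direction and the FIR low-pass filter of Equation (5), with the previous value reused when the input is unreliable. Reading x as the pixel offset from the image center and treating the first and last filter taps as negative are our assumptions; the names are hypothetical.

```python
# Hedged sketch of Section 2.2.3: gaze direction from the pixel coordinate and
# low-pass filtering of the gaze direction.
import math
from collections import deque

# FIR coefficients of Equation (5), newest sample first.
FIR = [-0.0158, 0.2502, 0.5319, 0.2502, -0.0158]

def gaze_direction_deg(x_g_c):
    """Equation (4): horizontal pixel coordinate in the 1920-wide RGB image
    (center at 960, half angle of view 35 deg) to a direction in degrees."""
    x = x_g_c - 960
    return math.degrees(math.atan(x * math.tan(math.radians(35)) / 960))

class GazeFilter:
    """Fourth-order FIR low-pass filter applied every 75 ms cycle."""
    def __init__(self):
        self.history = deque([0.0] * len(FIR), maxlen=len(FIR))

    def update(self, phi_g=None):
        # If S_pupil or S_NCC is below its threshold, phi_g is None and the
        # previous value is reused: phi_g(t) = phi_g(t - 1).
        if phi_g is None:
            phi_g = self.history[0]
        self.history.appendleft(phi_g)
        return sum(c * v for c, v in zip(FIR, self.history))
```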
In this manner, relevant information concerning gaze direction is obtained.

2.3. Passage Detection by LRF

Passage detection by the LRF extends the conventional method [4]. The conventional method aims to provide an algorithm applicable to a real environment and detects passages through which a wheelchair can move, even in environments with passages and obstacles of various shapes. Specifically, obstacles are grouped from the depth information of the LRF, and the distances between obstacles are calculated to detect a gap (X_p^w, Y_p^w) through which the wheelchair can pass.

2.3.1. Adaptation to Barrier-Free Environment

In the conventional method, a passage width of 1.2–2.5 m is assumed, and an LRF with a range of 4 m is used. However, in recent years, the construction of barrier-free buildings such as hospitals and welfare facilities has increased, and wider passages are now required in public facilities [20]. The Ministry of Health, Labour and Welfare, through the Social Security Council (Medical Subcommittee), requires that medical facilities provide a passage width of at least 2.7 m for patients receiving prolonged medical care [21].
Therefore, in this study, the maximum passage width and the LRF range were set to w_max = 3.5 m and 10 m, respectively, so that passage detection can be performed even in a barrier-free environment such as a hospital.
Figure 6a shows the result of passage detection in the environment of Figure 1 using the wheelchair coordinate system and the LRF. This environment includes a passage more than 2.7 m wide. (X_p^w, Y_p^w) represents the center coordinates of the detected passage. Furthermore, we confirmed that a passage can be detected at a distance of about 8 m from the wheelchair.

2.3.2. Interpolation of Passage Detection

In this study, the passages are interpolated. In the conventional method, passage detection is performed at each time step; however, there are moments when a passage is not detected. Therefore, the center coordinates of a passage are saved once it has been detected. Because the system is assumed to operate in an unknown environment without a prior environmental map, the passage center coordinates are saved as absolute coordinates with the movement start position as the origin. The number of times a passage center is detected within 0.7 m of the saved coordinates is then counted. When this count exceeds 400, the saved passage center is considered reliable. If, after that, no passage center is detected within 0.7 m of the saved coordinates, the passage center coordinates (X̂_p^w, Ŷ_p^w) are interpolated at that position. By interpolating passages in this manner, more accurate passage detection is achieved.
Figure 6b shows the result of interpolation of the passage center coordinates when turning right. The interpolated center coordinates are marked with a red circle.
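A minimal sketch of this interpolation logic is shown below. The 0.7 m radius and the count of 400 come from the text; the data structure and the handling of newly observed centers are assumptions, not the authors' implementation.

```python
# Hedged sketch of Section 2.3.2: remembering and interpolating passage centers.
import math

RADIUS_M = 0.7    # association radius around a saved passage center
MIN_COUNT = 400   # detections required before a saved center is trusted

class PassageMemory:
    def __init__(self):
        # Saved passage centers in absolute coordinates (origin = start
        # position of the movement): list of [X, Y, detection_count].
        self.saved = []

    def update(self, detected_centers):
        """detected_centers: passage centers detected in this cycle, already
        transformed into absolute coordinates. Returns interpolated centers."""
        interpolated = []
        for entry in self.saved:
            near = [c for c in detected_centers
                    if math.hypot(c[0] - entry[0], c[1] - entry[1]) < RADIUS_M]
            if near:
                entry[2] += len(near)          # count supporting detections
            elif entry[2] > MIN_COUNT:
                # Reliable center not detected this cycle: interpolate it.
                interpolated.append((entry[0], entry[1]))
        # Remember centers that do not match any saved one yet.
        for c in detected_centers:
            if all(math.hypot(c[0] - e[0], c[1] - e[1]) >= RADIUS_M
                   for e in self.saved):
                self.saved.append([c[0], c[1], 1])
        return interpolated
```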

2.4. Design of Motion Control System Based on the Fuzzy Set Theory

Fuzzy set theory is used to integrate the gaze input and the information about the surroundings and thereby determine the direction and speed of the wheelchair. By integrating the passage and gaze information, the wheelchair moves towards the passage at which the passenger gazes, even when several passages are present; by integrating the obstacle information, obstacle avoidance is achieved. Membership functions (MFs) are created based on various rules and integrated using fuzzy set theory, so that an output satisfying each requirement is obtained. An MF is a function obtained by plotting the angle relative to the wheelchair on the horizontal axis and a grade for each direction on the vertical axis; a high grade indicates a direction in which the passenger wants to move. In this study, an MF μ_g for moving in the gaze direction, an MF μ_p for moving in the passage direction, and an MF μ_o for avoiding obstacles are created and integrated by fuzzy set theory to determine the speed and direction of motion.

2.4.1. MF for Gaze Direction μ_g

In order to move the wheelchair along the gaze direction, a normally distributed MF μ_g is created, as depicted in Figure 7, with its vertex placed at the gaze direction φ_g. Since the spread of a normal distribution is determined by its standard deviation, the standard deviation was set to 20° through preliminary verification. The higher the vertex grade μ_g^top, the more the wheelchair tends to move along the gaze direction. Therefore, the vertex grade is calculated by Equation (6) from the pupil detection accuracy S_pupil and the template matching similarity S_NCC obtained during the acquisition of the gaze direction in Section 2.2.
$$ \mu_g^{top} = 0.6 + 0.4 \left( \frac{S_{pupil} - 0.6}{0.4} \cdot \frac{S_{NCC} - 0.9}{0.1} \right) \tag{6} $$
As a result, the vertex grade is large when the gaze direction is highly reliable and small when its reliability is low.
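A minimal sketch of this MF on a 1-degree grid is shown below; the grid itself is an assumption, while the 20-degree standard deviation and Equation (6) follow the text.

```python
# Hedged sketch of Section 2.4.1: normally distributed MF for the gaze direction.
import numpy as np

ANGLES = np.arange(-90, 91)   # candidate directions in the wheelchair frame (deg)
SIGMA_DEG = 20.0

def mf_gaze(phi_g, s_pupil, s_ncc):
    # Equation (6): the vertex grade grows with the gaze detection accuracy.
    top = 0.6 + 0.4 * ((s_pupil - 0.6) / 0.4) * ((s_ncc - 0.9) / 0.1)
    # Normal-distribution-shaped MF with its vertex at the gaze direction.
    return top * np.exp(-0.5 * ((ANGLES - phi_g) / SIGMA_DEG) ** 2)
```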

2.4.2. MF for Passage Direction μ_p

In order to move the wheelchair in the passage direction, a triangular MF μ_p is created, as depicted in Figure 8, with its vertex placed at the direction of the passage center φ_p. The base of the triangle was set to 120° through preliminary verification, and the vertex grade is 1. To avoid a sudden start, the grade is increased gradually from the beginning of the movement.
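The triangular MF can be sketched as below, on the same 1-degree grid as the gaze MF above; the gradual ramp-up at the start of movement is omitted here.

```python
# Hedged sketch of Section 2.4.2: triangular MF for the passage direction.
import numpy as np

ANGLES = np.arange(-90, 91)

def mf_passage(phi_p, peak=1.0, base_deg=120.0):
    # Triangle with its vertex (grade = peak) at the passage center direction
    # phi_p and a base of base_deg degrees.
    half = base_deg / 2.0
    return peak * np.clip(1.0 - np.abs(ANGLES - phi_p) / half, 0.0, 1.0)
```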

2.4.3. MF for Obstacles μ_o

To avoid obstacles, an MF μ_o is created. As shown in Figure 9, this MF is constructed from the geometrical relationship with an obstacle.
The concave-type MF μ_o suppresses the grade in the direction of contact with the obstacle, φ_o. This makes it possible to move while keeping a distance d_th from the observed obstacle point. The grade a of the suppressed portion is calculated by Equation (7) from the distance l_i between the wheelchair and the obstacle and the distance l_th required for avoidance. The parameter l_th suppresses the grade when the obstacle is within close range, but not when it is far away.
$$ a = \begin{cases} \dfrac{l_i - d_{th} - r_{wheel}}{l_{th} - d_{th} - r_{wheel}} & \text{if } l_i < l_{th} \\[4pt] 1.0 & \text{otherwise} \end{cases} \tag{7} $$
The margin from obstacles is d_th = 0.3 m. Increasing l_th initially improves safety, but an excessively large value causes unnecessary avoidance even when the passenger wants to approach an object. For this reason, l_th = 4.0 m is used in this study.
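The sketch below illustrates the concave obstacle MF. Equation (7) and the values of d_th and l_th follow the text; the wheelchair radius r_wheel and the angular width of the suppressed region are assumptions not stated in the paper.

```python
# Hedged sketch of Section 2.4.3: concave MF suppressing grades towards obstacles.
import numpy as np

ANGLES = np.arange(-90, 91)
D_TH = 0.3      # margin from obstacles [m]
L_TH = 4.0      # distance required for avoidance [m]
R_WHEEL = 0.3   # assumed wheelchair radius [m]

def mf_obstacle(obstacles, width_deg=15.0):
    """obstacles: iterable of (phi_o [deg], l_i [m]) pairs from the LRF."""
    mu = np.ones_like(ANGLES, dtype=float)
    for phi_o, l_i in obstacles:
        # Equation (7): suppression level for this obstacle.
        if l_i < L_TH:
            a = max((l_i - D_TH - R_WHEEL) / (L_TH - D_TH - R_WHEEL), 0.0)
        else:
            a = 1.0
        # Suppress the grade around the contact direction phi_o.
        mask = np.abs(ANGLES - phi_o) <= width_deg
        mu[mask] = np.minimum(mu[mask], a)
    return mu
```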

2.4.4. Integration and Output

The MF μ_g for moving in the gaze direction, the MF μ_p for moving in the passage direction, and the MF μ_o for avoiding obstacles are integrated to determine the direction of motion and the speed. First, μ_g, μ_p, and μ_o are combined with the logical AND operator, giving the MF μ_mix of Equation (8).
Second, based on the largest grade of μ_mix, the moving direction φ_out and the speed v_out of the wheelchair are calculated from Equations (9) and (10), where v_max is the maximum speed of the wheelchair; in this study, v_max = 0.5 m/s. Figure 10 shows the result of the integration process. The gaze and passage directions act as the target direction, the wall acts as an obstacle that suppresses the grade in its direction, and the moving direction φ_out is finally determined.
$$ \mu_{mix} = \mu_g \wedge \mu_p \wedge \mu_o \tag{8} $$
$$ \varphi_{out} = \arg\max(\mu_{mix}) \tag{9} $$
$$ v_{out} = \mu_{mix}(\varphi_{out})\, v_{max} \tag{10} $$
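A minimal sketch of this integration step is given below, taking the fuzzy AND as the element-wise minimum and reusing the 1-degree grid of the previous sketches; this interpretation of the AND operator is our assumption.

```python
# Hedged sketch of Section 2.4.4: Equations (8)-(10).
import numpy as np

ANGLES = np.arange(-90, 91)
V_MAX = 0.5   # maximum wheelchair speed [m/s]

def integrate(mu_g, mu_p, mu_o):
    mu_mix = np.minimum(np.minimum(mu_g, mu_p), mu_o)   # Equation (8)
    idx = int(np.argmax(mu_mix))
    phi_out = float(ANGLES[idx])                        # Equation (9)
    v_out = float(mu_mix[idx]) * V_MAX                  # Equation (10)
    return phi_out, v_out
```

For example, combining mf_gaze, mf_passage, and mf_obstacle from the previous sketches and passing them to integrate() yields a moving direction near the gazed-at passage whose grade has not been suppressed by a nearby wall.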

3. Results and Discussion

3.1. Outline of Experiment

Experiments were performed using a powered wheelchair in a complicated environment with multiple passages on the same side, as shown in Figure 1, to verify the effectiveness of the proposed system. Three volunteers (two men and one woman, mean age 23.3 ± 0.47 years) were recruited as subjects. They were instructed to gaze towards the direction in which they wanted to turn. We confirmed that they could select either the front or the back passage on the right side and turn right. Section 3.2 describes these experiments in detail.
A long-distance movement experiment, shown in Figure 11, was also carried out. One volunteer (a 23-year-old man) was recruited as the subject. He was instructed to gaze towards the direction in which he wanted to turn. We confirmed that he could turn left and right once each and move over a long distance. Section 3.3 describes this experiment in detail.
The maximum speed of the wheelchair was set to 0.50 m/s. In addition, the control cycle for obtaining the gaze direction is about 75 ms, and that of the LRF is about 25 ms.

3.2. Verification of the Experiments in the Complicated Environment

Figure 12 depicts the operating state of the experiment for one subject (wheelchair trajectory, time history of the gaze direction φ_g, direction of movement φ_out, and speed v_out) when the subject performed a right turn at the front and back passages. Figure 13 and Figure 14 show the gaze detection results, the state of the experiment, and the result of integrating the MFs based on environment recognition and fuzzy set theory when the subject turned right at the front and back passages, respectively. Figure 15 shows the details of Figure 13(d-3,e-3,d-4,e-4), and Figure 16 shows the details of Figure 14(d-3,e-3,d-4,e-4). Videos S1-1 and S2-1 show the state of the experiment, S1-2 and S2-2 show the environment recognition, and S1-3 and S2-3 show the MFs when the subject turned right at the front and back passages, respectively.
We confirmed that the gaze and the passages could be detected at each time step in the real environment. In section A in Figure 13a, the subject is gazing at the front passage; as a result, the direction of movement changes to the right, and the subject turned right at the front passage. Figure 13a,b show that the subject is gazing near the direction φ_p5, which is the front passage, but because the wall is approaching, the output direction φ_out is determined to the left of φ_p5. Figure 13c,d show that the subject is gazing near the direction φ_p4, which is the front passage, and since there is no wall in that direction, the output direction φ_out is determined near φ_p4.
In section B in Figure 13b, the subject is gazing at the back passage; as a result, the direction of movement stays straight, and the subject did not turn right at the front passage. Figure 15a,b show that the subject is gazing near the direction φ_p3, which is the back passage, but because the wall is approaching, the output direction φ_out is determined to the left of φ_p3. In section C, the subject is gazing at the back passage; as a result, the direction of movement changes to the right, and the subject turned right at the back passage. Figure 15c,d show that the subject is gazing near the direction φ_p1, which is the back passage, and since there is no wall in that direction, the output direction φ_out is determined near φ_p1.
These results confirm the ability of a passenger to operate the wheelchair by merely gazing towards the direction in which he or she wishes to move in an unknown environment.
At the same time, the effectiveness of the proposed system in a complex environment is confirmed, and this was verified with multiple subjects.

3.3. Verification of the Long Distance Movement Experiment

Figure 17 shows the operating state of the experiment for the subject (wheelchair trajectory, time history of the gaze direction φ_g, direction of movement φ_out, and speed v_out) when the subject performed a long-distance movement. Figure 18 shows the gaze detection results, the state of the experiment, and the result of integrating the MFs based on environment recognition and fuzzy set theory during the long-distance movement. Video S3-1 shows the state of the experiment, S3-2 shows the environment recognition, and S3-3 shows the MFs during the long-distance movement.
We confirmed that the gaze and the passages could be detected at each time step in the real environment. First, in sections (1) to (4) in Figure 17a and Figure 18a,b, the subject gazes in the left-turn direction; as a result, the direction of movement changes to the left in Figure 17b, and he turned left. Second, in sections (6) to (9) in Figure 17a and Figure 18a,b, the subject gazes in the right-turn direction; as a result, the direction of movement changes to the right in Figure 17b, and he turned right. Finally, in sections (10) and (11) in Figure 17a and Figure 18a,b, the subject gazes in the straight-ahead direction; as a result, the direction of movement becomes straight in Figure 17b, and he went straight.
This result confirms that a passenger can operate the wheelchair over a long distance simply by gazing in the direction in which he wants to move.

4. Conclusions

This study focused on the design of a wheelchair motion-control system that a passenger operates by gazing towards the direction in which he or she wishes to move in an unknown environment. Such an operating method facilitates easy and accurate movement of the wheelchair even in complicated environments with passages on the same side, allowing selection of, and movement along, either the front or the back passage. The proposed system employs an eye tracker and an RGB camera for detecting the passenger's gaze and an LRF for environment recognition. The aggregated sensor information is integrated in real time using fuzzy set theory. By integrating the passage and gaze information, the wheelchair moves towards the passage at which the passenger gazes, even when several passages are present, and obstacle avoidance is achieved by integrating the obstacle information. As a result, it is possible to detect the gaze direction of the passenger along with the obstacles and passages that exist in the actual environment. The appropriate speed and direction of motion for avoiding these obstacles while moving along the passenger's gaze direction are then determined. The effectiveness of the proposed method was verified through experiments involving the actual HCI in a real operating environment.
In future work, the authors intend to perform experiments involving various age groups and examine whether there is a difference in gaze movement depending on the subject's age. In addition, they intend to verify the system's effectiveness in dynamic real-life environments in which other people move through the scene.

Supplementary Materials

The following are available online at https://www.mdpi.com/2076-3417/8/2/267/s1, Video S1-1, S1-2, S1-3: Verification experiment turning right at the front passage, Video S2-1, S2-2, S2-3: Verification experiment turning right at the back passage, Video S3-1, S3-2, S3-3: Verification experiment moving a long distance.

Acknowledgments

This study was supported by “A Framework PRINTEPS to Develop Practical Artificial Intelligence” of the Core Research for Evolutional Science and Technology (CREST) of the Japan Science and Technology Agency (JST) under Grant Number JPMJCR14E3.

Author Contributions

Airi Ishizuka, Ayanori Yorozu and Masaki Takahashi conceived and designed the proposed method and the verification experiments; Airi Ishizuka performed the experiments; Airi Ishizuka wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Pires, G.; Nunes, U. A wheelchair steered through voice commands and assisted by a reactive fuzzy-logic controller. J. Intell. Robot. Syst. 2002, 34, 301–314.
2. Iturrate, I.; Antelis, J.M.; Kubler, A.; Minguez, J. A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation. IEEE Trans. Robot. 2009, 25, 614–627.
3. Tamura, H.; Manabe, T.; Tanno, K.; Fuse, Y. The electric wheelchair control system using surface-electromygram of facial muscles. In Proceedings of the World Automation Congress (WAC), Kobe, Japan, 19–23 September 2010; pp. 1–6.
4. Okugawa, K.; Nakanishi, M.; Mitsukura, Y.; Takahashi, M. Experimental verification for driving control of a powered wheelchair by voluntary eye blinking and with environment recognition. Trans. JSME 2014, 80, 1–15. (In Japanese)
5. Al-Haddad, A.; Sudirman, R.; Omar, C. Gaze at desired destination, and wheelchair will navigate towards it. New technique to guide wheelchair motion based on EOG signals. In Proceedings of the 2011 First International Conference on Informatics and Computational Intelligence (ICI), Bandung, Indonesia, 12–14 December 2011; pp. 126–131.
6. Al-Haddad, A.; Sudirman, R.; Omar, C.; Hui, K.Y.; Jimin, M.R. Wheelchair motion control guide using eye gaze and blinks based on pointbug algorithm. In Proceedings of the 2012 Third International Conference on Intelligent Systems, Modelling and Simulation (ISMS), Kota Kinabalu, Malaysia, 8–10 February 2012; pp. 37–42.
7. Al-Haddad, A.; Sudirman, R.; Omar, C.; Hui, K.Y.; Jimin, M.R. Wheelchair motion control guide using eye gaze and blinks based on bug 2 algorithm. In Proceedings of the 2012 8th International Conference on Information Science and Digital Content Technology (ICIDT), Jeju, Korea, 26–28 June 2012; pp. 438–443.
8. Pingali, T.R.; Dubey, S.; Shivaprasad, A.; Varshney, A.; Ravishankar, S.; Pingali, G.R.; Padmaja, K.Y. Eye-gesture controlled intelligent wheelchair using Electro-Oculography. In Proceedings of the 2014 IEEE International Symposium on Circuits and Systems (ISCAS), Melbourne, VIC, Australia, 1–5 June 2014; pp. 2065–2068.
9. Matsumoto, Y.; Ino, T.; Ogasawara, T. Development of intelligent wheelchair system with face and gaze based interface. In Proceedings of the 10th IEEE International Workshop on Robot and Human Interactive Communication, Bordeaux, Paris, France, 18–21 September 2001; pp. 262–267.
10. Liu, G.; Yao, M.; Zhang, L.; Zhang, C. Fuzzy controller for obstacle avoidance in electric wheelchair with ultrasonic sensors. In Proceedings of the 2011 International Symposium on Computer Science and Society (ISCCS), Kota Kinabalu, Malaysia, 16–17 July 2011; pp. 71–74.
11. Nguyen, A.V.; Nguyen, L.B.; Su, S.; Nguyen, H.T. The advancement of an obstacle avoidance bayesian neural network for an intelligent wheelchair. In Proceedings of the 2013 35th Annual International Conference on Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 3642–3645.
12. Levine, S.P.; Bell, D.A.; Jaros, L.A.; Simpson, R.C.; Koren, Y.; Borenstein, J. The NavChair assistive wheelchair navigation system. IEEE Trans. Rehabil. Eng. 1999, 7, 443–451.
13. Katevas, N.I.; Sgouros, N.M.; Tzafestas, S.G.; Papakonstantinou, G.; Beattie, P.; Bishop, J.M.; Koutsouris, D. The autonomous mobile robot SENARIO: A sensor aided intelligent navigation system for powered wheelchairs. IEEE Robot. Autom. Mag. 1997, 4, 60–70.
14. Pires, G.; Nunes, U.; de Almeida, A.T. Robchair-a semi-autonomous wheelchair for disabled people. In Proceedings of the 3rd IFAC Symposium on Intelligent Autonomous Vehicles 1998 (IAV'98), Madrid, Spain, 25–27 March 1998; pp. 509–513.
15. Eid, M.A.; Giakoumidis, N.; El-Saddik, A. A Novel Eye-Gaze-Controlled Wheelchair System for Navigating Unknown Environments: Case Study with a Person with ALS. IEEE Access 2016, 4, 558–573.
16. Pupil Labs. Available online: https://pupil-labs.com (accessed on 9 January 2018).
17. Microsoft. Available online: https://dev.windows.com/en-us/kinect (accessed on 9 January 2018).
18. Hokuyo Automatic Co. Available online: https://www.hokuyo-aut.co.jp (accessed on 9 January 2018).
19. Vision Society of Japan. Visual Information Processing Handbook; Asakura Shoten: Tokyo, Japan, 2000; pp. 390–398.
20. Ministry of Land, Infrastructure, Transport and Tourism. Building Design Standard Considering Smooth Movement of Elderly People, Handicapped Person (FY2008). Available online: http://www.mlit.go.jp/jutakukentiku/jutakukentiku_house_fr_000049.html (accessed on 9 January 2018).
21. Ministry of Health, Labour and Welfare. Regarding the Medical Facilities System, the Social Security Council (Medical Subcommittee). Available online: http://www.mhlw.go.jp/stf/shingi/shingi-hosho.html?tid=126719 (accessed on 9 January 2018).
Figure 1. Examples of complicated environments (a) 2-dimensional environment; (b) Real-life environment.
Figure 2. (a) Sensor configuration; (b) The range of sensors; (c) RGB camera image obtained by RGB camera; (d) Environment recognition obtained by LRF; (e) World camera image obtained by Eye tracker; (f) Eye camera image obtained by Eye tracker.
Figure 3. System flow.
Figure 4. Example of determination of gaze-point coordinates: (a) RGB camera image (whole image); (b) Gaze point in the world camera coordinate system; (c) Template image; (d) Checking similar patterns; (e) Result obtained from template matching; (f) Gaze point in the RGB camera coordinate system.
Figure 5. Determination of the gaze direction φ_g.
Figure 6. Passage detection by laser range finder (LRF): (a) Result of passage detection; (b) Result of interpolation of the passage.
Figure 7. Membership function (MF) for the gaze direction μ_g: (a) MF; (b) 2-dimensional environment.
Figure 8. MF for the passage direction μ_p: (a) MF; (b) 2-dimensional environment.
Figure 9. MF for obstacles μ_o: (a) MF; (b) 2-dimensional environment.
Figure 10. Integration of the MFs into μ_mix: (a) MFs μ_g, μ_p, μ_o, and μ_mix; (b) 2-dimensional environment.
Figure 11. The long distance movement environment.
Figure 12. Results obtained when subjects executed a right turn at the front (1) and back (2) passage: (a) Wheelchair trajectory; (b) Gaze direction φ_g; (c) Direction of movement φ_out; (d) Speed v_out.
Figure 13. Results obtained when the subject executed a right turn at the front passage: (1) 0.9 s; (2) 5.5 s; (3) 7.1 s; (4) 9.6 s; (5) 13.2 s; (a) World camera image; (b) RGB camera image; (c) Real environment; (d) Integration of MF μ_mix; (e) LRF data.
Figure 14. Results obtained when the subject executed a right turn at the back passage: (1) 2.8 s; (2) 5.8 s; (3) 11.4 s; (4) 17.1 s; (5) 21.4 s; (a) World camera image; (b) RGB camera image; (c) Real environment; (d) Integration of MF μ_mix; (e) LRF data.
Figure 15. Details of Figure 13: (a) Detail of (d-3); (b) Detail of (e-3); (c) Detail of (d-4); (d) Detail of (e-4).
Figure 16. Details of Figure 14: (a) Detail of (d-3); (b) Detail of (e-3); (c) Detail of (d-4); (d) Detail of (e-4).
Figure 17. Results obtained when the subject executed a long-distance movement: (a) Wheelchair trajectory; (b) Gaze direction φ_g; (c) Direction of movement φ_out; (d) Speed v_out.
Figure 18. Results obtained when the subject executed a long-distance movement: (1) 0.0 s; (2) 10.6 s; (3) 13.6 s; (4) 21.7 s; (5) 39.2 s; (6) 52.7 s; (7) 56.5 s; (8) 59.6 s; (9) 61.5 s; (10) 73.7 s; (11) 80.1 s; (12) 95.3 s; (13) 100.3 s; (a) World camera image; (b) RGB camera image; (c) Real environment; (d) Integration of MF μ_mix; (e) LRF data.
