A protocol standardizing the set of procedures was defined to make the tests uniform and to reduce interference with the results. The protocol covers the following aspects:
The factors observed in this experiment are the edge and mass information of the obstacles, as well as the position correction processes provided by the learning algorithms.
The test scenario contained straight corridors, turns, and obstacles. A set of objects placed along the way served as obstacles for evaluating the information provided by the navigation algorithm, which should calculate the distance and position of each obstacle and provide the user with a viable path. The distance estimates produced by the monocular and stereo vision algorithms were recorded for later evaluation.
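As an illustration of the stereo distance calculation, a minimal sketch using the standard disparity-to-depth relation Z = f·B/d follows; the focal length and baseline values are hypothetical assumptions, not the prototype's actual calibration:

```python
# Hedged sketch of stereo distance estimation via the disparity-to-depth
# relation Z = f * B / d. The focal length (pixels) and baseline (metres)
# below are illustrative assumptions, not the prototype's calibration.

def stereo_distance(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Return the distance (m) to a point from its stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# An obstacle producing a 42-pixel disparity lies about 2 m away:
print(round(stereo_distance(42.0), 2))  # 2.0
```

Larger disparities correspond to closer objects, which is why nearby obstacles are the easiest to range accurately.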
Each user test ends with an interview to record the participant's perception of the system as a whole (physical and logical). The answers complement the thesis with information about usability aspects, helping to characterize the target audience better and indicating where to act to make the system more compatible with the expected demands.
The environment chosen to evaluate the prototype and algorithms was the Ambient Intelligence Laboratory (AmILab), belonging to the Center for Research and Development in Electronic and Information Technology (CETELI) of the Federal University of Amazonas (UFAM). The AmILab measures 10.83 m × 13.80 m and contains straight stretches, turns, and obstacles.
4.2. Experiment Setup
The experiments followed the predefined protocol, and in each test the algorithms were evaluated individually and in combination (hybrid) to record the evolution of performance (precision versus time) and to identify tuning needs.
The location test used an array of 462 cells, each 0.33 m² in area, to record the values reported by the navigation system, inserted into a 2D schema of the scenario (Figure 11). In this representation, the blue line indicates the reference points, the orange dashed line shows the positions given by the visual localization, and the solid red line shows the hybrid localization.
The positions reported by the visual system differed from the expected values, mainly because of natural light and the varying intensities of the artificial lights. These variations confused the interpretation, sometimes causing walls to be classified as part of the floor. The system presented an error rate of 11.3% when only one camera (monocular vision) was used for ground-level perception, where the interference caused by lighting variations was noticeable. Combining the data from the two cameras in the stereo model reduced the error by 3.60%.
To increase the level of accuracy, the hybrid model has a decision layer that keeps or removes data according to predetermined error limits. The fusion algorithms provided a more robust perception of horizontal and vertical orientation, with an average error of approximately 0.186 rad every 10.00 m.
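A minimal sketch of such a decision layer, assuming a hypothetical error limit of 0.5 m between consecutive accepted positions (both the limit and the sample data are invented for illustration):

```python
# Hedged sketch (not the authors' code) of a decision layer that accepts
# a new position estimate only if it stays within a predetermined error
# limit of the last accepted position. The 0.5 m limit is an assumption.

def decision_layer(estimates, last_accepted, max_error_m=0.5):
    """Keep (x, y) estimates whose jump from the last accepted position
    does not exceed max_error_m; discard the rest."""
    accepted = []
    for x, y in estimates:
        dx, dy = x - last_accepted[0], y - last_accepted[1]
        if (dx * dx + dy * dy) ** 0.5 <= max_error_m:
            accepted.append((x, y))
            last_accepted = (x, y)
    return accepted

# The 2 m jump is rejected; the two small moves are kept:
print(decision_layer([(0.1, 0.0), (2.0, 2.0), (0.3, 0.1)], (0.0, 0.0)))
```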
Table 3 shows the average distances and the margins of error observed during testing.
One criterion adopted in the system evaluation is time. This criterion was measured for each subsystem and for the final process to show how the time evolved with each physical and logical intervention, as shown in Table 4.
The time gain in the hybrid localization process came from standard-deviation-based sample selection, which combined the data collected by the two cameras before calculating the positions of the obstacles.
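The standard-deviation sample selection can be sketched as follows; the threshold of k = 1.5 sample standard deviations and the distance samples are assumptions for illustration, not the system's actual parameters:

```python
# Hedged sketch of standard-deviation-based sample selection: samples
# farther than k sample standard deviations from their camera's mean are
# dropped before the two streams are fused by averaging. k = 1.5 and the
# sample data are assumptions for illustration.

from statistics import mean, stdev

def select_and_fuse(cam_a, cam_b, k=1.5):
    """Filter each camera's samples by a k-sigma rule, then average."""
    def keep(samples):
        m, s = mean(samples), stdev(samples)
        kept = [v for v in samples if abs(v - m) <= k * s]
        return kept or samples  # never discard everything
    return mean(keep(cam_a) + keep(cam_b))

# Distances (m) to the same obstacle; camera B has one gross outlier:
a = [2.0, 2.1, 1.9, 2.0, 2.1]
b = [2.0, 2.1, 1.9, 2.0, 2.2, 9.0]
print(round(select_and_fuse(a, b), 2))  # 2.03
```

The 9.0 m outlier in camera B is rejected before fusion, so the combined estimate stays close to the consistent readings.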
Another criterion used to test and analyze the performance of the visual positioning system was the number of frames processed per second (FPS) for each technique added to the process. Efficiency is directly related to the number of frames delivered: the higher the value, the more natural the delivery of location information to the user during navigation.
Table 5 shows the values obtained throughout the tests as each image processing technique was included.
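A minimal sketch of how the FPS criterion can be measured; the per-frame workload below is a dummy placeholder, not the actual image processing pipeline:

```python
# Minimal sketch of the FPS measurement criterion: count how many frames
# a processing function handles per second of wall-clock time. The
# per-frame workload here is a dummy placeholder.

import time

def measure_fps(process_frame, n_frames=100):
    """Run process_frame n_frames times and return frames per second."""
    start = time.perf_counter()
    for _ in range(n_frames):
        process_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

fps = measure_fps(lambda: sum(x * x for x in range(1000)))
print(f"{fps:.1f} FPS")
```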
The i-PDR algorithm presented a mean localization error of 0.11 m and a direction error of 1.8 degrees, keeping the system stable in both positioning modes (static and dynamic). The hybrid model also had a margin of error 93.23% lower than that of the Wi-Fi-based system, which had the worst result among the subsystems.
The tests were performed at the Ambient Intelligence Laboratory (AmILab), whose area was divided into four regions, with obstacles placed along the way, as shown in Figure 12. The audible alerts issued were recorded to assess the speed and accuracy of reporting the presence and location of each obstacle. Users' reactions were also recorded for each navigation run (whether they obeyed or ignored the system alert).
The obstacles placed in each region were as follows: region 1, a wheelchair; region 2, a brick 0.20 m high; region 3, an air balloon hanging at a height of 1.50 m; and region 4, a table 0.75 m high.
Table 6 shows the average error of users’ perceptions of obstacles in the horizontal and vertical planes.
A group of 20 visually impaired users (14 men and 6 women) participated in the tests to evaluate the system behavior (algorithms and hardware). Ages ranged from 26 to 72 years. The participants had varying levels of vision, ranging from 10% visual loss to total blindness. Sixteen participants use a cane to get around, while four do not. These 20 users were initially classified by their degree of visual impairment (Figure 13) and divided into five groups: group 1, partial vision; group 2, total loss of vision; group 3, total loss of vision associated with some cognitive/motor impairment; group 4, total loss of vision associated with some physical disability; and group 5, others.
To make the tests homogeneous in degree of difficulty, users were blindfolded to match those classified as totally blind. Each user performed the test five times, totaling 100 tests. The procedures were as follows:
- Users were blindfolded to remove the vision difference factor (partially blind versus totally blind);
- To avoid information contamination, users who had already completed the test were interviewed and kept separate from the rest of the group.
All users were invited to travel the path plotted on the experimental route, the first time without the help of the electronic device, in whatever way they defined as most comfortable. Users then retraced the route using the device. The results show that the number of collisions with obstacles was smaller when the smart glasses were used, as shown in Figure 14. Two users (users 12 and 13) had great difficulty performing the route both without and with the device. In their interviews, these users explained that the loss of their visual abilities was accompanied by the loss of other faculties: hearing (user 12) and some motor skills (user 13).
The navigation system test ends with a questionnaire in which users describe their feedback on the physical part (the wearable device) and the logical part (the type of sound information, speed, hits, errors, and so on). A set of six questions was evaluated on a Likert scale with the following relationship: 1, excellent; 2, very good; 3, good; 4, satisfactory; and 5, bad.
The interview questions were as follows: question 1, quality of guidance; question 2, independence provided to users; question 3, identification of user position and obstacles; question 4, reliability generated by use of the device; question 5, response time; and question 6, system usability. The response percentages are cataloged in Table 7.
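A sketch of how such Likert answers can be tallied into per-question response percentages; the 20 sample answers below are invented for illustration, not the study's data:

```python
# Hedged sketch of tallying Likert answers (1 = excellent ... 5 = bad)
# into response percentages for one question. The 20 sample answers are
# invented for illustration, not the study's data.

from collections import Counter

LABELS = {1: "excellent", 2: "very good", 3: "good",
          4: "satisfactory", 5: "bad"}

def response_percentages(answers):
    """Map a list of Likert scores (1-5) to a percentage per label."""
    counts = Counter(answers)
    total = len(answers)
    return {LABELS[s]: 100.0 * counts.get(s, 0) / total for s in LABELS}

# 20 invented answers for a single question:
pct = response_percentages([1] * 17 + [2] * 2 + [3])
print(pct["excellent"])  # 85.0
```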
Looking at the assessment results, the two factors with the best scores were response time, with 85%, and location, with 80%. The ratings indicated as excellent (green) are highlighted in the graph in Figure 15.
The factor with the worst score was independence, with only 40% of excellent ratings. This score reflects how often users requested help to reposition themselves in the scenario. This item was broken down further so that a more detailed analysis could indicate its causes: users were organized into two groups, cane users (blue) and non-cane users (red). The results are depicted in the graph of Figure 16.
The results show that 30% of the cane users had little difficulty with the smart glasses, rating the experience as excellent, whereas only 10% of the non-cane users rated it as excellent. A small group, equivalent to 5% of the total users, all non-cane users, had great difficulty in the experiment, rating the experience only as satisfactory.
The evaluation process is complex, and a fair comparison with other works would require the same setup (equipment, software, and database). Even so, it was possible to compare against a set of works whose publications report the algorithms, hardware, and databases used, as well as the results obtained for margins of error and information delivery time. The system was compared with this set of works to evaluate the impact of the choices made.
Figure 17 graphically shows a comparative evaluation of these related works and the present study.
Compared with Xue's work, the proposed model showed more stable results. This stability is a direct result of applying dynamic data reduction with the RANSAC algorithm, which fixes a collection time rather than a fixed data volume [12]. The proposed model calculates the distance to obstacles by stereo vision and, unlike the model presented by Alcantarilla, is concerned with indicating viable paths between the detected objects closest to the user [14]. The strategy adopted to identify the presence of obstacles does not attempt to describe what the obstacles are, unlike Zheng, who developed visual markers and used a map to relate them, increasing the complexity of his system [13]. The model built by Kitt also used visual markers but had strong limitations in low-light environments and with different viewing angles of the floor markers [11]. The i-PDR algorithm and the stereo vision obstacle detector stabilized the location error margins and quickly provided accurate information on obstacle distances and positions, as well as possible viable routes.
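The time-bounded RANSAC data reduction credited above for the stability gain can be sketched as follows, here on a simple 2D line model; the time budget, inlier tolerance, and sample data are assumptions for demonstration:

```python
# Illustrative RANSAC sketch using a time budget instead of a fixed
# iteration count, mirroring the collection-time idea. The line model,
# budget, and inlier tolerance are assumptions for demonstration.

import random
import time

def ransac_line(points, time_budget_s=0.05, inlier_tol=0.1):
    """Fit y = a*x + b by RANSAC, iterating until the time budget expires."""
    best_model, best_inliers = None, -1
    deadline = time.perf_counter() + time_budget_s
    while time.perf_counter() < deadline:
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue  # a vertical pair cannot define y = a*x + b
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(abs(y - (a * x + b)) <= inlier_tol for x, y in points)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Ten collinear points plus two gross outliers:
pts = [(x, 2.0 * x + 1.0) for x in range(10)] + [(3, 40.0), (7, -5.0)]
model, inliers = ransac_line(pts)
print(model, inliers)  # recovers roughly y = 2x + 1 with 10 inliers
```

Bounding the loop by wall-clock time rather than sample count lets the estimator deliver its best-so-far model on a fixed schedule, which matters for real-time navigation feedback.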