Article

Open Software/Hardware Platform for Human-Computer Interface Based on Electrooculography (EOG) Signal Classification

by Jayro Martínez-Cerveró 1, Majid Khalili Ardali 1, Andres Jaramillo-Gonzalez 1, Shizhe Wu 1, Alessandro Tonin 2, Niels Birbaumer 1 and Ujwal Chaudhary 1,2,*

1 Institute of Medical Psychology and Behavioural Neurobiology, University of Tübingen, Silcherstraße 5, 72076 Tübingen, Germany
2 Wyss Center for Bio- and Neuro-Engineering, Chemin des Mines 9, CH-1202 Geneva, Switzerland
* Author to whom correspondence should be addressed.
Sensors 2020, 20(9), 2443; https://doi.org/10.3390/s20092443
Submission received: 21 February 2020 / Revised: 21 April 2020 / Accepted: 23 April 2020 / Published: 25 April 2020

Abstract: Electrooculography (EOG) signals have been widely used in Human-Computer Interfaces (HCI). The HCI systems proposed in the literature make use of self-designed or closed environments, which restricts the number of potential users and applications. Here, we present a system for classifying four directions of eye movement from EOG signals. The system is built on open-source ecosystems: the Raspberry Pi single-board computer, the OpenBCI biosignal acquisition device, and open-source Python libraries. The resulting system is cheap, compact, and easy to carry, and it can be replicated or modified. We used the Maximum, Minimum, and Median trial values as features to train a Support Vector Machine (SVM) classifier. A mean accuracy of 90% was obtained across 7 out of 10 subjects for the online classification of Up, Down, Left, and Right movements. This classification system can be used as an input stage for an HCI, e.g., for assisted communication in paralyzed people.

1. Introduction

In the past few years, we have seen an exponential growth in the development of Human-Computer Interface (HCI) systems. These systems have been applied for a wide range of purposes like controlling a computer cursor [1], a virtual keyboard [2], a prosthesis [3], or a wheelchair [4,5,6,7]. They could also be used for patient rehabilitation and communication [8,9,10,11]. HCI systems can make use of different input signals such as voice [7], electromyography (EMG) [12], electroencephalography (EEG) [13], near-infrared spectroscopy (NIRS) [14,15,16] or electrooculography (EOG) [5].
In this paper, we describe an EOG classification system capable of accurately and consistently classifying Up, Down, Left, and Right eye movements. The system is small, easy to carry, with considerable autonomy, and economical. It was developed using open hardware and software, not only because of economic reasons, but also to ensure that the system could reach as many people as possible and could be improved and adapted in the future by anyone with the required skills.
The end goal of this work is to build a system that could be easily connected to a communication or movement assistance device like a wheelchair, any kind of speller application, or merely a computer mouse and a virtual keyboard.
To achieve these objectives, we have developed and integrated the code needed for:
  • Acquiring the Electrooculography (EOG) signals.
  • Processing these signals.
  • Extracting the signal features.
  • Classifying the features previously extracted.
EOG measures changes in the direction of the eyeball's electrical dipole, with the positive pole at the front [17]. The technique of recording these potentials was introduced for diagnostic purposes in the 1930s by R. Jung [18]. The difference in potential on which EOG is based arises from the electrically active nerves in the posterior part of the eyeball, where the retina is located, and the front part, mainly the cornea [19]. This creates an electrical dipole between the cornea and the retina, and its movements generate the potential differences that we record in an EOG.
Several EOG-based HCI solutions are present in the literature. Common issues with current HCI systems are their size and lack of autonomy, their use of proprietary software, or their reliance on self-designed acquisition and processing devices. Regarding the acquisition system, the most common approach is a self-designed acquisition device [1,4,20,21,22]. In our view, this dramatically restricts the number of users who can adopt such a system. Other proposed systems make use of commercial amplifiers [23,24], which in turn rely on proprietary software and require powerful processing systems, mainly laptops. This also reduces the number of potential users and applications, since it increases the cost of the system and reduces its flexibility, portability, and autonomy. As far as signal processing is concerned, most systems use a laptop to carry out these calculations [1,20,21,22,24,25], but self-designed boards are also found [6,26]. Table 1 shows the characteristics of some solutions in the literature as a representation of the current state of the art. The goal of our work is to achieve results equivalent to the present state of the art using an open paradigm, demonstrating that it is possible to arrive at a solution with cheaper components that can be modified to build a tailored solution. As far as we know, this is the first time an open system has been presented in this scope.
In our system, the signal is acquired using the OpenBCI Cyton board (see the OpenBCI Cyton official website), a low-cost open software/hardware biosensing device, resulting in an open hardware/software-based system that is portable, with considerable autonomy and flexibility.
Once acquired, the EOG signal is processed on a Raspberry Pi (see the Raspberry Pi 3B+ official website), a single-board computer that runs a Linux-based distribution and is small, cheap, and lets us use non-proprietary software.
Features are then extracted from the acquired signal and classified with a machine learning algorithm. The feature extraction process aims to reduce the dimensionality of the input data without losing information relevant for classification [28], maximizing the separation between elements of different classes while minimizing it between elements of the same class [27]. Several models have been proposed for EOG feature extraction [29,30,31,32]. We employed a Support Vector Machine (SVM) to classify the data [33,34]; an SVM creates a boundary that splits the given data points into two groups.
The result of this process, in the context of the EOG signals considered in this article, is the classification of the subject's eye movement, which can then be used as an input command for further systems. This process and the tools used for it are explained in detail in Section 2. Section 3 shows the performance achieved by the system. Finally, in Section 4, we discuss the designed system, compare it with existing related work, and outline its limitations and future work.

2. Materials and Methods

2.1. Hardware-Software Integration

In the present study, the OpenBCI Cyton board was used for signal acquisition. This board contains a PIC32MX250F128B microcontroller, a Texas Instruments ADS1299 analog/digital converter, a signal amplifier, and an eight-channel neural interface. The device is distributed by OpenBCI (USA). Figure 1 depicts the layout of the system.
This device provides enough precision and a sufficient sampling rate (250 Hz) for our needs, it has an open-source environment (including a Python library to work with the boards (OpenBCI Python repository)), it has a large and active community, and it can be powered with a power bank, which is a light and mobile solution. Attached to the board, we have four wet electrodes connected to two channels of the board in differential mode. Differential mode computes the voltage difference between the two electrodes connected to the channel and does not need a reference electrode. The two channels correspond to the horizontal and vertical components of the signal.
The acquisition board is connected to a Raspberry Pi, a single-board computer developed by the Raspberry Pi Foundation in the United Kingdom. Although its firmware is not open source, it allows installing a Linux-based distribution, keeping the open paradigm in our system. In this case, we chose to install Raspbian, a Debian-based distribution. The hardware connection between the OpenBCI board and the Raspberry Pi is made through a wireless RFduino USB dongle. On the software side, we used an open Python library released by OpenBCI. To run this library on the Raspberry Pi, its source code was partially modified, and some third-party libraries had to be recompiled for the Raspberry Pi. We decided to power both the OpenBCI board and the Raspberry Pi via a USB connection to a power bank (20,000 mAh) to maximize the system's autonomy and mobility.
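For illustration, a minimal acquisition loop built on the OpenBCI Python ecosystem could look like the sketch below. This is not the code used in this study: the package name (pyOpenBCI), the serial port path, and the channel assignment are assumptions that depend on the library version installed.

```python
# Minimal acquisition sketch (not the authors' exact code): stream raw samples
# from an OpenBCI Cyton board into two buffers, one per differential channel.
# The serial port and the channel indices (0 = horizontal pair, 1 = vertical
# pair) are illustrative assumptions.
from pyOpenBCI import OpenBCICyton

horizontal, vertical = [], []

def on_sample(sample):
    # sample.channels_data holds the raw counts of the 8 Cyton channels
    horizontal.append(sample.channels_data[0])
    vertical.append(sample.channels_data[1])

board = OpenBCICyton(port='/dev/ttyUSB0', daisy=False)
board.start_stream(on_sample)  # blocks until the stream is stopped
```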
This hardware configuration offers all the characteristics we were looking for: it has enough computational power to carry out our calculations, it is small and light, it allows us to use free and open-source software, and it is economical. It should be noted that, although we used the OpenBCI board as the acquisition system, other solutions fit our needs, such as the BITalino biosignal acquisition board. This board offers an EOG acquisition module and an open environment, which includes a Python-based API for connection and signal acquisition on a Raspberry Pi.
It should be mentioned that the data presented in this article were processed on a conventional laptop instead of the Raspberry Pi, purely for the convenience of the experimenters. During the development of the research, several tests were carried out that showed no difference in the data or the results depending on the platform used.
We decided to use EOG over other eye movement detection techniques like Infrared Reflection Oculography (IROG) [35] or video-based systems [25], as the EOG technique does not require the placement of any device that could obstruct the subjects’ visual field. Four electrodes were placed in contact with the skin close to the eyes to record both the horizontal and the vertical components of the eye movements [36,37].

2.2. Experimental Paradigm

Ten healthy subjects between 24 and 35 years old participated in the study and gave their informed consent for inclusion. The signal acquisition was performed in two stages: training and online prediction. For both stages, we asked the subjects to perform four different movements: Up, Down, Left, and Right. Each movement started with the subject looking forward, then looking at one of these four points, and then looking back at the center. For the training stage, we acquired two blocks of 20 trials, 5 trials per movement. In these blocks, five "beep" tones were presented at the beginning of each block at 3 s intervals to indicate to the subject the interval in which they had to perform the requested action. After these initial tones, the desired action was announced via audio, and a "beep" tone was presented as a cue to perform the action. The system recorded during the 3 s after this tone, and then the next action to be performed was presented. For some of the subjects, these two training blocks were appended into a single data file. The schematic of the training paradigm (offline acquisition) is shown in Figure 2a.
The online classification was performed with a block of 40 trials, 10 per movement, on Subject 1. After this experiment, we reduced the number of trials per block to 20, 5 per movement, for the convenience of the subjects. These online blocks had the same characteristics as the training blocks, except that the five initial tones were not presented and the actions to be performed were separated by a 5 s interval to leave enough time for the prediction tasks. Furthermore, in these blocks, the system recorded only during the 3 s after the cue tone was presented. During this stage, we generated two auxiliary files: one with the acquired data and the other containing the action that the user should perform and the action predicted. We only considered predicted actions with a prediction probability higher than a certain threshold. For the first subject, we set this threshold to 0.7; after that experiment, we changed it to 0.5. Consequently, the auxiliary file corresponding to Subject 1 contains the predictions made using 0.7 as the prediction probability threshold. Figure 2b depicts the schematic of the online prediction paradigm.

2.3. Signal Processing

A second-order 20 Hz lowpass Butterworth filter [37] was used to remove the artifacts arising from electrode or head movements and illumination changes [19,27,38]. A 20 Hz lowpass filter was used because these artifacts appear at high frequencies [17], while the EOG signal information is contained mainly in the low frequencies [30]. The irregularities remaining in the signal after the lowpass filter were removed using a smoothing filter [30]. To apply these filters, we used the SciPy library, which is widely used and has a large community supporting it.
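A minimal SciPy sketch of this filtering step is shown below. The zero-phase application (filtfilt) and the exact function calls are illustrative assumptions; the text only specifies the filter order and cutoff frequency.

```python
# Second-order 20 Hz lowpass Butterworth filter, as described above, applied
# with SciPy. The 250 Hz sampling rate matches the Cyton board.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0                                       # sampling rate in Hz
b, a = butter(N=2, Wn=20.0, btype='low', fs=FS)

def lowpass(trial):
    """Apply the 20 Hz lowpass filter to one EOG trial (1-D array)."""
    return filtfilt(b, a, np.asarray(trial, dtype=float))
```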
The last step in pre-processing was to standardize the data. This is done to remove the baseline of EOG signals [27]. The standardization was done using the following formula:
$$X_t = \frac{x_t - \mu_i}{\sigma_i},$$
where $i$ denotes the sample being processed, $t$ indexes a single data point within that sample, $X_t$ is the resulting data point, $x_t$ is the data point value before standardization, and $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the whole sample. An example of the processed signal can be seen in Figure 3, which shows a single Down trial extracted from a classification block of Subject 5.
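In code, this standardization reduces to a couple of NumPy operations applied per trial and per channel (a minimal sketch, not the authors' script):

```python
# Per-trial standardization as defined above: subtract the trial mean and
# divide by the trial standard deviation.
import numpy as np

def standardize(trial):
    trial = np.asarray(trial, dtype=float)
    return (trial - trial.mean()) / trial.std()
```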
Figure 4 depicts the vertical and horizontal components for the four different eye movement tasks performed by Subject 5.

2.4. Feature Extraction

An essential step in our system's signal processing pipeline is feature extraction, which, for each sample, calculates specific characteristics that maximize the distance between elements of different classes and the similarity between those of the same class. We use a model based on the calculation of 3 features for each of the horizontal and vertical components of our signal, i.e., 6 features per sample in total (a short extraction sketch follows the list). The features are the following:
  • Min: The minimum amplitude value during the eye movement.
  • Max: The maximum amplitude value during the eye movement.
  • Median: The amplitude value during the eye movement that has 50% of the values above it and 50% below it.
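A compact sketch of this step is given below; the feature ordering is an arbitrary choice for illustration.

```python
# Feature extraction sketch: Min, Max and Median of each component, yielding
# the six features per trial described above. Inputs are the processed
# horizontal and vertical signals of one trial as 1-D arrays.
import numpy as np

def extract_features(horizontal, vertical):
    feats = []
    for component in (horizontal, vertical):
        component = np.asarray(component, dtype=float)
        feats += [component.min(), component.max(), np.median(component)]
    return np.array(feats)  # [h_min, h_max, h_med, v_min, v_max, v_med]
```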

2.5. Classification

Once the features of each sample have been calculated, we create a model using those feature values and their class labels. Although some biosignal-based HCIs use other machine learning techniques, such as artificial neural networks [29,36] or other statistical techniques [19], most of the HCIs in the literature use Support Vector Machines. We decided to use SVM because of its simplicity compared with other techniques, which results in a lower computational cost while maintaining excellent performance.
In this study, we used the SVM implementation from Scikit-Learn, a free and open-source machine learning Python library that is well established and widely used. The selected parameters for the model are a Radial Basis Function (RBF) kernel [39], which allows us to build a model from data points that are not linearly separable [40], and a One vs. One strategy [41], i.e., a classifier is created for each pair of movement classes. Finally, we performed 5-fold cross-validation [42], splitting the training dataset into 5 mutually exclusive subsets and creating 5 models, each one using one of these subsets to test the model and the other four to train it. Our model accuracy is calculated as the mean over these 5 models.
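The sketch below mirrors this configuration with scikit-learn. It is illustrative rather than the authors' script; hyperparameters not mentioned in the text (e.g., C and gamma) are left at the library defaults.

```python
# RBF-kernel SVM with a one-vs-one multiclass strategy, evaluated with 5-fold
# cross-validation, as described above. probability=True enables the class
# probabilities used later for the online prediction threshold.
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def train_and_score(X, y):
    """X: (n_trials, 6) feature matrix; y: labels 'Up'/'Down'/'Left'/'Right'."""
    clf = SVC(kernel='rbf', decision_function_shape='ovo', probability=True)
    scores = cross_val_score(clf, X, y, cv=5)  # model accuracy = mean of 5 folds
    clf.fit(X, y)                              # final model on the full training set
    return clf, scores.mean()
```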

3. Results

The acquired signal is processed to remove the components that contain no information, resulting in a cleaner signal. The data were acquired from 10 healthy subjects between 24 and 35 years old. The result of the signal processing can be seen in Figure 3 and Figure 4, which show single trials of a training block performed by Subject 5. As Figure 3 and Figure 4 show, the result of this step is as expected. For Subject 8, we found flat or poor-quality signals in the vertical and horizontal components, so we decided to stop the acquisition and discard these data. Some trials extracted from this discarded block can be seen in Figure 5, which shows no clear steps or any other patterns for the four movements. This situation is probably due to an electrode movement, detachment, or misplacement that could not be resolved during the experiment.
After artifact removal, feature extraction is performed to reduce the dimensionality of the input, producing characteristics that describe the signal without losing relevant information. As mentioned above, the features used were the Maximum, Minimum, and Median. It should be noted that Up and Down movements carry relevant information only in the vertical channel of our signal, just as Left and Right movements carry it only in the horizontal component. Figure 6 and Figure 7 present an example of this feature extraction process over two blocks of 20 trials each, corresponding to the training data of Subject 5, who reached 100% accuracy. Figure 8 and Figure 9 present an example of the same process over two blocks of 20 trials performed by Subject 6, who reached 78.7% accuracy. In these figures, we can see that Subject 5, with 100% accuracy, shows a clearer separation of the feature values than Subject 6, with 78.7% accuracy. Figure 8 and Figure 9 show some overlap in the feature values, which explains the lower classification accuracy achieved.
The last step in our pipeline is to build a model and perform an online classification of the subject's eye movements. As mentioned before, we build our model using 5-fold cross-validation. Table 2 shows the model accuracy (how well the model classifies the training data) as the mean of these five models for each subject. For the prediction accuracy (the accuracy on unseen data), we asked the subject to perform 20 movements per block (five of each movement), as explained in Section 2.2. We predicted those movements using the pre-built model and, finally, evaluated how accurate the prediction was.
As mentioned in Section 2.2, we only consider predicted actions with a prediction probability higher than 0.5. For Subject 1, the prediction probability threshold was set to 0.7 during the online acquisition, so the auxiliary file with the predictions corresponds to this threshold; after the experiment, we re-analyzed the online data using a 0.5 threshold.
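A minimal sketch of this thresholded online prediction, assuming a classifier trained with probability estimates enabled as in the earlier sketch, is shown below.

```python
# Accept a movement only when its class probability exceeds the threshold
# (0.5 in general, 0.7 for Subject 1); otherwise reject the trial and issue
# no command.
import numpy as np

def predict_with_threshold(clf, features, threshold=0.5):
    probs = clf.predict_proba(features.reshape(1, -1))[0]
    best = int(np.argmax(probs))
    if probs[best] >= threshold:
        return clf.classes_[best]
    return None  # prediction rejected
```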
We acquired a single online block for Subjects 1, 2, 5 and 7. For Subject 3, we acquired three online blocks with 50%, 80%, and 85% accuracy. For Subject 6, we acquired two online blocks with 80% and 85% accuracy. For Subject 10, we acquired three online blocks with 55%, 70%, and 80% accuracy. For these subjects, the online accuracy increased with each acquired block. The accuracies shown in Table 2 correspond to the online blocks with the highest accuracy for each subject. For Subject 4, the training and online data were of poor quality (66.7% accuracy for the model and 20% accuracy for the online prediction). Subject 9 had a good model accuracy (95%) but poor-quality signals during the online acquisition (50% and 20% accuracy). Post-experimental analysis revealed noisy and flat signals, with no clear pattern in the data acquired from Subjects 4 and 9, similar to the signal acquired from Subject 8 (please see Figure 5 for the signal from Subject 8). These distortions probably arose from electrode movement, detachment, or misplacement. Thus, we decided to discard the data from Subjects 4, 8 and 9.

4. Discussion

It should be noted that a completely fair comparison between our system and state-of-the-art systems would require extensive testing. Such tests would process the data acquired in this study with other processing pipelines, run our pipeline over the data acquired in other studies, and adapt our acquisition and processing modules for connection to further systems found in the literature. The results would give a full picture of the differences between our system and those already in place. Unfortunately, due to a lack of time and materials, these tests could not be carried out.
Concentration loss and tiredness are two of the biggest challenges for EOG-based HCIs. As reported by Barea et al. [43], the number of failures with this kind of system increases after a certain period of use. We observed this during the development of this study, where long periods of system use led to irritation and watery eyes. This could be a problem for subjects who use the system for a long time. In the aforementioned paper [43], the researchers deal with this problem by retraining the system.
Another challenge for our system is the presence of unintentional eye blinks. Eye blinks create artifacts in the EOG signal and are also accompanied by a slight eye movement [37]. Trials containing eye blinks can reduce model accuracy if they occur in the training stage, or lead to misclassified trials if they occur during online acquisition. Pander et al. [44] and Merino et al. [30] have proposed methods to detect spontaneous blinks so that such trials can be rejected, and Yathunanthan et al. [6] proposed a system in which eye blinks are automatically discarded.
Our system, like most of the available systems in the literature [19,20,21,29,30,38,43], uses a discrete approach, i.e., the user is not free to perform an action when desired, but the action must be performed at a specific time. This affects the agility of the system by increasing the time needed to perform an action. Barea et al. [38,43] and Arai et al. [25] have proposed systems with a continuous approach where the subject has no time restrictions to perform an action.
There are several ways to improve our system in future work. First, we could put in place a mechanism to detect and remove unintentional blinks. This would prevent us from discarding training blocks and could improve the training accuracy in the cases in which these unintentional blinks occur. In some cases, continuous online classification would be a considerable advantage, so it would be interesting to add the strategies needed to perform this type of classification. Finally, by combining our system with further communication or movement assistance systems, we could evaluate its performance in a complete HCI loop.

5. Conclusions

We have presented an EOG signal classification system that achieves a 90% mean accuracy in online classification. These results are comparable to those of other state-of-the-art systems. Our system is built using only open components, showing that it is possible to avoid expensive and proprietary tools in this scope. As intended, the system is small, easy to carry, and has complete autonomy. This is achieved using the OpenBCI board and a Raspberry Pi as hardware, connected to a power bank as the power source.
Because it relies on open hardware and software technologies, the system is easy to replicate and can be improved or modified by anyone with the required skills to build a tailored solution. The use of open technologies also keeps the platform cheap.
Finally, the resulting system is easy to connect to subsequent communication or movement assistance systems.

Author Contributions

Conceptualization, J.M.-C., N.B., and U.C.; methodology, J.M.-C., M.K.A., A.J.-G. and U.C.; software, J.M.-C.; validation, A.J.-G.; formal analysis, J.M.-C.; data curation, J.M.-C. and S.W.; writing—original draft preparation, J.M.-C., S.W., M.K.A. and U.C.; writing—review and editing, J.M.-C., A.T., N.B., and U.C.; supervision, U.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Deutsche Forschungsgemeinschaft (DFG) DFG BI 195/77-1, BMBF (German Ministry of Education and Research) 16SV7701 CoMiCon, LUMINOUS-H2020-FETOPEN-2014-2015-RIA (686764), and Wyss Center for Bio and Neuroengineering, Geneva.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hossain, Z.; Shuvo, M.M.H.; Sarker, P. Hardware and Software Implementation of Real Time Electrooculogram (EOG) Acquisition System to Control Computer Cursor with Eyeball Movement. In Proceedings of the 4th International Conference on Advances in Electrical Engineering (ICAEE), Dhaka, Bangladesh, 28–30 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 132–137. [Google Scholar] [CrossRef]
  2. Usakli, A.B.; Gurkan, S. Design of a Novel Efficient Human–Computer Interface: An Electrooculagram Based Virtual Keyboard. IEEE Trans. Instrum. Meas. 2010, 59, 2099–2108. [Google Scholar] [CrossRef]
  3. Argentim, L.M.; Castro, M.C.F.; Tomaz, P.A. Human Interface for a Neuroprothesis Remotely Control. In Proceedings of the 11th International Joint Conference on Biomedical Engineering Systems and Technologies, Funchal, Madeira, Portugal, 19–21 January 2018; SCITEPRESS—Science and Technology Publications: Setubal, Portugal, 2018; pp. 247–253. [Google Scholar] [CrossRef]
  4. Rokonuzzaman, S.M.; Ferdous, S.M.; Tuhin, R.A.; Arman, S.I.; Manzar, T.; Hasan, M.N. Design of an Autonomous Mobile Wheelchair for Disabled Using Electrooculogram (EOG) Signals. In Mechatronics; Jablonski, R., Brezina, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 41–53. [Google Scholar]
  5. Barea, R.; Boquete, L.; Bergasa, L.M.; López, E.; Mazo, M. Electro-Oculographic Guidance of a Wheelchair Using Eye Movements Codification. Int. J. Robot. Res. 2003, 22, 641–652. [Google Scholar] [CrossRef]
  6. Yathunanthan, S.; Chandrasena, L.U.R.; Umakanthan, A.; Vasuki, V.; Munasinghe, S.R. Controlling a Wheelchair by Use of EOG Signal. In Proceedings of the 4th International Conference on Information and Automation for Sustainability, Colombo, Sri Lanka, 12–14 December 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 283–288. [Google Scholar] [CrossRef]
  7. Mazo, M.; Rodríguez, F.J.; Lázaro, J.L.; Ureña, J.; García, J.C.; Santiso, E.; Revenga, P.A. Electronic Control of a Wheelchair Guided by Voice Commands. Control. Eng. Pract. 1995, 3, 665–674. [Google Scholar] [CrossRef]
  8. Chaudhary, U.; Mrachacz-Kersting, N.; Birbaumer, N. Neuropsychological and Neurophysiological Aspects of Brain-computer-interface (BCI)-control in Paralysis. J. Physiol. 2020, JP278775. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Chaudhary, U.; Birbaumer, N.; Curado, M.R. Brain-Machine Interface (BMI) in Paralysis. Ann. Phys. Rehabil. Med. 2015, 58, 9–13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Chaudhary, U.; Birbaumer, N.; Ramos-Murguialday, A. Brain–Computer Interfaces in the Completely Locked-in State and Chronic Stroke. In Progress in Brain Research; Elsevier: Amsterdam, The Netherlands, 2016; Volume 228, pp. 131–161. [Google Scholar] [CrossRef]
  11. Chaudhary, U.; Birbaumer, N.; Ramos-Murguialday, A. Brain–Computer Interfaces for Communication and Rehabilitation. Nat. Rev. Neurol. 2016, 12, 513–525. [Google Scholar] [CrossRef] [Green Version]
  12. Rosen, J.; Brand, M.; Fuchs, M.B.; Arcan, M. A Myosignal-Based Powered Exoskeleton System. IEEE Trans. Syst. Man Cybern. Part. A Syst. Hum. 2001, 31, 210–222. [Google Scholar] [CrossRef] [Green Version]
  13. Ferreira, A.; Celeste, W.C.; Cheein, F.A.; Bastos-Filho, T.F.; Sarcinelli-Filho, M.; Carelli, R. Human-Machine Interfaces Based on EMG and EEG Applied to Robotic Systems. J. NeuroEng. Rehabil. 2008, 5, 10. [Google Scholar] [CrossRef] [Green Version]
  14. Chaudhary, U.; Xia, B.; Silvoni, S.; Cohen, L.G.; Birbaumer, N. Brain–Computer Interface–Based Communication in the Completely Locked-In State. PLOS Biol. 2017, 15, e1002593. [Google Scholar] [CrossRef]
  15. Khalili Ardali, M.; Rana, A.; Purmohammad, M.; Birbaumer, N.; Chaudhary, U. Semantic and BCI-Performance in Completely Paralyzed Patients: Possibility of Language Attrition in Completely Locked in Syndrome. Brain Lang. 2019, 194, 93–97. [Google Scholar] [CrossRef]
  16. Gallegos-Ayala, G.; Furdea, A.; Takano, K.; Ruf, C.A.; Flor, H.; Birbaumer, N. Brain Communication in a Completely Locked-in Patient Using Bedside near-Infrared Spectroscopy. Neurology 2014, 82, 1930–1932. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Bharadwaj, S.; Kumari, B.; Tech, M. Electrooculography: Analysis on Device Control by Signal Processing. Int. J. Adv. Res. Comput. Sci. 2017, 8, 787–790. [Google Scholar]
  18. Heide, W.; Koenig, E.; Trillenberg, P.; Kömpf, D.; Zee, D.S. Electrooculography: Technical Standards and Applications. Electroencephalogr. Clin. Neurophysiol. Suppl. 1999, 52, 223–240. [Google Scholar] [PubMed]
  19. Lv, Z.; Wang, Y.; Zhang, C.; Gao, X.; Wu, X. An ICA-Based Spatial Filtering Approach to Saccadic EOG Signal Recognition. Biomed. Signal. Process. Control. 2018, 43, 9–17. [Google Scholar] [CrossRef]
  20. Wu, S.L.; Liao, L.D.; Lu, S.W.; Jiang, W.L.; Chen, S.A.; Lin, C.T. Controlling a Human–Computer Interface System with a Novel Classification Method That Uses Electrooculography Signals. IEEE Trans. Biomed. Eng. 2013, 60, 2133–2141. [Google Scholar] [CrossRef]
  21. Huang, Q.; He, S.; Wang, Q.; Gu, Z.; Peng, N.; Li, K.; Zhang, Y.; Shao, M.; Li, Y. An EOG-Based Human–Machine Interface for Wheelchair Control. IEEE Trans. Biomed. Eng. 2018, 65, 2023–2032. [Google Scholar] [CrossRef]
  22. Larson, A.; Herrera, J.; George, K.; Matthews, A. Electrooculography Based Electronic Communication Device for Individuals with ALS. In Proceedings of the IEEE Sensors Applications Symposium (SAS), Glassboro, NJ, USA, 13–15 March 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–5. [Google Scholar] [CrossRef]
  23. Iáñez, E.; Azorin, J.M.; Perez-Vidal, C. Using Eye Movement to Control a Computer: A Design for a Lightweight Electro-Oculogram Electrode Array and Computer Interface. PLoS ONE 2013, 8, e67099. [Google Scholar] [CrossRef] [Green Version]
  24. Kherlopian, A.; Sajda, P.; Gerrein, J.; Yue, M.; Kim, K.; Kim, J.W.; Sukumaran, M. Electrooculogram Based System for Computer Control Using a Multiple Feature Classification Model. In Proceedings of the 28th IEEE EMBS Annual International Conference, New York, NY, USA, 30 August–3 September 2006. [Google Scholar]
  25. Arai, K.; Mardiyanto, R. A Prototype of ElectricWheelchair Controlled by Eye-Only for Paralyzed User. J. Robot. Mechatron. 2011, 23, 66–74. [Google Scholar] [CrossRef]
  26. Heo, J.; Yoon, H.; Park, K. A Novel Wearable Forehead EOG Measurement System for Human Computer Interfaces. Sensors 2017, 17, 1485. [Google Scholar] [CrossRef] [Green Version]
  27. Qi, L.J.; Alias, N. Comparison of ANN and SVM for Classification of Eye Movements in EOG Signals. J. Phys. Conf. Ser. 2018, 971, 012012. [Google Scholar] [CrossRef]
  28. Guo, X.; Pei, W.; Wang, Y.; Chen, Y.; Zhang, H.; Wu, X.; Yang, X.; Chen, H.; Liu, Y.; Liu, R. A Human-Machine Interface Based on Single Channel EOG and Patchable Sensor. Biomed. Signal. Process. Control. 2016, 30, 98–105. [Google Scholar] [CrossRef]
  29. Erkaymaz, H.; Ozer, M.; Orak, İ.M. Detection of Directional Eye Movements Based on the Electrooculogram Signals through an Artificial Neural Network. Chaos Solitons Fractals 2015, 77, 225–229. [Google Scholar] [CrossRef]
  30. Merino, M.; Rivera, O.; Gomez, I.; Molina, A.; Dorronzoro, E. A Method of EOG Signal Processing to Detect the Direction of Eye Movements. In Proceedings of the First International Conference on Sensor Device Technologies and Applications, Venice, Italy, 18–25 July 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 100–105. [Google Scholar] [CrossRef] [Green Version]
  31. Aungsakul, S.; Phinyomark, A.; Phukpattaranont, P.; Limsakul, C. Evaluating Feature Extraction Methods of Electrooculography (EOG) Signal for Human-Computer Interface. Procedia Eng. 2012, 32, 246–252. [Google Scholar] [CrossRef] [Green Version]
  32. Phukpattaranont, P.; Aungsakul, S.; Phinyomark, A.; Limsakul, C. Efficient Feature for Classification of Eye Movements Using Electrooculography Signals. Therm. Sci. 2016, 20, 563–572. [Google Scholar] [CrossRef]
  33. Boser, B.; Guyon, I.; Vapnik, V. A Training Algorithm for Optimal Margin Classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 27–29 July 1992; Association for Computing Machinery: New York, NY, USA, 1992; pp. 144–152. [Google Scholar]
  34. Vapnik, V.; Golowich, S.E.; Smola, A.J. Support Vector Method for Function Approximation, Regression Estimation and Signal Processing. In Advances in Neural Information Processing Systems; Mozer, M.C., Jordan, M.I., Petsche, T., Eds.; MIT Press: Cambridge, MA, USA, 1997. [Google Scholar]
  35. Hess, C.W.; Muri, R.; Meienberg, O. Recording of Horizontal Saccadic Eye Movements: Methodological Comparison Between Electro-Oculography and Infrared Reflection Oculography. Neuro Ophthalmol. 1986, 6, 189–197. [Google Scholar] [CrossRef]
  36. Barea, R.; Boquete, L.; Ortega, S.; López, E.; Rodríguez-Ascariz, J.M. EOG-Based Eye Movements Codification for Human Computer Interaction. Expert Syst. Appl. 2012, 39, 2677–2683. [Google Scholar] [CrossRef]
  37. Chang, W.D. Electrooculograms for Human–Computer Interaction: A Review. Sensors 2019, 19, 2690. [Google Scholar] [CrossRef] [Green Version]
  38. Barea, R.; Boquete, L.; Mazo, M.; Lopez, E. System for Assisted Mobility Using Eye Movements Based on Electrooculography. IEEE Trans. Neural Syst. Rehabil. Eng. 2002, 10, 209–218. [Google Scholar] [CrossRef]
  39. Amari, S.; Wu, S. Improving Support Vector Machine Classifiers by Modifying Kernel Functions. Neural Netw. 1999, 12, 783–789. [Google Scholar] [CrossRef]
  40. Ben-Hur, A.; Weston, J. A User’s Guide to Support Vector Machines. In Data Mining Techniques for the Life Sciences; Carugo, O., Eisenhaber, F., Eds.; Humana Press: Totowa, NJ, USA, 2010; Volume 609, pp. 223–239. [Google Scholar] [CrossRef] [Green Version]
  41. Hsu, C.W.; Lin, C.J. A Comparison of Methods for Multiclass Support Vector Machines. IEEE Trans. Neural Netw. 2002, 13, 415–425. [Google Scholar] [CrossRef] [Green Version]
  42. Kohavi, R. A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection. Int. Jt. Conf. Artif. Intell. 1995, 14, 8. [Google Scholar]
  43. Barea, R.; Boquete, L.; Rodriguez-Ascariz, J.M.; Ortega, S.; López, E. Sensory System for Implementing a Human-Computer Interface Based on Electrooculography. Sensors 2010, 11, 310–328. [Google Scholar] [CrossRef] [PubMed]
  44. Pander, T.; Przybyła, T.; Czabanski, R. An Application of Detection Function for the Eye Blinking Detection. In Proceedings of the Conference on Human System Interactions, Krakow, Poland, 25–27 May 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 287–291. [Google Scholar] [CrossRef]
Sample Availability: Corresponding code and data are available at https://github.com/JayroMartinez/EOG-Classification.
Figure 1. Block diagram with system connections.
Figure 2. Acquisition paradigm. (a) Offline acquisition. (b) Online acquisition.
Figure 3. Down movement example taken from Subject 5. The x-axis depicts time (in seconds), and the y-axis represents the signal amplitude (in millivolts). (a) Unfiltered vertical component. (b) Filtered vertical component. (c) Unfiltered horizontal component. (d) Filtered horizontal component.
Figure 4. Processed signal examples taken from Subject 5. The x-axis depicts time (in seconds), and the y-axis represents the signal amplitude (in millivolts). (a) Vertical component for Up movement. (b) Vertical component for Down movement. (c) Vertical component for Left movement. (d) Vertical component for Right movement. (e) Horizontal component for Up movement. (f) Horizontal component for Down movement. (g) Horizontal component for Left movement. (h) Horizontal component for Right movement.
Figure 5. Example trials taken from Subject 8. The x-axis depicts time (each trial is 3 s), and the y-axis represents the signal amplitude (in millivolts). (a) Vertical component. (b) Horizontal component.
Figure 6. Values after feature extraction for Up, Down, Left, and Right movements performed by Subject 5 (100% model accuracy). Both the x-axis and the y-axis depict signal values (in millivolts). (a) Horizontal Min vs. Max. (b) Horizontal Max vs. Median. (c) Horizontal Median vs. Min. (d) Vertical Min vs. Max. (e) Vertical Max vs. Median. (f) Vertical Median vs. Min.
Figure 7. Values after feature extraction for Up, Down, Left, and Right movements performed by Subject 5 (100% model accuracy). The x-axis depicts movement class, and the y-axis depicts signal amplitude (in millivolts). (a) Horizontal Min. (b) Horizontal Max. (c) Horizontal Median. (d) Vertical Min. (e) Vertical Max. (f) Vertical Median.
Figure 8. Values after feature extraction for Up, Down, Left, and Right movements performed by Subject 6 (78.7% model accuracy). Both the x-axis and the y-axis depict signal values (in millivolts). (a) Horizontal Min vs. Max. (b) Horizontal Max vs. Median. (c) Horizontal Median vs. Min. (d) Vertical Min vs. Max. (e) Vertical Max vs. Median. (f) Vertical Median vs. Min.
Figure 9. Values after feature extraction for Up, Down, Left, and Right movements performed by Subject 6 (78.7% model accuracy). The x-axis depicts movement class, and the y-axis depicts signal amplitude (in millivolts). (a) Horizontal Min. (b) Horizontal Max. (c) Horizontal Median. (d) Vertical Min. (e) Vertical Max. (f) Vertical Median.
Table 1. Comparison of results between different studies.

| Study | Movements | Acquisition | Processing | Method | Accuracy |
|---|---|---|---|---|---|
| Qi et al. [27] | Up, Down, Left, Right | Commercial | - | Offline | 70% |
| Guo et al. [28] | Up, Down, Blink | Commercial | Laptop | Online | 84% |
| Kherlopian et al. [24] | Left, Right, Center | Commercial | Laptop | Online | 80% |
| Wu et al. [20] | Up, Down, Left, Right, Up-Right, Up-Left, Down-Right, Down-Left | Self-designed | Laptop | Online | 88.59% |
| Heo et al. [26] | Up, Down, Left, Right, Blink | Self-designed | Self-designed + Laptop | Online | 91.25% |
| Heo et al. [26] | Double Blink | Self-designed | Self-designed + Laptop | Online | 95.12% |
| Erkaymaz et al. [29] | Up, Down, Left, Right, Blink, Tic | Commercial | Laptop | Offline | 93.82% |
| Merino et al. [30] | Up, Down, Left, Right | Commercial | Laptop | Online | 94.11% |
| Huang et al. [21] | Blink | Self-designed | Laptop | Online | 96.7% |
| Lv et al. [19] | Up, Down, Left, Right | Commercial | Laptop | Offline | 99% |
| Yathunanthan et al. [6] | Up, Down, Left, Right | Self-designed | Self-designed | Online | 99% |
Table 2. Model and prediction accuracies.

| Subject | Model Mean Accuracy | Online Accuracy |
|---|---|---|
| Subject 1 | 100% | 90% |
| Subject 2 | 100% | 95% |
| Subject 3 | 92.5% | 85% |
| Subject 5 | 100% | 100% |
| Subject 6 | 78.7% | 85% |
| Subject 7 | 97.5% | 95% |
| Subject 10 | 90.8% | 80% |
| MEAN | 94.21% | 90% |
