Article

Automatically Identified EEG Signals of Movement Intention Based on CNN Network (End-To-End)

1. Department of Computer Engineering and Information Technology, Amirkabir University of Technology, Tehran 15875-4413, Iran
2. Department of Psychology, Roudehen Branch, Islamic Azad University, Roudehen 39731-88981, Iran
3. Biomedical Engineering Department, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz 51666-16471, Iran
4. Cyberspace Research Institute, Shahid Beheshti University, Tehran 19839-69411, Iran
5. College of Engineering, Design and Physical Sciences, Brunel University London, Uxbridge UB8 3PH, UK
6. Department of Gastroenterology, Imam Khomeini Hospital, Urmia University of Medical Sciences, Urmia 57167-63111, Iran
* Author to whom correspondence should be addressed.
Electronics 2022, 11(20), 3297; https://doi.org/10.3390/electronics11203297
Submission received: 4 September 2022 / Revised: 5 October 2022 / Accepted: 8 October 2022 / Published: 13 October 2022
(This article belongs to the Section Bioelectronics)

Abstract

Movement-based brain–computer interfaces (BCIs) rely significantly on the automatic identification of movement intent. They also allow patients with motor disorders to communicate with external devices. The extraction and selection of discriminative features, which often increases computational complexity, is one of the main challenges in automatically recognizing movement intention. This research introduces a novel method for automatically categorizing two-class and three-class movement-intention situations using EEG data. In the suggested technique, the raw EEG signal is applied directly to a convolutional neural network (CNN) without feature extraction or selection, a step that previous research has typically handled with complex, hand-engineered pipelines. The proposed network design comprises ten convolutional layers followed by two fully connected layers. Owing to its high accuracy, the suggested approach could be employed in BCI applications.

1. Introduction

The brain–computer interface (BCI) enables direct interaction between the human brain and external devices. In particular, motor-imagery (MI) BCIs that depend on electroencephalography (EEG) signals enable the subject to accomplish various activities without physical motion. In recent years, this method's contribution to the rehabilitation of disabled people has made it an important interdisciplinary topic. MI-EEG BCIs analyze and interpret imagined-task signals as instructions for controlling peripherals, wheelchairs, and prostheses [1,2].
BCIs are typically driven by evoked-activity paradigms, such as steady-state visually evoked potentials (SSVEP) [3,4] and event-related potentials (ERP) [5], and by motor-related paradigms, such as motor imagery [6]. SSVEP and ERP rely on visual and attention processes and always require an external trigger to generate a measurable response. On the other hand, movement neural correlates allow the intuitive control of BCIs by generating movement intentions at will, without the need for external stimuli [7,8]. Typically, power changes in several EEG frequency bands are employed to determine movement intent. However, this disregards the movement-related information available in the rest of the EEG spectrum and in the temporal domain, since the EEG signal is fundamentally non-stationary. Regularly used neural movement correlates, notably event-related (de)synchronization (ERD/S) and the motor-related cortical potential (MRCP), are often employed to determine voluntary movement intention, execution, and imagery from EEG [9]. ERD and ERS, which correspond to a reduction and a rise in μ and β band power, respectively, are commonly employed to determine movement intention and imagery [10,11]. As a result, several characteristics for detecting movement-related tasks are derived from the EEG spectral domain [7]. The most popular technique for evaluating ERD is the assessment of power spectral density (PSD) and time–frequency content [11,12,13]. MRCP is a slow negative cortical potential identified at low frequencies [11], approximately 2 seconds before voluntary human movement [14]. Compared with spontaneous EEG activity (100 μV), MRCP has a minimal amplitude (8–10 μV), which makes it difficult to detect [11]. Averaging multiple EEG trials of voluntary movements is a typical approach to identifying MRCP [11]. Numerous computational methods relying on EEG data have been developed to evaluate and automatically recognize movement intention, as detailed in the following section.
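For context, ERD is usually quantified as a relative drop in band power between a baseline window and a movement window. The following sketch illustrates this standard computation with Welch's PSD estimate; the sampling rate, band limits, and function names here are illustrative assumptions, not taken from the cited studies.

```python
import numpy as np
from scipy.signal import welch

FS = 1024  # Hz; sampling rate of the recordings used later in this paper

def band_power(eeg, band, fs=FS):
    """Mean PSD within a frequency band, estimated with Welch's method."""
    f, psd = welch(eeg, fs=fs, nperseg=fs)   # 1 s segments
    mask = (f >= band[0]) & (f <= band[1])
    return psd[mask].mean()

def erd_percent(active, baseline, band=(8, 13)):
    """Classical ERD index: percentage band-power drop during movement
    (active) relative to rest (baseline); positive values indicate ERD."""
    p_a, p_b = band_power(active, band), band_power(baseline, band)
    return 100.0 * (p_b - p_a) / p_b

# Synthetic demo with two 6 s epochs
rest, move = np.random.randn(6 * FS), np.random.randn(6 * FS)
print(erd_percent(move, rest))
```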
Yom-Tov et al. [15] automatically identified movement intention in five healthy volunteers. They used nine channels of EEG signals for the experiment and recorded the signal during a finger-tapping movement. They used the MRP component to classify movement intention and a 10 Hz low-pass filter in the preprocessing stage. The support vector machine (SVM) and k-nearest neighbor (KNN) were used for the classification. Haw et al. [16] automatically identified movement intention in five healthy volunteers from a single-channel EEG signal. They used the BP component to classify the movement intention, with correlation and error thresholds for two-class classification. The accuracy of their method was reported at 70%. One of the limitations of their research was the variable performance of the proposed method across subjects; one of its benefits was the use of a single-channel EEG signal. Bai et al. [17] tested the automatic detection of movement intention on 12 subjects. They used 122 channels of EEG signals to record the signal, with finger-tapping movements, and used the MRP and ERD components to classify the movement intention. They used a third-order Butterworth low-pass filter in the preprocessing stage. Their two-class classification accuracy based on artificial neural networks (ANN) was 75%. The limitations of their method were the use of 122 channels of EEG signals, which can be uncomfortable for the patient and increases power consumption in prosthesis design. Boye et al. [18] used only one volunteer to automatically identify movement intention. They recorded the EEG signal during finger-tapping movements and used the MRP component to classify movement intention. They used a low-pass filter and the principal component analysis (PCA) algorithm in the preprocessing stage. The SVM and KNN were used for the classification. The sensitivity of their two-class classification was reported to be 96%. One of the limitations of the study was that the experiment involved a single subject. Kato et al. [19] automatically identified movement intention in seven healthy volunteers from a single-channel EEG signal. The movement in their experiment was based on tapping. They used the CNV component to classify the movement intention and the SVM for classification. Lew et al. [20] used eight healthy volunteers and two volunteers with a history of stroke to identify movement intention automatically. They used 64 channels of EEG signals to record the signal, with arm movements. They used an IIR filter with a cutoff frequency of 0.1 Hz in the preprocessing stage, and the KNN for classification. Overall, the performance of their method for separating movement intention was reported to be 76%; it was 82% for healthy subjects and 64% for the stroke patients. Niazi et al. [21] used 16 healthy subjects in their experiment to automatically identify movement intention. They used ten channels of EEG signals to record the signal, with leg movements, and used the MRCP and BP components to classify the movement intention.
The Neyman–Pearson lemma (NPL) was used for the classification. In another study, Niazi et al. [22] used twenty healthy people and five stroke patients to automatically identify movement intention. Their study was based on limb movement, and they also used ten channels of EEG signals to record the signal. The researchers used the MRP component to classify the movement intention, with a band-pass filter in the range of 0.05 to 10 Hz for preprocessing. Ahmadian et al. [23] used three healthy subjects for their experiments. They used 128 channels of EEG signals, recorded during finger-tapping movements, for automatically identifying movement intention, and used the BP component to classify it. They used an ideal filter (0.5 to 70 Hz) in the preprocessing stage and the independent component analysis (ICA) approach to minimize the dimension of the feature vector. The time required to separate the blind sources in their algorithm was about 51 seconds. Their research limitations included the high number of EEG channels and the low number of samples. Jochumsen et al. [24] used 12 healthy subjects in their experiment to automatically identify movement intention. They used ten channels of EEG signals to record the signal during leg movements and an ideal filter (0.5 to 10 Hz) in the preprocessing stage. They used the common-spatial-patterns (CSP) algorithm to reduce the feature-vector dimension and the SVM for classification. Overall, the performance of their method for separating movement intention was reported to be 80%. Jiang et al. [25] used nine healthy subjects for their experiments. They used nine channels of EEG signals, recorded during leg movements, and the MRCP component to classify the movement intention, with LSF to increase the SNR. The stated accuracy of their two-class categorization was 76%. Xu et al. [26] used nine healthy subjects in their experiment. They used nine channels of EEG signals, recorded during foot movements, and the MRCP component. They used a band-pass filter in the 0.5 to 3 Hz range to preprocess the data. Their two-class classification accuracy based on KNN was reported to be 75%. Wairagkar et al. [27] used nine healthy subjects for their experiments. In this study, the autocorrelation function was used, and the researchers employed the ERD component to classify the movement intention, with KNN for classification. The sensitivity of their two-class classification was reported to be 78%. The main purpose of automated movement-intention-identification systems is to detect the classes (resting state, right- or left-hand movement, left- or right-foot movement) with high accuracy for BCI applications, such as intelligent prosthetics that assist patients after amputation. Recent studies show that the accuracy with which movement intention is automatically identified remains below 80%.
Previous studies have also shown that most movement-intention algorithms require more than one EEG channel, which can be uncomfortable for the patient and is a problem in prosthetic design. Selecting discriminative features for the several classes is the most challenging step in automatically detecting movement intention. The majority of current research begins by extracting statistical characteristics; the most discriminating features are then chosen manually or via commonly used, time-consuming, and complicated feature-selection algorithms. Additionally, features that are ideal for one scenario may not be optimal for another. Thus, developing an algorithm that learns the relevant characteristics for every scenario is vital, and this remains the main advantage of the present study. The contributions and novelty of this article are as follows:
  • An automatic algorithm that can extract the effective features from the signal without a feature-selection/extraction block diagram and classify them into several classes with high accuracy.
  • An optimal architecture that is resistant to a wide range of different SNRs.
  • Experimental data in two scenarios of two classes and three classes.
  • A higher level of precision, accuracy, sensitivity, and specificity compared to previous studies for the automatic classification of movement intention.
In the proposed algorithm, active electrodes are determined after preprocessing the data. Next, a deep convolutional network is used to train and classify 2-class and 3-class scenarios of movement intention. The proposed approach might be seen as an end-to-end classifier in which no feature selection/extraction methodology is required, and a deep convolutional neural network automatically acquires the proper features of every class.
The following sections of the paper are arranged as follows. Section 2 provides the CNN networks and the associated mathematical background. Section 3 presents the suggested approach. Section 4 contains the simulation outcomes and a comparison of the suggested approach to those in previous studies. Section 5 contains the conclusion.

2. Materials and Methods

In this part, the mathematical background related to deep convolutional neural networks is described.

Deep Convolutional Neural Network

CNNs are a superior alternative to traditional neural networks for classification tasks in machine vision [28]. A CNN comprises two learning stages: the feed-forward and backpropagation (BP) phases [29]. It consists of three layer types: convolutional, pooling, and fully connected (FC) [29,30,31]. Feature maps are the output of the convolution layer. The max-pooling layer, which takes the highest values from each feature map, was utilized in this investigation. The drop-out technique prevents overfitting: neurons are randomly removed from the network at every training step, resulting in a thinned network. The network's data are normalized using the batch normalization (BN) layer. The BN transformation is as follows:
$$\hat{y}^{(l-1)} = \frac{y^{*(l-1)} - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}}, \qquad z^{*(l)} = \gamma^{(l)}\,\hat{y}^{(l-1)} + \beta^{(l)} \tag{1}$$
where $y^{*(l-1)}$ is the input vector to the BN layer, $z^{*(l)}$ indicates the output response associated with a neuron in layer $l$, $\mu_B = E[y^{*(l-1)}]$, $\sigma_B^2 = \mathrm{var}[y^{*(l-1)}]$, $\varepsilon$ denotes a small constant for numerical stability, and $\gamma^{(l)}$ and $\beta^{(l)}$ are the scale and shift parameters, respectively, which are determined during learning. An activation function is applied after every layer. In this investigation, the ReLU and Softmax activation functions were utilized. ReLU is used as the activation function in the convolutional layers and adds nonlinearity and sparsity to the network structure, as described in (2).
$$R(d) = \begin{cases} d & \text{if } d > 0 \\ 0 & \text{otherwise} \end{cases} \tag{2}$$
A Softmax activation function determines the distribution over the output classes. The Softmax function is therefore implemented in the final FC layer and is described as follows:
$$\sigma(\delta)_i = \frac{e^{\delta_i}}{\sum_{j=1}^{k} e^{\delta_j}} \quad \text{for } i = 1, \ldots, k \text{ and } \delta = (\delta_1, \ldots, \delta_k) \in \mathbb{R}^k \tag{3}$$
where $\delta$ denotes the input vector and $\sigma(\delta)_i$ represents the output values, which range from 0 to 1 and sum to 1 [29,30,31].
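For concreteness, Equations (1)–(3) can be written out directly. The following NumPy sketch mirrors the math; it is illustrative and not the implementation used in this paper.

```python
import numpy as np

def batch_norm(y, gamma, beta, eps=1e-5):
    """Batch normalization, Equation (1): standardize over the mini-batch,
    then apply the learned scale (gamma) and shift (beta)."""
    mu = y.mean(axis=0)                      # mu_B
    var = y.var(axis=0)                      # sigma_B^2
    y_hat = (y - mu) / np.sqrt(var + eps)
    return gamma * y_hat + beta

def relu(d):
    """ReLU activation, Equation (2)."""
    return np.where(d > 0, d, 0.0)

def softmax(delta):
    """Softmax, Equation (3); outputs lie in (0, 1) and sum to 1."""
    e = np.exp(delta - delta.max())          # shift for numerical stability
    return e / e.sum()
```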

3. Suggested Method

This part describes the suggested automated identification of movement intention relying on CNN. The block diagram of the suggested method is illustrated in Figure 1.

3.1. EEG Collection

Fourteen university students (eight women and six men, 22–30 years of age) participated in this experiment. Ethics approval (license number IR.TBZ-REC.1397.3) was issued for experiments at the Biomedical Engineering Department's BCI Laboratory at the Faculty of Electrical and Computer Engineering, University of Tabriz. The international 10–20 system was used, with the 21-channel electrode cap's data digitized at a rate of 1024 Hz and all channels referenced to the Fpz and Fcz reference electrodes. The experiment comprised 3 classes, resting, right-hand tapping, and left-hand tapping, recorded over 40 repetitions. Each state lasted 6 seconds, providing 6 × 1024 = 6144 sampling points per repetition; 35 repetitions per state were used in the analysis. The participants had no previous experience with EEG recording or BCI. Twelve participants were right-handed; the first and sixth were left-handed. Figure 2 shows the EEG recording setup for one of the participants during the experiment. Figure 3 shows a sample EEG signal collected from the F3 electrode in one experiment for the resting state and left- and right-hand tapping; some distinction between the three states is visible in the figure, but visual inspection alone is not sufficient to detect the three stages reliably. Based on the collected data, we considered two scenarios in this research. The first scenario includes two classes, left- and right-finger tapping, and the second scenario covers left-finger tapping, rest, and right-finger tapping.

3.2. Preprocessing

The data for each class (resting state, left-hand tapping, right-hand tapping) comprised 6 seconds (6144 sampling points) over 35 repetitions (35 × 6144 = 215,040 sampling points) per subject. Six pairs of electrodes were considered (F3-C3, Fz-Cz, F4-C4, C3-P3, Cz-Pz, C4-P4), so each pair provided 2 × 215,040 sampling points of data. In order to avoid overfitting, the data for every electrode were then separated into windows of 4135 sampling points using an overlap approach, yielding 1020 samples per class. Since each pair comprises two electrodes, each class's sample matrix had dimensions (2 × 4135) × 1020. For the two-class scenario, which combines Class 1 (right-hand movement) and Class 3 (left-hand movement), the dimensions were (2 × 4135) × 2040 overall; in the three-class scenario, the dimensions of each class were (2 × 4135) × 1020. The signals were also normalized using a min–max normalizer between zero and one; subsequently, a Notch filter was applied to eliminate the 50 Hz power-line frequency. Following [27], we restricted the simulation to the F3-C3, Fz-Cz, F4-C4, C3-P3, Cz-Pz, and C4-P4 channels to limit the computational cost. Figure 4 shows this operation.
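A minimal sketch of this preprocessing chain is given below, assuming SciPy for the 50 Hz notch filter. The notch quality factor and the windowing helper are illustrative assumptions; the paper does not specify the exact overlap step.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

FS, WIN = 1024, 4135   # sampling rate; window length from the paper

def preprocess(x):
    """Min-max normalize to [0, 1], then remove 50 Hz mains interference."""
    x = (x - x.min()) / (x.max() - x.min())
    b, a = iirnotch(w0=50, Q=30, fs=FS)     # Q is an assumed quality factor
    return filtfilt(b, a, x)

def segment(x, win=WIN, n_samples=1020):
    """Cut one electrode's recording into n_samples overlapping windows."""
    step = max(1, (len(x) - win) // (n_samples - 1))
    return np.stack([x[i * step : i * step + win] for i in range(n_samples)])

# Demo: one electrode, 35 repetitions of 6 s -> (1020, 4135) windows
x = np.random.randn(35 * 6 * FS)
windows = segment(preprocess(x))
```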

3.3. Network Architecture

In the suggested network design, ten 1-D convolution layers were utilized. The suggested CNN network was implemented in Python. The preferred deep-neural-network architecture is as follows:
1. A convolutional layer with a nonlinear Leaky-ReLU activation, followed by drop-out and max-pooling layers and, finally, a batch-normalization layer.
2. The preceding stage is repeated nine times without the drop-out layer.
3. The output of the final stage is flattened, and two fully connected layers lead to the output layer.
The suggested deep-neural-network design is displayed in Table 1, which shows that the number of key features is reduced from 8270 to 80, the dimension of the final hidden representation. Ultimately, using the nonlinear Leaky-ReLU function and Softmax, the resulting feature vector is passed through the fully connected layers. Figure 5 depicts the design of the planned CNN network.
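Assuming a Keras-style implementation (the paper states only that Python was used), the Table 1 stack could be sketched as follows. The drop-out rate and any options not listed in Table 1 are assumptions, and the stride of the second convolution is set to 1 to match the output sizes reported in the table.

```python
from tensorflow.keras import layers, models

def build_model(n_classes=3, input_len=8270):
    """Sketch of the Table 1 stack: one wide strided convolution,
    nine further conv/pool stages, a flatten, and two dense layers."""
    m = models.Sequential()
    # Stage 1: kernel 12, stride 8, 16 filters -> 1034 x 16, then pool + BN
    m.add(layers.Conv1D(16, 12, strides=8, padding='same',
                        input_shape=(input_len, 1)))
    m.add(layers.LeakyReLU())
    m.add(layers.Dropout(0.2))          # rate not given in the paper (assumed)
    m.add(layers.MaxPooling1D(2))
    m.add(layers.BatchNormalization())
    # Stages 2-10: kernel 3, filter counts from Table 1, length halved by pooling
    for filters in (32, 64, 80, 80, 80, 80, 80, 80, 80):
        m.add(layers.Conv1D(filters, 3, padding='same'))
        m.add(layers.LeakyReLU())
        m.add(layers.MaxPooling1D(2))
        m.add(layers.BatchNormalization())
    m.add(layers.Flatten())             # 1 x 80 -> 80 features
    m.add(layers.Dense(100))
    m.add(layers.LeakyReLU())
    m.add(layers.Dense(n_classes, activation='softmax'))
    return m
```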

3.4. Proposed Network Training and Evaluation

After a trial-and-error process, the deep neural network's hyperparameters were set using the cross-entropy loss function and the Adam optimizer [32,33] with a learning rate of 0.001. The standard BP approach with a mini-batch size of 10 was used for training. For the three-class scenario, there were 42,840 samples overall, of which 82% were selected at random to train the network (35,000), and the other 18% were used as the test set (7840). Additionally, 8% of the data from the training set were utilized for validation. For the two-class scenario, there were 57,120 samples overall, of which 48,000 (84%) were randomly selected for training and 9120 were used as the test set; 6% of the training data were utilized for validation. Figure 6 illustrates the EEG data allocation in the proposed method for the two-class and three-class scenarios.
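Under the same Keras assumption, the reported training configuration (Adam with a learning rate of 0.001, cross-entropy loss, mini-batches of 10, and the described splits) might look as follows. The data here are synthetic placeholders; `build_model` refers to the architecture sketch in Section 3.3.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the real EEG windows and one-hot labels.
X = np.random.rand(500, 8270, 1).astype('float32')
y = tf.keras.utils.to_categorical(np.random.randint(0, 3, 500), 3)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.18)

model = build_model(n_classes=3)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='categorical_crossentropy',   # cross-entropy loss
              metrics=['accuracy'])
model.fit(X_train, y_train,
          batch_size=10,            # mini-batch size reported in the paper
          epochs=500,               # iteration count shown in Figures 8 and 10
          validation_split=0.08)    # ~8% of the training set for validation
print(model.evaluate(X_test, y_test))
```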

4. Results

This section presents the simulation outcomes of the suggested approach for automatically identifying movement intention. A laptop with 4 GB of RAM and a Core i5 processor running at 2.4 GHz was used to demonstrate the suggested technique. The suggested network's loss function for the F3 and C3 channels in the two-class scenario is shown in Figure 7; the error decreased from 0.7 to 0.15. Figure 8 shows the accuracy of the proposed method for classifying the two-class scenario for the F3 and C3 channels over 500 iterations on the validation data; the suggested strategy achieves an accuracy of 99.23%. The suggested network's loss function for the F3 and C3 channels in the three-class scenario is displayed in Figure 9; the error decreased from 1 to 0.45. Figure 10 shows the accuracy of the proposed method for classifying the three-class scenario for the F3 and C3 channels over 500 iterations on the validation data, likewise reaching 99.23%.
Figure 11 also depicts the t-SNE diagrams for the raw signal, Conv6, Conv10, and FC2 layers for the two-class scenario on the F3 and C3 channels. The t-SNE charts for the raw signal, Conv6, FC1, and FC2 layers in the three-class scenario for the F3 and C3 channels are displayed in Figure 12. As can be observed in the final layer, practically all of the evaluation-set samples are separated, demonstrating the proposed method's efficient classification of both the two-class and three-class scenarios. Figure 13 depicts the confusion matrix for identifying the two- and three-class scenarios for the F3 and C3 channels for further analysis of the suggested approach; its effectiveness is again remarkable. Figure 14 additionally depicts the ROC diagram for classifying the two-class and three-class scenarios using the suggested strategy. Table 2 also shows the accuracy obtained for the selected channels; according to this table, the performance of the F3 and C3 channels is promising for classifying the two-class and three-class scenarios.
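The layer-wise views in Figures 11 and 12 correspond to projecting intermediate activations to two dimensions with t-SNE. A generic sketch of that step, assuming the Keras model from Section 3.3, is shown below; the layer is selected by index to avoid assuming layer names.

```python
from sklearn.manifold import TSNE
from tensorflow.keras import Model

def tsne_embedding(model, layer_index, X):
    """2-D t-SNE projection of one layer's activations (e.g., the layers
    behind the raw-signal, Conv6, Conv10/FC1, and FC2 panels)."""
    sub = Model(model.input, model.layers[layer_index].output)
    feats = sub.predict(X).reshape(len(X), -1)
    return TSNE(n_components=2, perplexity=30).fit_transform(feats)
```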
Several automatically identified movement-intention methods using EEG signals have been proposed recently. In Table 3, we compare various studies classifying two-class scenarios using EEG signals. Table 3 reveals that our suggested technique has the greatest accuracy, sensitivity, and specificity for categorizing two-class situations compared to all other comparable methods. The classification sensitivity of the suggested technique for two classes is 96.93%, while [26] and [27] claim sensitivity values of 86% and 90% for identical situations. In the majority of previous works, common techniques, such as wavelet transform and empirical-mode decomposition, were used to distinguish the signal's main characteristics and properties. However, these techniques often involved issues with the parameters of the feature-selection and -extraction process, which depend on factors such as the number of decomposition levels and the kind of mother wavelet. Compared to previous approaches, one of the greatest advantages of the suggested method is that feature extraction is performed automatically, without the need for a feature-selection procedure, when employing deep learning.
In order to show the performance of the proposed CNN method with different data types as inputs, the classification accuracy was also determined using other common methods for the automatic identification of movement intention. In this regard, time-domain data and several hand-crafted features of these data, along with DBM and MLP classifiers, were selected as comparative methods [34,35,36,37,38]. The number of hidden layers was set to 3 for the DBM and MLP, and the learning rate was chosen as 0.001. For the CNN, the proposed architecture in Table 1 was selected. The minimum, maximum, skewness, crest factor, variance, root mean square (RMS), mean, and kurtosis were chosen as the hand-crafted time-domain features. The classification accuracy of the different methods, based on feature learning from the raw data and on the manual features, is presented in Figure 15. The accuracy of the CNN, DBM, and MLP reached 96%, 82%, and 71%, respectively, after 100 iterations. As can be seen from Figure 15, the performance of the proposed network is promising compared to the DBM and MLP, and the proposed algorithm converges to the desired value faster. White Gaussian noise with a signal-to-noise ratio (SNR) of −4 to 20 dB was added to the EEG signals as measurement noise. Figure 16 shows the classification accuracy of every approach, illustrating how well the suggested CNN, DBM, and MLP methods withstand measurement noise. With accuracy above 90% for SNRs from −4 to 20 dB, the classification performance of the suggested approach is resilient to measurement noise over a broad range of SNRs.
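The noise experiment amounts to adding white Gaussian noise scaled to a target SNR before classification. A small sketch of that corruption step is given below; the variable names and test data are illustrative.

```python
import numpy as np

def add_awgn(x, snr_db):
    """Add white Gaussian noise scaled so the output has the target SNR (dB)."""
    p_signal = np.mean(x ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return x + np.random.normal(0.0, np.sqrt(p_noise), size=x.shape)

# Sweep the SNR range examined in Figure 16 (-4 dB to 20 dB)
clean = np.random.randn(10, 8270)       # stand-in for the clean test windows
for snr in range(-4, 21, 4):
    noisy = add_awgn(clean, snr)        # feed `noisy` to the trained model
```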
Many researchers have used CNNs in their research. However, although previous studies used deep convolutional networks, their classification accuracy is low and their networks require a large amount of training data. Furthermore, the efficiency of the deep networks in previous studies in noisy environments has not been considered. In addition, most previous studies applied computationally heavy pre-processing, such as wavelet transform and empirical-mode decomposition, to the raw signal before feeding it into the network, which increases the computational load of the algorithm. The key point of this research is that the proposed network can select/extract the necessary features from the raw signal based on its architecture, shown in Table 1, without the need for additional pre-processing. Furthermore, owing to the choice of optimizer, the number of layers, and related design decisions, the proposed network shows the best performance in terms of speed and accuracy compared to those in previous studies. In addition, because large filters are used in the initial layer and medium- and small-sized filters in the subsequent layers, the proposed network, according to the results, offers good resistance in noisy environments over a wide range of SNRs.

5. Conclusions

This research introduces a novel deep-neural-network-based technique for automatically identifying movement intention. The proposed network comprises ten CNN layers and two fully connected layers. We obtained 96.9% and 89.8% accuracy for identifying two-class and three-class movement intentions, respectively, which is a considerable improvement over earlier methods. Previous methods have often been based on manual feature extraction, which increases the computational cost of the algorithm. Accordingly, the proposed deep architecture removes the feature-selection/extraction block diagram, which makes it possible to receive the raw EEG signal, extract the necessary features from it without additional pre-processing, and classify it in the two scenarios. Owing to the use of large filters in the initial layer and small filters in the middle layers, the proposed network withstands a wide range of SNRs; at an SNR of 1 dB, the classification accuracy is still above 90%. It is expected that the suggested approach will also be employed in BCI applications.

Author Contributions

Conceptualization, N.S. and Z.B.; methodology, S.S. and M.D.; software, S.S. and Y.R.; validation, S.S. and S.D.; writing—original draft preparation, N.S. and S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the University of Tabriz (protocol code IR.TBZ-REC.1397.3; date of approval 1397.3).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are private due to the lack of permission from the ethics committee.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bulárka, S.; Gontean, A. Brain-computer interface review. In Proceedings of the 2016 12th IEEE International Symposium on Electronics and Telecommunications (ISETC), Timisoara, Romania, 27–28 October 2016; pp. 219–222.
  2. Amiri, S.; Fazel-Rezai, R.; Asadpour, V. A Review of Hybrid Brain-Computer Interface Systems. Adv. Hum.-Comput. Interact. 2013, 2013, 187024.
  3. Wang, H.; Zhang, Y.; Waytowich, N.R.; Krusienski, D.J.; Zhou, G.; Jin, J.; Wang, X.; Cichocki, A. Discriminative feature extraction via multivariate linear regression for SSVEP-based BCI. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 532–541.
  4. Guger, C.; Coyle, D.; Mattia, D.; Lucia, M.D.; Hochberg, L.; Edlow, B.L.; Peters, B.; Eddy, B.; Nam, C.S.; Noirhomme, Q. Trends in BCI research I: Brain-computer interfaces for assessment of patients with locked-in syndrome or disorders of consciousness. In Brain-Computer Interface Research; Springer: Berlin/Heidelberg, Germany, 2017; pp. 105–125.
  5. Jin, J.; Zhang, H.; Daly, I.; Wang, X.; Cichocki, A. An improved P300 pattern in BCI to catch user’s attention. J. Neural Eng. 2017, 14, 036001.
  6. Ang, K.K.; Guan, C. EEG-based strategies to detect motor imagery for control and rehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 25, 392–401.
  7. Hwang, H.-J.; Kim, S.; Choi, S.; Im, C.-H. EEG-based brain-computer interfaces: A thorough literature survey. Int. J. Hum.-Comput. Interact. 2013, 29, 814–826.
  8. Sheykhivand, S.; Rezaii, T.Y.; Saatlo, A.N.; Romooz, N. Comparison between different methods of feature extraction in BCI systems based on SSVEP. Int. J. Ind. Math. 2017, 9, 341–347.
  9. Toro, C.; Deuschl, G.; Thatcher, R.; Sato, S.; Kufta, C.; Hallett, M. Event-related desynchronization and movement-related cortical potentials on the ECoG and EEG. Electroencephalogr. Clin. Neurophysiol. Evoked Potentials Sect. 1994, 93, 380–389.
  10. Pfurtscheller, G.; Neuper, C. Future prospects of ERD/ERS in the context of brain–computer interface (BCI) developments. Prog. Brain Res. 2006, 159, 433–437.
  11. Bai, O.; Rathi, V.; Lin, P.; Huang, D.; Battapady, H.; Fei, D.-Y.; Schneider, L.; Houdayer, E.; Chen, X.; Hallett, M. Prediction of human voluntary movement before it occurs. Clin. Neurophysiol. 2011, 122, 364–372.
  12. Ibáñez, J.; Serrano, J.; Del Castillo, M.; Monge-Pereira, E.; Molina-Rueda, F.; Alguacil-Diego, I.; Pons, J.L. Detection of the onset of upper-limb movements based on the combined analysis of changes in the sensorimotor rhythms and slow cortical potentials. J. Neural Eng. 2014, 11, 056009.
  13. Demandt, E.; Mehring, C.; Vogt, K.; Schulze-Bonhage, A.; Aertsen, A.; Ball, T. Reaching movement onset- and end-related characteristics of EEG spectral power modulations. Front. Neurosci. 2012, 6, 65.
  14. Libet, B.; Gleason, C.A.; Wright, E.W.; Pearl, D.K. Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential). In Neurophysiology of Consciousness; Springer: Berlin/Heidelberg, Germany, 1993; pp. 249–268.
  15. Yom-Tov, E.; Inbar, G. Detection of movement-related potentials from the electro-encephalogram for possible use in a brain-computer interface. Med. Biol. Eng. Comput. 2003, 41, 85–93.
  16. Haw, C.; Lowne, D.; Roberts, S. User Specific Template Matching for Event Detection Using Single Channel EEG; Information Engineering Department, University of Oxford: Oxford, UK, 2006.
  17. Bai, O.; Lin, P.; Vorbach, S.; Li, J.; Furlani, S.; Hallett, M. Exploration of computational methods for classification of movement intention during human voluntary movement from single trial EEG. Clin. Neurophysiol. 2007, 118, 2637–2655.
  18. Boye, A.T.; Kristiansen, U.Q.; Billinger, M.; do Nascimento, O.F.; Farina, D. Identification of movement-related cortical potentials with optimized spatial filtering and principal component analysis. Biomed. Signal Processing Control 2008, 3, 300–304.
  19. Kato, Y.X.; Yonemura, T.; Samejima, K.; Maeda, T.; Ando, H. Development of a BCI master switch based on single-trial detection of contingent negative variation related potentials. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 4629–4632.
  20. Lew, E.; Chavarriaga, R.; Silvoni, S.; Millán, J.D.R. Detection of self-paced reaching movement intention from EEG signals. Front. Neuroeng. 2012, 5, 13.
  21. Niazi, I.K.; Mrachacz-Kersting, N.; Jiang, N.; Dremstrup, K.; Farina, D. Peripheral electrical stimulation triggered by self-paced detection of motor intention enhances motor evoked potentials. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 20, 595–604.
  22. Niazi, I.K.; Jiang, N.; Jochumsen, M.; Nielsen, J.F.; Dremstrup, K.; Farina, D. Detection of movement-related cortical potentials based on subject-independent training. Med. Biol. Eng. Comput. 2013, 51, 507–512.
  23. Ahmadian, P.; Sanei, S.; Ascari, L.; González-Villanueva, L.; Umiltà, M.A. Constrained blind source extraction of readiness potentials from EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 21, 567–575.
  24. Jochumsen, M.; Niazi, I.K.; Mrachacz-Kersting, N.; Farina, D.; Dremstrup, K. Detection and classification of movement-related cortical potentials associated with task force and speed. J. Neural Eng. 2013, 10, 056015.
  25. Jiang, N.; Gizzi, L.; Mrachacz-Kersting, N.; Dremstrup, K.; Farina, D. A brain–computer interface for single-trial detection of gait initiation from movement related cortical potentials. Clin. Neurophysiol. 2015, 126, 154–159.
  26. Xu, R.; Jiang, N.; Lin, C.; Mrachacz-Kersting, N.; Dremstrup, K.; Farina, D. Enhanced low-latency detection of motor intention from EEG for closed-loop brain-computer interface applications. IEEE Trans. Biomed. Eng. 2013, 61, 288–296.
  27. Wairagkar, M.; Hayashi, Y.; Nasuto, S.J. Exploration of neural correlates of movement intention based on characterisation of temporal dependencies in electroencephalography. PLoS ONE 2018, 13, e0193722.
  28. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
  29. Hung, S.-L.; Adeli, H. Parallel backpropagation learning algorithms on Cray Y-MP8/864 supercomputer. Neurocomputing 1993, 5, 287–302.
  30. Hinton, G.E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R.R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv 2012, arXiv:1207.0580.
  31. Mousavi, Z.; Rezaii, T.Y.; Sheykhivand, S.; Farzamnia, A.; Razavi, S. Deep convolutional neural network for classification of sleep stages from single-channel EEG signals. J. Neurosci. Methods 2019, 324, 108312.
  32. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
  33. Kwon, H.; Lee, S. Friend-guard adversarial noise designed for electroencephalogram-based brain–computer interface spellers. Neurocomputing 2022, 506, 184–195.
  34. Sheykhivand, S.; Rezaii, T.Y.; Meshgini, S.; Makoui, S.; Farzamnia, A. Developing a Deep Neural Network for Driver Fatigue Detection Using EEG Signals Based on Compressed Sensing. Sustainability 2022, 14, 2941.
  35. Sheykhivand, S.; Rezaii, T.Y.; Mousavi, Z.; Meshgini, S.; Makouei, S.; Farzamnia, A.; Danishvar, S.; Teo Tze Kin, K. Automatic Detection of Driver Fatigue Based on EEG Signals Using a Developed Deep Neural Network. Electronics 2022, 11, 2169.
  36. Sabahi, K.; Sheykhivand, S.; Mousavi, Z.; Rajabioun, M. Recognition Covid-19 cases using deep type-2 fuzzy neural networks based on chest X-ray image. Comput. Intell. Electr. Eng. 2022.
  37. Abdolahi, M.; Yousefi, R.T.; Sheykhivand, S. Recognition of Emotions Provoked by Auditory Stimuli using EEG Signal Based on Sparse Representation-Based Classification. Tabriz J. Electr. Eng. 2019, 49, 331–341.
  38. Sheykhivand, S.; Yousefi Rezaii, T.; Mousavi, Z.; Meshini, S. Automatic stage scoring of single-channel sleep EEG using CEEMD of genetic algorithm and neural network. Comput. Intell. Electr. Eng. 2018, 9, 15–28.
Figure 1. The block diagram of the proposed algorithm.
Figure 2. Recording of the EEG signal while tapping for subject 1.
Figure 3. Part of the EEG signal for resting, left-hand, and right-hand stages of F3 channel for subject 1.
Figure 4. The overlap operation for each electrode.
Figure 5. Proposed network architecture.
Figure 6. EEG data allocation in the proposed method for two-class and three-class scenarios.
Figure 7. The proposed network error for two-class scenarios for F3 and C3 channels.
Figure 8. The proposed network accuracy for two-class scenarios for F3 and C3 channels.
Figure 9. The proposed network error for three-class scenarios for F3 and C3 channels.
Figure 10. The proposed network accuracy for three-class scenarios for F3 and C3 channels.
Figure 11. The t-SNE chart of the proposed method for two-class scenarios for F3 and C3 channels.
Figure 12. The t-SNE chart of the proposed method for three-class scenarios for F3 and C3 channels.
Figure 13. The confusion matrix for classifying two-class and three-class scenarios for F3 and C3 channels.
Figure 14. The ROC diagram for classifying two-class and three-class scenarios for F3 and C3 channels.
Figure 15. Comparison of the proposed method with common methods.
Figure 16. Accuracy of the proposed network versus SNR in additive white Gaussian noise.
Table 1. Details of the proposed deep-neural-network architecture.

| Layer Number | Layer Type | Size and Filter Steps | Number of Filters | Output Value | Padding |
|---|---|---|---|---|---|
| 1 | Convolution1 | 12 × 1 / 8 × 1 | 16 | 1034 × 16 | yes |
| 2 | Pooling1 | 2 × 1 / 2 × 1 | 16 | 517 × 16 | no |
| 3 | Convolution2 | 3 × 1 / 2 × 1 | 32 | 517 × 32 | yes |
| 4 | Pooling2 | 2 × 1 / 2 × 1 | 32 | 258 × 32 | no |
| 5 | Convolution3 | 3 × 1 / 1 × 1 | 64 | 258 × 64 | yes |
| 6 | Pooling3 | 2 × 1 / 2 × 1 | 64 | 129 × 64 | no |
| 7 | Convolution4 | 3 × 1 / 1 × 1 | 80 | 129 × 80 | yes |
| 8 | Pooling4 | 2 × 1 / 2 × 1 | 80 | 64 × 80 | no |
| 9 | Convolution5 | 3 × 1 / 1 × 1 | 80 | 64 × 80 | yes |
| 10 | Pooling5 | 2 × 1 / 2 × 1 | 80 | 32 × 80 | no |
| 11 | Convolution6 | 3 × 1 / 1 × 1 | 80 | 32 × 80 | yes |
| 12 | Pooling6 | 2 × 1 / 2 × 1 | 80 | 16 × 80 | no |
| 13 | Convolution7 | 3 × 1 / 1 × 1 | 80 | 16 × 80 | yes |
| 14 | Pooling7 | 2 × 1 / 2 × 1 | 80 | 8 × 80 | no |
| 15 | Convolution8 | 3 × 1 / 1 × 1 | 80 | 8 × 80 | yes |
| 16 | Pooling8 | 2 × 1 / 2 × 1 | 80 | 4 × 80 | no |
| 17 | Convolution9 | 3 × 1 / 1 × 1 | 80 | 4 × 80 | yes |
| 18 | Pooling9 | 2 × 1 / 2 × 1 | 80 | 2 × 80 | no |
| 19 | Convolution10 | 3 × 1 / 1 × 1 | 80 | 2 × 80 | yes |
| 20 | Pooling10 | 2 × 1 / 2 × 1 | 80 | 1 × 80 | no |
| 21 | Fully-connected | 100 | – | 100 | – |
| 22 | Softmax | 2–3 | 1 | 2–3 | – |
Table 2. The accuracy of the intended channels for classifying two-class and three-class scenarios.

| Channels | F3-C3 | Cz-Fz | C4-F4 | P3-C3 | Cz-Pz | P4-C4 |
|---|---|---|---|---|---|---|
| Acc (%) in two-class scenarios | 96.90 | 96.81 | 93.57 | 94.10 | 93.84 | 94.15 |
| Acc (%) in three-class scenarios | 89.80 | 79.9 | 78.5 | 81.3 | 82.4 | 78.6 |
Table 3. The proposed method compared with previous studies.

| Research | Method | Precision (%) | Accuracy (%) | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|---|
| [23] | CBSE | – | – | 74 ± 14 | – |
| [24] | CSP | – | – | 76 | – |
| [25] | MLP | – | – | 75 | – |
| [26] | DBM | – | – | 79 ± 11 | – |
| [27] | ICA | – | 70.76 | 78 ± 8 | – |
| Proposed Method | CNN | 97 | 96.93 | 96.93 | 96.93 |