Deep Learning Techniques for EEG-Based BCI Analysis and Applications
2023 16th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI) | 979-8-3503-3075-5/23/$31.00 ©2023 IEEE | DOI: 10.1109/CISP-BMEI60920.2023.10373270
Abstract—Brain-Computer Interfaces enable people to communicate with each other or with computing systems without any movement or even speech. Recent advancements in BCI mostly use EEG as the channel connecting the brain and computer, leaving EEG signal decoding as a challenge to be solved. Similar complex problems have been given satisfying solutions using Deep Learning techniques, owing to their capacity to learn patterns hidden in raw data and their ability to generalize. However, at present, the methods commonly used for EEG decoding are not state-of-the-art DL architectures. Besides, some open challenges and limitations exposed in previous research remain unsolved. In this paper, some basic but critical concepts and challenges in EEG-based BCI systems are discussed, and prospects for further development combining DL and EEG analysis are raised. The author expects that the investigation presented in this paper will help researchers acquire the basic, hands-on concepts for constructing EEG-based BCIs and prompt more recent DL techniques to be used in designing effective EEG-BCI systems.

Keywords—Electroencephalograph (EEG); Deep Learning (DL); Brain-Computer Interface (BCI)

I. INTRODUCTION

Brain-Computer Interface (BCI), sometimes referred to as Brain-Machine Interface, is a system aimed at creating a direct communication pathway between the human brain and external devices [1-3] (usually computers or robotic arms) with minimal or no physiological actions or verbal expressions. Implementations of BCIs include non-invasive approaches (EEG, MEG, MRI), partially invasive approaches (ECoG and endovascular), and invasive approaches (microelectrode arrays), classified by how close the electrodes or other detectors are to brain tissue [4, 5]. The most prevalent signal captured from the brain to build a fully functional BCI system is the electroencephalogram (EEG), because of its convenience and low cost [6, 7]. It has been successfully instantiated in applicable day-to-day systems controlling wheelchairs, robots, and automatic vehicles for quality-of-life improvements [8-10]. It has also been used for detecting and predicting mental or neurological disorders [11-17], and for other medical or practical uses such as sleep staging, fatigue or mental workload identification, and emotion detection [18-25].

Normally, EEG-based BCI systems involve five stages: data acquisition, data pre-processing, feature extraction, classification, and feedback, as illustrated in Fig. 1. From the data pre-processing stage through the classification stage, human experts are needed, since traditional methods [26-28] require both familiarity with subjects and expertise in identifying EEG signals. EEG signals are known to have a low signal-to-noise ratio (SNR) [29] and high inter-subject variability, and to be non-stationary [30]. To overcome these challenges and achieve automation, approaches with better generalization capabilities are needed.

Figure 1. Five stages of BCI.

Deep Learning (DL) provides such an approach, and it has already been proven to excel in complex tasks including natural language processing, image and audio pattern recognition, and text comprehension. DL is a subfield of Machine Learning (ML) that has developed rapidly in recent years. Benefiting from newly developed techniques, DL exploits its advantage of stacked layers of neurons and achieves remarkably high performance in capturing patterns [31, 32].

Classic ML and DL methods and architectures have already been used in the field of EEG-based BCI systems, including ML methods such as SVM [32-34] and LDA [36, 37], and DL architectures such as CNN [38], RNN [39], and DBN [40]. Unfortunately, the most state-of-the-art architectures, like Transformer, Diffusion, or YOLO, have seldom or never been used to facilitate, correct, or accelerate EEG-based BCI systems. Hence, this article surveys the DL techniques used for EEG-based BCI systems, focusing mostly on the most recent architectures, their usages, and expectations.

The structure of this paper is organized as follows: Sect. 1 briefly introduces some concepts and the state of research, along with the aim of this review; Sect. 2 introduces EEG concepts in more detail, with their challenges and applications; Sect. 3 introduces several DL architectures and their usage in EEG-based BCI systems, with some relatively avant-garde thoughts and usages; Sect. 4 briefly discusses DL techniques for EEG-based BCI systems and concludes by suggesting several future research directions.
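The five-stage pipeline described in the introduction can be sketched as a simple processing chain. Every function and threshold below is a hypothetical placeholder for illustration only, not part of any real BCI toolchain.

```python
# Hypothetical sketch of the five BCI stages; all functions and values
# are illustrative placeholders, not a real EEG processing library.

def acquire():
    # Stage 1: data acquisition (here: a fake single-channel signal).
    return [0.1, 0.4, -0.2, 0.9, -0.5, 0.3]

def preprocess(signal):
    # Stage 2: pre-processing, e.g. removing the mean (DC offset).
    mean = sum(signal) / len(signal)
    return [x - mean for x in signal]

def extract_features(signal):
    # Stage 3: feature extraction, e.g. average power and peak amplitude.
    power = sum(x * x for x in signal) / len(signal)
    return {"power": power, "peak": max(abs(x) for x in signal)}

def classify(features):
    # Stage 4: classification with a toy threshold rule.
    return "active" if features["power"] > 0.1 else "rest"

def feedback(label):
    # Stage 5: feedback to the user or device.
    return f"predicted state: {label}"

result = feedback(classify(extract_features(preprocess(acquire()))))
print(result)
```

In a real system each stage is far more involved (amplifiers and electrode montages for acquisition, filtering and artifact removal for pre-processing, learned features for classification), but the data flow between stages is exactly this chain.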
Authorized licensed use limited to: SHANGHAI UNIVERSITY. Downloaded on January 05,2024 at 01:58:45 UTC from IEEE Xplore. Restrictions apply.
signals over time. In short, EEG-based BCIs developed must be tested across various subjects to demonstrate their capability to generalize.

3. Lack of Efficient Feature Extraction and Selection: Extracting relevant features from raw EEG signals and selecting the most informative ones for classification is a complex task. The multi-task nature of human thinking enables humans to handle complex tasks, but it also makes it challenging to extract the expected features, since a large range of patterns is generated [46]. Developing efficient methods for feature extraction and selection is critical for accurate, real-time performance.

4. Lack of Unified Protocols: In the data acquisition stage, a unified protocol should be established to improve data availability and variable control. A system specifying the number and placement of electrodes should be extended from the International 10-20 system, together with a better-annotated data format instead of raw data points. More unified and modular helmet designs supporting the electrodes may also facilitate the standardization of data. Unified datasets and databases should be established.

5. Requirements for User Training: Users often need extensive training to control BCIs effectively, especially in robotic control or spelling systems. Designing user-friendly training protocols that accelerate the learning curve and improve user engagement is still relatively hard.

Also, despite decades of advancement, EEG-based BCI systems remain mostly lab experiments rather than commercial products. Aside from ethical and privacy concerns, transitioning from controlled lab environments to real-world applications poses even more severe challenges due to noise, user variability, and the need for reliable performance outside the lab. Researchers are actively addressing these challenges through rapid innovation and interdisciplinary collaboration. DL techniques are regarded as strong candidates to provide end-to-end solutions, and they are briefly introduced in the next section.

III. MOST ADVANCED DL TECHNIQUES & USAGES

A. Introduction of Architectures

1) CNN

Convolutional Neural Networks (CNNs) are composed of multiple module layers, typically including convolutional layers, pooling layers, and fully connected layers. The essential convolutional layers apply convolution filters to the input data, capturing local patterns and extracting features. These operations apply learnable filters (kernels) across the input, enabling the network to recognize patterns of varying complexity. Other layers, including pooling layers, which reduce spatial dimensions, and fully connected layers, which make predictions based on learned features, are attached to the stacked convolutional layers to achieve better performance. CNNs learn hierarchical features, with early layers capturing simple features like edges and later layers learning complex patterns or object representations [31, 47, 48].

2) RNN and LSTM

Compared to a CNN's identical convolutional layers, Recurrent Neural Networks (RNNs) focus more on the relationships among layers, introducing a mechanism of recurrent connections to achieve better performance on spatial or temporal sequential data [49]. RNNs are specifically designed to process sequential data by maintaining a hidden state that captures information about previous time steps. This hidden state allows RNNs to capture temporal (or spatial, if the input data is specially designed and transformed) dependencies in the data. Through recurrent connections that loop back on themselves, RNNs allow information to persist over time.

Long Short-Term Memory (LSTM) is a type of RNN architecture designed to precisely utilize the most valuable information the network has received before an input. LSTM networks are equipped with memory cells that can store and retrieve information over extended sequences. LSTMs also have three gates - the input gate, forget gate, and output gate - that regulate the flow of information into and out of the memory cell. These gates control which information to remember, forget, and output at each time step. These mechanisms allow LSTMs to capture long-range dependencies in data, and by carefully controlling the gates, the problem of vanishing gradients is partly addressed [50-52].

3) Transformer

The Transformer architecture was first introduced for machine translation [53], yet it has since been widely adopted in various domains beyond NLP. Its parallel processing capabilities and attention mechanism have made it a versatile tool for modeling sequential and structured data. The Transformer relies mostly on a self-attention mechanism that allows it to weigh the importance of different parts of the input sequence when processing each element. This attention mechanism enables the model to capture longer-range dependencies than an LSTM or CNN. Moreover, Transformers can process input sequences in parallel, making them highly efficient for both training and inference. To address the fact that the Transformer does not inherently have a notion of sequence order, positional encodings are added to the input embeddings to provide information about the position of each element in the sequence.

The Vision Transformer (ViT) is a DL model that applies the Transformer architecture to image processing. ViT breaks images down into smaller non-overlapping patches, which are treated as sequences of vectors. These patches are then processed by the Transformer model, allowing the model to capture both global and local image features [54]. Fig. 3 shows a possible implementation of a ViT-based BCI system for seizure-related tasks.
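The gating logic described for the LSTM can be written out directly for a single time step. The scalar weights below are arbitrary illustrative values, not a trained model, and a real LSTM uses weight matrices over vectors rather than scalars.

```python
import math

# Toy single-step LSTM cell illustrating the three gates; scalar
# weights are used purely for readability (real cells use matrices).

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    # Each gate looks at the current input x and previous hidden state h_prev.
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev)    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev)    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev)    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev)  # candidate cell value
    c = f * c_prev + i * g                         # cell: forget old, admit new
    h = o * math.tanh(c)                           # hidden state exposed onward
    return h, c

weights = {"wi": 0.5, "ui": 0.1, "wf": 0.5, "uf": 0.1,
           "wo": 0.5, "uo": 0.1, "wg": 0.5, "ug": 0.1}
h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:  # a tiny EEG-like sample sequence
    h, c = lstm_step(x, h, c, weights)
```

Because the forget gate multiplies the previous cell state rather than repeatedly squashing it through an activation, gradients can flow back through `c` over many steps, which is the mechanism behind the mitigated vanishing-gradient problem mentioned above.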
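The scaled dot-product self-attention at the heart of the Transformer described above can be sketched in a few lines. The embeddings here are toy values and positional encodings are omitted for brevity; this is an illustration of the mechanism, not a usable model.

```python
import math

# Minimal scaled dot-product self-attention over a toy 3-position
# sequence of 2-dimensional embeddings. Values are illustrative only.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        # Each position weighs every position by query-key similarity,
        # scaled by sqrt(d) to keep the softmax well-behaved.
        scores = softmax([dot(q, k) / math.sqrt(d) for k in keys])
        out.append([sum(w * v[j] for w, v in zip(scores, values))
                    for j in range(len(values[0]))])
    return out

seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 3 positions, 2-dim embeddings
attended = self_attention(seq, seq, seq)    # self-attention: Q = K = V
```

Because every output position attends to every input position in one matrix operation, all positions can be computed in parallel, which is the source of the training efficiency noted in the text.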
Figure 3. A possible ViT model designed for seizure tasks.
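The patch-embedding step that such a ViT model performs first can be sketched directly: a small 2-D input (for EEG, e.g. a time-frequency map of a segment) is cut into non-overlapping patches, each flattened into one vector for the Transformer encoder. The 4x4 input and 2x2 patch size below are illustrative.

```python
# Sketch of the ViT-style patch-splitting step: a 4x4 "image" is cut
# into non-overlapping 2x2 patches, each flattened into a vector.
# Shapes are illustrative; real ViT also applies a learned projection.

def to_patches(image, patch):
    rows, cols = len(image), len(image[0])
    patches = []
    for r in range(0, rows, patch):
        for c in range(0, cols, patch):
            # Flatten each patch row-by-row into a single vector.
            patches.append([image[r + dr][c + dc]
                            for dr in range(patch)
                            for dc in range(patch)])
    return patches

image = [[ 1,  2,  3,  4],
         [ 5,  6,  7,  8],
         [ 9, 10, 11, 12],
         [13, 14, 15, 16]]
patches = to_patches(image, 2)
# 4 patches of length 4; the top-left patch is [1, 2, 5, 6]
```

The resulting patch sequence is then treated exactly like a token sequence in NLP, with positional encodings restoring the spatial layout.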
conclusion, we must focus more on the architecture itself, e.g., using more advanced mechanisms and methods that have been proven to excel in other fields, rather than reducing the pursuit of better performance to adding layers and increasing parameters. A certain level of distillation is needed.

There is a certain degree of delay in the architectures used in EEG-based BCIs. For example, the Transformer and ViT arose in 2017 and 2020, yet the first usage of these architectures in this field was around 2022. There are also still other architectures that outperform the Transformer in inference or training speed, accuracy, or generalization ability, yet they have not been used in EEG analysis. The author urges that further work on these architectures be carried out to test their adaptability to neurophysiological data.

IV. CONCLUSION

As a relatively newly proposed and developed technology, BCIs have already shown promise in many domains, including seizure detection, emotion detection, sleep staging, and motor control, yet their effectiveness mostly relies on the accuracy of analysis of the neurophysiological data the system collects, which at present is EEG. Time-consuming tasks like sleep staging or seizure detection, which used to require human-expert engagement, can now be completed automatically by BCIs. However, recent BCIs have not yet met the criteria under which we can safely rely on the system without monitoring experts at hand. Developing methods that improve the performance of automated EEG analysis should be the preoccupation of the field. Deep Learning provides us with this promising opportunity.

In this study, by investigating relevant studies, the author gives basic concepts of EEG, its analysis and applications, and architectures in DL. The author points out the problems of (1) Low SNR, (2) Lack of Ability to Generalize, (3) Lack of Efficient Feature Extraction and Selection, (4) Lack of Unified Protocols, and (5) Requirements for User Training. DL has been proven, directly or indirectly, able to solve these problems to a certain degree, especially the first three. DL architectures like CNN and LSTM have been investigated and implemented, while newly developed architectures like the Transformer have seldom or never been widely investigated. The author raises two issues in the recent advancement of DL techniques for EEG-based BCI systems: (1) Data Issues, namely that the necessity of data pre-processing should be investigated, and that more day-to-day (artifact-disrupted) data and datasets are lacking; and (2) Architecture Issues, namely that CNNs have been widely used and more state-of-the-art architectures should be deployed. The author underscores the potential of DL techniques in EEG-based BCIs and urges further exploration and adaptation of advanced architectures to improve the accuracy, efficiency, and generalization of EEG-based BCI systems.

REFERENCES

[1] Birbaumer, N. (2006). Breaking the silence: brain–computer interfaces (BCI) for communication and motor control. Psychophysiology, 43(6), 517-532.
[2] Xing-Yu, W., Jing, J., Zhang, Y., & Bei, W. (2013). Brain control: human-computer integration control based on brain-computer interface approach. Acta Automatica Sinica, 39(3), 208-221.
[3] Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G., & Vaughan, T. M. (2002). Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113(6), 767-791. http://www.ncbi.nlm.nih.gov/pubmed/12048038
[4] Martini, M. L., Oermann, E. K., Opie, N. L., Panov, F., Oxley, T., & Yaeger, K. (2020). Sensor modalities for brain-computer interface technology: a comprehensive literature review. Neurosurgery, 86(2), E108-E117.
[5] Han, C.-H., Müller, K.-R., & Hwang, H.-J. (2020). Brain-switches for asynchronous brain–computer interfaces: a systematic review. Electronics, 9(3), 422. https://doi.org/10.3390/electronics9030422
[6] Portillo-Lara, R., Tahirbegi, B., Chapman, C. A. R., Goding, J. A., & Green, R. A. (2021). Mind the gap: state-of-the-art technologies and applications for EEG-based brain–computer interfaces. APL Bioengineering, 5(3), 031507.
[7] Arafat, I. (2013). Brain-computer interface: past, present & future. International Islamic University Chittagong (IIUC), Chittagong, Bangladesh, 1-6.
[8] Cruz, A., Pires, G., Lopes, A., Carona, C., & Nunes, U. J. (2021). A self-paced BCI with a collaborative controller for highly reliable wheelchair driving: experimental tests with physically disabled individuals. IEEE Transactions on Human-Machine Systems, 51(2), 109-119.
[9] Schwarz, A., Höller, M. K., Pereira, J., Ofner, P., & Müller-Putz, G. R. (2020). Decoding hand movements from human EEG to control a robotic arm in a simulation environment. Journal of Neural Engineering, 17(3), 036010.
[10] Song, Y., Wu, W., Lin, C., Lin, G., Li, G., & Xie, L. (2020). Assistive mobile robot with shared control of brain-machine interface and computer vision. In 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC) (pp. 405-409). IEEE.
[11] Miladinović, A., Ajčević, M., Jarmolowska, J., Marusic, U., Silveri, G., Battaglini, P. P., & Accardo, A. (2020). Performance of EEG motor-imagery based spatial filtering methods: a BCI study on stroke patients. Procedia Computer Science, 176, 2840-2848.
[12] Birbaumer, N., & Cohen, L. G. (2007). Brain–computer interfaces: communication and restoration of movement in paralysis. The Journal of Physiology, 579(3), 621-636.
[13] Chaudhary, U., Birbaumer, N., & Ramos-Murguialday, A. (2016). Brain–computer interfaces for communication and rehabilitation. Nature Reviews Neurology, 12(9), 513-525.
[14] Bai, Z., Fong, K. N., Zhang, J. J., Chan, J., & Ting, K. H. (2020). Immediate and long-term effects of BCI-based rehabilitation of the upper extremity after stroke: a systematic review and meta-analysis. Journal of NeuroEngineering and Rehabilitation, 17, 1-20.
[15] Ramos-Murguialday, A., Broetz, D., Rea, M., Läer, L., Yilmaz, Ö., Brasil, F. L., ... & Birbaumer, N. (2013). Brain–machine interface in chronic stroke rehabilitation: a controlled study. Annals of Neurology, 74(1), 100-108.
[16] Acharya, U. R., Sree, S. V., Swapna, G., Martis, R. J., & Suri, J. S. (2013). Automated EEG analysis of epilepsy: a review. Knowledge-Based Systems, 45, 147-165.
[17] Arns, M., Conners, C. K., & Kraemer, H. C. (2013). A decade of EEG theta/beta ratio research in ADHD: a meta-analysis. Journal of Attention Disorders, 17(5), 374-383.
[18] Engemann, D. A., Raimondo, F., King, J. R., Rohaut, B., Louppe, G., Faugeras, F., ... & Sitt, J. D. (2018). Robust EEG-based cross-site and cross-protocol classification of states of consciousness. Brain, 141(11), 3179-3192.
[19] Berka, C., Levendowski, D. J., Lumicao, M. N., Yau, A., Davis, G., Zivkovic, V. T., ... & Craven, P. L. (2007). EEG correlates of task engagement and mental workload in vigilance, learning, and memory tasks. Aviation, Space, and Environmental Medicine, 78(5), B231-B244.
[20] Aboalayon, K. A. I., Faezipour, M., Almuhammadi, W. S., & Moslehpour, S. (2016). Sleep stage classification using EEG signal analysis: a comprehensive survey and new investigation. Entropy, 18(9), 272.
[21] Zander, T. O., & Kothe, C. (2011). Towards passive brain–computer interfaces: applying brain–computer interface technology to human–machine systems in general. Journal of Neural Engineering, 8(2), 025005.
[22] Al-Nafjan, A., Hosny, M., Al-Ohali, Y., & Al-Wabil, A. (2017). Review and classification of emotion recognition based on EEG brain-computer interface system research: a systematic review. Applied Sciences, 7(12), 1239.
[23] Lan, Z., Sourina, O., Wang, L. P., Scherer, R., & Müller-Putz, G. (2019). Domain adaptation techniques for EEG-based emotion recognition: a comparative study on two public datasets. IEEE Transactions on Cognitive and Developmental Systems, 11(1), 85-93.
[24] Stikic, M., Johnson, R. R., Tan, V., & Berka, C. (2014). EEG-based classification of positive and negative affective states. Brain-Computer Interfaces, 1(2), 99-112.
[25] Lan, Z., Liu, Y., Sourina, O., Wang, L., Scherer, R., & Müller-Putz, G. (2020). SAFE: an EEG dataset for stable affective feature selection. Advanced Engineering Informatics, 44, 101047.
[26] Bashashati, A., Fatourechi, M., Ward, R. K., & Birch, G. E. (2007). A survey of signal processing algorithms in brain–computer interfaces based on electrical brain signals. Journal of Neural Engineering, 4(2), R32.
[27] McFarland, D. J., Anderson, C. W., Muller, K. R., Schlogl, A., & Krusienski, D. J. (2006). BCI meeting 2005 - workshop on BCI signal processing: feature extraction and translation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14(2), 135-138.
[28] Lotte, F., Congedo, M., Lécuyer, A., Lamarche, F., & Arnaldi, B. (2007). A review of classification algorithms for EEG-based brain–computer interfaces. Journal of Neural Engineering, 4(2), R1.
[29] Bigdely-Shamlo, N., Mullen, T., Kothe, C., Su, K. M., & Robbins, K. A. (2015). The PREP pipeline: standardized preprocessing for large-scale EEG analysis. Frontiers in Neuroinformatics, 9, 16.
[30] Roy, Y., Banville, H., Albuquerque, I., Gramfort, A., Falk, T. H., & Faubert, J. (2019). Deep learning-based electroencephalography analysis: a systematic review. Journal of Neural Engineering, 16(5), 051001.
[31] LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
[32] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., ... & Fei-Fei, L. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115, 211-252.
[33] Peng, X., Liu, J., Huang, Y., Mao, Y., & Li, D. (2023). Classification of lower limb motor imagery based on iterative EEG source localization and feature fusion. Neural Computing and Applications, 35(19), 13711-13724.
[34] Stewart, A. X., Nuthmann, A., & Sanguinetti, G. (2014). Single-trial classification of EEG in a visual object task using ICA and machine learning. Journal of Neuroscience Methods, 228, 1-14.
[35] Bagh, N., & Reddy, M. R. (2020). Hilbert transform-based event-related patterns for motor imagery brain computer interface. Biomedical Signal Processing and Control, 62, 102020.
[36] Gaur, P., Pachori, R. B., Wang, H., & Prasad, G. (2019). An automatic subject specific intrinsic mode function selection for enhancing two-class EEG-based motor imagery-brain computer interface. IEEE Sensors Journal, 19(16), 6938-6947.
[37] Lindig-Leon, C., & Bougrain, L. (2015). A multi-label classification method for detection of combined motor imageries. In 2015 IEEE International Conference on Systems, Man, and Cybernetics (pp. 3128-3133). IEEE.
[38] Lawhern, V. J., Solon, A. J., Waytowich, N. R., Gordon, S. M., Hung, C. P., & Lance, B. J. (2018). EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces. Journal of Neural Engineering, 15(5), 056013.
[39] Luo, T. J., Zhou, C. L., & Chao, F. (2018). Exploring spatial-frequency-sequential relationships for motor imagery classification with recurrent neural network. BMC Bioinformatics, 19(1), 1-18.
[40] Ma, T., Li, H., Yang, H., Lv, X., Li, P., Liu, T., ... & Xu, P. (2017). The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing. Journal of Neuroscience Methods, 275, 80-92.
[41] Seeck, M., Koessler, L., Bast, T., Leijten, F., Michel, C., Baumgartner, C., ... & Beniczky, S. (2017). The standardized EEG electrode array of the IFCN. Clinical Neurophysiology, 128(10), 2070-2077.
[42] Aggarwal, S., & Chugh, N. (2022). Review of machine learning techniques for EEG based brain computer interface. Archives of Computational Methods in Engineering, 1-20.
[43] Beverina, F., Palmas, G., Silvoni, S., Piccione, F., & Giove, S. (2003). User adaptive BCIs: SSVEP and P300 based interfaces. PsychNology Journal, 1(4), 331-354.
[44] Handy, T. C. (Ed.). (2005). Event-related potentials: a methods handbook. MIT Press.
[45] Pfurtscheller, G., & Neuper, C. (2001). Motor imagery and direct brain-computer communication. Proceedings of the IEEE, 89(7), 1123-1134.
[46] Bastos, N. S., Adamatti, D. F., & Billa, C. Z. (2016). Discovering patterns in brain signals using decision trees. Computational Intelligence and Neuroscience, 2016.
[47] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.
[48] Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. In Computer Vision - ECCV 2014 (pp. 818-833). Springer International Publishing.
[49] Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533-536.
[50] Graves, A. (2012). Long short-term memory. In Supervised Sequence Labelling with Recurrent Neural Networks (pp. 37-45). Springer.
[51] Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
[52] Graves, A., Liwicki, M., Fernández, S., Bertolami, R., Bunke, H., & Schmidhuber, J. (2008). A novel connectionist system for unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5), 855-868.
[53] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
[54] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., ... & Houlsby, N. (2020). An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
[55] Lee, H. K., & Choi, Y. S. (2018). A convolution neural networks scheme for classification of motor imagery EEG based on wavelet time-frequency image. In 2018 International Conference on Information Networking (ICOIN) (pp. 906-909). IEEE.
[56] Zhang, J., Yan, C., & Gong, X. (2017). Deep convolutional neural network for decoding motor imagery based brain computer interface. In 2017 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC) (pp. 1-5). IEEE.
[57] Amin, S. U., Alsulaiman, M., Muhammad, G., Mekhtiche, M. A., & Hossain, M. S. (2019). Deep learning for EEG motor imagery classification based on multi-layer CNNs feature fusion. Future Generation Computer Systems, 101, 542-554.
[58] Cecotti, H., & Graser, A. (2010). Convolutional neural networks for P300 detection with application to brain-computer interfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(3), 433-445.
[59] Li, R., Wang, L., Suganthan, P. N., & Sourina, O. (2022). Sample-based data augmentation based on electroencephalogram intrinsic characteristics. IEEE Journal of Biomedical and Health Informatics, 26(10), 4996-5003.
[60] Li, F., Li, X., Wang, F., Zhang, D., Xia, Y., & He, F. (2020). A novel P300 classification algorithm based on a principal component analysis-convolutional neural network. Applied Sciences, 10(4), 1546.
[61] Ha, K. W., & Jeong, J. W. (2019). Motor imagery EEG classification using capsule networks. Sensors, 19(13), 2854.
[62] Ma, X., Qiu, S., Du, C., Xing, J., & He, H. (2018). Improving EEG-based motor imagery classification via spatial and temporal recurrent neural networks. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (pp. 1903-1906). IEEE.
[63] Zhang, R., Zong, Q., Dou, L., Zhao, X., Tang, Y., & Li, Z. (2021). Hybrid deep neural network using transfer learning for EEG motor imagery decoding. Biomedical Signal Processing and Control, 63, 102144.
[64] He, J., Zhao, L., Yang, H., Zhang, M., & Li, W. (2019). HSI-BERT: hyperspectral image classification using the bidirectional encoder representation from transformers. IEEE Transactions on Geoscience and Remote Sensing, 58(1), 165-178.
[65] Zandvoort, C. S., van Dieën, J. H., Dominici, N., & Daffertshofer, A. (2019). The human sensorimotor cortex fosters muscle synergies through cortico-synergy coherence. NeuroImage, 199, 30-37.
[66] Song, Y., Jia, X., Yang, L., & Xie, L. (2021). Transformer-based spatial-temporal feature learning for EEG decoding. arXiv preprint arXiv:2106.11170.
[67] Deng, Z., Li, C., Song, R., Liu, X., Qian, R., & Chen, X. (2023). EEG-based seizure prediction via hybrid vision transformer and data uncertainty learning. Engineering Applications of Artificial Intelligence, 123, 106401.
[68] Brunner, C., Leeb, R., Müller-Putz, G., Schlögl, A., & Pfurtscheller, G. (2008). BCI Competition 2008 - Graz data set A. Institute for Knowledge Discovery (Laboratory of Brain-Computer Interfaces), Graz University of Technology, 16, 1-6.
[69] Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., ... & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, 38(11), 5391-5420.
[70] Sakhavi, S., Guan, C., & Yan, S. (2018). Learning temporal information for brain-computer interface using convolutional neural networks. IEEE Transactions on Neural Networks and Learning Systems, 29(11), 5619-5629.
[71] Zhang, R., Zong, Q., Dou, L., & Zhao, X. (2019). A novel hybrid deep learning scheme for four-class motor imagery classification. Journal of Neural Engineering, 16(6), 066004.
[72] Yang, L., Song, Y., Ma, K., & Xie, L. (2021). Motor imagery EEG decoding method based on a discriminative feature learning strategy. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 29, 368-379.
[73] Zhao, Y., Li, C., Liu, X., Qian, R., Song, R., & Chen, X. (2022). Patient-specific seizure prediction via adder network and supervised contrastive learning. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 30, 1536-1547.
[74] Liu, W., & Zeng, Y. (2022). Motor imagery tasks EEG signals classification using ResNet with multi-time-frequency representation. In 2022 7th International Conference on Intelligent Computing and Signal Processing (ICSP) (pp. 2026-2029). IEEE.
[75] Kant, P., Laskar, S. H., Hazarika, J., & Mahamune, R. (2020). CWT based transfer learning for motor imagery classification for brain computer interfaces. Journal of Neuroscience Methods, 345, 108886.
[76] Xu, G., Shen, X., Chen, S., Zong, Y., Zhang, C., Yue, H., ... & Che, W. (2019). A deep transfer convolutional neural network framework for EEG signal classification. IEEE Access, 7, 112767-112776.
[77] Almogbel, M. A., Dang, A. H., & Kameyama, W. (2018). EEG-signals based cognitive workload detection of vehicle driver using deep learning. In 2018 20th International Conference on Advanced Communication Technology (ICACT) (pp. 256-259). IEEE.
[78] Aznan, N. K. N., Bonner, S., Connolly, J., Al Moubayed, N., & Breckon, T. (2018). On the classification of SSVEP-based dry-EEG signals via convolutional neural networks. In 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 3726-3731). IEEE.