Securing EEG-based Brain-Computer Interface Systems from Data Poisoning Attacks
DOI: https://doi.org/10.51519/journalisi.v7i3.1195
Keywords: Adversarial Perturbation, Backdoors, Brain-Computer Interface, Brain Signals, Electroencephalogram, Machine Learning Models
Abstract
Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) are a widely used access technology for human-computer interaction. They enable direct communication between the human brain and external devices, without the need for actuators such as hands and legs. A BCI system acquires brain signals from an EEG device and uses machine learning (ML) algorithms to analyze and interpret those signals as actionable commands. However, EEG-based BCI systems are vulnerable to data poisoning attacks, which compromise the accuracy and security of the system and the safety of its users. The objective of this paper is to protect BCI systems against backdoor data poisoning attacks for reliable operation. A backdoor detect-and-clean mechanism, code-named Bkd-DETCLEAN, is proposed to secure EEG-based BCI systems against data poisoning (backdoor) attacks using a Random Forest classifier. Two models were designed, trained, and validated on clean and poisoned datasets, respectively. Experiments on two benchmark EEG datasets show that the solution achieves a detection accuracy of 98.5%, identifying poisoned samples with a false positive rate just below 5%. Repeated data-cleaning iterations restored the poisoned training set, improving overall system accuracy from 78.9% to 93%. The proposed model sustained high detection and cleaning efficiency across different poisoning rates, underscoring the effectiveness of the ML-driven approach in ensuring that brain-signal integrity is not compromised. The mechanism is also applicable in other areas, including protecting healthcare and medical data, securing fraud detection models in financial systems, ensuring the integrity of sensor data in industrial control systems, and guarding against user-data manipulation in recommender systems.
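The detect-and-clean workflow described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual Bkd-DETCLEAN implementation: the synthetic feature vectors, the fixed additive trigger pattern, and the availability of labelled poisoned examples for training the Random Forest detector are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG feature vectors: 500 clean samples and 50
# poisoned samples carrying a fixed additive "trigger" on the first 5 features.
n_clean, n_poisoned, n_features = 500, 50, 32
clean = rng.normal(0.0, 1.0, size=(n_clean, n_features))
trigger = np.zeros(n_features)
trigger[:5] = 3.0  # hypothetical backdoor trigger pattern
poisoned = rng.normal(0.0, 1.0, size=(n_poisoned, n_features)) + trigger

X = np.vstack([clean, poisoned])
y = np.concatenate([np.zeros(n_clean), np.ones(n_poisoned)])  # 1 = poisoned

# Detector: a Random Forest trained to separate clean samples from
# trigger-bearing ones (assuming labelled examples of both are available).
detector = RandomForestClassifier(n_estimators=100, random_state=0)
detector.fit(X, y)

# "Clean" an incoming training set by dropping every sample the detector
# flags as poisoned; in the paper this step is repeated over iterations.
incoming = np.vstack([
    rng.normal(0.0, 1.0, size=(100, n_features)),
    rng.normal(0.0, 1.0, size=(10, n_features)) + trigger,
])
flags = detector.predict(incoming)
cleaned = incoming[flags == 0]
print(f"kept {len(cleaned)} of {len(incoming)} samples")
```

Because the trigger shifts five features by three standard deviations, the forest separates the two populations easily here; real EEG triggers are subtler, which is why the paper reports iterative cleaning rather than a single pass.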
Authors Declaration
- The Authors certify that they have read, understood, and agreed to the Journal of Information Systems and Informatics (JournalISI) submission guidelines, policies, and submission declaration. The submission has been prepared using the provided template.
- The Authors certify that all authors have approved the publication of this manuscript and that there is no conflict of interest.
- The Authors confirm that the manuscript is their original work, has not received prior publication, is not under consideration for publication elsewhere, and has not been previously published.
- The Authors confirm that all authors listed on the title page have contributed significantly to the work, have read the manuscript, attest to the validity and legitimacy of the data and its interpretation, and agree to its submission.
- The Authors confirm that the manuscript is not copied from or plagiarized from any other published work.
- The Authors declare that the manuscript will not be submitted for publication in any other journal or magazine until a decision is made by the journal editors.
- If the manuscript is finally accepted for publication, the Authors confirm that they will either proceed with publication immediately or withdraw the manuscript in accordance with the journal’s withdrawal policies.
- The Authors agree that, upon publication of the manuscript in this journal, they transfer copyright or assign exclusive rights to the publisher, including commercial rights.