activity. In this work, three emotional states (positive,
negative, and neutral) are classified. Recognizing that
brain activity is individual-specific and that emotional
responses vary across brain regions among individuals
is crucial for understanding how emotions can be
identified through neural signals. This study examines
the effectiveness of machine learning (ML) models in
classifying human emotional states using EEG data
(Chatterjee and Byun, 2022).
Determining what specific brain activity patterns
correspond to momentary mental experiences is a
significant challenge in applications involving brain-
machine interfaces. The sheer amount of information
necessary to represent the complex, nonlinear, and
unpredictable nature of EEG signals accurately is one
of the most critical challenges in EEG signal
classification. In this study, a variety of ML models
were employed to categorize emotional states: GRU
(Gated Recurrent Unit), LSTM (Long Short-Term
Memory), XGBoost (Extreme Gradient Boosting),
RF (Random Forest), DNN (Deep Neural Network),
SVM (Support Vector Machine), and RNN
(Recurrent Neural Network).
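As a minimal sketch of how the classical models in this list are trained, the following trains RF and SVM classifiers on placeholder feature vectors; the feature matrix, its dimensions, and the label encoding are synthetic stand-ins, not the study's actual data or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))       # 300 samples, 64 EEG-derived features (synthetic)
y = rng.integers(0, 3, size=300)     # 0 = negative, 1 = neutral, 2 = positive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit two of the listed models and report held-out accuracy
for name, model in [("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
                    ("SVM", SVC(kernel="rbf"))]:
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```

The deep models (GRU, LSTM, RNN, DNN) follow the same fit/evaluate pattern but operate on sequential windows of the signal rather than flat feature vectors.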
2 LITERATURE REVIEW
This literature review examines the field of EEG-based
emotion recognition, covering signal processing,
feature extraction, classification techniques, and areas
of application. It highlights the progress made in
EEG-based emotion recognition while also emphasizing
emerging challenges and potential directions within
this interdisciplinary area. Researchers have developed
new techniques to make EEG-based emotion recognition
systems more sensitive, applicable, and usable.
In recent years, research on detecting emotions
using EEG signals has gained significant momentum.
In particular, the availability of low-cost EEG devices
and the sharing of open data sets among researchers
have accelerated work on this subject. In this context,
the “EEG Brainwave Dataset: Feeling Emotions”
published on Kaggle, which we also used in this
study, has been one of the sources frequently referred
to in research. The relevant dataset consists of four-
channel (TP9, AF7, AF8, TP10) EEG signals
obtained with the Muse EEG device in positive,
neutral, and negative emotional states.
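Features for such datasets are typically derived from per-band spectral power of each channel. The sketch below extracts band powers from a four-channel segment with Welch's method; the 256 Hz sampling rate (standard for the Muse headband) and the synthetic signal are illustrative assumptions, not details taken from the dataset description above.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # Muse headband sampling rate (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(segment, fs=FS):
    """Mean spectral power per frequency band for one channel (1-D array)."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Four channels (TP9, AF7, AF8, TP10), one second of synthetic data each
rng = np.random.default_rng(0)
eeg = rng.normal(size=(4, FS))
features = [band_powers(ch) for ch in eeg]
```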
Earlier studies on EEG-based emotion detection
focused on determining whether emotional information
could be extracted from brain waves. Numerous studies have
examined the link between emotional experiences
and brain activity, with a particular focus on the
frontal regions. Within this context, frontal alpha
asymmetry has been explored as it reflects variations
in alpha brainwave activity of the frontal cortex,
which are connected to different emotional states. In
their work, Allen and Reznik identified frontal EEG
asymmetry as a potential marker for vulnerability to
depression. Although frontal asymmetry may help
detect individuals at greater risk for depression,
large-scale longitudinal studies are still required to
confirm this finding (Allen and Reznik, 2015).
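By convention, frontal alpha asymmetry is scored as the log alpha power of a right-hemisphere frontal channel minus that of its left counterpart. The sketch below follows that convention; using AF8 (right) and AF7 (left) as the frontal pair is an illustrative choice based on the channels available in this dataset, not the montage of the cited studies.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(x, fs=256):
    """Mean power in the 8-13 Hz alpha band via Welch's method."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)
    return psd[(freqs >= 8) & (freqs <= 13)].mean()

def faa(left_channel, right_channel, fs=256):
    """Frontal alpha asymmetry: ln(right alpha) - ln(left alpha)."""
    return np.log(alpha_power(right_channel, fs)) - np.log(alpha_power(left_channel, fs))

rng = np.random.default_rng(1)
score = faa(rng.normal(size=1024), rng.normal(size=1024))  # e.g. AF7 vs AF8
```

A positive score indicates relatively lower left-frontal alpha (since alpha is inversely related to cortical activity), the pattern the cited work associates with approach-related affect.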
Frontal alpha asymmetry neurofeedback was
investigated by Mennella et al. as a strategy for
mitigating symptoms of anxiety and negative affect.
In their study, neurofeedback training was employed
to examine discrete changes in positive and negative
affect, anxiety, and depression, as well as variations
in alpha power across the left and right hemispheres.
These pioneering studies established a scientific
foundation for subsequent research into the neural
correlates of emotions using EEG (Mennella, Patron
and Palomba, 2017).
From the acquired EEG signals, J. J. Bird et al.
extracted statistical features across the alpha, beta,
theta, delta, and gamma bands, followed by feature
selection using techniques including OneR,
Information Gain, Bayesian Network, and
Symmetrical Uncertainty. The original set of 2,548
features was reduced to 63 features selected by
Information Gain, and ensemble classifiers such as
Random Forest trained on these features achieved
approximately 97.89% accuracy, while a Deep Neural
Network (DNN) achieved 94.89% (Bird, Faria,
Manso, Ekárt and Buckingham, 2019).
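The selection-then-classification pipeline described above can be sketched as follows, approximating Information Gain ranking with scikit-learn's mutual information estimator; the 2,548-column matrix here is synthetic and stands in for the real extracted features.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2548))   # placeholder for the 2,548 extracted features
y = rng.integers(0, 3, size=200)   # three emotion classes

pipe = make_pipeline(
    SelectKBest(mutual_info_classif, k=63),   # keep the 63 top-ranked features
    RandomForestClassifier(n_estimators=100, random_state=0),
)
pipe.fit(X, y)
```

Fitting the selector and the forest inside one pipeline keeps the feature ranking from leaking information out of any held-out evaluation fold.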
Joshi and Joshi evaluated the performance of
RNN and KNN (K-Nearest Neighbour) algorithms in
classifying human emotions using EEG signals. In the
study, EEG signals corresponding to positive, neutral,
and negative emotions were analyzed. During the
preprocessing stage, channel selection was
performed, and discrete wavelet transform (DWT)
was used for feature extraction. The obtained features
were fed as input to the RNN and KNN algorithms.
The experiments showed that the RNN algorithm
achieved 94.84% accuracy, while the KNN algorithm
achieved 93.43% accuracy. These results
demonstrate that both algorithms performed well in
the EEG-based emotion recognition task. In
particular, the RNN's ability to model dependencies