frequently involves handling non-independent and identically distributed (non-IID) data efficiently to support reliable model training. Datasets must therefore be partitioned and distributed among numerous clients in a way that preserves effective training; a minimal partitioning sketch follows.
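One common way to simulate such a non-IID split is to partition the label indices across clients with a Dirichlet prior, as in the sketch below. The client count, the concentration parameter alpha, and the toy labels are illustrative assumptions, not settings taken from this work.

import numpy as np

def dirichlet_partition(labels, num_clients=5, alpha=0.3, seed=0):
    # Split sample indices across clients with label skew.
    # Smaller alpha yields a more strongly non-IID partition.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cut_points = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client, part in zip(client_indices, np.split(cls_idx, cut_points)):
            client.extend(part.tolist())
    return client_indices

# Example: 1000 samples over 7 emotion classes, split across 5 clients.
toy_labels = np.random.default_rng(1).integers(0, 7, size=1000)
parts = dirichlet_partition(toy_labels)
print([len(p) for p in parts])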
3 CONCLUSIONS
Choosing an appropriate dataset for ED is essential to
developing dependable, widely applicable models
that preserve anonymity. Datasets like ISEAR,
IEMOCAP, RAVDESS, CREMA-D, EmoDB, CK+,
FER-2013, MMI, JAFFE, AffectNet, SemEval-2018,
and SEMAINE provide a great variety of modalities:
text, audio, and facial expressions. These datasets
form the basis for an in-depth study of emotion
recognition. For example, IEMOCAP and RAVDESS supply insightful audio samples of highly emotional speech, whereas the ISEAR dataset is rich in textual accounts of emotional responses. Other datasets, such as FER-2013, CK+, and JAFFE, focus on the facial expressions relevant to visual ED. Because FL is innately distributed, these datasets can be used by several clients without risking user privacy, while promoting the data diversity that increases model performance; the averaging sketch below illustrates this distributed setup. Comprehensive inclusion of multimodal datasets ensures a well-rounded approach to emotion identification and improves the accuracy with which the complexity of human emotions is captured.
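As a minimal illustration of that setup (the function, layer shapes, and client sizes here are illustrative assumptions, not the implementation of any cited work), federated averaging combines locally trained model parameters so that raw audio, text, or image samples never leave a client:

import numpy as np

def federated_average(client_weights, client_sizes):
    # FedAvg: average per-layer parameters, weighted by local dataset size.
    # Only parameters are exchanged; raw emotion data stays on each client.
    total = sum(client_sizes)
    coeffs = [n / total for n in client_sizes]
    num_layers = len(client_weights[0])
    return [
        sum(c * w[i] for c, w in zip(coeffs, client_weights))
        for i in range(num_layers)
    ]

# Example: three clients holding a toy two-layer model.
clients = [[np.full((4, 4), k), np.full(4, k)] for k in (1.0, 2.0, 3.0)]
global_model = federated_average(clients, client_sizes=[100, 300, 600])
print(global_model[1])  # weighted average of the bias vectors -> 2.5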
However, class imbalances must be addressed, and the datasets should represent realistic situations. Through thoughtful selection and class balancing of these datasets (one simple local balancing approach is sketched at the end of this section), FL can significantly advance the science of ED and enable more ethical, more personalized applications in healthcare and other fields. Future applications of emotion detection with FL promise to enrich human-machine interactions while protecting user privacy, with progress expected in multimodal integration, privacy, scalability, cultural diversity, and real-time applications.
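Class balancing on each client can be approached in several ways; the sketch below is one illustrative option (an assumption for demonstration, not the method of a specific cited work) that oversamples minority emotion classes locally until all classes reach parity:

import numpy as np

def balance_by_oversampling(labels, seed=0):
    # Resample indices with replacement so every class matches the largest one.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    balanced = []
    for cls in classes:
        idx = np.where(labels == cls)[0]
        balanced.extend(rng.choice(idx, size=target, replace=True).tolist())
    balanced = np.array(balanced)
    rng.shuffle(balanced)
    return balanced

# Example: a skewed local label set (many "neutral", few "fear" samples).
toy_labels = np.array([0] * 500 + [1] * 60 + [2] * 15)
idx = balance_by_oversampling(toy_labels)
print(np.unique(toy_labels[idx], return_counts=True))  # equal counts per class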
REFERENCES
Adoma, A.F., Henry, N.M., Chen, W. and Andre, N.R., Recognizing emotions from texts using a BERT-based approach. In 2020 IEEE 17th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), (2020) 62-66.
AffectNet Dataset: https://www.kaggle.com/datasets/ngothienphu/affectnet
Alotaibi, F.M., Classifying text-based emotions using logistic regression. VAWKUM Transactions on Computer Sciences, 7(1), (2019) 31-37.
Asghar, M.Z., Subhan, F., Imran, M., Kundi, F.M., Shamshirband, S., Mosavi, A., Csiba, P. and Varkonyi-Koczy, A.R., Performance evaluation of supervised machine learning techniques for efficient detection of emotions from online content. arXiv preprint arXiv:1908.01587 (2019).
CK+ Dataset: https://www.kaggle.com/datasets/davilsena/ckdataset
CREMA-D Dataset: https://www.kaggle.com/datasets/ejlok1/cremad
EmoDB Dataset: https://www.kaggle.com/datasets/piyushagni5/berlin-database-of-emotional-speech-emodb
FER-2013 Dataset: https://www.kaggle.com/datasets/msambare/fer2013/code
Hussain, G.K.J. and Manoj, G., Federated learning: A survey of a new approach to machine learning. In 2022 First International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT), Trichy, India, (2022) 1-8.
IEMOCAP Dataset: https://www.kaggle.com/datasets/samuelsamsudinng/iemocap-emotion-speech-database
JAFFE Dataset: https://www.kaggle.com/code/mpwolke/japanese-female-facial-expression-tiff-images
Jain, S. and Asawa, K., Modeling of emotion elicitation conditions for a cognitive-emotive architecture. Cognitive Systems Research, 55, (2019) 60-76.
MMI Dataset: https://kaggle.com/datasets/zaber666/meld-dataset
RAVDESS Dataset: https://www.kaggle.com/datasets/uwrfkaggler/ravdess-emotional-speech-audio