ATFSC: Audio-Text Fusion for Sentiment Classification

Authors: Aicha Nouisser 1 ; Nouha Khediri 2 ; Monji Kherallah 3 and Faiza Charfi 3

Affiliations: 1 National School of Electronics and Telecommunications of Sfax, Tunisia ; 2 Faculty of Computing and Information Technology, Northern Border University, Rafha, K.S.A. ; 3 Faculty of Sciences of Sfax, University of Sfax, Tunisia

Keyword(s): Sentiment Analysis, Bimodality, Transformer, BERT Model, Audio and Text, CNN.

Abstract: The diversity of human expressions and the complexity of emotions are specific challenges in sentiment analysis from text and speech data. Models must consider not only the text but also the nuances of intonation and the emotions conveyed by the voice. To address these challenges, we created a bimodal sentiment analysis model named ATFSC that classifies emotions based on textual and audio information. It fuses textual and audio information from conversations, providing a more robust analysis of sentiments, whether negative, neutral, or positive. Key features include transfer learning with a pre-trained BERT model for text processing, a CNN-based audio feature extractor for audio processing, and flexible preprocessing that supports different dataset formats. An attention mechanism performs the bimodal fusion of audio and text features, which leads to a notable performance improvement. As a result, we obtained accuracies of 64.61%, 69%, 72%, and 81.36% on the IEMOCAP, SLUE, MELD, and CMU-MOSI datasets, respectively.
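The following is a minimal PyTorch sketch, not the authors' implementation, of the kind of architecture the abstract describes: a pre-trained BERT text encoder, a small CNN audio feature extractor over log-mel spectrograms, and an attention layer that fuses the two streams before a three-way sentiment classifier. All layer sizes, the multi-head cross-attention variant, the mel-spectrogram input, and the pooling choices are assumptions, not details taken from the paper.

import torch
import torch.nn as nn
from transformers import BertModel

class ATFSCSketch(nn.Module):
    def __init__(self, n_mels=64, hidden=768, n_classes=3):
        super().__init__()
        # Text branch: pre-trained BERT (transfer learning), as described in the abstract.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Audio branch: a small CNN over log-mel spectrogram frames (assumed input features).
        self.audio_cnn = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, hidden, kernel_size=5, padding=2), nn.ReLU(),
        )
        # Fusion: text tokens attend to audio frames (one possible attention-based fusion).
        self.fusion = nn.MultiheadAttention(embed_dim=hidden, num_heads=8, batch_first=True)
        # Classifier over the concatenated pooled text and fused representations.
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, input_ids, attention_mask, mel_spec):
        # input_ids, attention_mask: BERT-tokenized text; mel_spec: (batch, n_mels, frames)
        text = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        audio = self.audio_cnn(mel_spec).transpose(1, 2)   # (batch, frames, hidden)
        fused, _ = self.fusion(query=text, key=audio, value=audio)
        # Mean-pool both streams and predict negative / neutral / positive.
        pooled = torch.cat([text.mean(dim=1), fused.mean(dim=1)], dim=-1)
        return self.classifier(pooled)

In practice a tokenizer such as BertTokenizerFast and a log-mel front end such as torchaudio.transforms.MelSpectrogram would supply the inputs; both are assumptions about preprocessing, which the abstract only describes as flexible.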

CC BY-NC-ND 4.0

Paper citation in several formats:
Nouisser, A., Khediri, N., Kherallah, M. and Charfi, F. (2025). ATFSC: Audio-Text Fusion for Sentiment Classification. In Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART; ISBN 978-989-758-737-5; ISSN 2184-433X, SciTePress, pages 750-757. DOI: 10.5220/0013178300003890

@conference{icaart25,
author={Aicha Nouisser and Nouha Khediri and Monji Kherallah and Faiza Charfi},
title={ATFSC: Audio-Text Fusion for Sentiment Classification},
booktitle={Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART},
year={2025},
pages={750-757},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013178300003890},
isbn={978-989-758-737-5},
issn={2184-433X},
}

TY - CONF
JO - Proceedings of the 17th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART
TI - ATFSC: Audio-Text Fusion for Sentiment Classification
SN - 978-989-758-737-5
IS - 2184-433X
AU - Nouisser, A.
AU - Khediri, N.
AU - Kherallah, M.
AU - Charfi, F.
PY - 2025
SP - 750
EP - 757
DO - 10.5220/0013178300003890
PB - SciTePress
ER -