2 LITERATURE SURVEY
Aayush Devgan et al. (2023) presented a context-aware emotion recognition system that utilizes the BERT transformer model to improve the precision of emotion detection in textual data. Trained on an extensive emotion-labeled dataset, the model learns to recognize intricate, context-sensitive emotions. The system outperforms conventional approaches and standard transformer baselines, as confirmed on a benchmark dataset. Applications include emotion-sensitive chatbots and mental health monitoring systems. Nonetheless, a constraint is the model's reliance on substantial labeled datasets and computing resources, which may impede its adaptability to low-resource languages or domains (Devgan, 2023).
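Schematically, such a system reduces emotion detection to sequence classification: an encoder maps an utterance to a vector, and a softmax head maps that vector to a distribution over emotion labels. The sketch below is illustrative only; the `encode` stand-in, the label set, and the randomly initialized head are hypothetical placeholders, not the cited authors' model.

```python
import numpy as np

EMOTIONS = ["joy", "sadness", "anger", "fear"]  # hypothetical label set

def encode(text: str, dim: int = 8) -> np.ndarray:
    """Stand-in for a BERT encoder: pseudo-embedding derived from a hash."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(text: str, W: np.ndarray, b: np.ndarray) -> str:
    """Linear classification head over the encoded utterance."""
    probs = softmax(W @ encode(text) + b)
    return EMOTIONS[int(np.argmax(probs))]

# Randomly initialized head for illustration; in practice W and b
# would be fit to the emotion-labeled corpus during fine-tuning.
rng = np.random.default_rng(0)
W = rng.standard_normal((len(EMOTIONS), 8))
b = np.zeros(len(EMOTIONS))
label = classify("I can't believe this happened!", W, b)
```

Fine-tuning the encoder jointly with this head, rather than freezing it, is what lets a BERT-style model pick up context-sensitive emotional cues.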
Cangqing Wang et al. (2024) introduced a Context-Aware BERT (CA-BERT) model that improves automated chat systems by refining the detection of when supplementary context is necessary in multi-turn conversations. By fine-tuning BERT with an innovative training regimen on a chat-discourse dataset, CA-BERT classifies context necessity with greater accuracy and speed than baseline models. The method markedly reduces training time and resource consumption, facilitating deployment in real-time scenarios. This integration improves chatbot responsiveness, enhancing user experience and interaction quality. CA-BERT's disadvantage lies in its dependence on high-quality, annotated multi-turn datasets, which may restrict its usefulness in underrepresented domains or languages (Wang, Liu, et al. 2023).
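Context-necessity detection of this kind is naturally framed as classification over a window of recent turns. BERT-family models conventionally receive such input as segments joined by special tokens; the helper below sketches that input construction (the turn window size is an assumption, not a detail from the cited paper):

```python
def build_input(history: list[str], query: str, max_turns: int = 3) -> str:
    """Concatenate the most recent dialogue turns with BERT-style
    [CLS]/[SEP] separators, ready for a fine-tuned classifier that
    labels the query as needing (or not needing) added context."""
    turns = history[-max_turns:] + [query]
    return "[CLS] " + " [SEP] ".join(turns) + " [SEP]"

text = build_input(
    ["Hi", "Hello, how can I help?"],
    "What about my last order?",
)
```

A binary head over the `[CLS]` representation of this string would then decide whether the chatbot should fetch additional conversation history before answering.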
Sadam Hussain Noorani et al. (2024) introduced a sentiment-aware chatbot built on a transformer-based architecture with a self-attention mechanism. The model utilizes the pre-trained CTRL framework, enabling adaptation to diverse models without modifications to the architecture. Trained on the DailyDialogues dataset, the chatbot exhibits enhanced content quality and emotional perception. Experimental findings indicate that it surpasses existing baselines in producing human-like, contextually aware responses. The model's efficacy relies on the quality and diversity of the training data, so fine-tuning may be required for particular domains (Noorani, Khan, et al. 2023).
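The self-attention mechanism referenced here is, in its standard scaled dot-product form, a compact computation in which every token position attends over all others. The sketch below (plain NumPy, toy dimensions, not the cited authors' implementation) shows the core operation:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors.

    X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: projection matrices.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise attention logits
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                              # context-mixed vectors

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)  # shape (seq_len, d_k)
```

Stacking such layers with multiple heads is what lets transformer chatbots weigh earlier turns and sentiment-bearing words when generating a reply.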
Aamir Khan Jadoon et al. (2024) presented a method that improves pre-trained large language models (LLMs) for data analysis by efficiently extracting context from desktop environments while preserving data privacy. The system prioritizes recently and frequently used applications, matching user queries with the data structure to identify appropriate tools and generate code that reflects user intent. Evaluated with 18 participants in practical settings, it attained a 93.0% success rate on seven data-centric tasks, surpassing traditional benchmarks. This method greatly enhances accessibility, user satisfaction, and comprehension in data analytics; however, its efficacy relies on precise context extraction and tool compatibility (Jadoon, Jadoon, et al. 2024).
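One plausible way to combine "recent" and "often utilized" into a single ranking is frequency weighted by an exponential recency decay. The scoring rule below is a hypothetical sketch of that idea, not the cited paper's actual prioritization scheme; the half-life parameter and field names are assumptions.

```python
def prioritize(apps, now, half_life=3600.0):
    """Rank applications by usage count discounted by recency:
    score = uses * 0.5 ** (seconds_since_last_use / half_life)."""
    def score(a):
        age = now - a["last_used"]
        return a["uses"] * 0.5 ** (age / half_life)
    return sorted(apps, key=score, reverse=True)

NOW = 1_000_000.0
apps = [
    {"name": "editor", "uses": 10, "last_used": NOW - 7200},  # heavy, stale
    {"name": "sheets", "uses": 4,  "last_used": NOW - 60},    # light, fresh
]
ranked = prioritize(apps, now=NOW)
```

With a one-hour half-life, the recently touched spreadsheet outranks the heavily used but two-hour-stale editor, which matches the paper's stated preference for applications that are both recent and frequently used.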
Deepak Sharma et al. (2024) examined progress
in Natural Language Processing (NLP) aimed at
improving conversational AI systems, emphasizing
Transformers, RNNs, LSTMs, and BERT for
producing coherent and contextually pertinent
responses. Experimental findings indicate that
Transformers attain an accuracy of 92%, surpassing
BERT (89%), RNNs (83%), and LSTMs (81%),
while user feedback enhances system performance by
15%. The research emphasizes the necessity for
reliable, context-sensitive conversational bots and the
incorporation of varied language inputs to
accommodate wider audiences. Future endeavors
focus on enhancing explainability and flexibility to
facilitate more intuitive human-machine interactions
(Sharma, Sundravadivelu, et al. 2024).
Arun Babu et al. (2024) examined the convergence of Artificial Intelligence (AI), the Internet of Things (IoT), and Deep Learning (DL), which is transforming healthcare by enabling tailored medical treatments and improving service quality. This work introduces a BERT-based medical chatbot aimed at addressing the shortcomings of conventional systems, including inadequate comprehension of medical terminology and a lack of tailored responses. Utilizing Bidirectional Encoder Representations from Transformers (BERT), the chatbot attains 98% accuracy, 97% precision, 97% AUC-ROC, 96% recall, and an F1 score of 98%, underscoring its strong predictive capability and dependability in addressing medical inquiries. This approach supports accurate, thorough, and accessible healthcare communication, showing considerable potential for enhancing contemporary healthcare services (Babu and Boddu, 2024).
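For reference, the accuracy, precision, recall, and F1 figures reported throughout this survey are standard functions of confusion-matrix counts. A minimal computation, using made-up counts unrelated to any cited study:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted positives, how many real
    recall = tp / (tp + fn)             # of real positives, how many found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Illustrative counts only (not from the cited study)
acc, p, r, f1 = classification_metrics(tp=90, fp=5, fn=10, tn=95)
```

Because F1 is the harmonic mean, it always lies between the smaller and larger of precision and recall, which is worth keeping in mind when comparing reported metric sets.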
Saadat Izadi et al. (2024) examined progress in chatbot technology, emphasizing error correction to improve customer satisfaction and trust. Widely used in sectors such as customer service, healthcare, and education, chatbots frequently encounter challenges such as misinterpretations and mistakes. An analysis of several corrective tactics,
including feedback loops, human-in-the-loop