learning and machine learning are used to identify
rumours and false news.
The advent of online and social media has enabled
the incorporation of false information with real or
verified information. This situation can be utilized to
influence people’s opinions, thus impacting their
perceptions, thoughts, and behavior. As a result,
disseminating links, messages, photos, videos, and
audio files over several social media platforms has
become very simple for those who propagate fake
news. People who spread these fakes usually have a
political or social agenda. Therefore, the development
of an efficient system to detect misinformation is of
utmost importance (Kaliyar, R. K., et al., 2021). This research presents an approach to detecting false news stories using deep learning. The methodology starts from an input dataset collected from the microblogging service Twitter. The raw input data first undergoes data preparation, whose main components are stop-word removal, stemming, and tokenization. Stop words are removed with the NLTK library, stemming uses Porter's Algorithm, and tokenization is performed with an N-gram model. The classification model is built from LSTM, CNN, and AdaBoost algorithms. Results show that LSTM achieves higher accuracy, specificity, and sensitivity than the CNN and AdaBoost methods.
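As a rough sketch of this pre-processing pipeline, the following self-contained Python example applies stop-word removal, stemming, and N-gram tokenization in sequence. The tiny stop-word set and suffix stripper are simplified stand-ins for NLTK's stop-word list and the full Porter stemmer, and the sample sentence is purely illustrative:

```python
# Simplified stand-in for NLTK's English stop-word list
STOP_WORDS = {"the", "is", "a", "an", "of", "to", "in", "and"}

def stem(word):
    # Trivial suffix stripping; a stand-in for the full Porter stemmer
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text, n=2):
    """Stop-word removal, stemming, then N-gram tokenization (n=2: bigrams)."""
    tokens = [stem(t) for t in text.lower().split() if t not in STOP_WORDS]
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(preprocess("the senator is spreading fake claims", n=2))
# → ['senator spread', 'spread fake', 'fake claim']
```

In the actual pipeline, unigrams, bigrams, and trigrams would be produced by varying `n`.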
2 LITERATURE REVIEW
In order to identify false news, researchers use n-gram
analysis and TF-IDF for feature extraction. After that,
they use decision trees, SGD, Linear SVM, Logistic
Regression, SVM, and KNN as machine learning
classifiers (Lahby, M., et al., 2022). Pennycook and Rand (Pennycook, G., & Rand, D. G., 2021) developed an SVM-based satire detection model with 90% accuracy. Bahad et al. showed RNNs outperform
manual rumor detection, while Ruchansky et al.
introduced the CSI model, integrating content, user
comments, and sources for improved accuracy. For
fake images, Hsu et al. developed CFFN, using GANs
and DenseNet to classify manipulated images. Bird et
al. developed NLTK, a comprehensive Python toolkit
that facilitates various NLP tasks such as
tokenization, parsing, stemming, and classification,
making text analysis more accessible and efficient
(Bird, S., Klein, E., & Loper, E., 2009). Huan et al.
suggested a deep learning strategy for text
classification that effectively captures both sentiment
and semantic context, improving accuracy in
emotionally charged text analysis.
Umer et al. demonstrated that combining convolutional neural networks (CNNs) with FastText embeddings enhances text classification by efficiently extracting contextual and syntactic features (Umer, M., et al., 2023). Optimized
deep learning methods for spotting rumors and
misleading information in online social networks
were presented by Zamani et al., leveraging advanced neural architectures to enhance misinformation identification and content credibility assessment (Abu Sarwar Zamani, et al., 2025).
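Several of the surveyed systems rely on word n-gram features weighted by TF-IDF before classification. As a minimal, self-contained sketch of that feature-extraction step (the two-document corpus below is purely illustrative):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return the list of word n-grams for a token sequence."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def tfidf(corpus, n=1):
    """Compute TF-IDF weights for word n-grams over a small corpus."""
    docs = [ngrams(text.lower().split(), n) for text in corpus]
    df = Counter()                      # document frequency per n-gram
    for doc in docs:
        df.update(set(doc))
    N = len(docs)
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({g: (tf[g] / len(doc)) * math.log(N / df[g])
                        for g in tf})
    return weights

corpus = ["breaking news shocking claim",
          "official statement confirms claim"]
w = tfidf(corpus, n=1)
# "claim" occurs in both documents, so its IDF, and hence its weight, is 0
```

The resulting weight dictionaries would then feed classifiers such as Linear SVM, Logistic Regression, or KNN; production systems typically use a library implementation (e.g. scikit-learn's `TfidfVectorizer`) rather than this hand-rolled variant.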
3 RESEARCH METHODOLOGY
3.1 Deep Learning for Fake News
Detection
This technique collects data from Twitter and applies deep learning to identify false news.
Pre-processing involves stop word removal (NLTK),
stemming (Porter’s Algorithm), and tokenization (N-
gram model). Tokenization applies unigrams,
bigrams, and trigrams to structure text. The model
integrates LSTM, CNN, and AdaBoost for
classification. LSTM, an RNN variant, is effective in
pattern recognition due to its input (I/P), forget (f),
and output (O/P) gates, along with a memory cell.
These gates regulate the flow of information, preserving relevant sequence information and mitigating the vanishing-gradient problem that affects standard RNNs, thereby improving accuracy. Figures 2 and 3 illustrate the LSTM network, providing a schematic view of its composition. At each time step the network takes an embedding x_i as input and computes its output h_i by combining the previous output h_(i-1) and the current embedding x_i with the previous cell state c_(i-1). Information can be added to or removed from the cell state, as regulated by the gates.
Figure 2: LSTM architecture.
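To make the gate mechanics concrete, the following NumPy sketch computes one LSTM time step. The weight shapes, toy dimensions, and random inputs are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: input, forget, and output gates regulate the cell state."""
    z = W @ np.concatenate([h_prev, x]) + b   # joint projection for all gates
    H = len(h_prev)
    i = sigmoid(z[:H])            # input (I/P) gate: what enters the cell
    f = sigmoid(z[H:2 * H])       # forget (f) gate: what the cell discards
    o = sigmoid(z[2 * H:3 * H])   # output (O/P) gate: what the cell emits
    g = np.tanh(z[3 * H:])        # candidate update from h_prev and x
    c = f * c_prev + i * g        # new cell state c_i
    h = o * np.tanh(c)            # new hidden output h_i
    return h, c

# Toy dimensions: hidden size 4, embedding size 3 (illustrative only)
rng = np.random.default_rng(0)
H, D = 4, 3
W = 0.1 * rng.standard_normal((4 * H, H + D))
b = np.zeros(4 * H)
h, c = lstm_step(rng.standard_normal(D), np.zeros(H), np.zeros(H), W, b)
```

Because h is the output gate times tanh of the cell state, each component of h stays in (-1, 1); in practice a framework implementation (e.g. a Keras or PyTorch LSTM layer) would replace this hand-written step.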