by leveraging subword-level text representations and
deep feature extraction techniques. Experimental results show that the CNN model outperforms standard machine learning methods in accuracy, robustness,
and scalability. By addressing the challenges of
misinformation and manipulated content on social
media, this study contributes to the development of
reliable tools for safeguarding online discourse and
ensuring the integrity of digital interactions. Table 1 and Figure 1 show the obtained results.
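For concreteness, the following minimal sketch shows a subword-level CNN classifier of the kind used in this work; the vocabulary size, sequence handling, and layer dimensions are illustrative assumptions rather than the exact configuration behind Table 1.

from tensorflow.keras import layers, models

VOCAB_SIZE = 8000  # assumed subword (e.g., BPE) vocabulary size

def build_subword_cnn():
    # Inputs are padded sequences of integer subword IDs produced by a
    # trained subword tokenizer; the embedding maps them to dense vectors.
    model = models.Sequential([
        layers.Embedding(VOCAB_SIZE, 128),
        layers.Conv1D(128, 5, activation="relu"),  # local n-gram features
        layers.GlobalMaxPooling1D(),               # strongest response per filter
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),     # P(machine-generated)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model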
6 FUTURE SCOPE
The proposed deep learning model for detecting
machine-generated tweets can be extended and
enhanced in several ways. Future research can focus
on improving detection accuracy by incorporating
transformer-based architectures such as BERT or GPT for richer contextual understanding, as sketched below.
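A minimal fine-tuning sketch along these lines, using the Hugging Face transformers library, is given below; the bert-base-uncased checkpoint, label convention (0 = human, 1 = machine-generated), sequence length, and learning rate are assumptions for illustration.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

texts = ["example human-written tweet", "example generated tweet"]  # placeholder batch
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True,
                  max_length=64, return_tensors="pt")
outputs = model(**batch, labels=labels)  # returns loss and logits
outputs.loss.backward()                  # one gradient step of fine-tuning
optimizer.step()
optimizer.zero_grad()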
Additionally, expanding the dataset to include
multilingual tweets will improve the model's
adaptability to diverse linguistic patterns.
Integrating real-time detection mechanisms with social media platforms can enable proactive identification of deepfake content, reducing the spread of misinformation (a streaming sketch is given at the end of this section). Further, combining textual
analysis with multimodal data such as images and
videos can enhance the detection of complex
deepfake content. The system can also be extended to
detect evolving AI-generated text by continuously updating the training data with text produced by the latest generative models.
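The sketch below illustrates the real-time flagging idea with a fine-tuned classifier like the one above; tweet_stream, the decision threshold, and the downstream review step are hypothetical, and flagged tweets could double as candidate training data for tracking newer generative models.

import torch

def flag_machine_generated(model, tokenizer, tweet_stream, threshold=0.9):
    # Score each incoming tweet and yield likely machine-generated ones.
    model.eval()
    for text in tweet_stream:  # hypothetical iterable of incoming tweet texts
        batch = tokenizer(text, truncation=True, max_length=64,
                          return_tensors="pt")
        with torch.no_grad():
            logits = model(**batch).logits
        p_machine = torch.softmax(logits, dim=-1)[0, 1].item()
        if p_machine >= threshold:
            # Hand off for review; could also be queued as retraining data.
            yield text, p_machine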