findings underscore the effectiveness of transformer
models in this classification task and the value of XAI
tools in uncovering patterns that distinguish human-written
from AI-generated text.
Future research could expand the dataset to include
more diverse text types (e.g., programming code,
mathematical equations, and domain-specific content)
to improve generalizability. Directly applying Large
Language Models (LLMs) through fine-tuning or
prompt engineering may also improve classification
performance, given their strong contextual
understanding. Employing the latest GPT versions or
other state-of-the-art models would keep results
current, while programs such as Azure for Students
could reduce the cost of accessing GenAI model APIs.
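As a sketch of the prompt-engineering direction suggested above, the snippet below shows how a zero-shot human-vs-AI classification prompt could be assembled and how a model's free-form reply could be mapped back to a label. The prompt wording, the label set, and the helper names are our own illustrative assumptions, not part of the study; the LLM API call itself is stubbed out so the logic stays self-contained.

```python
# Illustrative sketch of zero-shot classification via prompt engineering.
# The template and parsing logic are assumptions for demonstration only;
# the actual LLM call is stubbed so no API access is required.

LABELS = ("human", "ai")

def build_prompt(text: str) -> str:
    """Assemble a zero-shot classification prompt for an LLM."""
    return (
        "Decide whether the following text was written by a human "
        "or generated by an AI model.\n"
        f"Text: {text}\n"
        "Answer with exactly one word: 'human' or 'ai'."
    )

def parse_label(response: str) -> str:
    """Map a free-form LLM reply onto one of the expected labels.

    A substring check is deliberately lenient; a production system
    would constrain the model's output format more strictly.
    """
    reply = response.strip().lower()
    for label in LABELS:
        if label in reply:
            return label
    return "unknown"  # the model answered off-format

if __name__ == "__main__":
    prompt = build_prompt("The mitochondria is the powerhouse of the cell.")
    # In practice, `prompt` would be sent to an LLM API here (stubbed).
    simulated_reply = "AI"
    print(parse_label(simulated_reply))  # → ai
```

Fine-tuning would follow a different path (supervised training on labelled human/AI pairs), but the same label-normalization step remains useful whenever the model's raw output is free text.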
ACKNOWLEDGEMENTS
The authors acknowledge the use of OpenAI’s GPT-
5 language model for assistance in idea generation,
drafting, and language refinement during the
preparation of this manuscript. All content produced
with AI assistance was carefully reviewed and
verified by the authors, who take full responsibility
for the final version of the paper.
REFERENCES
Ali, S., Abuhmed, T., El-Sappagh, S., Muhammad, K.,
Alonso-Moral, J.M., Confalonieri, R., Guidotti, R., Del
Ser, J., Díaz-Rodríguez, N., Herrera, F., 2023.
Explainable Artificial Intelligence (XAI): What we
know and what is left to attain Trustworthy Artificial
Intelligence. Information Fusion 99, 101805.
Arabadzhieva-Kalcheva, N., Kovachev, I., 2022.
Comparison of BERT and XLNet accuracy with
classical methods and algorithms in text classification,
in: 2021 International Conference on Biomedical
Innovations and Applications (BIA). IEEE, Varna,
Bulgaria, pp. 74–76.
Cesarini, M., Malandri, L., Pallucchini, F., Seveso, A.,
Xing, F., 2024. Explainable AI for Text Classification:
Lessons from a Comprehensive Evaluation of Post Hoc
Methods. Cogn Comput 16, 3077–3095.
Gautam, A., V, V., Masud, S., 2021. Fake News Detection
System using XLNet Model with Topic
Distributions: CONSTRAINT@AAAI2021 Shared
Task.
Guido, R., Groccia, M.C., Conforti, D., 2023. A hyper-
parameter tuning approach for cost-sensitive support
vector machine classifiers. Soft Comput 27, 12863–
12881.
Hayawi, K., Shahriar, S., Mathew, S.S., 2024. The imitation
game: Detecting human and AI-generated texts in the
era of ChatGPT and BARD. Journal of Information
Science 01655515241227531.
López Espejel, J., Ettifouri, E.H., Yahaya Alassan, M.S.,
Chouham, E.M., Dahhane, W., 2023. GPT-3.5, GPT-4,
or BARD? Evaluating LLMs reasoning ability in zero-
shot setting and performance boosting through prompts.
Natural Language Processing Journal 5, 100032.
Maktab Dar Oghaz, M., Dhame, K., Singaram, G., Babu
Saheer, L., 2023. Detection and Classification of
ChatGPT Generated Contents Using Deep Transformer
Models (preprint).
Mindner, L., Schlippe, T., Schaaff, K., 2023. Classification
of Human- and AI-Generated Texts: Investigating
Features for ChatGPT. pp. 152–170.
Mitrović, S., Andreoletti, D., Ayoub, O., 2023. ChatGPT or
Human? Detect and Explain. Explaining Decisions of
Machine Learning Model for Detecting Short
ChatGPT-generated Text.
Molnar, C., 2024. Interpretable Machine Learning.
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright,
C.L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K.,
Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L.,
Simens, M., Askell, A., Welinder, P., Christiano, P.,
Leike, J., Lowe, R., 2022. Training language models to
follow instructions with human feedback.
Salih, A., Raisi-Estabragh, Z., Galazzo, I.B., Radeva, P.,
Petersen, S.E., Menegaz, G., Lekadir, K., 2024. A
Perspective on Explainable Artificial Intelligence
Methods: SHAP and LIME.
Vickers, P., Barrault, L., Monti, E., Aletras, N., 2024. We
Need to Talk About Classification Evaluation Metrics
in NLP.
Yenduri, G., M, R., G, C.S., Y, S., Srivastava, G.,
Maddikunta, P.K.R., G, D.R., Jhaveri, R.H., B, P.,
Wang, W., Vasilakos, A.V., Gadekallu, T.R., 2023.
Generative Pre-trained Transformer: A Comprehensive
Review on Enabling Technologies, Potential
Applications, Emerging Challenges, and Future
Directions.