the 58th Annual Meeting of the Association for Computational Linguistics, pages 1808–1822. https://doi.org/10.18653/v1/2020.acl-main.164
Jakesch, M., Hancock, J. T., Naaman, M. (2023). Human heuristics for AI-generated language are flawed. Proceedings of the National Academy of Sciences 120, e2208839120. https://doi.org/10.1073/pnas.2208839120
Jawahar, G., Abdul-Mageed, M., Lakshmanan, L. V. S. (2020). Automatic Detection of Machine Generated Text: A Critical Survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2296–2309.
Khan, W., Turab, M., Ahmad, W., Ahmad, S. H., Kumar, K., Luo, B. (2022). Data Dimension Reduction makes ML Algorithms efficient. In Proceedings of the 2022 International Conference on Emerging Technologies in Electronics, Computing and Communication (ICETECC), pages 1–7. https://doi.org/10.1109/ICETECC56662.2022.10069527
Koike, R., Kaneko, M., Okazaki, N. (2024). OUTFOX: LLM-Generated Essay Detection Through In-Context Learning with Adversarially Generated Examples. In AAAI 2024, Proceedings of the 38th AAAI Conference on Artificial Intelligence, pages 21259–21266. https://doi.org/10.1609/aaai.v38i19.30120
Kumarage, T., Garland, J., Bhattacharjee, A., Trapeznikov, K., Ruston, S., Liu, H. (2023). Stylometric Detection of AI-Generated Text in Twitter Timelines. Preprint arXiv:2303.03697.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. Preprint arXiv:1907.11692.
Moulik, R., Phutela, A., Sheoran, S., Bhattacharya, S. (2023). Accelerated Neural Network Training through Dimensionality Reduction for High-Throughput Screening of Topological Materials. Preprint arXiv:2308.12722.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Chintala, S. (2019). PyTorch: An Imperative Style, High-Performance Deep Learning Library. Advances in Neural Information Processing Systems 32, pages 8024–8035.
Patel, P., Choukse, E., Zhang, C., Shah, A., Goiri, I., Maleki, S., Bianchini, R. (2024). Splitwise: Efficient Generative LLM Inference Using Phase Splitting. In ISCA 2024, Proceedings of the 51st ACM/IEEE Annual International Symposium on Computer Architecture, pages 118–132. https://doi.org/10.1109/ISCA59077.2024.00019
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, É. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12, pages 2825–2830.
Rojas-Simón, J., Ledeneva, Y., García-Hernández, R. A. (2024). A Dimensionality Reduction Approach for Text Vectorization in Detecting Human and Machine-generated Texts. Computación y Sistemas 28, pages 1919–1929. https://doi.org/10.13053/cys-28-4-5214
Singh, K. N., Devi, S. D., Devi, H. M., Mahanta, A. K. (2022). A novel approach for dimension reduction using word embedding: An enhanced text classification approach. International Journal of Information Management Data Insights 2, 100061. https://doi.org/10.1016/j.jjimei.2022.100061
Tang, R., Chuang, Y.-N., Hu, X. (2024). The Science of Detecting LLM-Generated Texts. Communications of the ACM 67, pages 50–59.
The Jupyter Development Team. (2015). Project Jupyter: Jupyter Notebook. Available at https://jupyter.org/.