to uncover deception in real time, outperforming
baseline methods on both synthetic and real-world
datasets. While the results are encouraging, careful
consideration of privacy, ethics, and regulatory
compliance is imperative. AI-based lie detection can serve as a powerful complement to human analysts, provided it is designed and deployed responsibly.
A key conclusion is that the proposed framework is adaptable and extensible. This research supports AI-based lie detection as a viable strategy for addressing insider threats. Nonetheless, the benefits of such technology can be realized only through continuous technological improvement, compliance with protective legal frameworks, and sustained employee trust. AI-powered lie detection should not be regarded as an autonomous, standalone remedy; rather, it is a tool that, when applied judiciously, can contribute substantially to the security and stability of an institution.