Evaluating Explainable AI for Deep Learning-Based Network Intrusion Detection System Alert Classification

Rajesh Kalakoti, Risto Vaarandi, Hayretdin Bahşi, Sven Nõmm

2025

Abstract

A Network Intrusion Detection System (NIDS) monitors networks for cyber attacks and other unwanted activities. However, NIDS solutions often generate an overwhelming number of alerts daily, making it challenging for analysts to focus on high-priority threats. While deep learning models promise to automate the prioritization of NIDS alerts, the lack of transparency in these models can undermine trust in their decision-making. This study highlights the critical need for explainable artificial intelligence (XAI) in NIDS alert classification to improve trust and interpretability. We employed a real-world NIDS alert dataset from the Security Operations Center (SOC) of Tallinn University of Technology (TalTech) in Estonia and developed a Long Short-Term Memory (LSTM) model to prioritize alerts. To explain the LSTM model's alert prioritization decisions, we implemented and compared four XAI methods: Local Interpretable Model-Agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), Integrated Gradients, and DeepLIFT. The quality of these XAI methods was assessed using a comprehensive framework that evaluated faithfulness, complexity, robustness, and reliability. Our results demonstrate that DeepLIFT consistently outperformed the other XAI methods, providing explanations with high faithfulness, low complexity, robust performance, and strong reliability. In collaboration with SOC analysts, we identified key features essential for effective alert classification. The strong alignment between these analyst-identified features and those obtained by the XAI methods validates their effectiveness and enhances the practical applicability of our approach.

Paper Citation


in Harvard Style

Kalakoti R., Vaarandi R., Bahşi H. and Nõmm S. (2025). Evaluating Explainable AI for Deep Learning-Based Network Intrusion Detection System Alert Classification. In Proceedings of the 11th International Conference on Information Systems Security and Privacy - Volume 1: ICISSP; ISBN 978-989-758-735-1, SciTePress, pages 47-58. DOI: 10.5220/0013180700003899


in BibTeX Style

@conference{icissp25,
  author={Rajesh Kalakoti and Risto Vaarandi and Hayretdin Bahşi and Sven Nõmm},
  title={Evaluating Explainable AI for Deep Learning-Based Network Intrusion Detection System Alert Classification},
  booktitle={Proceedings of the 11th International Conference on Information Systems Security and Privacy - Volume 1: ICISSP},
  year={2025},
  pages={47-58},
  publisher={SciTePress},
  organization={INSTICC},
  doi={10.5220/0013180700003899},
  isbn={978-989-758-735-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 11th International Conference on Information Systems Security and Privacy - Volume 1: ICISSP
TI - Evaluating Explainable AI for Deep Learning-Based Network Intrusion Detection System Alert Classification
SN - 978-989-758-735-1
AU - Kalakoti R.
AU - Vaarandi R.
AU - Bahşi H.
AU - Nõmm S.
PY - 2025
SP - 47
EP - 58
DO - 10.5220/0013180700003899
PB - SciTePress