Authors:
Jacopo Talpini, Nicolò Civiero, Fabio Sartori and Marco Savi
Affiliation:
Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milano, Italy
Keyword(s):
Intrusion Detection, Federated Learning, Machine Learning, Model Calibration.
Abstract:
Network intrusion detection systems (IDSs) are a major component of network security, aimed at protecting network-accessible endpoints, such as IoT devices, from malicious activities that compromise confidentiality, integrity, or availability within the network infrastructure. Machine Learning (ML) models are becoming a popular choice for developing an IDS, as they can handle large volumes of network traffic and identify increasingly sophisticated patterns. However, traditional ML methods often require a large centralized dataset, raising privacy and scalability concerns. Federated Learning (FL) offers a promising solution by enabling collaborative training of an IDS without sharing raw data among clients. Existing research on FL-based IDSs, however, primarily focuses on improving accuracy and detection rates, while little or no attention is given to properly estimating the model's uncertainty in making predictions. Such estimation is fundamental to increasing the model's reliability, especially in safety-critical applications, and can be addressed by appropriate model calibration. This paper introduces a federated calibration approach that ensures the efficient distributed training of a calibrator while safeguarding privacy, as no calibration data has to be shared by clients with external entities. Our experimental results confirm that the proposed approach not only preserves the model's performance, but also significantly enhances confidence estimation, making it well suited for adoption in IDSs.
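The abstract does not specify which calibrator is trained federatedly. As a minimal sketch of the general idea, the snippet below assumes temperature scaling as the calibrator and FedAvg-style gradient averaging as the aggregation rule: each client computes the gradient of its local calibration loss with respect to a shared temperature, and only that scalar gradient (never the calibration data) is sent to the server. All names and the synthetic data are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def local_grad_wrt_T(logits, labels, T):
    """Gradient of the mean cross-entropy w.r.t. temperature T for
    scaled logits z/T, computed on one client's private calibration set.
    dNLL/dT = mean_i sum_k (p_ik - y_ik) * (-z_ik / T^2)."""
    p = softmax(logits / T)
    onehot = np.eye(logits.shape[1])[labels]
    return np.mean(np.sum((p - onehot) * (-logits / T**2), axis=1))

# Synthetic per-client calibration sets (logits and labels stay local).
clients = []
for _ in range(3):
    labels = rng.integers(0, 4, size=200)
    logits = rng.normal(size=(200, 4)) + 3.0 * np.eye(4)[labels]
    clients.append((logits, labels))

# Server loop: average the scalar client gradients, update the shared
# temperature. Optimizing tau = log(T) keeps T positive and stable.
tau, lr = 0.0, 0.1
for _ in range(200):
    T = np.exp(tau)
    g_T = np.mean([local_grad_wrt_T(z, y, T) for z, y in clients])
    tau -= lr * g_T * T  # chain rule: dNLL/dtau = dNLL/dT * T
T = np.exp(tau)
print(f"calibrated temperature: {T:.2f}")
```

Only a single scalar per round crosses the network here, which is what makes this kind of calibrator cheap to train in a federated setting; richer calibrators (e.g. per-class scaling) would exchange a small parameter vector instead.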