Authors:
Radia Kassa 1,2; Kamel Adi 2 and Myria Bouhaddi 2
Affiliations:
1 Laboratoire LITAN, École supérieure en Sciences et Technologies de l’Informatique et du Numérique, RN 75, Amizour 06300, Bejaia, Algeria
2 Computer Security Research Laboratory, University of Quebec in Outaouais, Gatineau, Quebec, Canada
Keyword(s):
Membership Inference Attacks, Data Privacy, Machine Learning, Defense Mechanism, Optimal Noise Injection, Prediction Entropy, Black-Box Defense, Optimized Noise, Shapley Values.
Abstract:
Membership inference attacks (MIAs) pose a serious risk to data privacy in machine learning (ML) models, as they allow attackers to determine whether a given data point was included in the training set. Although various defenses exist, they often struggle to balance privacy and utility effectively. To address this challenge, in this paper we propose a novel defense mechanism based on optimal noise injection during the training phase. Our approach injects a carefully designed and controlled noise vector into each training sample; the noise is optimized to maximize prediction entropy, obscuring membership signals, while Shapley values are leveraged to preserve data utility. Experiments on benchmark datasets show that our method significantly reduces MIA success rates without sacrificing accuracy, offering a strong privacy-utility trade-off in black-box scenarios.