Authors:
Soumia Zohra El Mestari 1; Maciej Krzysztof Zuziak 2; Gabriele Lenzini 1 and Salvatore Rinzivillo 2
Affiliations:
1 SnT, University of Luxembourg, Esch-sur-Alzette, Luxembourg; 2 National Research Council, Pisa, Italy
Keyword(s):
Membership Inference Attacks, Shapley Values, Federated Learning.
Abstract:
Federated Learning (FL) is a crucial approach for training large-scale AI models while preserving data locality, eliminating the need for centralised data storage. In collaborative learning settings, ensuring data quality is essential; in FL, preserving privacy also requires limiting the knowledge accessible to the central orchestrator, which evaluates and manages client contributions. Accurately measuring and regulating the marginal impact of each client's contribution requires specialised techniques. This work examines the relationship between one such technique, Shapley Values, and a client's vulnerability to Membership Inference Attacks (MIAs). Such a correlation would suggest that the contribution index could reveal high-risk participants, potentially allowing a malicious orchestrator to identify and exploit the most vulnerable clients. Conversely, if no such relationship is found, it would indicate that contribution metrics do not inherently expose information exploitable for powerful privacy attacks. Our empirical analysis in a cross-silo FL setting demonstrates that leveraging contribution metrics in federated environments does not substantially amplify privacy risks.
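To make the contribution index concrete: a minimal sketch of exact Shapley value computation over client coalitions, as typically used to score client contributions in cross-silo FL. The `utility` function and the per-client gains here are hypothetical placeholders for illustration (in practice the utility would be, e.g., validation accuracy of a model aggregated from a coalition's updates); they are not the paper's actual evaluation setup.

```python
from itertools import combinations
from math import factorial

def shapley_values(clients, utility):
    """Exact Shapley value of each client, given a coalition utility function.

    Enumerates all subsets of the other clients and averages each client's
    marginal contribution with the standard Shapley weighting. Exponential in
    the number of clients, so only feasible for small cross-silo federations.
    """
    n = len(clients)
    values = {c: 0.0 for c in clients}
    for c in clients:
        others = [x for x in clients if x != c]
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = utility(set(subset) | {c}) - utility(set(subset))
                values[c] += weight * marginal
    return values

# Hypothetical additive utility: each client adds a fixed accuracy gain.
gains = {"A": 0.05, "B": 0.03, "C": 0.02}
util = lambda coalition: sum(gains[c] for c in coalition)

print(shapley_values(["A", "B", "C"], util))
```

With an additive utility like this toy one, each client's Shapley value reduces to its individual gain; with a realistic, non-additive FL utility, the values capture interaction effects between clients' data.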