Authors:
Manuel Lengl 1; Marc Benesch 1; Stefan Röhrl 1; Simon Schumann 1; Martin Knopp 1,2; Oliver Hayden 2 and Klaus Diepold 1
Affiliations:
1 Chair of Data Processing, Technical University of Munich, Germany
2 Heinz-Nixdorf Chair of Biomedical Electronics, Technical University of Munich, Germany
Keyword(s):
Federated Learning, Data Bias, Privacy Violation, Membership Inference Attack, Blood Cell Analysis, Quantitative Phase Imaging, Microfluidics, Flow Cytometry.
Abstract:
Federated Learning (FL) has emerged as a promising solution in the medical domain to overcome challenges related to data privacy and learning efficiency. However, its federated nature exposes it to privacy attacks and to model degradation caused by individual clients. The primary objective of this work is to analyze how different data biases (introduced by a single client) influence the overall model’s performance in a Cross-Silo FL environment and whether these biases can be exploited to extract information about other clients. We demonstrate on two datasets that bias injection can significantly compromise model integrity, with the impact varying considerably between the datasets. Furthermore, we show that minimal effort suffices to infer the number of training samples contributed by other clients. Our findings highlight the critical need for robust data security mechanisms in FL, as even a single compromised client can pose serious risks to the entire system.
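The abstract does not spell out the inference procedure, but the kind of sample-count leakage it describes is easy to picture under standard FedAvg, where the server averages client updates weighted by each client's number of training samples. The following minimal sketch is not the paper's actual attack; all client names, sample sizes, and the probe construction are hypothetical. It shows how a single malicious client could estimate its own aggregation weight, and hence the total number of samples held by the other clients, by injecting a known probe into its update and observing how much of it survives aggregation.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 10

    # Per-client training set sizes; the attacker only knows its own (hypothetical values).
    samples = {"honest_1": 500, "honest_2": 1200, "malicious": 300}
    N = sum(samples.values())

    # Stand-ins for locally computed weight updates.
    updates = {k: rng.normal(size=dim) for k in samples}

    # The malicious client adds a large, known probe to one coordinate of its update.
    probe = 1000.0
    updates["malicious"][0] += probe

    # Server-side FedAvg: average updates weighted by sample counts.
    global_update = sum((samples[k] / N) * updates[k] for k in samples)

    # The probe dominates coordinate 0, so its surviving fraction approximates
    # the malicious client's aggregation weight n_mal / N.
    weight_est = global_update[0] / probe        # ~ n_mal / N
    N_est = samples["malicious"] / weight_est    # ~ total samples N
    print(f"samples held by other clients: ~{N_est - samples['malicious']:.0f} "
          f"(true: {N - samples['malicious']})")

With a larger probe or averaging over several rounds, the noise contributed by honest updates becomes negligible; conversely, defenses such as secure aggregation or update clipping would blunt this particular probe.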