3.3 Privacy and Security Concerns
Federated learning, though designed to preserve privacy by keeping raw data off the central server and exchanging only gradient updates, still faces significant privacy and security challenges. The central concern lies in how gradients can inadvertently leak sensitive information. Attackers can exploit these gradients to reconstruct the original data, as demonstrated by the "deep leakage from gradients" technique of Zhu et al. (2019), which shows how seemingly innocuous gradient information can be reverse-engineered into sensitive training data. This undermines the foundational premise of FL: even when the data itself remains decentralized, the transfer of model updates can expose it.
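To make this attack concrete, the sketch below outlines a DLG-style gradient inversion loop in PyTorch: dummy inputs and labels are optimized until the gradients they induce match the gradients leaked from a client. The model interface, shapes, and optimization settings are illustrative assumptions, not the exact configuration of Zhu et al. (2019).

```python
# A minimal sketch of gradient inversion, assuming the attacker knows the
# model architecture and weights and has intercepted one client gradient.
import torch
import torch.nn.functional as F

def gradient_inversion(model, leaked_grads, input_shape, num_classes,
                       steps=100):
    """Optimize dummy data so its gradients match the leaked gradients."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)  # soft label
    optimizer = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(model(dummy_x), F.softmax(dummy_y, dim=-1))
        dummy_grads = torch.autograd.grad(
            loss, model.parameters(), create_graph=True)
        # L2 distance between the dummy gradients and the leaked ones;
        # driving it to zero makes dummy_x resemble the victim's example.
        grad_diff = sum(((dg - lg) ** 2).sum()
                        for dg, lg in zip(dummy_grads, leaked_grads))
        grad_diff.backward()
        return grad_diff

    for _ in range(steps):
        optimizer.step(closure)
    return dummy_x.detach(), F.softmax(dummy_y, dim=-1).detach()
```

Once the gradient distance is small, the recovered input and label can closely match the client's private training example, which is precisely the leakage the attack exposes.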
Wei and Liu (2022) offer insights into how differentially private algorithms, though intended to protect against privacy breaches, can still fall prey to gradient leakage attacks when they use fixed privacy parameters. Dynamic privacy parameters, which adjust the injected noise based on the behavior of gradient updates, show promise in strengthening privacy resilience while maintaining model accuracy. Despite this progress, fully mitigating these threats remains an open challenge.
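The sketch below illustrates the idea of a dynamic privacy parameter: each client clips its update, then adds Gaussian noise whose scale changes over training rounds. The round-indexed decay schedule and the constants here are illustrative assumptions standing in for the update-dependent mechanisms in the literature, not the exact method of Wei and Liu (2022).

```python
# A minimal sketch of per-round dynamic differential-privacy noise.
import torch

def dynamic_dp_perturb(grads, round_idx, clip_norm=1.0,
                       base_sigma=2.0, decay=0.98):
    """Clip a client's update, then add Gaussian noise whose scale
    decays over rounds: early updates, which are large and most
    revealing, receive more noise than late, fine-grained ones."""
    total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    clip_scale = min(1.0, clip_norm / (float(total_norm) + 1e-12))
    # Dynamic privacy parameter: noise level shrinks as training converges,
    # which helps preserve final model accuracy.
    sigma = base_sigma * (decay ** round_idx)
    return [g * clip_scale + sigma * clip_norm * torch.randn_like(g)
            for g in grads]
```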
Further research into secure federated learning has explored various techniques to bolster privacy defenses. Approaches such as differential privacy (Zhu et al., 2019), secure aggregation (Kairouz et al., 2021), and homomorphic encryption (Phong et al., 2018) have been proposed to protect against not only gradient leakage but also other adversarial attacks such as backdoor injection and data poisoning. However, many of these methods either reduce model accuracy or fail to provide full protection against sophisticated attacks, leaving open problems for future research.
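Of these defenses, secure aggregation is the most direct to illustrate. The sketch below uses pairwise additive masks that cancel when the server sums the clients' masked updates, so no individual update is revealed; the key agreement, dropout recovery, and finite-field arithmetic of the full protocols surveyed by Kairouz et al. (2021) are omitted, and the hash-seeded generator is a toy stand-in for a shared pairwise secret.

```python
# A minimal sketch of pairwise-masking secure aggregation.
import torch

def masked_update(update, client_id, all_ids, dim):
    masked = update.clone()
    for peer in all_ids:
        if peer == client_id:
            continue
        # Both endpoints seed the same generator, so they draw the same mask.
        seed = hash((min(client_id, peer), max(client_id, peer))) % (2**31)
        gen = torch.Generator().manual_seed(seed)
        mask = torch.randn(dim, generator=gen)
        # The lower id adds the mask, the higher id subtracts it,
        # so every pair of masks cancels in the server-side sum.
        masked += mask if client_id < peer else -mask
    return masked

# Server side: summing masked updates recovers the sum of raw updates
# without ever seeing any single client's raw update.
clients = {i: torch.randn(4) for i in range(3)}
ids = list(clients)
total = sum(masked_update(u, i, ids, 4) for i, u in clients.items())
assert torch.allclose(total, sum(clients.values()), atol=1e-5)
```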
While federated learning allows organizations to
build comprehensive models without sharing raw
data, the evolving landscape of attacks continues to
expose gaps in current defenses. More robust, performance-preserving encryption techniques, along with adaptive privacy mechanisms, remain critical areas
of research. Moreover, issues such as fairness and
heterogeneity across clients contribute additional
complexity to designing secure and efficient
federated models (Bagdasaryan et al., 2020).
4 CONCLUSIONS
This study offers a comprehensive review of how federated learning research tackles crucial challenges in customer-centric applications, particularly in finance, retail, and cross-enterprise collaboration. FL enables decentralized, privacy-preserving, collaborative model training, which is increasingly important in privacy-conscious
industries. Through the studies reviewed, FL has
shown its capacity to improve services like financial
evaluation, personalized retail recommendations, and
secure information sharing among enterprises,
offering a balance between privacy preservation and
machine learning effectiveness.
However, this study also highlights several
challenges that limit the widespread deployment of
FL. Data heterogeneity, where clients have varying
and non-IID data, can negatively impact model
performance, and methods like clustered federated
learning attempt to mitigate this issue but add
computational complexity. Additionally, achieving
model convergence in environments with resource-
constrained devices remains difficult, despite the
introduction of techniques like asynchronous updates
and partial contributions. Security risks, particularly
gradient leakage, still pose threats to privacy, even in
decentralized systems. Combining privacy-preserving methods with FL offers partial solutions, but these methods often reduce model accuracy and introduce communication overhead. Further research is needed to refine them and to improve the scalability and robustness of federated learning systems.
REFERENCES
Ahmed, U., Srivastava, G., & Lin, J. C.-W. 2022. Reliable
customer analysis using federated learning and
exploring deep-attention edge intelligence. Future
Generation Computer Systems, 127, 70-79.
Bagdasaryan, E., Veit, A., Hua, Y., Estrin, D., & Shmatikov, V. 2020. How to backdoor federated learning. In International Conference on Artificial Intelligence and Statistics (pp. 2938–2948). PMLR.
Imteaj, A., & Amini, M. H. 2022. Leveraging asynchronous
federated learning to predict customers financial
distress. Intelligent Systems with Applications, 14,
200064.
Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis,
M., Bhagoji, A. N., Bonawitz, K., Charles, Z.,
Cormode, G., Cummings, R., D'Oliveira, R. G. L.,
Eichner, H., El Rouayheb, S., Evans, D., Gardner, J.,
Garrett, Z., Gascón, A., Ghazi, B., Gibbons, P. B., ...
Yang, Q. 2021. Advances and open problems in
federated learning. arXiv preprint arXiv:1912.04977.
Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B.,
Chess, B., Child, R., ... & Amodei, D. 2020. Scaling
laws for neural language models. arXiv preprint
arXiv:2001.08361.
Li, L., Fan, Y., Tse, M., & Lin, K.-Y. 2020. A review of
applications in federated learning. Computers &
Industrial Engineering, 149, 106854.