For example, the aggregate results observed by the
server under a secure aggregation mechanism may
still leak private information, so the exposure risk of
intermediate parameters requires further study and
evaluation. Overall, constructing a unified and
comprehensive privacy protection metric is crucial
for FL systems: it would provide a common
evaluation standard and promote the development
and optimisation of privacy protection techniques.
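The secure-aggregation leakage concern above can be illustrated with a minimal pairwise-masking sketch, assuming the classic cancel-out-masks construction (the function name and the use of NumPy arrays are illustrative choices, not from any cited work): each pair of clients shares a random mask that one adds and the other subtracts, so the server recovers only the sum, while still observing that aggregate.

```python
import numpy as np

def masked_updates(updates, seed=0):
    """Illustrative pairwise masking: masks cancel in the sum.

    Each pair of clients (i, j) shares a random mask; client i adds
    it and client j subtracts it, so the server sees only randomised
    individual updates but an exact aggregate.
    """
    rng = np.random.default_rng(seed)
    n = len(updates)
    masked = [np.asarray(u, dtype=float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=masked[0].shape)
            masked[i] += mask
            masked[j] -= mask
    return masked

# The summed masked updates equal the plain sum, even though each
# individual masked update looks random to the server.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
aggregate = np.sum(masked_updates(updates), axis=0)
```

Note that even in this idealised sketch the server still learns the aggregate itself, which is exactly the residual leakage channel discussed above.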
6 CONCLUSION
In recent years, with the rapid development of
artificial intelligence, numerous reports of AI
systems leaking personal privacy have appeared on
the Internet, and people have begun to pay closer
attention to data privacy. The emergence of FL has,
to a certain extent, brought researchers new hope and
new methods. However, as research deepens, FL has
been found to face privacy leakage risks distinct
from those of other machine learning methods.
This paper conducts a thorough investigation and
in-depth analysis of the latest research on the privacy
leakage risks and protection technologies of FL. The
architecture and classification of FL are introduced,
the root causes of privacy risks are analysed, and
three privacy protection technologies are surveyed:
secure multi-party computation, differential privacy
and homomorphic encryption. Each technology is
first briefly introduced, and its communication
efficiency and privacy protection effect when
combined with FL are then comprehensively
analysed. Among them, secure multi-party
computation is suited to multi-institution
collaboration scenarios, differential privacy protects
shared model parameters by adding noise, and
homomorphic encryption focuses on protecting the
original data. Finally, based on the shortcomings of
existing research, future research directions are
discussed.
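The noise-addition idea behind differential privacy mentioned above can be sketched in a few lines, assuming a NumPy array as the model update; the clipping bound, noise multiplier and function name here are illustrative assumptions, not values or APIs from any cited work:

```python
import numpy as np

def gaussian_mechanism(update, clip_norm, noise_multiplier, rng=None):
    """Illustrative DP step: clip the update's L2 norm, then add
    Gaussian noise scaled to the clipping bound (the sensitivity)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    # Scale the update down so its L2 norm is at most clip_norm.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise standard deviation is proportional to the sensitivity.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```

In a real FL deployment, the noise multiplier would be chosen from a formal privacy accountant for a target privacy budget rather than set by hand.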
Some challenges in realising FL applications remain
unresolved. In particular, three major issues deserve
more in-depth research: developing privacy-
preserving solutions applicable to different types of
FL, balancing the trade-off between accuracy and
efficiency, and establishing a unified privacy metric.