
6 CONCLUSION
We addressed poisoning attacks in peer-to-peer machine learning, where nodes aggregate updates without a central authority. Although scalable and privacy-friendly, this architecture complicates the detection of malicious behavior, especially in the presence of colluding adversaries.
We proposed a defense framework that combines variance-based reputation scoring, Byzantine-aware thresholding, and feedback-driven self-healing, enabling nodes to detect and mitigate both isolated and coordinated attacks.
Experiments show that variance alone is insufficient against colluding adversaries, whereas our full defense preserves model accuracy, reduces loss, and maintains high detection rates under dynamic adversarial conditions.
Future work will explore context-sensitive dynamic trust thresholds to further enhance the adaptability and resilience of decentralized learning systems.