Robust Peer-to-Peer Machine Learning Against Poisoning Attacks
Myria Bouhaddi, Kamel Adi
2025
Abstract
Peer-to-Peer Machine Learning (P2P ML) offers a decentralized alternative to Federated Learning (FL), removing the need for a central server and enhancing scalability and privacy. However, the lack of centralized oversight exposes P2P ML to model poisoning attacks, where malicious peers inject corrupted updates. A major threat comes from adversarial coalitions, groups of peers that collaborate to reinforce poisoned updates and bypass local trust mechanisms. In this work, we investigate the impact of such coalitions and propose a defense framework that combines variance-based trust evaluation, Byzantine-inspired thresholding, and a feedback-driven self-healing mechanism. Extensive simulations across various attack scenarios demonstrate that our approach significantly improves robustness, ensuring high accuracy, reliable detection of attackers, and model stability under adversarial conditions.
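To make the idea of variance-based trust scoring with a deviation threshold concrete, the following is a minimal, illustrative Python sketch; it is not the paper's actual algorithm. It assumes each peer holds the parameter-update vectors received from its neighbours, compares them to a coordinate-wise median reference, and down-weights peers whose updates deviate beyond a threshold tau (the function names, the tau value, and the exponential down-weighting are all assumptions made for illustration).

```python
# Illustrative sketch only; the paper's framework is not reproduced here.
import numpy as np


def trust_scores(updates: dict[str, np.ndarray], tau: float = 2.0) -> dict[str, float]:
    """Assign a trust score in [0, 1] to each neighbour's update.

    updates: mapping peer_id -> flattened parameter-update vector
    tau:     deviation threshold in units of the robust spread (hypothetical value)
    """
    mat = np.stack(list(updates.values()))             # shape: (n_peers, n_params)
    center = np.median(mat, axis=0)                    # robust reference update
    spread = np.median(np.abs(mat - center)) + 1e-12   # robust scale estimate
    scores = {}
    for pid, u in updates.items():
        deviation = np.linalg.norm(u - center) / (spread * np.sqrt(u.size))
        # Updates far beyond the threshold receive (near-)zero trust.
        scores[pid] = float(np.exp(-max(deviation - tau, 0.0)))
    return scores


def aggregate(updates: dict[str, np.ndarray], scores: dict[str, float]) -> np.ndarray:
    """Trust-weighted average of the neighbours' updates."""
    total = sum(scores.values()) + 1e-12
    return sum(scores[p] * updates[p] for p in updates) / total


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = {f"peer{i}": rng.normal(0.0, 0.1, size=100) for i in range(8)}
    # A small coalition pushes a large, coordinated poisoned update.
    poisoned = {f"adv{i}": np.full(100, 5.0) for i in range(2)}
    all_updates = {**honest, **poisoned}
    s = trust_scores(all_updates)
    print({k: round(v, 3) for k, v in s.items()})
    print("aggregate norm:", np.linalg.norm(aggregate(all_updates, s)))
```

In this toy run the coalition's coordinated updates fall far outside the robust spread of the honest updates, so their trust scores collapse toward zero and the trust-weighted aggregate stays close to the honest consensus.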
Paper Citation
in Harvard Style
Bouhaddi M. and Adi K. (2025). Robust Peer-to-Peer Machine Learning Against Poisoning Attacks. In Proceedings of the 22nd International Conference on Security and Cryptography - Volume 1: SECRYPT; ISBN 978-989-758-760-3, SciTePress, pages 539-546. DOI: 10.5220/0013640600003979
in Bibtex Style
@conference{secrypt25,
author={Myria Bouhaddi and Kamel Adi},
title={Robust Peer-to-Peer Machine Learning Against Poisoning Attacks},
booktitle={Proceedings of the 22nd International Conference on Security and Cryptography - Volume 1: SECRYPT},
year={2025},
pages={539-546},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013640600003979},
isbn={978-989-758-760-3},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 22nd International Conference on Security and Cryptography - Volume 1: SECRYPT
TI - Robust Peer-to-Peer Machine Learning Against Poisoning Attacks
SN - 978-989-758-760-3
AU - Bouhaddi M.
AU - Adi K.
PY - 2025
SP - 539
EP - 546
DO - 10.5220/0013640600003979
PB - SciTePress