Authors:
Myria Bouhaddi
and
Kamel Adi
Affiliation:
Computer Security Research Laboratory, University of Quebec in Outaouais, Gatineau, Quebec, Canada
Keyword(s):
Peer-to-Peer Machine Learning, Poisoning Attacks, Adversarial Machine Learning, Robust Aggregation, Decentralized AI.
Abstract:
Peer-to-Peer Machine Learning (P2P ML) offers a decentralized alternative to Federated Learning (FL), removing the need for a central server and enhancing scalability and privacy. However, the lack of centralized oversight exposes P2P ML to model poisoning attacks, where malicious peers inject corrupted updates. A major threat comes from adversarial coalitions: groups of peers that collaborate to reinforce poisoned updates and bypass local trust mechanisms. In this work, we investigate the impact of such coalitions and propose a defense framework that combines variance-based trust evaluation, Byzantine-inspired thresholding, and a feedback-driven self-healing mechanism. Extensive simulations across various attack scenarios demonstrate that our approach significantly improves robustness, ensuring high accuracy, reliable detection of attackers, and model stability under adversarial conditions.
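To make the abstract's defense idea concrete, the following is a minimal illustrative sketch of variance-based trust weighting combined with a Byzantine-inspired deviation threshold. All names and the threshold multiplier `k` are assumptions for illustration, not the paper's actual algorithm or parameters.

```python
import numpy as np

def variance_trust_aggregate(updates, k=2.0):
    """Illustrative sketch (not the paper's method): down-weight peer
    updates whose deviation from the coordinate-wise median is large,
    and reject peers beyond a Byzantine-inspired threshold.
    `k` is a hypothetical threshold multiplier."""
    U = np.stack(updates)                     # shape: (n_peers, dim)
    median = np.median(U, axis=0)             # robust reference point
    dev = np.linalg.norm(U - median, axis=1)  # per-peer deviation
    # Byzantine-inspired cutoff: reject peers far beyond typical deviation
    cutoff = k * np.median(dev)
    accepted = dev <= cutoff
    # trust weights: inverse deviation (epsilon for numerical stability)
    w = np.where(accepted, 1.0 / (dev + 1e-8), 0.0)
    w /= w.sum()
    return w @ U, accepted
```

In this toy setting, honest peers submit updates clustered around a common point while a coalition member submits a far-off poisoned update; the outlier is excluded from the aggregate. A feedback-driven self-healing step, as described in the abstract, could then lower the rejected peer's trust for future rounds.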