The Study of Orthogonal Gradient Descent in Unlearning Algorithms

Xu He

2024

Abstract

Federated Learning (FL) trains models across users' devices, such as smartphones and laptops, while keeping each user's data local and private. However, when a user withdraws from FL, removing the influence of their data from the trained model is difficult and resource-intensive, since it typically requires retraining the model. This paper investigates the application of Orthogonal Gradient Descent (OGD) and Steepest Descent to federated unlearning, aiming to improve data removal and support compliance with the 'right to be forgotten' under the GDPR. OGD minimizes the residual impact of removed data, while Steepest Descent accelerates gradient reduction; both are evaluated against existing unlearning algorithms such as FedRecovery. Although the approach increases computational cost by up to 10%, it substantially improves unlearning efficiency while retaining model performance, making it viable under stringent unlearning requirements. The study also highlights OGD's limitations, including sensitivity to learning-rate changes and reduced effectiveness when tasks diverge significantly, underscoring the need for further research to optimize these methods in practical federated learning scenarios.
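OGD's core operation, as introduced in the continual-learning literature, is to project each gradient update onto the orthogonal complement of a set of stored gradient directions, so that an update cannot disturb behavior along those preserved directions. A minimal NumPy sketch of that projection step follows; the function names and setup are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gram_schmidt(vectors, eps=1e-10):
    """Orthonormalize stored gradient vectors via classical Gram-Schmidt,
    dropping near-linearly-dependent directions."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:
            w -= np.dot(w, u) * u  # remove component along each basis vector
        norm = np.linalg.norm(w)
        if norm > eps:
            basis.append(w / norm)
    return basis

def ogd_project(grad, basis):
    """Project `grad` onto the orthogonal complement of span(basis).

    `basis` holds orthonormal gradient directions to be preserved; the
    returned update has zero component along every one of them, which is
    the defining property of an OGD step."""
    g = grad.astype(float).copy()
    for v in basis:
        g -= np.dot(g, v) * v
    return g
```

For example, with preserved directions spanning the xy-plane, the projected update of `[1, 2, 3]` keeps only the z-component `[0, 0, 3]`, leaving the preserved subspace untouched.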



Paper Citation


in Harvard Style

He, X. (2024). The Study of Orthogonal Gradient Descent in Unlearning Algorithms. In Proceedings of the 2nd International Conference on Data Analysis and Machine Learning - Volume 1: DAML; ISBN 978-989-758-754-2, SciTePress, pages 562-568. DOI: 10.5220/0013528400004619


in Bibtex Style

@conference{daml24,
author={Xu He},
title={The Study of Orthogonal Gradient Descent in Unlearning Algorithms},
booktitle={Proceedings of the 2nd International Conference on Data Analysis and Machine Learning - Volume 1: DAML},
year={2024},
pages={562-568},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013528400004619},
isbn={978-989-758-754-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 2nd International Conference on Data Analysis and Machine Learning - Volume 1: DAML
TI - The Study of Orthogonal Gradient Descent in Unlearning Algorithms
SN - 978-989-758-754-2
AU - He X.
PY - 2024
SP - 562
EP - 568
DO - 10.5220/0013528400004619
PB - SciTePress
ER -