Consequently, we recommend continuing to employ the
conventional OGD approach without modifying the
loss function. This choice aligns with current best
practices and preserves the model's performance
across varying levels of user involvement.
3.5 Discussion
The experiments show that OGD does not outperform
SGD in convergence speed. This is expected, since
OGD requires additional computation to project each
gradient. However, OGD does improve the model's
performance during the post-training process.
The reason may be that FedRecovery relies on
differential privacy (as explained in the underlying
principles), which does not entirely eliminate a
user's personal data but instead perturbs it with
noise (Cao et al., 2023).
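This noise-based mechanism can be illustrated with a minimal sketch: a user's model update is clipped to a bounded norm and then perturbed with Gaussian noise, so the original contribution is obscured rather than deleted. The function name and parameters below are illustrative assumptions, not FedRecovery's actual interface.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_perturb(update, clip_norm, sigma):
    """Clip a model update to a bounded L2 norm, then add
    Gaussian noise scaled by the clipping bound.
    Illustrative sketch only; not FedRecovery's API."""
    norm = np.linalg.norm(update)
    # Scale down only if the update exceeds the clipping bound
    clipped = update * min(1.0, clip_norm / norm)
    # Noise standard deviation is proportional to the sensitivity
    noise = rng.normal(0.0, sigma * clip_norm, size=update.shape)
    return clipped + noise
```

With `sigma = 0` the function reduces to plain norm clipping, which makes the perturbation's effect easy to isolate in experiments.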
The modified loss function attempts to push the
model away from the direction of the user's original
data, but this may harm the model's generalization.
Nevertheless, the experiments confirm OGD's value
for unlearning. This is possibly because OGD stores
gradients only at the boundaries of gradual changes,
using them to approximate the gradients of other
points, so the full dataset's gradients never need to
be stored (Farajtabar et al., 2020).
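The orthogonal projection at the core of OGD can be sketched as follows: each new gradient is projected onto the orthogonal complement of a small set of stored gradient directions, so updates do not interfere with what those directions encode. The stored basis below is a toy assumption; in practice OGD stores gradients computed on earlier data (Farajtabar et al., 2020).

```python
import numpy as np

def orthogonalize(g, basis):
    """Remove from gradient g its components along the stored
    gradient directions (a Gram-Schmidt-style projection)."""
    for v in basis:
        g = g - (np.dot(g, v) / np.dot(v, v)) * v
    return g

# Toy stored gradients standing in for earlier tasks' gradients
basis = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
g_new = np.array([3.0, 4.0, 5.0])

g_proj = orthogonalize(g_new, basis)
# g_proj retains only the component orthogonal to the basis
```

The projected gradient is then used in place of the raw gradient in the usual descent step, which is where OGD's extra per-step cost comes from.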
Consequently, more thorough unlearning appears to
occur for UF during the post-training phase.
4 CONCLUSIONS
This paper evaluates the use of Orthogonal Gradient
Descent (OGD) in the context of federated learning
and unlearning. OGD significantly increases
computational cost: reaching the same accuracy on
the MovieLens 1M dataset within a one-hour training
run required nearly 10% more time. However, OGD
has its merits, and better performance is expected
with tuned hyperparameters. In situations where time
and computational resources are abundant and
thorough unlearning of user data is critical, OGD can
enhance the effectiveness of FedRecovery in
unlearning data. Nevertheless, the application of
OGD still faces inherent limitations. For instance, it
fails severely when the new task becomes dissimilar
to the previous one (e.g., rotating MNIST images by
more than 90 degrees), and it is sensitive to the
learning rate. Additionally, standard steepest-descent
training can only be used before the first user
requests unlearning; all subsequent training must rely
on OGD, which remains an unsolved issue and leads
to computational overhead. We hope that future work
will combine OGD with federated learning more
effectively.
REFERENCES
Britz, D. 2015. Understanding convolutional neural
networks for NLP. Denny’s Blog.
https://www.wildml.com/2015/11/understanding-
convolutional-neural-networks-for-nlp/
Cao, X., Yu, L., Xu, Y., & Ma, X. 2023. FedRecover:
Recovering from poisoning attacks in federated
learning using historical information. In 2023 IEEE
Symposium on Security and Privacy (SP) (pp. 1366–
1383). IEEE.
Chengstone. 2017. Movie recommender [Source code].
GitHub.
https://github.com/chengstone/movie_recommender
Farajtabar, M., Azizan, N., Mott, A., & Li, A. 2020.
Orthogonal gradient descent for continual learning. In
Proceedings of the Twenty-Third International
Conference on Artificial Intelligence and Statistics,
Proceedings of Machine Learning Research, 108,
3762–3773.
Fu, J., Hong, Y., Ling, X., Wang, L., Ran, X., Sun, Z., &
Cao, Y. 2024. Differentially private federated learning:
A systematic review. arXiv preprint,
arXiv:2405.08299.
GroupLens Research. 2003. MovieLens 1M dataset.
GroupLens. Retrieved May 23, 2024, from
https://grouplens.org/datasets/movielens/1m/
Hojjat, K. 2018. MNIST dataset. Kaggle. Retrieved May
29, 2024, from
https://www.kaggle.com/datasets/hojjatk/mnist-dataset
Li, T., Sahu, A. K., Talwalkar, A., & Smith, V. 2020.
Federated learning: Challenges, methods, and future
directions. IEEE Signal Processing Magazine, 37(3),
50–60. https://doi.org/10.1109/MSP.2020.2975749
Liu, G., Ma, X., Yang, Y., Wang, C., & Liu, J. 2020.
Federated unlearning. arXiv preprint,
arXiv:2012.13891.
Liu, G., Ma, X., Yang, Y., Wang, C., & Liu, J. 2021.
FedEraser: Enabling efficient client-level data removal
from federated learning models. In 2021 IEEE/ACM
29th International Symposium on Quality of Service
(IWQOS) (pp. 1–10). IEEE.
Xue, R., Xue, K., Zhu, B., Luo, X., Zhang, T., Sun, Q., &
Lu, J. 2023. Differentially private federated learning
with an adaptive noise mechanism. IEEE Transactions
on Information Forensics and Security.