models does not lead to a decline in the final model's
recognition ability. Given the differences in training
difficulty across labels, KT-pFL already performs
remarkably well in this case.
4.4 Discussion and Analysis
In both cases, the results in this paper stabilize after
13 rounds, with no decline in accuracy before the 13th
round. Compared with the original KT-pFL algorithm
(Zhang, Guo, & Ma, 2021), this paper avoids the
decline in accuracy during training by adjusting the
learning rate, which improves the reliability of the
training results to some extent. Future research could
consider adjusting the size of the public dataset: a
smaller dataset saves communication resources
without significantly affecting accuracy.
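For illustration only, the learning-rate adjustment discussed above can be sketched as a simple per-round decay schedule; the base rate (0.01) and decay factor (0.95) below are assumptions for the example, not values reported in this paper.

```python
# Hypothetical sketch of a per-round learning-rate decay schedule.
# Shrinking the rate in later rounds damps the oscillations that can
# cause the late-training accuracy decline discussed above.

def decayed_lr(base_lr: float, decay: float, round_idx: int) -> float:
    """Return the learning rate used in a given communication round."""
    return base_lr * (decay ** round_idx)

# Example: the rate decreases monotonically across rounds.
schedule = [round(decayed_lr(0.01, 0.95, r), 6) for r in range(3)]
print(schedule)  # [0.01, 0.0095, 0.009025]
```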
5 CONCLUSION
The KT-pFL algorithm shows significant
improvement over the FedMD algorithm, especially
in Non-IID case 2, because it better exploits the
different clients to train local models, allowing each
model to specialize in one aspect, a setting that Non-
IID case 2 matches well. The two specific
improvements proposed in this paper further enhance
KT-pFL on this already strong foundation. The
learning-rate adjustment and the normalization layers
improve the accuracy of the algorithm and reduce its
loss: the former significantly reduces the risk of an
accuracy decline caused by over-adjustment during
training, while the latter allows each round to use the
data more efficiently. This matters in practice, where
training data are limited and must be used to the
fullest. However, it should also be noted that the
accuracy of the models produced by different clients
varies greatly, and the models largely stop improving
after 10 rounds of training; in terms of model
accuracy, there is still room for improvement.
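As a minimal illustration of the normalization step referenced above, the sketch below normalizes a batch of activations to zero mean and unit variance in plain Python; the epsilon constant and the scale/shift parameters (gamma, beta) follow the standard batch-normalization formulation, and all names here are the authors' of this sketch, not of the paper's implementation.

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a 1-D batch of activations: subtract the batch mean,
    divide by the batch standard deviation, then scale and shift."""
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta
            for x in batch]

# Example: the normalized batch has (approximately) zero mean.
out = batch_norm([1.0, 2.0, 3.0])
print([round(v, 3) for v in out])  # [-1.225, 0.0, 1.225]
```

Keeping activations in a stable range across rounds is what lets each round extract more signal from the limited local data.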
ACKNOWLEDGEMENTS
All the authors contributed equally and their names
were listed in alphabetical order.
REFERENCES
Chen, J., & Zhang, J., 2024. Data-free personalized
federated learning algorithm based on knowledge
distillation. Information Network Security, 10, pp.
1562-1569.
Cohen, G., Afshar, S., Tapson, J., & van Schaik, A., 2017.
EMNIST: Extending MNIST to handwritten letters. In
Proceedings of the 2017 International Joint Conference
on Neural Networks (IJCNN). IEEE.
He, K., Zhang, X., Ren, S., & Sun, J., 2016. Deep residual
learning for image recognition. In 2016 IEEE
Conference on Computer Vision and Pattern
Recognition (CVPR), pp. 770-778. IEEE.
Kang, Y., & Liu, W., 2023. Adaptive federated learning
algorithm with batch normalization. Journal of Wuhan
Institute of Technology, 45(5), pp. 549-555.
Krizhevsky, A., & Hinton, G., 2009. Learning multiple
layers of features from tiny images. Technical Report,
University of Toronto.
Krizhevsky, A., Sutskever, I., & Hinton, G. E., 2012.
ImageNet classification with deep convolutional neural
networks. Advances in Neural Information Processing
Systems, 25, pp. 1097-1105.
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P., 1998.
Gradient-based learning applied to document
recognition. Proceedings of the IEEE, 86(11), pp. 2278-
2324.
Li, D., & Wang, J., 2019. FedMD: Heterogeneous federated
learning via model distillation. arXiv preprint
arXiv:1910.03581.
Ma, N., Zhang, X., Zheng, H.-T., & Sun, J., 2018.
Shufflenet v2: Practical guidelines for efficient CNN
architecture design. In Proceedings of the European
Conference on Computer Vision (ECCV), pp. 116-131.
Sun, H., 2024. Optimization methods in distributed
machine learning based on adaptive learning rate.
Doctoral Dissertation, University of Science and
Technology of China.
Sun, Y., Wang, Z., Liu, C., Yang, R., Li, M., & Wang, Z.,
2024. Related methods and prospects of personalized
federated learning. Computer Engineering and
Applications, 20, pp. 68-83.
Tamirisa, R., Xie, C., Bao, W., Zhou, A., Arel, R., &
Shamsian, A., 2024. FedSelect: Personalized federated
learning with customized selection of parameters for
fine-tuning. In 2024 IEEE/CVF Conference on
Computer Vision and Pattern Recognition (CVPR).
Xiao, H., Rasul, K., & Vollgraf, R., 2017. Fashion-MNIST:
A novel image dataset for benchmarking machine
learning algorithms. arXiv preprint arXiv:1708.07747.
Zhang, X., Zhuang, Y., Yan, F., & Wang, W., 2019.
Research and progress in category-level object
recognition and detection based on transfer learning.
Acta Automatica Sinica, 07, pp. 1224-1243.
Zhang, J., Guo, S., Ma, X., Wang, H., Xu, W., & Wu, F.,
2021. Parameterized knowledge transfer for
personalized federated learning. In Advances in Neural
Information Processing Systems, 34 (NeurIPS 2021).