number of devices is large and frequent
communication leads to inefficiency. Han, Mao and
Dally (2015) proposed that model compression and
quantization techniques can be used to reduce
communication overhead. At the same time, increasing the number of local training epochs allows the local-update approach to reduce the communication between devices and the server, but it may cause local models to overfit and degrade global performance. Two further significant issues in federated learning are device and data heterogeneity. Because devices differ in computing power and data distribution, conventional federated learning techniques often fail to fully exploit their computational capabilities and may even degrade global model performance. To this end, researchers have proposed algorithms based on gradient alignment and adaptive adjustment of model parameters, aiming to balance the contributions of different devices and improve the global model.
Li, He and Song (2021) proposed the Model-Contrastive Federated Learning (MOON) algorithm, a recent and significant advance in federated learning. MOON mitigates the drift among devices by contrasting local and global model representations during local updates,
demonstrating strong robustness, especially in
handling non-IID data and device heterogeneity. By
optimizing model synchronization and dynamically
adjusting local models, MOON reduces
communication overhead and enhances efficiency.
Although MOON has no built-in privacy protection mechanism, it can be combined with techniques such as differential privacy to strengthen privacy guarantees. Despite its strong performance across multiple experiments, MOON still faces challenges in communication efficiency and computational complexity in large-scale systems, particularly when data are highly imbalanced, and further optimization is still needed.
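For reference, the model-contrastive loss that MOON adds to each client's objective can be sketched as follows. The vector dimensions, the cosine-similarity helper, and the toy inputs are illustrative assumptions, not the original implementation.

import numpy as np

def moon_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """Model-contrastive loss of MOON (Li, He and Song, 2021): pull the
    local representation toward the global model's representation and
    push it away from the previous local model's representation; tau is
    the temperature parameter."""
    def cosine_sim(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    pos = np.exp(cosine_sim(z_local, z_global) / tau)  # positive pair
    neg = np.exp(cosine_sim(z_local, z_prev) / tau)    # negative pair
    return -np.log(pos / (pos + neg))

# toy usage with random 16-dimensional representation vectors
rng = np.random.default_rng(0)
z_l, z_g, z_p = rng.normal(size=(3, 16))
print(moon_contrastive_loss(z_l, z_g, z_p, tau=0.5))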
This study proposes an improved federated learning algorithm to address the training efficiency and performance problems that arise in heterogeneous data environments. Pruning is introduced to reduce redundant computation and improve computational efficiency. To safeguard data privacy and
guarantee that training is carried out without
disclosing user information, differential privacy
techniques are employed. In addition,
hyperparameters such as learning rate, regularization
parameter, and local learning rate are dynamically
adjusted to accelerate model convergence and
improve performance. The research goal is to reduce
communication overhead, optimize computing
resources, and improve the stability and robustness of
the model under multi-party heterogeneous data
while ensuring data privacy.
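As an illustration of the dynamic adjustment mentioned above, the following sketch decays the learning rate and relaxes the regularization weight round by round. The function name, the schedule form, and the constants are assumptions for illustration only, not the settings used in the experiments.

def adjust_hyperparameters(round_idx, base_lr=0.01, base_mu=1.0,
                           decay=0.05, min_lr=1e-4):
    """Hypothetical round-wise schedule: decay the learning rate
    geometrically and relax the regularization weight mu as training
    stabilizes. Names and constants are illustrative assumptions."""
    lr = max(base_lr * (1.0 - decay) ** round_idx, min_lr)
    mu = base_mu / (1.0 + decay * round_idx)
    return lr, mu

# e.g. learning rate and mu at rounds 0, 10 and 50
for r in (0, 10, 50):
    print(r, adjust_hyperparameters(r))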
2 PROPOSED METHOD
2.1 Problem Statement
Suppose there are $N$ participants $A_1, A_2, \ldots, A_N$, and each participant $A_i$ holds a local dataset $D_i$. The objective is to collaboratively train a global model $G$ through a central server while
protecting data privacy. Because the distribution of
local data is heterogeneous, updates during training
may fluctuate greatly, impacting the model's
performance and rate of convergence. At the same
time, as the number of training rounds increases,
storage and computing costs rise, resulting in a waste
of resources. It is therefore essential to improve training efficiency, reduce redundant computation and communication overhead, and ensure that the model converges quickly and stably on heterogeneous data while preserving privacy.
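To make the setting concrete, the sketch below shows a server-side aggregation step over the $N$ participants, using FedAvg-style weighted averaging as a stand-in; the function and the toy data are illustrative assumptions, not the aggregation rule of the proposed algorithm.

import numpy as np

def aggregate(client_models, client_sizes):
    """Weighted average of the locally updated parameters, with weights
    proportional to the size of each local dataset D_i (FedAvg-style,
    used here only to make the problem setting concrete)."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, client_models))

# toy example: three participants, parameters flattened into vectors
clients = [np.full(4, v) for v in (1.0, 2.0, 3.0)]
sizes = [100, 200, 700]
print(aggregate(clients, sizes))  # weighted mean of the client models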
2.2 Model Framework
This paper proposes an improved federated learning
algorithm - MOON-DPAP, which enhances the
model's efficiency and privacy protection capabilities
by introducing pruning, differential privacy, and
dynamic parameter adjustment. Dynamic pruning
reduces redundant parameters and improves
computational efficiency; Dropout alleviates
overfitting and enhances model adaptability;
differential privacy protects data privacy by adding
noise. Additionally, dynamic adjustment of the
learning rate, contrastive loss temperature parameter
$\tau$, regularization parameter $\mu$, and local learning rate
accelerates model convergence and optimizes
performance.
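A minimal sketch of how pruning and differential-privacy noise could enter a client's local update is given below. The gradient step, pruning ratio, clipping norm, and noise scale are illustrative assumptions rather than the exact MOON-DPAP procedure.

import numpy as np

def dp_prune_local_step(weights, grad, lr, prune_ratio=0.2,
                        clip_norm=1.0, noise_std=0.1, rng=None):
    """Illustrative client-side step combining three ingredients of the
    framework: (1) a gradient step with the current learning rate,
    (2) magnitude pruning that zeroes the smallest weights to remove
    redundant parameters, and (3) norm clipping plus Gaussian noise on
    the uploaded parameters as a differential-privacy mechanism.
    All constants are assumptions, not the paper's settings."""
    if rng is None:
        rng = np.random.default_rng()

    # (1) local update with the dynamically adjusted learning rate
    w = weights - lr * grad

    # (2) dynamic magnitude pruning: zero roughly prune_ratio of weights
    k = int(prune_ratio * w.size)
    if k > 0:
        threshold = np.partition(np.abs(w), k)[k]
        w = np.where(np.abs(w) < threshold, 0.0, w)

    # (3) clip the parameter norm and add Gaussian noise before upload
    norm = np.linalg.norm(w)
    if norm > clip_norm:
        w = w * (clip_norm / norm)
    return w + rng.normal(scale=noise_std, size=w.shape)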
In each round of the algorithm, the client receives the global model from the server. The client trains and updates the model on its local data, and the contrastive loss increases the similarity between the representations of the local and global models. During training, differential privacy protects
data security, and dynamic pruning optimizes the
model structure. After the updated model is uploaded
to the server, the server updates the global model