of the data is realized through the processing of large
models. These results demonstrate that the optimization
strategy for large-model training in the distributed cloud
environment studied in this paper is highly effective: it
sustains strong optimization performance and time
efficiency across the distributed environment, indicating
that the distributed cloud platform is particularly well
suited to large-model training tasks.
5 CONCLUSIONS
This paper proposes an effective optimization strategy
for large-model training in a distributed cloud
environment that addresses the high resource occupancy,
difficult distributed computation, and insufficient
reliability of the traditional training process. The
strategy combines efficient association of large models,
model compression, and training optimization to balance
resources comprehensively across the distributed cloud
and meet the demands of fast training, substantially
improving both training speed and the overall
performance of the cloud platform. In short, without
additional hardware resources, the stability and
scalability of large models in a distributed environment
can be ensured through intelligent algorithms and
optimization mechanisms. This research provides a
reliable and scalable solution for large-model training
and can be widely applied in the field of artificial
intelligence. Although the datasets used in this paper
were kept as large as possible, some limitations remain
and can be addressed in future work.
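To make the model-compression component named above concrete, the following is a minimal sketch, not the paper's actual method, of uniform post-training weight quantization in Python/NumPy; the function name and bit width are illustrative assumptions:

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Illustrative uniform post-training quantization: map float
    weights onto num_bits integer levels and back, trading a bounded
    precision loss for a smaller stored representation."""
    levels = 2 ** num_bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    q = np.round((w - w_min) / scale).astype(np.uint8)  # compressed form
    return q * scale + w_min                            # dequantized weights

weights = np.random.randn(4, 4).astype(np.float32)
restored = quantize_weights(weights)
# 8-bit storage is a quarter of float32; round-trip error is bounded
# by half a quantization step, i.e. (max - min) / 510.
assert np.max(np.abs(weights - restored)) <= (weights.max() - weights.min()) / 255
```

Schemes of this kind reduce the memory and communication footprint of each worker in a distributed setting, which is one way a compression step can contribute to the resource balance described above.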