Proceedings of the 2025 3rd International Conference on Image, Algorithms, and Artificial Intelligence (ICIAAI 2025)

Parallel Optimization Strategy for Distributed Machine Learning in Massive Data Processing

Authors
Selena Peitong Lin1, *
1School of Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
*Corresponding author. Email: splinaa@connect.ust.hk
Available Online 31 August 2025.
DOI
10.2991/978-94-6463-823-3_108
Keywords
Gradient optimization; Load Balancing Strategy; Parallelism Strategy
Abstract

In the current digital era, the growing demand for personalized usage environments has increased the volume and diversity of data used in distributed machine learning, pushing data computing and storage capabilities to their limits. Numerous parallel optimization strategies have therefore emerged to enhance data processing efficiency. This paper first examines common parallel optimization techniques based on gradient optimization, exploring how these strategies improve the efficiency of machine learning models in distributed settings. Next, it analyzes optimization strategies based on load balancing, which are crucial for distributing workloads among the nodes of a distributed system. Lastly, the paper discusses optimization strategies based on parallelism, focusing on how data parallelism accelerates the learning process. Furthermore, a comparative analysis of all the parallel optimization strategies discussed offers insight into their practical application based on their specific characteristics. Overall, this paper aims to deliver a comprehensive and valuable analysis of parallel optimization strategies for distributed machine learning, helping researchers and practitioners identify the strategies most appropriate to their particular requirements.
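The data parallelism the abstract refers to can be illustrated with a minimal single-process sketch: each "worker" computes the gradient of a shared model on its own data shard, the gradients are averaged (an all-reduce in a real distributed system), and every worker applies the same update. All names, the least-squares objective, and the synthetic data below are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

# Synthetic regression data standing in for a large dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=400)

def shard_gradient(w, X_shard, y_shard):
    """Per-worker gradient of 0.5 * mean squared error on one shard."""
    residual = X_shard @ w - y_shard
    return X_shard.T @ residual / len(y_shard)

def train_data_parallel(num_workers=4, steps=200, lr=0.1):
    """Synchronous data-parallel SGD: shard data, average gradients, update."""
    w = np.zeros(3)
    X_shards = np.array_split(X, num_workers)
    y_shards = np.array_split(y, num_workers)
    for _ in range(steps):
        # Each worker computes a local gradient in parallel (simulated here).
        grads = [shard_gradient(w, Xs, ys)
                 for Xs, ys in zip(X_shards, y_shards)]
        # Averaging plays the role of an all-reduce across workers.
        w -= lr * np.mean(grads, axis=0)
    return w

w = train_data_parallel()
```

Because every worker sees the same averaged gradient, the trajectory matches single-node full-batch gradient descent while the per-step gradient work is split across workers, which is the acceleration the abstract describes.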

Copyright
© 2025 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the 2025 3rd International Conference on Image, Algorithms, and Artificial Intelligence (ICIAAI 2025)
Series
Advances in Computer Science Research
Publication Date
31 August 2025
ISBN
978-94-6463-823-3
ISSN
2352-538X
DOI
10.2991/978-94-6463-823-3_108

Cite this article

TY  - CONF
AU  - Selena Peitong Lin
PY  - 2025
DA  - 2025/08/31
TI  - Parallel Optimization Strategy for Distributed Machine Learning in Massive Data Processing
BT  - Proceedings of the 2025 3rd International Conference on Image, Algorithms, and Artificial Intelligence (ICIAAI 2025)
PB  - Atlantis Press
SP  - 1128
EP  - 1139
SN  - 2352-538X
UR  - https://doi.org/10.2991/978-94-6463-823-3_108
DO  - 10.2991/978-94-6463-823-3_108
ID  - Lin2025
ER  -