Proceedings of the 2025 2nd International Conference on Mechanics, Electronics Engineering and Automation (ICMEEA 2025)

Hardware Acceleration Techniques for Convolutional Layers in Convolutional Neural Networks

Authors
Jieshen Cai1, *
1Dundee International Institute, Central South University, Changsha, Hunan, 410083, China
*Corresponding author. Email: 7802220130@csu.edu.cn
Available Online 31 August 2025.
DOI
10.2991/978-94-6463-821-9_46
Keywords
Convolutional Neural Networks; Convolutional Layers; Hardware Acceleration; Optimization
Abstract

Convolutional Neural Networks (CNNs) are widely used in deep learning because of the power of the convolution operation. However, the convolutional layer often becomes a performance bottleneck because of its computationally intensive operations, especially in resource-constrained environments. To address this problem, hardware accelerators have emerged to improve the performance and efficiency of the convolutional layer. This review provides an overview of the main hardware acceleration platforms: Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), and Graphics Processing Units (GPUs). It then explores optimization techniques applied on these platforms, such as parallelization, quantization, and sparsity exploitation. Finally, the review considers future directions for convolutional layer hardware accelerators, including hardware-software co-optimization, reduced power consumption for edge computing, greater versatility, and relief of the memory-access bottleneck, and it highlights key challenges and priorities for future research. Through an in-depth analysis of current research results and future trends, this review aims to provide a comprehensive reference for the design and optimization of convolutional layer hardware accelerators.
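As context for the bottleneck claim above, the following minimal sketch (not taken from the paper; the shapes, variable names, and the direct_conv2d helper are illustrative assumptions) writes a direct 2-D convolution as plain loops so that the multiply-accumulate count driving hardware acceleration is explicit.

```python
import numpy as np

def direct_conv2d(x, w):
    """Direct convolution sketch.
    x: (C_in, H, W) input feature maps; w: (C_out, C_in, K, K) kernels.
    Stride 1, no padding -- illustrative only, not an optimized implementation.
    """
    c_in, h, w_dim = x.shape
    c_out, _, k, _ = w.shape
    out_h, out_w = h - k + 1, w_dim - k + 1
    y = np.zeros((c_out, out_h, out_w))
    # Six nested loops -> C_out * C_in * K * K * out_H * out_W multiply-accumulates,
    # which is why the convolutional layer dominates CNN compute cost and is the
    # usual target for ASIC/FPGA/GPU acceleration, quantization, and sparsity tricks.
    for oc in range(c_out):
        for oh in range(out_h):
            for ow in range(out_w):
                for ic in range(c_in):
                    for kh in range(k):
                        for kw in range(k):
                            y[oc, oh, ow] += x[ic, oh + kh, ow + kw] * w[oc, ic, kh, kw]
    return y

if __name__ == "__main__":
    # Small example: 3 input channels, 4 output channels, 3x3 kernels.
    x = np.random.rand(3, 8, 8)
    w = np.random.rand(4, 3, 3, 3)
    print(direct_conv2d(x, w).shape)  # (4, 6, 6)
```

For a more realistic layer size, a 3x3 convolution mapping a 64-channel 56x56 input to 128 output channels already needs about 128 * 64 * 9 * 54 * 54 ≈ 2.1e8 multiply-accumulates, which is the workload that the accelerators surveyed in this review are designed to handle efficiently.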

Copyright
© 2025 The Author(s)
Open Access
This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the 2025 2nd International Conference on Mechanics, Electronics Engineering and Automation (ICMEEA 2025)
Series
Advances in Engineering Research
Publication Date
31 August 2025
ISBN
978-94-6463-821-9
ISSN
2352-5401

Cite this article

TY  - CONF
AU  - Jieshen Cai
PY  - 2025
DA  - 2025/08/31
TI  - Hardware Acceleration Techniques for Convolutional Layers in Convolutional Neural Networks
BT  - Proceedings of the 2025 2nd International Conference on Mechanics, Electronics Engineering and Automation (ICMEEA 2025)
PB  - Atlantis Press
SP  - 445
EP  - 456
SN  - 2352-5401
UR  - https://doi.org/10.2991/978-94-6463-821-9_46
DO  - 10.2991/978-94-6463-821-9_46
ID  - Cai2025
ER  -