Proceedings of the 2025 2nd International Conference on Electrical Engineering and Intelligent Control (EEIC 2025)

Hardware Accelerators for Deep Learning Focusing on Sparsity and Computing-in-Memory

Authors
Dingchen Zhu1, *
1College of Integrated Circuits and Micro-Nano Electronics, Fudan University, Shanghai, China
*Corresponding author. Email: 22307130071@m.fudan.edu.cn
Available Online 23 October 2025.
DOI
10.2991/978-94-6463-864-6_7
Keywords
Sparse Weight; Hardware Accelerator; Neural Network Optimization
Abstract

This paper presents a comprehensive review of hardware accelerators designed to address the challenges posed by sparse neural networks, focusing on their architectural innovations, algorithmic optimizations, and performance advancements. Traditional computing systems face significant challenges, such as energy inefficiency, memory bottlenecks, and computational irregularity, due to the inherent complexity and sparsity of modern neural networks. To overcome these issues, this paper analyzes five representative hardware accelerators for deep learning: Eyeriss, Cambricon-S, NullHop, Tensaurus, and GAS. These architectures employ strategies including dataflow optimization, sparsity-aware pruning, zero-skipping mechanisms, and in-memory computing to achieve significant performance gains. The results demonstrate notable improvements, such as a 2.5× energy-efficiency gain in Eyeriss, a 1.37× efficiency gain in Cambricon-S, and a power efficiency of 3 TOp/s/W in NullHop, along with breakthroughs in handling mixed sparse-dense computations in Tensaurus and a unified in-memory architecture in GAS. This analysis highlights the critical role of hardware-software co-design and effective sparsity exploitation in enabling scalable, energy-efficient AI systems. By consolidating key findings and strategies, this work provides a roadmap for future research on optimizing neural network acceleration for real-world applications.
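
To make the zero-skipping idea mentioned above concrete, the following sketch (not taken from the reviewed accelerators; an illustrative Python example with hypothetical function names) shows a software analogue of the mechanism: multiply-accumulate operations are performed only when both the weight and the activation are non-zero, which is broadly the kind of useless-work elimination that sparsity-aware designs such as NullHop and Cambricon-S implement directly in hardware.

import numpy as np

def dense_matvec(W, x):
    """Reference dense matrix-vector product: every product is computed,
    including those that are trivially zero."""
    return W @ x

def sparse_matvec_zero_skip(W, x):
    """Illustrative zero-skipping matrix-vector product (hypothetical example).

    Only the non-zero weights of each row are visited (weight sparsity),
    and products with zero activations are skipped as well (activation
    sparsity) -- a software analogue of hardware zero-skipping logic.
    """
    y = np.zeros(W.shape[0], dtype=W.dtype)
    for row in range(W.shape[0]):
        cols = np.nonzero(W[row])[0]      # keep only non-zero (pruned) weights
        acc = 0.0
        for c in cols:
            a = x[c]
            if a == 0.0:                  # skip zero activations
                continue
            acc += W[row, c] * a          # only "useful" MACs are performed
        y[row] = acc
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, 16))
    W[rng.random(W.shape) < 0.7] = 0.0    # ~70% weight sparsity (as if pruned)
    x = rng.standard_normal(16)
    x[rng.random(16) < 0.5] = 0.0         # ~50% activation sparsity (ReLU-like)

    assert np.allclose(dense_matvec(W, x), sparse_matvec_zero_skip(W, x))
    print("Zero-skipping result matches the dense reference.")

In software this bookkeeping adds overhead, which is precisely why the accelerators surveyed in the paper move the detection and skipping of zero operands into dedicated hardware datapaths and compressed storage formats.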

Copyright
© 2025 The Author(s)
Open Access
This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the 2025 2nd International Conference on Electrical Engineering and Intelligent Control (EEIC 2025)
Series
Advances in Engineering Research
Publication Date
23 October 2025
ISBN
978-94-6463-864-6
ISSN
2352-5401
DOI
10.2991/978-94-6463-864-6_7

Cite this article

TY  - CONF
AU  - Dingchen Zhu
PY  - 2025
DA  - 2025/10/23
TI  - Hardware Accelerators for Deep Learning Focusing on Sparsity and Computing-in-Memory
BT  - Proceedings of the 2025 2nd International Conference on Electrical Engineering and Intelligent Control (EEIC 2025)
PB  - Atlantis Press
SP  - 51
EP  - 60
SN  - 2352-5401
UR  - https://doi.org/10.2991/978-94-6463-864-6_7
DO  - 10.2991/978-94-6463-864-6_7
ID  - Zhu2025
ER  -