Proceedings of the International Conference on Current Problems in Engineering and Applied Sciences (ICCPEAS 2025)

Energy-Efficient Deep Learning using Model Compression

Authors
Ayla Kayabaş1, *
1Kirsehir Ahi Evran University, Kirsehir, Turkey
*Corresponding author. Email: ayla.kayabas@ahievran.edu.tr
Available Online 14 May 2026.
DOI
10.2991/978-94-6239-668-5_92
Keywords
Model Compression; Quantization; Pruning
Abstract

Model compression techniques can improve the energy efficiency of convolutional neural networks (CNNs) without sacrificing accuracy. Using the CIFAR-10 dataset, we trained a baseline CNN and applied two key compression methods: pruning and post-training quantization. The pruned model, at 50% sparsity, achieved 66.68% accuracy with a model size of 1.38 megabytes (MB), while the quantized model reached 67.61% accuracy at a smaller 1.18 MB. These results show that quantization not only reduces the memory footprint, making the model well suited to edge deployment and low-power devices, but can also preserve, and in this case slightly improve, classification accuracy relative to pruning. We illustrate the trade-off between model size and accuracy and highlight post-training quantization as an effective approach to green computing in deep learning. The results support sustainable AI by offering lightweight, energy-efficient models without requiring complex retraining.
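The two compression methods named in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch on raw weight arrays, not the authors' implementation (which compressed a trained CNN); the function names `magnitude_prune` and `quantize_uint8` are ours, and magnitude thresholding plus affine uint8 quantization are assumed as the concrete variants of "pruning" and "post-training quantization".

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude entries until at least
    `sparsity` fraction of the tensor is zero (magnitude pruning)."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def quantize_uint8(weights):
    """Affine post-training quantization: map float32 weights onto
    the uint8 range [0, 255] with a single scale and zero point."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    q = np.clip(np.round((weights - w_min) / scale), 0, 255).astype(np.uint8)
    return q, scale, w_min

def dequantize(q, scale, zero_point):
    """Recover approximate float32 weights from the uint8 codes."""
    return q.astype(np.float32) * scale + zero_point

# Demo: prune and quantize a random float32 weight matrix
w = np.random.RandomState(0).randn(64, 64).astype(np.float32)
pruned = magnitude_prune(w, sparsity=0.5)      # >= 50% zeros
q, scale, zp = quantize_uint8(w)               # 4x smaller than float32
w_approx = dequantize(q, scale, zp)            # error bounded by scale/2
```

The 4x storage reduction (uint8 vs. float32) is what drives the smaller on-disk size reported for the quantized model, while the per-weight rounding error is bounded by half the quantization step.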

Copyright
© 2026 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the International Conference on Current Problems in Engineering and Applied Sciences (ICCPEAS 2025)
Series
Advances in Engineering Research
Publication Date
14 May 2026
ISBN
978-94-6239-668-5
ISSN
2352-5401
DOI
10.2991/978-94-6239-668-5_92
Cite this article

TY  - CONF
AU  - Ayla Kayabaş
PY  - 2026
DA  - 2026/05/14
TI  - Energy-Efficient Deep Learning using Model Compression
BT  - Proceedings of the International Conference on Current Problems in Engineering and Applied Sciences (ICCPEAS 2025)
PB  - Atlantis Press
SP  - 877
EP  - 882
SN  - 2352-5401
UR  - https://doi.org/10.2991/978-94-6239-668-5_92
DO  - 10.2991/978-94-6239-668-5_92
ID  - Kayabaş2026
ER  -