Academic Journal of Engineering and Technology Science, 2024, 7(2); doi: 10.25236/AJETS.2024.070221.

Energy Consumption Analysis of Convolutional Neural Networks

Author(s)

Xiaokun Qi, Tian He

Corresponding Author:
Tian He
Affiliation(s)

College of Mechanical and Electrical Engineering, Qingdao University, Qingdao, 266071, China

Abstract

The rapid development and widespread application of convolutional neural networks have played a positive role in promoting social progress and economic development. An increasing number of neural networks are being deployed on mobile devices to meet the needs of visual perception tasks. However, running complex neural networks incurs a large computational cost, which shortens the working duration of mobile devices. Analyzing the energy consumption of neural networks therefore becomes crucial: such analysis identifies the main energy consumption points and provides direction for the subsequent low-energy design of neural networks. This paper selects six classic convolutional neural networks, AlexNet, VGG, GoogLeNet, ResNet, MobileNet, and ShuffleNet, to study the impact of different network structures on energy consumption, and compares their running time, power, and energy consumption. On this basis, the hotspot layers (convolutional, pooling, and fully connected layers) are analyzed, and the convolutional layer is found to account for the largest proportion of energy consumption. From this it is inferred that the internal structure of the convolutional layer is a key factor affecting the energy consumption of neural networks, and that improving the internal structure of the convolutional layer can therefore reduce algorithm energy consumption.
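
The abstract does not include the authors' measurement code, but the kind of per-layer hotspot breakdown it describes can be sketched as follows. This is a minimal illustration in PyTorch, not the paper's implementation: AlexNet stands in for any of the six networks studied, only the three hotspot layer types named in the abstract are instrumented, and wall-clock time is used as a rough proxy for energy (actual energy measurement depends on platform power metering, which the abstract does not specify).

```python
# Minimal sketch (not the authors' code): per-layer-type timing of one CNN
# forward pass in PyTorch. Time is a proxy for energy here, assuming roughly
# constant power; real energy measurement needs a platform power meter.
import time
from collections import defaultdict

import torch
import torchvision.models as models

model = models.alexnet(weights=None).eval()  # stand-in for any of the six networks
layer_time = defaultdict(float)

def make_hooks(name):
    start = {}
    def pre_hook(module, inputs):
        start["t"] = time.perf_counter()
    def post_hook(module, inputs, output):
        layer_time[name] += time.perf_counter() - start["t"]
    return pre_hook, post_hook

# Instrument the three hotspot layer types named in the abstract:
# convolutional, pooling, and fully connected layers.
hotspots = (torch.nn.Conv2d, torch.nn.MaxPool2d,
            torch.nn.AdaptiveAvgPool2d, torch.nn.Linear)
for module in model.modules():
    if isinstance(module, hotspots):
        pre, post = make_hooks(type(module).__name__)
        module.register_forward_pre_hook(pre)
        module.register_forward_hook(post)

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # single ImageNet-sized input, CPU timing

total = sum(layer_time.values())
for name, t in sorted(layer_time.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} {t*1e3:8.2f} ms  ({100*t/total:5.1f}% of hotspot time)")
```

On a GPU this naive timing would need explicit synchronization around each hook, since CUDA kernels launch asynchronously; on CPU the breakdown typically shows the convolutional layers dominating, consistent with the paper's finding.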

Keywords

Convolutional Neural Networks, Hotspot Layer, Energy Consumption Analysis

Cite This Paper

Xiaokun Qi, Tian He. Energy Consumption Analysis of Convolutional Neural Networks. Academic Journal of Engineering and Technology Science (2024) Vol. 7, Issue 2: 144-149. https://doi.org/10.25236/AJETS.2024.070221.
