Academic Journal of Computing & Information Science, 2020, 3(3); doi: 10.25236/AJCIS.2020.030304.

Fire detection in Surveillance Videos using a combination with PCA and CNN


Otabek Khudayberdiev1,*, Muhammad Hassaan Farooq Butt2

1. Sichuan Province Key Lab of Signal and Information Processing, Southwest Jiaotong University, Chengdu 610031, PR China
2. School of Information Science and Technology, Southwest Jiaotong University, Chengdu 61756, PR China
*Corresponding Author


This paper proposes a novel approach to early fire detection in closed-circuit television (CCTV) surveillance video that combines Principal Component Analysis (PCA) with Convolutional Neural Networks (CNN). The approach takes full advantage of traditional cues exploited by existing methods, such as the color and motion characteristics of fire. Because CNN-based fire detection systems typically demand substantial computation, memory, and time, we propose an energy-friendly CNN architecture for fire detection, inspired by MobileNet. The main role of PCA is to extract features from the raw data, which are then passed to the CNN architecture. Experimental results on benchmark fire datasets show that the proposed method achieves better classification performance and indicate that using a CNN to detect fire in video captures is an effective approach.
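The pipeline described above, PCA feature extraction followed by a lightweight, MobileNet-style CNN, can be illustrated with a minimal NumPy sketch. The paper does not publish its exact architecture, so the component count `k = 5`, the layer shapes, and the function names below are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def pca_features(X, k):
    """Project row-vector samples X (n, d) onto the top-k principal components."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centered data; rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mu

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """MobileNet-style block: per-channel 3x3 conv, then a 1x1 pointwise mix.

    x: (H, W, C); dw_kernels: (3, 3, C); pw_weights: (C, C_out).
    """
    H, W, C = x.shape
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))     # zero-pad spatial dims
    dw = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 3, j:j + 3, :]      # (3, 3, C) receptive field
            dw[i, j, :] = np.sum(patch * dw_kernels, axis=(0, 1))
    return np.maximum(dw @ pw_weights, 0.0)       # pointwise conv + ReLU

# Tiny demo: 8 flattened 16x16 grayscale frames reduced to 5 PCA features
rng = np.random.default_rng(0)
frames = rng.random((8, 16 * 16))
feats, components, mean = pca_features(frames, k=5)
print(feats.shape)   # (8, 5)
```

The depthwise-separable factorization is what makes MobileNet-style blocks energy-friendly: per pixel it costs roughly `9*C + C*C_out` multiplications instead of the `9*C*C_out` of a standard 3x3 convolution.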


Convolutional neural networks (CNN), deep learning, Principal component analysis (PCA), fire detection, surveillance networks, image classification.

Cite This Paper

Otabek Khudayberdiev, Muhammad Hassaan Farooq Butt. Fire detection in Surveillance Videos using a combination with PCA and CNN. Academic Journal of Computing & Information Science (2020), Vol. 3, Issue 3: 27-33. https://doi.org/10.25236/AJCIS.2020.030304.


[2] A. E. Çetin et al., ‘Video fire detection – Review’, Digit. Signal Process., vol. 23, pp. 1827–1843, 2013.
[3] O. Arandjelović, D. S. Pham, and S. Venkatesh, ‘CCTV Scene Perspective Distortion Estimation From Low-Level Motion Features’, IEEE Trans. Circuits Syst. Video Technol., vol. 26, no. 5, pp. 939–949, 2016.
[4] C.-B. Liu and N. Ahuja, ‘Vision based fire detection’, Proc. Int. Conf. Pattern Recognit. (ICPR), vol. 4, pp. 134–137, 2004.
[5] Y. Liu, L. Nie, L. Liu, and D. S. Rosenblum, ‘From action to activity: Sensor-based activity recognition’, Neurocomputing, vol. 181, pp. 108–115, 2016.
[6] P. Foggia, A. Saggese, and M. Vento, ‘Real-Time Fire Detection for Video-Surveillance Applications Using a Combination of Experts Based on Color, Shape, and Motion’, IEEE Trans. Circuits Syst. Video Technol., vol. 25, no. 9, pp. 1545–1556, 2015.
[7] T. Çelik and H. Demirel, ‘Fire detection in video sequences using a generic color model’, Fire Saf. J., vol. 44, no. 2, pp. 147–158, 2009.
[8] A. Rafiee, R. Dianat, M. Jamshidi, R. Tavakoli, and S. Abbaspour, ‘Fire and smoke detection using wavelet analysis and disorder characteristics’, Proc. 3rd Int. Conf. Comput. Res. Dev. (ICCRD), vol. 3, pp. 262–265, 2011.
[9] K. Muhammad, S. Khan, M. Elhoseny, S. Hassan Ahmed, and S. Wook Baik, ‘Efficient Fire Detection for Uncertain Surveillance Environment’, IEEE Trans. Ind. Informatics, vol. 15, no. 5, pp. 3113–3122, 2019.
[10] I. Garg, P. Panda, and K. Roy, ‘A Low Effort Approach to Structured CNN Design Using PCA’, IEEE Access, vol. 8, pp. 1347–1360, 2020.
[11] S. Pfeifer, ‘Combining PCA Analysis and Artificial Neural Networks in Modelling Entrepreneurial Intentions of Students’, Croat. Oper. Res. Rev., vol. 4, no. 1, pp. 306–317, 2013.
[12] K. Fukushima, ‘Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position’, Biol. Cybern., vol. 36, no. 4, pp. 193–202, 1980.
[13] Y. Liu, L. Nie, L. Liu, and D. S. Rosenblum, ‘From action to activity: Sensor-based activity recognition’, Neurocomputing, vol. 181, pp. 108–115, 2016.
[14] T. H. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, and Y. Ma, ‘PCANet: A Simple Deep Learning Baseline for Image Classification?’, IEEE Trans. Image Process., vol. 24, no. 12, pp. 5017–5032, 2015.
[15] W. Zhang et al., ‘Deep convolutional neural networks for multi-modality isointense infant brain image segmentation’, Neuroimage, vol. 108, pp. 214–224, 2015.
[16] K. He, X. Zhang, S. Ren, and J. Sun, ‘Deep residual learning for image recognition’, Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778, 2016.
[17] K. Simonyan and A. Zisserman, ‘Very deep convolutional networks for large-scale image recognition’, Proc. 3rd Int. Conf. Learn. Represent. (ICLR), pp. 1–14, 2015.
[18] S. Zagoruyko and N. Komodakis, ‘Wide Residual Networks’, Proc. Br. Mach. Vis. Conf. (BMVC), pp. 87.1–87.12, 2016.
[19] N. Passalis and A. Tefas, ‘Training Lightweight Deep Convolutional Neural Networks Using Bag-of-Features Pooling’, IEEE Trans. Neural Networks Learn. Syst., vol. 30, no. 6, pp. 1705–1715, 2019.
[20] D. Y. T. Chino, L. P. S. Avalhais, J. F. Rodrigues, and A. J. M. Traina, ‘BoWFire: Detection of Fire in Still Images by Integrating Pixel Color and Texture Analysis’, Proc. Brazilian Symp. Comput. Graph. Image Process. (SIBGRAPI), pp. 95–102, 2015.