Academic Journal of Computing & Information Science, 2021, 4(7); doi: 10.25236/AJCIS.2021.040707.

Automatic Annotation Method of Terror Image Based on Integrated Deep Migration Learning

Author(s)

Wenjun Zhao1, Miaolei Deng1, Dexian Zhang1, Hui Gao2

Corresponding Author:
Dexian Zhang
Affiliation(s)

1School of Information Science and Engineering, Henan University of Technology, Zhengzhou, China

2School of Mechanical and Electrical Engineering, Henan University of Technology, Zhengzhou, China

Abstract

With the continuous development of Internet technology, terror-related images have become an important concern in online counter-terrorism. This paper proposes an automatic terror-image annotation method based on integrated deep transfer learning, which combines parameter transfer with ensemble learning and can help filter terror-related content in web pages. First, a deep convolutional neural network is trained on terror images in the source domain; the learned parameters are then carried from the source domain to the target domain through transfer learning. Finally, an ensemble learning framework integrates the transferred models. Experimental results show that the accuracy and recall of the proposed algorithm are significantly improved.
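As a rough illustration of the pipeline the abstract describes, and not the authors' implementation, the sketch below fine-tunes ImageNet-pretrained convolutional networks (parameter transfer from the source domain) and averages their softmax outputs as a simple soft-voting ensemble. The choice of backbones (ResNet-50, VGG-16), the two-class label set, and the voting rule are assumptions made for the example.

# Minimal sketch of parameter transfer plus ensemble annotation (assumptions noted above).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # assumed label set: terror vs. non-terror

def build_transfer_model(backbone_fn, num_classes=NUM_CLASSES):
    """Load an ImageNet-pretrained backbone and replace its classifier head."""
    model = backbone_fn(weights="IMAGENET1K_V1")   # parameter transfer from the source domain
    for p in model.parameters():                   # freeze the transferred features
        p.requires_grad = False
    # Swap the final layer so only the new head is trained on the target domain.
    if hasattr(model, "fc"):                       # e.g. ResNet-style backbones
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    else:                                          # e.g. VGG-style backbones
        model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_classes)
    return model

# Several transferred models form the ensemble (the backbones here are illustrative).
ensemble = [build_transfer_model(models.resnet50),
            build_transfer_model(models.vgg16)]
for m in ensemble:
    m.eval()

@torch.no_grad()
def annotate(batch):
    """Soft voting: average the softmax scores of the transferred models."""
    probs = torch.stack([m(batch).softmax(dim=1) for m in ensemble]).mean(dim=0)
    return probs.argmax(dim=1)   # predicted label per image

After the new heads are fine-tuned on target-domain images, annotate() labels a batch of preprocessed image tensors; averaging probabilities rather than hard votes is one common way to integrate the transferred models, used here only as an illustrative choice.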

Keywords

Terror image, automatic annotation, transfer learning, ensemble learning

Cite This Paper

Wenjun Zhao, Miaolei Deng, Dexian Zhang, Hui Gao. Automatic Annotation Method of Terror Image Based on Integrated Deep Migration Learning. Academic Journal of Computing & Information Science (2021), Vol. 4, Issue 7: 46-51. https://doi.org/10.25236/AJCIS.2021.040707.
