The Frontiers of Society, Science and Technology, 2024, 6(9); doi: 10.25236/FSST.2024.060901.

Visual Analysis of Research in the Field of Food Target Detection

Author(s)

Chen Xieyu1,2, Tang Na1,2

Corresponding Author:

Chen Xieyu

Affiliation(s)

1School of Business, Geely University of China, Chengdu, China

2Key Laboratory of Sichuan Cuisine Artificial Intelligence, Chengdu, China

Abstract

Based on CiteSpace, this study conducts a bibliometric visualization analysis of 401 research articles related to food target detection, retrieved from the Web of Science and covering the period from 2014 to 2024, to explore the field's recent research progress, hotspots, and limitations. The results indicate close international collaboration in food target detection, which has formed a core cooperative network. The cited references fall into 8 clusters, with most highly cited works focusing on theoretical research, specifically on the detection algorithms themselves. In contrast to the cited references, the documents in the research sample fall into 10 keyword-based clusters, predominantly concerned with the identification and detection of food components. The study also reveals certain limitations, such as the absence of a core group of authors in the field, limited authorship diversity, and the lack of an interdisciplinary collaborative network. In light of this, the paper proposes corresponding solutions from the perspective of resource sharing and co-construction.
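Note on the method: CiteSpace is an interactive GUI tool, so the clustering it performs is not invoked from a script. For readers who want a rough analogue of the keyword co-occurrence clustering described above, the following minimal Python sketch builds a weighted co-occurrence network with networkx and partitions it by modularity. The sample records and keyword lists are hypothetical, and modularity-based community detection only approximates CiteSpace's cluster labelling; it is not the authors' pipeline.

# Minimal sketch, NOT the authors' pipeline: approximate a CiteSpace-style
# keyword cluster analysis with a weighted co-occurrence graph.
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical records: each entry stands for the keyword list of one
# Web of Science article in the sample.
records = [
    ["deep learning", "object detection", "food quality"],
    ["yolo", "object detection", "fruit grading"],
    ["deep learning", "yolo", "freshness"],
]

G = nx.Graph()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        # Increment the co-occurrence weight for this keyword pair.
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Modularity-based community detection, analogous in spirit to the
# 8 reference clusters and 10 keyword clusters reported in the paper.
clusters = greedy_modularity_communities(G, weight="weight")
for i, cluster in enumerate(clusters):
    print(f"Cluster #{i}: {sorted(cluster)}")

Applied to a real Web of Science keyword export, a network of this kind can be inspected for the same hotspot structure that the CiteSpace maps reveal.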

Keywords

Target detection, Food, Visualization, CiteSpace

Cite This Paper

Chen Xieyu, Tang Na. Visual Analysis of Research in the Field of Food Target Detection. The Frontiers of Society, Science and Technology (2024), Vol. 6, Issue 9: 1-8. https://doi.org/10.25236/FSST.2024.060901.
