Academic Journal of Agriculture & Life Sciences, 2023, 4(1); doi: 10.25236/AJALS.2023.040106.
Cao Tianzhi1, Hiroshi Okamoto2
1Graduate School of Agriculture, Hokkaido University, Sapporo, Hokkaido, 060-8589, Japan
2Research Faculty of Agriculture, Hokkaido University, Sapporo, Hokkaido, 060-8589, Japan
Root crop yield estimation and harvest technology, particularly Japanese yam production, have a high demand for intelligent automation systems. This paper develops a Japanese yam quality grading system based on computer vision and deep learning that automatically detects the shape and surface quality of Japanese yam and grades the size of harvested yams. Specifically, a lightweight deep learning model (CDD Net) based on ShuffleNet and transfer learning was constructed to detect surface and shape defects of Japanese yam. Size grading methods based on minimum bounding rectangle (MBR) fitting and convex polygon approximation were also proposed. Experimental results showed that the detection accuracy of the proposed CDD Net was 98.94% for binary classification (normal and defective) and 92.92% for multi-class classification (normal, curve, fork root, fracture), demonstrating good performance in both time efficiency and detection accuracy. The size grading accuracy of MBR fitting and convex polygon approximation was 92.8% and 95.1%, respectively. This study provides a practical method for defect detection and size grading, with great potential for application in yield estimation of root crops.
Computer Vision; Quality Grading System; Intelligent Automation System
Cao Tianzhi, Hiroshi Okamoto. Defect Detection and Size Grading of Harvested Japanese Yam Using Computer Vision Technology. Academic Journal of Agriculture & Life Sciences (2023) Vol. 4 Issue 1: 30-42. https://doi.org/10.25236/AJALS.2023.040106.
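The abstract names ShuffleNet with transfer learning as the basis of CDD Net, but does not give the network definition. The following is therefore only a minimal sketch of that general approach, assuming an ImageNet-pretrained torchvision ShuffleNet V2 backbone whose final fully connected layer is replaced for the four yam classes; the function name build_defect_classifier is illustrative and not the authors' code.

```python
# Hypothetical sketch: a ShuffleNet V2 backbone adapted by transfer learning for a
# 4-class yam defect classifier (normal, curve, fork root, fracture).
# This is NOT the authors' CDD Net definition, only a generic starting point.
import torch.nn as nn
from torchvision import models

def build_defect_classifier(num_classes: int = 4) -> nn.Module:
    # Load ImageNet-pretrained ShuffleNet V2 weights (transfer learning).
    model = models.shufflenet_v2_x1_0(
        weights=models.ShuffleNet_V2_X1_0_Weights.DEFAULT
    )
    # Replace the final fully connected layer with a new classification head.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model
```

The two size-grading ideas, MBR fitting and convex polygon approximation, can likewise be illustrated with standard OpenCV contour operations. This sketch assumes a pre-segmented binary mask containing a single yam and a hypothetical mm_per_pixel calibration factor; it shows the general technique, not the authors' exact measurement pipeline.

```python
# Hypothetical sketch of minimum bounding rectangle (MBR) fitting and
# convex polygon approximation on a binary yam mask, using OpenCV.
import cv2
import numpy as np

def measure_yam(mask: np.ndarray, mm_per_pixel: float = 1.0):
    """Return (length_mm, width_mm, hull_area_mm2) for the largest blob in `mask`."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)

    # MBR fitting: the rotated rectangle of minimum area around the contour.
    (_, _), (w, h), _ = cv2.minAreaRect(contour)
    length_mm = max(w, h) * mm_per_pixel
    width_mm = min(w, h) * mm_per_pixel

    # Convex polygon approximation: convex hull area as a size proxy.
    hull = cv2.convexHull(contour)
    hull_area_mm2 = cv2.contourArea(hull) * (mm_per_pixel ** 2)

    return length_mm, width_mm, hull_area_mm2
```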