Academic Journal of Computing & Information Science, 2024, 7(9); doi: 10.25236/AJCIS.2024.070906.

Research on Visual Perception Method of Intelligent Robot Environment Based on Machine Learning

Author(s)

Minmin Chen

Corresponding Author:
Minmin Chen
Affiliation(s)

Moli Technology (Suzhou) Co., Ltd, Suzhou, Jiangsu, China

Abstract

With the development of modern science and technology, robots have begun to appear frequently in people's work and daily life, providing increasingly diverse intelligent services. Humans perceive their surroundings mostly through vision, and environmental visual perception has long been both a focus and a difficulty of robotics research. This paper studies machine-learning-based methods for the visual perception of an intelligent robot's environment. To raise the robot's level of environment perception, the problems of perceiving objects in the spatial environment and of perceiving target objects must be solved. The paper analyzes a machine learning model and improves it through random-variable analysis, strengthening the machine's ability to acquire information from the environment intelligently. It also analyzes a method for extracting the robot's three-dimensional shape category information, starting from the robot's perception of the target object's space, to improve the robot's environmental visual perception. Experiments show that the volumetric convolutional network model based on layer feature fusion reaches a recognition accuracy of 90.57% on the ModelNet-40 dataset, with an average recognition time of only 3.8 ms, 37.71% faster than other comparable models. These results indicate that this research is meaningful for improving both the speed and the accuracy of the visual perception of intelligent robots in complex environments.
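To give a concrete picture of the kind of model the abstract describes, the sketch below is a minimal volumetric (3D) convolutional network with layer feature fusion for voxelized shape classification, written in Python with PyTorch. It is an illustration only: the layer widths, the 32x32x32 voxel resolution, and the pool-and-concatenate fusion scheme are assumptions for the sketch, not the architecture actually used in the paper.

# Illustrative sketch, not the paper's model: a small 3D CNN whose
# per-stage features are globally pooled and concatenated ("layer
# feature fusion") before classification over 40 ModelNet-40 classes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionVoxNet(nn.Module):
    def __init__(self, num_classes: int = 40):  # ModelNet-40 has 40 classes
        super().__init__()
        self.conv1 = nn.Conv3d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(16, 32, kernel_size=3, padding=1)
        self.conv3 = nn.Conv3d(32, 64, kernel_size=3, padding=1)
        # Each stage is followed by 2x downsampling, so a 32^3 input
        # yields 16^3, 8^3, and 4^3 feature maps. Pooling every stage
        # and concatenating lets shallow and deep features both reach
        # the classifier.
        self.fc = nn.Linear(16 + 32 + 64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 32, 32, 32) occupancy grid
        f1 = F.max_pool3d(F.relu(self.conv1(x)), 2)
        f2 = F.max_pool3d(F.relu(self.conv2(f1)), 2)
        f3 = F.max_pool3d(F.relu(self.conv3(f2)), 2)
        # Global average pooling reduces each stage to a fixed-length vector.
        g = [t.mean(dim=(2, 3, 4)) for t in (f1, f2, f3)]
        return self.fc(torch.cat(g, dim=1))

if __name__ == "__main__":
    model = FusionVoxNet()
    voxels = torch.rand(4, 1, 32, 32, 32)  # dummy batch of voxel grids
    print(model(voxels).shape)  # torch.Size([4, 40])

Fusing globally pooled features from every convolutional stage lets low-level geometric cues and high-level semantic cues inform the classifier together, which is the general idea behind layer feature fusion.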

Keywords

Intelligent Robot, Visual Perception, Machine Learning, Neural Network

Cite This Paper

Minmin Chen. Research on Visual Perception Method of Intelligent Robot Environment Based on Machine Learning. Academic Journal of Computing & Information Science (2024), Vol. 7, Issue 9: 41-50. https://doi.org/10.25236/AJCIS.2024.070906.
