
Academic Journal of Engineering and Technology Science, 2022, 5(11); doi: 10.25236/AJETS.2022.051107.

Visualized Analysis of Intention Recognition Research Based on CiteSpace

Author(s)

Yuxuan Fang1, Honglan Wu2, Hao Liu2

Corresponding Author:
Honglan Wu
Affiliation(s)

1Department of Changkong, Nanjing University of Aeronautics and Astronautics, Nanjing, China

2Department of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing, China

Abstract

Intention recognition has improved the accuracy and speed of human–machine interaction. However, owing to the field's rapid development, the diversity of its research content and methods, and its wide range of applications, research in this area lacks systematic integration. Based on 1234 selected papers, this paper applies visualization methods in CiteSpace to identify the most significant countries, institutions, journals, references, and keywords, demonstrating the field's development and research hotspots from 2011 to 2021. He Huang and Aaron J. Young are found to be the most productive authors. According to the keyword analysis, "Hidden Markov Model", "networks", and "feature extraction" are current hotspots. Furthermore, this composite analysis indicates the deficiencies of existing work and directions for future research.
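The keyword analysis described above amounts to counting keyword frequencies and co-occurrences across the bibliographic records, which is what underlies a CiteSpace co-word network. A minimal illustrative sketch in Python (the sample records and field names are invented for demonstration; real input would come from a Web of Science export loaded into CiteSpace):

```python
from collections import Counter
from itertools import combinations

# Hypothetical bibliographic records; in practice these would be
# parsed from a Web of Science / Scopus export file.
records = [
    {"keywords": ["hidden markov model", "feature extraction", "emg"]},
    {"keywords": ["networks", "feature extraction"]},
    {"keywords": ["hidden markov model", "networks"]},
]

# Keyword frequency: high-frequency keywords are candidate hotspots.
freq = Counter(kw for r in records for kw in r["keywords"])

# Keyword co-occurrence: each pair appearing in the same paper forms
# an edge of the co-word network that CiteSpace visualizes.
cooc = Counter(
    pair
    for r in records
    for pair in combinations(sorted(set(r["keywords"])), 2)
)

print(freq.most_common(2))
print(cooc[("feature extraction", "hidden markov model")])
```

CiteSpace additionally computes burst detection and betweenness centrality on this network; the sketch shows only the underlying counting step.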

Keywords

Intention recognition, Visualization, Knowledge map, CiteSpace

Cite This Paper

Yuxuan Fang, Honglan Wu, Hao Liu. Visualized Analysis of Intention Recognition Research Based on CiteSpace. Academic Journal of Engineering and Technology Science (2022) Vol. 5, Issue 11: 44-57. https://doi.org/10.25236/AJETS.2022.051107.
