Academic Journal of Computing & Information Science, 2023, 6(6); doi: 10.25236/AJCIS.2023.060607.

A VR User Behavior Classification Method Integrating Scene Information and Operation Information

Author(s)

Hao Zhou, Bo Mao

Corresponding Author:
Hao Zhou
Affiliation(s)

College of Information Engineering, Nanjing University of Finance & Economics, Nanjing, China

Abstract

With the rapid growth of the virtual reality (VR) industry, the number of VR users has increased rapidly, and analyzing user behavior in VR scenarios has become increasingly important. Compared with general time series classification tasks, the input sources in VR are more complex and diverse, and traditional time series classification models struggle to handle this complexity. This article designs a series of methods for classifying VR user behavior and evaluates them experimentally. Because both videos and VR scenes yield sequences of image data, video action recognition methods are applied to VR action recognition based on VR scene data. On this basis, a series of fusion methods is designed to incorporate VR operation information. Experiments show that these methods can effectively handle action recognition problems in VR scenes.
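The fusion of scene information and operation information described above can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the per-frame visual features, per-step operation features, temporal average pooling, and fusion by concatenation are all assumptions made for the sketch.

```python
import numpy as np

def classify_vr_behavior(scene_frames, op_signals, w, b):
    """Toy feature-level fusion classifier for VR user behavior.

    scene_frames: (T, D_s) array of per-frame visual features from the VR scene
    op_signals:   (T, D_o) array of per-step operation features (e.g. controller input)
    w:            (D_s + D_o, C) weights of a linear classifier over C behavior classes
    b:            (C,) bias

    Returns the index of the predicted behavior class.
    """
    scene_feat = scene_frames.mean(axis=0)          # temporal average pooling over frames
    op_feat = op_signals.mean(axis=0)               # temporal average pooling over operations
    fused = np.concatenate([scene_feat, op_feat])   # fuse the two modalities by concatenation
    logits = fused @ w + b                          # linear classification head
    return int(np.argmax(logits))
```

In practice the pooled features would come from learned extractors (e.g. a video backbone for the frames and a sequence model for the operations); the sketch only shows where the two information streams are joined.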

Keywords

Time Series Classification; VR User Behavior Classification; VR

Cite This Paper

Hao Zhou, Bo Mao. A VR User Behavior Classification Method Integrating Scene Information and Operation Information. Academic Journal of Computing & Information Science (2023), Vol. 6, Issue 6: 49-54. https://doi.org/10.25236/AJCIS.2023.060607.
