Frontiers in Educational Research, 2023, 6(12); doi: 10.25236/FER.2023.061222.

Research on classroom behavior analysis and optimization strategies based on convolutional neural network in smart learning environment

Author(s)

Jin Lu

Corresponding Author:
Jin Lu

Affiliation(s)

Guangdong Key Laboratory of Big Data Intelligence for Vocational Education, Shenzhen Polytechnic, Shenzhen, Guangdong, 518055, China

Abstract

Analyzing and predicting attention and learning-emotion states in teacher-student adaptive interaction within a smart teaching environment is a major development need in intelligent tutoring, teacher classroom evaluation, and classroom stress analysis. However, big data from classroom and online education are cross-modal, multi-source and heterogeneous, redundant in content and disordered in structure, and teacher-student relationships remain largely implicit across different types of courses, all of which pose great challenges to this analysis and prediction task. Following the research idea of "sentiment analysis-data modeling-group discovery-behavior prediction", this paper applies convolutional neural networks as the core algorithm to address the key problems of recognizing and predicting attention and learning emotions in teacher-student adaptive interaction in a smart teaching environment. The proposed method achieves 96.88% accuracy in practical application, at the cost of increased computation time. This study is expected to provide a key method for multimodal-data-driven analysis of teacher-learner adaptive interaction states in smart teaching environments, offering optimization strategies and theoretical support for personalized smart teaching, with considerable theoretical and practical value.
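
To make the approach concrete, the following is a minimal, hypothetical PyTorch sketch of a CNN classifier for frame-level classroom emotion/attention states. The layer sizes, 64x64 input resolution, and five-class output are assumptions chosen for illustration only and do not reflect the architecture actually used in the paper.

```python
# Illustrative sketch only (not the paper's architecture): a small
# convolutional classifier for frame-level classroom emotion/attention states.
import torch
import torch.nn as nn

class ClassroomBehaviorCNN(nn.Module):
    def __init__(self, num_classes: int = 5):  # assumed number of emotion/attention states
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # RGB frame -> 32 feature maps
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global average pooling
            nn.Flatten(),
            nn.Linear(64, num_classes),                   # per-class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example forward pass on a batch of eight 64x64 RGB classroom frames.
model = ClassroomBehaviorCNN()
logits = model(torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 5])
```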

Keywords

Classroom behavior analysis, Convolutional neural network, Smart learning environment

Cite This Paper

Jin Lu. Research on classroom behavior analysis and optimization strategies based on convolutional neural network in smart learning environment. Frontiers in Educational Research (2023) Vol. 6, Issue 12: 119-129. https://doi.org/10.25236/FER.2023.061222.
