International Journal of New Developments in Education, 2023, 5(25); doi: 10.25236/IJNDE.2023.052511.

The Effectiveness of Automated Written Corrective Feedback on L2 Learners’ Revision Outcomes: A Case for ChatGPT

Author(s)

Yushan Zhou

Corresponding Author:

Yushan Zhou

Affiliation(s)

School of Humanities and Social Science, The Chinese University of Hong Kong, Shenzhen, Shenzhen, China

Abstract

Technological advances have significantly reshaped L2 teaching pedagogy, leading to the widespread adoption of automated writing evaluation (AWE) tools in L2 learning. Previous research has focused mainly on the lower-order feedback provided by traditional AWE tools, while higher-order issues have been largely overlooked. The emergence of ChatGPT, a large language model, offers a fresh perspective on automated written corrective feedback (AWCF). This study investigates ChatGPT’s ability to address a range of writing problems, both lower-order and higher-order, in supporting L2 learners’ IELTS writing, and identifies its strengths and shortcomings relative to traditional AWE tools. The findings underscore the importance of tailoring ChatGPT’s feedback to individual needs and highlight its potential as an AWE resource for writing support. These insights can inform the integration of ChatGPT into L2 writing instruction, with implications for developing learners’ language proficiency and improving assessment practices in education.

Keywords

Second Language Acquisition; Automated Written Corrective Feedback; ChatGPT; Academic Writing

Cite This Paper

Yushan Zhou. The Effectiveness of Automated Written Corrective Feedback on L2 Learners’ Revision Outcomes: A Case for ChatGPT. International Journal of New Developments in Education (2023) Vol. 5, Issue 25: 58-65. https://doi.org/10.25236/IJNDE.2023.052511.
