A Study on The Application of ChatGPT in L2 Writing Feedback
Ruomin Chen
College of Education Sciences, Hunan Normal University, Changsha, Hunan, 410081, China
Keywords: EFL Writing, Automated Writing Evaluation, ChatGPT Feedback.
Abstract: This study compared the effects of ChatGPT feedback and written corrective feedback (WCF) in L2 Writing,
explored learners' perception and use of ChatGPT feedback, and proposed suggestions on how teachers can
design teaching intervention strategies to achieve ecological complementarity between the two. The study
found that ChatGPT performed well in language form accuracy but had limitations in content organization
effectiveness. Learners' use of ChatGPT feedback has shifted from tool dependence toward cognitive
autonomy but still faces challenges, and the role of teachers needs to change from feedback leaders to
managers of "human-computer collaboration". The study constructed a Human-Machine-Environment Triadic
Interaction Model and a four-dimensional action framework, providing theoretical guidance and operational
suggestions for L2 Writing teaching.
1 INTRODUCTION
Driven by the dual forces of globalization and
digitalization, the mediating role of English as a
Lingua Franca has gained increasing prominence.
Second language (L2) Writing ability has emerged as a
fundamental skill for cross-cultural communication,
academic publication, and career advancement.
However, traditional L2 Writing teaching has long
faced two major difficulties: first, the high cost and
low efficiency of written corrective feedback (WCF).
It takes an average of 20-30 minutes for a college
writing teacher to correct a 500-word student essay
sentence by sentence, and the expansion of class size
makes personalized feedback difficult to achieve.
Second, learners’ motivation to revise is weakened
because of delayed feedback. More than 60% of
students have not systematically revised their
manuscripts one week after receiving teachers'
corrections, which seriously restricts the
effectiveness of process writing. In this context, the
breakthrough progress of artificial intelligence
technology, especially Generative Pre-trained
Transformers (GPT), has provided new possibilities
for reconstructing L2 Writing feedback.
Large Language Models, represented by
ChatGPT, are causing profound changes in L2 Writing feedback with their natural language
generation capabilities and contextual reasoning
mechanisms that are close to human levels.
Compared with early Automated Writing Evaluation
(AWE) tools such as Grammarly or Criterion, ChatGPT’s
feedback mechanism presents three innovative
features. First, its generative architecture supports
full-dimensional dynamic feedback on text content,
logical structure, and rhetorical strategies, rather than
being limited to the detection of surface language
errors. Second, the multi-round dialogue function
allows learners to establish a collaborative revision
cycle with AI through follow-up questions,
clarification requests, etc., which is in line with the
“zone of proximal development” scaffolding
principle in Vygotsky’s sociocultural theory. Finally,
based on the prior knowledge of massive cross-corpus
training, ChatGPT can simulate the writing norms of
different styles (such as academic papers and business
emails) and cultural contexts (such as Anglo-
American vs. East Asian rhetorical traditions),
providing adaptive guidance for cross-cultural L2
Writing.
Currently, the application of ChatGPT in L2
Writing feedback has attracted widespread attention
from the academic community. Preliminary empirical
research shows that this tool has significant effects in
improving writing fluency and learner motivation
(Song and Song 2023). However, its potential risks
cannot be ignored: on the one hand, AI feedback may
contain cultural biases and contextual misjudgments,
such as mislabeling “spiral logic” in Chinese
students’ argumentative essays as structural chaos; on
the other hand, overreliance on AI may lead to the
degradation of students’ metacognitive strategies
(such as self-revision awareness), and even trigger an
academic integrity crisis (such as AI-ghostwritten
papers) (Yang and Zhang 2023).
Although some research has begun to explore the
educational application scenarios of ChatGPT, there
are still two major limitations in the existing
literature: First, research predominantly focuses
on verifying technical feasibility in general contexts,
with insufficient attention to discipline-specific
analyses of L2 writing demands, particularly cross-
cultural pragmatic transfer mechanisms. Second, the
methodology relies too much on technical
performance tests (such as BLEU score comparison),
ignoring the qualitative investigation of learners'
cognitive behaviors and teachers' teaching practices.
This review systematically synthesizes theoretical
and empirical studies on ChatGPT in L2 Writing
feedback assessment spanning 2021-2024, critically
integrating interdisciplinary evidence from
computational linguistics and educational
technology to address the following core research
questions:
1. What are the core advantages and limitations of
writing feedback generated by ChatGPT in terms of
language form accuracy and content organization
validity compared with WCF and other AI tools?
2. How do L2 Writing students adopt and use
ChatGPT feedback?
3. How should teachers design teaching
intervention strategies to achieve ecological
complementarity between WCF and ChatGPT
feedback?
2 CLASSIFICATION OF
RESEARCH TOPICS
2.1 Theoretical Framework
Grounded in Sociocultural Theory, Metacognitive
Theory, and the Technology Acceptance Model, this
study seeks to holistically explicate the operational
mechanisms of AI applications (e.g., ChatGPT)
within L2 writing pedagogy (Vygotsky 1978; Flavell
1979; Davis 1989).
Sociocultural Theory emphasizes that cognitive
development is the result of social interaction and
cultural tools. Vygotsky argued that the individual’s
advanced psychological functions (such as language,
logical reasoning, and problem-solving ability) are
developed through collaborative dialogue with More
Knowledgeable Others (MKO). In L2 Writing
teaching, ChatGPT can be regarded as an MKO,
helping learners improve their writing skills within
the Zone of Proximal Development by providing
immediate feedback and writing suggestions. In
addition, Vygotsky emphasized the core role of
language in cognitive development, believing that
language is not only a tool for communication but
also a medium of thinking. In the scenario of
ChatGPT-assisted writing, language is both a carrier
of ChatGPT feedback and a bridge for learners to
interact with ChatGPT, which further reflects the
application of sociocultural theory in human-
computer interaction.
Metacognitive Theory provides theoretical
support for understanding how learners monitor and
regulate their own learning process. Metacognitive
ability includes awareness and regulation of one's
own cognitive process. In L2 Writing, learners need
to optimize the writing process through
metacognitive strategies (such as planning,
monitoring, and evaluation). ChatGPT can help
learners better monitor their writing progress and
quality by providing feedback and suggestions,
thereby improving metacognitive ability. For
example, working with ChatGPT pushes learners to
refine their prompts, filter feedback content, and
revise independently; these processes reflect the
application of metacognitive strategies in human-
computer interaction.
The Technology Acceptance Model focuses on users'
acceptance and willingness to use new technologies
and emphasizes the impact of perceived ease of use
and perceived usefulness on technology acceptance.
In L2 Writing teaching, learners' and teachers'
acceptance of ChatGPT directly affects its application
effect.
Therefore, developing culturally sensitive AI
models, enhancing the representation of non-Western
corpora, and establishing dynamic monitoring
systems serve not only to optimize the cultural
adaptability of AI tools but also to strengthen user
acceptance of such technologies.
2.2 Application Scenarios and
Application Effects
The application of ChatGPT in English L2 Writing
feedback has expanded from grammar correction
alone to full-process collaborative writing
support. This section is based on the functional
classification of ChatGPT in L2 Writing, combined
with empirical evidence to explore the technical
implementation and teaching potential of its core
application scenarios.
2.2.1 Grammatical Correction and
Language Polishing
ChatGPT’s grammatical correction ability relies on
its pre-trained language model’s implicit learning of
large-scale standard texts. Compared with traditional
rule-based tools (such as Grammarly), its advantage
lies in the identification of context-sensitive errors
(Schmidt and Strasser 2022). For example, Chinese
learners often make subject-verb agreement errors
(such as “*He go to school”) because of negative
transfer from their native language, while ChatGPT
can not only correct it to “He goes” but also explain
the rules through dialogue (such as the additional
example sentence: “Compare: ‘They go’ vs. ‘He
goes’”).
Empirical studies have shown that ChatGPT has
an accuracy of 89.7% in grammatical correction
tasks, significantly higher than Grammarly’s 76.2%,
but its recall is 82.4%, slightly lower than the latter’s
85.1% (Koltovskaia 2020). This gap stems from its
failure to detect less conventional, L1-influenced
error patterns, such as the redundant conjunction pair
“*Although... but...” transferred from Chinese.
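To make the distinction between these two metrics concrete, the sketch below works through precision-style accuracy and recall on hypothetical counts; the figures are illustrative and are not taken from Koltovskaia (2020).

# Hypothetical illustration of the accuracy/recall distinction reported above.
def precision_recall(true_positives: int, false_positives: int, false_negatives: int):
    """Precision: share of the tool's flagged corrections that were genuine errors.
    Recall: share of the annotated errors the tool actually caught."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Made-up counts for one essay with 20 annotated errors:
# the tool flags 18 items, 16 of which are real errors.
p, r = precision_recall(true_positives=16, false_positives=2, false_negatives=4)
print(f"precision = {p:.2f}, recall = {r:.2f}")  # precision = 0.89, recall = 0.80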
However, over-reliance on AI feedback may lead
to interlanguage fossilization. One experiment found
that learners who frequently used ChatGPT
reproduced previously corrected sentence patterns in
their independent writing at a higher rate, indicating
that the tool may inhibit creative language attempts
(Niloy, Akter et al. 2024).
2.2.2 Content Generation and Logical
Structure Optimization
ChatGPT’s content generation function is widely
used to assist in writing ideas. By entering keywords
(such as “global warming effects”), learners can
quickly obtain argument frameworks and reference
suggestions. An experiment with intermediate learners
showed that students using ChatGPT’s content
generation function significantly outperformed a
traditional brainstorming group in the number of
arguments and argument depth in their essays (Ibnian
2011).
In terms of logical structure optimization,
ChatGPT can identify coherence breakdown in texts.
For example, by analyzing the lack of semantic
cohesive words (such as “however” and
“furthermore”) between paragraphs, it recommends
adding transition sentences to enhance the logical
flow. An intervention experiment showed that texts
optimized by ChatGPT improved in local coherence
scores, but gains in global structure were limited,
because ChatGPT’s sensitivity to cross-paragraph
semantic associations was still weaker than that of
manual feedback (Lin and Crosthwaite, 2024).
It is also important to remain vigilant about the
cultural-hegemony tendencies implicit in AI-generated
content. For example, a cross-cultural study found that
ChatGPT prioritized “linear logic” in its argumentative
essay framework suggestions while ignoring the “spiral
logic” commonly used by East Asian students, forcing
the latter to adapt to Western rhetorical norms (Smith,
Fleisig et al. 2024).
2.2.3 Multi-Round Conversational Writing
Tutoring
ChatGPT’s multi-round conversation function
enables it to simulate the role of a “writing tutor” and
guide learners to deepen their thinking through
follow-up questions (such as “Can you elaborate on
this example?”). Research based on Vygotsky’s
sociocultural theory shows that this interaction can
effectively build dynamic scaffolding. For example,
in a narrative writing task, students expanded the
description of story details from the vague “a happy
event” to a specific scene containing sensory
descriptions (such as “the smell of fresh bread”)
through 6 rounds of conversation with ChatGPT. A
longitudinal study tracked the behavioral patterns: in
the early stage (1-2 weeks), students tended to
directly request corrections (such as “Fix my
grammar”), while in the later stage (6-8 weeks), they
gradually turned to metacognitive questions (such as
“Why is this sentence unclear?”), indicating an
improvement in their awareness of autonomous
revision. However, low-level learners may find it
difficult to effectively design conversation prompts
due to language barriers, resulting in reduced
interaction efficiency (Yang and Li 2024).
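As a concrete illustration of how such a multi-round tutoring loop can be set up programmatically, the sketch below uses the OpenAI Python client; the model name, the system prompt, and the hard-coded student turns are illustrative assumptions rather than materials from the studies cited above.

# Minimal sketch of a multi-round "writing tutor" dialogue.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system",
     "content": ("You are an L2 writing tutor. Do not rewrite the student's text. "
                 "Ask one guiding question per turn that helps the student add "
                 "specific, sensory detail to the narrative.")},
    {"role": "user", "content": "Draft sentence: 'Last weekend I had a happy event.'"},
]

for _ in range(3):  # three tutoring rounds; a real session would loop on live student input
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    tutor_turn = reply.choices[0].message.content
    print("TUTOR:", tutor_turn)
    messages.append({"role": "assistant", "content": tutor_turn})
    # In practice the next user turn comes from the student; here it is a placeholder.
    messages.append({"role": "user", "content": "Here is my revised sentence: ..."})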
2.2.4 Cross-Cultural Writing Style
Adaptation
The application of ChatGPT in cross-cultural writing
is reflected in its genre style transfer capability. For
example, it converts a literal translation of a Chinese
native speaker’s business email (“I hope you can
reply quickly”) into an expression that conforms to
Western politeness conventions (“We would
appreciate your prompt response”).
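A minimal sketch of how such a register-transfer request might be phrased is given below; the prompt wording is a hypothetical illustration, not a template drawn from the cited work.

# Hypothetical register-transfer prompt for the business-email example above.
style_transfer_prompt = (
    "Rewrite the sentence below from a business email so that it follows Western "
    "politeness conventions for formal correspondence, and briefly explain what "
    "you changed and why:\n\n"
    "\"I hope you can reply quickly.\""
)
# The expected direction of the rewrite is something like
# "We would appreciate your prompt response."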
However, ChatGPT has limitations when
dealing with non-Western rhetorical traditions. For
example, in its feedback on Chinese students’
“introduction, development, turn, and conclusion”
paper structure, its suggestions tended to convert it into
the “Introduction-Methods-Results-Discussion”
framework, resulting in the loss of cultural
uniqueness. Developers are trying to mitigate this
problem through culture-aware fine-tuning, such as
adding academic journal corpora from outside Britain
and the United States to the training data.
2.3 Effect Evaluation and Limitations Research
The feedback effect of ChatGPT needs to be
evaluated against multiple empirical indicators,
including language gains, user behavior, and ethical
risks.
2.3.1 Effectiveness Research
Research shows that ChatGPT is significantly better
than manual feedback in terms of immediacy and
accessibility. A randomized controlled trial (RCT)
found that students revised their essays 1.74 times per
essay on average within 24 hours using ChatGPT AI
feedback, while the WCF group only revised their
essays 1.04 times per essay (Tran, 2025). However,
AI feedback has a limited effect on the cultivation of
higher-level writing skills (such as critical thinking):
students using ChatGPT scored lower in constructing
counterarguments in argumentative essays than those
in the WCF group.
The hybrid feedback mode shows greater
potential. For example, teachers prioritize content
logic while ChatGPT handles language polishing. In
this mode, the average revision frequency of students'
final drafts (content and language) is the highest, at
1.98 times per essay, and the teacher's revision
efficiency is also improved (Tran, 2025).
2.3.2 Limitation Analysis
ChatGPT's feedback implies an Anglo-American
centrism. ChatGPT tends to default to Standard
American English (SAE) or Standard British English
(SBE) when generating text and offers weak support
for non-"standard" English varieties (such as Indian
English, Nigerian English, and Irish English). This
tendency may lead to discrimination against non-
"standard" varieties, especially in comprehension,
expression, and feedback generation. When responding
to non-"standard" English varieties, ChatGPT's
answers contain more stereotyping (by 19%),
demeaning content (by 25%), lack of comprehension
(by 9%), and condescension (by 15%) than its
responses to "standard" varieties. These tendencies
not only affect users' trust in the tool but may also
exacerbate discrimination against non-"standard"
English speakers (Smith, Fleisig et al. 2024).
2.4 Research on Teachers’ and
Students’ Cognition
The application of ChatGPT in L2 Writing feedback
not only involves technical effectiveness but also has
a profound impact on the cognitive structure of
teachers and students and the teaching interaction
mode. This section explores its reconstruction effect
on the writing education ecology from the dual
perspectives of learners and teachers, combining
quantitative and qualitative evidence.
2.4.1 Teachers' Integration Strategy for AI
Feedback
(1) Teacher Cognition: Trust Threshold and Teaching
Reconstruction
Teachers' trust in ChatGPT varies across tasks. A
mixed-methods study of 125 college writing teachers
found that trust was highest for the grammar
correction function because of its strong verifiability,
and lowest for the content generation function
because of concerns about academic integrity risks.
Teachers' trust in ChatGPT's feedback also depends
on its explainability (explainable AI): when ChatGPT
provided reasons for its corrections (such as “to vary
the cohesive devices, the original ‘Firstly...
secondly...’ can be replaced with ‘Not only... but
also...’”), teacher acceptance increased.
(2) Teaching Practice: Teachers Are Trying a Human-
Machine Division of Labor Strategy
An action research study showed that teachers used
ChatGPT as a “first-draft language-polishing
assistant” while focusing their own feedback on in-
depth content issues. In this mode, the language error
rate of students’ final drafts decreased, and teachers’
correction time was reduced. However, differences in
technology adaptability led to polarization: teachers
who are good at using prompt engineering believe
that AI is a “super teaching assistant”, while those
who lack AI information literacy regard it as a
“source of interference” (Lin and Crosthwaite 2024).
2.4.2 Student Usage Behavior Patterns
(1) Risk of Dependency and Promotion of
Autonomous Learning
Learners’ satisfaction with ChatGPT showed obvious
tool dependence and level correlation. Most students
thought that AI feedback was “fast and practical”, but
less than half of them trusted its content suggestions.
This shows that although ChatGPT can provide
feedback quickly, students still have reservations
about the reliability of its feedback content. Further
qualitative interviews revealed the differences
between learners of different levels when using
ChatGPT. Relevant research has found that high-level
learners are better at optimizing the quality of
feedback through follow-up questions, such as adding
qualifiers (such as “Revise this paragraph for
academic tone”) to obtain more accurate suggestions.
In contrast, beginner learners are more passive in
accepting initial suggestions and lack the ability to
think critically and optimize the content of feedback.
This difference may be because of the stronger
metacognitive ability of high-level learners, which
enables them to better evaluate and utilize the
feedback provided by AI tools, while beginner
learners may over-rely on the initial output of AI due
to the lack of relevant strategies (Ng, Chan et al.
2025).
(2) Learning Motivation and Self-Efficacy
Learning motivation is a key variable that affects
usage behavior. A longitudinal experiment found that
students who used ChatGPT feedback weekly
significantly improved their motivation to learn
English writing, as well as their article organization,
coherence, grammar, and vocabulary, compared to
traditional classes. For example, the scores of
students in the experimental group on the writing
motivation scale were significantly higher than those
in the control group (p < 0.001). The average score of
the experimental group in the pre-test was 17.36
(standard deviation = 3.12), and the average score in
the post-test was 20.06 (standard deviation = 3.33),
indicating that AI-assisted writing can effectively
stimulate students' interest and motivation in writing
(Song and Song 2023).
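As a rough indication of the magnitude of this change, a standardized effect size can be computed from the reported means and standard deviations; the sketch below uses the independent-samples pooled-SD approximation because the pre-post correlation is not reported, so the resulting figure is indicative only.

# Indicative effect size from the motivation scores reported above (Song and Song 2023).
# The pooled-SD formula treats pre- and post-tests as independent samples, which is
# only an approximation for a repeated-measures design.
from math import sqrt

pre_mean, pre_sd = 17.36, 3.12
post_mean, post_sd = 20.06, 3.33

pooled_sd = sqrt((pre_sd**2 + post_sd**2) / 2)
cohens_d = (post_mean - pre_mean) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")  # about 0.84, a large effect by conventional benchmarks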
Self-efficacy is another key variable that affects usage
behavior. Students who used ChatGPT scored
significantly higher on academic writing self-efficacy
than those who did not: ChatGPT users averaged 3.96
(SD = 0.61), while non-users averaged 3.44
(SD = 1.38).
This difference suggests that the use of ChatGPT can
significantly improve students' writing self-efficacy,
probably because ChatGPT provides continuous
feedback, which enhances students' sense of
achievement and writing confidence (Bouzar, El
Idrissi et al. 2024).
3 RESEARCH FINDINGS
3.1 The Effectiveness Boundary of
ChatGPT Feedback: The
Dialectical Relationship Between
Accuracy and Depth
Regarding research question 1 (technical
effectiveness), ChatGPT significantly outperforms
traditional AWE tools regarding language form
accuracy (such as grammatical correction) and instant
feedback efficiency. However, as the complexity of
the task increases, the advantage of ChatGPT
gradually weakens. Regarding content organization
validity, although ChatGPT can improve local
coherence, its contribution to global structure
optimization (such as cross-paragraph logical
cohesion) is limited. In addition, ChatGPT also has
cultural adaptation bias and tends to convert non-
Western rhetorical structures into Anglo-American
paradigms. This shows that current technology is
better at dealing with micro-language problems,
while the cultivation of macro-cognitive abilities
(such as critical thinking) still requires human
intervention.
ChatGPT is significantly better than human
feedback in terms of immediacy and accessibility. For
example, students are more inclined to quickly adopt
ChatGPT's grammatical correction suggestions,
while WCF has a relatively low adoption rate due to
delays. However, in the cultivation of high-level
writing skills, such as critical thinking and the
construction of argumentative counterarguments, the
effect of AI feedback is limited. In contrast, WCF has
more advantages in these aspects.
Therefore, the hybrid feedback model shows
greater potential. In this mode, teachers can focus on
correcting the content logic, while ChatGPT is
responsible for language polishing. This division of
labor not only improves the overall quality of
students' final drafts but also reduces the workload of
teachers.
3.2 Learner Behavior Patterns: The
Challenge of Transitioning from
Tool Dependence to Cognitive Agency
Regarding research question 2 (learner cognition), the
data showed that L2 learners exhibited a significant
proficiency-stratification effect: beginner users tended
to passively accept AI suggestions, resulting in a
decrease in confidence in creative expression; while
intermediate and advanced learners transformed AI
into a "thinking expander" through metacognitive
strategies (such as selective questioning), and the
quality of their argumentative essay counter-
argument construction was improved. In addition,
cultural background profoundly shapes usage
preferences: East Asian students are more inclined to
use AI for cross-cultural style adaptation (such as
business email rhetoric), while native Spanish
speakers prioritize solving grammatical transfer
errors (such as article misuse). These findings call for
differentiated teaching design rather than universal
technical solutions.
3.3 Reconstructing the Role of
Teachers: From Feedback
Monopolists to Human-Machine
Collaborative Managers
In response to research question 3 (teaching
strategies), the Human-Machine-Environment
Triadic Interaction Model was confirmed to be the
optimal practice path. When teachers focus on in-
depth content feedback (such as identifying loopholes
in arguments) and entrust language polishing to AI,
the overall score of students' final drafts is improved,
and the teacher's workload is reduced accordingly.
Successful cases show that teachers need to master
prompt engineering and feedback stratification skills:
for example, requiring ChatGPT to "only mark the
type of error rather than directly correct it" (retaining
student autonomy) or limiting its feedback dimensions
(such as "checking the use of academic vocabulary").
However, differences in teachers' digital literacy lead
to a practice gap: the high-skilled group believes that
AI is a "super teaching assistant", while the low-
skilled group faces "technical frustration".
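A minimal sketch of what such a constrained prompt might look like is given below; the wording is an illustrative assumption rather than a template drawn from the cited studies.

# Hypothetical prompt applying the two constraints mentioned above:
# mark error types without correcting them, and restrict feedback to one dimension.
feedback_prompt = (
    "You are assisting a writing teacher. For the student essay below:\n"
    "1. Only mark the TYPE of each error (e.g., subject-verb agreement, article use); "
    "do NOT supply the corrected version.\n"
    "2. Comment only on academic vocabulary use; ignore content and organization, "
    "which the teacher will address.\n\n"
    "Student essay:\n{essay_text}"
)

# A teacher (or a simple script) would send feedback_prompt.format(essay_text=...)
# to ChatGPT and pass the resulting error-type list back to the student for self-revision.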
4 CONCLUSION
This paper has systematically reviewed the
theoretical and empirical research on ChatGPT in
English L2 Writing feedback, revealing the multiple
impacts of generative AI technology on writing
education.
4.1 Research Significance
This study focuses on the application of ChatGPT in
L2 Writing feedback. Through theoretical innovation
and practical exploration, it provides new
perspectives and strategies for L2 Writing teaching.
On the theoretical level, this study breaks through
the “technical efficacy-centrism” of traditional AWE
research, integrates Sociocultural Theory,
Metacognitive Theory, and the Technology
Acceptance Model, and constructs the Human-
Machine-Environment Triadic Interaction Model. This
model emphasizes that the feedback effect of
ChatGPT depends not only on the accuracy of the
algorithm but also on the cognitive strategies and
cultural context of teachers and students. This view
moves beyond the previous focus on the technical
efficacy of the tool alone and reveals
that the essence of writing ability is the distributed
cognitive ability developed by learners in human-
machine negotiation.
For teaching practice, this study proposes a four-
dimensional action framework and provides specific
operational suggestions for L2 Writing teachers.
First, it is recommended that teachers design tasks
according to the level of students. Beginner learners
can focus on grammar proofreading, while advanced
learners can explore conversational content
generation. Second, a workshop on “Critical Use of
ChatGPT” should be held to teach students prompt
optimization, feedback screening, and self-
revision strategies to enhance their metacognitive
ability. Third, it is recommended to adopt a hybrid
evaluation system of “ChatGPT initial screening-
teacher’s fine review-student defense” to clarify the
boundaries of the human-machine division of labor
and leverage the complementary strengths of ChatGPT
and teachers. Finally, teachers should work with
students to formulate an “AI Use Charter” and incorporate technical
compliance into the writing scoring standards to
regulate the use of AI and enhance students’ ethical
awareness.
In summary, this study provides a comprehensive
framework and suggestions for the application of
ChatGPT in L2 Writing feedback through theoretical
innovation and practical exploration. Future research
can further focus on the application effect of AI
technology in different cultural contexts, and how to
promote the sustainable development of AI in L2
Writing teaching through technical optimization and
policy support.
4.2 Research Limitations and Future
Research Directions
Through literature analysis, this paper has surveyed
research on ChatGPT in L2 Writing feedback,
presenting its theoretical frameworks, application
scenarios, effects, and limitations, and exploring its
impact on students'
and teachers' learning outcomes. The research results
show that AI has multiple application potentials in
education, but it also faces many challenges. The
application scenarios of ChatGPT in L2 Writing
feedback cover grammar correction and language
polishing, content generation and logical structure
optimization, multi-round conversational writing
tutoring, cross-cultural writing style adaptation,
personalized learning support, and other aspects. For
students, ChatGPT can improve learning motivation
and participation, raise academic performance,
enhance 21st-century skills, and develop non-cognitive
abilities. For teachers, ChatGPT improves work
efficiency and teaching ability, and teachers generally
hold a positive attitude towards it. At the same
time, the limitations of ChatGPT in L2 Writing
feedback cannot be ignored, such as students' over-
reliance on the content generated by ChatGPT,
students' suppressed creativity, and teachers'
authority being challenged.
This study also has limitations: the search strategy
may have omitted some relevant literature and
supplementary perspectives, resulting in a lack of
detail in parts of the analysis. Future research
should focus on the following frontiers. First, the
development of culturally responsive AI is crucial.
Research should focus on building feedback models
that support multimodal rhetorical traditions, such as
the Arabic "balagha" rhetoric, to improve the
adaptability of AI tools in different cultural contexts.
Second, neuroeducation research should be the focus
of future exploration. Researchers can explore the
reshaping effect of AI feedback on the neural circuits
of learners' writing, thereby revealing the neural
mechanism of technological intervention. Finally, the
construction of a global governance framework is
urgently needed. Given the cross-border impact of AI
technology, future research needs to promote the
establishment of a transnational AI education ethics
alliance to jointly formulate codes covering data
privacy, intellectual property rights, and cultural sovereignty
to ensure the sustainable development and ethical
compliance of AI technology in the global education
field.
In summary, although this study provides a
theoretical and practical framework for the
application of ChatGPT in L2 Writing, its limitations
cannot be ignored. Future research needs to conduct
in-depth exploration in areas such as technological
iteration, cultural adaptability, neural mechanisms,
and global governance to promote further
development of research in this field.
REFERENCES
Bouzar, A., El Idrissi, K. & Ghourdou, T. 2024. ChatGPT
and Academic Writing Self-Efficacy: Unveiling Correlations
and Technological Dependency among Postgraduate
Students. Arab World English Journal 1(1): 225-236.
Davis, F. D. 1989. Perceived Usefulness, Perceived Ease of
Use, and User Acceptance of Information Technology.
MIS Quarterly 13(3): 319-340.
Flavell, J. H. 1979. Metacognition and cognitive monitoring:
A new area of cognitive-developmental inquiry.
American Psychologist 34(10): 906-911.
Ibnian, S. S. K. 2011. Brainstorming and essay writing in
EFL class. Theory and Practice in Language Studies 1(3):
263-272.
Koltovskaia, S. 2020. Student engagement with automated
written corrective feedback (AWCF) provided by
Grammarly: A multiple case study. Assessing Writing
44: 100450.
Lin, S. & Crosthwaite, P. 2024. The grass is not always
greener: Teacher vs. GPT-assisted written corrective
feedback. System 127.
Motoki, F., V. Pinho Neto and V. Rodrigues 2024. More
human than human: measuring ChatGPT political bias.
Public Choice 198(1): 3-23.
Ng, D. T. K., Chan, E. K. C. & Lo, C. K. 2025. Opportunities,
Challenges and School Strategies for Integrating
Generative AI in Education. Computers and Education:
Artificial Intelligence: 100373.
Niloy, A. C., Akter, S., Sultana, N., Sultana, J. & Rahman,
S. I. U. 2024. Is ChatGPT a menace for creative writing
ability? An experiment. Journal of Computer Assisted
Learning 40(2): 919-930.
Schmidt, T. & Strasser, T. 2022. Artificial intelligence in
foreign language learning and teaching: a CALL for
intelligent practice. Anglistik: International Journal of
English Studies 33(1): 165-184.
Smith, G., Fleisig, E., Bossi, M., Rustagi, I. & Yin, X. 2024.
Standard Language Ideology in AI-Generated Language.
arXiv preprint.
Song, C. & Song, Y. 2023. Enhancing academic writing
skills and motivation: assessing the efficacy of ChatGPT
in AI-assisted language learning for EFL students.
Frontiers in Psychology 14: 1260843.
Tran, T. T. T. 2025. Enhancing EFL Writing Revision
Practices: The Impact of AI- and Teacher-Generated
Feedback and Their Sequences. Education Sciences
15(2): 232.
Vygotsky, L. S. 1978. Mind in society: The development of
higher psychological processes. Harvard University
Press.
Yang, L. & Li, R. 2024. ChatGPT for L2 learning: Current
status and implications. System 124: 103351.
Yang, L. F. & Zhang, L. J. 2023. Self-regulation and student
engagement with feedback: The case of Chinese EFL
student writers. Journal of English for Academic
Purposes 63: 101226.