Web-based Writing Assessment to Enhance Students’ English
Writing Performance
Erikson Togatorop
School of Education, University of Newcastle, Newcastle, Australia
Politeknik Negeri Batam, Indonesia
Keywords: Writing, English Writing, Writing Assessment, English Writing Assessment, Writing Performance,
English Writing Performance.
Abstract: Difficulties with English writing are a common cause of low English writing performance particularly for the
students who learn English as a second or a foreign language. This has been long become the concern of the
English educators and researchers, but not yet address how to facilitate the teachers or lecturers to do the
writing assessment easier and faster so that the students may get feedback promptly which is very important
for them to get progress in their writing learning and improve their writing performance. This study tried to
apply a web-based writing assessment and attempted to examine its affectivity to enhance the student writing
performance by employing a quantitative method in quasi-experiment design in the State Polytechnic of
Batam (Polibatam), Indonesia. It found that the web-based writing assessment as the treatment group was
better in enhancing students writing performance compared to the traditional conventional writing assessment
as the control group.
1 INTRODUCTION
English is a global language of communication and
writing is essential for students, both during their
study and after they graduate. Most student
assignments are written, so improving their writing
skills may give students a better chance to show what
they have learnt, whether they submit on paper or
through electronic media (Freiberg, 2008). When
students finish their study and move to the workforce,
writing will remain one of the determining factors of
a successful career. Inadequate writing proficiency has caused serious difficulties for some university graduates and produced strain in work requiring writing skills (Bonwell & Eison, 1991).
Writing is indeed a complex process that encompasses the invention of ideas and their clear expression in well-organized statements and paragraphs (Nunan, 2003). It requires the consolidation of cognitive effort, attention control, and self-regulation (Graham & Harris, 2003), which makes writing difficult to master, even for first language (L1) learners (Nacira, 2010). Writing difficulties may include lack of knowledge about the topic to be written about, lack of strategy in planning the text, and lack of proficiency in producing or revising it (Graham & Harris, 2003). Second language (L2) writing learners face even more difficulty, since words may differ between their spoken and written forms and languages differ in grammar.
Writing is important but it causes difficulties not
only for students who learn to write but also for the
teachers or lecturers who assess their pieces of
writing. Some researchers reported emerging
problems due to the teachers’ incompetence in
assessing writing (Edwards, 2012; Lee, 2008, 2009;
Montgomery & Baker, 2007), and even when teachers or lecturers are able to assess writing well, it still takes time. This long writing assessment
turnaround means that students frequently receive
late feedback. Students who receive late feedback are
often disappointed and find it more difficult to
improve their writing. Indonesian teachers and
lecturers face many large classes. The complex
nature of writing needs extra time and the large
classes in Indonesia make checking and marking
student writing difficult. This situation is common in most Asian countries and some other developing countries
(Exley & Dennick, 2004) and often leads to low
quality, delayed and sometimes non-existent
feedback on assessment of student writing (Chang,
2007). Problems in giving prompt feedback and the
complexity of the writing (Brown, 2001; Harmer,
2007; Nunan, 1999) both increase student
dissatisfaction, leading to low writing performance
(Nemati, Alavi, Mohebbi, & Masjedlou, 2017). There is an emerging need for a more effective way to help teachers and lecturers assess students’ writing and give them prompt feedback to improve their writing performance.
This research therefore evaluated web-based support that helps lecturers assess students’ writing tasks faster. The web-based writing assessment was expected to enable the lecturer to reduce the turnaround time in assessing the students’ many pieces of writing. This would allow the students to get prompt feedback on their written work and hence was expected to improve their writing performance.
2 LITERATURE REVIEW
Writing is a complex but essential skill for students. This proficiency plays a vital role in a student’s academic progress and success, as it remains one of the main learning and assessment practices, especially in tertiary education (Chang, 2007). Writing difficulties are usually associated with its complicated components, such as the development of ideas, syntax, grammar, organization, vocabulary, content, communication skills, and the use of punctuation (Brown, 2001; Harmer, 2007). These complexities make writing a difficult skill to acquire and frequently bring students to a level of discouragement. These issues have long been a concern of English educators and researchers. A long line of research has sought solutions, including efforts to apprehend the nature of writing itself and to formulate definitions of writing performance in both L1 and L2, writing assessment, feedback in writing assessment, and the use of technology in writing assessment.
A. Writing Performance
The effort to find a solution for writing difficulty should start from understanding the nature of writing itself in both the L1 and L2 contexts. Nunan (2003, p. 88) defines writing as “the mental work of inventing ideas, thinking about how to express them, and organizing them into statements and paragraphs that will be clear to the reader.” Other experts stress the process it takes. Oshima and Hogue (2007), for example, define writing as an iterative process of revising and rewriting. It was initially accepted that the L1 writing approach could be used as the “starting point” for L2 writing.
Further research, though, has been conducted to formulate a more comprehensive definition of L2 writing performance. One of the most central objects of research in this respect is the position of linguistic competence in L1 and L2 writing performance. L2 learner writers commonly still struggle to build grammatically correct sentences, depending on their level of language proficiency, whereas L1 learner writers, whatever their level of linguistic competence, are at least familiar with the linguistic features of their language, so they have no substantial problem with grammatical sentence forms. Researchers in this line could conveniently define L1 writing performance as “a writer’s creativity, logic, voice, style, success at self-discovery, and skill at knowledge transforming” (Gennaro, 2006, p. 11). The same competencies are embedded in the definition of L2 writing performance, with the addition of some more essential elements such as “L2 linguistic proficiency, balance between linguistic and rhetorical sophistication (organization, coherence, development) and task demands” (Gennaro, 2006, p. 11). In more detail, these L2 writing competencies are divided into discourse competencies and language competencies. The discourse competencies cover the “organization, coherence, progression, development of ideas, and, depending on task, the ability to integrate or summarize sources” (Gennaro, 2006, p. 11). The language competencies, on the other hand, encompass “vocabulary, illocutionary markers, morphosyntax, spelling, and punctuation” (Gennaro, 2006, p. 11). The greater emphasis on the role of linguistic features distinguishes the competencies of L2 writing from those of L1 writing and is also a consideration in assessing L2 writing performance.
B. Writing Assessment
Assessment is “a process for documenting, in measurable terms, the knowledge, skills, attitudes, and the beliefs of the learner” (Capraro et al., 2012, p. 1). In the context of writing assessment, there are two established methods of assessing writing ability, i.e. direct and indirect writing assessment (Weigle, 2002), as can be seen in Table 1 below.
Table 1. Writing Assessment Methods

Writing Assessment Method | Treatment
1. Direct Method | requires at least one writing sample, assessing writing competence as a whole
   a. Holistic | gives a single score to all elements of a piece of writing, based on the rater’s overall or general impression
   b. Analytic | gives a separate score for each element of the rating
   c. Primary Trait | concentrates on a certain element as the primary trait, but may still include a secondary trait with a smaller percentage
   d. Weighting Trait | gives more weight to certain elements
2. Indirect | measures different features of language competence separately, such as grammar or vocabulary, typically in the form of a multiple-choice test
The difference between direct and indirect writing assessment can be clearly seen in the table above. Direct writing assessment requires the examinee to produce an actual piece of writing that can be assessed by the Holistic, Analytic, Primary Trait, or Weighting Trait method. Holistic scoring is based on a set of criteria, but the rater does not score each criterion individually. The Educational Testing Service (ETS) is one example of an organization that applies holistic writing scoring. Analytic scoring, on the other hand, gives individual scores to several different elements of writing. For example, it may score five elements, namely content, organization, vocabulary, language use, and mechanics, each weighted at 20 percent. Weighting Trait scoring is basically the same as Analytic scoring but gives more weight to certain elements; it may, for instance, give a higher percentage to content or another element of the writing. When a particular writing element is given a much higher percentage than the others, the method is a Weighting Trait assessment. Indirect writing assessment, on the other hand, does not require an actual piece of writing. Test takers’ competency is assessed through their “appropriate use of language in a series of objective test items which often follow a multiple-choice format” (Stiggins, 1982, p. 102). These methods have been applied many times, for example by Boring (2005) in comparing the validity of human and computerized writing assessment, by Jayamani, Heng, and Bakri (2016) in assessing manufacturing subjects, by Ounis (2017) in assessing a speaking course, and by Swain and Friedrich (2018) in assessing a writing course.
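To make the contrast between the analytic and the weighting trait methods concrete, the following Python sketch computes both scores for the five-element rubric mentioned above; the marks and the weighting trait weights are made-up illustrations, not the rubric of any cited organization.

```python
# Hypothetical element marks for one essay, each on a 0-100 scale;
# the five elements follow the analytic example in the text.
marks = {
    "content": 60,
    "organization": 75,
    "vocabulary": 90,
    "language_use": 70,
    "mechanics": 85,
}

# Analytic scoring: every element carries the same weight (20% each).
analytic = {element: 0.20 for element in marks}

# Weighting Trait scoring: same elements, but content is weighted more
# heavily (these weights are illustrative assumptions).
weighting_trait = {"content": 0.40, "organization": 0.15,
                   "vocabulary": 0.15, "language_use": 0.15,
                   "mechanics": 0.15}

def score(marks, weights):
    """Weighted sum of the element marks; weights must sum to 1.0."""
    return sum(marks[e] * weights[e] for e in marks)

print(score(marks, analytic))         # 76.0
print(score(marks, weighting_trait))  # 72.0 (weak content now costs more)
```

The same marks produce different totals once one element dominates the weights, which is exactly the distinction between the two methods.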
L1 writing assessment can commonly be applied to L2 assessment with more emphasis on the linguistic elements. In more detail, these elements are called linguistic microfeatures and are divided into three groups, i.e. lexical, syntactic, and cohesion features. On the lexical features, the better-scored L2 writings are those with more words, with longer words (more letters or syllables), and with greater lexical variety. On the syntactic features, good L2 writings are expected to present more subordination and more use of the passive voice. On the cohesion features, L2 writers are expected to use more conjunctions (Crossley, Kyle, Allen, Guo, & McNamara, 2014).
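To make the lexical microfeatures concrete, the sketch below computes three simple indicators: word count, mean word length, and type-token ratio as a rough measure of lexical variety. These particular metrics are common illustrations of lexical scoring, not necessarily the exact features used by Crossley et al. (2014).

```python
# Simple lexical microfeature indicators for a text; these metrics are
# illustrative stand-ins for the lexical features discussed above.
def lexical_features(text: str) -> dict:
    words = text.lower().split()
    return {
        "word_count": len(words),
        "mean_word_length": sum(len(w) for w in words) / len(words),
        # Type-token ratio: unique words / total words, a rough
        # measure of lexical variety.
        "type_token_ratio": len(set(words)) / len(words),
    }

print(lexical_features("the cat sat on the mat because the mat was warm"))
# A meaningless string of rare words scores higher on variety and word
# length, which is how purely lexical scoring can be gamed.
print(lexical_features("quixotic zephyrs juxtapose labyrinthine verisimilitude"))
```

The second call illustrates the weakness discussed in the next paragraph: text that is gibberish in context can still score well on purely lexical criteria.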
Even though L2 writing assessment gives more emphasis to linguistic features than L1 assessment does, this does not necessarily mean that these features are the most important. These items are easily scored, but it would not be hard to generate gibberish that scores well on these criteria. As these features are well known, many schools and language training centers in Asia have provided international English test preparation programs that focus only on test strategy skills. Even though some may achieve success this way, writing assessment must be kept in line with the learning purpose. How well a test taker responds to a writing task by providing related points and relevant information should be the main consideration of the assessment. Good writing should also address “for what” and “for whom” the writing is constructed. Content, then, should be more important than linguistic features (Leki, 2004).
C. Feedback in Writing Assessment
Besides scoring, giving feedback is supposed to be an essential, integral part of writing assessment. The Assessment for Learning (AfL) theory, as can be seen in Figure 1 below, particularly holds that it is feedback that keeps clearly telling students “where they are now, where they need to go and how they can best get there” (Edwards, 2012, p. 21).
The feedback needs to be given promptly to help students know where their writing stands relative to the learning goal: what level the student’s writing is at now,
what improvements they need to make, and how they can make those improvements. Students will face serious difficulty revising their writing without adequate and prompt feedback from their lecturer. Even autonomous students still need their lecturer’s intervention and assistance in revising their writing. The complexity of writing, which requires a long turnaround time to assess, often makes lecturers late in giving feedback. Some lecturers even tend to skip this important step, expecting that the students will simply become autonomous by themselves (Edwards, 2012). In some cases, the writing learning process even ends at the students’ submission of their writing assignments, since the submitted assignments are never returned to them (Chang, 2007).
Figure 1. Diagram of the significance of feedback in writing learning according to the Assessment for Learning (AfL) theory
The problem of giving feedback is further compounded by the problem of large classes, which are still prevalent in most Asian developing countries (Exley & Dennick, 2004), including Indonesia. Lecturers must then regularly face the complex assessment of large numbers of students’ writing tasks. This problem leads to low-quality feedback, slow feedback, or even no feedback at all (Chang, 2007). It increases dissatisfaction, brings discouragement to some students, and finally boils down to low writing performance (Nemati et al., 2017).
D. The Use of Information Technology in Writing
Assessment
Information technology has long been an alternative in writing assessment, including the practice of Automated Writing Evaluation (AWE) programs. Research has compared this kind of assessment with traditional manual writing assessment. It shows that computer-based automatic assessment is better at shortening the assessment turnaround time and has increased students’ motivation and convenience through such prompt feedback (Zhang & Hyland, 2018). Computer-based feedback was also found to correlate positively with student writing performance and with students’ perception of the feedback (Ebyary & Windeatt, 2010). Other research, though, found that while automatic writing assessment does shorten turnaround time, it is only effective at word- and sentence-level assessment; it does not work well in providing adequate feedback at the content level (Warschauer & Grimes, 2008). Automatic marking can lose sight of the importance of content, particularly when it applies analytic scoring methods in L2 writing assessment. L2 analytic assessment gives separate marks to each writing element, and test-wise students may easily “game” the test. For example, a test taker who is strong only in vocabulary could get a high writing score by producing a greater variety of words with little meaning. The automatic assessment engine could give a high score on word- or sentence-level features that are meaningless in context. The Educational Testing Service’s (ETS) choice of the holistic instead of the analytic method is probably meant to avoid such faulty scoring. As the provider of the largest language tests relying on a combination of automatic scoring and human raters, ETS thereby guards against students who are strong only at the vocabulary level getting high marks on writing.
Another alternative to help lecturers assess faster and provide effective feedback is web-based feedback involving both human assessors and technology: the writing assessment is still done by a human but is facilitated and assisted by technology. Research has also covered this area and found it supportive (Bikowski & Vithanage, 2016; Lin, Liu, & Yuan, 2001; Togatorop, 2015). Most of it, though, was at the level of web-based collaborative peer feedback (which is often not of sufficient quality), not of lecturers’ feedback on students’ writing. Further research is required into technology-based writing assessment that helps lecturers with the assessment while still allowing them to give prompt feedback on students’ writing to increase their writing performance.
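The paper does not describe the internals of the web-based system used in this study. Purely as a hypothetical illustration of the kind of record a lecturer-facing assessment tool might keep per submission, consider the following Python sketch; the class and every field name in it are assumptions, not the actual system’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class WritingAssessment:
    """One assessed submission; all fields are illustrative guesses,
    not the schema of the system described in the paper."""
    student_id: str
    essay_text: str
    submitted_at: datetime
    # Analytic element scores (0-100 each), matching the analytic
    # scoring model mentioned in the Methodology section.
    scores: dict = field(default_factory=lambda: {
        "content": 0, "organization": 0, "vocabulary": 0,
        "language_use": 0, "mechanics": 0})
    # Lecturer comments keyed by character offset in the essay text.
    comments: dict = field(default_factory=dict)
    returned_at: Optional[datetime] = None  # when feedback went back

    def turnaround_days(self) -> Optional[float]:
        """Assessment turnaround time, the quantity the web-based
        workflow is meant to reduce."""
        if self.returned_at is None:
            return None
        return (self.returned_at - self.submitted_at).total_seconds() / 86400
```

Keeping the human lecturer as the scorer while the system stores, routes, and returns the feedback is what distinguishes this approach from fully automated AWE.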
3 METHODOLOGY
This research applied a quantitative method (Creswell, 2012). It used a quasi-experimental design, since it was done in two intact class groups without full randomization, to avoid disrupting the learning
system at the research site (Creswell, 2012; Vogt, 2005).
A. Data Sources
The study was done in Politeknik Negeri Batam (PoliBatam), the State Polytechnic of Batam, Indonesia. The research population was the institution’s approximately 3,319 students. The research applied purposive sampling, choosing 57 students who enrolled in the English Writing Course in the two classes with the most similar characteristics in the odd semester of academic year 2019/2020. Both classes are in semester I of the Business Management Department. The 57 respondents are mostly female and mostly seventeen to twenty years old. Before entering PoliBatam, they had also received years of English lessons in primary and high school.
B. Research Scheme and Design
The research provided and analyzed two different writing assessment treatments in two different classes: the Web-based Writing Assessment as the treatment group and the Traditional Manual Writing Assessment as the control group. The impact of each treatment on writing performance was tested, as can be seen in the research scheme in Figure 2 below.
Figure 2. Diagram of The Research Scheme
The study employed a pretest-posttest design
(Creswell, 2012) as can be seen in the table below.
Table 2. Research Design
All student participants in the two classes were first given a writing pretest to map their initial writing performance level at the beginning of the course, before the treatments. The two sample classes then each received one of the two treatments, assigned randomly. At the end of the treatment, the writing performance level of each group was reassessed with a posttest to measure their writing ability after the treatment. The pretest and posttest scores were then analyzed, and the writing performance gain of each group was calculated and compared to see how each assessment system affected the samples’ writing performance.
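The gain here is simply each student’s posttest score minus their pretest score, averaged per group. A minimal sketch of that computation, with made-up scores:

```python
# Gain per student = posttest - pretest, then averaged per group.
pretest  = [62, 58, 70, 65]   # hypothetical scores for one group
posttest = [74, 69, 80, 75]

gains = [post - pre for pre, post in zip(pretest, posttest)]
mean_gain = sum(gains) / len(gains)
print(gains)      # [12, 11, 10, 10]
print(mean_gain)  # 10.75
```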
C. Research Instrument
The instrument to measure the samples’ writing performance in this research is a short argumentative essay (about 300 words in length). The measurement instruments for both the pretest and the posttest use the same kind of writing and a similar level of difficulty. The topics were different, however, considering that writing is a skill that progressively improves with rehearsal.
D. Data Analysis
The pretest and posttest of writing performance in this research were scored using the same marking system Polibatam applies to assess its students’ writing, which is an analytic writing scoring model. The study thus authentically reflected what happens at the research site; if the results showed a difference, it would be a difference the institution could recognize. The pretest and posttest writing score data were analyzed using SPSS to conduct descriptive and inferential statistical tests on the collected data.
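The original analysis used SPSS; as a rough open-source equivalent, the same sequence of tests reported below (paired t-tests within each group, Levene’s test for equality of variances on the gains, then an independent samples t-test) could be run with scipy, as in this sketch with placeholder scores:

```python
# Sketch of the inferential tests reported below, using scipy instead
# of SPSS; all score lists here are hypothetical placeholders.
from scipy import stats

pre_trad, post_trad = [60, 58, 65, 62], [68, 66, 72, 70]
pre_web,  post_web  = [61, 59, 64, 63], [78, 75, 82, 80]

# Paired t-test within each group (pretest vs posttest).
t_trad, p_trad = stats.ttest_rel(pre_trad, post_trad)
t_web,  p_web  = stats.ttest_rel(pre_web, post_web)

# Writing performance gains per group.
gain_trad = [b - a for a, b in zip(pre_trad, post_trad)]
gain_web  = [b - a for a, b in zip(pre_web, post_web)]

# Levene's test for equality of variances between the gains.
_, p_levene = stats.levene(gain_trad, gain_web)

# Independent samples t-test on the gains; equal_var follows Levene.
t_ind, p_ind = stats.ttest_ind(gain_trad, gain_web,
                               equal_var=(p_levene > 0.05))
print(p_trad, p_web, p_levene, p_ind)
```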
4 FINDINGS AND DISCUSSION
The students’ writing performance increased in both the Traditional and the Web-based writing assessment groups, as can be seen from the writing pretest, posttest, and gain of both groups in the following table.
Table 3. Writing Performance Gain of Traditional and
Web-Based Writing Assessment
As can be seen in the table above, the posttest means of both groups are higher than the pretest ones, meaning that the students’ writing performance in both groups at the end of the treatment is better than at the beginning. The following table shows whether the difference between the pre- and post-test writing performance of the Traditional writing assessment group is significant.
Table 4. Writing Performance Pre- and Post-Test Paired T-Test of the Traditional Assessment Group
The Sig. (2-tailed) value of .000, which is smaller than .05, in the last column of the paired t-test output table above shows that the difference between the post- and pre-test writing scores in the Traditional writing assessment group is statistically significant at the .05 probability level. This seems to indicate a significant impact of the Traditional writing assessment treatment on students’ writing performance. The result of the same test conducted on the pre- and post-test writing performance of the Web-based writing assessment group can be seen in the table below.
Table 5. Writing Performance Pre- and Post-Test Paired T-Test of the Web-Based Assessment Group
The Sig. (2-tailed) value of .000, which is smaller than .05, in the output table of the paired t-test above shows that the difference between the post- and pre-test writing scores in the Web-based writing assessment group is also statistically significant at the .05 probability level. This seems to indicate a significant impact of the Web-based writing assessment treatment on students’ writing performance.
Even though both the Traditional and the Web-based writing assessment treatments had a significant impact on the students’ writing performance, the writing performance gains of the two groups differ, as can be seen in the leftmost column of Table 3 above. That difference can be seen more clearly in the graph below.
Figure 3. Graph of The Writing Performance Gain of
Traditional and Web-Based Writing Assessment.
The graph above clearly shows that the writing performance gain of the Web-based group is almost twice that of the Traditional group. To determine whether this difference in writing performance gain is significant, an Independent Samples Test was conducted; the result is shown in the following table.
Table 6. Independent Samples Test of the Traditional and the Web-Based Assessment Group Writing Performance Gain
In the Independent Samples Test output table above, the Sig. of Levene’s Test for Equality of Variances is .115, which is greater than .05. This means that the data variance between the Traditional and the Web-based groups is homogeneous, so the interpretation of the Independent Samples Test output is based on the “equal variances assumed” row of the table. The Sig. (2-tailed) of that row is .000, which is smaller than .05. It can therefore be concluded that the difference in writing performance gain between the Traditional and the Web-based groups is statistically significant at the .05 probability level.
The findings of this research are in line with previous studies that found that computer- and web-based writing assessment enhanced students’ writing performance (Ebyary & Windeatt, 2010; Zhang & Hyland, 2018). This study found that not only the web-based but also the traditional writing assessment increased the students’ writing performance. The writing performance gain produced by the Web-based
writing assessment group, however, was much higher than that of the Traditional writing assessment group. The web-based technology helped the lecturer do the writing assessment more easily and quickly. It enabled the lecturer to give prompt and more comprehensive feedback on students’ writing assignments. The students could then better recognize the strengths and weaknesses of their writing and correct their mistakes. The web-based writing assessment practice in this study could deal with the classic obstacles of traditional writing assessment, such as the complexity and long turnaround time of writing assessment (Edwards, 2012), large classes (Exley & Dennick, 2004), and slow, low-quality feedback (Chang, 2007). In this way, the students perceived the writing feedback more positively and found it more useful for developing their writing performance.
The use of web technology in this study facilitated a more effective writing assessment, but it was still the lecturer himself who did the assessment through the web medium. This could hence overcome the weakness of web-based collaborative writing practice, which previous studies found to have a positive impact on students’ writing performance yet to be not very effective, since the feedback was given by student peers (Bikowski & Vithanage, 2016; Lin et al., 2001; Togatorop, 2015) who were not as qualified in writing as the lecturer. In the same way, the combination of lecturer feedback and web technology could cope with the shortcoming of computer Automated Writing Evaluation (AWE), which was found effective in shortening assessment time and in assessing students’ writing at the word and sentence levels (Zhang & Hyland, 2018) but not at the content level (Warschauer & Grimes, 2008). The feedback given through this lecturer web-based writing assessment paid sufficient attention to the students’ writing content, which is an essential part of writing assessment (Leki, 2004). In this way, the lecturer’s web-based writing assessment feedback could play its central role of enabling students to recognize the shortcomings of their writing, what improvements they need to make, and how to make them (Edwards, 2012).
5 CONCLUSIONS
This study found that the Web-based writing assessment is significantly better than the Traditional manual writing assessment. The writing performance gain of the Web-based writing assessment class, as the treatment group, is much higher than that of the Traditional writing assessment class, as the control group. The Independent Samples Test conducted on the difference in writing performance gain between the treatment and control groups shows that the difference is statistically significant at the .05 probability level. The practice of lecturer web-based writing assessment could cope with the long assessment turnaround time of traditional writing assessment.
Given the above conclusions, it is recommended to continue applying and expanding the use of web-based writing assessment at Polibatam. For further studies on this topic, similar research with other variables and a larger population is encouraged.
ACKNOWLEDGMENT
I would like to acknowledge the support of the Indonesian Endowment Fund for Education (LPDP) and the Indonesian Ministry of Education and Culture in making this research and presentation possible under the Dikti Scholarship.
REFERENCES
Bikowski, D., & Vithanage, R. (2016). Effects of web-based collaborative writing on individual L2 writing development. Language Learning & Technology, 20(1), 79-99.
Bonwell, C. C., & Eison, J. A. (1991). Active learning: Creating excitement in the classroom (ASHE-ERIC Higher Education Report). Retrieved from http://www.active-learning-site.com/
Boring, R. (2005). The validity of human and computerized
writing assessment. Paper presented at the Human
Factors and Ergonomics Society 49th Annual Meeting,
Orlando.
Brown, H. D. (2001). Teaching by principles: An interactive approach to language pedagogy (2nd ed.). White Plains, NY: Longman.
Capraro, R. M., Roe, M. F., Caskey, M. M., Strahan, D.,
Bishop, P., Weiss, C., & Swanson, K. W. (2012).
Research summary: Assessment. Retrieved from
https://pdxscholar.library.pdx.edu/cgi/viewcontent.cgi
?article=1006&context=ci_fac
Chang, G. C. (2007). Writing feedback as an exclusionary
practice in higher education. International Journal of
Pedagogies and Learning, 3(2), 18-30.
Creswell, J. W. (2012). Educational research: Planning,
conducting, and evaluating quantitative and qualitative
research (4th ed.). Boston, MA: Pearson.
Crossley, S. A., Kyle, K., Allen, L. K., Guo, L., &
McNamara, D. S. (2014). Linguistic microfeatures to
predict L2 writing proficiency: A case study in
Automated Writing Evaluation. The Journal of Writing
Assessment, 7(1), 1-16.
Ebyary, K., & Windeatt, S. (2010). The impact of
computer-based feedback on students’ written work.
International Journal of English Studies, 10(2), 121-
142.
Edwards, E. (2012). Applying action research to investigate
the use of goal setting for EFL-ESL writing. English
Australia Journal, 29(1), 19-38.
Exley, K., & Dennick, R. (2004). Giving a lecture - from
presenting to teaching. London: Routledge Falmer.
Freiberg, J. (2008). Criteria, standards and intuitions in the
imprecise work of assessing writing. Literacy Learning,
16(13), 20-26.
Gennaro, K. d. (2006). Second language writing ability:
Towards a complete construct definition. TESOL &
Applied Linguistics, 6(2).
Graham, S., & Harris, K. R. (2003). Students with learning disabilities and the process of writing: A meta-analysis of SRSD studies. New York: Guilford.
Harmer, J. (2007). The practice of English language teaching (4th ed.). Harlow: Longman.
Jayamani, E., Heng, S. K., & Bakri, M. K. (2016). Effect of
developing analytic/task specific rubric for an
enhanced student learning in manufacturing subjects.
Paper presented at the Annual International Conference
on Computer Science Education: Innovation &
Technology.
Lee, I. (2008). Understanding teachers' written feedback
practices in Hong Kong secondary classrooms. Journal
of Second Language Writing, 17(2), 69-85.
Lee, I. (2009). Ten mismatches between teachers' beliefs
and written feedback practice. ELT Journal, 63(1), 13-
22.
Leki, I. (2004). Teaching second-language writing: Where
we seem to be. In T. Kral (Ed.), Teacher Development
(pp. 170-178 ). Washington, DC: English Language
Programs Division of the USIA.
Lin, S. S. J., Liu, E. Z. F., & Yuan, S. M. (2001). Web-based
peer assessment: Feedback for students with various
thinking-styles. Journal of Computer Assisted
Learning, 17, 420-432.
Montgomery, J. L., & Baker, W. (2007). Teacher-written
feedback: Student perception, teacher self-assessment,
and actual teacher performance. Journal of Second
Language Writing, 16, 82-99. Retrieved from
https://doi.org/10.1016/j.jslw.2007.04.002
Nacira, G. (2010). Poor writing production: The case study
of 3rd year students at the English Department - Batna
University. (Master). University of Setif, Algeria.
Nemati, M., Alavi, S. M., Mohebbi, H., & Masjedlou, A. P.
(2017). Speaking out on behalf of the voiceless learners
- written corrective feedback for English language
learners in Iran. Issues in Educational Research, 27(4),
822-841.
Nunan, D. (1999). Second language teaching and learning.
Boston, MA: Heinle & Heinle.
Nunan, D. (2003). Practical English Language Teaching.
Singapore: McGraw-Hill.
Oshima, A., & Hogue, A. (2007). Introduction to Academic Writing (3rd ed.). White Plains, NY: Longman.
Ounis, M. (2017). A comparison between holistic and
analytic assessment of speaking. Journal of Language
Teaching and Research, 8(4), 679-690.
Stiggins, R. J. (1982). A Comparison of Direct and Indirect
Writing Assessment Methods. Research in the
Teaching of English, 16(2), 101-114.
Swain, S. S., & Friedrich, L. (2018). A common language
and criteria to boost students' writing: an analytic
writing rubric helps teachers and students see what's
present and possible in a piece of student writing.
Educational Leadership, 75(7), 66-71.
Togatorop, E. (2015). Teaching writing with a web-based
collaborative learning. International Journal of
Economics and Financial Issues, 5(Special Issue), 247-
256.
Vogt, W. P. (2005). Dictionary of statistics and
methodology: A nontechnical guide for the social
sciences (3rd ed.). Thousand Oaks, CA: Sage.
Warschauer, M., & Grimes, D. (2008). Automated writing
assessment in the classroom. Pedagogies: An
International Journal, 3, 32-36.
Weigle, S. C. (2002). Assessing writing. Cambridge:
Cambridge University Press.
Zhang, Z. V., & Hyland, K. (2018). Student engagement
with teacher and automated feedback on L2 writing.
Assessing Writing, 36, 90-102.