Student Engagement With Teacher Written Feedback on Rehearsal Essays Undertaken in Preparation for IELTS
Original Research
SAGE Open
William S. Pearson1
Abstract
Due to pressure to meet goals, some test-takers preparing for the IELTS (International English Language Testing System)
Writing test solicit written feedback (WF) from an expert provider on their rehearsal essays, in order to identify and close
gaps in performance. The extent to which self-directed candidates are able to utilize written feedback to enhance their language and
writing skills in simulated Task 2 essays has yet to be investigated. The present study addresses one learner factor deemed
prominent in mediating the learning potential of WF, student engagement. The study used assessments of student writing
according to the public band descriptors and text-analytic descriptions from three Task 2 rehearsal essays triangulated with
five rounds of semi-structured interviews to explore how four learners preparing for IELTS Writing engaged affectively,
behaviorally, and cognitively with asynchronous, electronic written feedback provided using the Kaizena app. The study
found that, while the learners highly valued WF, they were not always able to understand the intentions behind comments
or envisage an appropriate response, leading to negative emotional reactions from two learners in the form of anxiety
and frustration. Written progress across the essays was limited, stemming from an initial lack of buy-in to making content
revisions and surface-level approaches to WF processing. Moderate behavioral engagement with indirect error treatment
was exhibited, although meaningful accuracy gains were apparent for only one learner and content changes meant many
errors went uncorrected. The implications for practitioners of IELTS Writing preparation are discussed.
Keywords
written feedback, student engagement with written feedback, high-stakes English writing assessment, IELTS, learning to write
Creative Commons CC BY: This article is distributed under the terms of the Creative Commons Attribution 4.0 License
(https://creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of
the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages
(https://us.sagepub.com/en-us/nam/open-access-at-sage).
Table 1. Textual Features Referenced in the IELTS Writing Task 2 Public Band Descriptors.
provider with expertise (usually a teacher or successful test veteran) on simulated writing tasks integral to enhancing performance (Pearson, 2019), particularly since IELTS Writing is underscored by certain idiosyncrasies that do not cohere with the expectations of academic writing generally (see Moore & Morton, 2005). Yet the extent test-takers make sense of and respond to comprehensive, unfocused CFWF and FFWF to develop language and writing skills in preparation for high-stakes English writing assessment is largely unknown. To address this gap, the present mixed methods case study explores how four candidates preparing for IELTS affectively, behaviorally, and cognitively engage with written feedback on two drafts of three Task 2 rehearsal essays.

Literature Review

Rehearsing for IELTS Writing Task 2

IELTS is a high-stakes English language proficiency (ELP) test undertaken to generate evidence that a candidate meets the linguistic requirements determined by a test-user for the target language use (TLU) domain in which successful candidates are expected to operate (IELTS, 2019a). The primary TLU domain of IELTS is academic study with English as the medium of instruction (the Academic test), and to a lesser extent, everyday social, transactional, and workplace language use for migrants to Anglophone countries (General Training). Both forms of the Writing test feature two tasks, to be completed within 1 hour with no recourse to external sources. Task 2, the focus of this study, consists of a 250-word essay requiring candidates to present an argument on an issue with the goal of convincing the reader that something "is the case" or "should be the case," termed analytical and hortatory exposition respectively (Mickan & Slater, 2003, p. 63). Candidates are required to adopt a position on topics of "general interest and suitability for test-takers entering undergraduate or postgraduate studies" (Academic) or of "general interest" (General Training) (IELTS, 2019a, p. 13). Prompts and the conditions in which responses are written result in particular idiosyncratic academic conventions, such as personalized opinion-giving, a focus on real world situations, actions, and processes, and evidence as anecdote and experience (Moore & Morton, 2005). As such textual features generally diverge from the expectations of tertiary-level academic writing, test outcomes should be interpreted as indicators of pre-study academic readiness or language learning aptitude (Davies, 2008).

Writing Task 2 is assessed using four equally weighted analytical assessment criteria across nine performance bands unique to the test (see IELTS, 2019b). As shown in Table 1, Task Response (TR) and Coherence and Cohesion (CC) mostly encompass the global features of response, while Lexical Resource (LR) and Grammatical Range and Accuracy (GRA) concern the surface level. Underscoring the assessment of IELTS Writing is a general proficiency theoretical approach. The test eschews specific language structures and functions, instead claiming the existence of generic academic language use, generalizable to any academic domain (Davies, 2008). This is reflected in the public band descriptors (PBDs) (see IELTS, 2019b). For instance, at TR band 7.0 ("good user"), candidates are expected to present "a clear position throughout the response," with main ideas that are "extended" and "supported." If "a relevant position" with "relevant main ideas" is presented, albeit with some ideas that are "inadequately developed/unclear," band 6.0 ("competent user") is awarded. There is minimal elaboration of what such statements entail in official practice materials, requiring learners to infer features of written performance from the designated overall score and summary comments on sample essays, a challenging prospect for many candidates (and practitioners of IELTS preparation).

Prior research has shown the practice of undertaking rehearsal tasks in simulated test conditions is a central feature of self-directed and teacher-led IELTS Writing preparation (Allen, 2016; Chappell et al., 2019; Hu & Trenkic, 2019; Mickan & Motteram, 2009; Smirnova, 2017). Rehearsal essays are popular with candidates because they provide opportunities to self-evaluate test format familiarity (Pearson, 2019), enhance understanding of typical topics and prompt frames (Coffin, 2004; Craven, 2012), and practice skills and strategies used to lower the burden of test taking (Chappell et al., 2019; Winke & Lim, 2014). Nevertheless, improving
future test performance via rehearsal essay writing is not assured. It may take over 200 hours of English language learning to tangibly improve writing outcomes (Green, 2005), while variability in topics and the absence of syllabi of structures or functions limits the use of memorized material. As non-experts in how Task 2 is assessed, developing writers may struggle to diagnose problematic textual issues, a situation compounded by the vague phrasing of performance standards in the public band descriptors. As a consequence, dependence on the assistance of experts who offer feedback is often reported (Allen, 2016; Estaji & Tajeddin, 2012; Mickan & Motteram, 2009). There is also a danger that, instead of undertaking what is really required—upgrades in ELP (Ingram & Bayliss, 2007)—candidates entrap themselves in extensive test preparation, believing that exploiting design features of the test will yield the requisite gains (Alsagoafi, 2018; Chappell et al., 2019; Hamid & Hoang, 2018; Hu & Trenkic, 2019).

Student Engagement With Written Feedback

Rehearsal writing for test preparation purposes provides important opportunities for written feedback. The existence of an elaborated series of assessment criteria may facilitate understandings of how a learner performed in a task through written feedback's correction, reinforcement, forensic diagnosis, benchmarking, and feed-forward roles (Price et al., 2010). Since IELTS preparation is typically undertaken by learners possessing emergent language skills for acculturation purposes or as a means of improving future test performance, feedback's forensic-diagnosis and benchmarking functions are salient. Learners frequently seek explanations and justifications of existing performance in combination with illustrative guidance on extending outcomes to the next band (Pearson, 2019). Nevertheless, Task 2 feedback providers are faced with upwards of 17 discrete textual features across the four criteria to attend to (Table 1), not to mention complex content and delivery written feedback decisions (Chong, 2020), most notably how much to provide, whether to refer directly to the PBDs, how to illustrate target performance, and whether to treat errors directly, indirectly, or at all.

A multi-dimensional approach to exploring student engagement with written feedback has been adopted in several recent studies, particularly in relation to WCF (written corrective feedback) provided on in-sessional English support programs at Chinese tertiary-level institutions (Han & Hyland, 2015; Tian & Zhou, 2020; Yu et al., 2018; Zhang & Hyland, 2018; Zheng et al., 2020; Zheng & Yu, 2018). Such research is theoretically underpinned by Fredricks et al.'s (2004) and Ellis' (2010) componential frameworks for investigating engagement. The three dimensions, the affective, behavioral, and cognitive, were originally operationalized in L2 writing settings by Han and Hyland (2015), with modifications by later authors. The affective dimension denotes students' attitudes toward WF and the emotional reactions it might evoke. The behavioral dimension concerns students' textual responses (usually operationalized as quantifiable revision operations, time spent editing their work, or number of submissions for feedback) as well as the activities and strategies learners undertake to improve the accuracy of drafts or develop their L2. Lastly, the cognitive dimension encompasses students' awareness and understanding of WF, and the cognitive and metacognitive operations involved in processing and response. Learners' mental states and processes are usually explored qualitatively, either through interviewing or immediate or retrospective verbal reports. The model has yet to be applied to preparation for high-stakes English writing assessment, where there are notable pressures to perform in simulated tasks that resemble product writing.

The present literature has shown that student engagement with WF is complex, dynamic, and oftentimes unsatisfactory or inconsistent (Han & Hyland, 2015; Han & Xu, 2021; Yu et al., 2018; Yu & Jiang, 2020; Zhang & Hyland, 2018; Zheng & Yu, 2018). Affectively, learners with positive attitudes, high self-confidence, and agreement with WF are more likely to invest time in responding to WF (Zhang & Hyland, 2018). However, the negative emotional reactions invoked by disappointing textual outcomes can suppress a student's motivation to respond (Yu & Jiang, 2020; Zhang & Hyland, 2018). Behaviorally, learners undertake revisions in order to develop their writing or employ more L1-accepted linguistic forms. Students may avoid responding if they disagree with the WF (Yu et al., 2018) or perceive no purpose or benefit in response (Zheng et al., 2020). Cognitively, learners' abilities to notice or understand errors are associated with their language proficiency (Zhang & Hyland, 2018; Zheng & Yu, 2018) and feedback literacy (Han & Xu, 2021), although learners of any ability may misinterpret the intent underlying WF, particularly if it is meaning focused or not explained clearly and concisely.

The Study

This case study stems from a larger research project (see Pearson, 2021) where candidates preparing for IELTS were recruited to take part in a remote learning-to-write program centered on the provision of asynchronous, electronic content- and form-focused written feedback on rehearsal writing undertaken as preparation for the test. The design of the study was guided by the following research question: How do student writers preparing for IELTS engage affectively, behaviorally, and cognitively with asynchronous, electronic written feedback provided on three rounds of Task 2 rehearsal essays?

Participants

Four individuals preparing for IELTS were purposively selected from a wider pool of 8 students who completed the
project, based on the diversity of certain background characteristics, that is, gender, L1, current language level, prior test experience, and IELTS Writing goals. A sample size of four was decided upon because it is commensurate with prior research (e.g., Han & Hyland, 2015) and was deemed to provide an appropriate balance between variety in learner background attributes and study complexity, a characteristic of multi-dimensional research into student engagement (Han & Hyland, 2015; Yu et al., 2018). Table 2 provides background information on the learners (the names used are pseudonyms) in the order in which they were recruited. The individuals, who were not known to the researcher, were recruited in response to a public post advertising the project in an IELTS-orientated Facebook group. An impressionistic judgment of their linguistic sufficiency to participate was made (CEFR B1), given the demands of the interview questions, based on initial textual interactions. Careful consideration was paid to the ethical implications of the study. It was explained that participation did not guarantee improvements in written outcomes, while participants were reminded that they possessed the autonomy to change the written feedback approach or opt out of the study if participation was perceived as harmful. Each individual provided their written consent to participate in the study, which was approved by the ethics committee of the researcher's institution.

Data Collection

The study utilizes textual evidence in combination with computer-mediated semi-structured interviewing (a preliminary and closing interview as well as one after each round of writing) to generate knowledge of how the learners behaviorally and cognitively engaged. Revision operations and error density measures were deductively coded, drawing on pre-existing schema (Christiansen & Bloch, 2016; Han & Hyland, 2015; van Beuningen et al., 2012; Zhang & Hyland, 2018). Interviews (with the exception of the preliminary and closing encounters) comprised 'talking around the text' (Ivanič & Satchwell, 2007), featuring screen sharing of learners' rehearsal essays and the accompanying written feedback, used as a stimulus, along with a schedule that took inspiration from the current literature (Han & Hyland, 2015; Zheng et al., 2020; Zheng & Yu, 2018). Students' affective responses were explored through interviewing only. All interviews were conducted in English using videoconferencing software (Zoom), with the transcripts analyzed thematically (Braun & Clarke, 2006).

There were five stages to data collection, represented in a flowchart in Figure 1. Stage 1 comprised an initial semi-structured interview (see Supplemental Material, part A), undertaken to get to know the participants, query their writing needs, identify WF preferences and expectations, and address questions they had about the study. Thereafter, the learners undertook three rounds of stages 2–4 of the project, each consisting of writing a Task 2 rehearsal essay in conditions that simulated the test, which was submitted to the researcher by email or Facebook Messenger for written feedback (stage 2). FFWF and CFWF were provided on essay drafts targeting aspects of students' written performance that fell short of goals. At the end of this stage, the essays and written feedback were returned electronically through Kaizena, an online application that allows content and surface-level textual features to be highlighted and commented upon through an interactive feedback "conversation" between the researcher and student (explained in more detail below).

In stage 3, learners attended to the written feedback in a revised version of their essay. The inclusion of a second draft, while atypical of IELTS Writing preparation, provided an opportunity for the participants to engage with the written feedback textually as well as to meet their goals in a less pressurized writing context. Additional summative WF was provided on participants' second drafts, which were also uploaded to Kaizena. Form- and content-focused revision operations were coded during this stage, while a list of salient textual issues to address in the interview was drawn up. Within 1 or 2 days of the feedback being returned, a computer-mediated semi-structured interview was held to explore participants' affective, behavioral, and cognitive engagement with written feedback (stage 4, Supplemental Information, part B). Interviews centered around discussion of the identified salient issues, presented via screen sharing of learners' compositions and the accompanying feedback for use as a
stimulus. Textual features that had not been cleared up, along with new issues, were addressed in the form of a tutorial. After the third round of stage 4, a closing interview was held with learners (stage 5, Supplemental Information, Part C), addressing their progress on the project and evaluations of participation. The elapsed time for data collection varied according to participants' availability, the speed with which they submitted their drafts and attended to the written feedback, and their keenness to undertake the real test. Yuri and Chandrika completed the research activities in 21 and 22 days, while Kushal took 37 days and Min Jung 44.

Task Prompts and Written Feedback

Multiple rounds of writing were chosen, partly to address the current lack of longitudinal designs in engagement research (Han & Hyland, 2015; Yu et al., 2018). Three parallel Academic Task 2 prompts (see Supplemental Information, Part D) were selected by the researcher, based on the diversity of topics and frames. It was felt three rounds of writing offered a suitable balance between providing opportunities to engage with written feedback and (possibly) change the approach to first draft writing, and mitigating the prospect of participant attrition and excessive study complexity. The learners were instructed to write first drafts in conditions that simulate Writing Task 2, that is, to spend about 40 minutes on the response, write at least 250 words, not refer to external sources of information, and include relevant examples from their knowledge or experience. There was no obligation to complete second drafts under these conditions. Upon submission, participants' essays were assessed in relation to the PBDs by the researcher, with a score being assigned in the four criteria and overall. Thereafter, WF was generated by the researcher, explicitly orientated toward helping the learners better meet the demands of the test at their stated band score level (see Table 2) in light of the pre-eminent problematic textual features uncovered in the
assessment. Facilitating the evaluation and feedback provision, the researcher possessed an MA in Applied Linguistics, several years' experience of teaching IELTS preparation (involving providing WF on simulated practice tasks), and prior training and experience assessing authentic Task 2 scripts in a professional capacity.

FFWF was provided to address participants' Lexical Resource and Grammatical Range and Accuracy. First draft errors were comprehensively corrected using a metalinguistic code (sometimes including additional explanation) based on Han and Hyland (2015) (Supplemental Information, Part E). Indirect correction was adopted to encourage students to notice patterns in error types and take responsibility for self-correcting lexicogrammatical problems. Complex untreatable errors (see Ferris et al., 2011) poorly suited to the code were addressed directly, as were all second draft errors. Written commentary in the text body diagnosed and explained textual features deemed problematic in relation to participants' desired scores. To promote feed-forward in student revisions, explicit strategies and/or sample reformulations were provided when possible. Additionally, a summative outline of the learner's performance in the four criteria was provided along with advice on how to improve. While the majority of comments stemmed from criteria outlined in the PBDs, occasionally formative feedback beyond the descriptors was applied. A sample of the written feedback is outlined in the Supplemental Material (Part F).

All written feedback was provided to the participants working with the researcher in a closed, virtual classroom space on the Kaizena app (free registration required). This involved the WF, originally generated on a Word document version of the essay, being transferred to the application's "conversation" bar (essentially resembling Microsoft Word's Reviewing Pane), linked to a clean version of the essay uploaded to the space. This allowed textual features to be highlighted and targeted with form- and content-focused "comments" (like Word), but also global comments not linked to any selected text. The rationale for using Kaizena was to provide a singular online space for interactions and because content-focused messages were predicted to be complex or controversial, requiring clarification or discussion. Additionally, it was hoped learners' written responses to Kaizena comments would offer further insights into their cognitive engagement. However, the expected dialogic interactions did not materialize, perhaps because the information was perceived in receptive-transmission terms, or the participants lacked the confidence to challenge messages. Participants were only able to download a feedback-less version of their essays from Kaizena, preventing them from "accepting changes," as in Word.

Data Analysis

To analyze students' textual responses to WF, the study drew on the text-analytic descriptive tradition (Ferris, 2012). Learners' form-focused revision operations (FFROs) were deductively coded using categories established in prior multi-dimensional engagement with written feedback research (Han & Hyland, 2015; Zhang & Hyland, 2018). Responses to indirect and direct error treatments were coded using separate but overlapping concepts (illustrated along with the schema for coding content-focused revision operations [CFROs] in Supplemental Material, Parts G and H), reflecting the different cognitive processes involved. Additionally, an error density measure of "number of errors/total number of words × 100" was calculated (van Beuningen et al., 2012) to track students' accuracy across drafts and compositions. CFROs in response to actionable first-draft comments were deductively coded according to two categories. First, a judgment of how closely the learner had followed feedback instructions was made according to the scheme developed by Christiansen and Bloch (2016). The second was a measure of the success of the revision in terms of whether it made the text "much better," "better," "the same," or "worse" (Christiansen & Bloch, 2016). The public band descriptors were drawn on to facilitate the coding. Results of the text-analytic descriptions are presented as proportions of the overall raw frequency of CFWF and FFWF points. Ten percent of revision operations (n = 30) were recoded six months later to assess intra-rater reliability. The initial and recoded data were compared using an Excel formula, which generated an agreement figure of 0.929, evidence of "good" reliability of coding.

Thematic analysis (Braun & Clarke, 2006; Terry et al., 2017) was applied to the transcribed interview data. The transcripts were read and re-read to attain familiarity, followed by the inductive and iterative development of codes describing important segments of the data that addressed the three dimensions of engagement for each participant. Codes were then clustered into cross-case themes orientated around a central organizing concept (Terry et al., 2017). For example, "Genuine WF told her what her mistakes were," "WF assisted in providing direction to writing," and "WF helped provide a foundation," along with six other codes, were amalgamated into the theme "Unsophisticated explanations of how WF helped writing development." While the analysis was undertaken without a priori categories in mind, the multi-dimensional model of engagement suggested concepts around which relevant data was coded. These included participants' value judgments of the WF and emotional responses (the affective dimension), descriptions of WF processing strategies (behavioral), and understandings of the WF (cognitive). Themes were recursively reviewed with reference to the codes and transcripts (Braun & Clarke, 2006). The final themes (see Table 3) were defined once they were deemed to possess the necessary internal homogeneity and external heterogeneity. The results are presented and discussed together, organized according to the themes present across the three dimensions (preceded by an outline of students' assessed written performance across the project). Content- and form-focused revision operations are addressed under behavioral engagement, mirroring prior research (Han & Hyland, 2015).
Table 3. Themes Present in the Data Across the Three Dimensions of Engagement.
Dimension Theme
Affective engagement WF collectively was judged positively
Unsophisticated explanations of how WF helped writing development
WF contributed to increases in confidence
Negative affective reactions stemmed from performance deficits
Behavioral engagement Routine, uninformed strategies employed in processing WF
Cognitive engagement Decoding intended meaning of WF was not straightforward
Not understanding a textual problem highlighted by WF or an item of WF
Misinterpreting a textual problem highlighted by WF or an item of WF
Not understanding how to act on WF
Table 4. Participants' Overall Writing Band Scores Across the Three Essays.

            Essay one             Essay two             Essay three
            Draft one  Draft two  Draft one  Draft two  Draft one  Draft two
Kushal      7.5        7.5        7.0        7.5        7.5        7.5
Yuri        6.5        6.5        6.0        6.5        6.5        7.0
Min Jung    5.5        5.5        5.5        5.5        5.5        6.0
Chandrika   6.5        6.0        6.0        6.5        6.0        6.5
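The within-essay, draft-to-draft score movements in Table 4 can be recomputed with a short script. This is an illustrative sketch only; the data structure and names are the author's of this sketch, and the scores are transcribed directly from Table 4.

```python
# Overall band scores from Table 4 as (draft one, draft two) pairs
# for essays one to three.
table_4 = {
    "Kushal":    [(7.5, 7.5), (7.0, 7.5), (7.5, 7.5)],
    "Yuri":      [(6.5, 6.5), (6.0, 6.5), (6.5, 7.0)],
    "Min Jung":  [(5.5, 5.5), (5.5, 5.5), (5.5, 6.0)],
    "Chandrika": [(6.5, 6.0), (6.0, 6.5), (6.0, 6.5)],
}

for name, essays in table_4.items():
    # Gain produced by each revision (draft two minus draft one).
    gains = [round(d2 - d1, 1) for d1, d2 in essays]
    print(f"{name}: {gains}")  # e.g. "Yuri: [0.0, 0.5, 0.5]"
```

The output makes the pattern discussed below visible at a glance: revisions never lowered a score except Chandrika's first essay, and no single revision raised an overall band by more than 0.5.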
Extracts of participants' authentic utterances mostly serve to illustrate the interpretive assertions of the researcher (Terry et al., 2017).

Results and Discussion

Students' Written Progress Across the Three Essay Rounds

As shown in Table 4, only Kushal was consistently able to meet his desired overall Writing score, failing to achieve 7.5 just once. In contrast, Yuri managed his target only via revisions in response to WF, falling short by 0.5 on four occasions and by 1.0 in draft one of essay two. Chandrika and Min Jung exhibited consistent shortfalls in performance, with the former achieving 6.0 or 6.5, while the latter was consistently one band below her required 6.5. In terms of progress across the project, Chandrika, Kushal, and Min Jung's scores in TR, CC, and LR either wholly flatlined or exhibited a dip across essays. In just two second drafts did these participants enhance the quality of their writing in any of the criteria by a band (e.g., Chandrika's LR in essay two and CC in essay three). Yuri was more successful, raising his sub-scores in five separate instances. While this suggests he was better able to engage with WF, most features of his writing seemed borderline 6.0/7.0. That the learners were not able to make short-term upgrades in band scores through WF coheres with the findings of prior research into IELTS Writing preparation (Alsagoafi, 2018; Estaji & Tajeddin, 2012; Hamid, 2016; Rao et al., 2003). However, caution should be exercised in concluding poor written progress since IELTS bands lack sensitivity to smaller-scale changes in L2 writing (Green, 2005; Rao et al., 2003). It may be that additional cycles of essay writing were needed to provide opportunities for responding to WF and/or greater time between essays/drafts to account for delayed uptake.

Affective Engagement

WF collectively was judged positively. All participants consistently stated they highly valued a holistic conception of the written feedback. Representative judgments included, "I have to say I rely on that. So because of that, I made these improvements" (Chandrika), "they were really helpful to be honest, and I would really appreciate if I have some, for example, doubts in the future" (Kushal), and, "it's really helpful when I write down and you feedback your explanation" (Min Jung). This finding contributes to a large body of research that demonstrates the high regard in which L2 developing writers hold teacher WF generally (Cunningham, 2019a; Ferris, 2011; Hyland & Hyland, 2006; Zacharias, 2007). The value placed on WF in these settings is not surprising since the learners were recruited on the basis of their desire for written feedback, which was noted as difficult to achieve locally (Allen, 2016; Chappell et al., 2019). Nevertheless, attitudes were not unequivocally positive, reflecting the complexity of individuals' affective responses to WF (Han & Hyland, 2018; Mahfoodh, 2017). Yuri professed that he found 85% of the points helpful, with the other 15% attributed to the need for further alternatives, underscoring that some learners value choices in how to respond (Treglia, 2008). Interestingly, Min Jung (unprompted) posited a comparable figure of 80% that she
deemed helpful but was referring to the tutorial element of the interviews. While this seems a rather damning verdict, it provides support for the claim that L2 students value face-to-face follow-up on written feedback (Han, 2017; Hedgcock & Lefkowitz, 1994; Saito, 1994).

The participants disclosed few specific facets of the content and delivery of WF they were positively disposed toward. One feature that was praised was the indirect treatment of errors using the metalinguistic code: “the error code makes us that very like a teaching that we learned from the school” (Chandrika) and, “it's very structured and helpful” (Kushal). This was unexpected, as it was anticipated the participants would privilege task-related understandings (Alsagoafi, 2018; Chappell et al., 2019; Hamid & Hoang, 2018; Hu & Trenkic, 2019) over general grammar feedback. This finding likely reflects the tendency of L2 learners to perceive correction as crucial in striving for error-free writing (Ferris, 2011; Hedgcock & Lefkowitz, 1994; Saito, 1994). As in other studies (e.g., Elwood & Bode, 2014), detailed descriptions and explanations of the qualities of student writing through commentary were also highly valued. This stemmed from learners' impatience to better understand how to achieve their IELTS performance goals (Alsagoafi, 2018; Saif et al., 2021). In other settings, developing writers sometimes feel unhappy receiving a lot of feedback as it suggests an increased workload during revisions (Mahfoodh, 2017; Yu et al., 2018; Zacharias, 2007).

Unsophisticated explanations of how WF helped writing development. In addition to the characteristics of WF provision that were deemed beneficial, a further attitudinal theme encompassed explanations of how WF added value to their writing. These covered a narrow range of feedback features and lacked sophistication, reflecting learners' limited awareness of the steps required to meet their band score requirements (Allen, 2016; Chappell et al., 2019; Mickan & Motteram, 2009). One explanation, that WF demystified IELTS through explicating textual deficiencies and the means to remedy them (Treglia, 2008), cohered with a belief in WF's critical role in this context: “Because after this revision, I know more about my flaws. What are the specific areas that I should focus in writing” (Chandrika). Interestingly, it was the weaker writers, Chandrika and Min Jung, who stated that the process of undertaking revisions added value to their participation in the project: “For me is much more helpful, revise and revise again. So it can be what I got what was wrong, what I have to not do again in the future” (Min Jung). The engagement of lower-level learners with WF is documented to take place at the surface level (Barkaoui, 2007; Porte, 1997; Radecki & Swales, 1988). However, since both Chandrika and Min Jung fell short of their target scores, they had good reason to perceive value in undertaking content revisions. CFROs enabled them to hone the quality of their writing incrementally, as achieving their goals in authentic test conditions appeared unrealistic. In contrast, Kushal and Yuri's initial lack of value attached to revising drafts stemmed from limited experience undertaking revisions in these settings and a belief in simulating rehearsed writing in authentic test-like settings. For Kushal, this meant much problematic content in essay two was addressed through renewed simulation, as opposed to meaningful engagement with the comments. As a cognitively overwhelmed writer, it is perhaps of little surprise that Yuri merely stated, “it's hard to say something concrete about this”. He perceived baby steps across the essays, unable to provide explanations of how WF usefully served his needs. Finally, the clarity, structure, and ease of navigation underlying WF presentation in the Kaizena platform using the comment and highlight functions were cited as adding value (with the exception of Yuri), providing further evidence L2 learners are open to new technologies that facilitate writing development (Chong, 2020; Cunningham, 2019b). The participants found the possibility of responding to individual comments reassuring, although they rarely utilized this functionality.

WF Contributed Increases in Test-Taking Confidence. It is perhaps unrealistic to expect intermediate-level learners to elaborate the precise mechanism(s) by which comprehensive WF enhanced their written development. A more prevalent theme underlying learners' appraisals was the value added through raising their self-confidence in meeting task expectations, a finding mirroring studies into candidates' perspectives toward IELTS Listening and Speaking preparation (Chappell et al., 2019; Winke & Lim, 2014; Yang & Badger, 2015). Interestingly, the confidence-boosting power of WF was even stated by Chandrika and Min Jung, who never met their goals in a rehearsal composition: “it's so much valuable time for me, because I really got to know. So I think from now, I feel quite confidence” (Min Jung). As in the Speaking module (see Yang & Badger, 2015), greater confidence accompanied learners' growing familiarity with what it was like responding to topics parallel to test tasks: “I have discovered some ideas in my mind regarding this topic, for example, and if I am to face this all at least similar topic again” (Yuri). Furthermore, improved confidence derived from tackling a range of prompt topics and frames (although Yuri stated, “participants should experience all the type of the essays”), being more aware of one's own mistakes, and gaining a “feel” for higher-level content through reading the reformulations.

Since the written feedback tended to be highly evaluative, there was a propensity for it to constitute a threat to self-esteem. The lower-than-expected essay scores accompanied by critical feedback undermined the confidence of Min Jung and Yuri. In the case of Min Jung, this resulted in instances where she was afraid to make revisions because, “I don't want to be the wrong anymore.” Similarly, despite meeting his targets in several criteria and overachieving in the second draft of essay three, Yuri felt concerned about replicating
such performance in future: “It's formidable to understand, what exactly do they need to do in order to achieve the same result”. Self-doubts persisted over the project owing to the unpredictability of prompt topics/frames and worries over an inability to replicate successful performance in authentic settings (Estaji & Tajeddin, 2012). It is probably only after achieving their desired outcomes in the actual test that many learners truly feel confident in their abilities to perform at the requisite level.

Negative Affective Reactions Associated with Not Knowing How to Adjust Written Performance. Min Jung and Yuri's emotional reactions across the project demonstrated that, like in other contexts (Hyland & Hyland, 2001; Lee, 2008; Zacharias, 2007), WF can exert a detrimental affective impact on learners. It is hardly surprising that these learners evinced negative reactions, as they consistently performed 0.5 to 1.0 band lower than their targets (especially in first drafts). Both exhibited disappointment stemming from the frustration and despair associated with not being able to accomplish a task to the required level (Estaji & Tajeddin, 2012), as exemplified by Yuri: “I've grown a bit weary of this. . . due to the fact that I've been struggling for this for two years with this.” To be told repeatedly they were constantly making mistakes negatively affected Min Jung and Yuri. It progressively exasperated Min Jung, lowering her confidence in her abilities and creating self-doubts: “I don't know when I can finish about IELTS.” Yuri's case illustrates how a mismatch between teachers' responding behaviors and students' desires can induce negative affective responses (Hyland, 1998). The abundance of feedback that did not contribute concrete understandings of expected test performance at band 7.0 bred anxiety: “I'm almost driven crazy with IELTS uncertainty. . . because nobody can say for sure for certain that it's 100% way to do this,” which eventually resulted in disagreement with some WF.

It was evident Min Jung and Yuri's feelings of frustration and disappointment also stemmed from lower-than-expected written performance. IELTS test-takers' assumption of linear gains across multiple test undertakings and confusion with inconsistent scores are documented in the literature (Hamid, 2016; Pearson, 2019). This may have been accentuated in the present study since the learners had recruited outside expertise to supplement existing preparation activities and implicitly expected progression over the project. Min Jung, particularly, held the faulty assumption that written progress directly reflected the amount of effort put in: “I'm too much disappointed actually my, I think I put it too much effort at least I have to get a 6.5 either 6.” Fortunately, the high stakes of the learning context served to induce an activating response (Pekrun, 2006), mitigating the discouragement that can stem from comprehensive critical feedback (Lee, 2008; Zacharias, 2007). This motivated Min Jung and Yuri to persevere in spite of their disappointment or distrust in the assigned scores.

Behavioral Engagement

Content- and form-focused revision operations. A key measure of engagement is the extent learners are willing and able to undertake textual revisions in response to WF, operationalized textually as frequency counts of deductively coded revision operations and their outcomes (Han & Hyland, 2015; Yu et al., 2018; Zhang & Hyland, 2018). Active behavioral engagement is important since learners are unlikely to make progress unless there is an underlying sense of personal agency and willingness to use WF (Barkaoui, 2007; Price et al., 2010). Of particular note were CFROs, as a relatively small number of TR and CC issues made the difference in whether the participants achieved their desired band scores or not. Reluctance to address textual features that contributed to tangible band score deficits attenuated both assessed outcomes and writing development. Much CFWF in this context suggested how a text may conform more closely to a conception of higher-level writing, constituting the phenomenon of teacher appropriation (Tardy, 2019). The literature characterizes L2 writers as either willing to grant the teacher absolute power to appropriate, complying with their demands submissively, or as “contesting” comments through non- or perfunctory revisions (Radecki & Swales, 1988). Investing in CFROs reveals developing writers' willingness to accept the expertise of the feedback provider and regard criticisms as suggestions that can help polish compositions (Orsmond & Merry, 2013).

The CFROs uncovered in the study varied notably among the participants, a finding consistent with other multi-dimensional engagement studies (e.g., Yu et al., 2018; Zhang & Hyland, 2018). However, the results should be interpreted cautiously, as it appeared certain patterns in learners' revision operations were not always consistent with the interview findings. Indeed, this is one of the advantages of the multi-dimensional model, as one perspective alone provides insufficient insights (Han & Hyland, 2015). As shown in Table 5, it is perhaps surprising in light of his negative affective responses that Yuri was the most compliant learner in addressing CFWF, attending to 77.7% of comments. His obstinacy to revise is more visible in the low proportion of text omitted in response to CFWF (2.8%) and the 19.4% of comments that were ignored. Additionally, while evincing the highest amount of content revisions that made the texts “much better” (19.4%), 41.7% provided no improvement to textual quality, stemming from issues envisaging what acting on the WF entailed.

A learner who, behaviorally, was a WF receptor (Radecki & Swales, 1988) was Chandrika, whose CFROs followed instructions in 65.1% of instances, with only 13% of
Table 6. Participants' Lexicogrammatical Error Rates per 100 Words Across the Essays.

            Essay one              Essay two              Essay three
            Draft one   Draft two  Draft one   Draft two  Draft one   Draft two
Kushal        2.94        2.03       2.06        1.52       1.81        0.55
Yuri          2.65        1.18       3.90        2.51       3.90        3.13
Min Jung      6.67        7.40       8.47        6.90       6.23        4.27
Chandrika     4.94        7.50       7.63        7.01       5.80        6.86
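The draft-over-draft patterns drawn from Table 6 can be cross-checked mechanically. The sketch below is illustrative only: the rates are copied from Table 6, while the helper function and variable names are my own rather than part of the study. It encodes the two comparisons the discussion makes, namely that Kushal and Yuri lowered their error rate in every second draft, and that only Kushal's first-draft rate fell from essay to essay.

```python
# Illustrative cross-check of the draft-over-draft patterns reported in Table 6.
# An error rate per 100 words normalizes the count of lexicogrammatical errors
# by essay length: errors / words * 100.

def error_rate_per_100(errors: int, words: int) -> float:
    """Return a lexicogrammatical error rate per 100 words (hypothetical helper)."""
    return errors / words * 100

# Rates copied from Table 6: (draft one, draft two) for essays one to three.
rates = {
    "Kushal":    [(2.94, 2.03), (2.06, 1.52), (1.81, 0.55)],
    "Yuri":      [(2.65, 1.18), (3.90, 2.51), (3.90, 3.13)],
    "Min Jung":  [(6.67, 7.40), (8.47, 6.90), (6.23, 4.27)],
    "Chandrika": [(4.94, 7.50), (7.63, 7.01), (5.80, 6.86)],
}

# Learners who lowered their error rate in every second draft.
consistent_reducers = [name for name, essays in rates.items()
                       if all(d2 < d1 for d1, d2 in essays)]

# Learners whose first-draft rate fell across successive essays.
first_draft_improvers = [name for name, essays in rates.items()
                         if all(b[0] < a[0] for a, b in zip(essays, essays[1:]))]

print(consistent_reducers)    # ['Kushal', 'Yuri']
print(first_draft_improvers)  # ['Kushal']
```

Both checks agree with the surrounding discussion: the consistent second-draft reductions belong to Kushal and Yuri, whereas Min Jung's and Chandrika's rates rise between drafts in several essays.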
comments being ignored. As an inexperienced candidate, high levels of WF compliance likely reflected self-doubts in her abilities and high levels of trust in the expertise of the WF provider. Among the participants, Chandrika recorded the greatest amount of revision operations that made her texts “better” or “much better” (69.6%). An important caveat is that this figure reflects her notable first draft underperformance vis-à-vis what she could achieve in less controlled conditions. Kushal's proportions of revision operations that followed instructions were 44.4%, with an equal figure that improved his texts. These were disappointing outcomes in light of his test experience and language proficiency. Omitted text constituted 33.3% of all CFROs, notably higher than the other three learners. This does not necessarily suggest a lack of engagement (Uscinski, 2017); rather, Kushal perceived revisions as second attempts to generate essay content in simulated test conditions.

Min Jung followed the instructions of 39.9% of comments (5.7% fully), evidently a concern resulting from weaknesses in ELP and cognitive engagement. The figure highlights that following the feedback provider's instructions should not be equated to willingness to revise, as response to commentary is clearly contingent on understandings of the underlying intent of the WF. This is further illustrated by the low 37.2% of revisions that resulted in textual improvements. The worryingly high 42.9% of comments that were ignored should also be interpreted cautiously. Disregarding comments can be an indication of disengagement (Han & Hyland, 2015) or growing writer autonomy and agency (Mahfoodh, 2017), characteristics more applicable to Yuri and Kushal. For Min Jung, this behavior stemmed ultimately from noticeable weaknesses in ELP since problematic textual issues often cut across multiple assessment criteria, increasing the difficulties of clear and concise written feedback provision and its successful resolution.

Table 6 shows participants' lexicogrammatical error rates per 100 words across the essays. Kushal was the sole learner to exhibit a reduced error rate across first drafts, probably because he was “only” required to lower the frequency of errors from “a few” to “occasional.” As Müller (2015) stresses, the IELTS band 7.0 writer is vastly different in quality to the band 6.0 writer, with six times fewer distracting errors, none of which impinge on communicative quality. At 6.0, there is a diffuse array of treatable and untreatable errors that require learners' attention, taxing their cognitive capabilities during processing. That three learners did not demonstrate meaningful gains in lexicogrammatical accuracy coheres with the findings of several studies investigating the impact of cognitively-demanding unfocused WCF (Frear & Chiu, 2015; Sheen et al., 2009). Kushal and Yuri consistently reduced their respective error rates in each second draft, providing further evidence of the effectiveness of indirect error treatment (Ferris, 1995; Ferris et al., 2011), even when the pressure to resolve surface-level issues is heightened by
notable global concerns. However, this outcome did not apply to learners whose accuracy levels were lower (i.e., Chandrika and Min Jung) owing to ELP limitations and significant content issues vis-à-vis their desired band.

Looking at the form-focused revision operations in Table 7, it is apparent a low proportion of errors were ignored in comparison to other studies of student engagement (Han & Hyland, 2015; Uscinski, 2017; Zheng & Yu, 2018). This is likely because the participants were motivated to resolve errors as a means of improving performance in LR and GRA. Incorrect resolutions were low for all participants (below 21%), although it is surprising that the linguistically strongest learner (Kushal) accounted for the highest proportion of failed resolutions. Deletions encompassed a notable proportion of Kushal's (45%), Yuri's (43.5%), and Chandrika's (33.3%) error resolutions. Deletions often resulted from the requirement for substantial content revisions, calling into question the merits of instructors investing time and effort in metalinguistic corrections that may ultimately be ignored. Kushal and Min Jung appeared the most compliant in incorporating direct corrections on first drafts, in contrast to Yuri, who usually removed directly treated errors.

Routine, Uninformed Strategies Employed in Processing WF. The other aspect of learners' behavioral engagement with written feedback was the explicit strategies and skills they employed when processing and responding to written feedback, explored in the interviews. It was found that, regardless of proficiency level, the learners reported routine, uninformed strategies and skills when processing the WF. Typically, they conceptualized processing as a matter of “going through” feedback points one-by-one (Chandrika, Kushal) or merely contemplating how to respond (Min Jung, Yuri). Cunningham (2019a) cautions that the mere reading of feedback should not necessarily be considered insufficient (in comparison to not reading the feedback), although in the context of Task 2 underperformance and the restrictions to achieving goals that result, learners ought to be relied upon to read the feedback. Processing the feedback was accompanied by mostly surface-level skills and strategies (Porte, 1997; Yu et al., 2018; Zheng & Yu, 2018). These included making sense out of the information (Chandrika), determining if the WF point required action or not (Kushal), and deciding which sentences to add or remove (Min Jung). Not much time was spent during processing and response, although Min Jung's preoccupation with IELTS preparation and significant performance deficits resulted in a remarkable 3 hours spent revising essay two after a 5-hour self-study session.

There are a number of explanations for the surface-level approaches to written feedback processing evinced by the participants. Firstly, since the revision of rehearsal essays is uncommon in these settings, the participants lacked knowledge and experience utilizing WF to develop their writing and test-taking skills (Barkaoui, 2007; Porte, 1997). They rarely undertook supplementary learning activities, few of which addressed language proficiency, a characteristic of IELTS preparation generally (Gan, 2009; Saif et al., 2021; Smirnova, 2017). Secondly, perhaps because the revision of essay drafts was considered unusual and perceived as being for the benefit of the researcher, there was a lack of initial participant buy-in (Yu et al., 2018). Finally, the presence of largely facile revision strategies is perhaps not surprising since, other than a basic set of guidelines for undertaking revisions, the participants did not receive training on how to act on the written feedback.

Cognitive Engagement

Decoding intended meaning of WF not straightforward. Owing to its role in explaining/modeling desired performance where a gap was exhibited, the written feedback ended up being comprehensive and unfocused. As a consequence, engagement was cognitively taxing for all participants, regardless of performance level and goals. Kushal noted, “I need to be more, like, focused when I'm going through those feedbacks,” while Chandrika felt, “I have to read more times your feedback when I get something new.” For Yuri, not knowing how to respond resulted in him spending, “15 or 20 minutes to think what I'm gonna do what?” This finding mirrors studies in other L2 writing settings that posit successful
cognitive engagement with WF requires considerable mental effort (Kim & Bowles, 2019; Yu et al., 2018; Zheng & Yu, 2018), largely in consideration of how and to what extent texts should be revised (Storch & Wigglesworth, 2010; Zheng & Yu, 2018). Such difficulties may have been mitigated had more thorough guidance been provided on how to process and use the feedback, which can reduce misunderstandings (Elwood & Bode, 2014; Uscinski, 2017). Alternatively, a mid-focused WF approach, defined by a focus on two to five pre-eminent textual features, may have lessened the cognitive burden, although at the expense of certain textual issues being attended to and in contravention of learners' desires for comprehensive WF.

An added difficulty was developing shared understandings of criterion-referenced statements of written performance. Notably, the learners struggled to understand how characteristics of their writing cohered with statements outlined in the PBDs. This encompassed localized concerns, such as Kushal bemoaning, “occasional errors I don't know what does that mean”, as well as issues understanding marks awarded overall: “I've got only also this 6 band in task response, but actually, I can't understand how is it does it work? You know this? Why 6? Why not 5 or why not 7?” (Yuri). This was because the students lacked access to the discourse in which the information was presented (Chanock, 2000), which is targeted toward language assessment specialists. Unfortunately, detailed descriptive/explanatory written feedback could not overcome the lack of clarity inherent in IELTS' general proficiency model of language assessment (Davies, 2008). As such, the merits of the public band descriptors as a pedagogical tool to facilitate learners' written development seem limited.

In fact, Chandrika, Min Jung, and Yuri varyingly became cognitively overwhelmed by the WF as the project progressed (Han & Hyland, 2015; Yu et al., 2018; Zhang & Hyland, 2018). Yuri noted, “it's hard to develop these ideas when you have this mess in your head with this lots of questions why this? Why not this?”, a perspective echoed by Chandrika: “there are so many things that I can't tell everything that are running in my mind”. Feeling cognitively overwhelmed stemmed from the learners being unable to utilize WF to address performance deficits. Overloaded with comprehensive information that lacked a clear focus (Bitchener, 2008; Sheen, 2007), these three participants lacked sufficient attentional capacity to process and respond to the multitude of global- and surface-level textual features (Bitchener, 2008; Sheen, 2007). Repeated over a series of drafts, the outcome was cognitive overload, confusion, and discouragement, which may have resulted in the quiet resistance to some written feedback (Han & Hyland, 2015; Radecki & Swales, 1988), particularly with regard to TR-focused points on Yuri's essay three and Min Jung's subsequent re-use of memorized material. That WF in these settings has the propensity to cause affective harm raises serious questions over the impact of IELTS preparation on learners who are not close to attaining their goals.

Not understanding a textual problem highlighted by WF. Successful behavioral engagement with CFWF was hampered by the fact that the learners did not always understand what particular comments “meant,” a common theme in both L1 (Chanock, 2000; Weaver, 2006) and L2 written feedback research (Conrad & Goldstein, 1999; Hyland, 1998; Mahfoodh, 2017). Failure to understand CFWF encompassed misinterpreting or not understanding the nature of a textual issue highlighted by WF or an item of WF itself, and not knowing how to act on a comment (accompanied by awareness of the issue). Instances where the participants professed or demonstrated non-understanding of a textual issue or item of WF were relatively low, occurring just 14 times across the dataset. This was somewhat surprising given the outcomes of learners' CFROs (Table 5) and the relatively poor progress across the project as measured in band scores. On the other hand, the learners were not complete novices to writing in this context, which might explain why not understanding how to act on commentary constituted a noticeably more common source of cognitive difficulty (23 instances).

Misinterpreting the Nature of a Textual Issue Targeted by WF. Since the written feedback was comprehensive, unfocused, and married to linguistically complex criterion-referenced descriptors that specify performance in general terms, there was great potential for misunderstanding and miscommunication, as indicated in other studies of WF on L2 writing (Conrad & Goldstein, 1999; Hyland, 1998; Hyland & Hyland, 2001). It is, thus, perhaps a positive indicator that “only” 14 instances emerged in the interviews where the participants had misunderstood the intent of the WF or a textual issue it was targeting. Yet since the rates of fully following the directions of CFWF were between 5.7% and 22.2%, clearly the learners encountered difficulties interpreting what was expected of them. It is likely that linguistic shortcomings caused them to misunderstand the intention behind CFWF, perhaps because they were unable to decode messages obfuscated by burdensome language assessment terminology or hedging that was employed to “soften the blow” (Hyland & Hyland, 2001).

One trend evinced by the weaker writers, Chandrika and Min Jung, was the conflation of content- and form-focused concerns. Min Jung understood the feedback condemned memorized generic academic-sounding vocabulary. However, this resulted in a perception that improving her response to the task was a matter of solely using the right vocabulary: “Task response always a problem, how to describe about the making suitable words, like making suitable using words.” Similarly, Chandrika understood her second drafts as lengthy
but felt her inability to write in, “a proper way” was due to a low range of vocabulary. Indeed, she noted, “if you don't have that much of vocabulary to express one example, so you can that mean I can go for two [supporting ideas].” This divergence is reflective of the dichotomy whereby teachers perceive revision as a generative process where meaning is reassessed and a text is reshaped, yet inexperienced learners view it as the correction of surface-level errors (Barkaoui, 2007; Radecki & Swales, 1988). Radecki and Swales (1988) conclude that such a narrow attitude “can only hinder their development as L2 writers” (p. 364), a perspective that coheres with the findings of this study.

Not Understanding How to Act on Written Feedback. The most notable characteristic of participants' cognitive engagement was a lack of purported understandings of how to act on CFWF, despite awareness of the issue(s) shown. Failure to generate suitable revisions to address teacher commentary is not uncommon in the literature (Christiansen & Bloch, 2016; Conrad & Goldstein, 1999; Hyland & Hyland, 2006; Mahfoodh, 2017), although it is a complex, highly learner-dependent issue. This is illustrated in the present study by the inability of Chandrika to characterize her written ideas with relevance, clarity, and development simultaneously, of Kushal to translate understandings of needing to be more focused into reality, of Min Jung to textually realize how to structure a clearly developed argument regardless of topic, and of Yuri to develop ideas that seemed complete and incorporate content suggestions he disagreed with. As Price et al. (2010) stress, being able to act on written feedback is crucial, underscoring its role in closing performance gaps in skill-based settings through feed-forward (Price et al., 2010) and usability (Walker, 2009). To do either, WF must be designed to “help the student to reduce or close the gap” (Walker, 2009, p. 68). Nevertheless, the association between ELP limitations and notable performance gaps made specifying expected revision strategies beyond native-speaker reformulations in linguistically simplified terms challenging.

Another explanation was participants' inability to conceive of what response to WF at their desired band score entailed, a finding of prior research (Chappell et al., 2019). One consistent explanation across multiple participants was a deficit of topic knowledge (Craven, 2012). Chandrika explained in regard to prompt two, “there's a law and age limitation in our countries, but still, I don't know how they use those things,” a point also made by Yuri: “I can't be aware of everything. You know, how it works in other countries.” There was also a linguistic dimension to this theme; that is, the participants whose ELP deficiencies resulted in essay ideas characterized as consistently unclear or inadequately developed (Chandrika, Min Jung, and Yuri) lacked the linguistic ability to discern why this was the case. Instead of remedying a deficit of content knowledge, for example, through reading about the essay topics online, as shown in Table 5, the learners often deleted content highlighted as deficient (Conrad & Goldstein, 1999; Hyland, 1998), perhaps to avoid the issue. The situation was exacerbated by the nebulous Task 2 assessment criteria, exemplified by Kushal's struggle to comprehend how examiners delineate a “well-developed” response and Yuri's skepticism of his ideas being “unclear.”

Conclusions

As the study explored how a small sample of candidates preparing for IELTS (with an overrepresentation of test veterans) engaged with written feedback using a case study approach, the findings are not generalizable. Additionally, the research was simplified for the manageability of data collection, analysis, and reporting by a single researcher. A prominent omission was participants' cognitive engagement with FFWF, which was beyond the time constraints of the interviews, particularly as discussion of often-complex CFWF points proved time consuming. A further limitation stems from the multi-dimensional model of student engagement itself. While it has exhibited increasing acceptance among scholars in tertiary-level learning-to-write contexts (Yu et al., 2018; Yu & Jiang, 2020; Zheng et al., 2020), facets of the approach were not entirely well suited to exploring engagement in this context. These include the emphasis on surface-level issues (which comprise half of the Task 2 criteria), behavioral engagement operationalized predominantly as revision operations (since content changes influenced FFROs, especially deletions and removals, and revisions are uncommon in this context), and a lack of a sophisticated schema for coding CFROs. Similarly, coding qualitative data as one particular dimension seemed reductive and compartmentalizing. It was not always possible to say for certain that a code was positioned within a single dimension, with the nebulous border between behavioral and cognitive engagement particularly problematic.

The study found that, despite detailed written feedback that explained and exemplified performance at participants' target band, student engagement was insufficient to bridge performance deficits across the project. Cognitively, the learners did not always understand the intentions behind comments or what actions to take to resolve problematic textual features within how the task is assessed (Conrad & Goldstein, 1999; Hyland, 1998). Behaviorally, despite successful error resolutions across drafts, meaningful improvements in lexicogrammatical accuracy were not apparent, while notable content changes between drafts meant many errors went uncorrected. Processing WF tended to be at the surface level (Yu et al., 2018; Zheng & Yu, 2018), accompanied by a lack of buy-in to making content-focused revisions in the early stages of the project and a tendency to omit faulty content rather than meaningfully address it. Affectively, the WF was highly valued and deemed confidence building
(Chappell et al., 2019; Yang & Badger, 2015), contributing an activating effect (Pekrun, 2006). Nevertheless, underperformance accompanied by comprehensive, unfocused WF information resulted in two learners feeling anxious, frustrated, and sometimes overwhelmed, although the pressure to meet their IELTS goals mitigated the propensity of WF in these settings to be deactivating.

The findings of the study feature implications for practitioners of IELTS preparation who provide written feedback to prospective test-takers. First, feedback providers are advised to be cautious about what can be achieved and to manage students' expectations of WF being a panacea to notable deficits in performance. Practitioners may seek to focus on a few key features across learners' compositions in relation to the public band descriptors, rather than overloading them with comprehensive WF that lacks a clear focus (Bitchener, 2008; Sheen, 2007). To address TR and CC issues, it may be preferable to model performance at the learners' required level by appropriating students' texts, rather than providing time-consuming descriptions and explanations of performance. Additionally, teachers should consider whether comprehensive grammar correction is worth their time, particularly if students are tasked with undertaking content-based re-writes. It might be advisable to adopt a ‘mid-focused’ approach to FFWF, highlighting a handful of noteworthy recurring errors, or focusing on ones that disturb comprehension, as these are penalized more heavily. Alternatively, FFWF could be delivered separately once students have had the opportunity to engage with CFWF. Future research in face-to-face settings with a larger cohort of learners is required before more unequivocal judgments concerning the

Supplemental Material

Supplemental material for this article is available online.

References

Allen, D. (2016). Investigating washback to the learner from the IELTS test in the Japanese tertiary context. Language Testing in Asia, 6(7), 1–20. https://doi.org/10.1186/s40468-016-0030-z
Alsagoafi, A. (2018). IELTS economic washback: A case study on English major students at King Faisal University in Al-Hasa, Saudi Arabia. Language Testing in Asia, 8(1), 1–13. https://doi.org/10.1186/s40468-018-0058-3
Barkaoui, K. (2007). Revision in second language writing: What teachers need to know. TESL Canada Journal, 25(1), 81–92. https://doi.org/10.18806/tesl.v25i1.109
Bitchener, J. (2008). Evidence in support of written corrective feedback. Journal of Second Language Writing, 17(2), 102–118. https://doi.org/10.1016/j.jslw.2007.11.004
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Chanock, K. (2000). Comments on essays: Do students understand what tutors write? Teaching in Higher Education, 5(1), 95–105. https://doi.org/10.1080/135625100114984
Chappell, P., Yates, L., & Benson, P. (2019). Investigating test preparation practices: Reducing risks (IELTS Research Reports Online Series, No. 3). British Council, Cambridge Assessment English and IDP, IELTS Australia. https://www.ielts.org/-/media/research-reports/2019-3-chappell_et_al_layout.ashx
Chong, S. W. (2020). A research report: Theorizing ESL community college students' perception of written feedback. Community College Journal of Research and Practice, 44(6), 463–467. https://doi.org/10.1080/10668926.2019.1610675
Christiansen, M. S., & Bloch, J. (2016). “Papers are never finished,
effectiveness of WF in IELTS Writing Task 2 preparation set- just abandoned”: The role of written teacher comments in the
tings can be made. revision process. Journal of Response to Writing, 2(1), 6–42.
http://www.journalrw.org/index.php/jrw/article/view/32
Declaration of Conflicting Interests Coffin, C. (2004). Arguing about how the world is or how the
world should be: The role of argument in IELTS tests. Journal
The author(s) declared no potential conflicts of interest with respect
of English for Academic Purposes, 3(3), 229–246. https://doi.
to the research, authorship, and/or publication of this article.
org/10.1016/j.jeap.2003.11.002
Conrad, S. M., & Goldstein, L. M. (1999). ESL student revision
Funding after teacher-written comments: Text, contexts, and individu-
The author(s) received no financial support for the research, author- als. Journal of Second Language Writing, 8(2), 147–179.
ship, and/or publication of this article. https://doi.org/10.1016/S1060-3743(99)80126-X
Craven, E. (2012). The quest for IELTS Band 7.0: Investigating
Ethics Statement English language proficiency development of international
students at an Australian university (IELTS Research Reports,
Approval of the University of Exeter’s Research Ethics and Volume 13). IDP: IELTS Australia and British Council. https://
Governance Office was obtained prior to the commencement of www.ielts.org/-/media/research-reports/ielts_rr_volume13_
data collection (Reference Number D1920-049). report2.ashx
Cunningham, J. M. (2019a). Composition students’ opinions of
Consent and attention to instructor feedback. Journal of Response to
All participants provided their written consent to participate in the Writing, 5(1), 4–38. https://journalrw.org/index.php/jrw/arti-
study by digitally signing a bespoke participant information sheet. cle/view/133/90
Cunningham, K. J. (2019b). Student perceptions and use of tech-
nology-mediated text and screencast feedback in ESL writ-
ORCID iD ing. Computers and Composition, 52, 222–241. https://doi.
William S. Pearson https://orcid.org/0000-0003-0768-8461 org/10.1016/j.compcom.2019.02.003
Davies, A. (2008). Assessing academic English language proficiency: 40+ years of U.K. language tests. In J. Fox, M. Wesche, D. Bayliss, L. Cheng, C. E. Turner, & C. Doe (Eds.), Language testing reconsidered (pp. 73–86). University of Ottawa Press.
Ellis, R. (2010). Epilogue: A framework for investigating oral and written corrective feedback. Studies in Second Language Acquisition, 32(2), 335–349. https://doi.org/10.1017/S0272263109990544
Elwood, J. A., & Bode, J. (2014). Student preferences vis-à-vis teacher feedback in university EFL writing classes in Japan. System, 42(1), 333–343. https://doi.org/10.1016/j.system.2013.12.023
Estaji, M., & Tajeddin, Z. (2012). The learner factor in washback context: An empirical study investigating the washback of the IELTS academic writing test. Language Testing in Asia, 2(1), 5–25. https://doi.org/10.1186/2229-0443-2-1-5
Fan, Y., & Xu, J. (2020). Exploring student engagement with peer feedback on L2 writing. Journal of Second Language Writing, 50, 100775. https://doi.org/10.1016/j.jslw.2020.100775
Ferris, D. R. (1995). Student reactions to teacher response in multiple-draft composition classrooms. TESOL Quarterly, 29(1), 33–53. https://doi.org/10.2307/3587804
Ferris, D. R. (2011). Treatment of error in second language student writing (2nd ed.). The University of Michigan Press.
Ferris, D. R. (2012). Written corrective feedback in second language acquisition and writing studies. Language Teaching, 45(4), 446–459. https://doi.org/10.1017/S0261444812000250
Ferris, D. R., Brown, J., Liu, H. S., & Stine, M. E. A. (2011). Responding to L2 students in college writing classes: Teacher perspectives. TESOL Quarterly, 45(2), 207–234. https://doi.org/10.5054/tq.2011.247706
Frear, D., & Chiu, Y. H. (2015). The effect of focused and unfocused indirect written corrective feedback on EFL learners' accuracy in new pieces of writing. System, 53, 24–34. https://doi.org/10.1016/j.system.2015.06.006
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59–109. https://doi.org/10.3102/00346543074001059
Gan, Z. (2009). IELTS preparation course and student IELTS performance: A case study in Hong Kong. RELC Journal, 40(1), 23–41. https://doi.org/10.1177/0033688208101449
Green, A. (2005). EAP study recommendations and score gains on the IELTS Academic Writing test. Assessing Writing, 10(1), 44–60. https://doi.org/10.1016/j.asw.2005.02.002
Hamid, M. O. (2016). Policies of global English tests: Test-takers' perspectives on the IELTS retake policy. Discourse: Studies in the Cultural Politics of Education, 37(3), 472–487. https://doi.org/10.1080/01596306.2015.1061978
Hamid, M. O., & Hoang, N. T. H. (2018). Humanising language testing. TESL-EJ, 22(1), 1–20. http://www.tesl-ej.org/wordpress/issues/volume22/ej85/ej85a5/
Han, Y. (2017). Mediating and being mediated: Learner beliefs and learner engagement with written corrective feedback. System, 69, 133–142. https://doi.org/10.1016/j.system.2017.07.003
Han, Y., & Hyland, F. (2015). Exploring learner engagement with written corrective feedback in a Chinese tertiary EFL classroom. Journal of Second Language Writing, 30, 31–44. https://doi.org/10.1016/j.jslw.2015.08.002
Han, Y., & Hyland, F. (2019). Academic emotions in written corrective feedback situations. Journal of English for Academic Purposes, 38, 1–13. https://doi.org/10.1016/j.jeap.2018.12.003
Han, Y., & Xu, Y. (2021). Student feedback literacy and engagement with feedback: A case study of Chinese undergraduate students. Teaching in Higher Education, 26(2), 181–196. https://doi.org/10.1080/13562517.2019.1648410
Hedgcock, J., & Lefkowitz, N. (1994). Feedback on feedback: Assessing learner receptivity to teacher response in L2 composing. Journal of Second Language Writing, 3(2), 141–163. https://doi.org/10.1016/1060-3743(94)90012-4
Hu, R., & Trenkic, D. (2019). The effects of coaching and repeated test-taking on Chinese candidates' IELTS scores, their English proficiency, and subsequent academic achievement. International Journal of Bilingual Education and Bilingualism, 24(10), 1486–1501. https://doi.org/10.1080/13670050.2019.1691498
Hyland, F. (1998). The impact of teacher written feedback on individual writers. Journal of Second Language Writing, 7(3), 255–286. https://doi.org/10.1016/S1060-3743(98)90017-0
Hyland, F., & Hyland, K. (2001). Sugaring the pill: Praise and criticism in written feedback. Journal of Second Language Writing, 10(3), 185–212. https://doi.org/10.1016/S1060-3743(01)00038-8
Hyland, K., & Hyland, F. (2006). Feedback on second language students' writing. Language Teaching, 39(2), 83–101. https://doi.org/10.1017/S0261444806003399
IELTS. (2019a). Guide for educational institutions, governments, professional bodies and commercial organisations. Cambridge Assessment English, The British Council, IDP Australia. https://www.ielts.org/-/media/publications/guide-for-institutions/ielts-guide-for-institutions-2015-uk.ashx
IELTS. (2019b). IELTS Task 2 Writing band descriptors (Public version). https://takeielts.britishcouncil.org/sites/default/files/ielts_task_2_writing_band_descriptors.pdf
Ingram, D., & Bayliss, A. (2007). IELTS as a predictor of academic language performance, part 1 (IELTS Research Reports, Volume 7). British Council and IELTS Australia Pty Limited. https://www.ielts.org/-/media/research-reports/ielts_rr_volume07_report3.ashx
Ivanič, R., & Satchwell, C. (2007). Boundary crossing: Networking and transforming literacies in research processes and college courses. Journal of Applied Linguistics, 4(7), 101–124. https://doi.org/10.1558/japl.v4i1.101
Kim, H. R., & Bowles, M. (2019). How deeply do second language learners process written corrective feedback? Insights gained from think-alouds. TESOL Quarterly, 53(4), 913–938. https://doi.org/10.1002/tesq.522
Lee, I. (2008). Student reactions to teacher feedback in two Hong Kong secondary classrooms. Journal of Second Language Writing, 17(3), 144–164. https://doi.org/10.1016/j.jslw.2007.12.001
Mahfoodh, O. H. A. (2017). "I feel disappointed": EFL university students' emotional responses towards teacher written feedback. Assessing Writing, 31, 53–72. https://doi.org/10.1016/j.asw.2016.07.001
Mickan, P., & Motteram, J. (2009). The preparation practices of IELTS candidates: Case studies (IELTS Research Reports, Volume 10). IELTS Australia Pty Limited and British Council. https://www.ielts.org/-/media/research-reports/ielts_rr_volume10_report5.ashx
Mickan, P., & Slater, S. (2003). Text analysis and the assessment of academic writing (IELTS Research Reports, Volume 4). IELTS Australia Pty Limited. https://www.ielts.org/-/media/research-reports/ielts_rr_volume04_report2.ashx
Moore, T., & Morton, J. (2005). Dimensions of difference: A comparison of university writing and IELTS writing. Journal of English for Academic Purposes, 4(1), 43–66. https://doi.org/10.1016/j.jeap.2004.02.001
Müller, A. (2015). The differences in error rate and type between IELTS writing bands and their impact on academic workload. Higher Education Research & Development, 34(6), 1207–1219. https://doi.org/10.1080/07294360.2015.1024627
Orsmond, P., & Merry, S. (2013). The importance of self-assessment in students' use of tutors' feedback: A qualitative study of high and non-high achieving biology undergraduates. Assessment & Evaluation in Higher Education, 38(6), 737–753. https://doi.org/10.1080/02602938.2012.697868
Pearson, W. S. (2019). "Remark or retake"? A study of candidate performance in IELTS and perceptions towards test failure. Language Testing in Asia, 9(17), 1–20. https://doi.org/10.1186/s40468-019-0093-8
Pearson, W. S. (2021). Student engagement with teacher written feedback on IELTS Writing Task 2 rehearsal essays. University of Exeter. http://hdl.handle.net/10871/127764
Pekrun, R. (2006). The control-value theory of achievement emotions: Assumptions, corollaries, and implications for educational research and practice. Educational Psychology Review, 18(4), 315–341. https://doi.org/10.1007/s10648-006-9029-9
Porte, G. K. (1997). The etiology of poor second language writing: The influence of perceived teacher preferences on second language revision strategies. Journal of Second Language Writing, 6(1), 61–78. https://doi.org/10.1016/S1060-3743(97)90006-0
Price, M., Handley, K., Millar, J., & O'Donovan, B. (2010). Feedback: All that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35(3), 277–289. https://doi.org/10.1080/02602930903541007
Radecki, P. M., & Swales, J. M. (1988). ESL student reaction to written comments on their written work. System, 16(3), 355–365. https://doi.org/10.1016/0346-251X(88)90078-4
Ranalli, J. (2021). L2 student engagement with automated feedback on writing: Potential for learning and issues of trust. Journal of Second Language Writing, 52, 100816. https://doi.org/10.1016/j.jslw.2021.100816
Rao, C., McPherson, K., Chand, R., & Khan, V. (2003). Assessing the impact of IELTS preparation programs on candidates' performance on the General Training reading and writing test modules (IELTS Research Reports, Volume 5). IELTS Australia Pty Limited. https://www.ielts.org/-/media/research-reports/ielts_rr_volume05_report5.ashx
Saif, S., Ma, J., May, L., & Cheng, L. (2021). Complexity of test preparation across three contexts: Case studies from Australia, Iran and China. Assessment in Education: Principles, Policy & Practice, 28(1), 37–57. https://doi.org/10.1080/0969594X.2019.1700211
Saito, H. (1994). Teachers' practices and students' preferences for feedback on second language writing: A case study of adult ESL learners. TESL Canada Journal, 11(2), 46–70. https://doi.org/10.18806/tesl.v11i2.633
Sheen, Y. (2007). The effect of focused written corrective feedback and language aptitude on ESL learners' acquisition of articles. TESOL Quarterly, 41(2), 255–283. https://doi.org/10.1002/j.1545-7249.2007.tb00059.x
Sheen, Y., Wright, D., & Moldawa, A. (2009). Differential effects of focused and unfocused written correction on the accurate use of grammatical forms by adult ESL learners. System, 37(4), 556–569. https://doi.org/10.1016/j.system.2009.09.002
Smirnova, E. A. (2017). Using corpora in EFL classrooms: The case study of IELTS preparation. RELC Journal, 48(3), 302–310. https://doi.org/10.1177/0033688216684280
Storch, N., & Wigglesworth, G. (2010). Learners' processing, uptake, and retention of corrective feedback on writing: Case studies. Studies in Second Language Acquisition, 32(2), 303–334. https://doi.org/10.1017/S0272263109990532
Tardy, C. M. (2019). Appropriation, ownership, and agency: Negotiating teacher feedback in academic settings. In K. Hyland & F. Hyland (Eds.), Feedback in second language writing: Contexts and issues (pp. 64–82). Cambridge University Press.
Terry, G., Hayfield, N., Clarke, V., & Braun, V. (2017). Thematic analysis. In C. Willig (Ed.), The SAGE handbook of qualitative research in psychology (pp. 17–36). SAGE Publications Ltd.
Tian, L., & Zhou, Y. (2020). Learner engagement with automated feedback, peer feedback and teacher feedback in an online EFL writing context. System, 91, 102247. https://doi.org/10.1016/j.system.2020.102247
Treglia, M. O. (2008). Feedback on feedback: Exploring student responses to teachers' written commentary. The Journal of Basic Writing, 27(1), 105–137. https://wac.colostate.edu/docs/jbw/v27n1/treglia.pdf
Uscinski, I. (2017). L2 learners' engagement with direct written corrective feedback in first-year composition courses. Journal of Response to Writing, 3(2), 36–62. http://journalrw.org/index.php/jrw/article/view/68
van Beuningen, C., de Jong, N. H., & Kuiken, F. (2012). Evidence on the effectiveness of comprehensive error correction in second language writing. Language Learning, 62(1), 1–41. https://doi.org/10.1111/j.1467-9922.2011.00674
Walker, M. (2009). An investigation into written comments on assignments: Do students find them usable? Assessment & Evaluation in Higher Education, 34(1), 67–78. https://doi.org/10.1080/02602930801895752
Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors' written responses. Assessment & Evaluation in Higher Education, 31(3), 379–394. https://doi.org/10.1080/02602930500353061
Winke, P., & Lim, H. (2014). The effects of testwiseness and test-taking anxiety on L2 Listening test performance: A visual (eye-tracking) and attentional investigation (IELTS Research Report Online Series, No. 3). British Council, Cambridge English Language Assessment and IDP: IELTS Australia. https://www.ielts.org/-/media/research-reports/ielts_online_rr_2014-3.ashx
Yang, Y., & Badger, R. (2015). How IELTS preparation courses support students: IELTS and academic socialisation. Journal of Further and Higher Education, 39(4), 438–465. https://doi.org/10.1080/0309877X.2014.953463
Yu, S., & Jiang, L. (2020). Doctoral students' engagement with journal reviewers' feedback on academic writing. Studies in Continuing Education, 1–18. https://doi.org/10.1080/0158037X.2020.1781610
Yu, S., Zhang, Y., Zheng, Y., Yuan, K., & Zhang, L. (2018). Understanding student engagement with peer feedback on master's theses: A Macau study. Assessment & Evaluation in Higher Education, 44(1), 50–65. https://doi.org/10.1080/02602938.2018.1467879
Zacharias, N. T. (2007). Teacher and student attitudes toward teacher feedback. RELC Journal, 38(1), 38–52. https://doi.org/10.1177/0033688206076157
Zhang, Z. (Victor). (2020). Engaging with automated writing evaluation (AWE) feedback on L2 writing: Student perceptions and revisions. Assessing Writing, 43, 100439. https://doi.org/10.1016/j.asw.2019.100439
Zhang, Z. V., & Hyland, K. (2018). Student engagement with teacher and automated feedback on L2 writing. Assessing Writing, 36, 90–102. https://doi.org/10.1016/j.asw.2018.02.004
Zheng, Y., & Yu, S. (2018). Student engagement with teacher written corrective feedback in EFL writing: A case study of Chinese lower-proficiency students. Assessing Writing, 37, 13–24. https://doi.org/10.1016/j.asw.2018.03.001
Zheng, Y., Zhong, Q., Yu, S., & Li, X. (2020). Examining students' responses to teacher translation feedback: Insights from the perspective of student engagement. SAGE Open, 10(2), 1–10. https://doi.org/10.1177/2158244020932536