Teacher Rating of Class Essays Written by Students of English as a Second Language: A Qualitative Study of Criteria and Process

Alghannam, Manal Saleh Mohammad (2018) Teacher Rating of Class Essays Written by Students of English as a Second Language: A Qualitative Study of Criteria and Process. PhD thesis, University of Essex.

Abstract

This study is concerned with a neglected aspect of the study of L2 English writing: the processes which teachers engage in when rating essays written by their own students for class practice, not exams, with no imposed rating/assessment scheme. It draws on the writing assessment process research literature, although, apart from Huot (1993) and Wolfe et al. (1998), most work has been done on scoring writing in exam conditions using a set scoring rubric, where all raters rate the same essays. Eight research questions were answered from data gathered from six teachers, with a wide range of relevant training, but all teaching university pre-sessional or equivalent classes. The instruments used were general interviews, think-aloud reports recorded while teachers rated their own students' essays, and immediate retrospective follow-up interviews. Extensive qualitative coding was undertaken using NVivo. It was found that the teachers did not vary much in the core features that they claimed to recognise in general as typical of ‘good writing’, but varied more in which criteria they highlighted in practice when rating essays, though all used a form of analytic rating. Two-thirds of the separate criteria coded were used by all the teachers, but there were differences in preference for higher- versus lower-level criteria. Teachers also differed a great deal in the scales they used to sum up their evaluations, ranging from IELTS scores to simple evaluative adjectives, and most claimed to use personal criteria, being more concerned with the consequential pedagogical value of their rating for the students than with achieving a test-like reliable score. A wide range of information sources beyond the essay text was used to support and justify the rating decisions made, including background information about the writer and classmates and the teacher's prior instruction. Teacher comments also evidenced concern with issues arguably not central to rating itself, but rather exploring implications for the teacher and writer.
As in Cumming et al. (2002), three broad stages of the rating process were identified: reading and exploiting information such as the writer's name and the task prompt, as well as perhaps skimming the text; reading and rereading parts of the essay, associated with interpretation and judgment; and arriving at a summary judgment. In detail, however, each teacher had their own individual style of reading and of choice and use of criteria.

Item Type: Thesis (PhD)
Subjects: L Education > L Education (General)
P Language and Literature > P Philology. Linguistics
Divisions: Faculty of Social Sciences > Language and Linguistics, Department of
Depositing User: Manal Alghannam
Date Deposited: 07 Sep 2018 09:04
Last Modified: 07 Sep 2018 09:04
URI: http://repository.essex.ac.uk/id/eprint/22871