“She is such a B!” – “Really? How can you tell?”: A qualitative study into inter-rater reliability in grading EFL writing in a Swedish upper-secondary school
Abstract: This project investigates the extent to which EFL teachers’ assessments of two students’ written texts differ in a Swedish upper-secondary school. It also seeks to identify the factors that influence the teachers’ inter-rater reliability in their assessment and marking process. The results show inconsistencies in the summative grades given by the raters, including differences in what the raters deem important in the rubric; the actual assessment process, however, was very similar across raters. Based on the themes found in the content analysis of the factors the raters perceived as influential, peer assessment, assessment training, context, and time emerged as important to the raters. The themes further indicate that the interpretation of rubrics, arguably the element that should matter most in assessment, causes inconsistencies in summative marking even when raters use the same rubrics, criteria, and instructions. The results suggest that peer assessment is needed as a tool in the assessment and marking of students’ texts to ensure inter-rater reliability, which in turn would require that more time be allocated to grading.