Exploring relationships between automated and human evaluations of L2 texts

Author(s)
Matthews, Joshua
Wijeyewardene, Ingrid
Publication Date
2018-10-01
Abstract
Despite the current potential to use computers to automatically generate a large range of text-based indices, many issues remain unresolved about how to apply these data in established language teaching and assessment contexts. One way to resolve these issues is to explore the degree to which automatically generated indices, which reflect key measures of text quality, align with parallel measures derived from locally relevant, human evaluations of texts. This study describes the automated evaluation of 104 English as a second language texts with the computational tool Coh-Metrix, which was used to generate indices reflecting text cohesion, lexical characteristics, and syntactic complexity. The same texts were then independently evaluated by two experienced human assessors using an analytic scoring rubric. The interrelationships between the computer- and human-generated evaluations of the texts are presented in this paper, with a particular focus on the automatically generated indices that were most strongly linked to the human-generated measures. A synthesis of these findings is then used to discuss the role that such automated evaluation may have in the teaching and assessment of second language writing.
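The core analysis the abstract describes pairs each text's automatically generated indices with its human rubric scores and examines how strongly they are related. A minimal sketch of that kind of correlation analysis is shown below; it assumes hypothetical index values and scores (Coh-Metrix is a separate tool and is not invoked here), and the variable names and numbers are placeholders, not data from the study.

    # Illustrative sketch only: correlate one automated text index with
    # human rubric scores across texts. All values below are hypothetical.
    from scipy.stats import pearsonr

    # Hypothetical per-text data: an automated index (e.g., a cohesion
    # measure exported from Coh-Metrix) and the matching human score.
    cohesion_index = [0.42, 0.55, 0.31, 0.60, 0.48]  # placeholder values
    human_score = [3.0, 4.0, 2.5, 4.5, 3.5]          # placeholder values

    r, p = pearsonr(cohesion_index, human_score)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")

Indices with the strongest and most reliable correlations against the human measures would be the candidates the paper's discussion of teaching and assessment applications focuses on.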
Citation
Language Learning & Technology, 22(3), pp. 143–158
ISSN
1094-3501
Language
en
Publisher
University of Hawai'i, National Foreign Language Resource Center
Title
Exploring relationships between automated and human evaluations of L2 texts
Type of document
Journal Article
Entity Type
Publication
