Editorial. 2020 Jul 4;45:100472. doi: 10.1016/j.asw.2020.100472

Volume 45 Editorial

David Slomp, Martin East
PMCID: PMC7334927  PMID: 38620279

In our editorials over the past several years, we have highlighted the importance of a consequential view of writing assessment research and practice. In the present moment, as the coronavirus continues to devastate our social and economic systems and as protests against systemic racism echo around the globe, the importance of a consequential perspective becomes clearer.

We are only beginning to come to terms with the policy decisions and practices adopted over the past months in response to the current pandemic (the shuttering of schools, cancellation of assessments, forced social isolation), and over the past decades and centuries with respect to systemic racism and inequality. Alongside identifying the intended positive outcomes that a policy or practice was designed to achieve, a consequential view demands that we equally examine the unintended outcomes that result from these decisions and practices.

Reflecting on the past 25 years of scholarship in Assessing Writing, White (2019) draws our attention to the fact that, for many students, especially those from underserved populations, writing assessment is seen as “part of the oppressive apparatus that has traditionally worked to their detriment” (p. 5). He points to the concept of fairness as a mechanism for addressing these concerns. As broadly conceptualized within the scholarship in Assessing Writing, fairness encompasses a number of key issues: the pursuit of validity, acknowledgement of social context, legal responsibility, ethical obligation, and the elimination of bias (Poe & Elliot, 2019).

Missing from this body of research is any exploration of the broadest aspect of fairness: equity of opportunity to learn (Moss et al., 2008). When research on writing assessment examines opportunity to learn, it seeks to uncover the systemic inequities that suppress opportunities for various populations of test-takers. Writing assessment research that focuses on fairness through the lens of ethics seeks to use systems of assessment, and the programs of research that accompany them, to uncover these inequities and to propose mechanisms to enhance opportunity for those disadvantaged by the assessment itself (Elliot, 2016; Slomp, 2016).

Applying this perspective explicitly to scholarship on racism and writing assessment, Hammond (2019) calls for greater attention to: (a) the ecological dimensions of racism; (b) clearer definitions of race, ethnicity, and racism in our scholarship; (c) deeper engagement with theories of race and racism; (d) the diversification of voices, interpretations, and accounts of racism in writing assessment; and (e) a broader focus on oppression, inequity, and writing assessment.

As instruments of institutional and public policies, writing assessments across the globe and at all levels of education carry with them the potential to either further entrench inequity and oppression or promote opportunity for all.

With this goal in mind, we encourage contributors to Assessing Writing to explore these issues in their submissions to the journal. Questions to explore include those that target inequity and those that imagine a more just future:

1. How have writing assessment practices and policies of the past and present either contributed to or challenged the systematic entrenchment of inequity?

2. How can research on writing assessment draw attention to the role of writing assessment in confronting inequity and promoting opportunity for all?

In their introduction to Race and Writing Assessment, Inoue and Poe (2012) place such questions at the forefront of a research agenda for our field, stating, “[o]ur job is to understand how unequal outcomes may reflect larger socially organized forces and suggest ways that we could account for the effects of those racial formations in our processes of validating assessments. It’s our ethical responsibility” (p. 5). Here, Inoue and Poe are not necessarily calling for a separate research agenda focused on race and writing assessment; rather, they challenge us to integrate a concern for the consequences of assessment (with explicit attention to racism and inequity) into the diverse programs of research in which we are already engaged.

1. In this issue

Once again, we are impressed with the breadth of scholarship included in this volume. The studies published here report on research conducted around the globe, in L1 and L2 contexts, and from primary to tertiary levels of education. They can be broadly grouped under two themes: the use of assessment to support teaching and learning, and challenges related to automated and human scoring.

1.1. Assessment to support teaching and learning

Five papers explore issues related to how assessment of writing can support teaching and learning.

In L2 contexts, written corrective feedback (WCF) is a popular method for helping students develop the capacity to write error-free prose. Research on various aspects of WCF, however, has yielded mixed evidence of its effectiveness. To help us gain a better understanding of the research on the efficacy of WCF, Mao and Lee reviewed 59 articles, published between 1979 and 2018, that examined feedback scope in WCF. By examining quantitative, qualitative, and mixed-methods research, their study expands on previous reviews of research on WCF. In addition to elucidating findings regarding the benefits of WCF, their study points to a number of gaps in the current body of research. These include the need for: (a) clearer definitions of the core constructs associated with various modes of WCF; (b) more studies that examine both comprehensive and focused WCF; (c) greater attention to the individual and contextual factors that shape both the provision of and response to WCF; and (d) the diversification of research methods used to explore the effectiveness of WCF. Echoing themes from our review of 25 years of research published in Assessing Writing (Slomp, 2019), Mao and Lee also point to the need for greater attention to the ecological validity of research on WCF, including expanding the populations and contexts in which this research is conducted (currently, primarily at the tertiary level), focusing on the impact of these approaches on different populations in different contexts, and moving beyond a cognitive perspective to examine these issues from a sociocultural perspective as well.

In a study that questions the focus on error in writing assessment and formative feedback, Sandiford and Macken-Horarik examine methods for assessing development in narrative writing. Working with 27 primary- and secondary-level teachers in Australia, they collected 373 samples of student narrative writing completed in response to timed writing prompts. Drawing on the lens of systemic functional grammatics, they orient assessment away from error and toward an appreciation of the “intimations of what is to come” exemplified by the choices students make and the struggles they work through. The paper presents several samples of student writing (always a welcome element of papers published in the journal) to persuasively demonstrate the insights into the development of writing ability that this lens provides.

In a similar vein, and drawing on a sociocultural view of writing, Qin and Uccelli explored the flexibility with which adolescent and adult writers appropriately employ linguistic resources in academic and colloquial contexts. Participants were EFL learners from Chinese, French, and Spanish language backgrounds. The study found complex associations between L1 background, linguistic complexity, register flexibility, and English proficiency. With respect to consequences, it points to the importance of developing metalinguistic awareness in student writers, particularly with respect to the language choices they make across registers and genres to fulfil specific communicative purposes.

Ghaffar, Khairallah, and Salloum report on a study, conducted with middle school students in Lebanon, that examined the impact of co-constructing and using rubrics for formative assessment purposes on students’ attitudes toward writing and on their development as writers. They found that this approach enhanced students’ awareness of criteria, improved attitudes toward writing, and deepened engagement and student-directed learning. They report that this formative assessment focus led both teachers and students to reconsider “the meaning of writing”: both why they write and how they write. The study highlights the importance of teacher voice and collaboration in assessment design.

Gomes and Ma report on the use of student evaluations of teaching to gain insights into the functioning of writing programs. They suggest that orienting these evaluations around the construct of helpfulness (the belief that a course has had positive outcomes for the student) could help to resolve historic inequities and biases perpetuated by these forms of assessment. Doing so would provide students, instructors, and program administrators with a shared language about student success in local contexts, leading to more actionable data on student experience.

Collectively, these five studies advance consideration of the consequential aspect of writing assessment by investigating and demonstrating uses of assessment that support instruction and development in writing ability.

1.2. Challenges in human and automated scoring

The final three papers in this volume focus on issues related to scoring, highlighting issues of construct representation in scoring procedures.

Sevgi-Sole and Ünaldi examined how raters negotiate to resolve score discrepancies, analyzing patterns in the verbal exchanges between raters in both authentic and research scoring sessions. They found that negotiations in authentic scoring contexts were dramatically shorter than negotiations in research contexts. Time pressure to complete the scoring process in authentic settings was found to have affected the duration, coherence, and completeness of raters’ argumentation. These findings highlight the need for more research into the role of contextual factors, including cultural values and modes of argumentation, in shaping rater negotiations. Their finding that fewer than 2% of argumentative moves made during negotiation sessions referred to the rating scale also raises construct underrepresentation as an issue in need of further exploration.

Canz, Hoffmann, and Kania examine presentation-mode effects on highly trained raters in the context of a large-scale writing assessment program for upper secondary students in Germany. Analyzing scores given to 430 essays that were assessed both in their original handwritten form and in a transcribed, computer-typed form, they found that computer-typed essays were scored higher than handwritten essays. They also found that this effect was stronger for informative genres than for narrative genres, and that it became stronger as essay quality decreased.

Finally, Kyle examines approaches to expanding the construct coverage of automated scoring systems for integrated writing tasks. His study of 480 responses to a TOEFL iBT integrated writing task examined the impact of source use (aural versus written source material) on test-taker performance, demonstrating that test-takers’ ability to use lecture-based source material resulted in higher integrated writing scores, while reliance on reading-based source material resulted in lower scores. With respect to automated scoring, the study found that e-rater (used to score the TOEFL iBT) does not appear to cover features associated with source-text use. The features the researcher associated with source use point to overlap indices (word, n-gram, synonym, semantic) as a way to expand the construct coverage of automated scoring systems for integrated writing tasks.

Collectively, these studies expand our understanding of the complexities involved in scoring writing samples collected under testing conditions.

2. Welcome to new editorial board members

The range of topics, methods, and contexts explored in Assessing Writing requires a diverse set of expertise from editors, editorial board members, and reviewers. We appreciate that, during this period of instability brought on by the global pandemic, our reviewers and editorial board members continue to offer their expertise in support of the journal. We also appreciate the patience of our authors, as the process of reviewing manuscripts has at times been lengthened by the challenges each of us is facing. Submissions to the journal continue to climb, placing significant demands on our editorial board members and reviewers. We thank you for your commitment to the journal and to promoting excellence in the research we publish. Through your support, and through the work of our authors, the stature of Assessing Writing continues to grow. The recently released CiteScore (3.6) and Impact Factor (2.404) rankings place Assessing Writing in the top 5% of linguistics and literacy journals and in the top 11% of education journals.

We welcome a number of new members to the Editorial Board:

Chris Anson (North Carolina State University)

Bob Broad (Illinois State University)

Tracey Hodges (University of Alabama at Birmingham)

Gerriet Janssen (University of the Andes)

Vijay Kumar (University of Otago)

Natsuko Shintani (Kobe Gakuin University)

Elke Stracke (University of Canberra)

Jonathan Trace (Keio University)

Shulin Yu (University of Macau)

We thank each of our new board members, alongside our existing board members, for their willingness to serve the writing assessment community in this capacity.

We wish our readers, contributors, reviewers, and editorial board members health, wellness, and peace during these unprecedented times.

References

1. Elliot N. A theory of ethics for writing assessment. The Journal of Writing Assessment. 2016;9(1). Retrieved from http://journalofwritingassessment.org/article.php?article=98
2. Hammond J. Making our invisible racial agendas visible: Race talk in assessing writing. Assessing Writing. 2019;42. doi: 10.1016/j.asw.2019.100425
3. Inoue A.B., Poe M. Race and writing assessment. Studies in Composition and Rhetoric, Vol. 7. New York: Peter Lang; 2012.
4. Moss P.A., Pullin D.C., Gee J.P., Haertel E.H., Young L.J., editors. Assessment, equity, and opportunity to learn. Cambridge, UK: Cambridge University Press; 2008.
5. Poe M., Elliot N. Evidence of fairness: Twenty-five years of research in Assessing Writing. Assessing Writing. 2019;42. doi: 10.1016/j.asw.2019.100418
6. Slomp D. Complexity, consequence, and frames: A quarter century of research in assessing writing. Assessing Writing. 2019;42. doi: 10.1016/j.asw.2019.100424
7. Slomp D. An integrated design and appraisal framework for ethical writing assessment. The Journal of Writing Assessment. 2016;9(1). Retrieved from http://journalofwritingassessment.org/article.php?article=91
8. White E. (Re)visiting twenty-five years of writing assessment. Assessing Writing. 2019;42. doi: 10.1016/j.asw.2019.100419
