1. Student-centered assessment in a time of new challenges
Writing assessment evolves not only because student writing evolves, but because we always have more to learn about advancing opportunity. The year 2020 is powerful evidence: COVID-19 demanded schools and instructors adapt, and ongoing calls for justice highlight that we must do more, on- and off-line, to create equitable opportunities to learn and perform learning. The hope that school campuses mitigate living and learning disparities cannot be counted on when schools close; many instructors need more support for themselves and their students. In sum: ours is a time of new challenges, new catalysts, new opportunities. And as ever, but especially amid uncertainty and disparate learning resources, writing assessment is a crucial site for upholding commitments to diversity, equity, inclusion, fairness, and justice.
In particular, this year has underscored for me that while many of us are trained to consider student-centered pedagogy, we may be less well-versed in student-centered assessment. We may, in other words, be more trained to consider what assessments tell assessors and instructors, and less trained to consider what assessments tell and do to students. Yet if we think about the meaning of writing assessment—the various research practices involved in evaluating student writing, including task design, feedback, results, inferences, uses, and consequences—all of these aspects include numerous opportunities for considering student-centered concerns.
The Assessing Writing Tools & Tech forum rests on the idea that any assessment tool or technology constitutes writing assessment and writing assessed, and so we need as much information as possible as we choose approaches. The forum creates a space for concise reviews of tools and technologies for ASW’s wide-ranging, international audience of writing practitioners, administrators, and researchers. It aims to keep assessment tools and technologies in conversation with assessment research, through descriptive reviews that delineate assumptions, possibilities, limitations, and future directions.
In this time of new challenges and opportunities, I am very glad to have this year’s Tools & Tech forum address one of the most pointed student-centered aspects of writing assessment—formative feedback—and specifically online formative feedback. We benefit from two expert reviewers, Angela Laflen of California State University, Sacramento and Weronika Fernando of Queen Mary University of London, whose research focused on online writing assessment long before it became as widespread as it is now. Dr. Laflen brings to her review her expertise on the use of learning management systems in offering feedback on student writing. With Michelle Smith in 2017, Laflen challenged the emphasis on face-to-face contexts in best-practice suggestions for responding to student writing, in a call that is now particularly significant: “changes in writing instruction resulting from new writing and learning technologies warrant renewed attention to the issue of response” (Laflen & Smith, 2017, p. 40). Dr. Fernando’s expertise includes online learning platforms and literacy assessments, and her research offers a welcome message for our time: online learning platforms have clear educational value, including the affordances of multimodal resources in creating innovative assessment practices (Fernando, 2018).
Both reviews help us think about supporting students with formative feedback during the writing process. Each review includes key details: general information about the online writing feedback tool, its purpose and related writing constructs, its limitations and connections to other research and tools, and its possible future developments. These snapshots aim to help readers more easily look across the two reviews; I also hope that they might provide key details for administrators or other colleagues who may be involved with assessment decisions.
2. Evolving writing assessment tools and tech
A brief look at the evolving nature of writing assessment tools and tech helps offer context for these reviews and underscores the earlier point about ongoing learning. Assessing Writing itself provides evidence and context. When the journal began over 25 years ago, predominant themes in writing assessment research included assessor-centered considerations such as rater judgments and the relationship between textual features and writing quality. They also included relatively new assessments that helped shift attention toward student-centered considerations, such as writing portfolios, which expanded notions of validity to include impact on students (Huot, 1994). In subsequent decades, writing assessment research expanded to foreground social and cognitive dimensions of written language and related design issues, from task and rubric design to scoring processes and their accompanying interpretation and use arguments as they impact student learning (Behizadeh & Engelhard, 2011; Kane, 2013).
More recent research has worked to draw needed, systematic attention to other factors that impact students. These include various domains related to the construct of writing assessed (Aull, 2020; MacArthur & Graham, 2008; White, Elliot, & Peckham, 2015), the constitutive force of assignment design (Aull, 2017, 2020), the overlap between design, interpretation, and consequence of writing assessment (Kane, 2016a, 2016b; Poe, 2014), and how opportunity and identity are configured (and disfigured) by writing assessments (Inoue & Poe, 2012). Within such considerations, even single concepts evolve as we learn. Poe and Elliot’s (2019) study of the changing perceptions of the meaning and importance of fairness in assessment shows varied stances and methodological challenges that remain with us today. Most recently, Slomp and Elliot (2020) call for integrated theory, action, and appraisal in support of principled ways of thinking about assessment components, action mechanisms, and consequences in terms of curricular fairness and justice. Their call highlights writing assessment as a dialogic process of student-instructor and student-student interaction, in which students infer ideas about writing and about themselves as writers, and in which students face particular choices and consequences.
Formative feedback tools are an important part of this dialogic process, and that brings us to the focus of the two reviews in this forum. Early feedback tools such as ETS’s Criterion® and MyReviewers were designed largely around assessor-centered goals such as providing scores and retrieving data. Newer tools work within the framework of a digital ecology, including Web-based peer review platforms embedded within, and informed by, fluid writing classrooms. These next-generation systems can help support more context-specific, student-centered considerations. Increased focus on formative instructor feedback—the use of feedback to support process and development rather than to explain grades—and on the giver’s gain—the clear value of peer review for the reviewer—helps highlight the collaborative nature of a writing process in which students’ choices and experiences are central.
The particular focus of these reviews on online tools is valuable for the many partially or fully virtual writing courses today. Within online writing instruction, formative feedback may be even more crucial as students navigate their writing development more often on their own. Formative feedback is essential for student writing development, particularly as students draft and revise (Anderson, Anson, Gonyea, & Paine, 2015; Ferris, 2014). It also requires scaffolding, so that students feel capable and supported as they receive and respond to feedback, particularly when they feel they themselves are still developing as writers (Hart-Davidson & Meeks, 2020).
3. Two online feedback tools: Moodle quizzes and Eli Review
As I hope is always true in this forum, the two reviews draw attention to the epistemic nature of assessment choices: the fact that any assessment tool constitutes a particular view of writing, and of a student’s writing, for assessors and students alike. The reviews accordingly draw attention to the possibilities and logistics of the tools as well as their limitations and challenges. These details help us remain aware, intentional, and critical about the constructs of writing we emphasize and leave out, and to what end, because ethical, valid assessments are characterized by transparency about goals and expectations and by consistency across design, interpretation, and consequences (Kane, 2016a, 2016b; Poe, 2014).
Fernando reviews Moodle quizzes in light of their possible use in instructors’ formative assessment of academic writing. In particular, Fernando outlines the value of using Moodle quizzes as a way to work toward several goals of formative assessment: facilitating learning, promoting sustained engagement, and delivering ongoing feedback in order to support students. Strengths of this tool described by Fernando include, for instance, its scaffolded process steps and its ability to support multimodal submissions as well as multiple modes of feedback. These options support multiple modes of engagement, in keeping with accessible, inclusive practices of online writing instruction (CCCC, 2013). A useful aspect of Fernando’s review is its practical detail regarding usability and options, including clarifying aspects of the Moodle quiz that might feel counterintuitive within a ‘quiz’ structure but are applicable to providing feedback to student writers.
Fernando also describes possibilities for assessing assessments and inviting students to reflect on their self-efficacy through the Moodle Quiz Gradebook (and the question behavior settings therein), learning analytics features that can be used regardless of whether students’ responses are graded. At the same time, Fernando identifies limitations. These include some non-intuitive features and, in particular, the fact that Moodle quizzes fall short of accommodating peer review, an integral part of formative writing assessment in most classrooms today. Fernando thus underscores that Moodle quizzes can provide one set of tools for supporting students’ academic needs and aspirations, among the other tools that instructors use.
As a valuable complement to Fernando’s review, Laflen reviews Eli Review, a tool specifically designed to support peer feedback activities. Laflen draws valuable attention to the fact that we need more knowledge of the student-centered aspects of writing assessment: she writes, “we do not yet fully understand how these different feedback options impact the nature of the feedback provided or student…perceptions of it.” But, she rightly notes, every feedback technology is built on assumptions about writing that function within a given course and assessment task. Among the possibilities Laflen underscores is that Eli Review facilitates the frequent, small writing tasks that often characterize and support online writing instruction (CCCC, 2013).
Laflen specifically addresses the many benefits of peer review supported by Eli Review. These reinforce the notion of the “giver’s gain,” including evidence of the particular gain for writers who may not score highly in their own summative assessments. As Laflen shows, Eli Review supports important possibilities like archived feedback and analytics of both qualitative and quantitative data that can be used in assessment as research and in assessments of assessments. Her review underscores that students’ own choices and reactions are central in their use of Eli Review, in which they can make plans, rate peer feedback, and receive instructor feedback on their feedback. Laflen’s review also identifies limitations of Eli Review, including cost and inflexibility, which leads Laflen to call for more research on how specific platforms and approaches affect student writers and their instructors.
4. Supporting decisions regarding the assessment of student writing
Fernando and Laflen’s reviews focus on online tools for supporting writing communities in their interactive process—communities of instructors and peers who are forming, revising, and discussing their written ideas. With their help, may we make choices that meet this moment not (only) in terms of what institutions or instructors want to know vis-à-vis their expectations for writing, but in terms of student writers’ goals, lives, and learning. In this new time, in other words, may these choices make us newly able to support curricular fairness and justice.
I close with thanks to these two scholars and you, readers, for your dedication to examining writing assessment tools and tech. And I extend my best wishes to you and yours in this uncertain time.
References
- Anderson, P., Anson, C. M., Gonyea, R. M., & Paine, C. (2015). The contributions of writing to learning and development: Results from a large-scale multi-institutional study. Research in the Teaching of English, 50(2), 199.
- Aull, L. L. (2017). Corpus analysis of argumentative versus explanatory discourse in writing task genres. Journal of Writing Analytics, 1(1), 1–47.
- Aull, L. L. (2020). How students write: A linguistic analysis. New York: Modern Language Association.
- Behizadeh, N., & Engelhard, G. (2011). Historical view of the influences of measurement and writing theories on the practice of writing assessment in the United States. Assessing Writing, 16(3), 189–211.
- CCCC Committee for Effective Practices in Online Writing Instruction. (2013). A position statement of principles and example effective practices for online writing instruction (OWI). Retrieved from https://cccc.ncte.org/cccc/resources/positions/owiprinciples
- Fernando, W. (2018). Show me your true colours: Scaffolding formative academic literacy assessment through an online learning platform. Assessing Writing, 36, 63–76. https://doi.org/10.1016/j.asw.2018.03.005
- Ferris, D. R. (2014). Responding to student writing: Teachers’ philosophies and practices. Assessing Writing, 19, 6–23.
- Hart-Davidson, W., & Meeks, M. G. (2020). Feedback analytics for peer learning: Indicators of writing improvement in digital environments. In D. Kelly-Riley & N. Elliot (Eds.), Improving outcomes: Disciplinary writing, local assessment, and the aim of fairness. Modern Language Association. (forthcoming)
- Huot, B. (1994). Editorial: An introduction to assessing writing. Assessing Writing, 1(1), 1–9.
- Inoue, A. B., & Poe, M. (Eds.). (2012). Race and writing assessment. New York: Peter Lang.
- Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1–73.
- Kane, M. T. (2016a). Explicating validity. Assessment in Education: Principles, Policy & Practice, 23(2), 198–211.
- Kane, M. T. (2016b). Validation strategies: Delineating and validating proposed interpretations and uses of test scores. In Handbook of test development (pp. 64–80).
- Laflen, A., & Smith, M. (2017). Responding to student writing online: Tracking student interactions with instructor feedback in a learning management system. Assessing Writing, 31, 39–52. https://doi.org/10.1016/j.asw.2016.07.003
- MacArthur, C. A., & Graham, S. (2008). Writing research from a cognitive perspective. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research. Guilford Press.
- Poe, M. (2014). The consequences of writing assessment. Research in the Teaching of English, 48(3), 271–275.
- Poe, M., & Elliot, N. (2019). Evidence of fairness: Twenty-five years of research in Assessing Writing. Assessing Writing, 42.
- Slomp, D., & Elliot, N. (2020). What’s your theory of action? Making good trouble with literacy assessment. Journal of Adolescent & Adult Literacy. (forthcoming)
- White, E. M., Elliot, N., & Peckham, I. (2015). Very like a whale: The assessment of writing programs. University Press of Colorado.
