In recent years, the assessment community has been challenged to craft a blueprint for the future of assessment. The notion of such a blueprint emerged several years ago as an outcome of research from the Gordon Commission, led by Professor Edmund Gordon. As the Covid‐19 pandemic sweeps through the United States, our ideas of normalcy are changing rapidly. This critical juncture in our history creates an opportunity to revisit the discussion surrounding the future of assessment. Every industry sector is being challenged to reimagine how it conducts business and responds to the needs of its stakeholders, both during and after this historic pandemic. Many sectors of society have quickly discovered that returning to what they have traditionally done will no longer work. Unsurprisingly, the education sector has not been spared during this time of transition.
The Pandemic's Impact on K‐12 and Higher Education
In response to the Covid‐19 pandemic, educators in K‐12 and higher education have had to pivot quickly to meet the changing needs of their students. In both sectors, educators have been challenged to deliver instructional content online and to teach and engage students in a virtual environment. Currently, educational policy‐makers and administrators in both K‐12 and higher education are trying to make critical decisions about when, and if, students will return to school in fall 2020. Educators have quickly realized that the way they must educate and engage students has changed significantly. These changes have had, and will continue to have, an impact on the need for and use of educational assessments. The educational assessment sector has already felt the initial effects of the pandemic. First, many K‐12 schools were required to cancel state‐mandated end‐of‐year standardized tests (Barnum & Belsha, 2020; Blume & Watanabe, 2020). For a majority of schools, this will be the first time in almost two decades that students will not participate in mandatory yearly testing programs. Similarly, in higher education, many students were unable to take college, graduate, and professional entrance examinations in the spring of 2020 (McCarthy, 2020; Randolph, 2020). Given these limitations, many colleges and universities have decided to lift entrance examination requirements for the incoming class of students (The Chronicle of Higher Education, 2020). This situation has caused many colleges and universities to examine the importance of including entrance examinations in future admissions models. For example, the University of California system announced in May 2020 that it would not require the ACT or SAT for admission until at least fall 2024 (University of California, 2020).
In sum, these trends suggest that both K‐12 and higher education may be moving away from high‐stakes standardized tests and assessments.
Educational Assessment
The current state of education presents a unique opportunity for the assessment community. As in many other industry sectors, the assessment community must respond to this period of uncertainty by pivoting to meet the needs of its various stakeholder groups (i.e., students and educators). Consequently, now more than ever is the time for the assessment community to reimagine the future of assessment. The initial conversations about the future of assessment, spearheaded by the Gordon Commission years ago, have become increasingly important at this historic moment. The Gordon Commission members addressed many issues that must be taken into consideration as we contemplate the future of assessment, including (a) shortening the feedback loop on assessments, (b) using new technologies to enhance assessment delivery and processes, (c) utilizing multiple measures in assessment, (d) providing useful instructional data to teachers, (e) acknowledging that intelligence is malleable, and (f) understanding the cultural diversity among different test‐taker groups. All of the issues highlighted by the Gordon Commission are worthwhile and will be critical to consider as we begin to rethink future assessments. However, for the purposes of this article, I will focus on the last of these: the need for future assessments to recognize and better understand cultural differences.
Cultural Differences
In our current system of assessment, the examination of cultural differences is seemingly an afterthought of the assessment development process. Although many may argue that group differences are addressed in the initial stages of assessment development, this model is not sufficient. In our reimagined assessment systems, I argue, cultural differences in testing and experiences must be thoroughly explored. One way to understand cultural differences is to draw on Hall's (1974, 1977, 1984) research on cultural context, which holds that cultures vary along a continuum from low to high context. In the United States, low‐context groups typically include males of European descent, whereas minority groups and females are regarded as high‐context groups (Hall, 1974). This line of research has identified eight major characteristics by which high‐ and low‐context cultural groups differ: (a) interaction, (b) association, (c) temporality, (d) gender, (e) territoriality, (f) learning, (g) information, and (h) academics (Arbuthnot, 2011a; Gallagher et al., 1999; Hall, 1974, 1977, 1984).
Research
To date, several research studies have shown that culture has a significant effect on test‐taking performance (Arbuthnot, 2011a, 2015a, 2015b; Arbuthnot & Lyons‐Thomas, 2016; Gilbert et al., 2008, 2009; Hood, 1998a, 1998b; Hood, Hopson, & Frierson, 2005; Hood, Gilbert, & Arbuthnot, 2006). In the United States, research has shown that cultural context shapes how test‐takers approach standardized tests (Arbuthnot, 2011a; Cohen & Ibarra, 2005). In the area of mathematics, research comparing the test performance patterns of low‐context (i.e., White male) and high‐context (i.e., Black and female) test‐takers in the United States found that certain items or item characteristics (e.g., a visual component) favored one cultural group over the other. Additionally, prior research has found cultural differences in the strategies that high‐ and low‐context groups employ when approaching mathematics test items (Arbuthnot, 2009, 2011a; Gallagher, 1998). Specifically, high‐context groups tend to be more conservative in their test‐taking strategies, which may be attributed to their need to perform well on behalf of their group, whereas low‐context groups tend to be less conservative in their strategy choices (Gallagher, 1998). Cohen and Ibarra (2005) explained that when items are differentially harder or easier for certain groups of test‐takers, there may be a conflict between the cultural context represented in an item and the cultural expectations of the examinee.
These research studies provide empirical evidence that culture has an impact on test taking. Consequently, I would argue that the first step in reimagining assessments is to recognize that we live in a diverse world of people with a myriad of experiences who come from different cultural backgrounds. The idea of a standard test‐taker is outdated and, quite frankly, does not acknowledge the uniqueness and diversity of the world we live in. Understanding how cultural differences affect testing is the first issue to address as the assessment community is challenged to forge into the future. Once we have taken into account the experiences and approaches of different cultural groups, we can begin to reengineer assessments for the future. However, if we treat cultural differences as an afterthought, we will continue to perpetuate and rebuild assessment systems that are not culturally responsive and are potentially unfair to certain groups of test‐takers. In sum, the Covid‐19 pandemic has provided the assessment community with a grand opportunity to make a significant change. The dialogue about the future of assessments that began several years ago is now reinvigorated. Leveraging the field's research on cultural differences will help lay the groundwork for a newly minted assessment system. It is entirely up to the assessment community whether we continue to create tests and assessments that rest on the idea of a standard test‐taker or instead take this rare opportunity to move the field forward in a more inclusive way.
Biography
Keena Arbuthnot, Louisiana State University, 130B David Boyd Hall, Baton Rouge, LA 70803; arbuthnot@lsu.edu.
References
- Arbuthnot, K. (2009). The effects of stereotype threat on standardized mathematics test performance and cognitive processing. Harvard Educational Review, 79(3), 448–473.
- Arbuthnot, K. (2011a). Filling in the blanks: Understanding standardized testing and the Black/White achievement gap. Charlotte, NC: Information Age Publishing.
- Arbuthnot, K. (2011b). The role of standardized testing in the college rankings system. Oxford, England: Oxford University.
- Arbuthnot, K. (2015a). Understanding culturally relevant assessment and issues of fairness. Annual Meeting of the National Association for Multicultural Education, New Orleans, LA.
- Arbuthnot, K. (2015b). The four tiers of fairness: Examining the complexities of test fairness and the assessment of diverse populations of test takers. Annual Meeting of the American Educational Research Association, Chicago, IL.
- Arbuthnot, K., & Lyons‐Thomas, J. (2016). The limits of test bias. Annual Meeting of the CREA (Culturally Responsive Evaluation and Assessment) Conference, Chicago, IL.
- Barnum, M., & Belsha, K. (2020). All states can cancel standardized tests this year, Trump and DeVos say. Chalkbeat Politics & Policy. https://www.chalkbeat.org/2020/3/20/21196085/all-states-can-cancel-standardized-tests-this-year-trump-and-devos-say (accessed March 20, 2020).
- Blume, H., & Watanabe, T. (2020). Standardized testing: Important changes to AP, SAT; K‐12 tests canceled. Los Angeles Times. https://www.latimes.com/california/story/2020-03-20/all-about-standardized-testing-delays-due-to-coronavirus (accessed March 20, 2020).
- Cohen, A. S., & Ibarra, R. A. (2005). Examining gender‐related differential item functioning using insights from psychometric and multicontext theory. In A. Gallagher & J. Kaufman (Eds.), Gender differences in mathematics: An integrative psychological approach (pp. 143–171). New York: Guilford Press.
- Gallagher, A. (1998). Gender and antecedents of performance in mathematics testing. Teachers College Record, 100(2), 297–314.
- Gallagher, A., Morely, A., Levin, J., Garibaldi, A. M., Ibarra, R. A., & Cohen, A. S. (1999). New directions in assessment for higher education: Fairness, access, multiculturalism, and equity. Princeton, NJ: Educational Testing Service.
- Gilbert, J. E., Arbuthnot, K., Hood, S., Grant, M. M., West, M. L., McMillian, Y., Cross, E. V., Williams, P., & Eugene, W. (2008). Teaching algebra using culturally relevant virtual instructors. The International Journal of Virtual Reality, 7(1), 21–30.
- Gilbert, J. E., Eugene, W., Swanier, C., Arbuthnot, K., Hood, S., Grant, M. M., & West, M. L. (2009). Culturally relevant design practices: A case study for designing interactive algebra lessons for urban youth. Journal of Educational Technology, 5(3), 54–60.
- Hall, E. T. (1974). Handbook for proxemic research. Washington, DC: Society for the Anthropology of Visual Communication.
- Hall, E. T. (1977). Beyond culture. Garden City, NY: Anchor Books.
- Hall, E. T. (1984). The dance of life: The other dimension of time. Garden City, NY: Anchor Press.
- Hood, S. (1998a). Culturally responsive performance‐based assessment: Conceptual and psychometric considerations. The Journal of Negro Education, 67(3), 187–196.
- Hood, S. (1998b). Introduction and overview: Assessment in the context of culture and pedagogy: A collaborative effort, a meaningful goal. The Journal of Negro Education, 67(3), 184–186.
- Hood, S., Gilbert, J., & Arbuthnot, K. (2006). Alternative assessment: Using a culturally relevant, computer‐based interactive tool (AADMLSS) to assess students' eighth grade algebra knowledge. Annual Meeting of the American Educational Research Association, San Francisco, CA.
- Hood, S., Hopson, R. K., & Frierson, H. T. (2005). The role of culture and cultural context: A mandate for inclusion, the discovery of truth and understanding in evaluative theory and practice. Greenwich, CT: Information Age Publishing.
- McCarthy, K. (2020). How coronavirus is impacting SAT and other standardized testing: The College Board cancelled its June SAT test. ABC News. https://abcnews.go.com/US/coronavirus-impacting-sat-standardized-testing/story?id=70185472 (accessed April 17, 2020).
- Randolph, K. K. (2020). Coronavirus brings nationwide standardized testing to a halt: Everything is cancelled – including standardized testing. Fastweb. https://www.fastweb.com/student-news/articles/coronavirus-brings-nationwide-standardized-testing-to-a-halt (accessed March 24, 2020).
- University of California. (2020). University of California Board of Regents unanimously approved changes to standardized testing requirement for undergraduates. University of California Press Room. https://www.universityofcalifornia.edu/press-room/university-california-board-regents-approves-changes-standardized-testing-requirement (accessed May 21, 2020).
