Abstract
Students perceive crossword puzzles as enjoyable, and beyond these perceptions, crossword puzzles have been shown to improve knowledge retention. However, crossword puzzles increased exam scores for some students but not others. Recommendations have been made for students to create puzzles for their classmates to complete, the rationale being that writing meaningful clues encourages students to research and understand the material. While students enjoy creating their own crossword puzzles, the association between creating crossword puzzles and knowledge retention is unknown. The purpose of this project was to determine whether creating crossword puzzles and completing peers' crossword puzzles were associated with improved knowledge retention, as indicated by higher quiz scores. Students in a research course at two institutions across three semesters had the option each week to upload a blank puzzle they created before completing each other's puzzles and taking a quiz. Quiz scores were compared between students who did and did not create their own puzzles and complete their peers' puzzles. Results varied by institution and program, and even within the same program at the same institution across semesters. These results highlight the importance of moving beyond student perceptions toward assessing knowledge retention while taking institution, program, and semester into consideration.
Keywords: crossword puzzles, knowledge retention, active learning, educational activities, educational measurement
Active learning is defined as “anything that involves students doing things and thinking about the things they are doing” (Bonwell & Eison, 1991, p. 19). In contrast to traditional teacher-centered learning that often positions students as passive recipients of instructional content (Gregory, 2002), active learning requires students to become active participants who must make decisions and solve problems (Franklin, Peat, & Lewis, 2003).
Crossword puzzles are an example of active learning. Completing a crossword puzzle requires remembering factual knowledge, which supports foundational knowledge, the basic understanding that is necessary for other kinds of learning (Fink, 2013). Thus, crossword puzzles would meet a course objective of demonstrating foundational knowledge, which is at the remembering level of Anderson's revision of Bloom's Taxonomy of Educational Objectives (Anderson & Krathwohl, 2001; Krathwohl, 2002).
Crossword puzzles improve knowledge retention (Abuelo et al., 2016; Gaikwad & Tankhiwale, 2012; Murphy et al., 2016; Nirmal et al., 2020; Orawiwatnakul, 2013; Patrick et al., 2018; Singh Matreja et al., 2021). However, crossword puzzles increased exam scores for some students but not others (Davis, Shepherd, & Zwiefelhofer, 2009).
Recommendations have been made for students to create puzzles for their classmates to complete with the rationale being that students are encouraged to research and understand the material in order to write meaningful clues for the puzzle (Davis et al., 2009). While students enjoy creating their own crossword puzzles (Coticone, 2013), the association between students creating crossword puzzles and knowledge retention is unknown.
Crossword puzzles can be cumulative or noncumulative. A cumulative assignment requires students to draw on information they learned earlier in the semester (Lang, 2016). For example, a weekly cumulative crossword puzzle assesses course content up to that point, as opposed to a noncumulative weekly crossword puzzle that only assesses content for that week. Students who have cumulative assessments retain more information than students who have noncumulative assessments (Beagley & Capaldi, 2016; Khanna, Brack, & Finken, 2013; Lawrence, 2013). The association of creating and completing cumulative crossword puzzles with knowledge retention is unknown.
Creating and completing weekly cumulative crossword puzzles may be one way to address the problem that some students struggle with a semester-long scaffolded project (Vandiver & Walsh, 2010) because, by the end of the semester, they have forgotten what they learned early in the semester; weekly cumulative assignments keep that earlier material in use throughout the term. The purpose of this quality improvement project was to determine whether creating and completing weekly cumulative crossword puzzles were associated with improved knowledge retention, as indicated by higher scores on corresponding weekly cumulative quizzes.
Method
Sample
Participants comprised students from two state institutions across three semesters. The first semester included students in a Doctor of Nursing Practice (DNP) or Doctor of Philosophy (PhD) program at a university in the Midwest United States. Students were enrolled in Nursing Research, a required course in the DNP (optional in the PhD) program that fulfilled the American Association of Colleges of Nursing (2006) Essentials of Doctoral Education for Advanced Nursing Practice: Clinical Scholarship and Analytical Methods for Evidence-Based Practice. This was a 16-week hybrid course where students met three hours face-to-face once a month for four months, with the rest of the course online. Prior to the start of the course, the university's education Institutional Review Board (IRB) reviewed and exempted this quality improvement project.
The second and third semesters included students enrolled in a Master of Science in Nursing (MSN) program at a university medical center in the Southeast United States. Students were enrolled in Research Design and Methods for Advanced Nursing Practice, a required course that fulfilled the American Association of Colleges of Nursing (2011) Essentials of Master’s Education in Nursing: Essential IV: Translating and Integrating Scholarship into Practice. This was a 16-week online course. Prior to the start of the course, it was determined by the IRB that this quality improvement project was not considered research and therefore would not be reviewed by the IRB according to institutional guidelines. There were no conflicts of interest in either course.
Context
Both courses examined quantitative and qualitative research methods; the interrelationships among theory, research and practice; the research process; critical evaluation of research findings; and applying ethical criteria for the protection of human subjects in research. The prerequisites were graduate standing or consent of the instructor. The courses were taught by a PhD-prepared nurse with an active program of research, who designed the current quality improvement project as a participant in the Wisconsin Teaching Fellows and Scholars (WTFS) Program. The WTFS program offered University of Wisconsin faculty and teaching academic staff a unique opportunity to collaborate with other teachers from various disciplines across the University of Wisconsin System. In addition to discussing the pedagogical literature, participants were guided through the process of completing a Scholarship of Teaching and Learning (SoTL) project with input from fellow participants and the program co-directors.
Intervention
Students were assigned two chapters most weeks from the textbook Nursing Research: Generating and Assessing Evidence for Nursing Practice (Polit & Beck, 2012). Online quizzes that corresponded with the chapter readings were assigned; all questions came from the textbook's test bank, which was available at no cost. Completing weekly quizzes was associated with improved knowledge retention: higher scores on the weekly cumulative quizzes were associated with higher scores on the cumulative final exam (Torres, 2019), providing evidence of predictive validity, a form of criterion validity (Polit & Beck, 2012). Evidence of reliability for the quizzes was found with a Cronbach's alpha of .69.
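For readers who want to reproduce this kind of reliability check, the following is a minimal sketch, not the project's actual analysis code, of how a Cronbach's alpha such as the .69 reported above can be computed from a students-by-items score matrix; the data shown are hypothetical.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (students x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = scores.shape[1]                          # number of items (quiz questions)
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item across students
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of students' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 0/1 responses from 5 students on 4 quiz questions
demo = np.array([[1, 1, 1, 0],
                 [1, 0, 1, 1],
                 [0, 0, 1, 0],
                 [1, 1, 1, 1],
                 [0, 1, 0, 0]])
print(round(cronbach_alpha(demo), 2))
```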
For extra credit and to assist with preparing for the weekly cumulative quizzes, weekly cumulative crossword puzzles that corresponded with the chapter readings were due four days prior to the quizzes. Uploading a blank puzzle and corresponding answer key to the learning management site four days before the quiz due date allowed students sufficient time to complete each other's puzzles prior to taking the weekly quiz. Students were provided a free link to complete the puzzles: https://crosswordlabs.com/. Every puzzle was to include at least one term from each chapter, with the length of the puzzles increasing each week from at least two terms on the first puzzle to at least twenty-six terms on the final puzzle at the end of the semester. Students scored 100% if the puzzle was submitted on time and all answers were correct, and 0% if it was not submitted on time or any answer was incorrect. Two PhD-prepared faculty members checked each submission for adherence to the guidelines and entered the puzzle grades into a spreadsheet; one of them checked the completeness and accuracy of the data. All of the extra-credit crossword puzzles combined were worth an additional 1% of the final grade in the course. The crossword puzzle assignment was created in response to students having difficulty understanding the content in other assignments at the understanding, applying, analyzing, evaluating, and creating levels.
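As an illustration only, the all-or-nothing extra-credit rule described above can be expressed as a small function; the function name, parameters, and dates below are hypothetical and are not part of the course's grading workflow.

```python
from datetime import datetime
from typing import Optional

def puzzle_grade(submitted_at: Optional[datetime], due: datetime,
                 all_answers_correct: bool, meets_term_minimum: bool) -> float:
    """All-or-nothing extra credit: 100% only if the puzzle was uploaded by the
    due date, met the minimum term count, and every answer was correct; else 0%."""
    on_time = submitted_at is not None and submitted_at <= due
    return 100.0 if (on_time and all_answers_correct and meets_term_minimum) else 0.0

# Example: a correct, sufficiently long puzzle uploaded a day before the deadline
print(puzzle_grade(datetime(2024, 9, 9), datetime(2024, 9, 10), True, True))  # 100.0
```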
Statistical Analysis
Reliability was assessed with the parallel-forms procedure (Waltz et al., 2005) to determine whether uploading a blank puzzle and corresponding answer key to the learning management site four days prior to the quiz due date was comparable to completing each other's puzzles prior to taking the weekly quiz. Po is the proportion of observed agreements in classifications across both puzzles (100% if submitted on time and correctly, and 0% if not submitted, not submitted on time, or not submitted correctly). K (kappa) is the proportion of persons consistently classified in the same category on both puzzles beyond that expected by chance. A Kruskal-Wallis test was performed for the entire sample comparing weekly quiz scores between those who did versus did not complete the corresponding weekly puzzle correctly to determine the predictive validity of the puzzle scores on quiz scores, that is, whether the observed quiz outcomes were associated with the crossword puzzle intervention. The same analyses were performed stratified by semester using the Mann-Whitney U test with a Bonferroni correction (p < .0167) to account for multiple analyses and reduce the risk of Type I error, since context matters (Ogrinc et al., 2019), including internal elements (modality: hybrid vs. 100% online), external elements (institution), and characteristics of the individuals (master's vs. doctoral students).
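To make these analytic steps concrete, the following is a minimal sketch in Python (NumPy/SciPy) of how Po, kappa, and the Kruskal-Wallis and Mann-Whitney U comparisons can be computed; the pass/fail indicators and quiz scores below are hypothetical illustrations, not the project's data.

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical pass/fail classifications (1 = submitted on time and correct, 0 = otherwise)
created   = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])  # created own puzzle
completed = np.array([1, 0, 0, 1, 1, 1, 1, 1, 0, 0])  # completed a peer's puzzle

# Parallel-forms agreement: Po (observed agreement) and K (chance-corrected kappa)
po = np.mean(created == completed)
pe = (created.mean() * completed.mean()
      + (1 - created.mean()) * (1 - completed.mean()))  # agreement expected by chance
kappa = (po - pe) / (1 - pe)

# Hypothetical quiz scores compared between puzzle completers and non-completers
quiz = np.array([95, 90, 72, 88, 98, 85, 92, 94, 80, 70])
completers, non_completers = quiz[completed == 1], quiz[completed == 0]
_, p_kw = kruskal(completers, non_completers)       # entire sample (Kruskal-Wallis)
_, p_mw = mannwhitneyu(completers, non_completers)  # one semester (Mann-Whitney U)

bonferroni_alpha = 0.05 / 3  # three semesters -> significance threshold p < .0167
print(f"Po={po:.2f}, kappa={kappa:.2f}, p_KW={p_kw:.3f}, p_MW={p_mw:.3f}, alpha={bonferroni_alpha:.4f}")
```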
Results
The first semester comprised 14 students in a DNP program and 2 in a PhD program, with all students completing the course. The second semester was a year after the first semester and comprised 20 students in an MSN program at a separate institution from the first semester, with all students completing the course. The third semester immediately followed the second semester and comprised 53 students in the same MSN program as the previous semester, with 51 students completing the course. The total sample size was 87. The parallel-forms reliability analysis found that Po ranged from 0.75 to 1.00, while K ranged from −0.56 to −0.00 (see Table 1).
Table 1.
Parallel Form Reliability of Creating versus Completing Crossword Puzzles
| Semester | Statistic | Puzzle 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1st | Po | 1.00 | 0.94 | 0.88 | 0.88 | 0.94 | 0.88 | 0.94 | 0.88 | 1.00 | 1.00 | 1.00 | 0.94 | 1.00 | 0.94 |
| | K | −0.56 | −0.53 | −0.50 | −0.36 | −0.34 | −0.36 | −0.25 | −0.36 | −0.02 | −0.06 | −0.02 | −0.16 | −0.06 | −0.16 |
| 2nd | Po | 0.95 | 1.00 | 1.00 | 0.95 | 0.95 | 0.75 | 0.85 | 0.90 | 0.75 | 0.90 | 0.85 | 0.80 | 0.90 | 0.80 |
| | K | −0.11 | −0.49 | −0.81 | −0.47 | −0.47 | −0.35 | −0.43 | −0.34 | −0.31 | −0.34 | −0.25 | −0.20 | −0.18 | −0.17 |
| 3rd | Po | 0.88 | 0.94 | 0.96 | 0.86 | 0.90 | 0.90 | 0.94 | 0.88 | 0.96 | 0.98 | 1.00 | 0.96 | 0.90 | 0.94 |
| | K | −0.23 | −0.24 | −0.24 | −0.14 | −0.11 | −0.09 | −0.06 | −0.13 | −0.05 | −0.03 | −0.00 | −0.04 | −0.10 | −0.06 |

Note. Po = proportion of observed agreement; K = kappa (agreement beyond chance). Columns 1–14 are the weekly puzzles.
As can be seen in Table 2, when all the semesters were combined, results from the Kruskal-Wallis test found that those who completed puzzles 5, 6, 7, and 13 correctly had statistically significantly higher scores on the corresponding quizzes compared to those who did not complete those puzzles. The same analyses were performed stratified by semester using the Mann-Whitney U test. In the first semester (n=16), those who completed puzzle 13 correctly had significantly higher scores on the corresponding quiz compared to those who did not (97.6 vs. 90.0, p=.022), although this was not statistically significant with a Bonferroni correction. In the second semester (n=20), those who completed puzzles 6 (93.3 vs. 80.3, p=.039) and 11 (92.5 vs. 82.0, p=.002) had significantly higher scores on the corresponding quizzes, although only week 11 was statistically significant after the Bonferroni correction. In the third semester (n=53), those who completed puzzle 7 had significantly higher scores on the corresponding quiz (92.9 vs. 89.0, p=.019), although this was not statistically significant with a Bonferroni correction.
Table 2.
Mean Quiz Scores based on Puzzle Completion
| Quiz | Total: Completed | Total: Did not complete | Total: p | 1st semester: Completed | 1st semester: Did not complete | 1st semester: p | 2nd semester: Completed | 2nd semester: Did not complete | 2nd semester: p | 3rd semester: Completed | 3rd semester: Did not complete | 3rd semester: p |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Quiz 1 | 93.1 (51) | 91.7 (36) | .77 | 96.4 (14) | 100 (2) | .93 | 91.7 (6) | 89.3 (14) | .90 | 91.9 (31) | 92.5 (20) | .78 |
| Quiz 2 | 98.5 (65) | 97.7 (22) | .64 | 100 (13) | 100 (3) | 1.00 | 97.1 (17) | 83.3 (3) | .15 | 98.6 (35) | 100 (16) | .33 |
| Quiz 3 | 93.4 (66) | 97.6 (21) | .06 | 91.7 (12) | 95.8 (4) | .60 | 95.5 (19) | 100 (1) | .80 | 92.8 (35) | 97.9 (16) | .07 |
| Quiz 4 | 95.7 (49) | 96.4 (38) | .99 | 95.5 (11) | 95.0 (5) | .91 | 92.4 (16) | 90.8 (4) | .82 | 98.3 (22) | 97.4 (29) | .52 |
| Quiz 5 | 88.1 (53) | 82.6 (34) | .016 | 94.2 (12) | 85.0 (4) | .08 | 86.0 (15) | 82.0 (5) | .45 | 86.5 (26) | 82.4 (25) | .16 |
| Quiz 6 | 94.1 (46) | 89.9 (41) | .04 | 97.0 (11) | 90.0 (5) | .12 | 93.3 (12) | 80.3 (8) | .04 | 93.2 (23) | 92.6 (28) | .83 |
| Quiz 7 | 92.7 (50) | 88.7 (37) | .005 | 94.7 (11) | 95.0 (5) | .74 | 91.0 (14) | 82.2 (6) | .13 | 92.9 (25) | 89.0 (26) | .019 |
| Quiz 8 | 93.7 (44) | 91.7 (43) | .24 | 94.9 (11) | 90.0 (5) | .18 | 93.0 (14) | 82.3 (6) | .08 | 93.4 (19) | 93.8 (32) | .90 |
| Quiz 9 | 93.7 (46) | 91.8 (41) | .26 | 94.1 (9) | 95.8 (7) | .40 | 92.3 (11) | 88.7 (9) | .30 | 94.1 (26) | 91.8 (25) | .25 |
| Quiz 10 | 86.8 (51) | 87.4 (36) | .60 | 85.3 (10) | 87.7 (6) | .79 | 84.9 (14) | 77.0 (6) | .35 | 88.3 (27) | 89.9 (24) | .60 |
| Quiz 11 | 91.9 (44) | 89.9 (43) | .40 | 87.8 (7) | 91.0 (9) | .41 | 92.5 (11) | 82.0 (9) | .002 | 92.9 (26) | 92.4 (25) | .72 |
| Quiz 12 | 91.6 (44) | 90.5 (43) | .87 | 89.6 (10) | 92.4 (6) | .49 | 88.1 (10) | 84.7 (10) | .35 | 94.0 (24) | 92.3 (27) | .91 |
| Quiz 13 | 94.5 (44) | 89.9 (43) | .019 | 97.6 (10) | 90.0 (6) | .02 | 91.3 (12) | 88.0 (8) | .31 | 94.9 (22) | 90.3 (29) | .22 |
| Quiz 14 | 95.9 (43) | 91.2 (44) | .20 | 97.1 (11) | 95.7 (5) | .44 | 96.0 (9) | 92.0 (11) | .20 | 95.3 (23) | 90.1 (28) | .79 |

Note. Cell entries are mean quiz scores with n in parentheses. Total n = 87; 1st semester n = 16; 2nd semester n = 20; 3rd semester n = 51. "Completed" and "Did not complete" refer to the corresponding crossword puzzle assignment; p values compare quiz scores between the two groups.
No changes were made in the puzzle assignment between semesters due to small sample sizes. There was no missing data. An unintended consequence was a dramatic increase in faculty workload when the number of students increased from 20 in the fall to 53 in the spring, without an increase in the number of faculty in the course.
Discussion
Creating cumulative crossword puzzles and completing peers' cumulative puzzles were associated with higher scores on most corresponding quizzes. These results are consistent with previous literature that found students who have cumulative assessments retain more information than students who have noncumulative assessments (Beagley & Capaldi, 2016; Khanna et al., 2013; Lawrence, 2013). A strength of the current quality improvement project is that it incorporated recommendations for students to create puzzles for their classmates to complete, encouraging students to research and understand the material in order to write meaningful clues for the puzzle (Davis et al., 2009).
The current results, in which crossword puzzles were associated with higher scores on most corresponding quizzes, coincide with previous studies that found crossword puzzles improve knowledge retention (Abuelo et al., 2016; Murphy et al., 2016; Nirmal et al., 2020; Orawiwatnakul, 2013; Patrick et al., 2018; Shawahna & Jaber, 2020; Singh Matreja et al., 2021; Zamani et al., 2021). However, there was not a statistically significant association between most puzzles and their corresponding quizzes. Similar results have been found where completing crossword puzzles was not always associated with improved knowledge retention (Sumanasekera et al., 2020), and student question generation was not associated with statistically significant improvement in achievement (Aflalo, 2021). In the current analysis, regardless of semester, most of the quiz scores were in the 80s and 90s, resulting in a ceiling effect, or a reduction in the amount of upward change that is detectable (Polit & Beck, 2012). Our recommendation is to incorporate crossword puzzles only for quizzes with habitually low scores, such as below 80%, for at least two semesters in the same course at the same institution. The current results move beyond students' subjective perceptions of crossword puzzles and towards objectively assessing knowledge retention.
Reliability of the Crossword Puzzle Assignment
Results from Po indicated a good level of agreement in classifying learners: 100% when both puzzles were submitted on time and correct, versus 0% when a puzzle was not submitted, not submitted on time, or not correct. However, results from K indicated that uploading a blank puzzle and corresponding answer key to the learning management site was not consistent with completing each other's puzzles prior to taking the weekly quiz. There were several reasons for this discrepancy. Some students created a puzzle but did not complete their peer's puzzle, even though both were required to earn full credit. Similarly, some completed their peer's puzzle but did not create their own. Some did not upload a corresponding answer key. Some uploaded a puzzle from a previous week by mistake. Some did not upload their puzzle by the due date. Some provided a nonfunctional link to their puzzle. Thus, results of the criterion-referenced parallel-forms procedure indicate agreement in correctly completing both puzzles. However, not surprisingly, creating one's own puzzle is not consistent with completing peers' puzzles. The effectiveness of creating versus completing crossword puzzles should be examined further.
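To illustrate with hypothetical numbers how a high Po can coexist with a near-zero or negative K, consider a class of 16 students in which 14 both create and correctly complete puzzles, 1 only creates, and 1 only completes. Then Po = 14/16 = .875, while the agreement expected by chance is Pe = (15/16)(15/16) + (1/16)(1/16) ≈ .883, so K = (.875 − .883)/(1 − .883) ≈ −.07. When nearly everyone falls into the same category, chance agreement is so high that even a few disagreements drive kappa to zero or below, which is consistent with the pattern in Table 1.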
Strengths and Limitations of the Crossword Puzzle Assignment
A major strength of the current work is its addition to a body of literature objectively assessing knowledge retention resulting from crossword puzzles in a variety of disciplines, such as English (Orawiwatnakul, 2013), sociology (Davis et al., 2009), and the health professions (Abdulmajed et al., 2015), including pharmacology (Patrick et al., 2018), nursing (Shawahna & Jaber, 2020), and dentistry (Nirmal et al., 2020). To our knowledge, the current project is the first to examine crossword puzzles in graduate education and in a hybrid course.
There are several limitations. The results are limited to the students in the current quality improvement project. Efforts were made to minimize this limitation by including students across three semesters, three programs (MSN, DNP, and PhD), two instructional modalities (online and hybrid), and two institutions. In addition, this quality improvement project had a quasi-experimental design, as there was an intervention without randomization, thereby threatening internal validity (Polit & Beck, 2012). Efforts were made to minimize this limitation by including a control group, i.e., those who did not complete the puzzle assignment. Finally, the sample size was small, with deviations from normality necessitating nonparametric statistics, which are not as powerful as parametric statistics (Polit & Beck, 2012). Efforts were made to minimize this limitation by applying a Bonferroni correction to adjust for multiple testing.
Conclusion
Students perceive crossword puzzles as an enjoyable, creative and innovative way to learn (Abuelo et al., 2016; Franklin et al., 2003; Gaikwad & Tankhiwale, 2012; Singh Matreja et al., 2021; Sumanasekera et al., 2020; Zamani et al., 2021). Based on the current results, our recommendation is to only incorporate crossword puzzles for those quizzes with habitually low scores for at least two semesters in the same course at the same institution. An important implication is the importance of moving beyond students’ subjective perceptions and towards objectively assessing knowledge retention. The current work should be replicated in additional disciplines beyond nursing; undergraduate and graduate education; and face-to-face, hybrid and online courses.
Acknowledgements
I would like to gratefully acknowledge the Wisconsin Teaching Fellows and Scholars Program, especially Drs. Cyndi Kernahan and David Voelker for their guidance and feedback.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Clinical and Translational Science Award program through the NIH National Center for Advancing Translational Sciences [grant numbers UL1TR000427, KL2TR000428] and the Mississippi Center for Clinical and Translational Research [grant number 5U54GM115428]. The funding sources had no role in the project design, collection, analysis, interpretation of data, writing of the report, or the decision to submit for publication. The content is solely the responsibility of the author and does not necessarily represent the official views of the NIH.
Footnotes
Conflicts of Interests
The authors declare that there is no conflict of interest regarding the publication of this article.
Contributor Information
Elisa R. Torres, University of Mississippi Medical Center
P. Renée Williams, University of Mississippi Medical Center.
Wondwosen Kassahun-Yimer, University of Mississippi Medical Center.
Xiaoshan Zhu Gordy, University of Mississippi Medical Center.
References
- Abdulmajed H, Park YS, & Tekian A (2015). Assessment of educational games for health professions: A systematic review of trends and outcomes. Medical Teacher, 37(Supplement 1), S27–S32. Retrieved from https://www.tandfonline.com/doi/full/10.3109/0142159X.2015.1006609
- Abuelo A, Castillo C, & May SA (2016). Usefulness of crossword puzzles in helping first-year BVSc students learn veterinary terminology. Journal of Veterinary Medical Education, 43, 255–262. Retrieved from https://pubmed.ncbi.nlm.nih.gov/27111003/
- Aflalo E (2021). Students generating questions as a way of learning. Active Learning in Higher Education, 22, 63–75. Retrieved from https://journals.sagepub.com/doi/10.1177/1469787418769120
- American Association of Colleges of Nursing. (2006). Essentials of doctoral education for advanced nursing practice: Clinical scholarship and analytical methods for evidence-based practice. Washington, DC: Author. Retrieved from https://www.aacnnursing.org/Portals/42/Publications/DNPEssentials.pdf
- American Association of Colleges of Nursing. (2011). The essentials of master's education in nursing. Washington, DC: Author. Retrieved from https://www.aacnnursing.org/portals/42/publications/mastersessentials11.pdf
- Anderson LW, & Krathwohl DR (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's Taxonomy of Educational Objectives. New York: Addison Wesley Longman, Inc. Retrieved from https://www.uky.edu/~rsand1/china2018/texts/Anderson-Krathwohl%20-%20A%20taxonomy%20for%20learning%20teaching%20and%20assessing.pdf
- Beagley JE, & Capaldi M (2016). The effect of cumulative tests on the final exam. Problems, Resources, and Issues in Mathematics Undergraduate Studies, 26, 878–888. Retrieved from https://www.tandfonline.com/doi/abs/10.1080/10511970.2016.1194343
- Bonwell CC, & Eison JA (1991). Active learning: Creating excitement in the classroom. 1991 ASHE-ERIC Higher Education Reports. Washington, DC: Association for the Study of Higher Education, ERIC Clearinghouse on Higher Education. Retrieved from https://eric.ed.gov/?id=ED336049
- Coticone SR (2013). Utility of self-made crossword puzzles as an active learning method to study biochemistry in undergraduate education. Journal of College Science Teaching, 42(4), 33–39. Retrieved from https://eric.ed.gov/?id=EJ1011749
- Davis TM, Shepherd B, & Zwiefelhofer T (2009). Reviewing for exams: Do crossword puzzles help in the success of student learning? Journal of Effective Teaching, 9(3), 4–10. Retrieved from https://uncw.edu/jet/articles/vol9_3/davis.pdf
- Fink LD (2013). Creating significant learning experiences: An integrated approach to designing college courses. San Francisco: Jossey-Bass. Retrieved from https://www.iup.edu/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=253275&libID=253298
- Franklin S, Peat M, & Lewis A (2003). Non-traditional interventions to stimulate discussion: The use of games and puzzles. Journal of Biological Education, 37, 79–84. Retrieved from https://www.tandfonline.com/doi/abs/10.1080/00219266.2003.9655856
- Gaikwad N, & Tankhiwale S (2012). Crossword puzzles: Self-learning tool in pharmacology. Perspectives on Medical Education, 1(5–6), 237–248. Retrieved from https://pubmed.ncbi.nlm.nih.gov/23240102/
- Gregory MR (2002). Constructivism, standards, and the classroom community of inquiry. Educational Theory, 52, 397–408. Retrieved from https://eric.ed.gov/?id=EJ673909
- Khanna MM, Brack AS, & Finken LL (2013). Short- and long-term effects of cumulative finals on student learning. Teaching of Psychology, 40, 175–182. Retrieved from https://journals.sagepub.com/doi/abs/10.1177/0098628313487458
- Krathwohl DR (2002). A revision of Bloom's taxonomy: An overview. Theory into Practice, 41, 212–218. Retrieved from https://www.depauw.edu/files/resources/krathwohl.pdf
- Lang JM (2016). Small teaching. San Francisco: Jossey-Bass. Retrieved from https://www.wiley.com/en-us/Small+Teaching%3A+Everyday+Lessons+from+the+Science+of+Learning-p-9781118944493
- Lawrence NK (2013). Cumulative exams in the introductory psychology course. Teaching of Psychology, 40, 15–19. Retrieved from https://journals.sagepub.com/doi/abs/10.1177/0098628312465858
- Murphy M, Spillane K, Cully J, Navarro-Pardo E, & Moret-Tatay C (2016). Can word puzzles be tailored to improve different dimensions of verbal fluency? A report of an intervention study. Journal of Psychology, 150(6), 743–754. Retrieved from https://pubmed.ncbi.nlm.nih.gov/27224052/
- Nirmal L, Muthu MS, & Prasad M (2020). Use of puzzles as an effective teaching-learning method for dental undergraduates. International Journal of Clinical Pediatric Dentistry, 13(6), 606–610. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8060935/pdf/ijcpd-13-606.pdf
- Ogrinc G, Armstrong GE, Dolansky MA, Singh MK, & Davies L (2019). SQUIRE-EDU (Standards for Quality Improvement Reporting Excellence in Education): Publication guidelines for educational improvement. Academic Medicine, 94, 1461–1470. Retrieved from https://pubmed.ncbi.nlm.nih.gov/30998575/
- Orawiwatnakul W (2013). Crossword puzzles as a learning tool for vocabulary development. Electronic Journal of Research in Educational Psychology, 11, 413–428. Retrieved from https://www.redalyc.org/pdf/2931/293128257006.pdf
- Patrick S, Vishwakarma K, Giri VP, Datta D, Kumawat P, Singh P, & Matreja PS (2018). The usefulness of crossword puzzle as a self-learning tool in pharmacology. Journal of Advances in Medical Education & Professionalism, 6(4), 181–185. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6191832/pdf/JAMP-6-181.pdf
- Polit DF, & Beck CT (2012). Nursing research: Generating and assessing evidence for nursing practice. Philadelphia: Wolters Kluwer Health. Retrieved from https://www.amazon.com/Nursing-Research-Generating-Assessing-Evidence/dp/1605477087
- Shawahna R, & Jaber M (2020). Crossword puzzles improve learning of Palestinian nursing students about pharmacology of epilepsy: Results of a randomized controlled study. Epilepsy & Behavior, 106, 107024. Retrieved from https://www.epilepsybehavior.com/article/S1525-5050(20)30203-1/fulltext
- Singh Matreja P, Kaur J, & Yadav L (2021). Acceptability of the use of crossword puzzles as an assessment method in pharmacology. Journal of Advances in Medical Education & Professionalism, 9(3), 154–159. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8273532/
- Sumanasekera W, Turner C, Ly K, Hoang P, Jent T, & Sumanasekera T (2020). Evaluation of multiple active learning strategies in a pharmacology course. Currents in Pharmacy Teaching & Learning, 12(1), 88–94. Retrieved from https://www.sciencedirect.com/science/article/abs/pii/S1877129718300698
- Torres ER (2019). Cumulative quizzes in a nursing research course for nursing doctoral students. Journal of Nursing Education, 58, 243–246. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6639015/pdf/nihms-1031999.pdf
- Vandiver DM, & Walsh JA (2010). Assessing autonomous learning in research methods courses: Implementing the student-driven research project. Active Learning in Higher Education, 11(1), 31–42. Retrieved from https://journals.sagepub.com/doi/10.1177/1469787409355877
- Waltz CF, Strickland OL, & Lenz ER (2005). Measurement in nursing and health research. New York: Springer Publishing Company. Retrieved from https://www.thefreelibrary.com/Measurement+in+Nursing+and+Health+Research%2C+3rd+ed-a0143215378
- Zamani P, Biparva Haghighi S, & Ravanbakhsh M (2021). The use of crossword puzzles as an educational tool. Journal of Advances in Medical Education & Professionalism, 9(2), 102–108. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8106739/pdf/JAMP-9-102.pdf