The Journal of Chiropractic Education
2018 Jun 6;32(2):84–89. doi: 10.7899/JCE-17-20

Description of a change in teaching methods and comparison of quiz versus midterm scores in a research methods course

Stephanie G B Sullivan, Kathryn T Hoiriis, Lucia Paolucci
PMCID: PMC6192476  PMID: 29873246

Abstract

Objective:

We describe a change in teaching method from extended face-to-face instruction to a blended classroom environment in a research methods course and compare student scores following a change in assessment from midterm examinations to weekly quizzes.

Methods:

The course traditionally had been taught using a weekly 2-hour lecture for each academic term. A change in teaching methods was designed to include 20 minutes of lecture followed by 30 minutes of topic-specific in-class group discussions. The students then continued group work for an additional hour at an alternative location of their choice, such as the library, a café, student study areas, or home. Student homework/reading assignments were given as topics for weekly group discussions. In addition, the midterm examinations were replaced with weekly quizzes. Using t tests and analysis of variance, scores for four student cohorts (two assessed by midterm examination, two by quizzes) were compared using identical multiple-choice questions for two topics. Student verbal feedback was elicited at the end of each term.

Results:

Quiz scores showed significant improvement over midterm scores for the more challenging statistics multiple-choice questions (t[371] = −2.21, p = .03, d = 0.23), with no significant improvement in multiple-choice questions about the safety of human subjects (t[374] = −0.40, p = .69, d = 0.04). Student verbal feedback indicated higher satisfaction with the blended classroom and experiential learning style.

Conclusion:

Assessment using quizzes in an early and often format, instead of a midterm examination, was associated with higher scores on identical questions. Students preferred the blended classroom environment with experiential learning exercises and weekly quizzes.

Key Indexing Terms: Teaching Methods, Educational Activities, Learning, Chiropractic

INTRODUCTION

Creating a learning environment and assessment methodologies that facilitate, rather than inhibit, learning is an ongoing challenge for the professorate. Large class sizes, the inherent challenges of group projects, and student perceptions of content importance often stifle learning and application. With this in mind, emerging trends in education, such as blended experiential learning and alternative assessment timelines, may provide a solution.

Historically, large classes have been taught using a lecture format: the professor conveys knowledge to large numbers of students, providing limited opportunity for dialogue.1 The challenge with lecture classes is the tendency for students to become spectators who do not engage with the material.1,2 Blended learning evolved in response to the lecture format and has been defined in numerous ways; most notably, the blended learning experience combines face-to-face interaction with the use of electronic resources outside the classroom, such as videos and exercises.1,3 Blended learning offers several benefits,1 which is one reason this form of learning is beginning to emerge in professional programs and businesses around the world. Benefits include greater scheduling flexibility, improved student motivation, reduction in wasted time, decreased cost, and better control.3 These benefits give the professor freedom in class to expand on key principles or to integrate additional learning techniques, such as experiential learning.

Experiential learning, learning by doing, engages students in the learning process.4,5 Through carefully designed and directed experiences, students have active control over the learning process, expanding knowledge acquisition beyond rote material knowledge to include development of metacognitive proficiencies.5 Commonly practiced in a group environment, students work together to solve problems, develop necessary skills, critically evaluate a topic, and reach an academic goal.5 This form of active engagement has been successful for the development of writing skills, critical thinking, advanced project development, and clinical care.1,2,4,5

Another, often underappreciated, technique to enhance learning is the inclusion of scheduled quizzes (QUs), which may sharpen the focus on key learning outcomes and content understanding. Intentional development of mechanisms designed to reinforce the importance of key concepts, paired with the freedom that may be afforded by blended learning or environment flexibility, may provide workable solutions for the evolving classroom.6–8 Shifting the view of assessments from measures of student performance to learning aids makes them an integral part of the learning experience,9,10 offers a potential solution for overcoming the effects of a "divide and conquer" mentality during group projects, and provides an avenue for valuable feedback to the professor.6–8

The implementation of blended learning, experiential activities, and a modified assessment schedule requires a considerable time commitment from the professor. Existing educational material and assessments must be adapted to meet the needs of the revised class format. We describe the yearlong process and outcomes of a change in teaching methods in which the instructor replaced extended face-to-face instruction with a multimodal approach and alternative assessment timelines for a research methods class with more than 80 students per quarter.

METHODS

This descriptive study was approved by the Life University institutional review board and provided for the reporting of course methodologies, student perceptions, and student assessment scores for a research methods course. The 2-hour course included didactic learning, blended learning strategies, experiential group work, an alternative work environment, and assessments of student learning. In addition to learning the fundamentals of research methodology, students worked in groups to develop a research study, including completion of a written proposal and an institutional review board application. This research study served as the final summative assessment (final exam) and was designed as a group experiential activity. Students chose and researched their topics, and group activities were implemented to guide the process and to engage students in dialogue and reflection.

Four successive quarters (Fall, Winter, Spring, Summer) were observed for the study (N = 376 students). Each quarter combined lecture, face-to-face time allocated for group work, and online tools, with additional activities and resources provided through the Blackboard e-Education platform. Unique to quarters 2 through 4 (Winter, Spring, and Summer) was the formalization of the lecture and alternative work environment: lecture content was adapted to a consistent 20-minute time frame followed by 30 minutes of topic-specific in-class group discussions. To simulate a smaller class environment that allowed for professor oversight of group work, the class was divided into two combined lecture/work group sections. Students self-selected into a work group for the experiential activities, and each group then was assigned to a section through a random in-class drawing. For the duration of the quarter, one section reported at 9:00 AM for lecture/group work, and the second section reported at 10:00 AM for the same. Students also were required to meet as a group outside of class or through use of technology to fulfill the requirements of the experiential learning research project.

In quarters 3 and 4 (Spring and Summer), the midterm (MT) exam was replaced by four QUs. Two QUs used short-answer questions; the other two used multiple-choice questions (MCQs) identical to the MT questions, allowing direct comparison. These questions tested information from a National Institutes of Health (NIH; Bethesda, MD) training on the safety of human subjects and basic statistics (Stats) content from an assigned reading and lecture presentation. Additional graded class activities that reflected the lecture timeline and included benchmarks of the summative research project also were incorporated; examples included a draft of the study statistics plan and a report on the study protocol for the protection of human subjects. These activities replaced an article summary activity and a group citation exercise.

Upon completion of each course, all students were provided with a university course evaluation. To evaluate student performance between quarters, scores on identical MCQs were compared between the MT format (Fall and Winter quarters) and the QU format (Spring and Summer quarters) for two topics, Stats and safety of human subjects (NIH). Final scores for the summative research project also were compared across the four quarters, and student verbal feedback was elicited at the end of each term.

Student scores were entered into an Excel spreadsheet (Microsoft Corp, Redmond, WA) to compute descriptive statistics (percent change, mean, and SD). SPSS version 24 (IBM Corp, Armonk, NY) was used to test differences between QU and MT scores with t tests, and a 1-way analysis of variance (ANOVA) with Games-Howell post hoc testing was performed for final project score evaluations.
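Because the comparisons below rely only on group sizes, means, and SDs, the reported pooled two-sample t tests can be reproduced from the summary statistics alone. The following Python sketch is illustrative only, not the authors' SPSS procedure; it recomputes the Stats MCQ comparison reported in the Results, and small differences from the published values reflect rounding of the means.

```python
# Sketch: reproduce a pooled two-sample t test and Cohen's d from summary
# statistics (n, mean, SD per group). Illustrative only; the authors used SPSS.
from math import sqrt
from scipy import stats

# Stats MCQ summary statistics as reported in the Results (rounded means/SDs)
n1, m1, s1 = 217, 67.7, 22.5   # midterm (MT) group
n2, m2, s2 = 156, 72.9, 21.6   # quiz (QU) group

df = n1 + n2 - 2                                        # 371
sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)   # pooled SD
se = sp * sqrt(1 / n1 + 1 / n2)                         # SE of the difference
t = (m1 - m2) / se                                      # approx. -2.2
p = 2 * stats.t.sf(abs(t), df)                          # two-tailed p, approx. .03
d = abs(m1 - m2) / sp                                   # Cohen's d, approx. 0.23
half = stats.t.ppf(0.975, df) * se                      # half-width of 95% CI
print(f"t({df}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}, "
      f"95% CI [{(m1 - m2) - half:.2f}, {(m1 - m2) + half:.2f}]")
```

Run with the rounded inputs above, this yields t(371) of about −2.24 and a CI of about −9.8 to −0.6, consistent with the published t(371) = −2.21 and CI of −9.72 to −0.57 computed from unrounded data.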

RESULTS

Descriptive statistics included the number of students and the mean and standard deviation (SD) for each type of assessment (QU vs MT) for the two topics, Stats and NIH MCQs. For the NIH MCQs, student performance on QUs (n = 160, mean = 86.9, SD = 14.1) was only slightly higher than on the MT (n = 216, mean = 86.3, SD = 13.6; percent change = 0.7%), and the mean score difference of −0.58 was not significant (95% confidence interval [CI], −3.4 to 2.3; t[374] = −0.40, p = .69, d = 0.04). Further, there were no significant differences in NIH MCQ scores within quarters using the same assessment modality: the mean score difference between the MT-format quarters was 1.5 (95% CI, −2.2 to 5.1; t[214] = 0.78, p = .43, d = 0.11), and the mean score difference between the QU-format quarters was −2.7 (95% CI, −7.06 to 1.71; t[158] = −1.20, p = .23, d = 0.19).

For the Stats MCQs, a significant difference was noted between the QU (n = 156, mean = 72.9, SD = 21.6) and MT (n = 217, mean = 67.7, SD = 22.5) formats, with a mean score difference of −5.15 (95% CI, −9.72 to −0.57; t[371] = −2.21, p = .03, d = 0.23) and a percent change of 7.6%. Within quarters using the same assessment modality for the Stats MCQs, no significant difference was noted between the MT-only quarters (mean score difference = 3.4; 95% CI, −2.62 to 9.46; t[215] = 1.12, p = .27, d = 0.15). However, a significant difference was observed between the QU-only quarters (mean score difference = −12.5; 95% CI, −19.1 to −5.9; t[154] = −3.76, p < .005, d = 1.11; Table 1).

A comparison of the Fall and Summer quarter MCQ scores for the NIH and Stats questions also was completed. For the NIH MCQs, the mean score difference between the MT format (n = 113, mean = 87.0, SD = 13.2) and the QU format (n = 79, mean = 88.2, SD = 16.5) was −1.2 (95% CI, −5.47 to 3.0; t[190] = −0.58, p = .57, d = 0.08). For the Stats MCQs, the mean score difference between the MT format (n = 115, mean = 69.2, SD = 23.0) and the QU format (n = 81, mean = 78.9, SD = 21.6) was −9.6 (95% CI, −16.0 to −3.14; t[194] = −2.94, p = .004, d = 0.43).

Table 1.

Comparison of Student Performance on Quiz versus Midterm Assessments

Comparison                   Variable 1: Mean (n)   Variable 2: Mean (n)   p       95% CI of the Difference
FA to WI, NIH – MT only      FA: 86.99 (113)        WI: 85.53 (103)        .434    −2.21 to 5.12
FA to WI, Stats – MT only    FA: 69.32 (115)        WI: 65.90 (102)        .266    −2.62 to 9.46
SP to SU, NIH – QU only      SP: 85.56 (81)         SU: 88.23 (79)         .231    −7.06 to 1.71
SP to SU, Stats – QU only    SP: 66.37 (75)         SU: 78.89 (81)         <.001   −19.1 to −5.9
MT to QU, NIH                MT: 86.30 (216)        QU: 86.88 (160)        .688    −3.41 to 2.26
MT to QU, Stats              MT: 67.72 (217)        QU: 72.86 (156)        .028    −9.72 to −0.57
FA to SU, NIH                FA: 86.99 (113)        SU: 88.23 (79)         .565    −5.47 to 3.0
FA to SU, Stats              FA: 69.32 (115)        SU: 78.89 (81)         .004    −15.96 to −3.14

MCQ = multiple-choice question; MT = midterm; QU = quiz; NIH = National Institutes of Health safety of human subjects questions; Stats = statistics; FA = Fall; WI = Winter; SP = Spring; SU = Summer.

For the final summative assessment, a 1-way ANOVA with post hoc Games-Howell test was performed (homogeneity of variances was violated, as assessed by Levene's test, p = .001). Final summative assessment scores differed significantly between quarters (Welch's F[3, 199.74] = 4.63, p = .004, ω2 = 0.039). Scores for Fall (n = 114, mean = 94.2, SD = 5.3), Winter (n = 103, mean = 94.3, SD = 4.6), and Summer (n = 83, mean = 94.9, SD = 4.1) were higher than scores obtained by students in the Spring quarter (n = 81, mean = 91.7, SD = 6.7). Significant differences were observed only between the Spring quarter and all other quarters, with a Fall mean difference of −2.5 (95% CI, −4.8 to −0.2; p = .031), a Winter mean difference of −2.6 (95% CI, −4.8 to −0.3; p = .021), and a Summer mean difference of −3.2 (95% CI, −5.5 to −1.0; p = .002; Table 2).
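Welch's F is fully determined by the per-quarter group sizes, means, and SDs listed in Table 2, so the omnibus result can be checked by hand. This Python sketch is an illustration rather than the authors' SPSS output; from the Table 2 values it reproduces Welch's F(3, 199.74) of approximately 4.63.

```python
# Sketch: verify the reported Welch's F from the Table 2 summary statistics
# (n, mean, SD per quarter). Illustrative only; the authors used SPSS.
groups = {  # quarter: (n, mean, SD)
    "FA": (114, 94.18, 5.27),
    "WI": (103, 94.25, 4.60),
    "SP": (81, 91.69, 6.73),
    "SU": (83, 94.94, 4.08),
}

k = len(groups)
w = {q: n / s**2 for q, (n, m, s) in groups.items()}    # precision weights n/s^2
sw = sum(w.values())
grand = sum(w[q] * m for q, (n, m, s) in groups.items()) / sw  # weighted grand mean

num = sum(w[q] * (m - grand)**2 for q, (n, m, s) in groups.items()) / (k - 1)
lam = sum((1 - w[q] / sw)**2 / (n - 1) for q, (n, m, s) in groups.items())
F = num / (1 + 2 * (k - 2) / (k**2 - 1) * lam)          # Welch's F, approx. 4.63
df2 = (k**2 - 1) / (3 * lam)                            # denominator df, approx. 199.7
print(f"Welch's F({k - 1}, {df2:.2f}) = {F:.2f}")
```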

Table 2.

Comparison of Student Performance on Final Group Projects

Quarter   N     Mean    SD     95% CI of the Mean
FA        114   94.18   5.27   93.20 to 95.15
WI        103   94.25   4.60   93.35 to 95.15
SP         81   91.69   6.73   90.20 to 93.18
SU         83   94.94   4.08   94.05 to 95.83
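As an illustration of how the Games-Howell pairwise p values arise, the sketch below approximates them from the Table 2 summary statistics using the studentized range distribution; SPSS computes the same quantities from the raw scores, so minor rounding differences are expected. The scipy.stats.studentized_range distribution assumed here requires SciPy 1.7 or later.

```python
# Sketch: approximate Games-Howell post hoc comparisons from the Table 2
# summary statistics. Illustrative only; the authors ran this in SPSS.
from itertools import combinations
from math import sqrt
from scipy.stats import studentized_range

groups = {"FA": (114, 94.18, 5.27), "WI": (103, 94.25, 4.60),
          "SP": (81, 91.69, 6.73), "SU": (83, 94.94, 4.08)}
k = len(groups)

for (a, (na, ma, sa)), (b, (nb, mb, sb)) in combinations(groups.items(), 2):
    va, vb = sa**2 / na, sb**2 / nb        # variance of each group mean
    se2 = va + vb
    # Welch-Satterthwaite degrees of freedom for this pair
    df = se2**2 / (va**2 / (na - 1) + vb**2 / (nb - 1))
    q = abs(ma - mb) / sqrt(se2 / 2)       # studentized range statistic
    p = studentized_range.sf(q, k, df)
    print(f"{a} vs {b}: diff = {ma - mb:+.2f}, p = {p:.3f}")
```

Run as written, only the pairs involving Spring fall below .05 (for example, SP vs SU yields p of about .002), matching the pattern reported above.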

Student verbal feedback indicated higher satisfaction with the alternative environment for the experiential learning project and a preference for the QUs over the MT. Although limited feedback was provided through the university end-of-term student assessments, the Summer assessments reflected a higher percentage of perceived meta-competency attainment (Table 3). For Spring, the quarter that transitioned from MT to QU and added activities tied to the lecture and project timelines, neither of the two students completing the end-of-term assessment felt that meta-competency attainment was achieved (Table 3). For the question "The course objectives and requirements were clearly communicated," most students completing the survey felt the course usually or always met the criterion (Fall, 3 of 4; Spring, 2 of 2; Summer, 5 of 6). No end-of-term student assessments were completed during the Winter quarter, and few comments were submitted (Table 3).

Table 3.

Student Feedback by End of Term Survey

Question: % responding Usually/Always (n)                                                                                  FA         SP          SU
The course provided opportunities to identify problems and develop realistic solutions                                     75% (4)    0% (2)      83.3% (6)
The course provided opportunities to acquire the knowledge and skills                                                      75% (4)    0% (2)      100% (6)
The course provided opportunities to acquire the knowledge and skills necessary to perform well in higher level courses    75% (4)    0% (2)      100% (6)
The course objectives and requirements were clearly communicated                                                           75% (4)    100% (2)    83.3% (6)

Comments (verbatim): "Way too much busy work." "I really liked that [the instructor] divided the class into two groups, it made it more personal and effective and it was less daunting when it came to the presentation." "Students would be better served if we were able to get feedback from [the instructor] for more than the 25 minutes." "[The instructor] was very clear in her expectations and always available to answer student questions."

DISCUSSION

Education is evolving. Continued questioning by the professorate, technology advancements, and inherent changes in the way students interact with knowledge have paved the way for a holistic perspective of the learning process. To enhance student learning, faculty are encouraged to integrate new pedagogy into the classroom, and these efforts are supported through changes in accrediting agency requirements and government policy.11 While the temptation may be to use strategies such as blended learning simply to provide additional lecture time, this is not recommended.1 There are benefits and challenges to each new methodology; integration of multiple strategies provides a solution that can overcome many of the challenges.

The introduction of blended learning strategies provides a gateway for integration of experiential learning activities within the classroom environment.2 While experiential learning can be conducted without face-to-face interaction, there is benefit to reserving class time for activity oversight. Experiential learning activities often are surrounded by a level of ambiguity, which may evoke discomfort among some students.4 Modest oversight, as included in this study, assists with continued constructive progress, faculty nurturing of metacognitive skills (e.g., planning and reflection), and availability for answering questions. The addition of experiential learning complements the lecture to provide a deeper understanding of the material.

To capitalize on the benefits of experiential learning in a group environment, strategies must be adopted to overcome some of the inherent challenges, especially in a large classroom environment. Noise levels, the practice of divide and conquer, free-loaders, and scheduling issues detract from the benefits if not addressed properly.12–14 Dedicated in-class time resolves issues related to scheduling; however, group discussions may result in increased noise levels. This is distracting to individuals with attention-related disabilities, and although the intent of the professor is to engage and mentor students during their group exercise, the number of groups may prevent this. Some allowance for non–face-to-face group work time or environment flexibility may need to be made. Further, bad habits, such as "divide and conquer" and free-loading, can lead to segregated learning. Experiential activities often are project based, combining several learning goals into one project. Without direct guidance, students not "assigned" a topic by their group may not adequately learn the material, which may directly affect summative exam performance.12 Breaking the project down into component parts stresses the importance of each piece of the overall project.

Summative assessment of student knowledge is required to evaluate the success of a program and the level of a student's mastery in a content area,9,11,15 and summative assessments, such as group projects, are meant to gauge the level of content integration, understanding, and appropriate application of knowledge.11 The task is to employ strategies that motivate the learner to remain engaged with the material during the entire course of study. Alternative assessment timelines, such as frequent QUs in place of a MT exam, provide lower-stakes assessment alternatives that have been shown to assist with recall, increase study time, decrease failure rates, and result in higher overall performance.6–8 However, as with all research, context applies, and factors such as content, quiz venue, value, and scheduling may change the outcome. Further, frequent QUs and formative exercises may not be practical and may increase the burden of work in large class sizes.9

This study describes a multimodal approach to teaching a large research methods class. Curricular changes, such as replacing extended face-to-face instruction with a blended classroom environment that incorporated experiential learning exercises and weekly assessments, allowed for the creation of an artificially smaller classroom environment, improved MCQ scores, and the integration of advanced meta-competency skills training. The subject of smaller class size has implications beyond score performance. From an administrative and funding perspective, governments and universities weigh the benefits against the costs, and the result in university settings often is class sizes greater than 100.16,17 This reality has helped to fuel the integration of unique approaches to education. While the negative effect on grades seems to be mitigated with increasing student age, diminished teacher-pupil interaction and classroom engagement may prevent attainment of advanced meta-competency skills and perceived learning.16,18 In a review by Cuseo, research indicated that large class sizes tend to have negative effects on student learning and often lower student engagement with the course content, the professor, and individuals within the class.17 This may be due partly to instructors' use of lecture as the dominant pedagogy in large classes.1

In this study, the artificially decreased class size from Fall to Winter was accompanied by a slight, nonsignificant downward trend in MT scores. This may be a result of the transition between the two class formats or simply due to chance. Between Winter and Spring, additional summative project- and lecture-related activities were added to the curriculum, along with the transition from the MT to the QU format. The change to QUs initially showed only a slight improvement in student scores. As the professor works through such a change in a program, one would expect a curricular adaptation period to have some impact on student performance. In this study, remarkable changes in student performance were not noted until the Summer quarter. From Spring to Summer and from Fall to Summer, significant changes were observed in student scores on the QU versus MT MCQs for the more challenging Stats questions, with slight improvements in scores observed for the NIH questions and the final summative project.

Comparing Fall to Summer scores, thereby accounting for the transition period, the results reflected previous studies that demonstrated improvement in student performance with the addition of QUs and with QUs in lieu of MT exams.6,19 Pennebaker et al. examined changes in student performance on identical questions when MTs were replaced with daily, in-class online QUs; students improved by half a letter grade in classes that received daily QUs.6 Additionally, the improvement in scores carried over to concurrent classes and to classes in the next semester. Pennebaker et al. proposed that reviewing for the daily QUs helped students develop improved study habits.6 Beyond comparisons between MT assessments and QUs, research also has demonstrated improved memory and retention with more frequent QUs, creating a "learning effect" through quiz use.8 One challenge with group work is divide-and-conquer task assignment, which results in some students gaining segmented mastery of course material and a disproportionate placement of importance on particular learning outcomes. The "learning effect" of QU use may help to reinforce the importance and learning of required student learning outcomes.

The addition of QUs and the artificially simulated smaller class size were two ways to adapt class performance and overcome the challenges of group work in a large class environment. Facilitating teamwork activities during class periods also assisted in overcoming some of the traditional hurdles reported previously.14 During the Spring and Summer quarters, additional consequential activities, tied to the lecture and summative group project timeline, were added to the curriculum; time was allowed during the class period to work on these activities, and professor availability for questions was theoretically enhanced by the smaller class size. However, among the end-of-term comments, one student in the Spring quarter reported the need for additional classwork time, which was limited due to the split nature of the class and the alternative group work environment. The artificially smaller class size evolved from the increased time availability provided through the use of blended learning strategies. Hybrid courses, such as those using blended learning techniques, have been shown to be more effective than lecture courses on measures of student engagement and equally effective on measures of student learning.1,20,21 Further, the smaller class size, QUs, benchmark activities, and group work oversight potentially provided an opportunity for the enhancement of metacognitive proficiencies. Active engagement through group work, especially in student-driven learning that forces critical evaluation, provides for the development of critical thinking, planning, and reflection proficiencies.2,4 Although student participation was limited, a modest improvement in end-of-term scores for meta-competency–related topics was demonstrated between Fall and Summer, and, although anecdotal, student preference was overwhelmingly positive for the simulated smaller classes.

This descriptive study has limitations. It documents the progression of a research methods classroom environment in which the professor gradually implemented new learning methodologies in hopes of improving student learning and understanding of the material. Because the study was retrospective, no preliminary or post surveys regarding student attitudes or perceptions of learning were implemented. Further, given that the changes occurred organically, there were transition periods during Winter and Spring; although the material taught and the professor were the same for consecutive quarters, we suspect the delivery may not have been as seamless. The artificial creation of a smaller class size may not be realistic for other courses and could affect faculty compensation. However, faculty should be provided the flexibility to engage new learning strategies to enhance the learning process. Student perceptions and results were favorable for each of the curricular changes, and future studies should include additional classes with a prospective design that measures changes in metacognitive proficiencies in addition to student performance and attitudes.

CONCLUSION

Teaching large classes without relying on lecture-based strategies is a difficult challenge in education. We described the implementation of a blended teaching method using experiential learning with a group project and a change in assessment strategy from MT examinations to weekly QUs. These changes in teaching methods and assessments were associated with some improvement in student scores on MCQs.

FUNDING SOURCES AND CONFLICTS OF INTEREST

This work was funded internally. The authors have no conflicts of interest to declare relevant to this work.

REFERENCES

1. Illig K. Techniques and technology to revise content delivery and model critical thinking in the neuroscience classroom. J Undergrad Neurosci Educ. 2015;13(3):A160–A165.
2. Wu W, Hyatt B. Experiential and project-based learning in BIM for sustainable living with tiny solar houses. Procedia Eng. 2016;145:579–586.
3. Sigaroudi AE, Ghiyasvandian S, Nasabadi AN. Understanding doctoral nursing students' experiences of blended learning: a qualitative study. Acta Med Iran. 2016;54(11):743–749.
4. Grace S, Innes E, Patton N, Stockhausen L. Ethical experiential learning in medical, nursing and allied health education: a narrative review. Nurse Educ Today. 2017;51:23–33. doi: 10.1016/j.nedt.2016.12.024.
5. Tanaka K, Dam HC, Kobayashi S, Hashimoto T, Ikeda M. Learning how to learn through experiential learning promoting metacognitive skills to improve knowledge co-creation ability. Procedia Comput Sci. 2016;99:146–156.
6. Pennebaker JW, Gosling SD, Ferrell JD. Daily online testing in large classes: boosting college performance while reducing achievement gaps. PLoS One. 2013;8(11):e79774. doi: 10.1371/journal.pone.0079774.
7. Kouyoumdjian H. Influence of unannounced quizzes and cumulative final on attendance and study behavior. Teach Psychol. 2004;31(2):110–111.
8. Roediger HL, Karpicke JD. The power of testing memory: basic research and implications for educational practice. Perspect Psychol Sci. 2006;1(3):181–210. doi: 10.1111/j.1745-6916.2006.00012.x.
9. Duers LE, Brown N. An exploration of student nurses' experiences of formative assessment. Nurse Educ Today. 2009;29:654–659. doi: 10.1016/j.nedt.2009.02.007.
10. Koh LC. Refocusing formative feedback to enhance learning in pre-registration nurse education. Nurse Educ Pract. 2008;8:223–230. doi: 10.1016/j.nepr.2007.08.002.
11. McClendon K, Ho T. Building a quality assessment process for measuring and documenting student learning. Assess Update. 2016;28(2):7–14.
12. Davies WM. Groupwork as a form of assessment: common problems and recommended solutions. High Educ. 2009;58:563–584.
13. Frash RE, Kline S, Stahura JM. Mitigating social loafing in team-based learning. J Teach Trav Tourism. 2003;3(4):57–77.
14. Goddard A. From passive to active learning: a case study using a modified team-based learning approach. Employment Relations Record. 2014;14(1):26–39.
15. Brown G, LTSN Generic Centre. Assessment: A Guide for Lecturers. York: Learning & Teaching Support Network; 2001.
16. Chapman L, Ludlow L. Can downsizing college class sizes augment student outcomes? An investigation of the effects of class size on student learning. J Gen Educ. 2010;59(2):105–123.
17. Cuseo J. The empirical case against large class size: adverse effects on the teaching, learning, and retention of first-year students. J Fac Dev. 2007;21(1):5–21.
18. Blatchford P, Bassett P, Brown P. Examining the effect of class size on classroom engagement and teacher-pupil interaction: differences in relation to pupil prior attainment and primary vs. secondary schools. Learn Instr. 2011;21:715–730.
19. Zhang N, Henderson C. Can formative quizzes predict or improve summative exam performance? J Chiropr Educ. 2015;29(1):16–21. doi: 10.7899/JCE-14-12.
20. Delialioglu O, Yildirim Z. Design and development of a technology enhanced hybrid instruction based on MOLTA model: its effectiveness in comparison to traditional instruction. Comput Educ. 2008;51:474–483.
21. Kakish KM, Pollacia L, Heinz A, Sinclair JL, Thomas A. Analysis of the effectiveness of traditional versus hybrid student performance for elementary statistics course. IJ-SoTL. 2012;6(2):1–9.
