Dentomaxillofac Radiol. 2018 Aug 1;48(1):20180027. doi: 10.1259/dmfr.20180027

Table 4.

Types of interventions, study designs and learning outcome measures reported in included studies

| Study | Intervention design | Study design | Quantitative outcomes (p value or mean) | Qualitative outcomes (p value or mean) |
|---|---|---|---|---|
| Al-Rawi, 2007 | Online interactive assessment modules on "basics" of CBCT* | Split cohorts | Assessment outcomes: no significant difference (p = 0.14) | "Attitude": "positive" |
| Busanello, 2015 | 3 e-learning sessions of 110 digitally altered images for recognition and diagnosis of changes* | Split cohorts | Significant written (p < 0.004) and practical (p < 0.003) test results | Preference (mean = 90.5%) |
| Cruz, 2014 | "e-course": digital periapical images with texts and questions to evaluate maxillofacial anatomy* | Two cohorts | Assessment outcomes: no significant difference (p > 0.05) | "Satisfaction" (mean = 8.47) |
| Howerton, 2002 | 27 online interactive video clips for virtually exposing, developing and mounting dental radiographs on a manikin* | Split cohorts | Performance: no difference in radiograph quality (p = 0.30) | "Preference" (p < 0.0001) |
| Howerton, 2004 | 27 online interactive video clips blended with 3 PowerPoint lectures for exposing dental radiographs# | Split cohorts | Performance: no significant difference in post-test (p = 0.98) | "Preference" (p < 0.0001) |
| Kavadella, 2012 | e-class course blended with weekly lectures for knowledge of radiological lesions# | Split cohorts | Significant post-test results for knowledge (p < 0.005) | "Attitude": "positive" (mean = 91%) |
| Meckfessel, 2011 | Online "medical schoolbook" blended with 20 lectures on positioning the X-ray apparatus, obtaining radiographs virtually, and knowledge of "physical basics"# | Two cohorts | Significant examination results for knowledge grade (p < 0.001) | "Attitude": "positive" (mean = 70%) |
| Mileman, 2003 | Computer-assisted learning for the detection of proximal caries on digitized bitewings* | Split cohorts | Significantly higher sensitivity for caries detection (p = 0.005) | |
| Nilsson, 2007 | Cohort 1: 2 self-directed sessions using software for a virtual "tube shift technique". Cohort 2: tutor-led session with "10 cases of computerized materials" on the impact of tube positioning* | Two cohorts | Cohort 1 had significantly better pre- and post-intervention "proficiency and radiography test" results for interpreting spatial information (p < 0.01) | |
| Nkenke, 2012 | 8 online e-learning modules blended with 8 lectures for "radiological science course"# | Split cohorts | No significant examination results for knowledge grade (p = 0.449) | "Attitude": "positive" (p = 0.020) |
| Nkenke, 2012 | 8 lectures followed by e-mail with MCQs and feedback on "radiological science course"# | Split cohorts | "Spent more time with learning content" (p < 0.0005) | "Attitude": "positive" (p = 0.022) |
| Silveira, 2008 | Lecture first, blended with 30 min of online virtual procedures for the bisecting-angle technique; tube positioning then tested on a simulated patient, followed by radiograph exposure on a manikin# | Split cohorts | Significant "simulation" test grade (p < 0.01) | "More confident and better prepared" for real patients |
| Silveira, 2009 | Interactive e-learning using virtual objects, animations and quizzes to identify 28 cephalometric landmarks* | Split cohorts | Significant knowledge grade for correct landmark identification (p < 0.05) on delayed post-test | Preference (mean = 82.5%) |
| Tan, 2009 | Lectures first, blended with 8 e-learning modules for the "radiological science course"# | Split cohorts | Significant knowledge assessment scores in examination (p < 0.01) | Perception (p < 0.05) |
| Tsao, 2016 | Lectures first, blended with 5 e-learning modules for the diagnosis of interproximal caries# | Split cohorts | No significant difference in assessment scores for diagnostic accuracy (p = 0.45) | Perception (mean = 62.5%) |
| Vuchkova, 2011 | Conventional textbook on oral pathosis, blended with a seminar using 3D software on depth relationships of pathoses on panoramic radiographs# | One cohort | Use of 3D software did not improve outcomes (p > 0.05) | "Preference" (mean = 88%) |
| Vuchkova, 2012 | Phase 1: online textbook followed by online "digital tool" (first cohort) and vice versa (second cohort). Phase 2: conventional textbook followed by "digital tool" for knowledge of radiographic anatomy# | Two cohorts in Phase 1; one cohort in Phase 2 | No significant difference in test scores for knowledge (p > 0.05) | "Preference" (mean = 94%) |

* = e-learning; # = blended learning.

Two cohorts: students compared from different semesters or years. Split cohorts: students from the same semester or year divided into two groups.