BMC Med Res Methodol. 2019 May 7;19:92. doi: 10.1186/s12874-019-0736-6

Table 2. Distribution of evaluation designs according to the evaluation field (n = 105)

| Evaluation design | Health promotion/Prevention (n = 29): Total, n (%ᵃ) | Health promotion/Prevention (n = 29): Including process and/or mechanism evaluation, n (%ᵇ) | Clinical care/Health services research (n = 76): Total, n (%ᵃ) | Clinical care/Health services research (n = 76): Including process and/or mechanism evaluation, n (%ᵇ) |
|---|---|---|---|---|
| Individual randomized trials | 6 (20.7) | 5 (83.3) | 17 (22.4) | 9 (52.9) |
| Randomized trial adaptations | 9 (31.0) | 9 (100) | 37 (48.7) | 34 (91.9) |
| Cluster randomized trials | 5 (17.2) | 5 (100) | 19 (25) | 18 (94.7) |
| Pragmatic trials | 1 (3.4) | 1 (100) | 8 (10.5) | 7 (87.5) |
| Cluster and pragmatic | 3 (10.3) | 3 (100) | 10 (13.2) | 9 (90) |
| Alternative methods to RCT | 14 (48.3) | 13 (92.9) | 22 (28.9) | 14 (63.6) |
| Quasi-experimental | 7 (24.1) | 6 (85.7) | 7 (9.2) | 6 (85.7) |
| Cohort study | 0 (0) | 0 | 6 (7.9) | 1 (16.7) |
| Realist evaluation | 2 (6.9) | 2 (100) | 5 (6.6) | 5 (100) |
| Case studies and other approaches | 5 (17.2) | 5 (100) | 4 (5.3) | 2 (50) |

ᵃ n/number of designs (N = 108)

ᵇ n/number of that type of design (for example, 87.5% of individual RCTs are combined with a process and/or mechanism evaluation)