Table 3.

| Evaluation approach^a | Use of approach^b, % "yes" response | Familiarity with approach^c, mean | Familiarity with approach^c, % of 3 + 4 (n/N) | Proving effectiveness^d, mean | Proving effectiveness^d, % of 3 + 4 (n/N) |
| --- | --- | --- | --- | --- | --- |
| **Pilot/feasibility** | 58 (SD 32.7) | 2.9 (SD 0.5) | | 2.3 (SD 0.3) | |
| 3. Feasibility study^e | 94 | 3.6 | 88 (28/42) | 2.6 | 52 (16/31) |
| 4. Questionnaire^e | 100 | 3.4 | 84 (27/63) | 2.5 | 52 (16/31) |
| 8. Single-case experiments or n-of-1 study (N=1) | 28 | 2.5 | 43 (13/60) | 2.0 | 27 (8/30) |
| 12. Action research study | 41 | 2.6 | 50 (15/58) | 2.3 | 38 (11/29) |
| 44. A/B testing | 25 | 2.5 | 45 (13/58) | 2.2 | 36 (10/28) |
| **Development and usability** | 37 (SD 29.1) | 2.5 (SD 0.4) | | 2.1 (SD 0.3) | |
| 5. Focus group (interview) | 91 | 3.2 | 81 (26/62) | 2.3 | 32 (10/31) |
| 6. Interview | 94 | 3.1 | 75 (24/62) | 2.3 | 35 (11/31) |
| 23. Think-aloud method | 66 | 2.6 | 52 (15/59) | 1.7 | 14 (4/29) |
| 25. Cognitive walkthrough | 31 | 2.4 | 37 (11/59) | 1.8 | 17 (5/30) |
| 27. eHealth^f Analysis and Steering Instrument | 12 | 2.4 | 55 (16/58) | 2.4 | 48 (14/29) |
| 28. Model for Assessment of Telemedicine applications (MAST) | 22 | 2.5 | 48 (14/59) | 2.4 | 37 (11/30) |
| 29. Rapid review | 31 | 2.0 | 23 (7/58) | 1.8 | 7 (2/29) |
| 30. eHealth Needs Assessment Questionnaire (ENAQ) | 6 | 2.4 | 45 (13/58) | 2.0 | 24 (7/29) |
| 31. Evaluative Questionnaire for eHealth Tools (EQET) | 3 | 2.4 | 52 (15/58) | 2.3 | 41 (12/29) |
| 32. Heuristic evaluation | 19 | 2.2 | 31 (9/57) | 2.1 | 24 (7/29) |
| 33. Critical incident technique | 9 | 2.0 | 24 (7/59) | 1.8 | 4 (1/28) |
| 36. Systematic review^e | 94 | 3.1 | 67 (20/62) | 2.9 | 69 (20/29) |
| 39. User-centered design methods^e | 53 | 3.2 | 73 (22/62) | 2.5 | 50 (14/28) |
| 43. Vignette study | 41 | 2.2 | 31 (9/58) | 1.6 | 7 (2/28) |
| 45. Living lab | 34 | 2.5 | 41 (12/58) | 2.3 | 54 (15/28) |
| 50. Method for technology-delivered health care measures | 9 | 2.3 | 39 (11/58) | 2.1 | 25 (7/28) |
| 54. Cognitive task analysis (CTA) | 16 | 2.1 | 23 (7/59) | 1.9 | 18 (5/28) |
| 60. Simulation study | 41 | 2.5 | 50 (15/60) | 2.2 | 34 (10/29) |
| 62. Sociotechnical evaluation | 22 | 2.3 | 37 (11/60) | 2.1 | 29 (8/28) |
| **All phases** | 11 (SD 4) | 2.3 (SD 0.2) | | 2.2 (SD 0.2) | |
| 21. Multiphase Optimization Strategy (MOST) | 6 | 2.3 | 45 (13/58) | 2.3 | 39 (11/28) |
| 26. Continuous evaluation of evolving behavioral intervention technologies (CEEBIT) framework | 6 | 2.4 | 48 (14/60) | 2.3 | 38 (11/29) |
| 40. RE-AIM^g framework^e | 19 | 2.6 | 61 (17/59) | 2.4 | 52 (14/27) |
| 46. Normalization process model | 9 | 2.0 | 25 (7/57) | 1.9 | 18 (5/28) |
| 48. CeHRes^h Roadmap | 16 | 2.4 | 43 (12/58) | 2.3 | 41 (11/27) |
| 49. Stead et al [82] evaluation framework | 12 | 2.2 | 38 (11/58) | 2.1 | 22 (6/27) |
| 51. CHEATS^i: a generic information communication technology evaluation framework | 6 | 2.3 | 41 (12/58) | 2.1 | 26 (7/27) |
| 52. Stage Model of Behavioral Therapies Research | 9 | 1.9 | 21 (6/58) | 2.0 | 22 (6/27) |
| 53. Life cycle–based approach to evaluation | 12 | 2.3 | 45 (13/58) | 2.0 | 21 (6/28) |
| **Effectiveness testing** | 45 (SD 23) | 2.6 (SD 0.3) | | 2.6 (SD 0.4) | |
| 1. Mixed methods^e | 87 | 3.2 | 81 (26/63) | 2.9 | 65 (20/31) |
| 2. Pragmatic randomized controlled trial^e | 62 | 3.1 | 77 (24/63) | 3.3 | 83 (25/30) |
| 7. Cohort study^e (retrospective and prospective) | 81 | 2.7 | 58 (18/61) | 2.5 | 58 (18/31) |
| 9. Randomized controlled trial^e | 91 | 3.3 | 71 (22/63) | 3.3 | 74 (23/31) |
| 10. Crossover study^e | 44 | 2.7 | 57 (17/61) | 2.7 | 59 (17/29) |
| 11. Case series | 50 | 2.1 | 20 (6/60) | 1.8 | 10 (3/29) |
| 13. Pretest-posttest study design^e | 62 | 2.6 | 45 (14/60) | 2.5 | 50 (15/30) |
| 14. Interrupted time-series study | 44 | 2.5 | 43 (13/59) | 2.7 | 59 (17/29) |
| 15. Nested randomized controlled trial | 31 | 2.3 | 37 (11/59) | 2.8 | 55 (16/29) |
| 16. Stepped wedge trial design^e | 56 | 2.8 | 70 (21/60) | 3.2 | 90 (26/29) |
| 17. Cluster randomized controlled trial^e | 50 | 2.8 | 60 (18/60) | 3.1 | 69 (20/29) |
| 19. Trials of intervention principles (TIPs)^e | 23 | 2.5 | 42 (13/61) | 2.5 | 43 (13/30) |
| 20. Sequential Multiple Assignment Randomized Trial (SMART) | 9 | 2.4 | 45 (13/58) | 2.7 | 62 (18/29) |
| 35. (Fractional-)factorial design | 22 | 2.3 | 45 (13/58) | 2.2 | 36 (10/28) |
| 37. Controlled before-after study (CBA)^e | 37 | 2.6 | 50 (15/60) | 2.4 | 52 (15/29) |
| 38. Controlled clinical trial/nonrandomized controlled trial (CCT/NRCT)^e | 47 | 2.9 | 70 (21/60) | 2.9 | 71 (20/28) |
| 41. Preference clinical trial (PCT) | 19 | 2.1 | 24 (7/58) | 2.1 | 25 (7/28) |
| 42. Microrandomized trial | 9 | 2.2 | 24 (7/59) | 2.4 | 50 (14/28) |
| 55. Cross-sectional study | 72 | 2.5 | 40 (12/60) | 2.1 | 29 (8/28) |
| 56. Matched cohort study | 37 | 2.2 | 30 (9/59) | 2.3 | 46 (13/28) |
| 57. Noninferiority trial design^e | 53 | 2.6 | 47 (14/60) | 2.6 | 48 (14/29) |
| 58. Adaptive design^e | 19 | 2.6 | 52 (15/58) | 2.5 | 50 (14/28) |
| 59. Waitlist control group design | 34 | 2.1 | 28 (8/59) | 2.0 | 32 (9/28) |
| 61. Propensity score methodology | 31 | 2.1 | 30 (9/59) | 2.0 | 21 (6/29) |
| **Implementation** | 54 (SD 28) | 2.8 (SD 0.5) | | 2.6 (SD 0.5) | |
| 18. Cost-effectiveness analysis | 81 | 3.4 | 87 (27/63) | 3.2 | 70 (21/30) |
| 22. Methods comparison study | 16 | 2.0 | 17 (5/59) | 2.0 | 21 (6/28) |
| 24. Patient reported outcome measures (PROMs)^e | 84 | 3.1 | 80 (24/60) | 2.9 | 73 (22/30) |
| 34. Transaction logfile analysis | 25 | 2.4 | 45 (13/57) | 2.1 | 21 (6/28) |
| 47. Big data analysis^e | 62 | 3.0 | 73 (22/61) | 2.8 | 59 (17/29) |
^b Based on the rating question "Does your research group use this approach, or did it do so in the past?"; the percentage of "yes" responses is shown.
^c Based on the rating question "According to your opinion, how important is it that researchers with an interest in eHealth will become familiar with this approach?"; mean rating scores, ranging from unimportant (1) to absolutely essential (4), and the percentage of ratings in categories 3 plus 4 (n/N) are presented.
^d The "proving effectiveness" columns correspond to the rating question "According to your opinion, how important is the approach for proving the effectiveness of eHealth?"; mean rating scores, ranging from unimportant (1) to absolutely essential (4), and the percentage of ratings in categories 3 plus 4 (n/N) are presented.
^e This approach scored above average on both rating questions ("familiarity with the approach" and "proving effectiveness") and is therefore plotted in the upper right quadrant of the Go-Zone graph (Figure 3).
^f eHealth: electronic health.
^g RE-AIM: Reach, Effectiveness, Adoption, Implementation, and Maintenance.
^h CeHRes: Centre for eHealth Research and Disease management.
^i CHEATS: Clinical, human and organizational, educational, administrative, technical, and social explanatory factors in a randomized controlled trial intervention.