Table 2.
Interrater reliability of CAPE and DASH variables in Bihar, India, 2015–2017 (N = 73 simulation debrief videos)
| Indicator | N | Reliability (95% CI) | Level |
|---|---|---|---|
| CAPE Tool | |||
| Communication | |||
| Instructor questions | 73 | 0.52 (0.24–0.69) § | Fair |
| Instructor statements | 73 | 0.14 (−0.36 to 0.46) § | Poor |
| Trainee responses | 73 | 0.96 (0.93–0.97) § | Excellent |
| Instructor questions to instructor statements ratio | 73 | 0.19 (−0.29 to 0.49) # | Poor |
| Trainee responses to instructor questions + statements ratio | 73 | 0.77 (0.63–0.85) § | Excellent |
| Objectives | |||
| Behavioral objectives mentioned | 73 | 0.67 (0.48–0.79) § | Good |
| Cognitive objectives mentioned | 73 | 0.34 (0.04–0.59) § | Poor |
| Technical objectives mentioned | 73 | 0.35 (0.07–0.64) # | Fair |
| Structure | |||
| Length of debrief video | 71 α | 1.0 (0.99–1.0) § | Excellent |
| All three debrief structural phases present | 73 | 0.72 (0.56–0.25) ‡ | Good |
| Length of description phase | 73 | 0.78 (0.65–0.86) # | Excellent |
| Length of analysis phase | 73 | 0.92 (0.87–0.95) § | Excellent |
| Length of application phase | 73 | 0.67 (0.48–0.79) # | Good |
| Video Use | |||
| Incorporation of simulation video | 73 | 1.0 (1.0–1.0) ‡ | Excellent |
| Length of tape segments played | 73 | 0.98 (0.97–0.99) § | Excellent |
| Number of times tape paused during debrief | 73 | 0.99 (0.98–0.99) # | Excellent |
| DASH Tool | |||
| Elements | |||
| 1: Maintain engaged learning environment | 71 α | 0.26 (−0.18 to 0.54) § | Poor |
| 2: Organize the debrief | 73 | 0.59 (0.34–0.74) § | Fair |
| 3: Facilitate discussion | 72 α | 0.64 (0.43–0.77) § | Good |
| 4: Identify growth opportunities | 72 α | 0.38 (0.02–0.61) § | Poor |
| 5: Create success plan | 72 α | 0.22 (−0.25 to 0.51) § | Poor |
| Mean DASH score | 68 α | 0.37 (0.02–0.61) § | Poor |
§ ICC calculated for continuous variables (95% CI).
# ICC calculated from normalized data (95% CI).
‡ Cohen’s kappa calculated for binary variables (95% CI).
α Some forms had missing data.
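As an illustrative sketch only (not code from the study), the two statistics reported above could be computed as below, assuming two raters per debrief video and a long-format data frame with hypothetical column names `video`, `rater`, and `score`; the table does not specify which ICC model or confidence-interval method the authors used, and a CI for Cohen's kappa would additionally require a bootstrap or analytic variance estimate.

```python
# Minimal sketch: interrater reliability via ICC (continuous, § / #) and
# Cohen's kappa (binary, ‡). Column names and example values are hypothetical.
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

# --- ICC with 95% CI for a continuous indicator (e.g., trainee responses) ---
ratings = pd.DataFrame({
    "video": [1, 1, 2, 2, 3, 3],          # debrief video ID
    "rater": ["A", "B", "A", "B", "A", "B"],
    "score": [12, 14, 30, 28, 7, 9],       # count or length coded by each rater
})
icc = pg.intraclass_corr(data=ratings, targets="video",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])      # select the ICC model used in the study

# --- Cohen's kappa for a binary indicator (e.g., simulation video incorporated) ---
rater_a = [1, 0, 1, 1, 0]
rater_b = [1, 0, 1, 0, 0]
print(cohen_kappa_score(rater_a, rater_b))
```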