Table 3.
Data analysis.
Study ID | Study | Construct Measured | Tools Used | Methods Used to Analyze Fidelity Data Post-Intervention | Details and Findings of Fidelity Outcomes |
---|---|---|---|---|---|
Intervention Studies | |||||
1 | Asher et al. (2021) | Competence | Ethiopian adaptation of the ENhancing Assessment of Common Therapeutic factors (ENACT) structured observational rating scale | For each time point, mean item scores were generated for each CBR worker, and double-rated competence assessments were averaged. Summary means were generated for each time point and for role-play assessments. | Mean scores showed improvement in CBR worker competence throughout the training and the intervention. Empathy scores improved earliest, while problem-solving and advice-giving improved least. More supervision by specialists was needed. |
2 | Atif et al. (2019) | Competence | Quality and Competence Checklist, an observational tool used by trainers to rate a group session on 6 areas of competency | Each area of the fidelity tool was scored on a Likert scale (0–2), ranging from “not demonstrated” to “partially demonstrated” and “demonstrated well”, with a not-applicable option, and then converted to a percentage. A score of 70% indicated competence. | All 31 of the 45 peer facilitators who were retained over five years achieved satisfactory competence. Six of the 14 peers who dropped out did so because they could not achieve satisfactory competence. |
3 | Cross et al. (2015) | Fidelity and Competence | Intervention-specific tool measuring both adherence and competence | An exploratory factor analysis (EFA) was used to examine the factor structure, and an intra-class correlation was used to measure inter-rater reliability of the tool. Descriptive statistics were used to summarize adherence and fidelity, and multilevel analyses validated that implementer fidelity measures were clustered around the implementer rather than attributable to other factors. | Variance in fidelity scores was explained by the implementer, and intra-class correlations were satisfactory. The EFA revealed two domains of the tool: adherence and competence. Summative adherence and competence scores varied widely and predicted children's enhanced response to the intervention, but not externalizing behavior. |
4 | Diebold et al. (2020) | Competence | Revised Cognitive Therapy Rating Scale | Descriptive statistics and linear mixed models were used to examine average competence scores and adherence, including fixed effects for study arm and session number. Models also examined site, facilitator, and client-specific effects. | There were no differences between paraprofessionals and professionals for overall adherence or competence. Surprisingly, facilitators with a Master's degree or higher had lower average adherence, and facilitators who were trained via audio recording rather than 1-on-1 had lower average adherence. |
5 | Garber-Epstein et al. (2013) | Fidelity | The Illness Management and Recovery Fidelity Scale | Analysis of variance (ANOVA) was used to determine mean differences between clinicians delivering the intervention and two groups of nonspecialists (trained peers and other nonspecialists). | Each group of facilitators achieved satisfactory fidelity, with other nonspecialists (not peers) receiving the greatest improvement in mean fidelity scores between timepoint 1 and timepoint 2. |
6 | Johnson et al. (2021) | Fidelity and Competence | Intervention-specific tool measuring three domains: coaching skills, intervention stages and phases, and peer role | Qualitative interviews were thematically coded. Descriptive statistics and prevalence counts were used to analyze quantitative data from the fidelity tool. | Nonspecialists delivered the intervention with fidelity in more than 90% of sessions. |
8 | Khan et al. (2019) | Fidelity and Competence | Intervention-specific fidelity tool that measured both competence and fidelity, conceptualized as “counseling skills” and intervention strategies | N/A | Competence scores were low at first for nonspecialists, but with subsequent training and supervision, nonspecialists improved. |
10 | Landry et al. (2019) | Fidelity | Teacher Behavior Rating Scale-Bilingual Version (TBRS-B) | Descriptive statistics (frequency and percentage scores) from the TBRS-B, which was rated on a Likert scale from 1 to 6 | TBRS-B scores increased more for professionally-trained intervention teachers and nonspecialist teachers compared to the control group. |
11 | Laurenzi et al. (2020) | Competence | Home Visitor Communication Skills Inventory (HCSI) with three domains measuring competence: active delivery, active connecting, and active listening | Descriptive statistics (proportions, frequencies) and correlations between average visit duration and the active delivery and active connecting domains | Nonspecialists had higher scores in active listening and active delivery than in active connecting. |
12 | Mastroleo et al. (2009) | Competence | Peer Proficiency Assessment (PEPA) | Correlations were computed between nonspecialist and specialist coder scores to examine inter-rater reliability, between PEPA questions and MI-adherent scores to examine construct validity, and between PEPA scores and effectiveness outcomes (drinking behaviors) to examine predictive validity. | PEPA scores indicated MI adherence (r = 0.872). Assessments also revealed high inter-rater reliability between student and master coders and good correlations with previously established fidelity tools. |
13 | Munodawafa et al. (2017) | Fidelity | Intervention-specific fidelity tool | Descriptive statistics (mean fidelity scores per session), supplemented with key informant interviews | On average, nonspecialists achieved moderate to good intervention fidelity. Qualitative interviews revealed that the manual and ongoing training and supervision served as facilitators to achieving intervention fidelity. |
14 | Puffer et al. (2021) | Fidelity and Competence | Intervention-specific fidelity tool guided by the ENACT scale | Descriptive statistics and visual plotting of fidelity and competence ratings to explore patterns of change across steps of the intervention and variability in specific competencies. Inductive coding of focus group data and card-sorting methods. | Nonspecialists achieved adequate fidelity scores. The highest competence score was for structured problem exploration and the lowest was for cognitive behavioral skills for children, which were the least frequently used. |
15 | Rahman et al. (2019) | Competence | ENhancing Assessment of Common Therapeutic factors (ENACT) structured observational rating scale | Descriptive statistics (mean scores) were generated, and scores were compared between nonspecialists trained virtually and those trained in person. | There were no significant differences in scores between the two groups of nonspecialists. |
16 | Singla et al. (2020) | Fidelity and Competence | Therapist Quality Scale (TQS), measuring both fidelity and competence (treatment-specific and general skills) | Assessment of inter-rater reliability (intraclass correlation coefficients), internal consistency, and predictive validity for patient outcomes (depression) | Inter-rater reliability was moderate to excellent among specialists (ICC = 0.779) and nonspecialists (ICC = 0.714); internal consistency was high (α = 0.814 for specialist coders and α = 0.843 for nonspecialist coders), and TQS ratings were significantly related to clinical outcomes (r = 0.375, p < 0.01). |
Tool Development and Validation Studies | |||||
7 | Jordans et al. (2021) | Competence | WeACT instrument, which was modeled after ENACT | Assessment of inter-rater reliability (intraclass correlation coefficients) and internal consistency. | At timepoint 1 (N = 8 raters), ICC = 0.47 (95% CI 0.26–0.72), α = 0.91; at timepoint 2 (N = 6 raters), ICC = 0.68 (95% CI 0.48–0.86), α = 0.94 |
9 | Kohrt et al. (2015) | Competence | ENhancing Assessment of Common Therapeutic factors (ENACT) structured observational rating scale | Assessment of inter-rater reliability (intraclass correlation coefficients) | ICC = 0.88 (95% CI 0.81–0.93) for experts; ICC = 0.67 (95% CI 0.60–0.73) for nonspecialists. |