Abstract
Background
Measurement of intervention fidelity is an essential component of any scientifically sound intervention trial. However, few papers have proposed ways to integrate intervention fidelity data into the execution of these trials.
Objective
The purpose of this paper is to describe the intervention fidelity process used in a randomized controlled trial of a human patient simulator intervention, a multisite education intervention for parents of children with newly diagnosed type 1 diabetes, and to show how these data were used to monitor drift and to provide feedback that improved the consistency of both intervention and control delivery over time.
Methods
Intervention fidelity was measured for both the intervention and control conditions by direct observation, self-report of interventionist delivery, and parent participant receipt of educational information. Intervention fidelity data were analyzed after 50%, 75%, and 100% of the participants had been recruited and compared by group (treatment and control) and research site.
Results
The sample included 191 parents of young children newly diagnosed with type 1 diabetes. Observation scores in both intervention and control groups indicated a high level of intervention fidelity. Treatment receipt was also high and did not differ by treatment group. The teaching session attendance rates by site and session were significantly different at time point 1 (50% enrollment); following study staff retraining and reinforcement, there were no significant differences at time point 3 (100% enrollment).
Implications
Results demonstrate the importance of monitoring intervention fidelity in both the intervention and control condition over time and using these data to correct drift during the course of a multi-site clinical trial.
Keywords: children; diabetes mellitus, type 1; human patient simulation; intervention fidelity; parents; patient education; randomized controlled trial
Intervention fidelity (also called treatment fidelity, implementation fidelity, research integrity, and adherence to protocol) is a central component of any well-developed and executed clinical trial (Bellg et al., 2004; Breitenstein et al., 2010; Carroll et al., 2007; Resnick et al., 2005; Santacroce, Maccarelli, & Grey, 2004; Sidani & Braden, 2011). Intervention fidelity refers to the processes used to ensure that research interventions are delivered as planned and thus that any treatment effects found (or not found) are due to the intervention and not to alterations in study execution. Intervention fidelity requires attention to the study design, development of an intervention manual, training of all research staff, and measurement of the consistency with which the intervention is delivered and received by the study participants (Resnick et al., 2005).
The idea of measuring intervention fidelity has been around for more than 35 years (see Gearing et al., 2011 for a comprehensive review); however, few papers discuss what to do with fidelity data once collected. Consequently, considerable time, effort, and money are spent measuring intervention fidelity, yet the usefulness of these data often remains elusive. Many researchers simply report their intervention fidelity findings as an outcome variable or as an assessment of how well they executed their study. Instead, greater emphasis needs to be placed on monitoring drift (a gradual change during a trial in how the intervention and control conditions are implemented) at specific time points throughout the trial, providing feedback to the research team to correct this drift, and using the intervention fidelity data as a variable in the final analysis. The purpose of this brief report is to advance these ideas about intervention fidelity using processes and data from the Parent Education Through Simulation-Diabetes (PETS-D) study (Sullivan-Bolyai et al., 2015) as an example.
The Exemplar: PETS-D
PETS-D was a randomized controlled trial to examine the efficacy of a human patient simulator as an additive education strategy to teach parents of young children newly diagnosed with type 1 diabetes (T1D) about management of this condition (Sullivan-Bolyai et al., 2015). For example, parents in the intervention group were able to experience a simulated seizure with administration of a glucagon placebo to the simulator, whereas control group parents were taught about seizures but did not have the hands-on experience of simulating seizure management or administration of glucagon. The self-regulation intervention was theory-based (Sullivan-Bolyai et al., 2014) and involved three education sessions conducted at baseline, one month, and three months that covered basic T1D management survival skills, including recognizing and treating hypoglycemia (Session 1); sick day management, hyperglycemia, and nutritional issues (Session 2); and day-to-day management, including blood glucose pattern recognition (Session 3). The control condition included presentation of the same three vignettes by a certified diabetes educator, but without the use of a simulator. All study participants received standard-of-care diabetes education. The intervention and control conditions were delivered by two different certified diabetes educators at each site. All of the diabetes educators used the same formalized teaching vignettes outlined in the protocol manual. Parents were recruited for the study at the time of their child's initial T1D diagnosis. All procedures related to this study were approved by the Institutional Review Boards at both study sites. Procedures and primary findings from the study are described in Sullivan-Bolyai et al. (2015) and Sullivan-Bolyai et al. (2014).
PETS-D Intervention Fidelity
Table 1 outlines the components of the PETS-D intervention fidelity monitoring plan, the planned analyses, and lessons learned about this process. Because a major component of intervention fidelity is research team orientation and training in protocol implementation, the team held study-initiation and monthly training sessions throughout the conduct of the clinical trial. These sessions covered the importance of intervention fidelity, including maintaining fidelity to the manualized intervention and control conditions, the reasons why intervention fidelity is measured, and the different ways that intervention fidelity was being measured in the PETS-D study. Education sessions specific to intervention fidelity were held four times during the conduct of the trial.
TABLE 1.
PETS-D Intervention Fidelity Components, Planned Analyses, and Lessons Learned
| Component | Planned analysis^a | Lessons learned |
|---|---|---|
| Observation | • Differences by session<br>• Differences by rater<br>• Differences by site<br>• Differences by group | • Scores were high throughout the study<br>• Less useful for monitoring drift<br>• Helpful for understanding intervention delivery challenges<br>• Helpful for understanding control delivery challenges |
| Attendance | • Session attendance by site<br>• Session attendance by group | • Helpful for monitoring IF issues early in trial<br>• Helpful for understanding attendance problems<br>• Facilitated changes to improve attendance consistency |
| Delivery | • Time for session by group<br>• Time for session by site | • Sites spent more time on intervention<br>• It was not possible to equalize time: treatment and control |
| Receipt | • REC: total scores by group<br>• REC: item responses by group<br>• REC: total scores by site<br>• REC: item responses by site | • Scores were high throughout the study<br>• Site variation by topic^b was identified and corrected<br>• Low topic coverage^c was identifiable and correctable |
Note. IF = intervention fidelity; PETS-D = Parent Education Through Simulation–Diabetes; REC = Receipt of Education Content.
^a All analyses were planned after 50%, 75%, and 100% of participants enrolled.
^b The topic not covered at one site was related to ketones.
^c Coverage of glucagon and pattern management was low early in the study and corrected over time.
Intervention fidelity was measured as a continuous process in several ways: (a) direct observation of 10% of the sample for both the intervention and control condition at both sites by a nurse observer not involved in data collection; (b) measurement of attendance at each session by site, by treatment group and by educator; (c) diabetes educator self-report of delivery at each participant visit; and (d) study participant (parent) receipt of information at the conclusion of their study participation. We chose direct observation over video or audio taping the sessions based on feedback obtained from parents during our pilot work in which they indicated that a nurse observer would be the most acceptable and the least intrusive method.
The PETS-D research team set several intervention fidelity goals or benchmarks: (a) consistent delivery of the intervention and control group education sessions (per the protocol manual) as assessed by observation; (b) no significant differences in the time spent delivering the intervention and control teaching conditions; (c) an average of 2.5 teaching visits attended (intervention/control dose) by study participants (out of a possible 3 visits); (d) high attendance rates, specifically 100% for Session 1, 80% for Session 2, and 75% for Session 3, with no significant difference in attendance rates by group assignment; and (e) high-level receipt (90% coverage) of the education materials by parental self-report. To emphasize intervention fidelity as a continuous process, evaluation of these data was planned after 50%, 75%, and 100% of the sample were enrolled into the study. In this way, we could observe problems with drift during the study and provide feedback and training sessions to address any issues at critical time points, rather than waiting until the end of data collection when it would be too late to make changes.
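The interim benchmark check described above can be sketched as a simple comparison of observed attendance rates against the per-session goals. This is an illustrative stdlib-only sketch with invented counts; the study's actual tooling is not described in the text, and the function name is our own.

```python
# Attendance benchmarks from the protocol: Session 1 = 100%,
# Session 2 = 80%, Session 3 = 75%.
BENCHMARKS = {1: 1.00, 2: 0.80, 3: 0.75}

def flag_attendance_drift(attended, enrolled):
    """attended/enrolled: dicts mapping session number to counts.

    Returns a dict of sessions whose attendance rate fell below the
    benchmark, mapped to the observed rate (a drift flag for the team).
    """
    flags = {}
    for session, goal in BENCHMARKS.items():
        rate = attended[session] / enrolled[session]
        if rate < goal:
            flags[session] = round(rate, 3)
    return flags

# Hypothetical interim check at 50% enrollment (counts invented):
drift = flag_attendance_drift({1: 95, 2: 85, 3: 80},
                              {1: 100, 2: 100, 3: 100})
```

A check like this could be run at each planned time point (50%, 75%, and 100% enrollment) so that retraining can target only the sessions that drift.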
There were several important challenges to intervention fidelity. The study was conducted at two different research sites with multiple nurse diabetes educators and study teams with somewhat different practice styles. Although these differences were viewed positively as closely reflecting real world practices, it was necessary to measure intervention fidelity as a continuous process to maintain rigor. In addition, because of the anticipated differences in practice styles it was important to measure variability in the control condition to assess whether this remained stable over time between the two sites. We believe that there is often variability in the control condition that goes undetected in many intervention studies when investigators fail to measure fidelity to all conditions. Bellg et al. (2004) stressed the importance of including measures that ensure the same treatment dose across all conditions but fell short of specifying that this includes the control or usual care condition.
PETS-D Intervention Fidelity Measures
Intervention Fidelity Observation Form
The observation form was developed for this study by the research team and included: subject ID, initials of the interventionist and the observer, date of intervention, group assignment, which session was observed (there were three different sessions), and four items measured on a 4-point scale, from 0 = none to 3 = high level of intervention fidelity. The items measured the consistency with which (a) information was covered according to the study manual, (b) skills were covered according to the manual, (c) the simulator was used as planned in the intervention group, and (d) the simulator was used when responding to parent questions in the intervention group. For the control condition, the last two items measured responsiveness to parent questions and evidence of positive interactions between parents and educators rather than interactions using the simulator. Thus the possible total scale scores ranged from 0 = no intervention fidelity to 12 = a high level of intervention fidelity for both intervention and control conditions.
Diabetes Educator Documentation (DED) Forms
The DED forms were developed by the research team to capture the most salient aspects of the education sessions for both the intervention and control conditions. The DED forms captured the time spent teaching (in minutes), the session number, the content covered (hyperglycemia, hypoglycemia, sick day management, and pattern management) and the technical skills practiced (glucose checks, drawing up and administering insulin, and glucagon administration). The DED forms were completed by the diabetes educators after completion of each intervention and control session.
Parental Receipt
Parental receipt of T1D content was evaluated using a 23-item, investigator-developed questionnaire. The 23 items measure the main content covered by the intervention and control group education sessions (hypoglycemia, nutrition, and hyperglycemia management). Each item was rated on a 3-point scale: 0 = not covered; 1 = covered somewhat, but would have liked more information about this; or 2 = covered completely. Therefore, the possible scale scores ranged from 0 = no content covered to 46 = all content covered completely. Parents completed this form at the end of the study. Reliability for the 23-item scale score responses estimated using Cronbach's alpha was 0.93.
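The scoring and reliability estimate described above can be sketched as follows. This is a minimal stdlib illustration of the standard Cronbach's alpha formula, not the study's actual analysis code; the function names and the tiny example data are our own.

```python
from statistics import variance  # sample variance (n - 1 denominator)

def total_receipt_score(items):
    """Sum of the 23 receipt items, each scored 0-2; possible range 0-46."""
    return sum(items)

def cronbach_alpha(rows):
    """rows: list of per-respondent lists of item scores (one column per item).

    Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances
    / variance of total scores), where k is the number of items.
    """
    k = len(rows[0])
    item_var_sum = sum(variance(col) for col in zip(*rows))
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```

On real data, `rows` would hold one 23-element list per parent; the study reported an alpha of 0.93 for these responses.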
Data Analysis
Chi-squared tests for independence and t-tests were used to examine differences between the intervention and control groups on the intervention fidelity measures. Where appropriate, differences by study site were also evaluated. SPSS version 22.0 was used to analyze these data.
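As a small illustration of the chi-squared test for independence used here, the Pearson statistic for a 2×2 table (e.g., attended vs. missed a session, by group) reduces to a closed form. The study used SPSS; this stdlib sketch, with invented counts, only mirrors the statistic that software reports.

```python
def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic (df = 1, no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]:
    chi2 = n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical attended/missed counts for intervention vs. control:
stat = chi_squared_2x2(52, 10, 54, 8)
significant = stat > 3.841  # critical value for df = 1 at alpha = .05
```

Group differences in time spent teaching were assessed analogously with independent-samples t-tests.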
PETS-D Fidelity Results
Sample
The sample included 191 parents of 116 children newly diagnosed with T1D. (Both parents could participate in the study, and they were assigned to the same treatment group after the index parent serving as primary caregiver was randomly assigned.) The majority of study participants were mothers (n = 114; 59.7%) who were married or living with a partner (n = 153; 80.1%), White (n = 165; 86.4%), employed (n = 145; 75.9%), and had a high school or greater education (n = 156; 81.7%). The average age of the parents was 38.0 years (SD = 7.3). The average age of the children at diagnosis was 9.0 years (SD = 4), with 28 children (24%) under the age of 6 years.
Observation
Twenty-one direct observations were conducted, distributed over the three education sessions (Session 1: five observations; Session 2: seven observations; Session 3: eight observations), and included both the intervention and control conditions. Scores ranged from 8 to 12, indicating a high level of observed intervention fidelity. The mean observation score was 11.0 (SD = 1.1) for the intervention condition and 11.0 (SD = 1.3) for the control condition. There were no significant differences in observation score by rater, session, research site, or group assignment.
Delivery and Attendance
Tables 2 and 3 depict results from the intervention fidelity measures over the three planned time points. These data were further broken down by site (Tables 2 and 3) and session (Table 3 only) to identify any issues with the delivery of the intervention and control conditions. Results revealed a significant difference in the time spent teaching, with the intervention group taking more time. This finding was consistent at all three time points, suggesting that the use of the human patient simulator required more time to complete the education session. Our observations suggested that the time difference was partially due to the time needed to (a) work the simulator, (b) provide parents instructions about how to interact with the simulator, and (c) respond to additional questions posed by parents in response to participating in simulated instruction. This time-spent-teaching variable was subsequently used in the major study analysis (Sullivan-Bolyai et al., 2015) to evaluate the study outcomes, but it did not significantly alter the study findings.
TABLE 2.
Teaching Session Time by Time Point, Group, and Study Site
| Teaching-related IF | Site | Group | T1: M (SD) | T2: M (SD) | T3: M (SD) |
|---|---|---|---|---|---|
| Time (min) | 1 | C | 99.3 (41.8) | 99.6 (33.9) | 103.7 (33.7) |
| Time (min) | 1 | I | 103.6 (30.2) | 113.5 (41.1) | 109.6 (42.7) |
| Time (min) | 2 | C | 50.6 (26.6) | 45.0 (25.0) | 59.5 (30.5) |
| Time (min) | 2 | I | 81.1* (35.5) | 86.1** (34.8) | 93.3** (34.9) |
| Time (min) | B | C | 71.9 (40.0) | 70.7 (40.1) | 77.8 (38.5) |
| Time (min) | B | I | 91.0* (39.8) | 100.8** (40.5) | 101.9** (39.8) |
| Sessions attended (number) | 1 | C | 2.50 (0.7) | 2.52 (0.7) | 2.54 (0.7) |
| Sessions attended (number) | 1 | I | 2.20 (0.7) | 2.32 (0.7) | 2.30 (0.7) |
| Sessions attended (number) | 2 | C | 2.30 (0.7) | 2.16 (0.7) | 2.53 (0.8) |
| Sessions attended (number) | 2 | I | 2.10 (0.6) | 2.16 (0.7) | 2.36 (0.7) |
Note. Significant findings from t-tests are marked. B = both sites combined; C = control; I = intervention; IF = intervention fidelity; T1 = time point 1 (50% enrolled); T2 = time point 2 (75% enrolled); T3 = time point 3 (100% enrolled).
\*p < .05.
\*\*p < .001.
TABLE 3.
Participation (Attendance Rate) by Time Point, Site, Session, and Group
| Site | Session | T1 (%) | T2 (%) | T3 (%) |
|---|---|---|---|---|
| 1 | 1 | 97.7 | 95.8 | 96.3 |
| 2 | 1 | 67.3* | 75.3** | 95.6 |
| 1 | 2 | 67.4 | 71.8 | 75.6 |
| 2 | 2 | 81.8* | 81.8* | 87.4 |
| 1 | 3 | 69.8 | 70.4 | 69.6 |
| 2 | 3 | 67.3 | 67.5 | 72.0 |

| Group | Session | T1 (%) | T2 (%) | T3 (%) |
|---|---|---|---|---|
| C | 1 | 84.1 | 85.7 | 96.5 |
| I | 1 | 82.7 | 84.1 | 94.3 |
| C | 2 | 76.9 | 80.0 | 89.3 |
| I | 2 | 74.3 | 72.5 | 74.4** |
| C | 3 | 66.6 | 67.1 | 78.3 |
| I | 3 | 67.9 | 70.6 | 71.4 |
Note. Significant findings from χ2 tests are marked. C = control; I = intervention; IF = intervention fidelity; T1 = time point 1 (50% enrolled); T2 = time point 2 (75% enrolled); T3 = time point 3 (100% enrolled).
\*p < .05.
\*\*p < .01.
The mean number of teaching sessions attended did not differ by group assignment or site (Table 2). However, the actual attendance rate differed significantly by site for Sessions 1 and 2 at time point 1 and for Session 1 at time point 2 (Table 3). Because of these findings, we worked with the study teams to identify and implement strategies to improve attendance at all sessions. Ultimately, the attendance rate at completion of enrollment was no longer significantly different by site, which suggests that reinforcement and continued training throughout the study helped achieve this goal.
Receipt of Educational Content
Of the 191 subjects enrolled in the study, 65% (n = 124) completed the parental receipt forms. Those who completed the forms were not significantly different from parents who did not, based on gender, age, or education. The average total parental receipt score was similar for the treatment and control groups (M = 42.3, SD = 6.1 vs. M = 41.4, SD = 5.0; p = .44), and scores indicated a high level of treatment receipt (possible range: 0–46). Table 4 illustrates the responses to the individual receipt items by treatment group. Chi-squared analysis revealed no significant difference by group for any of the parental receipt items. Although scores on some receipt items were lower than desired (e.g., side effects of glucagon, symptoms associated with acidosis, evaluating blood glucose patterns, and correcting glucose patterns outside the target range), in general the four diabetes educators did an excellent job of covering the topics in the manual for both groups on a consistent basis over the duration of the trial.
TABLE 4.
Parental Receipt of Education Content by Treatment Group
| Item | Completely: I | Completely: C | Somewhat: I | Somewhat: C | Not covered: I | Not covered: C |
|---|---|---|---|---|---|---|
| Hypoglycemia: signs, symptoms | 90.3 | 95.1 | 9.7 | 4.9 | 0.0 | 0.0 |
| Hypoglycemia: causes | 90.3 | 90.3 | 9.7 | 9.7 | 0.0 | 0.0 |
| Hypoglycemia: actions when suspected | 91.9 | 95.2 | 8.1 | 4.8 | 0.0 | 0.0 |
| Hypoglycemia: treatment | 91.9 | 95.2 | 8.1 | 4.8 | 0.0 | 0.0 |
| Hypoglycemia: severity^a | 82.3 | 72.6 | 17.7 | 27.4 | 0.0 | 0.0 |
| Hypoglycemia: checking blood glucose after episode | 88.7 | 96.8 | 11.3 | 3.2 | 0.0 | 0.0 |
| Complex carbohydrate: when to give | 77.4 | 90.3 | 22.6 | 8.1 | 0.0 | 1.6 |
| Daily regimen: when to adjust to correct hypoglycemia pattern | 72.1 | 69.4 | 24.6 | 27.4 | 3.3 | 3.2 |
| Glucagon: when to use | 91.9 | 91.9 | 8.1 | 8.1 | 0.0 | 0 |
| Glucagon: side effects | 67.7 | 48.4 | 24.2 | 37.1 | 8.1 | 14.5 |
| Ketones: when to check | 93.5 | 88.7 | 6.5 | 11.3 | 0.0 | 0.0 |
| Ketone test strips: care and storage | 87.1 | 82.3 | 11.3 | 16.1 | 1.6 | 1.6 |
| Ketone production: causes | 90.2 | 82.3 | 9.8 | 17.7 | 0.0 | 0.0 |
| Ketones: why stopping production is important | 88.7 | 79.0 | 11.3 | 21.0 | 0.0 | 0.0 |
| Ketones: correct technique for testing | 90.0 | 87.1 | 8.3 | 12.9 | 1.7 | 0.0 |
| Ketones: interpreting urine test results | 91.5 | 80.6 | 8.5 | 19.4 | 0.0 | 0.0 |
| Ketones: actions for level of ketones | 84.7 | 74.2 | 15.3 | 24.2 | 0.0 | 1.6 |
| Acidosis: symptoms | 72.4 | 58.1 | 22.4 | 37.1 | 5.2 | 4.8 |
| Assistance: when to call the diabetes team | 93.1 | 95.2 | 6.9 | 4.8 | 0.0 | 0.0 |
| Blood glucose: your child’s target range | 91.5 | 98.4 | 8.5 | 1.6 | 0.0 | 0.0 |
| Blood glucose: your child’s pattern | 83.1 | 77.4 | 16.9 | 21.0 | 0.0 | 1.6 |
| Blood glucose patterns outside target range: evaluating causes | 69.5 | 64.5 | 30.5 | 33.9 | 0.0 | 1.6 |
| Blood glucose patterns outside target range: ways to correct | 69.5 | 61.3 | 27.1 | 33.9 | 3.4 | 4.8 |
Note. Entries are percentages of parents in each group responding completely covered, somewhat covered, or not covered. All chi-squared tests for independence were nonsignificant. C = control group; I = intervention group.
^a Differences between mild, moderate, and severe hypoglycemia.
Implications
The findings from this exemplar suggest that intervention fidelity was successfully achieved for this clinical trial. However, this statement does not reveal the complete picture. In truth, this high level of fidelity was accomplished with considerable forethought, effort, feedback, and reinforcement. After each time point analysis, we met with the study team and shared the findings. We then strategized about ways to improve fidelity to the manualized intervention. In our view, intervention fidelity is most useful when it is treated as a continuous process, evaluated at key time points during the study, with data shared with the research team to keep everyone on track. Additionally, variables captured by intervention fidelity processes that vary significantly should be used in the main analysis to examine the effect of these differences on treatment outcomes. For example, in our study we included the mean duration of time spent teaching for the intervention and control conditions in the main analysis of trial data (Table 2; see also Sullivan-Bolyai et al., 2015).
Our experience also suggests that parent reports of treatment receipt and the documentation of the diabetes educators (the interventionists) were the most helpful data for deciding on training issues throughout the course of the trial. The independent observations were less useful for this purpose, probably because the interventionists were fully aware of our presence and the scores varied little. However, the independent observations were very helpful for understanding the complexities of delivering the intervention and control conditions on a day-to-day basis and the need for tailoring the delivery of content in this type of trial. As suggested by Perrin et al. (2006), when conducting research in clinical settings, it may be necessary to be flexible and adapt the intervention to address different types of patients and cultures. Consequently, intervention fidelity assessments need to be malleable within the scope of protocol standards.
Conclusion
The goal of this brief report was to highlight the importance of monitoring intervention fidelity over time and using these data to correct drift in both the intervention and control conditions during the course of a complex clinical trial. We also advance the idea of monitoring all treatment conditions, including the control group, with the same rigor as the experimental condition. In this way, researchers can account for variability that may occur over time, as well as between interventionists and research sites. Finally, including key intervention fidelity variables in the main study analysis will add to the precision with which we evaluate nursing interventions in the future.
Acknowledgments
All phases of this study were supported by NINR-NIH grant #5R01NR011317.
Footnotes
The authors have no conflict of interest to report.
Clinical Trial Registration: NCT 0517269. This paper is a secondary analysis of data from the trial.
Contributor Information
Carol Bova, University of Massachusetts Medical School, Graduate School of Nursing, Worcester MA.
Carol Jaffarian, University of Massachusetts Medical School/Graduate School of Nursing, Worcester MA.
Sybil Crawford, University of Massachusetts Medical School/Department of Medicine, Division of Preventive and Behavioral Medicine, Worcester MA.
Jose Bernardo Quintos, Pediatric Endocrinology, Hasbro Children’s Hospital, Providence, RI.
Mary Lee, University of Massachusetts Medical School/Department of Pediatrics, Worcester MA.
Susan Sullivan-Bolyai, New York University, College of Nursing, New York, NY.
References
- Bellg AJ, Borrelli B, Resnick B, Hecht J, Minicucci DS, Ory M, & Treatment Fidelity Workgroup of the NIH Behavior Change Consortium. Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychology. 2004;23:443–451. doi:10.1037/0278-6133.23.5.443
- Breitenstein SM, Gross D, Garvey CA, Hill C, Fogg L, Resnick B. Implementation fidelity in community-based interventions. Research in Nursing & Health. 2010;33:164–173. doi:10.1002/nur.20373
- Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implementation Science. 2007;2:1–9. doi:10.1186/1748-5908-2-40
- Gearing RE, El-Bassel N, Ghesquiere A, Baldwin S, Gillies J, Ngeow E. Major ingredients of fidelity: A review and scientific guide to improving quality of intervention research implementation. Clinical Psychology Review. 2011;31:79–88. doi:10.1016/j.cpr.2010.09.007
- Perrin KM, Burke SG, O'Connor D, Walby G, Shippey C, Pitt S, Forthofer MS. Factors contributing to intervention fidelity in a multi-site chronic disease self-management program. Implementation Science. 2006;1:26. doi:10.1186/1748-5908-1-26
- Resnick B, Inguito P, Orwig D, Yahiro JY, Hawkes W, Werner M, Magaziner J. Treatment fidelity in behavior change research: A case example. Nursing Research. 2005;54:139–143. doi:10.1097/00006199-200503000-00010
- Santacroce SJ, Maccarelli LM, Grey M. Intervention fidelity. Nursing Research. 2004;53:63–66. doi:10.1097/00006199-200401000-00010
- Sidani S, Braden CJ. Intervention fidelity. In: Design, evaluation and translation of nursing interventions. Chichester, UK: Wiley-Blackwell; 2011. pp. 125–145.
- Sullivan-Bolyai S, Crawford S, Bova C, Lee M, Quintos JB, Johnson K, Melkus G. PETS-D: Impact on diabetes management outcomes. Diabetes Educator. 2015;41:537–549. doi:10.1177/0145721715598383
- Sullivan-Bolyai S, Johnson K, Cullen K, Hamm T, Bisordi J, Blaney K, Melkus G. Tried and true: Self-regulation theory as a guiding framework for teaching parents diabetes education using human patient simulation. Advances in Nursing Science. 2014;37:340–349. doi:10.1097/ANS.0000000000000050
