Author manuscript; available in PMC 2015 Nov 1. Published in final edited form as: Appl Nurs Res. 2014;27(4):213–218. doi: 10.1016/j.apnr.2014.02.001

Revisiting a Non-significant Findings Study: A Parent Mentor Intervention Trial as Exemplar

Susan Sullivan-Bolyai 1, Carol Bova 2, Lesley Lowes 3, Sue Channon 4
PMCID: PMC4127375  NIHMSID: NIHMS567846  PMID: 24661347

Abstract

The purpose of this paper is to describe an iterative process for revising a parent social support intervention study that had non-significant quantitative findings but strong clinical significance. We present the methodological challenges in the original intervention that potentially contributed to the non-significant findings, and a revised plan of action for conducting a future parent social support intervention. Of note, we have reconsidered the theory used to frame the original study, the randomization process, the intervention clarity and fidelity plan, the measures that would better capture the effect, and the development of a more robust analysis plan that accounts for intra-family correlation, mediation, and moderation (mixed model analysis). We present the revision for each of these methods, supported by recent empirical literature. Although this process may not be appropriate for all studies with non-significant findings, it should be considered for any study that has clinical significance.

Keywords: non-significant findings, theory-driven interventions, parent social support


“Many of life's failures are people who did not realize how close they were to success when they gave up.”

— Thomas A. Edison

Thomas Edison’s inspirational quote brings to mind the ‘file drawer problem,’ the phrase Rosenthal (1979) gave to the phenomenon in which the file drawer becomes the ‘home’ for the roughly 95% of completed studies that did not have significant findings. Rosenthal argued that journals primarily accept studies with statistically significant results (roughly 5% of all studies conducted). This publication bias influences what we know and how we conduct future studies. For instance, meta-analyses, which influence future funding, clinical recommendations, and policy, include only published, significant findings. Thus, unpublished non-significant findings could alter the p values and effect sizes (the estimates of the magnitude of the intervention’s effect on outcome variables) reported (Berman and Parker 2002).

According to the Tufts Clinical Translational Science Institute (CTS) study phase criteria (2013), when non-significant findings occur it is suggested that researchers revisit the study in an iterative process to consider possible barriers that may have contributed to these results. For behavioral studies, this iterative process includes the following five steps:

(1) consider a different theoretical framework that better addresses the intended intervention, including mediators and moderators to better explain the effect and strength of the intervention (Keller, Fleury et al. 2009; Resnick, Inguito et al. 2005);

(2) consider an alternative randomization process, especially for interventions that are preference-directed (Coward 2002; Sidani, Miranda et al. 2009);

(3) develop a precise description of the dose and timing of the intervention and comparison group, as well as a more precise intervention fidelity plan over the course of the study (Conn, Rantz et al. 2001; Gross 2005; Reed, Titler et al. 2007; Conn 2009);

(4) consider different measures based on the alternative framework (Keller, Fleury et al. 2009); and

(5) consider effect sizes and a more robust analysis plan (Baron and Kenny 1986; Mays and Melnyk 2009).

In fact, ‘missing what was really there’ (Clark 1996) is a commonly reported problem in intervention research, where non-significant findings are countered by positive qualitative analysis with the study participants (Rearick, Sullivan-Bolyai et al. 2011). Too often the sentiment ‘I could have found something’ defeats the purpose of study fidelity. We must remain, as Clark states, “conscientious about making sure we do not find what is not there.” Embracing non-significant findings is central to science and requires us to remain curious about what did not emerge in the findings. To quote Neil Gershenfeld, a director at MIT, “Truth is just a model. The common misperception about science is that scientists seek and find truth. They don't-they make and test models” (Jha 2011). Thus, it may be a reasonable goal to revisit key components of theory-driven interventions with non-significant findings when clinical significance speaks to the efficacy from the recipients’ perspectives.

Taking all of this into consideration, we decided to use the iterative process outlined above (Tufts, 2013) and revisit a study that some would suggest should stay in the file drawer because of its non-significant quantitative findings. The study in question, STEP (Social Support To Empower Parents), was a fully powered randomized controlled trial (RCT) that matched parents of children newly diagnosed with type 1 diabetes (T1D) with a peer parent mentor (an experienced parent raising a child with T1D) who would provide social support as needed (Sullivan-Bolyai, Bova et al. 2010). The quantitative findings were non-significant; however, the qualitative findings strongly supported clinical significance (Rearick, Sullivan-Bolyai et al. 2011). We have been encouraged by the parents of children with T1D and our parent mentor community to consider other possible approaches and rework the intervention. In addition, the Cardiff University, Wales, diabetes team have been networking for several years with our team at the University of Massachusetts Medical School to develop a similar project for the parents in their clinic. This encouragement and collegiality have sparked common interests in further exploring family-focused parent support interventions.

Thus, the purpose of this article is to describe an iterative process for revising a study, using the STEP study as a starting point. We focus on several methodological challenges in the original study that may have contributed to the non-significant findings. These changes include reconsidering a theory-driven intervention versus the original theoretical model that framed our study, preference-driven randomization, intervention clarity and fidelity, different measures, and a more robust analysis plan that considers intra-family correlation, mediation, and moderation (mixed model analysis). We present the revision for each of these methods, supported by recent empirical literature.

Description of the STEP Study

In STEP, we carefully selected, consented, and trained 10 parent mentors (7 mothers and 3 fathers) who were non-threatening, excellent at active listening, and raising children with T1D, using a parent mentor curriculum adapted from Ireys' work (Ireys, Chernoff et al. 2001; Sullivan-Bolyai, Bova et al. 2010). They were trained to provide 1:1 peer social support mentorship (via visits, phone calls, and/or e-mail communication) that included informational, affirmational, and emotional support over the course of 12 months. After training the mentors, we recruited, consented, and randomized 60 parents of children newly diagnosed with T1D from two pediatric diabetes centers in the Northeast United States. The parent mentors provided social support to the 32 mothers in the experimental arm. Control group parents (28 mothers) were given the phone number of a parent contact (an experienced parent who did not receive the parent mentor curriculum training) whom they could call for support.

There were no statistically significant differences (Sullivan-Bolyai, Bova et al. 2010) between the two groups at any of the data points (3, 6, or 12 months) for parent concern, confidence, worry, impact on the family, or perceived social support. This occurred despite the fact that we (a) conducted a pilot study that suggested that the intervention was feasible and had potential efficacy (Sullivan-Bolyai, Grey et al. 2004), (b) used a theoretical framework to guide the intervention, (c) conducted a power analysis, (d) thoroughly trained all of the parent mentors, and (e) used reliable and valid measures.

However, there were positive qualitative findings from post-intervention interviews with the parents who received the intervention. So why, then, did we not see quantitative differences when we commonly heard from parents, “You need to offer this support to every parent with a child newly diagnosed”? Using the Tufts CTS five-point structure to consider these seemingly contradictory findings, we present several areas of the research process that we want to revisit and test.

Furthermore, our lack of quantitative findings appears to be common, as reflected in a recent systematic review of peer parent support studies in which seven quantitative and one mixed methods peer parent support intervention studies did not consistently substantiate the benefits of such support (Shilling, Morris et al. 2013). The review's recommendations include addressing many of the methodological challenges we plan to address in this article.

Theoretical framework and theory-driven interventions: an alternative approach

Theory is critical in the development and implementation of complex behavioral interventions. It helps target the problem at hand and identify the variables or concepts that we want to influence or change. It helps us craft the actual intervention, focusing on the critical inputs (the concepts at the heart of the intervention, such as education or social support) and the key ingredients (the actual activities that form the intervention, such as teaching something visually or providing a parent mentor who offers parents affirmational support) (Keller, Fleury et al. 2009). Theory is also critical in guiding our choice of measures and in attaining the specificity necessary to show change in outcome variables, especially with complex behavioral interventions in a family system, where change may result in only small effect sizes (Baronowski, Lin et al. 1997; Hampson, Skinner et al. 2000; Keller, Fleury et al. 2009). Thus, if we hope to demonstrate an intervention effect, rigorous specificity in developing the intervention is critical.

Theory provides the broader context for understanding the potential generalizability of results. Theory-driven interventions can also help predict behavioral changes, linking causal factors to explain why and how some interventions may work better for targeting certain clinical outcomes through the use of mediator and moderator variables. Thus, using theory to help define the problem, select variables that may mediate between the intervention and the outcomes, and/or moderate the intervention will help us determine what interventions work with a certain population and explain how they work.

Intervention mapping is an efficient approach for selecting a theory with the variables of interest and the key ingredients of the intervention (the specific activities) that result in dependent variable change (Conn, Rantz et al. 2001). Intervention mapping is being incorporated into the UK Cardiff team's work in a feasibility study of parent mentoring for families with a child recently diagnosed with T1D (Parent Listen, Understand and Support [PLUS]). Whilst broadly following the STEP intervention model, this team is using a logic model format (Kellogg 2004) to develop a framework that links the theoretical assumptions of the program with the outcomes (both short-term and long-term) and the program activities and processes.

As we ponder the non-significant findings of STEP, we must reconsider the theory that framed the study: Ireys' social support theory (Ireys, Chernoff et al. 2001). It may be that this theory was at too early a stage of development, with limited empirical testing, to use with a different sample, i.e., parents at a different point in relation to the diagnosis. In Ireys' work the parent mentors worked with parents at least one year out from their child's diagnosis, whilst in STEP the children were newly diagnosed with T1D.

We explored other theories that might be appropriate for families early in crisis mode and considered several alternative frameworks, such as stress and coping, adaptation, and transitions theory. Since we were working specifically with families who had a child newly diagnosed with a chronic condition, the Family Management Style Framework (Knafl, Deatrick et al. 2012) is a good alternative because it deals with family functioning and day-to-day management (a large focus of the parent mentor support).

The problem that spurred the development of STEP was the phenomenal learning curve that parents face as they incorporate diabetes management into family life in the short period after diagnosis (Sullivan-Bolyai, Knafl et al. 2003). To help these parents with day-to-day management, and especially the ‘tricks of the trade’ of management, a parent mentor intervention (as a contextual influence) was offered to enhance family management and the parent role in proactively responding to day-to-day management needs. Thus, considering the problem, a family management framework may better guide the intervention than a generic social support model. Based on social ecology theory, the Family Management Style Framework (FMSF) was developed through qualitative interviews with family members raising children with special health care needs to articulate how families internalize ‘the work.’ It helps explain how families incorporate day-to-day management of a child’s condition into their daily life and comprises three concepts: the family’s definition of the situation, management behaviors, and perceived consequences (the impact the condition has on family life). Each concept also has several dimensions related to family perceptions, which may more precisely capture the effect of a parent mentor’s support through the sharing of ‘tricks of the trade’ and affirmation with families of children newly diagnosed with T1D. The FMSF also includes contextual influences that address available resources, including those provided through the health care system; a parent mentor intervention fits nicely into this concept (Knafl, Deatrick et al. 2012).

The specificity of the FMSF allows one to focus on family day-to-day management, which is the pragmatic point of interest in our intervention: how to best help parents incorporate all the new knowledge and skills necessary to provide daily care of their child. We will also be able to determine the indirect effect of this intervention on parent functioning through mediation of family management style and its building blocks (concepts) and the associated dimensions, as measured by the Family Management Measure (FaMM). The Cardiff group, in exploring the feasibility of delivering the parent mentoring intervention in a different cultural and healthcare context, have opted to focus on qualitative analysis to embed their results in the parents’ experience of the intervention from both sides of the mentoring relationship. The degree of fit between their results and the FMSF will add to the understanding of the cross-cultural aspects of families’ experience of the diagnosis and the ‘work’ of the first year of living with diabetes.

Randomization process alternative

Typically, one of the key strengths of an RCT is randomization (random assignment). Study participants are afforded an equal chance of being assigned to either the experimental or control group, preventing selection bias. Randomization also allows for equal distribution between the groups of any extraneous variables that could affect the dependent variables or outcomes apart from the effect of the intervention. Unfortunately, some interventions are preference-driven because of parental personality differences, perceptions of appropriateness (whether the treatment is viewed as reasonable), or acceptability (perceptions of how helpful or how intrusive the treatment might be). Regardless, preference for a treatment must be considered a potential threat to internal and/or external validity. Social support interventions fall into the preference treatment category; people tend to have a personal preference about joining ‘group activities.’ After randomization we repeatedly heard ‘darn, I wanted to have the parent mentor’ or, just the opposite, ‘I probably won’t use her.’ Thus, there were documented instances of control parents seeking other types of support to make up for not receiving the formal support, and of intervention parents who did not use their parent mentor, diluting the effect in the intervention group.

One way to compensate for this problem is to offer a partial randomization technique (Brewin and Bradley 1989; Coward 2002). In this design, participants are asked whether they have a preference; if they do, they are assigned to that arm, and if they do not, they are randomized in the traditional manner (experimental or control arm). The result is four arms, two of which account for preference. This design has some limitations, such as unequal distribution across groups, the need to power the study with greater numbers, increased cost, and the potential to overestimate the treatment effect (Gemmel and Dunn 2011). It does, however, account for those who have a strong preference and, in the case of a parent mentor intervention, has the potential to strengthen the dose and frequency of use. It would also allow us to better determine the varying needs by preference group, including different parenting styles, personality traits, and interpersonal skills. The very selective characteristics of the parent mentors could also be further explored across the partial randomization groups.
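A minimal sketch of this partially randomized preference allocation is shown below. It is an illustration only, not a STEP II protocol: the arm labels, the Participant structure, and the simple 1:1 coin flip for the no-preference stratum are assumptions made for the example.

```python
# Sketch of a partially randomized preference (Brewin & Bradley) allocation.
# Participants with a stated preference receive that arm; the rest are randomized.
import random
from dataclasses import dataclass
from typing import Optional

ARMS = ("preference-mentor", "preference-control",
        "randomized-mentor", "randomized-control")  # the four resulting arms

@dataclass
class Participant:
    study_id: str
    preference: Optional[str] = None  # "mentor", "control", or None if no strong preference

def assign_arm(p: Participant, rng: random.Random) -> str:
    """Honor a stated preference; otherwise randomize 1:1 to mentor or control."""
    if p.preference == "mentor":
        return "preference-mentor"
    if p.preference == "control":
        return "preference-control"
    return rng.choice(["randomized-mentor", "randomized-control"])

rng = random.Random(2014)  # fixed seed so the allocation list is reproducible and auditable
for p in (Participant("P01", "mentor"), Participant("P02"), Participant("P03", "control")):
    arm = assign_arm(p, rng)
    assert arm in ARMS
    print(p.study_id, "->", arm)
```

In practice the no-preference stratum would typically use blocked or stratified randomization rather than a simple coin flip, with the seed and allocation log held by the study statistician.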

Revised intervention and intervention fidelity plan

Key to the intervention is the fidelity of the treatment provided in both study arms. Santacroce et al. (2004) define it as a process of ensuring that the intervention (or key ingredient, that is, what the experimental group received that the comparison group did not; in this case, the one-to-one interactions with a trained parent mentor) and the comparison treatment are consistently delivered to all participants. Following Resnick and Orwig's Treatment Fidelity Framework (see Table 1), five components comprise an intervention fidelity plan that enhances internal validity and minimizes threats to it over time (Santacroce, Maccarelli et al. 2004; Resnick, Inguito et al. 2005; Whitmer, Sweeney et al. 2005; Horner, Rew et al. 2006; Kearney and Simonelli 2006; Spillane, Byrne et al. 2007; Stein, Sargent et al. 2007). Following is a summary of how past and future strategies will strengthen the fidelity of the intervention.

Table 1.

Intervention Fidelity and Monitoring Plan

Category | Data Reviewed | Timeframe
Design | Protocol manual and parent mentor training curriculum will include the diabetes education application component | Review every 3 months
Training | Parent mentor booster sessions to include diabetes education application | Review every 3 months
Treatment Delivery | Debriefings with parent mentors; intervention interaction audio recordings to analyze the active ingredient; field observations with parent mentor coordinator or PI | After each parent mentor-participant interaction; random visits
Treatment Receipt | Qualitative interviews; participant quantitative receipt checklist | End of trial
Treatment Enactment | Qualitative and quantitative response | 6 months post trial
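Table 1 can also be read as a simple monitoring schedule. The hypothetical sketch below expresses it as a machine-readable plan for generating review reminders; the category names mirror the table, while the day intervals and the event-driven entries are assumptions about how the quarterly reviews and per-interaction checks might be operationalized.

```python
# Hypothetical machine-readable version of the Table 1 monitoring plan,
# useful for generating review reminders. Intervals are assumptions expressed in days;
# None marks event-driven or end-of-trial items rather than a fixed cycle.
from datetime import date, timedelta

FIDELITY_PLAN = {
    "Design":              {"data": "protocol manual and training curriculum",            "every_days": 90},
    "Training":            {"data": "parent mentor booster sessions",                     "every_days": 90},
    "Treatment Delivery":  {"data": "debriefings, audio recordings, field observations",  "every_days": None},
    "Treatment Receipt":   {"data": "qualitative interviews, receipt checklist",          "every_days": None},
    "Treatment Enactment": {"data": "qualitative and quantitative response",              "every_days": None},
}

def next_scheduled_reviews(start: date):
    """Yield the next due date for each periodically reviewed category."""
    for category, spec in FIDELITY_PLAN.items():
        if spec["every_days"] is not None:
            yield category, start + timedelta(days=spec["every_days"])

for category, due in next_scheduled_reviews(date(2014, 1, 1)):
    print(f"{category}: next review due {due.isoformat()}")
```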

Design

We will continue to use the curriculum and procedure manual originally developed by Ireys et al. (2001) for parent mentor training, for documenting and monitoring the time spent and content discussed during parent-to-parent interactions, and for debriefing with a parent mentor coordinator after these interactions. The Cardiff team are also using this approach.

We plan to include in the new version of STEP (STEP II) an ‘educational application’ component along with social support. In a recent systematic review, Chesla (2010) underscored the strength of diabetes interventions that were not limited to social support but also included an education component. We will craft a diabetes education component with our diabetes team that will provide intervention parents with a review and application of the pragmatic tricks of the trade in diabetes management, including carbohydrate counting, hypoglycemia management, and how to educate others (including grandparents and teachers) about diabetes management. Although this was originally part of the key ingredient that both the parent mentors and the parents who received the intervention described as helpful, it was not explicitly measured.

Training

All parent mentors were trained using the theory-based parent mentor curriculum developed by Ireys et al. (Ireys, Chernoff et al. 2001) and received booster sessions every 6 months with a review of the curriculum components. We will add the diabetes education application to the training, making this part of the intervention more explicit, and we will increase the booster sessions to quarterly. Parent training for both study arms will be carefully documented and monitored, including the length of each training session, the number of follow-up and booster sessions attended, and whether the intervention as implemented is consistent with the social support framework.

Treatment delivery

This category consists of strategies to monitor intervention delivery over time to ensure it is administered as intended, and it can serve as an evaluation tool for fidelity (Horner, Rew et al. 2006). The flexibility in the ‘intervention dose’ in STEP made it difficult to ensure that everyone in the experimental arm received a comparable amount of the key ingredient, introducing threats to internal validity and increasing the risk of a type II error. In reality, the active ingredient of a community-based intervention delivered by lay experts was difficult to measure with random observations. Videotaping was not conducted because of both the cost and its acceptability to parent mentors, who felt it would interfere with the reciprocity they were trying to develop with their assigned parents. Instead, we had frequent peer debriefing with the mentor supervisor (an experienced and trained parent mentor) after their parent encounters, as well as documentation of parent mentor-participant interactions lasting more than five minutes. In the revisited study we will ask the parent mentors to audio record each session to better monitor the content covered, the tone and intensity of the intervention sessions, and the dose effect (number of minutes spent, and content covered and applied to day-to-day management) (Brooten and Youngblut 2006; Reed, Titler et al. 2007). We will also develop an observation document to be used for random observation visits conducted by the parent mentor coordinator or the PI of the study. In addition, team meetings will be held every 2 weeks to discuss study progress and to further monitor intervention procedures (Artinian, Froelicher et al. 2004).
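As a rough illustration of how the recorded sessions and contact documentation could be summarized into a per-participant ‘dose,’ the sketch below aggregates hypothetical log entries. The field names, content codes, and the 30-minute threshold are assumptions for the example, not STEP documentation items.

```python
# Illustrative per-participant dose summary from hypothetical mentor contact logs.
# Field names, content codes, and the minimum-dose threshold are assumed example values.
from collections import defaultdict

contact_log = [
    {"participant_id": "P01", "minutes": 25, "content": {"carb counting", "hypoglycemia"}},
    {"participant_id": "P01", "minutes": 10, "content": {"educating the school"}},
    {"participant_id": "P02", "minutes": 5,  "content": {"affirmation"}},
]

dose = defaultdict(lambda: {"contacts": 0, "minutes": 0, "content": set()})
for entry in contact_log:
    summary = dose[entry["participant_id"]]
    summary["contacts"] += 1
    summary["minutes"] += entry["minutes"]
    summary["content"] |= entry["content"]

MIN_MINUTES = 30  # hypothetical threshold for a comparable dose of the key ingredient
for pid, summary in sorted(dose.items()):
    flag = "" if summary["minutes"] >= MIN_MINUTES else "  <- below planned dose"
    print(f"{pid}: {summary['contacts']} contacts, {summary['minutes']} min, "
          f"topics={sorted(summary['content'])}{flag}")
```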

Treatment receipt

It is important not only to ensure the treatment is being delivered as planned, but also to ensure that the parents receiving it feel that social support and education application were provided. We conducted qualitative interviews with many of the parents in the experimental arm as well as a few in the control arm, and we plan to do the same in the next study, as do the Cardiff group. In addition, in the revisited study we will develop a diabetes management checklist for parents to complete at the end of the study to better capture and measure the key ingredient of the intervention.

Treatment enactment

This component ensures that, beyond the study, the intent of the intervention was helpful and parents incorporated the social support into their daily lives. We held a family ‘thank you’ event at the Barton Center for Diabetes Education at the end of the study, where we were anecdotally told how helpful the intervention had been during a very stressful time in parents’ lives. The revisited study will include interviews 6 months post intervention to assess the lasting effect of the parent support and application offered.

Thus, the alternative intervention fidelity plan will be strengthened to better verify intervention delivery, including random observations by the parent mentor coordinator or PI using a structured observation checklist (similar to one we developed for our current parent education RCT). We will also collect quantitative data from parents as evidence of social support and educational receipt.

More precise measures

The original framework focused on social support only, and the previous measures lacked the sensitivity to capture changes between the two groups. With the decision to use the FMSF, several alterations have the potential to better measure the effect of the intervention. First, for the contextual influences that include social support (including identification of community resources) and diabetes education application, several psychometrically reliable and valid instruments are available. Previously, we did not measure the diabetes knowledge gained from the application of diabetes management shared between the parent mentor and the assigned parent. The adapted Diabetes Awareness and Reasoning Test-Parents (DART-P) (Sullivan-Bolyai, Bova et al. 2012) has the potential to show differences resulting from the formal application of knowledge during parent mentor interactions. Parents have reported fear of hypoglycemia as the main fear in day-to-day management, and the Hypoglycemia Fear Scale-Parents (Clarke, Gonder-Frederick et al. 1998) specifically measures this issue. In STEP we also did not use a diabetes-specific instrument to sensitively measure self-efficacy in parents, but we now have the SED-P (Grossman, Brinks et al. 1987), which has been reported to have excellent reliability and validity. Second, the Family Management Measure (FaMM) (Knafl, Deatrick et al. 2011) is a well-constructed family functioning instrument developed by the framework developers that will allow us to specifically measure parental variation in the concepts of definition of the situation, management behaviors, and perceived consequences of the chronic illness from the perspective of the parents. Along with the qualitative interviews and the partial randomization clusters of parents who really want this type of 1:1 mentorship, we hope to better measure quantitatively why select parents find STEP very helpful in the early months after their children are diagnosed with T1D.

Analysis plan allowing more insight

Finally, our original sample size and analysis plan (multivariate analysis) did not allow for the measurement of the specific causal pathways associated with the intervention that structural equation modeling (SEM) allows. SEM will provide a better explanation of how the intervention worked and for whom it is most effective. It is possible that unique family management perspectives mediate the effect of the parent mentor support provided as a contextual influence. By using SEM we will be able to measure differences in parent perspectives on view of the child, parent mutuality, management approaches, and future expectations of the child within the context of the family. It may be that two-parent families in which one parent sees management as very difficult and the other holds discordant views benefit from one-to-one parent support more than families in which parent perspectives are in closer agreement. We will also be able to measure moderators such as parent education and family composition to help identify which parents benefit most from the type of support STEP offers, again by clusters of parents who specifically select (through partial randomization) this type of intervention. It is critical to analyze mediators and moderators to determine when, for whom, and how the key ingredients of the intervention work to elicit change in behaviors (Rothman 2011). This analysis will also save time, effort, and money by allowing the intervention to be offered only to those who want this type of social support (Park, Green et al. 2012).
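To make the ideas of intra-family correlation, moderation, and mediation concrete, the sketch below fits linear mixed models with a random intercept for family on synthetic data. The variable names (mentor, parent_education, famm_score, outcome, family_id) and the Baron and Kenny-style two-step check are illustrative assumptions; they are not the planned STEP II analysis, which would use SEM.

```python
# Illustrative mixed-model analysis on synthetic data (not the planned STEP II analysis).
# A random intercept for family handles intra-family correlation; an interaction term
# probes moderation; two further models give a Baron & Kenny-style mediation check.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_fam = 60
family_id = np.repeat(np.arange(n_fam), 2)                  # two parents per family
mentor = np.repeat(rng.integers(0, 2, n_fam), 2)            # family-level assignment, 0/1
parent_education = rng.integers(0, 2, n_fam * 2)            # hypothetical moderator, 0/1
fam_effect = np.repeat(rng.normal(0, 1, n_fam), 2)          # shared within-family variation
famm_score = 0.5 * mentor + fam_effect + rng.normal(0, 1, n_fam * 2)   # hypothetical mediator
outcome = (0.4 * famm_score + 0.2 * mentor * parent_education
           + fam_effect + rng.normal(0, 1, n_fam * 2))
df = pd.DataFrame({"family_id": family_id, "mentor": mentor,
                   "parent_education": parent_education,
                   "famm_score": famm_score, "outcome": outcome})

# Moderation: does the mentor effect on the outcome differ by parent education?
moderation = smf.mixedlm("outcome ~ mentor * parent_education",
                         data=df, groups=df["family_id"]).fit()
print(moderation.summary())

# Mediation, Baron & Kenny style: mentor -> FaMM score (a path), outcome ~ mentor + FaMM (b path).
a_path = smf.mixedlm("famm_score ~ mentor", data=df, groups=df["family_id"]).fit()
b_path = smf.mixedlm("outcome ~ mentor + famm_score", data=df, groups=df["family_id"]).fit()
print("a path:", round(a_path.params["mentor"], 3),
      " b path:", round(b_path.params["famm_score"], 3))
```

A full SEM with latent FMSF constructs would require dedicated software (for example, a package such as lavaan in R or semopy in Python), but the mixed-model form above already shows how family clustering, a moderator, and a mediator enter the model.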

Conclusion

This paper described an iterative process for revising a ‘file drawer’ parent mentor social support intervention study. Our commitment and our family community support have resulted in a decision to use a different framework; an alteration in the randomization process (to account for parents who, by personality, would select this type of social support); the explication of part of the key ingredient of the intervention (application of diabetes knowledge); a more precise intervention fidelity plan to support the intervention; more sensitive, theory-driven measures; and a more sophisticated analysis plan. More attention to all of these components of the research process will allow us to revisit this family-focused intervention with a new lens, in hopes of quantifying behavioral differences that occur as a result of the intervention. Similarly, the work of our colleagues in the U.K., who are exploring the feasibility of the model in their healthcare context, will add to our understanding of the key ingredients in parent social support interventions. Non-significant findings can be frustrating, but at the same time they lead us on a journey of curiosity that forces us to use all of our creative skills to determine how an intervention works. It may be easier to place the project in the file drawer, but consider: if Thomas Edison had taken that attitude, who knows how long it would have taken to perfect the light bulb. As he so aptly said, “I have not failed. I've just found 10,000 ways that won't work.” There are many lessons to be learned from non-significant findings, pushing us in the quest for answers.

Acknowledgments

This research was partially supported by NIH-NINR 1R01NR011317.


Contributor Information

Susan Sullivan-Bolyai, NYU College of Nursing.

Carol Bova, University of Massachusetts, Worcester.

Lesley Lowes, Cardiff University, Wales.

Sue Channon, Cardiff University, Wales

References

  1. Artinian NT, Froelicher ES, et al. Data and safety monitoring during randomized controlled trials of nursing interventions. Nursing Research. 2004;53:414–418. doi: 10.1097/00006199-200411000-00010. [DOI] [PubMed] [Google Scholar]
  2. Baron RM, Kenny DA. The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology. 1986;51:1173–1182. doi: 10.1037//0022-3514.51.6.1173. [DOI] [PubMed] [Google Scholar]
  3. Baronowski T, Lin L, et al. Theory as mediating variables: Why aren't community interventions working as desired? Annals of Epidemiology. 1997;S7:S89–S95. [Google Scholar]
  4. Berman NG, Parker RA. Meta-analysis: Neither quick nor easy. BMC Medical Research Methodology. 2002;2 doi: 10.1186/1471-2288-2-10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Brewin C, Bradley C. Patient preferences and randomized clinical trials. British Medical Journal. 1989;299:313–315. doi: 10.1136/bmj.299.6694.313. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Brooten D, Youngblut JM. Nurse dose as a concept. Journal of Nursing Scholarship. 2006;38:94–99. doi: 10.1111/j.1547-5069.2006.00083.x. [DOI] [PubMed] [Google Scholar]
  7. Chesla CA. Do family interventions improve health? Journal of Family Nursing. 2010;16:355–377. doi: 10.1177/1074840710383145. [DOI] [PubMed] [Google Scholar]
  8. Clark AJ. Optimizing the intervention in research studies. Advanced Practice Nursing Quarterly. 1996;2(3):1–4. [PubMed] [Google Scholar]
  9. Clarke WL, Gonder-Frederick LA, et al. Maternal fear of hypoglycemia in their children with insulin dependent diabetes mellitus. Journal of Pediatric Endocrinology & Metabolism. 1998;11:189–194. doi: 10.1515/jpem.1998.11.s1.189. [DOI] [PubMed] [Google Scholar]
  10. Conn VS. The devil is in the details. Western Journal of Nursing Research. 2009;31:139–140. doi: 10.1177/0193945908327786. [DOI] [PubMed] [Google Scholar]
  11. Conn VS, Rantz MJ, et al. Designing effective nursing interventions. Research in Nursing and Health. 2001;24:433–442. doi: 10.1002/nur.1043. [DOI] [PubMed] [Google Scholar]
  12. Coward D. Partial randomization design in a support group intervention. Western Journal of Nursing Research. 2002;24:406–421. doi: 10.1177/01945902024004008. [DOI] [PubMed] [Google Scholar]
  13. Gemmel I, Dunn G. The statistical pitfalls of the partially randomized preference design in non-blinded trials of psychological interventions. International Journal of Methods in Psychiatric Research. 2011;20:1–9. doi: 10.1002/mpr.326. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Gross D. On the merits of attention-control groups. Research in Nursing and Health. 2005;28:93–94. doi: 10.1002/nur.20065. [DOI] [PubMed] [Google Scholar]
  15. Grossman HY, Brinks S, et al. Self-efficacy in adolescent girls and boys with insulin-dependent diabetes mellitus. Diabetes Care. 1987;10:324–329. doi: 10.2337/diacare.10.3.324. [DOI] [PubMed] [Google Scholar]
  16. Hampson SE, Skinner TC, et al. Behavioral interventions for adolescents with type 1 diabetes: How effective are they? Diabetes Care. 2000;23:1416–1422. doi: 10.2337/diacare.23.9.1416. [DOI] [PubMed] [Google Scholar]
  17. Horner S, Rew L, et al. Enhancing intervention fidelity: A means of strengthening study impact. Journal of Specialists in Pediatric Nursing. 2006;11:80–89. doi: 10.1111/j.1744-6155.2006.00050.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Ireys H, Chernoff R, et al. Maternal outcomes of a randomized controlled trial of a community-based support program for families of children with chronic illnesses. Archives Pediatric Adolescent Medicine. 2001;155:771–777. doi: 10.1001/archpedi.155.7.771. [DOI] [PubMed] [Google Scholar]
  19. Jha A. The Guardian. London: GMG Guardian Media Group; 2011. We must learn to love uncertainty and failure, say leading thinkers. [Google Scholar]
  20. Kearney MH, Simonelli MC. Intervention fidelity: Lessons learned from an unsuccessful pilot study. Applied Nursing Research. 2006;19:163–166. doi: 10.1016/j.apnr.2005.11.001. [DOI] [PubMed] [Google Scholar]
  21. Keller C, Fleury J, et al. Fidelity to theory in PA intervention research. Western Journal of Nursing Research. 2009;31:289–311. doi: 10.1177/0193945908326067. [DOI] [PubMed] [Google Scholar]
  22. Kellogg WK. W.K. Kellogg Foundation's logic model development guide. 2004. Retrieved April 1, 2013, from http://www.wkkf.org/knowledge-center/resources/2006/02/WK-Kellogg-Foundation-Logic-Model-Development-Guide.aspx.
  23. Knafl K, Deatrick J, et al. Assessment of the Psychometric Properties of the Family Management Measure. Journal of Pediatric Psychology. 2011;36:494–505. doi: 10.1093/jpepsy/jsp034. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Knafl KA, Deatrick J, et al. Continued development of the family management style framework. Journal of Family Nursing. 2012;18:11–34. doi: 10.1177/1074840711427294. [DOI] [PubMed] [Google Scholar]
  25. Mays MZ, Melnyk BM. A call for the reporting of effect sizes in research reports to enhance critical appraisal and evidence-based practice. Worldviews on Evidence-Based Nursing 3rd Quarter. 2009:125–129. doi: 10.1111/j.1741-6787.2009.00166.x. [DOI] [PubMed] [Google Scholar]
  26. Park MJ, Green J, et al. Hidden decay of impact after education for self-management of chronic illnesses: hypotheses. Chronic Illness. 2012;9:73. doi: 10.1177/1742395312453351. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Rearick E, Sullivan-Bolyai S, et al. Parents of children newly diagnosed with type 1 diabetes: Experiences with social support and family management. The Diabetes Educator. 2011;37:508–518. doi: 10.1177/0145721711412979. [DOI] [PubMed] [Google Scholar]
  28. Reed D, Titler MG, et al. Measuring the dose of nursing intervention. International Journal of Nursing Terminologies and Classifications. 2007;18:121–130. doi: 10.1111/j.1744-618X.2007.00067.x. [DOI] [PubMed] [Google Scholar]
  29. Resnick B, Inguito P, et al. Treatment fidelity in behaviour change research. Nursing Research. 2005;54:139–143. doi: 10.1097/00006199-200503000-00010. [DOI] [PubMed] [Google Scholar]
  30. Rosenthal R. The "file drawer" problem and tolerance for null results. Psychological Bulletin. 1979;86:638–641. [Google Scholar]
  31. Rothman AJ. Be prepared: Capitalizing on opportunities to advance theory and practice. Journal of Public Health Dentistry. 2011;71:549–550. doi: 10.1111/j.1752-7325.2011.00240.x. [DOI] [PubMed] [Google Scholar]
  32. Santacroce SJ, Maccarelli LM, et al. Intervention fidelity. Nursing Research. 2004;53:63–66. doi: 10.1097/00006199-200401000-00010. [DOI] [PubMed] [Google Scholar]
  33. Shilling V, Morris C, et al. Peer support for parents of children with chronic disabling conditions: A systematic review of quantitative and qualitative studies. Developmental Medicine & Child Neurology. 2013:1–8. doi: 10.1111/dmcn.12091. [DOI] [PubMed] [Google Scholar]
  34. Sidani S, Miranda J, et al. Influence of treatment preferences on validity: A review. Canadian Journal of Nursing Research. 2009;41(4):52–67. [PubMed] [Google Scholar]
  35. Spillane V, Byrne MC, et al. Monitoring treatment fidelity in a randomized controlled trial of a complex intervention. Journal of Advanced Nursing. 2007;60:343–352. doi: 10.1111/j.1365-2648.2007.04386.x. [DOI] [PubMed] [Google Scholar]
  36. Stein KF, Sargent JT, et al. Intervention research: Establishing fidelity of the independent variable in nursing clinical trials. Nursing Research. 2007;56:54–62. doi: 10.1097/00006199-200701000-00007. [DOI] [PubMed] [Google Scholar]
  37. Sullivan-Bolyai S, Bova C, et al. Development and pilot testing of a parent education intervention for type 1 diabetes: parent education through simulation-diabetes. Diabetes Educator. 2012;38:50–57. doi: 10.1177/0145721711432457. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Sullivan-Bolyai S, Bova C, et al. Social Support To Empower Parents: An intervention for parents of young children newly diagnosed with type 1 diabetes. The Diabetes Educator. 2010;36:88–97. doi: 10.1177/0145721709352384. [DOI] [PubMed] [Google Scholar]
  39. Sullivan-Bolyai S, Grey M, et al. Helping other mothers effectively work at raising young children with type 1 diabetes. Diabetes Educator. 2004;30:476–484. doi: 10.1177/014572170403000319. [DOI] [PubMed] [Google Scholar]
  40. Sullivan-Bolyai S, Knafl KA, et al. Maternal management behaviors for young children with type 1 diabetes. MCN, American Journal of Maternal Child Nursing. 2003;28 doi: 10.1097/00005721-200305000-00005. [DOI] [PubMed] [Google Scholar]
  41. Tufts Clinical Translational Science Institute. Study phase criteria. 2013. Retrieved March 29, 2013, from http://tuftsctsi.org.
  42. Whitmer K, Sweeney C, et al. Strategies for maintaining integrity of a behavioral intervention. Western Journal of Nursing Research. 2005;27:338–345. doi: 10.1177/0193945904270087. [DOI] [PubMed] [Google Scholar]
