Translational Behavioral Medicine
. 2011 Sep 13;1(4):515–522. doi: 10.1007/s13142-011-0066-7

Reach of a kiosk-based pediatric injury prevention program

Nancy L Weaver 1,, Tonja R Nansel 2, Janice Williams 3, Julia Tse 2, Maria Botello-Harbaum 4, Katherine Willson 2
PMCID: PMC3650892  NIHMSID: NIHMS460364  PMID: 23667402

ABSTRACT

While controlled trials are important for determining the efficacy of public health programs, implementation studies are critical to guide the translation of efficacious programs to general practice. To examine program use and completion rates across two implementation phases, Safe N’ Sound, an evidence-based injury prevention program, was implemented in five pediatric clinics. Data on program use were collected from program files and patient census data. Program use averaged 12.1% of eligible patients during the implementation phase and 9.5% during the continuation phase. Program completion averaged 9.7% and 6.5%, respectively. Findings from this study can inform the dissemination of evidence-based public health programs, particularly in practice-based clinical settings.

KEYWORDS: Injury prevention, Anticipatory guidance, Pediatric counseling, Reach, Implementation, Dissemination, Evidence-based

INTRODUCTION

More children die from unintentional injuries than from any other cause; from 1999 to 2007, 15,299 children ages 1–4 sustained fatal unintentional injuries nationwide [1]. The morbidity, mortality, and costs associated with these injuries can be significantly reduced by the use of appropriate safety products and behaviors. However, such prevention measures are not reliably used by parents and caregivers [2].

The American Academy of Pediatrics recommends that pediatricians provide anticipatory guidance on the ways in which parents can keep their young children safer from injuries [3]. Research has shown that while many families consider their pediatrician to be a valuable and credible source of injury prevention information [4], pediatricians do not routinely provide such guidance [5, 6]. Many injury experts have called for approaches that complement the role of the pediatrician [7, 8], and notable organizations have developed injury prevention materials to meet this need [9–11].

While the provision of injury control information is important for advancing parental prevention behaviors, the number of behaviors that need to be addressed in order to make homes safer for children can be overwhelming. Thus, prioritized communication approaches are useful so that parents can be given the most appropriate information about their child’s individual injury risks. Safe N’ Sound (SNS) is such a program, designed for use in the pediatric primary care environment [12–15]. After a parent completes an electronic self- and home assessment using a touch screen computer, the program prints a tailored booklet for the parents and a corresponding summary for their pediatrician. This kiosk-based program was designed to facilitate anticipatory guidance and promote the use of safety practices by parents to keep young children (0–4 years old) safer from car crashes, falls, drowning, poisonings, and burns. The SNS program has been evaluated in multiple settings [12–14], adapted for use in community-based clinics [15], and translated to offer the program in both Spanish- and English-speaking communities. The evidence thus far supports SNS as a valuable tool in promoting injury prevention behaviors. Compared to parents who received generic materials, parents who received the SNS materials were twice as likely to adopt a behavior to prevent childhood injury; notably, SNS promoted the use of more complex behaviors (e.g., car seat installation) and was particularly effective for low-income parents [12].

The application of SNS in these studies, however, was supported by the research infrastructure required to conduct an efficacy trial, with research assistants recruiting participants, directing them to the kiosk, distributing the parent- and provider-tailored information, and ensuring continued functioning of the kiosk. In comparison, use of SNS in routine health care would require organizational support to initiate and maintain the program. The degree to which this program could be implemented and sustained in routine health care without external research support was not known.

This programmatic challenge is not unique to the field of injury control. It is well documented that there is a disconnect between the development and efficacy testing of behavioral interventions and their translation to general practice [16, 17]. While efficacy trials are certainly required to test interventions under tightly controlled conditions, research to determine the extent and nature of program implementation in practice-based settings is critical for bridging this translational gap [18]. One tool to guide the dissemination of clinical approaches in health care settings is the RE-AIM framework [19–21]. In this framework, the public health impact of an intervention is described as a function of its Reach, Efficacy, Adoption, Implementation, and Maintenance. As such, an efficacious intervention can only have a substantial public health impact if an adequate number of the target population participate, it is adopted by relevant organizations, implemented with integrity, and maintained over time [19, 20, 22–24]. RE-AIM has been used to guide the development of programs that are more likely to be effective in real-world settings, in part by drawing attention to the relative strengths and weaknesses of different approaches to health promotion and chronic illness self-management, such as in-person counseling, group education classes, telephone counseling, and Internet resources [20, 25, 26].

This manuscript focuses on the first construct in the RE-AIM framework—Reach—in evaluating the implementation of SNS in routine clinical practice without on-site research support. The primary purpose of the study was to assess the reach of the program to parents and providers over two phases of program implementation in five participating clinics.

METHOD

Overview

Pediatric clinics within a healthcare consortium in North Carolina were invited to adopt the SNS program, and those that agreed were provided the computer, computer program, supplies, and training. Adoption and use were tracked with various data collection methods over two implementation phases. This study was approved by the Institutional Review Board (IRB) of the Carolinas Medical Center in Charlotte, NC and took place between July 2006 and April 2008.

Recruitment

Because the purpose of this study was to document the extent of program reach, our goal was to involve a range of clinic types generally representative of national pediatric offices providing well-child care. We approached clinics individually by emailing a physician or office manager at each site with whom study staff had previous working relationships on injury projects, or who had been recommended by recruited physicians as colleagues who might be interested in participating as a site. Offices were provided a description of the program, copies of the efficacy studies, estimated time requirements on their part, confirmation that all IRB and legal issues would be resolved prior to placement, assurance that patient flow would not be impacted, and a commitment of $1,500 for staff incentives. In total, we invited 16 of the 23 clinics in the healthcare consortium and took the first five of differing office types that agreed to participate.

The most commonly cited reasons for choosing to participate were the program fit with existing patient care practices, availability of information technology support, lack of a research requirement to guarantee a specific number of program users, and the brevity of the program encounter. Additional factors reported to facilitate a decision to participate included previous contact with study staff, practice commitment to injury prevention, provision of funds to support data collection, and the low level of time commitment required of providers and staff for the evaluation components. Clinics also noted the ability of our project team to be available during the study period for feedback, observations, and support. For the 11 clinics that did not participate, key factors included impending staff or location change, lack of agreement between providers and office staff, unfamiliarity with the program, and the belief that time would be too large of a constraint. Participating offices are described in Table 1.

Table 1.

Clinic and user characteristics

Clinic 1 Clinic 2 Clinic 3 Clinic 4 Clinic 5
Clinic type
 Practice area Rural Urban Urban edge Suburban Suburban
 Practice structure Independent Clinic/teaching Family practice Independent Multi-site
 Payor Medicaid/private Medicaid/sliding Mixed Private ins Private ins
 Patient census Moderate High Low High Moderate
Clinic staff
 No. of providers 2 5 4 10 6
 No. of office managers 1 1 1 1 3
 No. of nurse managers 1 1 1 1 1
 No. of front desk staff 2 8 2 8 2
 Kiosk user demographics (n = 53) (n = 173) (n = 161) (n = 255) (n = 100)
Child race/ethnicity (%)
 White 71.2 9.4 63.1 58.2 83.3
 Black 21.2 42.1 12.7 7.4 5.2
 Hispanic 5.8 29.8 9.6 6.1 4.2
 Asian 1.9 3.5 4.5 4.9 4.2
 American Indian 0.0 4.1 2.5 4.9 0.0
 Other 0.0 11.1 7.6 18.4 3.1
Family income (%)
 <$29,999 12.0 67.1 10.3 15.7 6.9
 $30,000–49,999 18.0 15.1 12.4 8.3 19.5
 $50,000–74,999 38.0 5.9 20.7 15.3 26.4
 $75,000+ 32.0 11.8 56.6 60.6 47.1
Parent/guardian completing assessment (%)
 Mother 83.0 68.8 83.2 48.6 82.0
 Father 9.4 13.3 12.4 25.5 15.0
 Other 7.6 17.9 4.3 25.9 3.0
Education of parent/guardian completing assessment (%)
 < High school degree 3.8 27.3 5.8 15.9 2.1
 High school deg or equivalent 11.5 30.9 3.8 8.2 6.3
 Some college 23.1 18.8 19.9 9.4 15.6
 College graduate 46.2 12.7 53.8 38.2 53.1
 Postgraduate 15.4 10.3 38.3 28.3 22.9

Note that not all users elected to provide demographic information

Implementation

The program was placed in one practice, the smallest site, for 3 months as a pilot location. Based on the experiences with the pilot site, materials and procedures were modified for use in the remaining four sites (e.g., paper format, flyers, screen savers, data collection materials). The program was then implemented in the remaining four clinics for a 3-month implementation phase and a 6-month continuation phase. During the implementation phase, the clinics were provided general project support (e.g., suggestions on program placement, flow, and supplementary materials) and staff were available by phone or email for questions. In the continuation phase, we largely withdrew program support to assess the degree of sustainability.

Data sources

To examine the implementation of SNS at each phase, we collected data from three sources:

  1. Program use data. The SNS program involves a user completing a self-assessment of childhood injury risk factors present in their homes and cars. Because the program was intended to be integrated into routine care rather than conducted as part of an efficacy trial, we did not collect identifying information from program users. Program data were exported to enumerate the number of parents who used the program and the number who completed it during the project phases, in order to compute measures of program reach to parents. A program user was defined as someone who entered any information into the program, whether or not they completed the assessment. A completer was defined as someone who completed the assessment sufficiently to generate the tailored feedback.

  2. Census data. We conducted a query of clinic census data to determine the number of parents who were eligible to complete the program during each of the program phases. Eligibility criteria included: having a child age four or younger, English speaking, and visiting the clinic for a routine well-child visit (billing code V20.2).

  3. Chart audits. Over the course of the 9-month project, 910 charts were audited from randomly selected days at the five clinic sites to determine how many physician feedback reports were included in the patient charts. Because one component of this program is to provide a summary sheet to the pediatrician to guide anticipatory guidance, we define the presence of this report in the patient chart as an indicator of program reach to pediatricians. It should be noted, however, that the chart audits were conducted on a randomly selected sample of eligible patients, not specifically parents who had completed the program, as no individual identifiers were collected by the program. During the chart reviews, we also captured documentation of the provision of injury prevention anticipatory guidance. Where such documentation was present, we recorded the injury prevention topics noted as discussed and, for charts containing the SNS feedback sheet, whether those topics were consistent with the injury areas indicated by SNS.
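To make the census eligibility query concrete, a minimal sketch follows; the record layout and field names are hypothetical illustrations, not the clinics' actual census schema.

```python
# Hypothetical sketch of the census eligibility query (source 2 above);
# records and field names are illustrative, not the clinics' actual schema.
census = [
    {"child_age_years": 2, "language": "English", "billing_code": "V20.2"},
    {"child_age_years": 6, "language": "English", "billing_code": "V20.2"},
    {"child_age_years": 1, "language": "Spanish", "billing_code": "V20.2"},
    {"child_age_years": 3, "language": "English", "billing_code": "V70.0"},
]

eligible = [
    r for r in census
    if r["child_age_years"] <= 4          # child age four or younger
    and r["language"] == "English"        # English speaking
    and r["billing_code"] == "V20.2"      # routine well-child visit (V20.2)
]
print(len(eligible))  # eligible visits in this toy census
```

Only records meeting all three criteria count toward the eligibility denominator used in the reach calculations.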

Analysis

To describe reach to parents and providers, three measures of program use were computed, each reflecting a greater degree of program reach. First, kiosk use was computed as the proportion of eligible potential program users who entered any information into the kiosk, whether or not they completed the assessment. This proportion thus includes parents who were interested in the kiosk and started to use it but did not complete the program. Second, kiosk completion was computed as the proportion of eligible potential users who completed the kiosk assessment. Third, physician use was computed as the number of patient charts that included the SNS physician feedback relative to the number of eligible users. Because program completion data were anonymous, it was not possible to link program completion data to specific charts. Therefore, using the proportion of eligible persons who completed the assessment and the proportion of eligible patients with SNS documentation in the chart, we calculated the estimated proportion of program completers with SNS documentation in the chart.
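For concreteness, the first two reach measures can be computed directly from the aggregate counts reported in Table 2; this is a minimal sketch, not the study's actual analysis code.

```python
# Illustrative computation of the first two reach measures, using the
# aggregate counts from Table 2 (all clinics, all phases).
eligible   = 11884  # eligible well-child visits across all phases
users      = 1215   # entered any information into the kiosk
completers = 902    # completed enough of the assessment for tailored feedback

kiosk_use        = users / eligible        # measure 1: any kiosk use
kiosk_completion = completers / eligible   # measure 2: completed assessment
print(f"{kiosk_use:.1%} used the kiosk; {kiosk_completion:.1%} completed it")
```

The two quotients reproduce the 10.2% use and 7.6% completion figures reported in the Results.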

Analysis was conducted using SPSS 17 [27]. Usage rates for each clinic site were compared using the Marascuilo procedure for comparison of multiple proportions [28], and descriptive statistics for the user characteristics were generated.
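The Marascuilo procedure declares a pair of proportions significantly different when their absolute difference exceeds a pair-specific critical range. Below is a minimal sketch applied to the clinics' total kiosk-use counts from Table 2, with an assumed alpha of 0.05; the chi-square critical value is hardcoded rather than taken from the study's SPSS output.

```python
# Sketch of the Marascuilo procedure on the clinics' total kiosk-use
# proportions (counts from Table 2); alpha = 0.05 is an assumption.
from itertools import combinations
from math import sqrt

used     = [72, 316, 212, 455, 160]        # kiosk users per clinic
eligible = [1108, 3508, 2557, 5511, 1656]  # eligible visits per clinic
p = [u / n for u, n in zip(used, eligible)]
k = len(p)

CHI2_CRIT = 9.488          # chi-square critical value, alpha = 0.05, df = k-1 = 4
crit = sqrt(CHI2_CRIT)

# A pair (i, j) differs significantly when |p_i - p_j| exceeds its
# critical range crit * sqrt(p_i(1-p_i)/n_i + p_j(1-p_j)/n_j).
sig = sum(
    1
    for i, j in combinations(range(k), 2)
    if abs(p[i] - p[j])
    > crit * sqrt(p[i] * (1 - p[i]) / eligible[i] + p[j] * (1 - p[j]) / eligible[j])
)
print(sig)  # number of significantly different clinic pairs
```

With these counts no pairwise difference exceeds its critical range, consistent with the null result reported in the Results section.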

RESULTS

Reach to parents

Based on clinic census data, 11,884 parents were eligible to complete the SNS program during any of the three phases at the five participating clinics, meaning their child was within the appropriate age range (0–4) and had a well-child appointment with the pediatrician (billing code V20.2), and the parent was English speaking. Kiosk use by clinic site and phase is provided in Table 2, along with aggregate data across all sites. Across all five clinics and all phases, an average of 10.2% of parents used the program to some degree, and 7.6% completed the program. Total use across the study period at the five clinics varied from 6.5% to 9.7%. Program completion across the study period ranged from 5.4% to 7.3%. Comparison of multiple proportions (Marascuilo procedure) indicated no significant differences between clinics in the total percentage of either users or completers. Changes in program use and completion from implementation to continuation varied across clinics. Program use and completion were relatively stable in clinic 2 and increased over time in clinic 5. Conversely, program use and completion in clinic 3 showed a marked decline from the implementation to continuation phase; a smaller decline was observed in clinic 4.

Table 2.

Eligible patients and kiosk use by clinic site

Eligible patients Kiosk used Completed assessment
N (% of eligible) N (% of eligible)
Pilot Impl Cont Total Pilot Impl Cont Total Pilot Impl Cont Total
Clinic 1 1,108 N/A N/A 1,108 72 (6.5) N/A N/A 72 (6.5) 60 (5.4) N/A N/A 60 (5.4)
Clinic 2 N/A 1,253 2,255 3,508 N/A 99 (7.9) 217 (9.6) 316 (9.0) N/A 85 (6.8) 150 (6.7) 235 (6.7)
Clinic 3 N/A 1,363 1,194 2,557 N/A 187 (13.7) 25 (2.1) 212 (8.3) N/A 159 (11.7) 13 (1.1) 172 (6.7)
Clinic 4 N/A 2,140 3,371 5,511 N/A 225 (10.5) 230 (6.8) 455 (8.3) N/A 159 (7.4) 155 (4.6) 314 (5.7)
Clinic 5 N/A 612 1,044 1,656 N/A 34 (5.6) 126 (12.1) 160 (9.7) N/A 33 (5.4) 88 (8.4) 121 (7.3)
Total 1,108 4,491 6,285 11,884 72 (6.5) 545 (12.1) 598 (9.5) 1,215 (10.2) 60 (5.4) 436 (9.7) 406 (6.5) 902 (7.6)

N/A not applicable

Reach to providers

Across all sites, 910 charts were audited to determine the presence of the SNS physician component. Of these, 17 contained SNS feedback sheets (1.9% of eligible visits). Percentages at individual sites ranged from 0.5% to 8.0%, with the highest percentage observed at the pilot site (clinic 1, 8.0%; clinic 2, 0.5%; clinic 3, 2.1%; clinic 4, 0.5%; clinic 5, 1.9%). Relative to the 5.6% of eligible visits in which SNS was completed, the 1.9% rate of SNS chart documentation suggests that 34% (1.9/5.6) of those completing SNS would have had the physician feedback placed in the chart, assuming the audited charts are representative of all visits during the study period. In other words, had we been able to identify and audit the charts of patients whose parent completed SNS, we estimate that we would have found the physician feedback in 34% of those charts.
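As a transparency check, this estimate can be reproduced by dividing the two reported proportions; the sketch assumes, as stated above, that the audited charts are representative of all eligible visits.

```python
# Reproducing the physician-reach estimate from the two reported proportions.
p_chart_doc = 0.019  # 17 of 910 audited charts held the SNS feedback sheet
p_completed = 0.056  # share of eligible visits at which SNS was completed
est = p_chart_doc / p_completed
print(f"~{est:.0%} of completers had physician feedback placed in the chart")
```

The quotient recovers the 34% figure quoted in the text.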

There was low correspondence of documentation of injury prevention anticipatory guidance with the risk behaviors indicated on the physician summary sheet. Of the 17 charts that contained SNS feedback sheets, only two included documentation of anticipatory guidance on one of the risks identified by SNS and nine documented anticipatory guidance on other injury risks.

DISCUSSION

The gap between the development of programs with demonstrated efficacy and their translation to clinical practice is well documented; however, the body of research assessing translation of such programs is limited. SNS serves as a useful case study to inform these issues. The program was designed to complement existing clinical care practices and recommendations and to require minimal staff effort to implement and maintain. Nevertheless, the translation of this program into the clinical setting absent on-site research support resulted in low reach to the intended audience.

Across all clinics and phases, the program was completed at 7.6% of eligible visits, with no significant differences between clinics in rates of use or completion. At each clinic, the program was placed in the waiting room, and users self-selected to participate. While rates would be expected to be higher if the program were more fully integrated into the visit process, doing so would require a much greater level of organizational commitment and effort. Nevertheless, the ability to reach even 5% of families having a clinic visit with a health communication program shown to increase the adoption of injury prevention behaviors would be expected to have a meaningful public health impact over time. In this study, an effective injury prevention message was delivered to 5% of 11,884 eligible parents. Based on previously demonstrated estimates of program efficacy, that equates to 594 parents receiving an injury prevention message that is twice as likely as typical materials to result in a safer home or car for a young child.

Reach of the program to providers was estimated to be about one third of those completing the program. However, among those for which the SNS document was placed in the chart, the majority did not document provision of injury prevention anticipatory guidance consistent with the risk areas identified by the program. Considering the low rates of provider discussion of the SNS materials reported by parents in the previous efficacy study [12], findings suggest that the provider component of SNS may be the least translatable aspect of the program.

It is suggested in the implementation literature that uptake will be slow at first and then increase as the program becomes more routinized. Our findings are not fully consistent with this expectation; in two of the five clinics, usage decreased between phases, and in one clinic overall usage increased. This could be because the minimal project support offered during the implementation phase was largely removed during the continuation phase. Also, our implementation phase was only 9 months; while this might be long enough to resolve the logistical issues related to implementation, it is not long enough for a new process to become routinized in a clinic environment already cluttered with parental obligations. In fact, in all of the participating clinics there was at least one staff change in a primary leadership position (nurse manager, front desk manager, office manager, or provider) during the study period. In particular, clinic 3 showed the greatest decline in usage rates over the project phases, which may have been due to the complexity of different staff types, greater space constraints for kiosk placement, or unique barriers associated with wireless technology. To explore these factors more fully, we conducted interviews and focus groups with clinic staff and providers, analysis of which will be presented in a forthcoming paper.

Computer-delivered interventions have been shown to increase a variety of health behaviors [29]. In pediatric practice, computers have been used to directly deliver educational material, such as text to increase general child health knowledge [30] or video to increase positive discipline approaches [31]. Computer-tailored interventions are those that require input from the user and then produce customized content. Such systems are nicely described by Vinson et al. in the inaugural issue of this journal [32]. Tailored waiting room programs have been implemented in clinical settings for adults [33], and Internet tools have been provided to support walking in family medicine practices [34]. Applications of these systems in pediatric settings include using pre-visit assessments by parents to improve well-child visits [35, 36], to address overweight using tailored text messages [37], and to provide tailored injury information to parents [38].

In the majority of these applications, computer-tailored approaches have shown an advantage over generic strategies, have been well received by users, and may be cost-effective [39, 40]. There are limited data, however, on the usage rates of these systems, i.e., their reach. The history of the ASQ provides some insight: among Medicaid patients, screening rates for autism increased from 15% to 75% with concerted policy and practice initiatives [41], suggesting the need for targeted efforts to achieve widespread use. Regarding pediatrician involvement, an implementation study of Connected Kids: Safe, Strong, Secure suggests that implementing programs in pediatric clinics is feasible and that implementation success is more likely for those who understand the complexity of program adoption [42]. All 27 participating physicians reported using the materials with the target population. However, use of the CD-ROM option, which accompanied the program and would be more similar in nature to Safe N’ Sound, was much lower (5% for CD-ROM only). In a clinic-based study to promote healthy weight in preschool children, 32% of physicians used the project screening form; the authors note the barrier of time and the challenges of organizational change in clinic settings [43].

Safe N’ Sound is delivered from a computer kiosk that caregivers approach and use to answer questions that determine the content of the print materials generated. To fully benefit from the program, front desk staff must ask the patient to participate, the patient must choose to participate and answer truthfully, the patient or nurse must give the handout to the provider, and staff must use the handout and insert it in the patient chart. By design, the program therefore requires multiple groups to be involved in the process, and hence multiple behavioral, technical, organizational, and structural attributes can act as barriers or facilitators.

In addition to organizational changes to support program adoption, various strategies should be evaluated for increasing patient use. These could include passive approaches (e.g., increased marketing efforts, engaging product design, and product placement) and more active strategies such as establishing program completion as a necessary step prior to well-child visits, as in the case of reimbursable programs. Internet-based systems may also offer an advantage since they can be completed on a personal computer or handheld device, and systems linking health information from medical records and prevention messages are expected to offer substantial user advantages.

Funders, practitioners, and researchers have developed various measures to capture the reach of a program [44]. Some measures refer broadly to any type of program use (i.e., utilization), while others denote the proportion of eligible users receiving an effective treatment relative to a clearly defined denominator [45, 46]. In the current study, we describe reach according to three criteria, each reflecting a greater degree of fidelity to the program as intended, which also allows greater comparability across studies. Of further interest are variables related to initial adoption timetables, reach effects over time, and effects on reach of program adjustments. These questions can be addressed in future studies with more finely calibrated data collection intervals and tools.

This manuscript describes a case study of implementing technology-dependent prevention programs in clinical settings. The participating clinics self-selected, so while they generally represent typical clinical settings, there may be some bias associated with their participation. Adoption of the program by the clinics invited was about one third, with five of 16 clinics approached agreeing to participate. Given the low time and resource commitment required and the concordance of the program content with recommended anticipatory guidance topics, this participation rate suggests a need to better understand the factors that influence program adoption. As this study was designed to focus primarily on reach within participating practices (rather than being a study of program adoption), we did not systematically collect data to evaluate possible factors influencing adoption. We did provide a monetary incentive (to overcome a clear adoption barrier) and did not require a long-term adoption commitment so that we could study the natural evolution of the program. Further, there were several times during the study period when the consulting project staff found that the device was not accessible (e.g., having been turned off). While this is a limitation in that we cannot accurately document and account for these periods, it likely means that our measures of reach were conservative.

Future work is needed to determine the hierarchy, acceptability, and feasibility of the organizational changes required if the program is to be adopted in routine clinical practice, especially given the expectation of immediate success in these settings. Approaches might consider pediatric practice-based networks [47], injury prevention cooperatives (e.g., from the National Association of Children’s Hospitals and Related Institutions), and injury prevention coalitions. In addition, public health programs might be more effectively disseminated via quality improvement activities such as those addressed with Plan-Do-Study-Act cycles [48, 49] or approaches for lean process improvement [50], both of which are commonplace in medical settings. While program evaluations typically measure health outcomes, behaviors, and attitudes, we should also measure program effects on patient satisfaction, staffing, cost-effectiveness, and quality of care: metrics that by necessity drive the clinical encounter and thus are powerful implementation considerations.

Evidence-based programs are plentiful, yet few organizations routinely seek out, implement, and sustain such programs. The field of translational research will benefit greatly from further investigations about the factors related to the organization, the user or the program that affect program use and sustainability, and innovative applications of technology in health care settings.

Acknowledgments

Sources of support

This research was supported by the Intramural Research Program of the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

Footnotes

Implications

Policy: Organizational policies can support the adoption and implementation of evidence-based programming.

Research: Researchers should design for dissemination and should develop and utilize common metrics for dissemination research.

Practice: Practitioners should set realistic goals for program reach when implementing patient initiated prevention programs in clinical settings.

Contributor Information

Nancy L Weaver, Email: weavernl@slu.edu.

Tonja R Nansel, Email: nanselt@mail.nih.gov.

Janice Williams, Email: Janice.williams@carolinashealthcare.org.

Julia Tse, Email: tsej2@mail.nih.gov.

Maria Botello-Harbaum, Email: mbotello-harbaum@emmes.com.

Katherine Willson, Email: katerinawillson@gmail.com.

References

  • 1.Centers for Disease Control and Prevention, National Center for Injury Prevention and Control. (2011). Web-Based Injury Statistics Query and Reporting System (WISQARS). Accessed 18 Jul 2011.
  • 2.Injury and violence prevention: a pressing public health concern in 2011. Atlanta: Centers for Disease Control and Prevention (National Center for Injury Prevention and Control); 2011. [Google Scholar]
  • 3.Recommendations for preventive pediatric health care periodicity schedule 2008. Elk Grove Village: American Academy of Pediatrics; 2008. [Google Scholar]
  • 4.Hesse BW, et al. Trust and sources of health information: the impact of the Internet and its implications for health care providers: findings from the first Health Information National Trends Survey. Archives of Internal Medicine. 2005;165(22):2618–2624. doi: 10.1001/archinte.165.22.2618. [DOI] [PubMed] [Google Scholar]
  • 5.Chen J, et al. Injury-prevention counseling and behavior among US children: results from the second Injury Control and Risk Survey. Pediatrics. 2007;119(4):e958–e965. doi: 10.1542/peds.2006-1605. [DOI] [PubMed] [Google Scholar]
  • 6.Ballesteros MF, Gielen AC. Patient counseling for unintentional injury prevention. American Journal of Lifestyle Medicine. 2010;4(1):38–41. doi: 10.1177/1559827609348472. [DOI] [Google Scholar]
  • 7.Cohen LR, et al. Pediatric injury prevention counseling priorities. Pediatrics. 1997;99(5):704–710. doi: 10.1542/peds.99.5.704. [DOI] [PubMed] [Google Scholar]
  • 8.Wright MS. Pediatric injury prevention: preparing residents for patient counseling. Archives of Pediatrics & Adolescent Medicine. 1997;151(10):1039–1043. doi: 10.1001/archpedi.1997.02170470073013. [DOI] [PubMed] [Google Scholar]
  • 9.Speltz ML, et al. Assessment of injury risk in young children: a preliminary study of the injury behavior checklist. Journal of Pediatric Psychology. 1990;15(3):373–383. doi: 10.1093/jpepsy/15.3.373. [DOI] [PubMed] [Google Scholar]
  • 10.Bright futures: prevention and health promotion for infants, children, adolescents and their families. Elk Grove Village: American Academy of Pediatrics; 2011. [Google Scholar]
  • 11.TIPP: The Injury Prevention Program. (1994). Implementing Safety Counseling in Office Practice. Elk Grove Village, IL: American Academy of Pediatrics.
  • 12.Nansel T, et al. Preventing unintentional pediatric injuries: a tailored intervention for parents and physicians. Health Education Research. 2008;23(4):656–669. doi: 10.1093/her/cym041. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Nansel T, et al. Baby, Be Safe: the effect of pediatric injury prevention tailored communications provided in a primary care setting. Patient Education and Counseling. 2002;46(3):175–190. doi: 10.1016/S0738-3991(01)00211-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Vladutiu C, et al. Differential strength of association of child injury prevention attitudes and beliefs on practices: a case for audience segmentation. Injury Prevention. 2006;12(1):35–40. doi: 10.1136/ip.2004.007153. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Weaver N, et al. Translation of an evidence based tailored childhood injury prevention program. Journal of Public Health Management and Practice. 2008;14(2):177–184. doi: 10.1097/01.PHH.0000311897.03573.cc. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Glasgow RE, Lichtenstein E, Marcus AC. Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health. 2003;93(8):1261–1267. doi: 10.2105/AJPH.93.8.1261. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Kerner J, Rimer B, Emmons K. Introduction to the special section on dissemination. Dissemination research and research dissemination: how can we close the gap. Health Psychology. 2005;24(5):443–446. doi: 10.1037/0278-6133.24.5.443. [DOI] [PubMed] [Google Scholar]
  • 18.Fixsen DL, et al. Implementation research: a synthesis of the literature. Tampa: University of South Florida, The Louis de la Parte Florida Mental Health Institute, Department of Child & Family Studies; 2005. [Google Scholar]
  • 19.Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. American Journal of Public Health. 1999;89(9):1322–1327. doi: 10.2105/AJPH.89.9.1322. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Glasgow RE, et al. The RE-AIM framework for evaluating interventions: what can it tell us about approaches to chronic illness management? Patient Education and Counseling. 2001;44(2):119–127. doi: 10.1016/S0738-3991(00)00186-5. [DOI] [PubMed] [Google Scholar]
  • 21.Dzewaltowski D, et al. RE-AIM: evidence-based standards and a web resource to improve translation of research into practice. Annals of Behavioral Medicine. 2004;28(2):75–80. doi: 10.1207/s15324796abm2802_1. [DOI] [PubMed] [Google Scholar]
  • 22.Glasgow RE, Emmons KM. How can we increase translation of research into practice? Types of evidence needed. Annual Review of Public Health. 2007;28:413–433. doi: 10.1146/annurev.publhealth.28.021406.144145. [DOI] [PubMed] [Google Scholar]
  • 23.Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: issues in external validation and translational methodology. Evaluation & the Health Professions. 2006;29(1):126–153. doi: 10.1177/0163278705284445. [DOI] [PubMed] [Google Scholar]
  • 24.Glasgow RE. What types of evidence are most needed to advance behavioral medicine? Annals of Behavioral Medicine. 2008;35:19–25. doi: 10.1007/s12160-007-9008-5. [DOI] [PubMed] [Google Scholar]
  • 25.Klesges LM, et al. Beginning with the application in mind: designing and planning health behavior change interventions to enhance dissemination. Annals of Behavioral Medicine. 2005;29:66–75. doi: 10.1207/s15324796abm2902s_10. [DOI] [PubMed] [Google Scholar]
  • 26.Glasgow RE, Toobert DJ. RE-AIM for program planning and evaluation: overview and recent developments. Washington, DC: Center for Health Aging: Model Health Programs for Communities/National Council on Aging (NCOA); 2007. [Google Scholar]
  • 27.SPSS base 17.0 for Windows user's guide. Chicago: SPSS Inc; 2008. [Google Scholar]
  • 28.Marascuilo LA. Large-sample multiple comparisons. Psychological Bulletin. 1966;65(5):280–290. doi: 10.1037/h0023189. [DOI] [PubMed] [Google Scholar]
  • 29.Portnoy DB, et al. Computer-delivered interventions for health promotion and behavioral risk reduction: a meta-analysis of 75 randomized controlled trials, 1988–2007. Preventive Medicine. 2008;47:3–16. doi: 10.1016/j.ypmed.2008.02.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Sanghavi DM. Taking well-child care into the 21st century: a novel, effective method for improving parent knowledge using computerized tutorials. Archives of Pediatrics & Adolescent Medicine. 2005;159(5):482–485. doi: 10.1001/archpedi.159.5.482. [DOI] [PubMed] [Google Scholar]
  • 31.Scholer SJ, Hudnut-Beumler J, Dietrich MS. A brief primary care intervention helps parents develop plans to discipline. Pediatrics. 2010;125(2):e242–e249. doi: 10.1542/peds.2009-0874. [DOI] [PubMed] [Google Scholar]
  • 32.Vinson C, et al. Adapting research-tested computerized tailored interventions for broader dissemination and implementation. Translational Behavioral Medicine. 2011;1(1):93–102. doi: 10.1007/s13142-010-0008-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Prochaska JJ, et al. PACE+: interactive communication technology for behavior change in clinical settings. American Journal of Preventive Medicine. 2000;19(2):127–131. doi: 10.1016/S0749-3797(00)00187-2. [DOI] [PubMed] [Google Scholar]
  • 34.Goodrich D, et al. Integrating an internet-mediated walking program into family medicine clinical practice: a pilot feasibility study. BMC Medical Informatics and Decision Making. 2011;11(1):47. doi: 10.1186/1472-6947-11-47. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Bergman DA, Beck A, Rahm AK. The use of internet-based technology to tailor well-child care encounters. Pediatrics. 2009;124(1):e37–e43. doi: 10.1542/peds.2008-3385. [DOI] [PubMed] [Google Scholar]
  • 36.Beck A, et al. Using implementation and dissemination concepts to spread 21st-century well-child care at a health maintenance organization. The Permanente Journal. 2009;13(3):10–17. doi: 10.7812/tpp/08-088. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Bauer S, et al. Enhancement of care through self-monitoring and tailored feedback via text messaging and their use in the treatment of childhood overweight. Patient Education and Counseling. 2010;79(3):315–319. doi: 10.1016/j.pec.2010.03.014. [DOI] [PubMed] [Google Scholar]
  • 38.van Beelen ME, et al. ‘BeSAFE’, effect-evaluation of internet-based, tailored safety information combined with personal counselling on parents' child safety behaviours: study design of a randomized controlled trial. BMC Public Health. 2010;10:466. doi: 10.1186/1471-2458-10-466. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Glasgow RE, Eakin EG. Psychology in diabetes care. New York: Wiley; 2005. Medical office-based interventions; pp. 141–168. [Google Scholar]
  • 40.Suggs LS. A 10-year retrospective of research in new technologies for health communication. Journal of Health Communication. 2006;11(1):61–74. doi: 10.1080/10810730500461083. [DOI] [PubMed] [Google Scholar]
  • 41.Pinto-Martin JA, et al. Developmental stages of developmental screening: steps to implementation of a successful program. American Journal of Public Health. 2005;95(11):1928–1932. doi: 10.2105/AJPH.2004.052167. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Levin-Goodman R. Connected kids implementation project: preliminary findings December 2007. Elk Grove Village: American Academy of Pediatrics; 2007. [Google Scholar]
  • 43.McKee MD, et al. Implementation of a pilot primary care lifestyle change intervention for families of pre-school children: lessons learned. Patient Education and Counseling. 2010;79(3):299–305. doi: 10.1016/j.pec.2010.02.025. [DOI] [PubMed] [Google Scholar]
  • 44.Rabin BA, et al. A glossary for dissemination and implementation research in health. Journal of Public Health Management Practice. 2008;14(2):117–123. doi: 10.1097/01.PHH.0000311888.06252.bb. [DOI] [PubMed] [Google Scholar]
  • 45.Measuring reach of quitline programs. Phoenix: NAQC Quality Improvement Initiative; 2009. [Google Scholar]
  • 46.Glasgow RE, et al. Evaluating the impact of health promotion programs: using the RE-AIM framework to form summary measures for decision making involving complex issues. Health Education Research. 2006;21(5):688–694. doi: 10.1093/her/cyl081. [DOI] [PubMed] [Google Scholar]
  • 47.Wasserman R, Slora E, Bocian A. Current status of pediatric practice-based research networks. Current Problems in Pediatric and Adolescent Health Care. 2003;33(4):115–123. doi: 10.1067/mps.2003.12. [DOI] [PubMed] [Google Scholar]
  • 48.Deming D. The new economics for industry, government, and education. Cambridge: The MIT Press; 2000. [Google Scholar]
  • 49.Courtlandt CD, Noonan L, Feld LG. Model for improvement—part 1: a framework for health care quality. Pediatric Clinics of North America. 2009;56(4):757–778. doi: 10.1016/j.pcl.2009.06.002. [DOI] [PubMed] [Google Scholar]
  • 50.Varkey P, Reller MK, Resar RK. Basics of quality improvement in health care. Mayo Clinic Proceedings. 2007;82(6):735–739. doi: 10.4065/82.6.735. [DOI] [PubMed] [Google Scholar]

Articles from Translational behavioral medicine are provided here courtesy of Oxford University Press