Annals of Family Medicine. 2005 Jul;3(Suppl 2):s28–s32. doi: 10.1370/afm.341

Practice-Based Research in Primary Care: Facilitator of, or Barrier to, Practice Improvement?

Thomas Bodenheimer 1, Denise M Young 2, Kate MacGregor 1, Jodi Summers Holtrop 3
PMCID: PMC1466975  PMID: 16049078

Abstract

PURPOSE In what ways is primary care practice-based research a facilitator of practice improvement vs a barrier to practice change? This article aims to alert investigators to the pitfalls they may face in undertaking the dual agenda of research and practice improvement.

METHODS We derived examples of the relationship between the research and practice improvement goals of 17 Prescription for Health (P4H) grantees from verbal communications with the grantees, field notes from interviews and site visits, and entries made by grantees to an online diary managed by the P4H Analysis Team.

RESULTS An analysis of key themes identified factors facilitating and impeding the dual goals of research and practice improvement. The requirements of conducting research mandated by institutional review boards, including patient enrollment and consent, often constituted barriers to practice improvement. The choice of practices in which to conduct research and improvement activities and the manner in which the practices are approached may affect the outcome of both research and practice improvement goals. Approaching practices with a time-limited project mentality can interfere with a process of permanent practice change. The RE-AIM construct (reach, efficacy/effectiveness, adoption, implementation, and maintenance) is useful in designing research interventions that facilitate practice improvement.

CONCLUSIONS Projects that meld research studies and practice improvement goals must pay attention to the potential conflicts between research and practice change, and must attempt to design research studies so that they facilitate rather than inhibit practice improvement.

Keywords: Research, practice improvement, behavior change, primary care, practice-based research

INTRODUCTION

Evidence-based medicine has traditionally relied on efficacy research—research conducted under relatively ideal, controlled conditions. The conclusions of efficacy research, however, may not be appropriate to real-world conditions in which medical practices face multiple competing demands and patients have a variety of comorbidities and personal preferences. Efficacy research, moreover, is often conducted in academic medical center sites whose populations are not representative of the general US population.1 In contrast, effectiveness research refers to studies conducted under real-world conditions.

More than one half of all office visits in the United States are to primary care practitioners.2 Effectiveness research for many clinical questions thus needs to be conducted in primary care settings. Practice-based research networks (PBRNs) have been created as primary care laboratories for conducting effectiveness research.3,4

PBRNs have recently assumed a function separate from but related to their research mission: practice improvement. There is growing awareness that primary care is not able to live up to its promises to provide high-quality and accessible chronic illness and preventive care to all patients.5,6 Given these problems, PBRNs are increasingly seen as institutions that can simultaneously conduct effectiveness research and catalyze practice change.

A number of authors have reported on PBRN-based research with implications for practice improvement.7–10 These authors, however, do not comment on difficulties that may arise in balancing the dual goals of research and practice change.

In this article we explore the question, Is practice-based research in primary care a facilitator of or a barrier to practice improvement? The discussion uses as a framework the RE-AIM model developed by Glasgow and colleagues.11–13 The purpose of the article is to alert investigators to the pitfalls they may face in undertaking the dual agenda of research and practice improvement.

The 5 components of RE-AIM are reach, efficacy/effectiveness, adoption, implementation, and maintenance. Reach refers to the percentage and representativeness of the at-risk population affected by a quality improvement intervention. Efficacy/effectiveness signifies the extent to which the intervention enhances the outcomes of each person touched by the improvement. The AIM portion of the RE-AIM model turns attention to the organization in which a quality improvement intervention takes place. What proportion of organizations adopt the intervention as a practice improvement, and are these organizations a representative sample? Is the intervention implemented by the organization in which the study takes place, and is it maintained over time? In the Results section below, references to these RE-AIM components are italicized.

METHODS

Design

We conducted a cross-case content analysis on data gathered from 17 PBRNs receiving grants from the Robert Wood Johnson Foundation’s P4H program to study and encourage the adoption of interventions to improve patients’ health-related behaviors.14

Data Collection

Qualitative data included verbal communications with grantees, field notes from interviews and site visits, and entries that project team members made to an online diary managed by the P4H Analysis Team (A-Team). The primary data source was online diaries written by investigative team members involved in project implementation. The A-Team responded to entries to encourage clarification, elaboration, and reflection among project members. Entries were loaded into Folio Views (ver 4.11; Open Market Inc, Burlington, Mass), a data management program. Quotations cited in the Results section came from project diary entries and statements made by grantees at the September 2004 P4H closing convocation.

A-Team Analysis

The A-Team’s approach for analyzing these data is discussed by Cohen et al15 in this supplement. Weekly meetings were conducted to review and reflect on diary entries using an immersion-crystallization approach.16,17 Overarching organizing themes were identified for each grantee. This iterative process resulted in 17 preliminary case reports that articulated project themes in a comprehensive manner. These summary reports were then used to create a cross-case comparison to identify themes common to multiple grantees. Project members were frequently consulted for clarification of discrepancies and confirmation of A-Team findings.

Based on the oral presentations of P4H investigators at the closing convocation, the authors (including 1 member of the A-Team) identified cross-cutting issues important to multiple P4H investigators. Four themes were apparent: working with institutional review boards (IRBs), the influence of the patient consent process, appropriate patient selection for research projects, and sustaining practice involvement in research. Further in-depth interviews with investigators were conducted to better understand how these issues manifested themselves in the implementation of their studies. The A-Team member then conducted a word search of the A-Team database for each theme and reanalyzed the resulting sections in the database. For the IRB theme, the words “IRB” and “HIPAA” (Health Insurance Portability and Accountability Act) were used. For the patient consent theme, the word “consent” was used. For the sustainability theme, the words “sustainability” and “recruitment” were used. Data were de-identified before they were shared with the coauthors. This additional examination confirmed that the themes presented in the Results section were of concern to several investigators and may provide useful insight for those considering similar research.
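The keyword-based reanalysis described above can be sketched as a simple case-insensitive text search over de-identified diary entries. This is an illustrative sketch only: the actual analysis used the Folio Views data-management program, and the entry text and function names below are hypothetical.

```python
# Sketch of the theme-keyword search described in the Methods.
# Theme keywords mirror those reported in the text; diary entries are invented examples.

THEME_KEYWORDS = {
    "irb": ["irb", "hipaa"],
    "consent": ["consent"],
    "sustainability": ["sustainability", "recruitment"],
}

def entries_matching(entries, keywords):
    """Return the diary entries containing any of the given keywords (case-insensitive)."""
    lowered = [kw.lower() for kw in keywords]
    return [e for e in entries if any(kw in e.lower() for kw in lowered)]

def search_by_theme(entries):
    """Group entries by theme for reanalysis; one entry may match several themes."""
    return {theme: entries_matching(entries, kws)
            for theme, kws in THEME_KEYWORDS.items()}

diary = [
    "Still awaiting IRB approval at two of the sites.",
    "The consent process slowed patient flow today.",
    "Staff asked about sustainability after grant funding ends.",
]

results = search_by_theme(diary)
```

Each theme's matching entries would then be read in context and reanalyzed, rather than treated as counts.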

RESULTS

Grantees faced challenges when attempting to conduct research and encourage practice improvement at the same time. These challenges included (1) obtaining IRB approval for projects that are unlike typical clinical research trials, (2) the impact of the consent process, (3) tension between study patients and patients clinically appropriate for behavior change interventions, (4) attention to practice recruitment methods and strategies, and (5) engaging practices in sustainable improvement efforts.

Working With IRBs

For most P4H grantees, gaining IRB approval posed a barrier to research and practice improvement goals, thereby limiting the adoption of improvement activities by primary care practices. Although slow IRB processing and elaborate paperwork are common to all research, PBRN-based research adds further complications. Because PBRN research is conducted in multiple practices, both academic and community-based, IRB approval from several institutions may be required. Grantees stated, “A check of IRB status across the 8 sites found that 7 of 8 are still working on IRB approval. Each board has different takes on the research,” and “Our major problem is convincing 1 of the 2 institutions to extend assurance for unaffiliated physicians in the 1 practice that does not fall under either IRB.”

IRBs have experience evaluating traditional randomized clinical trials in which their primary role involves protecting patients. Lack of experience with PBRN-based research may lead to cautious and less flexible behavior from IRBs. In addition to studying patients, PBRN-based research assesses practices. One grantee reported that the practice assessment part of the study had to be dropped because of IRB opposition. One grantee spoke for many others in expressing that IRB consent should not be so problematic because the research is noninvasive.

The Research Consent Process

Almost all grantees enrolled patients, caregivers, or both into their research studies. The process of enrolling patients often involved research assistants explaining long consent forms. This consent process complicated the implementation of practice improvement interventions.

First, the consent discussions sometimes interrupted patient flow. Because smooth patient flow is critical to the tight appointment schedules universal in primary care, its interruption may alienate practice personnel from the project. One grantee reported that practices motivated to conduct the study complained of difficulties enrolling patients, who demanded staff time to answer questions about the study.

Second, the consent process changed the nature of the intervention. In 2 cases, this change turned out to have a positive impact, with the patient consent discussion activating patients to consider healthy behavior changes. One grantee said, “The consent process was an important element in initiation of successful behavioral change.”

Whether patient consent requirements facilitate or complicate a behavior change and practice change intervention, it must be recognized that the very process of obtaining consent shifts an effectiveness study toward an efficacy study by moving the intervention away from a real-world situation.

Study Patients vs Appropriate Patients

There is a difference between asking practices to recruit a specified group of patients for a study and engaging practices in projects that improve the care of all patients with a particular health problem. The former approach restricts the reach of the intervention by reducing the percentage and representativeness of the at-risk population affected by an intervention study.

Some grantees found that patients recruited for the purposes of conducting research on a behavior change intervention differed from the patients who would be clinically appropriate to receive the intervention. For example, in a study that encouraged clinicians to engage in goal-setting discussions with patients with cardiovascular risk, action plan forms were placed in front of the chart of study patients, thereby prompting clinicians to have goal-setting discussions during those study visits. One grantee stated, “This was an unnatural intervention, since goal-setting discussions only make sense when the clinician and patient decide during the visit that such a discussion would be worthwhile.” Even with this disconnect between study subjects and clinically appropriate patients, the research did stimulate practice improvement because clinicians learned how to engage in goal-setting discussions with their patients.

Practice Recruitment Issues

The design of research studies can engender conflict between research and improvement goals. When PBRN practices, rather than clinicians or patients, are randomized, some practices endure the disruption of enrolling patients without the benefit of a practice-improving intervention. As a result, one P4H grantee reported “depression and anger in control practices.” Another grantee decided not to design a trial with control practices because such a design could interfere with the PBRN’s practice improvement agenda.

One grantee provided health behavior interventions to a group of practices owned by a hospital system and to another group of independent practices. The hospital system told the practices to participate, resulting in a variety of responses from the positive (“I was interested in this anyway”) to the negative (“When will this be over?”). One might expect that practices coerced into participating would be less inclined to initiate improvements than practices choosing to participate. In this case, however, the grantee stated, “At project end, all practices in the hospital system implemented some aspect of a practice improvement plan, while only 6 of 10 independent practices did so.”

The choice of practices in which to conduct research and improvement activities, and the manner in which the practices are approached, influence the degree of adoption of an improvement by the universe of practices that might benefit from the improvement.

Sustainability

How practices are approached to participate in research and improvement activities may also affect the long-term sustainability—the maintenance component of RE-AIM—of the proposed interventions. If the project is marketed as research with a beginning and an end, the practice may view it as a self-limited intervention that, in the words of one grantee, “will be over soon.” If the project is explained as an improvement effort, the practice may embrace the innovation as a permanent change. A grantee offered the insight that the research team and member practices may have a “project mentality,” treating the intervention as having a definable endpoint, rather than seeing it as a permanent improvement.

One sustainability issue involves placing research assistants in practices vs asking the practices to perform the behavior change interventions with their own personnel. P4H projects involving intensive behavior change counseling relied on externally placed nurse-educators, coaches, community health associates, or medical students. In such cases, practice change is less encouraged because the practice continues its old ways. On the other hand, most practices do not have personnel with time to do things in a new way. Grantees stated, “Their [practice staff’s] biggest concern is how the project impacts their job and what will they be expected to do,” and “Much of [a staff member’s] anxiety seemed to be alleviated when I told her that a research team would come in and work with the patient intervention piece with the physician.”

A project that initiated behavior change in practice staff through distribution of pedometers and initiation of competition among practice staff (hypothesizing that a motivated staff would encourage patients to adopt healthier behaviors) showed that projects can create enthusiasm rather than anxiety among practice staff, thereby enhancing the probability of permanent change.

The fact that projects are funded by grants can thwart sustained practice change. Research assistants disappear once funding stops; for example, a behavior change coach was eliminated when grant money ran out, leaving practices without the means to maintain the intervention. One grantee, addressing sustainability, suggested that trained students, rotating through practices year after year, could provide assistance to clinics over an indefinite period of time. Students have flexible schedules, enabling them to conduct research projects and assist practices while obtaining valuable experience and class credit.

Interventions that provide information and training to clinicians and practice staff appeared to align research and sustainable improvement goals. One grantee provided tools and training to help pediatric practices establish systems (1) to document body mass index or provide an interpretation of growth status at well child visits, and (2) to initiate counseling on diet, physical activity, and related behaviors. The physicians and staff members were grateful to obtain the information and tools, leading to improved management of their patients.

Another grantee trained physicians in a new and sometimes threatening paradigm, by which physicians were asked to work in a collaborative fashion with patients rather than in the traditional mode of physicians telling patients what to do. This grantee noted more resistance to physicians taking on the new paradigm as a permanent practice change.

DISCUSSION

Applying the 5 RE-AIM components to projects in PBRNs may assist in harmonizing the research and practice improvement goals of the projects.

  • Reach: If research studies can be designed to avoid enrolling and consenting patients, they may affect a larger and more representative sample of the general population.

  • Efficacy/effectiveness: Studies of practice improvement interventions that deviate as little as possible from the day-to-day realities of clinical practice shift the research toward the effectiveness pole of the efficacy/effectiveness continuum.

  • Adoption: In the majority of the P4H projects, practices were asked to participate; random sampling of practices was not undertaken. Practices that volunteer are more receptive to sustaining an intervention than randomly sampled practices; however, limiting improvement projects to practices that volunteer reduces the breadth of adoption of practice improvement.

  • Implementation: Some of the examples presented above suggest that research studies may fail to encourage a practice to implement the research intervention, particularly if the intervention is carried out by external research assistants and if the project is marketed as a limited research effort.

  • Maintenance: The on-again, off-again nature of research funding undermines the maintenance (sustainability) of a practice improvement intervention over time. Mechanisms put into place at the beginning of a research or practice improvement project—for example, using students who continue to work in the practice year after year—could increase the chances of interventions being sustained.

CONCLUSIONS

Practice-based research has the potential to bring quality improvement into primary care practices, to train and assist practices to adopt these improvements, and to evaluate how the improvements are working for practitioners, practice staff, and patients. Research and practice improvement can be natural partners, with research acting as a facilitator of practice change.

How the research is conducted, however, matters a great deal. Some of the examples cited above are emblematic of pitfalls that can turn facilitators into barriers. Research projects with dual goals—the generation of knowledge and the improvement of practice quality—must try to avoid these pitfalls. The RE-AIM construct can be used as a checklist in designing projects in which research truly serves as a facilitator of practice improvement.

Conflicts of interest: none reported

Funding support: This work was supported by Prescription for Health, a national program of The Robert Wood Johnson Foundation with support from the Agency for Healthcare Research and Quality.

REFERENCES

  • 1. Green LA, Fryer GE, Yawn BP, et al. The ecology of medical care revisited. N Engl J Med. 2001;344:2021–2025.
  • 2. American Academy of Family Physicians. The New Model of Primary Care: Knowledge Bought Dearly. Washington, DC: AAFP; 2004.
  • 3. Lindbloom EJ, Ewigman BG, Hickner J. Practice-based research networks: the laboratories of primary care research. Med Care. 2004;42(4 Suppl):III45–III49.
  • 4. Nutting P, Beasley JW, Werner JJ. Practice-based research networks answer primary care questions. JAMA. 1999;281:686–688.
  • 5. Grumbach K, Bodenheimer T. A primary care home for Americans: putting the house in order. JAMA. 2002;288:889–893.
  • 6. Future of Family Medicine Project Leadership Committee. The future of family medicine: a collaborative project of the family medicine community. Ann Fam Med. 2004;2(Suppl 1):S3–S32.
  • 7. Main DS, Quintela J, Araya-Guerra R, Holcomb S, Pace WD. Exploring patient reactions to pen-tablet computers: a report from CaReNet. Ann Fam Med. 2004;2:421–424.
  • 8. Ariza AJ, Binns HJ, Christoffel KK. Evaluating computer capabilities in a primary care practice-based research network. Ann Fam Med. 2004;2:418–420.
  • 9. Feifer C, Ornstein SM. Strategies for increasing adherence to clinical guidelines and improving patient outcomes in small primary care practices. Jt Comm J Qual Saf. 2004;30:432–441.
  • 10. Beasley JW, Hankey TH, Erickson R, et al. How many problems do family physicians manage at each encounter? A WReN study. Ann Fam Med. 2004;2:405–410.
  • 11. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–1327.
  • 12. Glasgow RE, McKay HG, Piette JD, Reynolds KD. The RE-AIM framework for evaluating interventions: what can it tell us about approaches to chronic illness management? Patient Educ Couns. 2001;44:119–127.
  • 13. Glasgow RE, Lichtenstein E, Marcus A. Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy to effectiveness transition. Am J Public Health. 2003;93:1261–1267.
  • 14. Cifuentes M, Fernald DH, Green LA, et al. Prescription for Health: changing primary care practice to foster healthy behaviors. Ann Fam Med. 2005;3(Suppl):S4–S11.
  • 15. Cohen DJ, Tallia AF, Crabtree BF, Young DM. Implementing health behavior change in primary care: lessons from Prescription for Health. Ann Fam Med. 2005;3(Suppl):S12–S19.
  • 16. Miller WL, Crabtree BF. The dance of interpretation. In: Crabtree BF, Miller WL, eds. Doing Qualitative Research. 2nd ed. Thousand Oaks, Calif: Sage Publications; 1999:127–143.
  • 17. Borkan J. Immersion/crystallization. In: Crabtree BF, Miller WL, eds. Doing Qualitative Research. 2nd ed. Thousand Oaks, Calif: Sage Publications; 1999:179–194.
