Health Services Research. 2007 Feb;42(1 Pt 1):84–103. doi: 10.1111/j.1475-6773.2006.00610.x

Clinical Practice Guideline Implementation Strategy Patterns in Veterans Affairs Primary Care Clinics

Sylvia J Hysong, Richard G Best, Jacqueline A Pugh
PMCID: PMC1955743  PMID: 17355583

Abstract

Background

The Department of Veterans Affairs (VA) mandated the system-wide implementation of clinical practice guidelines (CPGs) in the mid-1990s, arming all facilities with basic resources to facilitate implementation; despite this resource allocation, significant variability still exists across VA facilities in implementation success.

Objective

This study compares CPG implementation strategy patterns used by high and low performing primary care clinics in the VA.

Research Design

Descriptive, cross-sectional study of a purposeful sample of six Veterans Affairs Medical Centers (VAMCs) with high and low performance on six CPGs.

Subjects

One hundred and two employees (management, quality improvement, clinic personnel) involved with guideline implementation at each VAMC primary care clinic.

Measures

Participants reported specific strategies used by their facility to implement guidelines in 1-hour semi-structured interviews. Facilities were classified as high or low performers based on their guideline adherence scores calculated through independently conducted chart reviews.

Findings

High performing facilities (HPFs) (a) invested significantly in the implementation of the electronic medical record and locally adapting it to provider needs, (b) invested dedicated resources to guideline-related initiatives, and (c) exhibited a clear direction in their strategy choices. Low performing facilities exhibited (a) earlier stages of development for their electronic medical record, (b) reliance on preexisting resources for guideline implementation, with little local adaptation, and (c) no clear direction in their strategy choices.

Conclusion

A multifaceted, yet targeted, strategic approach to guideline implementation emphasizing dedicated resources and local adaptation may result in more successful implementation and higher guideline adherence than relying on standardized resources and taxing preexisting channels.

Keywords: Guidelines, qualitative research, implementation research, evidence based medicine, health services research


Clinical practice guidelines (CPGs) have been used increasingly to standardize diagnosis and treatment procedures based on the latest clinical evidence, and thereby improve the quality of care. The Veterans Health Administration (VHA), the largest integrated health care system in the United States, mandated the implementation of CPGs throughout all its facilities starting in 1996; supportive resources were provided, including Veterans Affairs (VA) developed or approved CPGs, external performance evaluation on guideline-specific measures, an electronic medical record with clinical reminder capabilities, and national training sessions on implementation principles (Kizer 1996). Since that time, VA has shown marked improvement in quality of care compared with previous performance levels (Jha et al. 2003). Further, recent research indicates the care currently provided at VA facilities is better than that received by Medicare fee-for-service program participants, as reflected in 11 of 13 preventive, outpatient, and inpatient quality of care indicators (Jha et al. 2003). Research comparing VA care to care provided by the private sector similarly found VA delivered higher quality of care overall, with particular advantages in chronic disease and preventive care (Asch et al. 2004), and comparable performance in chronic disease care to commercial managed care organizations (Singh and Kalavar 2004). Despite these improvements, significant performance variability still exists among individual facilities (Doebbeling et al. 2002; Krein et al. 2002; Fung et al. 2004). Given increasingly positive perceptions and evidence of the utility of CPGs in improving quality of care (VanOstenberg 1996; Smith and Hillner 2001; Bartell and Smith 2005; Pagaiya and Garner 2005), it is important to identify factors associated with CPG implementation success.

Previous research has identified several potential sources of variability, including differences in mental models toward guidelines (Hysong et al. 2005), leadership style and commitment (Best et al. 2003), knowledge of the guidelines (Ward et al. 2002), organizational features, and patient population characteristics (Vaughn et al. 2002). Another source of variability in implementation success may lie in the strategy patterns used by individual facilities to implement CPGs. Strategies such as peer opinion leaders, academic detailing, and audit and feedback have been associated with implementation success of specific guidelines (Jamtvedt et al. 2000; Thomson O'Brien et al. 2000; Markey and Schattner 2001). However, research identifying patterns of strategies associated with implementation success across multiple guidelines is scarcer.

CPG IMPLEMENTATION: EVIDENCE FROM TRIAL STUDIES

CPG implementation research has examined the effect of various strategies on implementation success; reviews by the Cochrane Effective Practice and Organization of Care Group (EPOC) have identified over 20 categories of implementation interventions (Grol, Wensing, and Eccles 2005). Trial studies of single strategies suggest that certain strategies, such as peer opinion leaders, academic detailing, and clinical reminders, are useful for implementation in specific situations. Trial studies evaluating multifaceted intervention strategies have yielded varying results in identifying an optimal combination of strategies for improving CPG adherence. Similar to the single intervention research, multifaceted interventions yield positive results for condition-specific outcomes (Frijling et al. 2003), but evidence on the effectiveness of multifaceted interventions on guideline implementation as a whole is less clear. A recent review of the area (Grimshaw et al. 2004) found that "multifaceted interventions did not appear to be more effective than single interventions and the effects of multifaceted interventions did not appear to increase with the number of component interventions" (p. 61), in part because of considerable variation in the outcomes of the studies reported in the review.

Further complicating the translation of these findings to the real world of health care delivery, most health care facilities must address multiple CPGs simultaneously, not sequentially or individually as has been examined in the published trials. Research has suggested that when multiple CPGs are applied simultaneously to patients, significantly more time is required than is available in a typical patient visit, and in some cases could have adverse effects (Boyd et al. 2005; Ostbye et al. 2005); hence the need to study guideline adherence across multiple conditions. Published research has yet to address what patterns of strategies work best to implement multiple CPGs simultaneously. The present study addresses this gap by qualitatively comparing implementation strategies used by VA primary care clinics that have high and low levels of adherence for six different CPGs.

METHOD

Site Selection

The present data are part of a larger data collection effort at 15 VA facilities, which examined barriers and facilitators to CPG implementation. The original 15 facilities were selected as a stratified purposive sample from four geographically diverse regional networks based on their adherence to CPGs: facilities with a sustained record of high adherence to CPGs (high performing facilities, or HPF), facilities with a sustained record of low adherence (low performing facilities, or LPF), and facilities whose adherence record had significantly improved over a 2-year period (improvers). Group membership was determined via External Peer Review Program (EPRP) rankings, a random chart abstraction process conducted by an external contractor to audit performance at all VA facilities (see EPRP Rankings section). Additionally, to be eligible, facilities had to be sufficiently large to accommodate at least two primary care teams, each containing at least three MD providers. To address the present paper's specific research question, only sites from the high (n = 3) and low (n = 3) performing categories were included in the sample. Despite its small size (which would be inadequate were we using it to conduct probability-based hypothesis tests), a purposive sample of this sort, if selected rigorously, i.e., "explicitly and thoughtfully picking cases that are congruent with the study purpose and that will yield data on major study questions" (Patton 1999), can yield important findings not often discoverable through more probabilistic methods (Devers 1999). As Daft and Lewin (1990, p. 6) pointed out regarding studies of organizations: "The average organization does not exist and by definition is never on the frontier of the phenomenon under study. … The use of outlier research—studying the best and worst cases—is helpful when making prescriptive recommendations for practice." The included sites ranged in size and type from small, rural, general medicine and surgery centers (approximately 20,000 patients) to large tertiary care facilities in major metropolitan areas (>80,000 patients). This range reflects the general distribution of Veterans Affairs Medical Centers across the country.

Participants

One hundred and two employees across six facilities were interviewed. Within each facility, we contacted the chief quality officer and/or the associate chief of staff for primary care, who helped identify clinical and managerial personnel with the requisite knowledge, experience, and involvement in guideline implementation to serve as potential interviewees. Specifically, we asked for names of individuals who planned implementation efforts, served as formal or informal “guideline champions,” or were involved in guideline implementation in their outpatient clinics, across three hierarchical levels of the organization: facility leadership (e.g., facility director, chief of staff), middle and support management (e.g., quality assurance manager, primary care chief, information technology manager), and primary care personnel (e.g., physicians, nurses, nurse practitioners, and physicians' assistants). The research team then directly invited these individuals to participate. These individuals often suggested others for us to include, in particular others at the frontline of care. Although using facility leadership to identify potential participants may have resulted in a bias toward representing the facility in a more positive light, the results of our interviews suggest that participants were quite willing to discuss frustrations and concerns.

All three levels were adequately represented in the sample (see Table 1). No significant differences in the distribution of participants were found by facility or hierarchical level (χ²(10), NS; see the illustrative sketch following Table 1). All participation was strictly voluntary and consistent with local institutional review board requirements.

Table 1.

Number of Participants by Facility and Hierarchical Level

Hierarchical Level

Facility PC Personnel Middle/Support Management Facility Leadership Total
1 14 2 3 19
2 6 10 7 23
3 7 4 3 14
4 4 8 4 16
5 3 4 4 11
6 7 10 2 19
Total 41 38 23 102
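
As an illustration only (not the authors' original analysis code), the chi-square check of the Table 1 distribution described above could be reproduced roughly as follows; the sketch assumes Python with scipy available.

```python
# Illustrative re-check of the Table 1 counts (facility x hierarchical level).
# This is a hedged sketch, not the analysis actually run by the authors.
from scipy.stats import chi2_contingency

# Rows: facilities 1-6; columns: PC personnel, middle/support management, facility leadership.
counts = [
    [14,  2, 3],
    [ 6, 10, 7],
    [ 7,  4, 3],
    [ 4,  8, 4],
    [ 3,  4, 4],
    [ 7, 10, 2],
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")  # df = (6 - 1) * (3 - 1) = 10
```

With expected cell counts this small the chi-square approximation is rough at best; the sketch is shown only to make the reported nonsignificant result concrete.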

Measures and Procedures

EPRP Rankings

We obtained chart abstraction EPRP data from VHA's Office of Quality and Performance (OQP) for fiscal year 2001, reflecting facility-specific compliance with the guideline recommendations for each of six conditions: diabetes, depression, tobacco use cessation, ischemic heart disease, chronic obstructive pulmonary disease, and hypertension. Each condition is monitored via multiple performance indicators; in total, 20 performance indicators were used to describe compliance across the six conditions. Facilities were rank ordered from 1 to 15 (15 being the highest performer) on each performance indicator; all 20 performance indicator ranks were then summed to obtain an indicator rank sum (IRSUM) score (higher IRSUM scores indicate higher performance).1 Facilities were then ranked by IRSUM score to identify the three highest and three lowest performing facilities.
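
A minimal sketch of the IRSUM computation just described, under the assumption that each facility has a numeric score on each of the 20 indicators; the data structure and function name below are hypothetical, not taken from OQP.

```python
# Hypothetical illustration of the IRSUM rank-sum scoring described above.
from typing import Dict, List

def irsum_scores(scores: Dict[str, List[float]], n_indicators: int = 20) -> Dict[str, int]:
    """scores maps facility -> list of indicator values (higher = better adherence).
    Returns each facility's indicator rank sum (IRSUM)."""
    facilities = list(scores)
    irsum = {f: 0 for f in facilities}
    for i in range(n_indicators):
        # Rank facilities 1..N on indicator i, with rank N going to the best performer.
        ordered = sorted(facilities, key=lambda f: scores[f][i])
        for rank, facility in enumerate(ordered, start=1):
            irsum[facility] += rank
    return irsum

# Facilities would then be rank ordered by IRSUM to pick the three highest and three
# lowest performers; ties are ignored here for simplicity.
```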

Interviews

Three pairs of interviewers (research investigators of various backgrounds—nursing, medicine, sociology, psychology, with in-depth knowledge of the project, interviewing, and field note protocol) visited each participating site for 2 days during the Spring of 2001. Each pair interviewed participants for 1 hour either individually or in small groups, depending on the participants' schedule and availability. Interviewers took turns leading the interview; one interviewer led the conversation, while the other took notes and asked follow-up questions; interviewers then reversed roles for the following interview. To minimize interviewer bias, interviewers were (a) blinded to the facility's performance category and (b) paired with different partners for each site visit.

Participants were asked how CPGs were currently implemented at their facility, including strategies, barriers, and facilitators (see Interview Protocol in Appendix A). Although we used prepared questions to guide the interview process, participants were free to offer additional relevant information not explicitly solicited by the interviewers. Interviews were audio recorded with the participants' consent.

Data Analysis

Interview transcripts were analyzed via coding techniques commonly used in grounded theory (Strauss and Corbin 1998) and content analysis (Weber 1990), using qualitative data analysis software (Scientific Software Development 1999). The lead interviewer searched the original transcript for instances of CPG implementation strategies; the secondary interviewer then reviewed the coded transcripts for corroboration. We defined a strategy as any effort expressly undertaken for guideline implementation purposes. All coders participated in frame-of-reference training, to ensure common understandings of the concepts to be coded. Any disagreements in coding were discussed by the two coders and resolved by consensus.

Following this initial coding, the passages identified in the aforementioned process were classified into themes (Strauss and Corbin 1998); definitions for these themes were composed by the principal author and corroborated by a second investigator. This resulted in a total of 368 coded passages yielding 122 unique strategies reported by the six facilities as activities or initiatives undertaken to help implement and adhere to CPGs (by unique strategy we mean a specific type of activity, such as using clinical reminders; a single strategy could be reported by more than one respondent, and in more than one facility). Of these 122 strategies, only 16 were reported by more than one facility; these 16 strategies and their definitions are listed in Table 2.
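
To make the counting step concrete, the sketch below tabulates how many facilities reported each coded strategy, which is the basis for the counts later shown in Table 3; the passage records are hypothetical and do not reproduce the Atlas.ti output.

```python
# Hypothetical tabulation of coded strategies by facility (illustration only).
from collections import defaultdict

# Each coded passage reduced to (facility_id, strategy_name); a strategy counts once
# per facility regardless of how many respondents there mentioned it.
coded_passages = [
    (1, "Clinical reminders"),
    (1, "Clinical reminders"),                 # repeat within a facility counts once
    (2, "EPRP as monitoring/feedback tool"),
    (4, "Clinical reminders"),
    (5, "Customizing clinical reminders"),
]

facilities_by_strategy = defaultdict(set)
for facility, strategy in coded_passages:
    facilities_by_strategy[strategy].add(facility)

# Strategies reported by more than one facility.
shared = {s: len(f) for s, f in facilities_by_strategy.items() if len(f) > 1}
print(shared)  # {'Clinical reminders': 2}
```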

Table 2.

Strategy Definitions

Clinic configuration: Changes to the membership of a clinic (e.g., moving toward a configuration of one doctor, one nurse, and one clerk for each clinic team)
Clinical reminders: Computerized (primarily) or physical (occasionally) reminders instructing the provider that some clinical action is due for a particular patient
Clinical reminders (under development): Facilities have not finished developing a full set of clinical reminders for use with CPRS
Computerized template: Standardized computer screens for entry of specific patient information, such as depression screening, standardized text entries for progress notes
CPGs committee: Presence of a committee whose sole purpose is to review, evaluate, and discuss guideline-related issues
Customizing clinical reminders: Customizing the human–computer interface reminders to make them more user-friendly
Data warehouse: Integration of electronic medical records across multiple facilities in a region
Electronic communication: Telecons, videoconferencing, synchronous (e.g., IM) and asynchronous (e.g., e-mail) telecommunications methods
Electronic medical record (fully implemented): The facility reports that their electronic medical record is fully operational and running
Electronic record exchange across facilities: Software package that allows the electronic transfer of records or orders from one system to another
EPRP as monitoring/feedback tool: Using data from EPRP reports as a form of feedback on guideline adherence for the providers
Executive boards handle guideline issues: Existing committees (not dedicated to guidelines) like the medical executive board, clinical executive board, professional standards board discuss performance improvement efforts related to guidelines
External performance benchmarking: Comparing internal performance to some external reference
Identifying a champion: Identify someone knowledgeable and supportive of clinical practice guidelines to serve as a credible source to change attitudes and beliefs about guidelines
Physician specific EPRP reports: Each individual physician gets a report based on EPRP data on their individual performance, when available
Staff/team meetings: Using regular staff or team meetings to disseminate guideline information

CPRS, Computerized Patient Record System; EPRP, External Peer Review Program.

FINDINGS

Strategies Used by All Facilities

No single strategy was reported by all six facilities. However, two strategies were reported in at least one interview in five of the six facilities: use of clinical reminders, and use of EPRP as a monitoring or feedback tool. Both of these strategies were available to all facilities via national initiatives: reminders via the Computerized Patient Record System (CPRS, the VA's electronic medical record), and audit reports from data gathered by EPRP. However, to be useful for CPG implementation, both of these resources required further local deployment. For example, although CPRS could generate clinical reminders, no nationally developed reminders were available at the time of data collection, so local IT personnel were required to write the logic to activate each reminder (a sketch of this kind of logic appears below). Similarly, although facilities received audit data, further analysis and presentation work were needed to distribute the data to clinics and providers.
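
As a purely illustrative example of the kind of locally written reminder logic referred to above (the rule, field names, and one-year interval are hypothetical and are not taken from CPRS):

```python
# Hypothetical reminder-activation rule of the sort local IT staff had to write.
# Field names and the interval are illustrative; this is not CPRS code.
from datetime import date, timedelta
from typing import Optional, Set

def foot_exam_reminder_due(diagnoses: Set[str], last_exam: Optional[date], today: date) -> bool:
    """Fires when the patient carries a diabetes diagnosis and the targeted action
    has not been documented within the past year (reminders are 'dated')."""
    if "diabetes" not in diagnoses:
        return False
    if last_exam is None:
        return True
    return (today - last_exam) > timedelta(days=365)

# Example: exam documented 14 months ago -> the reminder pops up as due again.
print(foot_exam_reminder_due({"diabetes", "hypertension"}, date(2000, 12, 1), date(2002, 2, 1)))
```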

Clinical Reminders

Five facilities reported using electronic clinical reminders in CPRS, reminding the provider that some clinical action is due for the patient, as explained by this physician:

There are reminders that come up in … CPRS that initiate if they have a diagnosis of hypertension, COPD, diabetes,—for their Hemoccult, or different things that have been generated by their diagnosis. And then some things are real general like smoking … So those guidelines come up when I initially start seeing a patient, I scan through those and see what needs to be addressed at that visit. They are dated. So if they've been addressed, they won't pop up as being due.

All facilities that reported using clinical reminders mentioned that these reminders had played an important role in guideline adherence, as illustrated by this physician's comments:

Well the most facilitating thing is the computerization; you're much more advanced than I was in the military as far as electronic record, which allows then you to really see the results of pathways. And then you have reminders and things like that. Have you done this? Have you done that? … and that facilitates it all.

EPRP as a Monitoring or Feedback Tool

As part of the EPRP, all facilities receive quarterly reports documenting facility performance on various performance indicators. Of note, the national sampling strategy for these chart audits has never been powered to evaluate individual provider behavior; rather it is used to evaluate facility and regional level performance. Most facilities indicated that they shared this report with providers and other primary care personnel. However, the manner in which this information was used varied by facility. One LPF reported that EPRP was the primary and/or only source of performance data for providers:

To be honest, most of the monitoring has really been done through the EPRP data collection. If one looks at some of the other guidelines, such as our COPD guideline, there … we really don't have a formal system set up for monitoring that. So if one really looks at performance and outcomes, EPRP remains probably our primary source of those types of data.

HPF, however, tended to use the EPRP report as a starting point for more detailed, provider-specific feedback:

Well what happens is once we get the results of the [EPRP] survey, we also receive the individual worksheets for each patient that is looked at by the abstractor. … One is generated, the computer-generated per patient, that we know exactly what is falling out as far as CPG's or PI's [performance indicators] go. [I] will then sit down with those, identify who the Primary Care provider is, highlight the ones that should have been met. … I will identify everything then give it to the chief of staff along with the cover memorandum. … on the areas that we know that we are not compliant because of provider problems, the information is given to the chief of staff where they send it forward to the individual provider saying, “this is where you were found lacking, etcetera, etcetera, etcetera.”… And then all that information has been sent to one of my peers up in quality management who will then work with the providers who superviser when it comes down for re-privileging or re-credentialing or what have you [Note—this process happens once a month].

Strategies Common to HPF

Table 3 lists the individual strategies reported by more than one facility, the number of HPF and LPF reporting each strategy, and their corresponding EPOC category. Definitions of each strategy appear in Table 2. As can be seen from the table, a common theme across all three HPF is their noticeable emphasis on technology and infrastructure, particularly use of CPRS. A fully implemented CPRS, including customized clinical reminders and templates, and customized physician-specific reports based on CPRS data, all form part of what appears to be a strategic effort to create an infrastructure of actionable information:

Table 3.

Number of HPF and LPF Reporting a Given Strategy*

Strategy HPF LPF Total EPOC Category
Clinical reminders 3 2 5 CR
Computerized template 3 1 4 OS
Customizing clinical reminders 3 1 4 CR
Physician specific EPRP reports 3 1 4 AF
Fully implemented electronic medical record 3 0 3 OS
EPRP as a monitoring tool 2 3 5 AF
CPG committee 2 0 2 OS
Electronic communication 2 0 2 OS
External performance benchmarking 2 0 2 AF
Identifying a champion 2 0 2 OL
Staff/team meetings 1 3 4 ED
Clinical reminders under development 0 3 3 CR
Clinic configuration 0 2 2 OS
Data warehouse 0 2 2 AF
Electronic record exchange across facilities 0 2 2 OS
Executive boards handle guideline issues 0 2 2 OS
*

EPOC categories are listed for reference only. Only strategies that were reported by more than one facility are presented here. Strategy definitions for all strategies listed are available in Table 2. HPF, high performing facility; LPF, low performing facility; AF, audit and feedback; CR, clinical reminders; ED, education; OL, opinion leaders; OS, organizational structural.

We have a totally electronic medical record here. So that makes our chart reviews a lot easier. You could sit at your desk, type, or your laptop here and dial in and review a record. You can be a physician at home and, ‘oh I forgot to document this.’ And sit down at your home computer and dial-in and be able to enter a progress note.

Of the strategies common to all HPF, only one strategy, the fully implemented electronic medical record, was also unique to HPF (i.e., reported by all HPF and no LPF). Thus, some of the strategies common across all HPF were also reported by some LPF. However, examination of the actual reports of strategy use suggests that these strategies manifest themselves differently for each type of facility. For example, in HPF, the strategy of physician-specific reports occurs as a direct result of the data contained in CPRS, as explained by this individual at a HPF:

I've got my computer setup where I can just plug in the numbers, get a new set of numbers and then update my overall cumulative scores within ten, fifteen minutes. And that's what gets fed back very, very quickly. And so I think so far as we've seen an improvement over the last year.

Conversely, in one of the LPF, this strategy refers to manual modification of the EPRP report (with therefore limited data for individual providers):

… we also use the EPRP report. And that is really a poor report. I am having to take that thing and completely redo that whole entire report so that I can get individual feedback to the providers on their individual patients. Plus, not to mention the fact, we have to break out … I do not know how I am doing here from the EPRP report because they lump our CBOCs [community based outpatient clinics] and satellites and everything into one report. And so I have to break all that out so I can even identify where the, you know, opportunities for improvement are.

Strategies Common to LPF

Like HPF, LPF emphasized technology and infrastructure, concentrating on the electronic medical record. However, the strategies reported in the LPF suggested an earlier stage of progress toward a fully functioning electronic medical record. For example, unlike HPF, all three LPF reported having sets of clinical reminders still under development:

I started meeting with the preventive medicine committee and [the] P & T [Pharmacy & Therapeutics] committee kind of gave me some recommendations that they wanted after they started looking at what the reminders could actually do for the clinician, you know. … So we've been kind of working with the Primary Care group again to get them rolled out and give them the tools that they need … when they sit down and encounter their patient that they know what that patient needs at that time or coming up in the very near future.

Also of note, at least two LPF reported relying heavily on regionally or nationally provided resources for their guideline implementation efforts (e.g., a data warehouse, which included patient information from multiple facilities in a single data store), rather than tailoring their efforts to their individual facility. This was not the case in HPF.

Comparing HPF and LPF

The key difference separating the HPF from the LPF appears to revolve around dedicated effort and local adaptation. For example, in LPF the medical executive committee interpreted, disseminated, and helped implement the guidelines, in addition to addressing many other medical concerns. In contrast, HPF reported having a separate committee exclusively to handle CPG implementation. HPF also made use of a guideline advocate or champion; no LPF reported using champions in this manner. Electronic resources, such as the EPRP report and the clinical reminders, tended to be used "as is" by the LPF, whereas the HPF tended to adapt these resources to their own needs, adding (e.g., 100 percent sampling for the chart abstractions conducted by one HPF) and revising (e.g., customizing clinical reminders and templates) as necessary. In sum, HPF allocated dedicated resources toward guideline implementation and customized those resources to their local facility to facilitate adoption and use; conversely, LPF relied primarily on preexisting resources for guideline implementation, engaging in little if any tailoring to their facility.

Within-Facility Analyses: Strategies Unique to Individual Facilities

Examining the strategies solely from a cross-facility perspective may overlook important insights, including rival explanations for the cross-facility findings.2 To explore this possibility, we tabulated strategies uniquely reported by each facility (i.e., strategies reported by one, and only one, facility; see Table 4); for each facility we asked, "is there a common pattern to the set of unique strategies reported by this facility?" We then asked, "are there any cross-facility commonalities in their observed strategy patterns?" Although each facility exhibited individualized solutions to the problem of guideline implementation and adherence, there were still some noteworthy commonalities. The LPF seemed to lack focus—no clear directive was present in any of these facilities to address CPG adherence; in addition, many of the specific strategies reported further supported the finding that LPF tended to use resources with little local adaptation (e.g., VISN-level committee, written dissemination of the guidelines, using best practices from other facilities). Conversely, although each HPF adopted a different approach to the CPG adherence problem (e.g., organizing and coordinating information; improving quality of verbal communication; automation and introspection), which would appear to contradict our initial interpretations, all HPF appeared to approach implementation with a clear direction in mind. This is consistent with previous observations regarding dedicated effort (in a particular direction) and local adaptation (the specific direction, although clear, varies by facility). Additionally, the strategic use of guideline and/or performance information was a central component in all HPF, albeit manifested differently across facilities (see strategy patterns noted in Table 4). This, too, is consistent with local adaptation. Thus, these within-facility patterns support our interpretations of dedicated effort and local adaptation.

Table 4.

Strategies Reported by Single Facilities

Low Performing Facilities: No Clear Strategy Pattern High Performing Facilities: Clear, Yet Locally Adapted Direction
1 2 3 4 5 6
Attrition/layoffs Best practice sharing across facilities Shared decision-making (with the patient) Temporary, problem-specific committee Strategy pattern: Coordinating and organizing information Strategy pattern: Improving and maintaining the quality of verbal communication Strategy pattern: Automation and introspection
Case manager Designing strategic and/or action plans for policy and implementation Centralized after-hours phone hotline Encouragement from top to use computers Empowering at the lowest possible level
Centralized testing Prioritizing based on population needs Contract nurse program/care coordination Expectation that reminders will be satisfied Following the Baldrige model
Changing the manner in which data are monitored and utilized Veterans Integrated Service Network-level committee Empowering within scope of practice Solicit provider input Internal performance review (not necessarily charts)
Changing the way patients are scheduled Written dissemination (of the guideline) Keeping the appointed schedule Verbal feedback to provider Plan Do Study Act process
Clerical Patient ed: education room/library Clinical patient record coordinators, Recruiting quality staff
Committee-based clinic Communication patterns Direct communication with decision makers Restructuring the administration
Consolidation of equipment Communications with VISN Posters Walk-in clinic
Consult with others about appropriateness of guideline First come, first serve Electronic medical record (historical) Re-instituting primary care teams
Electronic medical record (partially implemented) Physician decides Central repository for process documentation Routine procedure
Gaming? Order sets Open relationship with IRM Standing orders Automated chart reviews
Grant money for special initiatives Periodic clinical reminder review
Have patient bring in their meds Reference materials in CPRS
Linking pin communication mechanism
More time w/patient
On-the-job training
Patient ed: handouts/literature
Patient ed: one on one demos
PDAs for physicians
Policy change
Pretesting/test prioritizing
Product line configuration
Quality Manager as a CPG communication channel Quick turnaround equipment in the clinic
Re-staffing
Revised tracking and/or encounter forms
Single point of contact for patient
Staffing
Standardized/quick/computerized order sets
Strategizing around workflow
Threats
Town hall meeting

CPRS: Computerized Patient Record System; CPG: Clinical Practice Guideline; OTJ: On-the-job; QM: Quality Manager; VISN: Veterans Integrated Service Network; PDSA: Plan Do Study Act.

DISCUSSION

This research examined the patterns of strategies and initiatives employed by HPF and LPF in implementing CPGs. How do our findings help fill the gap of identifying strategy patterns that affect implementation of multiple CPGs simultaneously? First, HPFs appear to constantly scan for less labor-intensive ways to obtain data about their own performance and feed it back to their providers in an actionable form. Instead of resisting implementation of the electronic medical record, these facilities welcomed it, examined its capabilities for improving care and implementing CPGs, and invested significant local resources to customize it. Second, rather than trying to incorporate CPG implementation into their usual business structures, HPFs invested in new or separate structures to address the specific needs related to CPG implementation. We postulate that the HPFs' success is not related to the specific structure chosen (e.g., CPG committee, CPG champion, etc.) but rather to the fact that the new structure can function independently of competing priorities faced by existing structures. For example, one can imagine the range of items that might be on a standing medical executive committee's agenda; CPGs might occupy a small portion of that agenda. In contrast, a committee formed expressly to plan CPG implementation would not be distracted by those other agenda items.

Third, HPFs' strategy choices suggest a more strategic, deliberate approach than that of LPFs. Rather than trying a wide range of strategies but none in depth, the HPFs seemed to concentrate on a few related strategies into which they expended significant energy. What this work does not tell us is the reasoning behind their choices, and whether a conscious evaluation of the likely success of a potential strategy was actually employed during the choice process.

Limitations

First, the study's relatively small sample size of six facilities, as well as some of the pragmatic design choices we faced (e.g., interviewer assignment was logistically driven, resulting in one interviewer being assigned to four of the six sites in the sample), potentially limits the generalizability of our results. However, as is characteristic of rigorous qualitative studies, we provided a rich description of these few facilities and a more in-depth comparison of the differences in strategies between high and low performers than would have been possible via survey methods. The strategies were reported by multiple individuals at each facility; between 10 and 25 people were interviewed at each facility, for a total of 102 respondents (see Table 1). Additionally, as discussed in the site selection section, care was taken in the initial selection of facilities to ensure a range of facility types that would represent the spectrum of VA facilities in terms of size, geography, and primary care capabilities. Further, all the strategies reported by multiple facilities are consistent with the strategies reported in systematic reviews by the EPOC Group, suggesting precedent for the effectiveness of these strategies in aiding implementation efforts.

Second, with the chosen research design we can make no assertions about cause and effect. It is possible that a “culture of innovation” in the HPFs predated any specific efforts at guideline implementation; similarly, we cannot determine from these data whether a common historical factor exists, such as resources from a VISN (Veterans Integrated Service Network) or preexisting computing capabilities, which might have had a substantial impact on guideline adherence.

Third, significant time has passed since these data were collected; in that time, VA has continued to provide resources to aid in CPG implementation, particularly in IT: almost every facility now uses CPRS, whereas that was not the case when these data were collected. Performance data are now warehoused at the network level, so cross-facility comparisons can be made with relative ease, using a tool called the Executive Dashboard. Nevertheless, significant between-facility variability continues to exist in guideline adherence levels, despite nearly universal availability of performance data and guideline details. This could indicate the need to go beyond the basics via dedicated resources and local adaptation, though at this point that would be strictly speculative.

Conclusion and Future Directions

We conclude that facilities with a record of successful CPG implementation are more likely to have (a) invested enthusiastically in adopting CPRS and customizing it to provide performance feedback, (b) chosen strategies more tactically than LPFs, and (c) devoted dedicated resources to CPG implementation rather than incorporating it into usual managerial structures. Implementation of multiple CPGs, the norm in usual care, likely requires multifaceted but carefully chosen strategies. These findings have potential applicability in large managed care organizations, whose organizational structure is similar in various ways to that of the VA (e.g., salaried physicians, system-wide design and implementation of policies and procedures). Thus, our findings should be interpreted not as predictive of what happens in such organizations, but as suggestive of how guidelines could better be implemented in such settings. Admittedly, however, this model of managed care organization is quite uncommon today in the United States; other types of organizations may require somewhat different implementation solutions. Future research should test the generalizability of these results across an array of settings (e.g., small and/or rural family practices; larger, commercial hospitals) and examine the challenges faced by these other types of health care organizations in adhering to CPGs (particularly those with different staffing models and fewer resources). Further, the sustainability of these strategies, how they change as an implementation effort matures, and whether other facility characteristics, such as organizational structure and culture, may be as important as or more important than the actual strategies used in predicting performance are still largely unanswered questions.

Acknowledgments

The research reported here was supported by the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service (CPI #99-129 & HFP #98-002/REAP #05-129). Dr. Hysong is a Health Services Researcher at the Houston Center for Quality of Care & Utilization Studies, VHA Health Services Research & Development Center of Excellence; and an Instructor of Medicine at Baylor College of Medicine. This work was performed during her tenure at the Veterans Evidence-Based Research Dissemination and Implementation Center (VERDICT), a VHA Health Services Research Enhancement Award Program. Dr. Best is a Senior Healthcare Consultant at Lockheed Martin Information Technology; this work was performed during his tenure at VERDICT. Dr. Pugh is the director of VERDICT, a staff physician at the South Texas Veterans Health Care System, and Professor of Internal Medicine at the University of Texas Health Science Center San Antonio.

Disclosures: All three authors' salaries were supported in part by the Department of Veterans Affairs.

Disclaimers: The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs, Baylor College of Medicine, the University of Texas, or Lockheed Martin Corporation.

NOTES

1. More detailed information on the specific measures and rankings is available upon request from the authors.

2. We would like to thank one of our reviewers for suggesting a more detailed exploration of this portion of the data.

Supplementary Material

The following supplementary material for this article is available online:

APPENDIX A

Interview Guide.

hesr0042-0084-s1.pdf (103.1KB, pdf)

References

  1. Asch SM, McGlynn EA, Hogan MM, Hayward RA, Shekelle P, Rubenstein L, Keesey J, Adams J, Kerr EA. Comparison of Quality of Care for Patients in the Veterans Health Administration and Patients in a National Sample. Annals of Internal Medicine. 2004;141(12):938–45. doi: 10.7326/0003-4819-141-12-200412210-00010.
  2. Bartell J, Smith M. US Physicians' Perceptions of the Effect of Practice Guidelines and Ability to Provide High-Quality Care. Journal of Health Services Research & Policy. 2005;10(2):69–76. doi: 10.1258/1355819053559056.
  3. Best RG, Hysong SJ, McGhee C, Moore FI, Pugh JA. An Empirical Test of Nonaka's Theory of Organizational Knowledge Creation. E-Journal of Organizational Learning and Leadership. 2003;2(2):1–19. Available at http://www.weleadinlearning.org/rboct03.htm.
  4. Boyd CM, Darer J, Boult C, Fried LP, Boult L, Wu AW. Clinical Practice Guidelines and Quality of Care for Older Patients with Multiple Comorbid Diseases: Implications for Pay for Performance. Journal of the American Medical Association. 2005;294(6):716–24. doi: 10.1001/jama.294.6.716.
  5. Daft RL, Lewin AY. Can Organization Studies Begin to Break Out of the Normal Science Straitjacket? An Editorial Essay. Organization Science. 1990;1(1):1–9.
  6. Devers KJ. How Will We Know 'Good' Qualitative Research When We See It? Beginning the Dialogue in Health Services Research. Health Services Research. 1999;34(5, part 2):1153–88.
  7. Doebbeling B, Vaughn T, Woolson R, Peloso P, Ward M, Letuchy E, BootsMiller B, Tripp-Reimer T, Branch L. Benchmarking Veterans Affairs Medical Centers in the Delivery of Preventive Health Services: Comparison of Methods. Medical Care. 2002;40(6):540–54. doi: 10.1097/00005650-200206000-00011.
  8. Frijling B, Hulscher ME, van Leest LA, Braspenning JC, van den Hoogen H, Drenthen AJ, Grol RP. Multifaceted Support to Improve Preventive Cardiovascular Care: A Nationwide, Controlled Trial in General Practice. British Journal of General Practice. 2003;53(497):934–41.
  9. Fung C, Woods J, Asch S, Doebbeling B. Variation in the Use of Computerized Clinical Reminders in an Integrated National Delivery System. Chicago: SGIM Annual Meeting; 2004.
  10. Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, Whitty P, Eccles MP, Matowe L, Shirran L, Wensing M, Dijkstra R, Donaldson C. Effectiveness and Efficiency of Guideline Dissemination and Implementation Strategies. Health Technology Assessment. 2004;8(6):iii–iv. doi: 10.3310/hta8060.
  11. Grol R, Wensing M, Eccles M. Improving Patient Care: The Implementation of Change in Clinical Practice. Edinburgh: Elsevier; 2005.
  12. Hysong SJ, Best RG, Pugh JA, Moore FI. Not of One Mind: Mental Models of Clinical Practice Guidelines in the VA. Health Services Research. 2005;40(3):823–42. doi: 10.1111/j.1475-6773.2005.00387.x.
  13. Jamtvedt G, Young JM, Kristoffersen DT, Thomson O'Brien MA, Oxman AD. Audit and Feedback: Effects on Professional Practice and Health Care Outcomes. Cochrane Database of Systematic Reviews. 2000;(2):CD000259.
  14. Jha AK, Perlin JB, Kizer KW, Dudley RA. Effect of the Transformation of the Veterans Affairs Health Care System on the Quality of Care. New England Journal of Medicine. 2003;348(22):2218–27. doi: 10.1056/NEJMsa021899.
  15. Kizer KW. Prescription for Change: The Guiding Principles and Strategic Objectives Underlying the Transformation of the Veterans Healthcare System. Washington, DC: Department of Veterans Affairs; 1996.
  16. Krein SL, Hofer TP, Kerr EA, Hayward RA. Whom Should We Profile? Examining Diabetes Care Practice Variation among Primary Care Providers, Provider Groups, and Health Care Facilities. Health Services Research. 2002;37(5):1159–80. doi: 10.1111/1475-6773.01102.
  17. Markey P, Schattner P. Promoting Evidence-Based Medicine in General Practice—The Impact of Academic Detailing. Family Practice. 2001;18(4):364–6. doi: 10.1093/fampra/18.4.364.
  18. Ostbye T, Yarnall KS, Krause KM, Pollak KI, Gradison M, Michener JL. Is There Time for Management of Patients with Chronic Diseases in Primary Care? Annals of Family Medicine. 2005;3(3):209–14. doi: 10.1370/afm.310.
  19. Pagaiya N, Garner P. Primary Care Nurses Using Guidelines in Thailand: A Randomized Controlled Trial. Tropical Medicine and International Health. 2005;10(5):471–7. doi: 10.1111/j.1365-3156.2005.01404.x.
  20. Patton MQ. Enhancing the Quality and Credibility of Qualitative Analysis. Health Services Research. 1999;34(5, part 2):1189–208.
  21. Scientific Software Development. Atlas.ti: The Knowledge Workbench (Computer Program). Version 4.2 Build 061. Berlin: Scientific Software Development; 1999.
  22. Singh H, Kalavar J. Quality of Care for Hypertension and Diabetes in Federal- versus Commercial-Managed Care Organizations. American Journal of Medical Quality. 2004;19(1):19–24. doi: 10.1177/106286060401900104.
  23. Smith TJ, Hillner BE. Ensuring Quality Cancer Care by the Use of Clinical Practice Guidelines and Critical Pathways. Journal of Clinical Oncology. 2001;19(11):2886–97. doi: 10.1200/JCO.2001.19.11.2886.
  24. Strauss AL, Corbin J. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. 2nd ed. Thousand Oaks, CA: Sage Publications; 1998.
  25. Thomson O'Brien MA, Oxman AD, Haynes RB, Davis DA, Freemantle N, Harvey EL. Local Opinion Leaders: Effects on Professional Practice and Health Care Outcomes. Cochrane Database of Systematic Reviews. 2000;(2):CD000125.
  26. VanOstenberg PR. Practice Guidelines and Quality: Is There a Connection? Special Care in Dentistry. 1996;16(4):147–9. doi: 10.1111/j.1754-4505.1996.tb00850.x.
  27. Vaughn TE, McCoy KD, BootsMiller BJ, Woolson RF, Sorofman B, Tripp-Reimer T, Perlin J, Doebbeling BN. Organizational Predictors of Adherence to Ambulatory Care Screening Guidelines. Medical Care. 2002;40(12):1172–85. doi: 10.1097/00005650-200212000-00005.
  28. Ward MM, Vaughn TE, Uden-Holman T, Doebbeling BN, Clarke WR, Woolson RF. Physician Knowledge, Attitudes and Practices Regarding a Widely Implemented Guideline. Journal of Evaluation in Clinical Practice. 2002;8(2):155–62. doi: 10.1046/j.1365-2753.2002.00337.x.
  29. Weber RP. Basic Content Analysis. 2nd ed. Newbury Park, CA: Sage; 1990.
