ABSTRACT
BACKGROUND
The move to team-based models of health care represents a fundamental shift in healthcare delivery, including major changes in the roles and relationships among clinical personnel. Audit and feedback of clinical performance has traditionally focused on the provider; however, a team-based model of care may require different approaches.
OBJECTIVE
To identify changes in audit and feedback of clinical performance delivered to primary care clinical personnel as a result of implementing team-based care in their clinics.
DESIGN
Semi-structured interviews with primary care clinicians, their department heads, and facility leadership at 16 geographically diverse VA Medical Centers, selected purposively based on their clinical performance profiles.
PARTICIPANTS
An average of three interviewees per VA Medical Center (48 in total), drawn from physicians, nurses, and primary care and facility directors; each participated in a 1-hour interview.
APPROACH
Interviews focused on how clinical performance information is fed back to clinicians, with particular emphasis on external peer-review program measures and changes in feedback associated with team-based care implementation. Interview transcripts were analyzed using techniques adapted from grounded theory and content analysis.
KEY RESULTS
Ownership of clinical performance still rests largely with the provider, despite transitioning to team-based care. A panel-management information tool emerged as the most prominent change to clinical performance feedback dissemination, and existing feedback tools were seen as most effective when monitored by the nurse members of the team. Facilities reported few, if any, appreciable changes to the assessment of clinical performance since transitioning to team-based care.
CONCLUSIONS
Although new tools have been created to support higher-quality clinical performance feedback to primary care teams, such tools have not necessarily delivered feedback consistent with a team-based approach to health care. Audit and feedback of clinical performance has remained largely unchanged, despite material differences in roles and responsibilities of team members. Future research should seek to unpack the nuances of team-based audit and feedback, to better align feedback with strategic clinical goals.
KEY WORDS: audit and feedback, patient-aligned care teams, primary care, qualitative methods
BACKGROUND
Team-based healthcare represents a fundamental shift in the way healthcare organizations deliver care. In primary care, for example, the traditional care model involved a physician assisted by a nurse, with the physician assuming primary responsibility for the patient. In a team-based approach, a care team consisting of multiple professionals, including physicians and physician extenders, nurses or care managers, and other resources such as pharmacists, nutritionists, and mental health professionals, is collectively responsible for the patient.1,2 Team-based care involves more complex coordination among clinical staff, which tends to be more difficult to perform to standard than work not requiring coordination;3,4 it also involves new roles, responsibilities, and relationships among existing clinical personnel.2,5,6 Healthcare facilities run by the US Department of Veterans Affairs (VA) recently transitioned to a team-based model of primary care, known within VA as the Patient-Aligned Care Team (PACT).
The VA has markedly improved quality of care and clinical performance in the last decade through clinical performance measurement (evidence-based, quantitative indicators of the quality of health care delivered)7 and audit and feedback (A&F),8–14 which involves measuring an individual’s professional practice or performance, comparing it to professional standards or targets (in VA’s case clinical performance measures), and delivering results of this comparison to the individual.15 Recent health services research has finally begun to unpack factors that make A&F more effective in clinical settings.10,15,16 However, most A&F research uses the individual as the unit of analysis—even studies comparing group-level aggregations of feedback versus individual-level feedback16,17 assume that the recipient is an individual.
Management-based and psychology-based research suggests that goal setting and feedback to a team might require different strategies to achieve effectiveness; though the evidence is somewhat scarce, the critical issue is likely clarity over how individual contributions affect team goals. In an individual setting, feedback directs individual attention to details of the task, thereby affecting subsequent goal setting and performance. In a team setting, however, providing individual feedback alone would direct the individual’s attention to the task, but provide no information about how changes in the individual’s performance affect team outcomes, which partially depend on the individual performance of others. For example, Mitchell & Silver18 found that giving individual goals to members of a team decreased team performance; along similar lines, Crown & Rosse19 observed that “groupcentric” goals (individual goals focusing on contributions to team performance) combined with team goals led to the highest team performance. Finally, DeShon and colleagues20 noted parallel processes for individual-level and team-level goals and feedback, with team members directing effort toward whichever level they received feedback about: those receiving individual-level feedback performed best against individual-level goals and measures, whereas those receiving group-level feedback focused on team performance. Team members receiving both types of feedback, however, did not perform as well at either level as those receiving only one type.
To the extent that the team structure aims to empower its members to provide quality care, the literature suggests aiming feedback practices at teams rather than individuals. However, few data exist to determine whether current feedback practices are well aligned to support teams. A clearer understanding of current practices in the care team setting is therefore needed to optimize feedback effectiveness.
In this article, we describe how A&F is delivered in an increasingly team-based primary care environment with a strong history of provider-level A&F. We report experiences of primary care clinicians and leadership at 16 VA Medical Centers, to identify changes in A&F practices occurring alongside PACT implementation.
METHOD
This study was part of a larger funded research project examining differences among high, low, and moderately performing facilities regarding feedback strategies, feedback characteristics, and feedback-related organizational culture. Detailed methods for this project are published elsewhere and summarized here.9 Our local Institutional Review Board approved this study.
Design and Setting
The primary study consisted of telephone interviews with facility leadership and primary care personnel at 16 VA Medical Centers, selected purposively to represent a variety of geographic regions and outpatient clinical performance levels.9 The current paper explores broad changes in clinical performance feedback associated with PACT implementation, irrespective of differences in facility characteristics. Nationally, VA has made certain tools available that facilitate delivery of clinical performance data to individuals, including the Primary Care Almanac (a panel-management information tool) and the PACT Compass (used to track indicators such as coordination), along with several other reporting tools. How these tools are implemented and used, however, is left up to individual facilities, as is the case for other clinical performance feedback practices.
Participants
At each facility, we sought to interview four informants: the facility director, the associate chief of staff (ACOS) for primary care, one full-time primary care physician, and one full-time primary care nurse.
Procedure
Interviewer Training
The principal investigator (PI), an industrial/organizational psychologist, and the co-investigator, a general internist, both experienced in interviewing and qualitative research, instructed interviewers in interviewing techniques. Instruction included a didactic session on interviewing technique, observation of interviews conducted by the PI and co-investigator, and two mock interviews with critique. A master’s-level industrial/organizational psychologist, a registered dietitian, and two bachelor’s-level health-science specialists with backgrounds in biology and sociology (respectively) comprised the interviewing team.
Participant Recruitment and Telephone Interviews
We invited prospective participants via e-mail to enroll in the study. Those agreeing to participate after initial or follow-up contact were scheduled for a consent discussion and interview. Trained research assistants interviewed each participant for 1 hour, using a semi-structured interview guide. Participants answered questions about the types of External Peer-Review Program (EPRP) and other quality/clinical performance information they receive and actively seek out, opinions and attitudes about the utility of EPRP data as a form of feedback, and ways they use this information. EPRP is a nationally abstracted database containing performance data for all VA medical facilities on over 90 indicators covering access, quality of care, cost effectiveness, and patient-satisfaction domains; data are abstracted monthly and reported quarterly.21 EPRP is the official data source for VA’s clinical performance management system, providing indicators that leadership uses to gauge performance and make administrative decisions about facilities in their networks.
Using a constant comparative approach, we identified several PACT-related themes emergent in the first third of interviews; we iteratively adapted our interview guide to capture additional information about the extent to which PACT had been implemented, and changes in clinical performance feedback since PACT implementation. (See Appendix for PACT-related interview questions and distribution of interviews across interviewers and interviewees.) An independent service transcribed interview recordings; interviewers cross-checked transcripts to recordings for accuracy.
Data Analysis
We analyzed transcripts using techniques adapted from grounded theory and content analysis, using Atlas.ti v. 6.2.22
We first conducted automated searches of transcripts for PACT-related terms to aid later manual coding for thematic content. We then identified and categorized direct responses to our PACT-related questions from the interview guide. We followed this initial coding with a manual transcript review (aided by the terms tagged in the automated search) to find passages that answered our PACT-related questions even when they were not direct responses to a PACT-related question. Coded passages were then categorized according to major emergent themes and reviewed for negative cases, and a central story was identified.
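To make the automated search step concrete, the following minimal Python sketch tags transcript lines containing PACT-related terms. The term list and file layout are hypothetical assumptions; the study itself performed this step within Atlas.ti v. 6.2 rather than with custom code.

```python
# Minimal sketch of automated keyword tagging of interview transcripts.
# The term list and the "transcripts/" directory are illustrative assumptions,
# not the study's actual search terms or file layout.
import re
from pathlib import Path

PACT_TERMS = ["PACT", "patient-aligned care team", "teamlet",
              "panel management", "huddle"]  # hypothetical term list
pattern = re.compile("|".join(re.escape(t) for t in PACT_TERMS), re.IGNORECASE)

def tag_transcript(path: Path) -> list[tuple[int, str]]:
    """Return (line number, text) pairs for lines mentioning a PACT-related term."""
    hits = []
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        if pattern.search(line):
            hits.append((lineno, line.strip()))
    return hits

# Print a simple hit list to guide subsequent manual coding.
for transcript in sorted(Path("transcripts").glob("*.txt")):
    for lineno, text in tag_transcript(transcript):
        print(f"{transcript.name}:{lineno}: {text}")
```

A search of this kind only flags candidate passages; all thematic judgments in the study remained manual.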
RESULTS
Based on 48 interviews from 16 sites, our analyses indicated four primary themes: (1) ownership of clinical performance still rests largely with the provider; (2) the Primary Care Almanac is the most prominent change to clinical performance feedback (aggregation and information dissemination), with decreasing reliance on periodic EPRP reports; (3) existing feedback tools are seen as most effective when monitored by nurse members of PACTs; and (4) facilities report no appreciable changes to assessment of clinical performance since transitioning to PACT.
Provider “Ownership” of Clinical Performance
The PACT model represents a shift from traditional approaches, in which responsibility for clinical outcomes rested primarily with the primary care provider. However, the strongest theme encountered in our data was that, although great efforts have been made to transition to a team-based model of care, feedback about clinical performance is still structured largely according to the individual-provider model. For example, whereas data on measures related to PACT implementation were generally shared with all team members, access to clinical performance information for nonprovider team members depended largely on the provider. Specifically, facility-generated reports of clinical performance data were considered disseminated to a team when provided to the team’s physician:
In my clinic, … we distribute the data to the team, which usually gets handed to the provider … but it stays in my hands only momentarily before my RN takes it to begin getting into the meat of the—the information and identifying who to call and who to arrange for labs for; things like that.
—Site J Primary Care Director
Often, interviewees’ language suggested that the provider was considered the owner both of the team (e.g., “my RN”) and its clinical performance outcomes. The idea that providers are the core of the team and that they will “have their own RN, LPN, and clerk to assist” them (Site G Nurse) was a commonly held perspective. A nurse at Site D also noted that, when comparing data, they may look to see if “Dr. A’s patients are getting better compared to Dr. B’s patients and all that,” indicating that the team’s performance was defined in terms of the provider. This language is in contrast to the stated ideals of facility leaders that all PACT members take ownership of the clinical-quality outcomes of their patients:
One of the chief responsibilities of me and my staff is to try and work with people. It doesn’t happen overnight. But to try and change their perspective or their thinking on how they function. First and foremost is team, not as necessarily the physician driving or the provider calling all the shots, but bringing everyone up to the higher level of performance, and it’s a work in progress.
—Site M Primary Care Director
“In terms of priorities, it is our top priority that—that we assess and report clinical performance, um, so that our PACT teams and teamlets can—can know that and continue to improve on their clinical performance.”
—Site K Primary Care Director
However, there seemed to be divergent views between leadership and clinicians as to who receives or should receive feedback. For example, at one site, the ACOS described restricting access to certain clinical performance data to providers only, and preventing providers from seeing one another’s data, with no mention of connecting other primary care team members with the data:
There’s a report also on Veterans Support Service Center (VSSC)…. But you can also get this report from Computerized Patient Records System (CPRS); each provider can pull their own data on what’s called the primary care almanac … They can’t see all the VSSC stuff unless they get access to that database, and I haven’t given them access to that because like I said before; they will see everybody’s [every provider’s] data. So I get that stuff and I send it to our primary care leaders and they send it out to everybody.
—Site D Primary Care Director
At that same site, however, the nurse interviewee reported petitioning his/her facility leadership to gain access to the Almanac, because (s)he saw it as essential to his/her new panel-management role:
“We, myself, my supervisor […], kind of petitioned with leadership and said,…the nurses need to be able to access at least limited information in the Almanac. …how are we supposed to manage a high patient cohort that has significant physical and emotional and mental and all problems with all these comorbidities when we can’t even find out who they are until we ask the doctor to run the report? Whereas if I have access to the Almanac, I go in at any time I need.”
—Site D Nurse
Another example is Site J, where the facility director somewhat hesitantly indicated that the facility targets data to teams, the ACOS noted that the facility targets data to teams by offering them to providers (quoted previously), and the nurse wished for direct access to the data and the technical knowledge to use them effectively.
What I can tell you is the tools that are being built for comparing performance across teams are now shared, so our historical model is we would just engage the provider… and now we’re sharing that information, uh, on—probably, uh, on the—with the team; not just the provider.
—Site J Facility Director
It has been mentioned and I think we signed up to get access [to the Almanac], but that’s all. …and we may have had like a little brief in-service, but, you know, it didn’t translate to anything. …in the ideal world I think this [panel management] would be under my job description; that I would be tracking them, and that they wouldn’t be getting lost; and, you know, I had some great big huge database that I was allowed to do that, and chronic disease management, I guess. …. I may have the tools available to me. I have no idea how to use them.
—Site J Nurse
The Primary Care Almanac as a Feedback Tool
Interviewees reported that the Primary Care Almanac was introduced concurrently with PACT implementation. Data in the Almanac can be viewed in aggregated form at multiple levels, including by facility and provider (though not by team or individual team member—some team members serve multiple PACTs). The Almanac can, therefore, be used as one tool for assessing and feeding back information about overall clinical performance. The PACT Compass was also widely referenced; however, it primarily reports on nonclinical indicators beyond the scope of this article.
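As a rough illustration of this kind of multilevel aggregation (not the Almanac’s actual implementation), the following Python fragment rolls hypothetical patient-level measure results up to the provider and facility levels; all column names and values are invented for the example.

```python
# Illustrative multilevel roll-up of patient-level performance data.
# Column names and values are hypothetical, not the Almanac's schema.
import pandas as pd

data = pd.DataFrame({
    "facility": ["A", "A", "A", "B"],
    "provider": ["dr_1", "dr_1", "dr_2", "dr_3"],
    "measure":  ["HbA1c<9"] * 4,
    "met":      [1, 0, 1, 1],   # 1 = measure met for this patient
})

# Provider-level view: share of patients meeting the measure per provider.
print(data.groupby(["provider", "measure"])["met"].mean())

# Facility-level view: the same data aggregated one level higher.
print(data.groupby(["facility", "measure"])["met"].mean())
```

Note that in such a scheme there is no natural team-level grouping when staff serve multiple teams, which is precisely the limitation interviewees described.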
Attitudes toward the Almanac varied greatly, ranging from perceptions that it is a key tool for achieving clinical improvement,
The most profound change has been the availability of the Almanac. The Almanac is, as you know, the—a way for each … provider to look at his or her own group of patients, if you will, their flock, and to see how everyone’s doing and who specifically is not doing well… and so I think that’s probably the most profound and powerful tool … that we have now at the provider level.
—Site N Primary Care Director
to preferences for home-grown tools instead:
The dashboard is similar to the Almanac. It’s a very nice system. You can drill down from the entire region to the VISN [Veterans Integrated Service Network] to the site to the provider. …You can put anything on the dashboard you darn well please, and it comes up in a very nice web-based format … There’s red stop signs and yellow triangles and green diamonds to show you, graphically, your performance, with the statistics associated with those; and for any given provider, you can pull up the data any way you want by a few simple clicks. …The dashboard for us is shared…on our website so our nurse care managers and our clerical staff can get in there, and we have a very coordinated approach…that’s why we really like the dashboard, because it’s a very detailed and effective tool that can be accessed and used by a lot of different people to work on the same goal, and…it updates instantaneously, whereas the EPRP we have to wait quite some time to see our statistics improve.
—Site E Physician
Others have abandoned use of the Almanac, citing multiple reasons, such as staffing shortages, timeliness of data, and alignment of Almanac data with the facility’s goals:
I mean, another problem is when you try to use a tool; and it doesn’t meet your needs. Then I’m sort of—you know, we’re kind of done with it. You know, unless somebody came back and promoted it and said, you know, we’ve made all these great updates to it; now it’s more useful to you.
—Site J Physician
At facilities where more robust data-dissemination tools than the Almanac existed prior to PACT, giving nonproviders access to these tools was viewed as key to effective implementation of the PACT model.
Nurses and Clinical Performance Feedback
Physicians noted that they lack time to review clinical performance data with sufficient frequency; to some extent, the PACT nurse care manager role has emerged to fill this gap. Whether because of a lack of time or an interest in ensuring appropriate follow-up, many physicians perceived clinical performance feedback tools as most useful when another person was available to monitor and manage their information. Facilities reported some improvements in clinical performance outcomes when feedback was made available to nurse members of the team, particularly in cases where provider data had previously been “in the red.”
What we find is that when the RNs are where we distribute the data to, particularly, we made a lot of in-roads on the hemoglobin A1C parameter, because just identifying who needed to come in and have blood work shifted the numbers significantly and by just having the RNs go through the data, identifying those patients who needed to come in for labs and arranging for them to come in for labs was a very successful intervention.
—Site J Primary Care Director
Changes in Clinical Performance Assessment Since Transition to PACT
When directly asked how assessment of clinical performance had changed since transitioning to PACT, interviewees often reported that “indications of quality of care are the same under PACT” (Site C Facility Director). For example, one facility director noted that although new PACT-related performance measures covering chronic disease management, access, and satisfaction had been added, actual measurement of clinical quality had not changed. In addition, interviewees sometimes interpreted the question as inquiring about changes to their facility’s actual performance on quality measures, and answers ranged from uncertainty:
…with regards to EPRP and clinical-practice outcomes, I’d have to say the jury may be still out in terms of the way that […] the implementation of PACT has made any changes.
—Site A Facility Director
to no apparent effect.
…the implementation of PACT has not affected our clinical-outcome results.
—Site E Facility Director
DISCUSSION
We sought to identify changes due to PACT implementation in clinical performance A&F to primary care clinical personnel. Despite deployment of new reporting tools and leadership’s desire to feed back clinical performance to the entire team, our findings indicate that ownership of and responsibility for clinical performance still rest largely with the provider. Further, though some of these new tools provide features desirable for quality feedback, such as the capacity to individualize and customize,10 access to them is limited to providers and leadership in certain facilities. The premise of the PACT model is that clinical-quality outcomes depend on the actions of all team members, yet facilities’ approaches to clinical performance feedback did not reflect this.
Although many facilities cited a need to increase PACT “ownership” of patient-panel clinical outcomes, current systems of clinical performance feedback (including the Primary Care Almanac) imply provider rather than team ownership of data. Although facilities supported the concept of delivering data directly to all team members in principle, we saw little evidence to suggest this was fully implemented at the time of our interviews. Yet our interviewees considered existing clinical performance feedback tools most useful when targeted toward nonprovider team members. One possible explanation for current practice may be an assumption that, if data are delivered to the provider, then, by definition, they have been delivered to the team.
Implications
Our findings suggest a misalignment between operations’ vision of feedback to PACTs and the feedback culture in the clinic, highlighting the need to align clinical-feedback systems with PACT strategic objectives (or those of any Patient-Centered Medical Home [PCMH] outside VA). For example, one explanation for the observation that feedback tools work best when targeted toward nonproviders could be that current measures capture portions of the clinical performance domain that are more effectively influenced by nonproviders than by the providers who have traditionally been recipients of such feedback.
On a more practical level, our findings suggest the need not only to grant individual team members direct access to clinical performance feedback tools, but also to structure the data within those tools around the individual team member, so that he/she can monitor the specific patients for whom he/she is responsible; merely trading a provider’s name for a team designation in clinical performance databases will not increase the likelihood that clinical feedback will reach the most appropriate PACT member’s hands. Modifications to such tools at a national level may be warranted so that data are instead organized and accessible by the appropriate team member, to better align them with the principles of the PACT/PCMH model (for example, a pharmacist who serves multiple PACTs should be able to see data for each PACT he/she serves, with the capability to disaggregate to the patient level; see the sketch below). This is consistent with the approach taken with clinical reminders within VA, with reminders delivered to the individual responsible for handling the clinical issue in question (e.g., clinical reminders for tasks regularly done by nurses, such as tobacco screening, are received by nurses but not providers). Several questions require answers for this to be accomplished, including which PACT members play a part in effecting change to each clinical performance measure, and how much interaction is appropriate for each team member to have with such information.
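As a purely illustrative sketch of this idea, the following Python fragment organizes patient-level results by team member and role rather than by provider. Every name, field, and measure here is hypothetical and does not reflect the actual schema of the Almanac or any VA system.

```python
# Hypothetical data model: clinical performance results keyed to the team
# member responsible for acting on them, rather than to the provider.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class PatientResult:
    patient_id: str        # hypothetical identifier
    pact: str              # team designation, e.g., "PACT-1"
    measure: str           # e.g., "HbA1c lab overdue"
    responsible_role: str  # role best positioned to act, e.g., "RN", "Pharmacist"

# Staff-to-team assignments; a pharmacist may serve several PACTs.
assignments = {"rn_jones": {"PACT-1"}, "pharm_lee": {"PACT-1", "PACT-2"}}

results = [
    PatientResult("p001", "PACT-1", "HbA1c lab overdue", "RN"),
    PatientResult("p002", "PACT-2", "medication reconciliation due", "Pharmacist"),
    PatientResult("p003", "PACT-1", "medication reconciliation due", "Pharmacist"),
]

def worklist_for(member: str, role: str) -> dict[str, list[PatientResult]]:
    """Patient-level results for each PACT the member serves, limited to
    measures that the member's role is responsible for acting on."""
    by_team: dict[str, list[PatientResult]] = defaultdict(list)
    for r in results:
        if r.pact in assignments[member] and r.responsible_role == role:
            by_team[r.pact].append(r)
    return by_team

# A pharmacist shared across two PACTs sees a worklist per team,
# disaggregated to the patient level.
print(worklist_for("pharm_lee", "Pharmacist"))
```

The design choice illustrated is the `responsible_role` field: routing each result to the role that acts on it, analogous to VA’s handling of clinical reminders, rather than delivering everything to the provider.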
Finally, feedback linked to team roles is part of a broader transformation of any clinical team (in or outside VA), involving shared discussion and planning about how to respond to performance feedback. If, as DeShon and colleagues suggest,20 there are parallel processes for individual and team-based feedback and goals, then simply stopping at delivering clinical data to the team (i.e., knowledge of results), even if it has the qualities of actionable feedback,10 is insufficient to ensure improved quality. The team must reflect on the feedback as a team and plan as a team how to address quality gaps observed for the feedback to have maximum impact.20 Such reflection could occur, for example, in the daily PACT huddle; however, in our interviews, we did not observe this to be a universal practice.
Limitations
This study had several limitations. First, it consisted of cross-sectional interviews asking how performance feedback had changed since PACT implementation, which poses a threat of recall bias that we had limited ability to detect. Second, 16 of the originally targeted 64 interviews were not conducted, because of declined invitations, ineligibility, and/or a limited pool of potential interviewees for a given role at a given site. Reasons for declining varied considerably; often no reason was explicitly given. However, we saw no appreciable differences in numbers of nonrespondents across roles.
The original study was primarily interested in clinical performance feedback to physicians. Thus, our clinician interviews included only physicians and registered nurses in primary care and excluded other primary care providers (e.g., nurse practitioners, physician assistants), so our data do not include the perspectives of individuals in those roles; however, these roles do carry their own patient panels and thus receive clinical performance data through the tools mentioned here in the same way as physicians. In addition, although we interviewed nurses, we did not hear from nursing leadership in our interviews, so the perspectives here are predominantly those of primary care physicians and administrative leadership.
Conclusions/Future Directions
We conclude that although new tools have been created to support higher-quality clinical performance feedback concurrent with the adoption of PACTs, they are not as effective at meeting the feedback needs of clinical teams as they could be, owing both to clinic culture dynamics and to specific features of the tools themselves (e.g., individualization for PACT members shared across teams). Future research should seek to unpack the nuances of team-based A&F, including issues such as the appropriate distribution of clinical tasks and clinical performance feedback to each member of the team, and the relationship among individual, group, and group-centric clinical performance goals and feedback. Without a system delivering appropriate feedback to all PACT members at both individual and team levels, it may be difficult to achieve the intended vision of the PACT model of care.
Acknowledgements
Contributors
The authors wish to thank Richard SoRelle, Kristen Broussard Smitham, and Khai-El Johnson for their contributions in conducting the interviews and initial organization of the data for this paper, and for their critiques and suggestions during the revision process.
Funding
The research reported here was supported by the US Department of Veterans Affairs Health Services Research and Development Service (grant nos. IIR 09–095, CD2-07-0181), and partly supported by the facilities and resources of the Houston VA HSR&D Center of Excellence (COE) (HFP 90–020). Dr. Hysong is a health services researcher at the Houston VA HSR&D COE and an assistant professor of Medicine at Baylor College of Medicine in Houston. Ms. Knox is a research health sciences specialist at the Houston VA HSR&D COE; Dr. Haidet is a professor of medicine at Penn State University Hershey Medical College. All authors’ salaries were supported in part by the Department of Veterans Affairs. The views expressed in this article are solely those of the authors and do not necessarily reflect the position or policy of the authors’ affiliate institutions, the Department of Veterans Affairs or the US government. The authors do not report any financial conflicts of interest.
Conflicts of Interest
The authors declare that they do not have a conflict of interest.
Presentations
The research reported here has not been previously presented in any public forum.
Appendix
Table 1. Distribution of interviews across interviewers and interviewee roles
| Interviewer | Facility Director or delegate | Head of Primary Care | Physician | Nurse | Total |
|---|---|---|---|---|---|
| A1 | 4 | 2 | 5 | 3 | 14 |
| B2 | 3 | 3 | 2 | 3 | 11 |
| C3 | 2 | 4 | 2 | 2 | 10 |
| D4 | 2 | 4 | 1 | 2 | 9 |
| E5 | 0 | 0 | 0 | 2 | 2 |
| F5 | 0 | 0 | 2 | 0 | 2 |
| Total | 11 | 13 | 12 | 12 | 48 |
Note: Interviewer backgrounds: 1 = registered dietitian; 2 = bachelor’s-level health sciences specialist with background in biology; 3 = master’s-level industrial/organizational psychologist; 4 = bachelor’s-level health sciences specialist with background in sociology; 5 = two additional research assistants who conducted interviews early in the project.
Table 2. PACT-related interview questions and probes
| Interview Question | Possible Probes |
|---|---|
| To what extent has PACT been implemented at your facility? | • How long has PACT been around at your facility? • What does PACT look like at your facility? • To what extent had PACT been implemented at the time of our earlier interview? |
| Since the introduction of PACT at your facility, how has the measurement and assessment of clinical performance changed, if at all? | • What is newly being measured? • At this time, what performance measures are given the greatest emphasis at your facility? How has this changed over time? • How well does what gets measured match with what is important? |
| Since the transition to PACT, what changes have you noticed about the clinical performance information made available to you and your team? What has stayed the same?* | • Describe the types of clinical performance information made available. • How has the introduction of PACT affected clinic or facility priorities? • Describe any new reports or changes to reports that might be related to PACT changes. |
| How does your PACT teamlet use clinical performance information?* | If the participant doesn’t provide suggestions: • Tell me about the last time you or your team received information about your clinical performance. • Is there other information you would prefer to receive on your PACT’s performance? • What is useful/not useful about the information you receive? |
*Modified versions of the last two questions were asked of interviewees in leadership roles: “Since the transition to PACT, what changes have been made to the clinical performance information made available to your staff?” and “How are the PACT teamlets expected to use clinical performance information?”
REFERENCES
- 1. American Academy of Family Physicians (AAFP), American Academy of Pediatrics, American College of Physicians, American Osteopathic Association. Joint principles of the patient-centered medical home. AAFP; 2007. Available from: http://www.aafp.org/dam/AAFP/documents/practice_management/pcmh/initiatives/PCMHJoint.pdf
- 2. Patient Centered Primary Care Implementation Work Group. Patient Centered Medical Home Model Concept Paper. U.S. Department of Veterans Affairs Primary Care Program Office; 2011.
- 3. Hysong SJ, Khan MM, Amspoker AB, Petersen LA. All clinical performance measures are not created equal: the role of interaction among clinical personnel on measured performance. Boston, MA; 2012.
- 4. Hysong SJ, Esquivel A, Sittig DF, Paul LA, Espadas D, Singh S, et al. Toward successful coordination of electronic health record-based referrals: a qualitative analysis. Implement Sci. 2011;6:84.
- 5. Medical Group Management Association. The Patient Centered Medical Home: 2011 Status and Needs Study—Reestablishing Primary Care in an Evolving Healthcare Marketplace; 2011.
- 6. Hysong SJ, Best RG, Pugh JA. Overlap of job tasks in VHA primary care: implications for staffing and clinic efficiency. Washington, DC; 2006.
- 7. Institute of Medicine. Performance Measurement: Accelerating Improvement. Washington, DC: National Academies Press; 2006.
- 8. Hysong SJ, Pugh JA, Best RG. Clinical practice guideline implementation patterns in VHA outpatient clinics. Health Serv Res. 2007;42(1 Pt 1):84–103. doi:10.1111/j.1475-6773.2006.00610.x.
- 9. Hysong SJ, Teal CR, Khan MJ, Haidet P. Improving quality of care through improved audit and feedback. Implement Sci. 2012;7(1):45.
- 10. Hysong SJ, Best RG, Pugh JA. Audit and feedback and clinical practice guideline adherence: making feedback actionable. Implement Sci. 2006;1(1):9. doi:10.1186/1748-5908-1-9.
- 11. Jha AK, Perlin JB, Kizer KW, Dudley RA. Effect of the transformation of the Veterans Affairs health care system on the quality of care. N Engl J Med. 2003;348(22):2218–27. doi:10.1056/NEJMsa021899.
- 12. Jha AK, Wright SM, Perlin JB. Performance measures, vaccinations, and pneumonia rates among high-risk patients in Veterans Administration health care. Am J Public Health. 2007;97(12):2167–72.
- 13. Kizer KW, Demakis JG, Feussner JR. Reinventing VA health care: systematizing quality improvement and quality innovation. Med Care. 2000;38(6, VA QUERI Supplement).
- 14. Hysong SJ, Khan M, Petersen LA. Passive monitoring versus active assessment of clinical performance: impact on measured quality of care. Med Care. 2011;49(10):883–90.
- 15. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.
- 16. Hysong SJ. Meta-analysis: audit and feedback features impact effectiveness on care quality. Med Care. 2009;47(3):356–63. doi:10.1097/MLR.0b013e3181893f6b.
- 17. Kluger AN, DeNisi A. The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull. 1996;119(2):254–84. doi:10.1037/0033-2909.119.2.254.
- 18. Mitchell TR, Silver WS. Individual and group goals when workers are interdependent: effects on task strategies and performance. J Appl Psychol. 1990;75:185–93. doi:10.1037/0021-9010.75.2.185.
- 19. Crown DF, Rosse JG. Yours, mine, and ours: facilitating group productivity through the integration of individual and group goals. Organ Behav Hum Decis Process. 1995;64:595–608.
- 20. DeShon RP, Kozlowski SWJ, Schmidt AM, Milner KR, Wiechmann D. A multiple-goal, multilevel model of feedback effects on the regulation of individual and team performance. J Appl Psychol. 2004;89(6):1035–56. doi:10.1037/0021-9010.89.6.1035.
- 21. VHA Office of Analytics and Business Intelligence. Electronic Technical Manual. Veterans Health Administration; 2013 Nov 1. Available from: http://vaww.reporting.rtp.med.va.gov/ReportServer/Pages/ReportViewer.aspx?%2fPerformance+Reports%2fMeasure+Management%2fMeasureSummary&rs%3aCommand=Render
- 22. Atlas.ti [computer program]. Version 6.2. Berlin, Germany: Scientific Software Development; 2010.