The Journal of Ambulatory Care Management. 2017 Feb 8;40(1):26–35. doi: 10.1097/JAC.0000000000000166

“Salt in the Wound”

Safety Net Clinician Perspectives on Performance Feedback Derived From EHR Data

Arwen E Bunce 1, Rachel Gold 1, James V Davis 1, MaryBeth Mercer 1, Victoria Jaworski 1, Celine Hollombe 1, Christine Nelson 1
PMCID: PMC5137808  NIHMSID: NIHMS818756  PMID: 27902550

Abstract

Electronic health record (EHR) data can be extracted for calculating performance feedback, but users' perceptions of such feedback impact its effectiveness. Through qualitative analyses, we identified perspectives on barriers and facilitators to the perceived legitimacy of EHR-based performance feedback in 11 community health centers (CHCs). Providers said such measures rarely accounted for CHC patients' complex lives or for providers' decisions informed by this complexity, which diminished the measures' perceived validity. Suggestions for improving the perceived validity of performance feedback in CHCs are presented. Our findings add to the literature on EHR-based performance feedback by exploring provider perceptions in CHCs.

Keywords: algorithms, attitude of health personnel, community health centers, electronic health records, feedback, qualitative research, quality improvement, safety net providers


CARE QUALITY measures created with data extracted from electronic health records (EHRs) can provide valuable performance feedback to clinicians, with the potential to improve patient medical care. Thus far, however, such performance feedback has had only a limited impact on care quality (Bailey et al., 2014; Persell et al., 2011, 2012; Ryan et al., 2014). One important reason for this limitation is that users (recipients of the performance feedback) can perceive EHR data-based measures to be invalid or unfair (Brooks, 2014; Gourin & Couch, 2014; Kizer & Kirsh, 2012; Van der Wees et al., 2014), which diminishes the feedback's influence on provider behaviors (Dixon-Woods et al., 2012; Ivers et al., 2014a; Kansagara et al., 2014). This lack of trust in the legitimacy and accuracy of EHR-based performance feedback both stems from and illustrates the challenges of creating quality metrics based on readily extractable EHR data (Baker et al., 2007).

Previous qualitative research identified some strategies for creating EHR data-based feedback measures that providers consider credible and valid, but with a few exceptions (Ivers et al., 2014a; Kansagara et al., 2014; Rowan et al., 2006), this research was conducted in large, academic/integrated health care settings. Little is known about perceptions of and strategies for improving such feedback in the community health center (CHC) setting. Yet, CHCs, the United States' health care "safety net," differ from other health care settings in critical ways, most notably their patients' socioeconomic vulnerability. Thus, there is a need to better understand barriers to the perceived legitimacy of EHR-based feedback measures in primary care CHCs, and approaches to crafting performance feedback that effectively improves care quality in this setting. To that end, we present an in-depth qualitative assessment of how primary care providers perceived EHR-based performance feedback, and their suggestions for increasing the utility of such feedback data, as reported in data collected in the context of a clinic-randomized implementation trial conducted in CHCs.

The terminology used to describe performance feedback in the literature varies. Here, performance metrics means the aggregate measurement of a given care point (eg, rate of guideline-concordant statin prescribing, shown as a percentage on a graph). Data feedback means potentially actionable data linked back to individual patients (eg, a list of patients with diabetes who are indicated for a statin but not prescribed one). Performance feedback encompasses both types of measurement.
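To make this distinction concrete, the minimal sketch below shows how the same per-patient flags can be summarized either as an aggregate performance metric or as patient-level data feedback. The record structure and field names are illustrative assumptions, not the study's actual extraction code.

```python
# Hypothetical illustration of the two feedback formats described above.
# Each record flags whether a patient is indicated for a statin and whether
# an active prescription exists; field names are assumptions, not study code.
patients = [
    {"name": "Patient A", "statin_indicated": True,  "statin_active": True},
    {"name": "Patient B", "statin_indicated": True,  "statin_active": False},
    {"name": "Patient C", "statin_indicated": False, "statin_active": False},
]

indicated = [p for p in patients if p["statin_indicated"]]

# Performance metric: aggregate rate of guideline-concordant prescribing.
metric = 100 * sum(p["statin_active"] for p in indicated) / len(indicated)
print(f"Statin performance metric: {metric:.0f}% of indicated patients active")

# Data feedback: actionable list of indicated-but-not-active patients.
for p in indicated:
    if not p["statin_active"]:
        print(f"Needs review: {p['name']} is indicated for a statin, none active")
```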

METHODS

The “ALL Initiative” (ALL) is an evidence-based intervention designed to increase the percentage of patients with diabetes who are appropriately prescribed cardioprotective statins and angiotensin-converting enzyme inhibitors (ACEI)/angiotensin II receptor blockers (ARB). The data presented here were collected in the context of a 5-year pragmatic trial of the feasibility and impact of implementing ALL in 11 primary care CHCs in the Portland, Oregon, area. The ALL intervention included encounter-based alerts, patient panel data roster tools, and educational materials, described in detail elsewhere (Gold et al., 2012, 2015). We also extracted data from the study CHCs' shared EHR to create performance metrics on the percentage of diabetic patients who had active prescriptions for statins and ACEI/ARBs, if indicated for those medications per national guidelines. These study-specific metrics were calculated for each clinic and clinician, using aggregated data (Figure 1), and given to the study CHCs' leadership as monthly clinic-level reports; in addition, patient panel summaries were given to each provider at the study clinics at varying intervals. Individual patients' “indicated and active” status was also given to providers by request (Figure 2). The CHCs' leaders and individual providers distributed this performance feedback to clinic staff as desired; for details on how the feedback was disseminated, see Table 1.
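As a rough illustration of how clinic- and provider-level metrics can be assembled from EHR extracts, the sketch below groups hypothetical patient rows by provider and computes the percentage "indicated and active." The field names, indication rule, and active-prescription rule are simplified assumptions for illustration, not the ALL intervention's actual algorithm.

```python
from collections import defaultdict
from datetime import date

# Hypothetical EHR extract rows; fields and logic are illustrative only.
rows = [
    {"provider": "Provider X", "has_diabetes": True, "contraindicated": False,
     "rx_end": date(2016, 3, 1)},
    {"provider": "Provider X", "has_diabetes": True, "contraindicated": False,
     "rx_end": None},
    {"provider": "Provider Y", "has_diabetes": True, "contraindicated": False,
     "rx_end": None},
    {"provider": "Provider Y", "has_diabetes": True, "contraindicated": True,
     "rx_end": None},
]

today = date(2016, 1, 15)
panel = defaultdict(lambda: {"indicated": 0, "active": 0})

for r in rows:
    # Simplified indication rule: diabetes without a documented contraindication.
    if r["has_diabetes"] and not r["contraindicated"]:
        panel[r["provider"]]["indicated"] += 1
        # Simplified "active" rule: an unexpired prescription on file.
        if r["rx_end"] is not None and r["rx_end"] >= today:
            panel[r["provider"]]["active"] += 1

for provider, counts in panel.items():
    pct = 100 * counts["active"] / counts["indicated"]
    print(f"{provider}: {pct:.0f}% of indicated patients have an active Rx")
```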

Figure 1. Provider-specific performance metrics.

Figure 2. Provider-specific data feedback.

Table 1. Distribution of Study-Related Performance Feedback, by Organization.

Organization A
  Study performance metrics:
  • Distributed to individual providers 1 time in study year 4
  Study data feedback:
  • Initially staff used study rosters to identify patients “indicated but not active,” which they would note on the EHR problem list for provider review
  • In study year 4, intending to increase reliance on the real-time alert, began inserting only the intervention logic into the EHR problem list
  • Roster continued to be used by an RN diabetes QI lead for individual meetings with providers to discuss overall care of their diabetic patients

Organization B
  Study performance metrics:
  • Quarterly, site coordinator sent metrics for each provider and clinic to the medical director, who then disseminated to clinic-based lead providers
  • Some lead providers presented the metrics at clinic-specific provider team meetings
  • Graphs depicting overall clinic progress sometimes posted on clinic bulletin boards, at discretion of clinic managers
  Study data feedback:
  • Monthly, site coordinator posted roster-based lists of patients “indicated but not active” by the care team (usually 2 providers) on the organization's shared drive
  • Staff had to take the initiative to search for and pull the list

Organization C
  Study performance metrics:
  • Site coordinator pulled provider-specific percentages of “indicated and active” from the study results and e-mailed them in graph form (along with the clinic-wide percentages) to individual providers 4 times over the course of the 5-y study
  • Leadership sometimes used the clinic metrics as a springboard for discussion in leadership and QI meetings
  Study data feedback:
  • Approximately every 6 wk, site coordinator created roster-based provider-specific lists of patients “indicated but not active”
  • Distributed paper copies in-person and e-mailed electronic copies (varied). Usually given only to providers, but by request sometimes shared with other members of the care team

Abbreviations: EHR, electronic health record; QI, quality improvement; RN, registered nurse.

Using a convergent design within a mixed-methods framework (Fetters et al., 2013), we collected qualitative data on the dynamics and contextual factors affecting intervention uptake (Bunce et al., 2014). The extent of qualitative data collection at each clinic was informed by pragmatic constraints and data saturation (the point at which no new information was observed) (Guest et al., 2006). Table 2 details our methods and sampling strategy. The intervention was implemented in June 2011 and supported through May 2015; qualitative data were collected between December 2011 and October 2014. Qualitative analysis was guided by the constant comparative method (Parsons, 2004), wherein each finding and interpretation is compared to previous findings to generate theory grounded in the data. We used the software program QSR NVivo to organize and facilitate analysis of the interview and group discussion transcripts and observation field notes. We identified key emergent concepts, or codes, and assigned them to appropriate text segments. Each code's scope and definition were then refined, and additional themes identified, through iterative immersion/crystallization cycles (deep engagement with the data followed by reflection) (Borkan, 1999). Our interpretations of the data were confirmed through regular discussions among the research team, which included experts in clinical care, quality improvement, and quantitative and qualitative research, and clinic leadership at the study CHCs. This article presents our qualitative findings on CHC physicians' perspectives on the performance feedback provided to them as described earlier.

Table 2. Qualitative Data Collection Methods.

Method: Observation
Sampling strategy: Convenience
  • Shadowed teams with multiple DM appointments in a single day
  • All relevant meetings and trainings, as allowed by clinics
  • As possible when in clinics for meetings or interviews
Resulting documents: 126 field notes
Detail:
  • Shadowed teams at all 11 clinics as they cared for patients with diabetes
  • Observed relevant clinic and team meetings and trainings
  • Informal observations and conversations throughout study

Method: Semistructured interviews
Sampling strategy: Purposive
  • Sampled for high and low prescribers; MD/DO vs NP/PA; range of enthusiasm for the intervention
Resulting documents: 34 transcripts
Detail:
  • Explored the thoughts and opinions of clinic staff as related to the implementation process and the intervention itself
  • Interviewed 23 PCPs (MD = 15; PA/NP = 8) and 11 RNs

Method: Group discussions
Sampling strategy: Purposive
  • Sampled for diversity of staff role across clinics and organizations
Resulting documents: 8 transcripts
Detail:
  • Guided discussions that explored within-group opinions as related to the implementation process and the intervention itself
  • Stand-alone or dedicated time during routine staff meetings
  • 8 separate group discussions divided by clinic role. Participation by a total of 79 staff: 27 PCPs, 16 RNs, 19 MAs, 7 TAs, 6 PCCs, 2 administrative, 2 pharmacists

Method: Diaries by site coordinators
Sampling strategy: Not applicable
Resulting documents: 31 mo of entries
Detail:
  • Clinic-based study-site coordinators (4) wrote weekly entries about the surprises, challenges, solutions, unresolved issues, and day-to-day logistics of implementation based on informal observations and discussions
  • Monthly e-mail exchanges between qualitative researchers and site coordinators to clarify and expand on original entries

Method: Document collection
Sampling strategy: Not applicable
Resulting documents: 201 documents
Detail:
  • Relevant clinic and contextual documents (eg, in-house newsletters and plans to implement health care reform)
  • Communications (eg, e-mail strings among the study team; outreach to clinics)

Method: Chart review
Sampling strategy: Varied by the organization
  • Org A: All patients indicated but not on an ALL medication (195 charts)
  • Org B: Purposive sample of patients indicated but not on an ALL medication from 9 providers at 5 (of 6) clinics (100 charts)
  • Org C: List of all patients seen in past 36 mo and indicated but not on an ALL medication in all 4 (of 4) clinics, filtered by medical record number; reviewed first 136 (136 charts)
Resulting documents: 431 unique patients
Detail:
  • Goal: Determine why some patients considered indicated for an ALL medication (statin or ACEI/ARB) per intervention logic are not prescribed the medication
  • One site coordinator at each organization reviewed charts from sample of patients indicated for but not prescribed an ALL medication

Abbreviations: ACEI, angiotensin-converting enzyme inhibitor; ALL, ALL Initiative; ARB, angiotensin II receptor blocker; DM, diabetes (diabetes mellitus); DO, doctor of osteopathic medicine; MA, medical assistant; MD, doctor of medicine; NP, nurse practitioner; PA, physician assistant; PCC, patient care coordinator; PCP, primary care provider; RN, registered nurse; TA, team assistant.

This study was approved by the Kaiser Permanente NW Institutional Review Board. Study participants (clinic staff) gave verbal consent prior to data collection.

RESULTS

CHC providers' perceptions of performance feedback measures

CHC providers stated that they often questioned the validity of performance feedback measures, usually because the feedback measures did not account for CHC patients' needs or the complexity of their lives, or for clinical decisions made by providers who understood this complexity. Although similar concerns have been reported in other care settings (Dixon-Woods et al., 2012; Ivers et al., 2014a; Kansagara et al., 2014; Kizer & Kirsh, 2012; Powell et al., 2012; Stange et al., 2014), the socioeconomic vulnerabilities and fluidity of the CHC patient population added specific barriers to the perceived trustworthiness of the feedback measures.

Defining the population: Who counts as “indicated and active”?

In this study, the feedback measures' denominator was the number of patients indicated for a given medication (ACEI/ARB or statin), and the numerator was the number prescribed the indicated medication in the last year. However, CHC patients' socioeconomic circumstances (eg, lack of money to pay for the medication and housing instability) or related clinician judgment (eg, perceived likelihood of medication nonadherence and preference for a stepwise approach to prescribing for patients with complex needs) could be barriers to prescribing a given “indicated” medication. For example, some CHC patients bought medications in their home country, where they cost less, or took medications that family members or friends had discontinued. Without documentation of these circumstances in the EHR, however, the feedback measures would identify the patient's prescription as expired. Two examples illustrate this:

[Provider] asked about one medication that [the patient] said he was taking but it looked in the chart like he was out of. Patient explained that his son was taking the same medication but had recently been prescribed a higher dose, so he gave his dad (the patient) his remaining pills of the lower dose. “Because I don't have money.” (Field note)

... If patients are not on the medications, it is not because it wasn't offered. [The provider] believes that if the patient was not on medications it is due to education level affecting understanding, lack of resources for scripts and tests, or patient flat out refuses. The concern ... is how this information is reflected in the statistics or data. (Field note)

Furthermore, CHC providers reported that their patients are often unable to see their primary provider for periods of time (eg, if they are out of the country, or in prison), or are not available for other reasons (eg, transient populations and inaccurate/frequently changing contact information). When patients on a provider's panel were temporarily receiving care elsewhere (eg, while in jail), and medication data were not shared between care sites, feedback measures would be affected. Similarly, migrant workers remain on the provider's panel (and thus in the feedback measures' denominator) even when they are out of the country and cannot be reached by the clinic. Their prescriptions might expire while they were unable to see their provider, negatively impacting rates of guideline-based prescribing in the performance feedback measures.

In addition, CHC patients are not enrolled members, which can complicate measurement of care quality by making it unclear whether a patient was not receiving appropriate care or was simply out of reach. Patients identified as lost to follow-up were removed from the feedback measures' denominators; however, as clinics used different methods for defining patients as lost to follow-up, accounting for this accurately in data extraction was difficult. For example, patients could be considered in a given provider's denominator if they were "touched" by the clinic in the last year (eg, by attempted phone calls) even if no actual contact was made. Thus, patients who were never seen in person could be included in the feedback measure's denominator.
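To show why the definition of "lost to follow-up" matters, the brief sketch below contrasts a denominator built from any clinic "touch" with one restricted to in-person visits. The records, field names, and 12-month lookback are hypothetical illustrations, not the study's specifications.

```python
# Hypothetical panel records illustrating how the denominator definition
# changes a feedback measure; data and field names are invented.
panel = [
    {"id": 1, "last_visit_months_ago": 3,
     "last_touch_months_ago": 3, "statin_active": True},
    {"id": 2, "last_visit_months_ago": 20,        # phone attempts only
     "last_touch_months_ago": 2, "statin_active": False},
    {"id": 3, "last_visit_months_ago": 30,        # likely lost to follow-up
     "last_touch_months_ago": 30, "statin_active": False},
]

def rate(patients):
    """Percentage of patients in the denominator with an active statin."""
    return 100 * sum(p["statin_active"] for p in patients) / len(patients)

# Denominator A: any "touch" (including attempted calls) within 12 months.
touched = [p for p in panel if p["last_touch_months_ago"] <= 12]
# Denominator B: an in-person visit within 12 months.
seen = [p for p in panel if p["last_visit_months_ago"] <= 12]

print(f"Rate with 'touched' denominator: {rate(touched):.0f}%")          # 50%
print(f"Rate with 'seen in person' denominator: {rate(seen):.0f}%")      # 100%
```

Under these assumptions, the same panel yields a 50% or a 100% prescribing rate depending solely on how the denominator is defined.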

Situations such as these could not be effectively captured by the extraction algorithm, as the EHR lacked discrete data fields where providers could record them, so these exceptions were not reflected in the performance data. As a result, the CHC providers often questioned the measures' validity and fairness. One provider said that receiving such reports can feel like “salt in the wound.” Another provider noted:

... we get these stupid reports all the time telling you you're good, you're bad. I mean, just one less thing to like have somebody pointing fingers at me. ... It's horrible as a provider, really, to get all of these measurements ... It's like saying you're going to be graded on this. (PCP)

Gap between potential and actuality

Providers consistently described a tension between their desire to use feedback data to improve patient care and their inability to do so given inherent situational constraints. This tension could leave them feeling overwhelmed, anxious, frustrated, or guilty when they received the feedback reports.

... the possibilities for data and what we could do with it in a systematic way are amazing. But we are so completely overloaded ... that we just can't even deal with the data that we get... (PCP)

However, providers also noted some positives. Some acknowledged that performance feedback can serve as a helpful reminder of the importance of the targeted medications in diabetes care, which motivated them to discuss these medications with their patients.

I like it, personally, because ... somebody is helping me to see. Sometimes it is difficult to see the whole picture .... It is not because you lack the knowledge or the experience. But you can't catch everything. (PCP)

Others appreciated the feedback as a safeguard, even though they were often already aware of the patients flagged as needing specific actions or medications. Conversely, others thought it was not worth reviewing the reports, as they already knew their patients' issues:

[I would look over] the patients who were indicated for certain meds... that weren't on them, and just kind of just quickly review who those patients were. Just kind of ... do I recognize this patient? Oh, am I surprised that they're not on a statin or ACE? No I'm not. Okay. (PCP)

Provider suggestions for improving performance feedback measures

Despite the tensions described earlier, most providers said they wanted to receive feedback data, but many noted that organizational changes (eg, to workflow, staffing, and productivity expectations) would be necessary precursors to its effective use. Without these changes, providers thought such data would primarily serve as snapshots of current care quality, but not as tools to improve performance. They suggested a number of ways to improve both the acceptability and utility of performance feedback.

Staffing and resources

Dedicated, management-supported "brain time" was suggested as a means to enable care teams to review feedback data together and identify next steps for addressing care gaps. Providers also recommended designating a trusted team member (eg, an RN) as responsible for identifying potentially actionable items from the feedback data.

... [what] I'm kind of looking for is a QI [quality improvement] person to come in here that has the data, and goes to the team meetings, and can [be] sort of non-judgmentally preventive. ... So it's not so much as you bad person ... but hey, we look a little low here, how about if we just talk for a few minutes about, you know, what one little step we could take, and let's try it for a few months and see how it works. But being in the team so that they can support that work, and then checking back in. (PCP)

Action plans

Providers also requested concrete suggestions for how to prioritize and act on feedback data (along with resources to do so), saying that data alone are insufficient to drive change.

[What] I'd really like is here's your data, and here's what we're going to do with this. ... Here's the twelve patients that you have six things wrong with them, that if you got these patients in they're really high yield, something like that. (PCP)

Holistic, patient-specific format

Many providers commented that patient-level data would be more useful than aggregate performance metrics, and asked that such feedback data include patient-specific information along with the panel-based metrics. Many also requested that such patient-level feedback data include relevant clinical indicators in addition to measures targeted by a given initiative (eg, a diabetes “dashboard” that shows HbA1c, blood pressure, and low-density lipoprotein results along with guideline-indicated medications) for a more holistic view of the patient's needs.

I guess ... if you were trending in the wrong direction that would be useful information ... But for me ... probably meatier is pulling lists and looking at specific individuals and saying, you know, here's this woman ... she's not on statin ... is there a reason why? (PCP)

The patient panel data (Figure 2), an example of this approach, were generally well received by care teams. The colors served as a "stoplight" tool: green indicated a measure within normal limits, yellow indicated a measure approaching a concerning level, and red indicated a problem.
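A minimal sketch of the stoplight idea for one measure is shown below; the HbA1c thresholds are common illustrative cut points, not the study's definitions.

```python
# Illustrative stoplight coloring for a diabetes dashboard value.
# Thresholds are hypothetical examples, not the study's definitions.
def stoplight_hba1c(hba1c_percent):
    if hba1c_percent < 7.0:
        return "green"   # within normal limits for the measure
    if hba1c_percent < 9.0:
        return "yellow"  # approaching a concerning level
    return "red"         # a problem needing attention

for value in (6.4, 7.8, 10.2):
    print(f"HbA1c {value}% -> {stoplight_hba1c(value)}")
```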

DISCUSSION

Previously reported challenges to the effective use of EHR data-based performance feedback measures include users' questions about “what counts”/how the measures are calculated (Dixon-Woods et al., 2012; Ivers et al., 2014a; Kansagara et al., 2014; Parsons et al., 2012); welcoming the feedback but also feeling judged by it (Ivers et al., 2014a); and acting on population-based care measurement and expectations while providing patient-centered, individualized care (Dixon-Woods et al., 2012, 2013b; Ivers et al., 2014a; Kansagara et al., 2014; Powell et al., 2012). Feedback measures perceived to be supportive, rather than evaluative (Casalino, 1999; Dixon-Woods et al., 2013a, 2013b; Ivers et al., 2014a), that include goal setting and/or action plans (Hysong, 2009; Ivers et al., 2014b, 2014c), and that users believe account for patient and provider priorities, are more likely to be trusted, and thus potentially impactful (Dixon-Woods et al., 2012, 2013b; Ivers et al., 2014a; Kansagara et al., 2014; Malina, 2013; Mannion & Braithwaite, 2012; Powell et al., 2012).

Our findings concur, and add to this literature by exploring provider perceptions within the safety net setting. CHC patients are often unable to follow care recommendations for financial reasons, may receive care elsewhere for periods of time, or may be otherwise unavailable to clinic staff, leading to inaccuracies in feedback measures. CHC providers, understanding their patients' barriers to acting on recommended care, are understandably disinclined to trust feedback data that do not account for such barriers. Thus, in this important setting, creating EHR-based performance feedback that users perceive as valid may be particularly challenging because of limitations in how effectively such measures can account for the socioeconomic circumstances of CHC patients' lives.

Limitations on the ability to extract data in a way that accounts for such factors are inherent to most EHRs (Baker et al., 2007; Baus et al., 2016; Gardner et al., 2014; Persell et al., 2006; Urech et al., 2015). EHR data extraction entails accessing data recorded in discrete fields accessible and searchable by a computer algorithm. The type of “nonclinical” patient information discussed earlier as barriers to care, as well as the reasoning behind the nonprovision of recommended care, is rarely documented in standardized locations or in discrete data fields (if at all) (Behforouz et al., 2014; Matthews et al., 2016; Steinman et al., 2013), compromising the ability to extract comprehensive performance feedback data recognized as legitimate by users. Improved EHR functions for documenting exceptions might enable more accurate quality measurement, and thus improve providers' receptiveness to and trust of feedback data. In prior research, providers were more receptive to EHR-based clinical decision support when documentation of exceptions was enabled (Persell et al., 2008); the same may apply for feedback measures. Another EHR adaptation that could improve such measures' accuracy would be heightened capacity for health information exchange, so that data on care that CHC patients receive external to their CHC could be reviewed by their primary care provider.
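The sketch below illustrates, using assumed field names, why an extraction algorithm that reads only discrete fields misses exceptions documented solely in free text: only a structured exception flag removes a patient from the denominator.

```python
# Hypothetical illustration: only discrete fields are visible to the
# extraction algorithm; free-text notes are not parsed.
patients = [
    {"id": 1, "statin_indicated": True, "statin_active": False,
     "exception_flag": None, "note": "Patient declined statin; cost concerns."},
    {"id": 2, "statin_indicated": True, "statin_active": False,
     "exception_flag": "patient_refusal", "note": ""},
    {"id": 3, "statin_indicated": True, "statin_active": True,
     "exception_flag": None, "note": ""},
]

# Denominator: indicated patients without a structured exception.
denominator = [p for p in patients
               if p["statin_indicated"] and p["exception_flag"] is None]

# Patient 1's refusal lives only in free text, so the algorithm still counts
# that patient as a care gap; patient 2's discrete flag removes that patient.
rate = 100 * sum(p["statin_active"] for p in denominator) / len(denominator)
print(f"Measured prescribing rate: {rate:.0f}%")  # 50% despite a documented refusal
```

Under these assumptions, documenting the refusal in a structured exception field is what would make the measured rate reflect the provider's actual decision making.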

This study's CHC providers' suggestions for improving the legitimacy and utility of EHR data-based performance feedback did not directly speak to the challenges of using EHR data to create accurate measures, but they did so indirectly. For example, the providers recommended giving designated staff time and support for reviewing and acting on performance feedback. Such support could include ensuring that the appropriate people understand how each measure is extracted and constructed, and what a given measure might miss due to limitations in data structures. Providers who dispute performance feedback that is extracted from their own EHR data may feel more confident in the feedback if they understand how the metrics and reports are calculated from the raw data (eg, the algorithm will not catch free text documentation of patient refusal; to remove that patient from the measure denominator it is necessary to use the alert override option). In addition, the providers' ambivalence about the performance measures illuminates the need to acknowledge that care quality cannot be judged simplistically, and to ensure that focusing on measurement does not conflict with patient-centered care. Proactively acknowledging these needs and working with providers to address them could further strengthen trust in feedback measures.

This study has several limitations. The study clinics were involved in other, concurrent practice change efforts, some of which also involved performance feedback. Given this, provider reactions may have been atypical, limiting generalizability of the findings. Interviews and observations were conducted by members of the research team potentially perceived to have an investment in intervention outcomes; respondents may therefore have moderated their responses. Finally, results are purely descriptive and are not correlated with any quantitative outcomes.

CONCLUSION

Provider challenges to the legitimacy of EHR data-based performance feedback measures have impeded the effective use of such feedback. Addressing issues related to such measures' credibility and legitimacy, and providing strategies and resources to take action as necessary, may help realize the potential of EHR data-based performance feedback in improving patient care.

Footnotes

Thank you to Colleen Howard for her many contributions to the project, including data collection and interpretation, and to Jill Pope, Elizabeth L. Hess, and C. Samuel Peterson for editorial and formatting assistance. Development of this article and the study that it describes were supported by grant R18HL095481 from the National Heart, Lung and Blood Institute. Drafts of portions of this work have been presented at the 2014 NAPCRG annual meeting, the 2015 Society for Implementation Research Collaboration conference, and the 8th Annual Conference on the Science of Dissemination and Implementation in Health.

Clinical Trial Registration Number: NCT02299791.

We have no conflict of interest, financial or otherwise, to disclose in relation to the content of this article.

REFERENCES

  1. Bailey L. C., Mistry K. B., Tinoco A., Earls M., Rallins M. C., Hanley K., Woods D. (2014). Addressing electronic clinical information in the construction of quality measures. Academic Pediatrics, 14, S82–S89.
  2. Baker D. W., Persell S. D., Thompson J. A., Soman N. S., Burgner K. M., Liss D., Kmetik K. S. (2007). Automated review of electronic health records to assess quality of care for outpatients with heart failure. Annals of Internal Medicine, 146, 270–277.
  3. Baus A., Zullig K., Long D., Mullett C., Pollard C., Taylor H., Coben J. (2016). Developing methods of repurposing electronic health record data for identification of older adults at risk of unintentional falls. Perspectives in Health Information Management, 13, 1b.
  4. Behforouz H. L., Drain P. K., Rhatigan J. J. (2014). Rethinking the social history. New England Journal of Medicine, 371, 1277–1279.
  5. Borkan J. (1999). Immersion/crystallization. In Crabtree B. F., Miller W. L. (Eds.), Doing qualitative research (2nd ed., pp. 179–194). Thousand Oaks, CA: Sage Publications.
  6. Brooks J. A. (2014). The new world of health care quality and measurement. American Journal of Nursing, 114, 57–59.
  7. Bunce A. E., Gold R., Davis J. V., McMullen C. K., Jaworski V., Mercer M., Nelson C. (2014). Ethnographic process evaluation in primary care: Explaining the complexity of implementation. BMC Health Services Research, 14, 607.
  8. Casalino L. P. (1999). The unintended consequences of measuring quality on the quality of medical care. New England Journal of Medicine, 341, 1147–1150.
  9. Dixon-Woods M., Leslie M., Bion J., Tarrant C. (2012). What counts? An ethnographic study of infection data reported to a patient safety program. Milbank Quarterly, 90, 548–591.
  10. Dixon-Woods M., Leslie M., Tarrant C., Bion J. (2013a). Explaining Matching Michigan: An ethnographic study of a patient safety program. Implementation Science, 8, 70.
  11. Dixon-Woods M., Redwood S., Leslie M., Minion J., Martin G. P., Coleman J. J. (2013b). Improving quality and safety of care using “technovigilance”: An ethnographic case study of secondary use of data from an electronic prescribing and decision support system. Milbank Quarterly, 91, 424–454.
  12. Fetters M. D., Curry L. A., Creswell J. W. (2013). Achieving integration in mixed methods designs: Principles and practices. Health Services Research, 48, 2134–2156.
  13. Gardner W., Morton S., Byron S. C., Tinoco A., Canan B. D., Leonhart K., Scholle S. H. (2014). Using computer-extracted data from electronic health records to measure the quality of adolescent well-care. Health Services Research, 49, 1226–1248.
  14. Gold R., Muench J., Hill C., Turner A., Mital M., Milano C., Nichols G. A. (2012). Collaborative development of a randomized study to adapt a diabetes quality improvement initiative for federally qualified health centers. Journal of Health Care for the Poor and Underserved, 23, 236–246.
  15. Gold R., Nelson C., Cowburn S., Bunce A., Hollombe C., Davis J., DeVoe J. (2015). Feasibility and impact of implementing a private care system's diabetes quality improvement intervention in the safety net: A cluster-randomized trial. Implementation Science, 10, 83.
  16. Gourin C. G., Couch M. E. (2014). Defining quality in the era of health care reform. JAMA Otolaryngology Head and Neck Surgery, 140, 997–998.
  17. Guest G., Bunce A., Johnson L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18, 59–82.
  18. Hysong S. J. (2009). Meta-analysis: Audit and feedback features impact effectiveness on care quality. Medical Care, 47, 356–363.
  19. Ivers N., Barnsley J., Upshur R., Tu K., Shah B., Grimshaw J., Zwarenstein M. (2014a). “My approach to this job is ... one person at a time”: Perceived discordance between population-level quality targets and patient-centred care. Canadian Family Physician, 60, 258–266.
  20. Ivers N. M., Grimshaw J. M., Jamtvedt G., Flottorp S., O'Brien M. A., French S. D., Odgaard-Jensen J. (2014b). Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. Journal of General Internal Medicine, 29, 1534–1541.
  21. Ivers N. M., Sales A., Colquhoun H., Michie S., Foy R., Francis J. J., Grimshaw J. M. (2014c). No more “business as usual” with audit and feedback interventions: Towards an agenda for a reinvigorated intervention. Implementation Science, 9, 14.
  22. Kansagara D., Tuepker A., Joos S., Nicolaidis C., Skaperdas E., Hickam D. (2014). Getting performance metrics right: A qualitative study of staff experiences implementing and measuring practice transformation. Journal of General Internal Medicine, 29(Suppl. 2), S607–S613.
  23. Kizer K. W., Kirsh S. R. (2012). The double edged sword of performance measurement. Journal of General Internal Medicine, 27, 395–397.
  24. Malina D. (2013). Performance anxiety—what can health care learn from K-12 education? New England Journal of Medicine, 369, 1268–1272.
  25. Mannion R., Braithwaite J. (2012). Unintended consequences of performance measurement in healthcare: 20 salutary lessons from the English National Health Service. Internal Medicine Journal, 42, 569–574.
  26. Matthews K. A., Adler N. E., Forrest C. B., Stead W. W. (2016). Collecting psychosocial “vital signs” in electronic health records: Why now? What are they? What's new for psychology? American Psychologist, 71, 497–504.
  27. Parsons K. W. (2004). Constant comparison. In Lewis-Beck M. S., Bryman A., Futing Liao T. (Eds.), Encyclopedia of social science research methods (pp. 181–182). SAGE Publications, Inc.
  28. Parsons A., McCullough C., Wang J., Shih S. (2012). Validity of electronic health record-derived quality measurement for performance monitoring. Journal of the American Medical Informatics Association, 19, 604–609.
  29. Persell S. D., Dolan N. C., Baker D. W. (2008). Medical exceptions to decision support: A tool to identify provider misconceptions and direct academic detailing. AMIA Annual Symposium Proceedings, 1090.
  30. Persell S. D., Kaiser D., Dolan N. C., Andrews B., Levi S., Khandekar J., Baker D. W. (2011). Changes in performance after implementation of a multifaceted electronic-health-record-based quality improvement system. Medical Care, 49, 117–125.
  31. Persell S. D., Khandekar J., Gavagan T., Dolan N. C., Levi S., Kaiser D., Baker D. W. (2012). Implementation of EHR-based strategies to improve outpatient CAD care. American Journal of Managed Care, 18, 603–610.
  32. Persell S. D., Wright J. M., Thompson J. A., Kmetik K. S., Baker D. W. (2006). Assessing the validity of national quality measures for coronary artery disease using an electronic health record. Archives of Internal Medicine, 166, 2272–2277.
  33. Powell A. A., White K. M., Partin M. R., Halek K., Christianson J. B., Neil B., Bloomfield H. E. (2012). Unintended consequences of implementing a national performance measurement system into local practice. Journal of General Internal Medicine, 27, 405–412.
  34. Rowan M. S., Hogg W., Martin C., Vilis E. (2006). Family physicians' reactions to performance assessment feedback. Canadian Family Physician, 52, 1570–1571.
  35. Ryan A. M., McCullough C. M., Shih S. C., Wang J. J., Ryan M. S., Casalino L. P. (2014). The intended and unintended consequences of quality improvement interventions for small practices in a community-based electronic health record implementation project. Medical Care, 52, 826–832.
  36. Stange K. C., Etz R. S., Gullett H., Sweeney S. A., Miller W. L., Jaén C. R., Glasgow R. E. (2014). Metrics for assessing improvements in primary health care. Annual Review of Public Health, 35, 423–442.
  37. Steinman M. A., Dimaano L., Peterson C. A., Heidenreich P. A., Knight S. J., Fung K. Z., Kaboli P. J. (2013). Reasons for not prescribing guideline-recommended medications to adults with heart failure. Medical Care, 51, 901–907.
  38. Urech T. H., Woodard L. D., Virani S. S., Dudley R. A., Lutschg M. Z., Petersen L. A. (2015). Calculations of financial incentives for providers in a pay-for-performance program: Manual review versus data from structured fields in electronic health records. Medical Care, 53, 901–907.
  39. Van der Wees P. J., Nijhuis-van der Sanden M. W., van Ginneken E., Ayanian J. Z., Schneider E. C., Westert G. P. (2014). Governing healthcare through performance measurement in Massachusetts and the Netherlands. Health Policy, 116, 18–26.
