Journal of General Internal Medicine. 2014 Mar 8;29(8):1105–1112. doi: 10.1007/s11606-014-2783-3

Exploration of an Automated Approach for Receiving Patient Feedback After Outpatient Acute Care Visits

Eta S Berner 1, Midge N Ray 1, Anantachai Panjamapirom 2, Richard S Maisiak 3, James H Willig 1, Thomas M English 4, Marc Krawitz 5, Christa R Nevin 1, Shannon Houser 1, Mark P Cohen 6, Gordon D Schiff 7
PMCID: PMC4099452  PMID: 24610308

ABSTRACT

BACKGROUND

Improving and learning from patient outcomes, particularly under new care models such as Accountable Care Organizations and Patient-Centered Medical Homes, requires establishing systems for follow-up and feedback.

OBJECTIVE

To provide post-visit feedback to physicians on patient outcomes following acute care visits.

DESIGN

A three-phase cross-sectional study [live follow-up call three weeks after acute care visits (baseline), one week post-visit live call, and one week post-visit interactive voice response system (IVRS) call] with three patient cohorts was conducted. A family medicine clinic and an HIV clinic participated in all three phases, and a cerebral palsy clinic participated in the first two phases. Patients answered questions about symptom improvement, medication problems, and interactions with the healthcare system.

PATIENTS

A total of 616 patients were included: 142 from Phase 1, 352 from Phase 2 and 122 from Phase 3.

MAIN MEASURES

Primary outcomes included problem resolution, provider satisfaction with the system, and comparison of IVRS with live calls made by research staff.

KEY RESULTS

During both live follow-up phases, at least 96 % of patients who were reached completed the call compared to only 48 % for the IVRS phase. At baseline, 98 of 113 (88 %) patients reported improvement, as well as 167 of 196 (85 %) in the live one-week follow-up. In the one-week IVRS phase, 25 of 39 (64 %) reported improvement. In all phases, the majority of patients in both the improved and unimproved groups had not contacted their provider or another provider. While 63 % of providers stated they wanted to receive patient feedback, they varied in the extent to which they used the feedback reports.

CONCLUSIONS

Many patients who do not improve as expected do not take action to further address unresolved problems. Systematic follow-up/feedback mechanisms can potentially identify and connect such patients to needed care.

KEY WORDS: interactive voice response system, health outcomes, ambulatory care, follow-up studies

INTRODUCTION

Two key principles of new models of healthcare delivery are patient engagement and care coordination across settings, as well as the need to create “learning systems” for continuous improvement.1–5 To improve care coordination, engage patients and improve the overall quality of care, it is essential that organizations develop systematic, efficient systems for patient follow-up and feedback to physicians.5–9 Feedback is a fundamental principle for all systems to function smoothly, as well as to learn and improve. For most ambulatory acute care encounters, clinicians lack systematic feedback on the accuracy of their diagnostic decisions or the outcomes of the medications they prescribe, creating an “open-loop system” incapable of optimally monitoring patients and learning from patient outcomes.6 While there have been a limited number of reports of efforts to track patient outcomes after Emergency Department visits10,11 or inpatient hospitalizations,12 there has been almost no systematic tracking of diagnostic and medication outcomes in the outpatient setting, particularly for acute care encounters.13

The typical scenario for an acute care outpatient visit is that a patient presents for a brief visit, and the physician (or other clinician) makes a presumptive diagnosis and prescribes treatment. Although return visits are sometimes scheduled, there is variation in the return visit interval14 and in the providers the patients subsequently see, and physicians may assume that patients will notify them if the treatment has not worked as expected. Thus, there are few formal or structured ways to track the outcomes of diagnoses made or therapies prescribed.

Interactive voice response systems (IVRS) have potential to contact large numbers of patients at a relatively low cost.15 IVRS have been successfully deployed to conduct surveillance of adverse drug effects,15–18 but there has been little research examining diagnosis or therapeutic outcomes by integrating IVRS feedback into ongoing clinical operations, and comparing IVRS with other methods of follow-up and feedback. The present study describes the results of an automated follow-up/feedback system that was implemented for acute care encounters in three ambulatory clinic settings. This study was part of a larger study, the overall aim of which was to “close the feedback loop” by developing and assessing an automated system to provide post-visit feedback to providers about patient outcomes in acute care visits. The development of the system has been described previously,19 and an ancillary study that examined patient reactions to the use of follow-up was reported elsewhere.20

METHODS

Settings

Three different settings were utilized: a family medicine clinic (FM), an HIV clinic (HIV), and a clinic for disabled patients, primarily with cerebral palsy (CP). The FM and HIV clinics were affiliated with the same academic medical center, but were located in different cities in the Southeast. The CP site was located in the same city as the HIV clinic. This study focused on acute care visits at all three sites. The patient population was predominantly Caucasian (52–67 %), but included significant numbers of African American patients (varying from 33 % to 46 % across the three sites). Patients with acute illnesses who presented to the FM, HIV “sick call,” and CP clinics during the three phases of data collection were eligible to participate.

Primary providers were resident physicians at the FM clinic, academic attending physicians at the HIV sick call clinic, and a mix of attending and resident physicians from local hospitals at the CP clinic. The three clinics had different electronic health record (EHR) systems: Allscripts TouchWorks21 at the FM clinic; WorldVista,22 an open source EHR based on the Veterans Administration VistA system, at the CP clinic; and a self-developed EHR at the HIV clinic.

Overall Design

We developed a system to automate the follow-up and feedback process by extracting data from the EHR to identify patients to call, automating the telephone call, and providing feedback on patient outcomes for physician review. The focus of the call was on patient self-report of improvement (or worsening), but also included questions about medication use, as well as subsequent contact patients had with the healthcare system to address problems that failed to resolve. The questions were pilot tested by telephoning a small sample of patients who were not part of the study. Patients who indicated they were not improving as expected were transferred at the end of the telephone call to the clinic where they had received care, where usual care processes and protocols for triaging incoming patient telephone calls were used. The results from the automated call were compared to live calls made by research staff.

Patients were provided with information about the study after their clinic visit and all patients with acute illnesses who volunteered to be called were included in the study. These patients provided a telephone number and preferred time for the call, plus a four-digit authentication number. Verbal consent was obtained at the beginning of the call. The protocol was approved by the University of Alabama at Birmingham (UAB) Institutional Review Board.

Data Collection Phases

Phase 1 (4 months)—Baseline—Data were collected by a telephone call from study personnel three weeks after the index visit, by which time most of the acute illnesses were expected to have resolved. No feedback was provided to physicians. Up to five attempts were made to reach each patient.

Phase 2 (4 months)—Partial automation—Study personnel called patients one week after the index visit. Feedback was provided automatically to the physician who had seen the patient for the acute care visit. Up to five attempts were made to reach each patient. This phase tested the processes of providing feedback and provided data on outcomes with early follow-up by a human interviewer.

Phase 3 (2 months)—Full automation—Data were collected using an IVRS and feedback was provided automatically to the physician. Initially five attempts were made to reach each patient, and this was subsequently increased to ten attempts.

Because of the patients’ cognitive and physical limitations, the cerebral palsy clinic only participated in the first two phases. At the end of the third phase, physicians were asked about their use and reaction to receipt of feedback.

These phases corresponded to milestones in the development of the fully automated system, and logistical reasons led to some differences in the duration of each phase. Phase 1 provided data on usual care without intervention to compare to the main focus, which was the one-week intervention. Phase 2 tested the automated provision of feedback and also provided data on a live human intervention. Phase 3 represented the full automation—both data collection and feedback were automated.

Automated System

Details of the development of the automated system have been previously described.19 Briefly, the FM clinic used the analytic tools within their commercial EHR to extract relevant patient information needed for data collection, transmitted these data to the staff who called the patient manually or to the IVRS, and used secure messaging within the EHR to provide feedback from the calls to the physician. At the HIV and CP clinics, a new system was developed, known as Clinical Documentation and Notification Application (CDNA)©, which automatically extracted patient data from the EHR (name, gender, race, diagnosis, medications), allowed data entry of the patient-provided information (telephone number, authentication code, preferred time for call), and provided this information to the human or IVRS for subsequent patient calls. Data from the calls were stored in the CDNA, which then delivered automated feedback to the patient’s physician. Figure 1 shows a sample of the feedback the physician received.

Figure 1. Example of physician feedback.
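The extraction–call–feedback loop described above can be sketched as a simple pipeline. The class, field, and function names below are hypothetical illustrations of the workflow, not the actual CDNA schema or code:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical record mirroring the fields the paper says CDNA extracted from
# the EHR (name, diagnosis, medications) plus the patient-provided call
# details (telephone number, four-digit authentication code, preferred time).
@dataclass
class FollowUpRecord:
    name: str
    diagnosis: str
    medications: List[str]
    phone: str
    auth_code: str
    preferred_time: str
    improvement: Optional[str] = None          # "much better" .. "much worse"
    contacted_provider: Optional[bool] = None

def record_call_result(rec: FollowUpRecord, improvement: str,
                       contacted_provider: bool) -> str:
    """Store the patient's answers and build one physician feedback line."""
    rec.improvement = improvement
    rec.contacted_provider = contacted_provider
    # Flag unresolved problems for physician review (assumed triage rule).
    flag = "REVIEW" if improvement in ("no change", "worse", "much worse") else "OK"
    return (f"[{flag}] {rec.name} ({rec.diagnosis}): reports '{improvement}'; "
            f"contacted provider: {'yes' if contacted_provider else 'no'}")

rec = FollowUpRecord("J. Doe", "acute sinusitis", ["amoxicillin"],
                     "555-0100", "4321", "evening")
print(record_call_result(rec, "no change", False))
```

In the deployed system, lines like these were delivered to the physician through the EHR's secure messaging (FM clinic) or the CDNA feedback report.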

Measures

Structured data were collected on patient report of (1) improvement (5-point Likert-type question from “much better” to “much worse,” with a midpoint of “no change”); (2) use of medications prescribed for the acute problem (filled prescription, took as directed, or encountered problems); (3) new problems; and (4) any subsequent actions taken [none, called or returned to clinic, saw another provider (emergency room, hospital, ambulatory)]. Patients could expand answers in both the human and IVRS calls.

We also assessed provider acceptance. All physicians at the FM and HIV clinics who took part in phases 2 and 3 (36 residents at the FM clinic and seven attending physicians at the HIV clinic) provided data on their use of the system, perceived usefulness of the feedback, and whether they wanted to receive reports in the future. The provider questionnaire included structured answers and free text fields. Questionnaires were administered anonymously via a paper form and, for residents who had finished their residency, as a web-based survey.

Data Analysis

Proportions were computed for categorical and binary response items. Exact tests such as the Fisher exact test were used when data were sparse. Chi-square tests were used to test differences between proportions. A two-sided p value of 0.05 was considered statistically significant. All data analyses were conducted using SPSS software.23
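For the sparse 2×2 comparisons, the Fisher exact test can be computed directly from the hypergeometric distribution. The stdlib-only sketch below illustrates the two-sided test; it is not the SPSS procedure the authors used:

```python
from math import comb

def fisher_exact_2x2(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probability of every table with the same
    margins that is no more likely than the observed one.
    """
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d

    def p_table(k: int) -> float:
        # P(first cell = k) given the fixed row and column totals
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)

    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

# Example: a perfectly separated 5-vs-5 table
print(round(fisher_exact_2x2(5, 0, 0, 5), 5))  # → 0.00794 (= 2/252)
```

The `1e-9` tolerance guards against floating-point ties when deciding which tables are "as extreme" as the observed one.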

RESULTS

Table 1 shows the proportion of consented patients in each phase whose reasons for visit fell into each ICD-9-CM diagnostic category. Because the CP clinic had very few patients and did not participate in all phases, its diagnoses were excluded. There were no significant differences in the ICD-9-CM diagnostic categories across the phases.

Table 1.

ICD-9-CM Diagnostic Categories for Reasons for Visits to the Family Practice and HIV Clinics

ICD-9-CM Diagnostic Category                                            Baseline      1-Week Non-Automated   IVRS          Total
Infectious and parasitic diseases                                       18 (20.2 %)   19 (11.3 %)            0 (0 %)       37 (13.1 %)
Endocrine, nutritional and metabolic diseases, and immunity disorders   6 (6.7 %)     10 (6.0 %)             2 (7.7 %)     18 (6.4 %)
Diseases of the blood and blood-forming organs                          0 (0 %)       3 (1.8 %)              0 (0 %)       3 (1.1 %)
Mental disorders                                                        1 (1.1 %)     3 (1.8 %)              0 (0 %)       4 (1.4 %)
Diseases of the nervous system                                          0 (0 %)       2 (1.2 %)              1 (3.8 %)     3 (1.1 %)
Diseases of the sense organs                                            5 (5.6 %)     7 (4.2 %)              0 (0 %)       12 (4.2 %)
Diseases of the circulatory system                                      0 (0 %)       4 (2.4 %)              0 (0 %)       4 (1.4 %)
Diseases of the respiratory system                                      25 (28.1 %)   43 (25.6 %)            7 (26.9 %)    75 (26.5 %)
Diseases of the digestive system                                        5 (5.6 %)     6 (3.6 %)              1 (3.8 %)     12 (4.2 %)
Diseases of the genitourinary system                                    3 (3.4 %)     8 (4.8 %)              4 (15.4 %)    15 (5.3 %)
Diseases of the skin and subcutaneous tissue                            4 (4.5 %)     7 (4.2 %)              2 (7.7 %)     13 (4.6 %)
Diseases of the musculoskeletal system and connective tissue            8 (9.0 %)     20 (11.9 %)            0 (0 %)       28 (9.9 %)
Signs, symptoms, and ill-defined conditions                             12 (13.5 %)   34 (20.2 %)            7 (26.9 %)    53 (18.7 %)
Injury and poisoning                                                    2 (2.2 %)     2 (1.2 %)              2 (7.7 %)     6 (2.1 %)
Total                                                                   89            168                    26            283

Values are count (% within phase).

Table 2 shows patient involvement during each study phase. The percentage of patients reached in the baseline phase (3-week human calls) was significantly higher than in either of the other two phases (p = 0.001 for baseline compared to Phase 2 and p = 0.02 for baseline compared to Phase 3). There were no significant differences in the percentages of patients reached between the second and third phases (p = 0.07).

Table 2.

Number and Percentages of Patients in Each Phase Who Were Called, Reached, Consented and Completed the Questionnaire

                                   Phase 1 Baseline     Phase 2 Partial automation   Phase 3 IVRS/Full automation   Total
                                   (3-week follow-up)   (1-week follow-up)           (1 week/2 sites)
Number Called                      142                  352                          122                            616
Number Reached (% of called)       113 (80 %)           203 (60 %)                   82 (67 %)                      398 (65 %)
Number Consented (% of reached)    111 (98 %)           197 (97 %)                   45 (55 %)                      353 (89 %)
Number Completed (% of consented;  109 (98 %; 77 %)     197 (100 %; 56 %)            39 (87 %; 32 %)                345 (97 %; 56 %)
  % of those who initially agreed to be called)

More than 97 % of Phase 1 and Phase 2 patients who were reached consented to the interview, but in Phase 3, the percentage was significantly lower (55 %, p < 0.001). In all phases, the vast majority (87–100 %) of those who consented completed all of the questions. However, the percentage of those consented who completed the interview was significantly lower in the IVRS phase compared to each of the other two phases (p < 0.001). There were no significant differences in IVRS completers versus non-completers on mean age, race, gender, ethnicity or diagnosis (ICD-9) codes, and there were no significant differences between clinic sites in the proportion of completers/non-completers.

Acute Problem Resolution

In the baseline cohort, 12 % (13/111 respondents) of patients who consented to the interview reported either no improvement (n = 6) or worsening (n = 7). In the cohort called at one week by research staff, 15 % (29/196) reported no improvement (n = 24) or worsening (n = 5). In the group receiving the one-week IVRS call, 36 % (14/39) of those who consented reported they were not improved (none reported being worse). The percentage of patients called by the IVRS who were unimproved was significantly (p = 0.001) greater than the percentages in the baseline or one-week calls by research staff.

Overall, most patients (96 % in Phase 1, 95 % in Phase 2, 87 % in Phase 3) reported filling their prescription and taking it as prescribed, with no significant differences among the phases (p > 0.05). Only 4 % (10/241) of patients who filled their prescriptions reported any type of problem. These findings are consistent with data from these same patients on medications taken on a regular basis, where most patients also reported good adherence.24

Patient Interaction with the Healthcare System

Table 3 shows the relationship between making contact with the healthcare system and patient-reported improvement.

Table 3.

Patient-Reported Outcomes for Resolution of Acute Illness

Phase*              Time Interval         Type of Call   Patient-Reported   Total   Took No Action   Made Contact with Any Health
                                                         Outcome            #       # (%)            System Between Visit and Call, # (%)
Baseline            3 weeks after visit   Human          Better             95      77 (81 %)        18 (19 %)
                                                         Same or worse      13      3 (23 %)         10 (77 %)
Partial Automation  1 week after visit    Human          Better             163     145 (89 %)       18 (11 %)
                                                         Same or worse      28      19 (68 %)        9 (32 %)
Full Automation     1 week after visit    IVRS           Better             25      22 (88 %)        3 (12 %)
                                                         Same or worse      14      12 (86 %)        2 (14 %)

*All three sites participated in Phases 1 and 2. Only the HIV and family medicine clinics participated in Phase 3. Note: A few respondents failed to complete some items.

Overall, only 38 % (21/55) of unimproved patients contacted any healthcare provider during the follow-up period. The percentage was significantly higher in Phase 1 compared to Phase 2 (p = 0.008) or Phase 3 (p < 0.001), with no significant difference between Phases 2 and 3 (p = 0.14). When contact was made, the most frequent actions were to call and/or make an appointment with their own doctor, with 38 % (8/21) reporting seeing a different provider. Most of the patients who reported their symptoms had improved did not contact their physicians, with no significant differences among the three phases (p = 0.42).

During Phase 3, patients who reported they were not improved received a second IVRS call two weeks after the first call (three weeks after the initial visit) to see if the follow-up and feedback affected problem resolution. Only five patients completed these calls and four of the five reported being better.

Physician Acceptance of the Feedback System

Fifty-six percent of FM residents and 100 % of HIV attending physicians responded (63 % overall response rate). Seventy-eight percent of those who responded reported that they had looked at the feedback reports and 76 % of those reported that the reports spurred them to do additional follow-up in the patients’ charts. Sixty-three percent of respondents reported that they would like the feedback reports as a part of routine care.

DISCUSSION

This study examined the impact of a proactive follow-up and feedback system. The data from this study demonstrate that without such proactive follow-up efforts, a sizeable number of patients who are still having problems after an acute care visit do not contact their provider. This information, depending on the initial diagnostic and therapeutic formulation, could be important to prompt providers to reconsider their initial diagnosis or therapy choice, or start additional interventions to benefit a patient’s condition.

The percentage of patients who reported improvement and had not contacted anyone was lowest during Phase 1, most likely because the three-week interval between visit and follow-up call gave patients a longer time to contact someone before they received the outreach call, as well as more opportunity for their problems to resolve by the time of the call. However, it is noteworthy that, even after three weeks, 23 % of those who reported being unimproved did not contact anyone. With the one-week follow-up period (Phases 2 and 3), higher percentages had not contacted anyone. During these latter phases, many patients who did not report improvement may not yet have gotten worse and may not have felt the urgency to take action.

These data show that a surprising number of patients whose symptoms have not improved do not contact a healthcare provider. While some patients may require more time for problem resolution, it is likely that many of the patients with unimproved symptoms who did not re-contact their healthcare provider represent missed opportunities to optimize care or learn from unexpected outcomes that might potentially be found with an effective, efficient and proactive follow-up/feedback system. One could envision a practice nurse either reaching out to such unimproved patients or even transferring calls in real time, to review ongoing patient symptoms and concerns and assess the need for additional follow-up care. The IVRS technology has the potential to make that follow-up process more efficient. The new models of healthcare delivery emphasize value, which includes efficiency and quality with a focus on patient outcomes.14,25 Incorporating a follow-up and feedback system such as we have described could facilitate addressing a number of the goals of these new models. The technical solution we deployed allowed the feedback reports to be accessed without reprogramming the EHR systems’ source code. Thus, this process could be fairly easily replicated with multiple commercial EHRs, since it does not depend on a particular EHR technology.

Like others who have used IVRS, we encountered challenges in patient recruitment and call completion. Because we were performing the intervention as a research study, we only interviewed those patients who both agreed in advance to be contacted and who gave consent on the telephone, factors that markedly reduced our sample size, though likely increased response rates. In addition, Phase 3 was shorter than the other phases due to staffing issues at the clinics, which also lowered our sample size for that phase. Although our sample sizes were not large compared to the entire clinic population, we found that, for the first two phases, almost all of those who agreed to participate did complete the telephone call. Even in the IVRS phase, we reached 67 % of patients who had agreed to be called, 55 % of those reached consented to participate, and 87 % of those completed the call. This compares favorably to the findings from an IVRS study by Haas and colleagues assessing medication use in ambulatory care, where calls were initiated to all patients unless they opted out.16 Haas et al. only reached 52 % of those who had not opted out, and only 61 % of those reached consented to participate, with 71 % of the consenting patients completing the calls. For conducting research using IVRS calls, there is obviously a tradeoff in getting a smaller number of patients who agree to be contacted up front versus calling all patients, but having a larger proportion fail to complete the call.

In the clinical rather than the research setting, the attrition of patients who are not interested in answering the calls may be less of an issue. For the human calls, over 97 % of the patients who were reached consented to participate and 12–15 % of those interviewed reported being the same or worse. For the IVRS calls, only 55 % of those reached consented to the interview and 36 % of those interviewed reported being the same or worse, suggesting the outreach mode likely influences both response rates and the mix of patients who do participate.

Although these differences may appear large, it may be reasonable to assume that those patients who do not complete the IVRS call may not have had much to report and were less constrained about hanging up on a “computer” than were those who spoke with the live interviewer. If we use as the denominator for each group the number of patients reached, rather than the number consented, the percentage who report being the same or worse for the IVRS calls is 17 %, which is not significantly higher (p = 0.54) than the percentages in the other two phases. These data suggest that patients who fail to improve may be the ones more likely to be responsive to the automated follow-up phone calls. Thus, IVRS calls, even with a smaller response rate, may be a worthwhile intervention to detect problems, since it will likely pick up the patients who are most concerned about their condition or who were more ill. Future research might be able to test that hypothesis.
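The denominator shift described above is simple arithmetic. Assuming the counts reported in the Results (14 unimproved IVRS respondents, 39 completers, 82 patients reached):

```python
# Same numerator, two different denominators, per the Phase 3 counts above.
unimproved = 14
completed = 39   # patients who finished the IVRS interview
reached = 82     # patients the IVRS reached at all

pct_of_completed = 100 * unimproved / completed
pct_of_reached = 100 * unimproved / reached

print(f"{pct_of_completed:.0f} % of completers vs {pct_of_reached:.0f} % of those reached")
# → "36 % of completers vs 17 % of those reached"
```

Shifting the denominator from completers to everyone reached drops the apparent rate of unresolved problems from 36 % to 17 %, which is the comparison underlying the p = 0.54 result quoted above.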

Limitations

In addition to the sample size issues discussed above, we relied on patient-reported outcomes of improvement. While this was a key focus for this study, we do not have data from a health professional or the EHR on patients’ actual health outcomes.

A related issue is the patients who did not complete the call. While one could postulate that most of them were improved and hence did not want to stay on the automated call, we do not know this with certainty.

While the IVRS used in this study did allow patients the opportunity to describe their health status or problems, it did not furnish extensive interaction. Depending on the nature of the feedback desired, this approach may not generalize to settings or conditions where a different type of interaction is needed.

The choice of a single follow-up interval rather than a variable interval based on the patients’ diagnoses, while more feasible to implement, limits a precise determination of whether the lack of improvement in this interval invariably represented an unexpected or significant clinical problem.

Because we were researching methods to develop and integrate a new system into actual clinical practice, there were no policies or precedents related to how physicians should best use the feedback reports. Only some of the physicians regularly reviewed or used the reports, making feedback on the usefulness of the report somewhat limited. In addition, we did not assess whether the physicians were aware of the patient status outside of our feedback.

Finally, like many research studies, we relied on volunteers who were willing to participate, and we do not have data on how responsive patients would be if the system were implemented as part of routine clinic policy.

CONCLUSIONS

Timely follow-up can improve the quality of care by detecting problems at an early stage and can potentially avert more serious healthcare outcomes. We found that without timely follow-up, a sizable number of ambulatory care patients reported that their acute problems were not improved, even three weeks after their acute care visit. However, many of these patients failed to contact their healthcare providers when they did not improve as expected. The present study showed that IVRS can feasibly be used with EHRs to automate outreach and follow-up for feedback on patient outcomes in ambulatory settings. Further studies on the effectiveness of this technology and a better assessment of the impact of early problem detection are needed to determine the optimal role of outreach strategies in routine healthcare settings. Comparison of the relative merits of IVRS, including comparative effectiveness, cost, and patient acceptability of various alternative outreach methods, such as nurse outreach or other technologies (email, text messaging), represents an important and fruitful area for future research.

Acknowledgements

Contributors

No other contributors.

Funders

This research was supported by grant #R18HS017060 from the Agency for Healthcare Research and Quality (AHRQ), and was also supported by grant # P30 AI027767 from NIH-NIAID.

Prior presentations

Portions of this manuscript were presented at the conferences listed below.

Berner E et al. (March 2013) Automated Follow-up of Patients in Ambulatory Care: Physician and Patient Views. Presented at the 8th Annual AUPHA Academic Forum, HIMSS-2013, New Orleans, LA.

Berner ES, Burkhardt J, Houser S, et al. Closing the feedback loop to improve diagnostic quality. Presentation at AHRQ HIT Grantees Meeting; June 2010; Bethesda, MD.

Ray MN, Willig J, Cohen M, et al. Follow-up phone calls to improve patient safety in primary care: Issues encountered and lessons learned. Poster presentation at AHRQ HIT Grantees Meeting; June 2010; Bethesda, MD.

Berner ES, Ray MN, Schiff GD, et al. Closing the Feedback Loop to Improve Diagnostic Quality. Poster and abstract at AHRQ Annual HIT Annual Conference; September 7–10, 2008; Bethesda, MD.

Ray MN, Berner ES, Schiff GD, et al. Closing the Feedback Loop to Improve Diagnostic Quality. Poster presentation at Diagnostic Errors in Medicine Conference; June 2008; Phoenix, AZ.

Conflict of Interest

CDNA is copyrighted by the University of Alabama at Birmingham. Eta Berner, Midge Ray, James Willig, Marc Krawitz, and Anantachai Panjamapirom are CDNA inventors. Dr. Berner receives book royalties from Springer-Verlag London Ltd and Health Administration Press. Dr. Panjamapirom is employed by the Advisory Board Company. Dr. Willig has consulted with Quest Diagnostics and received grants from Definicare. Mr. Krawitz is employed with CareFusion, is a co-owner of Physician Innovations, LLC, and is employed part time by the University of Phoenix.

REFERENCES

1. Berwick DM. Launching accountable care organizations—the proposed rule for the Medicare Shared Savings Program. New Engl J Med. 2011;364(16):e32. doi: 10.1056/NEJMp1103602.
2. American College of Physicians. The advanced medical home: a patient-centered, physician-guided model of health care. Philadelphia: American College of Physicians; 2005.
3. Davis K, Schoenbaum SC, Audet AM. A 2020 vision of patient-centered primary care. J Gen Intern Med. 2005;20(10):953–957. doi: 10.1111/j.1525-1497.2005.0178.x.
4. Agency for Healthcare Research and Quality. Patient centered medical home resource center [January 6, 2013]. Available from: http://pcmh.ahrq.gov/.
5. Engineering a Learning Healthcare System: A look at the future: workshop summary. Washington, DC: The National Academies Press; 2011.
6. Schiff GD. Minimizing diagnostic error: the importance of follow-up and feedback. Am J Med. 2008;121(5 Suppl):S38–S42. doi: 10.1016/j.amjmed.2008.02.004.
7. Schiff GD, Bates DW. Can electronic clinical documentation help prevent diagnostic errors? New Engl J Med. 2010;362(12):1066–1069. doi: 10.1056/NEJMp0911734.
8. Singh H, Graber M. Reducing diagnostic error through medical home-based primary care reform. JAMA. 2010;304(4):463–464. doi: 10.1001/jama.2010.1035.
9. Crandall B, Wears RL. Expanding perspectives on misdiagnosis. Am J Med. 2008;121(5 Suppl):S30–S33. doi: 10.1016/j.amjmed.2008.02.002.
10. Wears RL, Schiff GD. One cheer for feedback. Ann Emerg Med. 2005;45(1):24. doi: 10.1016/j.annemergmed.2004.10.035.
11. Chern CH, How CK, Wang LM, Lee CH, Graff L. Decreasing clinically significant adverse events using feedback to emergency physicians of telephone follow-up outcomes. Ann Emerg Med. 2005;45(1):15–23. doi: 10.1016/j.annemergmed.2004.08.012.
12. Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med. 2003;138(3):161–167. doi: 10.7326/0003-4819-138-3-200302040-00007.
13. Committee on Identifying and Preventing Medication Errors. Preventing medication errors: quality chasm series. Aspden P, Wolcott J, Bootman JL, Cronenwett LR, editors. Washington, DC: The National Academies Press; 2007.
14. DeSalvo KB, Block JP, Muntner P, Merrill W. Predictors of variation in office visit interval assignment. Int J Qual Health Care. 2003;15(5):399–405. doi: 10.1093/intqhc/mzg067.
15. Haas JS, Amato M, Marinacci L, Orav EJ, Schiff GD, Bates DW. Do package inserts reflect symptoms experienced in practice? Assessment using an automated phone pharmacovigilance system with varenicline and zolpidem in a primary care setting. Drug Safety. 2012;35(8):623–628. doi: 10.1007/BF03261959.
16. Haas JS, Iyer A, Orav EJ, Schiff GD, Bates DW. Participation in an ambulatory e-pharmacovigilance system. Pharmacoepidemiol Drug Saf. 2010;19(9):961–969. doi: 10.1002/pds.2006.
17. Haas JS, Klinger E, Marinacci LX, Brawarsky P, Orav EJ, Schiff GD, et al. Active pharmacovigilance and healthcare utilization. Am J Manag Care. 2012;18(11):e423–e428.
18. Byrom B. Using IVRS in clinical trial management. Appl Clin Trials. 2002:36–42.
19. Willig JH, Krawitz M, Panjamapirom A, Ray MN, Nevin CR, English TM, et al. Closing the feedback loop: an interactive voice response system to provide follow-up and feedback in primary care settings. J Med Syst. 2013;37(2):9905. PMID: 23340825.
20. Houser SH, Ray MN, Maisiak R, Panjamapirom A, Willig J, Schiff GD, et al. Telephone follow-up in primary care: can interactive voice response calls work? Stud Health Technol Inform. 2013;192:112–116. PMID: 23920526.
21. Allscripts [04/25/2013]. Available from: http://www.allscripts.com/.
22. WorldVista [04/25/2013]. Available from: http://worldvista.org/.
23. SPSS Statistics for Windows. 17.0 ed. Chicago, IL: SPSS, Inc.; 2008.
24. Maisiak RS, Ray MN, Panjamapirom A, Houser S, Willig JH, English TM, et al. The general medication adherence (GMA) scale: psychometric analysis of a new scale for the primary care setting. Washington, DC: Society for Behavioral Medicine; 2011.
25. Blumenthal D, Tavenner M. The “meaningful use” regulation for electronic health records. New Engl J Med. 2010;363(6):501–504. doi: 10.1056/NEJMp1006114.
