Health Services Research
2015 May 18;51(1):76–97. doi: 10.1111/1475-6773.12315

Community‐Level Quality Improvement and the Patient Experience for Chronic Illness Care

Megan McHugh 2, Jillian B Harvey 1, Raymond Kang 3, Yunfeng Shi 4, Dennis P Scanlon 4
PMCID: PMC4722218  PMID: 25989319

Abstract

Objective

To determine whether chronically ill adults from communities participating in a community‐level quality improvement effort reported greater improvement on four domains of patient experience: care coordination, patient satisfaction, provider interaction and support, and receipt of recommended care for diabetes.

Study Setting

The Robert Wood Johnson Foundation's Aligning Forces for Quality (AF4Q) initiative provides multistakeholder alliances with funding and technical assistance to improve quality in their communities.

Study Design

This is a quasi‐experimental, pre‐post study. We used a difference‐in‐differences approach to detect relative changes over time on 16 survey‐based outcome measures representing the four patient experience domains.

Data Collection

We surveyed adults with chronic illness(es) in 14 AF4Q communities and a national comparison group. Wave 1 was completed in 2008 (8,140 respondents) and wave 2 in 2012 (9,565 respondents).

Principal Findings

Respondents from AF4Q communities reported modestly greater improvement on patient satisfaction and receipt of recommended care for diabetes.

Conclusions

Results suggest that community‐level QI efforts led by multistakeholder alliances hold the potential to improve patient satisfaction and receipt of recommended care for diabetes, although the magnitude of the effect may be limited. There is less evidence that community‐level QI can improve patient perceptions of care coordination or provider interaction and support.

Keywords: Quality improvement, chronic illness, patient experience


In response to clear evidence that our health system does not perform as well as it should (Institute of Medicine [IOM] 2000; McGlynn et al. 2003; Mangione‐Smith et al. 2007), much effort has been devoted to improving quality. Health care providers are increasingly engaged in a variety of quality improvement (QI) efforts (Chassin and Loeb 2011) and have been encouraged to do so by the growing number (and expanding scope) of pay‐for‐performance programs (Baker and Delbanco 2007; Kuthmerker and Hartman 2007; Centers for Medicare & Medicaid Services [CMS] 2009). Although these QI efforts have resulted in many successes, quality problems persist (Landrigan et al. 2010; Agency for Healthcare Research and Quality [AHRQ] 2011; Dentzer 2011).

Some have suggested that to stimulate meaningful and sustainable improvement, QI efforts should advance from organization‐level initiatives to community‐level approaches that follow a coherent, overarching vision (Ferlie and Shortell 2001; Margolis et al. 2001; Leatherman and Sutherland 2007). Community‐level QI is potentially more effective than individual approaches because it can consolidate duplicate efforts, reduce fragmentation, improve information sharing, and influence factors that are out of the control of individual providers (e.g., payment reforms) (Donabedian 1988; Margolis et al. 2001; Miller n.d.). Community approaches to QI appear to be gaining traction. For example, the State Action on Avoidable Rehospitalizations initiative provides strategic guidance, support, and technical assistance to cross‐continuum teams to improve care transitions (Boutwell et al. 2011; Mittler et al. 2013); chartered value exchanges provide partnerships of providers, employers, insurers, and consumers with access to performance data to enhance local public reporting (U.S. Department of Health and Human Services [USDHHS] 2008); and the Robert Wood Johnson Foundation's Aligning Forces for Quality (AF4Q) program offers financial support and technical assistance to multistakeholder alliances to “reweave the fabric of their health care systems to be stronger, more resilient, and of higher quality across the full continuum of care” (Painter and Lavizzo‐Mourey 2008).

Many community‐level QI initiatives are specifically aimed at improving care for people with chronic illness (Pittsburgh Regional Health Initiative [PRHI] 2012; CMS 2013), the leading cause of death and disability in the United States (National Center for Chronic Disease Prevention and Health Promotion [NCCDPHP] 2012). The cost and complexity of caring for people with chronic illness are considerably higher than for individuals without chronic illness (MedPAC 2009). Individuals with chronic illness see multiple providers (Pham et al. 2007), and community‐level QI holds the potential to improve care coordination so that chronically ill patients get the right care at the right time, with the goal of avoiding unnecessary duplication of services (CMS 2013).

Our objective was to explore the impact of community‐level QI initiatives aimed at improving chronic illness care. Specifically, we sought to determine, approximately 6 years after the program's initiation in 2006, whether chronically ill adults in 14 AF4Q communities reported greater improvement in care coordination, patient satisfaction, provider interaction and support, and receipt of recommended care for diabetes than patients in non‐AF4Q communities. Importantly, our use of patient‐reported outcomes reflects a growing appreciation that understanding patients’ experiences is key to creating a patient‐centered health system (National Quality Forum [NQF] 2012). Further analysis of the impact of AF4Q on a broader set of outcomes (e.g., readmissions, health outcomes, physician‐reported QIs, patient engagement and behavior) is ongoing as the AF4Q program continues.

Methods

The Intervention

AF4Q is the largest privately funded QI program to date (Scanlon et al. 2012). Funding is directed to multistakeholder (payers, providers, consumers, and purchasers) alliances that facilitate improvement by securing and coordinating resources, promoting collaboration across providers, disseminating information, and prioritizing common goals and initiatives (Harvey et al. 2012). The alliances are expected to meet goals in several programmatic areas, including QI (i.e., improving care delivery), public reporting, reduction in health disparities, and consumer engagement. The Foundation committed $300 million to the program, with the objective “to help the Aligning Forces for Quality communities improve the quality of care for everyone in these communities by 2015” (Painter and Lavizzo‐Mourey 2008, p. 747).

In 2006, the overarching goal of AF4Q was to help alliances substantially improve the quality of health care provided in ambulatory care settings for persons with chronic diseases. Grantees were broadly charged with helping providers in the community improve the quality of ambulatory, chronic illness care. In 2008, the QI component of AF4Q was expanded to include hospital care (for all conditions, not just chronic illness). The Foundation provided alliances with technical assistance, including webinars, peer‐to‐peer learning forums, and access to QI consultants and tool kits.

The alliances were given considerable latitude regarding how to pursue their work. For example, alliances could establish their own activities related to chronic care improvement, partner with other organizations, or use a combination of the two approaches. Improving care management processes, encouraging the adoption of patient‐centered medical homes, and reducing readmissions were the most commonly identified foci of alliances’ efforts. Alliances had the flexibility to focus on any number of health conditions, but diabetes and congestive heart failure were the most commonly targeted areas. Additional details on AF4Q and the alliances’ QI work can be found in other publications (see McHugh et al. 2012; Scanlon et al. 2012).

Data

We used data from the AF4Q Consumer Survey (AF4QCS; Penn State Center for Health Care Research and Policy 2015), conducted as part of the AF4Q evaluation and funded by the Robert Wood Johnson Foundation. The AF4QCS was administered in two waves. The baseline survey was a random‐digit‐dial (RDD) survey, completed in August 2008, of chronically ill adults (age 18 or older) in the 14 AF4Q communities, as well as a comparison group (the national sample) randomly sampled from the rest of the country. Areas with a high percentage of minorities were oversampled to ensure a sufficient number of minority respondents. All respondents had visited health care professionals during the previous 2 years for the care of at least one of the following five conditions: diabetes, hypertension, asthma, chronic heart disease, and depression. The first wave of the AF4QCS has been discussed and used in previous studies (e.g., Maeng et al. 2012; Martsolf et al. 2012, 2013). The same respondents were surveyed in the second wave, completed in November 2012. To account for attrition and changes in the population, the second wave was also supplemented with an additional RDD sample. The response rate was 32.3 percent for the first wave and 34.2 percent for the second wave based on the American Association for Public Opinion Research method, and 45.8 and 42.1 percent, respectively, based on the Council of American Survey Research Organizations method. The second wave response rate was the weighted average of the panel and new RDD samples. The full sample used in this study includes 8,140 individuals from the first wave and 9,565 from the second wave. More information can be found in the survey methods report accessible at http://www.hhdev.psu.edu/cms/CHCPR/alignforce/surveys/consumer.html.

Outcome Measures

To improve patient‐centered care, the Institute of Medicine (IOM) and others have emphasized the importance of measuring patient experience—how patients perceive specific aspects of the care they receive. We analyzed patient‐reported outcomes related to four domains of the patient experience selected at the outset of the AF4Q program, prior to knowing what approaches the alliances would use to implement community‐level QI (National Health Service [NHS] 2011; National Clinical Guideline Center [NCGC] 2012). However, the domains represent areas in which program planners and evaluators hypothesized an impact by the program. Table 1 contains the survey questions that comprise the variables in each of the four domains. Questions were pretested in both survey waves.

Table 1.

Development of Outcome Variables

Domain/Measure (Abbreviated) Survey Question Measurement
Care coordination
Care coordination not a problem In general, do you think that coordination among your health care professional(s) and alternative health care practitioner(s) is a major problem, a minor problem, or not a problem at all? Not a problem at all = 1; Otherwise = 0
Patient satisfaction
Satisfaction rating We want to know your overall rating of all your care in the past 12 months from all health care professionals who helped you take care of your condition(s). Use any number from 0 to 10, where 0 is the worst care possible and 10 is the best care possible 0–10
Provider interaction and support
Thinking about the last 6 months, please tell us whether you strongly agree, agree, disagree, or strongly disagree with each statement about your health care professionals…
Explain Explain things in a way you could understand Strongly agree = 1; Otherwise = 0
Time Spend enough time with you Strongly agree = 1; Otherwise = 0
Respect Treat you with respect and dignity Strongly agree = 1; Otherwise = 0
Diet Help you set specific goals to improve your diet Strongly agree or Agree = 1; Otherwise = 0
Exercise Help you set specific goals for exercise Strongly agree or Agree = 1; Otherwise = 0
Monitor Teach you how to monitor your condition(s) so you could tell how you are doing Strongly agree or Agree = 1; Otherwise = 0
  In the past 12 months, did you…
Phone call Receive a phone call from any of your health care professionals or insurance company to see how you were doing without you calling them first? Yes = 1; Otherwise = 0
Reminder letter Receive a letter, etc., reminding you that you may be due for an appointment? Yes = 1; Otherwise = 0
Materials Get any materials from your health care professionals or health insurance company on how to care for your condition(s)? Yes = 1; Otherwise = 0
Resources Did your doctor or nurse arrange for you to see or attend any of the following for help to improve your care? Response options: Dietician/Nutritionist, support group, health coach, social worker, smoking cessation program, exercise consultant, health related classes, alternative health practitioners, something else/other Yes = 1; Otherwise = 0
Receipt of recommended care for diabetes a
Cholesterol About how many times in the past 12 months has a doctor, nurse, or other health care professional checked your cholesterol level? Once a year or more = 1; Otherwise = 0
A1c About how many times in the past 12 months has a doctor, nurse, or other health care professional checked you for A1c? Twice a year or more = 1; Otherwise = 0
Eye screening Have you had an eye screening or eye exam by an eye care professional, optometrist, or ophthalmologist, in the past 12 months? Yes = 1; Otherwise = 0
Circulation Have you had a foot exam by a health care professional to look for circulation problems in the past 12 months? Yes = 1; Otherwise = 0
a

Questions were limited to individuals who reported having diabetes.

Care Coordination (Single Measure)

Care coordination is one of 20 national priorities for QI (IOM 2003). The growing burden of chronic illnesses coupled with the complexity of chronic illness care creates challenges for coordination (McDonald, Sundaram, and Bravata 2007). The survey asked respondents whether coordination among their health care professional(s) and alternative health care practitioner(s) is a major problem, a minor problem, or not a problem at all. The question was created for the AF4QCS and has been used in related studies (Maeng et al. 2012).

Patient Satisfaction (Single Measure)

Patient satisfaction is a key component of patient‐centered care (IOM 2001), and results of patient satisfaction and patient experience surveys are increasingly being used as benchmarks in pay‐for‐performance (Cromwell et al. 2011). AF4QCS includes a modified version of the patient satisfaction measure used in the RAND Improving Chronic Illness Care Evaluation (Baker et al. 2005). Specifically, respondents were asked to rate their overall care over the past 12 months on a 0‐to‐10 scale.

Provider Interaction and Support (Multiple Measures)

Quality of provider interaction and support is another dimension of patient‐centered care that reflects how patients perceive different aspects of their care (Martsolf et al. 2012, 2013). These measures are useful for highlighting aspects that need improvement (Jenkinson, Coulter, and Bruster 2002). AF4QCS includes a series of items related to patients’ perception of the quality of on‐site interaction with the provider, treatment goal‐setting, and out‐of‐office contact. These survey items have been previously used in related studies (Hays et al. 1999; Wasserman et al. 2001; Glasgow et al. 2005).

Receipt of Recommended Care for Diabetes (Multiple Measures)

Because all but two of the alliances reported a specific focus on improving care for patients with diabetes, we also examined four items pertaining to the receipt of recommended care services for individuals with diabetes. These items reflect care guidelines based on current consensus opinion of experts about the appropriate number and timing of services (American Diabetes Association [ADA] 2014). They are also important because receipt of these services is associated with a decrease in avoidable hospitalizations (AHRQ 2011). Because we were interested in understanding the impact of QI interventions aimed at improving care delivery, we focused our attention on the survey items that were mainly physician‐driven (e.g., Have you had a foot exam by a health care professional to look for circulation problems in the past 12 months?) rather than the items that were mainly patient‐driven (e.g., Do you check your blood sugar at least once a week?).

With the exception of the single patient satisfaction measure, which is measured on a 0 to 10 scale, all other survey items analyzed were binary. Results show, for example, the percentage of respondents reporting that care coordination was not a problem. In an effort to condense the number of measures for the domains with multiple measures (provider interaction and support and receipt of recommended care for diabetes), we also created continuous composite variables representing the percentage of measures that were positive. We also constructed and tested an all‐or‐none composite for the diabetes measures.
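The two composite constructions described above (the percent‐positive composite and the all‐or‐none composite) can be sketched as follows. This is a minimal illustration using hypothetical column names for the four diabetes items, not the actual AF4QCS variables:

```python
import pandas as pd

# Hypothetical respondent-level indicators: 1 = positive response, 0 = otherwise
df = pd.DataFrame({
    "cholesterol": [1, 1, 0, 1],
    "a1c":         [1, 0, 0, 1],
    "eye":         [1, 1, 1, 1],
    "foot":        [0, 1, 0, 1],
})

diabetes_items = ["cholesterol", "a1c", "eye", "foot"]

# Percent-positive composite: share of the four items each respondent received
df["diabetes_pct"] = df[diabetes_items].mean(axis=1)

# All-or-none composite: 1 only if the respondent received all four services
df["diabetes_all"] = (df[diabetes_items].sum(axis=1) == len(diabetes_items)).astype(int)
```

The all‐or‐none form is a stricter standard: a respondent counts as meeting it only when every recommended service was received, whereas the percent composite gives partial credit.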

Analysis

We compared scores for AF4Q communities and the national comparison group during Round 1 and Round 2. For all AF4Q communities combined, each AF4Q community, and the national comparison group, we calculated the change in the percent of respondents who reported that care coordination was not a problem, the mean level of overall satisfaction, the percent of respondents reporting good provider interaction or support, and the percent of respondents who received the recommended services for diabetes. We then tested the differences between the change for respondents in AF4Q communities and the change for respondents in the national sample. To evaluate the impact of community‐level QI in these communities to date, we estimated the following difference‐in‐differences model:

Y_it = α + β1·AFQ_i + β2·POST_t + β3·(AFQ_i × POST_t) + β4·X_it + ε_it

where Y is the outcome; AFQ is the indicator for AF4Q communities (equal to 1 for the 14 AF4Q communities and 0 for the comparison sample); POST is the indicator for the second wave; and the X's are patient characteristics (age, gender, race, insurance coverage, income, education, usual source of care, type and number of chronic conditions, and patient activation). Patients and time periods are indexed by i and t. The coefficient of primary interest is β3, the estimate of the difference between the change among patients living in AF4Q communities and the change in the outcome of interest among patients living in the rest of the country (i.e., the difference‐in‐differences estimate). The difference‐in‐differences estimate removes the bias in the comparison of follow‐up outcomes between the treatment and comparison groups caused by time‐invariant differences between the groups. It also removes the effect of underlying but similar trends shared by the groups (Meyer 1995; Angrist and Krueger 2000).

For each outcome, we estimated the above specification for each AF4Q community and for all AF4Q communities combined, using ordinary least‐squares regression with robust standard errors.
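This type of specification can be estimated with standard regression tools. The sketch below uses simulated data (not the AF4QCS) and Python's statsmodels; variable names are illustrative, and the coefficient on the interaction term recovers the difference‐in‐differences estimate under the model's assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Simulated respondent-level data (illustrative only)
df = pd.DataFrame({
    "afq":  rng.integers(0, 2, n),   # 1 = AF4Q community, 0 = comparison
    "post": rng.integers(0, 2, n),   # 1 = second survey wave
    "age":  rng.normal(58, 10, n),   # example patient covariate
})
# Outcome with a built-in interaction (treatment) effect of 0.5
df["y"] = (1.0 + 0.3 * df["afq"] + 0.2 * df["post"]
           + 0.5 * df["afq"] * df["post"] + 0.01 * df["age"]
           + rng.normal(0, 1, n))

# OLS with the AFQ x POST interaction and robust (HC1) standard errors;
# "afq * post" expands to afq + post + afq:post
model = smf.ols("y ~ afq * post + age", data=df).fit(cov_type="HC1")
did_estimate = model.params["afq:post"]  # the difference-in-differences estimate
```

With sufficient data, `did_estimate` approximates the simulated interaction effect of 0.5; the robust covariance option mirrors the paper's use of robust standard errors.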

Finally, based on the difference‐in‐differences coefficients obtained, for each AF4Q community, we counted the number of statistically significant (p < .05) outcomes that showed improvement relative to the national comparison group in each of the four domains.

Results

Characteristics of the Sample

Compared to the national sample, respondents in AF4Q communities were more likely to be nonminority, have private health insurance, live above the poverty threshold, and have a college degree (Table 2). In both groups, most respondents were women and reported a physician as the usual source of care. The most common chronic illness was hypertension, and respondents in both AF4Q and non‐AF4Q communities had an average of 1.6 chronic illnesses.

Table 2.

Round 1 Respondent Demographics

All AF4Q Communities (%) Non‐AF4Q Communities (%) Significance
N 7,337 803
Age (mean) 58.44 58.70
Gender
Female 67.59 70.36
Race
Black 24.60 29.51 ***
Hispanic 7.48 14.94
Language of interview
English 98.60 95.02 ***
Spanish 1.40 4.98
Insurance
Medicare 33.12 32.63 *
Medicaid 14.87 18.43
Private 44.11 40.47
Poverty
Below poverty level 23.39 29.02 ***
Education
Less than H.S. degree 10.64 15.57 ***
College grad or higher 59.89 54.55
Usual source of care
Usual care‐MD 77.77 74.60
Usual care‐clinic 10.75 11.46
Usual care‐ED 2.59 3.49
Chronic conditions
Diabetes 29.87 31.51
Hypertension 65.49 68.54
Heart disease 17.55 17.35
Asthma 17.32 18.60
Depression 27.08 26.53
Number of chronic conditions (mean) 1.57 1.62

T‐tests and chi‐square tests were used to compare all AF4Q to non‐AF4Q communities.

*p < .05, **p < .01, ***p < .001.

AF4Q versus Non‐AF4Q Results

We compared scores for AF4Q communities and the national sample during Round 1 and Round 2. There were no significant differences in care coordination, patient satisfaction, or receipt of recommended care in Round 1 (Table 3). However, with regard to the provider interaction and support domain, respondents in AF4Q communities reported better interpersonal exchanges with their health professionals (e.g., reported that health professionals spent enough time with them, explained things clearly, and treated them with respect). In Round 2, there were no statistically significant differences for any outcome.

Table 3.

Raw Rates and Difference in Difference Estimates for All Outcomes

Domain/Measure Unadjusted Round 1 Scores Unadjusted Round 2 Scores Difference‐in‐Difference (%)
N AF4Q Communities (%)a Non‐AF4Q Communities (%) N AF4Q Communities (%)a Non‐AF4Q Communities (%)
Care coordination
Coordination not a problem 8,046 73.23 70.92 8,541 75.44 76.48 −3.33***
Patient satisfaction
Satisfaction rating (mean) 8,074 8.25 8.3 8,549 8.38 8.37 0.08**
Provider interaction and support
Explain 8,011 48.78** 43.11 8,512 50.56 48.27 −1.06
Time 8,018 44.38** 39.02 8,505 43.26 42.02 −2.34*
Respect 8,035 57.46* 53.66 8,510 59.69 57.13 0.98
Diet 7,189 69.79 70.48 7,806 73.65 75.11 −1.31
Exercise 7,301 64.38 64.76 7,856 70.61 71.49 −1.01
Monitor 7,650 74.42 73.27 8,220 82.00 82.12 −0.88
Phone call 8,114 34.00 32.75 8,574 39.27 39.42 −1.52*
Reminder letter 8,111 67.23 65.88 8,583 73.48 71.92 0.81
Materials 8,082 49.70 51.44 8,566 51.84 52.44 1.51
Resources 8,140 34.29 33.12 8,599 36.02 36.75 −2.50***
Receipt of recommended care for diabetes
Cholesterol 2,374 93.29 96.31 2,793 91.98 93.33 1.49**
A1c 2,298 64.37 64.26 2,800 65.32 63.24 2.51
Eye screening 2,424 76.99 81.27 2,815 73.68 72.92 3.85***
Circulation 2,414 63.75 66.53 2,813 65.77 63.08 5.28***
a

Chi‐square tests and t‐tests were used to compare rates in AF4Q versus non‐AF4Q areas for Rounds 1 and 2.

*p < .05, **p < .01, ***p < .001.

AF4Q communities showed significantly greater improvement than the national sample on three of the four measures in the receipt of recommended care for diabetes domain (Table 3). Improvement on the cholesterol level check was 1.5 percentage points greater in AF4Q communities, improvement in eye screening was 3.9 percentage points greater, and improvement on the foot exam for circulation problems was 5.3 percentage points greater. The change in A1c testing in AF4Q communities was not significantly different from the national sample.

Respondents in the AF4Q communities also reported greater improvement in satisfaction, but the magnitude of the difference was small (0.08 points on a 0‐to‐10 scale). However, respondents in the national sample reported greater improvement in perceived care coordination (3.33 percentage points) than those in AF4Q communities, and greater improvement on three measures in the provider interaction and support domain: health professionals spending enough time with them, receiving a phone call from health professionals to see how they were doing, and receiving additional resources to help them improve their care.

Community‐Level Analysis

Table 4 shows fairly consistent results across AF4Q communities in three of the domains. None of the AF4Q communities improved on care coordination relative to the national sample. However, all but two AF4Q communities (Detroit and Humboldt County) showed relative improvement on patient satisfaction, and 12 of the 14 AF4Q communities experienced relative improvement on at least half of the measures in the receipt of recommended care for diabetes domain. Results were less consistent in the provider interaction and support domain. Only 5 of 14 AF4Q communities improved on half of the measures, relative to the national sample.

Table 4.

Number of Measures That Significantly Improved Relative to the National Sample, by Community and Patient Experience Domain

Care Coordination (1 Measure) Patient Satisfaction (1 Measure) Provider Interaction and Support (10 Measures) Receipt of Recommended Care for Diabetes (4 Measures)
All AF4Q communities 0 1 0 3
Cincinnati, OH 0 1 3 3
Cleveland, OH 0 1 4 3
Detroit, MI 0 0 1 3
Humboldt County, CA 0 0 3 0
Kansas City 0 1 6 2
Maine 0 1 7 3
Memphis, TN 0 1 3 1
Minnesota 0 1 2 2
Puget Sound, WA 0 1 3 3
South Central Pennsylvania 0 1 5 3
West Michigan 0 1 3 3
Western New York 0 1 1 4
Willamette Valley, OR 0 1 7 3
Wisconsin 0 1 8 4

Composite Outcomes

Results from the composite analysis were consistent with previous analyses. Table 5 shows that there was greater improvement on the diabetes measures in AF4Q communities than the national sample but no significant difference in improvement on the provider interaction and support measures.

Table 5.

Difference‐in‐Differences in Composite Outcomes

Provider Interaction and Support (10 Measures) Receipt of Recommended Care for Diabetes‐All‐or‐None (4 Measures) Receipt of Recommended Care for Diabetes‐Percent (4 Measures)
Coeff. (%) p‐Value Coeff. (%) p‐Value Coeff. (%) p‐Value
All AF4Q communities −0.39 .48 3.24 .521 12.89 .001
Cincinnati, OH −2.14 0 6.52 .188 16.39 0
Cleveland, OH 0.75 .045 1.34 .782 13.38 .001
Detroit, MI −1.09 .002 5.27 .276 12.20 .001
Humboldt County, CA −2.71 0 5.68 .245 6.13 .056
Kansas City 0.64 .03 4.00 .404 12.05 .001
Maine −0.60 .04 4.86 .286 12.14 .001
Memphis, TN −1.31 .001 2.73 .563 9.33 .006
Minnesota −0.53 .059 2.68 .574 12.25 .001
Puget Sound, WA 0.22 .407 5.70 .252 13.69 .001
South Central Pennsylvania −0.53 .049 2.14 .659 11.72 .002
West Michigan −1.53 0 7.24 .147 16.49 0
Western New York −1.49 0 9.54 .06 16.91 0
Willamette Valley, OR 2.10 0 2.64 .584 13.85 0
Wisconsin 1.75 0 0.56 .904 14.92 0

Discussion

Our findings suggest that a well‐resourced community‐level QI initiative led to modest improvement in patient experience in two domains: receipt of recommended care for adults with diabetes and patient satisfaction among adults with chronic illness. Improvement on receipt of recommended care is not surprising given AF4Q's early emphasis on chronic care management. Consistent with this emphasis, the majority of AF4Q alliances reported that they were specifically targeting diabetes in their efforts, encouraging better care management, and facilitating the adoption of patient‐centered medical homes, which are designed to promote better care and adherence to treatment guidelines for patients with diabetes (and other chronic conditions) (Weingarten et al. 2002; National Committee for Quality Assurance [NCQA] 2013). These improvements in care management may have driven the increase in patient satisfaction scores. Improving care processes has previously been shown to boost patient satisfaction (Hardy, West, and Hill 1996).

However, the community‐level QI effort does not appear to have improved patients’ perception of care coordination. Six of the 14 alliances specifically reported focusing on care transitions or care coordination, and PCMHs are also designed to facilitate coordination (NCQA 2013). Our results may be explained by the timing of the interventions. Alliances reported focusing on improving care processes for patients with diabetes in the first years of the program (2006–2008), whereas efforts directed toward care coordination became more prevalent after 2010. Previous studies have found that implementation of QI interventions often leads to an initial state of disruption before benefits are realized (Ash, Berg, and Coiera 2004). Our survey may have occurred during this state of disruption as practices changed behavior. Another possible explanation is that providers in communities where respondents in the national sample resided were also working to improve care coordination. Since 2008, Quality Improvement Organizations have been working with hospitals across the country to improve care transitions (Chen et al. 2011). Additionally, the Affordable Care Act introduced financial penalties for hospitals with excess preventable readmissions, thereby drawing attention to the issue of care coordination and care transitions (Kaiser Family Foundation [KFF] 2013). The national sample communities may have been able to focus their attention on that dimension of quality more than the AF4Q communities, which, as a condition of funding, devoted resources and effort to a number of different areas, including public reporting, consumer engagement, and disparities reduction. Although working in these broad areas may have diluted the alliances’ specific focus on care coordination, it is possible that tackling these broad areas may lead to greater improvement in outcomes in the long run. That is the subject of future investigation.

Improving the providers’ interaction with patients and support of patient engagement was not a major focus of the alliances’ work, which may explain the lack of improvement in those areas. The domains in which community‐level QI had little effect (care coordination and provider interaction and support) possess certain attributes that may contribute to our findings. First, care coordination and provider interaction and support require behavior change from both physicians and support staff. For example, effective care coordination typically requires cooperation of physicians from multiple practices and their scheduling staffs. Given the well‐documented challenges associated with provider behavior change (Shem 2001; Mostofian et al. 2015), QI implementation may be even more challenging as the number of participants grows. Second, QI interventions aimed at improving clinical outcomes, such as receipt of recommended care, attempt to reduce variability in processes of care and promote use of strict clinical guidelines (Chandrasekaran, Senot, and Boyer 2012). In contrast, improving patient experience for care coordination or provider interaction and support may involve tailoring each encounter to the specific needs and preferences of the patient (Campbell and Frei 2011). The latter may be more challenging.

One of the benefits of allowing AF4Q grantees flexibility in their QI approaches is that we can examine the approaches used by the communities with the greatest improvement relative to the national sample. Wisconsin, Maine, and the Willamette Valley, OR, improved on the largest number of measures. However, there is little commonality among their approaches to QI. The alliance in Maine primarily led its AF4Q QI activities; the alliance in Wisconsin primarily delegated QI activities to a partner organization; and the alliance in Oregon used a mixed approach. The three alliances focused on different conditions (though Wisconsin and Maine both reported a focus on diabetes) and different QI topic areas (though Maine and Willamette Valley, OR, both focused on PCMH), but all three used learning collaboratives as an approach to QI. The diversity across the sites suggests that improvement may be achieved through different organizational structures. Still, this diversity makes it difficult to explain the variation in outcomes. We speculate that the variation is due to a variety of factors, including community‐level factors (e.g., history of engaging in QI, market characteristics), the approaches to QI taken, and variations in the dose of the intervention across sites. Currently there is no widely accepted approach to measuring the dose of community‐level QI interventions, yet a program's reach or intensity may drive a program's success (or lack thereof). This is an important area for future research.

The Foundation did not identify, in advance of the launch of the AF4Q initiative, specific outcome measures or benchmarks that would signal success. However, given the program's focus on chronic illness care, coupled with the Foundation's stated objective to improve care quality for everyone in AF4Q communities, we expected to see improvement in the four domains investigated here. Our results suggest a modest interim impact given the Foundation's investment. Judgments about the cost‐effectiveness of the AF4Q initiative, however, should be reserved until after 2015, the final year of the initiative, when conclusions can be drawn about the sustainability of the QI activities as well as about changes in outcome measures across a variety of other areas, including readmissions and care processes reported by physician practices. These long‐term findings will provide greater insight into whether dissemination of the AF4Q program is warranted.

Limitations

There are several limitations. First, we cannot control for all other QI efforts within the AF4Q communities. It is possible that in some of the larger communities, the AF4Q intervention represented a small percentage of the QI work being undertaken, making it difficult to attribute improvement to the community‐level QI intervention. Second, AF4Q communities are not nationally representative, and the intervention may not have the same impact in other communities. Third, this analysis included only a subset of outcome measures that may be influenced by community‐level QI. It is possible that we will find different results when we look beyond the patient‐reported outcome measures and examine, for example, claims‐based process‐of‐care measures or physician‐reported QI measures. Fourth, self‐reported measures of receipt of recommended care for diabetes may be subject to bias. However, we compared the A1c measure to similar measures from aggregate Medicare claims data for the same regions and the same age group during a similar period, and the rates are comparable. Also, prior studies have shown that despite the issues with self‐reported measures, they are well correlated with measures obtained in clinical settings and are predictive of outcomes (Rhee et al. 2005; Gehi et al. 2007; Jerant et al. 2008). Fifth, many of the statistically significant coefficients in the results appear small, and it is difficult to determine a threshold that signifies a meaningful difference in terms of clinical or policy relevance. However, if those effects reflect population‐level changes, then even small changes can be meaningful. Sixth, while the difference‐in‐differences approach has many advantages, it assumes that the trend in average outcomes for the treatment and control groups would have been parallel over time in the absence of the intervention (Abadie 2005). Given that the AF4Q communities were not randomly selected, this assumption may not be valid.
Seventh, there may be unobserved patient‐level factors that we could not control for in our model. Eighth, response rates were somewhat low, creating a risk of nonresponse bias (Groves and Peytcheva 2008). However, our response rates are in line with the general declining trend in survey response rates (Cull et al. 2005; Johnson and Wislar 2012). Recent studies have shown substantial variation in reported response rates depending on the method of calculation (Martsolf et al. 2012, 2013; Davern 2013) and a lack of association between response rates and nonresponse bias (Johnson and Wislar 2012; Davern 2013). Moreover, surveys with lower response rates do not necessarily produce significantly different estimates (Holle et al. 2006; Keeter et al. 2006), and expending extra resources to achieve a better response rate may actually introduce bias (Davern et al. 2010). Finally, respondents were selected based on their home address, but we do not know whether respondents or their physician practices were exposed to the AF4Q intervention.

Conclusion

Our findings suggest that community‐level QI may lead to improvement in receipt of recommended care services and patient satisfaction as reported by patients, but the effect size may be small. There is less evidence of improvement regarding patient perceptions of care coordination or provider interaction and support.

Supporting information

Appendix SA1: Author Matrix.

Appendix SA2: Aligning Forces for Quality: Alliances, Technical Assistance, and Quality Improvement Activities.

Acknowledgments

Joint Acknowledgment/Disclosure Statement: This research was supported by a grant from the Robert Wood Johnson Foundation for the evaluation of its Aligning Forces for Quality initiative.

Disclosures: None.

Disclaimers: None.

Note

1

Except for the patient satisfaction measure (which was on a 0 to 10 scale), all measures were binary. However, we used ordinary least squares as our primary estimator, since the difference‐in‐differences coefficients have more transparent interpretations in a linear probability model. As a comparison, we also estimated logistic regression models with the same set of outcomes and covariates. The results are similar.
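For readers unfamiliar with the estimator, the quantity captured by the interaction coefficient in such a linear probability model can be sketched directly. In a two‐group, two‐period design with no covariates, the OLS interaction coefficient equals the change in the treated group's mean outcome minus the change in the comparison group's mean. The minimal sketch below uses hypothetical binary outcomes, not AF4Q survey data:

```python
# Minimal difference-in-differences (DiD) sketch. With no covariates, the
# OLS interaction coefficient in a linear probability model reduces to the
# difference of group-mean changes computed here. All values are illustrative.

def mean(xs):
    return sum(xs) / len(xs)

def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """DiD estimate: (change in treated group mean) minus (change in control group mean)."""
    return (mean(y_treat_post) - mean(y_treat_pre)) - (
        mean(y_ctrl_post) - mean(y_ctrl_pre)
    )

# Hypothetical binary outcomes (e.g., 1 = received recommended A1c test).
treat_pre  = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]   # mean 0.4
treat_post = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]   # mean 0.7
ctrl_pre   = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]   # mean 0.3
ctrl_post  = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]   # mean 0.4

effect = did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post)
print(round(effect, 2))  # prints 0.2: (0.7 - 0.4) - (0.4 - 0.3)
```

With covariates, as in the models actually estimated, the coefficient comes from a regression rather than simple means, but the interpretation of the interaction term is the same.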

References

  1. Abadie, A. 2005. “Semiparametric Difference‐in‐Differences Estimators.” Review of Economic Studies 72 (250): 1–19.
  2. Agency for Healthcare Quality [AHRQ]. 2011. National Healthcare Quality Report, pp. 1–224. Rockville, MD: Agency for Healthcare Quality.
  3. American Diabetes Association [ADA]. 2014. “Standards of Medical Care in Diabetes–2014.” Diabetes Care 37 (Suppl 1): S14–80. doi:10.2337/dc14‐S014.
  4. Angrist, J., and Krueger A. 2000. “Empirical Strategies in Labor Economics.” In Handbook of Labor Economics, edited by Ashenfelter O. and Card D., pp. 1277–366. Elsevier.
  5. Ash, J. S., Berg M., and Coiera E. 2004. “Some Unintended Consequences of Information Technology in Health Care: The Nature of Patient Care Information System‐Related Errors.” Journal of the American Medical Informatics Association 11 (2): 104–13. doi:10.1197/jamia.M1471.
  6. Baker, G., and Delbanco S. 2007. Longitudinal Survey Results with 2007 Market Updates. San Francisco, CA: Med Vantage.
  7. Baker, D. W., Brown J., Chan K. S., Dracup K. A., and Keeler E. B. 2005. “A Telephone Survey to Measure Communication, Education, Self‐Management, and Health Status for Patients with Heart Failure: The Improving Chronic Illness Care Evaluation (ICICE).” Journal of Cardiac Failure 11 (1): 36–42. doi:10.1016/j.cardfail.2004.05.003.
  8. Boutwell, A. E., Johnson M. B., Rutherford P., Watson S. R., Vecchioni N., Auerbach B. S., Griswold P., Noga P., and Wagner C. 2011. “An Early Look at a Four‐State Initiative to Reduce Avoidable Hospital Readmissions.” Health Affairs 30 (7): 1272–80. doi:10.1377/hlthaff.2011.0111.
  9. Campbell, D., and Frei F. 2011. “Market Heterogeneity and Local Capacity Decisions in Services.” Manufacturing & Service Operations Management 13 (1): 2–19.
  10. Centers for Medicare & Medicaid Services [CMS]. 2009. Changes to the Hospital Inpatient Prospective Payment Systems for Acute Care Hospitals and Fiscal Year 2010 Rates Final Rule. Baltimore, MD: Federal Register.
  11. Centers for Medicare & Medicaid Services [CMS]. 2013. “Accountable Care Organizations (ACOs): General Information” [accessed May 27, 2013]. Available at http://www.innovation.cms.gov/initiatives/ACO/
  12. Chandrasekaran, A., Senot C., and Boyer K. K. 2012. “Process Management Impact on Clinical and Experiential Quality: Managing Tensions between Safe and Patient‐Centered Healthcare.” Manufacturing and Service Operations Management 14 (4): 548–66.
  13. Chassin, M. R., and Loeb J. M. 2011. “The Ongoing Quality Improvement Journey: Next Stop, High Reliability.” Health Affairs 30 (4): 559–68. doi:10.1377/hlthaff.2011.0076.
  14. Chen, A., Clarkwest A., Croake S., Felt‐Lisk S., Maxfield M., Smith L., Witmer S., Zurovac J., Lucado J., McGivern L., Paez K., and Schur C. 2011. Independent Evaluation of the Ninth Scope of Work, QIO Program: Final Report, Vol. I, pp. 1–106. Baltimore, MD: Mathematica Policy Research.
  15. Cromwell, J., Trisolini M. G., Pope G. C., Mitchell J. B., and Greenwald L. M. 2011. Pay for Performance in Health Care: Methods and Approaches, pp. 1–390. Research Triangle Park, NC: RTI Press.
  16. Cull, W. L., O'Connor K. G., Sharp S., and Tang S. F. 2005. “Response Rates and Response Bias for 50 Surveys of Pediatricians.” Health Services Research 40 (1): 213–26.
  17. Davern, M. 2013. “Nonresponse Rates Are a Problematic Indicator of Nonresponse Bias in Survey Research.” Health Services Research 48 (3): 905–12.
  18. Davern, M., McAlpine D., Beebe T. J., Ziegenfuss J., Rockwood T., and Call K. T. 2010. “Are Lower Response Rates Hazardous to Your Health Survey? An Analysis of Three State Telephone Health Surveys.” Health Services Research 45 (5 Pt 1): 1324–44.
  19. Dentzer, S. 2011. “Still Crossing the Quality Chasm—or Suspended over It?” Health Affairs 30 (4): 554–5. doi:10.1377/hlthaff.2011.0287.
  20. Donabedian, A. 1988. “The Quality of Care. How Can It Be Assessed?” Journal of the American Medical Association 260 (12): 1743–8.
  21. Ferlie, E. B., and Shortell S. M. 2001. “Improving the Quality of Health Care in the United Kingdom and the United States: A Framework for Change.” The Milbank Quarterly 79 (2): 281–315.
  22. Gehi, A. K., Ali S., Na B., and Whooley M. A. 2007. “Self‐Reported Medication Adherence and Cardiovascular Events in Patients with Stable Coronary Heart Disease: The Heart and Soul Study.” Archives of Internal Medicine 167: 1798–803.
  23. Glasgow, R. E., Wagner E. H., Schaefer J., Mahoney L. D., Reid R. J., and Greene S. M. 2005. “Development and Validation of the Patient Assessment of Chronic Illness Care (PACIC).” Medical Care 43 (5): 436–44.
  24. Groves, R. M., and Peytcheva E. 2008. “The Impact of Nonresponse Rates on Nonresponse Bias: A Meta‐Analysis.” Public Opinion Quarterly 72 (2): 167–89.
  25. Hardy, G. E., West M. A., and Hill F. 1996. “Components and Predictors of Patient Satisfaction.” British Journal of Health Psychology 1 (1): 65–86.
  26. Harvey, J. B., Beich J., Alexander J. A., and Scanlon D. P. 2012. “Building the Scaffold to Improve Health Care Quality in Western New York.” Health Affairs 31 (3): 636–41. doi:10.1377/hlthaff.2011.0761.
  27. Hays, R. D., Shaul J. A., Williams V. S., Lubalin J. S., Harris‐Kojetin L. D., Sweeny S. F., and Cleary P. D. 1999. “Psychometric Properties of the CAHPS 1.0 Survey Measures. Consumer Assessment of Health Plans Study.” Medical Care 37 (3 Suppl): MS22–31.
  28. Holle, R., Hochadel M., Reitmeir P., Meisinger C., and Wichman H. E. 2006. “Prolonged Recruitment Efforts in Health Surveys.” Epidemiology 17 (6): 639–43.
  29. Institute of Medicine [IOM]. 2000. To Err Is Human. Committee on Quality of Health Care in America, p. 312. Washington, DC: National Academy Press.
  30. Institute of Medicine [IOM]. 2001. Crossing the Quality Chasm: A New Health System for the 21st Century, pp. 1–360. Washington, DC: National Academy Press.
  31. Institute of Medicine [IOM]. 2003. Priority Areas for National Action: Transforming Health Care Quality, pp. 1–160. Washington, DC: National Academy Press.
  32. Jenkinson, C., Coulter A., and Bruster S. 2002. “The Picker Patient Experience Questionnaire: Development and Validation Using Data from In‐Patient Surveys in Five Countries.” International Journal for Quality in Health Care 14 (5): 353–8.
  33. Jerant, A., DiMatteo R., Arnsten J., Moore‐Hill M., and Franks P. 2008. “Self‐Report Adherence Measures in Chronic Illness: Retest Reliability and Predictive Validity.” Medical Care 46: 1134–9.
  34. Johnson, T. P., and Wislar J. S. 2012. “Response Rates and Nonresponse Errors in Surveys.” Journal of the American Medical Association 307 (17): 1805–6.
  35. Kaiser Family Foundation [KFF]. 2013. Summary of the Affordable Care Act, pp. 1–13. Menlo Park, CA: Kaiser Family Foundation.
  36. Keeter, S., Kennedy C., Dimock M., Best J., and Craighill P. 2006. “Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey.” Public Opinion Quarterly 70 (4): 125–48.
  37. Kuthmerker, K., and Hartman T. 2007. Pay‐for‐Performance State Medicaid Programs, pp. 1–170. New York: Commonwealth Fund.
  38. Landrigan, C., Parry G., Bones C., Hackbarth A., and Goldmann D. 2010. “Temporal Trends in Rates of Patient Harm Resulting from Medical Care.” New England Journal of Medicine 363 (22): 2124–34.
  39. Leatherman, S., and Sutherland K. 2007. “Designing National Quality Reforms: A Framework for Action.” International Journal for Quality in Health Care 19 (6): 334–40. doi:10.1093/intqhc/mzm049.
  40. Maeng, D. D., Martsolf G. R., Scanlon D. P., and Christianson J. B. 2012. “Care Coordination for the Chronically Ill: Understanding the Patient's Perspective.” Health Services Research 47 (5): 1960–79. doi:10.1111/j.1475‐6773.2012.01405.x.
  41. Mangione‐Smith, R., DeCristofaro A. H., Setodji C. M., Keesey J., Klein D. J., Adams J. L., Schuster M. A., and McGlynn E. A. 2007. “Quality of Ambulatory Care Delivered to Children in the United States.” New England Journal of Medicine 357 (15): 1515–23. doi:10.1056/NEJMsa064637.
  42. Margolis, P. A., Stevens R., Bordley W. C., Stuart J., Harlan C., Keyes‐Elstein L., and Wisseh S. 2001. “From Concept to Application: The Impact of a Community‐Wide Intervention to Improve the Delivery of Preventive Services to Children.” Pediatrics 108 (3): E42.
  43. Martsolf, G. R., Alexander J. A., Shi Y., Casalino L. P., Rittenhouse D. R., Scanlon D. P., and Shortell S. M. 2012. “The Patient‐Centered Medical Home and Patient Experience.” Health Services Research 47 (6): 2273–95. doi:10.1111/j.1475‐6773.2012.01429.x.
  44. Martsolf, G. R., Schofield R. E., Johnson D. R., and Scanlon D. P. 2013. “Editors and Researchers Beware: Calculating Response Rates in Random Digit Dial Health Surveys.” Health Services Research 48 (2 Pt 1): 665–76.
  45. McDonald, K., Sundaram V., and Bravata D. 2007. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies. Volume 7: Care Coordination. Technical Review 9 (prepared by Stanford University–UCSF Evidence‐based Practice Center under contract 290‐02‐0017), AHRQ Publication No. 04(07)‐0051‐7, edited by Shojania K. G., McDonald K., Wachter R., and Owens D., pp. 1–210. Rockville, MD: Agency for Healthcare Research and Quality.
  46. McGlynn, E. A., Asch S. M., Adams J., Keesey J., Hicks J., DeCristofaro A., and Kerr E. A. 2003. “Quality of Health Care Delivered to Adults in the United States.” New England Journal of Medicine 348 (26): 2634–45.
  47. McHugh, M., Harvey J. B., Aseyev D., Alexander J. A., Beich J., and Scanlon D. P. 2012. “Approaches to Improving Healthcare Delivery by Multi‐Stakeholder Alliances.” American Journal of Managed Care 18 (6 Suppl): S156–62.
  48. MedPAC. 2009. Report to the Congress: Improving Initiatives in the Medicare Program. Washington, DC: MedPAC.
  49. Meyer, B. 1995. “Natural and Quasi‐Experiments in Economics.” Journal of Business & Economic Statistics 13 (2): 151–61.
  50. Miller, H. n.d. Regional Health Improvement Collaboratives: The Foundation for Successful Health Care Reform. Pittsburgh, PA: Jewish Healthcare Foundation and Pittsburgh Regional Health Initiative.
  51. Mittler, J. N., O'Hora J. L., Harvey J. B., Press M. J., Volpp K. G., and Scanlon D. P. 2013. “Turning Readmission Reduction Policies into Results: Some Lessons from a Multistate Initiative to Reduce Readmissions.” Population Health Management 16 (4): 255–60. doi:10.1089/pop.2012.0087.
  52. Mostofian, F., Ruban C., Simunovic N., and Bhandari M. 2015. “Changing Physician Behavior: What Works?” American Journal of Managed Care 21 (1): 75–84.
  53. National Center for Chronic Disease Prevention and Health Promotion [NCCDPHP]. 2012. “Chronic Diseases and Health Promotion” [accessed May 27, 2013]. Available at http://www.cdc.gov/chronicdisease/overview/index/htm
  54. National Clinical Guideline Center [NCGC]. 2012. Patient Experience in Adult NHS Services: Improving the Experience of Care for People Using Adult NHS Services. National Institute for Health and Clinical Excellence. Manchester, United Kingdom: National Clinical Guideline Center.
  55. National Committee for Quality Assurance [NCQA]. 2013. NCQA Patient Centered Medical Home. Washington, DC: NCQA.
  56. National Health Service [NHS]. 2011. “NHS Patient Experience Framework” [accessed May 6, 2015]. Available at https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/215159/dh_132788.pdf
  57. National Quality Forum [NQF]. 2012. “Patient‐Reported Outcomes in Performance Measurement” [accessed October 6, 2014]. Available at http://www.qualityforum.org/Publications/2012/12/Patient-Reported_Outcomes_in_Performance_Measurement.aspx
  58. Painter, M. W., and Lavizzo‐Mourey R. 2008. “Aligning Forces for Quality: A Program to Improve Health and Health Care in Communities across the United States.” Health Affairs 27 (5): 1461–3. doi:10.1377/hlthaff.27.5.1461.
  59. Penn State Center for Health Care Research and Policy. 2015. “AF4Q Evaluation Consumer Survey” [accessed April 27, 2015]. Available at http://www.hhdev.psu.edu/cms/CHCPR/alignforce/surveys/consumer.html
  60. Pham, H. H., Schrag D., O'Malley A. S., Wu B., and Bach P. B. 2007. “Care Patterns in Medicare and Their Implications for Pay for Performance.” New England Journal of Medicine 356 (11): 1130–9. doi:10.1056/NEJMsa063979.
  61. Pittsburgh Regional Health Initiative [PRHI]. 2012. “Excellence in Chronic Care Forums: A PRHI Series” [accessed October 6, 2014]. Available at https://www.prhi.org/perfecting-patient-care/ppc-university/33-initiatives/chronic-care/51-excellence-in-chronic-care
  62. Rhee, M. K., Slocum W., Ziemer D. C., Culler S. D., Cook C. B., El‐Kebbi I. M., Gallina D. L., Barnes C., and Phillips L. S. 2005. “Patient Adherence Improves Glycemic Control.” Diabetes Educator 31 (2): 240–50. doi:10.1177/0145721705274927.
  63. Robert Wood Johnson Foundation. 2012. “About Aligning Forces for Quality 2012” [accessed March 17, 2012]. Available at http://www.rwjf.org/qualityequality/af4q/about.jsp
  64. Scanlon, D. P., Beich J., Alexander J. A., Christianson J. B., Hasnain‐Wynia R., McHugh M. C., and Mittler J. N. 2012. “The Aligning Forces for Quality Initiative: Background and Evolution from 2005 to 2012.” American Journal of Managed Care 18 (6 Suppl): S115–25.
  65. Shem, D. 2001. “Changing Physician Behavior.” Western Journal of Medicine 175 (3): 167.
  66. U.S. Department of Health and Human Services [USDHHS]. 2008. “HHS Secretary Awards Health Leaders with Special Distinction for Improving Quality and Value of Health Care” [accessed February 9, 2012]. Available at http://www.hhs.gov/news/press/2008pres/02/20080201a.html
  67. Wasserman, J., Boyce‐Smith G., Hopkins D., Shabert V., Davidson M., Ozminkowski R., and Kennedy S. 2001. “A Comparison of Diabetes Patients' Self‐Reported Health Status with Hemoglobin A1c Results in 11 California Health Plans.” Managed Care 10 (3): 58–62, 65–68, 70.
  68. Weingarten, S., Henning J., Badamgarav E., Knight K., Hasselblad V., Gano A. J., and Ofman J. 2002. “Interventions Used in Disease Management Programmes for Patients with Chronic Illness—Which Ones Work?” British Medical Journal 325 (7370): 925.
