Author manuscript; available in PMC: 2018 Mar 12.
Published in final edited form as: Acad Pediatr. 2016 Jul 21;16(8):750–759. doi: 10.1016/j.acap.2016.07.005

Primary Care Physicians’ Experiences With and Attitudes Toward Pediatric Quality Reporting

Joseph S Zickafoose a, Henry T Ireys b, Adam Swinburn a, Lisa A Simpson c
PMCID: PMC5847285  NIHMSID: NIHMS939025  PMID: 27452883

Abstract

Objectives

To assess primary care providers’ experiences with and attitudes toward pediatric-focused quality reports and to identify physician and practice characteristics associated with these experiences and attitudes.

Methods

We performed a cross-sectional survey of pediatricians and family physicians providing primary care to publicly insured children in three states (North Carolina, Ohio, and Pennsylvania). The survey included questions about receipt of pediatric quality reports, use of reports for quality improvement (QI), and beliefs about the effectiveness of reports for QI. We used multivariable analyses to assess associations between responses and physician/practice characteristics, including exposure to federally funded demonstration projects aimed at increasing quality reporting to physicians serving publicly insured children. We supplemented these analyses with a thematic analysis of data from 46 interviews with physicians, practice staff, and state demonstration staff.

Results

A total of 727 physicians responded to the survey (overall response rate: 45.2%). The majority of physicians were receiving quality reports related to pediatric care (79.8%, 95% confidence interval [CI] 77.2–82.4%) and believed that quality reports can be effective in helping guide quality improvement (70.5%, 95% CI 67.5–73.5%). Fewer used quality reports to guide QI efforts (32.5%, 95% CI 29.5–35.6%). There were no significant associations between demonstration exposure and experiences or attitudes. Interview data suggested that physicians were receptive to quality reporting but that significant barriers to using reports for QI remained, such as limited staff time and limited training in QI.

Conclusion

Although pediatric quality reporting is considered a promising strategy, the state efforts examined in this study appeared insufficient to overcome the barriers to using reports to guide practice-based QI.

Keywords: primary care, physician survey, quality measurement, quality reporting

Introduction

The quality of ambulatory care for children in the United States is inconsistent.1–4 Challenges in delivering high-quality care are particularly significant for providers caring for children who face increased risks for health care problems, including publicly insured children.2–4 Quality measurement and reporting at the physician level is a common, potentially effective approach to improving the quality of health care,5,6 and physician recertification programs now require involvement in measuring the quality of care and in quality improvement (QI) activities.7,8

Quality measurement and reporting in child health care has lagged behind efforts in adult health care, but many recent state and federal initiatives have sought to close that gap.5,9–14 The largest of these efforts is the Children’s Health Insurance Program Reauthorization Act of 2009 (CHIPRA) Quality Demonstration Grant Program (“the demonstration”), which provided $100 million in funding from 2010 to 2015 for 10 grants involving 18 states to identify effective, replicable strategies for enhancing the quality of care for children enrolled in Medicaid and CHIP.9 Six demonstration states used funding to develop quality reporting programs targeting primary care physicians who care for children enrolled in Medicaid and CHIP.13,15

However, there has been limited progress in understanding when quality measurement and reporting is most effective and for whom.6 Physicians’ experiences with and attitudes toward quality reporting are key influences on the effectiveness of these efforts,16 but few studies have assessed these experiences and attitudes among primary care providers for children.6,17 To address this gap, we conducted a survey of physicians in three states to examine the degree to which primary care providers for children report receiving quality reports, the sources and content of those reports, related QI efforts, and attitudes about quality reporting. We assessed the associations between these experiences and attitudes and physician characteristics, including exposure to demonstration states’ projects. We hypothesized that physicians exposed to demonstration projects would (1) be more likely to receive pediatric-specific quality reports, (2) be more likely to use quality reports for QI, and (3) have more favorable attitudes toward quality reports than other physicians, after controlling for other key factors. This study was conducted as part of a national evaluation of the CHIPRA Quality Demonstration Grant Program.9

Methods

We performed a mixed-methods study using data from a survey of physicians in three states supplemented by semi-structured interviews with providers, practice staff, and CHIPRA program administrators in two demonstration states.

Study Design and Data Sources

In 2014, we conducted a cross-sectional survey of physicians who provide primary care to children in two demonstration states (North Carolina and Pennsylvania) and one non-demonstration state (Ohio). North Carolina and Pennsylvania were selected to represent two different approaches to quality reporting by state Medicaid agencies.9 North Carolina implemented a statewide pediatric quality measurement program that included producing and distributing quality reports specifically for practices serving publicly insured children. Pennsylvania was working with a group of large health care systems and several smaller health care organizations to generate pediatric quality measures from electronic health record data and to use that information for QI.15 Ohio was selected as a comparison state because its overall population and its population of child-serving physicians were similar to those of the two demonstration states and because it had no known statewide pediatric quality reporting program for children in Medicaid or CHIP.

We used the American Medical Association Masterfile, updated in February 2014, to identify a sample of physicians in these states who were likely to provide primary care to children. We included physicians who had an active medical license, primarily worked in an office-based setting, and had a listed specialty of pediatrics, internal medicine-pediatrics, family practice, or general practice. We generated a random sample stratified by state and physician specialty (pediatrics and internal medicine-pediatrics versus family practice and general practice). In Pennsylvania, based on rosters provided by the Pennsylvania demonstration staff, we additionally stratified the sample between physicians practicing in an organization involved in the demonstration (“exposed”), and thus hypothesized to have greater exposure to quality reports, and physicians not practicing in those organizations (“unexposed”). Physicians were eligible to respond to the survey if they provided primary care for children and adolescents covered by Medicaid or CHIP.
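As an editorial illustration of this two-level stratification (state by specialty, with an additional exposure split in Pennsylvania), the Python sketch below draws a stratified random sample from a sampling frame. The column names, category labels, and per-stratum sizes are hypothetical and are not taken from the study.

```python
import pandas as pd

# Hypothetical frame: one row per physician, with columns 'state', 'specialty',
# and 'in_demo_org' (True for Pennsylvania physicians on demonstration rosters).
def build_strata(frame: pd.DataFrame) -> pd.Series:
    spec = frame["specialty"].map({
        "pediatrics": "peds", "internal medicine-pediatrics": "peds",
        "family practice": "fp", "general practice": "fp"})
    strata = frame["state"] + "/" + spec
    # Pennsylvania is further split by demonstration exposure.
    pa = frame["state"] == "PA"
    strata[pa] += frame.loc[pa, "in_demo_org"].map(
        {True: "/exposed", False: "/unexposed"})
    return strata

def stratified_sample(frame: pd.DataFrame, n_per_stratum: int) -> pd.DataFrame:
    """Draw an equal-size simple random sample within each stratum."""
    frame = frame.assign(stratum=build_strata(frame))
    return (frame.groupby("stratum", group_keys=False)
                 .apply(lambda g: g.sample(min(n_per_stratum, len(g)),
                                           random_state=42)))
```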

To develop the survey instrument, we reviewed several large publicly available physician surveys (for example, the National Ambulatory Medical Care Survey and several American Academy of Pediatrics Periodic Surveys of Fellows) for content and specific questions related to quality measurement, reporting, and improvement, as well as for practice characteristics hypothesized to be associated with these activities, such as use of electronic health records and patient-centered medical home recognition. We developed or adapted questions based on input from the national evaluation research team, a technical expert panel of researchers with expertise in physician surveys, and results of pretesting with five physicians. The final 8-page paper-and-pencil instrument took approximately 15–20 minutes to complete (Appendix 1).

The survey was fielded from June through October 2014. All selected physicians were sent an advance letter notifying them of their selection to participate, and the survey packet was sent 1–2 weeks later. The packet included a cover letter, a $5 prepaid incentive, the questionnaire, and a business reply envelope. We performed staged follow-up with all nonrespondents, which included a reminder letter, at least one reminder call to the physician’s practice number, email reminders when an email address was available, reminder postcards, and a second mailing of the survey packet.

We supplemented our survey data with qualitative data collected through interviews with individuals involved in the demonstration during evaluation site visits in 2012 and 2014. Trained research staff conducted semi-structured interviews using protocols that included questions related to quality reporting efforts, including receipt of quality reports from state agencies and other sources, understandability of reports, and use of reports in QI. For this analysis, we used responses from providers, practice staff, and CHIPRA program administrators involved in quality measurement and improvement efforts in North Carolina (32 interviews with 29 individuals) and Pennsylvania (22 interviews with 17 individuals). Interviews were conducted in person by two-member teams, with one member conducting the interview and the second taking near-verbatim notes. After the interviews, members of the research team cleaned the interview notes, used audio recordings to fill in gaps, and coded the notes in qualitative analysis software (NVivo version 10.0, QSR International) using a coding scheme aligned with the interview protocol.

All data collection was approved by the Office of Management and Budget and the New England Institutional Review Board with a waiver of documentation of consent.

Dependent Variables

We focused on several variables identified in the literature and by demonstration states as key intermediate steps between quality measurement, quality reporting, and quality improvement activities.6,9,13,14,18 The primary dependent variables were (1) receipt of any quality reports; (2) receipt of reports with key measures relevant to pediatric care; (3) engagement in any QI efforts; (4) engagement in QI in response to quality reports; and (5) perception of quality reports as an effective tool in QI efforts. Additionally, we asked physicians about their opinions on the usefulness of specific types of information used to create quality reports, such as the populations of children included in the report and comparisons to benchmarks.

Independent Variables

In multivariable analyses, the primary independent variable of interest was physician exposure to a quality reporting program in a demonstration state. We also included other physician and practice variables that could influence physician engagement with quality measurement, reporting, and improvement, including years since graduation from medical school, specialty, employment type, number of physicians in the practice, presence of any nurse practitioners or physician assistants in the practice, practice ownership, proportion of patients covered by Medicaid or CHIP, medical home recognition, and use of an electronic health record.

Analysis

We calculated descriptive statistics for the whole respondent population, for pediatricians and family physicians separately, and for four key subgroups defined by state and demonstration exposure: physicians exposed to the demonstration projects (all physicians in North Carolina and physicians in participating organizations in Pennsylvania) and those not exposed (all physicians in Ohio and physicians not in participating organizations in Pennsylvania). We tested unadjusted differences in responses across these groups using the chi-squared test and then performed multivariable analyses using logistic regression. We performed a pre-study power analysis assuming 1,050 respondents and dichotomous outcome percentages from 25% to 50%, which showed minimum detectable differences between comparison groups ranging from 8% to 17%.
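This kind of power calculation can be reproduced approximately in code. The sketch below, a minimal illustration assuming an even split of the 1,050 expected respondents into two comparison groups (the actual group sizes varied and are not reported here), recovers the lower end of the reported 8% to 17% range at a 25% base rate.

```python
import numpy as np
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def min_detectable_difference(p_base, n1, n2, alpha=0.05, power=0.80):
    """Smallest absolute increase over p_base detectable with the given
    power, using a two-sided two-sample test of proportions."""
    analysis = NormalIndPower()
    for delta in np.arange(0.005, 0.500, 0.005):
        h = proportion_effectsize(p_base + delta, p_base)  # Cohen's h
        if analysis.power(h, nobs1=n1, alpha=alpha, ratio=n2 / n1) >= power:
            return delta
    return None

# An even 525/525 split (an illustrative assumption) and a 25% base rate
# yield a minimum detectable difference of roughly 8 percentage points.
print(min_detectable_difference(0.25, 525, 525))
```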

In all analyses, we used sampling weights adjusted for nonresponse to reflect the total population of office-based physicians with an active license in the targeted specialties in each state. We did not adjust for clustering of physicians within practices or states for several practical and statistical reasons: the sampling frame did not include information on practice affiliation; the size of the population and sample in each state made it unlikely that selected physicians would share practice affiliations; and the size of the population and sample in each state made it likely that within-state variance would be very high compared with between-state variance, resulting in a low clustering effect.
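The weighting logic described here can be illustrated with a small simulation. In the sketch below, all population counts, sample sizes, response rates, and variable names are invented; it shows only the generic pattern of design weights, a stratum-level nonresponse adjustment, and a weighted logistic regression, not the study's actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Invented strata (state x specialty) with population counts and equal draws.
pop_counts = {"NC-peds": 2000, "NC-fp": 4000, "OH-peds": 2500, "OH-fp": 5000}
sample = pd.DataFrame([{"stratum": s, "responded": int(rng.random() < 0.45)}
                       for s in pop_counts for _ in range(400)])

# Base design weight = stratum population / stratum sample size; the
# nonresponse adjustment divides by the stratum response rate, so respondents
# in each stratum weight up to the full stratum population.
by = sample.groupby("stratum")["responded"].agg(n="size", rr="mean")
sample["weight"] = sample["stratum"].map(
    lambda s: (pop_counts[s] / by.loc[s, "n"]) / by.loc[s, "rr"])

# Weighted logistic regression on respondents only, with one toy covariate.
resp = sample[sample["responded"] == 1].copy()
resp["pediatrics"] = resp["stratum"].str.endswith("peds").astype(float)
resp["received_report"] = (rng.random(len(resp))
                           < 0.5 + 0.2 * resp["pediatrics"]).astype(float)
fit = sm.GLM(resp["received_report"],
             sm.add_constant(resp[["pediatrics"]]),
             family=sm.families.Binomial(),
             freq_weights=resp["weight"]).fit()
print(np.exp(fit.params["pediatrics"]))  # adjusted odds ratio
# Note: freq_weights treats weights as case counts; a published survey
# analysis would use variance estimators that reflect the weighting design.
```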

Seven hundred twenty-seven physicians responded to the survey, yielding an overall response rate of 45.2% based on the American Association for Public Opinion Research (AAPOR) Response Rate 4, which assumes the same rate of eligibility among respondents and nonrespondents.19 We performed a nonresponse bias analysis, which suggested that the risk for bias was low in each of the three states (Appendix 2).
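AAPOR Response Rate 4 has a simple closed form. The function below implements it under the eligibility assumption stated in the text; the disposition counts in the example are invented, since the study's counts are not reported here.

```python
def aapor_rr4(complete, partial, refusal, noncontact, other_noninterview,
              unknown, ineligible):
    """AAPOR Response Rate 4: partial interviews count as responses, and
    unknown-eligibility cases are discounted by an estimated eligibility
    rate e; here e is the eligibility rate among cases with known status,
    consistent with assuming the same eligibility rate among respondents
    and nonrespondents."""
    known_eligible = complete + partial + refusal + noncontact + other_noninterview
    e = known_eligible / (known_eligible + ineligible)
    return (complete + partial) / (known_eligible + e * unknown)

# Illustrative disposition counts only:
print(aapor_rr4(complete=450, partial=30, refusal=350, noncontact=200,
                other_noninterview=60, unknown=150, ineligible=90))
```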

To supplement the survey findings, we performed a thematic analysis of the semi-structured interview data, focusing on providers’ attitudes toward quality reporting and improvement and, in particular, on facilitators of and barriers to adoption. A research analyst (AS) extracted relevant text from interview notes, and the analyst and a researcher (JSZ) independently reviewed the excerpts for themes. The analyst and researcher then discussed their independent findings to reach consensus on final themes.

Results

Characteristics of physician respondents and their practices

The characteristics of responding physicians are shown in Table 1. About 42% were pediatricians. Overall, most respondents (63%) were employees in practices, and, based on respondent estimates, about one-third (31%) of patients in these practices were enrolled in Medicaid or CHIP.

Table 1.

Individual and Practice Characteristics of Primary Care Pediatricians and Family Physicians who Provide Care to Children Covered by Medicaid and CHIP in North Carolina, Ohio, and Pennsylvania, 2014

Full Sample (n=727) Pennsylvania (exposed) (n=55)a Pennsylvania (unexposed) (n=187) North Carolina (n=242)b Ohio (n=243)c
Age in years, weighted mean (SD) 50.6 (10.5) 49.7 (8.9) 52.3 (11.6) 48.7 (9.6) 51.3 (10.6)
Years since medical school graduation, weighted mean (SD) 23.5 (10.9) 23.6 (10.1) 25.6 (11.8) 21.2 (9.9) 24.0 (10.8)
Specialty, weighted % (95% CI)
 Pediatricsd 41.9 (38.8–45.0) 72.0 (62.5–81.4) 33.5 (28.1–39.0) 45.2 (39.9–50.6) 42.8 (37.5–48.0)
 Family medicinee 58.1 (55.0–61.2) 28.0 (18.6–37.5) 66.5 (61.0–71.9) 54.8 (49.4–60.1) 57.2 (52.0–62.5)
Employment, weighted % (95% CI)
 Owner 36.7 (33.6–39.9) 8.9 (1.7–16.0) 36.6 (30.4–42.8) 39.6 (34.2–44.9) 39.1 (33.8–44.5)
 Employee 62.9 (59.8–66.1) 91.1 (84.0–98.3) 63.4 (57.2–69.6) 60.0 (54.7–65.4) 60.2 (54.8–65.6)
 Contractor <1 <1 <1 <1 <1
Practice characteristics
 Number of physicians in practice, median (IQR) 4 (2–6) 5 (3–8) 3 (2–5) 4 (2–6) 3 (2–6)
 Any nurse practitioners, weighted % (95% CI) 47.8 (44.0–51.7) 64.6 (51.0–78.1) 46.4 (38.7–54.1) 50.8 (44.3–57.3) 44.4 (37.9–50.8)
 Any physician assistants, weighted % (95% CI) 29.7 (26.2–33.3) 37.9 (24.4–51.4) 30.8 (23.5–38.1) 47.6 (41.1–54.1) 13.5 (9.0–18.0)
Practice ownership, weighted % (95% CI)
 Physician or physician group 47.5 (44.3–50.8) 11.6 (3.5–19.8) 51.0 (44.5–57.5) 48.3 (42.9–53.7) 49.3 (43.8–54.7)
 Academic health system 12.2 (10.1–14.3) 38.8 (27.3–50.3) 11.4 (7.3–15.5) 10.8 (7.5–14.1) 9.7 (6.6–12.9)
 Other health system 31.8 (28.8–34.9) 47.0 (35.2–58.7) 29.0 (23.0–35.0) 31.5 (26.5–36.5) 32.5 (27.4–37.6)
 Otherf 8.5 (6.7–10.3) 2.6 (0.0–5.8) 8.6 (5.0–12.2) 9.4 (6.2–12.6) 8.5 (5.5–11.5)
Physician estimates of patient insurance for all ages, weighted % (95% CI)
 Medicaid/CHIP 31.0 (29.5–32.5) 41.6 (35.8–47.3) 30.6 (27.8–33.4) 31.7 (29.0–34.3) 29.1 (26.5–31.8)
 Medicare 15.2 (14.1–16.3) 7.8 (4.7–10.9) 17.0 (14.8–19.2) 15.6 (13.6–17.5) 14.3 (12.7–16.0)
 Private 43.5 (41.9–45.1) 43.8 (37.9–49.6) 42.7 (39.7–45.8) 41.1 (38.4–43.9) 46.2 (43.4–48.9)
 Uninsured 6.3 (5.8–6.8) 3.3 (2.6–3.9) 5.6 (4.8–6.4) 7.7 (6.7–8.7) 6.3 (5.3–7.2)
 Other 3.8 (3.1–4.4) 2.0 (1.1–2.9) 4.0 (2.7–5.4) 3.4 (2.4–4.4) 4.1 (3.0–5.2)
Medical home recognition, weighted % (95% CI) 45.0 (41.7–48.2) 54.0 (42.0–66.0) 43.9 (37.4–50.4) 52.4 (47.0–57.9) 38.8 (33.5–44.1)
Electronic health record, weighted % (95% CI) 89.6 (87.6–91.6) 100.0 (100.0–100.0) 87.6 (83.5–91.7) 94.9 (92.5–97.3) 85.7 (82.0–89.4)
a Includes respondents practicing in health systems participating in the CHIPRA Quality Demonstration Grant Program intervention in Pennsylvania.
b Includes all respondents in North Carolina, where CHIPRA Quality Demonstration Grant Program reporting efforts were targeted statewide.
c Includes all respondents in Ohio, a comparison state that did not participate in the CHIPRA Quality Demonstration Grant Program or have an identified statewide quality reporting effort focused on children.
d Includes internal medicine-pediatrics.
e Includes general practice.
f Includes community health centers and health maintenance organizations.

Physician experiences with quality reporting

For the sample as a whole, about 80% of primary care physicians for children in these three states reported receiving quality reports about children in their practice from at least one external source (Table 2). The most common sources of quality reports were commercial health plans (59% of physicians) and Medicaid/CHIP agencies or managed care organizations (58% of physicians). About 70% of physicians reported receiving quality reports with any of 10 common pediatric quality measures, most frequently immunization rates for children at ages 2 and 13 years (63% and 52%, respectively). Almost 80% of all respondents indicated that they had participated in some QI effort during the prior 2 years, but only about one-third indicated that they had used quality reports to help guide QI efforts during this time.

Table 2.

Primary Care Pediatricians and Family Physicians’ Experiences With and Attitudes About Pediatric Quality Reporting in North Carolina, Ohio, and Pennsylvania, 2014

Weighted % (95% CI)
Full Sample Pennsylvania (exposed)a Pennsylvania (unexposed) North Carolinab Ohioc Pediatricians Family Physicians
Received pediatric quality reports from external sources:
 Any source 79.8 (77.2–82.4) 91.8 (85.6–97.9)e 86.7 (82.0–91.3) 72.3 (67.5–77.2) 77.0 (72.3–81.7) 92.9 (90.6–95.1)e 70.3 (66.2–74.4)
 Commercial plans 58.6 (55.4–61.8) 79.5 (70.2–88.8)e 76.4 (70.7–82.0) 48.3 (42.9–53.8) 46.1 (40.6–51.5) 72.4 (68.3–76.5)e 49.0 (44.3–53.6)
 Medicaid/CHIP agency or managed care plans 57.8 (54.6–61.0) 62.8 (51.4–74.3)e 64.7 (58.3–71.0) 48.2 (42.8–53.6) 57.9 (52.5–63.3) 69.1 (64.9–73.2)e 50.0 (45.3–54.6)
 Provider organization/health system 25.7 (22.8–28.5) 58.4 (46.7–70.1)e 21.0 (15.6–26.4) 21.1 (16.6–25.5) 28.7 (23.8–33.6) 33.9 (29.6–38.2)e 19.9 (16.2–23.6)
Received quality reports with any key pediatric quality measures d 72.1 (69.2–75.0) 87.6 (79.5–95.7)e 84.1 (79.4–88.9) 58.2 (52.7–63.6) 68.3 (63.2–73.5) 83.3 (80.0–86.7)e 63.8 (59.5–68.2)
Any quality improvement effort for children in prior 2 years 78.2 (75.5–80.9) 88.3 (81.4–95.2) 78.4 (72.9–83.8) 78.9 (74.6–83.3) 75.9 (71.2–80.6) 92.0 (89.6–94.4)e 68.3 (64.0–72.5)
Started using quality reports in pediatric quality improvement in prior 2 years 32.5 (29.5–35.6) 46.8 (35.0–58.6) 33.8 (27.7–40.0) 33.4 (28.3–38.5) 28.3 (23.4–33.1) 40.4 (36.1–44.8)e 26.8 (22.7–31.0)
Felt quality reports were moderately or very effective for improving quality of care for children 70.5 (67.5–73.5) 85.2 (77.1–93.3)f 72.2 (66.4–77.9) 72.8 (67.9–77.7) 64.7 (59.4–69.9) 74.2 (70.2–78.1) 67.8 (63.5–72.2)
a Includes respondents practicing in health systems participating in the CHIPRA Quality Demonstration Grant Program intervention in Pennsylvania.
b Includes all respondents in North Carolina, where CHIPRA Quality Demonstration Grant Program reporting efforts were targeted statewide.
c Includes all respondents in Ohio, a comparison state that did not participate in the CHIPRA Quality Demonstration Grant Program or have an identified statewide quality reporting effort focused on children.
d Respondents reported receiving quality reports with any of the following pediatric quality measures: up-to-date immunizations at age 2 years, up-to-date immunizations at age 13 years, body mass index screening, developmental screening, well-child visits by age 15 months, well-child visits ages 3–6 years, well-child visits ages 12–21 years, appropriate pharyngitis testing, emergency department visits for asthma, and medication follow-up visits for attention deficit-hyperactivity disorder.
e Chi-squared test across the state or specialty comparison groups significant at p<0.01.
f Chi-squared test across the state or specialty comparison groups significant at p<0.05.

In unadjusted analyses, exposed physicians in Pennsylvania generally reported more experience with quality reports than other subgroups (Table 2). For example, about 88% of exposed physicians in Pennsylvania indicated that the quality reports they had received included key pediatric quality measures, compared with only 58% of physicians in North Carolina and 68% in Ohio (overall chi-squared test significant at p<0.01). Pediatricians were significantly more likely than family physicians to report receiving pediatric quality reports (93% versus 70%, p<0.01), receiving reports with any key pediatric quality measures (83% versus 64%, p<0.01), participating in any pediatric QI in the prior two years (92% versus 68%, p<0.01), and using quality reports in pediatric QI in the prior two years (40% versus 27%, p<0.01).

We present key multivariable results in Table 3 and full model results in Appendix 3. In multivariable analyses, exposed and unexposed physicians in Pennsylvania had higher odds of receiving pediatric quality reports (adjusted odds ratios [AOR]: 1.98 [95% CI 0.66–5.90] and 2.20 [95% CI 1.15–4.23], respectively) and receiving quality reports with key pediatric quality measures (AOR: 2.64 [95% CI 0.97–7.17] and 3.05 [95% CI 1.70–5.49], respectively) compared with physicians in Ohio, although the results were significant only for unexposed physicians. Physicians in North Carolina had significantly lower odds of reporting receiving quality reports with key pediatric quality measures compared with those in Ohio (AOR: 0.57 [95% CI 0.37–0.88]). There were no significant differences between these groups in reporting pediatric QI efforts in the prior two years or in reporting use of quality reports in pediatric QI. Compared with family physicians, pediatricians had significantly higher odds of receiving pediatric quality reports (AOR: 6.16 [95% CI 3.62–10.49]), receiving quality reports with key pediatric quality measures (AOR: 2.79 [95% CI 1.76–4.41]), engaging in child-focused QI (AOR: 4.37 [95% CI 2.75–6.93]), and using quality reports in child-focused QI (AOR: 1.55 [95% CI 1.02–2.35]). Physicians practicing in formally recognized medical homes had significantly higher odds of receiving pediatric quality reports (AOR: 1.91 [95% CI 1.19–3.06]) and using quality reports in child-focused QI (AOR: 2.02 [95% CI 1.40–2.93]).

Table 3.

Primary Care Pediatricians and Family Physicians’ Experiences With and Attitudes About Pediatric Quality Reporting in North Carolina, Ohio, and Pennsylvania, 2014: Multivariable Analysisa

Adjusted Odds Ratios (95% CI)
Physician and Practice Characteristics Received External Pediatric Quality Reports Received Reports with Any Key Pediatric Quality Measuresb Any Pediatric QI Effort in Prior Two Years Started Using Quality Reports in Pediatric QI in Prior Two Years Felt Quality Reports are Effective for QI
Location and CHIPRA Quality Demonstration participation
 Ohio ref ref ref ref ref
 North Carolina 0.68 (0.42–1.11) 0.57 (0.37–0.88) 1.19 (0.73–1.93) 1.14 (0.74–1.76) 1.37 (0.88–2.13)
 Pennsylvania (unexposed) 2.20 (1.15–4.23) 3.05 (1.70–5.49) 1.60 (0.90–2.84) 1.41 (0.89–2.24) 1.46 (0.91–2.36)
 Pennsylvania (exposed)c 1.98 (0.66–5.90) 2.64 (0.97–7.17) 1.61 (0.62–4.22) 1.74 (0.85–3.54) 2.55 (0.99–6.56)
Pediatrics (ref: family medicine) 6.16 (3.62–10.49) 2.79 (1.76–4.41) 4.37 (2.75–6.93) 1.55 (1.02–2.35) 1.22 (0.81–1.83)
Medical home recognition (ref: no recognition) 1.91 (1.19–3.06) 1.62 (1.08–2.44) 1.24 (0.79–1.97) 2.02 (1.40–2.93) 1.26 (0.85–1.88)
Electronic health record (ref: no electronic health record) 0.73 (0.33–1.61) 0.83 (0.45–1.55) 1.77 (0.86–3.68) 1.72 (0.81–3.65) 1.67 (0.91–3.07)

Abbreviations: CHIP – Children’s Health Insurance Program, CHIPRA - Children’s Health Insurance Program Reauthorization Act of 2009, QI – quality improvement

a The full model also adjusted for physician’s years since graduation, number of physicians in the practice, any nurse practitioners or physician assistants in the practice, practice ownership, and the proportion of patients covered by Medicaid or CHIP. None of these variables were large or significant predictors. The full model results are available in Appendix 3.
b Respondents reported receiving quality reports with any of the following pediatric quality measures: up-to-date immunizations at age 2 years, up-to-date immunizations at age 13 years, body mass index screening, developmental screening, well-child visits by age 15 months, well-child visits ages 3–6 years, well-child visits ages 12–21 years, appropriate pharyngitis testing, emergency department visits for asthma, and medication follow-up visits for attention deficit-hyperactivity disorder.
c Includes respondents practicing in health systems participating in the CHIPRA Quality Demonstration Grant Program intervention in Pennsylvania.

Physician attitudes about quality reporting

Overall, about 70% of physicians felt that quality reports were moderately or very effective for improving care for children (Table 2). There were no significant differences in this attitude across state groups, specialty, or practice characteristics (Table 3). The majority of child-serving primary care physicians, both those who had and those who had not previously received this kind of information, felt it would be useful to receive quality reports that included information about their own patients and all patients in the practice, comparisons with a variety of benchmarks internal and external to their practice, quality measures for children with specific chronic conditions, and recommendations for areas to target for improvement (Table 4). Relatively few physicians (35% or fewer) felt it would be useful to receive quality measures grouped by children’s demographic characteristics, such as race/ethnicity or insurance type. When asked to choose the most useful pieces of information in quality reports, the largest proportions of physicians chose information about their own patients (52%), groups of children with specific chronic conditions (44%), and comparisons with state or national benchmarks (43%).

Table 4.

Primary Care Pediatricians’ and Family Physicians’ Attitudes About Content of Pediatric Quality Reports in North Carolina, Ohio, and Pennsylvania, 2014

Weighted % (95% CI)
Information about: Of physicians who have received reports with given information, proportion who found it useful Of physicians who have not received reports with given information, proportion who believe it would be useful “Top Three” Most Usefula
Comparisons with past performance 83.9 (79.2–88.5) 81.1 (77.1–85.0) 29.0 (25.4–32.5)
Groups of children with specific chronic conditions 81.8 (77.8–85.9) 86.2 (82.3–90.2) 44.1 (40.3–48.0)
All patients in the practice 80.9 (76.3–85.5) 79.9 (75.5–84.4) 34.7 (31.0–38.4)
Physician’s own patients 78.8 (74.9–82.6) 91.4 (87.5–95.4) 52.3 (48.4–56.2)
Comparisons with state or national benchmarks 78.5 (73.1–84.0) 82.9 (79.2–86.7) 43.1 (39.3–47.0)
Recommendations for improvement 78.4 (72.9–83.9) 86.1 (82.6–89.6) 39.3 (35.5–43.1)
Comparisons with other practices 76.1 (70.2–82.1) 74.4 (70.1–78.7) 31.4 (27.7–35.0)
Comparisons with other physicians in the same practice 71.1 (65.7–76.5) 65.4 (60.4–70.4) 17.1 (14.2–20.0)
Other groupings of children (for example, race/ethnicity, insurance type) 27.5 (0.3–54.7) 35.1 (20.1–50.1) 0.9 (0.3–1.5)
a Physicians were asked to choose the three pieces of information from this list that they would find “most useful for improving the quality of care for children” in their practice.

In semi-structured interviews in both Pennsylvania and North Carolina, physicians and other respondents involved in the demonstration felt that quality measurement and reporting was effective in helping to increase the rates of a variety of important screenings and procedures (Table 5). Respondents reported that it was particularly helpful that the demonstration reporting programs in each state fit with activities already going on in practices and, thus, reflected primary care practices’ priorities. Respondents also described financial incentives as key potential facilitators to the use of quality reports, either through pay-for-performance or enhanced billing for targeted quality measures.

Table 5.

Experiences and Attitudes toward Quality Measurement and Reporting: Thematic Analysis of Interviews with CHIPRA Quality Demonstration Grant Program State Leaders and Participating Primary Care Physicians in North Carolina and Pennsylvania

Theme Sub-themes Illustrative Quotes
Facilitators to engaging providers in quality measurement and reporting efforts Alignment of measurement and reporting with existing practice services and priorities “Being a pediatrician, I think if you look at 24 measures, it could be considered overwhelming. But when I look at it, it is part of what I was doing.”
Introduction of a limited number of measures at a time “If you really want to do something with QI [quality improvement], you’ve got to focus it down. Doing QI and moving measures doesn’t happen overnight, especially trying to introduce population management and going through those steps, it takes time. I think there are way too many measures…”

“We’re down to 8. They were all great measures. The challenge of some of the 24 was that some were hard to get good data on. Some things require multiple databases, like ER [emergency room] measures where we need to integrate outpatient and inpatient EHRs [electronic health records] and assume no one went to another ER. I thought that the set of 8 so far are all reportable. But the 24 are all good goals.”
Education of providers on coding and billing for services targeted by quality measures “We worked with the folks at the state level to train all of our QIs [quality improvement specialists] to provide dental varnishing training to practices. It’s one of the easiest sells. It reimburses at $52 per varnish and the provider doesn’t have to do it themselves…The fact that it reimburses so well is a helpful point in talking to practices.”
Barriers to engaging providers in quality measurement and reporting efforts Resistance to perceived external intrusion “Practicing folks assume that you are dictating from above. Unfortunately it’s hard to convince people that you had practicing providers on the panel even when you did.”
Concerns about implications of providing new services “The concern of trying to manage a problem that they can’t treat…If you identify someone with maternal depression then the follow through is huge to ensure that all the needs of that patient are met. And so there were some logistical, medical, and legal concerns related to that.”
Mismatch between measure specifications and practice reporting systems “Just little differences exist, like the BMI measure for CHIPRA is for kids aged 3–17 and meaningful use is for kids aged 2–17. Just matching up the measures so that when you’re working on reporting you can report as one [would reduce the burden].”

“The BMI measure is all about reporting the BMI percentile, not the BMI. Some of the systems might show the percentile while the doctor has the patient in the room, but the percentile is not stored. So when the quality measure is calculated, the doctor will score poorly.”
Changes made in response to quality reporting Improved attention to service provision “Because it was addressed with everybody and it was pushed it’s happening more…The physicians are taking that more seriously… I think that makes a huge difference. You could look at dental varnish and say big deal but they are looking at it as this is part of our treatment now for these children.”

“The autism screenings – making sure we changed policies and that we knew that we continuously follow up and wouldn’t let kids fall off the grid. That was a big thing for our practice. We learned to help these children that needed more special attention to make sure they had more individual nursing time.”
Attention to documenting and reporting “The rate [at which we are documenting BMI] has gone up to 100% from a much lower rate than that - probably less than 50%.”

Respondents from primary care practices in both states expressed frustration over the lack of timeliness of the data included in quality reports and the inclusion of measures over which they believed practices had limited control. Some physicians felt that they did not have the staff time or skills needed to take on new quality reporting and improvement work, a feeling that was exacerbated when the measures in quality reports did not fit with existing workflows, the Medicaid billing guide, or electronic health record incentive programs. Additionally, in North Carolina, some respondents mentioned that physicians are likely to be resistant to “mandates from above,” especially when practicing physicians were not involved in measure development, or to measures that promote changes in practice that they felt they or their community were ill-equipped to address, such as adolescent or maternal mental health.

Discussion

The results from this study show that, at least in these three states, the majority of primary care physicians for children were receiving quality reports related to pediatric care and felt that reports can be effective in helping guide QI. This finding suggests significant receptivity among physicians to the use of quality reporting to improve health care for children, and is similar to the results from a prior study, which found that the majority of pediatricians nationally felt that measuring quality of care was effective for improving care.20

Despite high levels of exposure to quality reports and beliefs in their utility, only about one-third of physicians in this study reported using quality reports for QI in pediatric care. This finding underscores that production and distribution of quality reports alone might be insufficient to prompt practices to use them as a tool for quality improvement without other forms of support or incentives, such as technical assistance on the use of reports or financial incentives for improvement.13–15,21–23 Consistent with other research,6,14,18 the survey and qualitative findings in this study suggest that physicians are more likely to be receptive to and use quality reports when reports align with physicians’ priorities, contain information specific to their patients with clear benchmarks for comparison, are timely, provide recommendations for improvement, and are developed in consultation with practicing physicians. Furthermore, physicians need the skills and time to do the improvement work, which is typically not a reimbursed activity. Historically, the primary care delivery system and its financing have provided few supports for providers to traverse the gap between quality measurement and action.

It is also important to note that, although many experts emphasize the importance of stratifying quality measure results by sociodemographic characteristics to allow quality improvement efforts to target health disparities for children,24,25 only about one-third of physicians in this study reported that they would find it useful to receive quality reports with this kind of information. Additional work is needed to understand child-serving physicians’ views on their roles in identifying and addressing health inequities in their own practices.

Physicians in practices with medical home recognition were significantly more likely to have received quality reports or to have used them in QI efforts, which is consistent with the emphasis on quality improvement in medical home programs26–29 and suggests a role for formal medical home recognition in improving the quality of care for publicly insured children. Pediatricians were significantly more likely than family physicians to have received pediatric quality reports, conducted pediatric QI, and used quality reports in pediatric QI. A significant proportion of children are cared for by family physicians,30 particularly in rural communities, pointing to a need to include family physicians in pediatric quality reporting and improvement efforts.

Contrary to our hypotheses, physicians’ experiences with and attitudes toward quality reporting were not significantly associated with exposure to the demonstration activities in North Carolina or Pennsylvania. Although physicians in organizations participating in the demonstration in Pennsylvania were more likely to receive pediatric quality reports compared to physicians in Ohio, this was also true for other physicians in Pennsylvania, suggesting other statewide influences. Surprisingly, despite a pediatric quality reporting program that was focused statewide in North Carolina, physicians there were no more likely to report exposure to pediatric quality reporting than those in Ohio. Our qualitative findings from North Carolina did not shed light on why there was not a higher level of exposure to quality reports there, but rates in Ohio might have been higher than anticipated due to a large regional pediatric Medicaid accountable care organization that reports quality measures to primary care physicians and other work by Medicaid managed care organizations in the state.31

The results from this study should be viewed in the context of several limitations. First, the designs of the demonstration and of this study create the possibility of confounding in any comparisons between two or more groups of physicians. We adjusted for observable characteristics in our multivariable modeling, within the limits of this approach. Second, the survey was fielded in three states, and exposed physicians in Pennsylvania were primarily from large, integrated health systems, potentially limiting generalizability to other states. However, the personal and practice characteristics of physicians in this study are similar to those in other recent studies of pediatricians and family physicians.32–34 Third, the response rate raises the possibility of nonresponse bias, although a nonresponse bias analysis was reassuring within the limits of the observable data from our sampling frame and survey responses. Fourth, we could not account for all public- and private-sector quality measurement and reporting activities occurring in these states that might have influenced results. Fifth, respondents in our qualitative interviews were self-selected participants in the demonstration program and might not represent the views of the broader population of child-serving primary care physicians in their states.

Conclusion

In this three-state study, we found that the majority of primary care physicians who serve publicly insured children received pediatric quality reports and felt that reports can be an effective tool to improve care. However, relatively few physicians used quality reports to guide their practices’ QI efforts, despite concerted state programs to increase such use. For quality reporting to achieve its promise, additional interventions are likely to be required, such as financial incentives and training for physicians and practice staff in the use of quality reports to guide improvement activities.

Supplementary Material

Appendix 1
Appendix 2
Appendix 3

What’s New

In three states (survey response rate 45%), an estimated 80% of pediatricians and family physicians had received pediatric quality reports, and 70% believed reports were effective for quality improvement (QI). However, only 33% had started using reports in QI efforts.

Acknowledgments

The national evaluation of the CHIPRA Quality Demonstration Grant Program was supported by a contract (HHSA29020090002191) from the Agency for Healthcare Research and Quality (AHRQ) to Mathematica Policy Research (MPR) and its partners, the Urban Institute (UI) and AcademyHealth. The observations contained in this article represent the views of the authors and do not necessarily reflect the opinions or perspectives of any state or federal agency, Mathematica Policy Research, or the Urban Institute. Special thanks are due to Cindy Brach, MPP, and Linda Bergofsky, MSW, MBA, of AHRQ for providing valuable comments on the survey and manuscript; to Amanda Mims, MS, and John Schurrer, MPP, of Mathematica Policy Research for their support of the analysis; and to the many members of the national evaluation research team and technical expert panel for providing support in the design, implementation, and analysis of the study. We also wish to thank the reviewers for their thoughtful comments and Kelli Sebastian of the Pennsylvania Department of Public Welfare for assistance in identifying physicians in Pennsylvania health systems participating in the CHIPRA Quality Demonstration Grant Program.

Funding Source: This study was completed as part of the national evaluation of the CHIPRA Quality Demonstration Grant Program, which was supported by a contract from the Agency for Healthcare Research and Quality (AHRQ) (HHSA29020090002191) to Mathematica Policy Research and its partners, the Urban Institute and AcademyHealth, and funded by the Centers for Medicare and Medicaid Services (CMS).

Abbreviations

CHIP

Children’s Health Insurance Program

CHIPRA

Children’s Health Insurance Program Reauthorization Act of 2009

QI

quality improvement

Footnotes

Financial Disclosure: None

Conflict of Interest: The authors have no conflicts of interest to disclose.

References

1. Mangione-Smith R, DeCristofaro AH, Setodji CM, et al. The quality of ambulatory care delivered to children in the United States. N Engl J Med. 2007;357(15):1515–1523. doi:10.1056/NEJMsa064637.
2. Berdahl TA, Friedman BS, McCormick MC, Simpson L. Annual report on health care for children and youth in the United States: trends in racial/ethnic, income, and insurance disparities over time, 2002–2009. Acad Pediatr. 2013;13(3):191–203. doi:10.1016/j.acap.2013.02.003.
3. Irwin CE, Adams SH, Park MJ, Newacheck PW. Preventive care for adolescents: few get visits and fewer get services. Pediatrics. 2009;123(4):e565–e572. doi:10.1542/peds.2008-2601.
4. Berdahl TA, Owens PL, Dougherty D, McCormick MC, Pylypchuk Y, Simpson L. Annual report on health care for children and youth in the United States: racial/ethnic and socioeconomic disparities in children’s health care quality. Acad Pediatr. 2010;10(2):95–118. doi:10.1016/j.acap.2009.12.005.
5. Stanek M. Quality Measurement to Support Value-Based Purchasing: Aligning Federal and State Efforts. Washington, DC; 2014. Available at: http://www.nashp.org/sites/default/files/Quality.Measurement.Support.ValueBasedPurchasing.pdf. Accessed August 12, 2015.
6. Ivers NM, Grimshaw JM, Jamtvedt G, et al. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med. 2014;29(11):1534–1541. doi:10.1007/s11606-014-2913-y.
7. American Board of Pediatrics. MOC Requirements. 2015. Available at: www.abp.org/content/moc-requirements. Accessed April 26, 2015.
8. American Board of Family Medicine. Maintenance of Certification for Family Physicians. 2015. Available at: www.theabfm.org/moc/index.aspx. Accessed April 26, 2015.
9. Agency for Healthcare Research and Quality. National Evaluation of the CHIPRA Quality Demonstration Grant Program. Available at: http://www.ahrq.gov/policymakers/chipra/demoeval/index.html. Accessed August 12, 2015.
10. Agency for Healthcare Research and Quality. Pediatric Quality Measures Program (PQMP) Centers of Excellence Grant Awards. 2014. Available at: http://www.ahrq.gov/policymakers/chipra/pubs/pqmpfact.html. Accessed April 10, 2015.
11. Mistry KB, Chesley F, LLanos K, Dougherty D. Advancing Children’s Health Care and Outcomes Through the Pediatric Quality Measures Program. Acad Pediatr. 2014;14(5):S19–S26. doi:10.1016/j.acap.2014.06.025.
12. Sachdeva R, McInerny T, Perrin JM. Quality Measures and the Practicing Pediatrician: Perspectives From the American Academy of Pediatrics. Acad Pediatr. 2014;14(5):S10–S11. doi:10.1016/j.acap.2014.03.007.
13. Ferry GA, Ireys HT, Foster L, Devers KJ, Smith L. How Are CHIPRA Demonstration States Approaching Practice-Level Quality Measurement and What Are They Learning? Evaluation Highlight No. 1. Rockville, MD: Agency for Healthcare Research and Quality; 2013. Available at: http://www.ahrq.gov/policymakers/chipra/demoeval/what-we-learned/highlight01.html. Accessed August 12, 2015.
14. Anglin G, Hossain M. How Are CHIPRA Quality Demonstration States Using Quality Reports to Drive Health Care Improvements for Children? Evaluation Highlight No. 11. Rockville, MD: Agency for Healthcare Research and Quality; 2015. Available at: http://www.ahrq.gov/policymakers/chipra/demoeval/what-we-learned/highlight11.html. Accessed August 12, 2015.
15. Foster L. How Are CHIPRA Quality Demonstration States Encouraging Health Care Providers to Put Quality Measures to Work? Evaluation Highlight No. 5. Rockville, MD: Agency for Healthcare Research and Quality; 2013. Available at: http://www.ahrq.gov/policymakers/chipra/demoeval/what-we-learned/highlight05.html. Accessed August 12, 2015.
16. Goldberg DG, Mick SS, Kuzel AJ, Feng LB, Love LE. Why do some primary care practices engage in practice improvement efforts whereas others do not? Health Serv Res. 2013;48(2 Pt 1):398–416. doi:10.1111/1475-6773.12000.
17. Meropol SB, Schiltz NK, Sattar A, et al. Practice-Tailored Facilitation to Improve Pediatric Preventive Care Delivery: A Randomized Trial. Pediatrics. 2014;133(6):e1664–e1675. doi:10.1542/peds.2013-1578.
18. Gardner B, Whittington C, McAteer J, Eccles MP, Michie S. Using theory to synthesise evidence from behaviour change interventions: the example of audit and feedback. Soc Sci Med. 2010;70(10):1618–1625. doi:10.1016/j.socscimed.2010.01.039.
19. The American Association for Public Opinion Research. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 8th ed. Deerfield, IL: The American Association for Public Opinion Research; 2011. Available at: https://www.aapor.org/AAPORKentico/AAPOR_Main/media/publications/Standard-Definitions2015_8theditionwithchanges_April2015_logo.pdf. Accessed August 12, 2015.
20. American Academy of Pediatrics Division of Health Services Research. Periodic Survey #76: Quality Improvement Executive Summary. Elk Grove Village, IL: American Academy of Pediatrics; 2011. Available at: https://www.aap.org/en-us/professional-resources/Research/pediatrician-surveys/Pages/Periodic-Survey-76-Quality-Improvement.aspx. Accessed August 12, 2015.
21. Chien AT, Conti RM, Pollack HA. A pediatric-focused review of the performance incentive literature. Curr Opin Pediatr. 2007;19(6):719–725. doi:10.1097/MOP.0b013e3282f1eb70.
22. Chien AT, Chin MH, Alexander GC, Tang H, Peek ME. Physician financial incentives and care for the underserved in the United States. Am J Manag Care. 2014;20(2):121–129.
23. Chien AT, Song Z, Chernew ME, et al. Two-year impact of the alternative quality contract on pediatric health care quality and spending. Pediatrics. 2014;133(1):96–104. doi:10.1542/peds.2012-3440.
24. Chin MH, Alexander-Young M, Burnet DL. Health Care Quality-Improvement Approaches to Reducing Child Health Disparities. Pediatrics. 2009;124(Suppl 3):S224–S236. doi:10.1542/peds.2009-1100K.
25. Lion KC, Raphael JL. Partnering Health Disparities Research With Quality Improvement Science in Pediatrics. Pediatrics. 2015;135(2):354–361. doi:10.1542/peds.2014-2982.
26. National Committee for Quality Assurance. Standards and Guidelines for NCQA’s Patient-Centered Medical Home (PCMH) 2014. Washington, DC: National Committee for Quality Assurance; 2014.
27. The Joint Commission. Approved Standards & EPs for The Joint Commission Primary Care Medical Home Option. Oakbrook Terrace, IL: The Joint Commission; 2011.
28. URAC. Patient Centered Medical Home. Washington, DC: URAC; 2015.
29. Accreditation Association for Ambulatory Health Care. Medical Home. Skokie, IL: Accreditation Association for Ambulatory Health Care; 2015.
30. Freed GL, Dunham KM, Gebremariam A, Wheeler JRC. Which pediatricians are providing care to America’s children? An update on the trends and changes during the past 26 years. J Pediatr. 2010;157(1):148–152.e1. doi:10.1016/j.jpeds.2010.01.003.
31. Kelleher KJ, Cooper J, Deans K, et al. Cost Saving and Quality of Care in a Pediatric Accountable Care Organization. Pediatrics. 2015;135(3):e582–e589. doi:10.1542/peds.2014-2725.
32. Lehmann CU, O’Connor KG, Shorte VA, Johnson TD. Use of Electronic Health Record Systems by Office-Based Pediatricians. Pediatrics. 2014;135(1):e7–e15. doi:10.1542/peds.2014-1115.
33. Zickafoose JS, Clark SJ, Sakshaug JW, Chen LM, Hollingsworth JM. Readiness of primary care practices for medical home certification. Pediatrics. 2013;131(3):473–482. doi:10.1542/peds.2012-2029.
34. Schulte BM, Mannino DM, Royal KD, Brown SL, Peterson LE, Puffer JC. Community size and organization of practice predict family physician recertification success. J Am Board Fam Med. 2014;27(3):383–390. doi:10.3122/jabfm.2014.03.130016.
