Abstract
Introduction
Under the CHIPRA Quality Demonstration Grant Program, CMS awarded $100 million through 10 grants that 18 state Medicaid agencies implemented between 2010 and 2015. The program's legislatively mandated purpose was to evaluate promising ideas for improving the quality of children's health care provided through Medicaid and CHIP. As part of the program's multifaceted evaluation, this study examined the extent to which states sustained key program activities after the demonstration ended.
Methods
We identified 114 potentially sustainable elements within states' CHIPRA demonstrations and analyzed data from grantee reports and key informant interviews to assess sustainment outcomes and key influential factors. We also assessed sustainment of the projects' intellectual capital.
Results
Overall, 56% of potentially sustainable elements were sustained. Sustainment varied by topic area: elements related to quality measure reporting and practice facilitation were more likely to be sustained than others, such as parent advisors. Broad contextual factors, the state's Medicaid environment, implementation partners' resources, and characteristics of the demonstration itself all shaped sustainment outcomes.
Discussion
Assessing sustainment of key elements of states' CHIPRA quality demonstration projects provides insight into the fates of the "promising ideas" that the grant program was designed to examine. As a result of the federal government's investment in this grant program, many demonstration states are in a strong position to extend and spread specific strategies for improving the quality of care for children in Medicaid and CHIP. Our findings provide insights for policymakers and providers working to improve the quality of health care for low-income children.
Keywords: Sustainability, Child health, Demonstration grants, CHIPRA, Quality
Introduction
The Children’s Health Insurance Program Reauthorization Act of 2009 (CHIPRA) established the CHIPRA Quality Demonstration Grant Program to “evaluate promising ideas for improving the quality of children’s health care” provided under Medicaid and the Children’s Health Insurance Program (Children’s Health Insurance Program Reauthorization Act 2009). To implement the program, the Centers for Medicare and Medicaid Services (CMS) awarded 5-year grants ranging from $8.8 million to $11.3 million to Medicaid agencies in 10 states. Because six grants involved multi-state partnerships, a total of 18 states received demonstration funds. Demonstration grants were scheduled to end in February 2015, but CMS gave most states a no-cost extension of up to 1 year.
In addition to specifying five project categories (Table 1), CMS's solicitation noted that grants should (1) build quality improvement infrastructure, rather than pay for direct services, and (2) yield state-level partnerships that would create a critical mass of stakeholders committed to system transformation (Centers for Medicare and Medicaid Services 2009). The demonstration states implemented 52 projects across the five categories, with each state allocating grant dollars to several projects concurrently. Overall, projects were modest in their reach. For example, states working to encourage adoption of patient-centered medical home (PCMH) features usually involved fewer than 20 practices.
Table 1.
| Grant category (number of states with projects) | CMS's specific goals | Example project (state) and associated elements |
|---|---|---|
| A. Report and use quality measures for children, including the CMS child core set (10 states) | Demonstrate collection and reporting on core set of child quality measures; learn how best to collect data for measures and overcome barriers; learn how stakeholders use quality measures; measure impact of use of core measures | Report and foster improvement on quality measures (Alaska). Elements: fielded Consumer Assessment of Healthcare Providers and Systems survey in 3 practices in preparation for reporting patient experience measure; hired data analyst to address obstacles to reporting child core set measures from Medicaid administrative data; engaged Medicaid and public health staff to monitor performance on measures and develop QI strategies |
| B. Develop or enhance HIT, such as electronic health records (EHRs) and health information exchange (HIE) (12 states) | Learn how best to implement HIT, including HIT promotion and how barriers can be overcome; learn how to use HIT data for quality improvement and cost reduction; measure impact of HIT on children's health care quality; determine if and how HIT increases transparency and consumer choice | Use HIT to improve information exchange and care coordination (Utah). Elements: laid groundwork for interstate HIE and shared immunization data with Idaho; developed and tested portal for pediatric medical record; enhanced online resources to help physicians and parents care for children with special health care needs; developed and implemented electronic platform that practices use to share information about quality improvement work |
| C. Develop or expand provider-based care models, such as PCMHs, SBHCs, or CMEs (17 states) | Demonstrate that selected model can be implemented; learn how best to implement models of care and identify how barriers can be overcome; determine impact of selected model | Help 18 child-serving practices improve quality of care (South Carolina). Elements: provided practice facilitation (one-on-one technical assistance); held 8 learning collaboratives; provided maintenance-of-certification credit to physicians; funded parent involvement in quality improvement work; hosted training on integration of primary and behavioral health care |
| D. Implement and evaluate the impact of a model EHR format for children^a (2 states) | Evaluate impact of format on pediatric healthcare quality and costs; learn to use data from the format to improve quality and reduce costs; determine how to promote meaningful use of the format; identify issues with the format, such as interoperability or privacy concerns | Test format's usefulness with 4 health systems and a federally qualified health center (Pennsylvania). Elements: provided health care organizations funding to incorporate new format requirements into their EHR systems; assessed usefulness of the format |
| E. Additional activities to enhance work under another category or improve quality in another CMS priority area (11 states) | Demonstrate if new or expanded model of care to improve quality of children's health care can be implemented; learn how best to implement these models and identify how barriers can be overcome; determine impact of model of care | Improve access to and quality of crisis response and family support services (Maryland). Elements: held focus groups with families and youth on crisis response and peer support; revised service delivery structure for mobile crisis and peer support services |

Source: CMS's solicitation for the CHIPRA Quality Demonstration Grant Program; analysis of data collected by the national evaluation team through interviews and document review

Abbreviations: PCMH patient-centered medical home; SBHC school-based health center; CME care management entities, which aim to improve services for children and youth with serious emotional disorders

^a The model EHR format was developed under a separate AHRQ contract, in partnership with CMS
Within the categories, projects differed in their specific objectives (Devers et al. 2013). For example, the ten demonstration projects that aimed to strengthen state infrastructure for using quality measures (Burwell 2016) typically focused on hiring staff to develop standardized procedures for reporting the measures, but two also established statewide committees to champion improvements in quality measures for children.
Although CMS did not specifically emphasize program sustainment as a demonstration goal, the focus on building infrastructure and testing promising ideas implies that CMS expected some demonstration programs to continue beyond the grant period, with ongoing beneficial effects (Moore et al. 2017). This study, part of the evaluation of the CHIPRA Quality Demonstration Grant Program led by the Agency for Healthcare Research and Quality (2016), examines the extent to which states sustained key elements of their demonstrations.
The decision to sustain an element indicates that stakeholders view it as sufficiently valuable to warrant continuation (Scheirer and Dearing 2011). By studying sustainment outcomes and contributory factors, we shed light on possible long-term effects of federal investments in demonstration programs. Furthermore, by highlighting sustained elements and factors contributing to sustainment, we provide information that states not participating in the demonstration may find useful in prioritizing their own efforts to improve child health.
Specifically, this study addresses two questions: (1) Which elements in the CHIPRA quality demonstrations were sustained? (2) What factors influenced sustainment? To address these questions, we identified key elements in each state's demonstration project, assessed the likelihood of sustainment, and developed brief case studies of four purposively selected states. These case studies are available in a supplementary file.
Methods
This study combines qualitative methods with analysis of frequency counts of elements implemented under the grant program.
Definitions
We use the term element to refer to a discrete activity or set of closely related activities that states implemented using grant funds. Many of the states' projects included multiple elements (for example, reporting quality measures to CMS and developing feedback reports for practices), each of which could be sustained independently of the others. We categorized elements as sustained or highly likely to be sustained if states had developed or implemented specific plans to continue them in the same or a largely similar form after the grant period ended (Blasinsky et al. 2006). (For the remainder of this article, we refer to these elements as sustained.) We designated elements as may be sustained if states were still developing specific plans for their continuation, and as not sustained if the state indicated the element would not continue or had no specific plans for its continuation. In addition to project elements, the grants generated intellectual capital, defined as the experience, knowledge, and influence gained by state staff and their contractors or partners responsible for state-level grant activities (Choo and Bontis 2002; Santos-Rodrigues et al. 2013).
Data Sources
We used the following data sources: states’ progress reports submitted to CMS near the end of the demonstration, in August 2014 and February 2015; states’ final reports submitted to CMS before July 30, 2015; 356 semi-structured interviews conducted during site visits in mid-2014 that were coded and entered into NVivo (Bazeley 2007); notes from telephone and email contacts with one to three key staff in all 18 demonstration states between May and July 2015 to clarify our understanding of their sustainment plans; and interviews with one to three program staff in four states in August 2015, which we used to complete our case studies. The Office of Management and Budget and the institutional review boards for Mathematica and the Urban Institute approved our data collection methods.
Analytic Methods
We reviewed program elements identified in the sources noted above, using standard qualitative methods to identify key themes (Bradley et al. 2007; Bazeley 2007). Based on this review, we grouped elements into nine mutually exclusive categories: (1) learning collaboratives (a structured group learning approach through which practices received didactic instruction and opportunities for peer-to-peer learning); (2) practice facilitators (coaches who provide direct assistance to providers, such as helping them develop practice-based quality reports, engage with families, or obtain PCMH recognition); (3) financial or labor resources provided to practices or school-based health centers (SBHCs) participating in quality improvement (QI) activities, including stipends and staff subsidized by the state; (4) maintenance-of-certification programs and other structured QI trainings; (5) quality measure reporting (including reporting quality measures to CMS and other stakeholders, such as health insurers or practices); (6) health IT applications (such as efforts to improve the functionality or use of electronic health records or health information exchanges); (7) efforts to enhance family engagement with child-serving providers or agencies; (8) efforts to develop multi-stakeholder partnerships focused on QI; and (9) other elements (such as writing QI specifications for managed care contracts). Because states frequently combined learning collaboratives with practice facilitators, financial and labor resources, and health care training or certification programs, we aggregated them into the topic of service delivery transformation.
We excluded from our analyses elements that (1) were not designed to be sustained (for example, demonstration projects' technical advisory panels) or (2) had begun but were discontinued before the demonstration's fifth year.
To determine which elements were sustained (our first research question), we drew on evidence available as of August 31, 2015—after the end of the original grant period but before 14 states’ no-cost extension periods were over. We assessed a state’s grant-generated intellectual capital as being sustained if (1) grant staff were continuing to play leadership roles in developing or implementing state-level QI initiatives for children or (2) the evidence indicated that the state staff involved with the grant would continue to work on state-level QI for children with organizations (such as state universities) that had provided the grant’s intellectual leadership.
Researchers and analysts with substantial knowledge of individual demonstration states assessed available evidence and made initial sustainment determinations for each element. To ensure inter-rater reliability, four of the study’s researchers reviewed these initial determinations and, if necessary, discussed the evidence until consensus was reached. As a final quality control check, we asked state staff to review our determinations and, if they believed them to be inaccurate, to provide additional pertinent information. For about 15% of the determinations, additional information led us to change our judgment regarding sustainability.
To assess which factors influenced sustainment (our second research question), we constructed case studies of four states (Alaska, Maryland, South Carolina, and Utah), purposively selected to illustrate variation across element categories and pathways to sustainment outcomes (Patton 1996; Yin 2014). For each case study, we relied on the sources noted above to develop narratives that described the elements implemented under the demonstration, their sustainment outcomes, and key contributing factors. We also drew on research that identified some factors potentially affecting sustainability of demonstration programs (Blasinsky et al. 2006; Choo and Bontis 2002; Gruen et al. 2008; Scheirer 2005; Scheirer and Dearing 2011; Stirman et al. 2012; Proctor et al. 2011; Santos-Rodrigues et al. 2013; Savaya et al. 2008). We queried our NVivo database to identify and analyze data for the case studies, and then conducted an additional interview with staff from each case study state to refine our understanding of factors influencing sustainment. Team members iteratively reviewed each case until we agreed it faithfully represented the data. For further validation, each case study was reviewed by a state representative.
Results
The 18 demonstration states implemented 114 elements by the grant program’s 5th year (Table 2). States varied in the number of potentially sustainable elements they implemented because they used different strategies in allocating grant funds. Some states spread funds across numerous elements (3 states each implemented 8 or more elements); others focused on fewer elements (7 states each implemented 4 or 5).
Table 2.
| State | Number of elements implemented (number sustained or highly likely to be sustained) | Learning collaboratives | Financial or labor resources | Facilitators | Training, certification | Quality reporting | Health IT | Family engagement | Partnerships | Other | Intellectual capital |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Total | 114 (64) | 12 (6) | 10 (2) | 13 (10) | 5 (3) | 26 (19) | 22 (10) | 9 (3) | 8 (4) | 9 (7) | |
| Alaska | 5 (3) | 1 (1) | 1 (0) | 1 (1) | – | 2 (1) | – | – | – | – | NS |
| Colorado | 4 (3) | – | 1 (1) | 1 (1) | – | – | 1 (1) | 1 (0) | – | – | S^a |
| Florida | 8 (2) | 1 (0) | – | 1 (0) | – | 2 (1) | 2 (1) | 1 (0) | 1 (0) | – | NS |
| Georgia | 4 (3) | – | – | – | 1 (0) | – | – | 2 (2) | – | 1 (1) | S^a |
| Idaho | 7 (2) | 1 (0) | 1 (0) | 1 (1) | – | – | 2 (0) | 1 (0) | 1 (1) | – | S^b |
| Illinois | 14 (7) | 1 (0) | 1 (0) | 1 (0) | – | 4 (4) | 3 (0) | – | 2 (2) | 2 (1) | S^a |
| Maine | 6 (4) | 1 (1) | – | – | – | 4 (3) | – | – | 1 (0) | – | S^c |
| Maryland | 4 (3) | – | – | – | 1 (1) | – | – | 1 (0) | – | 2 (2) | S^c |
| Massachusetts | 7 (4) | 1 (1) | 1 (0) | – | – | 4 (2) | – | – | 1 (1) | – | S^c |
| New Mexico | 5 (3) | – | 1 (0) | 1 (1) | – | – | 1 (1) | 1 (1) | – | 1 (0) | S^c |
| North Carolina | 9 (7) | 1 (0) | – | 3 (3) | – | 3 (2) | 1 (1) | – | – | 1 (1) | S^a |
| Oregon | 7 (2) | 1 (0) | 1 (0) | 2 (1) | – | 2 (1) | 1 (0) | – | – | – | S^c |
| Pennsylvania | 5 (3) | – | – | – | – | 2 (2) | 3 (1) | – | – | – | S^a |
| South Carolina | 6 (4) | 1 (1) | – | 1 (1) | 2 (1) | 1 (1) | – | 1 (0) | – | – | S^b |
| Utah | 7 (3) | 1 (1) | 1 (0) | – | – | – | 4 (2) | 1 (0) | – | – | S^c |
| Vermont | 6 (5) | 1 (1) | 1 (1) | 1 (1) | – | 1 (1) | 1 (1) | – | 1 (0) | – | NS |
| West Virginia | 4 (1) | 1 (0) | 1 (0) | – | – | 1 (1) | 1 (0) | – | – | – | NS |
| Wyoming | 6 (5) | – | – | – | 1 (1) | – | 2 (2) | – | 1 (0) | 2 (2) | S^a |

Source: Analysis of data collected by national evaluation team through interview and document review

Notes: Topic-area columns show the number of elements implemented in each topic area (number sustained). Learning collaboratives, financial or labor resources, facilitators, and training/certification together make up the service delivery transformation topic area. Dashes (–) mean the state implemented no elements in the topic area. The number of elements is not an index of total effort. Some elements require more resources than others. Also, some states allocated grant dollars to elements not intended to be sustained; these elements are not included in this list. S sustained, NS not sustained. "Other" elements spanned a variety of topics, such as tracking psychotropic prescribing patterns in children or writing QI specifications for managed care contracts

^a State staff stayed in current position or moved to new one in the same agency, remaining closely involved in QI initiatives

^b Developed new administrative entity to continue QI activities for children

^c Retained staff via ongoing contracts with state university or other entity
Which elements in the CHIPRA quality demonstrations were sustained?
Across all states, 56% of elements were sustained (Table 3). In some topic areas, a few states contributed disproportionately to the total number of elements and the number of elements sustained. For example, as 1 of the 11 states working on quality measure reporting, Illinois contributed 4 of the 26 elements implemented across the states (15%) and 4 of the 20 sustained elements (20%). The percentage of sustained elements varied by topic, ranging from 20 to 77%. Elements related to practice facilitation and quality measure reporting were more likely to be sustained than elements in other areas.
Table 3.
| Topics | Number of states with designated element | Total number of elements | Sustained (%) | May be sustained (%) | Not sustained (%) |
|---|---|---|---|---|---|
| Total | 18 | 114 | 56 | 9 | 34 |
| Service delivery transformation | 17 | 40 | 53 | 5 | 43 |
| Learning collaboratives | 12 | 12 | 50 | 8 | 42 |
| Financial or labor resources | 10 | 10 | 20 | 10 | 70 |
| Facilitators | 10 | 13 | 77 | 0 | 23 |
| Training, certification | 4 | 5 | 60 | 0 | 40 |
| Quality reporting | 11 | 26 | 73 | 4 | 23 |
| Health IT | 12 | 22 | 45 | 23 | 32 |
| Family engagement | 8 | 9 | 33 | 0 | 67 |
| Partnerships | 7 | 8 | 50 | 13 | 38 |
| Other | 6 | 9 | 78 | 11 | 11 |

Source: Analysis of data collected by national evaluation team through interviews and document review

Notes: Learning collaboratives, financial or labor resources, facilitators, and training/certification are components of the service delivery transformation topic area. "Other" includes a diverse group of elements such as tracking psychotropic prescribing patterns in children or writing QI specifications for Medicaid managed care contracts. Percentages may not add to 100 because of rounding
Service Delivery Transformation
Seventeen demonstration states implemented 40 elements within this topic area. States sustained 53% of these elements, but some types of elements were more likely to be sustained than others. For example, states sustained 77% of their practice facilitator elements, compared with 60% of their training and certification elements, 50% of their learning collaboratives, and 20% of their financial and labor-support elements (such as stipends or subsidies for practice staff). In New Mexico, for example, state-funded practice facilitators continued to help SBHCs implement QI efforts after the demonstration; however, the state stopped providing SBHCs financial incentives to support those efforts.
Quality Measure Reporting
These elements involved developing strategies, expertise, or data manipulation procedures to report the Child Core Set of quality measures to CMS, or to report performance on other quality measures (for example, practice-level rates of developmental screening) to state policymakers, practices, health care systems, managed care organizations, or the public. Overall, 77% of these elements were sustained.
Of the 11 states that worked on quality measurement, eight developed activities or built infrastructure for reporting quality measures to CMS (for example, Massachusetts linked data from several sources, including health plans and a state database that stores Medicaid and commercial insurance data); all of these activities were sustained (data not shown). Seven worked on elements related to quality measure reports for stakeholders within the state (for example, reports showing a practice’s performance on selected measures such as rates of well-child visits or immunizations); 64% of these elements were sustained (data not shown).
Health IT
Twelve states implemented a diverse range of elements related to health IT, which included providing technical assistance to improve collection and use of EHR data for quality improvement, enhancing data system interoperability, and establishing Web sites with information for providers or families about chronic health conditions. Although 45% of these elements were sustained, an additional 23% are in the “may be sustained” category, a substantially higher percentage compared with other topic areas. (Some health IT activities were not included in our calculations because they were planned but not implemented or were abandoned in the early years of the grant, usually for reasons related to technical feasibility.)
Family Engagement
Some demonstration projects aimed to enhance engagement with families (such as giving stipends to practices to support family advisors). Elements related to family engagement were least likely to be sustained (33%).
QI Partnerships
Some states used demonstration funds to establish statewide QI partnerships, often using the National Improvement Partnership Network model (see https://www.uvm.edu/medicine/nipn/). For example, Idaho developed a new statewide partnership that will serve to continue QI activities for children. Half (50%) of elements related to QI partnerships were sustained.
Intellectual Capital
The intellectual capital acquired during the demonstration was sustained in varying forms in 14 of the 18 states (Table 2). Based on demonstration grant experiences, two states developed new entities to oversee QI work for children. In six states, key state staff either stayed in their positions or moved to other positions in the Medicaid agency, remaining closely involved in child health QI activities. Another six states built on demonstration activities through continued relationships with contracted staff at state universities and other entities. In many states, program staff whom we interviewed noted that the grant had (1) substantially increased their state’s overall investment in QI activities for children because of new partnerships with other agencies, providers, and quality specialists and (2) made it possible for them to take advantage of new grant or legislative opportunities through which they could strategically extend the knowledge gained from the demonstration.
What factors influenced sustainment?
Our case studies (see supplemental file) indicate that several factors interacted to shape sustainment decisions. As other studies have found (e.g., Gruen et al. 2008), leadership and availability of new financial support almost always played key roles. For example, in North Carolina, the project’s leadership, which included a well-known and highly-respected pediatrician, regularly informed stakeholders about project activities.
Interestingly, four other factors also were important in our case study states. First, sustainment of some program elements resulted from the state’s investment in infrastructure and institutionalization of procedures developed with grant funds. For example, Alaska used CHIPRA quality demonstration funds to hire a data analyst to develop procedures for improving and linking data sources needed to report some of the measures in the Child Core Set. These new procedures were integrated into the state’s standard operations for reporting measures to CMS, in anticipation of the possibility that this voluntary reporting may eventually become mandatory.
Second, early planning for sustainment, combined with systematic evidence of a program’s effects from state-based process evaluations, sometimes contributed to sustainment. For example, within the first year of the project, South Carolina’s leadership team established a 15-member steering committee that focused on developing sustainment plans. In addition, researchers at the University of South Carolina (a key partner) systematically gathered information about the experience and performance of the 18 practices participating in the demonstration, as well as the state’s performance on selected core quality measures. According to the program’s leadership, this information, when it was shared with the director of the Medicaid agency, helped demonstrate the program’s value and contributed to a decision to establish a new unit in the Medicaid agency that would focus specifically on improving quality of care for children.
Third, states sometimes sustained programs by aligning program activities with the broader goals of the host agency. For example, Maryland used its demonstration to expand on its long-standing efforts to improve intensive, cross-agency service coordination for children with complex behavioral health needs. The state and the University of Maryland (its partner with extensive experience in this area): (1) incorporated new modules into an existing training program for care coordinators, (2) customized an existing data system to fit local care coordination needs, (3) improved data infrastructure for monitoring services across agencies, and (4) developed and submitted a Medicaid state plan amendment (SPA) to improve access to and quality of services. When it was subsequently approved by CMS, the SPA provided a new funding stream to sustain several elements.
Fourth, stakeholder support was a critical factor in several states. For example, Utah used its grant funds to develop a website with modules describing chronic conditions affecting children, offering information tailored for physicians and families, and hosting a newsletter and blogs. As a result of the website's popularity with providers both within and outside the state, the state sought and received support from other grants and a major hospital system to cover the costs of the website's maintenance. In contrast, another health IT application (a platform for portable medical records) was developed and tested for several years but ultimately not sustained: few providers could use the platform because of technical problems with the state's HIE, and a key implementation partner viewed it as a low priority.
Discussion
Assessing sustainment of key elements of the states’ CHIPRA quality demonstration projects provides insight into the fates of the “promising ideas” for improving the quality of children’s health care that the grant program was designed to examine. These findings may provide useful insights for policymakers and child-health practitioners who have opportunities to invest resources for improving quality of care for Medicaid-enrolled children. Our findings indicate that more than half of the elements that demonstration states implemented by the program’s 5th year were sustained after the grant period ended. Moreover, most states found a way to sustain the intellectual capital developed during the grant period. As a result of the federal government’s investment in this grant program, many demonstration states were in a strong position to extend and spread specific strategies for improving the quality of care for publicly-insured children.
Multiple factors influenced sustainment decisions. Depending on the particular state and element type, broad contextual factors at the federal and state level (including availability of new funds), the state’s Medicaid policy and program environment, implementation partners’ resources and clout, and characteristics of the demonstration itself all shaped sustainment outcomes. For example, demonstration states sustained 77% of quality measurement elements because of factors such as states’ existing measure-related contracts with universities, relatively low costs of institutionalizing procedures first developed with demonstration funds, and anticipation that voluntary reporting of quality measures to CMS would become mandatory.
The cost of sustaining elements, while an important factor, was not necessarily decisive. For example, practice facilitation has high operating costs (Geonotti et al. 2015) but was nevertheless among the element types most often sustained, an outcome shaped in part by the rising popularity of this method as a component of practice transformation efforts. In some cases, the strength of an implementation partner's influence also made a difference: influential implementation partners were sometimes able to secure continued funding for elements in which they had a special interest.
Few states sustained elements that involved providing financial or labor support directly to practices, reflecting the fact that stipends for participating in QI activities and payments for parent partners fall outside Medicaid's usual payment models. In this case, the lack of congruence between this type of element and states' administrative mechanisms made it difficult to find pathways for sustainment.
In a demonstration program that tests promising ideas, failure to sustain an element is not necessarily negative. Program leaders may decide not to continue an element because they discover that underlying assumptions are faulty. For example, two states discontinued direct secure messaging efforts because of meager provider interest as well as technical challenges.
This study has several limitations. First, elements are not equivalent in scope. Some were large, expansive endeavors; others were small and narrowly focused. Hence, the number of elements is not an index of the overall magnitude of a state's effort. Second, our results, which derive from evidence available before many of the no-cost extensions had ended, should be interpreted in the context of an extremely fluid environment: projects sustained in 1 month can be canceled the next, or vice versa. Third, our analysis drew primarily on interviews with state staff, who may not have represented the perspectives of all individuals involved in the grant. Similarly, we examined sustainment at the state level only, and therefore did not assess whether practices, school-based health centers, or other participating organizations continued elements without state support. Finally, despite purposive sampling of our four case study states, we may have failed to reach data saturation in identifying factors influencing sustainment decisions.
Our findings suggest that no single factor guarantees that demonstration elements will be sustained, but certain actions may increase the likelihood of sustainment. These include building a foundation for sustainment by aligning program goals with the goals of the home institution; seeking new sources of funding; engaging in early sustainment planning based on evidence about the program’s perceived value; institutionalizing routines and infrastructures as much as possible; and leveraging the experience and influence of implementation partners. Finally, as our and other case studies have shown, even effective programs are unlikely to be sustained without a healthy dose of skill and dedication from the leadership team. Although this study grew from an evaluation of a federal demonstration to improve children’s health care quality, our findings may be applicable to assessing state-level sustainment of other federal demonstration initiatives.
Supplementary Material
Significance.
What is already known on this subject?
Previous reports have identified factors affecting sustainment of grant programs—such as available funding, relationships with implementation partners, and program complexity—but few studies have focused specifically on federal grants to states to improve the quality of children’s health care.
What does this study add?
This study provides new information about the extent to which states sustained key elements of federal grants designed to identify promising strategies to enhance quality of care for Medicaid-enrolled children. It also identifies critical factors that influenced sustainment outcomes.
Acknowledgments
This report was prepared for the Agency for Healthcare Research and Quality (AHRQ) by Mathematica Policy Research and its partners the Urban Institute and AcademyHealth under contract HHSA29020090002191. We appreciate the contributions of key staff of the national evaluation team including Vanessa Forsberg, Alicia Haelen, Amanda Napoles, Christal Ramos, and Rebecca Peters. We also thank our colleague at AHRQ, Linda Bergofsky, and staff at CMS, including Karen LLanos, Elizabeth Hill, and Barbara Dailey. Special thanks to staff and stakeholders in the demonstration states, some of whom read and commented on several versions of the case studies.
Footnotes
Electronic supplementary material The online version of this article (https://doi.org/10.1007/s10995-017-2391-z) contains supplementary material, which is available to authorized users.
References
- Agency for Healthcare Research and Quality. National evaluation of the CHIPRA Quality Demonstration Program. 2016. http://www.ahrq.gov/policymakers/chipra/demoeval/index.html. Accessed 31 August 2016.
- Bazeley P. Qualitative data analysis with NVivo. Thousand Oaks, CA: Sage Publications; 2007.
- Blasinsky M, Goldman H, Unutzer J. Project IMPACT: A report on barriers and facilitators to sustainability. Administration and Policy in Mental Health and Mental Health Services Research. 2006;33(6):718–729. doi: 10.1007/s10488-006-0086-7.
- Bradley E, Curry L, Devers K. Qualitative data analysis for health services research: Developing taxonomy, themes, and theory. Health Services Research. 2007;42:1153–1188. doi: 10.1111/j.1475-6773.2006.00684.x.
- Burwell S. 2015 Annual Report on the Quality of Care for Children in Medicaid and CHIP. Baltimore, Maryland: Department of Health and Human Services; 2016.
- Centers for Medicare and Medicaid Services. Invitation to Apply for FY2010 CHIPRA Quality Demonstration Grants (CFDA 93.767). Baltimore, Maryland: Department of Health and Human Services; 2009.
- Children's Health Insurance Program Reauthorization Act of 2009. 42 U.S.C. Section 401. 2009.
- Choo C, Bontis N. The strategic management of intellectual capital and organizational knowledge. Oxford: Oxford University Press; 2002.
- Devers K, Foster L, Brach C. Nine states' use of collaboratives to improve children's health care quality in Medicaid and CHIP. Academic Pediatrics. 2013;13:S95–S102. doi: 10.1016/j.acap.2013.04.008.
- Geonotti K, Peikes D, Taylor EF, McNellis R, Genevro J. Engaging primary care practices in quality improvement: Strategies for practice facilitators. Rockville, MD: Agency for Healthcare Research and Quality; April 2015. Publication No. 15-0015-3-EF.
- Gruen R, Elliot J, Nolan M, Lawton A, McLaren C, Lavis J. Sustainability science: An integrated approach for health-programme planning. Lancet. 2008;372:1579–1589. doi: 10.1016/S0140-6736(08)61659-1.
- Hearld L, Bleser W, Alexander J, Wolf L. A systematic review of the literature on the sustainability of community health collaboratives. Medical Care Research and Review. 2016;73(2):127–181. doi: 10.1177/1077558715607162.
- Moore J, Mascarenhas A, Bain J, Straus S. Developing a comprehensive definition of sustainability. Implementation Science. 2017;12:110–119. doi: 10.1186/s13012-017-0637-1.
- Patton M. Utilization-focused evaluation. 3rd ed. Thousand Oaks, CA: Sage Publications; 1996.
- Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38:65–76. doi: 10.1007/s10488-010-0319-7.
- Santos-Rodrigues H, Faria J, Cranfield D, Morais C. Intellectual capital and innovation: A case study of a public healthcare organization in Europe. Electronic Journal of Knowledge Management. 2013;11(4):361–372.
- Savaya R, Spiro S, Elhran-Barak R. Sustainability of social programs: A comparative case study analysis. American Journal of Evaluation. 2008;29(4):478–493.
- Scheirer M. Is sustainability possible? A review and commentary on empirical studies of program sustainability. American Journal of Evaluation. 2005;26(3):320–347.
- Scheirer MA, Dearing JW. An agenda for research on the sustainability of public health programs. American Journal of Public Health. 2011;101:2059–2067. doi: 10.2105/AJPH.2011.300193.
- Stirman S, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: A review of the empirical literature and recommendations for future research. Implementation Science. 2012;7:17. doi: 10.1186/1748-5908-7-17.
- Yin R. Case study research: Design and methods. 5th ed. Thousand Oaks, CA: Sage Publications; 2014.