2016 Mar 7;31(Suppl 1):61–69. doi: 10.1007/s11606-015-3567-0

Table 3.

Evidence and Policy Implications by Implementation Framework Category

Implementation Framework Category | Study Evidence | Themes from KI Interviews | Policy Implications

Program design features

Study evidence: Thirteen studies (refs. 9–20, 50) examined program design features and found:
• Measures linked to quality and patient care were positively related to improvements in quality and greater provider confidence in the ability to provide quality care, while measures tied to efficiency were negatively associated.
• Perceptions of program effectiveness were related to the perception that measures aligned with organizational goals; perceived financial salience and perceived target achievability were both related to measure adherence.
• Different payment models result in differences in both bonuses/payments and performance.
• More statistically stringent methods of creating composite quality scores were more reliable than raw sum scores.
• The cost-effectiveness of P4P varies widely by measure.
Themes from KI interviews and policy implications:
• Programs should include a combination of process-of-care and patient outcome measures.
• Process-of-care measures should be evidence-based, clear and simple, linked to specific actions rather than complex processes, and clearly connected to a desired outcome.
• Measure targets should be grounded in clinical significance rather than data improvement.
• Disseminate the evidence behind and rationale for incentivized measures.
• Measures should reflect the priorities of the organization, its providers, and its patients.
• Incentives should be designed to stimulate different actions depending on the level of the organization at which they are targeted.
• Incentives must be large enough to motivate but not so large that they encourage gaming; hypothesized optimal sizes range from 5 to 15%.
• Incentives should be based on improvements, and all program participants should have the ability to earn incentives.
• The magnitude of an incentive attached to a specific measure should be relative to organizational priorities.
• Consider distributing incentives to clinical and nonclinical staff.
• Programs that emphasize measures that target process of care or clinical outcomes that are transparently evidence-based and viewed as clinically important may inspire more positive change than programs that use measures targeted to efficiency or productivity, or do not explicitly engage providers from the outset.
• The incentive structure needs to carefully consider several factors including incentive size, frequency, and target.
Implementation processes

Study evidence: Eight studies (refs. 21–28) examined changes in implementation, seven of them specifically related to updating or retiring measures, and found:
• In both the QOF and the VHA, removing an incentive from a measure had little impact on performance once a high level of performance had been achieved.
• Increasing maximum thresholds resulted in greater increases by poorer-performing practices.
Themes from KI interviews and policy implications:
• Evaluate measures regularly, and consider increasing thresholds or removing incentives once high performance has been achieved.
• Stakeholder involvement and provider buy-in are critical.
• A bottom-up approach is effective.
• Provide reliable data/feedback to providers in a non-judgmental fashion.
• P4P programs should target areas of poor performance and consider de-emphasizing areas that have achieved high performance.
Outer setting

Study evidence: Six studies (refs. 17, 29, 31, 33, 34, 48) examined implementation factors related to the outer setting and found:
• There is no clear evidence that setting (e.g., region, urban vs. rural) or patient population predict P4P program success in the long term.
Themes from KI interviews and policy implications:
• Measures should be realistic within the patient population and health system in which they are used.
• Programs should be flexible to allow organizations to meet the needs of their patient populations.
• P4P programs should have the capacity to change over time in response to ongoing measurement of data and provider input.
Inner setting

Study evidence: Eighteen studies (refs. 15, 30, 33–48) examined implementation factors related to the inner setting and found:
• For providers, working as a contractor rather than as an employee of a practice was associated with greater efficiency and higher quality.
• Under the QOF, practices improved regardless of list size, with larger practices performing better in the short term.
• Under the QOF, there is limited evidence that group practice and training status were associated with higher quality of care.
• Findings were less clear in the U.S. and elsewhere with regard to practice size and training status.
Themes from KI interviews and policy implications:
• Resources must be devoted to implementation, particularly when new measures are introduced.
• Provide support at the local level, including designating a local champion.
• Incentives are just one piece of an overall quality improvement program. Other important factors may include a strong infrastructure, organizational culture, allocation of resources, and public reporting.
• Public reporting is a strong motivator, and future research should work to untangle public reporting from P4P.
• Programs that emphasize measures that target process of care or clinical outcomes that are transparently evidence-based and viewed as clinically important may inspire more positive change than programs that use measures targeted to efficiency or productivity, or do not explicitly engage providers from the outset.
• P4P programs should have the capacity to change over time in response to ongoing measurement of data and provider input.
Provider characteristics

Study evidence: Five studies (refs. 13, 29, 34, 43, 49) examined characteristics of the individuals involved and provided no strong evidence that provider characteristics such as gender, experience, or specialty play a role in P4P program success.

Note: Categories are not mutually exclusive.