Abstract
Researchers often focus on the data and methods used to assess policy changes, but data and methods can also be policy tools. To improve, health care systems need mechanisms and incentives for continually gathering, assessing, and acting on data. This requires (1) more comprehensive data, (2) converting data into information, and (3) incentives to apply that information. Restructured economic incentives can encourage clinicians to increase value (higher quality and/or lower cost) for their patients. While necessary, incentives are not sufficient; information is also needed. Incentives can lead clinicians to demand better information. Much of the necessary data are already used in patient care and billing; some additional variables will come directly from patients. The notion builds on two concepts: collective intelligence and positive deviance. The former characterizes knowledge gained from observing the behavior of many independent actors adapting to changing situations. Positive deviants are those who achieve far better results than expected. Rewarding positive deviants, rather than trying to identify and “correct” those who are problematic, encourages providers to voluntarily identify themselves and to share their methods for achieving superior outcomes.
Keywords: Health economics, health care financing/insurance/premiums, health care organizations and systems, health policy/politics/law/regulation, incentives in health care, payment systems: FFS/capitation/RBRVS/DRGs/risk-adjusted payments
Health services researchers often focus on the data and methods needed to assess new policies, but data and methods can also be policy tools. A health care system needs internal mechanisms and incentives for continually gathering, assessing, and acting on data to improve. This requires converting data into useful information and incentives to apply that information. I describe elsewhere (Luft 2008) fundamental reform of the health care system to become an adaptive system with economic incentives rewarding clinicians increasing value (higher quality and/or lower cost) for their patients. This creates a “business case” for clinicians to demand better information.
The first part of this paper discusses aggregating operational data from many different sources. Such aggregation, however, requires careful attention to data quality and the interests of the data suppliers. Assuming reasonably reliable datasets,1 the second section introduces two concepts: collective intelligence (CI) and positive deviance. The former characterizes knowledge gained from observing the behavior of many independent actors (Center for Collective Intelligence 2010). Positive deviants are those who achieve far better results than one would expect (Marsh 2004; Sack 2009). The third section describes how various incentives can accelerate the use of information to enhance quality and efficiency.
MULTIPLE USES OF DATA
Health care providers typically collect data primarily to treat patients or to justify billing. With improved coding consistency, standardized physician identifiers, and linking patients across providers, one can develop a dataset that captures nearly everything provided in the medical care system. Real-time data sharing is not necessary; the usual timeliness of claims submission is adequate. Data are extracted from existing systems and transmitted in uniform formats, consistent with “federated models” for research data.2
Linkage across independent providers and systems is not a trivial task (Bradley, Penberthy, and Devers 2010). Significant care is needed to ensure data quality, but as data are increasingly shared for patient care purposes, higher standards in coding with routine audits can be expected. Linking and sharing patient information for use beyond patient care raises important confidentiality concerns (Lane and Schnur 2010). HIPAA recognizes society's interests in sharing data without explicit patient consent. Total deidentification often degrades the usefulness of data, but more sophisticated encryption algorithms combined with penalties targeted at data users can markedly improve security.3
Combining data from various sources and deidentifying records are technical problems; access to the data is a policy question. Scientific research is a classic public good, so all data should be in the public domain, shareable under HIPAA-type protections. Given the societal benefits, (a) public funding should support data linkage and preparation, and (b) all data used in the publicly supported system (including tax-subsidized “private insurance”) should be in the files.4 Data users would (a) sign data use agreements covering security and the purposes for which the data are to be used, and (b) make public the methods used in analyzing the data.
CI AND POSITIVE DEVIANCE
CI refers to shared or group “intelligence” emerging from the collaboration and competition of many individuals.5 Tracking Google search frequency identifies influenza and listeriosis weeks before the official announcements (Wilson and Brownstein 2009). Likewise, data from routine claims and clinical systems can create information on what is being done daily by thousands of providers and millions of patients. The CI concept assumes that individuals constantly adapt to their environment. Because neither the environment nor the behavioral responses are completely described in advance, much can be learned by observing such behavior.
The growing literature on “pragmatic (or practical) clinical trials” is based on observing how drugs are used in day-to-day practice (Tunis, Stryer, and Clancy 2003; Glasgow et al. 2005). External factors, however, such as insurance formularies, may influence the inferences one can draw. Intelligently using CI requires careful consideration of the observational context, much as economists must carefully define the factors they consider to be independent (exogenous), and those that are dependent (endogenous) variables.
Learning from CI will be enhanced by the search for unexpected successes rather than unusual failures. Failures should be rare, hence difficult to detect statistically; few involved in failures voluntarily come forward for examination. Here, the notion of positive deviance is important. It “is the observation that in most settings a few at risk individuals follow uncommon, beneficial practices and consequently experience better outcomes than their neighbours who share similar risks” (Marsh 2004).
We therefore need datasets and analytic approaches allowing unobtrusive and confidential monitoring to identify patterns of care leading to unexpectedly good outcomes. The problems in using routinely collected data to develop public report cards are well known (Hofer et al. 1999; Marshall et al. 2000; Shahian and Normand 2008). Occasional “bad” results can be statistical flukes; the absence of bad results may reflect small sample size (Luft and Hunt 1986). These problems, however, primarily arise when attempting to publicly identify specific providers.
Any collection of “good” statistical outliers (apparent positive deviants) includes some just by chance. In research work at PAMFRI, we examined resource use by primary care physicians (PCPs) for patients with diabetes, adjusting for patient-level risk factors. Instead of the expected 5 percent outlier rate, 22 percent were outliers, with many PCPs having high- or low-cost patterns over several years (Luft and Eaton 2010).
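The statistical logic of such an outlier screen can be sketched briefly. This is a minimal illustration, not the method used in the cited study; the physician panels, residual distributions, and thresholds below are simulated assumptions for exposition.

```python
import random
import statistics

def flag_outliers(panels, z_cut=1.96):
    """Flag physicians whose mean risk-adjusted cost residual is extreme.

    `panels` maps a physician code to a list of per-patient residuals
    (observed cost minus the risk model's predicted cost). With no true
    practice-style differences, roughly 5 percent of physicians should
    exceed |z| = 1.96 by chance alone; a much higher observed rate,
    persisting across years, suggests real high- and low-cost patterns.
    """
    flags = {}
    for doc, residuals in panels.items():
        se = statistics.stdev(residuals) / len(residuals) ** 0.5
        z = statistics.mean(residuals) / se
        flags[doc] = "high" if z > z_cut else "low" if z < -z_cut else None
    return flags

# Simulated example: 200 physicians, 50 patients each, pure noise,
# so the flagged share should hover near the chance rate of 5 percent.
random.seed(0)
panels = {d: [random.gauss(0, 1000) for _ in range(50)] for d in range(200)}
flags = flag_outliers(panels)
share = sum(f is not None for f in flags.values()) / len(flags)
print(f"outlier share under the null: {share:.1%}")
```

An observed outlier share well above this chance rate, as in the diabetes analysis described above, is what distinguishes genuine positive (and negative) deviants from statistical noise.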
The above findings were derived from a single system in which researchers and analysts had direct access to the data. How would this work when data come from multiple sources and providers are promised that their names will be kept confidential? A trusted third party could combine the data and make them available without real patient or provider names. An analyst could ask this central source to invite providers appearing to be positive deviants to discuss what, if anything, they may have done to achieve such good results. Self-identification is voluntary and potentially yields professional recognition (and more patients).
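One way a trusted third party could assign stable but irreversible codes is a keyed hash. The sketch below is an assumption about implementation, not a description of any existing consolidator; the class, key, and provider identifier are hypothetical.

```python
import hmac
import hashlib

class TrustedThirdParty:
    """Toy data consolidator: replaces real identifiers with pseudonyms.

    The secret key stays with the third party, so the same provider
    receives the same code in every file (allowing linkage across
    sources) while analysts cannot reverse the codes. Inviting an
    apparent positive deviant uses the internally retained mapping.
    """

    def __init__(self, secret_key: bytes):
        self._key = secret_key
        self._real_id = {}  # pseudonym -> real identifier, never released

    def pseudonym(self, real_id: str) -> str:
        code = hmac.new(self._key, real_id.encode(), hashlib.sha256).hexdigest()[:12]
        self._real_id[code] = real_id
        return code

    def invite(self, code: str) -> str:
        # Only the consolidator can translate a flagged code back into a
        # (voluntary) invitation to the actual provider.
        return f"invitation sent to {self._real_id[code]}"

ttp = TrustedThirdParty(b"key-held-only-by-the-consolidator")
code = ttp.pseudonym("provider-12345")  # hypothetical identifier
print(code, "->", ttp.invite(code))
```

The design choice matters: a keyed hash (unlike full deidentification) preserves longitudinal linkage across files, yet re-identification requires the consolidator's cooperation, which is granted only for the voluntary contact step.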
This melding of CI and positive deviance requires no central agency deciding what to examine or what the evidence means. It only needs to deidentify data already being used for patient care and payment. Multiple analysts can simultaneously address the same question.6 The model is PubMed, which provides open access to information, rather than the Agency for Healthcare Policy and Research, which was almost defunded because of findings opposed by a special interest group (Meier 2009).
INCENTIVES TO USE INFORMATION IN A (SLIGHTLY) REFORMED SYSTEM
A realignment of the payment system could rapidly drive improvements in value, that is, more efficient production of better quality care through economic incentives and the enhanced availability of information (Luft 2008). Current legislation falls short of such a fundamental change; simply making the data available, however, could enhance incentives for its conversion into information for improving value.
Currently, many insurers voluntarily provide their data to Ingenix to create a sample deep enough for provider-specific studies. Although Ingenix is owned by UnitedHealth Group, which competes directly with these plans, contractual agreements preclude the sharing of sensitive information. The missing player in this process is Medicare, but that could readily be changed. Routine claims data for all patients and providers could then be linked and given new identifiers by an entity I refer to as a data consolidator.
There is also a business case for this. In some states, such as Florida, electronic billing information is currently funneled through a single source.7 This offers significant savings to providers and insurers; a single hub preprocesses claims to correct potential errors, facilitating faster payment. Unlike Ingenix, which contracts with payers, the consolidator would serve providers.8 Furthermore, the consolidator would create the datasets that can be accessed subject to the policies described above.
Service use varies widely across geographic areas and medical centers. Analysts accessing this newly consolidated information could search for those physicians (or local aggregations of physicians) who manage their patients' care at lower than average cost. Implicit grouping may be necessary because some patients seem not to stay with a particular PCP (Pham et al. 2007). Insurers using such analysts' services would not know the identities of the providers, but they could send an e-mail to all providers indicating an interest in negotiating with those whose code numbers are on their list of “positive deviants.” The consolidator maintains provider anonymity during the negotiations.
Providers could self-identify to insurers offering an attractive arrangement. Most providers will be seen as efficient not because of low fees or short visits, but because they order fewer tests and their patients see fewer specialists. Currently, insurers passively benefit from the conservative practice styles of such physicians, but they have no way to identify and reward them with higher payments or more patients. With information, an insurer could selectively offer episode-based payment (Bundorf, Baker, and Royalty 2010) or fees for case management.
Insurers have tried to develop tiers of preferred providers, but without much success, because (a) they have data on only a small fraction of a physician's patients, (b) the data primarily reflect charges by those physicians rather than by all providers involved in the episode of care, and (c) the tiers are largely cost based, without considering quality differences (Trude and Conwell 2004). With insurers keeping their own data confidential, providers are skeptical of the selection criteria. Pooled data markedly improve statistical reliability and eliminate cases in which a provider appears efficient in one insurer's data but inefficient in another's. Open-source algorithms eliminate “black-box” assessments.
Even if insurers truly cared only about minimizing cost, making data available encourages a self-correcting system. Suppose some providers are excluded from an insurer's “Preferred Tier” based on cost. Those excluded may believe that they provide better than average care; some will be able to demonstrate their superior quality with the assistance of analysts examining the data for quality measures. The quality scores of providers in the “Preferred Tier” may not initially be public, but employers and consumer groups will pressure the insurer to disclose them, allowing public assessment of whether the Preferred Tier is really of high value (Joshi and McHugh 2010).
Quality measures should be outcome-focused rather than process-focused to encourage process innovations; process measures privilege status quo techniques. The critique of outcomes derived from administrative data is that death and readmissions are rare and often caused by factors other than poor quality. Death and readmissions, however, are inadequate measures of outcomes. By analogy, everyone wants to avoid an unsafe airline, but the rate of crashes is close to zero and not very helpful in choosing an airline. On-time arrival and lost baggage rates (controlling for airports used) and passenger assessments of the travel experience are more useful. Providers can locate their patients and can request supplemental patient-reported measures such as functional status, return to work, indicators of depression, and the patient's perception of the “service quality.”
Just as one should adjust an airline's on-time arrival rate for the airports it uses, outcome measures require careful risk adjustment. Administrative datasets are often criticized by physicians for not including clinical measures, but adding a few variables to hospital discharge abstracts approximates the accuracy offered by far more extensive (and expensive) data collection systems (Pine et al. 2007). Large numbers of hospitals now routinely collect substantial information beyond that required by public agencies.9 The “Lake Wobegon effect” can be leveraged here.10 All providers seem to believe that their performance is above average. If the administrative results do not support that self-perception, they can (potentially) demonstrate it by collecting and reporting the necessary clinical measures.
Data collection is never perfect, but greater data density (far easier with electronic feeds than with manual abstracting) allows internal cross-checking. Concerns about assessments can also drive improvement. Public outcome reporting led some surgeons to claim that they avoided high-risk patients out of fear of “ruining their scores” (Werner and Asch 2005). Instead, clinicians could flag, prior to treatment, patients for exclusion from all reported outcome measures. If the flagged patients collectively have higher death rates than predicted, the modelers would query the clinicians about what unmeasured factors raised their concerns and add those to the models.
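The proposed check on flagged patients amounts to comparing observed with model-predicted outcomes within the flagged group. A minimal sketch, assuming per-patient predicted probabilities are available and using a normal approximation to the sum of Bernoulli outcomes; the function name and the simulated numbers are illustrative:

```python
def audit_flagged_exclusions(flagged, z_cut=1.96):
    """Test whether clinician-flagged exclusions really were higher risk.

    `flagged` is a list of (predicted_death_probability, died) pairs for
    patients excluded before treatment. If observed deaths significantly
    exceed the model's expectation, the flags carry information the risk
    model lacks, and modelers should ask clinicians what unmeasured
    factors raised their concerns and add those to the model.
    """
    expected = sum(p for p, _ in flagged)
    observed = sum(1 for _, died in flagged if died)
    variance = sum(p * (1 - p) for p, _ in flagged)  # sum of Bernoulli variances
    z = (observed - expected) / variance ** 0.5
    return {"observed": observed, "expected": round(expected, 1), "z": round(z, 2)}

# Hypothetical: 40 flagged patients, each with ~10% predicted risk,
# of whom 12 died. A z-score well above 1.96 supports the clinicians'
# judgment that these patients were riskier than the model knew.
flagged = [(0.10, True)] * 12 + [(0.10, False)] * 28
print(audit_flagged_exclusions(flagged))
```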
This creates feedback loops plausible to providers. Nearly all clinicians want to improve the care they deliver, but few know when they are not doing as well as possible or have specific ideas on how to improve. Within a provider system, sharing data and methods is easy; the challenge is to encourage improvement among clinicians who do not practice together. Outcome measures alone give providers no information on how to improve. With new data access, analysts can detect providers with apparently excellent outcomes (the positive deviants), who can then be contacted via the consolidator. To vet these “outliers,” information will be gathered on the processes they used to achieve such excellent outcomes. In exchange for a public “gold star,” their processes will be shared with others for quality improvement efforts. The pooled data then automatically allow monitoring to see whether those processes work as well in different settings.
There is an even stronger business case with respect to efficiency improvements. With episode or bundled payments, providers more efficient at delivering care with comparable or superior outcomes increase their net revenue. Analysts will seek to learn the processes that work best and advise others on how to best implement them. Physicians may hire such analysts as consultants, perhaps with support from health plans seeking to improve the efficiency of physicians on their panels.
Yet another opportunity from harnessing CI and positive deviance is reform of the malpractice system. The current system is extraordinarily expensive, especially relative to the compensation eventually provided to patients (Studdert et al. 2006). Worse, it yields few lessons for quality improvement. Policy analysts have proposed compensation for potentially avoidable events (PAEs) through a fast, noncontentious administrative process (Barringer et al. 2008). We should go further, making the documentation from those proceedings available in a confidential fashion to other attorneys. They would use this information to identify patterns of PAEs resulting in compensation, but for which there is no apparent effort by the responsible organization to improve processes. A second lawsuit could then allege “corporate negligence” rather than individual malpractice.
Hospitals will self-insure for the relatively low-cost payouts for PAEs, but most will buy liability coverage (or reinsure) against the much larger and less frequent corporate negligence suits. The primary defense against such suits will be demonstrating good faith steps to prevent the problems from reoccurring, even if those steps were unsuccessful. Liability insurers will prefer to write coverage for (and offer lower premiums to) hospitals undertaking such quality improvement efforts. Self-insuring hospitals will directly lower their own PAE costs through such efforts. Malpractice carriers (or consultants to self-insured hospitals) will want to learn how to spread best practices, learning from positive deviants.
SUMMARY AND CONCLUSIONS
The current delivery system segments and sequesters data within disconnected entities, offering inadequate incentives for data sharing and information use. Collecting and linking data used in patient care and payment, improving data quality, and overcoming issues of confidentiality, ownership, and access are nontrivial challenges. The “ways” to do so are within our grasp; the issue is whether there is the “will.” Plausible changes to payment incentives can create the demand for providers to derive from newly consolidated and available data the information they need to improve both patient quality and their own net incomes.
CI suggests that much can be learned through careful observation of patterns in data. Because it is impossible to define in advance all the relevant patterns, open access maximizes the utility of these data. Rewarding positive deviants, rather than singling out those to blame, encourages self-disclosure by providers. Instead of heavy-handed regulation, we need legislation and leadership by federal payers to facilitate data consolidation and secure sharing. Better and more accessible data can then be a key component of delivery system reform, not just a way of measuring it.
Acknowledgments
Joint Acknowledgment/Disclosure Statement: None.
NOTES
1. This is a nontrivial assumption; the technical challenges are many and are not addressed here. I am not envisioning a comprehensive, interoperable data system linking all patients and providers. Providers may have their own systems with varying degrees of sophistication.
2. See, for example, http://en.wikipedia.org/wiki/Federated_database_system (accessed May 17, 2010).
3. See http://www.californiahealthline.org/Articles/2009/1/5/New-Laws-Governing-Health-Care-Issues-Take-Effect-in-California.aspx (accessed April 30, 2009). Legal sanctions will not prevent bad behavior by a rogue individual, but they will cause organizations to tighten security. All the data access discussed below would be under data use agreements with significant organizational risk for inappropriate use of data. The more patients included in the database, moreover, the greater the protection afforded each. It is easier to reidentify fully deidentified data from a single hospital than a limited data set drawn from hundreds of hospitals. The sheer scale of the latter, moreover, makes it easier to control access to the files.
4. Legislation would not require that private entities divulge business-critical information to their competitors, but rather that certain data already used in payment and subsidized with federal funds be shared in a manner that nonetheless protects certain degrees of organizational confidentiality. For example, California requires hospitals to submit organizationally identifiable (and patient-linkable, via Social Security number) data. It releases less sensitive versions of those data in public use files and provides files with additional variables and identifiers for research under data use agreements.
5. See, for example, the MIT Center for Collective Intelligence (http://cc.mit.edu) and the discussion it is hosting at http://www.socialtext.net/mit-cci-hci/index.cgi?what_is_collective_intelligence (accessed May 9, 2009).
6. The term analyst here includes not only researchers driven to publish their findings but also those with similar skills seeking to extract information useful to providers and others for business and clinical reasons. They may work directly in such settings or as consultants.
7. The firm is called Availity. See “Payer-owned electronic network grows” at http://www.ama-assn.org/amednews/2009/04/20/bibf0420.htm (accessed May 9, 2009).
8. The data are most useful if they are comprehensive, including information from all payers. If Medicare and Medicaid were to require use of the consolidator for claims submission, the vast majority of providers would acquire the capability to access the consolidator. Private insurers will have lower claims processing costs with data submitted through the consolidator, so this process will spread rapidly.
9. See http://www.calhospitalcompare.org/About-Us.aspx (accessed May 9, 2009).
10. See http://en.wikipedia.org/wiki/Lake_Wobegon_effect (accessed May 9, 2009).
REFERENCES
- Barringer PJ, Studdert DM, Kachalia AB, Mello MM. Administrative Compensation of Medical Injuries: A Hardy Perennial Blooms Again. Journal of Health Politics, Policy and Law. 2008;33:725–60. doi: 10.1215/03616878-2008-014.
- Bradley, Penberthy, Devers 2010. (Summit paper)
- Bundorf, Baker, Royalty 2010. (Summit paper)
- Center for Collective Intelligence, Massachusetts Institute of Technology. Malone TW. “What Is Collective Intelligence and What Will We Do about It?” 2006 [accessed July 8, 2010]. Available at http://cc.mit.edu.
- Glasgow RE, Magid D, Beck A, Ritzwoller D, Estabrooks P. Practical Clinical Trials for Translating Research to Practice: Design and Measurement Recommendations. Medical Care. 2005;43:551–7. doi: 10.1097/01.mlr.0000163645.41407.09.
- Hofer TP, Hayward RA, Greenfield S, Wagner EH, Kaplan SH, Manning WG. The Unreliability of Individual Physician “Report Cards” for Assessing the Costs and Quality of Care of a Chronic Disease. Journal of the American Medical Association. 1999;281(22):2098–105. doi: 10.1001/jama.281.22.2098.
- Joshi, McHugh 2010. (Summit paper)
- Lane, Schnur 2010. (Summit paper)
- Luft HS. Total Cure: The Antidote to the Health Care Crisis. Cambridge, MA: Harvard University Press; 2008.
- Luft HS, Eaton LJ. Physician Cost Profiling: Digging Deeper for Consistency. New England Journal of Medicine. 2010 (in press). doi: 10.1056/NEJMc1004716.
- Luft HS, Hunt SS. Evaluating Individual Hospital Quality through Outcome Statistics. Journal of the American Medical Association. 1986;255(20):2780–4.
- Marsh DR, Schroeder DG, Dearden KA, Sternin J, Sternin M. The Power of Positive Deviance. British Medical Journal. 2004;329:1177–9. doi: 10.1136/bmj.329.7475.1177.
- Marshall MN, Shekelle PG, Leatherman S, Brook RH. The Public Release of Performance Data: What Do We Expect to Gain? A Review of the Evidence. Journal of the American Medical Association. 2000;283:1866–74. doi: 10.1001/jama.283.14.1866.
- Meier B. “New Effort Reopens a Medical Minefield.” New York Times. May 7, 2009 [accessed May 9, 2009]. Available at http://www.nytimes.com/2009/05/07/business/07compare.html?_r=1&scp=3&sq=comparative%20effectiveness&st=cse.
- Pham HH, Schrag D, O'Malley AS, Wu B, Bach PB. Care Patterns in Medicare and Their Implications for Pay for Performance. New England Journal of Medicine. 2007;356:1130–9. doi: 10.1056/NEJMsa063979.
- Pine M, Jordan HS, Elixhauser A, Fry DE, Hoaglin DC, Jones B, Meimbam R, Warner D, Gonzales J. Enhancement of Claims Data to Improve Risk Adjustment of Hospital Mortality. Journal of the American Medical Association. 2007;297(1):71–6. doi: 10.1001/jama.297.1.71.
- Sack K. “Hospitals Reap Benefits of ‘Positive Deviance.’” New York Times blog. 2009 [accessed July 7, 2010]. Available at http://thelede.blogs.nytimes.com/2009/03/26/hospitals-may-benefit-from-listening-to-orderlies/.
- Shahian DM, Normand S-LT. Comparison of “Risk-Adjusted” Hospital Outcomes. Circulation. 2008;117:1955–63. doi: 10.1161/CIRCULATIONAHA.107.747873.
- Studdert DM, Mello MM, Gawande AA, Gandhi TK, Kachalia A, Yoon C, Puopolo AL, Brennan TA. Claims, Errors, and Compensation Payments in Medical Malpractice Litigation. New England Journal of Medicine. 2006;354:2024–33. doi: 10.1056/NEJMsa054479.
- Trude S, Conwell LJ. “Rhetoric vs. Reality: Employer Views on Consumer-Driven Health Care.” Center for Studying Health System Change Issue Brief No. 86. 2004 [accessed July 8, 2010]. Available at http://www.hschange.org/CONTENT/692/?PRINT=1#ib7.
- Tunis SR, Stryer DB, Clancy CM. Increasing the Value of Clinical Research for Decision Making in Clinical and Health Policy. Journal of the American Medical Association. 2003;290:1624–32. doi: 10.1001/jama.290.12.1624.
- Werner RM, Asch DA. The Unintended Consequences of Publicly Reporting Quality Information. Journal of the American Medical Association. 2005;293:1239–44. doi: 10.1001/jama.293.10.1239.
- Wilson K, Brownstein JS. Early Detection of Disease Outbreaks Using the Internet. Canadian Medical Association Journal. 2009;180:829–31. doi: 10.1503/cmaj.090215.
