Abstract
Introduction:
The use of information from clinical registries for improvement and value-based payment is increasing, yet information about registry use is not widely available. We conducted a landscape survey to understand registry uses, focus areas and challenges. The survey addressed the structure and organization of registry programs, as well as their purpose and scope.
Setting:
The survey was conducted by the National Quality Registry Network (NQRN), a community of organizations interested in registries. NQRN is a program of the PCPI, a national convener of medical specialty and professional societies and associations, which constitute a majority of registry stewards in the United States.
Methods:
We surveyed 152 societies and associations, asking about registry programs, governance, number of registries, purpose and data uses, data collection, expenses, funding and interoperability.
Results:
The response rate was 52 percent. Many registries were self-funded, with 39 percent spending less than $1 million per year, and 32 percent spending $1-9.9 million. The typical registry had three full-time equivalent staff. Registries were frequently used for quality improvement, benchmarking and clinical decision support. 85 percent captured outpatient data. Most registries collected demographics, treatments, practitioner information and comorbidities; 53 percent captured patient-reported outcomes. 88 percent used manual data entry and 18 percent linked to external secondary data sources. Cost, interoperability and vendor management were barriers to continued registry development.
Conclusions:
Registries captured data across a broad scope, audited data quality using multiple techniques, and used a mix of automated and manual data capture methods. Registry interoperability was still a challenge, even among registries using nationally accepted data standards.
Introduction
Clinical registries are organized systems that use observational study methods to collect uniform data to evaluate specified outcomes for a population defined by a particular disease, condition, or exposure, and that serve one or more scientific, clinical, or policy purposes. Registry purposes include quality improvement, clinical research and informing value-based payment models. Registry data are generally used for activities that address the purpose(s) for which the registry was created, i.e., treatment, payment, quality improvement, benchmarking, and clinical research [1,2,3,4].
There is an increasing need to measure and improve the performance of the health care delivery system in the United States in order to provide higher quality care at the same or lower cost. A national system of clinical performance measures is in place, with thousands of measures available, many hundreds of which are approved for use in value-based payment programs such as the Centers for Medicare and Medicaid Services (CMS) Quality Payment Program (QPP). Because of their ability to capture structured, validated data on real-world patient populations across organizational and geographic boundaries over varying periods of time, clinical registries have become an important platform for performance measurement and improvement [5].
Despite their increasing importance, information about the use of registry data in the United States is not widely available. The National Quality Registry Network (NQRN) is a community of organizations interested in clinical registries. NQRN is a program of the PCPI, a national convener of medical specialty and health care professional societies and associations in the United States. In 2015, NQRN conducted a clinical registry landscape survey to understand registry data uses, challenges and progress in the field. The survey addressed the structure and organization of registry programs, study populations, purposes and data uses, data collection, data standards and interoperability [6,7,8].
The survey design was informed by the NQRN Clinical Registry Maturational Framework (Framework). The Framework lays out a set of domains suitable for assessing registry capability. The domains include a function domain, which outlines the functionality designed into the registry in support of its purpose(s), as well as other domains that describe the registry capabilities that support this functionality. The other domains include data collection scope, data capture and transmission, standardization and quality control, performance measurement, reporting, and participant support. The Framework is a roadmap for achieving the highest level of value from a registry. It is intended to be used as a guide to assessing the underlying capabilities and infrastructure necessary to achieve that value [9].
Methods
Survey Theoretical Framework and Rationale
The survey was designed to examine registry characteristics across the Framework domains. Questions and answer choices were developed using content from the Framework that described the expected characteristics of a registry in each domain. Overall, our questions covered basic registry program information, purpose, governance, data collection and use, scale and scope, and program activities.
We began our survey by asking respondents if their organizations operate one or more clinical registries. We asked about registry governance and funding. We continued with questions about data collection scope, type and methods, performance measurement, feedback and quality improvement. We asked about the types of information captured in order to identify clinical areas appropriate for the development of voluntary consensus data standards needed to improve interoperability.
Next, we asked about registry purposes and data uses, defining registry purposes as broad activities directly related to observing progress toward overall health care goals, facilitated by information from the registry. We defined registry uses as more granular and specific activities that support the purposes for which the registry was designed. Of the various types of information captured in registries, patient-reported and -sourced information is increasing in importance due to the focus on measuring health outcomes and linking them to improved processes of care. We asked respondents if they were capturing, or planned to capture, information directly from patients [10].
We asked registries to report on the methods they used to collect data. Given the need to reduce data entry burden [11,12], we wanted to understand the methods by which registries extract data from electronic health records (EHRs), given that EHR data vary in content and format [13,14,15]. Because of these variations, registry data are often captured by trained abstractors using a manual process [16,17,18]. Since manual abstraction is time-consuming and expensive, we sought additional information on the use of abstraction as a data entry method [19].
In order for registry data to be suitable for their intended uses in measuring quality for value-based payment programs and for other uses, the data must be of high quality. Registries ensure quality through the use of trained abstractors and by conducting data audits [12,14,17,18,20]. We asked respondents how they audit their data. Additionally, endorsement or certification of registries by external bodies has the potential to further improve and standardize registry data quality expectations. We asked registry stewards to rate the importance to their organizations of external endorsement of their registry, quality measures, or quality improvement programs.
Operating registries and connecting them with source data systems requires a significant investment in technology. We asked respondents to report on the associated costs (one-time and monthly) that vendors are charging to connect their products to a registry. Given the need in QPP alternative payment models to capture data across clinical specialty boundaries, we asked respondents if they linked their registry with other registries or data sources [21].
Finally, we asked questions designed to help us understand the registry business model, including expenses and sources of funding. We asked respondents to tell us their views on the greatest barriers to developing and sustaining their programs in the long term.
Survey Instrument Validation
We developed the survey instrument with the assistance of local experts in survey design, and we conducted a face validity expert panel test on the survey prior to execution. The expert panel test consisted of a validation questionnaire asking how well each survey question asked what it was intended to ask. Feedback from this process was used to finalize the survey text. Among the experts who helped us with survey design and validation were clinical researchers as well as market research and analytics professionals at a large national nonprofit in health care.
Distribution List Development and Survey Execution
Our survey distribution list was initiated from a list of 2015 PCPI member societies and associations. Organizations were added to the list through snowball sampling, other organizational memberships, and from lists of attendees of PCPI and NQRN national meetings. A hand comparison of these lists revealed no significant differences. Although there are other organizations operating registries, we assumed that our list was representative of the population of medical specialty and health care professional societies and associations in the United States, and that this population was representative of the national clinical registry steward community of interest.
To ensure that recruitment and information sharing was conducted fairly for all organizations, we consulted with the University of Illinois Institutional Review Board, which exempted this study from federal regulations for the protection of human subjects.
We sent the survey to 152 organizations. In addition to the initial email communication to organizations on the distribution list, PCPI staff conducted multiple rounds of email and phone follow up to these organizations over a period of approximately four weeks.
Results
Survey Response Rate and Demographics
73 organizations responded to the survey, for a 48 percent overall response rate. Among the respondents, 38 (52 percent) operated registry programs.
Participants reported on their registry program business models. 61 percent (23/38) said that their programs are self-funded, e.g., through society dues or other revenue streams not including participation fees. 53 percent (20/38) charged a registry participation fee. 45 percent (17/38) reported that at least some of their registry program funding came from private or federal grants, with most of that funding provided through the private sector. 37 percent (14/38) of respondents reported obtaining at least some revenue from fees charged for data use, analysis or custom reporting. A small number of respondents indicated that their registries had received at least some funding for research projects. Registry program annual operating budgets, including staff salary and benefits as well as other expenses, were spread out, with 39 percent (15/38) reporting an annual budget of less than $1 million for their registry program. 32 percent (12/38) reported that they spent between $1-9.9 million, and two respondents reported budgets of $10 million or more. Nine respondents answered the question but did not indicate a budget range.
A significant component of the cost of operating a registry was staff resources. Many respondents reported retaining in-house staff to run their registry. 37 organizations answered the question, with 32 reporting their number of full-time equivalent staff (FTEs) and five indicating that they did not know that information. The mean number of FTEs was seven. Respondents leveraged their registry program investment across multiple specific registries, with 47 percent (17/36) of registry programs operating more than one registry.
Registry Data Collection and Scope
Registries were developed for a variety of purposes, and registry data supported multiple uses. The results from questions about registry purposes and data uses are listed in Table 1.
Table 1. Registry purposes and data uses.
PURPOSE | RESPONSE |
---|---|
Quality improvement | 94% (88/94) |
Benchmarking | 86% (81/94) |
Clinical effectiveness | 59% (55/94) |
Safety or harm | 44% (41/94) |
Comparative effectiveness research | 37% (35/94) |
Cost effectiveness | 24% (23/94) |
Device surveillance | 18% (17/94) |
Population surveillance | 17% (16/94) |
Public health surveillance | 4% (4/94) |
Other | 3% (3/94) |

USE | RESPONSE |
---|---|
Clinical decision support development | 61% (57/94) |
Education development | 54% (51/94) |
Measure development | 53% (50/94) |
Qualified Clinical Data Registry (QCDR) | 39% (37/94) |
Guideline development | 35% (33/94) |
Certification | 29% (27/94) |
Public reporting | 26% (24/94) |
Payment | 17% (16/94) |
Population management | 15% (14/94) |
Other | 5% (5/94) |
Licensure | 1% (1/94) |
Registry representativeness varied, with registries tending to capture either a large or a small percentage of eligible clinicians. 34 organizations answered the question, with 44 percent (15/34) of respondents reporting that their registry captured between 1-25 percent of eligible participants. 21 percent (7/34) captured 26-50 percent, 6 percent (2/34) captured 51-75 percent and 29 percent (10/34) captured between 76-100 percent. Registries captured data from a variety of settings. 85 percent (29/34) captured data from physician offices, i.e., group or solo practices. 68 percent (23/34) captured inpatient data. 26 percent (9/34) of respondents collected data from ambulatory surgery centers, and a small number of respondents captured data from nursing home and home health providers. 26 percent (9/34) of respondents captured data directly from patients. A number of registries collected data from other settings, e.g., dialysis facilities, birth centers, and home births.
Next, we asked respondents about the information their registries capture. These results are summarized in Table 2.
Table 2. Types of information captured by registries, currently and planned.
TYPE | CURRENTLY CAPTURING | PLANNING TO CAPTURE |
---|---|---|
Patient demographics | 91% (31/34) | |
Treatments | 88% (30/34) | |
Individual practitioner information | 74% (25/34) | |
Co-morbidities | 68% (23/34) | |
Adverse events | 59% (20/34) | |
Organization demographics | 53% (18/34) | |
Patient-reported outcomes | 53% (18/34) | |
Laboratory results | 44% (15/34) | |
Quality of life | 41% (14/34) | |
Test imaging results | 41% (14/34) | |
Pharmaceutical | 41% (14/34) | |
Functional status | 41% (14/34) | 24% (8/34) |
Patient experience | 35% (12/34) | 32% (11/34) |
Vital signs | 32% (11/34) | |
Patient satisfaction | 29% (10/34) | 38% (13/34) |
Device information | 24% (8/34) | |
Genetic information | 18% (6/34) | |
Patient understanding of self-care | 15% (5/34) | 26% (9/34) |
Frailty | 15% (5/34) |
Other | 12% (4/34) | |
Other patient-reported | 12% (4/34) | 24% (8/34) |
Cost | 9% (3/34) | |
Patient engagement | 9% (3/34) | 38% (13/34) |
Patient activation | 6% (2/34) | 24% (8/34) |
Patient-sourced direct from medical devices | 3% (1/34) | 24% (8/34) |
Performance Measurement, Feedback and Quality Improvement
Clinical registries are used as platforms for performance measurement on a national level. Respondents reported collecting data for and executing a variety of performance measure types, listed in Table 3.
Table 3. Performance measure types collected or planned.
MEASURE TYPES | USING TODAY | PLANNING TO USE |
---|---|---|
Process | 86% (30/35) | 6% (3/35) |
Outcome | 74% (26/35) | 20% (7/35) |
Safety | 62% (21/34) | 26% (9/34) |
Structure | 46% (16/35) | 9% (3/35) |
Patient-reported outcome | 47% (16/34) | 29% (10/34) |
Utilization | 41% (14/34) | 38% (13/34) |
Other | 12% (4/33) | 9% (3/33) |
Cost | 6% (2/34) | 53% (18/34) |
Personalized medicine | 6% (2/33) | 15% (5/33) |
In addition to implementing measures developed by other organizations, some respondents were developing their own performance measures. 61 percent (20/33) were engaged in the development and testing of performance measures. 33 percent (11/33) were implementing measures developed by other organizations. 21 percent (7/33) planned to begin measure development 1-3 years in the future, and 24 percent (8/33) were not performing any measure development activities.
91 percent (32/35) of respondents reported that their registry program provides reports back to the organizations or individuals from whom the registry receives data. Of those, 50 percent (16/32) provided feedback in real time. 53 percent (17/32) provided feedback at less than a real-time pace, but still on a quarterly basis or faster. 16 percent (5/32) provided feedback, but at less than a quarterly pace. Of those who reported less than quarterly, some said they provide feedback a few times each year.
Qualified Clinical Data Registries (QCDRs) are registries that successfully self-nominate to CMS and are approved as submission mechanisms for participation in the QPP. Participants of registries that have achieved QCDR status can participate in the QPP through those registries [22,23]. 40 percent (14/35) of respondents reported that at least one of their registries had become a QCDR. 40 percent (14/35) were either planning to become or evaluating whether to become a QCDR, and 20 percent (7/35) were not considering QCDR status for any of their registries.
46 percent (12/26) of respondents felt that QCDR status had a positive impact on their registry business model. Among the positives, respondents offered that QCDR status provides a society or association with enough performance measures to report to CMS, and that it justifies charging participation fees by providing a tangible use case for members to participate in the registry. One respondent saw QCDR status as an incentive to develop new measures that are more meaningful to their members’ clinical practice. Another felt that QCDR status may drive the development of registry services for a broader market outside of the responding organization’s membership. Several commented that QCDR status made it easier for their member clinicians to participate in CMS payment programs and avoid negative payment adjustments. Other respondents expressed appreciation for the help that QCDR status provides registry participants in meeting their regulatory requirements, but also reported burdens in keeping up with changing annual program requirements and associated increased development cost.
Data Quality
95 percent (36/38) of respondent organizations with registries reported that they audit their data. A small number responded that their registries were new and that their audit methodology had not been developed yet, or that they did not know the method used. Respondents used a variety of methods, with 56 percent (20/36) using an automated audit process, 50 percent (18/36) performing remote or third-party audits, 42 percent (15/36) conducting comparisons with source data and 22 percent (8/36) performing on-site audits.
Respondents gave mixed feedback on their perception of the value of external endorsement, with greater enthusiasm for external endorsement of performance measures than of registries. Table 4 describes respondents’ views on external endorsement.
Table 4. Importance of external endorsement to respondent organizations.
EXTERNAL ENDORSEMENT OF | RESPONSE |
---|---|
Registry program | 50% (18/36) rated this less important, 39% (14/36) rated it more important, and a few respondents said it does not apply to them. |
Performance measures | 31% (11/36) rated this less important, 56% (20/36) rated it more important, and a few said it does not apply. |
Quality improvement program | 44% (16/36) rated this less important, 33% (12/36) rated it more important, and a greater number (eight) said it does not apply. |
Registry Technology and Interoperability
88 percent (30/34) of registry programs used manual entry to capture at least some of their data. 68 percent (23/34) extracted data from EHRs, and 35 percent (12/34) captured data from other electronic data sources. 53 percent (18/34) of respondents reported that their registry used a nationally accepted standard format for its data, e.g., the SNOMED CT standard. These respondents used a variety of data standards. Several respondents indicated that their organizations were engaged in data standards development projects. Standards reported as free text in the “Other” category included RxNorm, the Health Level Seven International (HL7) messaging standard, and RadLex. Table 5 lists the standards used.
Table 5. Data standards used by registries.
STANDARD | USING |
---|---|
International Classification of Diseases (ICD) version 9, 10 | 44% (15/34) |
Current Procedural Terminology (CPT) | 29% (10/34) |
Healthcare Common Procedure Coding System (HCPCS) | 24% (8/34) |
XML | 24% (8/34) |
SNOMED CT | 21% (7/34) |
Logical Observation Identifiers Names and Codes (LOINC) | 18% (6/34) |
HL7 Quality Reporting Document Architecture Category III (QRDA III) | 18% (6/34) |
Other | 12% (4/34) |
National Drug Codes (NDC) | 6% (2/34) |
Unique Device Identification (UDI) | 3% (1/34) |
HL7 Fast Healthcare Interoperability Resources (FHIR) | 3% (1/34) |
18 percent (6/34) of respondents reported that their registries were linked to an external data source. External data sources included cancer registries, CMS and health plan claims data sources, and other registries.
Registry Program Business Model
30 percent (6/20) of respondents indicated that their registry charged participation fees per individual clinician. Of these, three charged between $1 – 249, two between $250 – 499 and one between $500 – 999. Of the 60 percent (12/20) that charged for participation on an organizational basis, two charged between $1 – 249, two charged between $1,000 – 2,499, three between $2,500 – 4,999, four between $5,000 – 7,499. One respondent charged over $10,000. A few registries reported that registry participation was included in membership dues.
Respondents reported using a variety of vendors to capture their registry data. 58 percent (18/31) reported one-time vendor fees between $0 – 2,499. 26 percent (8/31) reported one-time fees between $2,500 – 24,999, and 19 percent (6/31) reported one-time fees of $30,000 or more. 58 percent (18/31) reported not being charged any monthly vendor connection fees. The 42 percent (13/31) that reported paying monthly fees were roughly evenly split between paying more or less than $500 per month.
Barriers to Long-Term Sustainability
33 respondents listed barriers to the long-term sustainability of their registry programs; of these, 64 percent (21/33) listed cost, and 39 percent (13/33) commented on interoperability and vendor issues they faced. 18 percent (6/33) mentioned issues with participant engagement and the need for culture change at the practice level in order for clinicians to gain value from their registry participation. Other barriers included legal issues, challenges related to collecting patient-reported outcomes, and articulating the value proposition of registry participation beyond meeting regulatory requirements.
Respondents provided closing comments that communicated their current and ongoing registry development activities, their overall enthusiasm for registry programs as critical assets for their organization, and calls for increased alignment of data collection priorities across registry steward organizations.
Discussion
Where We Are as a Registry Community in the United States
Registries specialize in collecting specific data elements of importance to the clinical domains in which they focus. However, certain types of data, e.g., demographics, are collected in most registries. Other data, such as cardiology clinical data, are likely collected not only in cardiology-specific registries but also in other registries capturing data on patients undergoing treatment in related clinical domains in which cardiovascular disease is a factor. For example, cardiology data elements may exist in registries focused on surgery, endocrinology and other domains. Despite the commonality of data, at the time of the survey most registries were not capturing these data elements in a common format. There is currently no standard that defines how common data elements should be defined and captured in registries overall. As the survey results show, some registries were using data standards, but adoption of those standards was not universal. Even among registries that used standards, the standards did not harmonize data at the level of granularity required to achieve semantic interoperability. Implementing data standards at the vocabulary, terminology and clinical concept level is needed to support semantic interoperability, which is the ability to preserve meaning across a data transfer [24].
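To illustrate what concept-level harmonization involves, consider a minimal sketch (not drawn from the survey data) of two hypothetical registries that both record hypertension, one with a local free-text abbreviation and one with a legacy local code. The registry names, local codes, and crosswalk below are illustrative assumptions; only the SNOMED CT concept code is a real standard terminology code.

```python
# Minimal sketch of terminology-level harmonization between two hypothetical
# registries. Registry identifiers and local codes are illustrative assumptions.

# Hypothetical crosswalk from each registry's local representation to SNOMED CT.
CROSSWALK = {
    ("registry_a", "HTN"): {"system": "SNOMED CT", "code": "38341003",
                            "display": "Hypertensive disorder"},
    ("registry_b", "401"): {"system": "SNOMED CT", "code": "38341003",
                            "display": "Hypertensive disorder"},
}

def to_standard_concept(registry_id: str, local_code: str) -> dict:
    """Translate a registry-local diagnosis code into a standard concept."""
    concept = CROSSWALK.get((registry_id, local_code))
    if concept is None:
        raise ValueError(f"No mapping for {local_code!r} in {registry_id}")
    return concept

# Both local records resolve to the same standard concept, so they can be
# compared or aggregated across registries without losing meaning.
a = to_standard_concept("registry_a", "HTN")
b = to_standard_concept("registry_b", "401")
assert a["code"] == b["code"] == "38341003"
```

Without a shared concept-level mapping of this kind, records that describe the same condition cannot reliably be combined across registries, which is the gap the survey results highlight.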
One driver of demand for registry interoperability is the increasing use of registries in federal payment programs. In 2014, CMS initiated the QCDR program described earlier [25]. From 2014-16, QCDRs were approved to collect both Physician Quality Reporting System (PQRS) measures and “non-PQRS” measures, i.e., measures developed by the QCDR steward organization that met certain requirements spelled out in the QCDR program requirements [26,27]. In 2017 there were over 60 QCDRs, which together made a wide selection of performance measures available to clinicians for feedback on their performance and for participation in the QPP [28].
The Medicare Access and CHIP Reauthorization Act of 2015 (MACRA), implemented through the MACRA Final Rule in 2016, began to replace PQRS and other reporting programs with the QPP [27,29]. The QPP includes two paths: the Merit-Based Incentive Payment System (MIPS) and Advanced Alternative Payment Models (APMs). MIPS consists of four components: Quality, Cost, Advancing Care Information and Improvement Activities. With the launch of the QPP in the 2017 performance year, QCDRs made available to participating clinicians both MIPS and QCDR measures, with QCDR measures developed by the QCDR steward to standards spelled out in the 2017 QCDR requirements. QCDR participation can help clinicians meet QPP requirements. It is anticipated that an increasing number of APMs may use measures captured and reported by QCDRs in the future.
Registry Accomplishments
Registries continued to be used for a variety of purposes and were capturing and measuring the kinds of data and information needed to power a value-based, learning health care delivery system. Given the national emphasis on measuring patient health outcomes, registries responded by developing measures and data collection methods that make implementing these measures feasible [14,30,31]. Registries provide timely, specific and actionable feedback to participating clinicians at varying levels of attribution, enabling them to understand their performance relative to their peers, both locally and nationally. Such feedback reports are not practical to produce from EHR data alone, and this has historically been a driver of registry development [13].
As registries expanded the scope of their data collection, they continued their focus on high levels of data quality, through internal and external audit processes as well as increased use of automated extraction of data from source data systems. In this way registries worked to reduce inter-rater reliability issues as well as the extra costs that result from manual chart abstraction.
Registries have implemented nationally accepted data standards, but not to a level that supports semantic interoperability between registries and their source data systems, or that supports linking with other registries or accessing external data such as publicly available reference data sets from federal government agencies. We found that in 2015 it was still a challenge for registries to link with other data systems, and that such linking was mostly not done, despite increasing demand for cross-cutting performance measures that require data spanning multiple registries.
Limitations
This survey has important limitations. First, although we tried to be comprehensive in selecting our survey population, were rigorous in our outreach and follow-up, and achieved a high response rate, the survey responses were still a convenience sample. Thus, there may have been differences between respondents and nonrespondents that affected the results. The survey population included United States organizations only, so registry organizations outside the United States did not have an opportunity to participate. Second, despite validation of the survey instrument, the answers to many of the survey questions required detailed knowledge of the organization’s registry program and were open to interpretation, so the meaning and depth of answers to the same question might not have been comparable across respondents.
Conclusion
Registry Use and Interoperability
With the increased emphasis on value-based payment models, demand for data from registries is increasing [32]. But due to the high cost, it is often not feasible to capture the data registries need from EHRs alone, nor to capture them efficiently through manual chart abstraction. Thus, improvement in semantic interoperability between registries and source data systems is needed. However, the work required to achieve this level of interoperability is considerable. Due to the lack of structure and standardization of EHR data, most registries still operated in a mixed data collection environment with continued dependence on manual data entry through clinical chart abstraction. Data that can be extracted automatically from EHRs, health system data warehouses and other sources can typically be captured in registries only through customized technical interfaces, which map source data into compatible formats and transmit them to the registry. Validation of automatically extracted data may present different challenges than validation of manually abstracted data, and further research in this area is recommended.
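As a sketch of what such a customized interface does, the following example renames local fields, normalizes dates and units, and checks for required elements before a record is transmitted to the registry. The EHR export fields, registry field names, and validation rules are hypothetical assumptions for illustration, not the schema of any actual registry or EHR product.

```python
# Hypothetical sketch of a registry import interface. The EHR export fields
# and the registry submission schema are illustrative assumptions, not a real API.
from datetime import datetime

REQUIRED_FIELDS = ("patient_id", "encounter_date", "systolic_bp_mmHg")

def map_ehr_row_to_registry(ehr_row: dict) -> dict:
    """Normalize one row of a hypothetical EHR export into a registry record."""
    record = {
        # Rename the local identifier to the registry's field name.
        "patient_id": ehr_row.get("mrn"),
        # Normalize the local date format (MM/DD/YYYY) to ISO 8601.
        "encounter_date": datetime.strptime(
            ehr_row["visit_dt"], "%m/%d/%Y").date().isoformat(),
        # Make the unit explicit in the field name and cast to an integer.
        "systolic_bp_mmHg": int(ehr_row["sbp"]),
    }
    # Completeness check before transmission; a real interface would also apply
    # terminology maps (as sketched earlier) and range/validity checks.
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    if missing:
        raise ValueError(f"Record rejected; missing required fields: {missing}")
    return record

# Example with a fabricated export row.
print(map_ehr_row_to_registry({"mrn": "000123", "visit_dt": "06/01/2015", "sbp": "142"}))
```

Each source system typically requires its own version of this mapping, which is part of why custom interfaces are costly and why broader adoption of shared standards would reduce the burden.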
Despite the aforementioned benefits for national performance measurement provided by registries, CMS must still harmonize the data from multiple QCDRs in order to measure the performance of clinicians across all specialties and rate them using MIPS scoring in the QPP. It is currently difficult to aggregate data across multiple QCDRs to come up with uniform data that are valid for computing MIPS scores for clinicians across a wide variety of specialties. These and other factors have created a burning platform for increased semantic interoperability between registries and their data sources [33].
Registry Business Model
Cost was a major factor in registry sustainability. Due to the expertise required to successfully operate a registry, staff expenses were a significant component of this cost. Increased levels of interoperability have the potential to reduce the cost of operating a registry, and thus of participating in registries, by reducing or eliminating the need for manual chart abstraction.
Registry participation has typically been voluntary, but many registries benefit from regulatory or financial incentives that drive participation. As registries continue to develop and strengthen their programs, they have opportunities to support their members and registry participants in ways that add increasing value. One way registries have responded is by supporting multiple individual registries on the same infrastructure; for example, individual registries sharing a single program structure can focus on different clinical specialties, collecting data elements of interest to each specialty. Sharing registry infrastructure may allow registry programs to more easily collect data elements that are common across the individual registries, e.g., demographics. Programmatic infrastructure designed to leverage registry data, such as training, education and quality improvement programs, can be similarly leveraged, increasing the registry steward’s return on investment.
Quality Improvement
Registries continued to serve as important platforms for performance measurement. Registry information informs the work of quality improvement programs and initiatives from the local to national level. A recommended area for further study is the relationship between the use of registry information and performance improvement [30,34,35,36].
References
- 1. Gliklich RE, Dreyer NA, Leavy MB, editors. Registries for Evaluating Patient Outcomes: A User’s Guide. 3rd ed. Rockville: Agency for Healthcare Research and Quality (US); 2014.
- 2. Levay C. Policies to foster quality improvement registries: lessons from the Swedish case. Journal of Internal Medicine. 2016; 279(2): p. 160–172.
- 3. Carroll JD. Transcatheter valve therapy registry is a model for medical device innovation and surveillance. Health Affairs. 2015; 34(2): p. 328–334.
- 4. Krucoff MW, et al. Bridging unmet medical device ecosystem needs with strategically coordinated registries networks. JAMA. 2015; 314(16): p. 1691–1692.
- 5. PCPI. [Online]. 2016 [cited 2017 March 11]. Available from: http://www.thepcpi.org/pcpi/media/documents/nqrn-what-is-clinical-registry.pdf.
- 6. PCPI. [Online]. [cited 2017 March 11]. Available from: http://www.thepcpi.org/.
- 7. PCPI. [Online]. [cited 2017]. Available from: http://www.thepcpi.org/programs-initiatives/national-quality-registry-network/.
- 8. PCPI. [Online]. [cited 2017 March 11]. Available from: http://www.thepcpi.org/about-pcpi/member-organizations/.
- 9. PCPI. [Online]. 2014 [cited 2017 July 27]. Available from: http://www.thepcpi.org/pcpi/media/documents/nqrn-maturational-framework-public.pdf.
- 10. Berwick DM. The triple aim: care, health, and cost. Health Affairs. 2008; 27(3): p. 759–769.
- 11. Devers K, et al. The feasibility of using electronic health records (EHRs) and other electronic health data for research on small populations. Washington: Urban Institute; 2013.
- 12. Kondziolka D, et al. Development, implementation, and use of a local and global clinical registry for neurosurgery. Big Data. 2015; 3(2): p. 80–89.
- 13. Roth CP, et al. The challenge of measuring quality of care from the electronic health record. American Journal of Medical Quality. 2009; 24(5): p. 385–394.
- 14. Bhatt DL, et al. ACC/AHA/STS statement on the future of registries and the performance measurement enterprise: a report of the American College of Cardiology/American Heart Association Task Force on Performance Measures and the Society of Thoracic Surgeons. Journal of the American College of Cardiology. 2015; 66(20): p. 2230–2245.
- 15. Windle JR, et al. 2016 ACC/ASE/ASNC/HRS/SCAI health policy statement on integrating the healthcare enterprise. Journal of the American College of Cardiology. 2016 September; 68(12): p. 1348–64.
- 16. STS National Database. [Online]. [cited 2017 August 1]. Available from: http://www.sts.org/national-database.
- 17. Xian Y, et al. Data quality in the American Heart Association Get With The Guidelines-Stroke (GWTG-Stroke): results from a national data validation audit. American Heart Journal. 2012 March; 163(3): p. 392–8.
- 18. Messenger JC, et al. The National Cardiovascular Data Registry (NCDR) data quality brief. Journal of the American College of Cardiology. 2012 October; 60(16): p. 1484–8.
- 19. Carrell DS, et al. Using natural language processing to improve efficiency of manual chart abstraction in research: the case of breast cancer recurrence. American Journal of Epidemiology. 2014 January; 179(6): p. 749–58.
- 20. Shahian DM, et al. The Society of Thoracic Surgeons National Database. Heart. 2013 January.
- 21. Burwell SM. Setting value-based payment goals—HHS efforts to improve US health care. N Engl J Med. 2015 March; 372(10): p. 897–9.
- 22. CMS. [Online]. [cited 2017 October]. Available from: https://qpp.cms.gov/docs/QPP_QCDR_Self-Nomination_Fact_Sheet.pdf.
- 23. Quality Payment Program. [Online]. [cited 2017 March 11]. Available from: https://qpp.cms.gov/.
- 24. Mead CN. Data interchange standards in healthcare IT—computable semantic interoperability: now possible but still difficult. Do we really need a better mousetrap? Journal of Healthcare Information Management. 2006 January; 20(1): p. 71.
- 25. Federal Register. [Online]. 2013 [cited 2017 March 11]. Available from: https://www.gpo.gov/fdsys/pkg/FR-2013-12-10/pdf/2013-28696.pdf.
- 26. Federal Register. [Online]. 2014 [cited 2017 March 11]. Available from: https://www.gpo.gov/fdsys/pkg/FR-2014-11-13/pdf/2014-26183.pdf.
- 27. Federal Register. [Online]. 2016 [cited 2017 March 11]. Available from: https://www.gpo.gov/fdsys/pkg/FR-2016-05-09/pdf/2016-10032.pdf.
- 28. Quality Payment Program. [Online]. 2017 [cited 2017 August 7]. Available from: https://qpp.cms.gov/docs/QPP_2017_CMS_Approved_QCDRs.pdf.
- 29. United States Congress. [Online]. 2015 [cited 2017 March 11]. Available from: https://www.congress.gov/bill/114th-congress/house-bill/2.
- 30. Berwick DM. Measuring surgical outcomes for improvement: was Codman wrong? JAMA. 2015; 313(5): p. 469–470.
- 31. Klaiman T, et al. Leveraging effective clinical registries to advance medical care quality and transparency. Population Health Management. 2014; 17(2): p. 127–133.
- 32. Nelson EC, et al. Patient focused registries can improve health, care, and science. British Medical Journal. 2016; 354(3319).
- 33. Adler-Milstein J, Lin SC, Jha AK. The number of health information exchange efforts is declining, leaving the viability of broad clinical data exchange uncertain. Health Affairs. 2016; 35(7): p. 1278–1285.
- 34. Etzioni DA, et al. Association of hospital participation in a surgical outcomes monitoring program with inpatient complications and mortality. JAMA. 2015; 313(5): p. 505–511.
- 35. Osborne NH, et al. Association of hospital participation in a quality reporting program with surgical outcomes and expenditures for Medicare beneficiaries. JAMA. 2015; 313(5): p. 496–504.
- 36. Richardson D. PCPI. [Online]. 2013 [cited 2017 March 11]. Available from: http://www.thepcpi.org/pcpi/media/documents/pcpi-112013-us-clinical-data-registries-faq.pdf.
- 37. American Hospital Association. [Online]. 2017 [cited 2017 March 11]. Available from: http://www.aha.org/research/rc/stat-studies/fast-facts.shtml.