AMIA Annual Symposium Proceedings. 2018 Dec 5;2018:847–856.

Phenotype Detection Registry System (PheDRS) - Implementation of a Generalizable Single Institution Clinical Registry Architecture

John D Osborne 1, Adarsh Khare 1, Donald M Dempsey 1, J Michael Wells 1, Matt Wyatt 1, Geoff Gordon 1, Wayne H Liang 1, James Cimino 1
PMCID: PMC6371346  PMID: 30815127

Abstract

Precision medicine requires that groups of patients matching clinical or genetic characteristics be identified in a clinical care setting and treated with the appropriate intervention. In the clinical setting, this process is often facilitated by a patient registry. While the software architecture of federated patient registries for research has been well characterized, local registries focused on clinical quality and care have received less attention. Many clinical registries appear to be one-off projects that lack generalizability and the ability to scale to multiple diseases. We evaluate the applicability of existing registry guidelines for registries designed for clinical intervention, propose a software architecture more practical for single-institution clinical registries and report the implementation of a generalizable clinical patient registry architecture at the University of Alabama at Birmingham (UAB).

Introduction

The two classes of databases most commonly used in health care are the patient-focused electronic health record (EHR) and population-focused registries1. The traditional EHR is designed to orchestrate information at the patient level and is used by physicians, nurses and other healthcare providers to coordinate patient care. A patient registry is a managed collection of patients and their clinical data, typically focused around a particular condition or target population2-5. They exist for many health care related purposes, and are also used outside the clinical setting for epidemiological studies, outcomes research, and cohort discovery. Patient registries often collect data from multiple participating healthcare organizations for population health management.

Patient registries are taking on increasing importance in health care with the rise of precision medicine, population-targeted interventions, value-based care and Bundled Payments for Care Improvement (BPCI) initiative models. Increasingly, they are also being used in a clinical setting as a tool to monitor health care delivery and to supplement the use of the EHR6,7. Clinical applications include adherence to standards of care, monitoring quality of care and monitoring the cost of delivery for a given condition. This subset of patient registries with such a clinical focus has been termed “clinical registries”8 or “clinical quality registries”9 by recent systematic reviews. A sharp distinction between clinical registries and the EHR is not always possible, since EHRs may incorporate some registry functions or population views10. Additionally, data from clinical registries is often sourced from the EHR, and the two have overlapping purposes. Nonetheless, EHRs and clinical registries operate very differently. While individual EHRs are primarily used by clinicians within a single health provider organization, population-focused clinical registries are used both inside and outside the health system, by clinicians and non-clinicians alike, for population health management11, often within a multi-institutional software architecture.

In the recent review by Hoque et al.9, clinical registries were primarily characterized by having a clinical intervention component that includes not only ongoing data collection (which is typical for any registry), but also a feedback mechanism on the performance of providers in a health system on a continuing basis. This feedback mechanism was loosely defined and could consist solely of institution-wide reports and quality measures rather than a patient-specific intervention recommendation. Even with this broad intervention criterion, the total number of publications they found between January 1980 and December 2016 describing clinical registries meeting their criteria was only 17. Thus, many registries are implemented locally at a single institution (an exclusion criterion for the systematic review) or are mainly used to generate publications to inform policy, rather than to directly facilitate patient care. By directly facilitate, we mean a registry-initiated intervention that acts at the point of care on the identified patient. Unfortunately, “feedback of data to participating clinical centers often lags well behind actual care, making data obsolete and less useful” for many clinical registries12.

Registry Architecture and Software

The lack of clinical registries providing non-aggregated patient-specific feedback may be related to inadequate technological development. There is a lack of registry-related publications that include sufficient software or technological details relevant to registry implementation13. There are registry software design guidelines14, open source registry software frameworks15,16, open source electronic data capture (EDC) relevant to registry implementation17,18 and some publications of registry implementations containing software architecture and infrastructure details19-22. However, these are of questionable value to implementers of clinical intervention-focused registry software because they are not designed to directly facilitate patient-specific clinical intervention.

For example, i2b216 is a popular software tool implemented at many academic medical centers that provides functionality comparable to a patient registry. However, i2b2’s two main use cases are to “find sets of patients that would be of interest for further research” and to provide a “deep dive” into the phenotype of the set of patients identified “in support of genomic, outcome, and environmental studies”16. Thus, while i2b2 could be adapted for clinical applications, its design was intended to support research, not clinical care. Moreover, when i2b2 data marts (custom subsets of the main i2b2 warehouse) are implemented as one-off or irregularly updated snapshots of batched de-identified data from the i2b2 core, the lack of real-time updates prohibits timely clinical feedback. Another registry with both architecture information and available open source software is the Rare Disease Registry Framework (RDRF)15,23. The code is available on GitHub (https://github.com/muccg/rdrf) and its infrastructure supports the creation of registries without additional software development. However, RDRF is focused on data collection, not clinical quality improvement (QI) or care, and does not perform patient tracking for clinical management or administrative purposes.

An exception to the lack of clinical focus in patient registries is “ImproveCareNow”20. It provides a “proof-of-concept architecture” designed to support so-called “triple use,” which includes “chronic care management, QI and research”. It implemented a patient registry24 for tracking inflammatory bowel disease (IBD) in children across 31 different institutions. It works directly with popular EHR vendors to create forms for data entry using a “single source”25 approach that allows a “collect once, use many”26 design, so that data can be collected directly from the EHR. Despite this tight integration with the EHR, the system architecture remains tied to a unidirectional flow of data from the EHR to the registry, leaving no mechanism for patient-level clinical intervention or decision support. It does, however, have the ability to generate automated monthly quality improvement measures via charts or a dashboard.

Outside the documentation on specific patient registry applications such as i2b2 or RDRF, the literature on patient registry implementation is not extensive. A 2013 publication by Bellgard et al.13 described 3 common myths of registry construction, but did not provide detailed implementation guidance. To our knowledge, the first published detailed source of information on patient registry software system guidelines is from Lindoerfer and Mansmann, who published a checklist of design principles in 201427. This review was then updated in 2017 to include the requirements engineering process28 and examples29. Their systematic review used relatively broad inclusion criteria that should have captured both research-focused and clinically focused registries; ultimately 64 publications were included in their quantitative synthesis. Their updated checklist29 included 12 topics with 72 descriptive items; the topics and their descriptions are shown in Table 1. This revised checklist is comprehensive, but it is unclear how extensive its adoption has been and whether it is applicable to clinically oriented patient registries.

Table 1.

CIPROS Checklist. Summarized from “CIPROS - A Checklist of Items to consider when choosing or developing a software system for patient registries.”29

Software Architecture: Multi-tier system architecture, platform independence, open source components.
Development: System developed using a design model, provides a framework for a new registry project, supports a questionnaire builder, and usability of the system is tested.
Interfaces and Interoperability: Web, mobile device and patient interface specifications; programming interface and interfaces with other systems.
Semantics and Standardization: CRF and data standards, ontology-based vocabularies and metadata, XML schema definition available for structured data exchange.
Internationality: Multilingual questionnaires.
Data Management, Data Quality and Usability: Pseudonymous patient identifier, CRF specifications, support for all data types, multiple choice entry with no pre-defined selection, plausibility flags, unplanned visit accommodations, software ergonomics.
Data Analysis: Query building, report generation, data downloading, graphical presentation and risk analysis.
Security Aspects: Authorization, role-based access, encrypted data transfer and storage, audit trail, secure server room and firewall, backup management.
Privacy: Data protection and double pseudonymization for biological/genomic data.
General Features: Costs, multi-client capacity, update mechanism, source documentation in PDF.
Organizational: Regulatory compliance, informed consent, data rights and protection.
Training: User manuals and training, online help and user feedback.

Clinical Registry Implementation

The problem of scarce applicable architecture and software information for clinical registries is compounded by the few, if any, software implementation details provided in existing clinical registry publications. This includes the 17 registries with clinical interventions covered by the Hoque systematic review9. In 8 of the 17 registries, the frequency and nature of reporting (the basis of clinical intervention) is not stated, and in 6 of the registries (partially overlapping the aforementioned 8) the target of the reporting is not specified. Only 2 registries report quickly enough (“real time”) to enable patient interventions: the Swedish Registry of Ulcer Treatment (RUT)30 and a stroke registry (Ethos) in the United States31. Unfortunately, neither publication contained or referenced any software-related information. Single-institution clinical registries excluded from the Hoque systematic review9 are no better at providing software or implementation details. A typical example is Liu et al.32, who created a clinical registry of ultrasound-guided regional anesthesia for ambulatory arthroscopic shoulder surgery to document efficacy and safety. However, the system for patient identification was not described (likely chart review, which limits the overall utility of a registry), nor was the architecture of the system elucidated, beyond mentioning the existence of a clinical registry database. This reflects the reality that the majority of registry publications appear in disease- or specialty-specific journals, where the clinical informatics component is typically not emphasized by either the authors or the reviewers.

Given the widespread use of clinical registries in healthcare and the dearth of information on clinical registry implementation, we evaluate and extend the CIPROS methodology to clinical registries based on our own experience at the University of Alabama at Birmingham (UAB). We describe the architecture and software implementation details of the Phenotype Detection Registry System (PheDRS), a clinical patient registry framework. We explain how such a framework can meet the need33 for a generalizable architecture that supports multiple diseases as well as both research and clinical needs.

PheDRS Registry Implementation

Our implementation of a clinical registry system (PheDRS) is based on our previous work, the Cancer Registry Control Panel34, substantially rewritten to support a wider range of workflows, including cancer clinical care coordination and the identification and tracking of patients hospitalized with chronic obstructive pulmonary disease (COPD) exacerbations as part of a BPCI initiative35, as shown in the architecture diagram in Figure 1.

Figure 1.

PheDRS Architecture. PheDRS implements a tiered web architecture. Data is pulled nightly from the EDW database server via ETL scripts into the PheDRS database server which is isolated from the EDW to allow for more consistent performance and testing. The MEDICS schema hosts documents from both the EDW and other document database servers. On-demand Natural Language Processing (NLP) services are provided via a REST interface to a UIMA broker.

PheDRS utilizes UAB’s Enterprise Data Warehouse (EDW), which runs Oracle 11.2 and hosts tables from Cerner PowerInsight™ containing ICD-10 codes, Diagnosis-Related Group (DRG) codes, encounters and other structured data. Our application database server also runs Oracle 11.2 and houses 3 different schemas: the PheDRS schema to support the registry, the UMLS 2016AB schema and the MEDICS document schema. The majority of tables in the PheDRS schema are copies of tables in the EDW, but contain only data on registry patients. Structured data is represented in PheDRS using the CHADO schema36, specifically the db, dbxref, cv, cvterm, cvterm_dbxref and cvtermprop tables. The MEDICS schema stores documents and their associated metadata and can be found at https://github.com/ozborn/medics-schema. The web application server uses Apache 2.4.6 on the front end and forwards its requests via AJP to Tomcat 8.0 and Jersey 1.18, where they are processed by the web application (https://github.com/heenachitkara/PHEDRS). The web application server communicates with the database server only via JDBC. A series of ETL (Extract, Transform, Load) and update scripts run independently on the web application server and communicate via JSON with web services on the NLP application server, which runs Tomcat 8.5 and Jersey 1.18 to handle document processing requests. Large document volumes are processed by the ActiveMQ broker in UIMA-DUCC (https://uima.apache.org/doc-uimaducc-whatitam.html). The front end is a Single Page Application written in JavaScript/HTML utilizing the jQWidgets JavaScript UI Widgets framework (https://www.jqwidgets.com/), jQuery 1.11.1, moment (https://momentjs.com/), popModal (https://github.com/vadimsva/popModal) and dalert (http://andrewdex.github.io/dalert/), and the open source front-end framework Bootstrap (https://getbootstrap.com/). All components are open source and available online or by request, with the exception of jQWidgets, which is free for non-profit organizations.
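To illustrate how structured registry codes stored in the CHADO tables can be resolved, the sketch below performs a plain JDBC lookup of a cvterm by its external accession (for example, an ICD-10 code), joining the cvterm, cv, dbxref and db tables named above. It is a minimal sketch only: the JDBC URL, credentials and the “ICD10CM” vocabulary name are hypothetical and not taken from the PheDRS configuration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/**
 * Minimal sketch of a terminology lookup against the CHADO tables used by PheDRS.
 * Connection details and the vocabulary (db.name) value are hypothetical.
 */
public class ChadoTermLookup {

    // Hypothetical JDBC URL for the registry database server (Oracle in this paper).
    private static final String JDBC_URL = "jdbc:oracle:thin:@//phedrs-db:1521/PHEDRS";

    public static void main(String[] args) throws Exception {
        String accession = "J44.1"; // example ICD-10-CM code: COPD with (acute) exacerbation

        String sql =
            "SELECT cv.name AS vocabulary, t.name AS term, x.accession " +
            "FROM cvterm t " +
            "JOIN cv ON cv.cv_id = t.cv_id " +
            "JOIN dbxref x ON x.dbxref_id = t.dbxref_id " +
            "JOIN db ON db.db_id = x.db_id " +
            "WHERE db.name = ? AND x.accession = ?";

        try (Connection conn = DriverManager.getConnection(JDBC_URL, "phedrs_ro", "secret");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "ICD10CM"); // hypothetical db.name for the ICD-10-CM namespace
            ps.setString(2, accession);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s | %s | %s%n",
                        rs.getString("vocabulary"), rs.getString("term"), rs.getString("accession"));
                }
            }
        }
    }
}
```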

Registry Interface and Clinical Workflow

The basic workflow of PheDRS involves managing tabbed views of patients who match registry-defined criteria, as shown in Figure 2. The goal is to avoid or minimize time spent by registrars conducting manual chart reviews. The registrar (user) is presented with a list of patients in a registry tab, identified by population-targeted algorithms as potential candidates for inclusion in a specific clinical patient registry. Patients can move on, off, or between tabs depending on whether a clinical intervention, quality measure, or other registry requirement has been met; this can replace the paper lists or Excel spreadsheets otherwise used for patient tracking. Each patient in a registry is always in one of 4 mutually exclusive states: a candidate if algorithmically identified by registry-specified criteria but not yet vetted by registrars; a potential (putative) case if the patient’s status has not yet been resolved; accepted if, after human review, the patient is deemed appropriate and is tracked in the registry; or rejected from the registry after human review. The workflow status of each patient is tracked separately and indicates whether an event warranting the registrar’s attention has occurred. Examples of events include diagnosis codes (DIAG), inclusion codes (INC) or encounter information (ENC), which in this case represent ICD-10 codes, DRG codes and encounter-related information respectively, as shown in Figure 2. Other codes, such as concepts extracted via Natural Language Processing (NLP) of unstructured clinical documents (e.g., radiology or pathology reports), are not shown in Figure 2 but can be specified in the registry configuration. Figure 3 shows the configuration screen for the UAB COPD registry.
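The four mutually exclusive membership states and the separately tracked workflow events lend themselves to a simple data model; the following Java sketch is our own illustrative rendering, and the type and constant names are hypothetical rather than taken from the PheDRS source code.

```java
/** Illustrative model of registry membership states; names are hypothetical. */
enum RegistryState {
    CANDIDATE,  // algorithmically identified, not yet vetted by a registrar
    POTENTIAL,  // under review; status not yet resolved
    ACCEPTED,   // confirmed by human review and tracked in the registry
    REJECTED    // excluded from the registry after human review
}

/** Workflow events that flag a patient for registrar attention. */
enum WorkflowEvent {
    DIAG,  // diagnosis (ICD-10) code observed
    INC,   // inclusion (DRG) code observed
    ENC    // new encounter-related information observed
}

/** A registry patient carries exactly one membership state plus any pending workflow events. */
record RegistryPatient(String mrn, RegistryState state, java.util.Set<WorkflowEvent> pendingEvents) { }
```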

Figure 2.

Patient list from COPD Registry in PheDRS showing potential COPD exacerbation admissions. Patients are removed from this tab once they leave the hospital or are reviewed by a nurse. Multiple tabs describing patients with specific criteria can be defined for a registry.

Figure 3.

Registry Ontology Management in PheDRS. Registry patient detection criteria can be set here, such as the presence of specific structured data. Patient detection with machine learning is supported through a separate application that adjusts candidate patient registry eligibility scores in the database, but manual configuration is still required.

Registry configuration includes the ability for registrars to create registry-defined attributes, as shown in the COPD registry configuration screen in Figure 3 under “Patient Attributes” and “Encounter Attributes”. For example, adding the “COPD” and “Asthma” attributes allows registrars to annotate patients with these concepts in the patient view screen (not shown), where all patient-specific information is displayed.

The registry can also be configured with a number of tab attributes that perform registry-defined queries to retrieve a list of patients. Ad-hoc queries and reports can also be generated.
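As a rough illustration of how a tab attribute can encapsulate a registry-defined query, the sketch below reuses the types from the earlier sketch; the class and method names are hypothetical and are only meant to show the pattern of filtering the registry population down to the patient list a tab displays.

```java
import java.util.List;
import java.util.function.Predicate;

/** Hypothetical sketch of a registry tab: a label plus a filter over registry patients. */
class RegistryTab {
    private final String label;
    private final Predicate<RegistryPatient> criteria;

    RegistryTab(String label, Predicate<RegistryPatient> criteria) {
        this.label = label;
        this.criteria = criteria;
    }

    /** Applies the tab's registry-defined criteria to produce the patient list shown to registrars. */
    List<RegistryPatient> patients(List<RegistryPatient> registryPatients) {
        return registryPatients.stream().filter(criteria).toList();
    }
}

// Example: a tab listing unvetted candidates with a pending diagnosis event,
// loosely mirroring the "potential COPD exacerbation admissions" tab of Figure 2.
// RegistryTab copdAdmissions = new RegistryTab("Potential COPD exacerbation admissions",
//         p -> p.state() == RegistryState.CANDIDATE && p.pendingEvents().contains(WorkflowEvent.DIAG));
```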

PheDRS Single Institution Clinical Registry Model

We outline our PheDRS model for a single-institution clinical registry in Figure 4. One key difference between our model and a research-focused registry is that we do not enforce data standardization for EHR-sourced data. Multi-institutional registries need to map data to common standards in order to meaningfully share results. In the context of a single institution, where nurses and other caregivers are typically aware of the data semantics, the need for ontological standards is not immediate. PheDRS does use registry-specific controlled terminologies, but any mapping to external standards is deferred until needed. Reducing or eliminating terminology mapping by registry participants is critical in a clinical registry, whose utility is judged both by clinical results and by the time it saves clinical registry participants. This is unlike a research registry, which is judged by the quantity and quality of data it can provide. PheDRS can still support research-focused registries by allowing registrars to define their own criteria for diseases, as in the PheDRS-hosted multiple myeloma registry. However, the burden of data standardization lies with registrars, and by default PheDRS will store and display unstructured data such as the EHR-sourced “Reason for Visit” field, which has clinical utility.
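One way to picture deferred standardization is a registry-local term carrying an optional external mapping that is populated only when sharing or research requires it. The sketch below is a minimal illustration under that assumption, not the actual PheDRS data model; the SNOMED field is simply an example target standard.

```java
import java.util.Optional;

/**
 * Minimal illustration of deferred standardization: a registry-local term with an
 * optional mapping to an external vocabulary, filled in only when needed.
 * Field and method names are hypothetical.
 */
record LocalTerm(String registryName, String rawEhrValue, Optional<String> snomedCode) {

    /** Unmapped term as captured from the EHR, e.g. a free-text "Reason for Visit" value. */
    static LocalTerm fromEhr(String registryName, String rawEhrValue) {
        return new LocalTerm(registryName, rawEhrValue, Optional.empty());
    }

    /** Add a standard mapping later, once research or data sharing requires it. */
    LocalTerm withSnomed(String code) {
        return new LocalTerm(registryName, rawEhrValue, Optional.of(code));
    }
}
```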

Figure 4.

The PheDRS registry model contrasted with the i2b2 and research-focused registry models. The PheDRS architecture model supports multiple registries in a single server instance, separate from both the EHR and EDW. Data is loaded nightly from the EDW, and the PheDRS database hosts a mix of standards-compliant data and EHR-sourced non-standard data, which can be standardized on an as-needed basis. Middleware is not shown.

Another difference between the PheDRS model and some research-focused registries is that PheDRS does not yet support the display of de-identified patient data. The emphasis on de-identification in existing software and architectural guidelines leads to complicated workflows that are not appropriate in a clinical registry, where the user may need to contact the patient in the hospital or by phone for intervention or follow-up. Architects and maintainers should evaluate whether de-identified data is appropriate for their use case and consider deploying a separate registry leveraging existing open source or commercial solutions for this problem. At UAB we deploy a separate i2b2 instance for cohort discovery and other research needs that require de-identified data.

Applicability of CIPROS Checklist to PheDRS Clinical Registry Architecture

Overall, we found the CIPROS guidelines helpful in assessing the state of our clinical registry, but they need adaptation to the context of clinical use, as outlined in Figure 4.

Software Architecture - PheDRS supports the multi-tier architecture recommended in the CIPROS checklist. While the recommendation for a multi-tier architecture is relevant for clinical registries, support for platform independence is less important in clinical or “single source” registries that require integration with the EHR. Research registry frameworks typically operate under the assumption that data entered by registrars is the main or only source of data15,17, rather than being sourced through the EHR. This implies that the integration components of a clinical registry may have constraints imposed on them, either directly or indirectly through vendors, which can limit platform independence, open source licensing or both. We think the ImproveCareNow implementation of the Learning Health System20, which interoperates with major EHR vendors, is more practical.

One architecture consideration left unaddressed in these guidelines is database hosting. Based on our experience with PheDRS, we recommend that clinical registries host their own dedicated database server if resources are available, in order to help ensure that ETL resource constraints are better satisfied and to benchmark registry performance.

Finally, as shown in Figure 1, we suggest that implementers tempted to share an EDW database server with their registry database (for example, to realize performance gains through in-memory database joins) instead consider a stand-alone registry database server. The benefits of sharing EDW server resources must be balanced against both the different usage patterns of the two systems and the need to isolate registry performance testing from the EDW. Regardless of server location, clinical registries should make an effort to utilize EHR or EDW loading scripts where possible in order to leverage existing institutional expertise. With PheDRS we pull up-to-date information directly from the EHR via the EDW.
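As a concrete sketch of the nightly pull from the EDW shown in Figure 1, the code below copies new encounter rows for registry patients into the local PheDRS schema over a database link. All table, column, schema and link names, as well as the connection details, are hypothetical stand-ins for the institution-specific ETL scripts.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

/**
 * Sketch of a nightly ETL step: copy encounters for registry patients from the EDW
 * into the local PheDRS schema. All object names and the database link are hypothetical.
 */
public class NightlyEncounterLoad {

    private static final String JDBC_URL = "jdbc:oracle:thin:@//phedrs-db:1521/PHEDRS";

    public static void main(String[] args) throws Exception {
        // Insert only rows newer than the last load, restricted to patients already in a registry.
        String sql =
            "INSERT INTO phedrs.encounter (encounter_id, patient_id, admit_dt, drg_code) " +
            "SELECT e.encounter_id, e.patient_id, e.admit_dt, e.drg_code " +
            "FROM encounter@edw_link e " +
            "JOIN phedrs.registry_patient rp ON rp.patient_id = e.patient_id " +
            "WHERE e.admit_dt > (SELECT NVL(MAX(admit_dt), DATE '1900-01-01') FROM phedrs.encounter)";

        try (Connection conn = DriverManager.getConnection(JDBC_URL, "phedrs_etl", "secret");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            int rows = ps.executeUpdate();
            System.out.println("Loaded " + rows + " new encounter rows from the EDW");
        }
    }
}
```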

Development - We consider design modelling, as well as usability and performance testing, to be good software development practice and not specific to any type of registry. CIPROS specifies that there should be a framework for the development of a new registry. Based on our experience and that of others15, we consider this one of the most important requirements for both research and clinical registries. We would go further and expand the guidelines to suggest that any registry system should allow the creation of a new registry without software modification. Software development is costly, and we believe that a single development framework can and should provide basic registry functionality.

Interfaces, Interoperability, Semantics and Standardization - While standardized Case Report Forms (CRFs) and data items are as applicable to a clinical registry as to a research registry, the hard constraint for ontology-based, standardized metadata and vocabularies needs to be reconsidered for clinical registries that rely extensively on EHR-sourced data to achieve quality goals. In a single-institution setting, we recommend first utilizing existing EHR-sourced data to accomplish the clinical task at hand, and then assessing the need to map data to standards. This is becoming less of an issue as many vendors implement standard terminologies in their systems. As shown in Figure 4, PheDRS does store and display some vendor-provided non-standard data. The CIPROS recommendation for a mobile interface and documented API is as relevant for clinically focused registries as it is for research-focused registries. PheDRS does not yet implement a mobile interface but does have a documented API.

Finally, CIPROS makes no mention of EHR alerting. Where applicable, we consider integration into the EHR and the clinical workflow a key feature for fully realizing the benefit of a clinical registry - namely, the ability to provide patient-level interventions and clinical decision support. In the future, we plan to implement integrated, EHR-based clinical alerting instead of relying solely on the free-standing web-based interface.

Internationality - This checklist item plays a more important role in registries which span multiple institutions and potentially multiple languages. PheDRS is currently English only.

Data analysis, management, data quality and usability - All items appear applicable to clinically focused registries. While there is value in supporting some analytical capabilities within the registry management and population detection workflow, there is substantial variation in the analytical needs of different domains. Rather than focusing on the inclusion of analytical capability, we feel a more sustainable and valuable approach is to support integration with external analytical tools. This allows registry functions to take fuller advantage of the continually growing capabilities of other tools in the analytical space. Currently, PheDRS supports data analysis only indirectly, through data export to third-party applications.

Security Aspects - CIPROS items for security are generally applicable to clinical registries, but the need for encrypted data storage in a database is more relevant in a multi-institutional setting where a database may be exported. This requirement needs to be weighed against other security aspects, ease of development and performance, given that the clinical registry database is often hosted in the same secure data center as the institution’s EHR or EDW. There is little value in a clinical registry exceeding the security requirements of its source systems. PheDRS is hosted in the same secure data center as UAB’s EDW.

Privacy - This section of CIPROS covers data protection and double pseudonymization for genomic and biological data through a tool like ARX37, a common requirement for research-focused registries governed by the Health System Privacy Framework. In PheDRS we have only just begun the process of implementing a translational cancer registry incorporating genetic and genomic patient data, and PheDRS does not yet support pseudonymization.

Organizational - Informed consent in the CIPROS checklist is specific to the United States (citing Chapter 11, Title 21 of the Code of Federal Regulations and HIPAA) and assumes a research patient registry or clinical trial framework. Single-institution registries that track a set of patients for quality or adherence purposes (because of a lack of capability in the EHR) are covered by a consent-to-treatment agreement. The CIPROS checklist item covering institutional data rights governance issues is only applicable to multi-institutional registries. PheDRS does not currently share data outside UAB.

General Features and Training - Support for user training in all its forms is simply good software design for all registry types, and the general features enumerated by CIPROS are equally applicable to clinical registries. We are working to expand the range of PheDRS documentation beyond a manual and API.

Conclusion

In conclusion, we have described a generalizable architecture for a clinical registry implemented at UAB and have published the non-institution-specific source code relevant to other institutions on GitHub. We found the CIPROS checklist helpful in evaluating our registry and in planning needed future capabilities, but its scope is focused on large-scale, multi-institutional research deployments. We have expanded the CIPROS checklist and recommendations for the case of single-institution clinical registries.

Acknowledgements

We would like to acknowledge Heena Chitkara and Abakash Samal for PheDRS software development. Research reported in this publication was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under award number UL1TR001417. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

1. Basit MA, Vieira S. Databases and Registries. In: Global Health Informatics: Principles of eHealth and mHealth to Improve Quality of Care. 2017:101.
2. Patient Registries. Secondary Patient Registries. 2018. https://ncats.nih.gov/clinical/registries.
3. Fakhr A, Hakim F, Zaidi SK, Yusuf R, Sharif A. Clinical registry for rheumatoid arthritis; a preliminary analysis. Pakistan Armed Forces Medical Journal. 2017;67(2):317–21.
4. Hillert J, Stawiarz L. The Swedish MS registry - clinical support tool and scientific resource. Acta Neurologica Scandinavica. 2015;132(S199):11–19. doi: 10.1111/ane.12425.
5. Eissing L, Rustenbach S, Krensel M, et al. Psoriasis registries worldwide: systematic overview on registry publications. Journal of the European Academy of Dermatology and Venereology. 2016;30(7):1100–06. doi: 10.1111/jdv.13634.
6. Bagley BA, Mitchell J. Registries made simple. Family Practice Management. 2011;18(3):11.
7. Ortiz DD. Using a simple patient registry to improve your chronic disease care. Family Practice Management. 2006;13(4):47.
8. Stey AM, Russell MM, Ko CY, Sacks GD, Dawes AJ, Gibbons MM. Clinical registries and quality measurement in surgery: a systematic review. Surgery. 2015;157(2):381–95. doi: 10.1016/j.surg.2014.08.097.
9. Hoque DME, Kumari V, Hoque M, Ruseckaite R, Romero L, Evans SM. Impact of clinical registries on quality of patient care and clinical outcomes: A systematic review. PLoS One. 2017;12(9):e0183667. doi: 10.1371/journal.pone.0183667.
10. Gliklich RE, Dreyer NA, Leavy MB. Interfacing registries with electronic health records. 2014.
11. Bresnick J. Will Risk-Based Population Health Management Take Off in 2017? HealthITAnalytics.com. 2016.
12. Nelson EC, Dixon-Woods M, Batalden PB, et al. Patient focused registries can improve health, care, and science. BMJ. 2016;354:i3319. doi: 10.1136/bmj.i3319.
13. Bellgard M, Beroud C, Parkinson K, et al. Dispelling myths about rare disease registry system development. Source Code for Biology and Medicine. 2013;8(1):21. doi: 10.1186/1751-0473-8-21.
14. Lindörfer D, Mansmann U. A comprehensive assessment tool for patient registry software systems: the CIPROS checklist. Methods Inf Med. 2015;54(5):447–54. doi: 10.3414/ME14-02-0026.
15. Bellgard MI, Napier KR, Bittles AH, et al. Design of a framework for the deployment of collaborative independent rare disease-centric registries: Gaucher disease registry model. Blood Cells, Molecules, and Diseases. 2018;68:232–38. doi: 10.1016/j.bcmd.2017.01.013.
16. Murphy SN, Weber G, Mendis M, et al. Serving the enterprise and beyond with informatics for integrating biology and the bedside (i2b2). J Am Med Inform Assoc. 2010;17(2):124–30. doi: 10.1136/jamia.2009.000893.
17. Cavelaars M, Rousseau J, Parlayan C, et al. OpenClinica. Journal of Clinical Bioinformatics. 2015;5.
18. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap) - a metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics. 2009;42(2):377–81. doi: 10.1016/j.jbi.2008.08.010.
19. The Italian FSHD registry: An enhanced data integration and analytics framework for smart health care. In: Research and Technologies for Society and Industry (RTSI), 2017 IEEE 3rd International Forum on. IEEE; 2017.
20. Marsolo K, Margolis PA, Forrest CB, Colletti RB, Hutton JJ. A digital architecture for a network-based learning health system: integrating chronic care management, quality improvement, and research. eGEMs. 2015;3(1). doi: 10.13063/2327-9214.1168.
21. Napier KR, Pang J, Lamont L, et al. A web-based registry for familial hypercholesterolaemia. Heart, Lung and Circulation. 2017;26(6):635–39. doi: 10.1016/j.hlc.2016.10.019.
22. The CERTAIN Registry: a novel, web-based registry and research platform for pediatric renal transplantation in Europe. Transplantation Proceedings. Elsevier; 2013.
23. Bellgard MI, Render L, Radochonski M, Hunter A. Second generation registry framework. Source Code for Biology and Medicine. 2014;9(1):14. doi: 10.1186/1751-0473-9-14.
24. Crandall W, Kappelman MD, Colletti RB, et al. ImproveCareNow: the development of a pediatric inflammatory bowel disease improvement network. Inflammatory Bowel Diseases. 2010;17(1):450–57. doi: 10.1002/ibd.21394.
25. Single source information systems to connect patient care and clinical research. MIE; 2009.
26. Cimino JJ. Collect once, use many: Enabling the reuse of clinical data through controlled terminologies. Journal of AHIMA. 2007;78(2):24–29.
27. CIPROS - A checklist with items for a patient registry software system. MIE; 2014.
28. Lindoerfer D, Mansmann U. Enhancing requirements engineering for patient registry software systems with evidence-based components. Journal of Biomedical Informatics. 2017;71:147–53. doi: 10.1016/j.jbi.2017.05.013.
29. Lindoerfer D, Mansmann U. Data for the elaboration of the CIPROS checklist with items for a patient registry software system: Examples and explanations. Data in Brief. 2017;14:494–97. doi: 10.1016/j.dib.2017.07.075.
30. Öien RF, Forssell HW. Ulcer healing time and antibiotic treatment before and after the introduction of the Registry of Ulcer Treatment: an improvement project in a national quality registry in Sweden. BMJ Open. 2013;3(8):e003091. doi: 10.1136/bmjopen-2013-003091.
31. Hills NK, Johnston SC. Duration of hospital participation in a nationwide stroke registry is associated with improved quality of care. BMC Neurology. 2006;6(1):20. doi: 10.1186/1471-2377-6-20.
32. Liu SS, Gordon MA, Shaw PM, Wilfred S, Shetty T, YaDeau JT. A prospective clinical registry of ultrasound-guided regional anesthesia for ambulatory shoulder surgery. Anesthesia & Analgesia. 2010;111(3):617–23. doi: 10.1213/ANE.0b013e3181ea5f5d.
33. Mandl KD, Edge S, Malone C, Marsolo K, Natter MD. Next-generation registries: fusion of data for care, and research. AMIA Summits on Translational Science Proceedings. 2013;2013:164.
34. Osborne JD, Wyatt M, Westfall AO, Willig J, Bethard S, Gordon G. Efficient identification of nationally mandated reportable cancer cases using natural language processing and machine learning. Journal of the American Medical Informatics Association. 2016;23(6):1077–84. doi: 10.1093/jamia/ocw006.
35. Bhatt SP, Wells JM, Iyer AS, et al. Results of a Medicare Bundled Payments for Care Improvement Initiative for COPD Readmissions. Ann Am Thorac Soc. 2016. doi: 10.1513/AnnalsATS.201610-775BC.
36. Mungall CJ, Emmert DB, FlyBase Consortium. A Chado case study: an ontology-based modular schema for representing genome-associated biological information. Bioinformatics. 2007;23(13):i337–46. doi: 10.1093/bioinformatics/btm189.
37. ARX - a comprehensive tool for anonymizing biomedical data. AMIA Annual Symposium Proceedings. American Medical Informatics Association; 2014.
