AMIA Annual Symposium Proceedings. 2017 Feb 10;2016:381–390.

Designing a Clinical Data Warehouse Architecture to Support Quality Improvement Initiatives

John D Chelico 1, Adam B Wilcox 2, David K Vawdrey 3, Gilad J Kuperman 3
PMCID: PMC5333328  PMID: 28269833

Abstract

Clinical data warehouses, initially directed towards clinical research or financial analyses, are evolving to support quality improvement efforts, and must now address the quality improvement life cycle. In addition, data that are needed for quality improvement often do not reside in a single database, requiring easier methods to query data across multiple disparate sources. We created a virtual data warehouse at NewYork-Presbyterian Hospital that allowed us to bring together data from several source systems throughout the organization. We also created a framework to match the maturity of a data request in the quality improvement life cycle to the proper tools needed for each request. As projects progress through the Define, Measure, Analyze, Improve, and Control stages of quality improvement, resources are properly matched to the data needs at each step. We describe the analysis and design that created a robust model for applying clinical data warehousing to quality improvement.

Introduction

The advent of electronic health record systems led to the emergence of backend electronic data repositories of patient and provider information. As repositories grew, clinical data warehouses were created and optimized for retrospective analysis of patient populations. This analysis has historically been focused on either clinical research or financial and organizational queries. As more information has been incorporated into healthcare data warehouses, the potential for data warehouses to address other tasks, focused more on direct patient care improvement, has been suggested.1, 2 However, it is often unclear how to best make use of data in warehouses for quality improvement activities. In addition, there are many tools for accessing and analyzing data in a data warehouse, but their appropriate application can be difficult. We present a case study of a clinical data warehouse architecture implemented at Columbia University Medical Center that focuses on the appropriate and efficient use of resources for quality improvement. We modeled our data warehouse and associated tools around a framework that follows the phases of the quality improvement life cycle. We examine the quality improvement life cycle used for improving care at an academic medical center and describe how we leverage the functions of clinical data warehouse tools according to the maturity of a particular quality improvement project. Finally, we illustrate the application of this architecture across the breadth of quality improvement initiatives at our institution.

Background

The Clinical Data Warehouse (CDW) at Columbia University Medical Center (CUMC) of NewYork-Presbyterian Hospital (NYP) was originally created in 1994 by the Department of Biomedical Informatics in conjunction with the Columbia University Office of Clinical Trials, primarily to support clinical research.3 The data in the warehouse was largely populated by clinical encounter data from the backend clinical data repository (CDR) of the homegrown WebCIS electronic health record (EHR) system.4 With this information from the repository, the warehouse could support queries of all data available in electronic form that was typically used in the care of patients. The CDR was created as an event-based transactional database optimized for retrieval of single-patient data, while the CDW was created as an entity-based analytical database optimized for cross-patient (population-based) retrieval of data. As use of the CDW by clinical researchers grew, it was soon observed that the data could be used for other purposes, such as financial, administrative, and clinical quality improvement in the hospital.

Concurrently, there were two important developments at NYP that affected the data warehousing approach. First, NYP organized a quality improvement organization with teams trained in process improvement throughout the institution.5 Quality improvement was implemented using the Six Sigma methodology to decrease variation in organizational tasks by carefully examining facts and data to improve existing processes. The application of data to quality improvement required adjustment of the data organization in the warehouse and additional tools to provide data in different workflows related to quality improvement implementation. Second, NYP completed its migration from the legacy WebCIS EHR (which was primarily used for access to multiple data sources that were integrated in the repository) to a commercial EHR that was implemented across the system. With this new EHR implementation and federal incentive programs came an increased focus on data entry into the EHR. This affected the data warehouse by adding a significant source of new data to the existing design.

As a result of these changes, we sought to better understand how the changes could best be addressed by adaptations to the data warehouse architecture, and to create an effective model for applying data resources to quality improvement efforts.

Methods: Analysis of Existing Issues

We performed four analyses to understand perceptions and factors relevant to the current data warehouse and its potential redesign. Two analyses focused on the customers of the warehouse, while two focused on its structure and processes. We used a mixture of an external stakeholder analysis, key user interviews, and an analysis of data requests to formulate a new vision for clinical data warehousing at Columbia University Medical Center and the whole NewYork-Presbyterian Hospital system.

We first performed a stakeholder analysis among six important stakeholders across the institution, who sponsored or facilitated projects that relied on the data stored in the clinical data warehouse. These individuals included leaders in the information systems and quality improvement organizations, who had a broad understanding of the data requirements for quality improvement and the need for robust systems to provide that data. They were members of the Clinical Quality and IT (CQIT) Committee, which was formed with the mission of addressing key NYP goals for regulatory reporting, patient safety, pay-for-performance measures, and quality of care delivered through the use of information technology. Along with other efforts to improve processes at the hospital through the use of information technology, they participated in an analysis to inform how the clinical data warehouse could assist in the mission of the institution's quality improvement administrators. During prioritization via a voting mechanism, a few key themes arose: the ability to track specific safety measures; access to data for analysis, quality reporting, and research; the ability to integrate data across applications for reporting purposes; and the ability to extract data from clinical notes for quality purposes. The committee felt that these themes could best be addressed through two key subgroups that concentrated on excellence in data warehousing and increased use of structured documentation. The data warehousing subgroup, through a gap analysis of user needs, found that the institution required a more effective method to bring together disparate clinical data sources across NYP and a more robust way to manage user requests for data.3

Next, we performed a needs analysis by interviewing key trained quality performance experts within the institution to better understand their data needs from the warehouse. We concentrated our efforts on six key quality improvement initiatives (Table 1) set forth by NYP senior leadership, and interviewed a leader from each initiative. Our goal was to determine how information technology, through data warehousing, could help these specific efforts at the institution's hospitals, and how well the needs were currently being met. Through semi-structured interviews we gathered information from these users regarding the maturity of the initiatives, the data needs of each initiative, IT system dependence, and challenges faced in accomplishing goals (see Table 2). The institution already had data used in monitoring these projects; however, the data were manually gathered and processed into spreadsheet-based static reports.

Table 1.

Quality initiatives pursued at NYP during the analysis period.

Initiative Description
Medication Reconciliation Goal is to create a standardized method to accurately and completely reconcile patient medications between admission and discharge in both the inpatient and outpatient settings.
Blood Stream Infections Goal is to decrease the rate of central line associated blood stream infections at NYP and to comply with mandatory New York State reporting of central line associated events.
Transplant Goal is to have 100% compliance with the United Network for Organ Sharing (UNOS) procedures at NYP and create a robust program for NYP transplant patient quality improvement.
Patient Verification Goal is to implement a process to reduce the number of patient identification errors at NYP by ensuring the patients are properly matched to the care they are given.
NYSSIPP & Pre-Op Antibiotics Goal is to implement and monitor the New York State Surgical and Invasive Procedure Protocol in order to eliminate wrong patient, wrong site, wrong side, and wrong invasive procedure errors at NYP. Goal is also to reduce the rate of post-operative infection at NYP with the proper use and timing of pre-operative prophylactic antibiotic administration.
Pressure Ulcers Goal is to reduce the prevalence rates of pressure ulcers at NYP by implementing a rigorous staff protocol and acquiring new patient surfaces (beds) to prevent the formation of pressure ulcers.

Table 2.

Themes used in semi-structured interviews with quality performance experts.

Extent of process issues
Clarity of vision of over-all approach
Challenges
Near term IT Needs
Strategic Implications
Systems / Dependencies

These interviews identified a few key issues explaining why the quality improvement needs of the institution were not being met. First, the data needed by quality performance experts was frequently not available within the CDW; the data needed for these projects was mainly found in vendor-based transactional systems. Second, we found that even when the data was available, the experts still needed help in defining what their data needs really were. Third, there was no way for the data experts to query across transactional systems with the IT tools available to them. Finally, we noted that data needs changed according to the maturity of the quality improvement project: the more mature the quality improvement strategy, the more robust the IT solution had to be. Still, in each initiative there was a common thread of how data was discovered, validated, and then used to monitor a particular quality improvement marker.

We also performed an analysis of the warehouse itself to identify any structural or technical characteristics that may have been root causes of the concerns raised by the CQIT review. This analysis reviewed requirements and technical factors that were seen by the data warehouse leadership as most significant in their relationship to the CQIT concerns. It identified that the primary challenges concerned data integration. First, assembling and integrating all administrative, financial, and clinical systems in one system was an immense task. While some of the institution's transactional data repositories were well understood and mapped to a common terminology (the Medical Entities Dictionary, or MED), other newer commercial systems were less understood, and fewer tools were available for navigating their terminology. Prospectively mapping large backend vendor-based clinical data repositories to a common representation in the clinical data warehouse was a very time-consuming goal, beyond the resources of the data warehouse team. Ideally, vendors could transfer data from their systems through HL7 messaging and interfaces to the data warehouse; however, we found that this was not always well supported by external systems. On analysis of the different data repositories around the institution, we found several localized experts centered around different homegrown and vendor-based systems. This was exacerbated further by having two different institutional patient systems between the university hospitals of Columbia and Weill Cornell. In such an environment, consumers with new data needs had to either "hunt around" for the appropriate person to help them, or would try to push all requests through whoever had successfully met the data need in the past.

The findings of the CQIT committee and the focused quality performance expert interviews further justified the need for a clinical data warehouse that served the needs of the quality improvement initiatives at the institution. Moreover, we felt it was also important to maintain and perhaps improve the capabilities that were already being supported in clinical research and administration. Therefore, we performed a process analysis of the current activities of the CDW team by reviewing requests submitted to the CDW, and collecting feedback from the CDW analysts. This review included data requests made formally through a data request submission process, and informally through requesters seeking instruction directly through CDW leaders. We identified the maturity of each data request in terms of the existing access to the data, the roles of the users of the data, and how the data were integrated with the users’ workflow. We also examined the sources of data used for the data requests, and the overall data flow from our source systems to their eventual use in our CDW. We then grouped different data uses and data sources into common types, and qualitatively identified and analyzed themes among the sources and uses.

CDW user requests have classically come from clinical researchers at CUMC. More recently, the analysts observed an increase in requests from administrative and quality improvement personnel at the institution. These requests differ from the classic research-oriented queries in that they require more time commitment and expertise from the CDW staff. Requests ranged from one-time queries to quarterly and even daily reporting to administrative systems. They also required bringing together financial, clinical, and administrative data into one query. The CDW team was spending significant time running ad hoc queries in a one-time or recurrent fashion depending on the user. Additionally, almost all of the ad hoc inquiries were done without building upon previously understood data requests. The CDW team needed an infrastructure for managing repeated common requests and for users who kept requesting more complex data mining efforts.

Results: System Description and Design

From these four analyses, we were able to identify three themes that influenced the redesign of our clinical data warehouse. These themes were 1) structural changes in our data infrastructure created different needs for data integration; 2) ad hoc queries were a foundational method for accessing data and needed to be optimized; and 3) data needs for quality improvement initiatives evolved as the initiatives matured through a quality improvement life cycle. Each of these themes is described below, along with a description of the design changes that addressed the theme.

Data Integration

A dominant theme that emerged from the stakeholder and structural analyses was the need for improved data integration to support customer needs. Because we were in an academic medical center, the data warehouse had to consider the goals and data needs of both the medical school and the hospital: the medical school's needs were focused more on education and research, while the hospital's focus was more on patient care, clinical quality, and financial measures. The redesigned warehouse architecture also needed to be responsive to user requests with minimal resources. The key to this architecture was handling user requests while maintaining support for research and quality improvement projects.

As mentioned above, the main data source for the CDW was the institution's clinical data repository, which had as its sources data from multiple ancillary systems that were interfaced to a central database, along with the more recent institutional electronic health record that included computerized physician order entry and a clinical documentation system. Because the EHR was intended to replace many functions of the legacy WebCIS application and CDR, the data from the EHR were not modeled and stored in the repository, and many data sources that fed the repository were also stored directly in the EHR (e.g., lab data).6 This change in the primary data source for the CDW was important for the new design, as the warehouse needed to accommodate data integration that was previously done in the repository using the MED. Because we did not have the resources for full data integration between the systems, and the EHR data represented a new data type, we focused on the extract process to simplify access to the data, and modeled or transformed the data based on need (e.g., late binding).7,8

For data extraction, we were able to use standardized database management tools to make the data accessible in the CDW without actually consolidating the data sources, creating a virtual clinical data warehouse. We first replicated the source application's backend database management system in real time, using standard database replication tools. This minimized the effect on front-end application performance by not burdening the application to feed data to our warehouse directly. We then used existing database management system integration services to link the replicated database with existing tables in the CDW. These integration services supported both ad hoc analytic queries and extracting data from EHR tables to populate data marts in the CDW. Each data mart contains data pertinent to its area of interest, and hence only a focused subset of the replicated source databases, requiring data transformation and modeling. Based on the demand of the data needs, we could focus efforts on asynchronous ad hoc queries or support more recurring and even synchronous reporting and analysis through content-optimized data marts. Different data marts could be updated at defined intervals or in real time to support online analytical processing.9 Database integration services for both ad hoc queries and updates to centralized data marts allowed more rapid availability of the data for requesters. We found that this approach was most effective for supporting ad hoc queries, and could be effective in building data marts to support periodic data reports.
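The virtualization idea above can be sketched in miniature: a replicated source database is "attached" to the warehouse so a single query can join source tables with warehouse tables without physically consolidating the data. SQLite stands in for the production database platform here, and all table and column names are hypothetical, not NYP's actual schema.

```python
import os
import sqlite3
import tempfile

# Toy stand-in for the real-time replica of a source EHR database.
tmp = tempfile.mkdtemp()
replica_path = os.path.join(tmp, "replica.db")
cdw_path = os.path.join(tmp, "cdw.db")

src = sqlite3.connect(replica_path)
src.execute("CREATE TABLE ehr_lab_results (mrn TEXT, test TEXT, value REAL)")
src.executemany("INSERT INTO ehr_lab_results VALUES (?, ?, ?)",
                [("001", "HbA1c", 8.2), ("002", "HbA1c", 6.4)])
src.commit()
src.close()

# Existing warehouse tables live in their own database.
cdw = sqlite3.connect(cdw_path)
cdw.execute("CREATE TABLE patient_dim (mrn TEXT, clinic TEXT)")
cdw.executemany("INSERT INTO patient_dim VALUES (?, ?)",
                [("001", "East"), ("002", "West")])
cdw.commit()

# The "integration services" step: attach the replica so one query can join
# source tables with warehouse dimensions without moving the data.
cdw.execute(f"ATTACH DATABASE '{replica_path}' AS replica")
rows = cdw.execute("""
    SELECT p.clinic, r.mrn, r.value
    FROM replica.ehr_lab_results r
    JOIN patient_dim p ON p.mrn = r.mrn
    WHERE r.test = 'HbA1c' AND r.value > 7.0
""").fetchall()
print(rows)  # -> [('East', '001', 8.2)]
```

In a production setting the replica would be maintained continuously by database replication tools rather than created in place, but the query pattern is the same.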

Data Access and Ad Hoc Queries

Themes regarding access and ad hoc queries were identified primarily from the process and needs analyses. We noted that requests from our users, whether they were researchers, clinical providers, or operational analysts, were all initially performed as ad hoc queries. Even when the data were needed in a complicated report or dashboard, the data were first extracted and verified using provisional queries to check data definitions, availability, and quality. Research queries were generally ad hoc, with few examples where queries were run repeatedly at intervals. While we did provide other functions within the CDW, such as generated reports and data feeds to separate applications, these were enhancement requests for more robust delivery of data that was originally requested as an ad hoc preliminary query. Understanding that data requests all started as ad hoc queries allowed us to identify a pattern of how data requests evolved, and allowed us to optimize data access for ad hoc queries separately from modeling for reporting. When a query was requested on a recurrent basis, often to identify trends in indicators, data marts could then be built and extract-transform-load processes implemented to aid in the data modeling and consolidation process (see Figure 1). This was both to increase the efficiency of the end-user data extract, and because the data modeling was better defined for these more mature requests. Ongoing requests made by applications or analytical systems, such as our business intelligence system, were handled in a similar fashion. Data marts were created in the warehouse and either used directly by the front-end systems or extracted to the system database.
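The graduation of a validated ad hoc query into a recurring data mart can be sketched as a small extract-transform-load step: the query that analysts have verified is simply materialized into a mart table on a schedule. SQLite and the table names below are illustrative stand-ins for the production environment.

```python
import sqlite3

# Hypothetical source table, standing in for replicated EHR lab data.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE lab_results (mrn TEXT, test TEXT, value REAL, drawn_date TEXT);
    INSERT INTO lab_results VALUES
        ('001', 'HbA1c', 8.2, '2016-01-10'),
        ('001', 'HbA1c', 7.1, '2016-04-12'),
        ('002', 'HbA1c', 6.4, '2016-02-03');
""")

# An ad hoc query that proved useful and is now requested recurrently.
AD_HOC_QUERY = """
    SELECT mrn,
           COUNT(*)        AS n_tests,
           AVG(value)      AS mean_value,
           MAX(drawn_date) AS last_drawn
    FROM lab_results
    WHERE test = 'HbA1c'
    GROUP BY mrn
"""

def refresh_mart(conn, mart_name, query):
    """Rebuild the mart table from the validated ad hoc query (the ETL step)."""
    conn.execute(f"DROP TABLE IF EXISTS {mart_name}")
    conn.execute(f"CREATE TABLE {mart_name} AS {query}")
    conn.commit()

refresh_mart(db, "mart_hba1c_summary", AD_HOC_QUERY)
summary = db.execute("SELECT * FROM mart_hba1c_summary ORDER BY mrn").fetchall()
print(summary)
```

Scheduling `refresh_mart` at the interval the requester needs turns a one-off query into a maintained data product without redesigning the query itself.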

Figure 1.

Figure 1.

Virtual data warehouse design.

We also found a multitude of data visualization tools and services that required different levels of information accessibility from our clinical data warehouse. After careful examination of our requests we found that they could be divided into four distinct categories based upon the frequency or urgency of data required. These categories were as follows:

  1. Ad-hoc Direct SQL Queries

  2. Recurring Static Reporting

  3. Online Analytical Processing

  4. Point of Care Registry Functions

Each of these categories implied a predefined refresh period for queries against our data warehouse tables. From an ad hoc query that may be run once, to the timelier queries required for point-of-care reporting, each category required data updated at predefined intervals driven by the frequency of the reporting task.
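The four categories can be treated as a refresh policy for the data that backs them. The specific intervals below are hypothetical examples only; as noted above, the actual intervals were driven by each reporting task.

```python
from datetime import timedelta

# Hypothetical refresh intervals for the four data access categories.
REFRESH_POLICY = {
    "ad_hoc_sql": None,                      # run once on demand; no refresh
    "static_report": timedelta(days=90),     # e.g., quarterly reporting
    "olap": timedelta(days=1),               # e.g., nightly data mart rebuild
    "point_of_care": timedelta(minutes=15),  # near-real-time registry
}

def needs_refresh(category, age):
    """Return True if data of the given age is stale for its category."""
    interval = REFRESH_POLICY[category]
    if interval is None:
        return False  # ad hoc results are never refreshed automatically
    return age >= interval

print(needs_refresh("olap", timedelta(hours=30)))          # stale nightly mart
print(needs_refresh("point_of_care", timedelta(minutes=5)))  # still fresh
```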

Quality Improvement Cycle

The needs and process analyses identified changing user needs that were related to the quality improvement life cycle. The fourth lesson from the needs analysis was that data needs changed according to the project's maturity in the life cycle, and the process analysis identified multiple methods for supplying data. We were able to map our categorization of data warehouse access methods to the quality improvement cycle followed at our institution. The approach followed the DMAIC framework: DEFINE, MEASURE, ANALYZE, IMPROVE, and CONTROL, each representing a different stage of the quality improvement life cycle, each with differing data needs.10 At the DEFINE stage, the user asks what type of data is needed and whether it exists. At the MEASURE stage, static reporting can monitor trends in data over time in order to establish a baseline level for future measurements. During the ANALYZE stage, data are examined from different angles and viewpoints in order to find the place where an intervention would make the biggest impact. Interventions are put into place and the data are used to help caregivers make the right decisions at the point of care during the IMPROVE stage. Finally, at the CONTROL stage, process improvement is incorporated into the workflow of the organization and data-driven decision making becomes part of the standard of care.

The mapping of the data access methods to the quality improvement cycle was as follows. New projects in the DEFINE phase always started with a direct database query to establish the availability and validity of the data. As projects matured, recurring automated queries would help further define the project and identify trends over time in the MEASURE phase. Deeper analysis of the data would use more advanced online analytical processing tools to find factors that correlated with the quality improvement task at hand in the ANALYZE phase. Data-driven dashboards could then bring information from real-time data marts to clinicians at the point of care so they could monitor key variables in the IMPROVE phase. As the use of the key variables became better understood, they could be integrated into clinicians' workflow with real-time decision support in the CONTROL phase (see Table 3).

Table 3.

Mapping of quality improvement stages, data needs and technical tools.

Quality Improvement Stage Data Need Tool
DEFINE Research Ad hoc queries
MEASURE Management reports Automated queries / reports
ANALYZE Operational reports On-line analytical processing
IMPROVE Point-of-care reporting Dashboards
CONTROL Decision support Alerts / automated orders

This mapping to the DMAIC cycle was critical in matching data needs to appropriate tools for quality improvement initiatives. Because of the data-intensive nature of this process, leaders of quality improvement initiatives would often approach the clinical data warehouse managers to meet their data needs. This led to consideration of how to optimize the data warehouse around quality, rather than its traditional role as a retrospective clinical research tool. The phases were not always concretely defined in the data requests; however, the task being performed by the quality expert at the time of the data warehouse inquiry typically defined the complexity of the data warehouse storage and the tools used to access it. It also defined our expectations of how the data needs would evolve as the quality improvement initiative matured. As a project evolved from a “define” stage to “improve” or “control” stage, requirements for data timeliness and access would increase accordingly. This further classification of user needs based upon the maturity of the quality improvement initiative created a framework for prioritization of user requests and proper data mart utilization. Using this model, the CDW team sought to create a clinical data warehouse architecture that focused on the types of data required for each stage of quality improvement in order to better serve the quality improvement efforts of NYP.
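Because the stage of a request determined the tool it should be routed to, the Table 3 mapping lends itself to a simple routing function. This sketch does nothing more than encode the table's rows:

```python
# The Table 3 mapping: DMAIC stage -> (data need, data warehouse tool).
DMAIC_TOOLS = {
    "DEFINE":  ("Research",                "Ad hoc queries"),
    "MEASURE": ("Management reports",      "Automated queries / reports"),
    "ANALYZE": ("Operational reports",     "On-line analytical processing"),
    "IMPROVE": ("Point-of-care reporting", "Dashboards"),
    "CONTROL": ("Decision support",        "Alerts / automated orders"),
}

def route_request(stage):
    """Return the (data need, tool) pair for a quality improvement stage."""
    need, tool = DMAIC_TOOLS[stage.upper()]
    return need, tool

print(route_request("improve"))  # -> ('Point-of-care reporting', 'Dashboards')
```

Encoding the mapping this way also makes the expected evolution explicit: when a project moves to the next stage, the routing changes with it rather than the prior stage's tool being stretched to fit.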

Results-Diabetes Mellitus Example

We illustrate the application of this clinical data warehouse architecture supporting the stages of the quality improvement life cycle using a quality improvement initiative concentrating on the outpatient clinic population of patients with diabetes mellitus at the Columbia University Medical Center.

Ad-hoc SQL Queries (Define): For several years, healthcare providers from the NewYork-Presbyterian Ambulatory Care Network intermittently requested information from our clinical data warehouse on their patients with diabetes mellitus. Each request was done on a case-by-case basis and customized for a particular purpose, whether clinical research or administrative reporting.

Static Reporting (Measure): Starting in 2006, some of these reports became more formalized, and a process for automated querying and delivery of data was set up by the clinical data warehouse staff. With the help of our data warehouse staff, medical directors would monitor the patients with diabetes in their clinic population and provide reports to their physicians regarding the timeliness and control of laboratory values including hemoglobin A1c, LDL, and microalbumin. On a quarterly basis, data would be compiled and delivered via "flat file" text files to the healthcare analysts in the Ambulatory Care Network for processing and reporting. The data would ultimately be delivered to healthcare providers in a paper-based document. Aggregate data would be provided for both individual physicians and clinic locations on how goals were being met for each laboratory measure. Physicians would also receive a registry of the patients included in their calculations, with associated demographic and laboratory values.
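A minimal sketch of such a quarterly flat-file extract follows; the registry rows, provider names, delimiter, and aggregation are all hypothetical illustrations of the pattern, not the actual NYP report format.

```python
import csv
import io

# Hypothetical registry rows as might be extracted quarterly from the
# warehouse: (provider, mrn, test, in_control).
registry = [
    ("Dr. A", "001", "HbA1c", True),
    ("Dr. A", "002", "HbA1c", False),
    ("Dr. B", "003", "HbA1c", True),
]

# Aggregate per-provider control rates.
totals = {}
for provider, mrn, test, in_control in registry:
    n, ok = totals.get(provider, (0, 0))
    totals[provider] = (n + 1, ok + int(in_control))

# Emit a pipe-delimited "flat file" that analysts could load into
# their spreadsheet-based reports.
buf = io.StringIO()
writer = csv.writer(buf, delimiter="|")
writer.writerow(["provider", "patients", "pct_in_control"])
for provider, (n, ok) in sorted(totals.items()):
    writer.writerow([provider, n, round(100.0 * ok / n, 1)])

flat_file = buf.getvalue()
print(flat_file)
```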

Online Analytical Processing (Analyze): As these reports became increasingly utilized, the need for more frequent reporting was evident. Much of the data manipulation needed to produce such reports required a great deal of manual effort, and the data would sometimes be delivered after it was no longer timely enough to be usable. Additionally, as medical directors sought to use this data to manage quality improvement interventions, they found it very burdensome to monitor their efforts in this way.

With the goal of providing timelier reporting to healthcare providers, we migrated the process of data gathering into data marts and automated the data analysis through online analytical processing tools. From various disparate data sources we built a data mart of diabetic patient demographics, outpatient visit history, laboratory, medication, and provider information. We then created a dynamic management reporting tool using our business intelligence system to provide timely patient and provider tracking tools used at the point of care. With this new method we were able to scale up to our users' needs for more frequent reporting on clinic patients with diabetes mellitus.

Dashboards-Registries (Improve): Future improvements to clinical care in diabetes will turn to our clinical data warehouse to provide instant access to patient and provider information regarding diabetic patient care. With our architecture in place we can update data-mart content with a frequency required by user needs. As new tools such as healthcare provider dashboards and chronic disease registries are implemented to manage the population of patients with diabetes in our institution, the data warehouse can provide the platform for future expansion.

Applications (Control): As interventions are defined to improve the care of patients with diabetes, they must be integrated into the transactional electronic health record applications used in the institution. New decision support alerts and reminders can keep patient care providers up to date on the needs of a particular patient.

In our outpatient diabetes mellitus quality improvement initiative, we observed that the maturity of the project dictated the resources needed from both the clinical data warehouse and its users. By planning the proper approach and building upon prior, less sophisticated reporting methods, we were able to create a data mart architecture and a robust data visualization tool that met the needs of our users.

Discussion

Using multiple analyses, we were able to identify important themes regarding our data warehouse that led to a specific redesign to support evolving needs. We used virtualization of data sources and data marts to facilitate late-binding of data models and to support ad hoc queries that were foundational to using the warehouse data. Organizing our clinical data warehouse architecture around the quality lifecycle has been an effective approach to meeting the evolving demands of a data warehouse. The fundamental architecture of our virtual clinical data warehouse and associated prioritization strategy around our data visualization tool has brought to our attention a few key findings.

First, our data warehouse prioritization strategy made effective use of our technical resources for a given project. It was imperative that proper resources were also committed to the project from our users' perspective. As the maturity of the quality improvement initiative progressed, more resources were needed on the user side to ensure the success of the project. While ad hoc queries required only minimal user interaction, mature projects required greater time commitment and technical proficiency from our users. Additionally, as a project matured, higher-level management skills were required of the administrative staff monitoring its progress. As our diabetes project moved from static reporting to online analytical processing, the need for project management skills to coordinate the different users and technical staff became evident.

Second, as we built customized data marts, we needed to keep the data mart models separated from the data visualization tools. Different tools required different backend data sources. This abstraction of the backend supply-side and frontend demand-side environments of a clinical data warehouse has been described at the University of Michigan Health System.11 As a project matures, different tools will need to interact with the same data mart. Keeping data marts as generic as possible facilitates future requirements and expansions to the data model.

Third, defining projects based on maturity helped ensure that users were provided with appropriate tools for the task. Once we identified the expected progression of a project through the quality improvement life cycle, we were able to avoid situations where we would try to extend the tools of one level to different functions as the project evolved. For example, before we mapped projects against the quality improvement model, a project at the Measure stage, using reporting software, might be steered toward extending those tools to other functions; we experienced discussions of how to implement reporting tools at the point of care, which would have been an expensive and difficult implementation. With an understanding of project evolution, we recognized that projects were expected to evolve, and that we should migrate to the appropriate tools rather than extend the tools of one level.

Some limitations exist in the architecture of a virtual data warehouse. By relying on discrete replication of the source systems' backend databases, we introduce a delay in the delivery of data to the warehouse. While our data warehouse is not formally used for transactional processes, it is feasible in our model for data to be made available synchronously for dashboard-style point-of-care reporting. To achieve this, however, we would need to look toward HL7 messaging and interfaces that feed data directly from source systems as it is entered. We have recently begun testing loading data directly into the CDW using HL7 messaging, which would improve data currency.
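In outline, such an HL7-based feed would consume v2 result messages as they are generated rather than waiting for batch replication. A minimal sketch in Python (the message content is invented for illustration, and this is not the interface the paper's testing actually uses; field positions follow HL7 v2.x conventions):

```python
# A fabricated ORU^R01 lab-result message; HL7 v2 segments are separated by
# carriage returns and fields by pipes.
MESSAGE = "\r".join([
    "MSH|^~\\&|LAB|NYP|CDW|NYP|20160310120000||ORU^R01|MSG0001|P|2.3",
    "PID|1||001||DOE^JANE",
    "OBX|1|NM|4548-4^HbA1c^LN||7.9|%|||||F",
])

def parse_oru(message):
    """Extract (mrn, test, value, units) tuples ready to load into the CDW."""
    mrn, results = None, []
    for segment in message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "PID":
            mrn = fields[3]                      # PID-3: patient identifier
        elif fields[0] == "OBX":
            test = fields[3].split("^")[1]       # OBX-3: text part of observation id
            results.append((mrn, test, fields[5], fields[6]))  # OBX-5/OBX-6
    return results

results = parse_oru(MESSAGE)
```

A production interface would of course need acknowledgments, error queues, and terminology mapping, but the core transformation from message to warehouse row is this small.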

Additionally, as with any data warehousing approach, one must be sensitive to the quality of the data in the warehouse tables. Data can only be as good as what is available in the source systems. At every level of warehouse function, we find that data must be cleaned in order to produce reports that are meaningful to users. In the future we expect to create a scoring system for data points stored in our warehouse, so that we can provide not only the data but also a measure of the integrity of each data point.
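One way such a scoring system might work is sketched below. The rubric (one point each for completeness, plausibility, and recency) and the plausibility ranges are assumptions for illustration; the paper proposes a scoring system as future work without defining its rules:

```python
from datetime import date

# Assumed plausibility ranges per test, in the test's native units.
PLAUSIBLE = {"HbA1c": (3.0, 20.0)}

def score(point, today=date(2016, 3, 10)):
    """Return an integrity score from 0 to 3 for a warehoused data point:
    one point each for completeness, plausibility, and recency."""
    s = 0
    # Completeness: every expected field is populated.
    if all(point.get(k) not in (None, "") for k in ("mrn", "test", "value", "recorded")):
        s += 1
    # Plausibility: value falls inside the expected physiologic range.
    lo, hi = PLAUSIBLE.get(point.get("test"), (float("-inf"), float("inf")))
    if point.get("value") is not None and lo <= point["value"] <= hi:
        s += 1
    # Recency: recorded within the past year.
    if point.get("recorded") and (today - point["recorded"]).days <= 365:
        s += 1
    return s
```

Attaching such a score to each row would let report consumers filter or weight data by integrity rather than treating all warehouse rows as equally trustworthy.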

While the analysis-derived themes and design were valuable as they were applied to the data warehouse at NewYork-Presbyterian Hospital, perhaps their most significant characteristic has been their robustness beyond the institution where they were developed. Since this initial analysis, discovery, and application, some of our team members have migrated to different institutions, and we have applied these findings to warehousing and analytics strategies in these other settings. In each case, these lessons have been effective in creating local strategies for addressing the evolving user needs of clinical data, especially for quality improvement initiatives.

Conclusion

We found that by concentrating on the needs of our users we could establish a robust clinical data warehouse that supports quality reporting initiatives at our institution. By building a clinical data warehouse architecture around the dynamic needs of the available data visualization tools, we could scale up resources as user needs required. By creating a framework for user needs based on the maturity of quality improvement initiatives, we could direct users to the proper level of data warehouse tools.

REFERENCES

  • 1. Einbinder JS, Scully KW, Pates RD, Schubart JR, Reynolds RE. Case study: a data warehouse for an academic medical center. J Healthc Inf Manag. 2001 Summer;15(2):165–175.
  • 2. Einbinder JS, Rury C, Safran C. Outcomes research using the electronic patient record: Beth Israel Hospital's experience with anticoagulation. Proc Annu Symp Comput Appl Med Care. 1995:819–823.
  • 3. Kuperman GJ, Boyer A, Cole C, Forman B, Stetson PD, Cooper M. Using IT to improve quality at NewYork-Presbyterian Hospital: a requirements-driven strategic planning process. AMIA Annu Symp Proc. 2006:449–453.
  • 4. Hripcsak G, Cimino JJ, Sengupta S. WebCIS: large scale deployment of a Web-based clinical information system. Proc AMIA Symp. 1999:804–808.
  • 5. Johnson T, Currie G, Keill P, Corwin SJ, Pardes H, Cooper MR. NewYork-Presbyterian Hospital: translating innovation into practice. Jt Comm J Qual Patient Saf. 2005 Oct;31(10):554–560. doi: 10.1016/s1553-7250(05)31072-5.
  • 6. Wilcox AB, Vawdrey DK, Chen YH, Forman B, Hripcsak G. The evolving use of a clinical data repository: facilitating data access within an electronic medical record. AMIA Annu Symp Proc. 2009:701–5.
  • 7. Sanders D. The late-binding data warehouse: a detailed technical overview [Internet]. Health Catalyst; 2013 [cited 2016 Mar 10]. Available from: https://www.healthcatalyst.com/late-binding-data-warehouse-explained/
  • 8. Terry K. New healthcare data warehousing model gains favor [Internet]. InformationWeek Healthcare; 2013 [cited 2016 Mar 10]. Available from: http://www.informationweek.com/healthcare/clinical-information-systems/new-healthcare-data-warehousing-model-gains-favor/d/d-id/1109145
  • 9. Chelico JD, Wilcox A, Wajngurt D. Architectural design of a data warehouse to support operational and analytical queries across disparate clinical databases. AMIA Annu Symp Proc. 2007:901.
  • 10. Kumar S, Thomas KM. Utilizing DMAIC six sigma and evidence-based medicine to streamline diagnosis in chest pain. Qual Manag Health Care. 2010 Apr-Jun;19(2):107–116. doi: 10.1097/QMH.0b013e3181db6432.
  • 11. Dewitt JG, Hampton PM. Development of a data warehouse at an academic health system: knowing a place for the first time. Acad Med. 2005 Nov;80(11):1019–25. doi: 10.1097/00001888-200511000-00009.
