AMIA Annual Symposium Proceedings
2012 Nov 3;2012:770–778.

Bridging Informatics and Implementation Science: Evaluating a Framework to Assess Electronic Health Record Implementations in Community Settings

Joshua E Richardson 1, Erika L Abramson 1, Elizabeth R Pfoh 1, Rainu Kaushal 1; the HITEC Investigators1
PMCID: PMC3540540  PMID: 23304351

Abstract

Effective electronic health record (EHR) implementations in community settings are critical to promoting safe and reliable EHR use and to mitigating the provider dissatisfaction that often results from poorly executed implementations. The implementation challenge is compounded by the scale and scope of EHR installations that are occurring and will continue to occur over the next five years. However, compared with EHR evaluations more broadly, relatively few biomedical informatics researchers have published on evaluating EHR implementations. Fewer still have evaluated EHR implementations in community settings. We report on the methods we used in a novel application of an implementation science framework to qualitatively evaluate community-based EHR implementations. We briefly provide an overview of the implementation science framework, describe our methods for adapting it to informatics, explain the effects the framework had on our qualitative methods of inquiry and analysis, and discuss its potential value for informatics research.

Introduction

Community-based physicians are being incentivized to implement electronic health records (EHRs), and many will do so within the next 3–5 years; however, EHR implementations are disruptive, costly, and resource intensive even when they are done well. It is therefore critical to evaluate EHR implementations and, from those evaluations, disseminate best practices that may mitigate the negative effects and promote the optimal effects associated with EHR implementations. We report our use of an implementation science framework termed the “consolidated framework for implementation research” (CFIR) to evaluate EHR implementations in community settings and discuss its strengths and limitations. By presenting this evaluation, we intend to inform the informatics field of a new and potentially valuable means for qualitatively evaluating community-based EHR implementations. In this paper we describe: 1) the CFIR, its development, and past use; 2) how we adapted the framework to meet the needs of a large-scale evaluation; 3) what effect the framework had on our methods of inquiry (i.e., interview guides) and how it shaped our analysis; 4) the strengths and limitations of using the CFIR in informatics; and 5) suggestions for future use. The focus of this paper is to evaluate the usefulness of the CFIR; a future publication will provide a qualitative analysis of the data we gathered.

Background

Community-based physicians are being incentivized through the Health Information Technology for Economic and Clinical Health (HITECH) Act1–3 to implement electronic health records (EHRs), and many will do so within the next 3–5 years as a result. The rapidity and scale of EHR implementations that will occur, and are already occurring, within the United States demand robust evaluations that promote best practices and mitigate unintended consequences such as threats to patient safety or provider dissatisfaction. Indeed, EHR implementations are disruptive, costly, and resource intensive, and as Ash and Bates point out, effective implementations hinge on a variety of personal, organizational, environmental, and technological factors. Therefore, a primary informatics challenge is applying rigorous evaluative methodologies that address the multi-faceted nature of EHR implementations, particularly on a large scale. We report on our novel application of an implementation science evaluation framework to gain insights from multiple stakeholder perspectives on community-based EHR implementations across New York State (NYS). Furthermore, we discuss the framework’s applicability to biomedical informatics initiatives.

Prior to the passage of the HITECH Act, New York State provided an unprecedented level of state funding to support EHR implementations within grantee communities. Grantees, also known as Community Health Information Technology Alliances (CHITAs), were offered matching state funds to implement and link interoperable commercial EHRs in participating community practices. As grantees carried out their implementations, our research team identified a unique and valuable opportunity to evaluate the effects and effectiveness of EHR implementations in community settings. Our research team is made up of members of the Health Information Technology Evaluation Collaborative (HITEC). HITEC is comprised of four research institutions across New York State (Weill Cornell Medical Center, Columbia University, State University of New York at Albany, and State University of New York at Rochester) and is tasked with evaluating New York’s efforts at implementing interoperable EHRs and health information technology (HIT).

To better describe our efforts to adapt and apply an implementation science framework to EHRs and informatics, it is necessary first to describe the discipline of implementation science itself and second to describe the framework derived from that discipline. We referred to Stetler et al.’s definition of “implementation” to gird our basic understanding. According to Stetler et al.,4 an implementation depends on “identification of barriers and action steps to reduce or overcome them,” and identifies the “necessary factors and action steps to foster success.”4 Implementation science is an interdisciplinary field interested in the ways people face, and attempt to systematically address, the challenges associated with implementing a new intervention.5 Its goal is “to promote the systematic uptake of clinical research findings and other evidence-based practices into routine practice, and hence to improve the quality and effectiveness of health care.”6 The discipline offers multiple implementation models with which to frame evaluations.

The Consolidated Framework for Implementation Research (CFIR) is a synthesis of multiple implementation science models developed by health services researchers within the Veterans Administration.7 The CFIR was developed from an extensive implementation science literature, including implementations of clinical best practices and health information technologies. Furthermore, the CFIR was designed to be not solely summative (“what works?”) but also formative (“what works, where, and why?”).7 Its authors argue that providing formative outcomes as well as summative outcomes lends greater trustworthiness to the claim that lessons learned can be carried over into “other settings”. Formative evaluations must reflect the perceptions, attitudes, and truths of the people involved in order to “elicit, construct, and interpret” findings that describe the complexities associated with implementations. (See Figure 1) Lastly, the CFIR is a “meta-theoretical”7 model that synthesizes constructs and aligns definitions so that evaluation results may be more transferrable among health care organizations as well as to other fields of study.

Figure 1: Considered approaches to evaluating implementations

The CFIR draws on available evidence from the field of implementation science to inform 5 major areas for evaluation (“domains”) and their subordinate variables (“constructs”). We list the 5 domains and example constructs for the readers’ benefit; however, the constructs are too numerous to list in full. (See Figure 2) For a complete listing of the CFIR domains and constructs, we refer readers to the original publication.7

Figure 2: CFIR Domains and example constructs7

Methods

The purpose of the study for which we used the CFIR was to explore the perceived value of New York State’s CHITA initiative among health care organization administrators, information technology staff, healthcare providers, NYS government leaders, and EHR vendors. The purposes of employing a qualitative approach were: 1) to convey socio-technical aspects related to EHR implementation and uptake; 2) to describe “ecological” aspects of the process that may inform future state and national EHR projects; and 3) to identify characteristics of EHR implementations that may be standardized, quantified, and therefore generalized.

To evaluate the effects and effectiveness of EHR implementations among NYS grantees, we adapted an implementation science framework known as the consolidated framework for implementation research (CFIR) in order to qualitatively assess EHR implementation barriers and facilitators. To our knowledge, this was a novel attempt to apply an implementation science framework to a large-scale informatics intervention. Our approach to adapting and applying the CFIR was guided by the noted qualitative researchers Crabtree and Miller.8

Step 1: Selecting the Framework

Our goal was first to locate a model focused on implementation factors to frame our investigation, so as to enhance the trustworthiness of the qualitative results associated with EHR implementations. Through reviews and discussions of two models from informatics9 and management information systems (MIS),10 we reached group consensus that those models were not directly applicable because they were predominantly IT-centered rather than implementation-centered. In addition, these models did not reflect the complexity we had encountered while working with grantees engaged in community-wide EHR implementations. Finally, group consensus was that the models were prescriptive in nature, whereas the goal of the research was descriptive. We therefore sought literature from the field of health services research, more specifically the implementation science discipline, to guide our qualitative inquiry and frame the analysis. From this literature review, we encountered the Consolidated Framework for Implementation Research (CFIR).

We perceived the CFIR as having two primary advantages for the research. First, the CFIR offered a wide range of domains and constructs that held promise of representing the depth and breadth of data we expected to collect from the multiple stakeholders. Second, the CFIR had been published in the quality improvement literature,11 was being reported in “in press” manuscripts,12 and had resources available through an online professional network of researchers that could provide guidance if required.13–15 In addition, we discussed potential disadvantages associated with the CFIR. Our primary concern was that the CFIR was developed to evaluate a single organization rather than a community; therefore, there could be challenges adapting it across settings. The selection of the CFIR was additionally agreed upon by non-HITEC qualitative research consultants, thereby demonstrating rigor8 in our approach and lending greater trustworthiness to the decision to employ the framework.

Step 2: Developing the Data Gathering Tools and Code Book

After deciding to use the CFIR, we went through a multi-step process to adapt it into an EHR implementation-specific code book. The CFIR domains (intervention characteristics, outer setting, inner setting, characteristics of individuals, and process) were operationalized by framing the questions and prompts in the semi-structured interview guides and focus group guides around them. The scope of the study was to gather data through digitally recorded 1-hour interviews and focus groups, so the framework was not tailored for field observation guides.

After an initial series of interviews with study participants, our task was to build a codebook based on the domains and constructs. The CFIR’s authors suggest using scenario-dependent constructs, so after iterative discussions we combined constructs, renamed them, or removed them altogether if they were deemed inapplicable to the domain. Members of the multi-disciplinary research team individually applied the CFIR domains and constructs as codes to the initial set of participant interviews and then regrouped to discuss the framework’s fit to the data and to compare the observed advantages and disadvantages with those we had initially perceived. We documented many challenges, such as determining how to distinguish “Outer Settings” from “Inner Settings” because the project included multiple stakeholders at multiple levels: state, community, practice, and individual. Through iterative discussions we defined how stakeholders were to fit within the CFIR framework. (See Table 1) In addition, the team agreed to develop HITEC-specific domains or constructs if it encountered unexpected entities or phenomena as the study progressed. The research team again consulted a third party to critique the study design as well as to review the preliminary and final codebooks. Outside input ensured that our perspectives were more rigorously tested than they would have been otherwise.

Table 1: Mapping CFIR Domains to HITEC Study-specific Areas

CFIR Domain | Study-Specific Areas
Intervention characteristics | EHR
Outer setting | Federal and State entities
Inner setting | Communities, EHR Vendors, Health Information Service Providers, Regional Extension Centers, Regional Health Information Organizations
Characteristics of individuals | Practices as well as those who work in practices
Process | Supporting EHR implementations before and after go-live
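
To make the adaptation concrete, the following is a minimal, hypothetical sketch (not taken from the study’s materials) of how the mapping in Table 1 could be represented as a codebook data structure for coding transcripts; the example constructs listed under each domain are drawn from the published CFIR7 and are illustrative only.

```python
# Illustrative sketch (not from the study): the adapted CFIR codebook from
# Table 1 represented as a simple mapping used when coding transcripts.
# The example constructs under each domain are illustrative, not exhaustive.

CODEBOOK = {
    "Intervention characteristics": {
        "study_area": "EHR",
        "example_constructs": ["Adaptability", "Complexity"],
    },
    "Outer setting": {
        "study_area": "Federal and State entities",
        "example_constructs": ["External policy and incentives"],
    },
    "Inner setting": {
        "study_area": ("Communities, EHR vendors, health information service "
                       "providers, regional extension centers, RHIOs"),
        "example_constructs": ["Networks and communications", "Readiness for implementation"],
    },
    "Characteristics of individuals": {
        "study_area": "Practices and the people who work in them",
        "example_constructs": ["Knowledge and beliefs about the intervention"],
    },
    "Process": {
        "study_area": "Supporting EHR implementations before and after go-live",
        "example_constructs": ["Planning", "Engaging", "Reflecting and evaluating"],
    },
}
```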

Step 3: Using the CFIR to Guide Analysis

We reviewed transcribed data using a two-pronged approach: deductively comparing our results to the CFIR framework, and inductively analyzing the data to generate themes that the framework did not address.16 Data collection and analysis occurred iteratively throughout the series of interviews and focus groups, as is appropriate for qualitative research. Researchers worked in pairs for all interviews and focus groups, using the CFIR as the official codebook. Each researcher within a pair separately coded a transcript, and the two then met to discuss, debate, and determine which CFIR domains and constructs would be applied to particular sections of transcribed interviews and focus groups. The team created a new code for a section of text only if all members of the research team agreed that no CFIR domain or construct adequately represented the data. Therefore, at the end of the study we would have both CFIR-coded and non-CFIR-coded transcribed text. This can be considered a deductive-inductive approach to qualitative analysis.

Interview and focus group recordings were transcribed and imported into ATLAS.ti (version 6) qualitative analysis software. The researchers coded all transcripts, met as a full group to resolve coding discrepancies, and reached consensus on the codes necessary to describe concepts found in the data. Codes were then compiled and discussed further to identify patterns and larger themes, and ultimately to arrive at “domains” within which we list lessons learned.
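
As an illustration of the paired-coding step described above, here is a minimal sketch, written under our own assumptions rather than reflecting the study’s actual tooling, of how two coders’ independent code assignments could be compared so that disagreements are flagged for consensus discussion; the segment identifiers and code labels are hypothetical.

```python
# Minimal sketch (not the study's actual software) of the paired-coding step:
# two researchers independently assign codes to transcript segments, and
# disagreements are flagged for discussion before consensus coding.

from typing import Dict, Set, Tuple


def find_discrepancies(coder_a: Dict[str, Set[str]],
                       coder_b: Dict[str, Set[str]]) -> Dict[str, Tuple[Set[str], Set[str]]]:
    """Return the segments for which the two coders' code sets differ."""
    discrepancies = {}
    for segment in sorted(set(coder_a) | set(coder_b)):
        codes_a = coder_a.get(segment, set())
        codes_b = coder_b.get(segment, set())
        if codes_a != codes_b:
            discrepancies[segment] = (codes_a, codes_b)
    return discrepancies


# Hypothetical example: both coders coded the same two transcript segments.
coder_a = {"seg_01": {"Inner setting: Networks and communications"},
           "seg_02": {"Process: Engaging"}}
coder_b = {"seg_01": {"Inner setting: Networks and communications"},
           "seg_02": {"HITEC: Relationships - Grantee/Vendor"}}

for seg, (a, b) in find_discrepancies(coder_a, coder_b).items():
    print(f"{seg}: coder A {sorted(a)} vs coder B {sorted(b)} -> discuss")
```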

Step 4: Evaluating the Codebook

We took a number of steps throughout the study to build trustworthiness of the CFIR’s application to EHR implementations specifically, and to informatics in general. We iteratively held internal meetings, HITEC meetings, third-party consults, and member checks to evaluate the codebook. We attained greater trustworthiness of the domains and lessons learned by presenting our results back to five CHITA representatives, a method known as “member checking”, and subsequently received voluntary feedback from three.

Results

We conducted semi-structured interviews with 29 key informants, including administrators, information technology staff, healthcare providers, NYS government leaders, and EHR vendors. Furthermore, we recruited 39 providers for 5 in-person focus groups from 5 grantees across NYS. Grantees used a total of 6 different interoperable vendor-based EHRs. (See Table 2) We conducted interviews and focus groups from November 2010 to May 2011.

Quantitative Results

We first provide a quantitative review of the domains in order to give the reader an understanding of how the CFIR helped us organize the transcribed text into coded segments. (See Table 3) By the end of our research, we had accumulated a total of 2035 coded statements within the framework’s 5 domains. Of the 2035 coded statements, 1768 (87%) were defined and described by the CFIR and 267 (13%) were developed and defined by HITEC. The 13% of statements that we integrated into the CFIR framework were captured by 5 constructs of our own making. For example, we added one code named “expectations gap” to the Process domain to capture comments that compared expected versus actual outcomes.

Table 2: Grantees used up to 6 different Vendor-based EHRs

Grantee | Supported Vendor | Main Practice Type
A | A, B, C | Community practices
B | D, E | Community practices
C | A, B | Federally Qualified Health Centers
D | A | Community practices and a resident family practice
E | B, F | Community practices

The codes from CFIR’s Inner Setting domain adequately captured 100% of all 594 coded statements from the interviews and focus groups. In contrast, the Intervention Characteristics domain captured 73% of the coded statements.
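
To show how these coverage figures follow from the per-domain counts, the brief sketch below recomputes each domain’s CFIR coverage from the tallies reported in Table 3; the code itself is illustrative and was not part of the study.

```python
# Sketch using the tallies reported in Table 3: per-domain CFIR coverage is
# the number of CFIR-coded segments divided by all coded segments
# (CFIR plus HITEC-defined) in that domain.

TABLE_3 = {  # domain: (CFIR-coded segments, HITEC-defined segments)
    "Intervention Characteristics": (341, 128),
    "Outer Setting": (215, 28),
    "Inner Setting": (594, 0),
    "Characteristics of Individuals": (184, 33),
    "Process": (434, 78),
}

total_cfir = sum(cfir for cfir, _ in TABLE_3.values())
total_all = sum(cfir + hitec for cfir, hitec in TABLE_3.values())

for domain, (cfir, hitec) in TABLE_3.items():
    coverage = 100 * cfir / (cfir + hitec)
    print(f"{domain}: {coverage:.0f}% of {cfir + hitec} segments")

print(f"Overall: {100 * total_cfir / total_all:.0f}% of {total_all} segments")
# Overall: 87% of 2035 segments
```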

Our research team developed constructs to describe a variety of topics that we agreed were not adequately addressed by the CFIR. For example, relationships between grantees and their EHR vendors were a common topic in interviews and focus groups. In total, 960 coded segments were assigned to HITEC-derived constructs. (See Table 4)

Table 4: Counts of coded statements organized by HITEC-developed construct

HITEC-defined Construct | Count of Coded Segments
Relationships: Grantee/Vendor | 177
Future | 144
Accomplishments | 87
Sustainability | 75
Training | 72
Relationships | 60
Current Role | 58
Time | 56
Learning Organization | 52
Interviewee Background | 50
Workflow | 48
Other | 45
Communication | 36
Total | 960

Qualitative Results

The CFIR helped our research team to identify and describe the results in four ways: 1) the levels of stakeholders; 2) the organizational and governance structures of the levels; 3) the nature of the relationships within and across levels; and 4) the emerging ecosystem that results from EHR implementations. We define and describe each in the following sections.

1. Levels

A major characteristic of the CFIR is its design around levels of implementation, which in our study were the state level, the grantee level, and the practice level. The levels prompted our research team to consider, identify, and define an interdependent hierarchy of stakeholders, each of which has interests to uphold. For example, stakeholders from the “Outer Setting” (New York State), the “Inner Setting” (EHR vendors), and “Individuals” (practices) domains all discussed the importance of implementing interoperable EHRs and its long-term benefits. However, challenges occurred within each of the levels, such as the State getting EHR vendors to operationalize interoperability, EHR vendors waiting for interoperability standards from the federal government, and practices attempting to obtain adequate EHR training and support. Although stakeholders at different levels expressed a similar global goal, that goal was expressed differently within each level.

2. Structures

By framing the analysis around levels, the CFIR structured the results to reveal a hierarchy that relied on policy and organizational structures to enable and support EHR implementations. Outer Settings (New York State) and Inner Settings (such as grantees) interacted with each other through a variety of mechanisms. One such interaction was through the New York eHealth Collaborative (NYeC), a public-private venture that enabled stakeholders from different levels (providers, EHR vendors, policy makers, and more) to discuss the challenges they were facing in implementing EHRs. NYeC also provided a space in which the stakeholders could share best practices and develop consensus around positive next steps in EHR implementation processes. Formal structures such as NYeC provided opportunities to move the levels forward through the process.

3. Relationships

A critical aspect of the research that emerged from the analysis was how the structures supported relationships among entities both within and across levels. Practices (Individuals) embarking on implementations found themselves highly dependent on the quality of their relationships with their grantees, EHR vendors, and/or health information service providers. These relationships could manifest themselves in formal business arrangements such as contracts, but more striking to our team were participants’ descriptions of the importance of building personal relationships. These relationships were in part fostered by the very structures that New York State had put in place by way of NYeC.

4. Ecosystem

The CFIR provided a model and language for the research team to interpret the data with a system wide view of EHR implementations. This aspect was critical in the team’s ability to consider multiple viewpoints from a diverse group of study participants and operationalize language to discuss the many challenges study participants were facing. Rather than focusing on a microcosm of interactions between a practice and an EHR vendor, for example, the CFIR promoted a systems approach that directed analysis on levels, structures, and relationships. As a result, our team began to recognize that an entity such as a practice undergoing an EHR implementation could initiate a chain of events that impacted others.

Challenges to Applying the CFIR

During the study we repeatedly encountered questions, and engaged in dialogs, regarding the appropriateness of the levels. For example, it remained a challenge to decide whether “Individual Characteristics” applied to practices or to providers. In addition, subcontracts between grantees and vendors could blur the boundaries between them and their interactions with entities at other levels, such as the State. Although the attribution of any boundary between levels is somewhat arbitrary, the research team at times found it difficult to categorize an entity into only one group. In the end, we resolved the issue by allowing the CFIR to have permeable boundaries, whereby an EHR vendor might fit into the “Inner Setting” in one context and be labeled as something else in a different context.

Discussion

Using the CFIR enabled our research team to manage a very large amount of qualitative data and analyze it predominantly within one evaluative framework. The quantitative results demonstrate that the CFIR captured 87% of the coded segments. Although it is impossible to state that this percentage is high, because there is no objective cross-study comparison, we believe it nonetheless provides some insight into the CFIR’s breadth, depth, and ultimately its trustworthiness. The constructs that we integrated into the framework captured the remaining 13% of the coded statements. Four of the 5 HITEC-developed constructs were highly specific to the particularities of our project. However, we suggest that the example provided in the Results section (“expectations gap”) may reflect a broader concern for many implementations and therefore may be worth formally adding to the CFIR.

We attributed 960 coded segments to HITEC-derived constructs. A comparison of the number of CFIR-coded segments (1768) to HITEC-derived coded segments (960) demonstrates that a non-trivial amount of material addressed issues we determined did not fit within the CFIR. The predominant codes of our own making concerned relationships, both between a grantee and its EHR vendor (n=177) and within grantees or organizations (n=60). These two codes resulted from our perceived need for codes that captured the degree to which personal relationships, both inter-group and intra-group, factored into EHR implementations. Other HITEC-developed codes that, in our estimation, the CFIR did not capture seemed particular to EHR implementations, such as training (n=72) and workflow (n=48). The need to develop our own constructs with which to organize the coded segments may provide some basis for expanding the CFIR’s number and types of constructs, whether for EHR implementations specifically or more generally.

We believe the flexibility of the CFIR model enables it to manage both the breadth and depth of qualitative data. The CFIR’s developers encourage site-specific use of the framework and invite each research team to pick and choose the constructs it wishes to operationalize in its codebook. We found this beneficial given the unique nature of NYS’s implementation strategy, and other evaluators may likewise appreciate being able to adapt the CFIR to their own EHR implementations. The CFIR approach is therefore a two-edged sword: it provides flexibility for a particular study, but it may make comparisons across two or more studies difficult, since each may use a different permutation of CFIR domains and constructs. The tradeoff may not be significant given that qualitative results are not often considered generalizable, but rather “transferrable”.17 However, the framework’s usefulness may be enhanced if there were recommended methods for cross-study comparison, and if the authors provided further guidance as to whether and how project-specific codes should be added to the CFIR to guide analysis.

The CFIR framework may be valuable for other large-scale qualitative evaluations of implementations, whether or not they are related to EHRs. The CFIR is qualitatively different from the implementation models derived from the informatics discipline. From our internal conversations among HITEC researchers and outside consultants, we have come to the opinion that informatics-based models tend to center on the technology and people’s use of the technology rather than on the nature of the implementation itself. More recent informatics publications, such as Lorenzi et al.18 and Banas et al.,19 have focused on EHR implementations in efforts to promote best practices, and we encourage more work in the area. This work provides a major contribution to evaluation methods in that, to our knowledge, no one has previously attempted to qualitatively analyze the impact of EHR implementations on such a large scale. With the CFIR as a guide, we have been able to analyze a large volume of data based on one overarching framework.

We believe that future informatics and implementation science researchers should look for opportunities to employ the CFIR for EHR and other health information technology implementations. Additional work will demonstrate potential strengths and weaknesses of the CFIR that will contribute to its further refinement. We encourage and welcome critiques from researchers as to the applicability and trustworthiness of the constructs that HITEC developed for this project. Qualitative research only flourishes through robust and active discourse and debate among researchers.

Limitations

As previously mentioned, a limitation of this work is that the quantitative results describing the numbers of domains and constructs are not generalizable. Our intention was to give the reader a sense of the research project’s scope and to demonstrate the nature of the CFIR beyond a purely qualitative assessment. As is the case with most qualitative research, the results are transferrable and therefore have applicability to settings that are similar in context.17

Conclusion

As community-based physicians are incentivized to implement electronic health records, it is critical to evaluate and disseminate best practices for implementation. To address that need, we reported on the novel use of an implementation science framework, the consolidated framework for implementation research (CFIR), to evaluate EHR implementations in community settings, and we discussed its strengths and limitations. In this paper we described the process that led us to the CFIR, its use, and a quantitative and qualitative review of the results it produced. This work provides value in two ways: 1) it introduces to the informatics field the use of an implementation science model for evaluating community-based EHR implementations; and 2) it reviews the methodology used to evaluate a framework’s usefulness and applicability. Our findings contribute to the nascent body of research intended to mitigate the negative effects and promote the optimal effects associated with EHR implementations.

Table 3: Counts of CFIR and HITEC defined coded statements

CFIR Domain | CFIR Coded Segments | HITEC-defined Coded Segments | Total Coded Segments | CFIR Coded Segments as Percentage of Total
Intervention Characteristics | 341 | 128 | 469 | 73%
Outer Setting | 215 | 28 | 243 | 88%
Inner Setting | 594 | 0 | 594 | 100%
Characteristics of Individuals | 184 | 33 | 217 | 85%
Process | 434 | 78 | 512 | 85%
Domain Total | 1768 | 267 | 2035 | 87%

Acknowledgments

This study was supported by the New York State Department of Health (NYS contract number C023699).

References

1. Blumenthal D. Launching HITECH. New England Journal of Medicine. 2010;362(5):382–385. doi:10.1056/NEJMp0912825.
2. Blumenthal D. Stimulating the Adoption of Health Information Technology. New England Journal of Medicine. 2009;360(15):1477–1479. doi:10.1056/NEJMp0901592.
3. Blumenthal D, Tavenner M. The “meaningful use” regulation for electronic health records. The New England Journal of Medicine. 2010;363(6):501–504. doi:10.1056/NEJMp1006114.
4. Stetler CB, Mittman BS, Francis J. Overview of the VA Quality Enhancement Research Initiative (QUERI) and QUERI theme articles: QUERI Series. Implementation Science. 2008;3:8. doi:10.1186/1748-5908-3-8.
5. MacFarlane A, Clerkin P, Murray E, et al. The e-health implementation toolkit: qualitative evaluation across four European countries. Implementation Science. 2011;6(1):122. doi:10.1186/1748-5908-6-122.
6. Eccles M, Mittman BS. Welcome to Implementation Science. Implementation Science. 2006;1(1):3.
7. Damschroder L, Aron D, Keith R, Kirsh S, Alexander J, Lowery J. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science. 2009;4(1):50. doi:10.1186/1748-5908-4-50.
8. Crabtree BF, Miller WL. Doing qualitative research. Sage Publications; 1999.
9. Kukafka R, Johnson SB, Linfante A, Allegrante JP. Grounding a new information technology implementation framework in behavioral science: a systematic analysis of the literature on IT use. Journal of Biomedical Informatics. 2003;36(3):218–227. doi:10.1016/j.jbi.2003.09.002.
10. DeLone WH, McLean ER. The DeLone and McLean model of information systems success: a ten-year update. Journal of Management Information Systems. 2003;19(4):9–30.
11. Alexander JA, Hearld LR. The Science of Quality Improvement Implementation. Medical Care. 2010;48(12). doi:10.1097/MLR.0b013e3181e1709c.
12. Damschroder L. CFIR resources. 2010.
13. Damschroder LJ, Damush T, editors. QUERI Implementation Research: CFIR Implementation Framework with Application to the VISN 11 Stroke Collaborative. 2009.
14. Damschroder LJ, Stetler CB, editors. Theoretical frameworks: Rationale, strengths and limitations in enhancing successful implementation. 2009.
15. Damschroder LJ, Lowery J. Conducting Formative Evaluations Using a Consolidated Framework for Implementation Research. 2008. http://www.hsrd.research.va.gov/for_researchers/cyber_seminars/archives/qi-061908.html.
16. Glaser B, Strauss A. The Discovery of Grounded Theory. New York: de Gruyter; 1967.
17. Patton M. Qualitative research and evaluation methods. Thousand Oaks, CA: Sage Publications; 2002.
18. Lorenzi NM, Kouroubali A, Detmer DE, Bloomrosen M. How to successfully select and implement electronic health records (EHR) in small ambulatory practice settings. BMC Medical Informatics and Decision Making. 2009;9:15. doi:10.1186/1472-6947-9-15.
19. Banas CA, Erskine AR, Sun S, Retchin SM. Phased implementation of electronic health records through an office of clinical transformation. Journal of the American Medical Informatics Association. 2011;18(5):721–725. doi:10.1136/amiajnl-2011-000165.
