Journal of the American Medical Informatics Association: JAMIA. 2008 May-Jun;15(3):297–301. doi: 10.1197/jamia.M2584

Formative Evaluation: A Critical Component in EHR Implementation

Julie J McGowan, Caitlin M Cusack, Eric G Poon
PMCID: PMC2410002  PMID: 18308984

Abstract

This Viewpoint paper has grown out of a presentation at the American College of Medical Informatics 2007 Winter Symposium, the discussion that followed, and several activities that have coalesced around an issue that most informaticians accept as true but that is not commonly considered during the implementation of electronic health records (EHRs) outside academia or research institutions. Successful EHR implementation is facilitated, and sometimes determined, by formative evaluation, which usually focuses on process rather than outcomes. With greater federal funding for the implementation of electronic health record systems in health care organizations unfamiliar with research protocols, the need for formative evaluation assistance is growing. Such assistance, in the form of the tools and protocols necessary to conduct formative evaluation and thereby achieve successful EHR implementations, should be provided by practicing medical informaticians.

Background

With the publication of To Err is Human 1 and Crossing the Quality Chasm 2 by the Institute of Medicine in 2000 and 2001, there has been increased interest in promoting widespread adoption of electronic health records (EHRs) in an effort to improve patient safety and enhance quality of care. While there has been little research to tie specific patient outcomes to the use of health information technology, 3 a healthier population is the ultimate goal of most of the federal and state initiatives surrounding EHR implementation.

Over three decades of medical informatics research has been funded by the National Institutes of Health and the protocols for these investigational studies have set the standard for research in health information technology. The National Library of Medicine has been the leader in promoting such research and continues with such programs as the Integrated and Advanced Information Management Systems (IAIMS) Test & Evaluation Grants. The essential requirement for funding all such grants and contracts is a solid research design.

While the Agency for Healthcare Research and Quality (AHRQ) has primarily focused on health services research, it, too, has funded grants in which health informatics applications have been the interventions. The basic premise of all of these investigator-initiated grants has been the application of standard principles of quantitative and qualitative research.

Recently, to help foster adoption of health information technology, a number of federal initiatives have provided grant and contract monies to health care institutions to facilitate implementation and use of EHRs. In every case, evaluation of the projects was required. However, most of the recipients of these funds have been small institutions with little or no background in the kind of research commonly found in health services or medical informatics grants.

Initially, many of the funding agencies hoped that a strong relationship could be found between EHR implementation and improved health care outcomes. However, the lack of research expertise of the grant recipients, and the difficulty in directly linking health care outcomes to EHR implementation, have precluded the anticipated results. This does not mean, however, that there is not a strong rationale for evaluation of the implementation process. Without successful implementations it will be impossible to do the needed research linking EHRs and patient health outcomes. Formative evaluation, defined here as an iterative assessment of a project's viability through meeting defined benchmarks, can mean the difference between success and failure in EHR implementation. 4

While a number of studies have examined factors facilitating and inhibiting EHR implementation, 5–7 until recently there was no widespread analysis of evaluation issues. The National Resource Center (NRC) for Health Information Technology was funded by AHRQ to assist its grant recipients with the implementation and evaluation of their projects as part of AHRQ's 2004 and 2005 $166 million health information technology (HIT) grant programs. The NRC formed an evaluation team specifically charged with assisting with evaluation plan development, evaluating the evaluation plans, and integrating the evaluation findings.

In support of the EHR implementation grantees, the evaluation team held two regional evaluation workshops, offered office hours to assist each of the grantees with their evaluations, and conducted both virtual and in-person site visits to consult on the development of the evaluation plans. Following this multi-stage process, the evaluation team recognized a significant difference between the evaluation plans of the academic or research-oriented grantees and those of grantees from small and/or rural health care environments. As a result of this disparity, an evaluation toolkit was developed to help those unfamiliar with the basic precepts of evaluation to develop their plans. 8

The submitted evaluation plans were assessed using a tool specifically developed to examine goals, measure selection, study design, feasibility, and analysis. Results of this process revealed that, even with the guidance of the evaluation toolkit and the assistance provided by the evaluation team, the non-research-oriented grantees had problems appropriately scoping their evaluation plans, linking their EHR implementation goals with their evaluation goals, choosing suitable metrics, and developing hypotheses, data collection plans, and analysis plans.

Formative evaluation is an essential component of health information technology implementation. However, there is little understanding of the value of formative evaluation among many health care facilities now facing the need to implement EHR systems. There is even less understanding of how to accomplish the most rudimentary formative evaluation.

While randomized controlled trials are considered the gold standard for research in the medical sciences, there are a number of other forms of evaluation that can serve to advance health care and inform decision makers in this dynamic environment. A presentation at the Symposium on Community-based Health Information Outreach, and its subsequent treatise, suggested that evaluation of community-based information interventions should focus on hitting the single rather than the home run. 9 This metaphor can be applied to formative evaluation in EHR implementation.

Questions may arise about the need to spend precious resources devoted to EHR implementation on the formative evaluation process. The rationale is found in the goals of the EHR implementation and the need to demonstrate that these goals have been met. Key stakeholders have real goals and the formative evaluation needs to be crafted around the desired outcomes of these stakeholders. This understanding of the need for evaluation of health care information systems is not new but deserves reconsideration. 10–11 The Friedman and Wyatt monograph, considered a standard in the field, has been released as a second edition. 12 There are three basic areas of formative evaluation that should be considered during most implementations. These are the effectiveness of the technology implementation itself, the factors that relate to personal and organizational issues, and the financial impact.

Evaluation of the Implementation of EHR Technology

Organizational issues need to drive any successful EHR implementation; however, a poor choice of technology, and the perception of failure that results, is the first step toward an unsuccessful project. System selection should be driven by the desired outcomes, and these desired outcomes can provide an initial framework for the development of a formative evaluation plan for the technology. Formative evaluation of the technology can help support and promote the technology adoption process. Obviously, the key stakeholders need to drive the process and determine the critical success and failure factors.

Technology evaluation can be centered on many different components, including reliability, performance measures, standards and interoperability, customization tradeoffs, usability, and usefulness. The initial question that must be asked is whether the specifications of the chosen EHR system meet the needs of the institution. Is the throughput time acceptable? Does the system operate as intended, assuming that the vendor promises and the client expectations are synchronized?

From the point that a system becomes operational, data are collected about it. For a solid formative evaluation of the EHR system implementation, specific metrics need to be chosen and success criteria defined to determine whether or not the system is meeting expectations. If wide-scale usage is a priority, actual transaction counts can be collected; a subsequent steady increase in those counts would suggest that system use reflects acceptance among the users rather than initial curiosity about a new technology. If order turnaround time is a priority, measuring the time for orders to be filled prior to implementation and comparing it at various points after the system is fully operational gives a good indication of whether the system is meeting its objectives.
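To make these two measurements concrete, the following minimal sketch (not part of the original paper) assumes hypothetical inputs: monthly transaction counts exported from the EHR, and order-fill times, in minutes, sampled before and after go-live.

```python
"""Minimal sketch (illustrative only): two simple technology metrics.

Assumes hypothetical inputs: monthly transaction counts exported from the
EHR and order-fill times (minutes) sampled before and after go-live.
"""
from statistics import mean


def usage_trend(monthly_transactions: list[int]) -> float:
    """Average month-over-month change in transaction volume.

    A sustained positive trend suggests adoption beyond initial curiosity.
    """
    deltas = [later - earlier for earlier, later in
              zip(monthly_transactions, monthly_transactions[1:])]
    return mean(deltas) if deltas else 0.0


def turnaround_change(pre_minutes: list[float], post_minutes: list[float]) -> float:
    """Difference in mean order turnaround time (post minus pre).

    A negative value indicates orders are being filled faster after go-live.
    """
    return mean(post_minutes) - mean(pre_minutes)


if __name__ == "__main__":
    # Hypothetical numbers for illustration only.
    print(usage_trend([120, 180, 260, 310, 350]))           # 57.5 transactions/month
    print(turnaround_change([95, 110, 102], [70, 65, 80]))  # about -30.7 minutes
```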

Sometimes a selling point of a system is not something that an institution deems a priority. If so, that feature should not be evaluated. An example of this might be Computerized Provider Order Entry (CPOE) functionality. If widespread adoption of CPOE (with its implied promise of enhanced patient safety) is not a critical success factor for the institution, then its utility should not be evaluated, even though several early adopters think that it is important.

Two areas in which the technology might introduce problems are interoperability with other systems and the choice of specific customizations that might inhibit full functionality of the global EHR system. Formative evaluation in these areas needs to begin by assessing workflows and outcomes prior to the implementation and reassessing them after enough time has passed to account for the learning curve that accompanies any new system implementation. Formative evaluation can provide the data necessary to determine whether the system or component implementation has improved or inhibited the process leading to desired outcomes; it can also provide justification to modify or remove a dysfunctional system or component.

Evaluation of the Organizational Aspects of EHR Implementation

Rarely is EHR implementation failure due solely to issues with the technology. If an EHR system is chosen based on the needs of the organization and the stated goals of the key stakeholders, with due consideration of financial implications, there should be no reason for failure. However, any system must be used efficiently and effectively by its end users. If the system introduces new problems, or if the users do not feel that they have been part of the process of selecting it, then the system is at risk of failure.

Formative evaluation of organizational aspects of a proposed EHR implementation needs to begin prior to the system selection. An environmental analysis of the organization that includes current process and workflow assessment, readiness to adopt new technologies, belief systems, and other human factors concepts will identify the potential issues that could derail any EHR implementation.

The pre-implementation evaluation accomplishes two things. First, it engages users in the implementation process and can garner buy-in among all future system users. This is accomplished by soliciting feedback and acting upon suggestions. Second, it can be used to identify any organizational constructs that could serve as roadblocks to successful implementation. For instance, if there is a major problem with a component of workflow identified prior to automation, implementing the technology might actually increase dysfunction rather than solve the problem. Similarly, if a readiness assessment determines that the institution cannot gain from implementation because the staff is not prepared to accept the new technology, then the staff might attempt to undermine the implementation.

Assuming that the potential users of the system have played a role in its design and selection, certain aspects of the system, such as usefulness and usability, should be evaluated as part of a formative improvement process. Simple Likert-type survey instruments can be employed, with the feedback used to drive system modifications.
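As a hedged illustration (the item wording and responses below are invented, not drawn from the NRC toolkit), a few lines of Python can summarize Likert-type usability responses and flag items that warrant system modification.

```python
"""Minimal sketch (illustrative only): summarizing Likert-type usability items.

Responses use a 1-5 agreement scale; the item text and scores are invented.
"""
from statistics import mean

responses = {
    "The system is easy to use": [4, 5, 3, 4, 2, 5],
    "The system fits my workflow": [2, 3, 2, 4, 3, 2],
}

for item, scores in responses.items():
    avg = mean(scores)
    pct_agree = 100 * sum(score >= 4 for score in scores) / len(scores)
    # Low means or low agreement rates flag candidates for system modification.
    print(f"{item}: mean={avg:.1f}, agree/strongly agree={pct_agree:.0f}%")
```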

The EHR system itself will provide data on which decisions regarding organizational acceptance can be made. If there is an increase in the number of system users, a decrease in the time it takes for an order to be filled, or a decrease in the time it takes for a discharge summary to be sent to a referring provider, then the EHR system can be shown to be meeting the above-referenced goals. However, if the data show that time inefficiencies have been introduced or that the number of users is not meeting benchmarks, then action can be taken to address the problem area and rapidly turn around what could become a significant issue for the organization.
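One hedged way to operationalize this monitoring (the metric names and benchmark values below are hypothetical, not from the paper) is to compare each observed value against the benchmark agreed on with stakeholders and flag shortfalls for corrective action.

```python
"""Minimal sketch (illustrative only): flagging metrics that miss benchmarks.

Metric names and benchmark values are hypothetical; in practice they would
come from the stakeholders' stated implementation goals.
"""

# metric name -> (observed value, benchmark, True if higher is better)
metrics = {
    "active users per week": (140, 200, True),
    "order turnaround (minutes)": (72, 60, False),
    "discharge summary delivery (hours)": (20, 24, False),
}

for name, (observed, benchmark, higher_is_better) in metrics.items():
    met = observed >= benchmark if higher_is_better else observed <= benchmark
    status = "on track" if met else "needs corrective action"
    print(f"{name}: observed={observed}, benchmark={benchmark} -> {status}")
```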

Evaluation of Financial Factors in EHR Implementation

While the costs of the evaluation itself might initially be questioned, demonstrating that a sustainable EHR system is meeting the financial goals of the implementation will guarantee future support. An EHR system is one of the more costly capital expenditures made by any health care organization. While there has been much in the commercial press about the cost savings inherent in the widespread adoption of EHRs due to greater patient safety and improved quality of care, there has been almost no research proving a direct causal link. Unlike other industries, health care has done an extremely poor job of tracking both the direct and indirect costs associated with the implementation of an EHR. As such, organizations have few prior examples to review to help them estimate their own costs in such an endeavor.

Although there is a paucity of information on the subject, health care administrators want to know, from a return on investment (ROI) perspective, how the EHR system will benefit the organization and affect the bottom line. Data on the cost of implementation should be collected from the point at which the decision to implement an EHR system is made. The costs of the actual system, the telecommunications infrastructure, and any physical space renovations are relatively easy to determine. Less obvious, but still essential to understanding the total costs of the system, is the staff time involved in successful implementation. This may center on education but could also include the need to hire additional staff for planning, training, and IT support when these are not provided by the vendor.

Ongoing costs, including system maintenance, telecommunication fees, and dedicated personnel time, are critical to consider when assessing sustainability. However, they should be kept discrete from the initial capital investment in system implementation.

Although calculating return on investment is a highly complex procedure, at the highest level ROI can be determined in two ways. If the initial capital costs were funded by a grant or other outside source, then the process need only compare the financial benefits of the EHR system, such as reduced staff time, against the current costs of operation. However, if the capital costs were borne by the health care organization, then a determination should be made as to whether they should be amortized over a specific period of time and factored into the costs of operation. This latter ROI model will probably not yield the savings envisioned but could generate interesting data that could be leveraged in other ways, such as justification of charges.
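A minimal sketch of these two framings follows; all dollar figures and the five-year amortization period are invented for illustration, and the measured benefits would in practice come from items such as reduced staff time.

```python
"""Minimal sketch (illustrative only): two high-level ROI framings.

All figures are invented; the amortization period is an assumption.
"""


def roi_grant_funded(annual_benefits: float, annual_operating_costs: float) -> float:
    """Capital costs covered by an outside source: compare benefits to operating costs only."""
    return (annual_benefits - annual_operating_costs) / annual_operating_costs


def roi_amortized(annual_benefits: float, annual_operating_costs: float,
                  capital_costs: float, amortization_years: int) -> float:
    """Capital borne by the organization: spread it over a chosen period and add it to costs."""
    annual_costs = annual_operating_costs + capital_costs / amortization_years
    return (annual_benefits - annual_costs) / annual_costs


if __name__ == "__main__":
    print(f"{roi_grant_funded(250_000, 180_000):.1%}")           # about 38.9%
    print(f"{roi_amortized(250_000, 180_000, 400_000, 5):.1%}")  # about -3.8%
```

As the example suggests, folding amortized capital into the cost base can turn an apparently positive return negative, which is consistent with the caution above.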

When assigning value to benefits, some benefits can be calculated directly, while others are more difficult to value. Reduced staff time can be calculated. Proxy measures, such as reduced adverse drug events or even time to treatment, can be assigned a dollar value. However, increased utilization of health care services through increased referrals could be attributable either to improved discharge summary turnaround time to the referring provider or to some other factor unrelated to the EHR implementation.

In attempting formative evaluation of ROI, many measurable events to which a dollar value could be assigned will occur over time. Extrapolating real costs and savings from a relatively small number of events within a discrete time frame might offer the best model for demonstrating the financial impact of the EHR system.
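For example, a hedged back-of-the-envelope extrapolation (the event counts, per-event values, and observation window below are all assumptions) might scale the savings measured in a short window to a yearly estimate.

```python
"""Minimal sketch (illustrative only): annualizing savings from a short window.

Event counts, per-event dollar values, and the window length are assumptions.
"""


def annualized_savings(events_observed: int, window_days: int, value_per_event: float) -> float:
    """Scale savings measured over a discrete window to a yearly estimate."""
    return events_observed * value_per_event * (365 / window_days)


if __name__ == "__main__":
    # e.g., 12 averted adverse drug events in a 90-day window, valued at $3,000 each.
    print(f"${annualized_savings(12, 90, 3_000):,.0f} per year")  # $146,000 per year
```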

Lastly, some thought needs to be given to the mitigation of risk. The direct and indirect financial impact of implementation failure poses one of the greatest potential risks to an organization. Unless hidden costs, such as support for emergency recovery systems and the human capital investment required when poor system implementation causes attrition, are considered throughout the implementation process, the project could encounter serious cost overruns or even fail altogether. Formative evaluation that addresses these issues throughout the implementation process would lessen this risk.

Formative Evaluation and Metrics

To provide a framework for formative evaluation of health information technology, the NRC evaluation team developed an Evaluation Toolkit, 13 initially for the AHRQ grantees but since revised for any organization beginning to implement EHR or other HIT systems. The toolkit provides a starting point for formative evaluation of an EHR implementation. There are, however, other issues around the widespread use of formative evaluation that need to be addressed.

The toolkit is based on a greatly simplified version of a research framework that provides a stepwise process for developing a formative evaluation plan. Any EHR implementation should be driven by the outcome goals formulated by key stakeholders, and these need to be clearly stated. Any formative evaluation plan should begin with evaluation goals that are specifically linked to the EHR implementation goals.

Choice of metrics is a key component of the evaluation. Implementers need to determine what should be measured, based on the areas of formative evaluation described above, and choose appropriate metrics accordingly. Among the categories of metrics are clinical outcome measures, clinical process measures, provider adoption and attitude measures, patient knowledge and attitude measures, workflow impact measures, and financial impact measures. Detailed examples for various types of HIT projects are included in the Evaluation Toolkit.

Other issues addressed by the Evaluation Toolkit include barriers and facilitators to the evaluation, the differences between qualitative and quantitative assessment methods, and sample size and power. Although many of these concepts go beyond the evaluation goals of small and/or rural health care facilities, they may still help such facilities formulate a more valid formative evaluation plan. Lastly, a feasibility matrix is presented as a means of selecting the evaluation metrics appropriate to the institution; it compares the relative importance of various goals to the stakeholders against the ease of the actual formative evaluation process.
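A feasibility matrix of this kind can be sketched in a few lines; in the hedged example below, the candidate metrics and their 1-5 importance and ease scores are invented, not taken from the toolkit.

```python
"""Minimal sketch (illustrative only): a feasibility matrix for metric selection.

Candidate metrics and their 1-5 importance/ease scores are invented.
"""

# metric name -> (importance to stakeholders, ease of evaluation), each 1-5
candidates = {
    "order turnaround time": (5, 4),
    "provider satisfaction": (4, 3),
    "adverse drug events": (5, 2),
    "patient portal enrollment": (2, 5),
}

# Rank candidates by importance x ease; high scorers are both valuable to
# stakeholders and practical to evaluate with available resources.
ranked = sorted(candidates.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for metric, (importance, ease) in ranked:
    print(f"{metric}: importance={importance}, ease={ease}, score={importance * ease}")
```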

One area not specifically addressed, but needing consideration, is the study design, not from a formal research perspective but in terms of the basic questions of how the data will be obtained, who will do the work, and what budget exists to support the formative evaluation. Additionally, those involved in EHR implementation should determine the recipients of the formative evaluation outcomes and frame both the evaluation and the analysis of the findings around the expectations of the key stakeholders.

Knowledge Repository for Formative Evaluation: A Challenge to the Medical Informatics Field

Given that many of those involved with EHR implementation have no background in evaluation, the toolkit is a good start. However, it cannot contain everything needed for a successful formative evaluation. Some type of knowledge repository needs to be developed from which implementers can choose the metrics they deem essential for their organization, with the understanding that the tools used must be appropriate to the individual health care environment.

Evaluation of technology can be done in myriad ways. It needs to be tied to the unique goals and objectives of the specific EHR implementation, but examples of technology evaluation metrics could be developed and should be freely accessible to those beginning the implementation process. The metrics need to be accompanied by study designs and all of the tools that novice users need to formulate a successful evaluation plan, guided by the extant Evaluation Toolkit.

Also necessary is a fact sheet that answers very simple questions. For instance, most administrators have heard of Institutional Review Boards (IRBs), even if their health care organization has no need of one, and once the concept of evaluation is mentioned, questions about the need for IRB involvement will arise. Evaluation of technology implementation raises such issues even in research-oriented organizations. Evaluators need an easy guide to understanding when IRB review is needed and when it is not. For example, there is a common misperception that IRB approval is needed merely to turn on a system and study its usefulness and usability, when in fact it is likely not needed.

Evaluations of organizational issues arising from EHR implementation, of ease of use, and of similar human-computer interface questions are usually qualitative and involve surveys or structured interviews. The NRC is compiling a compendium of quality-filtered surveys that can be used in formative evaluation. However, simply having the survey instruments is not enough to provide the necessary support for non-researchers involved in EHR implementation. Questions will arise about how to administer the surveys, which subjects should be surveyed, and what constitutes appropriate analysis of the data. These could be addressed by a frequently asked questions (FAQ) document tailored to survey administration.

Additional assistance is needed in the area of organizational issues. Novice evaluators must understand how formative evaluation can be used to garner buy-in and how dysfunctional workflow needs to be corrected before implementation begins. Being able to understand these concepts and use appropriate tools and methods before the implementation process starts can facilitate adoption of the EHR system.

Formative evaluation of financial factors requires more than a detailed cost analysis model. Simple expense projections and capital costs are relatively easy to determine; more complexity lies in assigning value to the benefits of the system and establishing real cost savings. Sample algorithms for ascertaining the true financial impact of EHRs would be of immense benefit to those involved with EHR system implementation.

The tools used for research may or may not be applicable to the formative evaluation process; however, most of these validated instruments and study designs could be adapted for use in formative evaluation. Given the growing call for successful EHR system implementations, it is in the best interests of those with established EHR systems to assist other organizations with their implementations, both directly and through support of formative evaluation of the fledgling systems. Successful implementations will enable the creation of viable regional health information organizations, a building block for the National Health Information Network.

Dissemination

Discussions following the ACMI Symposium presentation supported the conclusion that access to the targeted tools necessary to perform formative evaluation of EHR implementations would foster successful outcomes. However, another issue of similar import was raised: the lack of awareness, among those about to undertake EHR system implementation, of the intrinsic value of formative evaluation. While tools and toolkits can be made freely available through the Web, information about their existence needs to be disseminated comprehensively and brought to the attention of those who make purchasing decisions and oversee implementation. This comprehensive dissemination should include not only publications in scholarly journals but also presentations at HIT and health care management meetings, publications in trade journals, and a campaign focused on vendors to encourage formative evaluation of EHR system implementation.

Conclusion

There is a growing mandate for health care organizations to implement EHR systems to address patient safety and quality of care. There is some evidence that computerized medical record systems can improve health care delivery, but there is little research directly linking EHRs to patient care outcomes other than through proxy measures. However, with federal dollars supporting many initiatives to automate medical offices, an infrastructure could be built that would provide the foundation for future research in this area.

We are at a watershed moment in the effort to realize widespread adoption of EHRs. The legislative climate is supportive, and money is available to underwrite the initial capital costs. More health care organizations are contemplating EHR implementation. Research about facilitating and inhibiting factors exists in the published literature and has been compiled on several Web sites. However, lack of organization-specific formative evaluation could mean the difference between success and failure in many of these implementations.

Some efforts, particularly by the Agency for Healthcare Research and Quality, are being made to support health care providers and organizations beginning to implement EHRs, but more needs to be done. New tools and toolkits need to be developed. Information about the value of formative evaluation needs to be disseminated, not only in scholarly journals but also in trade publications and at meetings frequented by decision makers. Without such formative evaluation to demonstrate value, EHR systems may fail during the implementation process or may not be supported after implementation.

Successful EHR systems can build demand for use among health care workers. Successful EHR systems can provide the evidence to federal funding agencies about their value to the health care enterprise. Successful EHR systems across all regions of the country, regardless of the size of the organization, will be a driving force in the realization of the vision of the National Health Information Network. Formative evaluation tools and protocols developed by medical informaticians can assist EHR systems to become successful.

Footnotes

This work is supported in part by the National Resource Center of the Agency for Healthcare Research and Quality, contract number 290-04-0016.

References

1. Kohn LT. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
2. Institute of Medicine (U.S.) Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
3. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, Morton SC, Shekelle PG. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 2006;144:742-752.
4. Sallas B, Lane S, Mathews R, Watkins T, Wiley-Patton S. An iterative assessment approach to improve technology adoption and implementation decisions by healthcare managers. Inform Sys Manage 2007;24:43-57.
5. Siwicki B. Overcoming electronic records hurdles. Health Data Manage 1998;6:58-60, 64-7, 70.
6. Sprague L. Electronic health records: How close? How far to go? NHPF Issue Brief 2004;800:1-17.
7. van Ginneken AM. The computerized patient record: balancing effort and benefit. Int J Med Inform 2002;65:97-119.
8. Cusack CM, Poon EG. Evaluation Toolkit, Version 3 [monograph on the Internet]. Agency for Healthcare Research and Quality National Center for Health Information Technology; 2006. http://healthit.ahrq.gov/evaltoolkit. Accessed June 29, 2007.
9. Friedman CP. "Smallball" evaluation: a prescription for studying community-based information interventions. J Med Libr Assoc 2005;93:S43-S48.
10. Anderson JG, Aydin CE. Evaluating the impact of health care information systems. Int J Tech Assess Health Care 1997;13:380-393.
11. Friedman CP, Wyatt JC. Evaluation Methods in Medical Informatics. New York: Springer-Verlag; 1997.
12. Friedman CP, Wyatt JC. Evaluation Methods in Medical Informatics. 2nd ed. New York: Springer-Verlag; 2006.
13. Cusack CM, Poon EG. The AHRQ National Resource Center Evaluation Toolkit. http://healthit.ahrq.gov. 2006. Accessed Jan 5, 2007.

