Journal of the American Medical Informatics Association (JAMIA). 2011 Jul 31;18(6):749–753. doi: 10.1136/amiajnl-2011-000249

Evaluating health information technology in community-based settings: lessons learned

Lisa M Kern 1,2,3, Jessica S Ancker 1,3,4, Erika Abramson 1,3,4, Vaishali Patel 1,3,4, Rina V Dhopeshwarkar 1,3, Rainu Kaushal 1,2,3,4,5
PMCID: PMC3198001  PMID: 21807649

Abstract

Implementing health information technology (IT) at the community level is a national priority to help improve healthcare quality, safety, and efficiency. However, community-based organizations implementing health IT may not have expertise in evaluation. This paper describes lessons learned from the authors' experience as a multi-institutional academic collaborative established to provide independent evaluation of community-based health IT initiatives. That experience derived from adapting the principles of community-based participatory research to the field of health IT. To assist other researchers, the lessons learned are presented under four themes: (A) the structure of the partnership between academic investigators and the community; (B) communication issues; (C) the relationship between implementation timing and evaluation studies; and (D) study methodology. These lessons represent practical recommendations for researchers interested in pursuing similar collaborations.

Keywords: Classical experimental and quasi-experimental study methods (lab and field), cognitive study (including experiments emphasizing verbal protocol analysis and usability), delivering health information and knowledge to the public, human–computer interaction and human-centered computing, measuring/improving patient safety and reducing medical errors, quality of care, uncertain reasoning and decision theory


Most research on the effectiveness of health information technology (IT) has emerged from large academic medical centers that have iteratively refined home-grown systems over decades.1 However, the American Recovery and Reinvestment Act of 2009 (ARRA) is expected to promote the widespread adoption of commercial systems by community-based providers, through its provision of up to US$30 billion in financial incentives to physicians and hospitals for the meaningful use of electronic health records (EHRs) that support the electronic exchange of data.2 3 This change in the nation's health IT landscape is accompanied by a pressing need to assess the effects of health IT in the non-academic community settings where many Americans receive their health care. Such research and evaluation are necessary to ensure that the nation makes the best possible investments in health IT.

The need for rigorous evaluations of such community-level health IT is widely recognized. However, many organizations implementing health IT lack evaluation expertise.2 4 Our academic research group has partnered with community-based organizations to produce innovative research in these community settings. For example, we recently demonstrated that the use of e-prescribing among community-based primary care physicians decreased prescribing errors more than sevenfold.5 We have also shown that the use of a health information exchange portal was associated with increased quality of care among community-based physicians.6 These studies are among the first to demonstrate quality and safety benefits associated with health IT and health information exchange when implemented in community-based settings.

The methodology for each of these studies has been published previously. However, the studies took place within a larger context of carefully developed academic–community partnerships that we have not previously described. The work is grounded in community-based participatory research, an approach derived from public health that builds partnerships between community members and academic researchers so that both are integrally involved in all stages of the research process, from identifying research topics of interest to the community to disseminating the results within that community.7–9 Such partnerships are intended to leverage both the specialized research skills of academics and the rich local knowledge of community members, and particularly to ensure that the research benefits the communities in which it is conducted.

In this paper, we describe lessons learned from our experience with community-based participatory research as applied to health IT.

Context

This work took place in a unique state context. New York State is investing more than US$400 million in interoperable health IT through the Healthcare Efficiency and Affordability Law for New Yorkers (HEAL NY) Capital Grant Program.10–12 HEAL NY grants are awarded to community-based alliances of healthcare stakeholders, which come together for the purpose of implementing health IT.13 14 As part of the HEAL NY program, each community-based alliance was required to evaluate the effects of its interventions, but the grant recipients (like many community-based health IT grant recipients around the country)15 generally did not have evaluators on staff.

Several community-based alliances sought collaborations with an investigator at Weill Cornell Medical College, who had previous experience with community-based evaluations of health IT. The breadth and scope of the potential evaluations stimulated investigators to discuss, with the New York State Department of Health and with universities across the state, the concept of a multi-institutional, coordinated evaluation of New York State initiatives. With initial funding from the Commonwealth Fund and several HEAL NY-funded communities, the Health Information Technology Evaluation Collaborative (HITEC) was established in 2007 and began evaluating the first phase of HEAL NY. For the three subsequent phases, HITEC was named the state-designated entity for evaluation and was funded by the New York State Department of Health. The naming of HITEC predated the 2009 federal legislation with a similar-sounding name (HITECH, the Health Information Technology for Economic and Clinical Health Act).3

HITEC is an academic collaborative among Weill Cornell Medical College of Cornell University, Columbia University, the University of Rochester, and the State University of New York at Albany, with additional collaborators at the University at Buffalo. HITEC was established, with the endorsement of the New York State Department of Health, to conduct independent, rigorous evaluation of the health IT initiatives in New York State, particularly those funded by HEAL NY. To our knowledge, HITEC is the only multi-institutional academic entity focused on health IT evaluation.12 The authors are directors (RK, LMK, JSA) and members (EA, VP, RVD) of HITEC.

In this Perspectives paper, we describe the lessons learned from our work with community-based participatory research on health IT from 2007 to 2010. Our methods for working with communities included organizational assessment surveys,11 site visits, telephone meetings, technology demonstrations, and iterative refinement of written summaries. The lessons learned were derived from a series of meetings of HITEC investigators and staff, during which we generated lists of lessons learned and iteratively refined them, grouping lessons under broad themes. These lessons reflect experience drawn from HITEC's rapidly growing evaluation portfolio, which now includes more than 30 studies (quantitative, qualitative, and mixed methods) with community-based organizations across the state of New York.

Lessons learned

The lessons learned address four themes (box 1): (A) the quality of the partnership between academic investigators and the community; (B) the quality of communication between these groups; (C) the relationship between implementation and evaluation; and (D) study methodology.

Box 1. Lessons learned from community-based participatory research on health information technology.

Theme A: partnership

  1. Be responsive to the priorities of the community. Balance the academic preference for standardizing evaluation across communities with the communities' preferences for customization.

  2. Include in the research team a clinician-investigator based in the community.

  3. Decide early who is paying for what and estimate how much it will cost.

Theme B: communication

  1. Be prepared to share interim results with the community.

Theme C: implementation

  1. Expect implementation delays and evolution of implementation plans.

Theme D: research methodology

  1. Consider including a clause about evaluation and research in community-based participation agreements.

  2. Plan sufficient resources for developing, submitting and modifying protocols for IRBs.

  3. Identify data sources early, in collaboration with data experts.

  4. Address the need for data aggregation and case-mix adjustment.

Theme A: partnership

Community–academic priorities

To develop true partnerships, the communities and the evaluators had to understand each other's priorities. Typically, communities wanted to improve their implementation and demonstrate value to involved stakeholders including healthcare organizations, payers, and consumers. By contrast, the academic investigators frequently sought to combine evaluations across similar communities to increase sample size, precision of the effect size estimates, and generalizability. In other words, communities were more focused on generating formative findings and academic faculty on summative findings. In one example, we initially suggested that two communities implementing the same e-prescribing software study its impact on formulary compliance. However, in one of the communities, an important stakeholder had previously examined this issue in another context and did not want to build a new study around it. We ultimately pursued the study in only one of the two communities. This need to balance the academic preference for standardizing evaluations across communities to create generalizable evidence with the communities' preference for customizing evaluations was central to our experience.

Team composition

Our second finding on partnership related to the composition of the academic and community-based study teams. The academic team was multidisciplinary and evolved to include health services researchers, clinicians, informaticists, statisticians, and research coordinators. The community-based teams were also multidisciplinary and included experts in technology, implementation, business, and health policy. We found that those community-based teams that also included a clinician investigator (a physician who had substantial exposure to or experience in evaluation or research) progressed much more rapidly than those that did not. This finding was so striking that we strongly encouraged each community to identify a clinician investigator and add him or her to the team. The clinician investigator became a critical ‘translator’ between the language of the academic investigators and the community entity.

Cost breakdown

A successful partnership also depended on being able to decide early who was paying for what and having accurate estimates of how much various activities cost. A typical financial plan involved HITEC assuming costs associated with: designing the study, developing data collection instruments, submitting and revising institutional review board (IRB) protocols, analyzing data, and disseminating results through manuscripts and reports. HITEC also typically performed data collection for qualitative studies such as focus groups and interviews. The community organization was generally responsible for data collection for quantitative studies and surveys, as well as for claims data aggregation. This approach placed the community at the forefront of data collection, and was meant to decrease the perceived intrusiveness of the academic investigators.

Theme B: communication

Dissemination

The communities' need for interim feedback initially appeared to conflict with the need of academic investigators to avoid releasing results before publication in peer-reviewed journals. We found a compromise solution that involved generating summary reports with formative findings that could be disseminated within the community either in written form or through oral presentations. These summary reports, which can be generated more rapidly than academic papers, contain the level of detail often found in abstracts presented at scientific meetings and do not preclude the subsequent publication of more summative findings. We emphasize to our collaborators the importance of avoiding dissemination beyond the community before full publication.

Theme C: implementation

Evolution of implementation and evaluation

Evaluation planning could not begin until the implementation plan was at least partly developed, and evaluation itself could not occur until the technology was implemented. Implementation plans frequently took longer than anticipated and also evolved substantially over time.16 Therefore, evaluation plans had to evolve as well. For example, at the outset, several communities had not yet identified a vendor for their implementation and thus had not yet determined what types of decision support would be available. In other situations, delays arose when vendors that had been contracted to provide software or services went out of business or took longer than anticipated to develop software solutions. Additional time is then needed before adoption is widespread enough to measure clinical and economic outcomes: our earliest studies focus on issues such as how providers use and respond to technologies, and only after several years is it possible to determine effects on clinical and economic outcomes.

Theme D: study methodology

Outreach methods

Many of our studies required us to survey or interview clinicians in the community. Although each community-based organization helped us locate potential contacts, we found that the ease of recruitment varied depending on how the organization informed prospective participants about studies. The most effective method was including a clause about evaluation and research in the participation agreements that physicians signed when they joined the health information organization. The subsequent outreach (whether by in-person presentation, letters, phone, email, or other means) could then cover details of specific studies rather than the rationale for evaluation and research broadly. This was more effective than when the first mention of research and evaluation was by a letter or telephone call describing a particular study.

Multiple IRBs

We submitted all study plans to an IRB, as is required for human subjects research. This generally involved submitting a protocol to the Weill Cornell IRB (with particular attention to data analysis and manuscript preparation), with a parallel submission to a community-based IRB (with particular attention to recruitment and data collection). As others have reported,17 there was tremendous variation in perspectives and procedures between IRBs in different communities. To be responsive, we had to modify some protocols several times, which took longer than expected. Allocating sufficient time and resources for developing and modifying IRB protocols was critical. Having a clinician investigator in each community who could anticipate and address the concerns of the local IRB also facilitated the research effort.

Data sources and data experts

Early in the study development process, intensive discussions were needed to determine what sorts of data were likely to be available, and thus what evaluation questions could feasibly be posed. Data sources for these studies typically included the grantee organizations themselves (eg, for data on technology users), software vendors (eg, for data on usage of the technology), health plans (eg, for claims data), patients and providers.

We found that progress was faster when our discussions included the software programmers and technical staff who understood the details of the implementation at a very granular level. For example, we designed a study to examine associations between the usage of a community-wide health information exchange and the subsequent utilization of health care. The study had to be designed in close collaboration with data experts, in order to ensure that data from the health information exchange could be matched at the patient and provider levels with data from the same time period from health plans.
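
To make the matching problem concrete, the sketch below (in Python with pandas) illustrates the kind of patient- and provider-level linkage the data experts helped design: rolling HIE usage logs up to the same patient–provider–period grain as the claims and joining the two. The field names, the quarterly grain, and the outcome column are illustrative assumptions, not the actual HITEC data model.

    import pandas as pd

    # Hypothetical HIE portal access log: one row per time a provider viewed a patient's record
    hie_usage = pd.DataFrame({
        "patient_id": ["P1", "P1", "P2"],
        "provider_id": ["D10", "D10", "D11"],
        "access_date": pd.to_datetime(["2009-03-02", "2009-04-15", "2009-03-20"]),
    })

    # Hypothetical claims, already aggregated to one row per patient-provider pair per quarter
    claims = pd.DataFrame({
        "patient_id": ["P1", "P2", "P3"],
        "provider_id": ["D10", "D11", "D12"],
        "quarter": ["2009Q1", "2009Q1", "2009Q1"],
        "ed_visits": [1, 0, 2],
    })

    # Roll HIE usage up to the same patient-provider-quarter grain as the claims
    hie_usage["quarter"] = hie_usage["access_date"].dt.to_period("Q").astype(str)
    usage_counts = (hie_usage
                    .groupby(["patient_id", "provider_id", "quarter"])
                    .size()
                    .rename("hie_accesses")
                    .reset_index())

    # Left-join onto the claims so every claims record is kept; patients whose records
    # were never viewed through the HIE simply get zero accesses
    linked = claims.merge(usage_counts,
                          on=["patient_id", "provider_id", "quarter"],
                          how="left")
    linked["hie_accesses"] = linked["hie_accesses"].fillna(0).astype(int)
    print(linked)

Working through even a toy example like this surfaces the design questions that required the data experts: which identifier links records across systems, what time grain both sources can support, and how to handle patients who appear in one source but not the other.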

Data aggregation

The HEAL NY program grants encouraged evaluations of outcomes such as healthcare quality, safety, and economics. We thus sought to obtain medical claims data to assess these outcomes at the community level. However, all of our communities had more than one dominant commercial health plan, meaning that claims data would have to be aggregated across plans. Aggregation involves grouping claims by patient and attributing patients to providers. It is challenging and costly because it requires reaching agreements with multiple health plans, developing and applying algorithms for grouping and attributing the claims, applying case-mix adjustment procedures, and making a substantial investment in hardware, software, and personnel (through contracting with a company specializing in the service). One community-based organization had already committed to performing data aggregation in order to conduct a community-wide quality improvement program, which had the financial support of its major health plans. In this community, we were able to reach an agreement to use the aggregated data for evaluation purposes as well. In several other communities, the community-based organization has recognized the value of using claims data for evaluation purposes, but these communities have encountered more challenges in raising money to pay for the service and in securing buy-in from the health plans. In these communities, data aggregation is also occurring, but along a slower timeline.
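
As an illustration of the grouping and attribution steps only, the sketch below applies one simple plurality rule (assign each patient to the provider who billed the most of that patient's visits). The field names and the rule itself are assumptions chosen for the example; they are not the algorithms used by HITEC or by the aggregation vendors, and they omit case-mix adjustment entirely.

    import pandas as pd

    # Hypothetical claims pooled from multiple health plans, already linked to a common patient ID
    claims = pd.DataFrame({
        "patient_id": ["P1", "P1", "P1", "P2", "P2"],
        "provider_id": ["D10", "D10", "D11", "D12", "D12"],
        "visit_date": pd.to_datetime(
            ["2009-01-05", "2009-02-10", "2009-03-01", "2009-01-20", "2009-02-14"]),
    })

    # Group the claims: count visits for each patient-provider pair
    visit_counts = (claims
                    .groupby(["patient_id", "provider_id"])
                    .size()
                    .rename("n_visits")
                    .reset_index())

    # Attribute each patient to the provider with the most visits (plurality rule);
    # ties are broken arbitrarily here, which a production algorithm would handle explicitly
    attribution = (visit_counts
                   .sort_values("n_visits", ascending=False)
                   .drop_duplicates("patient_id")
                   .rename(columns={"provider_id": "attributed_provider"}))
    print(attribution[["patient_id", "attributed_provider", "n_visits"]])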

Discussion

In this paper, we present nine practical lessons learned for academic evaluators seeking to measure the effects of health IT in community-based settings. These lessons provide guidance on building high-quality academic–community partnerships, communicating effectively, anticipating the interactions between implementation and evaluation, and addressing methodological issues that are central to this type of endeavor.

Several of these lessons are similar to findings reported by others, for example, discrepancies between IRBs17 and the need to adapt evaluations to implementation delays, evolving implementation plans, and the stage of implementation.15 16 The opportunity to reuse clinical and operational data to support evaluation efforts has also been discussed elsewhere.15 Overall, our findings are congruent with the core principles of community-based participatory research.4 7 Nevertheless, the lessons presented here extend the literature by providing very practical guidance on how to operationalize the principles of community-based participatory research for the evaluation of community-based health IT. This is particularly relevant at a time when more studies such as these are being planned nationwide.

Conducting evaluation under a community-based participatory research framework is time-consuming and challenging. It requires considerable investment in building strong relationships, mutual education about priorities, and flexibility. At the same time, these collaborations helped us develop a rich and nuanced understanding of health IT on the ground, fostered commitment and buy-in to novel community-based studies that might otherwise have been impossible, and ensured that findings were put into practice immediately to improve health care in the communities in which the research was being conducted.

In our experience, these collaborations also led to more accurate interpretation of study results and generated additional hypotheses to be tested. In general, investigators close to the implementation process may be most likely to understand the reasons behind both positive and negative findings. On the other hand, the collaborations also preserved distance between implementers and the evaluators of technology. This distinguishes HITEC projects from previous research on the effectiveness of health IT that has occurred in large academic medical centers,1 where evaluators were either members of those medical centers or participants in the implementation.

Policy implications

In addition to its implications for academic researchers, our experience has several policy implications. Policy makers seeking to understand the impact of health IT on quality and cost should be aware of the long time horizon associated with the corresponding evaluation and research. Delays in implementation, of course, delay evaluation, and measurable effects on quality and cost cannot be determined immediately after technologies go live.

Second, when community-based implementations are mature enough to measure cost and quality outcomes, claims data aggregation across health plans can provide a powerful method of assessing these outcomes at the community level. However, current approaches to data aggregation require hardware, software, and personnel capabilities that are beyond the scope of most communities and researchers. Third-party vendor solutions are available, but their cost may outstrip the cost of the rest of the evaluation combined. State and federal funds may be needed to support data aggregation centers for research, evaluation, and quality improvement in order to capture these effects of health IT. Eventually, the goal will be to replace claims data with rich clinical data from EHRs and other electronic data sources.

Third, while there are federal efforts underway to expand training for people to implement health IT, such as the Office of the National Coordinator's program of assistance to university-based training programs (http://healthit.hhs.gov), there appears to be a relative shortage of health services researchers and informaticists trained to evaluate health IT initiatives. The academic–community partnerships described here can serve as a template for how to leverage a group of investigators across many geographically disparate communities.

Limitations

First, we collaborated with highly motivated community organizations that were in relatively advanced stages of health IT implementation. Second, New York State was directly supporting implementation and requiring evaluation. As a result, the lessons learned from these experiences may not be generalizable to all communities.

Conclusion

With the federal EHR incentive program and other state and national-level initiatives, health IT is being implemented in more community-based settings. Large-scale efforts have been made to implement EHRs,18 19 electronic prescribing,20 and health information exchange18 21–23 in community settings. Such community-level health IT implementations offer exciting opportunities for evaluation studies, but many of the organizations implementing health IT lack evaluation expertise. We describe lessons learned from close community–academic partnerships, developed according to the principles of community-based participatory research, which leverage the evaluation expertise available in the academic setting, as well as the rich local and pragmatic knowledge of community members implementing health IT on the ground.

Footnotes

Funding: This work was supported by the Commonwealth Fund (grant #20060550) and the New York State Department of Health (contract #C023699). The authors thank the participating communities.

Competing interests: None.

Provenance and peer review: Not commissioned; externally peer reviewed.

References

  1. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 2006;144:742–52.
  2. Poon EG, Cusack CM, McGowan JJ. Evaluating healthcare information technology outside of academia: observations from the National Resource Center for Healthcare Information Technology at the Agency for Healthcare Research and Quality. J Am Med Inform Assoc 2009;16:631–6.
  3. US Department of Health and Human Services. Medicare and Medicaid Programs; Electronic Health Record Incentive Program; Final Rule. 75 Federal Register 44314, 2010; 42 CFR Parts 412, 413, 422 and 495.
  4. Friedman CP. “Smallball” evaluation: a prescription for studying community-based information interventions. J Med Libr Assoc 2005;93(4 Suppl):S43–8.
  5. Kaushal R, Kern LM, Barron Y, et al. Electronic prescribing improves medication safety in community-based office practices. J Gen Intern Med 2010;25:530–6.
  6. Kern LM, Barron Y, Blair AJ, 3rd, et al. Electronic result viewing and quality of care in small group practices. J Gen Intern Med 2008;23:405–10.
  7. Israel BA, Schulz AJ, Parker EA, et al. Review of community-based research: assessing partnership approaches to improve public health. Annu Rev Public Health 1998;19:173–202.
  8. Israel BA, Parker EA, Rowe Z, et al. Community-based participatory research: lessons learned from the Centers for Children's Environmental Health and Disease Prevention Research. Environ Health Perspect 2005;113:1463–71.
  9. Ancker JS, Kukafka R. A combined qualitative method for testing an interactive risk communication tool. AMIA Annu Symp Proc 2007;11:16–20.
  10. New York State Department of Health. Request for Grant Applications – HEAL NY Phase 1. http://www.health.ny.gov/funding/rfa/inactive/0508190240/ (accessed 24 Jul 2011).
  11. Kern LM, Barron Y, Abramson EL, et al. Promoting interoperable health information technology in New York State. Health Aff (Millwood) 2009;28:493–504.
  12. Kern LM, Kaushal R. Health information technology and health information exchange in New York State: new initiatives in implementation and evaluation. J Biomed Inform 2007;40(6 Suppl):S17–20.
  13. National Alliance for Health Information Technology. Report to the Office of the National Coordinator for Health Information Technology on Defining Key Health Information Technology Terms. Department of Health and Human Services, Office of the National Coordinator for Health Information Technology, 2008. http://healthit.hhs.gov/portal
  14. Kern LM, Wilcox AB, Shapiro J, et al. Community-based health information technology alliances: potential predictors of early sustainability. Am J Manag Care 2011;17:290–5.
  15. Sladek RM, Phillips PA, Bond MJ. Measurement properties of the Inventory of Cognitive Bias in Medicine (ICBM). BMC Med Inform Decis Mak 2008;8:20.
  16. Kukafka R, Johnson SB, Linfante A, et al. Grounding a new information technology implementation framework in behavioral science: a systematic analysis of the literature on IT use. J Biomed Inform 2003;36:218–27.
  17. Gandhi TK, Weingart SN, Seger AC, et al. Outpatient prescribing errors and the impact of computerized prescribing. J Gen Intern Med 2005;20:837–41.
  18. Goroll AH, Simon SR, Tripathi M, et al. Community-wide implementation of health information technology: the Massachusetts eHealth Collaborative experience. J Am Med Inform Assoc 2009;16:132–9.
  19. Mostashari F, Tripathi M, Kendall M. A tale of two large community electronic health record extension projects. Health Aff (Millwood) 2009;28:345–56.
  20. Halamka J, Aranow M, Ascenzo C, et al. E-prescribing collaboration in Massachusetts: early experiences from regional prescribing projects. J Am Med Inform Assoc 2006;13:239–44.
  21. Halamka J, Aranow M, Ascenzo C, et al. Health care IT collaboration in Massachusetts: the experience of creating regional connectivity. J Am Med Inform Assoc 2005;12:596–601.
  22. McDonald CJ, Overhage JM, Barnes M, et al. The Indiana network for patient care: a working local health information infrastructure. An example of a working infrastructure collaboration that links data from five health systems and hundreds of millions of entries. Health Aff (Millwood) 2005;24:1214–20.
  23. Miller RH, Miller BS. The Santa Barbara County Care Data Exchange: what happened? Health Aff (Millwood) 2007;26:w568–80.
