Author manuscript; available in PMC: 2008 Dec 1.
Published in final edited form as: J Biomed Inform. 2007 Aug 30;40(6 Suppl):S33–S39. doi: 10.1016/j.jbi.2007.08.001

Qualitative Evaluation of Health Information Exchange Efforts

Joan S Ash, Kenneth P Guappone
PMCID: PMC2171042  NIHMSID: NIHMS35432  PMID: 17904914

Abstract

Because most health information exchange (HIE) initiatives are as yet immature, formative evaluation is recommended so that what is learned through evaluation can be immediately applied to assist in HIE development efforts. Qualitative methods can be especially useful for formative evaluation because they can guide ongoing HIE growth while taking context into consideration. This paper describes important HIE-related research questions and outlines appropriate qualitative research techniques for addressing them.

Keywords: Medical records systems, computerized, medical record linkage, regional health planning, computer communication networks, community networks, qualitative research, research design

Introduction

A sign hanging in Albert Einstein’s office at Princeton declared, “Not everything that can be counted counts, and not everything that counts can be counted.” Many important aspects of Health Information Exchange (HIE) efforts are in need of exploration, yet they cannot be counted, tabulated, or quantified. Among the complex local, regional, or state organizations that may be working to organize HIEs, there is a great deal that needs to be understood, yet little research has been conducted in this area. As new and undeveloped as some of these entities are, evaluation need not wait until some future end point when objective outcomes can be clearly measured. Formative research, which can inform the process of HIE organizational development, can and should be conducted so that lessons can be learned along the way. Such research can suggest optimal course corrections earlier in the process, contributing to the maturation of these organizations. This paper describes some initial foci for assessment early in the development process, some evaluation questions that might be asked at intervals after a project has started, the reasons qualitative methods are an appropriate and often preferred approach to evaluation, and guidelines for conducting interviews and observations in the course of qualitative research.

What are qualitative methods?

Qualitative research is an approach to scientific inquiry that relies on naturalistic, humanistic, and interactive processes. The methods are primarily language based, with data in the form of words rather than numbers. They take into consideration the larger context of a human situation, so they are often used at the site of activity, in the field. The design plan is generally iterative and flexible rather than tightly preconfigured, because as new discoveries emerge, the plan may need modification and redefinition to allow the collection of the richest data possible. The most common qualitative data gathering strategies include interviews, observation, and document analysis.

The evaluation design

As in any project, the design for evaluation begins with articulating exactly what it is you want to know. What are the essential and specific questions you need answered? Lorenzi and others have outlined relevant broad organizational issues [1–3], including many surrounding the development of HIEs that seem ripe for qualitative evaluation. These issues might be addressed by asking the following questions:

  • Overall, what are the important barriers and facilitators to development of health information exchanges?

  • What are the expected levels of leadership and commitment from the various actors, from the federal to the local level?

  • What are the needs, expectations and motivations of the many different stakeholders?

  • Consumers must have confidence in the accuracy and the confidentiality of protected health information. What are the issues that need to be addressed?

  • When addressing the governance of these projects, what issues of control, power and politics arise?

  • All of these subjective concerns are not static. How do the players’ perceptions vary over the duration of these projects, and how does that affect the chances of success?

  • The decision-making structure must be initially outlined. How does the planned organization on paper compare to the reality in practice?

  • Communication is essential. What communication channels exist and what are their levels of effectiveness?

  • What are the levels of trust among stakeholder groups and what are the conditions and issues that affect this trust?

  • How do the information systems fit with the workflow of all users – clinicians, public health officials, laboratories, payers and administrators?

  • What are the user perceptions of the information systems in terms of usefulness or ease of use?

  • Which elements of the planning and project management processes are most effective, and which are not?

  • These efforts have been directed to be patient-centric. What are the patients’ points of view, perceptions and needs?

  • The improvement of health care quality ought to be paramount. What are the motivations, expectations and processes that are directed toward this goal?

Some of these questions could be addressed with quantitative measures, but what would be the value of such a determination? The “thin” data of numerical assessment would not provide much detailed guidance about the directions an HIE should follow in the formative stage. One needs richer, more subjective information to understand “what is really going on” in complex human environments.

Studying health information exchanges longitudinally

The eHealth Initiative Foundation’s Second Annual Survey of State, Regional and Community-Based Health Information Exchange Initiatives and Organizations (August 2005) identified six “stages of development” for HIEs (Table 1) [4]. There are many specific research questions related to each of these stages. For some, an initial assessment done early in the development process could yield critical information about whether the entities involved are ready for this effort (Stages 1–3). Some possible foci for this initial assessment might include the following:

Table 1.

eHealth Initiative Foundation Stages of HIE Development [4]

Stage 1 Recognition of the need for health information exchange among multiple stakeholders in your state, region or community (public declaration by a coalition or political leader).
Stage 2 Getting organized; defining shared vision, goals, and objectives; identifying funding sources, setting up legal and governance structure (multiple, inclusive meetings to address needs and frameworks).
Stage 3 Transferring vision, goals and objectives to tactics and business plan; defining your needs and requirements; securing funding (funding organizational efforts under sponsorship).
Stage 4 Implementing technical, financial, and legal components (pilot project or implementation with multi-year budget identified and tagged for a specific need).
Stage 5 Fully operational health information organization; transmitting data that is being used by healthcare stakeholders (ongoing revenue stream and sustainable business model).
Stage 6 Demonstration of expansion of the organization to encompass a broader coalition of stakeholders than present in the initial model.

The Plan and Planning Process

  • To what extent is there a shared mission?

  • How clear are the purpose and objectives?

  • Has a needs assessment been done?

  • What is the level of trust among the stakeholder groups?

  • Is there a mechanism for handling political issues, conflicts, and negotiation?

  • Is there a decision making process related to ownership of data and who should have access to what data?

  • Is there a process for identifying and agreeing on standards?

Cultural Foundations

  • To what extent are the leaders and their organizations committed?

  • What is the history of collaboration within and among organizations?

Clinician Involvement

  • How committed are the user clinicians?

  • What do the users anticipate will happen?

  • How involved are they in the planning process?

Workflow Assessment

  • Is there a clear understanding about what this will do to the workflow of clinicians and those who work with them?

The Stakeholders

  • Who stands to gain and lose what?

  • What are the roles of each stakeholder group, including vendors?

  • What are the rewards and motives for each stakeholder group?

There are also many questions that will need to be answered a number of times during and after the implementation process (Stages 4–6). Some ideas for research questions that might be explored on an ongoing basis include:

Clinician Satisfaction

  • Do users sense that they are heard when they provide feedback?

  • How do users feel about the training and support they receive?

  • Is information more available when and where it is needed?

  • How usable is the system?

  • What effect has this had on clinical workflow?

  • Why is the system used or not used?

  • How do clinicians feel about decision support embedded in the system?

Other Stakeholder Perceptions

  • What are the incentives and level of motivation?

  • What is the level of trust among stakeholder groups?

  • How adequate are the governance and decision making structures?

  • How well are stakeholder expectations being met?

  • If stakeholder engagement waxes and wanes, what are the reasons?

  • What are the non-financial values that the various stakeholder groups are receiving?

  • How effective are the meetings of stakeholders?

  • What is the level of patient satisfaction?

  • How well are privacy and confidentiality concerns being met?

All of the above questions can be addressed using qualitative methods. Because, for these questions, the answers may vary over time, an evaluation plan would be longitudinal. It is primarily a matter of setting priorities about what needs to be tracked. The questions concerning each issue are usually about 1) problem identification (the system does not seem to fit clinicians’ workflow, so what are the problems?), 2) description (what are the concerns about confidentiality?) or 3) explanation (why is the formal decision making structure not working?). The qualitative evaluation design will be iterative. Each step, from idea generation, to design, to data gathering, to analysis, to interpretation, will likely be done more than once. During each step, it is not only permissible, but suggested, that you revisit one or more of the prior steps. Ideally, the work continues until there is a sense that the question has been answered.
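The longitudinal tracking described above can be sketched as a simple data structure. The following Python fragment is a hypothetical illustration only; the class, its fields, and the sample findings are all invented for this sketch, not part of any published evaluation toolkit.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationQuestion:
    """One evaluation question tracked over repeated rounds of data gathering."""
    text: str
    category: str  # "problem identification", "description", or "explanation"
    rounds: list = field(default_factory=list)

    def record_round(self, date: str, summary: str):
        """Append one round of findings; repeated over the life of the project."""
        self.rounds.append({"date": date, "summary": summary})

# Invented example data for illustration.
q = EvaluationQuestion(
    text="Why is the formal decision making structure not working?",
    category="explanation",
)
q.record_round("2007-01", "Initial interviews suggest unclear committee roles.")
q.record_round("2007-07", "Member checking confirms role confusion persists.")
print(len(q.rounds))  # prints 2
```

A structure like this makes it easy to revisit each question at intervals, in keeping with the iterative design: new rounds are simply appended as the evaluation returns to earlier steps.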

When thinking about the qualitative component of the evaluation plan, a number of important decisions must be made: Who will gather the data? What is your plan for a purposeful selection of participants? How will you gather the data? How structured will you be? How will you analyze the data? How will you present the results? As with any research project, a literature review is a good place to start. Although little is known about the above-listed issues related to HIE, a good deal has been written about them in other sectors, and much can be learned from others.

An early decision needs to be made about the setting and population to study. The setting, whether it is a unit such as one emergency department (ED), a group of units, like EDs across hospitals, an entire organization, or a group of organizations, should be selected for appropriateness for answering the evaluation question. The population that will be studied might be based on roles within the organization or attributes of the individual (a champion for change or a skeptic, for example), or even by convenience (you want to study what happens on the night shift, so you observe whoever is working then).

What are qualitative research methods?

Although qualitative methodologies have been utilized for decades in the social sciences, their usefulness has only more recently been recognized in the informatics domain. To those more accustomed to quantitative evaluation, the processes and even the nature of the knowledge that is sought can be difficult to understand. Several sources are available that provide especially appropriate strategies for using qualitative methods in informatics [5–7].

Qualitative and quantitative methods are simply different and equally valuable ways of seeking the truth. Qualitative methods can be described as inductive, subjective, and contextual, whereas quantitative techniques are deductive, objective, and generalizable. Although the term “subjective” may be interpreted negatively to imply a lack of validity, this is an incorrect interpretation. Qualitative methods are subjective in that they can assess how people make sense of things, that is, how they view the world. Those views may not reflect the “objective” facts, but they have validity nonetheless, because that is how the person perceives the situation. For example, a physician may sense that using a clinical information system to access information about a patient from another site takes twice as much time as using a manual system, yet a time and motion study may indicate otherwise. The physician’s view might nevertheless be reflected in her behavior (she may show anger, for example), so it is important that her view be considered.

Qualitative methods are inductive. They are excellent choices if the evaluator wants to generate theory from observation, as they are oriented to discovery and exploration. For these reasons, the research design is emergent and the timeframe flexible. Quantitative methods, on the other hand, are deductive. They are the methods of choice when one wants to test theory after empirical observations have already been acquired, and they are oriented to cause and effect. The evaluation design is generally predetermined within a fixed timeframe. As an example of discovery, if one had observed that clinicians were not seeking available patient information, a qualitative evaluation might be undertaken to find out the reasons for this behavior. “Why” questions such as this are particularly appropriate for qualitative evaluation; indeed, qualitative inquiry is often the only way to fully explore the underlying reasons for complex behavior.

Qualitative research can be considered subjective in that it emphasizes meanings and interpretation and tries to describe the perspectives of others so those perspectives can be understood. This kind of research relies on the researcher as the research instrument, so the researcher is closely involved, is close to the data, and is openly aware of the lens through which he or she is viewing the data. Quantitative research, though, emphasizes measurement and uses an outsider’s perspective, and it is vitally important that the observer remains isolated from the data. If the reason clinicians are not seeking available patient information is because of password problems, qualitative research can uncover the extent of their frustration and other emotions that this problem might precipitate.

Qualitative research is also characterized as being contextual, because of its naturalistic approach and its ability to analyze systems holistically. It emphasizes depth and detail of findings by providing “thick” descriptions of relatively few cases. Quantitative methods can be experimental, using a “laboratory” to isolate variables so that one can statistically analyze those selected variables and make inferences beyond the subjects under observation; they emphasize what might be called “thinner” data on a large number of cases. Qualitative research cannot, and does not attempt to, infer generalities to other populations; rather, it seeks to understand the context being evaluated. If HIE access is available across many emergency departments, for example, the organizational culture and workflow of each will likely differ. Qualitative methods can explore these differences in depth.

Quantitative and qualitative methods have distinct differences, but they need not be employed in isolation. A more recent and pragmatic approach has been to combine the two, since they can complement one another quite effectively. They can be used concurrently or in tandem to provide an evaluation that covers many facets of a project or program over time. For example, the information obtained from the above-mentioned eHealth Initiative survey can serve as fodder for a follow-up qualitative evaluation: statistical curiosities can be explored in more detail to understand the underlying processes at work. Conversely, qualitative analysis can be a very effective precursor to survey development by exploring the issues that matter most to the participants under study.

Strategies for rigor

One common concern about qualitative methods is their presumed lack of validity: the data are “soft.” While it is true that there are no numerical ways of determining validity in qualitative studies, there are strategies for assuring, or assessing in a non-numerical sense, the validity of a study. A more descriptive term for this kind of validity is “trustworthiness,” which might be defined as the confidence that a second person, presented with the same data, would arrive at the same interpretation. Strategies for achieving scientific rigor or trustworthiness in qualitative results include reflexivity, triangulation, member checking, saturation in the field, and an audit trail. The first four terms are jargon for some reasonable and easily explained methods. Reflexivity means that those gathering and analyzing the data recognize their preconceived biases and world views and take these into account as they proceed. For example, a nurse conducting observations of HIE users might identify that his own bias is that the system should have been designed with more nursing input; he might ask a colleague to review his field notes and discuss possible bias prior to analysis to assure trustworthiness. Triangulation, a term from surveying meaning that one can pinpoint a location along different axes, is accomplished in qualitative research by using different methods, researchers, sites, times, or kinds of subjects to converge on the truth. Member checking implies that researchers check back with informants to make sure the results and interpretations seem reasonable to them. Saturation in the field means that there is a sense that enough data have been gathered: the same patterns are seen, with no new ones being identified. Finally, the audit trail is a step-by-step record that details how the research has been conducted.
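The audit trail in particular lends itself to a concrete sketch. The following Python fragment is a hypothetical illustration of an append-only, dated record of research decisions; the field names, steps, and sample entries are assumptions made for this sketch, not a prescribed format.

```python
from datetime import datetime, timezone

def log_step(trail, step, detail, who):
    """Append one dated entry describing a research decision or action.

    The trail is append-only: entries are never edited or removed, so an
    auditor can reconstruct how the study was conducted, step by step.
    """
    trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,        # e.g. "design", "data gathering", "analysis"
        "detail": detail,
        "researcher": who,
    })
    return trail

# Invented example entries for illustration.
trail = []
log_step(trail, "design", "Revised interview guide after pilot interview.", "JA")
log_step(trail, "analysis", "Merged codes 'trust' and 'confidence' after team discussion.", "KG")
print(trail[0]["step"], "->", trail[1]["step"])  # prints: design -> analysis
```

Persisting such a record alongside the field notes gives a reviewer a verifiable account of how interpretations evolved.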

Two useful methods: Interviewing and observation

Interviewing is an active process that consists of engaging in verbal discourse of varying complexity with the informant, whereas observation is more passive, with little to no interaction with those observed. Interviewing is one of the most commonly used techniques in qualitative evaluation; it can complement observation by helping to answer questions about what was seen in the field and by offering information about what should be looked for in the field. Interviews offer an information-rich connection to the research topic and a depth of information. They are ideal for exploring HIE stakeholder perceptions about all of the issues listed at the beginning of this paper.

Interviews span a broad spectrum of structure, from extremely structured to very open-ended. Those at the most structured end of the spectrum are basically surveys with closed-ended questions delivered by a human, and they would not be considered qualitative. Completely unstructured interviews, on the other hand, might not even be considered research, since there would be no focus. Qualitative interviews fall somewhere between those two ends of the continuum. Usually the researcher develops a handful of questions and asks them in a way that opens the door for the interviewee to talk, tell stories, or reminisce. An interview guide, which might include four to six questions, serves as a template. For example, if barriers and facilitators are to be identified, the guide would list four to six main questions designed to elicit each person’s thoughts and opinions about that topic. Depending on the answer to a question, the interviewer has license to follow up and probe for details, but the guide assures that the territory is covered. Interviewing can help to uncover motives and causes and can gather multiple perspectives from people in different roles or with varying views. Informants, or interviewees, are usually carefully selected rather than chosen randomly; the number of interviews is less important than the appropriateness of the interviewees. This selection for a reason, primarily based on the type of information needed, is called “purposive” selection. It is important that the outliers, the people who are not necessarily typical, and especially the curmudgeons or skeptics, are interviewed. The term “key informants” is used for people within the environment under study who have knowledge and understanding of the culture. They can provide access and sponsorship, be collaborators, and, most important, serve as translators of the language of the particular culture.

The questions in a qualitative interview usually follow a “funnel” style. To “open the door” and help the interviewee feel comfortable, a broad open-ended question is asked early in the interview. As the interview unfolds, the interviewer at some point steers the person toward the area of focus. Because interviews in busy clinical settings generally last an hour or less, the funnel design helps to stimulate both creative, open answers and more specific, pointed ones. Listening is the most important and difficult job for the interviewer, as is making the most of the precious time set aside for the interview. Small, inexpensive voice recorders are helpful so that the interviewer can maintain eye contact and pay attention to the interviewee rather than having to take notes. If a second person is available to take notes as an observer, this is an advantage as well. The wording of the main questions should be open and inviting, avoiding dichotomous yes/no answers, and should always strive to eliminate language that may be construed as value-laden. There should be enough time left at the end of the interview for discussion of any topic not already covered that the interviewee feels is important to share: often by this time the person is feeling so comfortable that the most astute insight is expressed.
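The funnel ordering above can be made concrete in a small sketch. This hypothetical Python fragment encodes an interview guide as data, with broad door-opening questions preceding focused ones; the questions themselves are invented for illustration, not drawn from any actual study instrument.

```python
# Hypothetical funnel-style interview guide: broad questions first,
# narrowing toward the specific evaluation focus. Invented examples.
interview_guide = [
    {"order": 1, "scope": "broad",
     "question": "Tell me about a typical day working with the exchange."},
    {"order": 2, "scope": "broad",
     "question": "What has changed since the system was introduced?"},
    {"order": 3, "scope": "focused",
     "question": "What barriers have you run into when looking up outside records?"},
    {"order": 4, "scope": "focused",
     "question": "Can you walk me through the last time a lookup failed?"},
]

# The funnel property: every broad question precedes every focused one.
scopes = [q["scope"] for q in sorted(interview_guide, key=lambda q: q["order"])]
print(scopes)  # prints ['broad', 'broad', 'focused', 'focused']
```

Keeping the guide as data rather than prose makes it easy to version across rounds of an iterative evaluation and to share with a second interviewer.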

Focus groups

Focus groups are group interviews (the term originated as “focused group interviews”) with an added benefit: interviewees develop synergy by feeding off one another, developing or expounding on each other’s ideas. Many wrongly treat a focus group as merely a way to interview multiple informants quickly. Holding one one-hour focus group rather than 8 to 10 individual interviews can appear to save researcher time, but the type of data obtained differs from that of individual interviews; focus groups also take considerable advance planning, and they are often hard to schedule. A well-done focus group is much more than a collection of individual interviews, and facilitating one requires considerable skill. It can yield abundant data in a short time, but it is important to have carefully selected the right people in the room, to encourage everyone to be heard, to steer the discussion so that it stays on track, and to focus on just a few main questions. The synergy that develops in a group, the so-called “sharing and comparing,” generates a different type of information than a single individual can provide. This lively interaction, which must be created and sustained, is critical to allowing informants to voice unselfconscious ideas that only come to the surface when conversing with others. The level of structure of the questions varies depending on the evaluation question. If the topic is exploratory, then only a few open-ended questions to stimulate creativity need be asked. If you know the questions and want answers and explanations, the questions can be predetermined with a narrower focus. In general, however, the funnel style outlined above for interviews yields excellent focus group data.

Observation

Yogi Berra noted “You can observe a lot just by watching.” When we do not have our antennae out and all of our observational skills activated, we can miss or take for granted meaningful cues in our environment. Observation has several advantages. It can be relatively unobtrusive and non-invasive, so that busy subjects are not inconvenienced. Typical daily tasks can be watched in context, and observational data can confirm or disconfirm what people tell you during interviews.

Observation techniques have been thoroughly exercised in anthropology, where the goal is to understand another’s culture. The premise is that most human behavior can be observed. By setting oneself somewhat outside the realm of the observed, which allows one to see past the immediate, surface meaning that behavior appears to have, we can strive to understand the informants’ subculture, the way of life in a particular setting. Daily activities give meaning to underlying beliefs, assumptions, and perceptions, and observing these activities reveals the richness and complexity of the human condition.

According to Pasteur, “where observation is concerned, chance favors only the prepared mind.” By following a few rules and by using a simple structure for taking field notes, nearly anyone can gather high-quality observational data. For example, preparation should include careful selection of the people and places suitable for answering the particular research question, care in gaining “entry” into the situation, and careful documentation in the form of jottings in a notebook while on site. The jottings should be expanded into full descriptive field notes soon after the visit, while the memory is still vivid. Observations of actual users of HIE data in the field are especially likely to yield insight into workflow and usability issues.

Often called “participant observation,” observation in the field is rarely just fly-on-the-wall shadowing. There is a spectrum of degrees of participation. Full participation would mean actually doing the work you are observing. We usually do something in between full participation and shadowing since we tend to enter into spontaneous conversations with subjects, ask questions, attend meetings, etc. even when we try to be very unobtrusive. One must be careful about not interrupting subjects as they do their work, but they often want to talk when they have brief breaks.

There are some rules about observation which, if followed, should allow any intelligent and interested person to do it well. First, one must be able to focus and pay attention. For example, if you are observing everything happening in an emergency department but your focus is on use of the information system, you must concentrate on the processes and workflow rather than the medicine. In this respect, observation may be harder for clinicians, who are drawn to follow what is happening clinically, than for non-clinicians, who cannot. You must be able to write descriptively so that your final field notes are useful, and you must maintain a good deal of discipline to write detailed and pertinent notes. You need to attempt to separate important details from trivia, though this is harder in the beginning than later, when the focus becomes clearer. You must use rigorous methods to establish the trustworthiness of your observations. Finally, you must be introspective enough to understand your own biases.

What to do with the data

Data analysis in qualitative evaluation is very different from that in quantitative research because there are no set protocols or recipes. There are many strategies for doing the job well, and again much depends on the nature of the research question. If the question is quite open-ended and exploratory, a grounded theory approach to analysis might work best: without any preconceived notions or code words in mind, the researchers read the transcripts and field notes and, using the phrases of the informants, code or annotate the meaningful sections. Patterns emerge and, with thought and discussion, become themes and therefore results. As more becomes known about a topic, the approach can grow more structured, with code lists or templates designed ahead of time. All of this work can be done with the help of software programs, but the researchers must still read every line and do the bulk of the work; the software helps by allowing researchers to go back and search for sections of data coded in a particular way and by expediting the coding process. Following the development of themes, researchers must carefully interpret the results so that their meaning can be discovered. This can be an extremely rewarding and creative, albeit painstaking, task.
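The template-style coding described above, where a predefined code list is applied to the data, can be sketched in a few lines. This hypothetical Python fragment uses simple keyword matching only to show the mechanics of attaching codes to excerpts and retrieving them later; as the text notes, real coding requires researchers to read every line, with software merely expediting retrieval. The code book, keywords, and excerpts are all invented.

```python
from collections import defaultdict

# Invented code book: code name -> keywords that trigger it.
code_book = {
    "workflow": ["workflow", "interrupt", "extra step"],
    "trust": ["trust", "confiden"],          # matches confidence/confidential
    "usability": ["password", "screen", "click"],
}

# Invented transcript excerpts.
excerpts = [
    "I never trust that the outside labs are current.",
    "Logging in again after a password timeout breaks my workflow.",
    "The consent screen adds an extra step for every patient.",
]

# Attach codes to excerpts so coded sections can be retrieved by theme.
coded = defaultdict(list)
for text in excerpts:
    lowered = text.lower()
    for code, keywords in code_book.items():
        if any(k in lowered for k in keywords):
            coded[code].append(text)

print(sorted(coded))  # prints ['trust', 'usability', 'workflow']
```

Retrieval then becomes trivial: `coded["workflow"]` returns every excerpt tagged with that code, which is the searching step the software expedites.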

Presenting the results

Once themes and meanings have emerged, careful thought must be given to presentation of the results. The venue and format for presenting qualitative results depend on the audience and purpose of the report. Whereas there are fairly standard formats for presenting quantitative results in tables and charts, there are few standards for the format of qualitative results. Because quotations can be powerful and easily grasped, they are usually included in the text of the results report, but they can also be grouped and included in tables. Charts might include matrices that depict overlapping themes, or figures with Venn diagrams that show relationships. Fundamentally, presenting the results requires clear and jargon-free writing, enough original evidence to convince a skeptical reader, final results that are credible, description of new perspectives uncovered in the research, and suggestions for future work.
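One of the chart formats mentioned above, a matrix of themes, can be generated mechanically once coding is done. This hypothetical Python sketch lays out invented themes against invented stakeholder groups as a plain-text matrix; both axes and the marks are illustrative assumptions, not findings from the paper.

```python
# Invented themes and stakeholder groups for illustration.
themes = ["Trust", "Workflow fit", "Governance"]
groups = ["Clinicians", "Payers", "Public health"]

# An 'x' marks a theme raised by that group in the (invented) data.
raised = {
    ("Trust", "Clinicians"): True, ("Trust", "Payers"): True,
    ("Workflow fit", "Clinicians"): True,
    ("Governance", "Payers"): True, ("Governance", "Public health"): True,
}

# Build a fixed-width text matrix: one row per theme, one column per group.
header = f"{'Theme':<14}" + "".join(f"{g:<14}" for g in groups)
rows = [header]
for t in themes:
    cells = "".join(f"{'x' if raised.get((t, g)) else '':<14}" for g in groups)
    rows.append(f"{t:<14}{cells}")
print("\n".join(rows))
```

Such a matrix makes overlapping themes visible at a glance and can be pasted into a report alongside the representative quotations.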

When evaluating qualitative work, experts generally look for a clearly stated research or evaluation question, a description of the context within which the work was done, articulation of the research or evaluation design, strategies used for enhancing rigor, and a clear and reasonable presentation of the results. Non-experts who are stakeholders in HIE efforts will also appreciate this information, because it indicates that the evaluation has been done properly. Most important, the results should include representative examples and quotations, which can be particularly compelling to legislators and other decision makers.

Acknowledgments

The authors are supported in part by grants LM06942 and ASMM10031 from the U.S. National Library of Medicine, National Institutes of Health.


References

  • 1. Lorenzi NM. Strategies for creating successful local health information infrastructure initiatives. U.S. Department of Health and Human Services; December 16, 2003. Available at: http://aspe.hhs.gov/sp/NHII/LHII-Lorenzi-12.16.03.pdf. Accessed February 13, 2007.
  • 2. Overhage JM, Evans L, Marchibroda J. Communities’ readiness for health information exchange: the national landscape in 2004. J Am Med Inform Assoc. 2005;12:107–112. doi: 10.1197/jamia.M1680.
  • 3. Cusack CM, Poon EG. Evaluation Toolkit: Data Exchange Projects. U.S. Department of Health and Human Services, AHRQ National Resource Center for Health Information Technology.
  • 4. eHealth Initiative. Washington, DC: Foundation for eHealth Initiative; 2007. Available at: http://www.ehealthinitiative.org/pressrelease825A.mspx. Accessed February 8, 2007.
  • 5. Ash JS, Smith AC, Stavri PZ. Interpretive or qualitative methods: subjectivist traditions responsive to users. In: Friedman CP, Wyatt JC, editors. Evaluation Methods in Medical Informatics. 2nd ed. Springer-Verlag; 2005.
  • 6. Berg BL. Qualitative Research Methods for the Social Sciences. 6th ed. Boston: Pearson; 2007.
  • 7. Crabtree BF, Miller WL. Doing Qualitative Research. 2nd ed. Thousand Oaks, CA: Sage; 1999.
