Journal of the American Medical Informatics Association (JAMIA). 2007 Sep–Oct;14(5):537–541. doi: 10.1197/jamia.M2436

Discovering How to Think about a Hospital Patient Information System by Struggling to Evaluate It: A Committee’s Journal

Joseph Schulman, Gilad J Kuperman, Anupam Kharbanda, Rainu Kaushal
PMCID: PMC1975793  PMID: 17600095

Abstract

Parallel to the monumental problem of replacing paper-and-pen–based patient information management systems with electronic ones is the problem of evaluating the extent to which the change represents an improvement. All clinicians must grapple with this daunting challenge; those with little or no informatics expertise may be particularly surprised by the attendant difficulties. To succeed, they must be able to explicitly conceptualize the daily clinical work—a prerequisite for appreciating and reasonably evaluating it. Further, few of these evaluators may have reflected on the dynamic interaction between their work and their tools—how changing a tool necessarily changes the work. This article illuminates these problems by telling the story of how one patient care information systems committee learned, first, to think about the purpose of a patient information management system and, second, how to evaluate the impact of its implementation.


Alice came to a fork in the road. “Which road do I take?” she asked.

“Where do you want to go?” responded the Cheshire cat.

“I don’t know,” Alice answered.

“Then,” said the cat, “it doesn’t matter.”

∼Lewis Carroll, Alice in Wonderland

As organizations transition from paper to electronic media for storing and managing patient information, front-line clinicians experience disquieting feelings that may range from vague distress to profound disruption of their world. These clinicians face “dilemmas of transformation in the age of the smart machine.”1 We think it is crucial that all involved in this transformation strive for clarity in understanding how technology restructures the work situation; how a computer-based patient information system can “abstract thought from action”1—not only automate but also informate1—that is, reveal activities, events, entities, ideas, and information previously opaque to some degree; and how work tasks, work flow, and tools dynamically interact.

Hospital information technology (IT) committees represent a part of an organization’s strategy for crossing the chasm separating the culture of paper media from the culture of electronic media.2–4 These committees commonly include front-line clinicians, individuals who may have little experience with either the potential or the pitfalls of the technology on which they must pass judgment, and little experience in how to think critically about the issues. To draw attention to this aspect of the unfolding transformation and to contribute to the conversation about how to make sense of it, we summarize our committee’s early experience.

In the Beginning, the Task Seemed So Clear

We work at a large academic hospital. Our committee is composed of administrators; clinicians including physicians, nurses, and pharmacists; and IT specialists. Our charge is to improve our inpatient clinical IT systems: to determine desirable features for our electronic patient information management system, how to minimize work disruption during system implementation, and how to evaluate the consequences of replacing the previous technology. In particular, we were asked to suggest exactly what to measure to determine whether the IT system is successful. At the outset, this seemed rather straightforward to many members, so at the first meeting the group quickly crafted a list of short-term goals. These included assembling an inventory of resources from which we could obtain evaluation data, planning to assess the medical error reporting system for IT-related events, and conducting an IT user survey.

Stepping Back

Then one of us spoke up. “These aren’t goals. They’re tasks. Before deciding what to do (task), shouldn’t we say exactly what we want to achieve (goal)? For example, depending on our goal, we might prefer to track trigger events (sentinel metrics5) instead of analyzing data from the medical error reporting system.” Several of the clinicians, understandably, conceptualized the committee work as they do their clinical work. After a patient’s history, physical examination, and ancillary data are presented on rounds, they often immediately rattle off the next laboratory studies and images to obtain. If pressed on this issue, they say they are so accustomed to their work that in the blink of an eye they (implicitly) determine the goals those laboratory tests and images are intended to promote. However, test this assertion by asking: “If the laboratory tests and images you need—for instance, a complete blood count, C-reactive protein, and a chest radiograph—provide the answers you seek, then precisely what is the question these answers inform?” Some workers simply respond with a puzzled look; some articulate a reply, but the replies tend to vary among respondents and are infrequently framed as questions. Rarely, someone will articulate the question the studies, the “answers,” indeed inform: “What is the estimated probability my patient has condition X, given the results of these studies?” Activity without clarity of purpose may be activity without value. If the estimated probability that a patient has condition X, given confirmatory study results, does not exceed a threshold value justifying the benefits and risks of treatment, the studies, the “answers,” are unnecessary. Similarly, evaluation data should be collected only if they help to answer a specific question designed to explicitly probe goal achievement.
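
The following is a minimal, illustrative sketch of that question and its treatment threshold, not anything our system implements; the pretest probability, likelihood ratio, and threshold are made-up numbers chosen only to show the arithmetic (Bayes’ rule in odds form).

    # Minimal illustration of "What is the estimated probability my patient has
    # condition X, given the results of these studies?" All numbers are hypothetical.

    def posttest_probability(pretest_prob, likelihood_ratio):
        """Posttest odds = pretest odds x likelihood ratio; convert back to a probability."""
        pretest_odds = pretest_prob / (1.0 - pretest_prob)
        posttest_odds = pretest_odds * likelihood_ratio
        return posttest_odds / (1.0 + posttest_odds)

    pretest_prob = 0.10          # assumed pretest probability of condition X
    lr_positive = 8.0            # assumed combined likelihood ratio of the study results
    treatment_threshold = 0.30   # assumed probability above which treatment is justified

    p = posttest_probability(pretest_prob, lr_positive)
    print(f"Estimated probability of condition X given these results: {p:.2f}")
    if p >= treatment_threshold:
        print("Exceeds the treatment threshold: the studies could change management.")
    else:
        print("Below the treatment threshold: the studies would not change management.")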

Identification of Purpose

At our next meeting, we tried again to articulate what we wanted to accomplish over the short term: (1) We want to identify existing data sources that can inform evaluation of our work and to understand the sources’ strengths and weaknesses. (2) As a foundation for evaluation, we want to enumerate the intended consequences of the current IT implementations and discover some of the unintended consequences. Although in hindsight no. 1 was still quite vague and no. 2 essentially stated that the goal was to create a list of goals, we pressed on.

“What are we trying to achieve in the long term?” (1) We want to be able to describe the effects of our clinical interventions, including otherwise unapparent effects we would not know of without analyzing aggregated patient data. (2) We want to use the potential of IT to improve the care we provide.

This sounded pretty good. Even so, we acknowledged the imprecision by following with the question embedded in our committee’s charge: “How do we define success—how will we know when we have achieved these goals?” We did not appreciate at the time that the idea we began to grapple with might be more usefully conceived as a continuous variable, a spectrum of “doing a good job,” rather than a binary value, success/failure.6 Nor did we appreciate the need to operationally define “doing a good job,” nor that crafting this definition was at the core of our measurement task, nor the need to consider the multiple evaluative perspectives from which achievement might be framed, for example, those of the committee, the clinical staff, the IT department, and the organization. We did appreciate that answering the question entailed developing evaluative criteria for our information management tool, along with evaluative criteria for our clinical performance.

We were starting to get it: identifying what we measure comes after developing a clear, explicitly articulated idea of what we are trying to achieve. This idea must do more than sound lofty and laudable; it must describe what the system is to be about at its core. Without such clarity, we would ultimately just collect lots of data without gaining knowledge. By this formulation we also recognized that our work was enmeshed with that of another committee charged with developing clinical performance metrics. Although we were actually back where we started, we sensed that we could now make a more informed choice about the path to take.

Broad Goals

In discussing candidate goals, members indicated that IT was important because it represented a means of reducing errors. Therefore, we scrutinized a widely accepted definition of error: “… all those occasions in which a planned sequence of mental or physical activities fails to achieve its intended outcome, and when these failures cannot be attributed to the intervention of some chance agency.”7

Clearly, we needed to achieve much more conceptual clarity and to specify our ideas in greater detail. The notion of error makes no sense until we precisely identify the intended outcome, i.e., the goal of the activity.

Our deliberations also led us to Norman’s8 and Zuboff’s1 notion of IT as a cognitive tool—something that should make us smarter than we are without it. Therefore, taking account of the various users at our hospital, we pondered how to think about the main features this cognitive tool should offer.

Herbert Simon helped point the way:

Solving a problem simply means representing it so as to make the solution transparent … a problem space in which the search for the solution can take place… Focus of attention is the key to success—focusing on the particular features of the situation that are relevant to the problem, then building a problem space containing these features but omitting the irrelevant ones.9

A new candidate goal and associated evaluative criteria were revealed. Now we asked, “To what extent does our tool aid in creating a productive problem space?”

By now, most of us had completely forgotten that we initially thought the committee’s charge could be straightforwardly dispatched. We understood that it was so complex that we needed to break it into more manageable chunks. We identified broad categories within which to articulate hospital IT goals and problems:

  • Business, i.e., billing and collections

  • Regulatory compliance

  • Reporting

  • Patient documentation

  • Electronic prescribing

  • Decision support and other cognitive enhancements

  • Referrals

  • Clinical performance evaluation and quality improvement
    ○ Exposure-outcome relationships

  • Patient registries

  • Work flow and efficiency

Criteria for Measures

We were beginning to share the view that collecting data is merely the tip of the iceberg that is IT measurement. Data collection is buoyed by a body of explicit performance questions whose answers have the potential to advance our purpose. Proposed measures must plausibly inform those answers; to demonstrate that they do, each must withstand rigorous and uniform scrutiny:

  • What dimension of IT use or patient care does this measure inform us about?
    ○ With what overarching aim does this dimension resonate? That is, if a list of explicit aims and a fine-grained process map of our entire enterprise were spread before us, precisely which aim and which process component(s) would this measure enable us to associate with each other?
      ▪ Such measurement activity should both derive from and test hypotheses about causal sequences.
  • What results do we expect, i.e., what is our hypothesis?

  • How would we interpret results that might be displayed? (This entails working with fabricated “dummy data” during planning; see the sketch after this list.)

  • What might we do differently once we know this?

  • What target performance range do we seek for this measurement variable?
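
To make the “dummy data” idea concrete, the sketch below fabricates a made-up measure (monthly count of reported juxtaposition errors) with invented values; nothing here comes from our system. The point is to rehearse interpretation and target setting before any real data are collected.

    # Fabricated "dummy data" for planning only: a hypothetical measure and made-up values.
    dummy_months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
    dummy_counts = [4, 6, 5, 11, 12, 14]   # invented counts of reported juxtaposition errors

    print(f"{'Month':<6}{'Reports':>8}")
    for month, count in zip(dummy_months, dummy_counts):
        print(f"{month:<6}{count:>8}")

    # Planning questions: if a real display looked like this, would we call the rise
    # after March a signal? What target range would we set, and what action would a
    # value outside that range trigger?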

This interrogative framework depends on clear notions of purpose, of intended outcomes. However, systems may produce surprises: unintended, undesired outcomes. How might the committee learn about unintended consequences of IT? We discovered another daunting challenge. Sometimes, we would not know in advance what to look for; even worse, we might not recognize what we were looking at after it occurred. As a first step, we would measure unintended IT consequences via some type of user survey. Practical considerations required that we draw a sample from all users; therefore, we would have to determine how to sample. Our thoughts increasingly reflected our experience: “First, we should discuss detailed, explicit aims of the survey. That way, we’ll have a clearer idea of what to do. For example, if one aim is to gain insight into whether responses might be biased by users’ experience with antecedent technology, we might consider including complementary ‘fly-on-the-wall’ observers’ reports.”
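
One minimal way to make the sampling question concrete is a stratified random draw, so that each user group is represented; the sketch below is purely illustrative, and the role strata, group sizes, and sampling fraction are all assumptions rather than a description of our institution.

    # Hypothetical sketch of a stratified random draw for the user survey.
    # Role strata, group sizes, and the sampling fraction are made up.
    import random

    users_by_role = {
        "physician":  [f"md_{i}" for i in range(120)],
        "nurse":      [f"rn_{i}" for i in range(300)],
        "pharmacist": [f"rph_{i}" for i in range(40)],
    }

    sampling_fraction = 0.10   # survey 10% of each stratum
    random.seed(42)            # reproducible draw for planning discussions

    survey_sample = {
        role: random.sample(users, max(1, round(len(users) * sampling_fraction)))
        for role, users in users_by_role.items()
    }

    for role, invited in survey_sample.items():
        print(f"{role}: {len(invited)} users invited")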

Learning from Others

We considered the wide range of IT already implemented, for which corresponding goals often appeared to be implicit at best. We also considered the practical reality that committee members could devote only a small fraction of their total work time to this effort. It seemed sensible to develop ever more fine-grained goals while accumulating insights from what others have done in these areas. That is, we would start with others’ evaluative frameworks, reflect on the goals they (at least implicitly) seek to establish, and, over time, refine our own concept of our goals and of how we determine that we achieve them.

We drew heavily from the excellent overview of Ash et al.10 to draft an extensive conceptual framework for probing clinicians’ experience using our institution’s patient information management system (Table 1). The work/tool interaction section of Table 1 merits additional discussion. The content and flow of the daily work—the tasks constituting the means of achieving the (hopefully explicit) aims—reflect what is possible and practical at the time. The tools are designed to facilitate the work, and similarly reflect what is possible and practical at the time. Thus the notion of what constitutes the daily work, operationally framed, varies over time. Note that the aims of the daily work tend to be more stable than the tasks selected to achieve those aims. To illustrate: in the days of paper-based patient records, clinicians would never dream of instantly computing a patient’s posttest probability of a particular disease as soon as a test result is reported. Today, this is indeed possible. Although such Bayesian computation was always consistent with the aims of clinical work, it may become part of the daily work when it is possible and practical.

Table 1. Conceptual Framework for Probing Clinicians’ Experience Using a Patient Information Management System

Dimension of IT tool use: Work Flow

  • Data entry

  • Do we impose perceived additional work tasks?

  • Is information displayed in a visual format that facilitates the task?
    ○ Fonts
    ○ Background color
    ○ Content structure

  • Data retrieval

  • Individual patients

  • Aggregates of patients

  • How soon after creation is a record available?

  • Interruptions: when distracted by a competing task, do users lose track of thoughts and where they were in the record by the time they return to it, or does the tool remember for them?

  • System response time; down time

  • Ease of system access

  • Feature navigation: ease, and possibility to toggle between features

  • Juxtaposition error: is a data element so close to something else on the screen that the wrong option may easily be clicked or an item read in error?

  • Have users devised workarounds? That is, have users devised strategies and tactics enabling them to live with the system despite demands they deem unrealistic, inefficient, or harmful?

  • To what extent does this tool promote entering information only once, but enable presenting it in varied contexts?

Dimension of IT tool use: Cognitive Enhancement/Impedance

  • Does this tool overwhelm users (cause cognitive overload) by overemphasizing structured and “complete” information entry alerts and reminders?
    ○ If so, please provide detailed explanation.

  • Does this tool cleave information that belongs together, forcing users to switch between different screens, so that users feel deprived of the overview desired?
    ○ If so, please provide detailed explanation.

  • Standard phrases
    ○ Are readability and information value of reports diminished by over-use of standard phrases?
      ▪ Does the availability of these standard phrases discourage users’ composing thoughts and crafting meaning?
      ▪ As users read a narrative, is understanding sometimes confounded by uncertainty whether a sentence or clause represents thoughtful word use—a spot-on description; or merely a conveniently available selection—a more or less apropos description?

  • Have others over-used cut and paste or copy and paste text manipulation?
    ○ Redundant information
    ○ Inaccurate information

  • Are data provided as abstract cues, or do they contain sufficient context to establish their referential function?1

  • Do users feel that they function more as data entry workers or as knowledge workers?
    ○ Do users feel that their identity as a professional has changed by using this tool? If so, how?

  • To what extent does this tool draw out users’ intellect in working with the data and aid their creating meaning from it?1

Dimension of IT tool use: Communication

  • To what extent do users think that another professional reviewing their entry will grasp the essence of what they intended to communicate?
    ○ Do users think that “entering” their contribution to the patient record replaces their previous means of initiating and communicating their plans?
    ○ Have users noticed a change in the amount of direct interaction among physicians, nurses, and pharmacy? If so, in what direction?
      ▪ Is this perceived to be in their patient’s and their interests?

  • Has overall reliance on the computer system as a source of answers to clinical questions increased, decreased, or stayed the same?

Dimension of IT tool use: Work/Tool Interaction

  • Does the tool seem to speed or slow the daily work?

  • Does the tool seem to make users feel smarter or dumber?

  • Does the tool seem to force users to change the way they think?
    ○ About the patient?
    ○ About the work?
    ○ If so, is the change good or bad?

  • What do users need that they’re not getting?

  • What are users getting that they don’t need?

  • For each of the above, exactly how has the user determined this?

Our Revelation

The essential point is that clinical work and tools are calibrated to each other. If a tool is changed, the work flow and/or fine structure it is intended to support must necessarily change.6 Thus, stakeholders must regard the need to recalibrate work flow and/or fine structure to a new tool’s capabilities as an aspect of progress, ever mindful of the aims that motivate the work. A tool achieving quick user acceptance may be one that makes little use of its technological potential and correspondingly is less likely to advance the goals or justify the investment. The “aha moment” arrives with the understanding that preserving existing problem-solving approaches that suppress evident potential for more effectively and/or efficiently advancing the goals is antithetical to progress. A problem space with which workers are comfortable may, when new tasks are enabled, be rendered suboptimal. Therefore, in the context of the goals of the enterprise, we define user acceptance as the result of judging not a new tool in isolation, but a new work/tool dyad. A short “test drive” yields an answer to the wrong question.

Workers long accustomed to a particular way of working may have difficulty imagining new ways made possible by tools that enable things they never dreamed of. Indeed, workers may be unaware that their early opinions about new tools reflect their imposing the specifics of the previous work/tool interaction on the present one. To further illustrate this important idea of dynamically calibrating work and tools to each other, we invited members to consider this question: “If all you had to do was ask for it, what do you wish your patient information management tool could do?”

  • Serve me new and relevant information without my having to open a specific patient’s record—the information system should “find me” when necessary

  • Support Bayesian decision making (compute posttest disease probability)

  • Enable me to access it remotely (from home)

  • Configure multiple windows into one coherent display, as I deem necessary

  • Optimize the problem space in relation to the nature of the problem, rather than the same configuration for every patient

  • Facilitate communication among consultants

  • Improve communication efficiency; minimize interactions and interruptions
    ○ Communicate with other involved providers from within a patient record
      ▪ Document communication and results
    ○ Prevent duplication of efforts, prevent memory lapse

  • Promote an explicit list of patient-specific goals for the day, articulated as part of daily patient rounds

  • Support a shared to-do list among all care providers

The point of this invitation was to illuminate the way one’s conceptualization of work is molded by one’s notion of what is possible. The aim was to highlight contrasts: the difference between the items enumerated on a current task list and a potential ideal task list; the gap between the way tasks are done and potentially more efficient or effective alternatives enabled by technological advancement. Pondering such contrasts promotes creativity in formulating a problem space and solution (Simon9). Although the invitation was not intended to encourage user expectations with which tool builders could not keep up, some low level of discord appears desirable for stoking the flame of continual improvement.

Lessons We Learned

In conclusion, although evaluating a clinical IT implementation is a daunting challenge, it is central to managing the organization. IT evaluation should be founded on an explicit understanding of the goals of the enterprise (necessarily the first step in the process), on appreciation of the incessant work/tool interaction, and on the expectation that both change over time. This view thus calls for:

  • Persuading the user community that their choices do not include the status quo

  • Discriminating user resistance to change from suboptimal technical solutions11,12

  • Appreciating that user acceptance need not imply a problem successfully solved12,13

  • Setting realistic expectations; understanding that early iterations of a solution may produce only tolerable or promising results, i.e., it is impossible to anticipate every issue that will arise after implementation12

  • Appreciating that the appropriate evaluative study design may be a matter of controversy. Randomized controlled trials, although a gold standard for discriminating an intervention effect, are typically infeasible. Some outcomes may not even be quantifiable; however, they may be analyzed using widely accepted qualitative methods14

  • Periodically rethinking the boundaries and elements of the problem space

Daunting as IT evaluation may be, it is unavoidable because, as our story illustrates, it is central to health care. Fortunately, as we engage with the challenge we become increasingly energized. We urge others to serve on committees such as ours because the rewards of this arduous, often frustrating, endeavor are nothing less than greater clarity about the essence of our work in health care, greater mastery in achieving its purpose, and a greater sense of meaning in our daily tasks.

References

1. Zuboff S. In the Age of the Smart Machine: The Future of Work and Power. New York, NY: Basic Books; 1988.
2. Healthcare Information and Management Systems Society. A Desire for Change: Strong Leadership Required in the EMR-EHR Revolution. 2006. Available at: http://www.himss.org/content/files/davies/Davies_WP_Leadership.pdf. Accessed May 23, 2007.
3. Shortliffe EH. Strategic action in health information technology: why the obvious has taken so long. Health Affairs 2005;24:1222–1233.
4. Wyatt JC. Hospital information management: the need for clinical leadership. BMJ 1995;311:175–178.
5. Resar R, Rozich J, Classen D. Methodology and rationale for the measurement of harm with trigger tools. Qual Saf Health Care 2003;12:39–45.
6. Aarts J, Doorewaard H, Berg M. Understanding implementation: the case of a computerized physician order entry system in a large Dutch university medical center. J Am Med Inform Assoc 2004;11:207–216.
7. Reason J. Human Error. Cambridge, UK: Cambridge University Press; 1990.
8. Norman DA. Things That Make Us Smart. Cambridge, MA: Perseus Books; 1993.
9. Simon HA. The Sciences of the Artificial. Cambridge, MA: MIT Press; 1996.
10. Ash JS, Berg M, Coiera E. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. J Am Med Inform Assoc 2004;11:104–112.
11. Rogers EM. Diffusion of Innovations. 4th ed. New York, NY: Free Press; 1995.
12. Schulman J. Managing Your Patients’ Data in the Neonatal and Pediatric ICU: An Introduction to Databases and Statistical Analysis. Oxford, UK: Blackwell; 2006.
13. Lorenzi NM, Riley RT. Managing change: an overview. J Am Med Inform Assoc 2000;7:116–124.
14. Stoop A, Heathfield H, de Mul M, Berg M. Evaluation of patient care information systems: theory and practice. In: Berg M, Coiera E, Heathfield H, et al., editors. Health Information Management. London, UK: Routledge; 2004. pp. 206–229.
