Journal of the American Medical Informatics Association. 2003 May-Jun;10(3):235–243. doi: 10.1197/jamia.M1094

Determinants of Success of Inpatient Clinical Information Systems: A Literature Review

M J van der Meijden 1, H J Tange 1, J Troost 1, A Hasman 1
PMCID: PMC342046  PMID: 12626373

Abstract

We reviewed the English and Dutch literature on evaluations of patient care information systems that require data entry by health care professionals, published from 1991 to 2001. Our objectives were to identify attributes that were used to assess the success of such systems and to test the ability of a framework developed by DeLone and McLean for management information systems1 to categorize these attributes correctly. The framework includes six dimensions or success factors: system quality, information quality, usage, user satisfaction, individual impact, and organizational impact. Thirty-three papers were selected for complete review. Types of study design included descriptive, correlational, comparative, and case studies. A variety of relevant attributes could be assigned to the six dimensions in the DeLone and McLean framework, but some attributes, predominantly in cases of failure, did not fit any of the categories. They related to contingent factors, such as organizational culture. Our review points out the need for more thorough evaluations of patient care information systems that look at a wide range of factors that can affect the relative success or failure of these systems.


Introducing an innovation into an organization will evoke changes. In some cases these changes will be minor ones that hardly affect the organization and the people working in it. In other cases, those having to use the innovation might experience major changes. Among health care professionals, innovations are judged predominantly by their direct value for patient care.2 Patient care information systems include hospital information systems, computerized or electronic medical record systems, and nursing documentation systems. Information systems with a practical utility for patient care or diagnostic procedures are relatively easily accepted, sometimes even without any scientific evidence of their value.2,3 However, systems that support the process of health care without being directly relevant to patient care are less easily accepted. In particular, attempts to introduce health care information systems that require data entry by health care providers have not always been successful.4–6

But what counts as success? Complete refusal by users to use a system is certainly a failure, but often success remains undefined. Clearly, the determination of success depends on the setting, the objectives, and the stakeholders. Only a thorough evaluation study can show whether or not a specific system was successful in a specific setting. A wide range of attributes has been measured in evaluations of patient care information systems. These attributes vary from purely technical factors to outcome measures such as quality of care, and from end-user evaluation to the extent of diffusion into the organization. Which criteria predict success or failure is unclear, but it is likely that no single criterion can account for the success or failure of an information system. Furthermore, each evaluation criterion must be measured in an appropriate way.

In 1995 van der Loo conducted a literature review to classify evaluation studies of information systems in health care.7 The primary objective was to gain insight into the variety of evaluation methods applied. In all, 76 studies published between 1974 and 1995 were included in the review. Many different performance measures or success factors were applied in the studies reviewed. The review's main conclusion was that the evaluation methods and effect measures depended on the characteristics of the information system under evaluation. However, the range of identified evaluation methods and effect variables was broad for every type of system. Among the effect variables were costs, changes in time spent by patients and health care personnel, changes in the care process, database usage, performance of users or the system, patient outcomes, job satisfaction, and the number of medical tests ordered. Several authors have suggested approaches to evaluating information technology in health care.8–10 These approaches concerned assessment of technical, sociological, and organizational impacts.8,9,11 A literature review by DeLone and McLean in the field of management information systems aimed at identifying determinants of system success.1 They presented a framework with six dimensions of success.

The purpose of our review was to analyze evaluation studies of inpatient patient care information systems requiring data entry and data retrieval by health care professionals, published between 1991 and May 2001, to determine the attributes that were used to assess the success of these systems and to categorize these attributes according to the DeLone and McLean framework. We also examined how the attributes were measured and what methodologies were used in the evaluation studies.

Methods

Selection Procedure

A patient care information system was defined as a clinical information system in use in inpatient settings, requiring data entry and data retrieval by health care professionals themselves. Medline was searched using the following Medical Subject Headings: evaluation studies, medical record systems—computerized, and nursing records. Additionally, Medline, Embase, and Current Contents were searched with the following text words and phrases: medical record*, nursing record*, evaluat*, technology assessment, electronic, and computer* in all possible combinations. Exclusion criteria were guidelin* and decision support. Medline and Embase were searched for references in English or Dutch published between 1991 and May 2001. Current Contents was searched from 1998 to May 2001. The first author manually reviewed the titles and abstracts of all journal article citations retrieved from these three sources. She also reviewed the abstracts from the 1999 and 2000 AMIA Annual Symposium proceedings and the 1995 and 1998 Medinfo conference proceedings. References were selected for further analysis if the article contained descriptions of (a) the system, (b) the evaluation study design, and (c) the data collection methods, and (d) an analysis of results. The full articles of these selected abstracts were retrieved for detailed review and further analysis. No distinction was made between articles in proceedings and regular journals. The bibliographies of selected articles were not searched for additional relevant literature.
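
For illustration only, the sketch below shows how the text-word portion of such a strategy could be assembled into a single Boolean query. The review does not report its literal query strings, so the field-free PubMed-style syntax and the expansion of "all possible combinations" are assumptions; the Medical Subject Headings listed above would be searched separately.

```python
# Illustrative sketch only: composes the text words listed above into one
# Boolean query. The syntax and the reading of "all possible combinations"
# are assumptions made for illustration, not the review's actual strategy.
from itertools import product

record_terms = ['"medical record*"', '"nursing record*"']
evaluation_terms = ['evaluat*', '"technology assessment"']
technology_terms = ['electronic', 'computer*']
exclusion_terms = ['guidelin*', '"decision support"']

# One record term AND one evaluation term AND one technology term,
# OR-ed over all combinations, minus the excluded topics.
combos = [' AND '.join(c) for c in product(record_terms, evaluation_terms, technology_terms)]
query = ('(' + ' OR '.join(f'({c})' for c in combos) + ')'
         + ' NOT (' + ' OR '.join(exclusion_terms) + ')')

print(query)
```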

Study Designs

Friedman and Wyatt12 describe several evaluation study designs. They distinguish between objectivist studies, in which subjects, variables, and data collection methods are selected in advance, and subjectivist studies, which are conducted in the natural environment of the subjects, without manipulating it, and in which themes of interest emerge during the study.12,13 Objectivist studies are descriptive, comparative, or correlational. In a descriptive study the value of an outcome variable or a set of outcome variables is measured at a certain point in time. This design is valuable for assessing predefined requirements. In a correlational study the researcher does not assign subjects to a condition but selects variables and data collection methods. An example is a before-after design in which the introduction of the information system is preceded by baseline measurements and followed by intervention measurements. In a comparative design the researcher seeks to "create" contrasting conditions between an intervention and a control group. A sample of subjects is selected and assigned, randomly or not, to one of the conditions. Then a predefined set of variables, dependent and independent, is measured. With randomly assigned subjects the study can approach a randomized controlled trial.

Subjectivist studies include case studies. Case studies are empirical in nature and study a phenomenon in its natural context, where the boundaries between phenomenon and environment are not absolutely clear. Evidence is collected from multiple sources—quantitatively or qualitatively. In evaluation studies of information systems, a case study can be a powerful instrument.

Dimensions of Success

We decided to analyze the literature according to the approach of DeLone and McLean, because in our view the success dimensions identified for management information systems are valid for patient care information systems as well. In their review, DeLone and McLean proposed to subdivide success measures of management information systems into six distinct categories: (1) system quality, (2) information quality, (3) usage, (4) user satisfaction, (5) individual impact, and (6) organizational impact. Within each category several attributes could contribute to success.1

The information processing system itself is assessed with system quality attributes (e.g., usability, accessibility, ease of use). Information quality attributes (e.g., accuracy, completeness, legibility) concern the input and output of the system. Usage refers to system usage, information usage, or both. Examples of attributes of usage are the number of entries and total data entry time. User satisfaction can concern the system itself or its information, although the two are hard to disentangle. DeLone and McLean included user satisfaction in addition to usage because, in cases of obligatory use, user satisfaction is an alternative measure of system value. Individual impact is a measure of the effects of the system or its information on users' behavior; attributes can be information recall or the frequency of data retrieval or data entry. Organizational impact, the last category, refers to the effects of the system on organizational performance. Thus, success measures vary from technical aspects of the system itself to effects of large-scale usage.

DeLone and McLean1 concluded that success was a multidimensional construct that should be measured as such. In addition, they argued that the focus of an evaluation depended on factors such as the objective of the study and the organizational context. Furthermore, they proposed an information system success model that expressed the interdependency, causal as well as temporal, of the six success factors. In their view, success was a dynamic process rather than a static state, a process in which the six dimensions relate temporally and causally. System quality and information quality individually and jointly affect usage and user satisfaction; usage and user satisfaction influence each other and jointly influence individual impact, which in turn affects organizational impact.
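
To make the framework and its proposed ordering concrete, the following minimal sketch (our own illustration in Python, not the authors' notation) lists the six dimensions with a few example attributes drawn from this review and encodes the causal/temporal chain described above.

```python
# Illustrative sketch only: the six DeLone and McLean success dimensions with
# example attributes mentioned in this review, plus the causal/temporal
# ordering described in the text. Names and structure are ours.
SUCCESS_DIMENSIONS = {
    "system quality": ["ease of use", "response time", "availability (up-time)"],
    "information quality": ["accuracy", "completeness", "legibility", "timeliness"],
    "usage": ["number of entries", "frequency of use", "total data entry time"],
    "user satisfaction": ["overall satisfaction", "attitude toward the system"],
    "individual impact": ["information recall", "changed work patterns"],
    "organizational impact": ["communication and collaboration", "costs"],
}

# system and information quality -> usage and user satisfaction (which also
# influence each other) -> individual impact -> organizational impact.
DEPENDENCIES = [
    ("system quality", "usage"), ("system quality", "user satisfaction"),
    ("information quality", "usage"), ("information quality", "user satisfaction"),
    ("usage", "user satisfaction"), ("user satisfaction", "usage"),
    ("usage", "individual impact"), ("user satisfaction", "individual impact"),
    ("individual impact", "organizational impact"),
]

def upstream(dimension):
    """Dimensions that feed directly into the given dimension in the model."""
    return {src for src, dst in DEPENDENCIES if dst == dimension}

print(upstream("individual impact"))  # expected: {'usage', 'user satisfaction'}
```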

Results

Our search identified 1077 publications, of which 202 were selected for full article review. Eleven of these references were unavailable in Dutch libraries and were not included in our study. Based on the selection criteria, the remaining set of 191 articles was reduced to a final set of 33, describing 29 different information systems. We included more than one article on a single system if the articles described distinct evaluations, and we analyzed them separately.

Types of Systems.

We identified general and specific systems. Hospital information systems, nursing (bedside) documentation systems, computerized medical records systems, and physician order entry systems (POE) are examples of general systems. These systems are not necessarily limited to one ward or department. Specific systems were those designed for one type of department, such as intensive care unit (ICU) systems or automated anesthesia record-keeping systems. Fourteen systems were used only by nurses, five only by physicians, and eleven by both nurses and physicians. The last category comprised four order entry systems and three hospital medical record systems.

Study Designs and Data Collection Methods.

Table 1 shows that descriptive and correlational designs were used most frequently, whereas the comparative design with simultaneous randomized controls was applied in only two studies. Data collection methods varied and included chart review, questionnaires, time studies, work sampling, automated logging of user information, focus groups, observations, and open-ended interviews.

Table 1.

Occurrences of Study Designs and Data Collection Methods for Each Type of Study

Study design (references): counts per data collection method* (CR, Q, I, WS/TS, OBS, FG, Other)
Descriptive (14–22): 3, 7, 2
Correlational (23–31): 6, 3, 1, 2, 1, 2
Comparative, simultaneous nonrandomized controls (32–38): 4, 3, 1, 6, 1
Comparative, simultaneous randomized controls (39,40): 1, 2, 2, 2
Case study, single (41–44): 2, 1, 3, 1, 1
Case study, multiple (45,46): 2, 2, 1, 1

*CR: chart review, Q: questionnaire, I: interviews, WS: work sampling/TS: time sampling, OBS: observations, FG: focus group.

The Dimensions of Success

Information quality was evaluated in 64% of the studies, system quality in 58%, usage in 36%, user satisfaction in 48%, individual impact in 45%, and organizational impact in 39%. An overview of the data collection methods in the different studies is shown in Table 2. Eight authors used multiple data collection methods to measure several attributes of the same success factor.17,26,30,32,37,38,40,45 To measure system quality authors preferred questionnaires and time or work sampling techniques. Three authors combined two of these data collection methods. Information quality was predominantly assessed by means of chart review or a questionnaire. Four authors applied multiple methods and combined chart reviews and questionnaires, in one case supplemented with interviews. Time and work sampling and content analyses were preferred to assess usage of an information system.26,32,35,36,38,40 One study kept a log to investigate usage behavior.34 User satisfaction was most frequently measured with a questionnaire. Two authors combined interviews and questionnaires. Individual impact and organizational impact were assessed with several data collection methods. Four authors used multiple methods to assess individual impact. Two of them combined a questionnaire with work sampling, and the other two combined chart reviews, interviews and questionnaires. In three evaluations interviews and questionnaires were combined to assess organizational impact, supplemented with chart reviews in one case.

Table 2.

Occurrences of Data Collection Methods Linked to Dimensions of Success*

Data Collection Method† System Quality Information Quality Usage User Satisfaction Individual Impact Organizational Impact
Q 14, 19–21, 32, 38, 40 16, 20, 21, 26, 37, 38, 40 16 14, 16, 20–23, 26, 27, 32, 38–41 21, 22, 26, 32, 40 14, 21, 26, 39, 40
CR 22, 29, 40, 43, 46 15, 17, 22, 26, 28-31, 34, 37–40, 43 26, 34, 35, 40 23, 26, 34, 40 40
I 37, 40, 43, 45, 46 26, 42, 43, 45 45, 46 26, 37, 40, 42 26, 37, 40, 42, 44–46 26, 39, 40, 42
TS/WS 23, 25, 33, 35, 37–40 40 23, 32, 35, 36, 38, 40 32, 37, 40 25, 33, 37, 40
FG 45 45 45
OBS 25, 45 42, 45 45 42 42, 45 25, 42
Other 22, 24 17, 24, 30 24, 33, 41 22, 24, 41

*The numbers represent references.

†Q: questionnaire, CR: chart review, I: interviews, TS: time sampling/WS: work sampling, FG: focus group, OBS: observations.

Overall, system and information quality were most frequently evaluated, and the questionnaire was the preferred data collection method. Descriptive studies emphasized technical issues, but some also included contingent factors. Organizational issues, the system development process, and the implementation process were rarely considered in correlational and comparative studies. The studies describing a failed or difficult implementation were all case studies, single or multiple,41,43,45,46 except one,32 and all of them included contingent factors. Three authors assessed only one attribute of a single success factor.27,31,36 In correlational studies relatively few attributes of the different success factors were assessed.

Table 3 shows an overview of the attributes of the dimensions of success measured in the different studies.

Table 3.

Attributes of Different Success Factors*

System Quality Attributes: Ease of use (record-keeping time) (14,15,20,21,23,33,35,37–39,43,45); Response time (14,19–22,32,40); Timesavings (14,22,24,25,37,40); Intrinsic features creating extra work (37,43,45,46); Perceived ease of use† (21,39,40); Usability (19,20,45); Availability (up-time) (21,40); Ease of learning (14,38); Rigidity of system, built-in rules (46); Reliability (32); Security (29); Easy access to help (21); Data accuracy (22).

Information Quality Attributes: Completeness (15,20,22,24,26,28,29,31,37–39,45,46); Accuracy of data (15,17,21,26,30,37–39); Legibility (15,21,37–39,43,45); Timeliness (21,25,34,37,40,43); Perceived usefulness‡ (21,26,39,42); Availability (21,42,43); Comprehensiveness (20,26); Consistency (26); Reliability (19); Format (25).

Usage Attributes: Number of entries (15,26,34,35,38,40); Frequency of use (26,32,36,40); Duration of use (23,35,40,45); Self-reported usage (16); Location of data entry (37); Frequency of use of specific functions (16).

User Satisfaction Attributes: User satisfaction (16,20–23,26,32,37,39–42); Attitude (14,27,32,39); User friendliness (14,38); Expectations (32); Competence (computers) (26).

Individual Impact Attributes: Changed clinical work patterns (23,32,41,45,46); Direct benefits (21,44,45); Changed documentation habits, comprising more administrative tasks (22,26,40), time of day for documenting (34,37,40), and documentation frequency (22,34,37); Information use, comprising information recall (33,45), accurate interpretation (26,42), integration of information/overview (37,45), and information awareness (42); Efficiency and effectiveness of work (24); Job satisfaction (40).

Organizational Impact Attributes: Communication and collaboration (24,26,28,37,39,40,42); Impact on patient care (14,21,22,28,33,40,42); Costs, comprising timesavings (22,24,25,40–42), reduction of staff (22,41), and number of procedures reduced (42).

*Numbers in parentheses refer to references.

†Perceived ease of use (PEU) concerns a user's perception or belief: once users believe that the information system can be helpful, the balance between the performance benefits and the effort to be invested (PEU) determines whether they will actually use a particular system.57

‡Perceived usefulness (PU) is defined as the extent to which potential users believe that a certain information system can or will support them in performing their job better.

System Quality

System quality attributes were evaluated in 19 studies. The most frequently addressed variables were ease of use (record-keeping time), savings in documentation time, and response time. In six studies ease of use,23,33,35 timesavings,24,25 or security29 was the single attribute of system quality.

Several authors reported a decrease in time spent on documentation in comparison with paper.22,24,25,35,37,38,40 In three studies users complained about the (complicated) methods to enter patient data electronically.20,43,45 Sicotte45 and Southon,43 who conducted open-ended interviews, found that rigidity and factors intrinsic to the system created extra work and accounted for the inconvenience. Down time and/or system response time were variables in six evaluations.

Information Quality

Information quality criteria were analyzed in 21 studies. The most frequently used were completeness, data accuracy, legibility, and timeliness. One single attribute was measured in seven studies, and in three of them that attribute was completeness. All relevant studies found an increased completeness of record content.20,22,24,26,28,29,34,37–39,45 In the perception of users, availability and timeliness of information were positive aspects.26,42 For bedside nursing documentation systems, an improvement in timeliness of certain types of information was observed.34,37 Order entry systems increased the availability of information about orders and improved timeliness by reducing the time between sending the order and having the results available or the orders executed.21,40

Usage

Thirteen authors analyzed the usage of an information system. The number of entries, frequency of use, and duration of use were the preferred attributes. Ambiguous results were reported for frequency of use. In three studies of bedside nursing documentation systems chart reviews showed a significant increase in number of entries and frequency of use.15,26,35 In contrast, two other nursing documentation system studies and one order entry system study identified no significant change in frequency of use.32,36,40

User Satisfaction

In 16 studies the authors were interested in user satisfaction, and five of them combined two or more attributes. Overall satisfaction, user friendliness, and user attitude toward the information system were user satisfaction attributes. Overall user satisfaction was rather high in all but one41 study. In one of the studies, physicians attributed their satisfaction to patient care benefits, such as improvement of clinical communication, improvement of medical record keeping and decision-making, and educational benefits (e.g., improved supervision of students and residents).42 In one evaluation study, staff cited a positive influence of their bedside documentation system on work efficiency and effectiveness and promoted use of the system throughout the whole organization.26 Overall satisfaction was correlated most strongly with ease of use, productivity, or impact on patient care in the case of one order entry system.16 In this study, dissatisfaction was strongly correlated with perceptions of a negative impact of the system on patient care. In answer to the open-ended questions, more than half of the respondents indicated that the most positive aspect of the system was remote access. Nurses, however, considered the legibility of orders the most positive aspect. Slow response times and too many screens or steps to complete order entry were important drawbacks for both nurses and physicians. Systems were withdrawn predominantly because of user resistance.32,45,46

Individual Impact

Fifteen authors evaluated the individual impact on users, using many different attributes. Five attributes, in ten studies, related to changed work practices, varying from a change in frequency of documentation to shifts in responsibilities for certain tasks. Immediate benefits of system use, changed documentation habits, and information use in daily practice were the other main aspects. Spontaneous adaptations of documentation habits were investigated by analyses of the location and time of day of documentation.23,34,37,40 Hammond23 reported that nurses started entering data when an event occurred rather than at the end of each shift, as they had done with the paper record. Others reported that the terminal at a patient's bedside was used only to enter specific data, such as medication. Other relevant information was entered elsewhere, because the patient and/or his or her family distracted the nurse too much.34,37 As a consequence, bedside documentation showed no effect on completeness and timeliness of data.34

In contrast to this voluntary change in documentation habits, in four studies the system was reported to force users to change their work practices.32,41,45,46 This led to problems with the acceptance of the systems. Only one of these systems survived, after adaptations.41 Two studies showed that those who perceived a higher workload judged a shift in responsibilities negatively.41,46

Another aspect of individual impact is the ability to adequately use the information in daily work, which was evaluated by several authors.26,33,40–42,45,46 The single study that assessed information recall reported no difference between manual and automated record keeping in an ICU setting.33 Improved awareness of information and more accurate interpretation of data were reported by users of a computerized patient record (CPR) and a bedside nursing documentation system.26,42 Users cited more comprehensive records due to better documenting and the integration of images and written notes as examples of the improvement. Information overload due to the design of the system negatively influenced the awareness of available information by users of one computer-based patient record.45

Organizational Impact

Thirteen authors evaluated organizational impact by assessing communication/collaboration with other disciplines, direct or indirect impact on patient care, and costs. An improvement in the communication between professionals or departments was reported in two studies.26,42 Users perceived that information systems reduced the number of phone calls to request tests/examinations and their results.40,42 In one study, however, the actual number of phone calls did not decrease.40

Seven studies assessed the impact on patient care.14,21,22,28,30,33,40,42 In two studies, time saved from documenting increased time spent on patient care.22,36 In one study the physicians saved time documenting, but lost time due to technical difficulties with the information system or coupled systems.33 According to users, ready access to information and a reduced number of repeated examinations had a positive impact on patient care.42 Furthermore, rapid availability of test results was perceived to have a positive impact as well.40,42 Other POEs were shown to improve correct documentation of orders.28,30,40

A third aspect of organizational impact related to costs. They were measured by timesavings, reduced overtime, savings in personnel, or reduction in the number of tests ordered.22–24,26,37,40 One study observed considerable timesavings due to more efficient work routines24; others reported a reduction in the number of redundant tests.25,40,42

Other Relevant Issues

Primarily in evaluations of failed initiatives, we identified some attributes that could not be assigned to one of the six success factors. They fell into three categories of contingent factors: system development, implementation process, and culture and characteristics of the organization. Table 4 shows the attributes of these contingent factors and in which studies they were identified.

Table 4.

Attributes of Different Contingent Factors

System Development Attributes: User involvement (41,45,46); Redesign of work practices (46); Reconstruction of content/format (45); Technical limitations (44,45).

Implementation Attributes: Communication (frequency, two-way) (18,41,44–46); Training (20,22,24,28,40); Priorities chosen (45,46); Technical support (20,24); User involvement (45); Support and maintenance (19,20,24,43).

Organizational Aspects Attributes: Organizational culture, comprising control and decision-making (18,40,41,44,46), management support (18,43,44,46), professional values (18,41,44), and collaboration/communication (41); Champions (18); Rewards (41).

System Development

Key decisions during the system development of two systems were evaluated by independent researchers.41,44–46 Both systems were withdrawn, and the failure was partly explained by the choices made during development regarding technology, the extent of user involvement, the intended redesign of work practices, and the redesign of the record format. In these studies data were collected with interviews and questionnaires. In one study, the choice of a touch screen dictated menu-driven input, which turned data capture into a structured questionnaire.45 This made the system merely a data collection tool that did not fit the documentation practices of physicians. In addition, the developers approached system development in a logical fashion and started with the first data collected in a patient's trajectory. Although these were the first data to collect, entering this information in an information system was cumbersome and offered no benefits. Thus, as a result of insufficient communication and insufficient user involvement, the wrong priorities were chosen.

Strict interpretation of rules for work practices made work difficult.41 For example, a POE system required the signing of verbal orders. When the number of unsigned orders gradually increased, management decided that new orders could be entered only after the unsigned ones had been removed. This requirement was eliminated because of resistance from the residents, whose workload had increased drastically. In other studies as well, required alterations in established work practices provoked resistance41 or led to an increase in time spent on documentation.45

Implementation Process

The implementation process was evaluated in nine articles.18,20,22,24,28,40,41,45,46 Communication, education, and technical support were the main factors addressed. Communication was included in five studies.18,41,44–46 Education was only described in terms of hours of education and the organization of technical support, but not analyzed.20,22,24,28,40 Insufficient two-way communication—for example, about the progress of the implementation or the expected benefits of the system—had a negative influence on the adoption of information systems.41,44,46

Organizational Culture and Characteristics

Different aspects of the organizational culture were evaluated in several studies.18,41,44–46 In the study by Ash, frequent communication was associated with better diffusion of CPRs.18 Visible management support proved essential, as did clear lines of authority. In two studies, the persons responsible for implementing the information system did not have decision-making authority.41,44 This disconnection between the organizational structure and the information system implementation strategy complicated the implementation significantly. Another example of a serious misunderstanding of organizational practices was reported by Lehoux et al.46 Physicians asked nurses to warn them when vital signs exceeded certain values, whereas the system developers, in accordance with the formal responsibilities, expected the physicians to check vital signs themselves.46

Limitations of This Review

We found only 33 studies meeting our criteria published from 1991 to May 2001. The most important limitations of our search were the ten-year period, the restriction to inpatient patient care information systems, the exclusion of systems with decision support, and the requirement of data entry and retrieval by health care providers themselves. Perhaps a longer time period or the inclusion of outpatient systems would have yielded additional useful data.

Some of the choices that we made when categorizing the attributes of the evaluation studies are debatable. Perceived ease of use, for example, was identified as a system quality attribute, but it might also be considered a user satisfaction attribute.

Discussion

What Is Success?

We did not find any explicit definition of success in the studies that we reviewed. The value of a computer-based information system was often measured against that of the familiar paper-based system, with the paper record serving as the gold standard despite its known limitations. None of the studies clearly identified the primary stakeholders for the system, although the definition of success can vary by stakeholder group. For example, if the main stakeholder is the manager who wants to cut costs, a documentation system is unsuccessful if staff spends more time entering data. The physician, in contrast, may regard the same bedside nursing documentation system as very successful, because vital signs are readily available and legible without the need to consult a nurse or the paper records. The timing of an evaluation also influences its ability to assess success or failure. Some technical criteria can be examined before implementation, but a useful assessment of organizational impact can occur only after a reasonable period of daily use. Our review showed that many evaluations finished within half a year after the introduction of the information system, probably too soon to measure all organizational impacts.

Definitions of success also fluctuate over time.47 A system that is successful today may be considered a failure in a decade because of technical limitations or changed demands and expectations. To compensate for these factors, a good evaluation should include multiple, carefully selected periods of data collection and should represent all stakeholders' points of view. The organization in which the information system functions has a major impact on success. Repeatedly, researchers have stressed the importance of incorporating contextual information in an evaluation of an information system.48–53 Contextual information and organizational impact were included in many evaluations. Organizational culture, such as professional values, however, was seldom taken into account.

In contrast to successes, failures were clearly identified by the authors. We found only a few references concerning failed initiatives,43–46 but those that we found were rich and detailed descriptions of the design, implementation, and effects of the information systems, with careful consideration of the factors that contributed to the failure.

What to Evaluate?

Despite the ever-increasing number of health care information systems, published evaluation studies are scarce. To our knowledge, no evaluation framework has been proposed specifically for patient care information systems. Our review showed that the dimensions of success defined by DeLone and McLean for management information systems are applicable to inpatient patient care information systems.1 The literature that we examined included a wide range of attributes for evaluating such systems (see Table 3). Further research is needed to determine which attributes are most useful in measuring success and whether different attributes should be assessed for different types of patient care information systems. It is unlikely that any single factor is decisive in system success. A multidimensional construct, as proposed by DeLone and McLean, is therefore more appropriate for system evaluations. Our results indicate that the framework presented here is useful in evaluating patient care information systems and should be explored in future evaluation research, with modifications to include contingent factors such as user involvement during system development and implementation, and organizational culture. These factors helped to explain the failure of patient care information systems. It is likely that they also play a role in the success of information systems,54,55 although there was little mention of them in published evaluations of systems that did not fail.

How to Evaluate?

Most of the evaluations of information systems in health care that we reviewed were descriptive or correlational studies. We found only two studies with a comparative design using simultaneous randomized controls. Such a comparative design is generally considered to be best, but it requires a clear definition and complete elaboration of variables beforehand. Moreover, it is often not possible to create two or more independent conditions in evaluation studies of information systems. In case studies, a more flexible approach allows the research questions to evolve and can also illuminate the context of the information system and the interactions between user and system.

Both approaches can be valuable, depending on the objective of the evaluation and the stakeholder(s). Kaplan10 suggested the following methodological guidelines for evaluations: (1) focus on a variety of concerns, (2) choose a longitudinal study design, (3) use multiple methods, (4) choose a study design that can be adapted to the study findings whenever necessary, and (5) be both formative and summative (that is, to use the results of an evaluation to further develop the information system). Although somewhat general and certainly ambitious, these guidelines are valuable in designing evaluation studies. Our review showed that evaluations assessing several attributes of different factors were more informative. Formative evaluations—aiming at improving the information systems during development or implementation—were hard to find in the reviewed literature. Most evaluations concerned systems in use and were summative evaluations.

A thorough evaluation should include all appropriate success factors, but the moment to measure each varies from factor to factor. An evaluation should start before the development and should have no fixed end. One could think of a kind of post-marketing surveillance as is usual in medication registration procedures. The integration of qualitative (observations, interviews) and quantitative (questionnaires, work sampling) data collection methods provides an opportunity to improve the quality of the results through triangulation.56 In evaluations of information systems that employ multiple methods the data from different sources complement each other to provide a more complete picture.

References

1. DeLone WH, McLean ER. Information systems success: The quest for the dependent variable. Inf Sys Res 1992;3:60–95.
2. Banta HD. Embracing or rejecting innovations: clinical diffusion of health care technology. In Anderson JG, Jay SJ (eds). Use and Impact of Computers in Clinical Medicine. New York, Springer-Verlag, 1987.
3. Fineberg H, Hiatt H. Evaluation of medical practices. The case for technology assessment. N Engl J Med 1979;301:1086–91.
4. Schoenbaum SC, Barnett GO. Automated ambulatory medical records systems. An orphan technology. Int J Technol Assess Health Care 1992;8:598–609.
5. Sittig DF, Stead WW. Computer-based physician order entry: the state of the art. JAMIA 1994;1:108–23.
6. Wyatt JC. Clinical data systems. Part 3: Development and evaluation. Lancet 1994;344:1682–8.
7. van der Loo RP, Gennip EMSJ, Bakker AR, Hasman A, Rutten FFH. Evaluation of automated information systems in health care: An approach to classify evaluative studies. Comput Methods Programs Biomed 1995;48:45–52.
8. Anderson JG, Aydin CE. Evaluating the impact of health care information systems. Int J Technol Assess Health Care 1997;13:380–93.
9. Grémy F, Degoulet P. Assessment of health information technology: which questions for which systems? Proposal for a taxonomy. Med Inform 1993;18:185–93.
10. Kaplan B. Addressing organizational issues into the evaluation of medical systems. JAMIA 1997;4:94–101.
11. Goodman EC. A methodology for the 'user-sensitive implementation' of information systems in the pharmaceutical industry: a case study. Int J Inform Manag 1998;18:121–38.
12. Friedman CP, Wyatt JC. Evaluation Methods in Medical Informatics. New York, Springer-Verlag, 1997.
13. Yin RK. Applications of Case Study Research. Newbury Park, CA, Sage Publications, 1993.
14. Abenstein J, DeVos C, Abel M, Tarhan S. Eight years' experience with automated anesthesia record keeping: Lessons learned—new directions taken. Int J Clin Monit Comput 1992;9:117–29.
15. Hammond J, Johnson H, Varas R, Ward C. A qualitative comparison of paper flowsheets vs. a computer-based clinical information system. Chest 1991;99:155–57.
16. Lee F, Teich JM, Spurr CD, Bates DW. Implementation of physician order entry: User satisfaction and self-reported usage patterns. JAMIA 1996;3:42–55.
17. Markert DJ, Haney PJ, Allman RM. Effect of computerized requisition of radiology examinations on the transmission of clinical information. Acad Radiol 1997;4:154–56.
18. Ash J. Organizational factors that influence information technology diffusion in academic health sciences centers. JAMIA 1997;4:102–11.
19. Meyer KE, Altamore R, Chapko M, Miner M, et al. The need for a thoughtful deployment strategy: Evaluating clinicians' perceptions of critical deployment issues. Medinfo 1998;2:854–58.
20. Urschitz M, Lorenz S, Unterasinger L, Metnitz P, Preyer K, Popow C. Three years experience with a patient data management system at a neonatal intensive care unit. J Clin Monit 1998;14:119–25.
21. Weiner M, Gress T, Thiemann DR, Jenckes M, et al. Contrasting views of physicians and nurses about an inpatient computer-based provider order-entry system. JAMIA 1999;6:234–44.
22. Allan J, Englebright J. Patient-centered documentation: An effective and efficient use of clinical information systems. J Nurs Admin 2000;30:90–95.
23. Hammond J, Johnson H, Ward C, Varas R, Dembicki R, Marcial E. Clinical evaluation of a computer-based patient monitoring and data management system. Heart Lung 1991;20:119–24.
24. Harrison GS. The Winchester experience with the TDS hospital information system. Br J Urol 1991;67:532–35.
25. Kahl K, Ivancin L, Fuhrmann M. Automated nursing documentation system provides a favorable return on investment. J Nurs Admin 1991;21:44–51.
26. Dennis K, Sweeney P, Macdonald L, Morse N. Point of care technology: impact on people and paperwork. Nurs Econ 1993;11:229–37.
27. Murphy CA, Maynard M, Morgan G. Pretest and post-test attitudes of nursing personnel toward a patient care information system. Comput Nurs 1994;12:239–44.
28. Sulmasy DP, Marx ES. A computerized system for entering orders to limit treatment: Implementation and evaluation. J Clin Ethics 1997;8:258–63.
29. Nielsen PE, Thomson BA, Jackson RB, Kosman K, Kiley KC. Standard Obstetric Record Charting System: Evaluation of a new electronic medical record. Obstet Gynecol 2000;96:1003–08.
30. Bates D, Teich J, Lee J, Seger D, et al. The impact of computerized physician order entry on medication error prevention. JAMIA 1999;6:313–21.
31. Larrabee JH, Boldreghini S, Elder-Sorrells K, Turner ZM, et al. Evaluation of documentation before and after implementation of a nursing information system in an acute care hospital. Comput Nurs 2001;19:56–65.
32. Bürkle T, Kuch R, Prokosch H-U, Dudeck J. Stepwise evaluation of information systems in an university hospital. Meth Inform Med 1999;38:9–15.
33. Allard J, Dzwonczyk R, Yablok D, Block J-F, McDonald J. Effect of automatic record keeping on vigilance and record keeping time. Br J Anaesth 1995;74:619–26.
34. Marr PB, Duthie E, Glassman KS, Janovas DM, et al. Bedside terminals and quality of nursing documentation. Comput Nurs 1993;11:176–82.
35. Minda S, Brundage DJ. Time differences in handwritten and computer documentation of nursing assessment. Comput Nurs 1994;12:277–79.
36. Marasovic C, Kenney C, Elliott D, Sindhusake D. A comparison of nursing activities associated with manual and automated documentation in an Australian intensive care unit. Comput Nurs 1997;15:205–11.
37. Pabst MK, Scherubel JC, Minnick AF. The impact of computerized documentation on nurses' use of time. Comput Nurs 1996;14:25–30.
38. Wang X, Gardner RM, Seager PR. Integrating computerized anesthesia charting into a hospital information system. Int J Clin Monit Comput 1995;12:61–70.
39. Ammenwerth E, Eichstadter R, Haux R, Pohl U, Rebel S, Ziegler S. A randomized evaluation of a computer-based nursing documentation system. Meth Inform Med 2001;40:61–68.
40. Ostbye T, Moen A, Erikssen G, Hurlen P. Introducing a module for laboratory test order entry and reporting of results at a hospital ward: an evaluation study using a multi-method approach. J Med Sys 1997;21:107–17.
41. Massaro TA. Introducing physician order entry at a major academic medical center. I: Impact on organizational culture and behavior. Acad Med 1993;68:20–25.
42. Kaplan B, Lundsgaarde H. Toward an evaluation of an integrated clinical imaging system: Identifying clinical benefits. Meth Inform Med 1996;35:221–29.
43. Southon FCG, Sauer C, Dampney CNG. Information technology in complex health care services: Organizational impediments to successful technology transfer and diffusion. JAMIA 1997;4:112–24.
44. Southon G, Sauer C, Dampney K. Lessons from a failed information systems initiative: Issues for complex organisations. Int J Med Inform 1999;55:33–46.
45. Sicotte C, Denis J, Lehoux P, Champagne F. The computer-based patient record challenges towards timeless and spaceless medical practice. J Med Sys 1998;22:237–56.
46. Lehoux P, Sicotte C, Denis J. Assessment of a computerized medical record system: disclosing scripts of use. Eval Progr Plan 1999;22:439–53.
47. Berg M. Implementing information systems in health care organizations: Myths and challenges. Int J Med Inform 2001;64:143–56.
48. Kaplan B. Evaluating informatics applications—some alternative approaches: Theory, social interactionism, and call for methodological pluralism. Int J Med Inform 2001;64:39–56.
49. Forsythe DE, Buchanan BG. Broadening our approach to evaluating medical information systems. Proc 15th Annu Symp Comput Appl Med Care 1991:8–12.
50. Kaplan B. Culture counts: How institutional values affect computer use. MD Comput 2000:23–26.
51. Heathfield H, Pitty D, Hanka R. Evaluating information technology in health care: barriers and challenges. BMJ 1998;316:1959–61.
52. Heathfield HA, Peel V, Hudson P, Kay S, et al. Evaluating large scale information systems: From practice towards theory. Proceedings of the 21st AMIA Annual Fall Symposium 1997:116–20.
53. Anderson JD. Increasing the acceptance of clinical information systems. MD Comput 1999:62–65.
54. Kaplan B, Shaw NT. People, organizational, and social issues: Evaluation as an exemplar. In Haux R, Kulikowski CA (eds). Yearbook of Medical Informatics: Medical Imaging Informatics. Stuttgart, Schattauer, 2002, pp 91–102.
55. Shaw NT. CHEATS: A generic information communication technology (ICT) evaluation framework. Comput Biol Med 2002;32:209–20.
56. Kaplan B. Combining qualitative and quantitative methods in information systems research: A case study. MIS Q 1988:571–86.
57. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q 1989;13:319–39.
