Journal of the American Medical Informatics Association (JAMIA). 1997 Mar–Apr;4(2):94–101. doi:10.1136/jamia.1997.0040094

Addressing Organizational Issues into the Evaluation of Medical Systems

Bonnie Kaplan 1
PMCID: PMC61498  PMID: 9067875

Abstract

New system design and evaluation methodologies are being developed to address social, organizational, political, and other non-technological issues in medical informatics. This paper describes a social interactionist framework for researching these kinds of organizational issues, based on research within medical informatics and other disciplines over the past 20 years. It discusses how effective evaluation strategies may be undertaken to address organizational issues concerning computer information systems in medicine and health care. The paper begins with a theoretical framework for evaluation. It then describes the 4Cs of evaluation: communication, care, control, and context. Five methodological guidelines are given for conducting comprehensive evaluations that address these 4Cs. An example of an evaluation research design that fits the guidelines and was used in an evaluation of an on-line clinical imaging system is discussed. Results of the evaluation study illustrate how this approach addresses organizational concerns and the 4Cs.


Ever since computers first were used in medicine and health care, many individuals predicted that profound changes would result from their use, and lamented that these changes were slow in coming. They analyzed reasons for this so-called “lag” in medical computing. At least in the United States, the lag initially was attributed to three categories of problems: insufficient technology, funding, or knowledge; barriers inherent in medicine itself; and physician resistance. Starting in the 1970s, managerial issues joined the other areas of concern.1 This managerial thrust also has been evident in empirical studies pertaining to systems evaluation, adoption, and use. Since the 1970s, researchers have studied medical and hospital information systems, bringing to bear the perspectives of fields such as psychology, anthropology, sociology, history, and information systems. An excellent sampling of this work appeared in a collection edited by Anderson and Jay.2 A variety of evaluation studies were undertaken, and different medical computer applications were scrutinized by a wide range of methods from a number of disciplinary perspectives. The growing interest in organizational issues, and issues of adoption and use, is reflected in new books on evaluation3,4 and on organizational impacts5,6; and in special issues of publications such as this one or the recent SIGBIO Newsletter on situated practice and computer systems' impacts on work.7

In medical informatics, these trends are evident in the increasing recognition of social, organizational, political, and other non-technical factors surrounding an information systems project. New design methodologies are being developed to address these concerns on the assumption that system design should take into account actual work routines.7,8,9,10,11,12 There also is increasing interest in evaluating computer information systems in light of broader organizational concerns and in examining their organizational impact.3,4,13,14,15 Previously, information systems were most commonly evaluated according to outcomes related to selected technical or economic factors, while social, cultural, political, or work life issues were given less consideration.16,17,18 Newer evaluation approaches take what is known as a social interactionist perspective by considering relationships among system characteristics, individual characteristics, and organizational characteristics, and the effects each has on the others.13,19 Such approaches encourage evaluating an information system's impact upon an organization while also evaluating the organization's impacts on the system. This paper describes a social interactionist framework for researching these kinds of organizational issues, based on research over the past twenty years.

Framework

Theories and empirical results depend upon whether an evaluation study focuses on a computer information system, on users, on the organization in which the system and users are situated, or on interrelationships and interactions among these. Traditionally, medical information systems evaluations have been conducted according to an experimental or clinical trials model of research. These evaluations focus on technical, economic, or other factors believed to affect systems' impacts. Some areas of systems evaluation are well-recognized in the medical informatics literature: (1) technical and systems features that affect systems use, (2) cost-benefit analysis, (3) user acceptance, and (4) patient outcomes. In these evaluation studies, the factors believed to cause impacts were identified and the impacts measured. This kind of research design takes a variance approach20; i.e., the focus of study is on how a variable changes as a result of some intervention, in this case, the information system.

Evaluations that focus on selected technical, economic, user, or medical criteria may not be sufficient to improve outcomes and realize the benefits information systems can offer. Understanding the processes that contribute to impacts and outcomes is needed as well. In addition to assessing what differences medical computer information systems make in patient care, education, medical research, and hospital management, evaluations can address why these systems make those differences, or why systems may not have the impacts expected of them. For example, an interactionist perspective can explain why the same system, used by the same kinds of users, may have different impacts in different settings. The TDS system, for example, is one of the most successful hospital information systems, yet also one against which medical staff have protested vociferously.21,22

To measure effectiveness or impacts of a medical information system, research should be designed to identify, collect, analyze, and interpret data to form a coherent picture of processes that resulted in the effects or impacts.19 Interactionist studies often take a process approach; i.e., they examine processes that emerge from complex, indeterminate, often unpredictable, interactions and interrelationships.20

Models of Change

Evaluating these processes requires understanding how organizational change occurs; therefore, three classes of models of organizational change and information technology assessment are described: Research, Development, and Diffusion; Problem Solving; and Social Interaction. These models can guide information systems leaders and researchers to the most appropriate approach for studying a specific information system implementation.23,24

In Research, Development, and Diffusion models, experts develop a system they consider beneficial, and users either adopt it or are considered resistant to it. Problem Solving models are more collaborative, with experts and clients working together to develop information systems solutions to what they perceive as problems. Problem Solving models are based on variations of Lewin's theories of change, such as the Kolb-Frohman model.25 A third class of models is Social Interaction models. These models are based on Rogers' classic diffusion theory26 and thus emphasize how an innovation, such as an information system, is communicated through social channels over time. Reviews of additional models are given elsewhere.27,28,29

Related to these models of change is a set of models about what causes resistance to information systems. Markus, for example, presents three theories of user resistance: user-centered, system-centered, and interactional.30 System-centered theories explain resistance by factors inherent in a system itself, such as slow response time, poor screen design, and other factors the Institute of Medicine classified as technological barriers to computer-based patient records.31 User-centered theories, in contrast, attribute resistance to factors inherent in users, such as their lack of knowledge or their reluctance to change. According to interactional theories, resistance arises from interrelationships and interactions among users, a system, and the organizational context in which that system is to be used. Interactional factors may be the most difficult to study. They include the kinds of considerations discussed by Kaplan and by Brenner and Logan.32,33 On the individual level, clinical values are one important example; on the organizational level, communication and control within an organizational unit can be crucial.

These models of organizational change and innovation emphasize the importance of process and communication and the relevance of changes in an individual's status and work.23 Rogers, in his review of a considerable body of knowledge on the diffusion of innovation, discusses the communication that occurs among adopters of innovations.26 This kind of communication occurs for medical information systems.34 Physicians influence each other in their attitudes towards such systems, and this affects who uses a system and how they use it. Rogers also discusses the importance of how potential users assess characteristics of an innovation.26 Kaplan has studied the ways in which clinicians' assessments of these characteristics are influenced by their professional values and roles.32 The long history of “physician resistance” can be explained, in part, in this way. In contrast, as Kaplan points out, there are many complex applications of computers in medicine that physicians have enthusiastically adopted.1,32,35,36 Brenner and Logan also indicate that clinicians have adopted other medical innovations with characteristics similar to those of medical information systems more enthusiastically than they have adopted medical information systems themselves. Brenner and Logan therefore look to organizational factors, claiming that “the root cause of non-diffusion would appear to be more related to the interaction between MISs and professional conventions” such as personal autonomy and doctor-patient interaction,33 which Kaplan identified as primary professional values and roles in the adoption and use of medical information systems.32 Brenner and Logan's emphasis on interaction, then, is a key consideration in understanding reactions attributed to physician resistance.

Models of Evaluation

Drawing on these interactionist models, Anderson and Aydin, and Anderson, Aydin, and Kaplan provide a framework to guide evaluations of how medical information systems affect, and are affected by, health care organizations.13,19 Their conceptual model is based on the understanding that the interaction between information technology and the organization in which it is implemented guides the design and implementation of an information system, the assessment of its impacts, and the management of the change processes that result from its implementation. Their framework also involves three models.

In one of these models, an information system is seen as an external force. Evaluations based on this model treat technical features of an information technology as causing its impacts, as in system-centered theory. As in the Research, Development, and Diffusion model, participants who do not use a new computer information system are viewed as passive, resistant, or dysfunctional. Because organizational and technology characteristics are treated as invariant rather than as changing over time, these evaluation studies fail to include characteristics of the organizational environment and social interaction that may have important effects on outcomes.18 Such studies frequently are undertaken in a laboratory using controlled clinical trials, and there may be little or no investigation of how systems fit into the daily work in the organization into which they will be introduced.9,37

In another of these models, an information system is seen as determined by organizational needs in that it is viewed as meeting the needs of managers and clinicians. As in the Problem Solving model, systems are thought to be developed in a rational manner, with needs identified and problems solved. Organizational members are thought to have control over the technical aspects of a system and the consequences of its implementation. This model attributes the effectiveness and impacts of information technology to decisions made by managers, developers, and implementers. In this respect, as in user-centered theory, characteristics of decision-makers within an organization determine results. End users are considered passive, resistant, or dysfunctional if they do not react to a system as managers or developers intended.30,38

In the third model, social interactions are considered determinants of system use. This model views uses and impacts of information technology as resulting from complex social interactions within an organization.17,20,39 Evaluation in this model requires understanding dynamic organizational social and political processes as they occur over time. Use and impacts of information technology are thought to be affected by communication over time among individuals who are members of social systems. Users are considered active in that they generally change or modify information systems during design and implementation so that the technology better fits specific organizational, professional, or personal needs.26 In this view, the way technology is designed, implemented, and used in a particular organizational setting depends on individuals' and groups' objectives, preferences, and work demands.

Evaluation Questions

Anderson, Aydin, and Kaplan provide examples of how an interactionist framework has been used to measure the adoption, use, effectiveness, and impacts of computer information systems in medicine.13 In one study, physicians' positions in a hospital's referral and consultation network affected their adoption and use of a medical information system.40,41,42 Another study explored the impact of an interactive health appraisal system on interaction between patients and clinicians, among clinicians, and between clinicians and administrators.43 In a third study, users' attitudes toward a laboratory information system were influenced by the relationship between how laboratory technologists viewed the system and how they viewed their work.44,45,46,47,48

As these studies suggest, evaluation questions within an interactionist framework concern issues of communication, care, control, and context. As formulated by Anderson and Aydin,14 these questions are, respectively:

  • What are the anticipated long-term impacts on the ways departments linked by computer interact with each other?

  • What are the anticipated long-term effects on the delivery of medical care?

  • Will system implementation have an impact on control in the organization?

  • To what extent do medical information systems have impacts that depend on the practice setting in which they are implemented?

Issues of communication, care, control, and context are the 4Cs of evaluation.49 They are interrelated in ways that affect what happens when an information system is introduced. A laboratory information system, for example, may improve communication between laboratories and clinical units. However, even technologists who recognize this improvement may consider the system incompatible with their work patterns. If laboratory management does not recognize that technologists think their jobs are changing, management cannot plan for the ways in which technologists react to these changes. This laboratory information system evaluation illustrates how changes in communication directly affected changes in work. Such changes also can be used as an excuse for playing out existing political agendas, thereby raising control issues. Having other departments claim inadequacies in the laboratory information system and argue that they should have their own laboratories provides one example of how existing organizational control issues may affect, and be affected by, a new information system.

Methodological Guidelines for Evaluation

An evaluator needs to be sensitive to the 4Cs—communication, care, control, and context—when collecting and analyzing data, as direct questions concerning them often cannot be asked. Further, it is difficult to study processes over time, as an interactionist evaluation framework would lead one to do. To address these difficulties, Kaplan suggested five methodological guidelines for developing a comprehensive evaluation plan.15 The evaluation should: focus on a variety of technical, economic, and organizational concerns; use multiple methods; be modifiable; be longitudinal; and be formative as well as summative.

Focus on a Variety of Concerns

An evaluation could study technical, economic, and organizational concerns. These can be examined during the entire process of information system implementation, focusing on system use, organizational context, and work practices in multiple specialties or functional areas.

Use Multiple Methods

Evaluations often can benefit from multiple research methods. Numerous approaches are discussed in the evaluation literature; their application to medical informatics is presented by Anderson, Aydin, and Jay3 and by Friedman and Wyatt.4 These include cost/benefit analyses, critical incident logs, document analysis, experiments, interviews, observations, simulations, and surveys. Some of these may use either qualitative or quantitative methods. For example, surveys can include scaled-response questions as well as open-ended questions, thereby collecting both quantitative and qualitative data. The open-ended questions may be analyzed quantitatively by counting various attributes contained in the data, as in content analysis, or, as is more common in qualitative data analysis, by seeking patterns and themes. A simple sketch of such a combined analysis appears below.
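
To make this distinction concrete, the following minimal Python sketch is illustrative only: the respondents, ratings, and attribute labels are hypothetical, not data from any study cited here. It summarizes a scaled-response question quantitatively while tallying hand-coded attributes from open-ended comments, as in content analysis:

```python
from collections import Counter

# Hypothetical hand-coded survey data: each respondent gives a scaled
# rating (1-5) plus an open-ended comment that a coder has tagged with
# one or more attribute labels (e.g., "communication", "control").
responses = [
    {"rating": 4, "codes": ["communication", "care"]},
    {"rating": 2, "codes": ["control"]},
    {"rating": 5, "codes": ["communication"]},
]

# Quantitative summary of the scaled-response question.
ratings = [r["rating"] for r in responses]
mean_rating = sum(ratings) / len(ratings)

# Content analysis: count how often each coded attribute appears
# across the open-ended responses.
attribute_counts = Counter(code for r in responses for code in r["codes"])

print(f"Mean rating: {mean_rating:.2f}")
for attribute, count in attribute_counts.most_common():
    print(f"{attribute}: {count}")
```

Seeking patterns and themes, by contrast, would require interpretive work on the comments themselves rather than counts of this kind.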

Using a rich variety of evaluation research methods provides several advantages.3,46,50,51 A combination of methods to evaluate medical information systems has been recommended for two reasons. The first is the diverse and diffuse nature of information systems' effects. The second reason is to combine results in a way that maximizes understanding of causal links52 by collecting a variety of data, each set of which might provide partial information needed for a complete evaluation.53 Combining qualitative with quantitative methods allows for a focus on the complex web of technological, economic, organizational, and behavioral issues.17,51 Putting together data collected by a variety of methods from a variety of sources strengthens the robustness of research results through a process known as “triangulation.” Lastly, a multiplicity of methods can help ensure that issues and concerns that were not included in the preliminary design can be integrated into an evaluation later on.

Be Modifiable

Some important issues and concerns might arise during a study that were not, or could not have been, anticipated a priori. Therefore, there are benefits to designing an evaluation that can be modified. Such a plan allows for adding new phases, methods, or research questions when important new issues and concerns arise or when new knowledge becomes available during the course of an evaluation. Periodically reviewing and modifying an evaluation plan in light of new information can help keep the plan relevant to key evaluation concerns and fit it to the changing natures of the information system, the organization, and the users.

Be Longitudinal

Longitudinal study designs capture changes over time. A study may be designed either to document what has changed, as in a series of snapshots, or, like a motion picture, to capture change processes as they occur. An interactionist perspective suggests favoring the latter.26

Be Formative and Summative

Formative evaluation is aimed at improving a system while it is under development or being implemented; summative evaluation assesses a system that already is up and running. Formative evaluation may identify potential problems as they are forming, thereby providing opportunities to improve a system as it develops.54

An Evaluation Illustration: Clinical Imaging Systems

This example of evaluating a clinical imaging system illustrates how the evaluation framework, questions, and methodological guidelines can be used. First, Kaplan's suggested plan, or research design, for comprehensively evaluating complex computer information systems is given and then applied to a clinical imaging system that provides on-line access to patient records that include images of clinical conditions.15 The evaluation was conducted at the development site. Details of the system, plan, methods, and findings are described elsewhere.55,56,57,58 Here the phases of the plan are summarized in general terms so they can be adapted to other evaluations.

The Plan

Phase 1: Identify Evaluation Issues, Questions, and Concerns

As an initial step, identify the evaluation research questions. Two primary means are useful: literature synthesis59 and interviews and observations. These may be done concurrently so that findings from one activity influence how the others are conducted.

For this study, initial data collection and analysis took the form of interviews and observations. During evaluation planning meetings and interviews with developers, the focus of the evaluation was determined to be the benefits of the imaging system, particularly clinical benefits. Because the system was intended primarily for clinical use, the first evaluation phase focused on clinicians who used the system in the range of clinical, educational, and research activities at this medical center. During a 1-week period, 22 individuals from 10 services, plus the head and staff of the imaging system development project, were interviewed. The researchers also observed the system in use during teaching rounds and clinical procedures.

During these activities, clinical staff identified benefits they believed occurred in all the areas of an academic medical center: patient care, education, research, and administration.58 This initial list of benefits can be used to identify and measure key processes. The next steps in the evaluation would involve documenting whether and how these benefits occur in different contextual settings, and seeking additional data that might have been overlooked in initial phases.

Phase 2: Document System Use

Two approaches in addition to observation are useful in documenting system use. First, individuals can be asked to document or recall their own system use through critical incident reporting. Second, user profiles can be generated.

Critical incident reporting is a useful way of documenting changes attributed to the new system.60 Key project staff members and users may be asked to keep records of incidents that seem especially significant. An alternative approach would be to document critical incidents through interviews and observations. At any time during an evaluation, individuals can be asked to recall specific critical incidents and discuss them. Similarly, observers could note critical incidents when conducting observations.
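
As an illustration only—the record fields and the sample incident below are hypothetical, not taken from the study's actual instruments—a critical incident log can be as simple as a small structured record capturing who reported the incident, where it occurred, and what change the reporter attributes to the system:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical structure for a critical incident record; the fields are
# illustrative, not those used in the imaging system evaluation.
@dataclass
class CriticalIncident:
    reported_on: date
    reporter_role: str       # e.g., "resident", "project staff member"
    setting: str             # e.g., "teaching rounds", "outpatient clinic"
    description: str         # what happened and why it seemed significant
    attributed_change: str   # the change the reporter attributes to the system

# A running log to which project staff and users append incidents.
incident_log = []
incident_log.append(CriticalIncident(
    reported_on=date(1995, 3, 14),
    reporter_role="resident",
    setting="teaching rounds",
    description="The whole team viewed the same wound image at once.",
    attributed_change="Less reliance on written reports during rounds.",
))
```

Keeping the records structured in this way allows incidents gathered from diaries, interviews, and observations to be pooled and compared later in the evaluation.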

To develop user profiles, baseline data can be collected before implementation. As system use expands, further data can be collected. Baseline data may include such demographic data as each user's specialty or functional area, professional or organizational status, and experience with various computer systems. Data concerning use of system functions also can be collected and correlated with demographic data to develop user profiles. Such data can be collected unobtrusively (i.e., without active involvement of system users) by capturing them on-line as part of routine system use. Data also can be collected by survey questionnaire.61 In this study, project development staff had collected and analyzed measures of system use before the evaluation plan was developed. These data provide baselines for later comparison and document usage by clinical service.
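
A minimal sketch of this correlation step follows; the user identifiers, demographic fields, and captured usage events are all hypothetical, not drawn from the actual system:

```python
from collections import defaultdict

# Hypothetical baseline demographics keyed by user ID, plus usage events
# captured unobtrusively by the system during routine use.
demographics = {
    "u01": {"specialty": "surgery", "status": "attending"},
    "u02": {"specialty": "dermatology", "status": "resident"},
    "u03": {"specialty": "surgery", "status": "resident"},
}
usage_events = [  # (user ID, system function used)
    ("u01", "view_image"),
    ("u01", "view_image"),
    ("u02", "view_image"),
    ("u03", "annotate_image"),
]

# Correlate usage with demographics: tally use of each system function
# by clinical specialty to build simple user profiles.
profiles = defaultdict(lambda: defaultdict(int))
for user_id, function in usage_events:
    specialty = demographics[user_id]["specialty"]
    profiles[specialty][function] += 1

for specialty, functions in sorted(profiles.items()):
    print(specialty, dict(functions))
```

The same tallies, repeated at intervals as system use expands, provide the baselines and follow-up comparisons by clinical service that the plan calls for.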

Phase 3: Evaluate System in Context

The imaging system was studied in different contexts with attention to different uses of images. Observations and interviews were done in different units and departments, in different wards and clinics, and at rounds. During observations, the individuals involved were interviewed as to what they were doing, why they were doing it, what effects it had, and what benefits were expected. Similar questions were asked during interviews separate from observations. Study participants also were requested to recall critical incidents or examples of system use.

An information system also may be studied in different institutions. For example, another imaging system at a different institution was studied subsequently. Results from the two studies were compared, thereby providing additional insight into each system, broader issues concerning imaging systems, and evaluation concerns.62

Phase 4: Present Results

The evaluation plan and initial results were presented at a meeting of project stakeholders. This meeting served several purposes. First, presenting the plan to stakeholders enabled them to make further decisions concerning both the system and system evaluation. Second, in line with the ethics of information systems evaluation, this meeting was one of several occasions to present the plan to interested parties for comment. In this way, study participants were informed so they could be willing contributors to the evaluation.63 Lastly, their responses helped validate initial findings and helped shape both the evaluation plan and system development efforts.

The 4Cs: Communication, Care, Control, and Context

In this study, physicians' perceptions of benefits of the imaging system suggest the close connection between changes in communication, changes in medical care, and issues of control. Physicians reported beneficial changes in communication, such as everyone at a conference being able to view the same image simultaneously instead of relying on written reports, and images being available whenever they were needed. Physicians thought these changes in communication improved patient care. However, physicians' enthusiasm for the imaging system itself raised control issues. Among these control issues were: the criteria for evaluating systems, what constitutes benefits, who considers them benefits, and who makes such decisions.

In this example, the importance of context is not so well illustrated. However, control issues related to differences in enthusiasm for the system between clinicians and administrators suggest that contextual issues are involved. When comparing this imaging system with another one, additional contextual issues arose that pertain to how physicians talk about images as compared to how they use them.62

Methodological Guidelines

This plan, therefore, resulted in an evaluation that addressed the 4Cs and, consequently, the interrelationships and interactions among users, the system, and the organizational context of developers, administrators, and departments. The evaluation plan also met the five methodological guidelines described above. It employed multiple methods to address a variety of evaluation questions and issues. It allowed for longitudinal evaluation because phases can be repeated at periodic intervals so as to capture data at different times. It provided feedback for formative evaluation because each phase resulted in a product or output for individuals to use when making decisions concerning the system. These outputs also can serve as input to other phases and to periodic refinement and validation of the evaluation plan itself, making the plan modifiable and helping ensure that the evaluation captures both anticipated and unanticipated concerns. The plan was modifiable, too, in that the evaluation was designed in phases: during initial phases, the plan was refined as new information became available through data collection and analysis, through discussion with individuals from various clinical services, and through further meetings with project staff. Further, the phases did not have to be performed sequentially; their order could be rearranged, and some phases could be conducted concurrently.

Summary

The imaging system evaluation example illustrates how an interactionist perspective may be useful in information system evaluation that takes account of organizational issues. As this example illustrates, the effort of designing, implementing, and using an information system involves numerous considerations and a series of processes that change the organization, the people, and the information system involved. The challenge in an information systems project is designing evaluations that capture the complexity of interactions, interrelationships, and inter-effects that occur during these processes. This paper describes a framework and an example for how effective evaluation strategies may be undertaken to address organizational issues concerning computer information systems in medicine and health care.

References

1. Kaplan B. The medical computing “lag”: perceptions of barriers to the application of computers to medicine. Int J Technol Assess Health Care. 1987;3:123-36.
2. Anderson JG, Jay SJ (eds). Use and Impact of Computers in Clinical Medicine. New York: Springer-Verlag, 1987.
3. Anderson JG, Aydin CE, Jay SJ (eds). Evaluating Health Care Information Systems: Methods and Applications. Thousand Oaks, CA: Sage Publications, 1994.
4. Friedman CP, Wyatt JC. Evaluation and Research Methods for Medical Informatics. New York: Springer-Verlag, 1996.
5. Lorenzi NM, Riley RT. Organizational Aspects of Health Informatics: Managing Technological Change. New York: Springer-Verlag, 1995.
6. Lorenzi NM, Riley RT, Ball MI, Douglas JV. Transforming Health Care Through Information: Case Studies. New York: Springer-Verlag, 1995.
7. Kaplan B. Information technology and three studies of clinical work. ACM SIGBIO Newsletter. 1995;15:2-5.
8. Fafchamps D, Young CY, Tang PC. Modeling work practices: input to the design of a physician's workstation. SCAMC Proceedings. New York: McGraw-Hill, 1991;788-92.
9. Forsythe D, Buchanan BG. Broadening our approach to evaluating medical expert systems. SCAMC Proceedings. New York: McGraw-Hill, 1991;8-12.
10. Kaplan B. Fitting system design to work practice: using observation in evaluating a clinical imaging system. First Americas Conference on Information Systems Proceedings, Vol IV: Information Systems—Collaboration Systems and Technology, and Organizational Systems and Technology, 1995;86-8.
11. Nyce JM, Graves W III. The construction of neurology: implications for hypermedia system development. Artif Intell Med. 1990;2:315-22.
12. Nyce JM, Timpka T. Work, knowledge and argument in specialist consultations: incorporating tacit knowledge into system design and development. Med Biol Eng Comput. 1993;31:HTA16-HTA19.
13. Anderson JG, Aydin CE, Kaplan B. An analytical framework for measuring the effectiveness/impacts of computer-based patient record systems. Hawaii International Conference on System Sciences Proceedings, 1995;767-76.
14. Anderson JG, Aydin CE. Theoretical perspectives and methodologies for the evaluation of health care information systems. In: Anderson JG, Aydin CE, Jay SJ (eds). Evaluating Health Care Information Systems: Methods and Applications. Thousand Oaks, CA: Sage Publications, 1994;5-29.
15. Kaplan B. A model comprehensive evaluation plan for complex information systems: clinical imaging systems as an example. European Conference on Information Technology Investment Evaluation Proceedings, 1995;174-81.
16. Kling R. Social analyses of computing: theoretical perspectives in recent empirical research. Comp Surv. 1980;12:61-110.
17. Kling R, Scacchi W. The web of computing: computer technology as social organization. In: Yovits MC (ed). Advances in Computers. New York: Academic Press, 1982;21:2-90.
18. Lyytinen K. Different perspectives on information systems: problems and solutions. Comp Surv. 1987;19:5-46.
19. Anderson JG, Aydin CE. Theoretical perspectives and methodologies for the evaluation of health care information systems. In: Anderson JG, Aydin CE, Jay SJ (eds). Evaluating Health Care Information Systems: Methods and Applications. Thousand Oaks, CA: Sage Publications, 1994;5-29.
20. Markus ML, Robey D. Information technology and organizational change: causal structure in theory and research. Management Science. 1988;34:583-98.
21. Williams LS. Microchips versus stethoscopes: Calgary hospital MDs face off over controversial computer system. Can Med Assoc J. 1992;147(10):1534-47.
22. Massaro TA. Introducing physician order entry at a major academic medical center. Acad Med. 1993;68(1):20-30.
23. Kaplan B. Models of change and information systems research. In: Nissen H-E, Klein HK, Hirschheim R (eds). Information Systems Research: Contemporary Approaches and Emergent Traditions. Amsterdam: North-Holland, 1991;593-611.
24. Havelock RG, Guskin A, Frohman M, Havelock M, Hill M, Huber J. Planning for Innovation through Dissemination and Utilization of Knowledge. 2nd printing. Ann Arbor, MI: Center for Research on Utilization of Scientific Knowledge, Institute for Social Research, University of Michigan, 1971.
25. Kolb DA, Frohman AL. An organization development approach to consulting. Sloan Manage Rev. 1970;12:51-65.
26. Rogers EM. Diffusion of Innovations. 3rd ed. New York: The Free Press, 1983.
27. Hirschheim R, Klein HK. Four paradigms of information systems development. Comm ACM. 1989;32:1199-1216.
28. Pfeffer J. Organizations and Organization Theory. Marshfield, MA: Pitman, 1982.
29. Slack JD. Communication Technologies and Society: Conceptions of Causality and the Politics of Technological Intervention. Norwood, NJ: Ablex, 1984.
30. Markus ML. Power, politics, and MIS implementation. Comm ACM. 1983;26:430-44.
31. Dick RS, Steen EB (eds). The Computer-Based Patient Record: An Essential Technology for Health Care. Washington, DC: Institute of Medicine, National Academy Press, 1991.
32. Kaplan B. The influence of medical values and practices on medical computer applications. MEDCOMP Proceedings. Silver Spring, MD: IEEE Computer Society Press, 1982;83-8. Reprinted in: Anderson JG, Jay SJ (eds). Use and Impact of Computers in Clinical Medicine. New York: Springer-Verlag, 1987;39-50.
33. Brenner DJ, Logan RA. Some considerations in the diffusion of medical technologies: medical information systems. In: Communication Yearbook 4. Beverly Hills, CA: Sage Publications, 1980.
34. Anderson JG, Jay SJ, Schweer HM, Anderson MM. Why doctors don't use computers: some empirical findings. J R Soc Med. 1986;79:142-4. Reprinted in: Anderson JG, Jay SJ (eds). Use and Impact of Computers in Clinical Medicine. New York: Springer-Verlag, 1987;97-109.
35. Kaplan B. Barriers to medical computing: history, diagnosis, and therapy for the medical computing “lag.” SCAMC Proceedings, 1985;400-4.
36. Kaplan B. Reducing barriers to physician data entry for computer-based patient records. Top Health Inform Manage. 1994;15:24-34.
37. Benbasat I. Laboratory experiments in information systems studies with a focus on individuals: a critical appraisal. In: Benbasat I (ed). The Information Systems Research Challenge: Experimental Research Methods. Boston, MA: Harvard Business School, 1989;2:33-47.
38. Wetherbe JC. Systems Analysis and Design. 3rd ed. St. Paul, MN: West Publishing, 1988.
39. Anderson JG, Jay SJ. The diffusion of computer applications in medical settings. In: Anderson JG, Jay SJ (eds). Use and Impact of Computers in Clinical Medicine. New York: Springer-Verlag, 1987;3-7.
40. Anderson JG, Jay SJ. Computers and clinical judgment: the role of physician networks. Soc Sci Med. 1985;10:969-79.
41. Anderson JG, Jay SJ, Schweer HM, Anderson MM, Kassing D. Physician communication networks and the adoption and utilization of computer applications in medicine. In: Anderson JG, Jay SJ (eds). Use and Impact of Computers in Clinical Medicine. New York: Springer-Verlag, 1987;185-99.
42. Anderson JG, Jay SJ, Schweer HM, Anderson MM. Diffusion and impact of computers in organizational settings: empirical findings from a hospital. In: Salvendy G, Sauter SL, Hurrell JJ (eds). Social, Ergonomic and Stress Aspects of Work with Computers. New York: Elsevier, 1987;3-10.
43. Rosen PN, Aydin CE, Anderson JG, Felitti VJ. Evaluation of CompuHx. Unpublished report. San Diego, CA: Kaiser Permanente Medical Care Program, 1993.
44. Kaplan B. Impact of a clinical laboratory computer system: users' perceptions. In: Salamon R, Blum BI, Jorgensen MJ (eds). Medinfo 86: Fifth World Congress on Medical Informatics. Amsterdam: North-Holland, 1986;1057-61.
45. Kaplan B. Initial impact of a clinical laboratory computer system: themes common to expectations and actualities. J Med Syst. 1986;11:137-47.
46. Kaplan B, Duchon D. Combining qualitative and quantitative approaches in information systems research: a case study. MIS Quarterly. 1988;4:571-86.
47. Kaplan B, Duchon D. A job orientation model of impact on work seven months post implementation. Medinfo Proceedings. Amsterdam: North-Holland, 1989;1051-55.
48. Kaplan B, Duchon D. Combining methods in evaluating information systems: case study of a clinical laboratory information system. SCAMC Proceedings, 1989;1051-55.
49. Kaplan B. Organizational evaluation of medical information systems. In: Friedman CP, Wyatt JC (eds). Evaluation Methods in Medical Informatics. New York: Springer-Verlag, 1996.
50. Brewer J, Hunter A. Multimethod Research: A Synthesis of Styles. Newbury Park, CA: Sage Publications, 1989.
51. Kaplan B, Maxwell JA. Qualitative research methods for evaluating computer information systems. In: Anderson JG, Aydin CE, Jay SJ (eds). Evaluating Health Care Information Systems: Methods and Applications. Thousand Oaks, CA: Sage Publications, 1994;45-68.
52. Bryan S, Keen J, Buxton M, Weatherburn G. Evaluation of a hospital-wide PACS: costs and benefits of the Hammersmith PACS installation. SPIE Medical Imaging VI: PACS Design and Evaluation. February 1992;1654:573-6.
53. Keen J, Stirling B, Muris N, Weatherburn G, Buxton M. A model for the evaluation of PACS. SCAR Proceedings, 1994;22-9.
54. Lundsgaarde HP, Gardner RM, Menlove RL. Using attitudinal questionnaires to achieve benefits optimization. SCAMC Proceedings, 1989;703-8.
55. Dayhoff RE, Maloney DL, Kuzmak PM, Shepard BM. Integrating medical images into hospital information systems. J Digit Imaging. 1991;4:87-93.
56. Dayhoff RE, Kuzmak PM, Maloney DL, Shepard BM. Experience with an architecture for integrating images into a hospital information system. IEEE Symposium on Computer-Based Medical Systems Proceedings. Silver Spring, MD: IEEE Computer Society Press, 1991.
57. Dayhoff RE, Kuzmak P, Maloney D. Medical images as an integral part of the patient's automated record. Symposium on Computer-Assisted Radiology Proceedings. Baltimore, MD, 1991.
58. Kaplan B, Lundsgaarde HP. Toward an evaluation of a clinical imaging system: identifying benefits. Methods Inform Med. 1996;35:221-9.
59. Goldschmidt PG. Information synthesis: a practical guide. Health Services Research. 1986;21:215-37.
60. Siegel ER, Rapp BA, Lindberg DAB. Evaluating the impact of MEDLINE using the critical incident technique. SCAMC Proceedings, 1992;83-7.
61. Aydin CE. Survey methods for assessing social impacts of computers in health care organizations. In: Anderson JG, Aydin CE, Jay SJ (eds). Evaluating Health Care Information Systems: Methods and Applications. Thousand Oaks, CA: Sage Publications, 1994;69-96.
62. Kaplan B. Objectification and negotiation in interpreting clinical images: implications for computer-based patient records. Artif Intell Med. 1995;7:439-54.
63. Dayhoff RE, Maloney DL. An integrated multidepartmental hospital imaging system: usage of data across specialties. SCAMC Proceedings, 1992;30-4.
