Journal of the American Medical Informatics Association (JAMIA). 2001 Mar–Apr;8(2):117–125. doi: 10.1136/jamia.2001.0080117

Clinicians' Response to Computerized Detection of Infections

Beatriz HSC Rocha 1, John C Christenson 1, R Scott Evans 1, Reed M Gardner 1
PMCID: PMC134551  PMID: 11230380

Abstract

Objective: To analyze whether computer-generated reminders about infections could influence clinicians' practice patterns and consequently improve the detection and management of nosocomial infections.

Design: The conclusions produced by an expert system developed to detect and manage infections were presented to the attending clinicians in a pediatric hospital to determine whether this information could improve detection and management. Clinician interventions were compared before and after the implementation of the system.

Measurements: The responses of the clinicians (staff physicians, physician assistants, and nurse practitioners) to the reminders were determined by review of paper medical charts. Main outcome measures were the number of suggestions to treat and manage infections that were followed before and after the implementation of compiss (Computerized Pediatric Infection Surveillance System). The clinicians' opinions about the system were assessed by means of a paper questionnaire distributed following the experiment.

Results: The results failed to show a statistical difference between the clinicians' treatment strategies before and after implementation of the system (P > 0.33 for clinicians working in the emergency room and P > 0.45 for clinicians working in the pediatric intensive care unit). The questionnaire results showed that the respondents appreciated the information presented by the system.

Conclusion: The computer-generated reminders about infections were unable to influence the practice patterns of clinicians. The methodologic problems that may have contributed to this negative result are discussed.


Nosocomial infections are a serious public health problem worldwide.1 In the United States alone, they afflict more than 2 million patients annually and are a leading cause of death.2 The cost of treating nosocomial infections has also become an important public health concern.2 In the United States, at least 6 percent of hospitalized pediatric patients have nosocomial infections, a rate that has increased over the past decade.3

Early detection of infections is important: it not only enables prompt treatment but also allows preventive interventions to control and reduce transmission. Many authors have discussed the role that computer-aided surveillance might play in the prompt detection of nosocomial infections.4–8 Several studies have reported the use of computers to improve and speed up the process of detecting nosocomial infections.6,9,10 All these studies sent the computer results to the infection control team. To our knowledge, however, little work has been done to evaluate whether the detection of nosocomial infections can be improved by sending computer results directly to attending physicians.

On the basis of our group's previous experience with an expert system (compiss, Computerized Pediatric Infection Surveillance System) to detect nosocomial infections in pediatric patients,11,12 we decided to present compiss results directly to the attending clinicians (staff physicians, physician assistants, and nurse practitioners). The objective was to analyze whether computer-generated reminders about infections could influence clinicians' practice patterns and consequently improve the detection and management of nosocomial infections.

Methods

Compiss is a rule-based expert system developed to detect and manage infections in pediatric patients. The development of compiss and the validation of its knowledge base have been described elsewhere.11,12 The system is currently in use at Primary Children's Medical Center (PCMC), a 232-bed pediatric tertiary-care hospital in Salt Lake City, Utah.

Compiss was developed to generate “alerts” (indicating the presence of infection) and “reminders” (suggestions about the management of an infection) each time a patient had an indication of infection that triggered the rules contained in compiss's knowledge base. The reminders can be divided into three types—“educational,” “managerial,” and “therapeutic.” Educational reminders inform the attending clinician about the peculiarities of some infections. Managerial reminders recommend actions that should be taken with the patient but that are not directly related to the prescription of a drug. Therapeutic suggestions advise what medications should be prescribed for the patient. Frequently, both managerial and therapeutic reminders also have an educational connotation. Examples of these reminders are shown in Table 1.

Table 1.

▪ Type of Reminder Based on Type of Infection

Type of Reminder | Type of Infection | Reminder
Educational | Lower respiratory tract infection by Bordetella pertussis | Reportable disease; notify Infection Control
Managerial | Fungemia by Candida spp. | If related to central venous catheter, line needs to be removed
Managerial | Lower respiratory tract infection by Mycobacterium tuberculosis | Negative-pressure isolation room necessary
Therapeutic | Bacteremia by Enterococcus spp. | Therapy of choice: ampicillin or vancomycin plus aminoglycoside

Not all alerts were associated with reminders, but all reminders were associated with alerts. In this paper, the combination of a computer-generated alert and reminder will be referred to from now on simply as a reminder.
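To make the taxonomy concrete, the mapping in Table 1 can be sketched as a small rule table. This is an illustrative sketch only, not the compiss knowledge base (whose rules are documented in references 11 and 12); the data structure and function name are assumptions.

# Illustrative rule table in the spirit of Table 1; NOT the compiss knowledge base.
# Each rule maps a (type of infection, organism) finding to a reminder type and text.
REMINDER_RULES = {
    ("lower respiratory tract infection", "Bordetella pertussis"):
        ("educational", "Reportable disease; notify Infection Control"),
    ("fungemia", "Candida spp."):
        ("managerial", "If related to central venous catheter, line needs to be removed"),
    ("lower respiratory tract infection", "Mycobacterium tuberculosis"):
        ("managerial", "Negative-pressure isolation room necessary"),
    ("bacteremia", "Enterococcus spp."):
        ("therapeutic", "Therapy of choice: ampicillin or vancomycin plus aminoglycoside"),
}

def reminder_for(infection: str, organism: str):
    """Return the (reminder type, reminder text) triggered by a finding, or None."""
    return REMINDER_RULES.get((infection, organism))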

The alerts and reminders were displayed on the computer terminal nearest the patient, and a paper copy was also printed on the nearest printer. These alerts and reminders contained the name of the patient, the type of infection, the reason for the alert, and the appropriate reminders about patient management. The complete culture result that triggered the system was also displayed or printed below the computer-generated alerts and reminders.

Each time a new alert was generated, the letter A started flashing in the lower left corner of the terminal screen. When the letter A appeared, clinicians and nurses were instructed to check the terminal and read the alert. To stop the flashing A, they had to access the terminal with their own login and password and acknowledge the alert.
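The display-and-acknowledge workflow can be sketched as follows. This is a hypothetical illustration of the behavior described above, not the terminal software actually used at PCMC; the class and field names are assumptions.

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Alert:
    # Content of a compiss alert as described above (illustrative fields only).
    patient: str
    infection_type: str
    reason: str
    reminders: list
    acknowledged_by: Optional[str] = None
    acknowledged_at: Optional[datetime] = None

    def indicator_flashing(self) -> bool:
        # The letter A keeps flashing on the terminal until the alert is acknowledged.
        return self.acknowledged_by is None

    def acknowledge(self, login: str) -> None:
        # Acknowledging requires the user's own login; this stops the flashing A.
        self.acknowledged_by = login
        self.acknowledged_at = datetime.now()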

Experimental Design

Compiss was implemented in the pediatric intensive care unit (PICU), the bone marrow transplant unit (BMT), and the emergency room (ER) at PCMC. These three units were selected for the following reasons. The PICU patients are in critical condition and need constant intensive care, which increases the incidence of nosocomial infections. The BMT admits patients who are going to have bone marrow transplants and need strict isolation. Patients in the BMT are very susceptible to infections because their immune systems are compromised. The ER provides the initial care for patients in emergency conditions. After the initial care, the patients are discharged for home care or are admitted to one of the wards in the hospital for continued care. All three units require that infections be diagnosed promptly. Compiss had the potential to expedite this process and perhaps improve the detection of infections.

The subjects of the study were the clinicians (staff physicians, physician assistants, and nurse practitioners) working in these three units, who will be addressed from now on as “clinical decision makers.” In the PICU and BMT, clinical decision makers and nurses were invited to learn to use the alert review program during classroom training sessions or on a personal basis in the unit over a period of two months (one hour per person). Besides the initial instruction, nurses (two nurses in the PICU, one nurse in the BMT) were trained and assigned to continue training the other nurses in their units. In the ER, only the clerks were trained (one hour per person). A quick-reference card containing instructions on how to use the alert review program was affixed to all computers on which compiss was implemented. In the BMT, one terminal and one printer were available in the ward. In the ER, two terminals and one printer were available on the clerks' desk, and in the PICU, 24 bedside terminals and two printers were available.

After consensus meetings with clinical decision makers, nurses, and clerks, it was decided that in the PICU and BMT the nurses would be responsible for contacting the clinical decision makers each time they received an alert. The chance of nurses' seeing the alerts was greater, since the nurses used the terminals more frequently and spent more time in the units than physicians did. The clerks were instructed to deliver printed alerts to the nurses as soon as the alerts appeared on the printer. The nurses were also responsible for acknowledging the alerts after contacting a clinical decision maker.

It was also decided that in the ER the clerks would be responsible for contacting a clinical decision maker. The terminal and the printer were on their desk, and they were the first to receive alerts. After receiving an alert, the clerks pulled the paper medical chart and handed both (alert printout and medical chart) to the clinical decision maker. The clerks were responsible for acknowledging the alert after giving the information to the clinical decision maker.

The compiss alerts and reminders were given to clinical decision makers for a period of six months, from September 1995 to February 1996. After this period, the effects of the reminders on the practice patterns of clinical decision makers were evaluated. Under this design, the interventions of the clinical decision makers before and after the implementation of the reminders were compared.

Other designs were also considered, such as randomized controlled trials (randomization of clinical decision makers and randomization of patients) and controlled groups. These designs were discarded because of several problems. For example, randomization of clinical decision makers into a control group and a study group was not possible. The units had a small number of beds, and the clinical decision makers on call took care of all patients in a unit. The clinical decision makers did not each have an independent set of patients.

The same problem affected the possibility of a study design in which the patients were randomized. The contamination effect would have been very important, because the same clinical decision maker would have received a reminder for some patients and not others, and could have used the knowledge thus gained to treat both sets of patients. Another important problem with randomizing patients was that randomization could have induced a false sense of security in the clinical decision maker when a patient did not receive a reminder, which would have given rise to a potential ethical problem.

An experimental design using controlled groups would have required the use of PCMC's PICU as the intervention site and the newborn intensive care unit of another hospital as the control site. Another option would have been to compare different units within PCMC. These options were also discarded, because the types of patients, the age groups, and the severity of diseases differed greatly between the two hospitals and among the units at PCMC. Since age and severity of disease are important factors associated with the development of infections, the different patient populations were not considered comparable.

In the three units selected for this experiment, patients with infections before implementation of compiss and patients with infections after implementation were compared. During the “before implementation” period, compiss was functioning and reminders were generated, but the reminders were not made available to the attending clinical decision makers. The patients in the “after implementation” group were all the patients for whom reminders were generated and issued to a clinical decision maker. To compare the two groups, the pre- and post-implementation patients were matched by infection location, type of alert (definite, probable, possible infection, or uninterpretable culture),11,12 type of reminder, and unit of study (PICU, ER, or BMT). Matched patients were compared to verify whether the clinical decision makers complied with the reminders or whether the clinical decision makers were already doing what the reminder suggested. Matching the patients avoided the problem of seasonal incidence of infections.

Each patient could generate multiple alerts and reminders, but only one type of reminder per patient was matched to the same type of reminder for another patient. The clinical decision makers were analyzed together because they should have a standard way of treating nosocomial infections. The reminders that were implemented were based on the consensus of the literature, whereas topics that might have been associated with different types of treatment strategies were not included.
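A minimal sketch of this matching step is shown below, under the assumption that each reminder is a record carrying the four matching variables; the field names are hypothetical and do not reflect the study's actual data model. The one-reminder-type-per-patient restriction described above is omitted for brevity.

from collections import defaultdict

MATCH_KEYS = ("infection_location", "alert_type", "reminder_type", "unit")

def match_reminders(before, after):
    # Pair each pre-implementation reminder with at most one post-implementation
    # reminder sharing infection location, alert type, reminder type, and unit.
    pool = defaultdict(list)
    for record in after:
        pool[tuple(record[k] for k in MATCH_KEYS)].append(record)
    pairs = []
    for record in before:
        key = tuple(record[k] for k in MATCH_KEYS)
        if pool[key]:
            pairs.append((record, pool[key].pop()))  # unmatched reminders are dropped
    return pairs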

The objective of the experiment was to determine whether the reminders influenced the treatment strategies of the clinical decision makers. The interventions performed by clinical decision makers to treat and manage infections before and after the implementation of compiss were compared. The responses of the clinical decision makers to the reminders were determined through review of the paper medical charts. Only the managerial reminders, the therapeutic reminders, and the associations of both types of reminders were reviewed, because the information necessary to determine whether a reminder was followed should have been available in the paper medical chart.

To ensure the consistency of the reviews of medical charts for the two periods (before and after implementation), a set of questions was created for each type of reminder (Table 2). The principal investigator and an expert in pediatric infectious disease (J.C.C.) created the questions. The answers to each question were determined from the medical chart review performed by the principal investigator.

Table 2.

▪ Sample Questions Created for Review of the Medical Charts, Divided by Type of Reminder

Type of Reminder | Question
If related to central venous catheter, line needs to be removed | The patient had a central venous catheter before the alert?
 | The central venous catheter was removed before the alert?
 | The central venous catheter was removed after the alert?
Therapy of choice: ampicillin or vancomycin PLUS aminoglycoside | Patient was receiving ampicillin or vancomycin plus aminoglycoside (gentamicin, tobramycin, amikacin) before the alert?
 | Patient was receiving ampicillin or vancomycin plus aminoglycoside (gentamicin, tobramycin, amikacin) after the alert?
Follow-up blood culture on therapy recommended | Follow-up culture was done before the alert?
 | Blood culture was done in a period of 3 days after the alert?

The clinical decision makers' compliance rate for the computer-generated reminders was calculated by dividing the number of suggestions followed by the total number of suggestions made.13 A two-tailed Student t-test for matched pairs, with a significance level (α) of 0.05, was used to compare reminders with the same characteristics before and after the implementation.
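The compliance-rate formula and the matched-pairs test can be reproduced schematically with SciPy. The per-pair values below are placeholders, not the study data; only the formula (suggestions followed divided by suggestions made) and the two-tailed paired t-test at α = 0.05 come from the text.

from scipy import stats

def compliance_rate(suggestions_followed: int, suggestions_made: int) -> float:
    # Compliance rate as defined in the text (reference 13).
    return suggestions_followed / suggestions_made

# Placeholder per-pair (followed, made) counts for matched reminders, before vs. after;
# the actual study compared 12 matched pairs in the ER and 8 in the PICU.
before = [compliance_rate(f, t) for f, t in [(0, 2), (1, 2), (2, 2), (1, 1), (0, 1), (1, 2), (1, 2), (0, 2)]]
after  = [compliance_rate(f, t) for f, t in [(0, 2), (2, 2), (2, 2), (1, 1), (0, 1), (1, 2), (1, 2), (1, 2)]]

# Two-tailed Student t-test for matched pairs, significance level alpha = 0.05.
t_statistic, p_value = stats.ttest_rel(before, after)
print(f"t = {t_statistic:.2f}, P = {p_value:.3f}")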

The time to acknowledge the alert was evaluated; the time to adopt the suggestion was not, because the adoption of the suggestion and the acknowledgment of the alert were expected to occur at the same time.

At the end of the experiment, the opinions of the clinical decision makers and nurses exposed to compiss were assessed by means of a questionnaire. The questionnaire was subdivided into three main parts. The first part asked for general information, such as age and specialty. The second part asked about computer experience, and the third part asked for opinions about the computer-generated alerts and reminders. The questionnaire contained questions about whether the reminders (suggestions) were adequate, accurate, and useful and whether the clinical decision makers changed their decisions on the basis of the information the reminders provided. Multiple-choice questions and a Likert scale were used. Sample questions are shown in Table 3.

Table 3.

▪ Sample Questions from the Questionnaire

Question | Type of Question | Possible Answers
Your computer experience (check all that apply): | Multiple choice | a. No experience; b. Data review at PCMC; c. Data entry at PCMC; d. Programming
Have you ever received a microbiology alert printout? | Multiple choice | YES or NO
If YES, how often? | | a. Daily; b. Once a week; c. Once a month; d. Occasionally; e. Just once or twice
If NO, why? | | a. Don't know; b. Asked not to; c. Other:
The microbiology alerts gave you useful information before you otherwise would have known it. | Likert scale | 0. Don't know; 1. Strongly disagree; 2. Disagree; 3. Neutral; 4. Agree; 5. Strongly agree
The microbiology alerts in general were annoying. | Likert scale | 0. Don't know; 1. Strongly disagree; 2. Disagree; 3. Neutral; 4. Agree; 5. Strongly agree

The questionnaire was sent to all clinical decision makers in the ER, PICU, and BMT. The questionnaire was also sent to the nurses in the BMT and PICU. The questionnaire was not sent to the nurses in the ER because these nurses had no contact with compiss.

Results

Compiss was implemented in August 1995. All alerts and reminders before this date were considered “before implementation,” and all alerts and reminders after this date were considered “after implementation.” Table 4 lists the number of alerts generated and the time taken to acknowledge them. During the experiment (before and after compiss implementation), 326 reminders were generated (Table 5). Of the 129 reminders studied in the PICU, 22 reminders before implementation and 22 after implementation were matched. In the ER, 36 reminders before implementation and 36 after implementation were matched. Because only two reminders in the BMT could be matched (one before and one after), the unit was excluded from the experiment.

Table 4.

▪ Characteristics of the Experiment Before and After compiss Implementation: Number of Alerts Issued, Median Number of Alerts per Patient, and Median Time to Acknowledge an Alert

Unit | Total Alerts | Median Alerts per Patient | Max. Alerts per Patient | Median Time (min.)*
PICU | 407 | 1 | 19 | 10,225 (7.1 days)
ER | 600 | 1 | 6 | 63
BMT | 73 | 2 | 39 | 5,549 (3.9 days)

Note: PICU indicates pediatric intensive care unit; ER, emergency room; BMT, bone marrow transplant unit.

*Median time to acknowledge the alert.

Table 5.

▪ Number of Reminders Issued During the Experiment, Before and After compiss Implementation, Divided by Hospital Unit and Type of Reminder

Unit | Educ., No. (%) | Manag., No. (%) | Therap., No. (%) | M/T, No. (%) | Total, No. (%)
PICU | 79 (38.0) | 70 (33.7) | 34 (16.3) | 25 (12.0) | 208 (63.8)
ER | 10 (9.8) | 76 (74.6) | 13 (12.7) | 3 (2.9) | 102 (31.3)
BMT | 3 (18.8) | 7 (43.7) | 6 (37.5) | 0 (0.0) | 16 (4.9)
Total | 92 (28.2) | 153 (46.9) | 53 (16.3) | 28 (8.6) | 326 (100.0)

Note: Educ. indicates educational; Manag., managerial; Therap., therapeutic; M/T, associations of managerial and therapeutic; PICU, pediatric intensive care unit; ER, emergency room; BMT, bone marrow transplant unit.

The matched reminders in the PICU were associated with 38 encounters for 37 patients. In the ER, the matched reminders were issued for 72 encounters (71 patients). Both units produced 110 encounters to be reviewed for 108 patients. The principal investigator reviewed only 102 of the 110 encounters, since eight medical charts were not available. Forty-one reminders were excluded from the study because the data necessary to review them were not documented in the medical chart. Another 18 reminders were excluded because they were considered not applicable to the infection. The reasons they were not applicable were as follows: “patient no longer in the unit when the reminder was issued,” “patient came to the hospital only for laboratory specimen collection (not seen by physician),” “culture specimen location incomplete or incorrect,” “culture corrected after reminder had been issued,” and “patient without central venous catheter.” After the review, 24 reminders (12 before and 12 after) could be matched in the ER and 16 reminders (8 before and 8 after) could be matched in the PICU.

Each reminder could have more than one suggestion, so for each type of reminder reviewed, the number of suggestions was determined. Through the review of the paper medical chart for each patient, the number of suggestions followed by the clinical decision maker was obtained. The mean for the overall (ER and PICU) compliance rate was 0.45 before compiss implementation and 0.46 after compiss implementation. The combined t-value was –0.03 with 19 degrees of freedom (P > 0.97). For the ER, t was –1.00 with 11 degrees of freedom (P > 0.33). For the PICU, t was 0.80 with 7 degrees of freedom (P > 0.45). The results were not statistically significant.

Questionnaire Results

The questionnaire return rate was 75.0 percent (9 of 12) for the ER clinical decision makers, 40.0 percent (6 of 15) for the PICU clinical decision makers, and 46.4 percent (39 of 84) for the PICU nurses. Most respondents (44.4 percent of the ER clinical decision makers, 83.3 percent of the PICU clinical decision makers, and 66.7 percent of the PICU nurses) had only "data review" computer experience.

Most respondents (77.8 percent of the ER clinical decision makers, 83.3 percent of the PICU clinical decision makers, and 82.1 percent of the PICU nurses) knew that compiss had been implemented in their units. All ER clinical decision makers had received compiss printouts, and most (77.8 percent) received them daily. Only 50.0 percent of the PICU clinical decision makers and 69.2 percent of the PICU nurses had received printouts. When asked whether they had used a computer terminal to review the alerts, 44.4 percent of the ER clinical decision makers, 66.7 percent of the PICU clinical decision makers, and 64.1 percent of the PICU nurses answered “yes.” Generally, the alert review process using computer terminals was done once a week or even less frequently. The majority of the respondents (88.9 percent of the ER clinical decision makers, 83.3 percent of the PICU clinical decision makers, and 66.7 percent of the PICU nurses) said they had never used a computer terminal to acknowledge an alert. Most of the PICU clinical decision makers and nurses justified not acknowledging the alerts because they did not know how to do it.

The answers to the question “Have you ever followed the microbiology alerting system suggestions?” can be found in Table 6. Only one PICU clinical decision maker did not agree with compiss suggestions.

Table 6.

▪ Answers to the Question “Have You Ever Followed the Microbiology Alerting System Suggestions?”

Answer | ER Clin. Decision Makers, No. (%) | PICU Clin. Decision Makers, No. (%) | PICU Nurses, No. (%)
Yes: How often?
  Always | | |
  Frequently | 7 (87.5) | 1 (50.0) | 1 (8.3)
  A few times | 1 (12.5) | 1 (50.0) | 5 (41.7)
  Just once or twice | | | 4 (33.3)
  Total Yes | 8 (88.9) | 2 (33.3) | 12 (30.8)
No: Why?
  Never received one | | 3 (75.0) | 9 (34.6)
  Did not agree | | 1 (25.0) |
  Physicians' realm | | | 9 (34.6)
  Other | | | 5 (19.2)
  Total No | 1 (11.1) | 4 (66.7) | 26 (66.7)

Note: ER indicates emergency room; Clin., clinical; PICU, pediatric intensive care unit.

Clinical decision makers and nurses disagreed with the statement that they were adequately trained, i.e., they thought they needed more training. When asked whether they would like to be trained, 44.4 percent of the ER clinical decision makers replied "no." The majority of the opinions about compiss were positive, and the respondents indicated that they would like to continue receiving compiss alerts. They also were of the opinion that compiss should be implemented in other units.

Discussion

The experiment was designed to verify whether computer-generated reminders, when given to attending clinical decision makers, influenced their practice patterns. For this study, compiss was implemented in care units that would benefit most from such reminders (PICU, ER, and BMT).

Before the implementation of compiss, several efforts were made to provide training on how to review and acknowledge the reminders. In the ER, clinical decision makers did not want to be trained, so only the ward clerks were trained. The clinical decision makers and nurses in the other units (PICU and BMT) did not say that they did not want to be trained, but most of them did not come to the scheduled training sessions. The sessions were scheduled at convenient times and were occasionally given on a one-on-one basis.

The study used a before-and-after design. Although this type of design can be easily implemented, it has recognized weaknesses.14,15 The design can be affected by “secular trends.”14,15 For instance, the practice pattern of clinical decision makers could change independently of the reminders' suggestions, i.e., on the basis only of education or of the use of new technologies. However, McDonald16 and Rind et al.17 have shown that alerts and reminders have no training effect. In addition, during the experiment period, no new computerized applications were implemented at PCMC. The ability to review microbiology culture results from computer terminals was implemented in June 1995, three months before compiss was implemented. Patients also were matched before and after compiss implementation to reduce the influence of confounding variables.

An important problem affecting the results was the lack of documented information in the paper medical charts confirming that the attending clinical decision maker had followed the reminders. Other problems included incomplete specimen location descriptions and misleading preliminary culture results. Because of these problems, only a small number of reminders could be reviewed and matched. We expected that by matching the patients we would avoid the problem of seasonal incidence of infections, but we did not expect that we would end up with so few cases. We recognized this only after the analysis was done. A power calculation was not performed before the study.

The results failed to demonstrate a statistical difference between the clinical decision makers' practice patterns before and after compiss implementation. One possible explanation was the small sample size combined with a small treatment effect. Studies that have been able to demonstrate the effect of reminders have had much larger sample sizes.13,17–20 However, there have also been studies that were unable to show any effect of reminders.21

Another possible explanation for the results was that the previously described problems (lack of documented information, incomplete specimen descriptions, and consequently, a small sample size) produced a biased study sample, i.e., the types of reminders were not equally represented. For instance, of the 24 matched reminders issued for ER patients, 20 were as follows: "Patient with uninterpretable urine culture. Please, do a catheter urine or a suprapubic urine culture." Nineteen of these 20 reminders (ten before and nine after compiss implementation) did not have follow-up cultures (zero compliance rate). The main reason for the poor compliance rate was that ER patients with a suspected episode of urinary infection are normally sent home with antibiotics. If the clinical decision maker were to follow the reminder, the patient would have to be called back to the ER for a repeat urine culture. Since the patient is already under treatment, the clinical decision makers usually decide not to recall the patients, even knowing that some are being treated unnecessarily. Compiss may have had an effect for the other types of reminders, but because the sample size for these reminders was small, the effect could not be demonstrated; for the urine-culture reminders, no effect was evident.

An important concern associated with the experiment was the possibility that clinical decision makers would receive too many alerts for the same patient. In this situation, clinical decision makers might have ignored some of the alerts and reminders issued by compiss. However, the experiment showed that this concern was not important, since the median number of alerts per patient in the PICU and the ER was only one.

Questionnaire

According to Wyatt et al.,22 one of the necessary steps in the evaluation of an expert system is to assess the opinion and acceptance of the users. The assessment was achieved by means of a questionnaire sent to all clinical decision makers and nurses involved in compiss use.

Despite several efforts to improve the return rate, the majority of the questionnaires were never returned. In the ER, where the return rate was greatest, the alerts issued by compiss have become part of the unit's routine practice. A possible explanation for the high acceptance is that, before compiss implementation, the ER clerks had to complete a form with the positive culture results and hand the form to the clinical decision makers. After compiss implementation, the alert printouts replaced these forms, a fact observed during the paper medical chart review. Twenty-four of the 35 ER medical charts reviewed contained alert printouts. Of the 24 printouts, 10 also had handwritten comments about the therapy instituted or action taken. In the PICU charts, no alert printouts were found, despite the fact that the PICU had numerous alerts.

The questionnaire results indicated that most of the respondents had received an alert printout. The use of computer terminals to review alerts was not high. The majority of clinical decision makers and nurses used terminals only sporadically. If compiss had been dependent only on the use of terminals to deliver the alerts, many alerts would probably never have been seen. The preference for paper alerts may have been because of a lack of computer experience among the respondents. An interesting observation was that four of nine ER clinical decision makers reported using the computer terminals to review alerts, despite the fact that they were not trained in using the system. These clinical decision makers were probably trained by the clerks or used the quick-reference card that was attached to the terminals.

The questionnaires showed that most respondents had never acknowledged an alert. In the ER, the clerks were responsible for acknowledging the alerts and the clinical decision makers were not trained to do so. The acknowledgments by the ER clerks were usually made within an hour of the alert. In the PICU, some clinical decision makers were trained, but the acknowledgment of the alerts was the responsibility of the nurses. The alerts issued for the PICU patients were not promptly acknowledged (median, 7.1 days after the alert was issued). A possible explanation for the deferred acknowledgment was the extra work involved, especially considering that the alerts were also being sent simultaneously to the nearest printer. Another possible reason that PICU nurses did not acknowledge the alerts was that they did not know how to use the alert review program, as was noted in the questionnaires. The main consequence of not acknowledging the alerts was that old alerts from previous patients were displayed along with the alerts for the current patients. Another consequence was that the letter A on the terminal never stopped blinking.

The problems related to the acknowledgment of the alerts might explain why the PICU nurses disagreed with the questionnaire statement “the system was very easy to use.” In contrast, the ER clinical decision makers agreed the most with this comment, but they received only printed copies of the alerts, since the clerks acknowledged the alerts.

According to the questionnaires, most of the ER clinical decision makers followed the reminders, an observation not confirmed by the experiment. In reality, only one PICU clinical decision maker indicated not having followed the suggestions, because he or she did not agree with the suggestions. The questionnaire respondents agreed that compiss should be continued and implemented in other units of the hospital. The respondents likewise agreed that compiss alerts gave “useful information before they otherwise would have known it.” The agreement with this last assertion is very important, considering that one of the goals of the alerts was to deliver critical information to the clinical decision makers as quickly as possible.17

The questionnaire respondents agreed that “the suggestions in the microbiology alerts were adequate” and “the suggestions in the microbiology alerts were sufficient.” However, the adequacy and sufficiency of the reminders issued by compiss were not established by the measures of the experiment. Future endeavors should attempt to expand the reminders and make them more specific by using the antibiogram and radiology results. Increasing the specificity of the reminders would probably increase the compliance rate.16

Most of the PICU questionnaire respondents considered that they were not adequately trained in the use of the system and would like to be trained. This opinion was contradictory, given that respondents did not take advantage of the training offered. All the respondents had a second chance to schedule training sessions by calling the telephone extension provided in the questionnaire, but none called. These results showed that, even after recognizing the need for training, the users did not take advantage of new opportunities to be trained. Another problem that may have affected the training was the lack of free time among physicians and nurses. A possible solution would be the development of an even more user-friendly interface, enabling users to learn the system with little training. It should be noted that, from the beginning, the ER clinical decision makers did not want to be trained, and the majority of them retained this opinion after the implementation.

Conclusion

The experiment failed to demonstrate that infection reminders given directly to attending clinical decision makers changed their practice patterns. Three explanations for this result are possible. The first possibility is that there was an effect but the methodology of the experiment failed to detect it. The findings suggest that a larger sample size would be required to demonstrate the presumably small effect of the reminders. As mentioned before, we did not expect that the matching of patients would result in so few cases. In addition, it is possible that only a subset of the reminders had an effect, and this effect was missed because of the other reminders that clinical decision makers did not consider important. A future study including more cases and a subset of the more specific alerts might demonstrate the effect of the reminders. The choice of measured outcome might also have influenced the results: measuring the time to taking an action might have offered more power and shown some effect. We expected that the adoption of the suggestion and the acknowledgment of the alert would happen at the same time, but this was not observed.

The second possible explanation is that the system was only partially integrated into the health care process, and the partial implementation contributed to the lack of an effect. One possibility was that attending clinical decision makers already knew the information communicated by the alerts and reminders or that they had access to a faster way of obtaining this information. However, clinical decision makers and nurses who answered the questionnaire agreed that the information contained in the alerts and reminders was both helpful and timely.

During the experiment, compiss was apparently never integrated into the process of care of the PICU. The lack of integration could have resulted from inadequate training in use of the system. Another explanation might be that the PICU nurses and clinical decision makers did not perceive the utility of the reminders. In either case, the reminders might not have had an effect in the PICU because compiss was not adequately used.

Modifications may eventually improve compiss performance and usage: incorporating data from sources other than laboratory and demographic data; generating more specific alerts and reminders; and educating the clinical staff to document the specimen location accurately when microbiology cultures are ordered. Performance and usage could also be improved if an even more user-friendly interface were created and if the capability of updating issued alerts were added. Another possibility would be to change the way the alerts are delivered. These modifications might increase the effect of the reminders.

The third possibility is that, for whatever reason, this system had no effect on patient care. This option can only be confirmed when the other problems are solved and shown not to influence the results.

This work was supported in part by the National Council for Scientific and Technological Development (CNPq), Secretary for Science and Technology, Brazil.

References

1. Haley RW, Culver DH, White JW, Morgan WM, Emori TG. The nationwide nosocomial infection rate: a new need for vital statistics. Am J Epidemiol. 1985;121(2):159–67.
2. Centers for Disease Control. Public health focus: surveillance, prevention, and control of nosocomial infections. MMWR Morb Mortal Wkly Rep. 1992;41(42):783–7.
3. Stein F, Trevino R. Nosocomial infections in pediatric intensive care unit. Pediatr Clin North Am. 1994;41(6):1245–57.
4. Gaunt PN. Information in infection control. J Hosp Infect. 1991;18(Suppl A):397–401.
5. Lee TB, Baker OG, Lee JT, Scheckler WE, Steele L, Laxton CE. Recommended practices for surveillance. Association for Professionals in Infection Control and Epidemiology, Inc., Surveillance Initiative Working Group. Am J Infect Control. 1998;26(3):277–88.
6. Pittet D, Safran E, Harbarth S, et al. Automatic alerts for methicillin-resistant Staphylococcus aureus surveillance and control: role of a hospital information system. Infect Control Hosp Epidemiol. 1996;17(8):496–502.
7. Mertens R, Ceusters W. Quality assurance, infection surveillance, and hospital information systems: avoiding the Bermuda triangle. Infect Control Hosp Epidemiol. 1994;15(3):203–9.
8. Cauet D, Quenon JL, Desce G. Surveillance of hospital acquired infections: presentation of a computerized system. Eur J Epidemiol. 1999;15(2):149–53.
9. Evans RS, Gardner RM, Bush AR, et al. Development of a computerized infectious disease monitor (CIDM). Comput Biomed Res. 1985;18:103–13.
10. Kahn MG, Steib SA, Fraser VJ, Dunagan WC. An expert system for culture-based infection control surveillance. Proc 17th Annu Symp Comput Appl Med Care. 1993:171–5.
11. Rocha BHSC. Development, implementation, and evaluation of a computerized pediatric infection surveillance system [PhD thesis]. Salt Lake City, UT: University of Utah, 1997.
12. Rocha BHSC, Christenson JC, Pavia A, Evans RS, Gardner RM. Computerized detection of nosocomial infections in newborns. Proc 18th Annu Symp Comput Appl Med Care. 1994:684–8.
13. McDonald CJ, Hui SL, Smith DM, et al. Reminders to physicians from an introspective computer medical record: a two-year randomized trial. Ann Intern Med. 1984;100:130–8.
14. Rind DM, Davis R, Safran C. Designing studies of computer-based alerts and reminders. MD Comput. 1995;12(2):122–6.
15. Friedman CP, Wyatt JC. Evaluation Methods in Medical Informatics. New York: Springer, 1996.
16. McDonald CJ. Protocol-based computer reminders, the quality of care and the non-perfectibility of man. N Engl J Med. 1976;295:1351–5.
17. Rind DM, Safran C, Phillips RS, et al. Effect of computer-based alerts on the treatment and outcomes of hospitalized patients. Arch Intern Med. 1994;154:1511–7.
18. Bates DW, Kuperman GJ, Rittenberg E, et al. A randomized trial of a computer-based intervention to reduce utilization of redundant laboratory tests. Am J Med. 1999;106(2):144–50.
19. Shojania KG, Yokoe D, Platt R, Fiskio J, Ma'luf N, Bates DW. Reducing vancomycin use utilizing a computer guideline: results of a randomized controlled trial. J Am Med Inform Assoc. 1998;5(6):554–62.
20. Evans RS, Pestotnik SL, Classen DC, Burke JP. Evaluation of a computer-assisted antibiotic-dose monitor. Ann Pharmacother. 1999;33(10):1026–31.
21. Johnston ME, Langton KB, Haynes B, Mathieu A. Effect of computer-based clinical decision support systems on clinician performance and patient outcome. Ann Intern Med. 1994;120(2):135–42.
22. Wyatt J, Spiegelhalter D. Evaluating medical expert systems: what to test and how? Med Inform. 1990;15(3):205–17.
