Applied Clinical Informatics. 2010 Dec 29;1(4):466–485. doi: 10.4338/ACI-2010-05-RA-0029

Impact of Clinical Reminder Redesign on Physicians’ Priority Decisions

Sze-jung Wu 1, Mark R Lehto 1, Yuehwern Yih 1, Jason J Saleem 2,3,4, BN Doebbeling 2,4,5
PMCID: PMC3633320  PMID: 23616855

Abstract

Objective

Computerized clinical reminder (CCR) systems can improve preventive service delivery by providing patient-specific reminders at the point of care. However, adherence varies between individual CCRs and is correlated with resolution time, among other factors. This study aimed to evaluate how a proposed CCR redesign, providing information explaining why the CCRs occurred, would impact providers’ prioritization of individual CCRs.

Design

Two CCR designs were prototyped to represent the original and the new design, respectively. The new CCR design incorporated a knowledge-based risk factor repository, a prioritization mechanism, and a role-based filter. Sixteen physicians participated in a controlled experiment to compare the use of the original and the new CCR systems. The subjects individually simulated a scenario-based patient encounter, followed by a semi-structured interview and survey.

Measurements

We collected and analyzed the order in which the CCRs were prioritized, the perceived usefulness of each design feature, and semi-structured interview data.

Results

We elicited the prioritization heuristics used by the physicians and found that a CCR system needs to be relevant, easy to resolve, and integrated with workflow. The redesign impacted 80% of physicians and 44% of prioritization decisions. With the new design, decisions were no longer correlated with resolution time. The proposed design features were rated useful or very useful.

Conclusion

This study demonstrated that the redesign of a CCR system using a knowledge-based risk factor repository, a prioritization mechanism, and a role-based filter can impact clinicians’ decision making. These features are expected to ultimately improve the quality of care and patient safety.

Keywords: Decision support, computerized clinical reminders, HIT, health information technology

1. Introduction

A computerized clinical reminder (CCR) system is a form of decision support that reminds providers of upcoming or overdue actions in order to support adherence to clinical practice guidelines. The reminders are typically triggered by a set of Boolean rules applied to electronic data, such as patient medical history and laboratory findings. A CCR system thus automates the processing of patient information, alleviating providers’ workload and reducing their reliance on memory.
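As an illustration of such rule-based triggering, the sketch below shows a hypothetical Boolean trigger for a colorectal cancer screening reminder. The field names, age range, and screening interval are illustrative assumptions, not the VA’s actual rule logic.

```javascript
// Hypothetical sketch of a Boolean CCR trigger: a reminder fires when the
// rule's predicate over the patient's electronic data evaluates to true.
const YEAR_MS = 365 * 24 * 60 * 60 * 1000;

function colorectalScreeningDue(patient, today) {
  const age = (today - patient.birthDate) / YEAR_MS;
  const lastScreen = patient.lastColorectalScreen; // a Date, or null if never screened
  const yearsSince = lastScreen ? (today - lastScreen) / YEAR_MS : Infinity;
  // Fire if the patient is in the (assumed) screening age range and no screen
  // has been documented within the (assumed) 10-year guideline interval.
  return age >= 50 && age <= 75 && yearsSince >= 10;
}
```

In a deployed system the predicate would be evaluated against the electronic record at the start of each encounter, and the reminder shown only while it returns true.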

This approach is illustrated by the Computerized Patient Record System (CPRS) in the Department of Veterans Affairs (VA). CPRS integrates an electronic medical record database with numerous decision support modules, allowing health care providers to review and update patients’ records. A point-and-click data entry system enables providers to place orders, including medications, laboratory tests, and special procedures (►Figure 1). The CCR system is one of the modules embedded in CPRS. The CCR system in the VA is both context- and time-sensitive: it recognizes a patient’s specific diagnoses and the time elapsed since the last screening was provided. It also provides standardized screening protocols in its dialog boxes and automatically generated documentation. A variety of CCRs have been widely implemented throughout the VA’s healthcare system of acute care medical centers, outpatient care clinics, and long-term care facilities [1].

Fig. 1. VA’s computerized patient record system (CPRS)

CCRs are effective in facilitating adherence to clinical practice guidelines [2-6] and improving preventive service delivery [7-9]. When used optimally, CCRs also significantly improve decision quality [10] and quality of care [11-16]. Despite the potential to improve the quality of patient care, CCRs have been underused by clinicians, resulting in missed opportunities for provision of preventive care [17]. Moreover, adherence to individual CCRs has been found to be variable [18]. For example, CCRs were found to increase mammography performance [19] and adherence to guidelines for patients with heart disease [20], but had no effect on fecal occult blood testing [19]. In another study, a survey of primary care physicians revealed that CCRs were felt to be more useful for preventive health management items than for diabetes management [15].

In previous work we uncovered several important factors associated with the aforementioned variation in CCR adherence. First, we found that the perceived clinical importance of individual CCRs influenced their likelihood of completion. Furthermore, physicians’ projected resolution time was found to be inversely correlated with their adherence rate [21]. To resolve a CCR, the provider must perform a series of procedures, including opening and reviewing the electronic health record, consulting with the patient regarding the dialog boxes and recommendations, ordering laboratory tests, following up on the laboratory tests, revising orders for the patient’s medication, and documenting resolution of the reminder. In that study, resolution time was defined as the total time required to complete these steps to address and resolve a clinical reminder.

The finding from our prior study implies that a CCR perceived as taking longer to resolve is more likely to be deferred. One explanation is time restriction and workload, which often force primary care providers to choose among multiple problems and tasks during a given visit. Another possible explanation is that the “black-box” design of some VA CCRs fails to provide conspicuous reasons why a CCR is triggered. Consequently, physicians have to develop information management strategies to facilitate the retrieval of relevant data and appropriately prioritize information at the time of care [22].

To address these issues, we developed a new CCR prototype to assist clinicians in more effectively assigning priorities to particular CCRs during a patient encounter. The design modifications included a knowledge-based risk factor repository, a role-based filter, and a prioritization mechanism, as elaborated in the following section. The performance of the original and new CCR designs was tested in a study involving primary care physicians in the controlled Human Computer Interaction (HCI) & Simulation Laboratory at a participating Midwestern VA Medical Center (VAMC). We hypothesized that the new features in the CCR redesign would be useful and would result in better prioritization decisions.

2. CPRS Prototype

A web-based prototype was developed as a mock-up of the current VA CPRS system. The application was programmed in Hyper Text Markup Language (HTML), JavaScript, Active Server Pages (ASP), and SQL, with a Microsoft Access 2003 database. The web application was database-driven and enabled users to log in, select patients, and review the same clinical information as in the VA’s CPRS, including the cover sheet, problems, radiology results, laboratory tests, medications, orders, and medical notes. This web-based CPRS simulation provided a platform for studying the use of CPRS in simulated clinic settings, without creating the HIPAA compliance issues and clinical workflow integration problems that would have been faced if the system were tested in actual clinical practice. (Note: HIPAA stands for the Health Insurance Portability and Accountability Act of 1996, enacted by the U.S. Congress to address the security and privacy of health data in the electronic data interchange era.)

Two different prototypes of the CCR system were developed. Design A, representing the original CCR design, had an identical interface design and functionality to that in the current VA CPRS implemented at the participating VAMC (►Figure 2). In design B, numerous design modifications were implemented (►Figure 3). The following sections elaborate each of the novel features in the new CCR design (design B).

Fig. 2. Design A: the CPRS prototype with the original design of the CCR system (simulated data)

Fig. 3. Design B: the CPRS prototype with the new design of the CCR system (simulated data)

2.1 Risk Factor Repository

We designed a risk factor repository that connected to the patient’s health records to populate a systematic review of a patient’s risk factors (i.e., problem list, laboratory results, and other diagnostic tests), past encounter summary, and pending exams. This tool automatically generated a risk factor assessment summary consistent with the existing knowledge base pertaining to early detection of a disease or other preventive services. To define the risk factors for an individual CCR, we searched the United States Preventive Services Task Force (USPSTF) recommendations and the National Guidelines Clearinghouse [23-24], and also obtained input from domain experts. Only grade A and grade B recommendations with stringent evidence from the USPSTF were adopted, for better evidence-based practice. The CCRs chosen were also VA performance measures.
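The repository’s core lookup can be sketched as follows. The reminder names, factor categories, and record fields here are hypothetical placeholders for illustration, not the actual USPSTF-derived rules.

```javascript
// Illustrative sketch of the risk factor repository lookup: given a CCR and a
// patient record, collect the record entries relevant to that reminder, so the
// pop-up can show why the reminder applies to this particular patient.
const RELEVANT_FACTORS = {
  'Colorectal Cancer Screening': ['familyHistoryCRC', 'priorColonoscopy', 'positiveFOBT'],
  'Diabetic Foot Exam': ['diabetesDiagnosis', 'neuropathy'],
};

function riskFactorSummary(ccrName, record) {
  const keys = RELEVANT_FACTORS[ccrName] || [];
  // Keep only the factors actually documented in this patient's record.
  return keys.filter((k) => Boolean(record[k]));
}
```

In the prototype, this summary was rendered as an expandable tree in a pop-up window attached to the corresponding CCR.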

The interface for the risk factor repository was programmed using JavaScript to include an expandable tree feature to facilitate navigation. This repository was connected to the CPRS database and accessible as a pop-up window by a single click on the corresponding CCR (►Figure 4). The intent was to make it easy for clinicians to quickly retrieve desired information without having to manually browse through various locations to collect patient information.

Fig. 4. The risk factor repository for colorectal cancer screening

2.2 Prioritization Mechanism

The second feature incorporated in the prototype CCR system was a prioritization mechanism that enabled users to prioritize the clinical reminders according to several CCR attributes. These CCR attributes included

  1. reminder name (in both designs),
  2. due date,
  3. resolution time, and
  4. risk factors.
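A minimal sketch of such an attribute-based sort follows; the attribute and field names are assumptions for illustration, and a numeric risk score stands in for the stratified score.

```javascript
// Sketch of the prioritization mechanism: return the reminder list sorted by
// one of the four CCR attributes.
function prioritize(ccrs, attribute) {
  const comparators = {
    name: (a, b) => a.name.localeCompare(b.name),                  // alphabetical (default, both designs)
    dueDate: (a, b) => a.dueDate - b.dueDate,                      // most overdue first
    resolutionTime: (a, b) => a.resolutionTime - b.resolutionTime, // quickest first
    riskScore: (a, b) => b.riskScore - a.riskScore,                // highest risk first
  };
  return [...ccrs].sort(comparators[attribute]); // copy, so the original list is untouched
}
```

Each comparator corresponds to one of the attributes described in the subsections below.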

2.2.1 Reminder Name

Prioritizing by reminder name alphabetically was the default setting and the only prioritization approach available in the VA’s existing CCR system. Therefore, this feature was included in both the original and the new design as a benchmark against which to compare the importance of the other design features.

2.2.2 Due Date

The existing VA design displayed the statement “due now” in place of the actual date. The new design displayed each overdue clinical reminder’s actual due date and allowed prioritization by it. This function enabled clinicians to recognize how far past due a CCR was and to prioritize accordingly if desired. It also saved them time browsing through past exam results or medical notes to locate such data.

2.2.3 Estimated Resolution Time

Resolution time is the amount of time it takes to resolve a CCR, including both addressing the reminder issue and documenting the resolution. In a prior study, we asked a panel of clinical experts to estimate the resolution time of different CCRs for the CPRS system [21]. This new feature displayed the resolution time estimated by physicians for each CCR in the prior study, allowing us to investigate how providers used this information to generate their own prioritizing decisions.

2.2.4 Risk Factors

Studies have identified risk factor management as one key to improving the safety and quality of care. For example, when CCRs were prioritized by risk factors, patients could benefit in terms of life expectancy, 10-year mortality, and absolute disease risk [27]. Differentiating CCRs by risk factor was also shown to improve the cost-effectiveness of a screening test or treatment, including implantable defibrillator therapy [25] and colorectal cancer screening [26]. Therefore, we proposed a risk factor ranking system that stratified the CCRs by the patient’s risk of developing a certain disease. A risk score of “average” or “high” was assigned to each CCR as an indication of the level of risk for the screened disease. The score was generated by the same knowledge base that powered the risk factor repository. Only grade A evidence levels, which are unequivocally recommended by the USPSTF, were used to populate the data system and calculate the risk score.
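The two-level stratification can be sketched as a simple predicate over the patient’s record; the factor list and field names are hypothetical, not the actual grade A criteria.

```javascript
// Sketch of the two-level risk score: a CCR is stratified as "high" when the
// patient matches any of the grade-A risk factors for the screened disease,
// otherwise "average".
function riskScore(gradeAFactors, record) {
  const matched = gradeAFactors.some((f) => Boolean(record[f]));
  return matched ? 'high' : 'average';
}
```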

2.3 Role-based Filter

Currently, the CCR system at the participating VAMC uses the prefaces “N” and “P” to indicate whether a reminder falls within the scope of practice of nurses or physicians, respectively [28]. We proposed a role-based filter that can optionally display the CCRs prefaced with “N”, “P”, or both, as designated by the user. Unlike the two aforementioned design features (the risk factor repository and the prioritization mechanism), which have the potential to affect an individual’s prioritization decisions, the role-based filter was designed as a system intervention to direct each reminder to whoever should best receive it, in order to reduce information overload and improve the use of the CCR system.
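A sketch of such a filter over the “N”/“P” prefaces follows; the data shape is an assumption.

```javascript
// Sketch of the role-based filter: show only the reminders whose "N"/"P"
// preface matches one of the roles the user has selected.
function filterByRole(ccrs, roles) {
  // roles is a Set such as new Set(['P']) or new Set(['N', 'P'])
  return ccrs.filter((c) => roles.has(c.preface));
}
```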

3. Methods

We conducted an IRB-approved, controlled experiment to evaluate the impact of the CCR redesign in a simulated clinical setting in which clinicians were asked to behave as they normally would during a patient encounter. Scripted instructions were administered throughout the experiment to minimize bias and potential variation between sessions.

3.1 Participants

Clinicians have been shown to behave as they normally would during a clinical encounter in a simulated environment for usability testing [29]. We recruited sixteen (16) VA physicians opportunistically to participate in a comparison study of the original system and the new design. In a similar study, 16 subjects were demonstrated to be a sufficient sample size for comparison of CCR designs [30]. The subjects were recruited as a convenience sample because of the challenges of recruiting busy providers with packed clinical schedules. The participants constituted approximately half of the staff physicians in the participating VAMC outpatient clinics. Among the participants, two were novice CPRS users; the rest were experienced users with an average of 5.8 years of experience. The average age of the participants was 38.5 years (s.d. = 7.7 years); nine were male and seven female.

3.2 Procedure

The experiment was conducted in the HCI & Simulation Laboratory at the participating VAMC. The HCI lab provided a controlled, closed setting to simulate a physician using a workstation in an exam room. A web camera and Morae Recorder, usability testing software, were installed on the participant’s workstation. Morae recorded user video, audio, the screen, and computer events, including keyboard entry and mouse clicks. The researcher (S. Wu) observed the participant’s facial expressions and the computer screen remotely through Morae Observer at an observation station near the participant.

The experiment started with a structured exploration session that acquainted the subjects with the CPRS prototype, the original CCR design (design A), and the simulated patient data. After the exploration session, each subject moved on to an interactive simulation package programmed with JavaScript, as shown in ►Figure 5, to simulate the procedure of a typical patient encounter in an exam room. During the simulation, the subject walked through a mock patient physical exam on the computer screen, which provided immediate interactive feedback on patient symptoms, examination, and checkup results.

Fig. 5. Patient Encounter Simulation (Page 1 of 5)

The base case scenario was a 55-year-old male smoker with a 4-year history of type II diabetes and other active problems, including hypertension, tobacco abuse, and neuropathy in diabetes. Through the interactive simulation, the subject learned of the patient’s health data and the following assessment plan generated in prior encounter notes:

  1. the patient’s diabetes was under very good control;
  2. neuropathy in diabetes was well controlled;
  3. hypertension was already controlled with medications;
  4. the patient smoked very little now, one pack per week or so.

The patient scenario was common among patients in the participating VA facility; the patient’s record was therefore selected from a patient pool as a typical case a physician might see in daily practice. The base case patient had neither acute complications nor critical conditions, and was merely scheduled for routine care.

Near the end of the simulated patient encounter, the simulation system informed the subject that five clinical reminders remained unresolved, but that the next patient was to be seen in five minutes. Participants had to decide whether to resolve at least some of the clinical reminders or defer all until the next visit. From this point on, a semi-structured interview was conducted by the investigator to elicit how the subject prioritized the clinical reminders under time pressure. The subject was asked to prioritize the five remaining clinical reminders, verbally explain how they made their decisions, and estimate the time they expected to spend resolving each clinical reminder. This prioritization decision was made under the assumption that not all five clinical reminders could be resolved because of the time constraint, and thus the clinical reminders had to be prioritized.

In the second half of the interview, the new CCR design (design B) and its new features were introduced to the subject by the researcher, following a scripted standard procedure. The subject walked through each feature in the new design with the investigator, including the risk factor repository, the prioritization functionality, and the role-based filter. After being introduced to the new design, the physician was asked the same questions as with the original design, assuming he/she was seeing the same patient under the same time constraint. The participant prioritized the CCRs again after reviewing the information in the new design, and provided explanations and feedback. Finally, the interview concluded with a web-based survey in which the subject rated the usefulness of each design feature. The survey used a five-point Likert scale, a multiple-choice rating scale commonly used in questionnaires. The whole experimental procedure generally lasted 40 to 50 minutes.

3.3 Data Collection

During each session, qualitative interview video, written observations, and quantitative survey entries were collected. Participants’ verbalizations, their respective videotapes, and CCR prioritization decisions were reviewed and summarized within a few days after each experiment. All notes were deidentified; no facility- or individually identifiable information was retained. Each provider’s interview video was password-encrypted, stored on a DVD, and secured in a locked drawer in a protected data room at the VA facility. The prioritization reasoning heuristics provided by the participants were interpreted with a coding system that assigned similar keywords and phrases to word groups.

4. Results

This study investigated how physicians prioritized CCRs under time pressure given both the original and new designs. The prioritization data of the first participant were discarded because the simulated patient’s EHR was modified after the first experiment: the patient scenario was changed from a patient with renal failure to a typical patient with several active problems, to better represent the VA patient population. The rating of system utility from participant 1 was not affected by the change and was therefore retained in the analysis in section 4.4.

4.1 CCR Priority

The order in which the subjects prioritized the five reminders was coded into numerical values of 1, 2, 3, 4, and 5. ►Table 1 provides a within-subject view of the priority shifts. With the original design, participants on average prioritized the reminders in the following order: hypertension (mean = 1.69, s.d. = 0.85), hemoglobin A1c (mean = 1.93, s.d. = 1.10), lipid profile (mean = 2.67, s.d. = 1.35), diabetic foot exam (mean = 3.47, s.d. = 1.46), and colorectal cancer screening (mean = 3.53, s.d. = 1.51). Twelve (12) of the 15 subjects modified their prioritization decisions after they were introduced to the new CCR design. With the new design, the subjects prioritized the CCRs in a different order: colorectal cancer screening (mean = 1.87, s.d. = 1.51), hemoglobin A1c (mean = 2.53, s.d. = 1.19), hypertension (mean = 2.62, s.d. = 0.96), diabetic foot exam (mean = 2.73, s.d. = 1.53), and lipid profile (mean = 3.27, s.d. = 1.33).

Table 1.

The priority order elicited from each participant with original design (on the left) and new design (on the right). *note: CRC: colorectal cancer screening, D. Foot: diabetic foot exam, HTN: hypertension screening, HgbA1c: hemoglobin A1c test, LIPID: lipid profile. **note: the CCRs that could be resolved within 5-minute time span were underlined.

Subject ID With Original Design With New Design
CRC D. Foot HTN HgbA1c LIPID CRC D. Foot HTN HgbA1c LIPID
1 4 4 2 1 3 2 4 3 1 5
2 4 5 1 2 2 4 5 1 2 2
3 3 3 3 1 1 1 1 3 3 3
4 5 3 1 4 2 1 2 3 4 4
5 5 4 1 2 2 1 5 1 3 3
6 5 1 2 3 4 2 1 3 4 4
7 4 5 1 1 1 1 1 3 4 4
8 5 4 1 2 3 5 4 1 2 3
9 1 4 3 2 5 1 1 3 3 5
10 3 5 1 1 4 1 3 3 1 5
11 4 3 -- 1 1 1 4 -- 2 2
12 3 4 -- 1 1 1 4 -- 1 1
13 5 1 1 1 4 1 2 3 3 3
14 1 5 2 3 3 5 3 4 1 1
15 1 1 3 4 4 1 1 3 4 4
Mean 3.53 3.47 1.69 1.93 2.67 1.87 2.73 2.62 2.53 3.27
St. Dev. 1.51 1.46 0.85 1.10 1.35 1.51 1.53 0.96 1.19 1.33

In the experiment, each subject provided an estimated resolution time for each reminder while walking through the prioritization procedure for both designs. With this estimated resolution time information, the reminders that could not be addressed within the 5-minute constraint were identified; these reminders are highlighted in ►Table 1. During the experimental procedure, the subjects were aware that not all reminders could be resolved within the five-minute time frame; the reminders that were not addressed in time would be deferred. These “resolve/defer” decisions for each clinical reminder were tabulated in ►Table 2.
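One plausible way to derive such resolve/defer labels is sketched below, under the assumption that reminders are attempted in the participant’s priority order until the five-minute budget is exhausted; the field names are illustrative.

```javascript
// Sketch: walk the reminders in priority order and mark each one "resolve"
// while its estimated resolution time still fits in the remaining budget,
// deferring everything that no longer fits.
function resolveDefer(prioritizedCcrs, budgetMinutes) {
  let remaining = budgetMinutes;
  return prioritizedCcrs.map((c) => {
    if (c.estMinutes <= remaining) {
      remaining -= c.estMinutes;
      return { name: c.name, decision: 'resolve' };
    }
    return { name: c.name, decision: 'defer' };
  });
}
```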

Table 2.

Resolve/defer decision for colorectal cancer screening, diabetic foot, hypertension, hemoglobin A1c, LIPID profile, and overall, respectively.

Colorectal Cancer Screening Resolve Defer
With original design Resolve 2 1
Defer 10 2
Hypertension Resolve Defer
With original design Resolve 4 6
Defer 0 5
LIPID Profile Resolve Defer
With original design Resolve 4 5
Defer 1 5
Hemoglobin A1c Resolve Defer
With original design Resolve 6 5
Defer 1 3
Overall Resolve Defer
With original design Resolve 19 17
Defer 16 23

In ►Table 2, the upper-left-to-lower-right diagonal cells represent the number of unchanged decisions, and the off-diagonal cells the number of changed decisions. From ►Table 2, one can conclude that the new design changed physicians’ prioritization decisions substantially. This impact was especially evident for colorectal cancer screening: ten (67%) of the fifteen participants changed their decision from defer to resolve, and one changed from resolve to defer, using the new CCR design. Overall, 17 (47%) of 36 originally resolved decisions were switched to defer, and 16 (41%) of 39 previously deferred CCR decisions were resolved with the new design. The new design thus impacted 12 (80%) of 15 subjects and 33 (44%) of 75 overall prioritization decisions.

4.2 CCR Resolution Time versus Priority

As stated in prior sections, our earlier study found a strong negative linear correlation between CCR resolution time and adherence rate [21]; that is, a CCR perceived to take longer to resolve is less likely to be completed by providers. Another aim of this study was therefore to provide more insight into this relationship and to investigate how physicians performed with the new design.

►Figure 6 demonstrates the relationship between the resolution time as estimated by domain experts and participants’ CCR prioritization for the original and the new design, respectively (Note: a higher numerical value on the Y axis stands for lower priority). A positive linear correlation (R² = 0.72) was found between the expert-estimated resolution time and CCR priority in the original design, even though this resolution time information was not displayed to the subjects (►Figure 6a). This implies that the subjects considered the time needed to perform clinical tasks regardless of the presence of time information. This result is consistent with our prior study, which suggested that a CCR perceived as easier to resolve would be more likely to be resolved.

Fig. 6. a) and b) Average CCR priority vs. estimated resolution time in the original design (left) and the new design (right) (Note: a greater numerical value on the Y axis stands for later/lower priority)

Intriguingly, no correlation between the system-provided resolution time and CCR priority was found (R² = 0.31) when the subjects used the new CCR design, in which the estimated resolution time for each CCR was displayed (►Figure 6b). This comparison showed that the new design gave the subjects a different perspective on their prioritization decisions, such that resolution time was no longer a dominant decision criterion.
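The reported R² values can be reproduced from (resolution time, mean priority) pairs with an ordinary least-squares fit. The following generic sketch computes the coefficient of determination of a simple linear fit; it is a standard formula, not the authors’ analysis code.

```javascript
// Coefficient of determination (R^2) for a simple linear regression of ys on
// xs: R^2 = Sxy^2 / (Sxx * Syy), where Sxx, Syy, Sxy are centered sums.
function rSquared(xs, ys) {
  const mean = (v) => v.reduce((s, x) => s + x, 0) / v.length;
  const mx = mean(xs), my = mean(ys);
  let sxx = 0, syy = 0, sxy = 0;
  for (let i = 0; i < xs.length; i++) {
    sxx += (xs[i] - mx) ** 2;
    syy += (ys[i] - my) ** 2;
    sxy += (xs[i] - mx) * (ys[i] - my);
  }
  return (sxy * sxy) / (sxx * syy);
}
```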

4.3 Prioritization Heuristics

As described in section 3.3, the audio and video recordings of the interviews were reviewed and summarized by the observer within days after each interview, and the prioritization reasoning heuristics provided by the participants were coded by grouping similar keywords and phrases. The similarities and differences between related word groups were then examined. This qualitative analysis revealed certain decision heuristics frequently used by the participating physicians to prioritize the clinical reminders, as summarized in the following sections.

4.3.1 Relevance

During the interview, all subjects commented that they prioritized a CCR because of its relevance to the patient’s medical history. This was especially true for the hypertension and hemoglobin A1c reminders in the original CCR design. With the original design, nine physicians gave either the hypertension or the hemoglobin A1c reminder a higher priority because of the patient’s prior history of hypertension and diabetes. These two reminders were perceived to be more relevant than the others in the original CCR design, even though all physicians had browsed through the patient’s health summary pages, which noted that the patient’s blood pressure and glucose had normalized with exercise, diet, or medication.

After the physicians were introduced to the new design, the colorectal cancer screening and diabetic foot exam reminders were perceived to be more relevant through the display of the risk factor and due date features. Eight physicians prioritized these two reminders in the new CCR design for this reason. The risk factor repository was also perceived to be helpful in determining the relevance of the CCRs.

4.3.2 Resolution Time

While relevance was the major reason why many physicians prioritized hemoglobin A1c over other CCRs in the original design, resolution time also played a role in their decisions. Four physicians commented in the interview that the hemoglobin A1c and lipid profile reminders were easy to resolve and thus were more likely to be resolved. Resolving this type of reminder generally involved no more than a few mouse clicks and relatively little patient consultation, making them the easiest to resolve. This was the major reason why the lipid profile was addressed most frequently in the original design.

The risk factor repository in the new design provided physicians with a different perspective on resolution time. Two physicians perceived the colorectal cancer screening to be less time-consuming after finding, through the patient’s risk factor repository, that the patient had a record of colonoscopy. This indicated greatly reduced patient consultation time, a decreased likelihood of patient resistance, and thus a higher chance of successful medical intervention. As the two physicians noted, this provided a strong incentive to prioritize colorectal cancer screening in the new CCR design. In essence, the risk factor repository not only enabled physicians to differentiate the relevance of CCRs, but also helped them adjust their perception of resolution time in certain cases, ultimately influencing the prioritization decision.

4.3.3 Integration with Work Flow

The comments also revealed that “not integrated with work flow” and “double documentation” were often-cited reasons why certain reminders were left unresolved. The hypertension, hemoglobin A1c, and lipid profile reminders were in this category. Six physicians mentioned that the hypertension reminder was, in their opinion, merely a documentation tool that provided no extra value in decision making. Two physicians commented that they would not resolve the hypertension reminder in their daily practice regardless of their time availability. This comment is reflected in ►Table 1, where the hypertension reminder was left unprioritized in both CCR designs by subjects 11 and 12. For these two physicians, the medical issue of hypertension was usually addressed with the patient without documentation on the CCR interface, leaving the reminder status “unresolved”.

Similarly, the hemoglobin A1c and lipid profile reminders were mentioned by one physician as being documentation issues. These two clinical reminders were integrated with the VA’s lab system and thus could trigger a lab order electronically if a prescription was recommended. However, when no intervention was recommended, as was the case for some physicians in the study, resolving these two reminders became a documentation chore that was often left unprioritized in physicians’ time-pressed work environment.

4.4 CCR Usefulness Ratings

The usefulness of each new design feature as rated by the subjects is summarized in ►Table 3. Overall, 12 of 16 (75%) subjects agreed the prioritization mechanism was useful (mean = 4.19, s.d. = 0.95). Specifically, due date and risk score were found to be the most useful of the four prioritizing features: 93% of the participants regarded the due date as useful (mean = 4.33, s.d. = 0.62), and 75% perceived the risk score as useful for decision making (mean = 4.06, s.d. = 1.34).

Table 3.

Perceived usefulness for each design feature in the new design (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree).

Perceived Usefulness Percent (N) Mean Std Dev.
1 2 3 4 5
Prioritization Overall .00(0) .06(1) .19(3) .25(4) .50(8) 4.19 0.95
Name .13(2) .13(2) .25(4) .31(5) .19(3) 3.31 1.26
Due Date .00(0) .00(0) .07(1) .53(8) .40(6) 4.33 0.62
Time .06(1) .19(3) .31(5) .13(2) .31(5) 3.44 1.27
Risk Score .13(2) .00(0) .13(2) .19(3) .56(9) 4.06 1.34
Role-based Filter .00(0) .00(0) .25(4) .44(7) .31(5) 4.06 0.75
Clinical Risk Repository .00(0) .00(0) .12(2) .25(4) .63(10) 4.50 0.71

In contrast, prioritization of CCRs by name, the default setting of the reminders in the current VA CPRS, was found useful by only 50% of the participants, with an average usefulness score of 3.31 and a standard deviation of 1.26. This implies great potential for improvement in the current CPRS system. The feature of providing estimated resolution time at the point of care was also not found to be particularly useful (mean = 3.44, s.d. = 1.27). This result supports the finding from ►Figure 6b that resolution time was not a major consideration in the subjects’ decision-making processes when they were provided with the new CCR design. During the retrospective semi-structured interview, nine subjects acknowledged having been aware of the time required to resolve each CCR when making clinical judgments. In contrast, two physicians reported resolution time as a new perspective and useful information in their decision-making processes.

In addition to the positive perception of the prioritization feature, the participants also agreed that the role-based filter and the clinical risk factor repository were useful. The role-based filter was rated useful by 75% of physicians (mean = 4.06, s.d. = 0.75). The proposed clinical risk factor repository was rated useful by 88% of the participants (mean = 4.50, s.d. = 0.71), making it the most favored of all the features tested.

5. Discussion

This study developed an experimental protocol to simulate a patient encounter in a controlled computer-lab setting. The main disadvantage of a simulation methodology is that it does not capture the full complexity of a system and the surrounding contextual factors as they exist in real life; however, a simulation experiment in a controlled setting allows the factors of interest to be manipulated and tested without the contamination of extraneous variability. A high degree of simulation fidelity for both the prototype and the environment is also essential. Because achieving high fidelity is resource-intensive, an effective experiment can focus high simulation fidelity only on the design aspects on which evaluators want to gather feedback [31]. A simulated patient is another important element of this method: while our study used an interactive computer-based package to simulate the procedure of a typical patient encounter in an exam room, using an actual patient or a patient actor could add a further layer of realism to a simulation study.

This study showed that several design features proposed to improve the VA’s CCR system were perceived as useful by clinicians, and that the new design substantially changed the way they prioritized CCRs. The results were consistent with our prior work [21], which showed that clinicians consider CCR resolution time, consciously or not, and factor it into their prioritization decisions. By displaying the same information more deliberately, the proposed CCR redesign has great potential to affect clinical practice: it changed the way providers incorporated CCR information into their clinical decisions. This reaffirms the need for a CCR system capable of incorporating patient-specific risk information into decision support.
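The prioritization mechanism can be thought of as a simple ordering over due date and patient-specific risk. The sketch below is illustrative only; the `Reminder` fields and the tie-breaking rule are our own simplification, not the actual CPRS schema or logic:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Reminder:
    name: str
    due_date: date     # when the reminder becomes overdue
    risk_score: float  # patient-specific risk estimate; higher = more urgent

def prioritize(reminders):
    """Order CCRs by earliest due date, breaking ties by descending risk score."""
    return sorted(reminders, key=lambda r: (r.due_date, -r.risk_score))

ccrs = [
    Reminder("influenza vaccine", date(2010, 2, 1), 0.2),
    Reminder("colorectal cancer screening", date(2010, 1, 1), 0.9),
    Reminder("lipid panel", date(2010, 1, 1), 0.4),
]
print([r.name for r in prioritize(ccrs)])
# ['colorectal cancer screening', 'lipid panel', 'influenza vaccine']
```

Any such ordering rule could be substituted in the sort key; the point is that the display order carries the patient-specific information to the provider.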

5.1 Practical Issues for Risk Factor Repository

The proposed risk factor repository provides a higher level of decision support in that it extracts patient-specific information from existing electronic medical records according to a pre-defined knowledge base. Information scattered across the original CCR system provided little value to providers until the risk factor repository mapped it to assist decision making. However, the subjects expressed a few practical concerns that must be addressed for the repository to take effect to the fullest extent.
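Conceptually, the repository is a mapping from each reminder to the patient-record fields that count as risk factors for it. The knowledge base and field names below are purely illustrative assumptions, not the VA’s actual data model:

```python
# Hypothetical knowledge base: reminder -> patient-record fields that
# count as risk factors for it (illustrative field names only).
KNOWLEDGE_BASE = {
    "colorectal cancer screening": [
        "family_history_crc", "prior_polyps", "age_over_50",
    ],
}

def risk_factors(reminder, patient_record):
    """Return the patient's active risk factors for a given reminder."""
    return [f for f in KNOWLEDGE_BASE.get(reminder, []) if patient_record.get(f)]

patient = {"family_history_crc": True, "prior_polyps": False, "age_over_50": True}
print(risk_factors("colorectal cancer screening", patient))
# ['family_history_crc', 'age_over_50']
```

The extracted factors are what the new design surfaces next to each CCR to explain why it fired.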

5.1.1 Increased Responsibility for Data Management

First, five physicians expressed concerns about their responsibility for the manual data maintenance needed to support risk factor assessment. Two physicians identified a work culture that did not support risk factor assessment, and another was concerned about the potential for increased liability for recommended clinical reminders that were not originally planned to be addressed. To resolve these issues, a standard process should be developed and implemented to delegate appropriate personnel and procedures for keeping patient risk factor information current. Alternatively, this could be accomplished through heuristic programming and text mining.

5.1.2 Consensus on Risk Factors Definition

The definitions in the risk factor repository demand constant updating to remain compatible with the best current evidence. For example, in November 2008 the USPSTF modified the recommended discontinuation age for colorectal cancer screening in people at average risk from 80 to 75 years. Two physicians questioned whether the CCR system could actively capture such definition changes for individual reminders. With new evidence emerging every few months, strategies and procedures should be developed to keep CCR risk factors up to date with the most current findings. Notably, this is a challenge for all decision support based on evidence-based guidelines.

5.1.3 Distrust of Information Accuracy

Two subjects rated the risk score feature 1 out of 5 because they distrusted the accuracy of the patient history in the medical records from which the risk factors would be generated. These subjects complained about inapplicable clinical reminders triggered by erroneous data entered into the patient database. When asked to assume the data were accurate, both subjects raised their scores to the 4-5 range. This distrust highlights the fact that improved decision support alone does not make a system: improving and maintaining information accuracy is the cornerstone of a reliable and efficient decision support system.

5.2 Study Limitation

This study hypothesized that under time constraints the number of clinical reminders that can be resolved is limited, such that physicians must make choices among the available CCRs. Fourteen of sixteen physicians commented that it is common for them not to have sufficient time to resolve all recommended clinical reminders. Some physicians had developed strategies to resolve as many reminders as possible, including having the next patient wait or finishing unresolved reminders later in the day or the next day. These practices raised issues including prolonged patient waiting times, clinician work overload, and uninformed exams prescribed after the patient encounter. Such practice patterns potentially contradict the assumption that clinicians cannot resolve all reminders when pressed for time.

The study enrolled approximately half of the staff physicians, as well as two resident physicians, at the participating VAMC facility as a convenience sample. Although the participants spanned a wide range of ages, from the late 20s to the late 50s, their average age (38.5 years) may still be younger than the national average for VA physicians. Notably, age was not correlated with the subjects’ ratings of perceived system utility in our study: a simple regression model showed that age had no effect on physicians’ ratings of the prioritization mechanism (F(1,14) = 0.20, p = 0.65), the role-based filter (F(1,14) = 2.23, p = 0.15), or the clinical risk repository (F(1,14) = 0.18, p = 0.67). Therefore, the participants’ younger average age is unlikely to have biased the results.
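For a single predictor, the F statistic reported above follows directly from the Pearson correlation, since F(1, n-2) = t² = r²(n-2)/(1-r²). A minimal sketch, using illustrative ages and ratings rather than the study data:

```python
def regression_F(x, y):
    """F statistic, df = (1, n-2), for testing slope = 0 in simple
    linear regression; equal to t^2 for the Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r2 = sxy * sxy / (sxx * syy)
    return r2 * (n - 2) / (1 - r2)

# Hypothetical ages and 1-5 usefulness ratings for 8 physicians
age = [29, 33, 36, 38, 41, 45, 52, 58]
rating = [4, 5, 3, 4, 5, 4, 3, 5]
print(round(regression_F(age, rating), 2))
```

A small F (relative to the F(1, n-2) critical value) corresponds to the large p-values reported, i.e., no detectable effect of age.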

5.3 Conclusions

This study demonstrated that a CCR redesign providing information explaining why the reminders occurred significantly affected the way physicians prioritized clinical reminders. Eighty percent (80%) of physicians changed their prioritization decisions after using the redesign, and 44% of prioritization decisions were changed from “resolved” to “deferred” or vice versa. Physicians’ prioritization decisions were no longer correlated with CCR resolution time under the new design. The proposed design features, including the risk factor repository, the role-based filter, and prioritization by due date and risk factors, were also found useful by the physicians. These features can benefit audiences in the planning and design phase of implementing clinical information systems, as well as those adding heuristic modules to a preexisting CCR system. We believe these findings demonstrate great potential for improving decision quality and, ultimately, patient safety.

Clinical Relevance Statement

The redesign of a CCR system using a knowledge-based risk factor repository, a prioritization mechanism, and a role-based filter can impact clinicians’ prioritization decisions. These features were found to be useful and have great potential to ultimately improve the quality of care and patient safety.

Conflict of interest Statement

The authors declare no conflict of interest in the study.

Human Subject Research

The study was reviewed and approved by the Indianapolis VA Institutional Review Board. The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects.

Acknowledgement

This research was supported in part by the VA HSR&D Center of Excellence on Implementing Evidence-Based Practice (CIEBP), US Department of Veterans Affairs, HSR&D Center grant #HFP 04-148. Dr. Saleem is supported by a VA HSR&D Career Development Award (CDA 09-024-1). The Department of Veterans Affairs had no involvement in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the paper for publication. The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs. The authors thank all the physicians who allowed us to observe or interview them. Special thanks go to Chris Suelzer, MD, Maddamsetti Rao, MD, and Noelle Sinex, MD, for facilitating the recruitment in this study. We also thank the reviewers and the IJMI Editorial Board for assistance with this manuscript.

References

  • 1.Doebbeling BN, Vaughn TE, McCoy KD, Glassman P. Informatics implementation in the Veterans Health Administration (VHA) healthcare system to improve quality of care. AMIA Annu Symp Proc 2006: 204-208 [PMC free article] [PubMed] [Google Scholar]
  • 2.Balas EA, Li ZR, Spencer DC, Jaffrey F, Brent E, Mitchell JA. An expert system for performance-based direct delivery of published clinical evidence. Journal of the American Medical Informatics Association 1996; 3(1): 56-65 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Cannon DS, Allen SN. A comparison of the effects of computer and manual reminders on compliance with a mental health clinical practice guideline. Journal of the American Medical Informatics Association 2000; 7(2): 196-203 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Hasman A, Safran C, Takeda H. Quality of health care: informatics foundations. Methods Inf Med 2003; 42(5): 509-518 [PubMed] [Google Scholar]
  • 5.Eslami S, Abu-Hanna A, de Keizer NF. Evaluation of outpatient computerized physician medication order entry systems: a systematic review. J Am Med Inform Assoc 2007; 14(4): 400-406 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Vashitz G, Meyer J, Parmet Y, Peleg R, Goldfarb D, Porath A, et al. Defining and measuring physicians’ responses to clinical reminders. J Biomed Inform 2009; 42(2): 317-326 [DOI] [PubMed] [Google Scholar]
  • 7.Dexter PR, Perkins S, Overhage JM, Maharry K, Kohler RB, McDonald CJ. A computerized reminder system to increase the use of preventive care for hospitalized patients. New England Journal of Medicine 2001; 345(13): 965-970 [DOI] [PubMed] [Google Scholar]
  • 8.Ornstein SM, Garr DR, Jenkins RG, Rust PF, Arnon A. Computer-generated physician and patient reminders - tools to improve population adherence to selected preventive services. Journal of Family Practice 1991; 32(1): 82-90 [PubMed] [Google Scholar]
  • 9.RAND Corporation A systematic review of the literature on interventions to increase the use of clinical preventive services under Medicare. 1999 [Google Scholar]
  • 10.Sintchenko V, Coiera E, Iredell JR, Gilbert GL. Comparative impact of guidelines, clinical data, and decision support on prescribing decisions: An interactive web experiment with simulated cases. Journal of the American Medical Informatics Association 2004; 11(1): 71-77 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Gorton TA, Cranford CO, Golden WE, Walls RC, Pawelak JE. Primary care physicians’ response to dissemination of practice guidelines. Arch Fam Med 1995; 4(2): 135-142 [DOI] [PubMed] [Google Scholar]
  • 12.Shea S, DuMouchel W, Bahamonde L. A meta-analysis of 16 randomized controlled trials to evaluate computer-based clinical reminder systems for preventive care in the ambulatory setting. Journal of the American Medical Informatics Association 1996; 3(6): 399-409 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Kralj B, Iverson D, Hotz K, Ashbury FD. The impact of computerized clinical reminders on physician prescribing behavior: Evidence from community oncology practice. American Journal of Medical Quality 2003; 18(5): 197-203 [DOI] [PubMed] [Google Scholar]
  • 14.Sequist TD, Gandhi TK, Karson AS, Fiskio JM, Bugbee D, Sperling M, et al. A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. 27th Annual Meeting of the Society of General Internal Medicine; 2004 May 12-15; Chicago, IL [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Gandhi TK, Sequist TD, Poon EG, Karson AS, Murff H, Fairchild DG, et al. Primary care clinician attitudes towards electronic clinical reminders and clinical practice guidelines. AMIA Annu Symp Proc 2003: 848. [PMC free article] [PubMed] [Google Scholar]
  • 16.Calabrisi RR, Czarnecki T, Blank C. The impact of clinical reminders and alerts on health screenings. The VA Pittsburgh Healthcare System achieves notable results by enhancing an automated clinical reminder system within its CPR – and has the data to prove it. Health Manag Technol 2002; 23(12): 32-34 [PubMed] [Google Scholar]
  • 17.Schellhase KG, Koepsell TD, Norris TE. Providers’ reactions to an automated health maintenance reminder system incorporated into the patient’s electronic medical record. American Board of Family Practice 2003; 16: 312-317 [DOI] [PubMed] [Google Scholar]
  • 18.Agrawal A, Mayo-Smith MF. Adherence to computerized clinical reminders in a large healthcare delivery network. Medinfo 2004; 11: 111-114 [PubMed] [Google Scholar]
  • 19.Goldberg HI, Wagner EH, Fihn SD, Martin DP, Horowitz CR, Christensen DB, et al. A randomized controlled trial of CQI teams and academic detailing: can they alter compliance with guidelines? Jt Comm J Qual Improv 1998; 24(3): 130-142 [DOI] [PubMed] [Google Scholar]
  • 20.Tierney WM, Overhage JM, Murray MD, Harris LE, Zhou XH, Eckert GJ, et al. Effects of computerized guidelines for managing heart disease in primary care - A randomized, controlled trial. Journal of General Internal Medicine 2003; 18(12): 967-976 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Wu SJ, Lehto M, Yih Y, Saleem JJ, Doebbeling BN. Relationship of estimated resolution time and computerized clinical reminder adherence. AMIA Annu Symp Proc 2007: 334-338 [PMC free article] [PubMed] [Google Scholar]
  • 22.Weir CR, Nebeker JJR, Hicken BL, Campo R, Drews F, LeBar B. A cognitive task analysis of information management strategies in a computerized provider order entry environment. Journal of the American Medical Informatics Association 2007; 14(1): 65-75 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.U.S. Preventive Services Task Force (USPSTF) website: http://www.ahrq.gov/clinic/uspstfix.htm
  • 24.National Guideline Clearinghouse website: http://www.guideline.gov/
  • 25.Sheldon R, O’Brien BJ, Blackhouse G, Goeree R, Mitchell B, Klein G, et al. Effect of clinical risk stratification on cost-effectiveness of the implantable cardioverter defibrillator: the Canadian implantable defibrillator study. Circulation 2001; 104(14): 1622-1626 [DOI] [PubMed] [Google Scholar]
  • 26.Read TE, Kodner IJ. Colorectal cancer: Risk factors and recommendations for early detection. American Family Physician 1999; 59(11): 3083-3092 [PubMed] [Google Scholar]
  • 27.Dalal M, Bradley E, Braithwaite RS. Prioritizing clinical practice guidelines in the primary care setting. 29th Annual Meeting of the Society for Medical Decision Making; 2007 October; Pittsburgh, PA [Google Scholar]
  • 28.Saleem JJ, Patterson ES, Militello L, Render ML, Orshansky G, Asch SM. Exploring barriers and facilitators to the use of computerized clinical reminders. Journal of the American Medical Informatics Association 2005; 12(4): 438-447 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Linder JA, Rose AF, Palchuk MB, Chang F, Schnipper JL, Chan JC, Middleton B. Decision support for acute problems: the role of the standardized patient in usability testing. J Biomed Inform 2006; 39(6): 648-655 [DOI] [PubMed] [Google Scholar]
  • 30.Saleem JJ, Patterson ES, Militello L, Anders S, Falciglia M, Wissman JA, Roth EM, Asch SM. Impact of clinical reminder redesign on learnability, efficiency, usability, and workload for ambulatory clinic nurses. J Am Med Inform Assoc 2007; 14(5): 632-640 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Yngve Dahl, Ole Andreas Alsos, Dag Svanæs. Evaluating mobile usability: The role of fidelity in full-scale laboratory simulations with mobile ICT for hospitals. Lecture Notes in Computer Science 2009; 5610: 232-241 [Google Scholar]
