Digital Health. 2022 Aug 7; 8: 20552076221113696. doi: 10.1177/20552076221113696

Clinical decision support for intervention reduction in neonatal patients: A usability assessment

Patrice D Tremoulet
PMCID: PMC9364207  PMID: 35968029

Abstract

Objective

This study investigated how effectively simplified cognitive walkthroughs, performed independently by four nonclinical researchers, can be used to assess the usability of clinical decision support software. It also helped illuminate the types of usability issues in clinical decision support software tools that cognitive walkthroughs can identify.

Method

A human factors professor and three research assistants each conducted an independent cognitive walkthrough of a web-based demonstration version of T3, a physiologic monitoring system featuring a new clinical decision support software tool called MAnagement Application (MAP). They accessed the demo on personal computers in their homes and used it to walk through several pre-specified tasks, answering three standard questions at each step. Then they met to review and prioritize the findings.

Results

Evaluators acknowledged several positive features including concise, helpful tooltips and an informative column in the patient overview which allows users direct (one-click) access to protocol eligibility and compliance criteria. Recommendations to improve usability include: modify the language to clarify what user actions are possible; visually indicate when eligibility flags are snoozed; and specify which protocol's data is currently being shown.

Conclusion

Independent, simplified cognitive walkthroughs can help ensure that clinical decision support software tools will appropriately support clinicians. Four researchers used this technique to quickly, inexpensively, and effectively assess T3's new MAP tool, which suggests positive actions, such as removing a patient from a ventilator. Results indicate that, while there is room for usability improvements, the MAP tool may help reduce clinicians' cognitive load, facilitating improved care. The study also confirmed that cognitive walkthroughs identify issues that make clinical decision support software hard to learn or remember to use.

Keywords: Clinical decision support, protocol management, alert fatigue, cognitive workload, patient monitoring, clinical decision making, physiologic monitoring

Introduction

The physiologic monitoring systems used in intensive care units aggregate real-time data from a variety of different sources, including pulse oximeters, electrocardiography devices, infusion pumps, and ventilators. 1 Initially, these systems displayed monitored parameters in real time and issued alerts whenever data values were outside of preset thresholds. However, modern systems log patient data, enabling them to offer clinical decision support capabilities that leverage recent advances in data science, predictive analytics, and clinical informatics. 2 For example, several patient monitoring systems compute early warning scores, and notify clinicians when scores suggest that a patient's condition is worsening.3–5
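
To make the early-warning-score idea concrete, the sketch below shows how a monitoring system might band vital signs into scores and raise a notification past a threshold. It is a minimal illustration only: the bands, parameters, and threshold are invented placeholders, not T3's analytics or any validated scoring instrument.

```python
# Minimal, hypothetical early-warning-score sketch. The scoring bands and
# alert threshold are invented placeholders, not a validated instrument.

def band_score(value, bands):
    """Return the score of the first (low, high, score) band containing value."""
    for low, high, score in bands:
        if low <= value <= high:
            return score
    return 3  # values outside all listed bands get the maximum score

# (low, high, score) bands per vital sign -- simplified placeholder values.
HEART_RATE_BANDS = [(51, 100, 0), (41, 50, 1), (101, 110, 1), (111, 130, 2)]
RESP_RATE_BANDS = [(9, 14, 0), (15, 20, 1), (21, 29, 2)]

def early_warning_score(heart_rate, resp_rate):
    """Sum per-parameter band scores into a single warning score."""
    return band_score(heart_rate, HEART_RATE_BANDS) + band_score(resp_rate, RESP_RATE_BANDS)

ALERT_THRESHOLD = 4  # hypothetical cut-off for notifying clinicians

if early_warning_score(heart_rate=118, resp_rate=24) >= ALERT_THRESHOLD:
    print("Early warning: consider assessing this patient")
```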

Although physiologic monitoring systems play an essential role in caring for critically ill patients, they contribute to alarm fatigue, that is, situations where a large number of audio signals overwhelm or desensitize users.6,7 Clinical decision support software (CDSS) that uses early warning scores or risk indexes to prompt health providers to assess patients, and potentially intervene sooner than they otherwise might, can be beneficial. However, multiple interruptions triggering rapid patient assessments disrupt workflow, setting providers up for burnout,8,9 which is associated with poor patient safety outcomes. 10 In addition, notifications based on warning scores or risk indexes can contribute to alert fatigue: situations where an excess of visual warnings, flags, or pop-up messages overload and/or desensitize health providers.11–13 Both alarm fatigue and alert fatigue negatively impact patient safety. Healthcare providers who are overwhelmed or desensitized may delay their responses to, ignore, or dismiss alarms or alerts that otherwise would have prompted rapid intervention,13–15 leaving patients vulnerable to greater deterioration or more harm than necessary.

On the other hand, the data collected by physiologic monitoring systems may also be used to detect positive trends suggesting that a patient's health is improving. This ability can be leveraged to create CDSS tools that provide users with gentle reminders to consider reducing intensive interventions, rather than obtrusive alerts. One physiologic monitoring system, called T3, recently adopted this approach. It features a new component called MAP that identifies patients who may be good candidates for clinical protocols, defined as specific codes of practice for applying medical interventions. In most acute care settings, a healthcare team works together to determine whether a patient is a good candidate for a clinical trial or to decide if and when to implement a beneficial protocol such as vasoactive weaning (VW) or extubation. This means that at least one team member must notice a patient's potential eligibility and bring it to the team's attention. CDSS that unobtrusively indicates that a patient is eligible for a protocol can remove this memory burden from clinicians, allowing them to focus on other aspects of providing care for their patients. MAP also tracks and displays compliance data, enabling clinicians to quickly and easily review physiological data relevant to evaluating the progress of patients placed on clinical protocols.
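
A minimal sketch of the eligibility-flagging idea described above follows; the criterion names, parameters, and limits are invented for illustration and do not reflect MAP's actual inclusion rules.

```python
# Hypothetical protocol-eligibility check. Criterion names and limits are
# invented for illustration; MAP's real inclusion criteria are not shown here.

from dataclasses import dataclass

@dataclass
class Criterion:
    parameter: str  # name of a monitored parameter, e.g. "fio2"
    low: float      # inclusive lower bound
    high: float     # inclusive upper bound

# Invented example loosely patterned on an extubation-readiness-style rule set.
ERT_CRITERIA = [
    Criterion("fio2", 0.21, 0.40),
    Criterion("peep", 0.0, 5.0),
    Criterion("spo2", 0.94, 1.00),
]

def eligible(latest_values: dict, criteria: list) -> bool:
    """Flag a patient only when every monitored parameter is inside its band."""
    return all(
        c.parameter in latest_values
        and c.low <= latest_values[c.parameter] <= c.high
        for c in criteria
    )

patient = {"fio2": 0.30, "peep": 5.0, "spo2": 0.97}
if eligible(patient, ERT_CRITERIA):
    print("Patient flagged as eligible for ERT screening")
```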

CDSS that automatically identifies patients who meet the criteria for enrolling in a clinical study or starting a new protocol can reduce clinician workload and memory demands, but only if clinicians are able to quickly and easily access a comprehensive, easy-to-understand summary of relevant patient data; otherwise, being presented with flags and reminders could increase their workload and/or reduce their effectiveness. Similarly, if it is difficult for T3 users to simultaneously access all the data needed to evaluate how well a patient is tolerating a protocol, clinicians will need more time than is strictly necessary to assess patients, reducing efficiency and effectiveness. In short, the usability of T3's new MAP component will play a significant role in determining whether it will facilitate or inhibit users from effectively caring for patients monitored by T3.

One relatively quick, inexpensive, and convenient method for assessing usability is the cognitive walkthrough (CW). This is an analytical technique that entails walking through the steps needed to perform each of a series of pre-identified tasks and answering a small set of questions about how easily users will be able to perform those tasks. While originally developed to assess “walk up and use” technologies,16,17 CWs are commonly used to evaluate relatively complex products,18–20 since most users prefer to interact with tools to learn how to perform a task rather than read a manual or follow directions. 18 Prior research indicates that this is true of T3 users; in a study that assessed the efficacy of training, one participant noted that “using T3 is the best way to learn to use it”. 21 CWs are particularly useful for highlighting aspects of user interfaces that are intuitive, thus easy to learn and remember, and for identifying usability issues that may make an interface hard for new or infrequent users to operate.22–24

Many variations of CWs have been used to assess the usability of different types of user interface designs.24–27 There is also wide variance in the backgrounds and number of evaluators used. It is valuable for even a single user interface developer or a team of user interface designers to conduct CWs,28–32 though some recommend assembling larger teams that also include project managers and target users.31,33–35 In fact, one of the significant benefits of CW is that it does not require evaluators to be pre-trained, nor to have the same domain expertise as an application's target users.36,37 In addition, this technique is relatively simple to perform; however, some researchers have noted that it can be difficult for evaluators to take into account the real context of use 26 and that it does not provide estimates of frequency or severity of the issues it uncovers. 20 Even with those shortcomings, CWs can be extremely helpful, by quickly and easily identifying usability issues early in development lifecycles, when it is least expensive to address them. 24 The study reported here explores the impact of having a small team of human factors researchers each independently conduct a simplified CW using first-person questions, and then meet to review results, prioritize issues in terms of the expected impact of resolving them, and generate recommendations. This work addresses several research questions:

  • Can CWs be performed by four human factors researchers to determine how easily clinicians will be able to use T3's new MAP capabilities?

  • How effective is it to have evaluators conduct independent CWs rather than a single team-based walkthrough?

  • Is it helpful to employ three straightforward questions, phrased in the first person, rather than the four third-person questions that are typically used, or the two third-person questions developed for streamlined CWs? 35

  • What usability issues for CDSS are CWs well designed to identify?

Methods

Rowan University's IRB determined that this study did not qualify as human subjects research and was therefore exempt from full review.

Four trained evaluators each independently performed a CW to assess usability of the Beta version of MAP (short for Management Application). The main steps for conducting a CW are as follows17,30,34,38,39:

  1. Create descriptions of the intended users (personas).

  2. Decide upon a set of tasks to use to analyze the user interface.

  3. Document the correct sequence(s) of steps needed to complete each task.

  4. Develop instructions for all participants.

  5. For each task, go step by step, asking the same pre-defined set of questions about each step.

  6. Aggregate results and develop a report to share findings; ideally, issues should be prioritized by how much resolving them would improve usability (a minimal record-keeping sketch follows this list).
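
As referenced in step 6, the following sketch shows one plausible way to record walkthrough answers for later aggregation. It is an assumption-laden illustration: the CSV layout, field names, and example task are not the instrument used in this study.

```python
# Hypothetical record-keeping scaffold for CW steps 5 and 6. The CSV layout
# and field names are illustrative assumptions, not this study's instrument.

import csv

QUESTIONS = ("Completion", "Controls", "Feedback")  # the three post-task questions (Table 1)

def record_walkthrough(evaluator, task, answers, notes, path="cw_results.csv"):
    """Append one task's answers (plus free-text notes) to a shared CSV file."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([evaluator, task] + [answers[q] for q in QUESTIONS] + [notes])

record_walkthrough(
    evaluator="evaluator_1",
    task="Snooze eligibility flag(s)",
    answers={"Completion": "yes", "Controls": "no", "Feedback": "yes"},
    notes="No tooltip on the three vertical dots; snooze control was hard to find.",
)
```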

Evaluators

Three students serving as research assistants in a Human Factors lab, and the lab director, a Psychology professor, each independently assessed T3's MAP tool. All students had prior experience conducting usability assessments and had spent at least two semesters working in the Human Factors lab. All evaluators were familiar with T3's user interface. The students had recently participated in a training activity, which entailed reviewing each screen of a demonstration version of T3 to determine if any usability best practices, called heuristics, were violated in each screen. The professor had previously conducted heuristic evaluations of two earlier versions of T3, which entailed becoming very comfortable with its user interface.

Setting

Walkthroughs were conducted using evaluators’ own devices in their homes during March and April 2021. All evaluators accessed a web-based demonstration of T3 version 3.10 with Risk Analytics version 5.3, which displayed de-identified historical data from actual patients. This demo version of T3, which also contained a Beta version of the new MAP tool, was only available to hospitals who were helping test MAP at the time of the evaluation. An earlier version of T3 was deployed 34 at other hospitals.

Procedure: CW

All evaluators were instructed to adopt the perspective of a new user attempting to use the MAP tool to complete a set of tasks. They were directed to record all potential usability problems they discovered and any ideas for improvement they conceived while undertaking the tasks.

Preparation: Initially, the Psychology professor and one of the students reviewed training materials, and then attended a demonstration of the new MAP tool. Next, the professor developed a list of tasks and descriptions of the steps required to complete each task. For tasks that could be completed multiple ways, multiple step sequences were recorded. After verifying the completion sequences, the student developed instructions for the walkthroughs, which included the list of tasks and a set of three questions that all evaluators answered after attempting each task.

Analysis: The students were given copies of all the MAP training materials along with the CW instructions. Evaluation tasks are listed in the Appendix and post-task questions are listed in Table 1.

Appendix.

Tasks used for MAP cognitive walkthroughs (CWs).

  • Enroll patient in vasoactive weaning (VW): Indicate to T3 that a patient is starting the VW protocol.
  • Start screening patient for extubation readiness trial (ERT): Indicate that a patient is potentially a candidate for an ERT so T3 includes this patient in eligibility scans.
  • Snooze eligibility flag(s): Temporarily stop displaying a patient eligibility notification flag (both ERT and VW flags).
  • View/edit compliance and eligibility criteria: Confirm/adjust inclusion and compliance criteria for a single patient.
  • View enrolled patient's progress: Check how a patient is doing on an ongoing protocol (both ERT and VW; access the new MAP view).
  • Review all data from a patient who completed a protocol: Review how a patient did while on a completed protocol (both ERT and VW).
  • Check trial dates: Find start and end times for a completed patient trial.
  • View compliance data for a completed patient: Determine which compliance criteria, if any, were not fully met during a completed trial.
  • View compliance data for an enrolled patient: Determine which compliance criteria, if any, have not been fully met by a patient currently on a protocol.
  • Find eligible patients: Identify all patients currently eligible to start a protocol.

Table 1.

Questions used for cognitive walkthroughs (CWs).

  • Completion: Were you able to complete the task?
  • Controls: Were the controls clearly visible?
  • Feedback: Was there feedback to indicate that you completed (or did not complete) the task?

While attempting to work through each of these tasks, all evaluators were asked to answer the questions in Table 1 to help identify positive features and potential usability concerns.

Once all evaluators had completed their analysis, the professor aggregated the individual findings and took the first pass at grouping related feedback and similar suggestions for improvement. Then the evaluators met to review the aggregated results, to ensure that all of the feedback was accurately reflected, and to assign priorities to the problems. Priorities were based on the evaluators’ collective judgement of how much each problem impacts usability.
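
As a rough illustration of this aggregation step, the snippet below groups findings by interface region and orders each group by assigned priority. The priority scale matches the labels used in the results tables, but the grouping key and data layout are assumptions made for this sketch, not the team's exact procedure.

```python
# Illustrative aggregation of walkthrough findings, loosely mirroring the
# review meeting described above; data layout is an assumption for this sketch.

from collections import defaultdict

PRIORITY_ORDER = {"High": 0, "Medium-High": 1, "Medium": 2, "Medium-Low": 3, "Low": 4}

# (interface region, issue, team-assigned priority) -- sample entries drawn
# from the results reported below.
findings = [
    ("MAP view", "Unclear which protocol trial's data is displayed", "High"),
    ("Census screen", "'MAP activity' header uses application language", "Medium-High"),
    ("Activity summary", "No tooltip for the three vertical dots", "Medium-High"),
]

by_region = defaultdict(list)
for region, issue, priority in findings:
    by_region[region].append((issue, priority))

for region, issues in by_region.items():
    issues.sort(key=lambda item: PRIORITY_ORDER[item[1]])  # highest impact first
    print(region)
    for issue, priority in issues:
        print(f"  [{priority}] {issue}")
```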

Results

For ease of explanation, results are organized based on five regions in T3's user interface that are used to display and/or interact with the new MAP capabilities. For each region, a list of positive features is followed by a table listing usability issues. The positive features are aspects of the MAP functionality that should be retained if the recommended modifications are implemented.

Census screen's MAP activity column

Figure 1 shows a screenshot of a part of T3's census screen.

Figure 1.

Census screen's new MAP activity column, indicating which patients are currently following protocols, and which ones are eligible. It also shows the legend for symbols used in this column, which is displayed when users hover the cursor over the information icon.

Positive features:

  1. Clean, easy to read.

  2. Informative tooltips explaining the meanings of icons, and what numbers represent.

  3. Users can view patient data during current or most recently completed protocol with a single click.

  4. Users can pull up and review inclusion criteria and compliance targets for a current/recent protocol with a single click.

Problem descriptions and priorities:

  • The column header “MAP activity” seems more like application language than user language. Priority: Medium-High.
  • Not clear what will happen if users click on the leftmost icon in a MAP activity bar (no tooltip for checks and flags). Priority: Medium.

Patient view of MAP Activity summary (top center)

Figure 2(a) shows the top portion of a patient view screen, Figure 2(b) shows an enlarged view of the MAP activity summary bar on a patient view screen, and Figures 2(c), (d), and (e) show screenshots of pop-up windows that are displayed after clicking on different regions in the MAP activity summary bar.

Figure 2a.

Top portion of a patient view screen, showing the new “MAP activity summary” (same information as on the census page). The histogram shows one of T3's risk indexes, and the graph below it shows heart rate; more physiological data is shown in graphs that are not included in this screenshot.

Figure 2b.

Enlarged view of MAP activity summary.

Figure 2c.

Pop-up accessed by clicking first on MAP activity summary bar, then on the three dots that appear after clicking on it.

Figure 2d.

Extubation readiness trial (ERT) criteria pop-up accessed by clicking on the text that states “ERT” in a MAP activity summary.

Figure 2e.

Vasoactive weaning (VW) criteria pop-up, accessed by clicking on “VW” when shown on a MAP activity summary bar (it would appear where “ERT” is shown in Figure 2(b) and (c)).

Positive features:

  1. Informative, useful tooltips.

  2. Consistent with display in census view.

  3. Allows users to snooze flags or start eligible patients on protocols.

  4. Pop-ups shown when clicking on the protocol name (VW/extubation readiness trial, ERT) are clear and consistent with one another.

Problem descriptions and priorities:

  • No tooltip for the three vertical dots (which only appear after clicking on either the flag or clock/time section of the bar). Priority: Medium-High.
  • Control bar/button is missing a tooltip on the leftmost icon (flag/check). Priority: Low.
  • For patients eligible to start a protocol, four clicks are required to start the protocol or snooze the eligibility flag. Priority: Low.

MAP view (MAP tab on individual patient view screen)

Figure 3 shows a screenshot of T3 screen with the MAP tab selected.

Figure 3.

Summary of several physiological parameters during the period that patient was participating in the extubation readiness trial (ERT).

Positive features:

  1. Allows users to quickly review the data for the time the patient was on the protocol.

  2. Green horizontal lines helpfully graphically overlay the start and end of protocol trial on large graph windows.

  3. Shading clearly indicates when parameters are not in compliance with protocol targets; this is consistent with the use of shading in other T3 graphical displays.

  4. Green horizontal lines inside the time navigation slider helpfully show protocol start/end times relative to the slider.

Problem descriptions and priorities:

  • Not always obvious which protocol trial's data is being displayed; it’s possible to be viewing data from a different protocol trial than the one shown at top center. Priority: High.
  • The top center MAP activity section can show only one completed trial, but it’s possible to use the time navigation controls to show multiple trials on the display at the same time. Priority: Medium.
  • When a patient has completed multiple trials, it can be hard to distinguish between the start time for one trial and the end time for another. Priority: Low.
  • For a patient who has been eligible for a long time, it is unclear why a particular interval of time is shown in the graphs. Priority: Low.

Patient view of MAP icon (checkbox on bottom of patient view screen)

Figure 4 shows a screenshot of the middle bottom portion of a T3 patient view screen, which contains several icons, including one that can be used to bring up pop-up windows that allow users to see whether the patient is eligible for a specific protocol and, if so, to indicate to T3 that they will be starting the patient on a protocol.

Figure 4.

Icon used to access MAP functionality from individual patient view.

Positive features:

  1. Table content is clear and easy to read.

  2. Tooltip that connects this icon to the MAP tab and the MAP activity summary bar at top center is helpful.

Problem descriptions and priorities:

  • Users who want to adjust eligibility criteria for a patient must remember that the only way to do this is to first click on this icon; other actions (e.g. starting a MAP) can be performed in multiple ways. Priority: Medium.
  • The check box icon is not intuitive for something named “MAP”. Priority: Medium-Low.

MAP pop-up menus (accessed via MAP icon on individual patient view screen)

Figure 5(a) shows a screenshot of the pop-up window that shows whether or not a patient is eligible for specific clinical protocols and Figure 5(b) shows how the pop-up changes if the user clicks on a protocol for which a patient is eligible. Figure 5(c) shows the pop-up window that is displayed when a user elects to have an eligible patient start on the VW protocol. Figure 5(d) shows the pop-up that appears if a patient has completed one or more clinical protocols and the user clicks on the bar labeled “Completed MAPs.” Figure 5(e) shows the pop-up window that is displayed when a user elects to have an eligible patient start the ERT.

Figure 5a.

MAP pop-up accessed by clicking on MAP icon (checkmark) at bottom of individual patient view screen (see Figure 4). Based on data captured by the physiologic monitoring system, this patient is eligible for vasoactive weaning (VW).

Figure 5b.

Change to MAP pop-up shown if user clicks the button showing patient is eligible to start VW.

Figure 5c.

Pop-up window that appears if user clicks on the button to start VW.

Figure 5d.

Another view of the initial MAP pop-up window (to left) accessed by clicking MAP icon (checkmark) and list of completed MAPs (right) accessed by clicking the button that says “completed MAPs” on initial pop-up. This patient is eligible to start an extubation readiness trial (ERT).

Figure 5e.

Pop-up menu that appears when user indicates that patient will start ERT.

Positive features:

  1. Clicking on the protocol name (inside initial pop-up, Figure 5(a)) produces a pop-up summarizing inclusion criteria and compliance targets.

  2. Clicking “Not Eligible” in a protocol's control bar in the initial MAP pop-up takes the user directly to the MAP tab, allowing the user to review data relevant to that protocol.

  3. Update button on Start Extubation Readiness pop-up states that updates require that a patient be eligible or enrolled in a MAP.

  4. The pop-up table of completed MAPs is clear and it's easy to select a row to bring up the MAP tab display to show data collected during that completed trial.

Problem descriptions and priorities:

  • MAP pop-up: Not intuitive that users need the “click to start MAP” link to view/adjust parameter targets. Priority: High.
  • MAP pop-up: The icon of an x in a circle conveys that something is negative or not allowed, which is inconsistent with “click to start MAP”. Priority: Medium.

Recommendations

Based upon the results of the CWs, the team generated several recommendations, including the following:

  1. Ask target users to review the language used throughout MAP, to ensure users clearly understand what actions are available. Specifically, consider renaming “click to start MAP” as “click to review/adjust protocol parameters”.

  2. Allow users to easily view all relevant patient data when deciding whether or not to start a protocol or evaluating how well a patient is doing/did while on an active/completed protocol.

  3. After snoozing a patient eligibility flag, provide an indicator that the flag has been snoozed (e.g. instead of displaying “No Recent MAP activity,” show “Eligibility flag snoozed until HH:MM”); a minimal sketch of this behavior follows the list.

  4. When the MAP tab is active, have the MAP activity summary indicate which of the following is depicted in the graph: a completed protocol trial, a currently active protocol, or a recent time interval when eligibility criteria were met.

  5. Add start date/time to the MAP Activity summary bar.

  6. Allow users to snooze eligibility as an option in the MAP pop-up window (accessed from checkbox icon).

  7. Since clicking the vertical dots in the MAP Activity bar only yields two options, consider placing icons for these actions directly on the Activity bar.
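
The sketch below illustrates recommendation 3: keeping snoozed state visible instead of reverting to “No Recent MAP activity.” Class and field names, and the default snooze duration, are invented for illustration and do not describe MAP's implementation.

```python
# Hypothetical snoozed-flag indicator (recommendation 3). Names and the
# snooze duration are invented; this is not MAP's implementation.

from datetime import datetime, timedelta
from typing import Optional

class EligibilityFlag:
    def __init__(self) -> None:
        self.snoozed_until: Optional[datetime] = None

    def snooze(self, minutes: int = 60) -> None:
        """Suppress the flag for a fixed interval, remembering when it expires."""
        self.snoozed_until = datetime.now() + timedelta(minutes=minutes)

    def status_label(self, eligible: bool) -> str:
        """Label for the MAP activity column that keeps snoozed state visible."""
        if self.snoozed_until and datetime.now() < self.snoozed_until:
            return f"Eligibility flag snoozed until {self.snoozed_until:%H:%M}"
        return "Eligible for protocol" if eligible else "No recent MAP activity"

flag = EligibilityFlag()
flag.snooze(minutes=90)
print(flag.status_label(eligible=True))  # e.g. "Eligibility flag snoozed until 14:32"
```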

Discussion and conclusions

Results of independent CWs of T3's new MAP feature suggest that novices may find some of its features hard to learn. The evaluators noted that some of the language used in the MAP tool's pop-ups and controls is more application-oriented than user-oriented. They also suggested making tooltips and control labels clearer about what actions users can take when making decisions about patient eligibility. This is consistent with previously developed guidance that CDSS tools use language that is familiar to users. 40 Moreover, the lack of information in the MAP tab tooltip, together with the fact that the snooze feature is available only via the three-vertical-dot display while “start MAP” is available through multiple routes, could mean that users need to work harder, or take longer than necessary, to complete basic tasks or to understand how well patients are doing (or did) during protocol trials. These usability issues may lower overall user satisfaction with the MAP tool, even though it provides users with relevant and clinically useful information and capabilities.

On the other hand, despite room for usability improvements, the MAP tool has the potential to significantly benefit both healthcare providers and their patients. With just a few clicks, users can pull up a customized display of relevant patient data that helps clinicians quickly understand a patient's current status and recent history. In the long term, the MAP tool could contribute to increased efficiency, effectiveness, and situational awareness among clinicians, particularly if recommendations for addressing existing usability issues developed in this study are followed.

The evaluators’ results are consistent with feedback provided by clinical Beta testers. This indicates that nonclinical researchers who use first-person evaluation questions while performing a CW can identify issues that impact how easily clinicians will be able to use CDSS. It also suggests that it is effective to employ independent walkthroughs followed by a virtual meeting rather than a collaborative team-based walkthrough.

These findings are important for several reasons. First, although CW is a relatively simple technique that does not require much training, the target users of CDSS tools such as T3 (inpatient physicians and nurses) are extremely busy, which makes it challenging for them to participate in usability evaluations. 41 Thus, it is significant that undergraduates and a human factors professor could identify issues that impact CDSS usability for those clinical experts. Several other researchers have successfully had nonclinical evaluators, most often software developers or usability experts, use CWs to assess the usability of complex health technologies.42–47

Second, having evaluators independently perform CWs and then meet to review results and generate recommendations can be more efficient than having a group collaboratively perform a CW. At the time this study was performed, large face-to-face meetings were rare due to COVID-19 restrictions. Evaluators in this study were able to perform independent walkthroughs in their homes, and the review meeting was conducted via videoconferencing software. Even without social distancing restrictions, trying to schedule a time for a diverse team to meet to walk through a user interface can be challenging, so it is notable that independent walkthroughs can be productive.

Third, instructing evaluators to adopt the perspective of a new user and then directing them to answer first-person questions based on their experience, rather than answering questions about what clinical users would likely experience, helps make CWs more straightforward for novice evaluators. (One criticism of CW is that it can be difficult for participants to truly represent the perspective of target users. 26 ) This modification was especially advantageous in the context of this study where students who had not previously participated in a CW were tasked to perform independent walkthroughs.

In general, CWs help to identify features of interfaces that influence how easily users will be able to learn, and remember, how to use applications.18,24,28 Hence, this technique is particularly helpful in identifying those features in CDSS user interfaces. In addition, CWs are well suited to assess the comprehensibility and utility of contextual information that is intended to support users’ decision making. In fact, several of the results of this study, including both positive and negative findings, could be generalized into guidelines for producing useful, usable CDSS tools.

For example, the evaluators indicated that one-click access to relevant physiological data about a patient who has been identified as a candidate for a clinical protocol, and displaying eligibility criteria for a protocol via a mouseover, are both positive features of MAP. These results suggest guidance that CDSS tools enable users to quickly and easily access relevant contextual information that helps explain why a particular action is (or is not) suggested or why a particular alert/alarm has fired. This aligns with prior research suggesting that CDSS recommendations be accompanied by simple explanations of why they are recommended,48,49 and that CDSS should be a “clinical partner”. 50 In addition, the results of this study suggest that it is beneficial both for busy clinicians to be able to defer or “snooze” notifications that patient health is improving, so that intervention reductions can be considered later, and for there to be a visual indicator that a notification has been deferred. This is consistent with previously developed guidance that CDSS should fit into users’ existing workflows,48,51,52 and that it should be “a team player”. 50

Meanwhile, evaluators’ judgement that MAP requires users to perform too many clicks to indicate that a patient will be starting on a protocol can be generalized as “make it as easy as possible to implement recommendations provided by CDSS tools”, which is consistent with other researchers’ guidance to minimize numbers of clicks/screens48,53 and to make it easy to follow recommended actions.48,51,52 Other results can be generalized as “consistently allow users multiple ways to perform the same action”. This is aligned with general guidance to aim for consistency in any user interface. 53 In summary, the candidates for general guidelines for creating useful, usable CDSS tools suggested by this study complement and extend existing literature that contains guidelines for creating successful CDSS tools.

While this study effectively identified several positive features of the new MAP tool and produced recommendations for changes that could improve its usability, it has some limitations. Rather than a diverse team that includes intended users, the evaluators were all researchers affiliated with the same human factors lab, and most were undergraduate students. Having at least one clinical expert would have strengthened the study, though domain expertise is not required for CWs. 28 Moreover, the students had different educational backgrounds: one was a psychology major heading to medical school, one was an engineering major headed to a clinical psychology graduate program, and the third was a computer science major with over 2 years of experience working in healthcare as an x-ray technician. In addition, the effort was led by a human factors expert with over two decades of experience, and all students had prior experience evaluating usability, including participating in a heuristic evaluation of an earlier version of T3. 54

That said, students working in a human factors lab are adept at adopting the perspective of target users since understanding users’ needs is central to their research. As a result, these students may have an easier time putting themselves into the shoes of clinical experts for the purpose of a usability assessment than other potential CW participants. This suggests that other CW participants might still have had difficulty taking the perspective of users even when given first-person questions. Despite these limitations, the positive features, issues, and recommendations generated in this study can be applied to improve the usability of T3's MAP functionality, which in turn can result in improved care for patients in hospitals that use T3. In particular, T3's MAP tool can benefit patients whose health is improving by gently prompting clinicians to consider reducing intensive interventions and making it easy for those clinicians to access relevant data. Moreover, several of the results of this study suggest possible guidelines for developing useful, easy-to-use CDSS tools, although additional research is needed to determine how broadly applicable these potential guidelines are.

Acknowledgments

The author gratefully acknowledges Elizabeth Meyeroff, Ryan Stroyka, and Jena Mota for conducting cognitive walkthroughs, and Victor Wu and Michael McManus for providing a demonstration, training materials, and access to an online demonstration version of T3 that included the Beta version of its new MAP tool. The author would also like to acknowledge the two anonymous reviewers who provided excellent suggestions which helped to significantly improve this paper.

Footnotes

Contributorship: The author conceived the study, trained the student research assistant evaluators, supervised their participation, analyzed the results, generated the recommendations for improvement, and wrote this paper.

Declaration of conflicting interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Heart, Lung, and Blood Institute (grant number R44HL117340). The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institutes of Health.

Ethical approval: In this study, trained researchers conducted cognitive walkthroughs, a well-established analytical technique to assess the usability of technology. The application that was assessed was a demonstration version of software that contains only de-identified patient data. The study did not entail obtaining, using, studying, analyzing, or generating any identifiable private information or biospecimens. Rowan University's IRB determined that the study reported here does not qualify as human subjects research (IRB #: PRO-2021-642); therefore, it was exempt from review.

Guarantor: PT serves as a guarantor for this paper.

ORCID iD: Patrice D. Tremoulet https://orcid.org/0000-0003-0443-9806

References

  • 1. ECRI. Physiologic monitoring systems. Health Devices 2000; 29: 153–184.
  • 2. ECRI. Evaluation background: ICU physiologic monitoring systems. Health Devices 2020; 29: 153–155.
  • 3. Arney D, Zhang Y, Goldman JM, et al. Implementing real-time clinical decision support applications on OpenICE: a case study using the National Early Warning System algorithm. In: 2019 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE) 2019, pp. 35–40. IEEE.
  • 4. Weenk M, Koeneman M, van de Belt TH, et al. Wireless and continuous monitoring of vital signs in patients at the general ward. Resuscitation 2019; 136: 47–53.
  • 5. Blankush JM, Freeman R, McIlvaine J, et al. Implementation of a novel postoperative monitoring system using automated modified early warning scores (MEWS) incorporating end-tidal capnography. J Clin Monit Comput 2017; 31: 1081–1092.
  • 6. Sendelbach S, Funk M. Alarm fatigue: a patient safety concern. AACN Adv Crit Care 2013; 24: 378–386.
  • 7. Bell L. Monitor alarm fatigue. Am J Crit Care 2010; 19: 38.
  • 8. Co Z, Holmgren AJ, Classen DC, et al. The tradeoffs between safety and alert fatigue: data from a national evaluation of hospital medication-related clinical decision support. J Am Med Inform Assoc 2020; 27: 1252–1258.
  • 9. Collier R. Rethinking EHR interfaces to reduce click fatigue and physician burnout. Can Med Assoc J 2018; 190(33): E994–E995.
  • 10. Hall LH, Johnson J, Watt I, et al. Healthcare staff wellbeing, burnout, and patient safety: a systematic review. PLoS One 2016; 11: e0159015.
  • 11. Beasley BW. Commentary: how ‘alert fatigue’ truly exhausts US. Physician Leadersh J 2018; 5: 60–62.
  • 12. Cash JJ. Alert fatigue. Am J Health-Syst Pharm 2009; 66: 2098–2101.
  • 13. Ancker JS, Edwards A, Nosal S, et al. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Med Inform Decis Mak 2017; 17: 1–9.
  • 14. Baker K, Rodger J. Assessing causes of alarm fatigue in long-term acute care and its impact on identifying clinical changes in patient conditions. Informatics in Medicine Unlocked 2020; 18: 100300.
  • 15. Evans G, Steele B, Setliff E. 1371: Managing physiologic alarms to reduce alarm fatigue. Crit Care Med 2020; 48: 662.
  • 16. Lewis C, Polson PG, Wharton C, et al. Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces. Proc SIGCHI Conf Hum Fact Comp Syst 1990; 1: 235–242.
  • 17. Wharton C, Rieman J, Lewis C, et al. The cognitive walkthrough: a practitioner's guide. In: Usability inspection methods. New York: John Wiley, 1994, pp. 105–140.
  • 18. Wharton C, Bradford J, Jeffries R, et al. Applying cognitive walkthroughs to more complex user interfaces: experiences, issues, and recommendations. Proc SIGCHI Conf Hum Fact Comp Syst 1992; 1: 381–388.
  • 19. Lewis C, Wharton C. Cognitive walkthroughs. In: Helander M, Landauer TK, Prabhu P (eds) Handbook of human-computer interaction. 2nd ed. Elsevier, 1997, pp. 717–732.
  • 20. Association UEP. Cognitive walkthrough, https://www.usabilitybok.org/cognitive-walkthrough (2012).
  • 21. Tremoulet P, Clark K, McManus M, et al. IVCO2 training effectiveness study. Proc Int Symp Hum Fact Ergon Heal Car 2020; 1: 45–49.
  • 22. Polson PG, Lewis C, Rieman J, et al. Cognitive walkthroughs: a method for theory-based evaluation of user interfaces. Int J Man Mach Stud 1992; 36: 741–773.
  • 23. Khajouei R, Zahiri Esfahani M, Jahani Y. Comparison of heuristic and cognitive walkthrough usability evaluation methods for evaluating health information systems. J Am Med Inform Assoc 2017; 24: e55–e60.
  • 24. Bligård L-O, Osvalder A-L. Enhanced cognitive walkthrough: development of the cognitive walkthrough method to better predict, identify, and present usability problems. Advances in Human-Computer Interaction 2013; 1–17.
  • 25. Mahatody T, Sagar M, Kolski C. State of the art on the cognitive walkthrough method, its variants and evolutions. Intl Journal of Human-Computer Interaction 2010; 26: 741–785.
  • 26. Mahatody T, Sagar M, Kolski C. Cognitive walkthrough for HCI evaluation: basic concepts, evolutions and variants, research issues. In: Proceedings EAM 2007 European Annual Conference on Human Decision Making and Manual Control, Technical University of Denmark, Lyngby, 2007, pp. 1–12.
  • 27. Dieter M, Tkacz N. The patterning of finance/security: a designerly walkthrough of challenger banking apps. Computational Culture 2020; 7: 1–39.
  • 28. Rieman J, Franzke M, Redmiles D. Usability evaluation with the cognitive walkthrough. In: Conference companion on human factors in computing systems 1995, pp. 387–388.
  • 29. Samrgandi N. User interface design & evaluation of mobile applications. International Journal of Computer Science & Network Security 2021; 21: 55–63.
  • 30. Woodmas R. The cognitive walkthrough: a low-cost usability testing method and empathy training tool, https://xd.adobe.com/ideas/process/user-testing/cognitive-walkthrough-improve-ux/ (2020, accessed 28 May 2022).
  • 31. Lewis C, Rieman J. Task-centered user interface design: a practical introduction. 1993.
  • 32. May J, Barnard P. The case for supportive evaluation during design. Interact Comput 1995; 7: 115–143.
  • 33. Grigoreanu V, Mohanna M. Informal cognitive walkthroughs (ICW): paring down and pairing up for an agile world. Proc SIGCHI Conf Hum Fact Comp Syst 2013; 1: 3093–3096.
  • 34. Salazar K. Evaluate interface learnability with cognitive walkthroughs, https://www.nngroup.com/articles/cognitive-walkthroughs/ (2022, accessed 28 May 2022).
  • 35. Spencer R. The streamlined cognitive walkthrough method, working around social constraints encountered in a software development company. Proc SIGCHI Conf Hum Fact Comp Syst 2000; 1: 353–359.
  • 36. Jaspers MW. A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence. Int J Med Inf 2009; 78: 340–353.
  • 37. Interaction Design Foundation. How to conduct a cognitive walkthrough, https://www.interaction-design.org/literature/article/how-to-conduct-a-cognitive-walkthrough (2021).
  • 38. Nielsen J. Usability inspection methods. In: Conference companion on human factors in computing systems 1994, pp. 413–414.
  • 39. Nielsen J. Usability engineering. San Diego, CA: Elsevier, 1994.
  • 40. Miller K, Mosby D, Capan M, et al. Interface, information, interaction: a narrative review of design and functional requirements for clinical decision support. J Am Med Inform Assoc 2018; 25: 585–592.
  • 41. Tremoulet P, Krishnan R, Karavite D, et al. A heuristic evaluation to assess use of after visit summaries for supporting continuity of care. Appl Clin Inform 2018; 9: 714–724.
  • 42. Liu Y, Osvalder A-L, Dahlman S. Exploring user background settings in cognitive walkthrough evaluation of medical prototype interfaces: a case study. Int J Ind Ergon 2005; 35: 379–390.
  • 43. Farzandipour M, Nabovati E, Tadayon H, et al. Usability evaluation of a nursing information system by applying cognitive walkthrough method. Int J Med Inf 2021; 152: 104459.
  • 44. Ghalibaf AK, Jangi M, Habibi MRM, et al. Usability evaluation of obstetrics and gynecology information system using cognitive walkthrough method. Electron Physician 2018; 10: 6682.
  • 45. Peute LW, Jaspers MM. Usability evaluation of a laboratory order entry system: cognitive walkthrough and think aloud combined. Stud Health Technol Inform 2005; 116: 599–604.
  • 46. Liljegren E, Osvalder A-L. Cognitive engineering methods as usability evaluation tools for medical equipment. Int J Ind Ergon 2004; 34: 49–62.
  • 47. Arshad F, Nnamoko N, Wilson J, et al. Improving healthcare system usability without real users: a semi-parallel design approach. International Journal of Healthcare Information Systems and Informatics (IJHISI) 2015; 10: 67–81.
  • 48. Lee S. Features of computerized clinical decision support systems supportive of nursing practice: a literature review. CIN: Computers, Informatics, Nursing 2013; 31: 477–495.
  • 49. Chase JG, Andreassen S, Jensen K, et al. Impact of human factors on clinical protocol performance: a proposed assessment framework and case examples. J Diabetes Sci Technol 2008; 2(3): 409–416. doi: 10.1177/193229680800200310.
  • 50. Pelayo S, Marcilly R, Bernonville S, et al. Human factors based recommendations for the design of medication related clinical decision support systems (CDSS). Stud Health Technol Inform 2011; 169: 412–416.
  • 51. Wright M-O, Robicsek A. Clinical decision support systems and infection prevention: to know is not enough. Am J Infect Control 2015; 43: 554–558.
  • 52. Kawamoto K, Houlihan CA, Balas EA, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. Br Med J 2005; 330: 765.
  • 53. Tsopra R, Jais J-P, Venot A, et al. Comparison of two kinds of interface, based on guided navigation or usability principles, for improving the adoption of computerized decision support systems: application to the prescription of antibiotics. J Am Med Inform Assoc 2014; 21: e107–e116.
  • 54. Meyeroff E, Tremoulet P. Etiometry's T3 heuristic evaluation. Proc Int Symp Hum Fact Ergon Heal Car 2021; 1: 37–41.
