. Author manuscript; available in PMC: 2023 Jul 1.
Published in final edited form as: Contemp Clin Trials. 2022 Feb 27;118:106712. doi: 10.1016/j.cct.2022.106712

Conducting a representative national randomized control trial of tailored clinical decision support for nurses remotely: Methods and implications

Karen Dunn Lopez a,*, Yingwei Yao b, Hwayoung Cho b, Fabiana Cristina Dos Santos b, Olatunde O Madandola b, Ragnhildur I Bjarnadottir b, Tamara Goncalves Rezende Macieira b, Amanda L Garcia b, Karen JB Priola b, Jessica Wolf a, Jiang Bian c, Diana J Wilkie b, Gail M Keenan b
PMCID: PMC9851662  NIHMSID: NIHMS1835729  PMID: 35235823

Abstract

Clinical Decision Support (CDS) systems, which deliver patient-specific evidence to clinicians via the electronic health record (EHR) at the right time and in the right format, have the potential to improve patient outcomes. Unfortunately, the outcomes of CDS research are mixed. A potential cause lies in its testing. Many CDS systems are implemented in practice without sufficient testing, potentially leading to patient harm. When testing is conducted, most research has focused on “what” evidence to provide, with little attention to the impact of the CDS display format (e.g., textual, graphical) on the user.

In an adequately powered randomized control trial with 220 hospital-based registered nurses, we will compare 4 randomly assigned CDS format groups (text, text + table, text + graph, and a format tailored to the subject’s graph literacy score) for effects on decision time and simulated patient outcomes. We recruit using state-based professional registries, which allow access to participants from multiple institutions across the nation. We use online survey software (REDCap) for an efficient study workflow, including screening, informed consent documentation, and pre-experiment demographic data collection with a graph literacy questionnaire used in randomization. The CDS prototype is accessed via a web app, and the simulation-based experiment is conducted remotely at a subject’s local computer using video-conferencing software. Also included are 6 post-intervention surveys to assess cognitive workload, usability, numeracy, format preference, CDS utilization rationale, and CDS interpretation. Our methods are replicable and scalable for testing of health information technologies and have the potential to improve the safety and effectiveness of these technologies across disciplines.

Keywords: Nursing informatics, Decision support, Randomized control trial, Graph literacy, Numeracy

1. Introduction

Electronic health records (EHR) provide a powerful platform for delivering information to clinicians at the point of care that can enhance the quality, effectiveness, and timeliness of care. Clinical decision support (CDS) systems, “computer generated clinical knowledge and patient related information which is intelligently filtered and presented at appropriate times to enhance care,” [1] are innovations that leverage the EHR platform. There are many forms of CDS such as alerts, reminders, and evidence-based treatment suggestions [2]. CDS offers the promise of delivering just-in-time information in a format that can be quickly interpreted by clinicians. Despite the promise of CDS and other EHR technologies, the results of studies on these are mixed and challenges remain [2,3].

For CDS targeting nurse decision making, a systematic review of 28 CDS studies focused on hospital nurses’ decision-making found important weaknesses in this body of research [4]. Only one used a rigorous randomized controlled trial (RCT) design. Only two studies addressed the important issue of CDS format, one of which used focus group methods [5], a lower level of evidence. Since that review, a 2017 study showed that bar graphs facilitated nurses’ understanding better than 3 other display formats, but comprehension was not optimal across the four formats (41–88%) [6]. One potential reason for the lack of optimal comprehension may lie in the graph literacy of the sample. Graph literacy is one’s ability to comprehend information that is displayed graphically [7]. Graphs are often used to present trends and comparisons in medical information.

In our team’s earlier research, we found that nurses’ graph literacy scores were quite varied [8] and observed a trend of shorter care planning times when nurses with higher graph literacy were presented graphs in CDS [9]. In other health professions, graph literacy has been shown to influence comprehension of health information, improve decision-making time [10], and reduce prescribing errors [11,12]. In non-clinicians, comprehension of statistics improved when subjects with high graph literacy were shown graphs instead of numbers [13]. Though such studies are rare, they suggest the positive impact graphs and graph literacy may have on presenting CDS information in ways that reduce decision time.

Both the format and the content of the CDS are crucial and have been shown to influence clinicians’ likelihood of accessing, considering, and applying the CDS correctly [9]. Many EHR technologies, however, are deployed for use in practice without adequate testing [4]. Because CDS seeks to assist clinicians in decision-making, poorly tested CDS could compromise patient care and safety [2]. Formative iterative testing, which brings small groups of potential users together to interact with the CDS and suggest improvements to its ease of use and usefulness, is an important first step [6,14]. Once the formative testing is complete, rigorous testing is needed to establish the effectiveness of the CDS. Moving CDS testing immediately to clinical trials following formative testing, however, presents formidable challenges. Multisite testing under real-time conditions is complex and expensive and frequently involves the use of underpowered samples [15–19]. The temporally demanding nature of health care workflow and high workload also make it difficult to recruit study participants at their work setting. Further, these testing conditions not only limit generalizability, but also put patients at risk of harm when workflow has not been adequately factored into formative testing.

Our CDS research team developed an innovative combination of tools and methods that enable remote simulation-based research to overcome barriers to rigorous and generalizable CDS research. We used a nursing care planning prototype integrated into a simulated EHR to deliver the decision support. Nursing care plans are a form of nursing documentation that represent the decisions nurses make to organize and evaluate nursing care for their individual patients. These plans start on the first day of admission, are updated each shift [20], and generally focus on three areas of the nursing process: 1) nursing diagnosis (a problem that can be addressed within nursing’s scope of practice), 2) nursing outcomes (measurable goals related to a given nursing diagnosis), and 3) nursing interventions (the nurse-directed actions chosen and performed to optimize patient outcomes) [21]. For example, a patient with heart failure as a medical diagnosis may have Excess Fluid Balance as a nursing diagnosis, Fluid Overload Severity as the outcome being monitored, and Respiratory Monitoring and Fluid Monitoring as interventions [21]. Documented nursing care plans that use standardized classifications in the EHR provide a rich source of data that can be used to analyze and understand the role of nursing care, for example in pressure ulcers [22], palliative care, patient pain [23], and predicting care needs [24]. The findings of these analyses, and other forms of evidence, can be delivered to nurses in the form of patient-specific CDS as they compose and update their care plans in the EHR.

The method described below builds on our pilot study with 60 nurses, in which we examined the impact of CDS formats on care planning time and patient outcomes using simulation [9]. Our main findings were: 1) graph literacy and numeracy vary substantially among nurses [9], 2) our prototypes were acceptable, easy to use, and useful to practicing nurses [9], 3) three different formats (text, text + table, and text + graph) improved nurse decision making over no CDS [9], and 4) graph literacy predicts the decision efficiency of nurses using CDS of different formats [9]. The purpose of this paper is to describe our methods for recruiting a fully powered national sample of practicing nurses and testing the CDS with this sample virtually prior to deployment in practice.

2. Methods

2.1. Design

Tailored Clinical Decision Support Formats Designed to Improve Palliative Care for Cancer and Chronically Ill Patients: A Pre-Clinical Test is a National Institute of Nursing Research funded trial (1R01NR018416–01) of CDS to determine optimal formats for presenting evidence to individual nurses. In this grant, we use a randomized control trial (RCT) design to compare the effectiveness of presenting clinical evidence in four formats (text, text + table, text + graph, and tailored).

Under single Institutional Review Board (IRB) authority for multisite trials, the IRB at the University of Florida (UF) assumes primary authority for approval of the trial, which is conducted at UF and the University of Iowa.

2.2. Theoretical underpinnings and conceptual framework

Our research is informed by the Cognitive Load Theory (CLT) [25], which seeks to explain complex problem solving where excessively high cognitive loads degrade performance [26]. The theory posits that humans’ information processing capacity is quite limited [27] and that cognitive load is influenced by individual learner differences [28,29].

In the context of clinical nursing, registered nurses (RNs) frequently must process a large amount of real-time patient data (e.g., respiratory, hemodynamic, mobility, level of consciousness, medication needs, pain) along with data contained in the EHR interface (patient history, lab values, x-ray results) to make decisions. These data, combined with a temporally demanding clinical workflow, lead to a high cognitive load. A suboptimal CDS format that does not account for individual nurse differences (e.g., graph literacy, years of experience, education) could induce additional cognitive burden that further increases mental effort and degrades information processing, ultimately causing undesired patient outcomes. Our application of CLT in this research is depicted in Fig. 1. In it we posit that tailoring the CDS format to nurse characteristics would minimize the cognitive load placed on the RN to process the CDS and lead to efficient decision making and desired patient outcomes.

Fig. 1. Conceptual framework guided by cognitive load theory.

2.3. Primary aim

To compare four CDS groups (text, text + table, text + graph, tailored) for effects on nurses’ care planning time when using the CDS and simulated patient outcomes.

2.4. Hypothesis

We hypothesize that alignment between assigned CDS format and graph literacy level, measured by the Graph Literacy Scale (GLS) [7], is associated with faster care planning time and better patient outcomes than simply using text, text + table or text + graph for all RNs.

2.5. Randomization

RNs are randomized into one of four groups (Fig. 2). Each of the first three CDS groups is associated with one of three formats: text only, text + table, or text + graph. All participants assigned to one of the first three CDS groups thus access CDS evidence presented in the format associated with that group. (See Figs. 3 and 4.)

Fig. 2. Randomization by Graph Literacy Score.

Fig. 3. Standardized instructions to participants.


Fig. 4. Standardized brief handoff for Shifts 2 and 3.

Participants assigned to the fourth group (tailored), on the other hand, are presented with one of the three CDS formats based on their GLS score (low, medium, or high). The GLS is a 13-item objective literacy tool that measures a person’s ability to understand information presented in graphs (Supplemental File 1) [7]. The scale is an open-ended, fill-in-the-answer type test that uses medical scenarios for each item and takes 9–10 min to complete [7]. Psychometric assessments of the scale are satisfactory to high (Cronbach’s alpha = 0.85; convergent validity r = 0.44). The score is calculated by simply counting the number of correct answers; there are no opportunities for partial points.

Those with low GLS scores are presented CDS evidence in the text only format, those with medium scores receive the text + table format, and those with high scores receive CDS in the text + graph format. Stratified randomization with a block size of four is used to ensure balance across the groups on the key variable of Graph Literacy score.
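The tier mapping and stratified permuted-block scheme described above can be sketched as follows. This is an illustrative sketch only: the GLS tier cutoffs (`low_cut`, `high_cut`) and all class and function names are our assumptions, not the study's actual implementation.

```python
import random

GROUPS = ["text", "text+table", "text+graph", "tailored"]


def gls_tier(score, low_cut=7, high_cut=10):
    # Cutoffs are illustrative assumptions, not the study's actual tertiles.
    if score < low_cut:
        return "low"
    if score < high_cut:
        return "medium"
    return "high"


class StratifiedBlockRandomizer:
    """Permuted-block randomization (block size 4) within each GLS stratum."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.blocks = {}  # stratum -> assignments remaining in the current block

    def assign(self, gls_score):
        stratum = gls_tier(gls_score)
        block = self.blocks.get(stratum)
        if not block:  # start a new shuffled block of 4
            block = GROUPS[:]
            self.rng.shuffle(block)
            self.blocks[stratum] = block
        group = block.pop()
        if group == "tailored":  # tailored group: format follows the GLS tier
            fmt = {"low": "text", "medium": "text+table", "high": "text+graph"}[stratum]
        else:
            fmt = group
        return group, fmt
```

Within each stratum, every consecutive run of four participants covers all four groups exactly once, which is what keeps GLS balanced across arms.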

2.6. Sample, recruitment, screening and representation

2.6.1. Sample size and power

We rely on preliminary findings from our earlier pilot work to conduct a power analysis [9]. These findings suggest that the time to complete a care plan depends on the alignment between the RN’s graph literacy and the format of the CDS [9]. Based on these findings and the distribution of GLS scores of our pilot study RNs, we estimated that the mean care planning times for the proposed four CDS format groups will be as follows: 4.7 ± 2.3 min for tailored, 7.7 ± 3.6 min for text, 8.2 ± 4.8 min for text + table, and 6.7 ± 3.5 min for text + graph. The primary aim of the current RCT is to demonstrate that tailoring CDS reduces cognitive load by comparing the care planning time of the tailored group with those of the text, text + table, and text + graph groups. Applying a Bonferroni adjustment to achieve a family-wise type I error of 5%, we estimated that a sample of 200 nurses gives us 86% power to detect the difference between the tailored group and the three other groups.
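A simulation-based check of this power estimate can be sketched in a few lines, assuming independent normal draws with the pilot means and SDs above and 50 nurses per group. The function name, the large-sample z-test, and the joint "all three comparisons significant" criterion are our assumptions for illustration.

```python
import random
from statistics import NormalDist

# Pilot estimates of care planning time (mean, SD) in minutes, from the text.
PILOT = {"tailored": (4.7, 2.3), "text": (7.7, 3.6),
         "text+table": (8.2, 4.8), "text+graph": (6.7, 3.5)}


def _mean(x):
    return sum(x) / len(x)


def _var(x):
    m = _mean(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)


def mc_power(n=50, alpha=0.05 / 3, sims=1000, seed=0):
    """Monte Carlo estimate of the power to find ALL three tailored-vs-other
    comparisons significant (two-sided z-test at the Bonferroni-adjusted alpha)."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(sims):
        draws = {g: [rng.gauss(m, s) for _ in range(n)] for g, (m, s) in PILOT.items()}
        mt, vt = _mean(draws["tailored"]), _var(draws["tailored"])
        ok = True
        for g in ("text", "text+table", "text+graph"):
            mg, vg = _mean(draws[g]), _var(draws[g])
            z = (mg - mt) / ((vt / n + vg / n) ** 0.5)
            ok = ok and abs(z) >= z_crit
        hits += ok
    return hits / sims
```

Under these assumptions the weakest comparison is tailored vs. text + graph (a 2.0-min difference), which dominates the joint power.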

2.6.2. Sample source

Using methods adapted from the Nursing Work life Study [30], we sample using State Boards of Nursing (SBON) registry lists. To promote geographic and work setting diversity, we purposively selected a diverse group of ten states in the United States (U.S.): Florida (FL), Arizona (AZ), Vermont (VT), New Jersey (NJ), Nebraska (NE), Ohio (OH), California (CA), Texas (TX), Oregon (OR), and New Mexico (NM). These states include two states from each of the five U.S. regions, some of the most populous (CA, TX, FL) and least populous (VT, NE) states, states with high population densities (NJ, FL, CA), and sparsely populated states (NE, OR) [31]. To ensure adequate representation of RNs without a Bachelor of Science in Nursing (BSN) degree, we included states with large proportions of RNs with either Associate Degree in Nursing (ADN) or Diploma education, including FL (57%) [32], VT (52%) [33], TX (43%) [34], OR (46%) [35], and AZ (58%) [36]. Finally, we included some of the most diverse state RN workforces in the nation, with a high proportion of minority nurses, including CA (47%) [37], TX (40%) [34], and FL (37%) [32]. Together these ten states have more than 1.3 million RNs.

2.6.3. Recruitment

We recruit nurses using lists obtained from SBON registries. These lists contain the names and contact information of licensed RNs in each state. Access to the lists and the format of each vary by state: some SBONs provide free instant data downloads, while other states require a purchase, with data sent electronically or mailed on discs. Data are converted to Microsoft Office Excel as needed. Using the contact information provided by the SBONs, we randomly select nurses to invite to participate in recruitment cycles. In subsequent cycles, the names of all RNs previously contacted are removed from the database to avoid contacting the same participants in more than one cycle.
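The cycle-by-cycle selection without replacement can be sketched as follows; the function name and the use of integer IDs in place of registry records are our illustrative assumptions.

```python
import random


def draw_cycle(registry, already_contacted, n, seed=None):
    """Randomly select n RNs for one recruitment cycle, excluding anyone
    contacted in an earlier cycle. `registry` is any iterable of RN
    identifiers; `already_contacted` is a set updated in place."""
    pool = [rn for rn in registry if rn not in already_contacted]
    rng = random.Random(seed)
    picked = rng.sample(pool, min(n, len(pool)))
    already_contacted.update(picked)
    return picked
```

Because each cycle draws only from the not-yet-contacted pool, no RN can receive invitations in two different cycles.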

Our recruitment strategy is guided by Dillman’s evidence-based Total Design Mixed Mode Method [38] for participant recruitment. This method uses a series of different forms of contact (e.g., letter, postcard, email) on prespecified days. The first contact is a stamped mail envelope containing an invitation letter, a colorful photo flyer, and a $2 token cash incentive. On day 7, a graphically designed color postcard reminder is sent. On day 21, participants who have not responded are sent a stamped, addressed return envelope as a last contact. In addition to these postal mailings, nurses licensed in states that provided email addresses (5 of the 10 in our state sample) are also sent email reminders on days 7 and 14.

2.6.4. Eligibility screening

Interested RN invitees contact our study team, and a telephone screening interview is conducted to assess eligibility. Our inclusion criteria are: 1) age 18 years or older, 2) licensed RN, 3) cares for adults and 4) works in a hospital. Our exclusion criteria are: 1) have not practiced in a hospital adult-medical surgical type unit within the last 2 years, 2) inability to see or understand an English language web interface, 3) limited access to the internet, and 4) no access to a desktop, a laptop, or a tablet computer with a 9″ or larger screen. Answers to the screening questions are entered into the secure, web-based application REDCap (Research Electronic Data Capture) [39] software system.

2.6.5. Ensuring sample representation with quotas

To enhance generalizability, we use quota sampling methods to ensure that nurses with diverse backgrounds, education, and geographic locations are included in our sample, and we use REDCap to standardize the process for assessing whether quotas are met. Following the telephone screening, eligible RNs answer several demographic questions read to them by a research assistant (RA) to assess gender, race, ethnicity, and highest nursing education. The answers are recorded by the RA in REDCap, where the RNs are automatically categorized based on their answers (example participant category: female, minority, holds a BSN), and tools within REDCap are used to enforce participant category quotas. We preset sampling quotas (20% male, 35% minority, and 30% with an ADN as their highest degree) [40] to ensure our study is representative of the U.S. nursing population and diverse across the five geographic regions in our sample. Utilizing REDCap functions for quota sampling eliminates the need for manual record keeping and reduces the likelihood of human error.
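One simple way to enforce minimum quotas like these is to cap the over-represented complement of each category, as sketched below. The cap scheme, the class name, and the attribute keys are our assumptions; the study's actual REDCap logic may differ.

```python
TARGET_N = 220
# Targets from the text: 20% male, 35% minority, 30% ADN as highest degree.
# Enforced here as caps on the complementary (over-represented) cells.
CAPS = {
    "female": round(0.80 * TARGET_N),         # 176
    "non_minority": round(0.65 * TARGET_N),   # 143
    "bsn_or_higher": round(0.70 * TARGET_N),  # 154
}


class QuotaTracker:
    def __init__(self):
        self.counts = {k: 0 for k in CAPS}
        self.enrolled = 0

    def _cells(self, p):
        """p: dict with boolean keys 'male', 'minority', 'adn_highest'."""
        cells = []
        if not p["male"]:
            cells.append("female")
        if not p["minority"]:
            cells.append("non_minority")
        if not p["adn_highest"]:
            cells.append("bsn_or_higher")
        return cells

    def can_enroll(self, p):
        if self.enrolled >= TARGET_N:
            return False
        return all(self.counts[c] < CAPS[c] for c in self._cells(p))

    def enroll(self, p):
        if not self.can_enroll(p):
            return False
        self.enrolled += 1
        for c in self._cells(p):
            self.counts[c] += 1
        return True
```

Once a capped cell fills, only participants from the under-represented categories can take the remaining slots, so the final sample meets the minimum quotas.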

2.7. Enrollment and pre-experiment data collection (Session 1)

If the RN qualifies for an open study slot, the RA immediately sends a uniform resource locator (URL) link to the RN’s email to complete an electronic informed consent (in REDCap), additional demographic questions, and the Graph Literacy Scale [7,39]. The RA stays on the phone and remains logged into REDCap, while the RN connects to the URL link and completes the informed consent, GLS, and the additional demographic questions (Session 1). While on the phone the RA tracks and validates the completion of the required forms and answers the RN’s questions about the study. At the end of Session 1, the RA schedules the participant for Session 2 (Experimental Task) by offering available dates on the REDCap calendar to the RN and inputting the RN’s choice while both are still on the phone. The RA immediately sends an Outlook calendar invite and a Zoom Video Communications (San Jose, CA) link to both the RN and the RA who will be conducting Session 2.

2.8. Experimental session (Session 2)

The experiment (Session 2) takes approximately one hour. The RN connects via the private Zoom link (sent in Session 1) and is given an automatic reminder ten minutes prior to the start of Session 2, generated from the Outlook calendar entry sent in Session 1. If the RN does not show up for the scheduled Session 2, an RA contacts the RN via phone or email to reschedule, repeating the attempt every 2–3 days until the RN is reached. The RA at the UF site guides the RN through four steps: technology verification, orientation, interacting with the care planning software (CPS), and post-experiment surveys.

2.8.1. Technology verification

At the start of Session 2, the RN is asked, “What technology will you be using today?” If the RN does not indicate having access to a required device option, the RA explains the requirements and offers to reschedule the session for a time when the RN can access a required device. The RA next asks, “What browser do you have?” and “Are you familiar with Zoom and its features?” If needed, the RA helps the participant download a compatible browser and explains the Zoom features to the RN to ensure the components of Session 2 work as intended. The team developed a technical guide to Zoom features that the RA uses to assist the RN in sharing the screen and accessing the browser. When the verification is complete, the RN proceeds to the next step, orientation.

2.8.2. Orientation and instructions

For the orientation, the RA shares the computer screen and sound using the Zoom ‘share screen’ feature to play the standardized audio-visual orientation for the RN. The orientation provides the images, functions, and instructions for interacting with the CPS that will appear once the interaction phase begins [9]. This includes:

  • how to add, remove, and re-prioritize the nursing problems, outcomes, and interventions in the care plan as needed.

  • an overview of the standardized terms used in the CPS for patient problems, outcomes, and interventions represented respectively by terms from NANDA-I [41], the Nursing Outcome Classification (NOC) [42], and the Nursing Intervention Classification (NIC) [43].

Next, the case histories and initial nursing care plans of two patients nearing end of life are displayed on static CPS screens. A recording provides the handoff information and an overview of the steps the RN will be expected to take once interaction with the CPS begins:

During the entire orientation phase the RA stays visible in the Zoom session and answers questions and responds by directing the RN to content in the video to ensure the orientation is consistent for all RNs.

The RA then pastes the URL for the interactive CPS into the Zoom chat and directs the RN to copy and paste the link into a web browser (e.g., Google Chrome, Safari, Firefox) and begin updating the CPS based on the patient information provided once the session automatically begins. The RA then turns off their video and mutes their audio to provide privacy to the subject while remaining present in the Zoom session to respond to unforeseen technical glitches. The RA deliberately refrains from interacting with the RN so as not to bias responses during the 3 simulated shifts.

2.8.3. Experimental task

Once the ‘Next Shift’ button on the handoff screen is selected, the interactive CPS displays Patient 1’s care plan with an ‘Action Required’ button that, when clicked, displays the CDS in the format (text, text + table, text + graph, or tailored) to which the RN was randomly assigned (see Supplemental File 2); the RN can then choose to accept or reject the actions recommended in the CDS. The RN can toggle back and forth between the 2 patients’ care plans and, when satisfied with the updates to each care plan (Shift 1), clicks the ‘End Shift’ button.

Immediately after clicking ‘End Shift’ to end Shift 1, the following handoff for Shift 2 appears:

The RN is again directed to click on the audio button to hear the audio version of the handoff. After listening to the brief handoff, the RN clicks the ‘Next Shift’ button to view the 2 patients’ care plans, which may reflect a change in the simulated patient outcomes based on the nurse’s decisions from the prior simulated shift. For example, if the nurse added an evidence-based intervention for pain control in Shift 1, the initial Shift 2 care plan NOC would show that the patient’s pain had improved.

Just as in the first simulated shift, the nurse makes decisions about patient care in the care plan, accesses relevant ‘Action Required’ CDS options, and toggles between patients as desired. When satisfied with the updates to the Shift 2 patients’ care plans, the RN clicks the ‘End Shift’ button.

Shift 3 follows the same handoff and sequence as Shift 2. In total, the nurses complete 6 care plans (3 for each of the 2 patients) to complete the experimental task. Once the Shift 3 care plans for the two patients have been submitted, the experimental task is complete.

2.9. Data management and analysis

The CPS application was created with Spring, an open-source framework for developing in Java, along with Hibernate, a lightweight object-relational mapping framework. The application stores and retrieves data from a Microsoft SQL Server relational database and is hosted on an Apache Tomcat server. To facilitate our analysis, the application automatically records the actions and the time the RN spends interacting with the care plans during each of the three shifts and stores these data in a SQL database. Once data collection is complete, the data will be imported into the statistical computing software R, version 3.6.3 (R Foundation for Statistical Computing, Vienna, Austria) for analysis.

The primary outcome of our study is care planning time, the total amount of time an RN spends on the care plans over the three simulated shifts. We hypothesize that the tailored group will have a lower care planning time than each of the other three groups. Care planning time will be calculated from the time stamps generated when the RN clicks the ‘Next Shift’ button through when the RN clicks the ‘End Shift’ button for each of the three shifts. Linear regression analysis will be performed to compare the tailored group with the other three groups on the dependent variable, care planning time. Dummy variables will be created for our categorical independent variable, CDS group assignment (Text, Text + Table, Text + Graph, and Tailored).
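The dummy-variable coding of the group comparison can be sketched as below, here in Python rather than the R the study will use. The data are simulated: the group means are the pilot estimates quoted earlier, and the noise SD of 1.0 is an arbitrary choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
groups = np.repeat(["tailored", "text", "text+table", "text+graph"], 50)

# Simulated care-planning times (minutes) around the pilot means; illustrative only.
means = {"tailored": 4.7, "text": 7.7, "text+table": 8.2, "text+graph": 6.7}
y = np.array([rng.normal(means[g], 1.0) for g in groups])

# Dummy-code CDS group with 'tailored' as the reference level, so each
# coefficient estimates that group's mean difference from the tailored group.
levels = ["text", "text+table", "text+graph"]
X = np.column_stack([np.ones(len(y))] + [(groups == l).astype(float) for l in levels])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, effects = beta[0], dict(zip(levels, beta[1:]))
```

With this coding, the intercept recovers the tailored group's mean and each dummy coefficient recovers the corresponding group's excess care planning time.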

The secondary outcome is the simulated patient outcome. We calculate this outcome based on the specific evidence-based palliative care suggestions presented in the CDS that are accepted into the care plans in each shift. When an RN accepts one of the CDS suggestions, the corresponding NOC outcome improves by one point. Next, we dichotomize each simulated patient outcome into improved or not improved and use logistic regression analysis with the tailored group vs. the other three groups as the independent variable. We hypothesize that the tailored group will have better patient outcomes. We will set the significance level at a two-sided Type I error of 0.016 for the primary outcome and 0.05 for the secondary outcome.
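With a single binary predictor (tailored vs. other), the logistic regression coefficient equals the log odds ratio from the 2×2 table, which gives a compact way to sketch the secondary analysis. The function name and record format are our assumptions.

```python
import math


def odds_ratio_tailored(records):
    """records: iterable of (group, improved) pairs, improved coded 0/1.
    Returns the odds ratio of improvement for tailored vs. all other groups;
    exp of the logistic regression slope for this single binary predictor."""
    cells = {(True, 1): 0, (True, 0): 0, (False, 1): 0, (False, 0): 0}
    for group, improved in records:
        cells[(group == "tailored", improved)] += 1
    a, b = cells[(True, 1)], cells[(True, 0)]    # tailored: improved / not
    c, d = cells[(False, 1)], cells[(False, 0)]  # other groups: improved / not
    return (a * d) / (b * c)
```

An odds ratio above 1 would support the hypothesis that the tailored group has better simulated patient outcomes; the full analysis would also report a confidence interval, omitted here for brevity.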

We will also construct machine learning prediction models of care planning time and simulated patient outcomes using RN demographic variables, graph literacy, objective numeracy, and format preference as predictors. These models could be used to further refine tailoring in the future.

2.10. RN incentives

Nurses who complete the experiment are offered a $100 electronic gift card and 1 credit hour of continuing education credit.

3. Discussion

We assembled a novel combination of tools for CDS testing to address critical barriers in advancing the field. Specifically, we sought to overcome problems associated with complex system implementation, the difficulty of locally recruiting a sample large enough for adequate power, the lack of generalizability in studies conducted at a small number of sites, and the need to avoid the expense and distraction of work-based data collection. Our approach includes a rarely, if ever, used combination of tools: 1) simulation methods, 2) web-based prototypes and survey measures that allow remote data collection, 3) efficient methods to recruit a diverse national sample, 4) leveraging of information technology for online consent, screening, quota enforcement, scheduling, and coordination between two study sites with distinct and complementary roles, and 5) utilization of videoconferencing software to ensure an effective and smooth orientation process as well as remote monitoring of study sessions to ensure fidelity. Our methods are scalable and have implications for other researchers testing health information technology (HIT) used by licensed health care professionals.

By conducting the study using simulation methods before implementation, we foster a safer, do-no-harm approach for HIT research [44]. This is critically important because poor and untested HIT has the potential to cause unintended consequences and harm if tested with real patients [45–47] (e.g., delivery of wrong care due to misinterpretation of a CDS message). In our study, we use two simulated scenarios that focus on the care of two patients nearing the end of life who are currently receiving inappropriately aggressive bedside care. RNs are asked to demonstrate their decision making by adjusting electronic care plans based on the information in the scenarios, the features of the CPS, and the CDS provided in the format assigned to them.

The simulation methods also allow us to identify potentially negative patient outcomes of the CDS before implementing it in a live setting. For the patient outcomes in our simulation, we utilized findings from our earlier research using machine learning to identify best practices from our care plan database [9]. The evidence gleaned allows us to predict patient outcomes based on the RNs’ decisions made during this study. With the progress of artificial intelligence and big data in healthcare and in nursing, high-performance prediction algorithms for patient outcomes will become more widely available for use in CDS systems in the near future. These prediction models will further support the use of simulation to conduct high quality pre-clinical testing of innovative HIT.

By conducting the experiment remotely, we overcame two major barriers to rigorous HIT research. First, the method removes the extra time and travel required to be in a specific laboratory or practice environment; nurses participate in the experiment in their own homes with highly flexible scheduling options. Second, because our CDS prototype was developed as a web application, it could be delivered using videoconferencing software (Zoom), and it is not necessary for the RNs to download bulky software with the potential to “crash” their home computers. The use of a secure site for answering survey questions allows our RA to verify full study completion in real time, minimizing missing data.

By obtaining a sample from state board registries, we were able to gain access to RNs from a wide variety of organizations. This approach is important because many studies of HIT are conducted in only a single setting (e.g., one hospital or one large medical center), where findings cannot be generalized. The sampling method also promotes diversity by selecting and recruiting from states that allow us to ensure our RN sample contains adequate numbers of males, underrepresented minorities, and those whose highest degree is an Associate Degree.

Our nationwide representative sampling is also facilitated by leveraging advanced features in REDCap, which minimize the need for manual recordkeeping and the possibility of human error. REDCap enables a streamlined process of initial contact, screening, demographic data collection, quota enforcement, consent, the baseline session, and scheduling of the second session, minimizing the need to contact the participant multiple times and thus maximizing retention. It also allows seamless coordination of study researchers and staff located at two universities: the University of Iowa focuses primarily on Session 1 and UF focuses primarily on Session 2. Utilizing Zoom for Session 2 allows the visual communication necessary to ensure a smooth orientation and troubleshooting if needed. It also enables remote monitoring of participants to ensure the fidelity of the experiment.

4. Potential difficulties and alternative strategies

Our methods have many advantages, but there are some limitations. First, not all areas of the U.S. have internet service fast enough to run an interactive web application. This could bias the results if nurses with less access to high-speed internet are also those who struggle most with the use of HIT. To mitigate this bias, we allowed nurses to participate from other quiet locations, such as a library or a coffee shop. We also continue to monitor the number of potential participants who are unable to enroll because of poor internet access; this information will provide valuable insights for planning future online studies. Second, although we made participation as easy as possible, nurses (and other clinicians) are hardworking, skilled professionals who may not be interested in participating given the time commitment of this study (1.5–2 h in total). We provided participants with a $100 honorarium, but some nurses may not view this as a strong enough inducement. To develop widespread pre-clinical testing, efforts should be made to shorten evaluations or to provide larger honoraria. Finally, although professions other than nursing can use our care plan CDS, this study tested CDS targeting nurse decision making. Findings may not generalize to other health professions, and the CDS should be tested with those professions before wider use.

5. Conclusions

Our study recruitment and experiment delivery methods are feasible, reproducible, and scalable, addressing known impediments to rigorous testing of HIT such as CDS. They can be applied to research trials evaluating decision making across a broad range of technologies and with other health professions. In addition, our methods pave the way for more effective decision-making interfaces that are tailored to an individual's characteristics. Beyond individual research studies, we believe these methods can be used to develop a scalable ecosystem in which HIT innovations and routine vendor updates are tested before implementation in practice. Used widely, these methods can promote HIT that is effective, highly usable, and reduces potential harm to patients and burden on clinicians.


Acknowledgements

Research reported in this publication was supported by the National Institute of Nursing Research (NINR) of the National Institutes of Health (NIH) [1R01NR018416-01] and by NIH National Center for Advancing Translational Sciences (NCATS) support for the University of Florida Clinical and Translational Science Institute [UL1TR001427]. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH, NINR, or NCATS. The final peer-reviewed manuscript is subject to the NIH Public Access Policy. We wish to acknowledge our fabulous programmer Rishabh Garg, who worked closely with us to convert our study prototype into a web-based application.

Footnotes

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.cct.2022.106712.

