Abstract
Objective
To describe a user-centered approach to develop, pilot test, and refine requirements for 3 electronic health record (EHR)-integrated interventions that target key diagnostic process failures in hospitalized patients.
Materials and Methods
Three interventions were prioritized for development: a Diagnostic Safety Column (DSC) within an EHR-integrated dashboard to identify at-risk patients; a Diagnostic Time-Out (DTO) for clinicians to reassess the working diagnosis; and a Patient Diagnosis Questionnaire (PDQ) to gather patient concerns about the diagnostic process. Initial requirements were refined from analysis of test cases comparing the risk predicted by DSC logic with the risk perceived by a clinician working group; DTO testing sessions with clinicians; PDQ responses from patients; and focus groups with clinicians and patient advisors using storyboarding to model the integrated interventions. Mixed methods analysis of participant responses was used to identify final requirements and potential implementation barriers.
Results
Final requirements from analysis of 10 test cases predicted by the DSC, 18 clinician DTO participants, and 39 PDQ responses included the following: DSC configurable parameters (variables, weights) to adjust baseline risk estimates in real-time based on new clinical data collected during hospitalization; more concise DTO wording and flexibility for clinicians to conduct the DTO with or without the patient present; and integration of PDQ responses into the DSC to ensure closed-loop communication with clinicians. Analysis of focus groups confirmed that tight integration of the interventions with the EHR would be necessary to prompt clinicians to reconsider the working diagnosis in cases with elevated diagnostic error (DE) risk or uncertainty. Potential implementation barriers included alert fatigue and distrust of the risk algorithm (DSC); time constraints, redundancies, and concerns about disclosing uncertainty to patients (DTO); and patient disagreement with the care team’s diagnosis (PDQ).
Discussion
A user-centered approach led to the evolution of requirements for 3 interventions targeting key diagnostic process failures in hospitalized patients at risk for DE.
Conclusions
We identify challenges and offer lessons from our user-centered design process.
Keywords: diagnostic errors, diagnostic safety, user-centered intervention design, electronic health records, acute care
BACKGROUND AND SIGNIFICANCE
Diagnostic errors (DEs), defined as “failure to (a) establish an accurate and timely explanation of the patient’s health problem(s) and (b) communicate that explanation to the patient”, are common in US hospitals and often lead to preventable harm.1 DEs likely explain 22% of paid malpractice claims in the acute care setting.2,3 Emerging data based on use of validated instruments (eg, Safer Dx) suggest that the rate of harmful DEs is between 5% and 7% in patients transferred to intensive care or readmitted.4,5 Using the Safer Dx instrument adapted for acute care, our preliminary data suggest that the incidence of harmful DEs is 6.7% in a cohort of patients hospitalized on the general medicine service.6 These DEs were frequently associated with failures in several diagnostic process domains, including initial patient-provider encounter and assessment; diagnostic test ordering, performance, and interpretation; diagnostic information and follow-up; and subspecialty consultation.7
Singh et al. have recommended a “Safer Dx Checklist” approach to prioritizing and refining practices for mitigating DEs; most checklist items were directed at improving organizational accountability, case surveillance, measurement, and processes.8 To date, technique-based interventions (changes in equipment, procedures, and clinical approaches), system-level technology-based tools (clinical decision support for diagnosis9–13), and structured process changes (feedback loops or additional stages in the diagnostic pathway) have been studied most often.14,15 Educational initiatives and cognitive interventions16–21 and personnel changes (additional or fewer healthcare members) have potential as well, but have not been rigorously studied.14,15 These interventions have been tested via case simulations10,11,17–19,22 and are usually limited to emergency20 or ambulatory settings,16,21,23 where DE rates have been better characterized.14,15,24
These prior efforts have had variable effects on improving diagnostic safety for several reasons. First, few have incorporated knowledge gained from analyzing actual cases of DEs that led to harm. A systematic approach to analyzing failures in the diagnostic process is an important first step as it would enable the identification of the most frequent cognitive, systems, and patient factors contributing to harmful DEs which then could be targeted for intervention development.25 Second, none have employed a user-centered approach to refine intervention requirements to ensure maximal use, usefulness, and usability by clinicians and patients. Third, few interventions have been tested in complex clinical settings to characterize potential barriers and facilitators prior to implementation.15
OBJECTIVE
The purpose of this study was to describe a user-centered approach (Figure 1) to develop, pilot test, and refine initial clinician and patient requirements for 3 electronic health record (EHR)-integrated interventions that address common and impactful diagnostic process failures to prevent DEs from occurring in hospitalized patients in real time.7,25
Figure 1.
User-centered approach for developing, testing, and refining diagnostic safety interventions.
MATERIALS AND METHODS
Overview and prior work
As part of our Agency for Healthcare Research and Quality (AHRQ)-funded Patient Safety Learning Lab study, our interdisciplinary team adapted the modified DE Evaluation and Research (DEER) Taxonomy for acute care by mapping 44 specific diagnostic process failures to 6 Safer Dx process dimensions (Table 1, first column).25–29 Of note, healthcare communication and collaboration was added to the original 5 Safer Dx dimensions based on recommendations from our steering committee.25,29 Based on an in-depth analysis of a cohort of cases with DEs using this framework,25,29 our systems engineers identified common and impactful diagnostic process failures (eg, failure or delay in ordering needed tests) at our institution that would serve as potential targets of intervention within the corresponding Safer Dx dimension (eg, diagnostic test performance and interpretation).25,29
Table 1.
Potential interventions that satisfy initial user requirements and address common and impactful diagnostic process failures
Safer Dx process dimension | Example of initial user requirement | Example of DEER diagnostic process failures | Potential interventionsa |
---|---|---|---|
Patient-provider encounter and initial diagnostic assessment | Clinicians want to easily access all previously collected data when prompted, including out-of-network information | Failure or delay in providing or eliciting a piece of history data | |
Diagnostic test performance and interpretation | Clinicians want to be aware of the pre-test probability of disease and the likelihood that a test would increase or decrease post-test odds | Failure or delay in ordering needed test(s) | |
Subspecialty consultation | Clinicians want to be able to rapidly obtain subspecialty expertise when risks are high or uncertainty persists | Failure or delay in ordering a referral or consult | |
Follow-up and tracking of diagnostic information | Clinicians want to identify the conditions and clinical states at high risk for deterioration | Failure or delay in recognizing or acting upon urgent condition or complications | |
Patient factors | Patients want to be able to participate in the diagnostic process; clinicians want to know when patients are not receiving adequate explanations about their plan | Failure to communicate an accurate and timely explanation of the patient’s health problem(s) to the patient/caregiver | |
Healthcare team communication and collaboration | Clinicians want to ensure that key care team members understand why the patient was admitted and the plan | Failure or delay in communicating initial encounter and assessment findings between healthcare team members | |
a Potential intervention candidates based on organizational, technical, and study requirements addressing initial user requirements (bolded items, our prioritized interventions, were fully developed and tested):
Quality and Safety Dashboard: Clinician-facing platform that uses EHR data to calculate safety risks across a variety of domains (pressure injuries, falls, venous thromboembolism, delirium, Foley catheter, and central lines) by clinical unit or team. HPP Documentation and Deterioration Index were incorporated into the dashboard.
Diagnostic Time-Out (DTO): A structured checklist prompting clinicians to pause and reevaluate the working diagnosis.
Automated Hospital Principal Problem (HPP) Documentation: Application programming interface (API) to automatically extract and update the HPP based on clinical note documentation at the time of admission and subsequently during hospitalization.
Patient Diagnosis Questionnaire (PDQ): A web-based survey for patients to report their understanding of the diagnosis and satisfaction with care team communication.
Deterioration Index (Epic): Probabilistic model using EHR data to predict clinical deterioration during hospitalization.
Digital Communication Tools: Secure, patient-centered care team messaging, and collaboration integrated with the EHR.
Smart Notification Platform: A rules-based notification platform that uses “If-This-Then-That” logic (eg, if hemoglobin <6, then send a page to responding clinician).
Best Practice Advisory (BPA): A pop-up that could be configured to apply Bayes’ theorem based on EHR data advising clinicians about the pre-test and post-test probability of certain condition-specific safety risks (eg, sepsis, pulmonary embolism, etc.).
Other interventions: “MyLife, MyStory”,40 Epic Care Everywhere, enterprise patient portal, laboratory test alerts, condition-specific monitoring using patient-reported outcomes, order templates, consultation alert, structured hand-off tool, virtual meeting room, care process adaptation to involve patients in bedside rounding discussions, clinician decision support to suggest alternative diagnoses (eg, DxPlain) or corrective actions.13
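The Best Practice Advisory above would apply Bayes’ theorem to advise clinicians about pre-test and post-test probability. As a minimal sketch of the underlying arithmetic only (the odds form of Bayes’ theorem; the likelihood-ratio value in the example is illustrative, not from the study):

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Odds form of Bayes' theorem: post-test odds = pre-test odds x likelihood ratio."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Illustrative example: a 20% pre-test probability and a positive test with
# LR+ = 9 gives post-test odds of 0.25 * 9 = 2.25, ie, a probability of ~69%.
print(round(post_test_probability(0.20, 9.0), 2))
```

A BPA built on this logic would surface the resulting probability alongside the EHR data that determined the pre-test estimate.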
Setting and participants
This study was approved by the Mass General Brigham Institutional Review Board and was conducted at Brigham and Women’s Hospital, an academic medical center in Boston, MA. Eligible participants included English-speaking patients hospitalized on the general medicine service and hospital-based clinicians (registered nurses [RN], advanced practice providers [APPs], medical residents, and attendings) caring for these patients. Clinicians had access to our commercial EHR (Epic Systems, Inc., Verona, WI) and previously developed EHR-integrated digital health applications.26,30–36 This infrastructure included our Quality and Safety Dashboard (Figure 2), a custom-developed, extensible, and configurable application that applies logic to EHR data retrieved from application programming interfaces (APIs) to flag patients at low, medium, and high risk for certain hospital-acquired harms.26,30–32,37 For example, risk for catheter-associated urinary tract infections can be quickly visualized via a “Urinary Catheter” column.
Figure 2.
Prioritized interventions across Safer Dx process dimensions.
Potential interventions
Broad clinician and patient needs pertaining to each Safer Dx dimension were identified from local context, clinician stakeholders practicing on the general medicine services, and patient advocates, as in our prior work.26,30–34,38,39 These user needs served as our initial requirements. Next, we searched the literature and assessed the available infrastructure at our institution to compile a comprehensive list of interventions (Table 1, fourth column) that could both satisfy initial user requirements (Table 1, second column) and prevent key diagnostic process failures (Table 1, third column) identified at our institution.7,25,29
Prioritized interventions
An expert panel of 12 individuals (core study team and steering committee) prioritized 3 of the 20 interventions (Figure 2) for prototyping, development, and testing. These 3 interventions were chosen based on organizational, technical, and study requirements, including alignment with concurrent quality, safety, and educational initiatives; feasibility of rapidly extending or enhancing these tools based on user feedback; the likelihood of approval by emerging digital health governance bodies within our study’s implementation timeframe; and the ability to facilitate other interventions.26,33
The first intervention was a new Diagnostic Safety Column (DSC) in the Quality and Safety Dashboard, enhanced to enable clinicians to view curated information from the EHR to visualize a patient’s risk of DE in real-time, navigate to and update specific structured EHR documentation (flowsheet, problem list, etc.), and access patient-reported data submitted via web-based questionnaires. Because the Quality and Safety Dashboard was tightly integrated with the EHR, the DSC could be enhanced to facilitate Hospital Principal Problem (HPP) Documentation in the EHR’s problem list via a hyperlink, and access to the EHR’s Deterioration Index. The second intervention, a Diagnostic Time-Out (DTO), was a short checklist based on the concept of a safety pause,20,21,41–43 providing clinicians a structured process to address diagnostic uncertainty and rethink the primary working diagnosis. The third intervention, a Patient Diagnosis Questionnaire (PDQ), was a web-based survey assessing patients’ perceptions of their diagnoses and satisfaction with communication by the care team about the diagnostic process early during hospitalization.44 The 3 interventions (Figure 2) spanned all 6 Safer Dx diagnostic process dimensions and targeted 16 out of 44 DEER failure points that represented common and impactful diagnostic process failures identified in hospitalized patients at our institution.7,25,29
Development and testing of prioritized interventions
Diagnostic safety column
The DSC was designed with the intent of identifying patients at risk for DE (Table 2) based on patient, clinical, and system factors identified from the literature5,45–48 and preliminary analysis of a DE case cohort at our institution.6,25,29 By identifying at-risk patients, the DSC could then suggest actions to clinicians (such as recommending a DTO) to prevent DEs from occurring in these specific patients. We used expert consensus and our prior analysis of diagnostic process failures to define initial logic (Table 2) to stratify patients into low (green), moderate (yellow), or high (red) risk states. The DSC logic was modeled as follows: DE Risk Score = X1 + X2 + X3 + X4 … + Xn, where Xn represents baseline risk factors. Based on this logic, our software development team generated an initial functional prototype of the DSC, which we released into our EHR’s production environment in “silent mode” (ie, accessible only to the study team but not clinicians). This enabled our team to test the initial logic during weekly case review sessions using live data from hospitalized patients.
Table 2.
Diagnostic error risk state, initial logic, communicated actions
Risk state | Initial logic | Actions communicated to clinicians |
---|---|---|
Green | Risk Score <2 | Get input from patients and care team members, re-assess the primary working diagnosis as appropriate |
Yellow | Risk Score between 2 and 5 | At risk for diagnostic error, consider a diagnostic time-out |
Red | Risk Score >5 | At risk for diagnostic error. Take a diagnostic time-out and reconsider the primary working diagnosis |
Baseline risk factors included in the initial logic:

- Altered mental status, Delirium, Dementia, Depression, Bipolar, Psychosis, or End-Stage Renal Disease on EHR Problem List
- Primary language not English
- 3 or more subspecialty consultants
- Inter-hospital transfer
- 2 or more outpatient visits within 10 d prior to admission
- A prior hospitalization within 7 d of index hospitalization
- Emergency department visit for undifferentiated sign or symptom within 10 d of admission
- 1 daytime responding clinician and 1 attending change, or 3 or more different attendings in the last 72 h
- New or increasing oxygen requirement
- 2 or more blood gases (arterial or venous) resulted within 24 h
- High risk for clinical deterioration based on Epic’s deterioration index
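The initial logic above amounts to an unweighted sum of binary baseline risk factors mapped to the Table 2 thresholds. A minimal sketch, assuming each listed factor contributes 1 point (factor names are illustrative shorthand, not identifiers from the study’s implementation):

```python
# Sketch of the initial DSC logic: DE Risk Score = X1 + X2 + ... + Xn,
# where each Xn is a binary baseline risk factor from Table 2.
# Factor names below are illustrative shorthand for the published list.

BASELINE_FACTORS = [
    "high_risk_condition_on_problem_list",       # eg, altered mental status, ESRD
    "primary_language_not_english",
    "three_or_more_consultants",
    "inter_hospital_transfer",
    "two_or_more_recent_outpatient_visits",
    "recent_prior_hospitalization",
    "recent_ed_visit_undifferentiated_symptom",
    "frequent_clinician_changes",
    "new_or_increasing_oxygen_requirement",
    "two_or_more_blood_gases_in_24h",
    "high_deterioration_index",
]

def de_risk_score(patient_flags: dict) -> int:
    """Count the binary baseline risk factors present for this patient."""
    return sum(1 for factor in BASELINE_FACTORS if patient_flags.get(factor, False))

def risk_state(score: int) -> str:
    """Map the score to a flag color using the Table 2 thresholds."""
    if score < 2:
        return "green"   # re-assess the working diagnosis as appropriate
    if score <= 5:
        return "yellow"  # at risk for DE; consider a diagnostic time-out
    return "red"         # take a diagnostic time-out
```

For example, a patient with a non-English primary language who was transferred from another hospital would score 2 and flag yellow under these thresholds.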
Over an 8-week period, we used the EHR to identify 1–2 test cases per week based on actual patients who were admitted to the general medicine service with an undifferentiated symptom (eg, abdominal pain) entered as the HPP. We tracked the DSC flag status (predicted risk state) for each test case over the course of hospitalization and recorded pertinent clinical information from the EHR. Summarized data were presented to our clinician working group (JLS, AKD) who had expertise in hospital medicine and evaluating the diagnostic process using the Safer Dx instrument to assess the likelihood of DE.6,7,25,29 The clinician working group determined whether the DSC correctly or incorrectly flagged patients at the appropriate risk state based on the initial logic and expert clinical judgment. Iterations to the logic were made based on issues identified from analysis and discussion of each test case. We focused on cases in which DE was perceived as likely but the risk state was “green”, and those in which DE was perceived as unlikely but the risk state was “yellow” or “red”.
Diagnostic time-out
A preliminary version of the DTO was modeled after a structured safety pause used for other high-risk processes such as surgery and discharge.41,49 Our goal was to encourage clinicians to reconsider the primary working diagnosis for patients with an uncertain admission diagnosis (undifferentiated symptom, sign, or clinical state from the HPP) or who had 2 or more DE risk factors (Table 2). We first reviewed existing instruments and mapped their content to corresponding diagnostic process failures within each Safer Dx process dimension.20,21,43 Next, we identified gaps in these instruments and generated an initial prototype targeting key diagnostic process failures.7,25,29 Risk factors reported in the literature and common cognitive biases encountered by clinicians were included.4,5,45,47,50,51 Finally, we presented an initial prototype of the DTO to clinical stakeholders and subject matter experts and incorporated their input. We tested the working prototype with clinicians (MDs, APPs, and RNs) and solicited feedback during testing sessions. The working prototype was iterated between each session based on analysis of feedback.
Patient diagnosis questionnaire
An initial prototype of the PDQ was designed to assess patients’ understanding of their diagnosis, aligning with the “Patient Experience” Safer Dx process dimension.27 Initial questions were based on literature examining the role of suboptimal patient-clinician communication as a contributing factor to DEs52 and incorporated input from our systems engineers, human factors experts, and patient advisors. After initial cognitive testing to ensure each question’s wording conveyed the intended meaning, we conducted a 4-week pilot in which a research assistant (AJG) administered the questionnaire to patients admitted to the general medicine service within the preceding 24 h.44 For all patient participants, we recorded responses, elicited feedback about wording of questions, and discussed perceptions of communication with their clinicians regarding the diagnostic process. We observed clinical unit workflow to identify optimal times to administer the questionnaire and relay feedback to the care team. Based on our analysis of input, feedback, and observations, we iterated questions and identified strategies for incorporating the questionnaire into workflow.
Integrated intervention
We conducted focus groups, first with the research team to identify gaps in functionality, and then with clinicians (MDs, APPs, and RNs) and patient advisors to understand the perceived value of the interventions when integrated into clinical workflow. We created storyboards to illustrate how the 3 interventions functioned together (Supplementary Appendix S1). The storyboards were used by our Human Factors expert (PG) to lead a semi-structured discussion with participants to identify potential facilitators and barriers for implementing the interventions into clinical workflow. All sessions and feedback were recorded.
Mixed methods analysis
For the DSC, descriptive statistics were used to determine the proportion of sampled test cases that flagged concordantly with the risk state perceived by the clinician working group based on chart review. For the DTO, key issues were identified from analysis of feedback from clinicians and confirmed by the study team using a group consensus approach. For the PDQ, descriptive statistics were used to quantify questionnaire responses, and a 2-clinician adjudication process (AKD and JLS) was used to rate concordance between the patient-reported diagnosis and the HPP entered in the EHR’s problem list at admission. For the integrated intervention, 2 research assistants (AJG and HF) independently coded focus group transcripts in Excel (Microsoft, Inc.), extracted quotes, and generated common categories for implementation facilitators and barriers. A group consensus approach was used to confirm major themes and identify additional requirements for implementation.
RESULTS
Diagnostic safety column
Of the 10 test cases sampled for review by our clinician working group, the DSC logic correctly flagged the patient perceived as having moderate or high risk for DE in 7 (70%) cases, based on predefined DE risk states (Table 2). In 6 (60%) cases, the risk state did not change over the course of hospitalization despite expert clinician consensus suggesting increasing DE risk based on chart review. For example, in a case of a 65-year-old male admitted with abdominal pain who deteriorated and expired during a 12-day hospitalization, the clinician working group determined that a flag color change from low risk (green) to elevated risk (yellow or red) was warranted given the persistence of an undifferentiated symptom for the HPP (reflecting diagnostic uncertainty) and objective clinical evidence of deterioration (patient did not respond to initial treatment).
Based on similar findings from analysis of all other test cases, a decision was made to add configurable parameters (Supplementary Appendix S2: DSC) to the initial logic to address emerging requirements. These requirements included: (1) incremental contribution of undifferentiated diagnoses (eg, ICD-10 “R” codes for undifferentiated signs or symptoms) to DE risk if present for more than 24 h into admission; (2) a mechanism to add risk factors acquired during hospitalization based on newly available EHR data (eg, multiple consultation orders, poor lactate clearance, laboratory tests such as arterial blood gases, and low-frequency studies such as electroencephalograms that require complex interpretation); (3) the ability to add EHR prediction models (eg, Epic’s Deterioration Index) that were concurrently being released into production during our study’s timeline once retrievable via enterprise APIs; and (4) a mechanism to assign each risk factor a configurable weight based on findings from concurrent research (eg, multivariate analyses and regression modeling). These additional requirements would ensure that new clinical data becoming available during hospitalization would contribute to the overall DE risk score, in addition to the baseline risk factors (Table 2).
The final logic for our prediction algorithm was modeled as follows: DE Risk Score = a1X1 + a2X2 + a3X3 + a4X4 … + anXn, where Xn represents baseline risk factors and in-hospital clinical factors retrievable from the EHR via our enterprise APIs, and an represents the coefficient (weight) of each independent variable. See Supplementary Appendix S3 for final DSC logic.
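The final weighted logic can be sketched as a configurable mapping from factors to weights, so that weights can be re-tuned as research findings emerge without code changes. All factor names and weight values below are illustrative assumptions, not the study’s calibrated parameters:

```python
# Sketch of the final DSC logic: DE Risk Score = a1*X1 + a2*X2 + ... + an*Xn,
# where each an is a configurable weight and each Xn is a binary baseline or
# in-hospital factor. Names and weights here are illustrative only.

DEFAULT_WEIGHTS = {
    "undifferentiated_hpp_over_24h": 2.0,   # ICD-10 "R" code persisting >24 h
    "primary_language_not_english": 1.0,
    "inter_hospital_transfer": 1.0,
    "multiple_consultation_orders": 1.0,
    "poor_lactate_clearance": 1.5,
    "high_deterioration_index": 2.0,        # retrieved via enterprise API
}

def weighted_de_risk_score(patient_flags: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Weighted sum over the factors present for a patient."""
    return sum(weight for factor, weight in weights.items()
               if patient_flags.get(factor, False))
```

Under this design, adjusting a factor’s relative contribution (eg, after new regression results) requires only a configuration change, which is the point of making the parameters configurable.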
Diagnostic time-out
The initial DTO prototype was tested with 18 hospital-based clinicians (MDs and APPs) during 8 sessions conducted over a 4-week period. Modifications (Supplementary Appendix S2: DTO) made in response to observations of, and feedback from, clinician participants primarily included decreasing the number of steps and communicating flexibility for conducting the DTO inside or outside of patients’ rooms. The final DTO (Figure 4) had 5 steps, each with 1–2 sub-bullets, and took approximately 2–3 minutes per simulated case. Overall, participants perceived that the DTO would be useful, leading clinicians to consider additional history, testing, and diagnostic possibilities (correlating with key steps in the diagnostic process) when confronted with diagnostic uncertainty and/or risk factors for DE.
Figure 4.
Diagnostic time-out.
Patient diagnosis questionnaire
Of 78 patients approached, 39 patients (50%) agreed to complete the PDQ; the remainder were unavailable (eg, off-unit, approaching discharge). Of 39 participants, 22 (56.4%) answered yes to all questions (had no concerns); 29 (74.4%) were confident about their diagnoses; and 32 (82.1%) affirmed that they had enough information to be involved in shared decision-making. The patient-reported diagnosis was concordant with the HPP entered in the EHR in 18 of 39 (46.2%) cases. Analysis of input and feedback from patients, clinicians, and research team consensus identified issues which led to modifications to initial requirements. These modifications included adapting the PDQ to facilitate patient access and integrating PDQ responses with the DSC to communicate patients’ understanding of their diagnosis as well as other concerns to the clinical team (Supplementary Appendix S2: PDQ). The final, 11-item questionnaire (Table 3) assessed patients’ understanding of their admission diagnosis and confidence that it was correct; whether all symptoms were being addressed; satisfaction with care team communication about their diagnosis; and involvement in shared decision-making.
Table 3.
Final Patient Diagnosis Questionnaire (PDQ)
Questions | Choices |
---|---|
1. Has your care team told you your diagnosis in a way that you understand? [Your diagnosis is the main reason why you’re in the hospital.] | |
2. Can you tell me your main diagnosis? [Your diagnosis is the main reason why you’re in the hospital.] | Free text entry |
3. Are you confident that your diagnosis is correct? | |
4. Do you think your care team is treating your main medical problem appropriately? | |
5. Is the team addressing all of your symptoms? | |
6. Have you had the opportunity to ask questions about your diagnosis? | |
7. Are you satisfied with how your care team has communicated with you about your diagnosis? | |
8. Are you comfortable with your current involvement in the decision-making process? These decisions could be about what tests you are getting, what treatments you are getting, etc. | |
9. Do you have enough information to be involved in making decisions about your care? | |
10. Do you feel that your care team is telling you all the information about your diagnosis? | |
11. Does your care team always treat you with respect? | |
Interventions as an EHR-integrated system to improve diagnostic safety
Figure 3 and Supplementary Appendix S1 illustrate how the 3 interventions function as an EHR-integrated system to improve diagnostic safety that hospital-based clinicians could use as part of existing workflows (eg, morning rounds, afternoon sign-out rounds). For example, during afternoon sign-out rounds, the clinical team could systematically review the Quality and Safety Dashboard (A) directly from the EHR to identify patients at risk for DEs (using the DSC) as well as other hospital-acquired harms (using other columns). Clicking yellow or red flags would display patients’ individual risk factors and suggest potential actions. These actions include clicking a hyperlink in the DSC to review the PDQ (B), completed early during admission by patients on their mobile devices, or by a bedside nurse or research staff member on the patient’s behalf. By clicking a hyperlink in the DSC, clinicians could also “walk through” the DTO (C) procedure in a pop-up window. They could also access and view the 5-step procedure independently on the DTO mobile app (which also provides a link to a clinician diagnosis support tool, DxPlain)13 or a laminated pocket card (Figure 4). A checklist icon on the DSC would identify patients who complete the PDQ, directing the clinical team to their responses. Other potential workflows include pulling up the dashboard from within the patient’s chart during morning rounds (similar to the use of safety checklists in the intensive care unit)53–56 or when reviewing standardized “safety bundles” when writing admission or progress notes.
Figure 3.
Three interventions functioning as an EHR-integrated system to improve diagnostic safety.
Facilitators and barriers to implementation
Potential facilitators and barriers to implementation (Supplementary Appendix S4) were identified from 8 focus groups involving a total of 20 nurses, 7 physicians, 1 physician assistant, and 4 patient advisors. Participants described the value of all components of the intervention in mitigating diagnostic safety risks in the hospital (“I’m really excited about [the intervention]… it prompts us to help us think. I just need to see it to go through the process of saying that’s a possibility and not a possibility, but you can’t think of what you’re not thinking about”—an attending). They embraced the DSC, especially if it would populate the EHR with optimally timed flags and alerts (“If it’s someone that I’ve been taking care of for a few days, I think it becomes more difficult to rethink a new diagnosis if I’m not prompted [to do so]”—an attending); and if the algorithm logic could be clearly communicated (to minimize distrust). Overcoming these barriers would ensure that clinicians reconsider the working diagnosis in cases with elevated DE risk or uncertainty. Regarding the DTO, additional suggestions included using it during less busy hours (eg, in the afternoon); encouraging its use by non-MDs (“Going through it collectively would be most helpful because the nurse and pharmacist might have some helpful input to share too”—an APP); and addressing concerns about disclosing uncertainty to patients. Finally, while participants suggested that the PDQ would be ideally accessible from the patient portal and administered by research assistants serving as “digital navigators”, they expressed concerns about patient disagreement with the care team’s diagnosis (“Sometimes a patient doesn’t agree with a specific diagnosis because they’re refusing it… they don’t want that diagnosis to be the case… we may do further testing when it’s not really that warranted”—an attending).
DISCUSSION
We employed a user-centered approach to pilot test and refine initial user requirements for 3 EHR-integrated interventions to mitigate risk of DE in hospitalized patients. The 3 interventions targeted the most common and impactful process failures within all Safer Dx dimensions previously identified at our institution.7,25,29
The DSC, when added to our EHR-integrated Quality and Safety Dashboard, appropriately flagged patients with baseline DE risk factors present on admission in most test cases. Nonetheless, we determined that configurable parameters would be required to adjust baseline risk estimates as new clinical data become available over the course of hospitalization, and when new research findings emerge about the relative contribution of individual risk factors. While clinicians perceived the value of the DTO in providing a structured approach to addressing diagnostic uncertainty and DE risk (often leading to additional diagnostic considerations), they expressed concern about time constraints and conducting it in the presence of patients. Also, while the PDQ was feasible to administer, many patients did not participate. Of those participating, responses demonstrated suboptimal patient-clinician concordance regarding the main reason for hospitalization, underscoring the importance of features to ensure “closing-the-loop” with clinicians, in part by providing access to PDQ results from the DSC and alerting clinicians within their workflows (eg, a BPA, email notification). While the integrated interventions were perceived as having potential to improve diagnostic safety, potential implementation barriers included alert fatigue and distrust of the algorithm (DSC); time constraints, redundancies, and concerns about disclosing uncertainty to patients (DTO); and patient disagreement with the care team’s admission diagnosis (PDQ).
Most clinicians and patient advisors who participated in design sessions supported the 3 interventions, which we attribute to our iterative, participatory process (Figure 1) for refining requirements based on direct user input and feedback. Indeed, the application of human factors and usability approaches is increasingly recognized as a means of improving the diagnostic process (though often under-resourced).57,58 As in our prior studies, we tested many versions of the Quality and Safety Dashboard with clinicians, including individual columns such as the DSC.26,30–34,37 While the DSC did not perfectly flag risk in all cases tested, we identified opportunities to further refine our logic based on discrepancies between actual and ideal risk states as judged by our clinician working group. For example, in addition to using baseline variables present at admission (eg, primary language, number of encounters preceding hospitalization), we now update DE risk in real-time based on newly available clinical data (eg, high or increasing deterioration index, new orders for certain tests, new consults), and attribute variable weights to individual risk factors based on emerging findings from our research activities. Additional requirements for the DTO (making it available via a mobile app and pocket reference card, and prompting clinicians from the DSC) were heavily influenced by trainee perceptions, largely because we designed it concurrently with the development of a diagnostic safety educational curriculum for internal medicine trainees. Finally, results from our PDQ pilot supported the ongoing need for incorporating patient-reported concerns directly into clinician workflow, consistent with prior studies demonstrating poor patient-clinician care plan concordance.59–61
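To illustrate the configurable-weight logic described above, the following minimal sketch (not the production DSC algorithm; all factor names and weight values are hypothetical placeholders) shows how a weighted DE risk score could be re-evaluated as new clinical data arrive during hospitalization:

```python
# Illustrative sketch only: a weighted diagnostic-error (DE) risk score with
# configurable variable weights, recomputed whenever new EHR signals appear.
# Factor names and weights below are invented for illustration.

# Configurable parameters: risk factors and their relative weights,
# adjustable as new research findings emerge.
WEIGHTS = {
    "non_english_primary_language": 1.0,         # baseline (at admission)
    "prior_encounters_before_admission": 0.5,    # baseline (at admission)
    "high_or_rising_deterioration_index": 2.0,   # real-time signal
    "new_stat_imaging_order": 1.0,               # real-time signal
    "new_specialty_consult": 1.0,                # real-time signal
}

def de_risk_score(patient_flags: dict[str, bool],
                  weights: dict[str, float] = WEIGHTS) -> float:
    """Sum the weights of all risk factors currently flagged for a patient."""
    return sum(w for factor, w in weights.items() if patient_flags.get(factor))

# Baseline risk estimated from variables present on admission...
flags = {"non_english_primary_language": True}
baseline = de_risk_score(flags)

# ...then updated in real time when a new signal appears in the EHR feed.
flags["high_or_rising_deterioration_index"] = True
updated = de_risk_score(flags)
assert updated > baseline
```

Because both the factor list and the weights live in a single configuration mapping, either can be tuned without changing the scoring code, which mirrors the "configurable parameters" requirement identified during testing.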
We are not aware of other reported attempts at refining user requirements for EHR-based preventative interventions targeting common and impactful diagnostic process failures during the hospital encounter.8,12,14,15 While prior studies have evaluated the potential for clinical decision support tools, quality improvement collaboratives, diagnostic pauses, and diagnostic questionnaires administered to patients,13,16,21,58,62 these studies have generally been conducted in ambulatory settings and do not include a mechanism to target interventions to specific patients through use of real-time prediction algorithms that leverage EHR data obtained via APIs.26,30,31,33 As a preliminary attempt, we recognize that how the 3 interventions are ultimately implemented and sustained in the EHR and clinical workflow will determine their success at preventing DEs in hospitalized patients. While clinicians could use the DSC to identify patients at risk for DE, they could still dismiss high-risk states if they are overconfident about the working diagnosis, distrust the prediction algorithm, or do not routinely access the dashboard as part of their workflow. Similar to the experience of others, we plan to implement institutional diagnostic safety training to coach clinicians on how and when to use the DTO, even if used independently of the DSC.16,21 Furthermore, aligning our planned implementation with formal training would help build awareness of clinicians’ general reluctance to acknowledge diagnostic uncertainty with patients and of how to effectively engage nurses in the diagnostic process. We identify challenges and offer lessons (Table 4) based on our user-centered design approach.
Table 4.
A User-centered approach to developing an EHR-integrated diagnostic safety intervention for acute care: challenges & lessons learned
Challenges | Lessons learned |
---|---|
Clinician User | |
Reconciling clinicians’ understanding of DE risks and perception of diagnostic uncertainty in specific cases | Cases with multiple DE risk factors (calculated from EHR data) but no diagnostic uncertainty (overconfident clinician) could slip through the cracks, suggesting the need for formal diagnostic safety training and coaching |
Addressing perceptions that additional support for diagnosis (eg, DTO) is redundant with current workflows, rounding structure, and training | Formal diagnostic safety training and coaching can address need for structured checklists to revisit working diagnoses and supplement existing processes |
Understanding how practices for structured problem-based charting in the EHR vary by clinician type (attending, physician trainees, APPs) | Reliably entered structured diagnosis data (principal problem entered at admission) should be used in core intervention components |
Patient User | |
Ensuring completion of diagnostic questionnaires by the patient or caregiver | An incomplete questionnaire itself may be a marker of DE risk |
Enabling multi-modal accessibility for independent or facilitated questionnaire submission | Integration of questionnaires into the patient portal and use of research assistants as digital navigators are essential |
Explaining uncertainty in the diagnostic process without alarming the patient or caregiver | Communicating uncertainty and risk is a delicate balance which cannot be achieved by technology alone |
EHR Considerations | |
Generating useful insights for real-time prediction algorithms that require a variety of EHR data and are often constrained by limited data available in pre-production environments | Running prediction algorithms is optimally accomplished in a live production environment (ie, in “silent mode”), enabling the research team to rapidly test and iterate logic and input variables |
Adequately assessing whether existing and forthcoming EHR functionality could be used in context of the planned initiative | Because of institutional governance constraints, certain favored EHR functionalities (eg, BPA) may only be utilized once the research or pilot phase is complete |
Withdrawal of vendor support for core functionality (eg, hyperlinks navigating users to specific EHR flowsheets, reports, widgets, etc.) | Vendor development roadmaps may limit use of third-party applications by end-users that rely on functionality enabling seamless interaction with the EHR |
DE: diagnostic error; DTO: diagnostic time-out; BPA: best practice advisory; APP: advanced practice provider; EHR: electronic health record.
Our study is limited by its small sample of participants within a single academic medical center and thus may not accurately represent the perspectives of the front-line clinicians and patients who would ultimately use these interventions, either at ours or at other similar institutions. While we considered other types of interventions (Table 1), these were not feasible to implement because of study timeline constraints, institutional governance roadblocks (limited access to certain APIs), and technical challenges. For example, while we explored developing a BPA or other types of “smart notifications” (to notify clinicians about test results discordant with pre-test probabilities) during our study timeline, our enterprise’s governance process for EHR extensions was still emerging. Also, because we were concurrently validating specific EHR data that could serve as risk factors in our prediction algorithm, the types of risk factors included were not exhaustive. In the future, formal validation of the DSC and its configurable algorithm could provide insight into the combination of EHR data elements that would most accurately predict DE during the hospital encounter, which could then be operationalized as part of a BPA. Finally, while our intervention did not address all failure points identified by our prior analysis (such as failure or delay in acting on or following up on test results, or failure or delay in communication between consultants and the primary team), we anticipate that APIs corresponding to these failures will eventually become available for us to utilize.
CONCLUSION
We pilot tested and refined requirements for 3 EHR-integrated interventions to improve diagnostic safety in acute care at our institution and offer lessons learned based on our user-centered design process. Our next steps include assessing the ability of our prediction algorithm to accurately discriminate DE-positive from DE-negative cases confirmed by chart review, optimizing our algorithm, implementing this intervention for general medicine teams, and evaluating impact on DE rates. We will consider further refinements to workflow integration and usability based on user feedback obtained during implementation. We also plan to investigate how other clinical decision support tools (eg, BPAs) can complement our intervention, especially if certain diagnostic process failures (such as misinterpretation of test results) persist.
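The planned discrimination assessment can be sketched in a few lines: the area under the receiver operating characteristic curve (AUROC) is the probability that a randomly chosen DE-positive case receives a higher algorithm score than a randomly chosen DE-negative case. The labels and scores below are invented for illustration; this is not the study’s actual analysis code or data.

```python
# Pure-Python rank-based AUROC for comparing hypothetical algorithm risk
# scores against chart-review-confirmed DE labels (illustrative data only).

def auroc(labels: list[int], scores: list[float]) -> float:
    """Probability that a random DE-positive case scores higher than a
    random DE-negative case (tied scores count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = DE-positive on chart review, 0 = DE-negative (hypothetical labels)
labels = [1, 0, 1, 0, 0, 1, 0, 0]
# Risk scores produced by a prediction algorithm (hypothetical values)
scores = [0.9, 0.2, 0.35, 0.4, 0.1, 0.8, 0.3, 0.6]

print(f"AUROC = {auroc(labels, scores):.3f}")
```

An AUROC of 0.5 indicates no discrimination and 1.0 perfect discrimination; in practice the chart-review labels would come from the adjudicated DE-positive and DE-negative case cohorts described above.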
Contributor Information
Alison Garber, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA.
Pamela Garabedian, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA.
Lindsey Wu, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA.
Alyssa Lam, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA.
Maria Malik, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA.
Hannah Fraser, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA.
Kerrin Bersani, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA.
Nicholas Piniella, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA.
Daniel Motta-Calderon, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA.
Ronen Rozenblum, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA.
Kumiko Schnock, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA.
Jacqueline Griffin, Northeastern University, Boston, Massachusetts, USA.
Jeffrey L Schnipper, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA.
David W Bates, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA; Department of Health Policy and Management, Harvard T. H. Chan School of Public Health, Boston, Massachusetts, USA.
Anuj K Dalal, Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA.
FUNDING
This work was supported by a grant from AHRQ (R18-HS026613). AHRQ had no role in the design or conduct of the study; collection, analysis, or interpretation of data; or preparation or review of the manuscript. The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of AHRQ.
AUTHOR CONTRIBUTIONS
All authors have contributed sufficiently and meaningfully to the conception, design, and conduct of the study; data acquisition, analysis, and interpretation; and/or drafting, editing, and revising the manuscript.
SUPPLEMENTARY MATERIAL
Supplementary material is available at JAMIA Open online.
CONFLICT OF INTEREST STATEMENT
Dr. Dalal reports consulting fees from MayaMD, which makes AI software for patient engagement and decision support. Dr. Rozenblum reports holding equity in Hospitech Respiration Ltd., which makes Airway Management Solutions. Dr. Bates reports grants and personal fees from EarlySense, personal fees from CDI Negev, equity from ValeraHealth, equity from Clew, equity from MDClone, personal fees and equity from AESOP, and grants from IBM Watson Health, outside the submitted work. The authors otherwise report no conflicts of interest.
DATA AVAILABILITY
The data underlying this article are available in the article and in its online supplementary material.
REFERENCES
- 1. Committee on Diagnostic Error in Health Care; Board on Health Care Services; Institute of Medicine; The National Academies of Sciences, Engineering, and Medicine. In: Balogh EP, Miller BT, Ball JR, eds. Improving Diagnosis in Health Care. Washington (DC): National Academies Press; 2015. Summary available from: https://www.ncbi.nlm.nih.gov/books/NBK338596/. doi: 10.17226/21794. [DOI] [PubMed] [Google Scholar]
- 2. Bishop TF, Ryan AM, Casalino LP.. Paid malpractice claims for adverse events in inpatient and outpatient settings. JAMA 2011; 305 (23): 2427–31. [DOI] [PubMed] [Google Scholar]
- 3. Gupta A, Snyder A, Kachalia A, Flanders S, Saint S, Chopra V.. Malpractice claims related to diagnostic errors in the hospital. BMJ Qual Saf 2018; 27 (1): 53–60. [DOI] [PubMed] [Google Scholar]
- 4. Raffel KE, Kantor MA, Barish P, et al. Prevalence and characterisation of diagnostic error among 7-day all-cause hospital medicine readmissions: a retrospective cohort study. BMJ Qual Saf 2020; 29 (12): 971–9. [DOI] [PubMed] [Google Scholar]
- 5. Bergl PA, Taneja A, El-Kareh R, Singh H, Nanchal RS.. Frequency, risk factors, causes, and consequences of diagnostic errors in critically ill medical patients: a retrospective cohort study. Crit Care Med 2019; 47 (11): e902–10. [DOI] [PubMed] [Google Scholar]
- 6. Motta-Calderon D, Lam A, Kumiko S, et al. A preliminary prevalence estimate of diagnostic error in patients hospitalized on general medicine: analysis of a random stratified sample. Abstract published at SHM Converge 2021. Abstract 129. J Hosp Med. https://shmabstracts.org/abstract/preliminary-prevalence-estimate-of-diagnostic-error-in-patients-hospitalized-on-general-medicine-analysis-of-a-random-stratified-sample/. Accessed May 2, 2023. [Google Scholar]
- 7. Konieczny K, Lam A, Motta-Calderon D, et al. Diagnostic process failures associated with diagnostic error in the hospital. Abstract published at SHM Converge 2022. Abstract A2. J Hosp Med. https://shmabstracts.org/abstract/diagnostic-process-failure-points-associated-with-diagnostic-failure-in-the-hospital/. Accessed May 2, 2023.
- 8. Singh H, Mushtaq U, Marinez A, et al. Developing the safer Dx checklist of ten safety recommendations for health care organizations to address diagnostic errors. Jt Comm J Qual Patient Saf 2022; 48 (11): 581–90. [DOI] [PubMed] [Google Scholar]
- 9. Friedman CP, Elstein AS, Wolf FM, et al. Enhancement of clinicians' diagnostic reasoning by computer-based consultation: a multisite study of 2 systems. JAMA 1999; 282 (19): 1851–6. [DOI] [PubMed] [Google Scholar]
- 10. Kostopoulou O, Lionis C, Angelaki A, Ayis S, Durbaba S, Delaney BC.. Early diagnostic suggestions improve accuracy of family physicians: a randomized controlled trial in Greece. Fam Pract 2015; 32 (3): 323–8. [DOI] [PubMed] [Google Scholar]
- 11. Ramnarayan P, Roberts GC, Coren M, et al. Assessment of the potential impact of a reminder system on the reduction of diagnostic errors: a quasi-experimental study. BMC Med Inform Decis Mak 2006; 6: 22. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12. Graber ML. Reaching 95%: decision support tools are the surest way to improve diagnosis now. BMJ Qual Saf 2022; 31 (6): 415–8. [DOI] [PubMed] [Google Scholar]
- 13. Martinez-Franco AI, Sanchez-Mendiola M, Mazon-Ramirez JJ, et al. Diagnostic accuracy in family medicine residents using a clinical decision support system (DXplain): a randomized-controlled trial. Diagnosis (Berl) 2018; 5 (2): 71–6. [DOI] [PubMed] [Google Scholar]
- 14. Dave N, Bui S, Morgan C, Hickey S, Paul CL.. Interventions targeted at reducing diagnostic error: systematic review. BMJ Qual Saf 2022; 31 (4): 297–307. [DOI] [PubMed] [Google Scholar]
- 15. Ranji SR, Thomas EJ.. Research to improve diagnosis: time to study the real world. BMJ Qual Saf 2022; 31 (4): 255–8. [DOI] [PubMed] [Google Scholar]
- 16. Bundy DG, Singh H, Stein RE, et al. The design and conduct of project RedDE: a cluster-randomized trial to reduce diagnostic errors in pediatric primary care. Clin Trials 2019; 16 (2): 154–64. [DOI] [PubMed] [Google Scholar]
- 17. Myung SJ, Kang SH, Phyo SR, Shin JS, Park WB.. Effect of enhanced analytic reasoning on diagnostic accuracy: a randomized controlled study. Med Teach 2013; 35 (3): 248–50. [DOI] [PubMed] [Google Scholar]
- 18. Sherbino J, Kulasegaram K, Howey E, Norman G.. Ineffectiveness of cognitive forcing strategies to reduce biases in diagnostic reasoning: a controlled trial. CJEM 2014; 16 (1): 34–40. [DOI] [PubMed] [Google Scholar]
- 19. O’Sullivan ED, Schofield SJ.. A cognitive forcing tool to mitigate cognitive bias - a randomised control trial. BMC Med Educ 2019; 19 (1): 12. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20. Graber ML, Sorensen AV, Biswas J, et al. Developing checklists to prevent diagnostic error in emergency room settings. Diagnosis (Berl) 2014; 1 (3): 223–31. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21. Huang GC, Kriegel G, Wheaton C, et al. Implementation of diagnostic pauses in the ambulatory setting. BMJ Qual Saf 2018; 27 (6): 492–7. [DOI] [PubMed] [Google Scholar]
- 22. Friedman E, Sainte M, Fallar R.. Taking note of the perceived value and impact of medical student chart documentation on education and patient care. Acad Med 2010; 85 (9): 1440–4. [DOI] [PubMed] [Google Scholar]
- 23. Delvaux N, Piessens V, Burghgraeve T, et al. Clinical decision support improves the appropriateness of laboratory test ordering in primary care without increasing diagnostic error: the ELMO cluster randomized trial. Implement Sci 2020; 15 (1): 100. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24. Abimanyi-Ochom J, Bohingamu Mudiyanselage S, Catchpool M, Firipis M, Wanni Arachchige Dona S, Watts JJ.. Strategies to reduce diagnostic errors: a systematic review. BMC Med Inform Decis Mak 2019; 19 (1): 174. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25. Griffin JA, Carr K, Bersani K, et al. Analyzing diagnostic errors in the acute setting: a process-driven approach. Diagnosis (Berl) 2022; 9 (1): 77–88. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26. Dalal AK, Fuller T, Garabedian P, et al. Systems engineering and human factors support of a system of novel EHR-integrated tools to prevent harm in the hospital. J Am Med Inform Assoc 2019; 26 (6): 553–60. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27. Singh H, Sittig DF.. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf 2015; 24 (2): 103–10. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med 2009; 169 (20): 1881–7. [DOI] [PubMed] [Google Scholar]
- 29. Malik MA, Motta-Calderon D, Piniella N, et al. A structured approach to EHR surveillance of diagnostic error in acute care: an exploratory analysis of two institutionally-defined case cohorts. Diagnosis (Berl) 2022; 9 (4): 446–57. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30. Dalal AK, Piniella N, Fuller TE, et al. Evaluation of electronic health record-integrated digital health tools to engage hospitalized patients in discharge preparation. J Am Med Inform Assoc 2021; 28 (4): 704–12. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31. Fuller TE, Pong DD, Piniella N, et al. Interactive digital health tools to engage patients and caregivers in discharge preparation: implementation study. J Med Internet Res 2020; 22 (4): e15573. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32. Bersani KF, Garabedian P, Espares J, et al. Use, perceived usability, and barriers to implementation of a patient safety dashboard integrated within a vendor EHR. Appl Clin Inform 2020; 11 (1): 34–45. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33. Businger AC, Fuller TE, Schnipper JL, et al. Lessons learned implementing a complex and innovative patient safety learning laboratory project in a large academic medical center. J Am Med Inform Assoc 2020; 27 (2): 301–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34. Fuller TE, Garabedian PM, Lemonias DP, et al. Assessing the cognitive and work load of an inpatient safety dashboard in the context of opioid management. Appl Ergon 2020; 85: 103047. [DOI] [PubMed] [Google Scholar]
- 35. Dalal AK, Schnipper J, Massaro A, et al. A web-based and mobile patient-centered “microblog” messaging platform to improve care team communication in acute care. J Am Med Inform Assoc 2017; 24 (e1): e178–84. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36. Dalal AK, Dykes PC, Collins S, et al. A web-based, patient-centered toolkit to engage patients and caregivers in the acute care setting: a preliminary evaluation. J Am Med Inform Assoc 2016; 23 (1): 80–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37. Mlaver E, Schnipper JL, Boxer RB, et al. User-centered collaborative design and development of an inpatient safety dashboard. Jt Comm J Qual Patient Saf 2017; 43 (12): 676–85. [DOI] [PubMed] [Google Scholar]
- 38. Collins SA, Rozenblum R, Leung WY, et al. Acute care patient portals: a qualitative study of stakeholder perspectives on current practices. J Am Med Inform Assoc 2017; 24 (e1): e9–17. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39. Dalal AK, Bates DW, Collins S.. Opportunities and challenges for improving the patient experience in the acute and postacute care setting using patient portals: the patient’s perspective. J Hosp Med 2017; 12 (12): 1012–6. [DOI] [PubMed] [Google Scholar]
- 40. Roberts TJ, Ringler T, Krahn D, Ahearn E.. The my life, my story program: sustained impact of veterans’ personal narratives on healthcare providers 5 years after implementation. Health Commun 2021; 36 (7): 829–36. [DOI] [PubMed] [Google Scholar]
- 41. Haynes AB, Weiser TG, Berry WR, et al. ; Safe Surgery Saves Lives Study Group. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med 2009; 360 (5): 491–9. [DOI] [PubMed] [Google Scholar]
- 42. Phyo ZH, Harris CM, Singh A, Kotwal S.. Utility of a diagnostic time-out to evaluate an atypical pneumonia. Am J Med 2022; 135 (5): 581–5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43. Ely JW, Graber MA.. Checklists to prevent diagnostic errors: a pilot randomized controlled trial. Diagnosis (Berl) 2015; 2 (3): 163–9. [DOI] [PubMed] [Google Scholar]
- 44. Garber AM, Bersani K, Carr K, et al. Improving patient-provider communication about diagnoses in the acute care setting: an EHR-integrated patient questionnaire. J Hosp Med 2020. https://shmabstracts.org/abstract/improving-patient-provider-communication-about-diagnoses-in-the-acute-care-setting-an-ehr-integrated-patient-questionnaire/. Accessed May 4, 2023. [Google Scholar]
- 45. Enayati M, Sir M, Zhang X, et al. Monitoring diagnostic safety risks in emergency departments: protocol for a machine learning study. JMIR Res Protoc 2021; 10 (6): e24642. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46. Vaghani V, Wei L, Mushtaq U, Sittig DF, Bradford A, Singh H.. Validation of an electronic trigger to measure missed diagnosis of stroke in emergency departments. J Am Med Inform Assoc 2021; 28 (10): 2202–11. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47. Mahajan P, Basu T, Pai CW, et al. Factors associated with potentially missed diagnosis of appendicitis in the emergency department. JAMA Netw Open 2020; 3 (3): e200612. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48. Shenvi EC, El-Kareh R.. Clinical criteria to screen for inpatient diagnostic errors: a scoping review. Diagnosis (Berl) 2015; 2 (1): 3–19. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49. Mohta N, Vaishnava P, Liang C, et al. The effects of a ‘discharge time-out’ on the quality of hospital discharge summaries. BMJ Qual Saf 2012; 21 (10): 885–90. [DOI] [PubMed] [Google Scholar]
- 50. Bergl PA, Zhou Y.. Diagnostic error in the critically ill: a hidden epidemic? Crit Care Clin 2022; 38 (1): 11–25. [DOI] [PubMed] [Google Scholar]
- 51. Gunderson CG, Bilan VP, Holleck JL, et al. Prevalence of harmful diagnostic errors in hospitalised adults: a systematic review and meta-analysis. BMJ Qual Saf 2020; 29 (12): 1008–18. [DOI] [PubMed] [Google Scholar]
- 52. Giardina TD, Haskell H, Menon S, et al. Learning from patients’ experiences related to diagnostic errors is essential for progress in patient safety. Health Affairs (Project Hope) 2018; 37 (11): 1821–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53. Dykes PC, Rozenblum R, Dalal A, et al. Prospective evaluation of a multifaceted intervention to improve outcomes in intensive care: the promoting respect and ongoing safety through patient engagement communication and technology study. Crit Care Med 2017; 45 (8): e806–13. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54. Pronovost PJ, Bo-Linn GW.. Preventing patient harms through systems of care. JAMA 2012; 308 (8): 769–70. [DOI] [PubMed] [Google Scholar]
- 55. Van Decker SG, Bosch N, Murphy J.. Catheter-associated urinary tract infection reduction in critical care units: a bundled care model. BMJ Open Qual 2021; 10 (4): e001534. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56. Cavalcanti AB, Bozza FA, Machado FR, et al. ; Writing Group for the CHECKLIST-ICU Investigators and the Brazilian Research in Intensive Care Network (BRICNet). Effect of a quality improvement intervention with daily round checklists, goal setting, and clinician prompting on mortality of critically ill patients: a randomized clinical trial. JAMA 2016; 315 (14): 1480–90. [DOI] [PubMed] [Google Scholar]
- 57. Carayon P, Hoonakker P, Hundt AS, et al. Application of human factors to improve usability of clinical decision support for diagnostic decision-making: a scenario-based simulation study. BMJ Qual Saf 2020; 29 (4): 329–40. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58. McParland CR, Cooper MA, Johnston B.. Differential diagnosis decision support systems in primary and out-of-hours care: a qualitative analysis of the needs of key stakeholders in Scotland. J Prim Care Community Health 2019; 10: 2150132719829315. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 59. Figueroa JF, Schnipper JL, McNally K, Stade D, Lipsitz SR, Dalal AK.. How often are hospitalized patients and providers on the same page with regard to the patient's primary recovery goal for hospitalization? J Hosp Med 2016; 11 (9): 615–9. [DOI] [PubMed] [Google Scholar]
- 60. Schubart JR, Toran L, Whitehead M, Levi BH, Green MJ.. Informed decision making in advance care planning: concordance of patient self-reported diagnosis with physician diagnosis. Support Care Cancer 2013; 21 (2): 637–41. [DOI] [PubMed] [Google Scholar]
- 61. DesHarnais S, Carter RE, Hennessy W, Kurent JE, Carter C.. Lack of concordance between physician and patient: reports on end-of-life care discussions. J Palliat Med 2007; 10 (3): 728–40. [DOI] [PubMed] [Google Scholar]
- 62. Giardina TD, Choi DT, Upadhyay DK, et al. Inviting patients to identify diagnostic concerns through structured evaluation of their online visit notes. J Am Med Inform Assoc 2022; 29 (6): 1091–100. [DOI] [PMC free article] [PubMed] [Google Scholar]