Scientific Reports
2026 Jan 30;16:6669. doi: 10.1038/s41598-026-37796-1

Professional role and hierarchy shape adherence to electronic alerts for laboratory test ordering

Angela Greco 1,2,3, Maria Luisa Garo 4, Martina Zandonà 5, Luca Clivio 6, Luca Gabutti 3,5
PMCID: PMC12913593  PMID: 41617933

Abstract

Healthcare systems face increasing pressure to reduce costs and improve care quality. Guideline adherence is essential for optimal patient outcomes, yet compliance remains inconsistent. Nudges - behavioral interventions modifying decision environments - have shown potential to enhance adherence, but their effectiveness varies by professional role and organizational factors. This study examines the impact of nudges on laboratory test prescriptions, assessing their effectiveness based on prescriber demographics, professional role, and hierarchical position. Additionally, it explores alert fatigue and prescriber perceptions of these interventions. A dataset of 929,808 laboratory test prescriptions issued between July 2021 and March 2024 across the Swiss EOC hospitals was analyzed. Prescription details, prescriber demographics, and alert outcomes were examined. Monthly adherence rates to alerts and appropriate prescription rates were assessed overall and by professional role. A cross-sectional survey (12-item Likert scale) evaluated prescriber perceptions. Statistical analyses included descriptive statistics, chi-square tests, time series analysis, and exploratory factor analysis (EFA) with varimax rotation. Group differences were analyzed using the Kruskal-Wallis test, Dunn’s procedure, and the Mann-Whitney U test. Of all prescriptions, 12.4% triggered alerts due to non-compliance with guideline-based Minimal Retesting Interval (MRI) criteria, internal standards collaboratively developed by EOC laboratory specialists and clinicians and aligned with international recommendations on laboratory test appropriateness. Among these, 11.9% of orders were canceled following the alert. Alert adherence varied by role, highest among heads of service (19.9%) and lowest among residents (9.6%). Compliance increased with age (p < 0.0001) and experience (p < 0.001) but declined over time (β = -0.08, p = 0.003), suggesting alert fatigue.
Perceived usefulness of alerts was higher among nurses than physicians (p < 0.001). Despite greater adherence, senior physicians found alerts less useful. Professional hierarchy influences alert adherence, and sustaining behavioral change remains challenging. While electronic alerts can improve guideline compliance, alert fatigue limits long-term effectiveness. Optimizing alert design through personalization and context-aware triggers may enhance engagement. These findings provide insights for improving nudge-based interventions in clinical practice.

Supplementary Information

The online version contains supplementary material available at 10.1038/s41598-026-37796-1.

Keywords: Prescriber compliance, Nudges, Choosing Wisely, Electronic health records, Healthcare professionals, Hierarchical influence

Subject terms: Health care, Medical research

Introduction

The rising pressure on healthcare systems - driven by increasing costs and the need to deliver high-value care1 - has brought renewed attention to clinicians’ adherence to evidence-based guidelines, which remains suboptimal in many settings despite its critical importance for patient outcomes and system efficiency2. Among the various strategies proposed to improve guideline compliance, behavioral interventions such as “nudges” have recently gained traction for their ability to influence decision-making without coercion or financial incentives3-6.

Nudges, defined as subtle changes in the decision-making environment that steer behavior while preserving freedom of choice4, have shown promise in promoting appropriate prescribing behaviors and reducing low-value care3,5,6. However, their effectiveness is not uniform: it depends on the context in which they are deployed, including organizational structures, clinical workflows, and professional cultures3,7,11. Individual characteristics such as clinical experience, professional role, and task orientation (e.g., clinical vs. administrative duties) may also affect how prescribers respond to behavioral prompts3,8.

Among contextual variables, professional hierarchy and leadership dynamics have received limited empirical attention in nudge-based interventions. However, implementation science and clinical informatics literature suggest that differences in hierarchical status are associated with varying levels of professional identity, responsibility, and decision-making autonomy, which can shape how clinicians perceive and engage with clinical decision support systems, including alerts and nudges8-10.

More broadly, reviews on guideline implementation highlight that contextual and organizational features - such as institutional culture, team composition, and role expectations - contribute significantly to variability in adherence across settings2,10.

Despite the growing use of nudging strategies, evidence on their impact in laboratory test ordering - particularly in relation to professional hierarchy and alert fatigue - remains scarce.

In our study, we evaluated the effectiveness of a specific form of digital decision support, delivered as a full-screen pop-up during test ordering, requiring user interaction to proceed. While some definitions reserve the term “nudge” for non-interruptive or passive prompts, we adopt a broader interpretation aligned with behavioral economics literature, where a nudge is any intervention that alters the decision-making environment in a predictable way without forbidding options or significantly changing economic incentives10. Our alert fits this broader definition: it does not prevent the test from being ordered, but it introduces a reflective moment aimed at improving appropriateness by leveraging Minimal Retesting Interval (MRI) criteria. From a design perspective, we classify it as “active” because it requires action from the prescriber, and “synchronous” because it appears at the exact time of the decision, integrated into the prescribing workflow. We acknowledge the debate around these definitions and explicitly clarify our position to improve conceptual transparency.

The alerts implemented in this study were based on the Minimal Retesting Interval (MRI), which defines the appropriate time at which a laboratory test can be repeated in order to minimize potentially unnecessary prescriptions.

Our aim was to examine the impact of these nudges on laboratory test prescriptions. We specifically explored whether nudge effectiveness varied based on prescribers’ demographic variables, particularly gender and age, as well as professional factors such as experience (years of practice), professional role (physician vs. non-physician), and hierarchical position (head of service vs. attending physician vs. resident physician). We also compared our results with findings from the scientific literature.

Additionally, we investigated the presence of “alert fatigue” (the exhaustion caused by repeated notifications) and prescribers’ sentiment toward the nudge.

The study addressed three main research dimensions:

(1) Variation in adherence: Does adherence to guideline-based alerts vary according to prescribers’ professional characteristics (role, hierarchical position, and experience) and demographic factors (sex and age)?

(2) Alert fatigue: Does adherence to electronic alerts change over time, suggesting the presence of alert fatigue?

(3) Perception of alerts: How do healthcare professionals perceive electronic alerts in terms of acceptability, usefulness, and their influence on clinical decision-making?

A distinctive aspect of this study is that, while much research has focused on the overall effectiveness of nudges in healthcare, few studies have explored how the effectiveness of such interventions may vary based on the individual prescriber characteristics. Moreover, we found no studies evaluating the impact of hierarchical roles on adherence to nudges in relation to laboratory test prescriptions. This innovative approach allows us to address a gap in the existing literature and provides a more nuanced perspective on the applicability of nudges in diverse healthcare settings.

Furthermore, the breadth of our dataset, which includes over 900,000 laboratory test prescriptions, lends robustness to our findings. The large sample size enables us to draw valid and generalizable conclusions, thereby increasing the practical relevance of the study.

The results obtained could offer crucial insights for healthcare organizations regarding the adoption of nudges in specific organizational contexts and the customization of prescribing policies aimed at improving guideline compliance.

Conducted across multiple hospitals within the Swiss EOC (Ente Ospedaliero Cantonale) network, the study not only provides new evidence on prescribers’ behavior in response to nudges but also suggests potential modifications to internal policies and offers valuable insights for hospital leadership in managing resources and prescribing practices.

Materials and methods

Design and implementation of the electronic alert (nudge)

In this study, we investigated prescribers’ responses to a nudge-based alert system already implemented across all hospitals within the EOC network. The intervention consisted of a full-screen pop-up alert integrated into the computerized physician order entry (CPOE) module of the electronic health record (EHR), specifically within the laboratory-ordering interface. The alert was technically embedded in the clinical workflow and was triggered automatically at the moment of test prescription.

The pop-up was triggered automatically and synchronously when a laboratory test was ordered within the Minimal Retesting Interval (MRI), indicating that a clinically valid result was still available according to institutional appropriateness criteria.

Each MRI was defined through a structured peer discussion process among EOC laboratory specialists and clinicians, following a review of the available literature. These criteria were informed by international recommendations, including those promoted by the “Choosing Wisely” initiative and the Swiss Smarter Medicine campaign, and were previously described in the context of a broader multicentric intervention within the same hospital network12.

The alert displayed a brief explanatory message specifying the reason for inappropriateness and the date of validity of the previous result (Fig. 1).

Fig. 1.

Fig. 1

Example of the interruptive, synchronous alert triggered when a test is ordered within the MRI. The system notifies the prescriber that a clinically valid result is still available and recommends avoiding test repetition. The alert requires an active decision: proceed with the test (“Add anyway”) or cancel the request (“Cancel”).

The alert required an explicit user action to proceed. Prescribers were presented with two options: (i) cancel the test order, thereby adhering to the recommendation, or (ii) confirm the prescription (“add anyway”), overriding the alert. The system did not automatically block the test order; rather, it preserved full clinical autonomy by allowing the prescriber to proceed after active confirmation. The alert could not be dismissed or bypassed without selecting one of the two options, making it interruptive by design and ensuring engagement with the message.

The same alert logic, wording, and user interface were implemented uniformly across all participating hospitals, with no differences in alert behavior or system configuration between sites.

This design was intentionally chosen to introduce a reflective pause at the point of care while maintaining freedom of choice, in line with principles of behavioral economics and choice architecture.

Dataset and cross-sectional survey description

The extracted dataset included 929,808 prescriptions issued between July 2021 and March 2024 across the hospitals of the Swiss EOC (Ente Ospedaliero Cantonale) network. For each prescription, the dataset contained information on: (1) the prescription date, (2) the laboratory test requested, (3) the triggering of an alert in cases of non-compliance with the recommended Minimal Retesting Interval (MRI), (4) the outcome of the alert in case of triggering, and (5) the prescriber’s age, sex, and professional role. The total monthly number of prescriptions, as well as the number of ignored and accepted alerts, was calculated overall and stratified by professional role.

A cross-sectional survey was also conducted as part of the institution’s routine quality improvement activities, aiming to assess prescribers’ attitudes towards the alert system. The survey consisted of 12 items on a 5-point Likert scale (see Supplementary Material - Table A1), exploring perceptions and use of the pop-up alerts, as well as the possible influence of supervisors on alert acceptance or override decisions by nurses and attending physicians.

No missing or inconsistent data were identified in either the prescription dataset or the survey responses. Each entry corresponded to a complete test order event, and the dataset was verified for accuracy and completeness during both the extraction and merging procedures.

Data source

To analyze the nudges that appeared in the electronic health record forms regarding the acceptance of potentially inappropriate prescriptions, data extraction was performed from the local EOC data warehouse using custom and reproducible procedures based on Power BI. The extraction was conducted in a fully anonymized manner, as the focus was solely on the prescription nudges themselves, making patient personal data unnecessary.

Data on prescribers’ declared reactions to a nudge were collected through a voluntary, anonymous survey sent via email to all potential prescribers at EOC. The data were collected and extracted using “EDDDIE”, an internally developed and validated electronic data collection platform accessible on both mobile phones and computers.

The only demographic data collected (age and sex) were used exclusively for stratifying responses and were processed solely in aggregate form. The survey was entirely anonymous, and it was not possible to trace responses back to individual participants.

Statistical analysis

The statistical analysis was carried out with two different data sets. The first data set was used to assess the percentage of adherence to the warning pop-up (i.e., accepted alerts) and the percentage of potentially appropriate prescriptions overall and by professional role. The second data set was used to analyze the responses of physicians and non-physicians to the cross-sectional survey.

For the results of the first data set, descriptive statistics are expressed as relative frequencies and percentages for categorical and dichotomous variables. Possible statistically significant differences between physicians and non-physicians as well as heads of service, resident physicians, attending physicians and nurses with regard to the acceptance/rejection of warning popups were determined using the chi-square test.
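The role comparison described above can be sketched as a chi-square test on a contingency table of accepted versus ignored alerts. The accepted-alert counts below follow Table 1, but the ignored-alert counts are illustrative placeholders, not the study data:

```python
from scipy.stats import chi2_contingency

# Rows: physicians / non-physicians; columns: accepted / ignored alerts.
# Accepted counts follow Table 1; ignored counts are hypothetical.
table = [[9_515, 73_900],
         [4_183, 27_650]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2e}")
```

With samples of this size, even a modest difference in acceptance rates (11.4% vs. 13.1% in this sketch) yields a highly significant test statistic.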

A time series analysis was also conducted to assess the evolution of the percentage of adherence to pop-up warnings and the percentage of potentially appropriate prescriptions over time. The percentage of adherence to pop-up warnings was calculated monthly as the ratio between the total number of accepted alerts and the total number of prescribed examinations that did not comply with the recommended MRI. The percentage of potentially appropriate prescriptions was calculated monthly as the ratio between the total number of prescribed examinations that did not trigger a pop-up warning, and the total number of prescriptions written during the considered period.
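The two monthly ratios can be computed directly from prescription-level records. The sketch below uses pandas with hypothetical column names (`date`, `alert_triggered`, `alert_accepted`) standing in for the actual dataset fields:

```python
import pandas as pd

# Hypothetical prescription-level records; column names are assumptions.
df = pd.DataFrame({
    "date": pd.to_datetime(["2021-07-03", "2021-07-15", "2021-08-02", "2021-08-20"]),
    "alert_triggered": [True, False, True, True],
    "alert_accepted":  [True, False, False, True],  # meaningful only when triggered
})

monthly = df.groupby(pd.Grouper(key="date", freq="MS")).agg(
    alerts=("alert_triggered", "sum"),
    accepted=("alert_accepted", "sum"),
    total=("alert_triggered", "size"),
)
# Adherence: accepted alerts / prescriptions that violated the MRI
monthly["adherence_pct"] = 100 * monthly["accepted"] / monthly["alerts"]
# Appropriateness: prescriptions that triggered no alert / all prescriptions
monthly["appropriate_pct"] = 100 * (monthly["total"] - monthly["alerts"]) / monthly["total"]
print(monthly)
```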

After examining the data graphically to understand the basic characteristics of the time series, assessing stationarity with the Augmented Dickey-Fuller test, and comparing linear and non-linear specifications, we observed no substantial departures from an approximately monotonic pattern, and non-linear specifications did not meaningfully improve model interpretability. Therefore, the trend of the time series was assessed using linear regression analysis. This trend analysis captured the long-term movement in the data, indicating whether there was a statistically significant increase, decrease, or stagnation over time. Given the study objective of assessing long-term directional changes rather than short-term variability, linear regression was considered the most parsimonious and appropriate approach. The assumptions of the linear regression models were checked by examining the residual plot, the Durbin-Watson test, and the Breusch-Pagan test. Correct model specification was assessed using the regression specification error test. The trend analysis was performed overall and by professional role.

The second data set, which referred to the cross-sectional survey, was analyzed to assess possible differences in prescribers’ behaviors and attitudes towards warning pop-ups. For this purpose, an exploratory factor analysis (EFA) was conducted with 7 items developed specifically for this purpose. The factors derived from the individual scales and the interpretation of the factor loadings (saturation) were determined by setting the value 0.500 as the lower limit of acceptability of the individual item. Factor identification was performed after applying a varimax rotation to ensure factor independence13. To understand whether the data were suitable for factor analysis, the Kaiser–Meyer–Olkin test was performed14. After EFA, a score was determined for each identified factor by first calculating the average of the included items (identified as Mean Dimension Score, MDS) and then determining the final score by standardizing the raw score to a range of 0-100 according to the following linear transformation:

S = [(MDS – 1)/range]*100,

where the range was the difference between the highest and lowest possible item score (i.e., 5 − 1 = 4).
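The linear transformation above can be written as a one-line helper; for a 5-point Likert scale the range is 4, so an MDS of 1 maps to 0 and an MDS of 5 maps to 100:

```python
def standardize_score(mds: float, lo: int = 1, hi: int = 5) -> float:
    """Rescale a Mean Dimension Score (MDS) from the raw Likert range to 0-100."""
    return (mds - lo) / (hi - lo) * 100

print(standardize_score(1.0))  # floor of the scale -> 0.0
print(standardize_score(5.0))  # ceiling -> 100.0
print(standardize_score(3.0))  # midpoint -> 50.0
```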

The descriptive statistics of the scores are reported as mean and standard deviation, while categorical or dichotomous variables are reported as relative frequencies and percentages. The distribution of the quantitative variables was tested using the Shapiro-Wilk test. Differences between professional roles were assessed using the Kruskal-Wallis test followed by Dunn’s procedure and the Mann-Whitney U test. Statistical significance was set at 0.05 for the first data set and at 0.01 (p < 0.01) for the cross-sectional survey after Bonferroni correction. All statistical analyses were performed with Stata 18 (StataCorp, College Station, TX, USA).
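The group comparisons can be sketched with scipy on synthetic usefulness scores. Note one substitution: scipy does not provide Dunn’s procedure, so the post hoc step below uses pairwise Mann-Whitney U tests with Bonferroni correction as a stand-in; the group means and sample sizes are illustrative:

```python
from itertools import combinations
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(1)
# Hypothetical 0-100 usefulness scores for four roles (means loosely echo Table 4).
groups = {
    "nurse":     rng.normal(50, 27, 80).clip(0, 100),
    "attending": rng.normal(36, 18, 80).clip(0, 100),
    "resident":  rng.normal(31, 18, 80).clip(0, 100),
    "head":      rng.normal(30, 16, 80).clip(0, 100),
}

h_stat, p_global = kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_global:.2e}")

# Post hoc: pairwise Mann-Whitney U with Bonferroni correction
# (a stand-in for Dunn's procedure, which scipy does not implement).
pairs = list(combinations(groups, 2))
for a, b in pairs:
    u, p = mannwhitneyu(groups[a], groups[b])
    print(f"{a} vs {b}: corrected p = {min(p * len(pairs), 1.0):.4f}")
```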

Ethical approval and informed consent

This study was conducted as part of the institution’s routine quality improvement activities and did not involve patients or the collection of patient clinical data. According to the internal policies of the EOC, such projects do not require formal approval from the ethics committee. The survey was addressed exclusively to EOC personnel. All participants were adequately informed about the purpose of the study, the voluntary nature of their participation, and the anonymous handling of the data prior to completing the questionnaire. The study was conducted in full compliance with institutional regulations and all applicable national guidelines. Data were collected and analyzed anonymously.

Results

Percentage of accepted alerts and appropriate prescriptions over the years

A total of 929,808 laboratory tests were requested during the observation period; 115,248 of the 929,808 (12.4%) test requests violated the appropriateness criteria defined by the MRI and triggered the appearance of the electronic alert (warning pop-up). After the alert appeared, 13,698 requests were cancelled (1.5% of the total tests requested and 11.9% of the alerted tests) (Table 1).

Table 1.

Professional and demographic characteristics: frequencies of prescriptions and accepted alerts.

Frequencies and (%) related to no. of prescriptions (929,808) Frequencies and (%) related to no. of accepted alerts (13,698)
Professional factors
Grouped role, n (%)
Physicians 681,529 (73.3) 9,515 (69.5)
Others 248,279 (26.7) 4,183 (30.5)
Professional role, n (%)
Heads of service 148,319 (16.0) 1,799 (13.1)
Attending physicians 197,069 (21.2) 2,381 (17.4)
Resident physicians 336,141 (36.2) 5,335 (38.9)
Nurses 144,807 (15.6) 2,950 (21.5)
Experience (years), n (%)
< 5 525,300 (56.5) 8,662 (63.2)
5–10 171,146 (18.4) 1,949 (14.2)
10–20 147,528 (15.9) 1,934 (14.1)
> 20 85,834 (9.2) 1,153 (8.4)
Demographic factors
Sex, n (%)
Female 550,614 (59.2) 8,077 (59.0)
Male 379,194 (40.8) 5,621 (41.0)
Age range (years), n (%)
< 40 535,590 (57.6) 8,483 (61.9)
40–60 350,281 (37.7) 4,822 (35.2)
> 60 43,937 (4.7) 393 (2.9)

Among physicians, only 11.4% accepted the alerts; a similar percentage was observed among non-physicians (13.2%) (Fig. 2a). Acceptance of the alerts was higher among heads of service (19.9%) than among attending physicians (12.5%) and resident physicians (9.6%) (Fig. 2b).

Fig. 2.

Fig. 2

Percentage of accepted alerts between (a) non-physicians vs. physicians, and among (b) specific professional roles. The percentage was calculated as the ratio between the number of accepted alerts and the total number of warning pop-ups.

Regarding demographic factors, the analysis did not reveal any differences in compliance with the warning based on the prescriber’s sex. However, compliance increased with age (p < 0.0001) and years of work experience (p < 0.001).

The percentage of accepted alerts decreased significantly over the observed period (β = -0.08, SE: 0.02, 95%CI: -0.13; -0.03; p = 0.003, Fig. 3a), while there was no statistically significant change in the percentage of potentially appropriate prescriptions (Fig. 3b).

Fig. 3.

Fig. 3

(a) Percentage of accepted alerts and (b) percentage of potentially appropriate prescriptions. The green dashed line represents the actual monthly data. The orange line represents the linear trend as calculated after regression analysis. The shaded area represents the 95% confidence interval.

A significant decrease in the monthly percentage of accepted alerts was observed for both physicians (β = -0.04, SE: 0.02, 95%CI: -0.07; -0.01, p = 0.024, Fig. 4a) and non-physicians (β = -0.18, SE: 0.06, 95%CI: -0.30; -0.07, p = 0.003, Fig. 4b). Moreover, a significant decline was observed among attending physicians (β = -0.23, SE: 0.04, 95%CI: -0.32; -0.15, p < 0.001, Fig. 4d) and nursing staff (β = -0.37, SE: 0.08, 95%CI: -0.53; -0.21, p < 0.001, Fig. 4f), as shown in Fig. 4.

Fig. 4.

Fig. 4

Percentage of accepted alerts among (a) Physicians; (b) Non-physicians; (c) Heads of service; (d) Attending physicians; (e) Resident physicians; (f) Nurses.

The percentage of appropriate prescriptions per month showed a different trend between physicians and non-physicians (Table 2): an increasing trend was observed in physicians (β = 0.07, SE: 0.01, 95%CI: 0.04; 0.10, p < 0.001), while a significant decrease was observed in non-physicians (β = -0.13, SE: 0.05, 95%CI: -0.23; -0.03, p = 0.013). An increasing trend was observed among resident physicians (β = 0.10, SE: 0.01, 95%CI: 0.07; 0.13, p < 0.001) and heads of service (β = 0.09, SE: 0.03, 95%CI: 0.04; 0.16, p < 0.001), while a significant decrease was observed among attending physicians (β = -0.05, SE: 0.02, 95%CI: -0.10; -0.01, p = 0.047) and nursing staff (β = -0.16, SE: 0.06, 95%CI: -0.29; -0.04, p = 0.012).

Table 2.

Impact of professional role on the percentage of potentially appropriate prescriptions.

% of potentially appropriate prescriptions
Trend for medical and non-medical staff
Physicians 0.07*** (0.01) [0.04;0.10]
Others -0.13** (0.05) [-0.23;-0.03]
Trend for professional roles
Head of service 0.09*** (0.03) [0.04;0.16]
Resident physicians and interns 0.10*** (0.01) [0.07;0.13]
Attending physicians -0.05** (0.02) [-0.10;-0.01]
Nurses -0.16** (0.06) [-0.29;-0.04]

Results are reported as coefficient (Standard errors) [95% Confidence Interval]. *** p < 0.001; ** p < 0.05.

Cross-sectional survey

Three hundred and twenty-seven physicians and non-physicians took part in the survey (response rate: 28.8%). Overall, 55.8% of participants were physicians, mainly attending physicians (49.2%) and resident physicians (32.8%) (Fig. 5).

Fig. 5.

Fig. 5

Sample distribution by professional roles.

Only about 6% of respondents reported often or always dismissing the alert without reading it. Less than 17% stated that they often or always followed the alert uncritically, while more than 70% reported reassessing the request in light of the information provided by the pop-up.

A comparison between the distribution of professional roles among eligible participants and respondents is presented in Supplementary Figure A1. This comparison allows readers to assess potential differences between respondents and the eligible population.

Exploratory factor analysis

An exploratory factor analysis was performed using principal axis factoring with varimax rotation. Based on the Kaiser criterion and inspection of the scree plot, two factors explaining 54% of the variance (27.9% for the first and 26.1% for the second factor) were extracted.

Table 3 shows the rotated factor loadings for the two-factor solution. Factor loadings above 0.500 were considered significant and highlighted. Each of the factors had high loadings on different groups of variables, which allowed for meaningful interpretation. No significant cross-loadings were observed. Factor 1 (Eigenvalue = 2.30, 27.9% variance) was labelled “Warning Pop-up Acceptability” as it contained items relating to prescribers’ attitudes towards warnings. Factor 2 (Eigenvalue = 1.48, 26.1% variance) was labelled “Warning Pop-up Perceived Usefulness” as it was based on the high loading of items relating to the use of the warning as a medical decision tool to confirm the prescribed examination, review previous examination results or reassess the patient’s condition.

Table 3.

Factor loadings.

Factor 1 (Warning Pop-up Acceptability) Factor 2 (Warning Pop-up Perceived Usefulness)
Q1 I tend to dismiss the alert without reading it 0.5631 0.2477
Q2 I accept the alert and do not prescribe the test that day -0.1441 0.6269
Q3 I check previous results and, if deemed necessary, repeat the test 0.0652 0.8358
Q5 The appearance of the alert prompts me to reassess the patient’s condition 0.1973 0.8117
Q6 I am annoyed by repeatedly seeing the same alert 0.7923 0.0928
Q7 I believe that this alert is incorrect for many tests 0.7017 0.0130
Q12 I do not have time to review every alert 0.6729 0.0837

The factors identified in this way were then used to determine two scores from 0 to 100, labelled “Warning Pop-up Acceptability” and “Warning Pop-up Perceived Usefulness” respectively (Table 4). Overall, the mean scores for the acceptability and usefulness of the warnings were 25.6 ± 17.4 and 41.2 ± 25.0 points respectively. While no statistically significant differences were found between the professional roles in terms of warning pop-up acceptability, the perceived usefulness of the warning pop-up was statistically significantly higher among non-physicians (51.0 ± 28.8 points) than among physicians (33.3 ± 18.1, p < 0.001). A detailed comparison showed that nursing staff rated the usefulness of the warning pop-up for medical decisions statistically significantly higher (nursing staff: 49.5 ± 27.0) than attending physicians (36.3 ± 18.2), resident physicians (30.6 ± 18.4) and heads of service (29.5 ± 15.9) (p < 0.001).

Table 4.

Mean ± Standard deviation of acceptance and usefulness by professional role.

Professional Role Detailed Professional role
Physician Non-physician Head of service Attending physician Resident physician Nurse
Warning Acceptability 25.4 ± 17.4 25.7 ± 17.3 22.7 ± 16.4 29.1 ± 19.1 21.0 ± 14.1 24.8 ± 17.4
Warning Usefulness 33.3 ± 18.1 51.0 ± 28.8 29.5 ± 15.9 36.3 ± 18.2 30.6 ± 18.4 49.5 ± 27.0

Role of the supervisor

For the nursing staff and the attending physicians, the appearance of the warning pop-up often prompted a discussion of the possible new prescription with a supervisor (48.3%). For this subgroup of respondents, the decision to involve a supervisor was only rarely driven by uncertainty about their own abilities and competences to reject the warning: only 21.7% of respondents believed that they lacked the competences and/or abilities to decide whether or not to accept the warning pop-up. The lack of specific skills to reject warning pop-ups was perceived more strongly by non-physicians (41.4%) than by physicians (6.1%) (p < 0.001) (Fig. 6).

Fig. 6.

Fig. 6

Comparison between physicians and non-physicians for the item “I don’t have the skills to reject the Alert”.

Discussion

This study examined the effectiveness of an alert system designed to reduce unnecessary laboratory test prescriptions within a network of hospitals in southern Switzerland. The results show that alert acceptance rates vary based on professional role, with senior physicians (heads of service) demonstrating higher adherence compared to resident physicians.

Moreover, the observed decline in alert adherence over time is consistent with previous literature on alert fatigue in clinical decision support systems, which describes progressive desensitization to repeated alerts across different clinical domains15,16. Our findings contribute to the growing body of literature on EMR-based decision support systems, which are increasingly used to enhance diagnostic efficiency and reduce unnecessary testing17,18. While several studies suggest that electronic alerts can improve adherence to clinical guidelines, the lack of a significant increase in appropriate prescriptions over time in our study suggests that their effectiveness may be transient.

Research on adherence to alerts embedded in electronic medical records (EMRs) has gained increasing attention in healthcare studies. However, we found no studies investigating the influence of gender or hierarchical role on compliance with these reminders.

Only one study has analyzed the impact of prescribers’ age and experience on adherence to nudges in clinical practice19. Younger physicians, generally more familiar with digital technologies, tend to respond more positively to alerts than older colleagues who may be less accustomed to such systems. Our results, however, indicate the opposite trend: older, more experienced physicians and those in higher hierarchical positions were more likely to accept alerts and refrain from ordering additional laboratory tests.

One possible interpretation is that older prescribers often hold higher hierarchical positions and that these physicians may adhere more to alerts because they have greater autonomy in determining test necessity or may feel more aligned with institutional policies. These findings highlight the importance of considering organizational culture and power dynamics when designing nudges to improve clinical practice7. However, these explanations remain speculative and warrant further investigation.

Another key finding is that the percentage of appropriate prescriptions remained stable over time, raising concerns about the long-term impact of alerts on clinical decision-making and reinforcing the interpretation that their effectiveness may be transient.

The survey results revealed two key themes: acceptability and perceived usefulness. Acceptability was low, and alerts were generally viewed as unconvincing or intrusive. Nurses found them useful, whereas physicians - particularly those in higher hierarchical positions - did not share this view. Among physicians, perceived usefulness was negatively correlated with hierarchical role: the higher the position, the less valuable the alert was considered. Interestingly, despite perceiving alerts as less useful, senior physicians showed higher compliance. One possible interpretation is that senior staff, while more disturbed by the interruption, may comply more due to greater autonomy, stronger alignment with institutional policies, or a heightened sense of responsibility.

This may also reflect varying levels of decision-making autonomy across roles. Respondents felt confident in evaluating alerts when they believed they had the necessary expertise. However, when alerts fell outside their competencies, nursing staff preferred to seek guidance from a more authoritative figure. This underscores the central issue of decision-making capacity.

Overall, the survey results indicate that the alert system tends to benefit less experienced staff by prompting action, whereas more experienced professionals in higher hierarchical positions often perceive it as an obstacle or nuisance - an interpretation aligned with previous literature highlighting how professional identity and role-related autonomy influence engagement with clinical decision support systems9. These insights should guide the design of future alerts, ensuring that interventions support clinical decision-making without contributing to fatigue or resistance.

Given the observed decline in alert adherence, project leaders involved in implementing nudging strategies, including hospital administrators and medical directors, should consider not only initial alert design but also ongoing surveillance and iterative refinement of existing alerts20. Continuous monitoring of alert performance may help identify when alerts become less effective and require redesign to remain context-sensitive and clinically meaningful. Possible strategies include:

  • Personalized alerts: tailoring alert frequency and content based on the prescriber’s experience and specialty to enhance engagement.

  • Context-sensitive alerts: integrating additional patient data to generate more specific and relevant recommendations, reducing the perception of alerts as redundant.

  • Periodic reinforcement training: educating physicians on the rationale behind alerts and their role in optimizing patient care may improve long-term adherence.

  • Modifications to alert design: implementing graded or tiered alerts, where only high-risk deviations trigger interruptions, could reduce cognitive overload.

Moreover, hospitals introducing an alert system for computerized laboratory test prescriptions should review their internal policies on test ordering if they aim to reduce testing and improve compliance with clinical guidelines. Specifically, they should determine whether physicians should be allowed to issue oral orders for laboratory tests, with a non-physician then entering the prescription into the electronic medical record. Our findings suggest that this practice may not be advisable, as physicians demonstrated higher compliance with alerts than non-physicians.

Strengths, limitations and future research priorities

A key strength of this study is the combination of a large sample size (over 900,000 prescriptions), which enhances the generalizability of the findings, with a tailored survey, allowing us to integrate behavioral insights into system-level data. The cross-sectional survey provided valuable insights into physicians’ attitudes and perceptions toward alerts.

However, several limitations should be noted:

  • the study was conducted in a single hospital network in Switzerland, which may limit external validity and generalizability;

  • survey participation was voluntary and may be subject to selection bias, as some differences between respondents and the eligible population were observed across professional roles;

  • the study relied on routinely collected electronic prescribing and alert data and did not capture important contextual factors such as patient clinical complexity, perceived urgency of the prescription, or concurrent workload at the time of ordering. These unmeasured factors may have influenced prescribers’ willingness or ability to respond to the alerts, leading to residual confounding;

  • prescriber-level characteristics (e.g., seniority, specialty, prior exposure to cost-containment or stewardship initiatives) and organizational changes occurring during the study period were not systematically measured. Variation in these factors across hospitals, wards, or time may have influenced responses to alerts, limiting causal interpretation of the observed associations;

  • alert fatigue and the overall burden of other clinical decision support messages were not quantified. If prescribers exposed to a higher volume of alerts were less likely to follow recommendations, this may have attenuated the apparent effect of the nudge intervention;

  • the reasons for alert acceptance or override were not collected in qualitative depth;

  • while we analyzed alert adherence, the study did not assess patient outcomes directly.

Future research should adopt a multi-pronged approach. First, studies should expand across multiple institutions to capture contextual differences in alert effectiveness and generalizability. Second, longitudinal designs are needed to assess whether sustained adherence to alerts leads to improved patient outcomes. Third, further research should evaluate redesign strategies aimed at mitigating alert fatigue - particularly whether context-sensitive or tiered alert systems enhance long-term compliance. Finally, qualitative methodologies, such as interviews or focus groups, could provide deeper insights into prescribers’ cognitive processes, motivations, and perceived burden when responding to alerts.

Conclusions

This study provides valuable evidence on the role of professional hierarchy in alert adherence and highlights the challenges of maintaining alert-driven behavioral changes over time. While electronic alerts remain a promising tool for improving guideline adherence, their long-term effectiveness depends on managing alert fatigue and optimizing implementation strategies. Refining alert systems to be more user-centered and context-aware, as well as evaluating internal hospital policies, may enhance their impact on clinical decision-making and healthcare efficiency.

We believe these findings offer practical insights for hospitals seeking to introduce nudges in the form of computerized alerts to reduce inappropriate laboratory test prescriptions. At the same time, further investigation is needed into how demographic and professional factors shape healthcare professionals’ compliance with such alerts.

Supplementary Information

Below is the link to the electronic supplementary material.

Acknowledgements

The authors thank the following colleagues (in alphabetical order): Dr Nicolò Saverio Centemero and Brigitte Waldispühl Suter, Computerised Clinical Processes EOC Service, for coordinating the Alert Introduction; Prof Paolo Ferrari, Head of Medical Area DG EOC, for participating in the design and approving the Alert Introduction Project and the Survey; Alessandro Merler, Head of Business Intelligence EOC, and Federica Fiorentini, for providing data on laboratory tests and the alert; and all the clinicians of the EOC Hospitals for their daily commitment to providing quality care. Without them, this project would not have existed.

Abbreviations

EOC

Ente ospedaliero cantonale

ICT

Information and communication technology

MRI

Minimal retesting interval

Author contributions

AG and LG conceptualized the study’s aims and design, including the survey. AG prepared the original draft and wrote the main manuscript text and, together with MZ, drafted the Discussion and Conclusions. LC extracted the study data and developed the software for survey distribution, ensuring anonymized response analysis. MLG performed the statistical analysis. AG and LG contributed to the revision of the original draft and provided overall supervision. Project administration: AG. All authors have read and agreed to the published version of the manuscript.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Declarations

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Angela Greco, Email: pierangela.greco@eoc.ch.

Luca Gabutti, Email: luca.gabutti@eoc.ch.

References

  • 1. OECD. Health at a Glance 2023: OECD Indicators (OECD Publishing, 2023).
  • 2. Stewart, D. et al. A scoping review of theories used to investigate clinician adherence to clinical practice guidelines. Int. J. Clin. Pharm. 45, 52–63 (2023).
  • 3. Nwafor, O. et al. Effectiveness of nudges as a tool to promote adherence to guidelines in healthcare and their organizational implications: a systematic review. Soc. Sci. Med. 286, 114321 (2021).
  • 4. Thaler, R. H. & Sunstein, C. R. Nudge: The Final Edition (Penguin Books, 2021).
  • 5. Murayama, H., Takagi, Y., Tsuda, H. & Kato, Y. Applying nudge to public health policy: practical examples and tips for designing nudge interventions. Int. J. Environ. Res. Public Health 20, 3962 (2023).
  • 6. O’Reilly-Shah, V. N., Easton, G. S., Jabaley, C. S. & Lynde, G. C. Variable effectiveness of stepwise implementation of nudge-type interventions to improve provider compliance with intraoperative low tidal volume ventilation. BMJ Qual. Saf. 27, 1008–1018 (2018).
  • 7. Lamprell, K., Tran, Y., Arnolda, G. & Braithwaite, J. Nudging clinicians: a systematic scoping review of the literature. J. Eval. Clin. Pract. 27, 175–192 (2021).
  • 8. Hajjaj, F. M., Salek, M. S., Basra, M. K. & Finlay, A. Y. Non-clinical influences on clinical decision-making: a major challenge to evidence-based practice. J. R. Soc. Med. 103, 178–187 (2010).
  • 9. Ackerhans, S., Huynh, T., Kaiser, C. & Schultz, C. Exploring the role of professional identity in the implementation of clinical decision support systems - a narrative review. Implement. Sci. 19, 11. 10.1186/s13012-024-01339-x (2024).
  • 10. Wang, T., Tan, J. B., Liu, X. L. & Zhao, I. Barriers and enablers to implementing clinical practice guidelines in primary care: an overview of systematic reviews. BMJ Open 13, e062158 (2023).
  • 11. Sant’Anna, A., Vilhelmsson, A. & Wolf, A. Nudging healthcare professionals in clinical settings: a scoping review of the literature. BMC Health Serv. Res. 21, 543 (2021).
  • 12. Greco, A., Garo, M. L., Zandonà, M., Clivio, L. & Gabutti, L. A multicentric intervention based on nudging for the reduction of potentially unnecessary laboratory tests: insights from a Swiss hospital network. BMC Health Serv. Res. 25, 13913 (2025).
  • 13. Harman, H. Modern Factor Analysis 3rd edn (University of Chicago Press, 1976).
  • 14. Kaiser, H. F. An index of factorial simplicity. Psychometrika 39, 31–36 (1974).
  • 15. Park, H. et al. Appropriateness of alerts and physicians’ responses with a medication-related clinical decision support system: retrospective observational study. JMIR Med. Inf. 10, e40511 (2022).
  • 16. Ancker, J. S. et al. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Med. Inf. Decis. Mak. 17, 36 (2017).
  • 17. Moja, L. et al. Effectiveness of computerized decision support systems linked to electronic health records: a systematic review and meta-analysis. Am. J. Public Health 104, e12–e22 (2014).
  • 18. Bright, T. J. et al. Effect of clinical decision-support systems: a systematic review. Ann. Intern. Med. 157, 29–43 (2012).
  • 19. Lazzarino, R. et al. Views and uses of sepsis digital alerts in National Health Service trusts in England: qualitative study with health care professionals. JMIR Hum. Factors 11, e56949 (2024).
  • 20. Phansalkar, S. et al. Evaluation of medication alerts in electronic health records for compliance with human factors principles. J. Am. Med. Inf. Assoc. 21, e332–e340 (2014).



Articles from Scientific Reports are provided here courtesy of Nature Publishing Group
