PLOS One. 2022 Sep 1;17(9):e0273800. doi: 10.1371/journal.pone.0273800

Variation in detected adverse events using trigger tools: A systematic review and meta-analysis

Luisa C Eggenschwiler 1,#, Anne W S Rutjes 2,#, Sarah N Musy 1, Dietmar Ausserhofer 1,3, Natascha M Nielen 1, René Schwendimann 1,4, Maria Unbeck 5,6, Michael Simon 1,*
Editor: Mojtaba Vaismoradi
PMCID: PMC9436152  PMID: 36048863

Abstract

Background

Adverse event (AE) detection is a major patient safety priority. However, despite extensive research on AEs, reported incidence rates vary widely.

Objective

This study aimed: (1) to synthesize available evidence on AE incidence in acute care inpatient settings using Trigger Tool methodology; and (2) to explore whether study characteristics and study quality explain variations in reported AE incidence.

Design

Systematic review and meta-analysis.

Methods

To identify relevant studies, we queried PubMed, EMBASE, CINAHL, Cochrane Library and three journals in the patient safety field (last update search 25.05.2022). Eligible publications fulfilled the following criteria: adult inpatient samples; acute care hospital settings; Trigger Tool methodology; focus on specialty of internal medicine, surgery or oncology; published in English, French, German, Italian or Spanish. Systematic reviews and studies addressing adverse drug events or exclusively deceased patients were excluded. Risk of bias was assessed using an adapted version of the Quality Assessment Tool for Diagnostic Accuracy Studies 2. Our main outcome of interest was AEs per 100 admissions. We assessed nine study characteristics plus study quality as potential sources of variation using random regression models. We received no funding and did not register this review.

Results

Screening 6,685 publications yielded 54 eligible studies covering 194,470 admissions. The cumulative AE incidence was 30.0 per 100 admissions (95% CI 23.9–37.5; I2 = 99.7%) and between-study heterogeneity was high, with a prediction interval of 5.4–164.7. Overall, studies’ risk of bias and applicability-related concerns were rated as low. Eight of the nine methodological study characteristics examined, such as patient age and type of hospital, explained some of the variation in reported AE rates, as did study quality.

Conclusion

AE estimates from studies using trigger tool methodology vary widely, and explaining that variation is seriously hampered by poor reporting standards, e.g., of the timeframe of AE detection. Specific reporting guidelines for studies using retrospective medical record review methodology are necessary to strengthen the current evidence base and to help explain between-study variation.

Introduction

For the last two decades, patient safety has become and remained a key issue for health care systems globally [1]. One major driver of patient harm in acute care hospitals is adverse events (AEs): “unintended physical injury resulting from or contributed to by medical care that requires additional monitoring, treatment or hospitalization, or that results in death” [2]. Reported AE rates vary between 7% and 40% [3], increasing health care costs by roughly 10,000 Euros per index admission [4]. Considering that approximately 40% of admissions can be associated with AEs, it is likely that the consequences, both for health care service costs and for patient suffering, are underestimated [4, 5]. While some AEs are hardly avoidable, many are not: studies have indicated that 6%–83% of AEs are deemed preventable [6, 7].

Retrospective medical record reviews are commonly used to collect data about patient safety issues such as AEs. Medical record review methodology uses readily available data [8], identifies more AEs than other methods [9, 10], can be repeated over time, and can target either specific AE types or the overall AE rate [11].

There are several medical record review methods; the most used are the Harvard Medical Practice Study (HMPS) methodology [12], with subsequent modifications [13], and the Global Trigger Tool (GTT) [2]. The GTT, popularised by the Institute for Healthcare Improvement (IHI) in the US, was primarily designed as a measurement tool in clinical practice to estimate and track AE rates over time, extending beyond traditional incident reports, and aiming to measure the effect of safety interventions [14, 15]. The GTT includes a two-step medical record review process. In the first step, knowledgeable hospital staff (mainly nurses) conduct primary reviews to identify potential AEs using predefined triggers as outlined in the GTT guidance. In the second step, physicians verify the reviews from the first step and confirm the findings by consensus. A "trigger" (or clue) is either a specific term or an event in a medical record that could indicate the occurrence of an AE, e.g., readmission within 30 days or a pressure ulcer [2]. The method's main advantage is that it is an open, inductive process, sensitive to detecting various types of AEs [2]. GTT-based studies typically report inter-rater reliability coefficients indicating satisfactory reliability (kappa 0.34 to 0.89; mean: 0.65) [16].

GTT’s triggers are grouped into six modules (e.g., Care Module, Medication Module). Some researchers use all six [17, 18], while most use only those relevant to their setting [19, 20]. Still others either create additional modules (e.g., an Oncology Module [21, 22]) or develop modified versions tailored specifically to their patient populations and care settings [3, 23]. While the latter versions diverge too far from the original GTT to be labelled as such, they are still considered trigger tools (TTs).

When using the GTT outside of the US, even where translation is unnecessary, triggers need to be adapted to reflect local norms (e.g., blood level limits), and medication labels need to be adjusted as appropriate [24, 25]. Although the GTT was developed as a manual method, with the rise of electronic health records the GTT process can be semi- or fully automated [26].

Recent systematic reviews focussing on AEs detected via GTT or TT showed high detection rate variability [3, 6, 26]. Some of this variability may reflect differences in the studies’ methodological features: adaptations of triggers, review processes or patient record selection protocols might influence detection rates, thereby impacting the comparability of detected AEs. Such differences in medical record review methodology have not yet been systematically addressed. Therefore, this study had two aims: (1) to synthesize the evidence on AE incidence in acute care inpatient settings identified via TT methodology; and (2) to explore whether between-study variation in the incidence of AEs can be explained by study characteristics and study quality.

Methods

Design

This systematic review and meta-analysis adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline [27, 28].

Search strategy and information sources

Our search strategy was developed and validated using methods suggested by Hausner et al. [29, 30]. This involves generating a test set, developing and validating a search strategy, and documenting the strategy using a standardized approach [30]. The medical subject headings (MeSH) and keywords for titles and abstracts in our search string were: (trigger[tiab] OR triggers[tiab]) AND (chart[tiab] OR charts[tiab] OR identif*[tiab] OR record[tiab] OR records[tiab]) AND (adverse[tiab] OR medical error[mh]). We used this to query four electronic databases: PubMed, EMBASE, CINAHL and the Cochrane Library. In addition, we hand-searched the top three journals publishing on GTT/TT (BMJ Quality & Safety; Journal of Patient Safety; International Journal for Quality in Health Care) and screened all authors’ personal libraries. In all searches, publication dates were unrestricted. The detailed search strategy used for this review and further explanation of the chosen journals are published in Musy et al. [26]. The index search was conducted in November 2015; five update searches followed in April 2016, July 2017, January 2020, September 2020, and, most recently, on 25 May 2022.

Eligibility criteria

We included publications fulfilling six criteria: 1. publication in English, French, German, Italian or Spanish; 2. adult inpatient samples; 3. acute care hospital settings; 4. medical record review performed manually via GTT or other TT methods; 5. specialties in internal medicine, surgery (including orthopaedics), oncology, or any combination of these (mixed); and 6. outcome data relevant to our study, e.g., the number of detected AEs. Systematic reviews and studies addressing only adverse drug events or exclusively deceased patients were excluded.

Study selection and data extraction

Titles and abstracts were screened independently by two researchers, in a first round for any information on GTT or TT and in a second round against the eligibility criteria. After screening titles and abstracts, two researchers individually assessed the full-text articles for eligibility. To ensure high-quality data entry, data were extracted by one researcher and verified by a second. Information on study characteristics (e.g., number of admissions, setting, patient demographics) and patient outcomes (incidence, preventability) was collected into an online data collection instrument (airtable.com). Where studies authored by members of this report's team were considered, a pair of researchers without direct involvement in the primary study was chosen to abstract and appraise the study. Differences between researchers were then discussed in the research group to reach consensus.

Our main outcome of interest was AEs per 100 admissions ((number of AEs / number of admissions) * 100). In addition, we included three secondary outcomes: AEs per 1,000 inpatient days ((number of AEs / number of inpatient days) * 1,000), the percentage of admissions with one or more AEs (number of admissions with ≥1 AE / number of admissions) and the percentage of preventable AEs (number of preventable AEs / number of AEs). We included nine TT methodology characteristics in our statistical analysis to assess their potential influence on AE detection rates. We categorized these under four headings: setting (type of hospital, type of specialty), patient characteristics (age, length of stay), design (AE definition, timeframe of AE detection, commission/omission) and reviewer (training, experience). Definitions of our variables, our categorisations of the selected characteristics and our rationale for each chosen variable and its categorisation are available in Table 1.
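These outcome measures are simple ratios. As a minimal sketch (the function names are ours and the example counts are hypothetical), they can be computed as:

```python
def aes_per_100_admissions(n_aes, n_admissions):
    """Main outcome: AEs per 100 admissions."""
    return n_aes / n_admissions * 100

def aes_per_1000_patient_days(n_aes, n_patient_days):
    """Secondary outcome: AEs per 1,000 inpatient days."""
    return n_aes / n_patient_days * 1000

def pct_admissions_with_ae(n_admissions_with_ae, n_admissions):
    """Secondary outcome: percentage of admissions with >= 1 AE."""
    return n_admissions_with_ae / n_admissions * 100

def pct_preventable_aes(n_preventable, n_aes):
    """Secondary outcome: percentage of preventable AEs."""
    return n_preventable / n_aes * 100

# Hypothetical example: 60 AEs detected across 240 admissions
print(aes_per_100_admissions(60, 240))  # → 25.0
```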

Table 1. Study characteristics for stratified analysis.

Variable Definition Categorisation Rationale
Setting
    Hospital Type of hospital Academic hospital We reasoned that academic hospitals tend to receive more severely ill or complex patients at higher risk of experiencing AEs when compared to other hospital types [31].
Non-academic hospital
Mixed
Not reported
    Specialty Type of unit Internal medicine We expected the AE incidence to vary by type of specialty. We combined surgical and orthopaedical units as an important fraction of admitted orthopaedical patients was expected to undergo surgical interventions. Mixed = a combination of the three categories mentioned above or combined with other specialties [3, 32, 33].
Surgery and orthopaedics
Oncology
Mixed
Not reported
    Patient characteristics
    Age Mean or median age of patients at admission > 70 years Multi-morbidity and polypharmacy are expected to occur more often in elderly patients. We anticipated patients with multimorbid conditions or polypharmacy to be at higher risk for AEs [31, 33, 34].
≤ 70 years
Not reported
    Length of stay (LOS) Mean or median length of hospital stay LOS > 5 days Patients with longer LOS are at higher risk of experiencing AEs. As the average LOS in the US and many European countries ranges between 4 and 6 days, we chose a cut-off at five days [23, 35, 36].
LOS ≤ 5 days
Not reported
Design
    AE definition IHI AE definition IHI like We expected that differences in the AE definition between studies lead to variation in estimates of AE incidence [33, 37].
Definition: “unintended physical injury resulting from or contributed to by medical care that requires additional monitoring, treatment or hospitalisation, or that results in death” [2]
“Narrower” than IHI GTT
“Wider” than IHI GTT
Not reported
    Timeframe of AE detection Definition of the time period in which AEs were detected. Hospital stay plus time after discharge The frequency of AEs varies depending on the timeframe and setting considered, i.e., before and after index admission [38].
If a study reported AEs only during hospitalisation, it was categorized into the category “hospital stay plus time before admission”.
Hospital stay plus time before admission
Hospital stay plus time before and after admission
Not reported
    Commission and omission Evaluation of commission or omission of care Inclusion of commission only The IHI GTT focuses on AEs related to commission (doing the wrong thing), however in recent years authors have included omissions (failing to do the right thing). Including omissions in medical record reviews may lead to more AEs detected [3].
Inclusion of commission and omission
Not reported
Reviewer
    Training The reviewer’s training before starting with data collection Training plus pilot phase We reasoned that trained and/or experienced reviewers were less likely to miss AEs than untrained or unexperienced reviewers [37, 39, 40].
Training only
No training
Not reported
    Experience The reviewer’s experience in application of the GTT method or similar medical record review method. GTT or medical record review experience
No experience
Not reported

AE, Adverse event; GTT, Global Trigger Tool; IHI, Institute for Healthcare Improvement; LOS, length of stay

Quality assessment

To assess the risk of bias and applicability-related concerns for each included study, we developed and piloted a quality assessment tool (QAT) (see S1 File). This was inspired by the Quality Assessment Tool for Diagnostic Accuracy Studies 2 (QUADAS-2) and by the QAT developed by Musy et al. [41]. While assessing our included studies, we used both QUADAS-2 dimensions: risk of bias and applicability-related concerns [41]. We assessed five domains: 1) patient selection; 2) rater or reviewer; 3) trigger tool method; 4) outcomes; and 5) flow and timing. Following the QUADAS-2 structure, each domain included standardised signalling questions to help researchers rate each of the two dimensions, i.e., risk of bias and applicability-related concerns. Possible dimension classifications were low, high, or unclear. For each study, a QAT was completed by one researcher and reviewed by a second. To reach consensus, differences were discussed between the two and, if necessary, within the research group.

Statistical analysis

To analyse and plot our results we used R version 4.1.3 on Linux [42] with the meta [43] and metafor [44] packages. We determined the number of AEs per 100 admissions and the number of AEs per 1,000 patient days from the reported data. If the number of AEs was not explicitly described, we calculated it from the reported estimate of AEs per 100 admissions and the number of patient admissions. Similarly, the number of patient days could be derived from the total number of AEs and the reported AEs per 1,000 patient days. For studies published by this study's co-authors, or in some cases by their research colleagues, we asked the authors for additional information where samples overlapped, in order to avoid double counting of admissions and AEs [34, 45, 46]. Pooled estimates for AEs per 100 admissions and AEs per 1,000 patient days were derived using a random effects Poisson regression approach within the R metarate function [43, 44]. With the R metaprop function, a random effects logistic regression model was used to obtain summary estimates and confidence intervals (derived by the Wilson method) for the outcomes expressed as the percentage of admissions with ≥1 AE and the percentage of preventable AEs [43].
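The pooling of incidence rates on the log scale can be illustrated with a DerSimonian-Laird random-effects model. This is a simplified stand-in for the random-effects Poisson approach of R's meta::metarate, not the authors' exact model, and all study counts below are hypothetical:

```python
import math

def pool_log_rates(events, denominators, per=100):
    """Illustrative DerSimonian-Laird random-effects pooling of
    incidence rates on the log scale (simplified stand-in for
    meta::metarate's random-effects model; assumes every study
    observed at least one AE)."""
    # Log incidence rates and their approximate variances (1/events)
    y = [math.log(e / d) for e, d in zip(events, denominators)]
    v = [1.0 / e for e in events]
    # Fixed-effect weights, weighted mean and Cochran's Q
    w = [1.0 / vi for vi in v]
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)   # between-study variance
    # Random-effects weights, pooled log rate and 95% CI
    wr = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    se = math.sqrt(1.0 / sum(wr))
    rate = math.exp(mu) * per
    ci = (math.exp(mu - 1.96 * se) * per, math.exp(mu + 1.96 * se) * per)
    return rate, ci, tau2

# Hypothetical studies: 60, 90 and 150 AEs over 200, 300 and 400 admissions
rate, ci, tau2 = pool_log_rates([60, 90, 150], [200, 300, 400])
```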

Subgroup analysis

Heterogeneity was explored by stratified analyses, performed on the main outcome measure (the number of AEs per 100 admissions) to evaluate the influence of the nine study characteristics: type of hospital, type of specialty, patient age, length of stay, AE definition, timeframe of AE detection, commission and omission, reviewer training, and reviewer experience. In addition, we analysed the five elements relating to risk of bias and the three relating to applicability-related concerns. P-values were derived from the likelihood ratio test for model fit (p < 0.05 was considered significant). Furthermore, between-study heterogeneity was evaluated visually and by calculating prediction intervals [47, 48]. To assess the risk of publication bias related to small study size, we created a funnel plot regressing the logit of AEs per 100 admissions on the standard error, assessed the symmetry of the distribution and performed the Egger test [49].
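A prediction interval for the rate in a new study can be approximated from the pooled log rate, its standard error, and the between-study variance tau², following the Higgins-style formula; the function below and its inputs are our illustration, not the exact metafor call:

```python
import math

def prediction_interval_rate(mu_log, se_log, tau2, t_crit, per=100):
    """Approximate 95% prediction interval for the AE rate in a new
    study: on the log scale, mu +/- t * sqrt(tau^2 + SE^2), then
    back-transformed and scaled (e.g., per 100 admissions).
    t_crit should be the 97.5% t quantile with k - 2 degrees of
    freedom, where k is the number of studies (supplied by the
    caller to keep this sketch dependency-free)."""
    half_width = t_crit * math.sqrt(tau2 + se_log ** 2)
    low = math.exp(mu_log - half_width) * per
    high = math.exp(mu_log + half_width) * per
    return low, high

# Hypothetical inputs: pooled log rate log(0.30), SE 0.11,
# tau^2 = 0.8, t quantile ~2.01 (52 degrees of freedom)
low, high = prediction_interval_rate(math.log(0.30), 0.11, 0.8, 2.01)
```

Because tau² enters the half-width directly, a large between-study variance widens the prediction interval far beyond the confidence interval of the pooled estimate, which is exactly the pattern reported in this review.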

Results

The index search and update searches produced 9,780 returns. Deleting duplicates left 6,685 separate entries. The more detailed screening process left 54 studies, which were published in 72 publications [5, 9, 10, 14, 15, 17–22, 24, 34, 37–40, 45, 46, 50–102]. Fig 1 depicts the complete review procedure.

Fig 1. Flow diagram of literature search and included studies.

Fig 1

From [27]. (GTT, Global Trigger Tool; TT, Trigger Tool).

Study characteristics

The 54 included studies were all published between 2009 and 2022, with study periods ranging from one month to six years (Table 2). They were conducted in 26 countries, most in Europe (34 studies, 63%), followed by the US (12 studies, 22%) and other regions (8 studies, 15%).

Table 2. Characteristics of the 54 included studies.

Sorted by continent; within continent alphabetically by country code, and within the country by year.

Study Country Study period number of months Sample size number of records Patient age Length of stay Clinical specialty Type of hospital Timeframe of AE detection
Europe
    Hoffmann 2018 [86] AUT 12 239 ≤70 years > 5 days SURG Academic NR
    Grossmann 2019 [19] CHE 12 240 ≤70 years > 5 days MED Academic Stay + Before
    Gerber 2020 [21] CHE 1.5 224 ≤70 years ≤ 5 days ONCO Mixed Stay + After + Before
    Nowak 2022 [100] CHE 12 150 >70 years > 5 days MED Academic Stay + After + Before
    Lipczak 2011 [69, 88] DNK 6 572 NR NR ONCO NR NR
    von Plessen 2012 [40] DNK 18 NR ≤70 years NR MIX NR NR
    Mattson 2014 [22, 68] DNK 12 240 NR NR ONCO Academic NR
    Bjorn 2017 [52] DNK 6 120 NR NR MIX Academic NR
    Brösterhaus 2020 [82] DEU 2 80 NR > 5 days SURG Academic NR
    Suarez 2014 [63, 91] ESP 72 1,440 NR NR MIX Non-acad NR
    Guzman Ruiz 2015 [64, 67] ESP 12 291 >70 years > 5 days MED Non-acad NR
    Perez Zapata 2015 [53, 66] ESP 12 350 ≤70 years NR SURG Academic NR
    Toribio-Vicente 2018 [94] ESP 12 233 NR NR MIX Academic NR
    Kaibel 2020 [97] ESP 12 251 ≤70 years ≤ 5 days SURG Academic Stay + After
    Menendez-Fraga 2021 [98] ESP 12 240 >70 years > 5 days MED Academic Stay + After
    Perez Zapata 2022 [101] ESP 9 1132 ≤70 years > 5 days SURG Mixed Stay + After
    Mayor 2017 [56] GBR 36 4,833 ≤70 years NR MIX Mixed NR
    Mortaro 2017 [60] ITA 66 513 ≤70 years NR MIX Non-acad NR
    Cihangir 2013 [70] NLD 12 129 NR NR ONCO NR NR
    Deilkas 2015 [24, 81, 92] NOR 34 29,865 NR NR MIX Mixed NR
    Farup 2015 [80] NOR 24 272 ≤70 years > 5 days MED Non-acad NR
    Mevik 2016 [57, 58] NOR 12 1,680 ≤70 years > 5 days MIX Academic Stay + After + Before
    Haukland 2017 [54, 85] NOR 48 812 ≤70 years > 5 days ONCO Non-acad NR
    Deilkas 2017 [61] NOR 12 10,986 NR NR MIX Mixed NR
    Pierdevara 2020 [102] PRT 9 176 >70 years > 5 days MIX Mixed NR
    Schildmeijer 2012 [72] SWE 8 50 ≤70 years ≤ 5 days MIX NR NR
    Unbeck 2013 [37] SWE 12 350 ≤70 years ≤ 5 days SURG Academic Stay + After + Before
    Rutberg 2014 [73] SWE 48 960 ≤70 years > 5 days MIX Academic Stay + After + Before
    Nilsson 2016 [46] SWE 12 3,301 ≤70 years > 5 days SURG Mixed NR
    Rutberg 2016 [34] SWE 24 4,994 >70 years > 5 days SURG Mixed Stay + After + Before
    Deilkas 2017 [61] SWE 12 19,141 NR NR MIX Mixed NR
    Nilsson 2018 [45, 84] SWE 48 56,447 ≤70 years > 5 days MIX Mixed NR
    Hommel 2020 [20, 89, 90] SWE 36 1,998 >70 years > 5 days SURG Mixed Stay + After
    Kelly-Pettersson 2020 [96] SWE 24 163 >70 years > 5 days SURG Academic Stay + After
    Kurutkan 2015 [18] TUR 12 229 ≤70 years ≤ 5 days MIX Academic NR
North America
    Griffin 2008 [83] USA 12 854 NR NR SURG NR NR
    Naessens 2010 [9, 14] USA 25 1,138 NR NR MIX Academic NR
    Landrigan 2010 [39, 77] USA 72 2,341 ≤70 years NR NR Mixed NR
    Classen 2011 [10] USA 1 795 ≤70 years ≤ 5 days NR Mixed NR
    Garrett 2013 [5, 79] USA 36 17,295 ≤70 years ≤ 5 days MIX Mixed NR
    O’Leary 2013 [74] USA 12 250 ≤70 years > 5 days MED Academic NR
    Kennerly 2014 [15, 50, 78] USA 60 9,017 NR NR MIX Non-acad Stay + After + Before
    Mull 2015 [76] USA 4 273 ≤70 years > 5 days MIX Non-acad NR
    Croft 2016 [38, 59] USA 11 296 ≤70 years ≤ 5 days MIX Academic Stay + After + Before
    Lipitz-Snyderman 2017 [55] USA 12 400 ≤70 years NR ONCO Academic NR
    Zadvinskis 2018 [95] USA 1 317 ≤70 years ≤ 5 days MIX Academic NR
    Sekijima 2020 [93] USA 4 300 ≤70 years > 5 days MED Academic NR
Other
    Moraes 2021 [99] BRA 1 220 ≤70 years > 5 days MIX Academic Stay + After
    Xu 2020 [62] CHN 12 240 ≤70 years > 5 days MIX Academic Stay + After
    Hu 2019 [87] CHN 12 480 >70 years > 5 days MIX Academic NR
    Wilson 2012 [71]* EGY 12 1,358* ≤70 years NR NR Mixed NR
JOR 3,769
KEN 1,938
MAR 984
ZAF 931
SDN 3,977
RUN 930
YEM 1,661
    Najjar 2013 [75] ISR 4 640 ≤70 years ≤ 5 days MIX Mixed NR
    Hwang 2014 [17] KOR 6 629 ≤70 years > 5 days NR Academic NR
    Asavaroengchai 2009 [51] THA 1 576 ≤70 years ≤ 5 days MIX Academic NR
    Müller 2016 [65] ZAF 8 160 ≤70 years > 5 days MED Academic Stay + Before

NR, not reported; MED, internal medicine; MIX, mixed; ONCO, oncology; SURG, surgery/orthopaedics; Academic, academic hospital; Non-acad, non-academic hospital; Stay + After, hospital stay plus time after discharge; Stay + Before, hospital stay plus time before admission; Stay + After + Before, hospital stay plus time before and after admission; *After coding these countries A–H, this study's authors linked each number directly to a letter but failed to link each letter to a particular country; it is therefore impossible to reconcile these numbers with the countries listed.

Four studies (7%) did not report their clinical specialties [10, 17, 71, 77]. For those remaining, almost half (24 studies, 44%) involved mixed specialties. One study included no information on the number of included records [40]. The numbers of included records ranged from 50 to 56,447. Overall, we included 194,470 index admissions in our report.

Table 3 illustrates key characteristics of the AE rates. In seven studies, we could not retrieve the main outcome measure, AEs per 100 admissions [14, 24, 40, 55, 70, 80, 94]; for the remaining 47, rates ranged from 2.5 to 140 per 100 admissions. For AEs per 1,000 patient days, the 36 studies (67%) with sufficient data yielded rates ranging from 12.4 to 139.6. In the 48 studies whose data allowed us to calculate percentages of admissions with one or more AEs, these ranged from 7% to 69%. AE preventability percentages, which 37 studies (69%) reported, ranged from 7% to 93%; however, four of these studies provided no relevant raw data [21, 45, 55, 56].

Table 3. Main characteristics of adverse events (AE) rates.

Study AEs per 100 admissions AEs per 1,000 patient days % of admissions with ≥ 1 AE % of preventable AEs out of all AEs
Wilson 2012 [71], Country B 2.5 NR NR 83.9
Wilson 2012 [71], Country F 5.5 NR NR 84.4
Wilson 2012 [71], Country A 6.0 NR NR 72.8
Hwang, 2014 [17] 7.8 12.4 7.2 61.2
Wilson 2012 [71], Country E 8.2 NR NR 55.3
Wilson 2012 [71], Country G 8.3 NR NR 85.7
Mayor, 2017 [56] 8.9 NR 8.0 AEs detected by TT not reported separately
Najjar, 2013 [75] 14.2 NR 14.2 59.3
Nilsson, 2018 [45, 84]$ 14.4 20.2 11.4 Included sample not reported separately
Wilson 2012 [71], Country C 14.5 NR NR 76.9
Wilson 2012 [71], Country D 14.8 NR NR 85.6
Deilkas, 2017 [61] (NOR) 15.2 NR 13.0 NR
Griffin, 2008 [83] 16.2 NR 14.6 NR
Deilkas, 2017 [61] (SWE) 16.8 NR 14.4 NR
Wilson 2012 [71], Country H 18.4 NR NR 93.1
Rutberg, 2016 [34]$ 19.0 27.0 14.7 73.4
Nilsson, 2016 [46]$ 19.9 29.6 15.4 62.5
Zadvinskis, 2018 [95] 21.1 68.9 NR NR
Mattson, 2014 [22, 68] 23.3 37.4 20.8 NR
Landrigan, 2010 [39, 77] 25.1 56.5 18.1 61.9
Mevik, 2016 [57, 58] 26.6 39.3 20.7 NR
Rutberg, 2014 [73]$ 28.2 33.2 20.5 71.2
Xu, 2020 [62] 29.2 32.1 22.5 NR
Kurutkan, 2015 [18] 29.3 80.72 17.0 64.2
Suarez, 2014 [63, 91] 29.4 24.5 23.3 65.8
Schildmeijer, 2012 [72] 30.0 45.3 20.0 60.0
Mortaro, 2017 [60]* 30.4 31.9 21.6 NR
Haukland, 2017 [54, 85] 31.2 37.1 24.3 NR
O’Leary, 2013 [74] 34.4 NR 21.6 7.0
Brösterhaus, 2020 [82]* 36.2 31.6 27.5 NR
Müller, 2016 [65] 36.9 25.8 24.4 47.5
Garrett 2013 [5, 79] 38.0 85.0 26.0 NR
Kennerly 2014 [15, 50, 78] 38.0 61.3 32.1 18.0
Unbeck, 2013 [37]$ 39.1 74.1 28.0 80.3
Mull, 2015 [76] 39.9 52.4 21.6 NR
Asavaroengchai, 2009 [51] 41.0 52.9 24.0 55.9
Classen, 2011 [10] 44.5 NR NR NR
Lipczak, 2011 [69, 88] 45.5 NR NR NR
Perez Zapata, 2015 [53, 66] 46.0 NR 31.7 54.7
Sekijima, 2020 [93]* 46.3 73.7 28.3 NR
Guzman Ruiz, 2015 [64, 67] 51.2 63.0 35.4 32.2
Perez Zapata, 2022 [101] 52.9 NR 31.5 34
Menendez-Fraga, 2021 [98] 57.1 49.8 44.6 49.6
Hoffmann, 2018 [86]* 61.9 31.5 33.5 NR
Kelly-Pettersson, 2020 [96]$ 62.6 104.2 38.0 60.8
Nowak, 2022 [100] 72.0 90.6 42.7 54.6
Gerber, 2020 [21] 75.4 106.6 42.0 Included sample not reported separately
Kaibel, 2020 [97] 76.1 NR 45.8 92.1
Pierdevara, 2020 [102] 80.7 42.1 NR NR
Bjorn, 2017 [52] 81.7 139.6 44.2 NR
Moraes, 2021 [99] 90.5 76.1 40.9 NR
Hommel, 2020 [20, 89, 90]$ 105.9 93.2 58.6 75.9
Croft, 2016 [38, 59] 114.2 NR NR 50.0
Hu, 2019 [87] 127 22.4 68.5 50.8
Grossmann, 2019 [19] 140 95.7 60.0 29.2
Cihangir, 2013 [70]* NR NR 36.4 NR
Deilkas, 2015 [24, 81, 92]* NR NR 15.1 NR
Farup, 2015 [80]* NR NR 14.0 NR
Lipitz-Snyderman, 2017 [55] NR NR 36.0 AEs detected by TT not reported separately
Naessens, 2010 [9, 14] NR NR 27.0 NR
Toribio-Vicente, 2018 [94]* NR NR 20.2 NR
von Plessen, 2012 [40] NR 59.8 25# NR

NR, not reported; TT, Trigger Tool.

* Pooled estimate.

• Mean estimate.

‡ Calculated total number of AEs.

$ Additional outcome data included.

# Original data reported.

Quality assessment

Our quality assessment results (Fig 2) indicate that most risk of bias domains were rated as low (range: 48%–93%). However, the “patient selection” and “reviewer” domains received 15% and 13% high ratings respectively, considerably more than the other domains (range: 2%–6%). In two domains, risk of bias was largely unclear: “reviewer” and “trigger tool method” received this rating in 39% and 30% of cases respectively.

Fig 2. Quality assessment of all included studies.

Fig 2

Assessments are presented in risk of bias and applicability-related concerns. (TT method, Trigger Tool method).

Overall applicability-related concerns were predominantly low (range of domains: 65%–87%). High ratings were most prevalent (17%) in the “patient selection” domain; unclear ratings were most common (28%) for “reviewer”. Quality assessment results on study-level are provided in S1 Table.

Summary estimates from meta-analyses

The forest plot in Fig 3 presents AEs per 100 admissions, ordered by sample size. Forty-five samples from single countries contributed, as well as two multi-country studies (n = 10 samples) [61, 71]. The summary estimate was 30.0 AEs per 100 admissions (95% CI 23.9–37.5). Visual inspection of the forest plot indicated a high level of between-study heterogeneity, confirmed by an I2 of 99.7% (95% CI 99.7–99.7). The prediction interval ranged from 5.4 to 164.7 AEs per 100 admissions. Four studies had exceptionally high detection rates [19, 20, 38, 87]. At the other extreme, seven study samples reported fewer than ten AEs per 100 admissions [17, 56, 71].

Fig 3. Forest plot of adverse events per 100 admissions.

Fig 3

Ordered by sample size [5, 10, 15, 17–22, 34, 37–39, 45, 46, 50–54, 56–69, 71–79, 82–91, 93, 95–102]. In Wilson et al. 2012, countries were not further specified. (AEs, Adverse events; * pooled estimate; • mean estimate; ‡ calculated total number of AEs).

S1–S3 Figs present additional forest plots for the three secondary outcomes: AEs per 1,000 patient days (n = 36 studies), percentage of admissions with AEs (n = 48 studies), and percentage of preventable AEs (n = 33 studies). Our meta-analysis showed a summary estimate of 48.3 AEs per 1,000 patient days (95% CI 40.4–57.8) with a high level of between-study heterogeneity (prediction interval 15.9–147.0). Twenty-six percent of admissions experienced one or more AEs (95% CI 22.0–29.5, prediction interval 7.8–58.3). Within the studies that rated preventability, 62.6% of AEs were classified as preventable (95% CI 54.0–70.5, prediction interval 16.8–93.3). Again, visual inspection indicated high between-study heterogeneity. Funnel plot exploration did not suggest publication bias or other biases related to small study size (P from Egger test = 0.3, S4 Fig).

Effect of study characteristics

Eight of nine analysed study characteristics explained part of the heterogeneity between studies (Fig 4).

Fig 4. Forest plot with stratified analysis of the nine selected study characteristics.

Fig 4

(AE, adverse event; CI, confidence interval; GTT, Global Trigger Tool; IHI, Institute for Healthcare Improvement; N Studies, number of studies).

Regarding type of hospital, academic medical centres (n = 25, 45%) detected more AEs per 100 admissions than non-academic hospitals (respectively 47.1, 95% CI 36.6–60.5 and n = 6, 11%; 35.8, 95% CI 30.8–41.7); however, as the summary estimate for mixed hospital types (n = 21, 38%; 17.0, 95% CI 11.7–24.8) is lower than for either academic or non-academic hospitals, this association is likely confounded by a third feature. For type of clinical specialty, the significant differences between categories were driven by the not reported category (n = 11, 20%), which had fewer AEs per 100 admissions than the others (10.6, 95% CI 6.8–16.7). The internal medicine specialty (n = 7, 13%) had the highest number of AEs per 100 admissions (56.4, 95% CI 40.5–78.5), followed by surgery/orthopaedics (n = 11, 20%; 41.7, 95% CI 29.5–59.0). Oncology (n = 4, 7%) had numbers similar to those of the mixed designation (respectively 40.0, 95% CI 26.2–61.3 vs. 33.5, 95% CI 25.0–44.8).

Older patients (mean > 70 years; n = 8, 15%) had a higher incidence of AEs than younger ones (mean ≤ 70 years; n = 38, 69%), although only eight studies specifically investigated older patients (respectively 63.7, 95% CI 43.6–93.0 and 25.9, 95% CI 19.6–34.2). As with type of clinical specialty, for length of stay the not reported category (n = 20, 36%) had a driving effect, with a mean of 16.7 AEs per 100 admissions (95% CI 11.6–23.9). Longer lengths of stay (mean > 5 days; n = 24, 44%) were associated with slightly higher AE rates than shorter ones (≤ 5 days; n = 11, 20%) (respectively 42.9, 95% CI 32.7–56.4 and 40.8, 95% CI 29.0–57.3).

Almost all studies reported an IHI-like definition of AEs (n = 45, 82%). In the five (9%) that did not report such a definition, AE rates were lower (respectively 29.0, 95% CI 22.4–37.5 and 22.6, 95% CI 13.9–36.8). The remaining five (9%) studies, which applied a wider-than-IHI AE definition, reported clearly higher AE rates (55.3, 95% CI 42.1–72.7).

For two characteristics (timeframe of AE detection, and commission and omission), studies failed to report the relevant information in 69% and 82% of cases, respectively, seriously hampering the analyses. Studies that employed a pilot phase as part of reviewer training (n = 14, 25%) may have had slightly higher detection rates than those with training only (respectively 36.8, 95% CI 26.3–51.5 and n = 31, 56%; 24.9, 95% CI 18.0–34.4). Reviewers with no experience in medical record review (n = 11, 20%) detected fewer AEs than experienced reviewers (n = 16, 29%; respectively 12.4, 95% CI 7.3–21.2 and 40.9, 95% CI 30.6–54.4). Half of all studies (n = 28, 51%) did not report whether their reviewers had such experience; in those cases, the reported AE rates were comparable to those of experienced reviewers (35.8, 95% CI 27.5–46.5).

Effect of risk of bias

Our quality assessment explained some of the variation in AE detection rates (S5 Fig). Eight studies (15%) were rated as having a high risk of bias for patient selection because they included a slightly different patient population than defined in their inclusion criteria; these studies had higher AE rates than studies with a low risk of bias (respectively 61.2 vs. 32.5 AEs per 100 admissions). Studies rated as high or unclear risk of bias for the trigger tool methodology, the outcome category, or flow and timing detected considerably lower AE rates than those rated as low risk.

Similarly, for the trigger tool methodology's applicability-related concerns, unclear ratings were associated with lower AE rates than low ratings (respectively 10.7 vs. 38.7 AEs per 100 admissions).

Discussion

The aim of this systematic review and meta-analysis was to synthesize AE detection rates obtained with TT methodology, to explore variations in AE rates, and to assess study quality in acute care inpatient settings. Reporting of study characteristics varied widely, with non-reporting of individual characteristics ranging from 5% to 82%. The summary estimate was 30 AEs per 100 admissions (95% CI 23.9–37.5) and 48 AEs per 1,000 patient days, which translates into 48 AEs among 200 patients with a length of stay of 5 days each. Twenty-six percent of patients experienced at least one AE related to their hospital stay, and 63% of all AEs were deemed preventable. Eight of nine study characteristics explained some of the variation in reported AE results: studies conducted in academic medical centres or with older populations reported higher AE rates than those in non-academic centres or with younger adult populations. For several risk of bias categories (e.g., outcome, flow and timing), a higher risk of bias was associated with lower AE rates, pointing to an underestimation of AE detection rates in low-quality studies.
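The translation between the two rate denominators is simple arithmetic. A minimal sketch (illustrative only, not the review's analysis code; the function name and cohort figures are ours):

```python
def expected_aes(rate_per_1000_patient_days: float,
                 n_patients: int,
                 mean_los_days: float) -> float:
    """Expected AE count for a cohort, given a person-time incidence rate."""
    patient_days = n_patients * mean_los_days
    return rate_per_1000_patient_days * patient_days / 1000

# 200 patients with a mean length of stay of 5 days = 1,000 patient days,
# so a rate of 48 AEs per 1,000 patient days yields 48 expected AEs.
print(expected_aes(48, 200, 5))  # 48.0
```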

Analysing 17 studies in general inpatients, Hibbert et al. [3] reported AE rates of 8–51 per 100 admissions—a far smaller range than we detected (2.5–140). Our larger range of AE rates could result from our larger study sample (n = 54). Further, their rates of admissions with AEs ranged from 7% to 40%, with a cluster of nine studies falling between 20% and 29% [3]. We found a wider range (7%–69%), but our average (26%) is close to that reported by Hibbert et al. [3].

Schwendimann et al.’s scoping review [32] of multicentre studies reported a median of 10% of admissions with AEs, which is less than half of what we found but congruent with Zanetti et al.’s integrative review, which reported rates between 5% and 11% [7]. Both of those reviews, especially Schwendimann et al.’s, concentrated on studies applying the HMPS methodology rather than TT methodology [7, 32]. One possible reason for the lower rates is that TT methodology requires the research team to include all identified AEs (if a patient has several AEs, all are counted, not only the most severe, as in HMPS) [2, 12].

Interestingly, Panagioti et al.’s meta-analysis [6] found that half of their sample’s AEs were preventable, whereas our meta-analysis indicated an overall preventability of 61%. For an academic hospital with 32,000 annual admissions, a preventable proportion of 61% would mean that roughly 5,000 AEs could be prevented annually, given that effective prevention strategies could be implemented. Panagioti et al.’s confidence intervals and our 95% CI largely overlap despite the difference in selection criteria for inclusion: they included every study that explored AEs’ preventability, and many of those used the HMPS methodology, i.e., targeting more severe AEs [6].
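The "roughly 5,000" projection can be reconstructed as a back-of-the-envelope calculation. The sketch below is our reading of the text, not the authors' published calculation; it assumes the projection is based on the pooled 26% of admissions with at least one AE:

```python
# Hypothetical illustration of the preventability projection.
admissions = 32_000      # annual admissions of the example academic hospital
pct_with_ae = 0.26       # pooled share of admissions with >= 1 AE
pct_preventable = 0.61   # share of AEs deemed preventable

preventable = admissions * pct_with_ae * pct_preventable
print(round(preventable))  # 5075 -- i.e., "roughly 5,000" per year
```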

Our meta-analysis explained part of the broad variation in AE detection rates via the selected study characteristics. One unanticipated finding was that, for many of these characteristics, essential details (e.g., length of stay) were not provided; for those, the not reported group had a dominant influence on AE detection rates. Although four study characteristics (type of specialty, length of stay, timeframe of AE detection, and commission and omission) showed differences between subgroups, these differences were driven by the not reported category, so they explain the variation in AE detection rates only slightly. For all four characteristics, the eight countries from which Wilson et al. [71] drew their samples fell within the not reported category, which might explain some of this result.

Compared to other categories, academic hospitals [34], higher patient age [75], and experienced reviewers [39] all corresponded with more AEs per 100 admissions. Supporting Sharek et al. [39], we found that experienced reviewers were less likely to miss AEs than inexperienced reviewers. These results support many published medical record review studies [23, 31–33]. Nevertheless, the findings need to be interpreted with some caution. Regarding type of specialty, the estimates for internal medicine and surgery/orthopaedics both have wide confidence intervals (respectively 95% CI 40.5–78.5 and 95% CI 29.5–59.0); their higher numbers of AEs per 100 admissions (respectively 56.4 and 41.7) should therefore be questioned, particularly as numerous publications have found that surgical patients typically experience more AEs during their hospital stay than medical patients [6, 37, 103].

Addressing the overall quality of the included studies, we rated both their risk of bias and applicability-related concerns as low. This finding is supported by two earlier systematic reviews. First, Klein et al.’s [104] assessment of 24 of our 66 included publications indicated reasonable overall quality; second, Panagioti et al. [6], whose study sample overlapped somewhat with ours, rated all of the overlapping studies’ risk of bias as low.

Nevertheless, regarding adherence to TT methodology, including data completeness and usability, our meta-analysis clearly showed that our overall study sample’s reporting quality was inadequate. Our QAT explained part of the AE detection rate’s high variability: where risk of bias is rated as high or unclear for “outcome”, “trigger tool method” and “flow and timing”, AE rates are lower than where risk of bias is rated as low. This suggests that insufficient reporting resulted in lower estimates, i.e., the actual AEs per 100 admissions are likely higher than reported here.

Although patterns of publication bias among single-arm studies measuring the incidence of AEs are not well understood, we performed a funnel plot analysis to evaluate any association between small study size and the magnitude of the estimates of AEs per 100 admissions. When an uncontrolled study evaluates the effects and safety of a therapeutic intervention, publication bias may still be expected, in that higher AE estimates may be less likely to be published; if such bias is associated with small study size, funnel plot exploration may detect it. However, the studies included in our review were closer to health services and delivery research, and, as we anticipated [105], we found no obvious signs of publication bias. The vast majority of studies did not report the occurrence of AEs per patient days. Rather than considering this potential selective reporting bias, we reason that the field is insufficiently aware of the advantage of person-time incidence rates over incidence proportions: the former facilitate comparison across studies.
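The advantage of person-time rates can be shown with two hypothetical hospitals that share the same per-day risk but differ in length of stay: the incidence proportion diverges while the person-time rate stays comparable. This is an illustrative sketch with invented numbers, not data from the review:

```python
def incidence_proportion(aes: int, admissions: int) -> float:
    """AEs per admission (denominator ignores exposure time)."""
    return aes / admissions

def person_time_rate(aes: int, admissions: int, mean_los_days: float) -> float:
    """AEs per patient day (denominator accounts for exposure time)."""
    return aes / (admissions * mean_los_days)

# Hospitals A and B: 1,000 admissions each, identical daily risk of
# 0.05 AEs per patient day, but mean lengths of stay of 4 vs. 8 days.
a_aes = round(0.05 * 1000 * 4)   # 200 AEs
b_aes = round(0.05 * 1000 * 8)   # 400 AEs

print(incidence_proportion(a_aes, 1000))   # 0.2
print(incidence_proportion(b_aes, 1000))   # 0.4 -- B looks twice as unsafe
print(person_time_rate(a_aes, 1000, 4))    # 0.05
print(person_time_rate(b_aes, 1000, 8))    # 0.05 -- identical daily risk
```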

Strengths and limitations

Our systematic review was based on an exhaustive search strategy, so it is unlikely that we missed studies that would have changed our findings. During the search process we included two studies that were not identified by our search strategy; each lacked one of the core search terms, such as “adverse” [40] or “record” [86]. We did not systematically search the grey literature, which may have left some studies unidentified.

In the absence of a suitable risk of bias tool for the type of studies included, we adapted an existing QAT to simultaneously address risk of bias and applicability-related concerns. We conducted stratified analyses to evaluate the effects both of study characteristics and of QAT domains. Compared with previous reviews, our systematic review included a considerably higher number of studies and, accordingly, a proportionately higher number of index admissions.

However, we also acknowledge further limitations. One was the exclusion of psychiatric, rehabilitation, emergency department and intensive care settings; we set this criterion to maximize comparability across study settings. Similarly, by excluding studies focussed only on adverse drug events, we avoided skewing AE rates based on single-event results. Despite their benefits, both decisions reduced the final sample size.

Also, although we consider the identification and labelling of adverse events vital, we chose not to address either the types of AEs or their severity. Furthermore, we did not analyse the influence of reported conflicts of interest or funding in the included studies, which could further explain some of the variation. We also acknowledge that future reviews should register their protocols in an open access repository.

Still, the most important limitation is the high level of unreported information, which hampered a full appreciation of the findings. The data did not allow us to run multivariable models in a meaningful manner, so all findings from univariable analyses need to be interpreted with caution, as we cannot exclude that some of the observed associations, such as the effect of type of hospital, are confounded. For future studies of AEs via retrospective medical record review, irrespective of the detection method used, the certainty of the evidence base would benefit from the standard use of a dedicated reporting guideline; such a guideline is currently lacking for the type of studies included here.

Conclusion

Based on our analyses of 54 studies using TT methodology, we found an overall incidence of 30.0 AEs per 100 admissions, affecting 26% of patients. Of these AEs, we estimated that 63% were preventable, indicating a high potential to improve patient safety. However, incomplete reporting and high levels of statistical heterogeneity limit these estimates’ reliability.

Of the nine TT study characteristics evaluated, our analyses indicate that eight explained part of the wide variation in AE incidence estimates. For four of these (type of specialty, length of stay, timeframe of AE detection, and commission and omission), most of the variation was driven by the not reported category. For two characteristics (timeframe of AE detection, and commission and omission), studies failed to report the methodological information in 69% and 82% of cases, respectively.

The reporting of TT studies clearly needs improvement. To enhance comparability, we recommend the development and implementation of a reporting checklist, accompanied by a guidance document, specifically for studies using retrospective medical record review methods for AE detection.

Supporting information

S1 Checklist. PRISMA 2020 checklist.

(DOCX)

S1 File. Quality assessment tool template.

(PDF)

S1 Table. Assessments of risk of bias and applicability-related concerns.

(PDF)

S1 Fig. Forest plot of AEs per 1000 patient days.

* = pooled estimate, • = mean estimate, ‡ = calculated total number of AEs, ~ = calculated total number of patient days [5, 15, 17–22, 34, 37, 39, 40, 45, 46, 50–52, 54, 57, 58, 60, 62–65, 67, 68, 72, 73, 76–79, 82, 84–87, 89–91, 93, 95, 96, 98–100, 102].

(TIF)

S2 Fig. Forest plot percentage of admissions with at least one adverse event (AE).

CI, confidence interval; * = pooled estimate, • = mean estimate, + = calculated total number of admissions with ≥ 1 AE [5, 9, 14, 15, 17–22, 24, 34, 37, 39, 45, 46, 50–58, 60–68, 70, 72–87, 89–94, 96–101].

(TIF)

S3 Fig. Forest plot percentage of preventable adverse events (AEs).

CI, confidence interval; * = pooled estimate, • = mean estimate, ¢ = calculated number of preventable AEs [15, 17–20, 34, 37–39, 46, 50, 51, 53, 59, 63–67, 71–75, 77, 78, 87, 89–91, 96–98, 100, 101].

(TIF)

S4 Fig. Funnel plot for AEs per 100 admissions [5, 10, 15, 17–22, 34, 37–39, 45, 46, 50–54, 56–69, 71–79, 82–91, 93, 95–102].

(TIF)

S5 Fig. Forest plot with stratified analysis of the risk of bias and applicability-related concerns.

AE, adverse events; N studies, number of studies; CI, confidence interval [5, 10, 15, 17–22, 34, 37–39, 45, 46, 50–54, 56–69, 71–79, 82–91, 93, 95–102].

(TIF)

Acknowledgments

The authors would like to thank Chris Shultis for the editing of this manuscript.

Data Availability

All data files are available from https://doi.org/10.5281/zenodo.4892518.

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1.Institute of Medicine. To Err is Human: Building a Safer Health System. Kohn LT, Corrigan JM, Donaldson MS, editors. Washington (DC): National Academies Press; 2000. [PubMed] [Google Scholar]
  • 2.Griffin F, Resar R. IHI Global Trigger Tool for Measuring Adverse Events (Second edition). IHI Innovation Series white paper. Cambridge, Massachusetts: Institute for Healthcare Improvement; 2009. [Google Scholar]
  • 3.Hibbert PD, Molloy CJ, Hooper TD, Wiles LK, Runciman WB, Lachman P, et al. The application of the Global Trigger Tool: a systematic review. Int J Qual Health Care. 2016;28(6):640–9. Epub 2016/09/25. doi: 10.1093/intqhc/mzw115 [DOI] [PubMed] [Google Scholar]
  • 4.Kjellberg J, Wolf RT, Kruse M, Rasmussen SR, Vestergaard J, Nielsen KJ, et al. Costs associated with adverse events among acute patients. BMC Health Serv Res. 2017;17(1):651. Epub 2017/09/15. doi: 10.1186/s12913-017-2605-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Adler L, Yi D, Li M, McBroom B, Hauck L, Sammer C, et al. Impact of Inpatient Harms on Hospital Finances and Patient Clinical Outcomes. J Patient Saf. 2018;14(2):67–73. Epub 2015/03/25. doi: 10.1097/PTS.0000000000000171 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Panagioti M, Khan K, Keers RN, Abuzour A, Phipps D, Kontopantelis E, et al. Prevalence, severity, and nature of preventable patient harm across medical care settings: systematic review and meta-analysis. BMJ. 2019;366:l4185. Epub 2019/07/19. doi: 10.1136/bmj.l4185 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Zanetti ACB, Gabriel CS, Dias BM, Bernardes A, Moura AA, Gabriel AB, et al. Assessment of the incidence and preventability of adverse events in hospitals: an integrative review. Rev Gaucha Enferm. 2020;41:e20190364. Epub 2020/07/16. doi: 10.1590/1983-1447.2020.20190364 [DOI] [PubMed] [Google Scholar]
  • 8.Thomas EJ, Petersen LA. Measuring errors and adverse events in health care. J Gen Intern Med. 2003;18(1):61–7. Epub 2003/01/22. doi: 10.1046/j.1525-1497.2003.20147.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Naessens JM, Campbell CR, Huddleston JM, Berg BP, Lefante JJ, Williams AR, et al. A comparison of hospital adverse events identified by three widely used detection methods. Int J Qual Health Care. 2009;21(4):301–7. Epub 2009/07/21. doi: 10.1093/intqhc/mzp027 [DOI] [PubMed] [Google Scholar]
  • 10.Classen DC, Resar R, Griffin F, Federico F, Frankel T, Kimmel N, et al. ’Global trigger tool’ shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood). 2011;30(4):581–9. Epub 2011/04/08. doi: 10.1377/hlthaff.2011.0190 [DOI] [PubMed] [Google Scholar]
  • 11.Vincent C. Patient Safety: John Wiley & Sons, Ltd; 2010. [Google Scholar]
  • 12.Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324(6):370–6. Epub 1991/02/07. doi: 10.1056/NEJM199102073240604 [DOI] [PubMed] [Google Scholar]
  • 13.Wilson RM, Runciman WB, Gibberd RW, Harrison BT, Newby L, Hamilton JD. The Quality in Australian Health Care Study. Med J Aust. 1995;163(9):458–71. Epub 1995/11/06. doi: 10.5694/j.1326-5377.1995.tb124691.x [DOI] [PubMed] [Google Scholar]
  • 14.Naessens JM, O’Byrne TJ, Johnson MG, Vansuch MB, McGlone CM, Huddleston JM. Measuring hospital adverse events: assessing inter-rater reliability and trigger performance of the Global Trigger Tool. Int J Qual Health Care. 2010;22(4):266–74. Epub 2010/06/11. doi: 10.1093/intqhc/mzq026 [DOI] [PubMed] [Google Scholar]
  • 15.Good VS, Saldana M, Gilder R, Nicewander D, Kennerly DA. Large-scale deployment of the Global Trigger Tool across a large hospital system: refinements for the characterisation of adverse events to support patient safety learning opportunities. BMJ Qual Saf. 2011;20(1):25–30. Epub 2011/01/14. doi: 10.1136/bmjqs.2008.029181 [DOI] [PubMed] [Google Scholar]
  • 16.Hanskamp-Sebregts M, Zegers M, Vincent C, van Gurp PJ, de Vet HC, Wollersheim H. Measurement of patient safety: a systematic review of the reliability and validity of adverse event detection with record review. BMJ Open. 2016;6(8):e011078. Epub 2016/08/24. doi: 10.1136/bmjopen-2016-011078 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Hwang JI, Chin HJ, Chang YS. Characteristics associated with the occurrence of adverse events: a retrospective medical record review using the Global Trigger Tool in a fully digitalized tertiary teaching hospital in Korea. J Eval Clin Pract. 2014;20(1):27–35. Epub 2013/07/31. doi: 10.1111/jep.12075 [DOI] [PubMed] [Google Scholar]
  • 18.Kurutkan MN, Usta E, Orhan F, Simsekler MC. Application of the IHI Global Trigger Tool in measuring the adverse event rate in a Turkish healthcare setting. Int J Risk Saf Med. 2015;27(1):11–21. Epub 2015/03/15. doi: 10.3233/JRS-150639 [DOI] [PubMed] [Google Scholar]
  • 19.Grossmann N, Gratwohl F, Musy SN, Nielen NM, Simon M, Donze J. Describing adverse events in medical inpatients using the Global Trigger Tool. Swiss Med Wkly. 2019;149:w20149. Epub 2019/11/11. doi: 10.4414/smw.2019.20149 [DOI] [PubMed] [Google Scholar]
  • 20.Hommel A, Magneli M, Samuelsson B, Schildmeijer K, Sjostrand D, Goransson KE, et al. Exploring the incidence and nature of nursing-sensitive orthopaedic adverse events: A multicenter cohort study using Global Trigger Tool. Int J Nurs Stud. 2020;102:103473. Epub 2019/12/07. doi: 10.1016/j.ijnurstu.2019.103473 [DOI] [PubMed] [Google Scholar]
  • 21.Gerber A, Da Silva Lopes A, Szüts N, Simon M, Ribordy-Baudat V, Ebneter A, et al. Describing adverse events in Swiss hospitalized oncology patients using the Global Trigger Tool. Health Sci Rep. 2020;3(2):e160. Epub 2020/05/15. doi: 10.1002/hsr2.160 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Mattsson TO, Knudsen JL, Brixen K, Herrstedt J. Does adding an appended oncology module to the Global Trigger Tool increase its value? Int J Qual Health Care. 2014;26(5):553–60. Epub 2014/08/01. doi: 10.1093/intqhc/mzu072 [DOI] [PubMed] [Google Scholar]
  • 23.Unbeck M, Lindemalm S, Nydert P, Ygge BM, Nylen U, Berglund C, et al. Validation of triggers and development of a pediatric trigger tool to identify adverse events. BMC Health Serv Res. 2014;14:655. Epub 2014/12/22. doi: 10.1186/s12913-014-0655-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Deilkas ET, Bukholm G, Lindstrom JC, Haugen M. Monitoring adverse events in Norwegian hospitals from 2010 to 2013. BMJ Open. 2015;5(12):e008576. Epub 2016/01/01. doi: 10.1136/bmjopen-2015-008576 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Institute for Healthcare Improvement. Aktives Messinstrument der Patientensicherheit–das IHI Global Trigger Tool: Projekt-Version. Cambridge: Institute for Healthcare Improvement (IHI); 2009. [Google Scholar]
  • 26.Musy SN, Ausserhofer D, Schwendimann R, Rothen HU, Jeitziner MM, Rutjes AW, et al. Trigger Tool-Based Automated Adverse Event Detection in Electronic Health Records: Systematic Review. J Med Internet Res. 2018;20(5):e198. Epub 2018/06/01. doi: 10.2196/jmir.9901 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. Epub 2021/03/31. doi: 10.1136/bmj.n71 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Page MJ, Moher D, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews. BMJ. 2021;372:n160. Epub 2021/03/31. doi: 10.1136/bmj.n160 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Hausner E, Guddat C, Hermanns T, Lampert U, Waffenschmidt S. Development of search strategies for systematic reviews: validation showed the noninferiority of the objective approach. J Clin Epidemiol. 2015;68(2):191–9. Epub 2014/12/04. doi: 10.1016/j.jclinepi.2014.09.016 [DOI] [PubMed] [Google Scholar]
  • 30.Hausner E, Waffenschmidt S, Kaiser T, Simon M. Routine development of objectively derived search strategies. Syst Rev. 2012;1:19. Epub 2012/05/17. doi: 10.1186/2046-4053-1-19 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Zegers M, de Bruijne MC, Wagner C, Hoonhout LH, Waaijman R, Smits M, et al. Adverse events and potentially preventable deaths in Dutch hospitals: results of a retrospective patient record review study. Qual Saf Health Care. 2009;18(4):297–302. Epub 2009/08/05. doi: 10.1136/qshc.2007.025924 [DOI] [PubMed] [Google Scholar]
  • 32.Schwendimann R, Blatter C, Dhaini S, Simon M, Ausserhofer D. The occurrence, types, consequences and preventability of in-hospital adverse events—a scoping review. BMC Health Serv Res. 2018;18(1):521. Epub 2018/07/06. doi: 10.1186/s12913-018-3335-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Unbeck M. Evaluation of retrospective patient record review as a method to identify patient safety and quality information in orthopaedic care 2012. Available from: https://openarchive.ki.se/xmlui/handle/10616/40941. [Google Scholar]
  • 34.Rutberg H, Borgstedt-Risberg M, Gustafson P, Unbeck M. Adverse events in orthopedic care identified via the Global Trigger Tool in Sweden—implications on preventable prolonged hospitalizations. Patient Saf Surg. 2016;10:23. Epub 2016/11/02. doi: 10.1186/s13037-016-0112-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.OECD. Length of hospital stay (indicator) 2021 [cited 2021 03.01.]. Available from: https://data.oecd.org/healthcare/length-of-hospital-stay.htm.
  • 36.Kable AK, Gibberd RW, Spigelman AD. Adverse events in surgical patients in Australia. Int J Qual Health Care. 2002;14(4):269–76. Epub 2002/08/31. doi: 10.1093/intqhc/14.4.269 [DOI] [PubMed] [Google Scholar]
  • 37.Unbeck M, Schildmeijer K, Henriksson P, Jurgensen U, Muren O, Nilsson L, et al. Is detection of adverse events affected by record review methodology? an evaluation of the "Harvard Medical Practice Study" method and the "Global Trigger Tool". Patient Saf Surg. 2013;7(1):10. Epub 2013/04/17. doi: 10.1186/1754-9493-7-10 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Croft LD, Liquori ME, Ladd J, Day HR, Pineles L, Lamos EM, et al. Frequency of Adverse Events Before, During, and After Hospital Admission. South Med J. 2016;109(10):631–5. Epub 2016/10/06. doi: 10.14423/SMJ.0000000000000536 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Sharek PJ, Parry G, Goldmann D, Bones K, Hackbarth A, Resar R, et al. Performance characteristics of a methodology to quantify adverse events over time in hospitalized patients. Health Serv Res. 2011;46(2):654–78. Epub 2010/08/21. doi: 10.1111/j.1475-6773.2010.01156.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.von Plessen C, Kodal AM, Anhoj J. Experiences with global trigger tool reviews in five Danish hospitals: an implementation study. BMJ Open. 2012;2(5). Epub 2012/10/16. doi: 10.1136/bmjopen-2012-001324 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529–36. Epub 2011/10/19. doi: 10.7326/0003-4819-155-8-201110180-00009 [DOI] [PubMed] [Google Scholar]
  • 42.R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2022. Available from: https://www.R-project.org/. [Google Scholar]
  • 43.Balduzzi S, Rucker G, Schwarzer G. How to perform a meta-analysis with R: a practical tutorial. Evid Based Ment Health. 2019;22(4):153–60. Epub 2019/09/30. doi: 10.1136/ebmental-2019-300117 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Viechtbauer W. Conducting Meta-Analyses in R with the metafor Package. Journal of Statistical Software. 2010;36(3):1–48. doi: 10.18637/jss.v036.i03 [DOI] [Google Scholar]
  • 45.Nilsson L, Borgstedt-Risberg M, Soop M, Nylen U, Alenius C, Rutberg H. Incidence of adverse events in Sweden during 2013–2016: a cohort study describing the implementation of a national trigger tool. BMJ Open. 2018;8(3):e020833. Epub 2018/04/01. doi: 10.1136/bmjopen-2017-020833 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Nilsson L, Risberg MB, Montgomery A, Sjodahl R, Schildmeijer K, Rutberg H. Preventable Adverse Events in Surgical Care in Sweden: A Nationwide Review of Patient Notes. Medicine (Baltimore). 2016;95(11):e3047. Epub 2016/03/18. doi: 10.1097/MD.0000000000003047 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Higgins JP, Thompson SG, Spiegelhalter DJ. A re-evaluation of random-effects meta-analysis. J R Stat Soc Ser A Stat Soc. 2009;172(1):137–59. Epub 2009/04/22. doi: 10.1111/j.1467-985X.2008.00552.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.IntHout J, Ioannidis JP, Rovers MM, Goeman JJ. Plea for routinely presenting prediction intervals in meta-analysis. BMJ Open. 2016;6(7):e010247. Epub 2016/07/14. doi: 10.1136/bmjopen-2015-010247 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Egger M, Davey Smith G, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997;315(7109):629–34. Epub 1997/10/06. doi: 10.1136/bmj.315.7109.629 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Kennerly DA, Kudyakov R, da Graca B, Saldana M, Compton J, Nicewander D, et al. Characterization of adverse events detected in a large health care delivery system using an enhanced global trigger tool over a five-year interval. Health Serv Res. 2014;49(5):1407–25. Epub 2014/03/19. doi: 10.1111/1475-6773.12163 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Asavaroengchai S, Sriratanaban J, Hiransuthikul N, Supachutikul A. Identifying adverse events in hospitalized patients using global trigger tool in Thailand. Asian Biomedicine. 2009;3(5):545–50. [Google Scholar]
  • 52.Bjorn B, Anhoj J, Ostergaard M, Kodal AM, von Plessen C. Test-Retest Reliability of an Experienced Global Trigger Tool Review Team. J Patient Saf. 2017. Epub 2017/10/13. doi: 10.1097/PTS.0000000000000433 [DOI] [PubMed] [Google Scholar]
  • 53.Perez Zapata AI, Gutierrez Samaniego M, Rodriguez Cuellar E, Gomez de la Camara A, Ruiz Lopez P. [Comparison of the "Trigger" tool with the minimum basic data set for detecting adverse events in general surgery]. Rev Calid Asist. 2017;32(4):209–14. Epub 2017/03/21. doi: 10.1016/j.cali.2017.01.001 [DOI] [PubMed] [Google Scholar]
  • 54.Haukland EC, von Plessen C, Nieder C, Vonen B. Adverse events in hospitalised cancer patients: a comparison to a general hospital population. Acta Oncol. 2017;56(9):1218–23. Epub 2017/04/06. doi: 10.1080/0284186X.2017.1309063 [DOI] [PubMed] [Google Scholar]
  • 55.Lipitz-Snyderman A, Classen D, Pfister D, Killen A, Atoria CL, Fortier E, et al. Performance of a Trigger Tool for Identifying Adverse Events in Oncology. J Oncol Pract. 2017;13(3):e223–e30. Epub 2017/01/18. doi: 10.1200/JOP.2016.016634 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Mayor S, Baines E, Vincent C, Lankshear A, Edwards A, Aylward M, et al. Measuring harm and informing quality improvement in the Welsh NHS: the longitudinal Welsh national adverse events study. Health Serv Deliv Res. 2017;5(9). doi: 10.3310/hsdr05090 [DOI] [PubMed] [Google Scholar]
  • 57.Mevik K, Griffin FA, Hansen TE, Deilkas ET, Vonen B. Is inter-rater reliability of Global Trigger Tool results altered when members of the review team are replaced? Int J Qual Health Care. 2016;28(4):492–6. Epub 2016/06/11. doi: 10.1093/intqhc/mzw054 [DOI] [PubMed] [Google Scholar]
  • 58.Mevik K, Griffin FA, Hansen TE, Deilkas ET, Vonen B. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes. BMJ Open. 2016;6(4):e010700. Epub 2016/04/27. doi: 10.1136/bmjopen-2015-010700 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Croft LD, Liquori M, Ladd J, Day H, Pineles L, Lamos E, et al. The Effect of Contact Precautions on Frequency of Hospital Adverse Events. Infect Control Hosp Epidemiol. 2015;36(11):1268–74. Epub 2015/08/19. doi: 10.1017/ice.2015.192 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Mortaro A, Moretti F, Pascu D, Tessari L, Tardivo S, Pancheri S, et al. Adverse Events Detection Through Global Trigger Tool Methodology: Results From a 5-Year Study in an Italian Hospital and Opportunities to Improve Interrater Reliability. J Patient Saf. 2017. Epub 2017/06/10. doi: 10.1097/PTS.0000000000000381 [DOI] [PubMed] [Google Scholar]
  • 61.Deilkas ET, Risberg MB, Haugen M, Lindstrom JC, Nylen U, Rutberg H, et al. Exploring similarities and differences in hospital adverse event rates between Norway and Sweden using Global Trigger Tool. BMJ Open. 2017;7(3):e012492. Epub 2017/03/23. doi: 10.1136/bmjopen-2016-012492 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Xu XD, Yuan YJ, Zhao LM, Li Y, Zhang HZ, Wu H. Adverse Events at Baseline in a Chinese General Hospital: A Pilot Study of the Global Trigger Tool. J Patient Saf. 2020;16(4):269–73. Epub 2016/09/10. doi: 10.1097/PTS.0000000000000329 [DOI] [PubMed] [Google Scholar]
  • 63.Suarez C, Menendez MD, Alonso J, Castano N, Alonso M, Vazquez F. Detection of adverse events in an acute geriatric hospital over a 6-year period using the Global Trigger Tool. J Am Geriatr Soc. 2014;62(5):896–900. Epub 2014/04/05. doi: 10.1111/jgs.12774 [DOI] [PubMed] [Google Scholar]
  • 64.Guzman Ruiz O, Perez Lazaro JJ, Ruiz Lopez P. [Performance and optimisation of a trigger tool for the detection of adverse events in hospitalised adult patients]. Gac Sanit. 2017;31(6):453–8. Epub 2017/05/27. doi: 10.1016/j.gaceta.2017.01.014 [DOI] [PubMed] [Google Scholar]
  • 65.Müller MM, Gous AG, Schellack N. Measuring adverse events using a trigger tool in a paper based patient information system at a teaching hospital in South Africa. Eur J Clin Pharm. 2016;18(2):103–12. [Google Scholar]
  • 66. Pérez Zapata AI, Gutiérrez Samaniego M, Rodríguez Cuéllar E, Andrés Esteban EM, Gómez de la Cámara A, Ruiz López P. Detection of Adverse Events in General Surgery Using the “Trigger Tool” Methodology. Cirugía Española (English Edition). 2015;93(2):84–90. doi: 10.1016/j.ciresp.2014.08.007
  • 67. Guzman-Ruiz O, Ruiz-Lopez P, Gomez-Camara A, Ramirez-Martin M. [Detection of adverse events in hospitalized adult patients by using the Global Trigger Tool method]. Rev Calid Asist. 2015;30(4):166–74. Epub 2015/05/31. doi: 10.1016/j.cali.2015.03.003
  • 68. Mattsson TO, Knudsen JL, Lauritsen J, Brixen K, Herrstedt J. Assessment of the global trigger tool to measure, monitor and evaluate patient safety in cancer patients: reliability concerns are raised. BMJ Qual Saf. 2013;22(7):571–9. Epub 2013/03/01. doi: 10.1136/bmjqs-2012-001219
  • 69. Lipczak H, Knudsen JL, Nissen A. Safety hazards in cancer care: findings using three different methods. BMJ Qual Saf. 2011;20(12):1052–6. Epub 2011/06/30. doi: 10.1136/bmjqs.2010.050856
  • 70. Cihangir S, Borghans I, Hekkert K, Muller H, Westert G, Kool RB. A pilot study on record reviewing with a priori patient selection. BMJ Open. 2013;3(7). Epub 2013/07/23. doi: 10.1136/bmjopen-2013-003034
  • 71. Wilson RM, Michel P, Olsen S, Gibberd RW, Vincent C, El-Assady R, et al. Patient safety in developing countries: retrospective estimation of scale and nature of harm to patients in hospital. BMJ. 2012;344:e832. Epub 2012/03/15. doi: 10.1136/bmj.e832
  • 72. Schildmeijer K, Nilsson L, Arestedt K, Perk J. Assessment of adverse events in medical care: lack of consistency between experienced teams using the global trigger tool. BMJ Qual Saf. 2012;21(4):307–14. Epub 2012/03/01. doi: 10.1136/bmjqs-2011-000279
  • 73. Rutberg H, Borgstedt Risberg M, Sjodahl R, Nordqvist P, Valter L, Nilsson L. Characterisations of adverse events detected in a university hospital: a 4-year study using the Global Trigger Tool method. BMJ Open. 2014;4(5):e004879. Epub 2014/05/30. doi: 10.1136/bmjopen-2014-004879
  • 74. O’Leary KJ, Devisetty VK, Patel AR, Malkenson D, Sama P, Thompson WK, et al. Comparison of traditional trigger tool to data warehouse based screening for identifying hospital adverse events. BMJ Qual Saf. 2013;22(2):130–8. Epub 2012/10/06. doi: 10.1136/bmjqs-2012-001102
  • 75. Najjar S, Hamdan M, Euwema MC, Vleugels A, Sermeus W, Massoud R, et al. The Global Trigger Tool shows that one out of seven patients suffers harm in Palestinian hospitals: challenges for launching a strategic safety plan. Int J Qual Health Care. 2013;25(6):640–7. Epub 2013/10/22. doi: 10.1093/intqhc/mzt066
  • 76. Mull HJ, Brennan CW, Folkes T, Hermos J, Chan J, Rosen AK, et al. Identifying Previously Undetected Harm: Piloting the Institute for Healthcare Improvement’s Global Trigger Tool in the Veterans Health Administration. Qual Manag Health Care. 2015;24(3):140–6. Epub 2015/06/27. doi: 10.1097/QMH.0000000000000060
  • 77. Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363(22):2124–34. Epub 2010/11/26. doi: 10.1056/NEJMsa1004404
  • 78. Kennerly DA, Saldana M, Kudyakov R, da Graca B, Nicewander D, Compton J. Description and evaluation of adaptations to the global trigger tool to enhance value to adverse event reduction efforts. J Patient Saf. 2013;9(2):87–95. Epub 2013/01/22. doi: 10.1097/PTS.0b013e31827cdc3b
  • 79. Garrett PR Jr., Sammer C, Nelson A, Paisley KA, Jones C, Shapiro E, et al. Developing and implementing a standardized process for global trigger tool application across a large health system. Jt Comm J Qual Patient Saf. 2013;39(7):292–7. Epub 2013/07/31. doi: 10.1016/s1553-7250(13)39041-2
  • 80. Farup PG. Are measurements of patient safety culture and adverse events valid and reliable? Results from a cross sectional study. BMC Health Serv Res. 2015;15:186. Epub 2015/05/03. doi: 10.1186/s12913-015-0852-x
  • 81. Bjertnaes O, Deilkas ET, Skudal KE, Iversen HH, Bjerkan AM. The association between patient-reported incidents in hospitals and estimated rates of patient harm. Int J Qual Health Care. 2015;27(1):26–30. Epub 2014/11/25. doi: 10.1093/intqhc/mzu087
  • 82. Brosterhaus M, Hammer A, Kalina S, Grau S, Roeth AA, Ashmawy H, et al. Applying the Global Trigger Tool in German Hospitals: A Pilot in Surgery and Neurosurgery. J Patient Saf. 2020;16(4):e340–e51. Epub 2020/11/21. doi: 10.1097/PTS.0000000000000576
  • 83. Griffin FA, Classen DC. Detection of adverse events in surgical patients using the Trigger Tool approach. Qual Saf Health Care. 2008;17(4):253–8. Epub 2008/08/06. doi: 10.1136/qshc.2007.025080
  • 84. Gunningberg L, Sving E, Hommel A, Alenius C, Wiger P, Baath C. Tracking pressure injuries as adverse events: National use of the Global Trigger Tool over a 4-year period. J Eval Clin Pract. 2019;25(1):21–7. Epub 2018/07/22. doi: 10.1111/jep.12996
  • 85. Haukland EC, Mevik K, von Plessen C, Nieder C, Vonen B. Contribution of adverse events to death of hospitalised patients. BMJ Open Qual. 2019;8(1):e000377. Epub 2019/04/19. doi: 10.1136/bmjoq-2018-000377
  • 86. Hoffmann-Volkl G, Kastenbauer T, Muck U, Zottl M, Huf W, Ettl B. [Detection of adverse events using IHI Global Trigger Tool during the adoption of a risk management system: A retrospective study over three years at a department for cardiovascular surgery in Vienna]. Z Evid Fortbild Qual Gesundhwes. 2018;131–132:38–45. Epub 2017/11/07. doi: 10.1016/j.zefq.2017.09.013
  • 87. Hu Q, Wu B, Zhan M, Jia W, Huang Y, Xu T. Adverse events identified by the global trigger tool at a university hospital: A retrospective medical record review. J Evid Based Med. 2019;12(2):91–7. Epub 2018/12/05. doi: 10.1111/jebm.12329
  • 88. Lipczak H, Neckelmann K, Steding-Jessen M, Jakobsen E, Knudsen JL. Uncertain added value of Global Trigger Tool for monitoring of patient safety in cancer care. Dan Med Bull. 2011;58(11):A4337. Epub 2011/11/04.
  • 89. Magneli M, Unbeck M, Rogmark C, Rolfson O, Hommel A, Samuelsson B, et al. Validation of adverse events after hip arthroplasty: a Swedish multi-centre cohort study. BMJ Open. 2019;9(3):e023773. Epub 2019/03/10. doi: 10.1136/bmjopen-2018-023773
  • 90. Magneli M, Unbeck M, Samuelsson B, Rogmark C, Rolfson O, Gordon M, et al. Only 8% of major preventable adverse events after hip arthroplasty are filed as claims: a Swedish multi-center cohort study on 1,998 patients. Acta Orthop. 2020;91(1):20–5. Epub 2019/10/17. doi: 10.1080/17453674.2019.1677382
  • 91. Menendez Fraga MD, Cueva Alvarez MA, Franco Castellanos MR, Fernandez Moral V, Castro Del Rio MP, Arias Perez JI, et al. [Compliance with the surgical safety checklist and surgical events detected by the Global Trigger Tool]. Rev Calid Asist. 2016;31 Suppl 1:20–3. Epub 2016/06/07. doi: 10.1016/j.cali.2016.03.006
  • 92. Mevik K, Hansen TE, Deilkas EC, Ringdal AM, Vonen B. Is a modified Global Trigger Tool method using automatic trigger identification valid when measuring adverse events? Int J Qual Health Care. 2019;31(7):535–40. Epub 2018/10/09. doi: 10.1093/intqhc/mzy210
  • 93. Sekijima A, Sunga C, Bann M. Adverse events experienced by patients hospitalized without definite medical acuity: A retrospective cohort study. J Gen Intern Med. 2020;34(2):S125. doi: 10.12788/jhm.3235
  • 94. Toribio-Vicente MJ, Chalco-Orrego JP, Diaz-Redondo A, Llorente-Parrado C, Pla-Mestre R. [Detection of adverse events using trigger tools in 2 hospital units in Spain]. J Healthc Qual Res. 2018;33(4):199–205. Epub 2018/01/01. doi: 10.1016/j.jhqr.2018.05.003
  • 95. Zadvinskis IM, Salsberry PJ, Chipps EM, Patterson ES, Szalacha LA, Crea KA. An Exploration of Contributing Factors to Patient Safety. J Nurs Care Qual. 2018;33(2):108–15. Epub 2018/02/22. doi: 10.1097/NCQ.0000000000000284
  • 96. Kelly-Pettersson P, Sköldenberg O, Samuelsson B, Stark A, Muren O, Unbeck M. The identification of adverse events in hip fracture patients using the Global Trigger Tool: A prospective observational cohort study. Int J Orthop Trauma Nurs. 2020;38:100779. Epub 2020/05/23. doi: 10.1016/j.ijotn.2020.100779
  • 97. Kaibel Val R, Ruiz López P, Pérez Zapata AI, Gómez de la Cámara A, de la Cruz Vigo F. [Detection of adverse events in thyroid and parathyroid surgery using trigger tool and Minimum Basic Data Set (MBDS)]. J Healthc Qual Res. 2020;35(6):348–54. Epub 2020/10/30. doi: 10.1016/j.jhqr.2020.08.001
  • 98. Menéndez-Fraga MD, Alonso J, Cimadevilla B, Cueto B, Vazquez F. Does Skilled Nursing Facility Trigger Tool used with Global Trigger Tool increase its value for adverse events evaluation? J Healthc Qual Res. 2021;36(2):75–80. Epub 2021/01/30. doi: 10.1016/j.jhqr.2020.08.004
  • 99. Moraes SM, Ferrari TCA, Figueiredo NMP, Almeida TNC, Sampaio CCL, Andrade YCP, et al. Assessment of the reliability of the IHI Global Trigger Tool: new perspectives from a Brazilian study. Int J Qual Health Care. 2021;33(1). Epub 2021/03/07. doi: 10.1093/intqhc/mzab039
  • 100. Nowak B, Schwendimann R, Lyrer P, Bonati LH, De Marchis GM, Peters N, et al. Occurrence of No-Harm Incidents and Adverse Events in Hospitalized Patients with Ischemic Stroke or TIA: A Cohort Study Using Trigger Tool Methodology. Int J Environ Res Public Health. 2022;19(5). Epub 2022/03/11. doi: 10.3390/ijerph19052796
  • 101. Pérez Zapata AI, Rodríguez Cuéllar E, de la Fuente Bartolomé M, Martín-Arriscado Arroba C, García Morales MT, Loinaz Segurola C, et al. Predictive Power of the "Trigger Tool" for the detection of adverse events in general surgery: a multicenter observational validation study. Patient Saf Surg. 2022;16(1):7. Epub 2022/02/10. doi: 10.1186/s13037-021-00316-3
  • 102. Pierdevara L, Porcel-Gálvez AM, Ferreira da Silva AM, Barrientos Trigo S, Eiras M. Translation, Cross-Cultural Adaptation, and Measurement Properties of the Portuguese Version of the Global Trigger Tool for Adverse Events. Ther Clin Risk Manag. 2020;16:1175–83. Epub 2020/12/11. doi: 10.2147/TCRM.S282294
  • 103. de Vries EN, Ramrattan MA, Smorenburg SM, Gouma DJ, Boermeester MA. The incidence and nature of in-hospital adverse events: a systematic review. Qual Saf Health Care. 2008;17(3):216–23. Epub 2008/06/04. doi: 10.1136/qshc.2007.023622
  • 104. Klein DO, Rennenberg R, Koopmans RP, Prins MH. The ability of triggers to retrospectively predict potentially preventable adverse events in a sample of deceased patients. Prev Med Rep. 2017;8:250–5. Epub 2017/11/29. doi: 10.1016/j.pmedr.2017.10.016
  • 105. Munn Z, Moola S, Lisy K, Riitano D, Tufanaru C. Methodological guidance for systematic reviews of observational epidemiological studies reporting prevalence and cumulative incidence data. Int J Evid Based Healthc. 2015;13(3):147–53. Epub 2015/09/01. doi: 10.1097/XEB.0000000000000054

Decision Letter 0

Mojtaba Vaismoradi

13 May 2022

PONE-D-21-40420

Variation in Detected Adverse Events using Trigger Tools: A Systematic Review and Meta-Analysis

PLOS ONE

Dear Dr. Simon,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 26 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Mojtaba Vaismoradi

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is a systematic review and meta-analysis of 48 studies investigating the use of Trigger Tools for the assessment of adverse events in medical record review and estimating the rate of adverse events per 100 admissions and several subgroups based on patient characteristics. The abstract does not adhere to the PRISMA 2020 for Abstracts checklist, the method section does not adhere to PRISMA 2020, and the results section is difficult to follow as many results and analyses are reported. Furthermore, the last date of search is more than 12 months ago. To increase the readability and transparency of the reporting, PRISMA 2020 should be followed and the results section revised.

Please see specific comments below.

Major:

1. The overall message of the study is difficult to follow, you report many results and subgroups(?) and these are not specified in the method section. Could you rearrange the result section with subheadings or omit some of the analyses to guide the reader.

2. The date of search is difficult to find, and it seems that the date of the last search was >1.5–2 years ago. The search should be updated.

3. You state in the method section that PRISMA 2020 was used, but several items and the flow diagram from PRISMA 2020 are missing. I have listed some below in minor revisions, but I recommend that you upload a PRISMA 2020 checklist stating where each item can be located.

Minor:

1. Acute care or acute-care? Please uniform

2. Incidence or prevalence

3. Abstract: please add the eligibility criteria on language and exclusion criteria that you describe in the method section.

4. Abstract: Please provide the date last searched (PRISMA 2020 for abstracts checklist. Item 4: https://www.equator-network.org/reporting-guidelines/prisma-abstracts/)

5. Abstract: Please describe methods to assess risk of bias (PRISMA 2020 for abstracts checklist. Item 5: https://www.equator-network.org/reporting-guidelines/prisma-abstracts/)

6. Abstract: Please report I^2

7. Abstract: I do not understand the results, could you simplify? Several terms have not been introduced: e.g. applicability-related concerns, commission and omission, reviewers’ level of experience, the evidence on the remainder.

8. Abstract: Please provide details on registration and funding (PRISMA 2020 for abstracts checklist. Item 11+12: https://www.equator-network.org/reporting-guidelines/prisma-abstracts/)

9. Your REF 2 is the Trigger Tool – would ICH GCP not be better suited?

10. Consider using the term from the cited reference [8] “medical record review” rather that “record review” throughout your article.

11. Introduction: Please revise sentence and commas for: “Record review uses available data [8], was found to identify more AEs when compared with many other methods [9, 10], can be repeated over time and can target specific AE types, or the overall AE rate [11].”

12. Introduction: Please correct: “A "trigger" (or clue) consists either of specific wording or an event in a medical record that could indicate the occurrence of an AE, e.g., readmissions within 30 days or pressure ulcers [2].”

13. Methods: Please revise: “Design Systematic review and meta-analyses [27].” so that it reflects that you reported according to PRISMA 2020 [27].

14. Methods: Should the subheading “data sources” rather be “information sources” (PRISMA 2020, item 6)?

15. Methods: Your specific search strategy is difficult to follow:

a. You write that “The medical subject headings (MeSH) and keywords for titles and abstracts” was your search limited to title and abstract or were all fields searched (PRISMA 2020, item 7)?

b. Was “medical error” combined with AND or put in quotation marks?

c. Which of your search terms were Mesh terms? How were these translated from PubMed to the other databases?

d. Please provide the full (and specific) search strategy to each database as recommended by PRISMA 2020, item 7.

e. You first state that “Our search strategy was developed and validated using methods suggested by Hausner et al. [28, 29]. This involves generating a test set, developing and validating a search strategy and documenting the strategy using a standardized approach [29]” but later that “The detailed search strategy used for this review is that described by Musy et al. [26].” – did you or reference [26] develop the search strategy and apply the methods?

16. Methods: Please provide date of last search (PRISMA 2020, 6), if the date of last search is >12 months ago I recommend that you update the search.

17. Methods: from your flow diagram, it seems that you have a restriction on the search date (2015 and onwards); please report this, PRISMA 2020, item 6.

18. Methods: Were title and abstracts screened by one researcher or by two researchers independently? Please specify in the manuscript.

19. Please add details on data collection process, PRISMA 2020, item 9.

20. Please add information on PRISMA 2020, item 10b.

21. Methods: why did you have to invent a new bias assessment tool?

22. Methods: How was heterogeneity measured, and which cut-offs did you use?

23. Methods: your approach “Because R's metaprop function does not accept proportions exceeding 100%, we adapted results of four studies where the number of AEs exceeded the number of patient admissions. To reduce oversized values to less than 100 AEs per 100 admissions, we reduced the number of AEs detected to one less than the number of admissions (e.g., for a patient group of 240 with 336 AEs, we entered 239 AEs).” Can you provide a reference for this?

24. Results: dates should be reported in methods.

25. Results: Please help me understand your flow diagram – the layout of PRISMA 2020 has not been used. Did you use automatic tools for the screening and exclusion of the 4531 non-trigger tool records? Please specify in the method section if you did and only screened 406 title/abstracts independently. Only full-text exclusion reasons must be explained in detail.

26. Results: Please correct: “The reviewed studies were all published between 2009 and 2020” to “included”.

27. Results: please uniform: “Overall, we included 192,316 index admissions in our report” – in the abstract these are described as patients; which is more correct?

28. Results: which types of studies were included in the review? Cohort studies, RCTs? This is not described in the method section or table 1.

29. Results: There are a lot of results reported in this section – and a lot of analyses. The section is difficult to follow and not all subgroups are evident from the method section. Could you omit some analyses or add some aiding subheadings?

30. Please add PRISMA 2020, item 22.

31. Discussion: Please provide a key findings paragraph in the beginning of the discussion section with the key findings of your study without references to other studies.

32. Discussion: please expand your limitations section.

33. Did you analyse conflict of interest and funding of included studies and accounted for that in the analyses?

Reviewer #2: TITLE

The title is clear with enough detail for the reader to know what to expect.

RELEVANCE AND ORIGINALITY

Adverse events are an ongoing occurrence in the health landscape and the mechanism of identifying and reporting adverse events is not consistent across or between countries. This review is relevant as it provides an argument (using a recognised high quality and rigorous approach, i.e., a systematic review and meta-analysis) for the need to address this inconsistency with clearer reporting guidance.

AUTHENTICITY AND REFERENCING

The manuscript appears to be the work of the author with appropriate attribution to the work of others both in text and in the reference list.

ABSTRACT/INTRODUCTION

The abstract is comprehensive and an accurate reflection of the manuscript. The introduction is brief yet provides enough information from the literature to support the need for the review. In addition, key terms, (e.g., ‘global trigger tool’, ‘trigger’) are explained and operationalised for the review. The introduction leads logically to the gap in the literature and the aims of the study.

AIMS

Dual aims are clear.

METHODOLOGY

The methodology is well described and replicable, apart from a few queries:

• One evidence source searched, ‘all authors’ personal libraries’ (line 117), is not defined or described. Are the authors referring to self-authored publications or simply publications amassed in personal collections? If the former, then these papers would presumably be indexed in one of the other databases searched. If the latter, then it renders the search not replicable. Removal of ‘personal libraries’ or explanation for its inclusion might address any concerns raised by its inclusion.

• Similarly, an explanation for the choice of the three journals hand searched would allay any concerns of bias in the search strategy.

Re eligibility criteria – point #3 is “…acute care (including elective admissions) hospital settings…” (line 122). I would think elective admissions are inherently a part of an acute care cohort. Did the authors mean ‘emergency admissions’? In either case, this could be clarified.

Table 1 is particularly helpful.

RESULTS

Results are comprehensive and well organised.

TABLES AND FIGURES

Tables and figures are well presented and do not replicate information in text.

DISCUSSION/CONCLUSION

The discussion is supported by the findings and the findings are situated within the current body of evidence on the topic. Recommendations for future practice related to adverse events and future research reporting on adverse events, albeit very brief (e.g., one sentence each), logically derives from the findings and discussion.

OTHER COMMENTS

Use of a reporting guideline is not evident but is a conventional expectation. The authors might consider adding reference to this in some way.

WRITING STYLE

The writing style is academically sound and easy to read.

SCHOLARLY APPROACH

The authors have used a scholarly approach that begins with a clearly stated premise so that compelling arguments can be presented and supported with up-to-date literature, including empirical research evidence. Providing more critique of the studies cited in the introduction and in the discussion would elevate this further.

OVERALL COMMENTS

My comments have been provided in the spirit of collegiality to hopefully assist the authors in further preparing their manuscript for publication. I commend the authors on this high-quality report of their systematic review and meta-analysis.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Siv Fonnes

Reviewer #2: Yes: Sonya R Osborne

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: PONE-D-21-40420-adverse events SR-reviewed.pdf

PLoS One. 2022 Sep 1;17(9):e0273800. doi: 10.1371/journal.pone.0273800.r002

Author response to Decision Letter 0


24 Jun 2022

We appreciate the opportunity to address the very helpful reviewers’ comments and revise our manuscript. Below, please find item-by-item responses to the Reviewers’ comments, which are included verbatim. All page and paragraph numbers refer to locations in the revised manuscript.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Mojtaba Vaismoradi

16 Aug 2022

Variation in Detected Adverse Events using Trigger Tools: A Systematic Review and Meta-Analysis

PONE-D-21-40420R1

Dear Dr. Simon,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Prof. Mojtaba Vaismoradi, PhD, MScN, BScN

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for your comprehensive work on revising and improving your systematic review and meta-analysis. The reporting according to PRISMA 2020 has improved the readability and transparency of the manuscript. The revision of the results section and the key findings paragraph in the discussion section has made the message and the results of your study easier to understand and follow. Congratulations on your comprehensive and hard work.

Reviewer #2: The authors have addressed all of my comments in the revision. I acknowledge the data has been updated in light of an updated search.

**********

Acceptance letter

Mojtaba Vaismoradi

23 Aug 2022

PONE-D-21-40420R1

Variation in detected adverse events using trigger tools: A systematic review and meta-analysis

Dear Dr. Simon:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Mojtaba Vaismoradi

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Checklist. PRISMA 2020 checklist.

    (DOCX)

    S1 File. Quality assessment tool template.

    (PDF)

    S1 Table. Assessments of risk of bias and applicability-related concerns.

    (PDF)

    S1 Fig. Forest plot of AEs per 1000 patient days.

    * = pooled estimate, • = mean estimate, ‡ = calculated total number of AEs, ~ = calculated total number of patient days [5, 15, 17–22, 34, 37, 39, 40, 45, 46, 50–52, 54, 57, 58, 60, 62–65, 67, 68, 72, 73, 76–79, 82, 84–87, 89–91, 93, 95, 96, 98–100, 102].

    (TIF)

    S2 Fig. Forest plot percentage of admissions with at least one adverse event (AE).

    CI, confidence interval; * = pooled estimate, • = mean estimate, + = calculated total number of admissions with ≥ 1 AE [5, 9, 14, 15, 17–22, 24, 34, 37, 39, 45, 46, 50–58, 60–68, 70, 72–87, 89–94, 96–101].

    (TIF)

    S3 Fig. Forest plot percentage of preventable adverse events (AEs).

    CI, confidence interval; * = pooled estimate, • = mean estimate, ¢ = calculated number of preventable AEs [15, 17–20, 34, 37–39, 46, 50, 51, 53, 59, 63–67, 71–75, 77, 78, 87, 89–91, 96–98, 100, 101].

    (TIF)

    S4 Fig. Funnel plot for AEs per 100 admissions [5, 10, 15, 17–22, 34, 37–39, 45, 46, 50–54, 56–69, 71–79, 82–91, 93, 95–102].

    (TIF)

    S5 Fig. Forest plot with stratified analysis of the risk of bias and applicability-related concerns.

    AE, adverse events; N studies, number of studies; CI, confidence interval [5, 10, 15, 17–22, 34, 37–39, 45, 46, 50–54, 56–69, 71–79, 82–91, 93, 95–102].

    (TIF)

    Attachment

    Submitted filename: PONE-D-21-40420-adverse events SR-reviewed.pdf

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    All data files are available from https://doi.org/10.5281/zenodo.4892518.
