BMJ. 2021 Mar 15;372:n265. doi: 10.1136/bmj.n265

Preferred reporting items for journal and conference abstracts of systematic reviews and meta-analyses of diagnostic test accuracy studies (PRISMA-DTA for Abstracts): checklist, explanation, and elaboration

Jérémie F Cohen 1, Jonathan J Deeks 2,3, Lotty Hooft 4, Jean-Paul Salameh 5,6, Daniël A Korevaar 7, Constantine Gatsonis 8, Sally Hopewell 9, Harriet A Hunt 10, Chris J Hyde 10, Mariska M Leeflang 11, Petra Macaskill 12, Trevor A McGrath 13, David Moher 14, Johannes B Reitsma 4, Anne W S Rutjes 15, Yemisi Takwoingi 2,3, Marcello Tonelli 16, Penny Whiting 17, Brian H Willis 18, Brett Thombs 19, Patrick M Bossuyt 11, Matthew D F McInnes 20

Abstract

For many users of the biomedical literature, abstracts may be the only source of information about a study. Hence, abstracts should allow readers to evaluate the objectives, key design features, and main results of the study. Several evaluations have shown deficiencies in the reporting of journal and conference abstracts across study designs and research fields, including systematic reviews of diagnostic test accuracy studies. Incomplete reporting compromises the value of research to key stakeholders. The authors of this article have developed a 12 item checklist of preferred reporting items for journal and conference abstracts of systematic reviews and meta-analyses of diagnostic test accuracy studies (PRISMA-DTA for Abstracts). This article presents the checklist, examples of complete reporting, and explanations for each item of PRISMA-DTA for Abstracts.


Summary points.

  • The PRISMA-DTA statement has become an internationally accepted reporting guideline for systematic reviews of diagnostic test accuracy studies

  • PRISMA-DTA for Abstracts is intended to improve the completeness and informativeness of journal and conference abstracts of systematic reviews of diagnostic test accuracy studies

  • PRISMA-DTA for Abstracts includes 12 essential items to report in journal and conference abstracts

  • This article provides the checklist, examples of complete reporting and explanations for each item of the checklist, and abstracts of two reviews that authors can use as examples for their abstracts

The abstract is often the only section read by users of biomedical articles.1 On the basis of the abstract, many readers decide whether they will read the full text. The abstract is also critical to people who do not have access to the full text, owing to paywalls or because the article is written in a language they do not understand. Therefore, abstracts should enable a quick assessment of the study’s objectives, purpose, and key design features; present an accurate picture of the validity of the main results; and allow readers to evaluate whether the study can meet their information needs.2 Informative abstracts are also key to enabling effective literature searches in electronic databases, notably in the context of systematic reviews.

Several evaluations have shown deficiencies in the reporting of journal and conference abstracts across study designs and research fields.3 4 5 6 The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement was developed to improve the reporting of systematic reviews, primarily for reviews of interventions.7 PRISMA for Abstracts is a checklist for reporting abstracts of systematic reviews.8

Because of the specific methods, terminology, and reporting requirements of diagnostic test accuracy (DTA) studies (table 1), our group developed the PRISMA-DTA checklist, which also includes guidance on abstracts.9 10 PRISMA-DTA for Abstracts includes 12 essential items to report in journal and conference abstracts (table 2). A recent evaluation, however, found that only half of these items were consistently reported.11 This explanation and elaboration document gives examples of complete reporting and explanations for each item of the PRISMA-DTA for Abstracts checklist and is intended to provide a useful resource for authors of DTA reviews.

Table 1.

Diagnostic test accuracy terminology

Index test: Test under evaluation in a diagnostic accuracy study. The accuracy (eg, sensitivity and specificity) of the index test is estimated by comparing the results of the index test with those of a reference standard applied to the same participants. Multiple index tests can be evaluated within the same study
Comparative studies: Studies aiming to compare the diagnostic accuracy of two or more index tests
Reference standard: The method or combination of methods used in the study for establishing the presence or absence of the target condition
Target condition: The disease or condition that the reference standard is expected to detect
Role of the test: The position of the index test relative to other tests in the diagnostic investigation of the same target condition (eg, triage, replacement, add-on, new test)
Intended use of the test: Whether the index test is used for diagnosis, screening, staging, monitoring, surveillance, prognosis, or other purposes
Sensitivity: The proportion of correctly classified participants among those with the target condition
Specificity: The proportion of correctly classified participants among those without the target condition
QUADAS-2: A tool for use in systematic reviews to assess the risk of bias and concerns about applicability of primary diagnostic accuracy studies
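
For readers less familiar with this terminology, the two accuracy measures in table 1 are conventionally written in 2×2 notation (TP = true positives, FN = false negatives, TN = true negatives, FP = false positives); these standard definitions are added here for illustration and are not part of the original table:

\[ \text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP} \]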

Table 2.

PRISMA-DTA for Abstracts checklist

Title and purpose
  1. Title: Identify the report as a systematic review (+/– meta-analysis) of DTA studies
  2. Objectives: Indicate the research question, including components such as participants, index test, and target conditions
Methods
  3. Eligibility criteria: Include study characteristics used as criteria for eligibility
  4. Information sources: List the key databases searched and the search dates
  5. Risk of bias and applicability: Indicate the methods of assessing risk of bias and applicability
  A1. Synthesis of results: Indicate the methods for the data synthesis
Results
  6. Included studies: Indicate the number and type of included studies and the participants and relevant characteristics of the studies (including the reference standard)
  7. Synthesis of results: Include the results for the analysis of diagnostic accuracy, preferably indicating the number of studies and participants. Describe test accuracy including variability; if meta-analysis was done, include summary results and confidence intervals
Discussion
  9. Strengths and limitations: Provide a brief summary of the strengths and limitations of the evidence
  10. Interpretation: Provide a general interpretation of the results and the important implications
Other
  11. Funding: Indicate the primary source of funding for the review
  12. Registration: Provide the registration number and the registry name

Compared with PRISMA for Abstracts, one item was deleted (item 8), and one new item was added (A1).

DTA=diagnostic test accuracy.

Methods for developing explanation and elaboration document

During the consensus meeting to develop the PRISMA-DTA checklist,12 a first version of PRISMA-DTA for Abstracts was drafted, based on PRISMA for Abstracts and the PRISMA-DTA checklist.8 9 Compared with PRISMA for Abstracts, one item was deleted (item 8), and one new item was added (A1, about synthesis of results). We then circulated the draft PRISMA-DTA for Abstracts checklist among PRISMA-DTA Group members for review and approval.

A writing committee (JFC, JJD, LH) drafted this explanation and elaboration document, which was then reviewed, edited, and approved by consensus by PRISMA-DTA Group members. Consistent with other reporting guidelines,8 13 14 we provide examples of complete reporting for each item and explanations clarifying the rationale for the item and how to incorporate it in the abstract. We have edited the examples by spelling out abbreviations. We also present the abstracts of two reviews that comply with the checklist in fewer than 300 words (box 1 and box 2).

Box 1. Example of abstract fulfilling all PRISMA-DTA for Abstracts items in less than 250 words.

Diagnostic accuracy of dual-energy computed tomography (DECT) to differentiate uric acid from non-uric acid calculi: systematic review and meta-analysis

Background: Uric acid stone diagnosis is done primarily with in vitro analysis of stones. Dual-energy CT (DECT) would allow earlier diagnosis and therapy.

Objective: To evaluate if DECT, using stone analysis as reference standard, is sufficiently accurate to replace stone analysis for diagnosis of uric acid stones.

Methods: Original studies in patients with urolithiasis examined with DECT with stone analysis as the reference standard were eligible for inclusion. MEDLINE (1946–2018), Embase (1947–2018), CENTRAL (August 2018), and multiple urology and radiology conferences were searched. QUADAS-2 was used to assess risk of bias and concerns regarding applicability. Meta-analyses were performed using a bivariate random-effects model.

Results: Twenty-one studies (1105 patients, 1442 stones) were included. Fourteen studies (662 patients, 944 stones) were analyzed in the uric acid dominant target condition (majority of stone composition uric acid): summary sensitivity was 0.88 (95% CI 0.79–0.93) and specificity 0.98 (95% CI 0.96–0.99). Thirteen studies (674 patients, 760 stones) were analyzed in the uric acid-containing target condition (less than a majority of stone composition uric acid): summary sensitivity was 0.82 (95% CI 0.73–0.89) and specificity 0.97 (95% CI 0.94–0.98). Meta-regression identified no significant source of variability in accuracy. Two studies had one or more domains at high risk of bias and there were no concerns regarding applicability.

Conclusion: DECT is an accurate replacement test for diagnosis of uric acid calculi in vivo, such that stone analysis might be replaced in the diagnostic pathway.

Funding: Ontario Graduate Scholarship (OGS).

Registration: CRD42018107398 (Prospero).

Word count: 249.

Adapted with permission of authors from McGrath TA et al. Eur Radiol 2020;30:2791-2801.

Box 2. Example of abstract fulfilling all PRISMA-DTA for Abstracts items in less than 300 words.

Rapid antigen detection tests for group A streptococcus in children with pharyngitis: systematic review and meta-analysis of diagnostic test accuracy studies

Background: Group A streptococcus (GAS) accounts for 20% to 40% of cases of pharyngitis in children; the remaining cases are caused by viruses. Compared with throat culture, rapid antigen detection tests (RADTs) offer diagnosis at the point of care.

Objectives: To evaluate the diagnostic accuracy of RADTs for diagnosing GAS in children with pharyngitis.

Methods: We searched 8 databases (including MEDLINE and Embase) from 1980 through 2015. We included studies that compared RADT for GAS pharyngitis with throat culture on a blood agar plate in a microbiology laboratory in children in ambulatory care. Quality assessment was carried out using QUADAS-2. We used bivariate meta-analysis to estimate summary sensitivity and specificity, and to investigate variability in accuracy across studies.

Results: We included 98 unique studies in the review (116 test evaluations; 101 121 participants). The overall methodological quality of included studies was poor, mainly because many studies were at high risk of bias regarding patient selection and the reference standard used. In our main meta-analysis (105 test evaluations; 58 244 participants; median prevalence of GAS 29.5%), RADT had a summary sensitivity of 85.6% (95% CI 83.3 to 87.6) and a summary specificity of 95.4% (95% CI 94.5 to 96.2). There was substantial variability in sensitivity across studies (range 38.6% to 100%); specificity was more stable (range 54.1% to 100%). Variability in accuracy was not explained by study-level characteristics such as age and clinical severity of participants, or GAS prevalence.

Conclusions: Whether or not RADT can be used as a stand-alone test to rule out GAS will depend mainly on the epidemiological context. RADT specificity seems sufficiently high to ensure against unnecessary use of antibiotics. These results should be interpreted with caution because of high risk of bias and variability in sensitivity estimates.

Funding: Association Française de Pédiatrie Ambulatoire (AFPA).

Registration: CD010502 (Cochrane).

Word count: 299.

Adapted with permission of authors from Cohen JF et al. Cochrane Database Syst Rev 2016(7):CD010502.

PRISMA-DTA for Abstracts checklist, section 1: title and purpose

Item 1: Title

Identify the report as a systematic review (+/− meta-analysis) of DTA studies.

Examples

1a: “Diagnostic accuracy of segmental enhancement inversion for diagnosis of renal oncocytoma at biphasic contrast-enhanced computed tomography: systematic review.”15

1b: “The diagnostic accuracy of serological tests for Lyme borreliosis in Europe: a systematic review and meta-analysis.”16

Explanation

To facilitate identification, the title should describe the article as a “systematic review” and as a “meta-analysis” (examples 1a-b), if appropriate. To clarify the focus of the review, the title should contain the terms “diagnostic” and “accuracy,” thereby differentiating it from other aspects of test evaluation, such as reproducibility, prognostic accuracy, optimal threshold estimation, analytical performance, clinical utility, or cost effectiveness. Alternatively, terms that refer to diagnostic accuracy measures (such as sensitivity, specificity, predictive values, or area under the curve) may be used. The title should also contain the index test, the target condition, and comparisons made between tests, if applicable. Incorporating a description of participants is encouraged.

Item 2: Objectives

Indicate the research question, including components such as participants, index test, and target conditions.

Examples

2a: “To assess the diagnostic accuracy of Xpert® MTB/RIF for pulmonary tuberculosis detection, where Xpert® MTB/RIF was used as both an initial test replacing microscopy and an add‐on test following a negative smear microscopy result.”17

2b: “To assess the diagnostic accuracy of magnetic resonance imaging for differentiating stage T1 or lower tumors from stage T2 or higher tumors and to analyse the influence of different imaging protocols in patients with bladder cancer.”18

Explanation

Abstracts should include the research question for the systematic review so that readers can understand the rationale and relevance for clinical practice. This should reflect the target condition(s) for detection (example 2a) or differentiation (example 2b), index test(s) under evaluation (examples 2a-b), the population for intended use (example 2b), the setting, and the proposed role of the index test(s) (example 2a). Authors may also highlight comparative review questions here.

PRISMA-DTA for Abstracts checklist, section 2: methods

Item 3: Eligibility criteria

Include the study characteristics used as criteria for eligibility.

Examples

3a: “We included randomised controlled trials, cross-sectional studies, and cohort studies using respiratory specimens that allowed for extraction of data evaluating Xpert® MTB/RIF against the reference standard. We excluded gastric fluid specimens. The reference standard for tuberculosis was culture and for rifampicin resistance was phenotypic culture-based drug susceptibility testing.”17

3b: “We included diagnostic accuracy studies that used computed tomography for diagnosis of fat-poor angiomyolipoma in patients with renal masses, using pathologic examination as the reference standard.”19

Explanation

A clear description of the systematic review’s eligibility criteria allows readers to judge the applicability of findings. Eligibility criteria should include all components of the review question (item 2) plus the reference standard (examples 3a-b), along with any restrictions on study design, such as excluding studies with healthy controls. In comparative reviews, the authors may restrict studies to those in which participants underwent all tests under comparison. Additional examples of eligibility criteria may include year of publication, language, or publication status (for example, no conference abstracts). Results from older studies sometimes differ from more recent results, and studies published in non-English language journals or only in conference abstracts may report lower accuracy estimates.20 21 22 23 24

Item 4: Information sources

List the key databases searched and the search dates.

Examples

4a: “A systematic search of MEDLINE, Embase, The Cochrane Library and Science Citation Index Expanded from January 1994 to October 2014 was performed.”25

4b: “We carried out extensive literature searches including MEDLINE (1980 to 25 August 2011), Embase (1980 to 25 August 2011), BIOSIS via EDINA (1985 to 25 August 2011), CINAHL via OVID (1982 to 25 August 2011), and The Database of Abstracts of Reviews of Effects (the Cochrane Library 2011, Issue 7).”26

Explanation

The abstract should report the databases searched with the date range or date of last search. This informs readers of the completeness and recency of the search and the likelihood that potentially relevant articles have been missed. Additional efforts made to identify studies (for example, searching reference lists of included studies and published reviews, contacting experts, screening trial registries) are often not reported in the abstract because of word restrictions.

Item 5: Risk of bias and applicability

Indicate the methods of assessing risk of bias and applicability.

Example

5a: “We assessed possible bias and applicability of the studies using the QUADAS-2 tool.”27

Explanation

Aspects of the design and conduct of included primary DTA studies can raise questions about the validity of their findings and applicability for the review question. These include aspects that increase “risk of bias” (that is, estimates that deviate systematically from the truth) and aspects that lead to “concerns about applicability” (that is, study results that do not directly apply to the review question). Non-blinded readers of index tests, for example, can introduce bias.28 29 Recruiting a highly selected study group or using a prototype version of a test may affect applicability. Systematic reviews should evaluate risk of bias and concerns about applicability; review authors may not be able to describe this in detail in the abstract but can specify the tool or approach used. The QUADAS-2 tool30 (example 5a) is the most frequently used tool for DTA studies.31

Item A1: Synthesis of results

Indicate the methods for the data synthesis.

Examples

A1a: “We performed meta‐analyses using the bivariate and hierarchical summary receiver operating characteristic models.”32

A1b: “Variability was assessed by subgroup analyses (dual-energy computed tomography technique and risk of bias) and metaregression using test type and threshold applied as covariates.”33

A1c: “We analysed sensitivity and specificity of included studies narratively as there were insufficient studies to perform a meta‐analysis.”34

Explanation

Authors are encouraged to report the approach taken to summarise study results, whether narratively or using a statistical model. Results from different approaches may diverge,35 36 and some strategies are more robust than others. Example A1a informs readers about key details of the analysis, such as whether statistical methods account for the hierarchical nature of data and the potential trade-off between sensitivity and specificity across studies. How authors evaluated variability in accuracy estimates may also be relevant (example A1b). If applicable, we encourage authors to report methods used for comparisons of multiple index tests and reasons for not pooling study results (example A1c).
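
As background for example A1a: one widely used specification of the bivariate random-effects model (a standard formulation from the methods literature, sketched here for illustration rather than prescribed by PRISMA-DTA) assumes, for each study i, that the true positives TP_i among n_{1i} participants with the target condition and the true negatives TN_i among n_{0i} participants without it follow binomial distributions, with study-specific accuracy varying on the logit scale:

\[ TP_i \sim \mathrm{Binomial}(n_{1i}, Se_i), \qquad TN_i \sim \mathrm{Binomial}(n_{0i}, Sp_i) \]

\[ \begin{pmatrix} \mathrm{logit}(Se_i) \\ \mathrm{logit}(Sp_i) \end{pmatrix} \sim N\!\left( \begin{pmatrix} \mu_{Se} \\ \mu_{Sp} \end{pmatrix}, \begin{pmatrix} \sigma_{Se}^2 & \rho\,\sigma_{Se}\sigma_{Sp} \\ \rho\,\sigma_{Se}\sigma_{Sp} & \sigma_{Sp}^2 \end{pmatrix} \right) \]

The binomial level accounts for the hierarchical nature of the data, and the correlation parameter ρ is what captures the potential trade-off between sensitivity and specificity across studies.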

PRISMA-DTA for Abstracts checklist, section 3: results

Item 6: Included studies

Indicate the number and type of included studies and the participants and relevant characteristics of the studies (including the reference standard).

Examples

6a: “We included 27 unique studies […] involving 9557 participants. Sixteen studies (59%) were performed in low‐ or middle‐income countries.”17

6b: “For the diagnostic accuracy of HBsAg from dried blood spot compared to venous blood, 19 studies were included in a quantitative meta-analysis, and 23 in a narrative review.”37

6c: “Of the 40 studies that met the inclusion criteria, 33 compared rapid diagnostic test and/or enzyme immunoassays against enzyme immunoassays and 7 against nucleic-acid test as reference standards. Thirty studies assessed diagnostic accuracy of 33 brands of rapid diagnostic tests in 23,716 individuals from 23 countries using enzyme immunoassays as the reference standard.”38

6d: “All studies were at high risk of bias for the index test domain because no reported thresholds were prespecified.”39

Explanation

Authors should report the number of included studies and participants (and, if possible, the number of participants with the target condition) and any other key characteristics (example 6a). Some studies may be included in the qualitative part of the review but not in the quantitative synthesis (example 6b). If the included studies use multiple reference standards, this should be reported (example 6c). This information enables readers to gauge the amount of summarised evidence and its applicability to the review question. Reviews with few included studies and a limited number of participants may produce imprecise accuracy estimates and may not add substantive value compared with the individual studies. Review authors are also invited to summarise their assessment of the quality of evidence (that is, risk of bias and concerns about applicability) and highlight their main source of concern (example 6d).

Item 7: Synthesis of results

Include the results for the analysis of diagnostic accuracy, preferably indicating the number of studies and participants. Describe test accuracy including variability; if meta-analysis was done, include summary results and confidence intervals.

Examples

7a: “In the 12 studies with the least biased estimates, sensitivity ranged from 30% to 87% and specificity ranged from 86% to 100%.”40

7b: “As an initial test replacing smear microscopy, Xpert® MTB/RIF pooled sensitivity was 89% [95% Credible Interval (CrI) 85% to 92%] and pooled specificity 99% (95% CrI 98% to 99%), (22 studies, 8998 participants: 2953 confirmed tuberculosis, 6045 non‐tuberculosis). As an add‐on test following a negative smear microscopy result, Xpert® MTB/RIF pooled sensitivity was 67% (95% CrI 60% to 74%) and pooled specificity 99% (95% CrI 98% to 99%; 21 studies, 6950 participants).”17

7c: “For HRP‐2, the meta‐analytical average sensitivity and specificity (95% CI) were 95.0% (93.5% to 96.2%) and 95.2% (93.4% to 99.4%), respectively […], for pLDH, the meta‐analytical average sensitivity and specificity (95% CI) were 93.2% (88.0% to 96.2%) and 98.5% (96.7% to 99.4%), respectively.”41

7d: “Compared to microscopy, the detection of microhaematuria on test strips had the highest sensitivity and specificity (sensitivity 75%, 95% CI 71% to 79%; specificity 87%, 95% CI 84% to 90%; 74 studies, 102,447 participants). For proteinuria, sensitivity was 61% and specificity was 82% (82,113 participants); and for leukocyturia, sensitivity was 58% and specificity 61% (1,532 participants). However, overall test accuracy of the urine reagent strips for microhaematuria and proteinuria did not differ when we compared separate populations (P = 0.25), or when direct comparisons within the same individuals were performed (paired studies; P = 0.21).”42

Explanation

The authors should provide results for the main index test(s) evaluated in the abstract and, if relevant, thresholds defining index test positivity. If no meta-analysis was done, the abstract should describe accuracy results across included studies—for example, by describing the range of estimates (example 7a). If meta-analysis was done, authors should include summary estimates of accuracy and an expression of statistical imprecision, such as confidence intervals, prediction intervals, or Bayesian credible intervals (examples 7b-d). If space allows, authors should report the number of studies and participants used to generate summary estimates for each index test (example 7b and 7d).

Measures of statistical inconsistency (“heterogeneity”) used in intervention reviews (such as I2) are usually not applicable in systematic reviews of DTA studies, and no consensus exists for alternative statistics. As such, the broader term “variability” was used in place of the term “inconsistency” in PRISMA-DTA. Variability of accuracy results could be mentioned in the abstract and may include results of main investigations of reasons for variability, such as subgroup analyses and meta-regression (example 7d).43

If the review aimed to compare tests, these results should be reported, preferably including relative or absolute differences in accuracy, with confidence intervals or tests for statistical significance (example 7d). If sensitivity analysis raised serious concerns about the robustness of the main analyses, this should be mentioned as well.
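
As a concrete illustration of the interval reporting discussed above: confidence intervals for sensitivity and specificity are usually computed on the logit scale, which keeps them inside [0, 1] and makes them asymmetric near the boundaries. The sketch below uses hypothetical single-study counts; in an actual DTA review, summary intervals would come from the hierarchical model reported under item A1, not from this simple calculation.

    import math
    from scipy.stats import norm

    def logit_interval(positives: int, total: int, level: float = 0.95):
        """Wald-type interval for a proportion, computed on the logit scale
        and back-transformed so it always stays within [0, 1]."""
        p = positives / total
        logit = math.log(p / (1 - p))
        se = math.sqrt(1 / positives + 1 / (total - positives))  # delta-method SE of the logit
        z = norm.ppf(0.5 + level / 2)
        expit = lambda x: 1 / (1 + math.exp(-x))
        return p, expit(logit - z * se), expit(logit + z * se)

    # Hypothetical counts: 85 of 100 participants with the target condition test positive
    print(logit_interval(85, 100))  # -> (0.85, ~0.77, ~0.91): asymmetric around 0.85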

PRISMA-DTA for Abstracts checklist, section 4: discussion

Item 9: Strengths and limitations

Provide a brief summary of the strengths and limitations of the evidence.

Examples

9a: “The spectrum of patients was relatively narrow in all studies, sample sizes were small, there was substantial incorporation bias, and blinding procedures were often incomplete.”44

9b: “The value of accuracy estimates is considerably undermined by the small number of included studies, and concerns about risk of bias relating to the index test and the reference standard.”34

9c: “We observed substantial variation in sensitivity and specificity of all tests, which was likely attributable to methodological differences and variations in the clinical characteristics of populations recruited.”45

Explanation

The abstract should briefly highlight the main strengths and limitations of the review process and the included evidence. Review limitations might include search restrictions (for example, number of databases, language, dates) and lack of independent study selection and data extraction by more than one person. Limitations of included evidence might include risk of bias (examples 9a-b), unavailability of data (examples 9a-b), variability of accuracy estimates (example 9c), imprecision (for example, due to few studies or small sample sizes; examples 9a-b), or low applicability of study findings (for example, due to patient selection within the included studies; example 9a). Such limitations can yield summary estimates of accuracy that do not reflect the “true” performance of a test or that have limited applicability in real world clinical use. Reporting all limitations in an abstract might be impossible, but authors should mention those they deem most important.

Item 10: Interpretation

Provide a general interpretation of the results and the important implications.

Examples

10a: “Compared with microscopy, Xpert offers better sensitivity for the diagnosis of pulmonary tuberculosis in children and its scale-up will improve access to tuberculosis diagnostics for children. Although Xpert helps to provide rapid confirmation of disease, its sensitivity remains suboptimum compared with culture tests. A negative Xpert result does not rule out tuberculosis. Good clinical acumen is still needed to decide when to start antituberculosis therapy and continued research for better diagnostics is crucial.”46

10b: “It might be too early to recommend its use because of the scarcity of reliable clinical data, heterogeneity in case definitions, and unstable accuracy estimates.”47

10c: “If the point estimates for Type 1 and Type 4 tests are applied to a hypothetical cohort of 1000 patients where 30% of those presenting with symptoms have P. falciparum, Type 1 tests will miss 16 malaria cases, and Type 4 tests will miss 26 malaria cases. The number of people wrongly diagnosed with P. falciparum malaria would be 34 with Type 1 tests, and nine with Type 4 tests.”41

Explanation

“Spin,” which refers to the reporting of findings in a way that makes test performance seem better than is justified by the study results, is common in abstracts of DTA systematic reviews.48 49 The abstract’s conclusion should summarise the evidence with wording that reflects potential limitations of the review and evidence and, ideally, account for the intended use of the test (example 10a; table 1). If insufficient evidence from well conducted studies exists to allow conclusions to be drawn, this should be made clear (example 10b). If the word count permits, providing readers with the numbers of patients who would be expected to obtain correct and erroneous test results, and the likely consequences, may help with interpretation of test accuracy results and differences between tests (example 10c).
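
Example 10c is a “test consequence” calculation: applying summary sensitivity and specificity to a hypothetical cohort. The short sketch below reproduces that style of arithmetic using the HRP-2 summary estimates from example 7c (sensitivity 95.0%, specificity 95.2%) and the 30% prevalence from example 10c; the counts in example 10c itself differ slightly because the review used point estimates specific to each test type.

    def test_consequences(sensitivity: float, specificity: float,
                          prevalence: float, cohort: int = 1000) -> dict:
        """Expected test outcomes when summary accuracy estimates are applied
        to a hypothetical cohort of a given size and disease prevalence."""
        diseased = cohort * prevalence
        healthy = cohort - diseased
        return {
            "detected cases (TP)": round(diseased * sensitivity),
            "missed cases (FN)": round(diseased * (1 - sensitivity)),
            "wrongly diagnosed (FP)": round(healthy * (1 - specificity)),
            "correctly ruled out (TN)": round(healthy * specificity),
        }

    # HRP-2 summary estimates (example 7c) in 1000 patients at 30% prevalence (example 10c)
    print(test_consequences(0.950, 0.952, 0.30))
    # -> {'detected cases (TP)': 285, 'missed cases (FN)': 15,
    #     'wrongly diagnosed (FP)': 34, 'correctly ruled out (TN)': 666}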

PRISMA-DTA for Abstracts checklist, section 5: other

Item 11: Funding

Indicate the primary source of funding for the review.

Examples

11a: “Primary funding source: Québec Health Research Fund and BD Diagnostic Systems.”50

11b: “Funding: No external funding.”51

Explanation

A conference abstract should include the main funding source(s) of the review (example 11a) or state that there was no specific funding (example 11b). Journals may require this to be reported elsewhere. This information enables readers to assess whether financial conflicts of interest occurred if, for instance, the test manufacturer provided funding for the review. Other financial conflicts of interest, such as when the inventors of a test are involved in the review,52 are relevant but not easily conveyed in the abstract. Ideally, whether financial support came from a for-profit company or a public funder should be made clear.

Item 12: Registration

Provide the registration number and the registry name.

Example

12a: “This study was registered with PROSPERO (CRD42018089545).”47

Explanation

Registration of systematic reviews is increasingly expected.53 54 Registration provides evidence that a review is being undertaken prospectively and provides a record of reviews that have been initiated, which reduces the risk of duplicated efforts and allows interested parties to contact reviewers. It also enables peer reviewers, editors, and readers to compare reported review methods against the registered record.55 As registries such as PROSPERO are typically open access, including the number and name of the registry may provide a useful additional source of information. Alternatives to citing an entry on a register include providing a link to an upload of the review protocol on a publicly available website (such as the Open Science Framework), preprint server (such as medRxiv.org), or journal publication (with the DOI).

Discussion

We developed the PRISMA-DTA for Abstracts checklist and have provided this explanation and elaboration document to help authors to improve the reporting of journal and conference abstracts of systematic reviews of DTA studies. This explanation and elaboration document is a companion to the checklist and the explanation and elaboration for PRISMA-DTA for full text reviews.9 10 It may also be useful as a pedagogical resource for people learning about DTA systematic review abstracts. PRISMA-DTA for Abstracts enriches the body of reporting guidelines for journal and conference abstracts,2 including CONSORT for Abstracts of randomised trials of interventions,56 PRISMA for Abstracts of systematic reviews,8 STROBE for Abstracts of observational studies,57 STARD for Abstracts of primary DTA studies,58 PRIO for Abstracts of overviews of systematic reviews,59 and TRIPOD for Abstracts of multivariable prediction models.60 In addition to supporting authors, these checklists can be used by editors, peer reviewers, and conference organisers to assess the completeness of abstracts submitted for publication or presentation. We also provided illustrative examples of real abstracts that comply with the checklist (box 1 and box 2).

An evaluation of adherence to the PRISMA-DTA for Abstracts checklist, based on 100 published DTA reviews (2017-2018), found abstracts to be insufficiently informative.11 Items reported in less than 50% of abstracts included items 2 (participants: 49%), 4 (search dates: 42%), 5 (methods for assessing risk of bias: 38%; methods for assessing applicability: 25%), 6 (characteristics of included studies (including reference standard): 13%), 9 (strengths: 8%; limitations: 26%), 11 (funding: 3%), and 12 (registration: 5%).

To ensure that abstracts of systematic reviews of DTA studies are sufficiently informative, we strongly recommend the use of structured abstracts.61 62 Recognising that many journals and conferences have their own abstract formatting requirements, we indicate the information that should be reported without specifying abstract sections. Authors should mention key features only once to make the best use of the limited space available. We encourage journals and organisers of scientific conferences to endorse the use of PRISMA-DTA for Abstracts. This may be done by incorporating the checklist into instructions to authors and by inviting peer reviewers to use it when evaluating study reports. The usual 250 word limit used by many journals and conferences may be a barrier to complete reporting. We invite journal editors and conference organisers to consider increasing their word limit to at least 300 words. For example, Radiology and The BMJ now allow abstracts up to 300 and 400 words long, respectively.

To enhance dissemination, all PRISMA-DTA for Abstracts material will be freely available on the EQUATOR (www.equator-network.org) and PRISMA (www.prisma-statement.org) websites. We also encourage the introduction of PRISMA-DTA and PRISMA-DTA for Abstracts in teaching programmes focusing on systematic reviews of DTA studies and the importance of transparent reporting of health research.

PRISMA-DTA for Abstracts presents a minimum set of items that should be reported in abstracts of systematic reviews of DTA studies. When possible, authors should also report other items from the full PRISMA-DTA checklist in the abstract,9 especially those deemed critical to their review question. For conferences that allow the inclusion of figures in the abstract, a chart describing the flow of study inclusion through the review is also welcomed. Other figures may include key forest and summary receiver operating characteristic plots or a test consequence graphic.63

The checklist aims at ensuring complete reporting, but it cannot guarantee that reviews adhere to principles of good research practice and research integrity. Guidance for appropriate methods to conduct systematic reviews of DTA studies can be found elsewhere (for example, the Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy).64 The abstract should be a fair and honest summary of the full study report. As in the full text, distorted and selective reporting of findings (“spin”) should be avoided.48 49 Clinical implications should be justified by the results, and an accurate description of limitations should be provided.

Abstracts are not a replacement for full text articles in informing clinical practice, policy decisions, or other research. However, they must present an accurate and trustworthy account of the research conducted and reported. The PRISMA-DTA for Abstracts checklist can guide authors in preparing an informative, complete, and fair summary of their review, thus increasing the value of the abstract to the clinical and scientific community.65 For full reports of systematic reviews of DTA studies, authors are encouraged to use the PRISMA-DTA checklist.9 10

Contributors: JFC, JJD, LH, JPS, DAK, CG, HAH, CJH, MML, PM, TAM, DM, JBR, AWSR, YT, MT, PW, BHW, BT, PMB, and MDFM contributed to PRISMA-DTA for Abstracts items and checklist generation. JFC, JJD, and LH wrote the first draft of the manuscript. All authors critically revised the manuscript and approved the final version. JJD, PMB, and MDFM supervised the work. MDFM is the guarantor. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding: No specific funding was received to develop this explanation and elaboration document. JJD is a UK National Institute for Health Research (NIHR) senior investigator emeritus. YT is funded by a UK NIHR postdoctoral fellowship. JJD and YT are supported by the NIHR Birmingham Biomedical Research Centre. MDFM is supported by the Canadian Institutes of Health Research (grant number 375751), the Canadian Agency for Drugs and Technologies in Health (CADTH), and the Standards for Reporting of Diagnostic Accuracy Studies Group (STARD). BT is a Fonds de recherche du Québec – Santé distinguished scholar and a Tier 1 Canada research chair. BHW is funded by a UK MRC clinician scientist fellowship (MR/N007999/1). DM is supported by a university research chair (uOttawa). The views expressed are those of the author(s) and not necessarily those of the NHS, NIHR, or Department of Health and Social Care.

Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

Provenance and peer review: Not commissioned; externally peer reviewed.

Patient and public involvement statement: Patients or the public were not involved in the design, conduct, reporting, or dissemination plans of our research.

References

1. Boutron I, Altman DG, Hopewell S, Vera-Badillo F, Tannock I, Ravaud P. Impact of spin in the abstracts of articles reporting results of randomized controlled trials in the field of cancer: the SPIIN randomized controlled trial. J Clin Oncol 2014;32:4120-6. doi:10.1200/JCO.2014.56.7503
2. Cohen JF, Korevaar DA, Boutron I, et al. Reporting guidelines for journal and conference abstracts. J Clin Epidemiol 2020;124:186-92. doi:10.1016/j.jclinepi.2020.04.012
3. Janackovic K, Puljak L. Reporting quality of randomized controlled trial abstracts in the seven highest-ranking anesthesiology journals. Trials 2018;19:591. doi:10.1186/s13063-018-2976-x
4. Korevaar DA, Cohen JF, Hooft L, Bossuyt PM. Literature survey of high-impact journals revealed reporting weaknesses in abstracts of diagnostic accuracy studies. J Clin Epidemiol 2015;68:708-15. doi:10.1016/j.jclinepi.2015.01.014
5. Hopewell S, Clarke M, Askie L. Reporting of trials presented in conference abstracts needs to be improved. J Clin Epidemiol 2006;59:681-4. doi:10.1016/j.jclinepi.2005.09.016
6. Korevaar DA, Cohen JF, de Ronde MW, Virgili G, Dickersin K, Bossuyt PM. Reporting weaknesses in conference abstracts of diagnostic accuracy studies in ophthalmology. JAMA Ophthalmol 2015;133:1464-7. doi:10.1001/jamaophthalmol.2015.3577
7. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 2009;339:b2535. doi:10.1136/bmj.b2535
8. Beller EM, Glasziou PP, Altman DG, et al, PRISMA for Abstracts Group. PRISMA for Abstracts: reporting systematic reviews in journal and conference abstracts. PLoS Med 2013;10:e1001419. doi:10.1371/journal.pmed.1001419
9. McInnes MDF, Moher D, Thombs BD, et al, PRISMA-DTA Group. Preferred reporting items for a systematic review and meta-analysis of diagnostic test accuracy studies: the PRISMA-DTA statement. JAMA 2018;319:388-96. doi:10.1001/jama.2017.19163
10. Salameh JP, Bossuyt PM, McGrath TA, et al. Preferred reporting items for systematic review and meta-analysis of diagnostic test accuracy studies (PRISMA-DTA): explanation, elaboration, and checklist. BMJ 2020;370:m2632. doi:10.1136/bmj.m2632
11. Salameh JP, McInnes MDF, Moher D, et al. Completeness of reporting of systematic reviews of diagnostic test accuracy based on the PRISMA-DTA reporting guideline. Clin Chem 2019;65:291-301. doi:10.1373/clinchem.2018.292987
12. McGrath TA, Alabousi M, Skidmore B, et al. Recommendations for reporting of systematic reviews and meta-analyses of diagnostic test accuracy: a systematic review. Syst Rev 2017;6:194. doi:10.1186/s13643-017-0590-8
13. Cohen JF, Korevaar DA, Altman DG, et al. STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration. BMJ Open 2016;6:e012799. doi:10.1136/bmjopen-2016-012799
14. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c869. doi:10.1136/bmj.c869
15. Schieda N, McInnes MD, Cao L. Diagnostic accuracy of segmental enhancement inversion for diagnosis of renal oncocytoma at biphasic contrast enhanced CT: systematic review. Eur Radiol 2014;24:1421-9. doi:10.1007/s00330-014-3147-4
16. Leeflang MM, Ang CW, Berkhout J, et al. The diagnostic accuracy of serological tests for Lyme borreliosis in Europe: a systematic review and meta-analysis. BMC Infect Dis 2016;16:140. doi:10.1186/s12879-016-1468-4
17. Steingart KR, Schiller I, Horne DJ, Pai M, Boehme CC, Dendukuri N. Xpert® MTB/RIF assay for pulmonary tuberculosis and rifampicin resistance in adults. Cochrane Database Syst Rev 2014;(1):CD009593. doi:10.1002/14651858.CD009593.pub3
18. Huang L, Kong Q, Liu Z, Wang J, Kang Z, Zhu Y. The diagnostic value of MR imaging in differentiating T staging of bladder cancer: a meta-analysis. Radiology 2018;286:502-11. doi:10.1148/radiol.2017171028
19. Woo S, Suh CH, Cho JY, Kim SY, Kim SH. Diagnostic performance of CT for diagnosis of fat-poor angiomyolipoma in patients with renal masses: a systematic review and meta-analysis. AJR Am J Roentgenol 2017;209:W297-307. doi:10.2214/AJR.17.18184
20. van Enst WA, Naaktgeboren CA, Ochodo EA, et al. Small-study effects and time trends in diagnostic test accuracy meta-analyses: a meta-epidemiological study. Syst Rev 2015;4:66. doi:10.1186/s13643-015-0049-8
21. Cohen JF, Korevaar DA, Wang J, Leeflang MM, Bossuyt PM. Meta-epidemiologic study showed frequent time trends in summary estimates from meta-analyses of diagnostic accuracy studies. J Clin Epidemiol 2016;77:60-7. doi:10.1016/j.jclinepi.2016.04.013
22. Sharifabadi AD, Korevaar DA, McGrath TA, et al. Reporting bias in imaging: higher accuracy is linked to faster publication. Eur Radiol 2018;28:3632-9. doi:10.1007/s00330-018-5354-x
23. Vollgraff Heidweiller-Schreurs CA, Korevaar DA, Mol BWJ, et al. Publication bias may exist among prognostic accuracy studies of middle cerebral artery Doppler ultrasound. J Clin Epidemiol 2019;116:1-8. doi:10.1016/j.jclinepi.2019.07.016
24. Cherpak LA, Korevaar DA, McGrath TA, et al. Publication bias: association of diagnostic accuracy in radiology conference abstracts with full-text publication. Radiology 2019;292:120-6. doi:10.1148/radiol.2019182206
25. Giljaca V, Nadarevic T, Poropat G, Nadarevic VS, Stimac D. Diagnostic accuracy of abdominal ultrasound for diagnosis of acute appendicitis: systematic review and meta-analysis. World J Surg 2017;41:693-700. doi:10.1007/s00268-016-3792-7
26. Alldred SK, Takwoingi Y, Guo B, et al. First trimester ultrasound tests alone or in combination with first trimester serum tests for Down’s syndrome screening. Cochrane Database Syst Rev 2017;3:CD012600. doi:10.1002/14651858.CD012600
27. Deeks JJ, Dinnes J, Takwoingi Y, et al, Cochrane COVID-19 Diagnostic Test Accuracy Group. Antibody tests for identification of current and past infection with SARS-CoV-2. Cochrane Database Syst Rev 2020;6:CD013652.
28. Lijmer JG, Mol BW, Heisterkamp S, et al. Empirical evidence of design-related bias in studies of diagnostic tests. JAMA 1999;282:1061-6. doi:10.1001/jama.282.11.1061
29. Whiting P, Rutjes AW, Reitsma JB, Glas AS, Bossuyt PM, Kleijnen J. Sources of variation and bias in studies of diagnostic accuracy: a systematic review. Ann Intern Med 2004;140:189-202. doi:10.7326/0003-4819-140-3-200402030-00010
30. Whiting PF, Rutjes AW, Westwood ME, et al, QUADAS-2 Group. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 2011;155:529-36. doi:10.7326/0003-4819-155-8-201110180-00009
31. Ochodo EA, van Enst WA, Naaktgeboren CA, et al. Incorporating quality assessments of primary studies in the conclusions of diagnostic accuracy reviews: a cross-sectional study. BMC Med Res Methodol 2014;14:33. doi:10.1186/1471-2288-14-33
32. Pammi M, Flores A, Versalovic J, Leeflang MM. Molecular assays for the diagnosis of sepsis in neonates. Cochrane Database Syst Rev 2017;2:CD011926. doi:10.1002/14651858.CD011926.pub2
33. Salameh JP, McInnes MDF, McGrath TA, Salameh G, Schieda N. Diagnostic accuracy of dual-energy CT for evaluation of renal masses: systematic review and meta-analysis. AJR Am J Roentgenol 2019;212:W100-5. doi:10.2214/AJR.18.20527
34. Hunt H, Stanworth S, Curry N, et al. Thromboelastography (TEG) and rotational thromboelastometry (ROTEM) for trauma induced coagulopathy in adult trauma patients with bleeding. Cochrane Database Syst Rev 2015;(2):CD010438. doi:10.1002/14651858.CD010438.pub2
35. McGrath TA, McInnes MD, Korevaar DA, Bossuyt PM. Meta-analyses of diagnostic accuracy in imaging journals: analysis of pooling techniques and their effect on summary estimates of diagnostic accuracy. Radiology 2016;281:78-85. doi:10.1148/radiol.2016152229
36. Simel DL, Bossuyt PM. Differences between univariate and bivariate models for summarizing diagnostic accuracy may not be large. J Clin Epidemiol 2009;62:1292-300. doi:10.1016/j.jclinepi.2009.02.007
37. Lange B, Cohn J, Roberts T, et al. Diagnostic accuracy of serological diagnosis of hepatitis C and B using dried blood spot samples (DBS): two systematic reviews and meta-analyses. BMC Infect Dis 2017;17(Suppl 1):700. doi:10.1186/s12879-017-2777-y
38. Amini A, Varsaneux O, Kelly H, et al. Diagnostic accuracy of tests to detect hepatitis B surface antigen: a systematic review of the literature and meta-analysis. BMC Infect Dis 2017;17(Suppl 1):698. doi:10.1186/s12879-017-2772-3
39. Wilson MP, Patel D, Murad MH, McInnes MDF, Katlariwala P, Low G. Diagnostic performance of MRI in the detection of renal lipid-poor angiomyolipomas: a systematic review and meta-analysis. Radiology 2020;296:511-20. doi:10.1148/radiol.2020192070
40. Nanda K, McCrory DC, Myers ER, et al. Accuracy of the Papanicolaou test in screening for and follow-up of cervical cytologic abnormalities: a systematic review. Ann Intern Med 2000;132:810-9. doi:10.7326/0003-4819-132-10-200005160-00009
41. Abba K, Deeks JJ, Olliaro P, et al. Rapid diagnostic tests for diagnosing uncomplicated P. falciparum malaria in endemic countries. Cochrane Database Syst Rev 2011;(7):CD008122. doi:10.1002/14651858.CD008122.pub2
42. Ochodo EA, Gopalakrishna G, Spek B, et al. Circulating antigen tests and urine reagent strips for diagnosis of active schistosomiasis in endemic areas. Cochrane Database Syst Rev 2015;(3):CD009579. doi:10.1002/14651858.CD009579.pub2
43. Naaktgeboren CA, Ochodo EA, Van Enst WA, et al. Assessing variability in results in systematic reviews of diagnostic studies. BMC Med Res Methodol 2016;16:6. doi:10.1186/s12874-016-0108-4
44. Brazzelli M, Sandercock PA, Chappell FM, et al. Magnetic resonance imaging versus computed tomography for detection of acute vascular lesions in patients presenting with stroke symptoms. Cochrane Database Syst Rev 2009;(4):CD007424. doi:10.1002/14651858.CD007424.pub2
45. Randall M, Egberts KJ, Samtani A, et al. Diagnostic tests for autism spectrum disorder (ASD) in preschool children. Cochrane Database Syst Rev 2018;7:CD009044. doi:10.1002/14651858.CD009044.pub2
46. Detjen AK, DiNardo AR, Leyden J, et al. Xpert MTB/RIF assay for the diagnosis of pulmonary tuberculosis in children: a systematic review and meta-analysis. Lancet Respir Med 2015;3:451-61. doi:10.1016/S2213-2600(15)00095-8
47. Cohen JF, Ouziel A, Matczak S, et al. Diagnostic accuracy of serum (1,3)-beta-d-glucan for neonatal invasive candidiasis: systematic review and meta-analysis. Clin Microbiol Infect 2020;26:291-8. doi:10.1016/j.cmi.2019.09.010
48. McGrath TA, McInnes MDF, van Es N, Leeflang MMG, Korevaar DA, Bossuyt PMM. Overinterpretation of research findings: evidence of “spin” in systematic reviews of diagnostic accuracy studies. Clin Chem 2017;63:1353-62. doi:10.1373/clinchem.2017.271544
49. McGrath TA, Bowdridge JC, Prager R, et al. Overinterpretation of research findings: evaluation of “spin” in systematic reviews of diagnostic accuracy studies in high-impact factor journals. Clin Chem 2020;66:915-24. doi:10.1093/clinchem/hvaa093
50. Merckx J, Wali R, Schiller I, et al. Diagnostic accuracy of novel and traditional rapid tests for influenza infection compared with reverse transcriptase polymerase chain reaction: a systematic review and meta-analysis. Ann Intern Med 2017;167:394-409. doi:10.7326/M17-0848
51. Korevaar DA, Crombag LM, Cohen JF, Spijker R, Bossuyt PM, Annema JT. Added value of combined endobronchial and oesophageal endosonography for mediastinal nodal staging in lung cancer: a systematic review and meta-analysis. Lancet Respir Med 2016;4:960-8. doi:10.1016/S2213-2600(16)30317-4
52. Hansen C, Lundh A, Rasmussen K, Hróbjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev 2019;8:MR000047. doi:10.1002/14651858.MR000047.pub2
53. Page MJ, Shamseer L, Tricco AC. Registration of systematic reviews in PROSPERO: 30,000 records and counting. Syst Rev 2018;7:32. doi:10.1186/s13643-018-0699-4
54. Rombey T, Doni K, Hoffmann F, Pieper D, Allers K. More systematic reviews were registered in PROSPERO each year, but few records’ status was up-to-date. J Clin Epidemiol 2020;117:60-7. doi:10.1016/j.jclinepi.2019.09.026
55. Tricco AC, Cogo E, Page MJ, et al. A third of systematic reviews changed or did not specify the primary outcome: a PROSPERO register study. J Clin Epidemiol 2016;79:46-54. doi:10.1016/j.jclinepi.2016.03.025
56. Hopewell S, Clarke M, Moher D, et al, CONSORT Group. CONSORT for reporting randomised trials in journal and conference abstracts. Lancet 2008;371:281-3. doi:10.1016/S0140-6736(07)61835-2
57. Equator Network. Draft STROBE checklist for conference abstracts. 2017. www.equator-network.org/reporting-guidelines/strobe-abstracts.
58. Cohen JF, Korevaar DA, Gatsonis CA, et al, STARD Group. STARD for Abstracts: essential items for reporting diagnostic accuracy studies in journal or conference abstracts. BMJ 2017;358:j3751. doi:10.1136/bmj.j3751
59. Bougioukas KI, Bouras E, Apostolidou-Kiouti F, Kokkali S, Arvanitidou M, Haidich AB. Reporting guidelines on how to write a complete and transparent abstract for overviews of systematic reviews of health care interventions. J Clin Epidemiol 2019;106:70-9. doi:10.1016/j.jclinepi.2018.10.005
60. Heus P, Reitsma JB, Collins GS, et al. Transparent reporting of multivariable prediction models in journal and conference abstracts: TRIPOD for Abstracts. Ann Intern Med 2020. doi:10.7326/M20-0193
61. Ad Hoc Working Group for Critical Appraisal of the Medical Literature. A proposal for more informative abstracts of clinical articles. Ann Intern Med 1987;106:598-604. doi:10.7326/0003-4819-106-4-598
62. Taddio A, Pain T, Fassos FF, Boon H, Ilersich AL, Einarson TR. Quality of nonstructured and structured abstracts of original research articles in the British Medical Journal, the Canadian Medical Association Journal and the Journal of the American Medical Association. CMAJ 1994;150:1611-5.
63. Whiting P, Davenport C. Understanding test accuracy research: a test consequence graphic. Diagn Progn Res 2018;2:2. doi:10.1186/s41512-017-0023-0
64. Deeks J, Bossuyt P, Gatsonis C. Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy version 1.0. The Cochrane Collaboration, 2010.
65. Glasziou P, Altman DG, Bossuyt P, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet 2014;383:267-76. doi:10.1016/S0140-6736(13)62228-X
