First, let me disclose that the Standards for Reporting Diagnostic Accuracy Studies (STARD) 2003 statement was published and endorsed by Clinical Biochemistry (1). Second, I must also disclose that I was training in clinical chemistry at that time and did not truly appreciate the importance of this document. Quite the omission, considering the Editor-in-Chief of Clinical Biochemistry at that time was also my supervisor.
So why did the STARD 2003 document fly under my radar?
Part of the reason may be that trainees, laboratory professionals, clinical staff and the public at large are continuously bombarded by expert group statements and guidelines. So how is one supposed to know that this is the document to read? Dissemination via the scientific literature might alert individuals to an important report or guideline, and targeted communications in field-specific journals might further promote a document’s uptake. Indeed, the STARD 2003 statement appeared in two dozen journals, yet the authors noted that since its publication “more of the essential items are being reported, but the situation remains far from optimal” (2).
So will uptake of the STARD 2015 document fare any better than that of the 2003 document? Before tackling this question, it may be useful to compare and contrast the two documents. To start, the list of items has grown to 30 (really 34, as items #10, #12, #13 and #21 each comprise two parts, a and b) from 25 items in the original document (1,2). Reading both lists, it is evident that the core elements have remained the same, but those who have used the STARD 2003 checklist will need to reorient themselves to the STARD 2015 list, which differs substantially. The 2015 document has expanded certain items and added new elements: structured abstract, intended use and clinical role of the test, study hypotheses, sample size, structured discussion, registration, protocol, and sources of funding (see Table 2 of the STARD 2015 document for further explanation of the new items) (2). Of these new items, the “study hypotheses” item is very similar to item #2 (“State the research questions or study aims…”) in the 2003 document; the remaining items, however, are novel.

In fact, the 2015 document has an entirely new section (i.e., Other Information) that lists registration, protocol and sources of funding. The last of these, sources of funding for the study, should be available to the readership/public and is applicable to all studies, not just diagnostic research studies. My impression of the first two items in this section, however, is that authors may find them difficult to address. To make these items more amenable to completion, I suggest adding the following phrases (in italics) to the respective items: #28. Registration number and name of registry *if applicable*; #29. Where the full study protocol can be accessed *if available*. The rationale for these suggestions is that not every diagnostic study is a prospective study that can be registered, and not every full study protocol can be made publicly available, as the document may contain confidential information. Indeed, the CONSORT 2010 statement for reporting randomized trials phrases the corresponding item in its Other information section as follows: “24. Where the full trial protocol can be accessed, if available” (3). Notwithstanding these minor points, the STARD 2015 report is an excellent document, with the updated text and additions aimed at improving its utility.
So back to the original question: will uptake of the STARD 2015 document fare any better than that of the 2003 document? My optimistic answer is yes. The reason for my optimism can be found in the following text from the STARD 2015 document: “We see this list not as the final product, but as the starting point for building more specific instruments to stimulate complete and transparent reporting, such as a checklist and a writing aid for authors, tools for reviewers and editors, instruction videos, and teaching materials, all based on this STARD list of essential items.” (2). If the STARD 2015 document can indeed serve as the basis of the key guide for diagnostic accuracy studies, its scope broadens, insofar as it now reaches beyond paper submissions and peer review to become part of the training and evaluation of diagnostic research studies by the interested community. A dozen years ago, as a trainee, I let the STARD document fly right by me; with STARD 2015 and its emphasis on increasing value and reducing waste, I suspect all trainees will become aware of this document, as it may well be their key guide for evaluating diagnostic research studies. Education is an important step towards adoption and, with STARD 2015, class is now in session.
Acknowledgements
None.
Footnotes
Provenance: This is a Guest Editorial commissioned by Editorial Board Member Prof. Giuseppe Lippi (Section of Clinical Biochemistry, University of Verona, Verona, Italy).
Conflicts of Interest: Dr. Kavsak is currently the Editor-in-Chief for Clinical Biochemistry.
References
- 1. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Clin Biochem 2003;36:2-7.
- 2. Bossuyt PM, Reitsma JB, Bruns DE, et al. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. Clin Chem 2015;61:1446-52.
- 3. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. J Pharmacol Pharmacother 2010;1:100-7.