Editorial. Clin Microbiol Infect. 2022 Oct 7;29(4):422–423. doi: 10.1016/j.cmi.2022.10.006

Responsible research: using the right methodology

Mariska M.G. Leeflang
PMCID: PMC9536870; PMID: 36209992

Trust in science is crucial, and concerns about the credibility of scientists may undermine evidence-based policymaking. During the coronavirus disease 2019 pandemic, scientific credibility was challenged first by a lack of scientific evidence to back up policy decisions, and then by an overwhelming increase in scientific publications, often of poor quality [1]. Concerns about public trust in science are not new, however. In 1999, researchers argued that the scientific community had a credibility problem because of its involvement in the genetic modification of crops [2]. More recently, similar discussions have arisen after the pre-pandemic decline in vaccination rates and after the so-called 'reproducibility crisis' was called out [[3], [4], [5]].

Trust must be earned. If we scientists worry about gaining the trust of society, we must realize that we are responsible for the research we do, the claims we make, and the reports we publish. We should take responsibility for asking relevant scientific questions, applying appropriate designs and methods, reporting in a usable and unbiased way, and presenting the question, methods, and results in accessible manuscripts [6].

What constitutes a relevant research question may be a topic for debate. Whereas some researchers argue that research should be curiosity driven, societal stakeholders and funders would rather see research questions driven by their potential for societal impact. Clinical research involving human beings carries an ethical obligation not to burden research participants merely because the researcher is curious. Relevant clinical research questions are therefore preferably derived in collaboration with patients and stakeholders [7,8]. The expertise of a researcher should lie in his or her ability to rephrase questions from stakeholders into answerable, researchable scientific questions, e.g. by using the Participant-Intervention-Comparison-Outcome (PICO) framework [9].

Research questions drive both the methodology used and the interpretation of the results. For example, a research question may aim to establish a causal link between an exposure or intervention and an outcome. In that case, one would ideally use an experimental design, such as a randomized controlled trial. If that is not possible, an observational study would require attention to potential confounders and mediators [10]. The statistical method used would probably be a multivariable model built to adjust for confounding. However, a similar multivariable model may also be used to accurately predict a certain outcome, which requires a different interpretation [11].

Reporting a study in an unbiased and usable way can be achieved through the use of reporting guidelines. More than 500 reporting guidelines can be found on the website of the EQUATOR Network (https://www.equator-network.org/); a set of reporting guidelines exists for almost every specific research design. Some principles, however, hold for every design. Reporting of methods and results should be complete, and not limited to positive results only. For each step in the research design, all details should be reported: the origin of the study subjects (whether humans, mice, cells, or study reports), including the selection criteria; all interventions and measurements performed on these subjects, described in such a way that a colleague would be able to replicate them; and all statistical analyses, including the software packages, models, and tests used, again described in such a way that a colleague could replicate them. Finally, scientific reports should be free of 'spin': overoptimistic reporting of results and conclusions, or making inferences that cannot be made.

These reports may be published in an open access journal; however, being open and transparent involves more than publishing in open access journals. Studies can be registered before they start, and their protocols can be published, including all planned analyses and outcomes. Most scientific journals require clinical trials to be registered before they start, which enables a comparison between what was planned and what was actually done. Although some authors claim that preregistration is undesirable for exploratory research, most observational studies can be planned beforehand. Another way to make research publicly available is the publication of so-called preprints: versions of a scientific manuscript before it is submitted to a journal for publication. Preprints are usually not peer reviewed, although the idea behind most preprint servers is that readers can comment on, and thus review, the manuscripts before publication.

Scientific credibility requires responsible research. This involves research integrity, transparency, and reproducibility, and it requires using the right methods for the right questions. For this theme issue, we invited three author teams with specific methodological expertise to each address a specific research type in clinical microbiology and infectious diseases. They address state-of-the-art methodology for designing primary studies on antimicrobial resistance, for conducting systematic reviews of prognostic models, and for developing evidence-based guidelines on diagnostic questions [[12], [13], [14]].

First, van Leth and Schultsz [12] explain the pitfalls and advantages of population-based surveys for antimicrobial resistance. These surveys are used to determine the prevalence of antimicrobial resistance in a country or region and, compared with laboratory-based studies, provide a more realistic and clinically relevant estimate of antimicrobial resistance. Although population and environment surveys may be more challenging than laboratory-based studies, they do allow for a One Health approach, combining veterinary, human, and environmental data.

Second, Damen et al. [13] provide guidance for conducting a systematic review of prognostic modelling studies, including guidance on data extraction, quality assessment, and data analysis. They start by explaining that 'prognosis studies' may encompass a variety of designs and outcomes, and then focus on prognostic model studies, which combine 'multiple prognostic factors in one multivariable prognostic model aimed at making predictions for occurrence of a certain outcome'. The authors also explain how the implications and usefulness of such reviews depend on the complete reporting of primary studies.

Third, El Mikati et al. [14] explain how the process of developing trustworthy guidelines should be systematic and transparent, and should be supported by all stakeholders. They present a case example of four diagnostic coronavirus disease 2019 guidelines developed for the Infectious Diseases Society of America when evidence was scarce. For these guidelines, a rapid and living systematic review methodology was adopted, and the Grading of Recommendations Assessment, Development and Evaluation approach was followed to ensure transparency and structure.

Enjoy reading these narrative reviews and use them as guidance whenever relevant. As editors, we try to ensure transparent and unbiased reporting; we therefore encourage authors to follow the reporting guidelines as well. Together we are the scientific community. Let us therefore take our responsibility and restore trust in science by asking relevant questions, applying appropriate methods, and reporting in an unbiased and transparent way.

Transparency declaration

Dr. Leeflang is a methodologist, involved both in Cochrane and in the Grading of Recommendations Assessment, Development and Evaluation working group and has previously collaborated with the authors of the narrative reviews.

Editor: L. Leibovici

References

1. Lipworth W., Gentgall M., Kerridge I., Stewart C. Science at warp speed: medical research, publication, and translation during the COVID-19 pandemic. J Bioeth Inq. 2020;17:555–561. doi: 10.1007/s11673-020-10013-y.
2. Haerlin B., Parr D. How to restore public trust in science. Nature. 1999;400:499. doi: 10.1038/22867.
3. de Figueiredo A., Simas C., Karafillakis E., Paterson P., Larson H.J. Mapping global trends in vaccine confidence and investigating barriers to vaccine uptake: a large-scale retrospective temporal modelling study. Lancet. 2020;396:898–908. doi: 10.1016/s0140-6736(20)31558-0.
4. Ioannidis J.P. Why most published research findings are false. PLoS Med. 2005;2:e124. doi: 10.1371/journal.pmed.0020124.
5. Fanelli D. Opinion: is science really facing a reproducibility crisis, and do we need it to? Proc Natl Acad Sci U S A. 2018;115:2628–2631. doi: 10.1073/pnas.1708272114.
6. Chalmers I., Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374:86–89. doi: 10.1016/s0140-6736(09)60329-9.
7. Elwyn G., Crowe S., Fenton M., Firkins L., Versnel J., Walker S., et al. Identifying and prioritizing uncertainties: patient and clinician engagement in the identification of research questions. J Eval Clin Pract. 2010;16:627–631. doi: 10.1111/j.1365-2753.2009.01262.x.
8. Jongsma K.R., Milota M.M. Establishing a multistakeholder research agenda: lessons learned from a James Lind Alliance Partnership. BMJ Open. 2022;12. doi: 10.1136/bmjopen-2021-059006.
9. Haynes R.B., Sackett D.L., Guyatt G.H., Tugwell P.S. Clinical epidemiology: how to do clinical practice research. 3rd ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2006.
10. Vandenbroucke J.P. Observational research, randomised trials, and two views of medical science. PLoS Med. 2008;5:e67. doi: 10.1371/journal.pmed.0050067.
11. Schooling C.M., Jones H.E. Clarifying questions about 'risk factors': predictors versus explanation. Emerg Themes Epidemiol. 2018;15:10. doi: 10.1186/s12982-018-0080-z.
12. van Leth F., Schultsz C. Unbiased antimicrobial resistance prevalence estimates through population-based surveillance. Clin Microbiol Infect. 2022;29:429–433. doi: 10.1016/j.cmi.2022.05.006.
13. Damen J.A.A., Moons K.G.M., van Smeden M., Hooft L. How to conduct a systematic review and meta-analysis of prognostic model studies. Clin Microbiol Infect. 2022;29:434–440. doi: 10.1016/j.cmi.2022.07.019.
14. El Mikati I.K. Testing guidelines during times of crisis: challenges and limitations of developing rapid and living. Clin Microbiol Infect. 2023;29:424–428. doi: 10.1016/j.cmi.2023.01.020.

Articles from Clinical Microbiology and Infection are provided here courtesy of Elsevier
