J Eval Clin Pract. 2021 Feb 13;27(3):708–715. doi: 10.1111/jep.13548

COVID‐19 and the generation of novel scientific knowledge: Evidence‐based decisions and data sharing

Lucie Perillat 1, Brian S Baigrie 2,3
PMCID: PMC8013509  PMID: 33580747

Abstract

Rationale, aims and objectives

The COVID‐19 pandemic has impacted every facet of society, including medical research. This paper is the second part of a series of articles that explore the intricate relationship between the different challenges that have hindered biomedical research and the generation of novel scientific knowledge during the COVID‐19 pandemic. In the first part of this series, we demonstrated that, in the context of COVID‐19, the scientific community has been faced with numerous challenges with respect to (1) finding and prioritizing relevant research questions and (2) choosing study designs that are appropriate for a time of emergency.

Methods

During the early stages of the pandemic, research conducted on hydroxychloroquine (HCQ) sparked several heated debates with respect to the scientific methods used and the quality of knowledge generated. Research on HCQ is used as a case study in both papers. The authors explored biomedical databases, peer‐reviewed journals, pre‐print servers and media articles to identify relevant literature on HCQ and COVID‐19, and examined philosophical perspectives on medical research in the context of this pandemic and previous global health challenges.

Results

This second paper demonstrates that a lack of research prioritization and methodological rigour resulted in the generation of fleeting and inconsistent evidence that complicated the development of public health guidelines. The reporting of scientific findings to the scientific community and general public highlighted the difficulty of finding a balance between accuracy and speed.

Conclusions

The COVID‐19 pandemic presented challenges in terms of (3) evaluating evidence for the purpose of making evidence‐based decisions and (4) sharing scientific findings with the rest of the scientific community. This second paper demonstrates that the four challenges outlined in the first and second papers have often compounded each other and have contributed to slowing down the creation of novel scientific knowledge during the COVID‐19 pandemic.

Keywords: epistemology, evidence‐based medicine, medical research, philosophy of medicine

1. INTRODUCTION

The COVID‐19 pandemic has impacted every facet of society and has created a multitude of challenges for national and supranational organizations, such as the WHO. COVID‐19 is caused by a novel coronavirus, and the scientific community has been faced with the daunting task of creating a novel model for this pandemic, or in other words, creating novel science. This series of papers explores the intricate relationship between the different challenges that have hindered biomedical research and the generation of scientific knowledge during the COVID‐19 pandemic. Like the first, this second paper uses research on hydroxychloroquine (HCQ) as a case study.

In the previous article, it was argued that, in the context of the COVID‐19 pandemic, the scientific community has been faced with challenges with respect to (1) finding and prioritizing relevant research questions and (2) choosing study designs that are appropriate for a time of emergency. First, a lack of research prioritization resulted in redundant research efforts and the dispersal of scarce resources (funding, hospital infrastructure, staff and patient base). This duplication, combined with poor‐quality research, has greatly contributed to slowing down the creation of novel scientific knowledge. With respect to study designs, the previous paper demonstrated that members of the scientific community took part in heated debates regarding the most appropriate design for an emergency. These disputes, as well as the overall low methodological quality of studies on HCQ, suggest that methodological rigour and the notion of design complementarity have sometimes been abandoned.

This follow‐up paper will now examine the challenges presented by the COVID‐19 pandemic in terms of (3) evaluating evidence for the purpose of making evidence‐based decisions and (4) sharing scientific findings with the rest of the scientific community and the general public. This second paper will demonstrate how these challenges and those presented in the first paper have compounded each other to hinder biomedical research and the generation of novel scientific knowledge during the COVID‐19 pandemic.

2. MAKING EVIDENCE‐BASED DECISIONS

Questions about what kinds of evidence should be used, how evidence is to be evaluated and whether the answers to these questions change during a health emergency have long been discussed. This long‐standing debate among decision‐makers, clinicians, researchers and philosophers of science has underpinned most discussions around the generation of novel scientific knowledge during the COVID‐19 pandemic. Two approaches for making decisions during emergencies are staples in the biomedical literature: the precautionary approach* and the evidence‐based approach. 1 The precautionary approach is often used to justify the implementation of non‐pharmaceutical interventions. However, it is generally seen as harder to justify in the case of pharmaceutical interventions, given the perceived risks involved. 2 Three factors, which this paper examines in turn, might explain why making evidence‐based decisions on investigational drugs has been difficult in the context of this pandemic.

2.1. The controversial nature of evidence

Striving to understand the meaning of the term ‘evidence’ has been the essence of a long‐standing debate that has yet to find a definitive answer. The growing influence of evidence‐based medicine (EBM) inspired evaluation frameworks that categorize studies as randomized controlled trials (RCTs) or non‐RCTs in a manner that is oblivious to the diversity of designs.† 3 The idea that RCTs are the ‘gold standard’ of evidence shapes most of the discussions regarding the efficacy of HCQ and has led to the neglect of relevant pieces of evidence. Most systematic reviews, including that of the WHO, 4 only take into consideration RCTs testing HCQ and automatically discard all other evidence. However, findings from non‐RCT studies have sometimes been more robust and generalizable. While RCTs are often considered the ideal design to determine causal inferences and reduce biases, they should not be considered flawless. 5 , 6 Three limitations are significant in the context of COVID‐19:

  1. Inability to draw robust causal inferences. Making causal inferences is essential in the context of the COVID‐19 pandemic since any proposed intervention must be accompanied by confidence that the intervention will change the outcome. Borgerson notes that ‘claims about the special ability of RCTs to isolate causes refer to probabilistic causes and downplay the possibility that mechanistic causes could be just as well established, just as epistemically strong, and just as useful in medical practice’.‡ 5 (p.222) In the context of COVID‐19, this is an issue since clinical trials were launched, and decisions were made, without a thorough understanding of the mechanisms of action and transmission of SARS‐CoV‐2. New information from lab‐based, mechanistic studies has sometimes undermined clinical trials.§

  2. Randomization issues. Randomization is thought to eliminate confounding factors, thereby allowing researchers to isolate the intervention's effects. While randomization certainly has epistemic and scientific value, Worrall has argued persuasively that it only reduces the likelihood that confounding factors will affect the results but does not eliminate it.¶ 6 Moreover, even approximate balance between arms requires sample sizes far larger than those used in the studies on HCQ (see the simulation sketch following this list). In the context of COVID‐19, an exclusive focus on randomization has sometimes been an obstacle to recognizing the quality of findings generated by retrospective cohort studies.**

  3. Lack of external validity. While an RCT is considered, in the evidence hierarchy, the best design to ensure internal validity, it is not necessarily the best design for generalizing results. Black argues that generalization to the whole population is more easily achieved with an observational design (since it usually has broad inclusion criteria and preserves the context of care). 7 In the context of the COVID‐19 pandemic, the generalization of research findings has often been a challenge. For example, conclusions obtained by Gautret and colleagues 8 on the efficacy of HCQ could not be reproduced by Molina and colleagues. 9 Thus, their conclusions are not externally valid (either because they are not internally valid, the methods are not reproducible, or the population sampled in the second study was fundamentally different from that of the first study).
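To make Worrall's point about randomization concrete, the following is a minimal simulation sketch of our own; the 30% confounder prevalence, the 10‑percentage‑point threshold and the function name are illustrative assumptions, not figures drawn from any HCQ trial. It estimates how often simple 1:1 randomization leaves the two arms of a trial imbalanced on a single binary confounder:

```python
# Sketch (our illustration, not from the paper): how often does simple 1:1
# randomization leave two trial arms imbalanced on one binary confounder?
# Prevalence and threshold below are assumed values for illustration.
import random

def imbalance_rate(n_patients, prevalence=0.3, threshold=0.10, n_sims=10_000):
    """Fraction of simulated trials in which the two arms differ by more than
    `threshold` in the proportion of patients carrying the confounder."""
    imbalanced = 0
    for _ in range(n_sims):
        # Each patient independently carries the confounder with prob. `prevalence`.
        patients = [random.random() < prevalence for _ in range(n_patients)]
        random.shuffle(patients)  # random allocation into two equal arms
        half = n_patients // 2
        arm_a, arm_b = patients[:half], patients[half:]
        gap = abs(sum(arm_a) / len(arm_a) - sum(arm_b) / len(arm_b))
        if gap > threshold:
            imbalanced += 1
    return imbalanced / n_sims

if __name__ == "__main__":
    for n in (30, 100, 1000):  # 30 echoes the size of the earliest HCQ trials
        print(f"n = {n:4d}: P(arm gap > 10 points) ~ {imbalance_rate(n):.3f}")
```

Under these assumptions, roughly half of simulated 30‑patient trials show such an imbalance, while it becomes vanishingly rare at 1000 patients, which illustrates why randomization's guarantees are asymptotic rather than absolute.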

Given these limitations, proponents of EBM acknowledge that ‘there are always exceptions to the general rules’. 10 (p.165) Nevertheless, it is arguable that non‐RCT designs and mechanistic studies should not be the exception but, instead, be considered complementary to RCTs. 7 Insights gained from research on HCQ and the theoretical limitations outlined above show that scientific rigour, although crucial, cannot be restricted to the use of RCTs. The scientific community should critically appraise the evidence available on a case‐by‐case basis, instead of relying on a set of predefined criteria.

2.2. Inconsistent and fleeting evidence

Upshur reminds us that ‘all evidence is capable of being overturned or modified in light of new findings’, 1 (p.109) which further complicates making evidence‐based decisions. Evidence on HCQ has sometimes stood uncontested for only a few days before being invalidated by new findings. Moreover, as Russell and colleagues 11 suggest, the generation and evaluation of evidence cannot be completely judgement‐free. As such, basing decisions on a single study is problematic though, as we have seen, such decisions have been made frequently during the COVID‐19 pandemic.†† The threshold of evidence required to take action is ambiguous: there is always a tension between wanting to take immediate action and gathering more evidence. 1 However, the biological, physiological and pharmacological complexity at work does not allow for rushed decisions regarding vaccine and drug approval.

2.3. Evaluating evidence: A time‐consuming process

Making decisions based on a body of knowledge requires that the evidence first be evaluated. However, conducting systematic reviews takes time (between 6 and 24 months). 12 , 13 As such, rapid reviews were developed to evaluate evidence in less than 3 months and are commonly used during health emergencies. 14 Several methodological modifications are used to fast‐track the process, such as limiting the scope, the outcomes of interest and the number of databases reviewed, adding more reviewers or defining more restrictive search criteria. 14 Two studies show that very few differences exist between the conclusions reached by the two review types. 15 , 16 Interestingly, however, there is no standard methodology for conducting rapid reviews – which is not an issue as long as the authors are transparent about their methods. 17 In the context of COVID‐19, one problem has been precisely this lack of transparency regarding the methods used in rapid and systematic reviews. Ruano and colleagues report that, of the 18 peer‐reviewed systematic reviews published on COVID‐19 up to 24 March 2020, 13 were rated ‘critically low’ quality by AMSTAR 2. 18 (p.2) This issue is compounded by a tendency to consider RCTs as the only source of evidence, thereby ignoring a valuable part of the knowledge base.

2.4. Non‐evidence‐based and rushed decisions

Given the limitations regarding the nature of evidence and the complexity of evaluating evidence in a timely manner, some might ask whether basing all decisions solely on evidence is meaningful. National public health leaders have often portrayed their recommendations and injunctions as evidence‐based, but what is intended by this declaration is not entirely clear. Several organizations have modified their guideline development process. During an emergency, the WHO 14 is no longer bound to base decisions on systematic reviews and can rely exclusively on expert opinion,‡‡ which, interestingly, ranks at the bottom of the evidence hierarchy. The FDA has also developed ways to expedite the approval of therapeutics (fast‐track, breakthrough therapy, accelerated approval and priority review), 19 which have been used, and proved effective, during emergencies. However, the threshold of evidence required to approve a drug under a fast‐track approach remains unclear, especially given the fleeting and inconsistent nature of evidence.

While it might not be sustainable to maintain the traditional standards of evidence during a pandemic, disproportionately lowering these standards might also be problematic. Several rushed decisions based on a single study were made, 20 such as the WHO's decision to halt HCQ treatment arms on 25 May 2020 and resume them on 3 June. 21 Other examples of rushed decisions include the FDA's Emergency Use Authorization for HCQ on 28 March 2020, 22 which was revoked on 15 June, 23 and the addition of HCQ to the WHO's list of prioritized drugs (13 March). 24 Chen and colleagues' 25 study on 30 patients was the only completed study on HCQ that could be evaluated by peers before that date. Thus, it can be argued that this decision was primarily influenced by the growing international media coverage of HCQ. Retrospectively, and considering the number of trials that stopped enrolment in their HCQ arms, 26 , 27 these decisions seem to have been rushed. They also resulted in HCQ shortages for patients with conditions other than COVID‐19 and in more frequent self‐medication incidents. 28

Theoretically, the implementation of non‐pharmaceutical interventions (NPIs) might not need to be supported by as much evidence and can be safely justified by the precautionary approach (given the low risks involved). Conversely, the approval of pharmaceutical interventions should be supported by much more evidence, given the perceived risks associated with them. However, what has happened since January 2020 is precisely the opposite: policymakers have sometimes waited for extensive evidence before implementing NPIs but have made rushed decisions regarding pharmaceutical interventions. This can be explained, in part, by the fact that, for NPIs, adherence is more easily obtained if the population believes that the intervention is scientifically supported. On the other hand, the decision to allow the use of HCQ in the clinical setting and the FDA's emergency use authorization can be seen as ways of delegating decisions to clinicians' expert judgement. In that case, patients' adherence is facilitated by their trust in their family doctors, whom they see as authority figures.

The fleeting nature of evidence, as well as the complexity of evaluating a body of knowledge in a timely manner, has been an obstacle to the development of guidelines during the COVID‐19 pandemic. It has not always been possible to sustain prevailing standards of evidence. While evaluating methodological rigour is essential, other criteria should be taken into account, notably whether the intervention can be administered easily and equitably, is acceptable to patients and has a favourable cost–benefit profile. This is not something an RCT can always determine. 29 These considerations support the idea that different kinds of studies might be more appropriate depending on whether the prioritized objective is to determine the intervention's effects, produce generalizable results or draw robust causal inferences.§§ For evidence to be appropriately used in public health decision making, both the reliability (often assessed using tools such as GRADE) and the relevance of evidence must be evaluated. 30 In the context of this pandemic, evidence that is relevant to the issue at hand may not exclusively originate from RCTs and can just as well be found in observational and mechanistic studies.

3. SHARING SCIENTIFIC FINDINGS

The fast reporting of accurate scientific knowledge also proved to be a challenge during the COVID‐19 pandemic. The sharing of scientific evidence contributes to the generation of new knowledge by allowing scientists to build on others' work and find new, relevant research questions. Policymakers also need to access these findings quickly and easily in order to readjust their decisions as new evidence is generated. During a health emergency, the need for rapid reporting of scientific knowledge must not come at the cost of accuracy. The reporting of inaccurate findings is detrimental both to future research efforts and to the public's perception of the pandemic.

Issues regarding the rapid reporting of scientific findings were already debated during past health emergencies. Concerns first arose in 2007, during the H5N1 outbreak in Indonesia, when the country refused to share the virus' genome with the WHO.¶¶ 31 The Indonesian government feared that, if the genome were shared, the country would not derive any benefit from the development of future vaccines or therapeutics. This crisis incentivized the WHO to develop the Pandemic Influenza Preparedness (PIP) network to ensure that benefits derived from the sharing of genome data would be returned to local populations at a price they could afford. 31 (p.21) The PIP network purports to facilitate the sharing of genome sequences and to create a framework whereby industry has to assist developing countries in accessing genome information. 32 Nevertheless, because the WHO has no international jurisdiction, it remains to be seen whether low‐ and middle‐income countries will really have equitable access to the findings of COVID‐19 research. The question of transparency in the sharing of research findings was further debated during the Zika outbreak and resulted in the creation of Zika Open, a platform for the open sharing of papers related to the virus. 33

On 31 January 2020, Wellcome, a foundation dedicated to addressing public health challenges, released a statement encouraging researchers, journals and funders to share COVID‐19 research findings as rapidly and openly as possible, in an attempt to keep the WHO informed of the latest advancements. 34 This statement outlined five recommendations:

  • All peer‐reviewed publications on COVID‐19 are made open access during the pandemic,

  • Research findings are shared with the WHO upon journal submission,

  • Research findings are made available on pre‐print servers with clear statements regarding the limitations of data,

  • Researchers share interim and final research results, together with protocols and standards used to collect the data, as rapidly and widely as possible,

  • Authors understand that data shared ahead of submission will not preclude their publication.

Scientific journals have explicitly stated that, in the context of COVID‐19, they will expedite all editorial steps. As such, articles have sometimes been published in less than 48 hours. 35 Following these five principles, and compared to past outbreaks, data sharing at the basic science level has been remarkable since the beginning of the pandemic. On 11 January, the first full genome sequence of SARS‐CoV‐2 (obtained on 3 January) was shared on a discussion forum, virological.org. By 2 February, de Oliveira and colleagues had developed a software program to classify genomes of SARS‐CoV‐2. 36 The development of reagents for diagnostic tests was also relatively fast (11 January), important progress compared to past health emergencies (e.g., SARS 36 ). The use of pre‐prints has also been widely encouraged*** 37 and has increased exponentially, allowing faster reporting of results. While this is crucial for scientists, the growing use of pre‐prints has had negative consequences on the public's understanding of the pandemic. Indeed, information derived from papers posted on pre‐print servers was reported in the media, often without mention of the studies' limitations, and has contributed to the spread of misleading information. 38

The fourth recommendation outlined in the Wellcome statement, regarding the sharing of clinical findings, has been relatively poorly followed. While interim results of clinical trials have sometimes been shared, 26 , 39 most trials release neither interim results nor protocols. It seems that only the protocols of large, international, highly publicized trials were released (such as REMAP‐CAP and RECOVERY, but interestingly not SOLIDARITY). Sharing interim results also comes with its own set of issues: clinicians and researchers involved in the trial might subconsciously change their behaviour and alter the outcome, as well as patient accrual, adherence and retention.††† 40 , 41 , 42

The reporting of findings at the clinical level is also complicated by the need to accommodate different values and interests. During a health emergency, there is a strong incentive to publish quickly, given the number of knowledge gaps (i.e., the need for novel scientific knowledge) and the lives at stake. Generating evidence to inform international and national decisions in a timely manner is often cited as a core ethical requirement during health emergencies. 43 However, in addition to the ‘publish‐or‐perish’ culture in academia, a health emergency provides strong incentives to publish articles that lapse into sensationalism, sometimes at the cost of quality. 44 At the clinical level, there are also numerous potential financial and academic benefits associated with the commercialization or patenting of therapeutics and vaccines. Journals might likewise have strong incentives to expedite the publication process so as to be recognized as the first to publish a world‐changing paper. The need for fast knowledge reporting is, therefore, sometimes in conflict with reporting accurate information.

While the retraction of scientific papers has always happened, often without sparking public interest, several COVID‐related papers that had been highly publicized in the media have been retracted. As of 16 January 2021, 62 COVID‐19‐related papers had been retracted, and four were the subject of an expression of concern. 45 Six of these retracted papers investigated the role of HCQ as a treatment for COVID‐19. However, the rapidity with which errors have been flagged by the scientific community and rectified by editors is worth noting: whereas a paper published in The Lancet on a possible relationship between the MMR vaccine and autism was retracted 12 years after its publication, questionable papers related to COVID‐19 were retracted within days. 35

The publication of findings during the COVID‐19 pandemic highlights the need to follow the five principles outlined by Smith, Upshur and Emanuel 43 : ensuring scientific accuracy, social value (data must be released and (in)validated by the scientific community), protection of research participants, transparency and accountability on the part of journal editors. Contradictory evidence has been reported, almost in real time, by the media and has affected the public's understanding of the pandemic and trust in science. This has resulted in inappropriate behaviours, such as the panic buying of HCQ, leading to shortages for those who need it, 46 and increased risks associated with self‐medication. 28 , 58 At a time when confusion, uncertainty and fear rule, and when mitigation strategies rely on people's adherence to science‐based guidelines, it is particularly important to communicate scientific findings, and their limitations, in a clear and transparent manner.

4. DISCUSSION

In the context of the COVID‐19 pandemic, research conducted on pharmaceutical treatments, and especially HCQ, has generated low‐quality evidence and inconclusive findings, which have had negative consequences for patients, the public and other ongoing research efforts. Parts one and two of this series have attempted to evaluate the factors that have interfered with the generation of novel scientific knowledge and have demonstrated that such challenges are to be found at each step of the research process. First, a lack of prioritization among research questions and therapeutics has been, in part at least, responsible for the duplication of research efforts and the dispersal of scarce resources. Study designs, aimed at minimizing biases and increasing objectivity, have instead been the subject of fruitless disputes. During the pandemic, it seems that methodological rigour and the notion of design complementarity were somewhat abandoned. These two issues combined have resulted in the generation of fleeting and inconsistent evidence that has been an obstacle to the development of public health guidelines. Finally, the reporting of scientific findings has again highlighted the difficulty of finding a balance between accuracy and speed. Inter‐epidemic efforts have shaped and improved the COVID‐19 research response, especially in terms of expedited ethics approval and the sharing of basic science research. Interestingly, these achievements correspond to the areas on which the research community has focused since the last health emergencies, which should motivate researchers to address the remaining obstacles to the generation of novel scientific knowledge (such as the duplication of research efforts or the sharing of clinical data). The COVID‐19 pandemic will undoubtedly contribute to reshaping the way we think about research during health emergencies and encourage us to approach them as alternating phases of preparation, response and learning instead of disconnected outbreak events.

CONFLICT OF INTEREST

The authors declare no conflict of interest.

AUTHOR CONTRIBUTIONS

Lucie Perillat: conceptualization, investigation, writing – original draft preparation, writing – review and editing. Brian S. Baigrie: conceptualization, writing – review and editing.

ACKNOWLEDGEMENTS

We would like to thank Colin Deinhardt at the E. J. Pratt Library for his advice and help in finding COVID‐19 databases and relevant sources on COVID‐19 and hydroxychloroquine. There was no funding for this study.

Perillat L, Baigrie BS. COVID‐19 and the generation of novel scientific knowledge: Evidence‐based decisions and data sharing. J Eval Clin Pract. 2021;27:708–715. 10.1111/jep.13548

Endnotes

*

The precautionary principle and the precautionary approach are grounded in the belief that decision‐makers have a social responsibility to anticipate harm before it occurs (‘informed prudence’) in order to protect the public from harm, even when the absence of scientific certainty makes it difficult to predict the likelihood of harm occurring, or the level of harm should it occur. The principle itself was formally asserted as Principle 15 at the Rio Conference in 1992: ‘[…] the precautionary approach shall be widely applied by the States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost‐effective measures to prevent environmental degradation.’ 47 (p.3) Given the legal connotations of the term ‘principle’, the Rio Declaration (as quoted above) references a precautionary ‘approach’, which can be read as a relaxation of the term. In this section, we use the term ‘precautionary approach’ in recognition of the ongoing debate as to whether the precautionary principle has in fact achieved the status of a rule of law.

†

Irving and colleagues outline eight concerns about using grading systems (such as GRADE) to inform public health policies: ‘(1) lack of information on validity and reliability, (2) poor concurrent validity, (3) may not account for external validity, (4) may not be inherently logical, (5) susceptibility to subjectivity, (6) complex systems with inadequate instructions, (7) may be biased toward randomized controlled trial (RCT) studies, and (8) may not adequately address the variety of non‐RCTs’. 3 (p.244) Mercuri and Gafni, in a series of papers, evaluate the appropriateness of the GRADE framework by determining whether aspects of the framework are justified based on theoretical and empirical grounds and conclude that there is an absence of such justification. 48 , 49 , 50 In another paper, Mercuri and Baigrie conclude that ‘the GRADE framework should strive to ensure that the whole evidence base is considered when determining confidence in the effect estimate’. 51

‡

Borgerson explains the difference between mechanistic and probabilistic causes as follows: ‘Mechanistic causes are provided by bench research in biochemistry, genetics, physiology, and other basic sciences, and are thought to be especially stable because they hold in all cases (not just selected subpopulations, however carefully or randomly selected). Probabilistic causes establish strength of association between dependent and independent variables in a given population, ideally in repeated studies […]. These causes are often identified through epidemiological research.’ 5 (p.222)

§

For example, learning about the mechanism of action of SARS‐CoV‐2 (i.e., the three‐stage nature of COVID‐19) has undermined results from the HCQ arm of the RECOVERY trial and, possibly, other clinical trials that tested the efficacy of HCQ as a treatment for severely ill patients. Siddiqi and Mehra describe the three stages of COVID‐19 (early infection, pulmonary phase and hyperinflammation phase) and note that the first phase is driven by the virus itself while the last phase is driven by the host response. 52 As such, the authors note that ‘pharmacotherapy targeted against the virus holds the greatest promise when applied early in the course of the illness, but its usefulness in advanced stages may be doubtful. Similarly, use of anti‐inflammatory therapy applied too early may not be necessary and could even provoke viral replication […]’. 52 (p.405) Treating patients with HCQ – a therapy targeted against the virus – is, therefore, not appropriate for severely ill patients (who are in the last, ‘hyperinflammation’ phase of the disease).

¶

In his paper, Worrall examines the claim that RCTs and randomization are more robust than non‐RCT designs from an epistemic perspective. He claims that ‘we are always, quite trivially, at the mercy of the possibility that the two groups are, unbeknown to us, unbalanced in some significant way. And, whatever may be true in the theoretical indefinite long run of endlessly repeated random divisions, for real‐world trials, randomization does exactly nothing to alleviate this worry.’ 6 (p.486)

**

Studies conducted by Geleris and colleagues, 53 Rosenberg and colleagues 54 and Arshad and colleagues 55 are all observational, retrospective cohort studies and are generally considered to have produced good‐quality evidence, or at least evidence of higher quality than that produced by RCTs conducted early in the pandemic (Chen and colleagues 25 and Tang and colleagues 56 ).

††

On the other hand, if independent studies with different designs reach the same conclusion, it is arguable that one is more warranted in believing that conclusion. Indeed, if in‐vitro studies and clinical trials both indicate that a treatment is beneficial, then one should be even more confident in using that treatment (they establish mechanistic and probabilistic causes, respectively).
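
A simple worked example (our own illustration; the prior odds and likelihood ratios below are made‐up numbers, not estimates from any study) shows why concordant, independent lines of evidence compound. If each study is summarized by a likelihood ratio (LR) in favour of efficacy, independence lets the ratios multiply:

```latex
% Sketch (amsmath fragment): posterior odds after two independent,
% concordant studies; all numbers are illustrative assumptions.
\[
O_{\text{post}} \;=\; O_{\text{prior}} \times LR_{\text{mechanistic}} \times LR_{\text{clinical}}
\]
\[
O_{\text{prior}} = \frac{0.2}{0.8} = 0.25, \qquad
LR_{\text{mechanistic}} = LR_{\text{clinical}} = 3
\]
\[
O_{\text{post}} = 0.25 \times 3 \times 3 = 2.25
\quad\Longrightarrow\quad
P(\text{effective}) = \frac{2.25}{1 + 2.25} \approx 0.69
\]
```

Under these assumed numbers, either study alone would yield posterior odds of only 0.25 × 3 = 0.75, that is, a probability of about 0.43; the jump to roughly 0.69 comes precisely from two independent designs probing mechanistic and probabilistic causes respectively.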

‡‡

In the WHO Handbook for Guidelines Development, section 1.7.4 describes the changes being made to the guideline development process during an emergency. 14 These modifications include the use of rapid reviews and rapid advice guidelines (which ought to be developed in less than 3 months). The authors emphasize the need for stakeholders to make the guideline development process transparent: ‘Emergency (rapid response) guidelines – Public health emergencies may necessitate a response from WHO within hours to days. Hence, many of the guideline development processes and methods outlined in this handbook are not applicable. WHO staff will need to quickly identify relevant existing guidelines produced by WHO or other entities or may need to issue recommendations based on expert opinion only […]. It is important that the decision‐making process be documented and that the rationale for each recommendation be stated, even if it is based on indirect or very limited evidence or on expert opinion’. 14 (p.8)

§§

Petticrew and Roberts refer to ‘methodological appropriateness’ or, in other words, the emphasis on ‘typologies rather than hierarchies of evidence’. 57 (p.527) They argue that there is a ‘need to match research questions to specific types of research’. 57 (p.527) Parkhurst and Abeysinghe 29 argue in favour of what they call ‘evidence appropriateness’, which is an alternative to ‘methodological appropriateness’. They argue that ‘rather than adhering to a single hierarchy of evidence to judge what constitutes “good” evidence for policy, it is more useful to examine evidence through the lens of appropriateness. The form of evidence, the determination of relevant categories and variables, and the weight given to any piece of evidence, must suit the policy needs at hand’. 29 (p.665)

¶¶

Section IV(E) ‘Data Sharing During Public Health Emergencies: Histories and Precedents’ of the report by Abramowitz and colleagues 31 describes the events that took place in Indonesia during the H5N1 epidemic.

***

Global Research Collaboration for Infectious Disease Preparedness (GloPID‐R) released a roadmap outlining recommendations for sharing scientific data during public health emergencies. 37 To encourage the use of pre‐prints, the authors recommend that we should ‘align funding policies to ensure that data sets and pre‐publications are all included within assessment of researcher outputs (in accordance with the San Francisco Declaration)’. 37 (pp.28,29) A study by Nabavi Nouri and colleagues suggests that there has been ‘a dramatic increase in the presence and importance of preprint publications’ 38 (p.1): between the beginning of the pandemic and 7 September 2020, 8468 pre‐prints were published on medRxiv and bioRxiv. 38 (p.3)

†††

The FDA guidance document on adaptive trials (2019) warns the reader that ‘knowledge of accumulating data by trial investigators can adversely affect patient accrual, adherence, retention, or endpoint assessment, compromising the ability of the trial to reliably achieve its objective in a timely manner’. 40 (p.24)

DATA AVAILABILITY STATEMENT

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
