International Journal of Medical Sciences. 2018 Sep 7;15(12):1397–1405. doi: 10.7150/ijms.25869

The Emperor's New Clothes: a Critical Appraisal of Evidence-based Medicine

Giovanni D Tebala
PMCID: PMC6158662  PMID: 30275768

Abstract

Evidence-Based Medicine (EBM) is the way we are expected to deliver healthcare in the 21st century. It has been described as the integration of information from the best available evidence with the doctor's experience and the patient's point of view. Unfortunately, the original meaning of EBM has been lost and the worldwide medical community has shifted the paradigm towards Guidelines-Based Medicine, which has displaced the doctor and the patient from the decision-making process and relegated them to mere executor and final target of decisions taken by someone else. Problems related to the reliability of evidence and to the way guidelines are constructed, implemented and followed are discussed in detail. It is mandatory that the whole medical community takes responsibility and tries to reverse this apparently inexorable process, so as to re-establish proper evidence-based care, in which patients and their healing relationship with practitioners are at the centre and doctors are able to critically evaluate the available evidence and use it in the light of their personal experience and knowledge.

Keywords: evidence-based medicine, guidelines, healthcare

Introduction

Many years ago there was an Emperor whose main interest was wearing new clothes, so much that he spent all his money on fine dresses. He loved attending social events just to show off his new clothes. He had different attire for every hour of the day. One day, two swindlers came to the Emperor's door and pretended to be weavers, able to make the finest cloth imaginable. The material they used had beautiful colours and patterns but in addition it had the extraordinary property of being invisible to anyone who was stupid or incompetent. Of course the Emperor was utterly interested. He thought: “It would be wonderful to have clothes made from that cloth. I would know which of my men are unfit for their role and I would also be able to tell clever people from stupid ones”. So he immediately hired the swindlers and gave them a great sum of money, along with silk and gold, to weave their cloth for him.

They set up their looms and pretended to work, often late into the night, but nothing at all could be seen on the looms. The Emperor was curious to know how they were coming along with their cloth, but he was also a bit uneasy when he recalled that anyone who was stupid or unfit for his position would not be able to see the fabric. He still decided to send someone to see how the work was progressing. The choice fell upon the old prime minister, as the Emperor thought he was very experienced and clever and most definitely worthy of the position he had held for so many years.

The Minister went to the weavers but he could not see anything on the looms. Fearing he would be considered stupid or unfit for his role, he reported back to the Emperor how magnificent the dress was, how wonderful its colours and how amazing the material! The same happened with the other officers and generals sent to inspect the weavers' work. Every one of them had words of wonder and astonishment for the Emperor's new clothes.

The Emperor decided to wear his new clothes for the procession to take place the very next day. Early in the morning, the swindlers finally announced that the Emperor's new clothes were ready. The Emperor came to them with all his court. Nobody could see anything - for nothing was there - but fearing to be considered stupid they all nodded approvingly when the Emperor removed his old clothes and pretended to put on the new ones, which were… nothing! He was completely naked when he came out of the dressing room. The Emperor himself was surprised when he saw himself completely naked in the mirror and thought: “Am I stupid, since I cannot see anything?”. All the same, he came out to start the procession under his canopy, pretending to wear magnificent new clothes. Obviously, none of the people gathered for the procession could see any dress, but they all kept singing praises for the Emperor's new clothes. Only a small child shouted: “The Emperor is naked!” The word spread quickly and the crowd seemed suddenly aware of the truth, but His Majesty decided that the procession had to continue anyway and carried himself even more proudly under the canopy. (Hans Christian Andersen)

The story above, by the Danish writer Hans Christian Andersen, can be read as a parody of the way we practice Medicine at the present time.

If we simply replace the names of the characters, a worrying picture emerges. The Emperor is our healthcare, the way we treat our patients. His new clothes are what we consider modern Evidence-Based Medicine (EBM). The ministers and knights - and the crowd gathered for the procession - are those who pretend to practice the best up-to-date medicine. The innocent child represents those who blow the whistle on a potentially failing system.

EBM is generally defined as “the process of systematically finding, appraising and using contemporaneous research findings as the basis for clinical decisions” 1. Clearly, this is not a new process, as medicine has always been based on some form of “evidence”. The element of originality of EBM is the critical evaluation of the reliability and significance of evidence before it is applied in clinical practice 2. However, EBM is not simply the mechanical application of good research findings to patient care; it is supposed to maintain the humanistic core of the art of Medicine by integrating evidence with the experience of the practitioner and the expectations of the patient 3.

As already pointed out by some authoritative scholars 4, one of the biggest innovations in healthcare has drifted away from its original meaning and has started showing its “dark side”: risks to patient safety, reduced standards of care and poor medical education. Barriers to the implementation of true EBM have been identified; some of them may be related to a lack of the knowledge and skills crucial for the correct interpretation of evidence in the decision-making process 5, 6.

This critical appraisal is meant to raise concerns about the way we practice medicine and the possible risks associated with our peculiar version of EBM.

Literature Search and Limitations

The PubMed database (www.ncbi.nlm.nih.gov/pubmed) was systematically searched from 1946 to 2016 using “evidence-based medicine” (Fig. 1) and “guideline OR guidelines” (Fig. 2) as search queries. Titles and abstracts (where available) of the items retrieved were reviewed. The most significant articles in terms of support for or criticism of EBM were analysed in full. Significant references from the selected articles were reviewed as well.
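As a purely illustrative sketch, not part of the original methodology, per-year PubMed counts of the kind summarised in Figures 1 and 2 could be reproduced with the NCBI E-utilities, for example through Biopython's Entrez module. The query strings mirror those above, while the e-mail address and date settings are assumptions.

# Hedged sketch: count PubMed records per year for a given query.
from Bio import Entrez

Entrez.email = "you@example.org"  # hypothetical address; NCBI requires one

def yearly_count(term: str, year: int) -> int:
    """Return the number of PubMed records matching `term` published in `year`."""
    handle = Entrez.esearch(db="pubmed", term=term, datetype="pdat",
                            mindate=str(year), maxdate=str(year), retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

ebm_counts = {y: yearly_count('"evidence-based medicine"', y)
              for y in range(1946, 2017)}
guideline_counts = {y: yearly_count('guideline OR guidelines', y)
                    for y in range(1946, 2017)}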

Fig 1. Articles listed on PubMed containing the words “evidence based medicine” (1946-2016)

Fig 2. Articles listed on PubMed containing the words “guideline” or “guidelines” (1946-2016)

This is not meant to be a systematic review of EBM, but a commentary on some unclear or allegedly unsafe aspects of it, intended to raise awareness of and concerns about the way EBM is practiced in the current environment; for this reason, a PRISMA statement flowchart has not been provided.

Historical Background

Authority-Based Medicine. Early Medicine was based on the authority of the Master, be it Aristotle, Hippocrates or any other cultural leader of the ancient societies. The Latin phrase “ipse dixit” was written under the sentences of the Master, signifying that they could not be challenged or refuted. Medicine was studied only in books and hardly any experiment was considered acceptable. No progress was possible and the practitioner was just a passive executor of the decisions of the Master.

Experience-Based Medicine. Gradually, free thinking, human autonomy and progress moved the decision-making responsibilities towards the practitioner. He - extremely rarely “she”, as women were still precluded from practicing medicine - gathered his skills and knowledge from his own experience and ideas, but also from the words of the old Masters, and improved through personal audits and self-reflection. This was typical of the Renaissance, when Medicine showed great improvements; unfortunately, it was not standardized and the results depended largely on the skills of the individual practitioner.

Evidence-Based Medicine. The introduction of the scientific method in medicine and the spread of academia and research - including the birth of statistics - favoured the gradual shift towards EBM. The decision-making process was no longer based on the experience of the individual practitioner, but started to follow the results of specific clinical trials and basic research. The phrase “evidence-based medicine” appeared in the literature only in the early '80s. Since then, its frequency in published articles has been increasing significantly (Fig. 1). The first clear definition of EBM is that of the late Prof Sackett: “Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research” 3. Clearly, EBM is not seen only as the result of research outcomes; it introduces two crucial concepts: (a) clinical decisions should be based on research evidence as well as on personal experience and the individual patient's expectations, and (b) the use of literature evidence must be “conscientious” and “judicious”, meaning that a critical evaluation - not passive acceptance - of research outcomes is paramount. In the early '80s several articles were published by the Department of Clinical Epidemiology of McMaster University, aimed at “teaching” the art of reading a scientific journal 7.

Rosenberg, in 1995, once again put the emphasis on the need to appraise research findings before using them for clinical decisions 1. Similarly, Straus et al. in 2007 strongly advocated that the best available evidence must be used in the light of the patient's “values and circumstances” 8. The question of “values” is of the utmost importance: they act as the lenses through which we evaluate a clinical dilemma and connect the strict statistical methodology to the humanistic doctor-patient relationship 9.

The pivot of the decision-making process is again the practitioner, who collects, evaluates and interprets the literature evidence (from basic research to meta-analyses and textbooks) in light of his or her own experience, after proper audits and appraisals, and discusses it with the patient to reach a shared decision. Integration is the keyword.

Guidelines-Based Medicine. In the last few years, literature evidence has been increasingly collected into critical summaries, which constitute the “guidelines” for a specific clinical situation. Guidelines are meant to represent a general reference “to assist practitioner's and patient's decisions about appropriate healthcare for specific clinical circumstances” 10. Undoubtedly, they are valuable instruments to deliver good evidence-based healthcare, minimize variation and reduce costs, but unfortunately they have quickly become an unchallengeable, almost “divine” truth. Due to their continuously increasing number (Fig. 2), virtually covering every aspect of medicine, guidelines are progressively restricting the “freedom” of doctors and healthcare staff. Nowadays, guidelines are at the centre of our practice, and doctors are gradually becoming mere passive executors of someone else's decisions. In one of the ever-recurring cycles of human history, modern healthcare is dangerously heading back towards “Authority-Based Medicine”.

The Problems

Reliability of evidence

According to the Oxford Centre for Evidence-Based Medicine, the best available evidence is derived from systematic reviews and meta-analyses of randomised controlled trials (RCTs) 11. In practice, finding the best evidence for EBM means tracking down, through an accurate and thorough search of the literature, the evidence best able to answer the specific clinical question 3. Even an “expert” opinion must be considered evidence, albeit of low level.

Due to their - theoretically - well-controlled design, RCTs should be able to give clear, definitive and reliable responses to clinical questions 12, 13. A good-quality RCT must fulfil at least the following criteria: (a) the clinical question must be clearly stated, (b) the statistical methods must be accurately chosen, (c) the target sample must be carefully selected, (d) the randomisation must happen in a clear, unbiased and blinded way, (e) the collection of data must be rigorous and thorough, (f) the analysis of data must be blinded and statistically correct, (g) the evaluation of the results must be unbiased, (h) the conclusions of the work must be a direct consequence of the statistical analysis and no room should be allowed for personal beliefs and unsupported opinions.

A good-quality RCT is difficult to conceive and to perform, expensive and time-consuming, and - sometimes - unnecessary.

In fact, RCTs are not considered essential for Group 2 interventions, where there is abundant non-experimental evidence of the validity of the procedure or medication. In some cases, denying the known benefit of a procedure to a group of subjects selected by randomisation can be considered unethical 14. Such is the case, for instance, with laparoscopic cholecystectomy, which was introduced into clinical practice without any high-quality study, as its advantages for patients were so evident compared with traditional open cholecystectomy that an RCT was considered superfluous and unethical 15. Similarly, several years ago thalidomide was withdrawn from the market - or its use was at least significantly restricted - only on the basis of single case reports of side effects, that is, Level of Evidence 3 and Grade of Recommendation C 16.

Sometimes, even more worryingly, the obsessive pursuit of statistical significance has led to unethical consequences 17. The first RCT of thrombolytic treatment for acute myocardial infarction, performed in the late '50s with 23 patients enrolled, clearly showed a 50% reduction in the risk of dying. Unfortunately, the confidence interval was too wide and the study did not reach statistical significance. Two other RCTs performed in the '60s (214 patients) gave similar results, but again statistical significance was not reached. By the early '70s, when a total of 2544 subjects had been enrolled in RCTs, it was clear that thrombolysis conferred significant clinical advantages to patients with acute coronary syndrome, yet there was no clear statistical confirmation, and still in the late '80s thrombolysis was not even considered by experts and textbooks. For it to be accepted as the treatment of choice, more than 48,000 patients had to be enrolled in RCTs. Assuming that all the studies had a good randomisation, around 24,000 patients were denied a known effective treatment, and some of them died unnecessarily only to fulfil statistical criteria 17, 18.
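To see how a clinically large effect can fail to reach significance in a sample of this size, consider the following worked sketch. The counts are hypothetical (the original trial data are not reported here) and are chosen only to give roughly a 50% relative risk reduction in a trial of about 23 patients.

# Hypothetical 2x2 table for a ~23-patient trial with ~50% relative risk reduction
import math
from scipy.stats import fisher_exact

deaths_treated, n_treated = 2, 11   # assumed thrombolysis arm
deaths_control, n_control = 4, 12   # assumed control arm

rr = (deaths_treated / n_treated) / (deaths_control / n_control)

# 95% confidence interval for the relative risk (log-RR normal approximation)
se_log_rr = math.sqrt(1/deaths_treated - 1/n_treated + 1/deaths_control - 1/n_control)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

_, p = fisher_exact([[deaths_treated, n_treated - deaths_treated],
                     [deaths_control, n_control - deaths_control]])

print(f"RR = {rr:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}, p = {p:.2f}")
# The point estimate suggests a ~45% reduction in the risk of dying, but the
# confidence interval spans 1 and the p-value is far above 0.05: the trial is
# simply too small to "prove" an effect of this size.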

RCTs are highly susceptible to bias, and this adds uncertainty to the most “scientific” studies. The study design can be poor (statistical bias), or there can be systematic differences between groups (bad randomisation - selection bias), or between the care the patients receive (performance bias). The outcome can be determined differently in the groups (detection bias) or the experimental and control groups can get mixed (contamination bias).

A particular type of bias is the “conflict of interest bias”. Undoubtedly, the growing cost of science and research can hardly be supported by governments and taxpayers alone, so the help of industry and the private sector is of the utmost importance. The most influential RCTs are actually run or sponsored by industry, which has the money, the means, the knowledge and the structure (also in terms of manpower) to design and conduct large trials and to publish their results in influential journals. About 80% of US clinical trials registered with ClinicalTrials.gov are sponsored by industry, whereas only about 20% are funded by the National Institutes of Health 19. It has recently been highlighted that industry-funded trials yield positive results in 96.5% of cases, with an odds ratio of 2.8 compared with government-funded studies 20. With the growing emphasis on evidence-based treatments, it is obvious that the pharmaceutical industry takes into high consideration statistical data - whether reliable or not - published in high-impact journals 21.

Can we speak of “Marketing-Based Medicine” 22? Even assuming that the trials are conducted rigorously and the analysis of data is statistically correct, it comes as no surprise that companies may sometimes decide to suppress negative data, cherry-picking results that optimise their ability to sell their products 22. Negative results, as relevant and worthy of publication as positive ones, may be hidden, and studies with positive results (that is to say, mainly industry-sponsored studies) are more likely to be published in high-impact journals 23. Therefore, medicine based on such evidence is likely to be less effective, if not unsafe.

It has even been proposed to modify the classification of evidence so that biased evidence is clearly downgraded 21.

Furthermore, it has been claimed that even well-conducted RCTs may not be able to take into account all the possible variables, as there is still much that is unknown in human pathophysiology. It is therefore possible that a statistically significant correlation does not actually represent a causal effect. It has been proposed that a statistical correlation not supported by a clear causative pattern should be considered only a numeric effect with no clinical significance 24. However, this approach can also be criticized, as our medical knowledge is not always totally reliable 25.

Despite the risk of bias, the RCT remains the most reliable research design, provided that the sample size is large enough to allow statistically significant conclusions. If several small RCTs are not able to give definitive answers because of their individually low statistical power, they can be combined through a statistical procedure called meta-analysis. It consists of a systematic review of the different studies in order to virtually gather all the cases together in a single pool and perform statistical tests on the pooled population. The main difference between a “simple” systematic review and a meta-analysis is that the former is just a collective interpretation of the available studies, whereas the latter allows a proper statistical analysis, with a formal test of the null hypothesis (p-value).
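As a purely illustrative sketch of the pooling step described above (the study figures are invented, not taken from any real meta-analysis), a fixed-effect, inverse-variance combination of per-study log relative risks with a test of the null hypothesis can be written as follows.

# Fixed-effect (inverse-variance) pooling of hypothetical study results
import math

log_rr = [-0.69, -0.41, -0.92, -0.22]   # per-study log relative risks (assumed)
se     = [ 0.55,  0.38,  0.60,  0.30]   # per-study standard errors (assumed)

weights = [1 / s**2 for s in se]                       # inverse-variance weights
pooled = sum(w * x for w, x in zip(weights, log_rr)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

z = pooled / pooled_se
p = math.erfc(abs(z) / math.sqrt(2))    # two-sided p-value for "no treatment effect"

print(f"pooled RR = {math.exp(pooled):.2f}, z = {z:.2f}, p = {p:.3f}")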

Meta-analyses are powerful studies that constitute the basis of our guidelines as, theoretically, they yield the most statistically significant results.

Performing a meta-analysis is reasonably quick and methodologically quite easy (specifically designed software is available online), yet the final conclusions are usually so important for clinical practice that such studies very often get published easily in influential, high-impact journals. The concern has already been raised that a growing number of researchers are nowadays devoting themselves only to meta-analyses of someone else's data, to improve their academic parameters (impact factor, h-index, number of publications…) and gain easy access to funds and career opportunities without committing themselves to the difficulties, expense and hard work associated with RCTs and other clinical studies 26. In fact, we are witnessing an “epidemic” of meta-analyses, with a 2600% increase in 20 years, whereas research studies increased by only 50% over the same period 27.

The risks of meta-analysis had already been emphasized almost 25 years ago by Eysenck 28 who summarized as follows: (a) wrong estimate of statistical effect, (b) excessive heterogeneity among studies, (c) unclear quality of included studies, (d) unclear effects of grouping. He concluded that “if a medical treatment has an effect so recondite and obscure as to require a meta-analysis to establish it, [he] would not be happy to have it used on [him]” 28.

Unfortunately, heterogeneity is not always clearly specified in published meta-analyses. It may involve different study designs, different criteria of recruitment, different ways to estimate the effect of the treatment and so on, and can negatively affect the results 29.

A meta-analysis with low heterogeneity requires us to be highly selective in the inclusion of studies, but if we aim for a large meta-analysis with a high number of patients, we necessarily need to include a high number of studies, thus increasing heterogeneity.

The higher the heterogeneity, the more difficult it is to calculate the linear correlation between cause and effect. Moreover, the higher the number of studies included, the more difficult it is to correct for the numerous covariates 28.
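Heterogeneity of this kind is commonly quantified with Cochran's Q and the I² statistic. The following sketch, reusing the hypothetical study figures from the pooling example above, shows how much of the between-study variability exceeds what chance alone would explain.

# Cochran's Q and I^2 for the same hypothetical studies as above
import math

log_rr = [-0.69, -0.41, -0.92, -0.22]
se     = [ 0.55,  0.38,  0.60,  0.30]

w = [1 / s**2 for s in se]
pooled = sum(wi * x for wi, x in zip(w, log_rr)) / sum(w)

q = sum(wi * (x - pooled)**2 for wi, x in zip(w, log_rr))    # Cochran's Q
df = len(log_rr) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0          # I^2 as a percentage

print(f"Q = {q:.2f} on {df} degrees of freedom, I^2 = {i2:.0f}%")
# A high I^2 means the included studies disagree more than sampling error alone
# would explain, and a single pooled estimate should be interpreted with caution.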

If loose inclusion criteria increase heterogeneity and reduce the statistical significance of a meta-analysis, overly strict criteria may instead bias its results.

Inclusion of only published results may lead to biased conclusions 30, often overestimating the pooled treatment effect 31. However, it has been suggested that unpublished or not formally published studies - “grey trials” - are more likely to be of low quality, so they should be identified and excluded 32.

If further restrictions apply and only articles in English are included, 3% of meta-analyses would give results different from those that would have been obtained had no linguistic restriction been applied 33. In a comprehensive study of 303 meta-analyses, Juni et al. confirmed that excluding non-English-language trials has a small but significant effect on outcome estimates 34. In the specific field of perioperative transfusions, Fergusson et al. could not show that this “inclusion bias”, although present, affected the results of ten published meta-analyses, but the authors doubted that their findings could be extended to other clinical settings 35.

If, by definition, RCTs are to be considered the best research design and meta-analyses are meant only to improve the statistical power of RCTs, one would expect that a large well conducted RCT and a meta-analysis of several small RCTs would yield the same results. Unfortunately, this is not true in 10-23% of cases 36.

Therefore, although they are very powerful instruments in the hands of practitioners and policy-makers, meta-analyses cannot be completely trusted as regards their clinical significance 37.

Unfortunately, guidelines are still based mainly on systematic reviews and meta-analyses. Although new technology has definitely improved the way information is collected and disseminated, it poses new challenges and biases that can further decrease the reliability of the evidence used 38.

Reliability of guidelines

Guidelines have gained a crucial role in the way we practice medicine (Fig. 2). Despite guidelines being extremely useful in standardizing healthcare interventions and reducing costs, their mechanical application sadly has many untoward consequences, which many people nowadays still fail to acknowledge and appreciate.

How are guidelines conceived?

A group of experts, either self-appointed or appointed by an authoritative body, forms a task-and-finish panel which collects and reviews the best available literature evidence on a specific topic. This part of the process may take months. The selected evidence often consists of recent RCTs and meta-analyses, but meta-analyses can be based on clinical studies several years old, so their results can be outdated. Furthermore, as already discussed, they can be highly biased (selection bias, statistical bias, publication bias, linguistic bias, commercial bias and so on). The panel meets again to analyse and discuss the evidence. Sometimes, to overcome logistical difficulties, the topic is divided into several sub-topics to be analysed by single members of the panel or by subgroups, who subsequently share their individual conclusions with the other members by letter, email, telephone or at a further meeting (consensus conference). The analysis of available evidence is obviously a subjective process, even though strict criteria can be applied, and a further risk of bias is introduced by the personal views, experiences and interests of the members of the panel. Finally, the members of the panel reach their conclusions and draft the manuscript with the guidelines. The whole process from the first meeting of the panel to the publication of the guidelines takes months if not years, and a further five years, on average, are necessary for the guidelines to be incorporated into clinical practice 39.

The resulting guidelines can therefore be (a) late, as they are based on “old” evidence, and (b) biased, as the whole process is imperfect. It has been demonstrated that guidelines may sometimes not be in the best interest of patients 40.

Just as an example, the NICE guidelines on the treatment of colorectal cancer 41 were published in November 2011 and updated in December 2014. An analysis by year of publication of the references used for those guidelines is shown in Figure 3. It demonstrates clearly that the vast majority of references were published between 2000 and 2010 (median 2006, mean 2005). This means that in 2018 in the UK we are supposed to treat our colorectal cancer patients on the basis of largely outdated evidence (on average, 12 years old). Can we still consider it “best practice”? Fortunately, the Association of Coloproctology of Great Britain and Ireland (ACPGBI) has recently published updated and thorough guidelines on the same topic 42, but unless the ACPGBI starts working immediately on new guidelines to be published within 3-4 years, these too will soon become obsolete.

Fig 3. Year distribution of the references of the NICE Guideline - Colorectal Cancer: Diagnosis and Management of Colorectal Cancer (full guideline)

As regards the possible biases, they can involve any of the steps leading from evidence to guidelines. We have already discussed the possible biases of RCTs and meta-analyses; the same kinds of bias can affect the development of guidelines, which can be based on misleading evidence, or whose selection of evidence can be affected by the personal views and interests of the members of the panel. Just as an example, the current UK guidelines on the management of gastro-esophageal reflux disease suggest proton-pump inhibitors (PPIs) as the almost exclusive treatment of patients with Barrett's esophagus. Nevertheless, it has been demonstrated that the transformation of the esophageal mucosa into Barrett's and the further dysplastic evolution of the mucosa leading to adenocarcinoma are much more frequent in the case of non-acid reflux 40. Obviously, PPIs are not effective against non-acid reflux, and there is good evidence demonstrating that their effect can be highly detrimental 43. So why do current guidelines keep suggesting PPIs? Would it not be preferable to consider surgery more often, given that a medical treatment for non-acid reflux is not yet available? This would probably reduce the risk of Barrett's and esophageal cancer 40.

By their own nature, guidelines are “rigid”: they are not specific to the single patient but are targeted to the “average” patient in that particular clinical situation, and may not be reliable for each individual. Applying the guidelines to every single patient without a modicum of insight and clinical judgment is like using the bed of Procrustes. According to the ancient Greek myth, the bandit Procrustes lived in a forest and abducted anyone foolish enough to pass through it, binding him or her to his bed. Those who were too short were stretched and those who were too tall had their limbs amputated. Clearly, nobody survived this treatment.

Are we treating our patients with a Procrustean method?

Clearly, as doctors we have a duty to act in the best interest of our patients even when this conflicts with the official guidelines 44, but sometimes the sense of “legal” protection they provide is too strong to allow us freedom of decision.

This awful approach is detrimental not only to our patients but also to the medical community. Using guidelines blindly would reduce our capacity to think and take decisions. Ultimately, it risks undermining our role as “cultural leaders”, as we would become mere executors of someone else's clinical decisions 45. If world healthcare continues on this path, anyone would be able to treat a sick person, with no need for “qualified” and “experienced” doctors. Why spend years studying anatomy, physiology, pharmacology and so on, when our only job is to open the book of guidelines (or browse the relevant websites) and uncritically apply what has already been written? The role of Universities and Medical Schools would change. Why teach the complex interactions between human cells, tissues and active drugs? In the future, machines and computers would do our job.

In fact, we are already witnessing a slow but inexorable change in the medical profession, where more and more non-medically qualified figures are taking over a progressively growing area of healthcare previously within the remit of fully qualified doctors.

Rigid guidelines are already severely impacting progress and improvement. According to Darwin, evolution lies in diversity and adaptation, and progress means going beyond the rigid schemes of guidelines to explore new opportunities. None of this is possible if we keep following guidelines rigidly.

One must admit, however, that overcoming guidelines and protocols and thinking laterally carries obvious risks to patient safety. In the wider picture, this attitude may be seen as undermining the system and can hardly be acceptable to those who, on the contrary, are supposed to safeguard its stability. Generally speaking, strong systems develop guidelines - not just in medicine - to maintain themselves and provide stability.

However, this risk is not enough to justify halting progress and clipping science's wings.

The Proposal

According to Prof Ioannidis 4, EBM has been hijacked and transformed into guidelines-based medicine. It has clearly shown its limitations in terms of negative impact on the doctor-patient relationship, disregard of patients' values and possible conflicts of interest 46. In light of what has been discussed in the previous sections, this is no longer acceptable.

Our duty is to bring the patient and the practitioner back to the centre of the decision-making process in medicine, so that clinical choices can rest on three legs: the evidence, the personal experience of the doctor and the expectations of the patient.

If EBM is the “…judicious use of …evidence” 3, it is implied that a form of judgment is necessary 47. For this reason, we must be able to critically evaluate the available literature and must teach our students and junior doctors to do the same. The skills needed to select the best available evidence for each clinical scenario must be a central part of the medical school curriculum.

By its nature, EBM regards disease at a population level, with minimal consideration of the role of the individual. As one size does not fit all, it has been suggested that a “precision medicine” approach be implemented, tailoring our healthcare interventions to the single patient rather than the average one. Clearly, this poses several challenges in terms of education, investigation, knowledge, and the sharing and interpretation of multilevel data - from cells and microbiome to environment and lifestyle - for a large number of individuals, in order to set up detailed guidelines which focus on individuals, including those who would be outliers to the usual EBM guidelines. This new evidence-based precision medicine may require considerable information-technology capacity, defined as “clinical bioinformatics”, and new policies for sensible data collection and sharing 48. In fact, far from being mutually exclusive, the two opposite approaches - EBM and precision “mechanistic” medicine - are actually fully complementary. Therefore, every effort should be made to include the two approaches in a unified pluralistic model 49. This is an interesting challenge for the future, but at the moment we feel we should go back to the original definition of EBM as a bridge between literature evidence, the practitioner's experience and the patient's values.

Moreover, we must encourage our students and junior doctors to think laterally, exploring new pathways and new opportunities and going well beyond the rigidity of already reported data and acquired knowledge. Clearly, this requires an extraordinary effort to preserve and guarantee the safety of our patients, but we are convinced that a modern, rational, patient-centered and forward-looking healthcare can only improve our clinical outcomes, provided that ethics goes side-by-side with progress and innovation.

In a typical example of recurring historical cycles, the Hippocratic Oath should be refreshed as a constant reminder of our duties towards our patients and our colleagues.

Acknowledgments

The Author would like to thank Ms Jennifer Connell for her valuable suggestions and for thoroughly reviewing the manuscript.

References

1. Rosenberg W, Donald A. Evidence based medicine: an approach to clinical problem-solving. BMJ. 1995;310:1122–1126. doi: 10.1136/bmj.310.6987.1122.
2. Straus SE, McAlister FA. Evidence-based medicine: a commentary on common criticisms. Can Med Assoc J. 2000;163:837–838.
3. Sackett D. Evidence-Based Medicine: what it is and what it isn't. BMJ. 1996;312:71–72. doi: 10.1136/bmj.312.7023.71.
4. Ioannidis JP. Evidence-Based Medicine has been hijacked: a report to David Sackett. J Clin Epidemiol. 2016;73:82–86. doi: 10.1016/j.jclinepi.2016.02.012.
5. Sadeghi-Bazargani H, Tabrizi JS, Azami-Aghdash S. Barriers to evidence-based medicine: a systematic review. J Eval Clin Pract. 2014. doi: 10.1111/jep.12222.
6. Ghojazadeh M, Azami-Agdash S, Azar FP, Fardid M, Mohseni M, Tahamtani T. A systematic review on barriers, facilities, knowledge and attitude toward evidence-based medicine in Iran. J Anal Res Clin Med. 2015;3:1–11.
7. Sackett D. How to read clinical journals. I. Why to read them and how to start reading them critically. Can Med Assoc J. 1981;124:555–558.
8. Straus S, Haynes B, Glasziou P, Dickersin K, Guyatt G. Misunderstandings, misperceptions and mistakes. Evid Based Med. 2007;12:2–3. doi: 10.1136/ebm.12.1.2-a.
9. Kelly MP, Heath I, Howick J, Greenhalgh T. The importance of values in evidence-based medicine. BMC Medical Ethics. 2015;16:69. doi: 10.1186/s12910-015-0063-3.
10. Field MJ, Lohr KN. Clinical Practice Guidelines: Directions for a New Program. Institute of Medicine (US) Committee to Advise the Public Health Service on Clinical Practice Guidelines. Washington (DC): National Academies Press (US); 1990. https://www.ncbi.nlm.nih.gov/books/NBK235751/
11. Oxford Centre for Evidence-Based Medicine. http://www.cebm.net/blog/2009/06/11/oxford-centre-evidence-based-medicine-levels-evidence-march-2009/
12. Tukey JW. Some thoughts on clinical trials, especially problems of multiplicity. Science. 1977;198:679–684. doi: 10.1126/science.333584.
13. Gore SM. Assessing clinical trials - first steps. BMJ. 1981;282:1605–1607. doi: 10.1136/bmj.282.6276.1605.
14. Sackett DL, Rosenberg WM. The need for evidence-based medicine. J R Soc Med. 1995;88:620–624. doi: 10.1177/014107689508801105.
15. Neugebauer E, Troidl H, Spangenberger W, Dietrich A, Lefering R. Conventional versus laparoscopic cholecystectomy and the randomized controlled trial. Cholecystectomy Study Group. Br J Surg. 1991;78:150–154. doi: 10.1002/bjs.1800780207.
16. Canadian Medical Association (editorial). Thalidomide and congenital malformations. Canad Med Ass J. 1962;86:462–463.
17. Guyatt G. Evidence-Based Medicine: past, present and future. MUMJ. 2003;1:27–32.
18. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA. 1992;268:240–248.
19. Ehrhardt S, Appel LJ, Meinert CL. Trends in National Institutes of Health funding for clinical trials registered in ClinicalTrials.gov. JAMA. 2015;314:2566–2567. doi: 10.1001/jama.2015.12206.
20. Flacco ME, Manzoli L, Boccia S, Capasso L, Aleksovska K, Rosso A, Scaioli G, De Vito C, Siliquini R, Villari P, Ioannidis JP. Head-to-head randomized trials are mostly industry sponsored and almost always favor the industry sponsor. J Clin Epidemiol. 2015;68:811–820. doi: 10.1016/j.jclinepi.2014.12.016.
21. Every-Palmer S, Howick J. How evidence-based medicine is failing due to biased trials and selective publication. J Eval Clin Pract. 2014;20. doi: 10.1111/jep.12147. https://onlinelibrary.wiley.com/doi/full/10.1111/jep.12147
22. Spielmans GI, Parry PI. From evidence-based medicine to marketing-based medicine: evidence from internal industry documents. J Bioeth Inquiry. 2010. doi: 10.1007/s11673-010-9208-8.
23. Dickersin K, Min YI. Publication bias: the problem won't go away. Ann N Y Acad Sci. 1993;703:145–176. doi: 10.1111/j.1749-6632.1993.tb26343.x.
24. Clarke B, Gillies D, Illari P, Russo F, Williamson J. The evidence that evidence-based medicine omits. Prevent Med. 2013;57:745–747. doi: 10.1016/j.ypmed.2012.10.020.
25. Rocca E. The judgements that evidence-based medicine adopts. J Eval Clin Pract. 2018. doi: 10.1111/jep.12994. [Epub ahead of print]
26. Tebala GD. What is the future of biomedical research? Med Hypoth. 2015;85:488–490. doi: 10.1016/j.mehy.2015.07.003.
27. Mayor S. Five minutes with John Ioannidis. BMJ. 2016;354:i5184. doi: 10.1136/bmj.i5184.
28. Eysenck HJ. Meta-analysis and its problems. BMJ. 1994;309:789–792.
29. Althuis MD, Weed DL, Frankenfeld CL. Evidence-based mapping of design heterogeneity prior to meta-analysis: a systematic review and evidence synthesis. Syst Rev. 2014;3:80. doi: 10.1186/2046-4053-3-80.
30. Williams DD, Garner J. The case against “the evidence”: a different perspective on evidence-based medicine. Br J Psych. 2002;180:8–12. doi: 10.1192/bjp.180.1.8.
31. Schmucker CM, Blumle A, Schell LK, Schwarzer G, Oeller P, Cabrera L, von Elm E, Briel M, Meerpohl JJ; OPEN consortium. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research. PLoS One. 2017;12:e0176210. doi: 10.1371/journal.pone.0176210. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5404772/pdf/pone.0176210.pdf
32. Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev. 2007;(2):MR000010. doi: 10.1002/14651858.MR000010.pub3.
33. Gregoire G, Derderian F, LeLorier J. Selecting the language of the publications included in a meta-analysis: is there a Tower of Babel bias? J Clin Epidemiol. 1995;48:159–163. doi: 10.1016/0895-4356(94)00098-b.
34. Juni P, Holenstein F, Sterne J, Bartlett C, Egger M. Direction and impact of language bias in meta-analyses of controlled trials: empirical study. Int J Epidemiol. 2002;31:115–123. doi: 10.1093/ije/31.1.115.
35. Fergusson D, Laupacis A, Salmi LR, McAlister FA, Huet C. What should be included in meta-analyses? An exploration of methodological issues using the ISPOT meta-analyses. Int J Technol Assess Health Care. 2000;16:1109–1119. doi: 10.1017/s0266462300103150.
36. Ioannidis JPA, Cappelleri JC, Lau J. Issues in comparison between meta-analyses and large trials. JAMA. 1998;279:1089–1093. doi: 10.1001/jama.279.14.1089.
37. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, Hing C, Kwok CS, Pang C, Harvey I. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010;14:iii,ix-xi,1–193. doi: 10.3310/hta14080. https://www.journalslibrary.nihr.ac.uk/hta/hta14080/#/abstract
38. Eustace S. Technology-induced bias in the theory of evidence-based medicine. J Eval Clin Pract. 2018. doi: 10.1111/jep.12972.
39. Lomas J, Sisk JE, Stocking B. From evidence to practice in the United States, the United Kingdom and Canada. Milbank Q. 1993;71:405–410.
40. Tebala GD. Gastro-esophageal reflux disease. Are we acting in the best interest of our patients? Eur Rev Med Pharmacol Sci. 2016;20:4553–4556.
41. NICE. Colorectal cancer: diagnosis and management. Clinical guideline CG131. Full guideline. Published 1 November 2011. https://www.nice.org.uk/guidance/cg131/resources/colorectal-cancer-diagnosis-and-management-pdf-35109505330117
42. ACPGBI. Guidelines for the Management of Cancer of the Colon, Rectum and Anus. Colorect Dis. 2017;19(S1):1–97.
43. Xie Y, Bowe B, Li T, Xian H, Yan Y, Al-Aly Z. Risk of death among users of proton pump inhibitors: a longitudinal observational cohort study of United States veterans. BMJ Open. 2017;7:e015735. doi: 10.1136/bmjopen-2016-015735. https://bmjopen.bmj.com/content/bmjopen/7/6/e015735.full.pdf
44. Colbrook P. Can you ignore guidelines? BMJ. 2005;330:s143. doi: 10.1136/bmj.330.7495.s143-a. https://www.bmj.com/content/330/7495/s143.2.full?int_source=trendmd&int_medium=trendmd&int_campaign=trendmd
45. Tebala GD. Guidelines. A word of caution. Open Access J Surgery. 2016;1:OAJS.MS.ID.555558. http://juniperpublishers.com/oajs/pdf/OAJS.MS.ID.555558.pdf
46. Fava GA, Guidi J, Rafanelli C, Sonino N. The clinical inadequacy of evidence-based medicine and the need for a conceptual framework based on clinical judgment. Psychother Psychosom. 2015;84:1–3. doi: 10.1159/000366041.
47. Accad M, Francis D. Does evidence based medicine adversely affect clinical judgment? BMJ. 2018;362:k2799. doi: 10.1136/bmj.k2799.
48. Beckmann JS, Lew D. Reconciling evidence-based medicine and precision medicine in the era of big data: challenges and opportunities. Genome Medicine. 2016;8:134. https://genomemedicine.biomedcentral.com/articles/10.1186/s13073-016-0388-7
49. Chin-Yee BH. Underdetermination in evidence-based medicine. J Eval Clin Pract. 2014;20:921–927. doi: 10.1111/jep.12258.
