Abstract
Objective
To explore contemporary iterations of the evidence pyramid as applied in evidence-based practice.
Methods
We searched the PubMed, Web of Science, and Scopus databases for articles published between 2016 and 2024 that assessed the evidence pyramid and its application in clinical practice. One reviewer conducted title/abstract and full-text screening to determine eligibility, followed by data extraction and analysis to summarize themes.
Results
Of 83 full-text articles identified, 28 were included. Extracted information centred on three common themes: (1) use of the evidence pyramid as a guide, not a rigid tool; (2) importance of the clinical question; and (3) necessity of clinical expertise to integrate research findings into clinical decision-making.
Conclusion
Preliminary findings of our review suggest that, when applying the evidence pyramid in practice, clinicians should consider context (i.e., the clinical question, best available evidence, patient preferences, and clinical circumstances) to optimize clinical decision-making and patient outcomes.
Author’s Note
This paper is one of seven in a series exploring contemporary perspectives on the application of the evidence-based framework in chiropractic care. The Evidence Based Chiropractic Care (EBCC) initiative aims to support chiropractors in their delivery of optimal patient-centred care. We encourage readers to review all papers in the series.
Keywords: chiropractic, clinical decision-making, clinical competence, evidence-based practice, evidence-based medicine, patient care
Abstract
Conceptualizing the evidence pyramid for use in clinical practice: a narrative review of the literature
Objective
To examine contemporary iterations of the evidence pyramid as applied in evidence-based practice.
Methods
We searched the PubMed, Web of Science, and Scopus databases for articles published between 2016 and 2024 that assessed the evidence pyramid and its application in clinical practice. One reviewer screened titles/abstracts and full texts to determine eligibility, followed by data extraction and analysis to summarize themes.
Results
Of the 83 full-text articles identified, 28 were included. Extracted information centred on three common themes: (1) use of the evidence pyramid as a guide, not a rigid tool; (2) importance of the clinical question; and (3) necessity of clinical expertise to integrate research findings into clinical decision-making.
Conclusion
Preliminary findings of our review suggest that, when applying the evidence pyramid in practice, clinicians should consider context (i.e., the clinical question, best available evidence, patient preferences, and clinical circumstances) to optimize clinical decision-making and patient outcomes.
Author’s Note
This paper is one of seven in a series exploring contemporary perspectives on the application of the evidence-based framework in chiropractic care. The Evidence Based Chiropractic Care (EBCC) initiative aims to support chiropractors in their delivery of optimal patient-centred care. We encourage readers to review all papers in the series.
KEYWORDS: chiropractic, clinical decision-making, clinical competence, evidence-based practice, evidence-based medicine, patient care
Introduction
The conceptualization and application of the evidence hierarchy in evidence-based practice (EBP) has evolved iteratively since EBP was first introduced.1–3 Early papers on EBP advocated a shift in how medicine was taught and clinical decisions were made. Initially, the focus of EBP was to educate clinicians on assessing and applying published literature to clinical decision-making to improve patient care, while placing a lower value on clinical expertise, on its own, than the traditional medical model did.2 As EBP evolved, clinical expertise (i.e., the competence and decision-making abilities that clinicians acquire throughout their career) came to be seen as essential for integrating the best available research with a patient’s values and preferences to improve clinical decision-making.3
By 1997, EBP was viewed as a life-long process of self-directed learning,4 rather than “cookbook” medicine.3 More recently, however, some in the field have asserted that EBP has at times been co-opted, misappropriated, or “hijacked” by others to serve unintended agendas or conflicts of interest.5 Moreover, clinicians face continual challenges in selecting and appraising appropriate and available, ‘high-quality’ evidence (e.g., meta-analyses, systematic reviews, and clinical practice guidelines) to integrate in their day-to-day practices with the remaining pillars of EBP.4
Though the evidence hierarchy remains useful for understanding which research study designs are most valid and reliable, such as systematic reviews of randomized controlled trials (RCTs) for clinical questions about therapy, misconceptions persist about the use of other forms of evidence, such as observational studies, and their application in clinical practice. The impact of these misconceptions reaches beyond medicine to other health professions, such as chiropractic, and the evidence hierarchy therefore warrants analysis from this perspective. A comprehensive, inclusive understanding of the appropriateness of different forms of evidence, informed by the clinical question and context, is important for clinicians to deliver optimal patient-centred care and improve patient outcomes. Therefore, the purpose of our review is to explore contemporary iterations of the evidence pyramid as applied in EBP, and to summarize contextual factors and limitations associated with these evidence hierarchies.
Methods
Study design
We conducted a narrative review6 to summarize contemporary iterations, contextual factors, and limitations of evidence hierarchies by examining published scholarly literature on the evidence pyramid in relation to EBP.
Data sources and searches
We searched PubMed, Web of Science, and Scopus databases to identify English-language articles on evidence hierarchies in EBP that were published between January 1, 2016 and July 1, 2024. This timeframe was used to capture recent developments and perspectives in the field. We used combinations of the following key terms for our database searches: “evidence based medicine,” “evidence based healthcare,” “evidence based practice,” “evidence based nursing,” “evidence based chiropractic,” “evidence based care,” “evidence pyramid,” “evidence hierarchy,” “rules of evidence,” “evidence rules,” “classification of evidence,” “quality of evidence,” “grading system,” “grading guidelines,” “best evidence,” and “canon* pyramid”.
We defined the evidence hierarchy in EBP, according to Guyatt et al.7, as a system to rank different types of evidence and research, from unsystematic clinical observations to RCTs, based on their methodological rigour and ability to provide reliable evidence for clinical decision-making.7 We defined EBP, according to Haynes et al., as an approach to clinical care that emphasizes the integration of the best available research evidence with clinical expertise, patients’ preferences, and clinical state and circumstances to make informed clinical decisions.7,8 In the Haynes et al. model, clinical expertise is the central pillar responsible for integrating each of the other three components into forming a clinical decision.8
Selection criteria
We included empirical research articles as well as secondary sources of evidence (e.g., systematic, scoping, or narrative reviews, and commentaries) that explored the evidence hierarchy or evidence pyramid within the context of EBP. We excluded conference abstracts, protocols, and EBP articles that did not explicitly analyze the evidence hierarchy or evidence pyramid.
Screening process
One author assessed titles and abstracts of identified articles to determine eligibility. Articles deemed potentially relevant underwent full-text review by the same author. The rest of the working group confirmed inclusion of each full-text article.
Data extraction and analysis
Descriptive information was extracted from included full-text articles, including discipline, first author, year of publication, title, study design, and insights on the evidence hierarchy, including relevant findings or author perspectives as applicable. For this last item, data from each paper were grouped into one of three categories: (1) contemporary understandings of the evidence pyramid, including how it is used and understood; (2) critiques of the evidence pyramid in relation to EBP; and (3) contextual considerations when applying the evidence pyramid to clinical decision-making. These categories were determined a priori, in line with the purpose of our review. All data were extracted, summarized, and presented in tabular form by one reviewer. The data extraction table was then independently reviewed by the full working group, and its contents required unanimous consensus.
Results
Of 4,699 articles identified, 83 underwent full-text review and 28 met our inclusion criteria (Figure 1). Each of the 28 included articles explored the evidence hierarchy, along with methodological approaches for appraising research literature, or discussed the importance of aligning evidence with the clinical question. The fields of clinical practice (24 articles)9–32, public health (3 articles)33–35, and geoscience (1 article)36 were represented across the analyzed literature (Table 1).
Figure 1.
Flowchart diagram showing the search and selection process of studies included in this review.
Table 1.
Descriptive information extracted from the 28 articles included in our review.
| Field | First author, year | Title | Study design | Evidence hierarchy insights a |
|---|---|---|---|---|
| Clinical practice | Aldous, 202414 | Wheel replacing pyramid: better paradigm representing totality of evidence-based medicine | Narrative review | |
| | Antoniou, 202215 | An overview of evidence quality assessment methods, evidence to decision frameworks, and reporting standards in guideline development | Narrative review | |
| | Anttila, 20169 | Conclusiveness resolves the conflict between quality of evidence and imprecision in GRADE | Commentary | |
| | Bosdriesz, 202010 | Evidence-based medicine: when observational studies are better than randomized controlled trials | Narrative review | |
| | Chloros, 202316 | Has anything changed in evidence-based medicine? | Commentary | |
| | Cuello-Garcia, 202217 | GRADE guidance 24: optimizing the integration of randomized and non-randomized studies of interventions in evidence syntheses and health guidelines | Commentary | |
| | Djulbegovic, 202218 | High quality (certainty) evidence changes less often than low-quality evidence, but the magnitude of effect size does not systematically differ between studies with low versus high-quality evidence | Meta-epidemiological study | |
| | Djulbegovic, 202419 | High certainty evidence is stable and trustworthy, whereas evidence of moderate or lower certainty may be equally prone to being unstable | Meta-epidemiological study | |
| | Galbraith, 201728 | A real-world approach to evidence-based medicine in general practice: a competency framework derived from a systematic review and Delphi process | Systematic review and Delphi process | |
| | Hohmann, 201829 | Research pearls: how do we establish the level of evidence? | Commentary | |
| | Mayoral, 202130 | Decision-making in medicine: a Kuhnian approach | Commentary | |
| | Mercuri, 201831 | The evolution of GRADE (part 1): is there a theoretical and/or empirical basis for the GRADE framework? | Narrative review | |
| | Mercuri, 201832 | The evolution of GRADE (part 2): still searching for a theoretical and/or empirical basis for the GRADE framework | Narrative review | |
| | Mercuri, 201811 | The evolution of GRADE (part 3): a framework built on science or faith? | Narrative review | |
| | Mercuri, 201812 | What confidence should we have in GRADE? | Commentary | |
| | Mugerauer, 202013 | Professional judgement in clinical practice (part 3): a better alternative to strong evidence-based medicine | Narrative review | |
| | Noman, 202420 | Simplifying the concept of level of evidence in lay language for all aspects of learners: in brief review | Commentary | |
| | Ritson, 202322 | Bridging the gap: evidence-based practice guidelines for sports nutritionists | Narrative review | |
| | Semrau, 202323 | Common misunderstandings of evidence-based medicine | Commentary | |
| | Sekhon, 202424 | Synthesis of guidance available for assessing methodological quality and grading of evidence from qualitative research to inform clinical recommendations: a systematic literature review | Systematic review | |
| | Szajewska, 201821 | Evidence-based medicine and clinical research: both are needed, neither is perfect | Commentary | |
| | Vere, 201926 | Evidence-based medicine as science | Commentary | |
| | Wieten, 201827 | Expertise in evidence-based medicine: a tale of three models | Commentary | |
| | Wallace, 202225 | Hierarchy of evidence within the medical literature | Commentary | |
| Geoscience | St. John, 201736 | The strength of evidence pyramid: one approach for characterizing the strength of evidence of geoscience education research (GER) community claims | Commentary | |
| Public health | Irving, 201633 | A critical review of grading systems: implications for public health policy | Narrative review | |
| | Jervelund, 202235 | Evidence in public health: an integrated, multi-disciplinary concept | Commentary | |
| | Parkhurst, 201634 | What constitutes “good” evidence for public health and social policy-making? From hierarchies to appropriateness | Commentary | |
COVID-19 = coronavirus disease of 2019; EBP = evidence-based practice; GRADE = grading of recommendations assessment, development, and evaluation; NRSI = non-randomized studies of interventions; PICO = patient, intervention, comparison, outcome; RCT = randomized controlled trial; ROBINS-I = risk of bias in nonrandomized studies – of interventions.
a Review categories: (1) contemporary understandings of the evidence pyramid, including how it is used and understood; (2) critiques of the evidence pyramid in relation to EBP; and (3) contextual considerations when applying the evidence pyramid to clinical decision-making.
Evolution and contemporary iterations of the evidence pyramid in EBP
Three initial models of EBP were identified, each providing distinct perspectives on the role of clinical expertise in evidence hierarchies.27 The first pyramid, established by the Evidence-Based Medicine Working Group in 1992, formed the foundation of the symbol of the evidence pyramid (Figure 2).2,27 The pyramid includes four layers, and is used to inform the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) framework for systematically rating certainty of evidence. In much of the reviewed literature, the evidence pyramid is viewed as either identical or closely aligned with this original model, which places systematic reviews and meta-analyses at the highest level, followed by individual RCTs, then observational studies, and finally expert opinion.9–13,15–19,21–23,25–27,29–32,35 The second model, presented by Sackett and colleagues, uses a Venn diagram to highlight the convergence of patient values and expectations, best external evidence, and individual clinical expertise at the core of EBP (Figure 3).3,27 The third model, proposed by Haynes and colleagues in 2002, introduces a shifted Venn diagram with components of research evidence, clinical state and circumstances, and patients’ preferences and actions, all converging with clinical expertise at the core (Figure 4).8,27 Together, these models trace the evolving understanding of how clinical expertise is considered and valued within the landscape of evidence in EBP.
Figure 2.
Initial evidence pyramid conceptualizing the strength of various forms of evidence. Information provided by the Evidence-Based Medicine Working Group,2 and figure adapted from Wieten et al.27
Figure 3.
Haynes and colleagues’ initial model for evidence-based clinical decision making. Reproduced and adapted with permission of the American College of Physicians, from Haynes et al.37; permission conveyed through Copyright Clearance Center, Inc. (EBM = evidence-based medicine).
Figure 4.
Haynes and colleagues’ updated EBP model to conceptualize the optimal integration of various considerations into clinical decision-making.8 Reproduced and adapted with permission of the American College of Physicians, from Haynes et al.8; permission conveyed through Copyright Clearance Center, Inc.
More recent literature from the field of geoscience education presents a five-level modified evidence pyramid (Figure 5), with practitioner wisdom/expert opinion as the foundational level.36 The pyramid’s foundation rests on “what we know”, recognizing that practitioners are in a unique position to share pedagogic content knowledge (e.g., in geoscience education, knowing what to teach and how to teach).36 As the pyramid ascends, it proposes separating qualitative and quantitative studies into case studies and cohort studies, emphasizing the importance of practitioner expertise in assessing study robustness.36 At the pinnacle of this pyramid are meta-analyses and systematic reviews, which are the least common designs because they collate and summarize primary research studies.36 The pyramid is similar in nature to the EBP evidence hierarchy, indicating the need for context-dependent, nuanced understandings and decision-making procedures using hierarchical instruments or approaches (e.g., GRADE).36 Following the COVID-19 pandemic, Aldous et al. proposed a ‘totality of evidence’ wheel (see Table 1).14 Presented in a circular format, this system purposefully avoids a hierarchical framework, leading clinicians to consider all sources of information.14 Aldous et al. argue that this approach may be useful in emergent situations, enabling quicker, informed decision-making from evidence that the traditional evidence hierarchy may otherwise neglect.14
Figure 5.
Proposed Strength of Evidence Pyramid for evaluating the strength of evidence in geoscience education research, reproduced and adapted from St. John and McNeal.36 Originally published by the National Association of Geoscience Teachers (NAGT) and licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). Changes were made to the original.
Evidence pyramid considerations and critiques
Several articles supported the use of a traditional evidence pyramid in clinical practice, while noting considerations in its application.20,22,25 Noman et al. viewed the traditional evidence pyramid as having filtered (e.g., systematic reviews and meta-analyses) and unfiltered categories (e.g., RCTs and observational studies), and encouraged use of both sources of evidence while acknowledging that filtered evidence is the more reliable of the two.20 Moreover, filtered evidence is easier for clinicians to apply because of its pre-evaluated nature; however, it may not always be available or relevant to specific clinical situations, thereby requiring reliance on primary, unfiltered sources.20 This notion is also supported by others.22,25
The role of clinician expertise must also be considered.27 In evidence hierarchies, clinician expertise (i.e., expert opinion), on its own, is ranked as the lowest internal form of evidence. However, Wieten27 argues that EBP models should not consider clinician expertise as an internal form of evidence at all. Instead, they argue that clinician expertise should be thought of as a process for incorporating and appraising all factors that go into a clinical decision, such as available evidence, patient preferences, and clinical circumstances27, in line with the Haynes et al.8 model (see Figure 4). Noman et al. further note that when filtered information (e.g., systematic reviews, meta-analyses) is unavailable, practitioners must critically assess the quality and relevance of unfiltered sources before applying them in practice, reiterating the importance of clinical expertise in this context.20
Several articles also challenged the traditional evidence hierarchy13–16,26,28–30, particularly in clinical scenarios where study design feasibility is a challenge (e.g., ethical or cost considerations in using RCTs to examine questions involving risk or prognosis).29 A more nuanced approach to the hierarchy is proposed in such circumstances, where the research question (e.g., therapeutic, diagnostic, prognostic) and study type (e.g., RCT, cohort, cross-sectional study) need to be established prior to assigning a ‘level’ of evidence.29 A 2020 narrative review questioned the traditional placement of RCTs atop other primary research designs in the evidence hierarchy, arguing that the choice of study design should be driven by its suitability for addressing the specific research question.10 In a 2018 commentary, Szajewska21 advocated a pragmatic acknowledgement of the appropriateness of evidence hierarchies and the significance of systematic reviews as the “strongest” form of evidence, while also encouraging a flexible approach to EBP that adapts to the diverse demands of clinical practice. Thus, a dynamic and context-specific approach to applying evidence in practice is needed,10,21 one that values professional judgement, acknowledges uncertainty, and considers individual patient complexities.13,29,30
Evidence hierarchies and the GRADE framework
Ten articles discussed the GRADE framework and how this systematic approach to evidence rating relates to evidence hierarchies in EBP.9,11,12,17–19,23,27,31,32 In brief, the GRADE framework is used to assess the certainty of evidence and strength of recommendations regarding patient-important outcomes for clinical decision-making, where studies are ‘pre-ranked’ based on study design, following the traditional evidence pyramid (e.g., RCTs are ranked as high certainty, observational studies are low certainty). The ranking can then be adjusted up or down based on several factors (i.e., higher if there is a large magnitude of effect, a dose-response gradient, or if all plausible residual confounding would reduce a demonstrated effect or suggest a spurious effect if no effect was observed; lower if there is serious risk of bias, inconsistency, indirectness, imprecision, or publication bias).38
Authors in five commentaries11,12,23,31,32 expressed concerns over the GRADE framework with regard to its initial categorization of RCTs as “high” grade evidence and observational studies as “low”. However, observational studies in GRADE can initially be categorized as “high” grade evidence if these constitute the best study design(s) for a particular clinical question (e.g., cross-sectional studies to address a clinical question about prevalence, or cohort studies to address a clinical question about prognosis). The certainty of evidence can then be downgraded if there is, for example, serious risk of bias.38
Concerns regarding GRADE and its foundation on an evidence hierarchy were raised by Mercuri and Gafni11,31,32 in a three-part narrative review. In part 1, the authors suggested that the evidence hierarchy on which the GRADE framework is based lacks theoretical and empirical justification for assessing certainty of evidence and, in turn, making clinical recommendations.31 They argued that current literature suggests randomization contributes minimal differences to estimated effects when compared with well-designed studies lower in the hierarchy, and that the superiority of RCTs over these studies is inconclusive.31 In part 2, they questioned whether randomization actually balances all important factors between groups and noted that, even if it did, limitations of external validity and individual patient applicability (i.e., generalizability) become more prevalent and problematic.32 However, this is less of a concern in pragmatically conducted RCTs that investigate the effects of “real-world” interventions on “real-world” patients.39
In part 2, Mercuri and Gafni also cite literature suggesting that GRADE lacks explicit consideration of biological plausibility and mechanisms; although animal model and basic science research have limited direct value in clinical decision-making, they are important for generating hypotheses and understanding causation, yet are downplayed in evidence hierarchies.32 In part 3, they questioned the separation of RCTs and observational studies in hierarchies, arguing that the GRADE framework does not provide a clear rationale for categorizing observational studies or for grouping and rating them similarly.11 The authors discussed how changes made to the framework throughout its development were introduced based on consensus methods, and suggested that the assessments and recommendations produced leave too much room for user judgement.11 However, GRADE was designed to provide a systematic framework for assessing certainty of evidence that encourages transparency and an explicit accounting of judgements made, making it a more valid and reliable method than less structured alternatives.38
Evidence hierarchies in public health
Critiques of evidence hierarchies are also offered in the public health literature33, including the difficulty, in some cases, of applying evidence hierarchies to guide policy decisions.34 In a 2016 article, Parkhurst and Abeysinghe34 argued that while evidence hierarchies prioritize RCTs and experimental designs as “high-quality” evidence, the complexity of policy decisions in public health, influenced by economic, social, and political factors, at times requires a broader consideration of evidence, a sentiment further supported by Jervelund and Villadsen.35 Accordingly, literature in this field proposes an “appropriateness” framework that enables consideration of the multifaceted nature of policy concerns and values, and encourages reflection on the goals of evidence utilization34,35 and the alignment of study design with the specific research question.35 In emergent public health situations (e.g., war/conflict, natural disasters, or global pandemics), there may also be a need to consider additional forms of evidence, other than strictly RCTs, to inform rapid decision-making.16
Discussion
Summary of findings
Preliminary findings from our review of the literature exploring the evidence pyramid, or evidence hierarchy, in EBP centred on three common themes: (1) use of the evidence pyramid as a guide, not a rigid tool; (2) importance of the clinical question; and (3) necessity of clinical expertise to integrate research findings into clinical decision-making. We briefly discuss each of these in further detail below.
(1) Use of the evidence pyramid as a guide, not a rigid tool
The evidence pyramid is viewed as a guideline to help clinicians determine which types of evidence, if conducted soundly, are more likely to provide valid, reliable, and trustworthy answers to their clinical questions.21 Updated evidence pyramids have been developed to reflect how the GRADE framework, a tool that is based on the evidence hierarchy and designed to systematically rank a body of evidence, considers factors in addition to study design. These factors include risk of bias across studies, inconsistency of results, indirectness of the evidence to the clinical question, imprecision and magnitude of the effect estimate, whether there is a dose-response gradient, and the likelihood of publication bias. For example, Murad et al.40 developed an evidence pyramid that depicts layers of evidence as waves (to reflect uncertainty) instead of rigid, flat lines. While GRADE overcomes certain restrictive limitations of the evidence pyramid, and can be a valuable tool for clinicians, some authors have expressed concern over the traditional evidence hierarchy inherent in its application.11,23,26,31,32 There is also agreement among authors in the contemporary literature that the application of evidence hierarchies in the clinical context depends on a patient’s clinical state and circumstances and should align with the clinical question at hand (e.g., therapy, etiology, diagnosis)13–16,26,28–30,41, rather than serving as an algorithmic tool to be rigidly applied without discussion.13,28,35
(2) Importance of the clinical question
Recent commentaries on the evidence hierarchy have emphasized the importance of tailoring the selection of evidence to the clinical question.10,16,22,24,29,34–36 This requires an understanding that observational studies, for example, may be more suitable when investigating the unintended effects or harms of an intervention10,21,23, or when addressing questions around etiology or prognosis.21 For instance, a recent systematic review examined the course of, and prognostic factors associated with, whiplash injuries in cohort and case-control studies, helping to inform chiropractors on patient management.42
Although clinical research relies heavily on quantitative research methods,39,43 the broader methodological literature should also be considered, including qualitative and mixed methods research. These methodologies have traditionally been excluded from the evidence pyramid, even though certain clinical questions may be best answered using these approaches.24 For instance, questions about patient behaviour or experiences, and those requiring a more in-depth understanding of clinical outcomes, may be best addressed by qualitative or mixed methods studies, grounded in established theory and thoughtful, relevant questions.44–47 An alternative to the evidence pyramid, proposed in 2005 by Miller and Jones-Harris44, is an “evidence pathways model”, which allows clinicians to consider high-quality quantitative and qualitative research along different pathways according to the type of clinical question (Figure 6).
Figure 6.
Evidence pathways model proposed by Miller and Jones-Harris44, illustrating how different forms of evidence are best suited to answer different clinical questions. Study designs most appropriate for addressing qualitative research questions are highlighted in the bottom three rows. Figure is reprinted and adapted from Miller and Jones-Harris44 with permission from Elsevier.
(3) Necessity of clinical expertise to integrate research findings into clinical decision-making
Clinical expertise involves understanding the nuances of a clinical scenario and weighing different factors (e.g., best available evidence, patient preferences, and clinical circumstances) to make informed decisions that will optimize patient care.13,27 Multiple authors13,20,28,29,48,49 further suggest that clinical expertise is especially important in scenarios where evidence is limited or lacking. It is from this experience-informed position that clinicians are able to make appropriate decisions, guided by a contextual understanding of patient circumstances and evidence, to determine the most appropriate course of action.13 In essence, it is the clinician, with their inherent expertise, who seeks out the best available evidence and critically appraises it in terms of its validity, importance, and applicability to managing an individual patient within the context of their unique values and clinical circumstances.
Role of chiropractic professional stakeholders
Chiropractors in the field who lack training in research methodologies will need assistance in applying the best available evidence to patient care if EBP is to be conducted successfully and appropriately in clinical practice. There is therefore an opportunity for professional organizations to provide support and leadership in helping clinicians learn to use the EBP framework as it is intended to be used. In Canada, several chiropractic organizations could work cooperatively not only to fund research but also to invest in knowledge translation (KT) of research findings into clinical practice. These organizations include the Canadian Chiropractic Research Foundation, the Canadian Chiropractic Guideline Initiative, the Canadian Memorial Chiropractic College, the Canadian Chiropractic Association, the Département de chiropratique at the Université du Québec à Trois-Rivières, and provincial advocacy associations such as the Ontario Chiropractic Association (OCA). We discuss KT in chiropractic in more detail in a subsequent paper of this JCCA special edition.
Limitations
Our review has several limitations. First, we may not have captured all relevant papers on the evolution of the evidence pyramid, or evidence hierarchy, in EBP. Second, restricting our searches to three databases and to English-language articles published between January 1, 2016 and July 1, 2024 may have excluded additional relevant papers. Third, we did not hand-search the references of included articles, and only one reviewer performed article screening and data extraction. Fourth, we did not assess included articles for risk of bias. A strength of our review was its diverse working group of clinicians, educators, researchers, and OCA staff members. Our findings are nevertheless exploratory in nature; as such, a systematic literature review in this topic area may be warranted. Future research in the form of interviews and/or surveys could also seek practitioner and institutional perspectives on the evidence pyramid and its use in clinical practice.
Conclusions
In line with the model by Haynes et al.8, preliminary findings of our review suggest that the value placed on clinical expertise, as the central pillar of EBP, reinforces that care is delivered in collaboration with patients and their unique values and clinical circumstances, supported by the best available research evidence. Clinical questions, including those that are qualitative in nature, must be answered using the most appropriate research methodologies (i.e., quantitative, qualitative, or mixed methods). For clinicians and researchers, the manner in which questions are framed, along with the language that is used, is essential for structuring and seeking research that helps inform clinical practice. The principles and goals initially developed in EBP continue to be informative and ought to be applied in a dynamic and contextualized manner, as intended.2
Acknowledgments
Acknowledgements for this paper, and for the entire special edition, are listed and detailed within the Preface paper.
Footnotes
Conflicts of Interest:
This research was funded by the OCA. The lead authors received a per diem for their work on this project. The authors declare no other conflicts of interest, including no disclaimers, competing interests, or other sources of support or funding to report in the preparation of this manuscript.
References
- 1.Djulbegovic B, Guyatt GH. Progress in evidence-based medicine: a quarter century on. Lancet. 2017;390:415–423. doi: 10.1016/S0140-6736(16)31592-6. [DOI] [PubMed] [Google Scholar]
- 2.Guyatt GH, Cairns JA, Churchill DN, Cook DJ, Haynes B, Hirsh J, et al. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA. 1992;268(17):2420–2425. doi: 10.1001/jama.1992.03490170092032. [DOI] [PubMed] [Google Scholar]
- 3.Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: What it is and what it isn’t. BMJ. 1996;312(7023) doi: 10.1136/bmj.312.7023.71. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Sackett DL. Evidence-based medicine. Semin Perinatol. 1997;21(1):3–5. doi: 10.1016/s0146-0005(97)80013-4. [DOI] [PubMed] [Google Scholar]
- 5.Ioannidis JPA. Evidence-based medicine has been hijacked: a report to David Sackett. J Clin Epidemiol. 2016;73:82–86. doi: 10.1016/j.jclinepi.2016.02.012. [DOI] [PubMed] [Google Scholar]
- 6.Green BN, Johnson CD, Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. J Chiropr Med. 2006;5(3):101–117. doi: 10.1016/S0899-3467(07)60142-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Guyatt GH, Haynes RB, Jaeschke RZ, Cook DJ, Green L, Naylor CD, et al. Users’ Guides to the Medical Literature: XXV. Evidence-based medicine: principles for applying the Users’ Guides to patient care. Evidence-Based Medicine Working Group. JAMA. 2000 Sep;284(10):1290–1296. doi: 10.1001/jama.284.10.1290. [DOI] [PubMed] [Google Scholar]
- 8.Haynes RB, Devereaux PJ, Guyatt GH. Clinical expertise in the era of evidence-based medicine and patient choice. ACP J Club. 2002;136(2):A11–4. [PubMed] [Google Scholar]
- 9.Anttila S, Persson J, Vareman N, Sahlin NE. Conclusiveness resolves the conflict between quality of evidence and imprecision in GRADE. J Clin Epidemiol. 2016;75:1–5. doi: 10.1016/j.jclinepi.2016.03.019. [DOI] [PubMed] [Google Scholar]
- 10.Bosdriesz JR, Stel VS, van Diepen M, Meuleman Y, Dekker FW, Zoccali C, et al. Evidence-based medicine—When observational studies are better than randomized controlled trials. Nephrology (Carlton). 2020;25:737–743. doi: 10.1111/nep.13742. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Mercuri M, Gafni A. The evolution of GRADE (part 3): A framework built on science or faith? J Eval Clin Pract. 2018;24(5):1223–1231. doi: 10.1111/jep.13016. [DOI] [PubMed] [Google Scholar]
- 12.Mercuri M, Baigrie BS. What confidence should we have in GRADE? J Eval Clin Pract. 2018;24(5):1240–1246. doi: 10.1111/jep.12993. [DOI] [PubMed] [Google Scholar]
- 13.Mugerauer R. Professional judgement in clinical practice (part 3): A better alternative to strong evidence-based medicine. J Eval Clin Pract. 2021;27(3):612–623. doi: 10.1111/jep.13512. [DOI] [PubMed] [Google Scholar]
- 14.Aldous C, Dancis BM, Dancis J, Oldfield PR. Wheel Replacing Pyramid: Better Paradigm Representing Totality of Evidence-Based Medicine. Ann Glob Health. 2024;90(1):17. doi: 10.5334/aogh.4341. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Antoniou GA, Bastos Gonçalves F, Björck M, Chakfé N, Coscas R, Dias NV, et al. Editor’s Choice – European Society for Vascular Surgery Clinical Practice Guideline Development Scheme: An Overview of Evidence Quality Assessment Methods, Evidence to Decision Frameworks, and Reporting Standards in Guideline Development. Eur J Vasc Endovasc Surg. 2022;63(6):791–799. doi: 10.1016/j.ejvs.2022.03.014. [DOI] [PubMed] [Google Scholar]
- 16.Chloros GD, Prodromidis AD, Giannoudis PV. Has anything changed in Evidence-Based Medicine? Injury. 2023;54(Suppl 3):S20–25. doi: 10.1016/j.injury.2022.04.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Cuello-Garcia CA, Santesso N, Morgan RL, Verbeek J, Thayer K, Ansari MT, et al. GRADE guidance 24 optimizing the integration of randomized and nonrandomized studies of interventions in evidence syntheses and health guidelines. J Clin Epidemiol. 2022;142:200–208. doi: 10.1016/j.jclinepi.2021.11.026. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Djulbegovic B, Ahmed MM, Hozo I, Koletsi D, Hemkens L, Price A, et al. High quality (certainty) evidence changes less often than low-quality evidence, but the magnitude of effect size does not systematically differ between studies with low versus high-quality evidence. J Eval Clin Pract. 2022;28(3):353–362. doi: 10.1111/jep.13657. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Djulbegovic B, Koletsi D, Hozo I, Price A, Martimbianco ALC, Riera R, et al. High certainty evidence is stable and trustworthy, whereas evidence of moderate or lower certainty may be equally prone to being unstable. J Clin Epidemiol. 2024;171:111392. doi: 10.1016/j.jclinepi.2024.111392. [DOI] [PubMed] [Google Scholar]
- 20.Al Noman A, Sarkar O, Mita TM, Siddika K, Afrose F. Simplifying the concept of level of evidence in lay language for all aspects of learners: In brief review. Intell Pharm. 2024;2(2):270–273. [Google Scholar]
- 21.Szajewska H. Evidence-Based Medicine and Clinical Research: Both Are Needed, Neither Is Perfect. Ann Nutr Metab. 2018;72:13–23. doi: 10.1159/000487375. [DOI] [PubMed] [Google Scholar]
- 22.Ritson AJ, Hearris MA, Bannock LG. Bridging the gap: Evidence-based practice guidelines for sports nutritionists. Front Nutr. 2023;10:1118547. doi: 10.3389/fnut.2023.1118547. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Semrau F, Aidelsburger P, Israel CW. Common misunderstandings of evidence-based medicine. Herzschrittmacherther Elektrophysiol. 2023;34(3):232–239. doi: 10.1007/s00399-023-00957-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Sekhon M, de Thurah A, Fragoulis GE, Schoones J, Stamm TA, Vliet Vlieland TPM, et al. Synthesis of guidance available for assessing methodological quality and grading of evidence from qualitative research to inform clinical recommendations: a systematic literature review. RMD Open. 2024;10(2) doi: 10.1136/rmdopen-2023-004032. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Wallace SS, Barak G, Truong G, Parker MW. Hierarchy of Evidence Within the Medical Literature. Hosp Pediatr. 2022;12(8):745–750. doi: 10.1542/hpeds.2022-006690. [DOI] [PubMed] [Google Scholar]
- 26.Vere J, Gibson B. Evidence-based medicine as science. J Eval Clin Pract. 2019;25(6):997–1002. doi: 10.1111/jep.13090. [DOI] [PubMed] [Google Scholar]
- 27.Wieten S. Expertise in evidence-based medicine: A tale of three models. Philos Ethics Humanit Med. 2018;13(1) doi: 10.1186/s13010-018-0055-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Galbraith K, Ward A, Heneghan C. A real-world approach to Evidence-Based Medicine in general practice: A competency framework derived from a systematic review and Delphi process. BMC Med Educ. 2017;17(1) doi: 10.1186/s12909-017-0916-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Hohmann E, Feldman M, Hunt TJ, Cote MP, Brand JC. Research Pearls: How Do We Establish the Level of Evidence? Arthroscopy. 2018;34(12):3271–3277. doi: 10.1016/j.arthro.2018.10.002. [DOI] [PubMed] [Google Scholar]
- 30.Mayoral JV. Decision-Making in Medicine: A Kuhnian Approach. Teorema: Revista Internacional de Filosofía. 2021;40(1):133–150. [Google Scholar]
- 31.Mercuri M, Gafni A. The evolution of GRADE (part 1): Is there a theoretical and/or empirical basis for the GRADE framework? J Eval Clin Pract. 2018;24(5):1203–1210. doi: 10.1111/jep.12998. [DOI] [PubMed] [Google Scholar]
- 32.Mercuri M, Gafni A. The evolution of GRADE (part 2): Still searching for a theoretical and/or empirical basis for the GRADE framework. J Eval Clin Pract. 2018;24(5):1211–1222. doi: 10.1111/jep.12997. [DOI] [PubMed] [Google Scholar]
- 33.Irving M, Eramudugolla R, Cherbuin N, Anstey KJ. A Critical Review of Grading Systems: Implications for Public Health Policy. Eval Health Prof. 2017;40(2):244–262. doi: 10.1177/0163278716645161. [DOI] [PubMed] [Google Scholar]
- 34.Parkhurst JO, Abeysinghe S. What Constitutes “Good” Evidence for Public Health and Social Policy-making? From Hierarchies to Appropriateness. Soc Epistemol. 2016;30(5–6):665–679. [Google Scholar]
- 35.Smith Jervelund S, Villadsen SF. Evidence in public health: An integrated, multidisciplinary concept. Scand J Public Health. 2022 Nov;50(7):1012–1017. doi: 10.1177/14034948221125341. [DOI] [PubMed] [Google Scholar]
- 36.StJohn K, McNeal KS. The strength of evidence pyramid: One approach for characterizing the strength of evidence of geoscience education research (GER) community claims. J Geosci Educ. 2017;65:363–372. [Google Scholar]
- 37.Haynes RB, Sackett DL, Gray JAM, Cook DJ, Guyatt GH. Transferring evidence from research into practice: 1. The role of clinical care research evidence in clinical decisions. ACP J Club. 1996;125(3):A14. [PubMed] [Google Scholar]
- 38.Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, et al. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011;64(4):383–394. doi: 10.1016/j.jclinepi.2010.04.026. [DOI] [PubMed] [Google Scholar]
- 39.Eklund A, Jensen I, Lohela-Karlsson M, Hagberg J, Leboeuf-Yde C, Kongsted A, et al. The nordic maintenance care program: Effectiveness of chiropractic maintenance care versus symptom-guided treatment for recurrent and persistent low back pain—a pragmatic randomized controlled trial. PLoS One. 2018;13(9) doi: 10.1371/journal.pone.0203029. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40.Murad MH, Asi N, Alsawas M, Alahdab F. New evidence pyramid. Evid Based Med. 2016;21(4):125–127. doi: 10.1136/ebmed-2016-110401. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Warnke G. Gadamer: Hermeneutics, Tradition, and Reason. Stanford: Stanford University Press; 1987. p. 220. [Google Scholar]
- 42.Shearer HM, Carroll LJ, Côté P, Randhawa K, Southerst D, Varatharajan S, et al. The course and factors associated with recovery of whiplash-associated disorders: an updated systematic review by the Ontario protocol for traffic injury management (OPTIMa) collaboration. Eur J Physiother. 2021;23(5):279–294. [Google Scholar]
- 43.Bolton JE. The evidence in evidence-based practice: What counts and what doesn’t count? J Manipulative Physiol Ther. 2001;24(5):362–366. doi: 10.1067/mmt.2001.115259. [DOI] [PubMed] [Google Scholar]
- 44.Miller PJ, Jones-Harris AR. The Evidence-Based Hierarchy: Is It Time For Change? A Suggested Alternative. J Manipulative Physiol Ther. 2005;28(6):453–457. doi: 10.1016/j.jmpt.2005.06.010. [DOI] [PubMed] [Google Scholar]
- 45.Giacomini MK. The rocky road: qualitative research as evidence. Evid Based Med. 2001;6(1):4–6. [PubMed] [Google Scholar]
- 46.Emary PC, Stuber KJ, Mbuagbaw L, Oremus M, Nolet PS, Nash JV, et al. Risk of bias in chiropractic mixed methods research: a secondary analysis of a meta-epidemiological review. J Can Chiropr Assoc. 2022;66(1):7–20. [PMC free article] [PubMed] [Google Scholar]
- 47.Emary PC, Stuber KJ, Mbuagbaw L, Oremus M, Nolet PS, Nash JV, et al. Quality of reporting in chiropractic mixed methods research: a methodological review protocol. Chiropr Man Ther. 2021;29(1) doi: 10.1186/s12998-021-00395-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Roberge-Dao J, Yardley B, Menon A, Halle MC, Maman J, Ahmed S, et al. A mixed-methods approach to understanding partnership experiences and outcomes of projects from an integrated knowledge translation funding model in rehabilitation. BMC Health Serv Res. 2019;19(1) doi: 10.1186/s12913-019-4061-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Palermo TM, Davis KD, Bouhassira D, Hurley RW, Katz JD, Keefe FJ, et al. Promoting inclusion, diversity and equity in pain science. Eur J Pain. 2023;24:105–109. doi: 10.1093/pm/pnac204. [DOI] [PMC free article] [PubMed] [Google Scholar]