The Journal of the Canadian Chiropractic Association. 2025 Nov 30;69(3):238–254.

Conceptualizing the evidence pyramid for use in clinical practice: a narrative literature review

Paul S Nolet 1,2, Peter C Emary 3, Jonathan Murray 4, Glen H Harris 5, Brian Gleberzon 6, Anita Chopra 7, Marco De Ciantis 8, Rod Overton 9
PMCID: PMC12716892  PMID: 41425288

Abstract

Objective

To explore contemporary iterations of the evidence pyramid as applied in evidence-based practice.

Methods

We searched for articles published in PubMed, Web of Science, and Scopus databases between 2016 and 2024 that assessed the evidence pyramid and its application in clinical practice. Title/abstract and full-text screening were conducted by one reviewer to determine eligibility, followed by data extraction and analysis to summarize themes.

Results

Of 83 full-text articles identified, 28 were included. Extracted information centred on three common themes: (1) use of the evidence pyramid as a guide, not a rigid tool; (2) importance of the clinical question; and (3) necessity of clinical expertise to integrate research findings into clinical decision-making.

Conclusion

Preliminary findings of our review suggest that, when applying the evidence pyramid in practice, clinicians should consider context (i.e., the clinical question, best available evidence, patient preferences, and clinical circumstances), to optimize clinical decision-making and patient outcomes.

Author’s Note

This paper is one of seven in a series exploring contemporary perspectives on the application of the evidence-based framework in chiropractic care. The Evidence Based Chiropractic Care (EBCC) initiative aims to support chiropractors in their delivery of optimal patient-centred care. We encourage readers to review all papers in the series.

Keywords: chiropractic, clinical decision-making, clinical competence, evidence-based practice, evidence-based medicine, patient care

Introduction

The conceptualization and application of the evidence hierarchy in evidence-based practice (EBP) has iteratively evolved since EBP was first introduced.1–3 Early papers on EBP advocated for a shift in the manner in which medicine was taught and clinical decisions were made. Initially, the focus of EBP was to educate clinicians on assessing and applying published literature to clinical decision-making to improve patient care, while placing a lower value on clinical expertise, on its own, than in the traditional medical model.2 As EBP evolved, clinical expertise (i.e., the competence and decision-making abilities that clinicians acquire throughout their career) was seen as integral to incorporating the best available research with a patient’s values and preferences to improve clinical decision-making.3

By 1997, EBP was viewed as a life-long process of self-directed learning,4 rather than “cookbook” medicine.3 More recently, however, some in the field have asserted that EBP has at times been co-opted, misappropriated, or “hijacked” by others to serve unintended agendas or conflicts of interest.5 Moreover, clinicians face continual challenges in selecting and appraising appropriate, available, ‘high-quality’ evidence (e.g., meta-analyses, systematic reviews, and clinical practice guidelines) to integrate with the remaining pillars of EBP in their day-to-day practices.4

Though the evidence hierarchy remains useful for understanding which research study designs are most valid and reliable, such as systematic reviews of randomized controlled trials (RCTs) for clinical questions about therapy, misconceptions persist about the use of other forms of evidence, such as observational studies, and their application in clinical practice. The impact of these misconceptions reaches beyond medicine to other health professional fields, such as chiropractic, and therefore the evidence hierarchy requires analysis from this perspective. A comprehensive, inclusive understanding of the appropriateness of different forms of evidence, informed by the clinical question and context, is important for clinicians to deliver optimal patient-centred care and improve patient outcomes. Therefore, the purpose of our review is to explore contemporary iterations of the evidence pyramid as applied in EBP, and to summarize contextual factors and limitations associated with these evidence hierarchies.

Methods

Study design

We conducted a narrative review6 to summarize contemporary iterations, contextual factors, and limitations of evidence hierarchies by examining published scholarly literature on the evidence pyramid in relation to EBP.

Data sources and searches

We searched PubMed, Web of Science, and Scopus databases to identify English-language articles on evidence hierarchies in EBP that were published between January 1, 2016 and July 1, 2024. This timeframe was used to capture recent developments and perspectives in the field. We used combinations of the following key terms for our database searches: “evidence based medicine,” “evidence based healthcare,” “evidence based practice,” “evidence based nursing,” “evidence based chiropractic,” “evidence based care,” “evidence pyramid,” “evidence hierarchy,” “rules of evidence,” “evidence rules,” “classification of evidence,” “quality of evidence,” “grading system,” “grading guidelines,” “best evidence,” and “canon* pyramid”.

We defined the evidence hierarchy in EBP, according to Guyatt et al.,7 as a system to rank different types of evidence and research, from unsystematic clinical observations to RCTs, based on their methodological rigour and ability to provide reliable evidence for clinical decision-making. We defined EBP, according to Haynes et al., as an approach to clinical care that emphasizes the integration of the best available research evidence with clinical expertise, patients’ preferences, and clinical state and circumstances to make informed clinical decisions.7,8 In the Haynes et al. model, clinical expertise is the central pillar responsible for integrating each of the other three components into forming a clinical decision.8

Selection criteria

We included empirical research articles as well as secondary sources of evidence (e.g., systematic, scoping, or narrative reviews, and commentaries) that explored the evidence hierarchy or evidence pyramid within the context of EBP. We excluded conference abstracts, protocols, and EBP articles that did not explicitly analyze the evidence hierarchy or evidence pyramid.

Screening process

One author assessed titles and abstracts of identified articles to determine eligibility. Articles deemed potentially relevant underwent full-text review by the same author. The rest of the working group confirmed inclusion of each full-text article.

Data extraction and analysis

Descriptive information was extracted from included full-text articles, including discipline, first author, year of publication, title, and study design, as well as insights on the evidence hierarchy (relevant findings or author perspectives, as applicable). For this last item, data from each paper were grouped into one of three categories: (1) contemporary understandings of the evidence pyramid, including how it is used and understood; (2) critiques of the evidence pyramid in relation to EBP; and (3) contextual considerations when applying the evidence pyramid to clinical decision-making. These categories were determined a priori, in line with the purpose of our review. All data were extracted, summarized, and presented in tabular form by one reviewer. The data extraction table then underwent independent review by the full working group and required unanimous consensus for approval.

Results

Of 4,699 articles identified, 83 underwent full-text review and 28 met our inclusion criteria (Figure 1). Each of the 28 included articles explored the evidence hierarchy, along with methodological approaches for appraising research literature, or provided discussion on the importance of aligning evidence to the clinical question. The fields of clinical practice (24 articles),9–32 public health (3 articles),33–35 and geoscience (1 article)36 were represented across the analyzed literature (Table 1).

Figure 1. Flowchart diagram showing the search and selection process of studies included in this review.

Table 1. Descriptive information extracted from the 28 articles included in our review.

Field | First author, year | Title | Study design | Evidence hierarchy insights a
Clinical practice Aldous, 202414 Wheel replacing pyramid: better paradigm representing totality of evidence-based medicine Narrative review
  • 1. Propose a ‘totality of evidence’ wheel that provides a non-hierarchical framework to include all study designs to offer a comprehensive view of medical evidence, for use in fast-evolving situations like the COVID-19 pandemic, enabling quicker, informed decision-making.

  • 2. The evidence pyramid places RCTs at the top, potentially overshadowing other study designs. For example, well-conducted observational studies are sometimes neglected because of their lower position. The authors argue that the traditional evidence pyramid restricts the scope of information and thereby hampers medical progress, particularly in emergencies. The wheel structure they proposed, which is non-hierarchical in nature, would enable medical professionals to consider a broader array of evidence, including population studies and narrative accounts, which are often excluded in traditional pyramid-based thinking.

Antoniou, 202215 An overview of evidence quality assessment methods, evidence to decision frameworks, and reporting standards in guideline development Narrative review
  • 1. Distinguishes between strength of evidence assessments and evidence hierarchies. While both aim to provide clinicians, patients, and researchers with a comprehensive evaluation of the evidence, assessments provide judgements on confidence in study findings and hierarchies rank evidence by study design (e.g., RCTs highest, expert opinion lowest). Hierarchies are simple and easy for non-experts to use, aiding guideline development for therapeutic effects, harms, and other clinical questions. They are also easy to comprehend for clinical practice guidance. Within hierarchies, the level of evidence does not necessarily reflect the strength of a recommendation.

    The authors developed their own hierarchy to align evidence with class of recommendation. They suggested within the discipline of vascular surgery that evidence from multiple RCTs showing favourable results for a given treatment should be associated with the wording “is recommended”, clearly favourable results from a single RCT or large non-randomized study “should be considered”, unclear favourable results (efficacy less well established) from these single studies “may be considered”, and unfavourable results potentially suggesting harm from consensus of experts or small studies “is not recommended” when making clinical decisions.

  • 2. They felt that hierarchies are overly simplistic, failing to account for important factors of evidence beyond study design that are essential for clinical decision-making.

Anttila, 20169 Conclusiveness resolves the conflict between quality of evidence and imprecision in GRADE Commentary
  • 1. Highlights that the GRADE guideline presents significant challenges in the understanding of the key concepts of “quality of evidence” and “imprecision,” particularly when considered together. This confusion may hinder the practical process of evidence assessment, indicating a need for explicit guidance in the GRADE framework. Quality is not objectively calculated but instead reflects reviewers’ confidence in how close the estimate is to the true effect, expressed on a 4-point ordinal scale. Imprecision, a reason for downgrading evidence quality, incorporates aspects such as sample size, statistical power, confidence intervals, and critical margins regarding benefits and harms. However, the inclusion of critical margins within the concept of imprecision leads to confusion, as these elements do not necessarily reflect the statistical closeness of the parameter value to the estimate.

Bosdriesz, 202010 Evidence-based medicine: when observational studies are better than randomized controlled trials Narrative review
  • 3. RCTs are the gold standard for evaluating the intended effects of interventions due to their use of randomization, which minimizes confounding by indication. However, RCTs can have limitations, including limited generalizability, high costs, short follow-up, ethical concerns, and smaller sample sizes. When RCTs are not feasible, observational studies (e.g., cohort or case-control) are used. While observational studies may have confounding concerns, they provide more generalizability and the ability to measure naturally occurring exposure on an outcome. Ultimately, the research question should guide the study design to be considered.

Chloros, 202316 Has anything changed in evidence-based medicine? Commentary
  • 1. The evidence pyramid ranks research designs, with meta-analyses and systematic reviews at the top, followed by RCTs, cohort and case-control studies, case series, case reports, and expert opinion at the bottom. While fine-tuned periodically, the top of the pyramid remains consistent, but the lower levels may vary, sometimes including laboratory and animal research. The pyramid separates evidence into “robust” (levels 1 and 2) and “less robust” categories for prioritizing the best evidence in research and clinical practice. Many view the pyramid as a hierarchy. However, not all research questions can be addressed by RCTs, which primarily aim to reduce bias and confounding.

  • 2. The traditional evidence pyramid, based solely on methodology, is oversimplified and potentially misleading. A poorly conducted RCT can yield unreliable results, while a well-executed observational study may produce strong evidence.

  • 3. Urgent public health needs (e.g., COVID-19 pandemic) sometimes necessitate considering multiple forms of evidence, such as robust observational studies, in addition to RCTs.

Cuello-Garcia, 202217 GRADE guidance 24: optimizing the integration of randomized and non-randomized studies of interventions in evidence syntheses and health guidelines Commentary
  • 1. The authors recommend using the GRADE methodology to assess the certainty of evidence from RCTs for each outcome individually. If high certainty is achieved, further evaluation of non-randomized studies of interventions (NRSI) is unnecessary. However, if RCT evidence is of low or very low certainty, NRSIs can be considered to enhance overall certainty. In cases where RCT evidence is moderate, NRSIs may be integrated to address issues like indirectness. The authors caution that while large NRSIs with precise estimates may be appealing, they should be carefully evaluated for bias using appropriate tools (e.g., ROBINS-I).

Djulbegovic, 202218 High quality (certainty) evidence changes less often than low-quality evidence, but the magnitude of effect size does not systematically differ between studies with low versus high-quality evidence Meta-epidemiological study
  • 1. Within a traditional evidence hierarchy, the authors found lower-quality evidence changes more often than higher-quality evidence, suggesting that higher quality evidence is more valid and reliable. However, the magnitude of treatment effects did not differ significantly between low- and high-quality evidence. Therefore, the GRADE approach may not effectively differentiate the impact of quality of evidence on treatment effect sizes. The authors suggest current appraisal methods of evidence may need reassessing to capture quality of evidence as intended. If both low- and high-quality evidence studies produce similar effect sizes, it challenges the assumption that higher quality evidence is always more valid or applicable for informing clinical decisions.

  • 2. As above.

Djulbegovic, 202419 High certainty evidence is stable and trustworthy, whereas evidence of moderate or lower certainty may be equally prone to being unstable Meta-epidemiological study
  • 1. Found that high-quality evidence, free from limitations, rarely changes with new data, while evidence with even one limitation (moderate quality) is more likely to change. Moderate-quality evidence often has a single limitation and should be interpreted cautiously when issuing strong recommendations. Lower quality evidence (moderate, low, or very low) exhibited more frequent changes, larger deviations, and greater uncertainty. Limitations, especially imprecision and indirectness, significantly impacted changes in effect estimates and their significance.

Galbraith, 201728 A real-world approach to evidence-based medicine in general practice: a competency framework derived from a systematic review and Delphi process Systematic review and Delphi process
  • 1. Propose a competency framework to bridge real-world practice and EBP. They propose viewing evidence in terms of what is most appropriate, suggesting that relying solely on the evidence hierarchy to guide a search for ‘real-world’ evidence is not best practice.

  • 3. Emphasize the importance of clinician expertise, arguing that viewing evidence alone is insufficient and suggesting that EBP is rigid in its application.

Hohmann, 201829 Research pearls: how do we establish the level of evidence? Commentary
  • 1. Acknowledge a traditional evidence hierarchy in research as categorized into five levels (I-V), where Level I represents the highest quality, and Level V the lowest. They state these levels are to help classify studies based on design and rigour, with higher levels often offering more reliable results for clinical practice.

  • 3. They suggest that the level of evidence assigned to studies in the hierarchy reflects study design rather than quality, and even a poorly executed ‘level 1’ trial can be downgraded if it lacks power or proper design. Level of evidence is just one measure of quality, but relying on this alone does not reflect the definition of EBP.

Mayoral, 202130 Decision-making in medicine: a Kuhnian approach Commentary
  • 2. Criticizes the traditional thought process of using an evidence pyramid to guide evidence consideration, suggesting it imposes constraints on clinical decision-making that can contribute to a lack of holistic care for individual patients with their own contexts and circumstances.

Mercuri, 201831 The evolution of GRADE (part 1): is there a theoretical and/or empirical basis for the GRADE framework? Narrative review
  • 2. Critiques the GRADE framework for lacking theoretical and empirical justification in its criteria for assessing evidence quality and making clinical recommendations. They state that GRADE relies on a modified hierarchy of evidence, which itself does not have a solid theoretical foundation, suggesting the EBP hierarchy is based more on belief than scientific proof. These hierarchical limitations are emphasized in the prioritization of RCTs over other well-designed studies. They suggest that empirical studies have shown that the superiority of RCTs in controlling bias is inconclusive, with some non-randomized studies yielding similar effect estimates when well-designed. The article suggests that without addressing these foundational issues, GRADE may not effectively improve upon the limitations of the EBP evidence hierarchy, and could suffer from the same limitations in guiding clinical practice.

Mercuri, 201832 The evolution of GRADE (part 2): still searching for a theoretical and/or empirical basis for the GRADE framework Narrative review
  • 2. Highlights research critiquing the GRADE framework for adopting Bradford Hill’s criteria (implicitly and explicitly) without fully integrating them into a coherent theoretical basis and not clearly articulating the connection. They also note that GRADE lacks explicit consideration of biological plausibility and mechanisms, which are downplayed in EBP hierarchies but are important for understanding causation. They critique EBP’s reliance on evidence hierarchies, particularly the emphasis on randomization. Proponents of EBP argue that randomization balances study groups, leading to more reliable effect estimates. However, literature is presented that questions the philosophical and empirical basis of randomization’s superiority. Even with balanced groups, external validity and individual patient applicability remain problematic, as generalizability and patient-specific outcomes are not always addressed effectively.

Mercuri, 201811 The evolution of GRADE (part 3): a framework built on science or faith? Narrative review
  • 2. States that GRADE categorizes studies into RCTs and observational studies, with the latter consistently rated as lower-quality evidence, without clear reasoning for why these types of studies are grouped together or rated similarly. They suggest the decision to classify observational studies as starting at “low certainty” was made based on internal discussion rather than empirical evidence. They suggest that clarity is lacking on why certain criteria for assessing evidence quality and making recommendations were selected and others excluded. They suggest changes to the framework have been introduced based on consensus rather than scientific evidence, and the lack of operational definitions for key criteria leaves too much room for user-judgement, raising concerns about the validity of the recommendations produced. They conclude that GRADE’s foundation is weak, as it lacks the necessary theoretical or empirical support to justify its approach. They argue that until the framework is substantiated by scientific evidence, the validity of its recommendations remains uncertain, and reliance on it should be cautious.

Mercuri, 201812 What confidence should we have in GRADE? Commentary
  • 1. Summarize that within GRADE and the evidence hierarchy, RCTs receive a “high” grade, signifying high confidence in the effect estimate, whereas observational studies are graded “low” and other sources (e.g., lab studies, case reports) “very low.” Criteria are provided to adjust these grades, either increasing or decreasing confidence based on factors such as study limitations, effect size, or bias.

  • 2. They criticize GRADE for suggesting that certain types of evidence, like observational studies or expert opinion, are discarded when stronger evidence (e.g., RCTs) is available. They suggest it also lacks clarity on how to integrate evidence from diverse sources (e.g., RCTs with observational or basic science findings), and that the hierarchy implies that higher-quality evidence, such as RCTs, automatically outweighs lower-quality studies, which may undermine the value of the broader evidence base.

Mugerauer, 202013 Professional judgement in clinical practice (part 3): a better alternative to strong evidence-based medicine Narrative review
  • 3. Suggests a major issue with EBP is its unrealistic focus on certainty, leading to the mistaken belief that if clinicians make different decisions, it means they do not know what they are doing. This results in a push for rigid, standardized guidelines based on evidence ‘level’ or quality, with RCTs seen as the most “objective.” However, they argue that skilled practitioners recognize that uncertainty is normal, especially when treating unique patients with multiple conditions in complex and varying environments. They suggest that clinician expertise is therefore not only important, but necessary when considering EBP and evidence hierarchies.

Noman, 202420 Simplifying the concept of level of evidence in lay language for all aspects of learners: in brief review Commentary
  • 1. The authors conceptualized the evidence hierarchy as divided into filtered and unfiltered categories, reflecting different levels of synthesis and evaluation. Filtered information, positioned at the top of the pyramid, includes systematic reviews, meta-analyses, and critically appraised topics and articles. These forms of evidence undergo rigorous assessment and synthesis, providing highly reliable information that can guide clinical practice without further scrutiny from practitioners. Unfiltered information, located in the middle tiers, comprises primary research studies, such as RCTs and observational studies, which, while potentially more current and specific, require practitioners to critically evaluate their quality and relevance before application.

  • 3. While filtered evidence is easier to apply due to its pre-evaluated nature, it may not always be available or applicable to specific clinical scenarios, necessitating a reliance on unfiltered sources. Additionally, the base of the pyramid, which includes expert opinion and background information, though not considered high-level evidence, still plays a role in forming the foundation of clinical knowledge, especially in areas where high-level evidence is lacking. Practitioners are encouraged to carefully select and apply the best available evidence, balancing the reliability of filtered sources with the immediacy and specificity of unfiltered ones, and to remain mindful of the context and limitations inherent in lower levels of evidence.

Ritson, 202322 Bridging the gap: evidence-based practice guidelines for sports nutritionists Narrative review
  • 1. Suggests the hierarchy is based on susceptibility to bias from study design. For intervention-focused questions, systematic reviews and meta-analyses of RCTs (Level 1) and evidence syntheses (Level 2) are preferred due to their rigorous appraisal process. Evidence hierarchies provide practitioners with insight on the degree of certainty they can have when providing recommendations. The authors further suggested that practitioners prioritize evidence from the top of these hierarchies as a result, but should not disregard evidence at the bottom of hierarchies when making recommendations, particularly when evidence at the top has gaps.

  • 3. Despite the defined hierarchy informing levels of bias and trustworthiness, in applied sports and exercise nutrition, this hierarchy is not always definitive. Such high-level evidence can take years to publish and may not fully address a practitioner’s specific PICO question, leading them to rely on lower-tier evidence. While RCTs offer strong internal validity, their high control levels can reduce practical relevance. Although the top of the hierarchy should be prioritized, the full hierarchy should still be considered.

Semrau, 202323 Common misunderstandings of evidence-based medicine Commentary
  • 1. The evidence pyramid is appropriate for highlighting the highest quality evidence for doubtful, mechanistically unexplained effects requiring a control group, since in these cases a control-group baseline is needed to estimate the treatment effect.

    However, they feel the pyramid’s structure is misleading when assessing parameters associated with specific interventions, where RCTs may not provide the highest quality evidence in cases when a comparator group does not impact the quality of evidence.

  • 2. When available, they feel that different study designs (or levels of evidence) should be assessed. For example, RCTs can demonstrate a probable cause-effect relationship or indicate a treatment’s practical usefulness, but significant results do not guarantee a true effect. Positive RCTs cannot definitively prove a therapy’s benefit, and negative results cannot disprove a known cause-effect relationship.

  • 3. They suggested reconsidering traditional evidence pyramids, advising that the most suitable evidence should be determined based on the specific parameter being evaluated.

Sekhon, 202424 Synthesis of guidance available for assessing methodological quality and grading of evidence from qualitative research to inform clinical recommendations: a systematic literature review Systematic review
  • 1. Identified two approaches for summarizing the quality of qualitative research for clinical guidance: a qualitative evidence hierarchy and a research pyramid. Both rank qualitative systematic reviews and meta-syntheses at the top, similar to quantitative research, and each suggests that the top of the hierarchy is reserved for studies providing the most ‘evidence’. However, qualitative research focuses on experiences, barriers, facilitators, and the feasibility of implementation, which are not easily ranked in the same hierarchical way as quantitative evidence.

Szajewska, 201821 Evidence-based medicine and clinical research: both are needed, neither is perfect Commentary
  • 3. Acknowledges the appropriateness of an evidence hierarchy and that systematic reviews are the strongest form of evidence. However, they emphasize that there are contextual factors to consider. Depending on the clinical question, observational studies may be more applicable. A framework is proposed to account for this.

Vere, 201926 Evidence-based medicine as science Commentary
  • 2. Critiques the notion that EBP, through the evidence hierarchy, fits neatly into traditional scientific methods such as inductivism and falsificationism, which focus on theory confirmation or falsification through observation. EBP prioritizes empirical evidence through hierarchies (RCTs, meta-analyses) but this does not align well with traditional scientific theories. Hierarchies rank evidence quality but do not necessarily test or advance scientific theories directly.

Wieten, 201827 Expertise in evidence-based medicine: a tale of three models Commentary
  • 1. Summarizes three initial models of EBP, one being the evidence pyramid. They note how the first pyramid, comprising four layers, is used to inform the GRADE framework.

  • 2. Explains clinician expertise is an important consideration when evaluating evidence for use in practice. They argue that clinician expertise is incorrectly considered a form of evidence in many pyramids. Instead, expertise should be considered as a process for appraising and integrating various forms of evidence.

Wallace, 202225 Hierarchy of evidence within the medical literature Commentary
  • 1. Defines the hierarchy similar to others (observational studies up through to RCTs, systematic reviews, and meta-analyses). The authors believe the hierarchy should be applied when performing literature searches, particularly when clinicians are pressed for time. However, the overall quality of evidence of each study design is still dependent on study strengths and limitations identified in the critical appraisal process. An abundance of only lower-level observational studies for a particular clinical question should also inform the development of higher-level studies on the same topic.

Geoscience St. John, 201736 The strength of evidence pyramid: one approach for characterizing the strength of evidence of geoscience education research (GER) community claims Commentary
  • 1. Proposes a modified 5-level evidence pyramid in geoscience education that places “Practitioner wisdom/expert opinion” at its foundation, recognizing educators’ unique insights into what and how to teach. The pyramid distinguishes between qualitative and quantitative studies, separating case studies and cohort studies, while emphasizing the role of clinical expertise in assessing quality. At the top are meta-analyses and systematic reviews, which are less common as they summarize primary research. This model is similar to the EBP hierarchy, highlighting the need for context-sensitive decision-making using hierarchical frameworks.

Public health Irving, 201633 A critical review of grading systems: implications for public health policy Narrative review
  • 2. While RCTs are considered the best method to minimize bias and are frequently regarded as the ideal research design within evidence hierarchies, this does not mean RCTs are appropriate for all types of questions. The authors suggest that some grading systems often overlook issues like flawed randomization or unequal group sizes in RCTs. Additionally, RCTs may not always be appropriate or ethical for all research areas. Observational studies, especially large population-based studies, may offer applicable findings and include more diverse participants, enhancing their external validity.

Jervelund, 2022 35 Evidence in public health: an integrated, multi-disciplinary concept Commentary
  • 2. The authors feel that the typical hierarchy, which ranks study designs based on methodological rigour and risk of bias, with systematic reviews and meta-analyses at the top and expert opinion at the bottom, is less applicable in a field with large epistemological diversity (e.g., context variability) such as public health.

  • 3. They advocate for an evidence typology that evaluates the quality of evidence based on the appropriateness of study designs to specific research questions, rather than following a rigid hierarchy. This approach holds that quantitative methods are best for studying causal relationships, while qualitative methods are suited to understanding social contexts, and that mixed methods can be used to optimize public health outcomes.

Parkhurst, 2016 34 What constitutes “good” evidence for public health and social policy-making? From hierarchies to appropriateness Commentary
  • 2. The authors acknowledge that “good evidence” for clinical practice often relies on evidence hierarchies, with RCTs typically viewed as the gold standard due to their scientific rigour. However, they suggest there is growing recognition that these hierarchies may not always provide the best guidance for policy-making. Evidence hierarchies prioritize internal validity, but policy decisions require broader considerations, such as social, political, and economic factors, which are often not suitably investigated using RCTs.

  • 3. The authors argue for a framework based on the appropriateness of evidence, which considers relevance to policy concerns, applicability to local contexts, and alignment with public health goals.

COVID-19 = coronavirus disease of 2019; EBP = evidence-based practice; GRADE = grading of recommendations assessment, development, and evaluation; NRSI = non-randomized studies of interventions; PICO = patient, intervention, comparison, outcome; RCT = randomized controlled trial; ROBINS-I = risk of bias in nonrandomized studies – of interventions.

a Review categories: (1) contemporary understandings of the evidence pyramid, including how it is used and understood; (2) critiques of the evidence pyramid in relation to EBP; and (3) contextual considerations when applying the evidence pyramid to clinical decision-making.

Evolution and contemporary iterations of the evidence pyramid in EBP

Three initial models of EBP were identified, each providing distinct perspectives on the role of clinical expertise in evidence hierarchies.27 The first pyramid, established by the Evidence-Based Medicine Working Group in 1992, formed the foundation of the now-familiar symbol of the evidence pyramid (Figure 2).2,27 The pyramid includes four layers and is used to inform the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) framework for systematically rating certainty of evidence. In much of the reviewed literature, the evidence pyramid is viewed as either identical to, or closely aligned with, this original model, which places systematic reviews and meta-analyses at the highest level, followed by individual RCTs, then observational studies, and finally expert opinion.9–13,15–19,21–23,25–27,29–32,35 The second model, presented by Sackett and colleagues, uses a Venn diagram to highlight the convergence of patient values and expectations, best external evidence, and individual clinical expertise at the core of EBP (Figure 3).3,27 The third model, proposed by Haynes and colleagues in 2002, introduces a shifted Venn diagram with components of research evidence, clinical state and circumstances, and patients’ preferences and actions, all converging with clinical expertise at the core (Figure 4).8,27 These models collectively contribute to the evolving understanding of how clinical expertise is considered and valued within the landscape of evidence in EBP.

Figure 2.


Initial evidence pyramid conceptualizing the strength of various forms of evidence. Information provided by the Evidence-Based Medicine Working Group,2 and figure adapted from Wieten et al.27

Figure 3.


Haynes and colleagues’ initial model for evidence-based clinical decision making. Reproduced and adapted with permission of the American College of Physicians, from Haynes et al.37; permission conveyed through Copyright Clearance Center, Inc. (EBM = evidence-based medicine).

Figure 4.


Haynes and colleagues’ updated EBP model to conceptualize the optimal integration of various considerations into clinical decision-making.8 Reproduced and adapted with permission of the American College of Physicians, from Haynes et al.8; permission conveyed through Copyright Clearance Center, Inc.

More recent literature from the field of geoscience education presents a 5-level modified evidence pyramid (Figure 5), with practitioner wisdom/expert opinion as its foundational level.36 The pyramid’s foundation rests on “what we know”, recognizing that practitioners are in a unique position to share pedagogic content knowledge (e.g., in geoscience education, knowing what to teach and how to teach it).36 As the pyramid ascends, it proposes separating qualitative and quantitative studies into case studies and cohort studies, emphasizing the importance of clinical expertise in assessing study robustness.36 At the pinnacle of this pyramid are meta-analyses and systematic reviews, which are the least common designs because they are collations or summations of primary research studies.36 The pyramid is similar to the EBP evidence hierarchy, indicating the need for context-dependent, nuanced understandings and decision-making procedures using hierarchical instruments or approaches (e.g., GRADE).36 Following the COVID-19 pandemic, Aldous et al. proposed a ‘totality of evidence’ wheel (see Table 1).14 Presented in a circular format, this system purposefully avoids a hierarchical framework, guiding clinicians to consider all sources of information.14 Aldous et al. argue that this approach may be useful in emergent situations, enabling quicker, informed decision-making from evidence that the traditional evidence hierarchy might otherwise neglect.14

Figure 5.


Proposed Strength of Evidence Pyramid for evaluating the strength of evidence in geoscience education research, reproduced and adapted from St. John and McNeal.36 Originally published by the National Association of Geoscience Teachers (NAGT) and licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). Changes were made to the original.

Evidence pyramid considerations and critiques

Several articles supported the use of a traditional evidence pyramid in clinical practice, while noting considerations for its application.20,22,25 Noman et al. viewed the traditional evidence pyramid as having filtered (e.g., systematic reviews and meta-analyses) and unfiltered (e.g., RCTs and observational studies) categories, and encouraged use of both sources of evidence while acknowledging that filtered evidence has the higher reliability of the two.20 Moreover, filtered evidence is easier for clinicians to apply because of its pre-evaluated nature; however, it may not always be available or relevant to specific clinical situations, thereby requiring reliance on primary, unfiltered sources.20 This notion is also supported by others.22,25

The role of clinician expertise must also be considered.27 In evidence hierarchies, clinician expertise (i.e., expert opinion), on its own, is ranked as the lowest internal form of evidence. However, Wieten27 argues that EBP models should not consider clinician expertise an internal form of evidence at all. Instead, they argue that clinician expertise should be thought of as a process for incorporating and appraising all factors that go into a clinical decision, such as available evidence, patient preferences, and clinical circumstances,27 in line with the Haynes et al.8 model (see Figure 4). Noman et al. further note that when filtered information (e.g., systematic reviews, meta-analyses) is unavailable, practitioners must critically assess the quality and relevance of unfiltered sources before applying them in practice, reiterating the importance of clinical expertise in this context.20

Several articles also challenged the traditional evidence hierarchy,13–16,26,28–30 particularly in clinical scenarios where study design feasibility is a challenge (e.g., ethical or cost considerations in using RCTs to examine questions involving risk or prognosis).29 A more nuanced approach to the hierarchy is proposed in such circumstances, where the research question (e.g., therapeutic, diagnostic, prognostic) and study type (e.g., RCT, cohort, cross-sectional study) need to be established prior to assigning a ‘level’ of evidence.29 A 2020 narrative review questioned the traditional placement of RCTs atop other primary research designs in the evidence hierarchy, arguing that the choice of study design should be driven by its suitability for addressing the specific research question.10 In a 2018 commentary, Szajewska21 advocated for a pragmatic acknowledgement of the appropriateness of evidence hierarchies and the significance of systematic reviews as the “strongest” form of evidence, while also encouraging a flexible approach to EBP that adapts to the diverse demands of clinical practice. Thus, a dynamic and context-specific approach to applying evidence in practice is needed,10,21 including one that values professional judgement, acknowledges uncertainty, and considers individual patient complexities.13,29,30

Evidence hierarchies and the GRADE framework

Ten articles discussed the GRADE framework and how this systematic approach to evidence rating relates to evidence hierarchies in EBP.9,11,12,1719,23,27,31,32 In brief, the GRADE framework is used to assess the certainty of evidence and strength of recommendations regarding patient-important outcomes for clinical decision-making, where studies are ‘pre-ranked’ based on study design, following the traditional evidence pyramid (e.g., RCTs are ranked as high certainty, observational studies are low certainty). The ranking can then be adjusted up or down based on several factors (i.e., higher if there is a large magnitude of effect, a dose-response gradient, or if all plausible residual confounding would reduce a demonstrated effect or suggest a spurious effect if no effect was observed; lower if there is serious risk of bias, inconsistency, indirectness, imprecision, or publication bias).38
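The two-step logic described above, pre-ranking a body of evidence by study design and then adjusting the rating up or down, can be illustrated with a short sketch. This is a deliberately simplified, hypothetical model for readers, not an official GRADE tool; the function name, level scale, and design mapping here are our own illustrative construction based on the description above.

```python
# Illustrative sketch only: a simplified model of GRADE's two-step rating
# logic (pre-rank by study design, then adjust the rating up or down).
# The function and data structures are hypothetical, not an official tool.

LEVELS = ["very low", "low", "moderate", "high"]

# Step 1: bodies of evidence are 'pre-ranked' by study design,
# following the traditional evidence pyramid.
INITIAL = {"rct": "high", "observational": "low"}

def grade_certainty(design, downgrades=0, upgrades=0):
    """Return an overall certainty rating for a body of evidence.

    downgrades: levels lost (e.g., serious risk of bias, inconsistency,
                indirectness, imprecision, or publication bias).
    upgrades:   levels gained (e.g., large magnitude of effect,
                dose-response gradient, or residual confounding that
                would work against the observed effect).
    """
    idx = LEVELS.index(INITIAL[design])
    # Step 2: adjust within the bounds of the four-level scale.
    idx = max(0, min(len(LEVELS) - 1, idx - downgrades + upgrades))
    return LEVELS[idx]

# RCT evidence with serious risk of bias drops one level:
print(grade_certainty("rct", downgrades=1))          # moderate
# Observational evidence with a large effect can be rated up:
print(grade_certainty("observational", upgrades=1))  # moderate
```

The sketch shows only the direction of the adjustments; in practice, GRADE judgements about each domain are qualitative and made transparently by the review team rather than computed mechanically.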

Authors in five commentaries11,12,23,31,32 expressed concerns over the GRADE framework with regard to its initial categorization of RCTs as “high” grade evidence and observational studies as “low”. However, observational studies in GRADE can initially be categorized as “high” grade evidence if these constitute the best study design(s) for a particular clinical question (e.g., cross-sectional studies to address a clinical question about prevalence, or cohort studies to address a clinical question about prognosis). The certainty of evidence can then be downgraded if there is, for example, serious risk of bias.38

Concerns regarding GRADE and its foundation on an evidence hierarchy were raised by Mercuri and Gafni11,31,32 in a three-part narrative review. In part 1, the authors suggested that the evidence hierarchy on which the GRADE framework is based lacks theoretical and empirical justification for assessing certainty of evidence and, in turn, making clinical recommendations.31 They argued that current literature suggests randomization contributes minimal differences to estimated effects when compared with well-designed studies lower in the hierarchy, and that the superiority of RCTs over these studies is inconclusive.31 In part 2, they questioned whether randomization actually balances all important factors between groups, and argued that even if it did, limitations of external validity and applicability to individual patients (i.e., generalizability) would become more prevalent and problematic.32 However, this is less of a concern in pragmatically conducted RCTs that investigate the effects of “real-world” interventions on “real-world” patients.39

In part 2, Mercuri and Gafni also cite literature suggesting that GRADE lacks explicit consideration of biological plausibility and mechanisms; these are downplayed in evidence hierarchies because animal-model and basic science research have limited direct value in clinical decision-making, yet they remain important for generating hypotheses about causation.32 In part 3, they questioned the separation of RCTs and observational studies in hierarchies, arguing that the GRADE framework does not provide a clear rationale for how observational studies are categorized or why these are grouped together and rated similarly.11 The authors discussed how changes made to the framework throughout its development were introduced based on consensus methods, and suggested that the assessments and recommendations produced leave too much room for user judgement.11 However, GRADE was designed to provide a systematic framework for assessing certainty of evidence that encourages transparency and an explicit accounting of the judgements made, making it a more valid and reliable method than the alternative.38

Evidence hierarchies in public health

Critiques of evidence hierarchies are also offered in the public health literature,33 including the difficulty in some cases of applying evidence hierarchies to guide policy decisions.34 In a 2016 article, Parkhurst and Abeysinghe34 argued that while evidence hierarchies prioritize RCTs and experimental designs as “high-quality” evidence, the complexity of policy decisions in public health, influenced by economic, social, and political factors, at times requires a broader consideration of evidence, a sentiment further supported by Jervelund and Villadsen.35 Accordingly, literature in this field proposes an “appropriateness” framework that enables consideration of the multifaceted nature of policy concerns and values, encourages reflection on the goals of evidence utilization,34,35 and promotes alignment of study design with the specific research question.35 In emergent public health situations (e.g., war/conflict, natural disasters, or global pandemics), there may also be a need to consider additional forms of evidence, beyond strictly RCTs, to inform rapid decision-making.16

Discussion

Summary of findings

Preliminary findings from our review of the literature exploring the evidence pyramid, or evidence hierarchy, in EBP centred on three common themes: (1) use of the evidence pyramid as a guide, not a rigid tool; (2) importance of the clinical question; and (3) necessity of clinical expertise to integrate research findings into clinical decision-making. We discuss each of these themes in further detail below.

(1) Use of the evidence pyramid as a guide, not a rigid tool

The evidence pyramid is viewed as a guideline to help clinicians determine which types of evidence, if conducted soundly, are more likely to provide valid, reliable, and trustworthy answers to their clinical questions.21 Updated evidence pyramids have been developed to reflect how the GRADE framework, a tool based on the evidence hierarchy and designed to systematically rank a body of evidence, considers factors in addition to study design. These factors include risk of bias across studies, inconsistency of results, indirectness of the evidence to the clinical question, imprecision and magnitude of the effect estimate, whether there is a dose-response gradient, and the likelihood of publication bias. While GRADE overcomes certain limitations of the evidence pyramid and can be a valuable tool for clinicians, some authors have expressed concern over the traditional evidence hierarchy inherent in its application.11,23,26,31,32 For example, Murad et al.40 developed an evidence pyramid that depicts layers of evidence as waves (to reflect uncertainty) instead of rigid, flat lines. There is also agreement among authors in the contemporary literature that the application of evidence hierarchies in the clinical context depends on a patient’s clinical state and circumstances and should align with the clinical question at hand (e.g., therapy, etiology, diagnosis),13–16,26,28–30,41 rather than being applied rigidly and algorithmically without discussion.13,28,35

(2) Importance of the clinical question

Recent commentaries on the evidence hierarchy have emphasized the importance of tailoring the selection of evidence to the clinical question.10,16,22,24,29,34–36 This requires an understanding that observational studies, for example, may be more suitable when investigating the unintended effects or harm of an intervention,10,21,23 or when addressing questions around etiology or prognosis.21 A recent systematic review examined the course and prognostic factors associated with whiplash injuries in cohort and case-control studies, helping to inform chiropractors on patient management.42

Although clinical research relies heavily on quantitative research methods,39,43 the broader methodological literature should also be considered, including qualitative and mixed methods research. These methodologies have traditionally been excluded from the evidence pyramid, even though certain clinical questions may be best answered using these approaches.24 For instance, insights into patient behaviour or experiences, and a more in-depth understanding of clinical outcomes, may be best obtained through qualitative or mixed methods studies grounded in established theory and thoughtful, relevant questions.44–47 An alternative to the evidence pyramid, proposed in 2005 by Miller and Jones-Harris44, is an “evidence pathways model”, which allows clinicians to consider high-quality quantitative and qualitative research via different pathways according to the type of clinical question (Figure 6).

Figure 6.


Evidence pathways model proposed by Miller and Jones-Harris44, illustrating how different forms of evidence are best suited to answer different clinical questions. Study designs most appropriate for addressing qualitative research questions are highlighted in the bottom three rows. Figure is reprinted and adapted from Miller and Jones-Harris44 with permission from Elsevier.

(3) Necessity of clinical expertise to integrate research findings into clinical decision-making

Clinical expertise involves understanding the nuances of a clinical scenario and weighing different factors (e.g., best available evidence, patient preferences, and clinical circumstances) to make informed decisions that optimize patient care.13,27 Multiple authors13,20,28,29,48,49 further suggest that clinical expertise is particularly important in scenarios where evidence is limited or lacking. It is from this experience-informed position that clinicians can make appropriate decisions, guided by a contextual understanding of the patient’s circumstances and the evidence, to determine the most appropriate course of action.13 In essence, it is the clinician, with their inherent expertise, who seeks out the best available evidence and critically appraises it in terms of its validity, importance, and applicability to managing an individual patient within the context of that patient’s unique values and clinical circumstances.

Role of chiropractic professional stakeholders

Chiropractors in the field who lack training in research methodologies will need assistance in applying the best available evidence to patient care if EBP is to be conducted successfully and appropriately in clinical practice. There is therefore an opportunity for professional organizations to provide support and leadership in helping clinicians learn how to use an EBP framework as it is intended to be used. In Canada, several chiropractic organizations could work cooperatively not only to fund research but also to invest in knowledge translation (KT) of research findings into clinical practice. These organizations include the Canadian Chiropractic Research Foundation, Canadian Chiropractic Guideline Initiative, Canadian Memorial Chiropractic College, Canadian Chiropractic Association, the Département de chiropratique at the Université du Québec à Trois-Rivières, and provincial advocacy associations such as the Ontario Chiropractic Association (OCA). We discuss KT in chiropractic in more detail in a subsequent paper of this JCCA special edition.

Limitations

Our review has several limitations. First, we may not have captured all relevant papers on the evolution of the evidence pyramid, or evidence hierarchy, in EBP. Second, restricting our searches to three databases and to English-only articles published between January 1, 2016 and July 1, 2024 may have further excluded potentially relevant papers. Third, we did not hand-search the references of included articles, and only one reviewer performed article screening and data extraction. Fourth, we did not assess the included articles for risk of bias. A strength of our review was its diverse working group of clinicians, educators, researchers, and OCA staff members. Our findings are nevertheless exploratory in nature. As such, a systematic literature review in this topic area may be warranted. Future research in the form of interviews and/or surveys could also be conducted to seek practitioner and institutional perspectives on the evidence pyramid and its use in clinical practice.

Conclusions

In line with the model by Haynes et al.8, preliminary findings of our review suggest that the value placed on clinical expertise, as the central pillar of EBP, reinforces that care is delivered in collaboration with patients and their unique values and clinical circumstances, supported by the best available research evidence. Clinical questions, including those that are qualitative in nature, must be answered using the most appropriate research methodologies (i.e., quantitative, qualitative, or mixed methods). For clinicians and researchers, the manner in which questions are framed, along with the language used, is essential for structuring and seeking research that helps to inform clinical practice. The principles and goals initially developed in EBP continue to be informative and ought to be applied in a dynamic and contextualized manner, as intended.2

Acknowledgments

Acknowledgements for this paper, and for the entire special edition, are listed and detailed within the Preface paper.

Footnotes

Conflicts of Interest:

This research was funded by the OCA. The lead authors received a per diem for their work on this project. The authors declare no other conflicts of interest, including no disclaimers, competing interests, or other sources of support or funding to report in the preparation of this manuscript.

References

  • 1.Djulbegovic B, Guyatt GH. Progress in evidence-based medicine: a quarter century on. Lancet. 2017;390:415–423. doi: 10.1016/S0140-6736(16)31592-6. [DOI] [PubMed] [Google Scholar]
  • 2.Guyatt GH, Cairns JA, Churchill DN, Cook DJ, Haynes B, Hirsh J, et al. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA. 1992;268(17):2420–2425. doi: 10.1001/jama.1992.03490170092032. [DOI] [PubMed] [Google Scholar]
  • 3.Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: What it is and what it isn’t. BMJ. 1996;312(7023) doi: 10.1136/bmj.312.7023.71. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Sackett DL. Evidence-based medicine. Semin Perinatol. 1997;21(1):3–5. doi: 10.1016/s0146-0005(97)80013-4. [DOI] [PubMed] [Google Scholar]
  • 5.Ioannidis JPA. Evidence-based medicine has been hijacked: a report to David Sackett. J Clin Epidemiol. 2016;73:82–86. doi: 10.1016/j.jclinepi.2016.02.012. [DOI] [PubMed] [Google Scholar]
  • 6.Green BN, Johnson CD, Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. J Chiropr Med. 2006;5(3):101–117. doi: 10.1016/S0899-3467(07)60142-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Guyatt GH, Haynes RB, Jaeschke RZ, Cook DJ, Green L, Naylor CD, et al. Users’ Guides to the Medical Literature: XXV. Evidence-based medicine: principles for applying the Users’ Guides to patient care. Evidence-Based Medicine Working Group. JAMA. 2000 Sep;284(10):1290–1296. doi: 10.1001/jama.284.10.1290. [DOI] [PubMed] [Google Scholar]
  • 8.Haynes RB, Devereaux PJ, Guyatt GH. Clinical expertise in the era of evidence-based medicine and patient choice. ACP J Club. 2002;136(2):A11–4. [PubMed] [Google Scholar]
  • 9.Anttila S, Persson J, Vareman N, Sahlin NE. Conclusiveness resolves the conflict between quality of evidence and imprecision in GRADE. J Clin Epidemiol. 2016;75:1–5. doi: 10.1016/j.jclinepi.2016.03.019. [DOI] [PubMed] [Google Scholar]
  • 10.Bosdriesz JR, Stel VS, van Diepen M, Meuleman Y, Dekker FW, Zoccali C, et al. Evidence-based medicine—When observational studies are better than randomized controlled trials. Nephrol. 2020;25:737–743. doi: 10.1111/nep.13742. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Mercuri M, Gafni A. The evolution of GRADE (part 3): A framework built on science or faith? J Eval Clin Pract. 2018;24(5):1223–1231. doi: 10.1111/jep.13016. [DOI] [PubMed] [Google Scholar]
  • 12.Mercuri M, Baigrie BS. What confidence should we have in GRADE? J Eval Clin Pract. 2018;24(5):1240–1246. doi: 10.1111/jep.12993. [DOI] [PubMed] [Google Scholar]
  • 13.Mugerauer R. Professional judgement in clinical practice (part 3): A better alternative to strong evidence-based medicine. J Eval Clin Pract. 2021;27(3):612–623. doi: 10.1111/jep.13512. [DOI] [PubMed] [Google Scholar]
  • 14.Aldous C, Dancis BM, Dancis J, Oldfield PR. Wheel Replacing Pyramid: Better Paradigm Representing Totality of Evidence-Based Medicine. Ann Glob Heal. 2024;90(1):17. doi: 10.5334/aogh.4341. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Antoniou GA, Bastos Gonçalves F, Björck M, Chakfé N, Coscas R, Dias NV, et al. Editor’s Choice – European Society for Vascular Surgery Clinical Practice Guideline Development Scheme: An Overview of Evidence Quality Assessment Methods, Evidence to Decision Frameworks, and Reporting Standards in Guideline Development. Eur J Vasc Endovasc Surg. 2022;63(6):791–799. doi: 10.1016/j.ejvs.2022.03.014. [DOI] [PubMed] [Google Scholar]
  • 16.Chloros GD, Prodromidis AD, Giannoudis PV. Has anything changed in Evidence-Based Medicine? Injury. 2023;54(Suppl 3):S20–25. doi: 10.1016/j.injury.2022.04.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Cuello-Garcia CA, Santesso N, Morgan RL, Verbeek J, Thayer K, Ansari MT, et al. GRADE guidance 24 optimizing the integration of randomized and nonrandomized studies of interventions in evidence syntheses and health guidelines. J Clin Epidemiol. 2022;142:200–208. doi: 10.1016/j.jclinepi.2021.11.026. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Djulbegovic B, Ahmed MM, Hozo I, Koletsi D, Hemkens L, Price A, et al. High quality (certainty) evidence changes less often than low-quality evidence, but the magnitude of effect size does not systematically differ between studies with low versus high-quality evidence. J Eval Clin Pract. 2022;28(3):353–362. doi: 10.1111/jep.13657. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Djulbegovic B, Koletsi D, Hozo I, Price A, Martimbianco ALC, Riera R, et al. High certainty evidence is stable and trustworthy, whereas evidence of moderate or lower certainty may be equally prone to being unstable. J Clin Epidemiol. 2024;171:111392. doi: 10.1016/j.jclinepi.2024.111392. [DOI] [PubMed] [Google Scholar]
  • 20.Al Noman A, Sarkar O, Mita TM, Siddika K, Afrose F. Simplifying the concept of level of evidence in lay language for all aspects of learners: In brief review. Intell Pharm. 2024;2(2):270–273. [Google Scholar]
  • 21.Szajewska H. Evidence-Based Medicine and Clinical Research: Both Are Needed, Neither Is Perfect. Ann Nutr Metab. 2018;72:13–23. doi: 10.1159/000487375. [DOI] [PubMed] [Google Scholar]
  • 22.Ritson AJ, Hearris MA, Bannock LG. Bridging the gap: Evidence-based practice guidelines for sports nutritionists. Front Nutr. 2023;10:1118547. doi: 10.3389/fnut.2023.1118547. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Semrau F, Aidelsburger P, Israel CW. Common misunderstandings of evidence-based medicine. Herzschrittmacherther Elektrophysiol. 2023;34(3):232–239. doi: 10.1007/s00399-023-00957-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Sekhon M, de Thurah A, Fragoulis GE, Schoones J, Stamm TA, Vliet Vlieland TPM, et al. Synthesis of guidance available for assessing methodological quality and grading of evidence from qualitative research to inform clinical recommendations: a systematic literature review. RMD Open. 2024;10(2) doi: 10.1136/rmdopen-2023-004032. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Wallace SS, Barak G, Truong G, Parker MW. Hierarchy of Evidence Within the Medical Literature. Hosp Pediatr. 2022;12(8):745–750. doi: 10.1542/hpeds.2022-006690. [DOI] [PubMed] [Google Scholar]
  • 26.Vere J, Gibson B. Evidence-based medicine as science. J Eval Clin Pract. 2019;25(6):997–1002. doi: 10.1111/jep.13090. [DOI] [PubMed] [Google Scholar]
  • 27.Wieten S. Expertise in evidence-based medicine: A tale of three models. Philos Ethics, Humanit Med. 2018;13(1) doi: 10.1186/s13010-018-0055-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Galbraith K, Ward A, Heneghan C. A real-world approach to Evidence-Based Medicine in general practice: A competency framework derived from a systematic review and Delphi process. BMC Med Educ. 2017;17(1) doi: 10.1186/s12909-017-0916-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Hohmann E, Feldman M, Hunt TJ, Cote MP, Brand JC. Research Pearls: How Do We Establish the Level of Evidence? Arthrosc – J Arthrosc Relat Surg. 2018;34(12):3271–3277. doi: 10.1016/j.arthro.2018.10.002. [DOI] [PubMed] [Google Scholar]
  • 30.Mayoral JV. Decision-Making in Medicine: A Kuhnian Approach. Teorema: Revista Internacional de Filosofía. 2021;40(1):133–150. [Google Scholar]
  • 31.Mercuri M, Gafni A. The evolution of GRADE (part 1): Is there a theoretical and/or empirical basis for the GRADE framework? J Eval Clin Pract. 2018;24(5):1203–1210. doi: 10.1111/jep.12998. [DOI] [PubMed] [Google Scholar]
  • 32.Mercuri M, Gafni A. The evolution of GRADE (part 2): Still searching for a theoretical and/or empirical basis for the GRADE framework. J Eval Clin Pract. 2018;24(5):1211–1222. doi: 10.1111/jep.12997. [DOI] [PubMed] [Google Scholar]
  • 33.Irving M, Eramudugolla R, Cherbuin N, Anstey KJ. A Critical Review of Grading Systems: Implications for Public Health Policy. Eval Heal Prof. 2017;40(2):244–262. doi: 10.1177/0163278716645161. [DOI] [PubMed] [Google Scholar]
  • 34.Parkhurst JO, Abeysinghe S. What Constitutes “Good” Evidence for Public Health and Social Policy-making? From Hierarchies to Appropriateness. Soc Epistemol. 2016;30(5–6):665–679. [Google Scholar]
  • 35.Smith Jervelund S, Villadsen SF. Evidence in public health: An integrated, multidisciplinary concept. Scand J Public Health. 2022 Nov;50(7):1012–1017. doi: 10.1177/14034948221125341. [DOI] [PubMed] [Google Scholar]
  • 36.StJohn K, McNeal KS. The strength of evidence pyramid: One approach for characterizing the strength of evidence of geoscience education research (GER) community claims. J Geosci Educ. 2017;65:363–372. [Google Scholar]
  • 37.Haynes RB, Sacket DL, Gray JMA, Cook DJ, Guyatt GH. Transferring evidence from research into practice: 1. The role of clinical care research evidence in clinical decisions. ACP J Club. 1996;125(3):A14. [PubMed] [Google Scholar]
  • 38.Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, et al. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011;64(4):383–394. doi: 10.1016/j.jclinepi.2010.04.026. [DOI] [PubMed] [Google Scholar]
  • 39.Eklund A, Jensen I, Lohela-Karlsson M, Hagberg J, Leboeuf-Yde C, Kongsted A, et al. The Nordic maintenance care program: Effectiveness of chiropractic maintenance care versus symptom-guided treatment for recurrent and persistent low back pain—a pragmatic randomized controlled trial. PLoS One. 2018;13(9):e0203029. doi: 10.1371/journal.pone.0203029. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Murad MH, Asi N, Alsawas M, Alahdab F. New evidence pyramid. Evid Based Med. 2016;21(4):125–127. doi: 10.1136/ebmed-2016-110401. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Warnke G. Gadamer: Hermeneutics, Tradition, and Reason. Stanford: Stanford University Press; 1987. p. 220. [Google Scholar]
  • 42.Shearer HM, Carroll LJ, Côté P, Randhawa K, Southerst D, Varatharajan S, et al. The course and factors associated with recovery of whiplash-associated disorders: an updated systematic review by the Ontario protocol for traffic injury management (OPTIMa) collaboration. Eur J Physiother. 2021;23(5):279–294. [Google Scholar]
  • 43.Bolton JE. The evidence in evidence-based practice: What counts and what doesn’t count? J Manipulative Physiol Ther. 2001;24(5):362–366. doi: 10.1067/mmt.2001.115259. [DOI] [PubMed] [Google Scholar]
  • 44.Miller PJ, Jones-Harris AR. The Evidence-Based Hierarchy: Is It Time For Change? A Suggested Alternative. J Manipulative Physiol Ther. 2005;28(6):453–457. doi: 10.1016/j.jmpt.2005.06.010. [DOI] [PubMed] [Google Scholar]
  • 45.Giacomini MK. The rocky road: qualitative research as evidence. Evid Based Med. 2001;6(1):4–6. [PubMed] [Google Scholar]
  • 46.Emary PC, Stuber KJ, Mbuagbaw L, Oremus M, Nolet PS, Nash JV, et al. Risk of bias in chiropractic mixed methods research: a secondary analysis of a meta-epidemiological review. J Can Chiropr Assoc. 2022;66(1):7–20. [PMC free article] [PubMed] [Google Scholar]
  • 47.Emary PC, Stuber KJ, Mbuagbaw L, Oremus M, Nolet PS, Nash JV, et al. Quality of reporting in chiropractic mixed methods research: a methodological review protocol. Chiropr Man Ther. 2021;29(1) doi: 10.1186/s12998-021-00395-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Roberge-Dao J, Yardley B, Menon A, Halle MC, Maman J, Ahmed S, et al. A mixed-methods approach to understanding partnership experiences and outcomes of projects from an integrated knowledge translation funding model in rehabilitation. BMC Health Serv Res. 2019;19(1) doi: 10.1186/s12913-019-4061-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Palermo TM, Davis KD, Bouhassira D, Hurley RW, Katz JD, Keefe FJ, et al. Promoting inclusion, diversity and equity in pain science. Pain Med. 2023;24(2):105–109. doi: 10.1093/pm/pnac204. [DOI] [PMC free article] [PubMed] [Google Scholar]

Articles from The Journal of the Canadian Chiropractic Association are provided here courtesy of The Canadian Chiropractic Association