BMC Medical Research Methodology. 2011 Feb 15;11:17. doi: 10.1186/1471-2288-11-17

The Australian 'FORM' approach to guideline development: The quest for the perfect system

Philipp Dahm, Benjamin Djulbegovic
PMCID: PMC3055217  PMID: 21324126

Abstract

Background

Clinical practice guidelines have been defined as systematically developed statements to assist practitioner and patient decision-making about appropriate health care for specific clinical circumstances. They play an important role in guiding evidence based clinical practice. The Australian National Health and Medical Research Council has developed and pilot-tested a new framework for guideline development, the FORM approach, the role of which has yet to be further defined.

Methods

We critically review the elements of the FORM approach and compare it to other, more established methods for rating the quality of evidence and strength of recommendations.

Results

FORM recognizes five factors that impact the strength of a recommendation: the evidence base, consistency, clinical impact, generalizability and applicability. Consideration of these elements leads to a four-tiered rating system represented by the letters A ("body of evidence can be trusted to guide practice") to D ("body of evidence is weak and recommendation must be applied with caution"). It builds on existing guideline methodologies such as those developed by the Scottish Intercollegiate Guidelines Network (SIGN), the Strength of Recommendation Taxonomy (SORT) and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) groups. FORM distinguishes itself from other systems by its strong emphasis on applicability, which is separated out as its own category and assesses the relevance of the body of evidence to the Australian healthcare system.

Conclusions

The FORM approach offers a methodologically rigorous alternative for guideline development that places particular emphasis on applicability. This feature is unique and may prompt future adoption by other guideline systems.

Commentary

Clinical practice guidelines have been defined as systematically developed statements to assist practitioner and patient decision-making about appropriate health care for specific clinical circumstances[1]. Alongside efforts to systematically draw together the entire body of evidence for a specific clinical question, as promoted by the Cochrane Collaboration[2], and the evidence-based medicine movement with its emphasis on critical appraisal[3], the guideline movement has been one of the driving forces towards a more evidence-based practice of medicine. Clinical practice guidelines also hold a prominent position in the hierarchy of evidence-based resources, as they link evidence with decision-making for a given clinical condition at the point of care[4].

Since their humble beginnings in the early nineties, the defining characteristics of clinical practice guidelines that can rightfully be considered "evidence-based" have increasingly been developed[5]. These include a formal rating of the quality of the evidence that goes beyond study design alone and considers the extent to which methodological safeguards (such as allocation concealment, blinding and handling of drop-outs) have been put in place to minimize the risk of bias. Early on, there was little consensus on how to rate the quality of evidence, and by 2002 there were 106 competing evidentiary systems available[6]. However, basing evidentiary rules on study design alone yielded unsatisfactory results when it came to guiding the actions of clinical decision-makers, thereby prompting a new generation of systems for developing clinical practice guidelines[7]. This generation of methodological frameworks is represented by those currently used by the U.S. Preventive Services Task Force (USPSTF), the National Institute for Health and Clinical Excellence (NICE), the Scottish Intercollegiate Guidelines Network (SIGN), the Strength of Recommendation Taxonomy (SORT) and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) groups. A major contribution of these systems has been the recognition that factors other than the quality of evidence alone impact clinical recommendations, thereby prompting a clear separation of the quality of evidence from the strength of a recommendation.

The FORM framework is a new arrival among evidence-based methodologies for developing clinical practice guidelines[8]. It clearly acknowledges its roots in the SIGN and SORT systems, which were adapted to meet the perceived needs of stakeholder organization representatives in the Australian healthcare system. In brief, it recognizes five factors that impact the strength of a recommendation: the evidence base, consistency, clinical impact, generalizability and applicability. Consideration of these elements then leads to a four-tiered rating system represented by the letters A ("body of evidence can be trusted to guide practice") to D ("body of evidence is weak and recommendation must be applied with caution").
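To make the framework's structure concrete, the following is a minimal Python sketch of what a FORM-style assessment might look like as a simple data record. The five factors and the A-to-D scale are taken from the description above; the names, the validation helper and the example grades are our own hypothetical illustration rather than the NHMRC's official matrix, and the synthesis of the factor ratings into an overall recommendation grade is left to the guideline panel.

```python
# Hypothetical illustration of a FORM-style body-of-evidence assessment.
# The five factors and the A-D scale come from the FORM description above;
# everything else (field names, validation) is our own sketch, not the
# official NHMRC template.

FORM_FACTORS = ["evidence base", "consistency", "clinical impact",
                "generalizability", "applicability"]
GRADES = ["A", "B", "C", "D"]  # A: "can be trusted to guide practice"
                               # D: "weak; apply with caution"

def validate_assessment(assessment):
    """Check that every FORM factor has been rated on the A-D scale."""
    for factor in FORM_FACTORS:
        grade = assessment.get(factor)
        if grade not in GRADES:
            raise ValueError(f"{factor!r} must be graded A-D, got {grade!r}")

# Example assessment for a hypothetical recommendation; the overall grade
# would be assigned by the guideline panel after weighing all five factors.
example = {
    "evidence base": "B",
    "consistency": "A",
    "clinical impact": "B",
    "generalizability": "C",
    "applicability": "C",  # relevance to the Australian healthcare setting
}
validate_assessment(example)
```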

Although this system is novel, it should be recognized that it differs little from existing guideline systems. For example, when comparing FORM with GRADE, which is used by more than 55 organizations in 23 countries, "clinical impact" refers to the likely benefit that application of the guideline can realize, while also taking into account the relevance of the effect to patients (clinical importance), precision and effect size[9]. GRADE considers all of these elements in operationally different ways: it starts with the clinical importance of the outcomes, takes into account the magnitude of the effect and its precision as part of the evaluation of the quality of evidence, and assesses the ratio of benefit to harm (which GRADE considers one of three other dimensions distinct from the quality of evidence) in formulating guideline recommendations[10]. However, what distinguishes FORM from other systems is its strong emphasis on applicability, which is separated out as its own category and assesses the relevance of the body of evidence to the Australian healthcare system. This feature is unique and may prompt future adoption by other guideline systems.

In an ideal world, guideline developers would employ a unified system to rate the quality of evidence and the strength of recommendations[11]. Doing so would dispel the "Babylonian confusion" among users trying to make sense of the varying terminology and definitions used by different guideline developers, ultimately helping to enhance guideline implementation[12]. To date, no such unified system exists, and we are confronted with a fairly large number of competing systems that do not readily translate into one another[13].

How should we arrive at the "best" system? To do so, one would ultimately like to show that a) the system results in recommendations that lead to better outcomes than recommendations made under other systems and b) the system is more reproducible than others. The first point will be difficult to prove empirically and may therefore remain forever unresolved. The second point relates to the issue of whether a system such as FORM can be operationalized in terms of practical, reproducible policy and procedures. To illustrate the challenge, consider the number of combinations that can arise from the FORM system, which (as shown in table 1 in the manuscript) distinguishes between five factors that can each be rated in four different ways, resulting in 4^5 = 1024 combinations. These must be considered in conjunction with recommendations that are made using a four-tiered scale for or against an intervention (4^2 = 16 combinations). This results in a mind-boggling 16,384 (1024 × 16) ways in which a body of evidence can theoretically be categorized to support clinical recommendations. It is, however, highly likely that some combinations are more prevalent than others, making development of the guideline system more feasible than these theoretical calculations appear to indicate.

Nevertheless, it is also likely that this complexity, hitherto only implicitly acknowledged by people in the field, drives the efforts to develop new systems for guideline development. It is also clear that, as we strive to develop a unified guideline system, we must find a way to rate a body of evidence and the strength of recommendations in a reproducible and reliable manner. We believe that the most important next step in the EBM field is to perform empirical methodological research to evaluate which of the existing guideline systems is most reproducible and performs best in the hands of the individuals they are meant to serve. Without this research, the entire evidence-based medicine edifice may lose its solid ground, built so carefully over the last 20 years.
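As an aside, the counting argument above is easy to verify mechanically. The short Python sketch below is our own illustration rather than anything prescribed by FORM: it enumerates the 4^5 factor-rating profiles and multiplies them by the 4^2 recommendation combinations counted in the commentary.

```python
# Minimal sketch of the counting argument in the commentary (our illustration,
# not part of FORM itself): five factors, each rated on a four-point scale,
# give 4**5 = 1024 bodies-of-evidence profiles; paired with the 4**2 = 16
# recommendation combinations counted above, this yields 16,384 categories.
from itertools import product

factors = ["evidence base", "consistency", "clinical impact",
           "generalizability", "applicability"]
grades = ["A", "B", "C", "D"]

# Enumerate every possible combination of factor ratings.
evidence_profiles = list(product(grades, repeat=len(factors)))
assert len(evidence_profiles) == 4 ** 5 == 1024

recommendation_combinations = 4 ** 2   # as counted in the commentary
total = len(evidence_profiles) * recommendation_combinations
print(total)                           # 16384
```

Running the sketch prints 16384, matching the figure quoted above.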

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1471-2288/11/17/prepub

Contributor Information

Philipp Dahm, Email: p.dahm@urology.ufl.edu.

Benjamin Djulbegovic, Email: bdjulbeg@health.usf.edu.

References

1. Shaneyfelt TM, Mayo-Smith MF, Rothwangl J. Are guidelines following guidelines? The methodological quality of clinical practice guidelines in the peer-reviewed medical literature. JAMA. 1999;281(20):1900–1905. doi: 10.1001/jama.281.20.1900.
2. Starr M, Chalmers I, Clarke M, Oxman AD. The origins, evolution, and future of The Cochrane Database of Systematic Reviews. Int J Technol Assess Health Care. 2009;25(Suppl 1):182–195. doi: 10.1017/S026646230909062X.
3. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312(7023):71–72. doi: 10.1136/bmj.312.7023.71.
4. Haynes RB. Of studies, syntheses, synopses, summaries, and systems: the "5S" evolution of information services for evidence-based health care decisions. ACP J Club. 2006;145(3):A8.
5. Sibbald B, Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, Fervers B, Graham ID, Grimshaw J, Hanna SE, Littlejohns P, Makarski J, Zitzelsberger L. AGREE II: advancing guideline development, reporting and evaluation in health care. J Clin Epidemiol. In press.
6. West S, King V, Carey TS, Lohr KN, McKoy N, Sutton SF, Lux L. Systems to rate the strength of scientific evidence. Evid Rep Technol Assess (Summ). 2002:1–11.
7. Atkins D, Eccles M, Flottorp S, Guyatt GH, Henry D, Hill S, Liberati A, O'Connell D, Oxman AD, Phillips B, Schunemann H, Edejer TT, Vist GE, Williams JW Jr; The GRADE Working Group. Systems for grading the quality of evidence and the strength of recommendations I: critical appraisal of existing approaches. BMC Health Serv Res. 2004;4(1):38. doi: 10.1186/1472-6963-4-38.
8. Hillier S, Grimmer-Somers K, Merlin T, Middleton P, Salisbury J, Tooher R, Weston A. FORM: an Australian method for formulating and grading recommendations in evidence-based clinical guidelines. BMC Med Res Methodol. In press.
9. Guyatt GH, Oxman AD, Kunz R, Vist GE, Falck-Ytter Y, Schunemann HJ; for the GRADE Working Group. What is "quality of evidence" and why is it important to clinicians? BMJ. 2008;336(7651):995–998. doi: 10.1136/bmj.39490.551019.BE.
10. Guyatt GH, Oxman AD, Kunz R, Falck-Ytter Y, Vist GE, Liberati A, Schunemann HJ; for the GRADE Working Group. Going from evidence to recommendations. BMJ. 2008;336(7652):1049–1051. doi: 10.1136/bmj.39493.646875.AE.
11. Guyatt G, Vist G, Falck-Ytter Y, Kunz R, Magrini N, Schunemann H. An emerging consensus on grading recommendations? ACP J Club. 2006;144(1):A8–A9.
12. Gugiu PC, Gugiu MR. A critical appraisal of standard guidelines for grading levels of evidence. Eval Health Prof. 2010;33(3):233–255. doi: 10.1177/0163278710373980.
13. Eddy DM. Evidence-based medicine: a unified approach. Health Aff (Millwood). 2005;24(1):9–17. doi: 10.1377/hlthaff.24.1.9.
