ABSTRACT
As part of the American Society for Microbiology (ASM) Evidence-Based Laboratory Medicine Practice Guidelines Committee of the Professional Practice Committee, an ad hoc committee was formed in 2014 to assess guidelines published by the committee using an assessment tool, Appraisal of Guidelines for Research Evaluation II (AGREE II). The AGREE II assessment helps reviewers determine whether published guidelines are robust, transparent, and clear in presenting practice recommendations in a standardized manner. Identifying the strengths and weaknesses of practice guidelines through ad hoc assessments helps improve future guidelines via the participation of key stakeholders. This minireview describes the development of the ad hoc committee and the results of its review of several ASM best practices guidelines and a non-ASM practice guideline from the Emergency Nurses Association.
KEYWORDS: clinical practice guidelines, AGREE II assessment tool, blood cultures, evidence-based laboratory medicine, guideline assessment
INTRODUCTION
As stated in the report Clinical Practice Guidelines We Can Trust, commissioned by the Institute of Medicine (1), “Clinical practice guidelines are statements that include recommendations intended to optimize patient care. They are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options.” Clinical practice guidelines (CPGs) are ubiquitous across the broad spectrum of medical specialties. According to the Guidelines International Network (http://www.g-i-n.net/home), there are more than 6,000 CPGs from 85 countries listed in their database. In the United States, the National Guideline Clearinghouse includes over 1,100 guidelines from U.S.-based organizations (2) (www.guideline.gov).
(Presented in part at the ASM Microbe 2017 Annual Meeting, 1 to 5 June 2017, New Orleans [3]).
Laboratory practice guidelines have been developed by a number of professional organizations. Only recently, however, did ASM and the clinical microbiology community establish a formal mechanism for developing practice guidelines. In 2010, ASM formed the Evidence-Based Laboratory Medicine Practice Guidelines (EBLMPG) committee, and by 2012 the committee had worked with Battelle Centers for Public Health Research and Evaluation to learn the Centers for Disease Control and Prevention (CDC) A-6 methodology for practice guideline development (4). This is a robust method developed by the Division of Laboratory Science and Standards at CDC with the Laboratory Medicine Best Practices (LMBP) initiative for evaluating scientific evidence to improve health care quality. A number of LMBP systematic reviews have been published, including reports on patient specimen identification and the effectiveness of barcoding (5), critical value reporting (6), practices to reduce blood sample hemolysis in emergency departments (7), and cardiac biomarkers (8), as well as several in clinical microbiology (9, 10).
Briefly, the A-6 cycle for developing a best practices guideline is composed of the following steps:
Asking the question—formulate one or more focused review questions.
Acquiring the evidence—compile all relevant evidence from published and unpublished sources.
Appraising individual studies for inclusion—review the acquired evidence, abstract the data, prepare evidence summary tables, and rate the individual studies.
Analyzing the evidence—rate the overall strength of the aggregate body of evidence (e.g., high, moderate, suggestive, or insufficient) and make recommendations (recommendation for, no recommendation for or against, or recommendation against).
Applying the recommendations—implement best practices, including disseminating recommendations to key stakeholders, performing economic analysis, and considering potential harms and benefits.
Assessing the impact—develop measures to determine improvements following implementation.
A detailed review of the ASM EBLMPG approach is forthcoming (A. S. Weissfeld, unpublished data). There are currently seven guidelines in the National Guideline Clearinghouse authored by the LMBP, and two ASM best practices guidelines have been published in Clinical Microbiology Reviews (10, 11). In 2014, as part of the practice guideline development program, ASM, under a CDC cooperative agreement, formed an ad hoc Evidence-Based Laboratory Medicine Practice Guidelines Assessment (EBLMPGA) Committee composed of the four coauthors of this minireview. The purpose of the committee was to provide feedback to the ASM working group about guidelines prepared under the CDC cooperative agreement, using a guideline assessment instrument.
There are a number of approaches to evaluating published CPGs. Assessment instruments include those published by the Institute of Medicine (12), Cluzeau's Appraisal Instrument (13), Appraisal of Guidelines for Research Evaluation (AGREE) and AGREE II (14, 15), Structured Abstracts of CPGs (16), and Shaneyfelt's Methodologic Standards (17). All of these instruments share the goal of assessing the quality and soundness of CPGs, whether for preparing new CPGs, improving existing ones, or judging guidelines before their adoption (1) (also see Appendix Table C-1 in reference 1). AGREE II was chosen as the assessment tool by the CDC prior to the cooperative agreement with ASM for guideline development; we have no insight into the rationale for that choice. However, AGREE II is well developed and relatively easy to use, and its training modules are informative. The ad hoc EBLMPGA committee had the goal of independently assessing ASM or other relevant guidelines using the AGREE II assessment tool (14, 15).
AGREE II ASSESSMENT INSTRUMENT
The AGREE instruments were developed by an international group of guideline developers and researchers, funded by the Canadian Institutes of Health Research, to standardize a method for assessing variability in practice guidelines. They are intended for use by policy makers, educators, guideline developers, and health care providers to develop rigorous guidelines, evaluate existing guidelines, and make decisions about adopting guidelines (15) (www.agreetrust.org). There are six different areas (domains) addressed by the AGREE II assessment tool.
Scope and purpose—three questions.
Stakeholder involvement—three questions.
Rigor of development—eight questions.
Clarity of presentation—three questions.
Applicability—four questions.
Editorial independence—two questions.
Domain 1, scope and purpose, focuses on the overall aims of the guideline, the targeted population, and the health questions specifically being addressed. Domain 2, stakeholder involvement, addresses the intended users of the guideline. Domain 3, rigor of development, outlines the methods and process for developing the guideline. Domain 4, clarity of presentation, addresses how well the guideline is written. Domain 5, applicability, evaluates recommendations for implementation and the resources needed for this. The editorial independence and potential biases are addressed in domain 6. The reviewer(s) makes a final assessment of the quality of the guideline (from lowest to highest quality) and makes a recommendation (yes, yes with modifications, or no) regarding use of the guideline by the intended audience/population. Although the reviewers make a final assessment, this indicates only whether the CPG meets the criteria outlined in the AGREE II tool. Additional comments clarifying what information is missing can be included in the assessment for consideration by the CPG authors.
Each question within the six domains is scored by the reviewer on a 7-point Likert-type rating scale. A score of 7 indicates strong agreement with the statement in the question, a score of 1 indicates strong disagreement, and an intermediate score indicates that the guideline satisfies some but not all of the criteria for that particular question. The scores for all questions within a domain are summed, and the total is expressed as a percentage of the maximum possible score for that domain. A score of 100% indicates that all of the domain questions are completely addressed in the guideline; a score of 50% indicates that half of the criteria are met, and so on. The AGREE II tool itself provides no threshold for determining whether a score is acceptable or unacceptable; acceptability of the guideline is determined by the review group (here, the ad hoc EBLMPGA committee).
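This scaling can be sketched in a few lines of code. The function below follows the standardized domain score formula from the AGREE II user's manual, which expresses the obtained total relative to the minimum and maximum scores possible for the domain; the item means used in the example are taken from Table 2 (scope and purpose domain, blood culture contamination guideline [9], four appraisers):

```python
def scaled_domain_score(item_means, n_appraisers):
    """AGREE II standardized domain score, as a percentage.

    item_means: mean score (1 to 7) across appraisers for each item
    (question) in the domain.
    """
    n_items = len(item_means)
    obtained = sum(item_means) * n_appraisers  # total of all appraiser scores
    minimum = 1 * n_items * n_appraisers       # every appraiser scores every item 1
    maximum = 7 * n_items * n_appraisers       # every appraiser scores every item 7
    return 100 * (obtained - minimum) / (maximum - minimum)

# Scope-and-purpose item means for the blood culture contamination
# guideline (9), from Table 2
score = scaled_domain_score([6.50, 6.50, 6.50], n_appraisers=4)
print(round(score))  # 92, matching the domain score reported in Table 3
```

The same function applied to the eight rigor of development item means for that guideline yields 73%, also matching Table 3.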
To date, the ad hoc EBLMPGA committee has assessed four sets of clinical practice guidelines. These include two that evaluated preanalytical considerations to reduce blood culture contamination rates (9, 18), one that evaluated preanalytical considerations for urine cultures (11), and one that evaluated practices to increase the timeliness of targeted therapy for inpatients with bloodstream infections (BSI) (10). Three of the guidelines were developed by the ASM/CDC LMPG groups, whereas one was developed outside the LMPG by the Emergency Nurses Association (ENA) (18). The guidelines evaluated are summarized in Table 1.
TABLE 1.
Clinical practice guidelines evaluated by the ad hoc EBLMPGA Committee

| Reference | Professional society that developed guideline | Intended goal of practices assessed by guideline | Parameters evaluated |
|---|---|---|---|
| Snyder et al. (9) | CDC LMBP | Reduce blood culture contamination | (1) Venipuncture; (2) phlebotomy teams; (3) use of prepackaged preparation/collection kits |
| Proehl et al. (18) | ENA | Reduce blood culture contamination | (1) Skin preparation methods; (2) sterile gloving; (3) cleaning blood culture tops; (4) use of prepackaged collection kits; (5) use of IV catheters; (6) specimen diversion; (7) blood volume; (8) personnel and education |
| Buehler et al. (10) | ASM/CDC LMBP | Increase timeliness of targeted therapy for hospitalized patients with bloodstream infection | (1) Rapid molecular techniques with direct communication of test results to clinicians/pharmacists; (2) rapid molecular techniques without direct communication of test results; (3) rapid phenotypic techniques with direct communication of test results to clinicians or pharmacists |
| LaRocco et al. (11) | ASM/CDC LMBP | Reduce contamination and improve diagnostic accuracy of urine cultures | (1) Storage temperature and use of preservatives; (2) collection practices of urine from women with suspected UTI; (3) collection practices of urine from men with suspected UTI; (4) collection practices of urine from children with suspected UTI |

Abbreviations: ASM, American Society for Microbiology; CDC, Centers for Disease Control and Prevention; ENA, Emergency Nurses Association; IV, intravenous; LMBP, Laboratory Medicine Best Practices; UTI, urinary tract infection.
ASSESSMENT COMMITTEE TRAINING
Before the use of the AGREE II assessment instrument by the ad hoc committee, training of the committee members was provided via two online tutorials. The first of these was an avatar-guided overview, which introduced the principles behind the AGREE II tool. Included in this tutorial was a user's manual that described each of the six domains to be rated in assessing a practice guideline. Tips on where to find data supporting the assessed domains in a published practice guideline, as well as instructions on how to rate each item, using the 7-point rating scale, were provided. The latter consideration included guidance for the committee members on the criteria to be met in order for an item to receive a high score. The second tutorial provided the committee members with the opportunity to assess a sample practice guideline within AGREE II, with immediate feedback following each question on the trainee's ratings of the practice guideline, compared to those of an expert panel.
Each committee member independently reviewed and evaluated the four guidelines using the AGREE II tool. Following each evaluation, the committee members met via teleconference to discuss each question in the AGREE II tool, each member's score, and the rationale behind the score assigned. In particular, questions with significant interassessor discrepancies were evaluated. In these discussions, the committee members had the opportunity to adjust their assigned score or to maintain the one initially assigned. In general, scores were adjusted only if a committee member had overlooked a piece of evidence in the guideline that met the criteria under evaluation.
RESULTS FROM THE AD HOC EBLMPGA COMMITTEE REVIEW OF CPGS
After group discussion of each guideline and clarification of some of the questions, agreement between committee members was good for domain questions that were awarded high scores (≥6 on average) or low scores (<2 on average). In contrast, interassessor agreement was less robust for questions with average scores between 2 and 6 (Table 2). The standard deviation of committee member scores was 0.54 for domain questions with an average score of ≥6 and 0.50 for those with an average score of <2 but was 2.14 for domain questions with an average score of ≥2 but <6. This trend did not change between the first guideline evaluated (18) and the last (10), suggesting some ambiguity in assigning scores to domain questions for which a guideline neither clearly met the criteria for a high score nor entirely failed to address the question (which would result in a score of 1). A score is assigned according to the completeness and quality of reporting, the evaluation of which is left to the judgment of the individual appraiser and can therefore vary. No association was found between individual committee members and average scores or recommendations, indicating that no member was more or less lenient than the others in evaluating the guidelines.
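The per-question agreement statistics in Table 2 can be reproduced with Python's standard library. The four ratings below are hypothetical appraiser scores chosen only to illustrate how a mean of 6.50 with a standard deviation of 0.58 (a pattern seen in several Table 2 entries) can arise from four reviewers:

```python
from statistics import mean, stdev

# Hypothetical scores from the four appraisers for one AGREE II question
scores = [6, 7, 6, 7]

m = mean(scores)   # average score across the four reviewers
s = stdev(scores)  # sample standard deviation (n - 1 denominator)

print(f"mean = {m:.2f}, SD = {s:.2f}")  # mean = 6.50, SD = 0.58
```

Note that `stdev` uses the sample (n − 1) formula, which is consistent with the values reported in Table 2 for four appraisers.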
TABLE 2.
AGREE II domain questions and ad hoc EBLMPGA Committee member scores

| AGREE II domain | Domain question | BC contamination (9), mean | SD | ENA BC contamination (18), mean | SD | Targeted treatment and BSI reporting (10), mean | SD | Preanalytical urine collection (11), mean | SD |
|---|---|---|---|---|---|---|---|---|---|
| Scope and purpose | (1) Is the overall objective(s) of the guideline specifically described? | 6.50 | 1.00 | 3.25 | 2.22 | 7.00 | 0.00 | 6.75 | 0.50 |
| | (2) Is the health question(s) covered by the guideline specifically described? | 6.50 | 1.00 | 3.50 | 2.38 | 7.00 | 0.00 | 6.33 | 1.15 |
| | (3) Is the population (patients, public, etc.) to whom the guideline is meant to apply specifically described? | 6.50 | 1.00 | 3.00 | 2.83 | 6.25 | 0.96 | 5.33 | 0.58 |
| Stakeholder involvement | (4) Does the guideline development group include individuals from all relevant professional groups? | 6.25 | 0.50 | 2.25 | 2.50 | 6.00 | 0.82 | 6.33 | 0.58 |
| | (5) Have the views and preferences of the target population (patients, public, etc.) been sought? | 2.25 | 2.50 | 1.50 | 0.58 | 2.25 | 1.50 | 2.00 | 1.73 |
| | (6) Are the target users of the guideline clearly defined? | 3.00 | 2.16 | 5.00 | 2.71 | 4.00 | 2.16 | 4.67 | 1.53 |
| Rigor of development | (7) Were systematic methods used to search for evidence? | 6.50 | 0.58 | 6.25 | 1.50 | 7.00 | 0.00 | 7.00 | 0.00 |
| | (8) Are the criteria for selecting the evidence clearly described? | 6.75 | 0.50 | 4.50 | 2.89 | 7.00 | 0.00 | 7.00 | 0.00 |
| | (9) Are the strengths and limitations of the body of evidence clearly described? | 6.00 | 0.00 | 4.25 | 1.50 | 6.50 | 1.00 | 6.67 | 0.58 |
| | (10) Are the methods for formulating the recommendations clearly described? | 6.75 | 0.50 | 4.75 | 2.87 | 6.75 | 0.50 | 7.00 | 0.00 |
| | (11) Have the health benefits, side effects, and risks been considered in formulating the recommendations? | 6.25 | 0.50 | 4.00 | 2.45 | 6.75 | 0.50 | 6.33 | 1.15 |
| | (12) Is there an explicit link between the recommendations and the supporting evidence? | 6.75 | 0.50 | 5.00 | 2.83 | 7.00 | 0.00 | 7.00 | 0.00 |
| | (13) Has the guideline been externally reviewed by experts prior to its publication? | 3.25 | 2.87 | 3.25 | 2.22 | 6.25 | 1.50 | 5.00 | 3.46 |
| | (14) Is a procedure for updating the guideline provided? | 1.00 | 0.00 | 1.25 | 0.50 | 3.50 | 3.00 | 2.00 | 1.73 |
| Clarity of presentation | (15) Are the recommendations specific and unambiguous? | 6.00 | 0.82 | 3.50 | 2.38 | 5.25 | 2.87 | 6.67 | 0.58 |
| | (16) Are the different options for management of the condition or health issue clearly presented? | 4.25 | 1.50 | 3.25 | 0.96 | 5.25 | 2.87 | 6.67 | 0.58 |
| | (17) Are key recommendations easily identifiable? | 6.75 | 0.50 | 4.75 | 2.63 | 5.00 | 2.71 | 7.00 | 0.00 |
| Applicability | (18) Does the guideline describe facilitators of and barriers to its application? | 4.00 | 2.65 | 2.00 | 1.41 | 6.50 | 1.00 | 6.33 | 1.15 |
| | (19) Does the guideline provide advice and/or tools with respect to how the recommendations can be put into practice? | 3.25 | 2.06 | 1.75 | 1.50 | 4.50 | 2.65 | 5.00 | 1.73 |
| | (20) Have the potential resource implications of applying the recommendations been considered? | 5.25 | 0.96 | 1.50 | 1.00 | 6.75 | 0.50 | 6.33 | 1.15 |
| | (21) Does the guideline present monitoring and/or auditing criteria? | 4.75 | 2.22 | 3.00 | 2.31 | 3.25 | 2.06 | 3.33 | 3.21 |
| Editorial independence | (22) Is it the case that the views of the funding body have not influenced the content of the guideline? | 5.25 | 1.71 | 2.50 | 2.38 | 6.00 | 1.41 | 5.67 | 1.53 |
| | (23) Have competing interests of guideline development group members been recorded and addressed? | 1.50 | 1.00 | 1.00 | 0.00 | 6.50 | 1.00 | 6.67 | 0.58 |
| Overall score | Overall assessment | 5.50 | 0.58 | 2.75 | 1.71 | 5.67 | 1.53 | 6.00 | 0.82 |

Abbreviations: BC, blood culture; BSI, bloodstream infections; ENA, Emergency Nurses Association; SD, standard deviation.
Overall assessments of the guidelines by the EBLMPGA committee were calculated with the AGREE II system as described above and are summarized in Table 3. The assessment committee recommended the guideline published by LaRocco and colleagues (11) on preanalytical urine culture considerations (three reviewers recommended the guideline; one recommended it with modifications). The committee recommended with modifications the guideline by Snyder et al. (9) on blood culture contamination (two reviewers recommended the guideline; two recommended it with modifications), as well as that of Buehler et al. (10) on the use of diagnostics to improve the timeliness of targeted therapy for bloodstream infections (two reviewers recommended the guideline; one recommended it with modifications; one did not recommend it). The guideline put forth by the ENA (18), however, was found to have several limitations (Table 2) and was not recommended overall (three reviewers did not recommend the guideline; one recommended it). It should be noted that the ENA guideline was the only one of the four evaluated that did not use the CDC's A-6 cycle for systematic review (4). To the best of our knowledge, there were no independent assessments of the guidelines reviewed by the ad hoc EBLMPGA committee other than peer review at the time of publication. For the ASM guidelines, the LMBP workgroup consisted of a group of experts who helped formulate the guidelines.
TABLE 3.
Overall AGREE II domain score summary for practice guidelines assessed by the ad hoc EBLMPGA Committee

| AGREE II domain | BC contamination (9) | ENA BC contamination (18) | Targeted treatment and BSI reporting timeliness (10) | Preanalytical urine collection (11) |
|---|---|---|---|---|
| (1) Scope and purpose | 92 | 38 | 96 | 89 |
| (2) Stakeholder involvement | 47 | 32 | 51 | 49 |
| (3) Rigor of development | 73 | 53 | 89 | 83 |
| (4) Clarity of presentation | 78 | 47 | 69 | 86 |
| (5) Applicability | 57 | 18 | 71 | 65 |
| (6) Editorial independence | 40 | 13 | 88 | 88 |
| Overall recommendation | Y (2), YM (2) | N (3), Y (1) | Y (2), YM (1), N (1) | Y (3), YM (1) |

Domain scores are percentages. Abbreviations: BC, blood culture; BSI, bloodstream infection; ENA, Emergency Nurses Association; Y, yes; N, no; YM, yes with modifications. Data in parentheses indicate the number of committee members giving each overall recommendation.
Some trends emerged across all of the guidelines evaluated. As a whole, the ASM/CDC guidelines scored well (>80%) in scope and purpose. They also scored well in the rigor of development domain, with scores ranging from 73% to 89% (Table 3). All four guidelines fell short with regard to inclusion of a procedure for updating the guideline. The only guideline that received a score of >2 on this item was the one that evaluated practices to reduce time to targeted therapy for BSI (10); it indicated that routine updates would occur but did not explicitly state the time interval, the criteria to guide decisions on when an update should occur, or the methodology for the updating procedure, all of which are required to satisfy this criterion (Table 2). For the clarity of presentation domain, scoring across the assessors varied on the point of whether the guideline presented different options for management of a given condition or health issue, resulting in the low scores shown in Table 3. This was primarily an issue of differing judgments by the committee members on whether each guideline fulfilled this item, with scores ranging from 1 to 7; further training and advice on how to grade this question may be needed. In the applicability domain, the key issues on which the guidelines performed suboptimally were the inclusion of advice or tools on how to put the recommendations into practice and the inclusion of monitoring and/or auditing criteria. More-concrete recommendations on how to implement the recommendations and how to monitor their use and impact are needed for all clinical microbiology practice guidelines.
Finally, all four guidelines performed poorly in the stakeholder involvement domain (range, 32% to 51%). These scores were driven primarily by two criteria: “the views and preferences of the target population have been sought” and “the target users of the guideline are clearly defined” (Table 2). None of the four guidelines evaluated incorporated the views and preferences of the target population, that is, the group or individuals toward whom the CPG is directed. For example, the CPGs for reducing blood culture contamination are targeted to patients having blood cultures drawn by peripheral venipuncture, and this population is represented by a specific entry in domain 1 (item 3: “The population to whom the guideline is meant to apply is specifically described”). This is distinguished from item 6 (“The target users of the guideline are clearly defined”), whereas item 5 addresses whether “the views and preferences of the target population (patients, public, etc.) have been sought.” While the role of patient/public preferences in assessing laboratory testing practices may not be immediately apparent, this group is the one ultimately affected by the practice guideline. In addition, patients have unique perspectives and experiences that may be valuable for guideline development. Involving these groups ensures that the issues addressed are relevant, that important aspects of the experience of illness are considered, that critical outcomes are identified and prioritized, that the decision-making process is transparent, and that the final guideline can be understood by those it affects (19). The last consideration is important, as it allows laypeople to make informed choices about the services, interventions, care, and treatments available to them. However, several barriers to participation of the public exist, including the lack of suitable representatives and the complexity of the scientific terminology used by the committees.
Nonetheless, several organizations, including the National Institute for Health and Care Excellence in the United Kingdom, have developed recommendations on how to overcome these barriers (20). All three guidelines that evaluated blood culture practices could have included participation of one of the many patient advocacy groups for sepsis prevention; inclusion of these groups will inevitably benefit guideline development.
The second issue that scored poorly in the stakeholder involvement domain across all four guidelines was definition of the target users. While this seemed implicit in most instances, it was not fully addressed in any of the guidelines, leaving some uncertainty regarding the applicability of the guideline. For example, the preanalytical urine collection guideline did not specifically identify whether it applied only to hospitalized patients or also to those in the clinic, nor whether its target users were nurses, physicians, laboratorians, etc. This lapse can easily be addressed by including language in the guidelines that specifies the target user.
GENERAL APPROACHES TO IMPLEMENTATION OF GUIDELINES
Once there is commitment to adopting CPGs, including those related to diagnostic testing, the process of implementation must be initiated. The objective of guideline implementation is the full incorporation of the recommendations into laboratory and clinical practice so they become established as routine standard care (21). The ultimate goals are to decrease inappropriate variation to expedite patient care and ensure favorable clinical outcomes (21). Guideline implementation is a multistep process and is dependent on several elements, as illustrated in Fig. 1.
FIG 1.
Flow chart illustrating the implementation of CPGs, including those related to diagnostics processes, within an institution or organization (adapted from reference 21 with permission of the publisher). (A) Leadership support must be established and in place at the outset. (B) Subsequently, an implementation action team is formed and charged with spearheading dissemination of the CPG and “buy-in” by all stakeholders, including the users and operators of the test. (C) The action team develops a plan for implementation that ensures all associated stakeholders are educated and resources are available for successful application and adherence. The plan may be subject to modification based on the outcome of monitoring. (D) Depending on the consequences associated with applying the CPG, it may be prudent to implement guideline recommendations on a small scale (denoted by dashed arrows) prior to system-wide adoption. (E) In contrast, recommendations may be immediately rolled out at the system-wide level. (F) The implementation and adherence process is closely monitored and, if necessary, modified to ensure compliance with CPG recommendations and, ultimately, institutionalization.
Understandably, guideline implementation can be daunting in the absence of universally accepted strategies, but at least three factors are critical for success. First, there must be dissemination to and “buy-in” by all stakeholder groups, namely, laboratory practitioners, laboratory professional organizations, health care workers, health care worker professional organizations, administrators, government regulatory agencies, accrediting groups, policy makers, and payers (4) and perhaps even the patients themselves. Patient engagement is recommended by multiple institutions and instruments for guideline development (22), and patients' opinions with respect to laboratory medicine-related recommendations may be increasingly sought. As noted in the section on guideline assessment above, patient involvement was lacking in the development of the four guidelines reviewed by the ad hoc committee. Second, the resources (fiscal, infrastructure, logistic, labor, etc.) necessary for successful rollout and guideline adherence should be readily available, and the effort required to acquire those resources should not be prohibitively burdensome. Finally, laboratorians and health care workers must become familiar and comfortable with all aspects of the recommendations. This can be accomplished through educational media (peer-reviewed publications, newsletters, educational presentations at scientific and professional meetings, etc.) that describe the guideline's logic and its benefit to patient care.
Even with the factors described above in place, multiple barriers that hinder or prevent the application of guidelines are frequently encountered. In a seminal paper, Cabana and colleagues reviewed barriers to physician adherence to guidelines (23). These barriers are often the same as those encountered by laboratorians in implementing diagnostic testing guideline recommendations and can be stratified into three intimately linked categories: knowledge, attitude, and behavior. Barriers that affect knowledge include lack of awareness of and familiarity with the guidelines due to the volume of information, lack of the time required to stay informed, and even lack of access to the guidelines themselves. This highlights the importance of easily accessible and relevant education (i.e., “news you can use”) during the implementation process. Attitudes that hinder guideline implementation include lack of agreement with the guideline, a belief that the recommended practices cannot be performed, and resistance stemming from the inclination to adhere to previous practices. Again, education is key to overcoming these issues, as is provision of resources for implementation. Finally, behavioral barriers are influenced by several factors, including environmental factors (e.g., scarcity of time and resources, organizational constraints, and lack of reimbursement), guideline-related factors (e.g., characteristics of the guidelines themselves, including ease of use, and the presence of contradictory and competing guidelines), and patient-related factors (e.g., an inability to align patient preferences with recommendations). Ultimately, since the barriers to implementation are numerous and multifaceted, it is essential that they be taken into consideration prior to, and addressed during, guideline development to ensure successful implementation and adherence.
OTHER ASPECTS FOR CLINICAL PRACTICE GUIDELINES ASSESSMENTS
As of this writing, nearly 1,700 CPGs are available from the Agency for Healthcare Research and Quality Clearinghouse (https://www.guideline.gov/), only 47 of which are targeted specifically to “clinical laboratory personnel.” Even within this small subset, the guidelines are of variable quality and may offer conflicting recommendations. This is a common problem, and guidelines representing various levels of quality have been described in numerous disciplines (2, 24). Thus, standardization of guideline development and design to ensure timely and efficient implementation of new guidelines as they are published is an important goal that has not yet been completely achieved (25).
Competing guidelines are typically focused on achieving the same goal(s) but are developed from different viewpoints by various organizations and, paradoxically, introduce and reflect the variation they were meant to remove. An example encountered by our four-member ad hoc committee was the presence of two competing guidelines aimed at reducing blood culture contamination; one document was developed by the CDC-funded LMBP initiative (9) and the other by the ENA (18). Interestingly, no member of the committee was aware of either document prior to joining the group, reinforcing the need for better dissemination of CPGs and for better interaction between different groups focused on achieving the same goal(s).
In the event that conflicting and/or competing guidelines are encountered, coordinated interaction between the responsible organizations to understand the underlying reasons for the conflict or competition and how they may be addressed in guideline updates should be pursued. Indeed, after review of both the CDC- and the ENA-sponsored documents, a member of our committee initiated contact with the ENA to discuss future collaboration to better harmonize blood culture guidelines. Clearly, the optimal situation is one that engages all stakeholders at the very beginning so that they may work together and share decision making during the development process to ensure that implementation and adherence can be achieved across disciplines. Despite potential roadblocks, this can be accomplished even for a discipline as complex as infectious disease diagnostics, where, for example, the Infectious Diseases Society of America and ASM partnered to develop a reference guide for physicians when choosing tests for diagnosing infectious diseases (26).
While most guidelines undergo peer review prior to publication, rigorous assessment using standardized tools such as AGREE II allows identification of areas that need improvement or further study. In addition, if CPG assessments in specific areas are undertaken by a consistent group of reviewers, conflicts between guidelines can be identified and addressed. Whereas the guidelines themselves are available through national clearinghouses, professional societies, and other distributors, assessments of those guidelines often are not. Instead, guideline consumers must carefully sort through the relevant literature to identify independent reviews or papers (if they exist) that evaluate the guidelines they plan to implement. An important opportunity exists to streamline this process: assessments performed using tools such as AGREE II could be included in guidelines disseminated through the mechanisms mentioned above, or made available through the same channels as the original CPG documents, as a form of postpublication peer review. Citations of published papers that evaluate CPGs using AGREE are available at http://www.agreetrust.org/; however, tighter linkage with the original CPGs would be beneficial. For example, for CPGs generated by ASM and CDC for clinical microbiology practices, all new CPGs could be evaluated using the AGREE II instrument and the reviews published, perhaps as supplemental material, along with the CPG. Alternatively, and representing the approach that we recommend, AGREE II evaluation (or an equivalent) by a consistent group of reviewers could be made a requirement by the professional society prior to publication and/or distribution through the professional society. If the latter plan were adopted, revisions based on the assessment could be used to refine the documents prior to publication, just as peer review comments are used now.
However, a key requirement to ensure that CPG assessments add value to the CPG development and implementation process is the careful selection and training of the assessors.
CONCLUSIONS
Assessing CPGs for the EBLMPG committee using the AGREE II assessment tool has been a useful part of the guideline development process. Here, our assessments occurred after publication of each guideline and served to provide feedback to the guideline committee to help prepare future revisions. However, using AGREE II or another guideline assessment instrument during the guideline development process itself would likely result in even higher-quality guidelines. A particular challenge for ASM as it develops future guidelines is to identify guidelines that already exist when a project is initiated and to determine whether partnerships with other (and sometimes competing) organizations are feasible. Although such partnerships increase the time required to complete the writing and review process, they can help foster a better appreciation among non-ASM organizations of the work done by the EBLMPG committee.
ACKNOWLEDGMENT
This work was performed as part of an ASM-CDC Cooperative Agreement (FOA no. CDC-RFA-OEIS-1304).
REFERENCES
- 1. Graham R, Mancher M, Wolman DM, Greenfield S, Steinberg E. 2011. Clinical practice guidelines we can trust. National Academies Press, Washington, DC.
- 2. Bush SH, Marchington KL, Agar M, Davis DH, Sikora L, Tsang TW. 2017. Quality of clinical practice guidelines in delirium: a systematic appraisal. BMJ Open 7:e013809. doi: 10.1136/bmjopen-2016-013809.
- 3. Nachamkin I, Kirn TJ, Westblade LF, Humphries R. 2017. Abstr ASM Microbe Ann Meet, 1 to 5 June 2017, New Orleans, LA, abstract 485.
- 4. Christenson RH, Snyder SR, Shaw CS, Derzon JH, Black RS, Mass D, Epner P, Favoretto AM, Liebow EB. 2011. Laboratory medicine best practices: systematic evidence review and evaluation methods for quality improvement. Clin Chem 57:816–825. doi: 10.1373/clinchem.2010.157131.
- 5. Snyder SR, Favoretto AM, Derzon JH, Christenson RH, Kahn SE, Shaw CS, Baetz RA, Mass D, Fantz CR, Raab SS, Tanasijevic MJ, Liebow EB. 2012. Effectiveness of barcoding for reducing patient specimen and laboratory testing identification errors: a laboratory medicine best practices systematic review and meta-analysis. Clin Biochem 45:988–998. doi: 10.1016/j.clinbiochem.2012.06.019.
- 6. Liebow EB, Derzon JH, Fontanesi J, Favoretto AM, Baetz RA, Shaw C, Thompson P, Mass D, Christenson R, Epner P, Snyder SR. 2012. Effectiveness of automated notification and customer service call centers for timely and accurate reporting of critical values: a laboratory medicine best practices systematic review and meta-analysis. Clin Biochem 45:979–987. doi: 10.1016/j.clinbiochem.2012.06.023.
- 7. Heyer NJ, Derzon JH, Winges L, Shaw C, Mass D, Snyder SR, Epner P, Nichols JH, Gayken JA, Ernst D, Liebow EB. 2012. Effectiveness of practices to reduce blood sample hemolysis in EDs: a laboratory medicine best practices systematic review and meta-analysis. Clin Biochem 45:1012–1032. doi: 10.1016/j.clinbiochem.2012.08.002.
- 8. Layfield C, Rose J, Alford A, Snyder SR, Apple FS, Chowdhury FM, Kontos MC, Newby LK, Storrow AB, Tanasijevic M, Leibach E, Liebow EB, Christenson RH. 2015. Effectiveness of practices for improving the diagnostic accuracy of non ST elevation myocardial infarction in the emergency department: a laboratory medicine best practices systematic review. Clin Biochem 48:204–212. doi: 10.1016/j.clinbiochem.2015.01.014.
- 9. Snyder SR, Favoretto AM, Baetz RA, Derzon JH, Madison BM, Mass D, Shaw CS, Layfield CD, Christenson RH, Liebow EB. 2012. Effectiveness of practices to reduce blood culture contamination: a laboratory medicine best practices systematic review and meta-analysis. Clin Biochem 45:999–1011. doi: 10.1016/j.clinbiochem.2012.06.007.
- 10. Buehler SS, Madison B, Snyder SR, Derzon JH, Cornish NE, Saubolle MA, Weissfeld AS, Weinstein MP, Liebow EB, Wolk DM. 2016. Effectiveness of practices to increase timeliness of providing targeted therapy for inpatients with bloodstream infections: a laboratory medicine best practices systematic review and meta-analysis. Clin Microbiol Rev 29:59–103. doi: 10.1128/CMR.00053-14.
- 11. LaRocco MT, Franek J, Leibach EK, Weissfeld AS, Kraft CS, Sautter RL, Baselski V, Rodahl D, Peterson EJ, Cornish NE. 2016. Effectiveness of preanalytic practices on contamination and diagnostic accuracy of urine cultures: a laboratory medicine best practices systematic review and meta-analysis. Clin Microbiol Rev 29:105–147. doi: 10.1128/CMR.00030-15.
- 12. Field M, Lohr KN. 1992. Guidelines for clinical practice: from development to use. National Academies Press, Institute of Medicine Committee on Clinical Practice Guidelines, Washington, DC.
- 13. Cluzeau FA, Littlejohns P, Grimshaw JM, Feder G, Moran SE. 1999. Development and application of a generic methodology to assess the quality of clinical guidelines. Int J Qual Health Care 11:21–28. doi: 10.1093/intqhc/11.1.21.
- 14. AGREE Collaboration. 2003. Development and validation of an international appraisal instrument for assessing the quality of clinical practice guidelines: the AGREE project. Qual Saf Health Care 12:18–23. doi: 10.1136/qhc.12.1.18.
- 15. Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, Fervers B, Graham ID, Hanna SE, Makarski J, AGREE Next Steps Consortium. 2010. Development of the AGREE II, part 2: assessment of validity of items and tools to support application. CMAJ 182:E472–E478. doi: 10.1503/cmaj.091716.
- 16. Hayward RS, Wilson MC, Tunis SR, Bass EB, Rubin HR, Haynes RB. 1993. More informative abstracts of articles describing clinical practice guidelines. Ann Intern Med 118:731–737. doi: 10.7326/0003-4819-118-9-199305010-00012.
- 17. Shaneyfelt TM, Mayo-Smith MF, Rothwangl J. 1999. Are guidelines following guidelines? The methodological quality of clinical practice guidelines in the peer-reviewed medical literature. JAMA 281:1900–1905.
- 18. Proehl JA, Leviner S, Bradford JY, Storer A, Barnason S, Brim C, Halpern J, Lindauer C, Patrick VC, Williams J. 2012. Clinical practice guideline: prevention of blood culture contamination. Emergency Nurses Association, Des Plaines, IL.
- 19. Health Technology Assessment International. June 2014. Values and quality standards for patient involvement in HTA. Health Technology Assessment International. http://www.htai.org/fileadmin/HTAi_Files/ISG/PatientInvolvement/v2_files/Info/PCISG-Info-ValuesandStandards-30-Jun14.pdf. Accessed 5 September 2017.
- 20. Browman GP, Somerfield MR, Lyman GH, Brouwers MC. 2015. When is good, good enough? Methodological pragmatism for sustainable guideline development. Implement Sci 10:28.
- 21. Nicholas W, Farley DO, Vaiana ME, Cretin S. 2001. Putting practice guidelines to work in the Department of Defense medical system: a guide for action. Document MR-1267-A. RAND Corporation. https://www.rand.org/pubs/monograph_reports/MR1267.html.
- 22. Armstrong MJ, Rueda JD, Gronseth GS, Mullins CD. 2017. Framework for enhancing clinical practice guidelines through continuous patient engagement. Health Expect 20:3–10. doi: 10.1111/hex.12467.
- 23. Cabana MD, Rand CS, Powe NR, Wu AW, Wilson MH, Abboud PA, Rubin HR. 1999. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA 282:1458–1465.
- 24. Matheson A, Mazza D. 2017. Recurrent vulvovaginal candidiasis: a review of guideline recommendations. Aust N Z J Obstet Gynaecol 57:139–145. doi: 10.1111/ajo.12592.
- 25. Armstrong JJ, Goldfarb AM, Instrum RS, MacDermid JC. 2017. Improvement evident but still necessary in clinical practice guideline quality: a systematic review. J Clin Epidemiol 81:13–21. doi: 10.1016/j.jclinepi.2016.08.005.
- 26. Baron EJ, Miller JM, Weinstein MP, Richter SS, Gilligan PH, Thomson RB Jr, Bourbeau P, Carroll KC, Kehl SC, Dunne WM, Robinson-Dunn B, Schwartzman JD, Chapin KC, Snyder JW, Forbes BA, Patel R, Rosenblatt JE, Pritt BS. 2013. A guide to utilization of the microbiology laboratory for diagnosis of infectious diseases: 2013 recommendations by the Infectious Diseases Society of America (IDSA) and the American Society for Microbiology (ASM). Clin Infect Dis 57:e22–e121. doi: 10.1093/cid/cit278.

