Published in final edited form as: J Public Health Manag Pract. 2014 Sep-Oct;20(5):542–550. doi: 10.1097/PHH.0b013e3182aa6560

Public Health Grand Rounds at the Centers for Disease Control and Prevention: Evaluation Feedback From a Broad Community of Learners

John Iskander 1, Mary Ari 2, Bin Chen 3, Sharon Hall 4, Neelam Ghiya 5, Tanja Popovic 6

Abstract

Objective:

To evaluate the relevance and educational benefit of monthly Public Health Grand Rounds (GR), an hour-long interactive lecture series featuring 1 current, relevant public health topic.

Design:

Quantitative and qualitative analysis of data evaluating GR format and content submitted by 2063 continuing education (CE) participants.

Setting:

Survey data submitted electronically to the Centers for Disease Control and Prevention online CE system from January 2010 through December 2011.

Participants:

Physicians, nurses, pharmacists, health education specialists, and other health care professionals seeking CE credits for Public Health GR.

Main Outcome Measures:

Proportion of respondents agreeing or strongly agreeing that GR is using educational strategies that enhance user learning and is meeting preidentified learning objectives.

Results:

On questions involving instructional strategies and delivery methods, 95.0% and 95.6% of respondents, respectively, agreed or strongly agreed that the GR was conducive to learning. More than 90% of respondents agreed or strongly agreed that they could describe the burden of the disease/condition in question and identify key preventive interventions, knowledge gaps, and measures of public health progress.

Conclusions:

These evaluation results indicate that the GR is meeting the content-specific and educational needs of diverse health care professionals. The GR models organized scientific discussion of evidence and its translation into real-world impacts of decreased morbidity, mortality, and health care costs, and links public health to clinical practice. This promotes a greater understanding of the interplay of different health fields and may lead to greater cross-disciplinary collaboration.

Keywords: continuing education, evaluation, public health


While public health is rapidly expanding in many fields, including infectious diseases, chronic disease, environmental and occupational health, and injury prevention,1 the communication of important public health interventions, progress, and resulting health impacts to the broader health workforce community has lagged behind. Many state and local health department personnel lack sufficient access to both Internet resources and public health libraries (CDC Public Health Library and Information Center, unpublished data, 2013). At the same time, given the amount of new knowledge that becomes available every day, it has become difficult even for experts in a narrow area to keep up and translate the latest scientific evidence into practice.

Public Health Grand Rounds (GR) was established by the Centers for Disease Control and Prevention (CDC) director in 2009 to build the knowledge base of the public health workforce, make the connection between evidence and its use in public health and clinical practice, and increase awareness of key scientific and programmatic challenges in addressing major public health issues. Because public health is multidisciplinary and very broad in scope, building a general fund of knowledge, while a challenge, is a prerequisite for success of any public health initiative. Addressing public health issues requires keeping up with the best available scientific evidence to inform practice. Grand Rounds was designed to allow practitioners to learn about contemporary topics in public health, in other words, to create a public health “commons” designed for education and discussion. Grand Rounds sessions showcase areas of public health that have translated their own rigorous science into programs that have positive health outcomes and financial impacts. Grand Rounds topics, which are annually solicited from throughout CDC, have included population health priorities such as tobacco control, health care–associated infections, motor vehicle accident reduction, human immunodeficiency virus (HIV) prevention, and nutrition and food safety. However, GR content also encompasses cross-cutting issues such as electronic health records and potential uses and benefits of nanotechnology in medicine and public health. The forum provides public health staff at all levels of government and in all sectors with exposure to issues, including chronic disease prevention and the role of policy in health promotion, with which they may not have direct experience. Grand Rounds provides viewers with multiple ways to access its content by incorporating emerging and available technology (eg, webcast, YouTube).

In this study, we use satisfaction and learning self-assessment data from continuing education (CE) participants to evaluate the strengths and weaknesses of GR from the perspective of audience members. We assessed the relevance and “user friendliness” of presentation content and format, using information submitted by CE participants. Basic descriptive information about persons obtaining CE through GR is also presented. We discuss the lessons learned and implications for other public health institutions that provide education and training to their workforce and partners.

Background: Grand Rounds Content and Structure

Grand Rounds seeks to present both frontline public health and health care approaches to the topic in question. Multiple speakers address science and policy issues from national, state, and local perspectives, with an emphasis on translating evidence into policy, practice, and prevention. Speakers are recognized subject-matter experts on the presentation topic; contributors to GR have included cabinet-level officials (eg, the director of the Office of National Drug Control Policy), state and local health department leaders, and leaders of national and international nongovernmental organizations.

Because the audience is diverse and educational backgrounds vary, a unique challenge of GR is presenting information in a way that makes sense to those encountering the material for the first time, without “talking down” to anyone. To ensure clarity, presentation content undergoes repeated internal scientific review. Care is taken to ensure that slides, figures, and tables are clear and self-explanatory.

Most GR are organized around 3 key areas, beginning with establishing the current state of scientific knowledge about a disease or condition that forms the basis for public health decisions. This typically involves discussion of epidemiology (incidence and/or prevalence) but also draws in information from disciplines such as laboratory science, statistics, and health economics. Next, GR presenters seek to describe and explain the gap between what is known about evidence-based interventions and what progress is currently being made by the public health and health care communities. Finally, speakers present examples of implementation of evidence-based interventions, including successes and challenges, measurement of impact, and the ability of the activity to be scaled up, nationally and/or globally, so that greater health protection impacts can be achieved.

These 3 core goals are covered by 3 to 4 speakers for up to 40 minutes of presentation. The initial speaker provides a brief review of key scientific evidence. Terms, concepts, and acronyms likely to be unfamiliar to the audience are briefly defined and placed in context. Presenters explain known or likely reasons for significant trends in incidence, prevalence, or other measures of disease burden. The potential impact of interventions is portrayed in terms of lives saved, hospitalizations averted, and dollars saved. Economic impact of diseases and interventions is described using metrics including years of potential life lost and comparative years of potential life lost.2 By the end of the first presentation, the audience should understand the public health importance of the issue that is the focus of that month’s GR, and the quantitative significance of the topic as demonstrated by the public health burden of mortality, morbidity, and health care costs.
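For readers unfamiliar with the metric, years of potential life lost (YPLL) sums, over all deaths occurring before a chosen reference age, the years between the age at death and that reference age.2 The following is a minimal Python sketch with hypothetical data; the reference age of 75 is a common convention, not one specified in the article:

```python
# Years of potential life lost (YPLL): for each death before the reference
# age, count the years between the age at death and that reference age.
# Reference age 75 is a common convention; the ages below are hypothetical.

REFERENCE_AGE = 75

ages_at_death = [23, 58, 71, 80, 45]  # hypothetical deaths in a population

ypll = sum(max(0, REFERENCE_AGE - age) for age in ages_at_death)
print(ypll)  # 52 + 17 + 4 + 0 + 30 = 103 years of potential life lost
```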

Subsequent presenters may communicate the CDC program or health department perspective, demonstrating the evidence that shows what the problem is, what the best interventions are, and how the evidence guided the program toward its current actions, be they surveillance, research, or funding of specific interventions and projects. Evidence that links problems to solutions, and solutions to programs, is presented whenever possible; this may involve descriptions of key studies or evidence-based reviews, such as those conducted by the US Preventive Services Task Force, the Institute of Medicine, the US Surgeon General, or the World Health Organization.3–5 Relevant case studies may also be presented when appropriate to the topic; a recent example highlighted the ability of science to influence policy, and ultimately save lives, by promoting nationwide adoption of 0.08 blood alcohol content laws.6

Additional speakers may highlight an international perspective or emerging policy issues. The final speaker, often a nationally or internationally recognized expert, usually provides a keynote-type summary of present scientific and policy status as well as identifying the key partners and stakeholders needed to ensure progress in the field. Both in-person and remote attendees are able to ask questions of presenters during the final 10 to 15 minutes of GR. By the end of a GR, the audience should be able to critically assess scientific evidence and identify knowledge gaps, while understanding that in many areas of public health the main challenge is the need for wider implementation of effective interventions through partnerships that go beyond the health sector. For instance, despite evidence of effectiveness in reducing risk for heart disease and stroke, none of the ABCS interventions (aspirin, blood pressure control, cholesterol control, and smoking cessation) are currently reaching even 50% of targeted populations.7

All of the sessions and related program content have been archived at www.cdc.gov/about/grand-rounds. Summary articles are published in the CDC Morbidity and Mortality Weekly Report.8–15 Since January 2010, GR has offered CE activities for health care professionals in 5 categories: continuing medical education (CME) for physicians, continuing nursing education (CNE) for nurses, continuing education contact hours (CECH) for certified health education specialists, continuing pharmacy education (CPE) for pharmacists, and continuing education units (CEU) for other professionals. The number of credits or units awarded per session varies between professions, with a maximum of 1.0 credit or contact hour per session.

Methods: Evaluation of Grand Rounds

As part of the CE activity, learners are asked to complete an online evaluation to receive CE (see the Supplemental Digital Content Appendix available at: http://links.lww.com/JPHMP/A56). Because this is not considered to be a scientific survey, an overall response rate cannot be calculated. The evaluation consists of assessment of the content and learning materials (5 questions), course presentation (5–7 questions, depending on the number of presenters), learning environment (5 questions), learning objectives (6 questions), knowledge gain and changes to competence, skills, strategy, or practice (3 questions), quality improvement needs (3 questions), and organization of the session (3 questions). Each evaluation includes 24 to 26 multiple-choice questions. We analyzed the responses to the multiple-choice questions using Excel software. In addition to the multiple-choice questions, CE participants responded to 6 open-ended questions that required text responses: (1) comment on the content and learning materials; (2) indicate whether they thought the activity was influenced by commercial interest; (3) indicate what technical difficulties were experienced; (4) comment on changes to competence, skills, strategies, and practice; (5) comment on the learning objectives; and (6) provide suggestions for improvement. We analyzed these free-text responses, submitted voluntarily by learners, using qualitative methods. Analysis was done manually using a thematic analysis approach16; the content of responses for each question was explored to determine the frequency and intensity with which issues were raised. Individual comments were further analyzed for interpretations, patterns, and any links between them, which led to categorization into specific themes. Findings are presented as themes supported by direct quotes from survey responses, along with descriptions of each theme and the proportion of responses for each theme. Blank entries and entries that stated “none” or “no comments” were excluded. Participants also self-reported data on their work setting, educational level, and professional role; participants could list multiple responses for work setting and professional role.
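Because a single comment could be coded to more than one theme, theme proportions are computed against the number of responses and may therefore sum to more than 100% (see the footnotes to Tables 2 and 3). The following Python sketch is a minimal illustration of that tally, with hypothetical theme labels; the actual coding in the study was done by hand:

```python
from collections import Counter

# Each manually coded comment carries one or more theme labels
# (hypothetical data and label names, for illustration only).
coded_comments = [
    {"satisfied_content", "updated_knowledge"},
    {"satisfied_presenters"},
    {"satisfied_content"},
    {"qa_time_inadequate", "satisfied_content"},
]

theme_counts = Counter(theme for comment in coded_comments for theme in comment)
n_responses = len(coded_comments)

# Proportions use responses as the denominator, so multi-theme comments
# make the per-question column sum exceed 100%.
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count / n_responses:.1%}")
```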

Data were obtained from CE participants who watched the live event (EV) or watched the event via Web on demand (WD) and completed the evaluation questions. The data covered 20 GR sessions that occurred between January 2010 and September 2011. EV participant data were obtained within 30 days of each GR. Because WD was implemented in July 2010, no WD data are available for the January to June 2010 sessions. All data analyzed were received by December 31, 2011. For this analysis, a single individual may participate in 1 or more GR topics, contributing multiple times to the evaluation data. Therefore, the term “participant” as used in this article can include multiple counts of participation by a single individual. Data submitted through the CDC online CE system are not subject to institutional review board review.
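The selection logic described above can be made concrete with a small sketch; the field names and records here are hypothetical, since the article does not describe the CE system's export format:

```python
from datetime import date

# Hypothetical evaluation records exported from the online CE system.
records = [
    {"mode": "EV", "session": "2010-08 Vitamin D", "received": date(2010, 9, 3)},
    {"mode": "WD", "session": "2011-03 TB and HIV", "received": date(2011, 4, 12)},
    {"mode": "WD", "session": "2012-01 Session", "received": date(2012, 2, 1)},
]

CUTOFF = date(2011, 12, 31)  # analysis cutoff stated in the article

# Keep live-event (EV) and web-on-demand (WD) responses received by the
# cutoff; a single individual may appear once per GR session attended.
analyzed = [r for r in records if r["received"] <= CUTOFF]
print(len(analyzed))  # 2
```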

Results: Description of Audience and Evaluation of Content

Audience

Between January 2010 and September 2011, data were obtained from 2723 GR participants (as defined previously) who watched EV or WD, 2063 of whom completed evaluation questions meeting requirements for CE. The most commonly reported work settings were public health (N = 1242, including 600 in federal public health and 533 at state and territory public health agencies), health care (705, of whom 328 reported being hospital based), and nonmilitary government agencies (222). Commonly listed professional roles included nurse (692, of whom 569 were registered nurses), epidemiologist or infection control practitioner (315), physician (290), and health educator (279); again, participants could specify multiple roles. Commensurate with an audience seeking continuing professional education, 63.1% of participants reported an education level of a master’s degree or higher. As of December 31, 2011, a total of 2072 contact hours had been awarded across the CE categories. The 3 topics for which the most CE was awarded were “The Importance of Monitoring Vitamin D Status in the U.S.” (August 2010, 242 credits), “TB and HIV: A Deadly Duo” (March 2011, 170 credits), and “Why H1N1 Still Matters” (September 2010, 162 credits).

GR content, delivery, and overall quality

On broad evaluation measures, 94.3% of participants indicated agreement or strong agreement that GR content filled a gap in their knowledge or skills. An identical percentage (94.3%) agreed or strongly agreed that their educational needs were met. More than 95% of participants agreed or strongly agreed that the instructional techniques and the delivery of the material were conducive to learning. Similar proportions of participants agreed or strongly agreed that the overall length of GR (93.2%) and the number of presenters (91.0%) were appropriate (Table 1).

TABLE 1.

Evaluation of Public Health Grand Rounds by Participants of Continuing Education Activity During 2010–2011 (N = 2063)

| Evaluation Question | Strongly Agree | Agree | Neither/Undecided | Disagree | Strongly Disagree | Not Applicable |
|---|---|---|---|---|---|---|
| The content and learning materials addressed a need or a gap in my knowledge or skills | 778 (37.7%) | 1167 (56.6%) | 104 (5.0%) | 11 (0.5%) | 1 (0.05%)a | 2 (0.1%) |
| This activity effectively met my educational needs | 686 (33.3%) | 1258 (61.0%) | 98 (4.8%) | 16 (0.8%) | 1 (0.05%)a | 4 (0.2%) |
| Specific learning objectives | | | | | | |
| I can list key measures of burden of disease involving morbidity, mortality, and/or costs | 542 (26.3%) | 1320 (64.0%) | 152 (7.4%) | 21 (1.0%) | 1 (0.05%)a | 27 (1.3%) |
| I can describe evidence-based preventive interventions and the status of their implementation | 582 (28.2%) | 1309 (63.5%) | 130 (6.3%) | 15 (0.7%) | 1 (0.05%)a | 26 (1.3%) |
| I can identify one key prevention science research gap | 591 (28.6%) | 1265 (61.3%) | 177 (8.6%) | 9 (0.4%) | 1 (0.05%)a | 20 (1.0%) |
| I can name one key indicator by which progress in meeting prevention goals is measured | 568 (27.5%) | 1328 (64.4%) | 129 (6.3%) | 16 (0.8%) | 1 (0.05%)a | 21 (1.0%) |
| If given an opportunity, I can apply the knowledge gained as a result of this activity | 660 (32.0%) | 1216 (58.9%) | 154 (7.5%) | 14 (0.7%) | 1 (0.05%)a | 18 (0.9%) |
| Presentation and delivery | | | | | | |
| The instructional strategies (lecture, case scenarios, figures, tables, media, etc) helped me learn the content | 699 (33.9%) | 1261 (61.1%) | 81 (3.9%) | 11 (0.5%) | 4 (0.2%) | 7 (0.3%) |
| The delivery method used (conference, journal article, webcast, e-learning, etc) helped me learn the content | 798 (38.7%) | 1174 (56.9%) | 74 (3.6%) | 9 (0.4%) | 2 (0.1%) | 6 (0.3%) |
| The number of presenters is appropriate | 569 (27.6%) | 1307 (63.4%) | 122 (5.9%) | 59 (2.9%) | 1 (0.05%)a | 5 (0.2%) |
| The time allotted for the full activity and for individual speakers is about right | 578 (28.0%) | 1346 (65.2%) | 69 (3.3%) | 54 (2.6%) | 11 (0.5%) | 5 (0.2%) |
| Feedback (Q and A, knowledge checks) I received during the activity was helpful | 513 (24.9%) | 1005 (48.7%) | 209 (10.1%) | 5 (0.2%) | 3 (0.1%) | 328 (15.9%) |
| The availability of continuing education credit influenced my decision to participate in this activity | 778 (37.7%) | 876 (42.5%) | 226 (11.0%) | 148 (7.2%) | 29 (1.4%) | 6 (0.3%) |

a Rounded to 2 decimal places rather than 1 decimal place to avoid a zero value.
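The percentages quoted in the text can be reproduced directly from the counts in Table 1; the following is a minimal Python sketch using the first row (counts as published, N = 2063):

```python
# Counts from the first row of Table 1 ("addressed a need or a gap").
strongly_agree, agree, total = 778, 1167, 2063

# Agreement is the share of respondents answering "agree" or "strongly agree".
agreement = (strongly_agree + agree) / total
print(f"{agreement:.1%}")  # 94.3%, matching the figure reported in the text
```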

Across the 738 written comments for the 20 GR sessions we evaluated, participants expressed strong satisfaction with the quality, content, and timeliness of the GR program. Comments on content and learning materials represented 51.2% (378/738) of responses (Table 2). These comments centered on the quality of the presentation or presenters, the quality of the information, and the time allotted for presentation and questions and answers (Q&A). A total of 66.4% of these comments expressed satisfaction with the content and material, in terms of relevance or quality of information. The value of the information was captured by 1 participant who said, “as a physician I was amazed at the information described; this information should be much more widely circulated.” Another stated: “Very well presented and thought-provoking. Learned quite a bit!” Other comments (31.5%) expressed satisfaction with the overall presentation or presenters. For example, one participant stated, “I felt the materials were presented very clearly in both a pace that was easy to follow and in a way that allowed me to take a lot of new information from it. I feel like I’ve taken a very good grasp of the ideas in the video.” Several participants were complimentary of the Q&A session, expressing sentiments such as “The (QA) continues to be my favorite part of the presentation. Speakers address questions in a conversational manner rather than reading a lecture and I seem to have many of my questions answered during this portion.” Satisfaction was also expressed with the use of multiple learning options, best articulated by the participant who stated, “I’m happy that the content was expressed through the use of multiple outlets video slide and speech. To use just one of these outlets can be boring to the audience but using all of them together allowed for the intended information to be spread in a way in which the audience is less likely to become bored with it.”

TABLE 2.

Common Themes Identified on Learning Objectives, Content and Materials, and Proportion of Responses

| Evaluation Question | Theme Identified | Definition/Description | Proportiona (n)b |
|---|---|---|---|
| Comments about the content and learning materials | Satisfied with presentation/presenters | Expressed in terms of conciseness, flow of presentation, presenters good, quality of slides and video, and multiple learning avenues | 31.5% (378) |
| | Satisfied with content/material | Good information, relevant information or topic, desire for more of these types of information | 66.4% (378) |
| | Dissatisfied with content/material | Expressed as content being too basic, expected some issues covered other than those addressed, complexity, and difficult to understand | 9.3% (378) |
| | Presentation and question and answer (Q&A) time inadequate | Mostly related to need for more time for Q&A, also expressed as fast pace, rushed, superficial, too many topics, or information packed | 6.1% (378) |
| Comments regarding the learning objectives | Satisfaction with learning objectives | Expressed as appropriate, course met objectives, relevant, achieved, well defined, clear | 87.3% (212) |
| | Dissatisfaction with learning objectives | Expressed as not clear, not well tailored to presentation, not what was expected | 4.7% (212) |
| | No learning objectives | Expressed as not knowing of objectives, not remembering or not finding | 2.4% (212) |

a Proportions for each question may not add to 100% because some respondents made statements that cover more than 1 theme.

b Text comments such as “none” or “no comments” were excluded from analysis; some comments contributing to the denominator were not relevant to the question asked.

While the questions requiring text responses were generally similar for all GR topics, some participants provided observations that were specific to the particular topic of discussion. For example, a respondent stated, “As a pharmacist I am very happy to see that the Prescription Drug Overdose problem is being highlighted. I feel that Pharmacists should be brought to the table as potential partners as we try and solve this problem.”

Effects on learners’ knowledge, competence, skills, strategy, and practice

The great majority of participants indicated that the learning objectives common to all GR were met. More than 90% were able to list key measures of the burden of disease, describe evidence-based preventive interventions, and name 1 key prevention progress indicator. Slightly less than 90% could identify 1 key research gap (Table 1). More than 87% of written comments regarding learning objectives expressed satisfaction (Table 2).

Importantly, 90.9% agreed or strongly agreed that, if given an opportunity, they could apply what they have learned from the GR sessions (Table 1). The participants also specified in written comments how attending the educational activities resulted in knowledge gain and changes to their competence, skills/strategy, and practice. Four themes were identified from comments regarding changes to competency, skills, strategies, or practice: new, updated, shareable, and applied knowledge (Table 3). The majority of participants (56.3%) stated that their knowledge of the topic area was updated; as expressed by one participant, “As a board member of the local partnership for a drug free community the information helped to broaden my appreciation for interventions the partnership has been conducting in this community.” Plans to apply knowledge gained were clearly stated such as “Will incorporate the benefit of breastfeeding as lifelong advantage for obesity prevention more strongly during prenatal counseling” and “Information about adolescent driving will be helpful in educating some of my clients. Appreciated the detailed information about the cascade of events following (traumatic brain injury) … was very helpful for better understanding of the effects of those events on long-term outcomes.”

TABLE 3.

Common Themes Identified on Changes to Competency and Recommendations for Improvement, and Proportion of Responses

| Evaluation Question | Theme Identified | Definition/Description | Proportiona (n)b |
|---|---|---|---|
| Comments about changes to your competence, skills/strategy, and practice | New knowledge | Statements that indicate no prior knowledge, limited knowledge, or not knowing about or unfamiliar with topic | 9.5% (368) |
| | Updated knowledge | Statements that indicate increased knowledge, refreshed, better ability, or more understanding or realization | 56.3% (368) |
| | Sharing of knowledge | Statements indicating sharing information with colleagues, others, or at other venues | 3.3% (368) |
| | Application of knowledge | Statements that clearly indicate how information will be used to change practice, or in practice, or in decision making or advocacy | 31.3% (368) |
| Suggestions to improve this educational activity | Content | Repeat topics, provide more background information, other topics, laboratory perspective | 6.8% (160) |
| | Format | More time, more for Q&A, Q&A via e-mail, fewer presenters, interactive, less slide reading | 28.8% (160) |
| | Technical/Web | Improvement to navigation, viewing, connectivity, and audio | 10.6% (160) |
| | Continuing education related | Questions more relevant to content; finding course, registering for course, more continuing education categories, fewer questions | 31.3% (160) |
| | General | Provide handouts, make slides downloadable, provide print of slides | 13.8% (160) |

a Proportions for each question may not add to 100% because some respondents made statements that cover more than 1 theme.

b Text comments such as “none” or “no comments” were excluded from analysis; some comments contributing to the denominator were not relevant to the question asked.

Suggestions for improvement

Lower ratings in some areas point to opportunities for improvement; for example, only 73.6% agreed or strongly agreed that the feedback (eg, Q&A) received during GR was helpful. Recommendations for improvement were provided by 21.7% of participants (Table 3), mostly related to CE requirements, such as reducing the number of questions and improving the course registration process, with a few suggestions to ask content-specific questions to reinforce learning. Other comments involved improvements to the format of GR, specifically the need for more time for audience Q&A (6.1%). A total of 13.8% of respondents also thought that it would be helpful to provide handouts or downloadable slides at the time of the presentation. Participants were also asked whether they had technical difficulties; 24% indicated difficulties, including audio, visual, connectivity, and Web navigation issues. Very few responses (n = 12) were received regarding whether the activity was thought to be influenced by commercial interests, and none expressed concern about undue commercial influence. Specific suggestions for improvement received from learners included “Improve webcast technology so that slides appear clear” and “ … have CME tests that ask questions directly about the content/information in the seminar … This would help to better test knowledge and improve information recall.”

Discussion: Lessons Learned

There are several models in use for evaluating the effectiveness of CE, such as the Kirkpatrick model and Prochaska’s transtheoretical model. The Kirkpatrick model focuses on 4 levels of educational effect (reactions, learning, transfer, and results), whereas Prochaska’s model focuses on behavior change.17,18 Although behavior change is considered the standard, it remains difficult to measure, requiring evaluation long after the training. Many publications on the impact and effectiveness of CE have concluded that knowledge does not always translate into practice17,19,20; however, evidence of benefit to providers has been seen in some settings. Grand Rounds evaluation responses do indicate that participants acknowledge changes to competence and practice. This is given some credence by the fact that, in some cases, participants identified in their comments the venue and context in which they hope to implement what they learned. This specificity in feedback supports the value and success of the GR program in presenting “actionable science” to the audience.

Analysis of the types of CE participants suggests that many work in state or local health departments, in ambulatory care settings, and in direct care or service professions such as nursing. An important limitation of the analysis presented here is that it does not reflect the full GR audience; only responses from participants in GR CE were available. Because no pretraining (ie, pretest) survey was administered, there were no baseline data with which to compare when measuring the impact of the training program,21 and it has been suggested that the type of evaluation method used may affect analysis outcomes.22

Findings from this evaluation may indicate future ways to improve GR, including incorporating audience feedback in ways beyond question-and-answer sessions, for example, using social media to facilitate audience interaction with speakers. Since this evaluation was completed, we have expanded social media outreach (eg, GR-related tweets and Facebook posts). To maintain audience interest and keep up with technological improvements, it may also be necessary to periodically revise Web and communications content and the “look and feel” of program materials. Findings regarding the types of professionals seeking CE through GR suggest the possibility of building links with public health or health care training programs and partnering with health care professional associations to develop GR programs. Maintaining the highest scientific standards for GR is essential for a science-based agency like CDC. There is, therefore, a need for ongoing evaluation of GR activities, including providing feedback opportunities for non-CE participants.

Conclusions: Broader Applicability

Grand Rounds fills an information need for public health personnel at the state and local levels, where specialization has increasingly become an unaffordable luxury. Just as importantly, it also bridges the gap between public health and the health care workforce, both of which have key roles in improving people’s health. Grand Rounds focuses on helping its audience identify the best available evidence that can be used to improve health, as well as promoting understanding of both effective implementation strategies and barriers to implementation. The data analyzed here indicate that GR is meeting many of the needs of its audience by enhancing their fund of knowledge and allowing them to translate new ideas into everyday practice. Responses also show a desire for better connection and interaction with GR presenters and speakers. Grand Rounds has become an integral part of a leadership vision for CDC, which supports science as the bedrock of public health practice, prioritizes the use of data for prevention and program implementation, and works with key partners such as state and local health departments to improve health.

For institutions such as health departments and schools of public health, GR may be a model of how to build a scientific and practice community with shared goals and a shared “vocabulary.” For geographically dispersed public health programs, utilizing available broadcast and Internet technology means that access to teaching expertise need no longer be limited by travel constraints faced by state and local program staff. Grand Rounds presents examples of how scientific evidence can be translated into “real world” decreases in morbidity, mortality, and health care costs. Our evaluation suggests that a diverse health care professional audience perceives benefits to their knowledge and practice related to GR participation. While ongoing areas for improvement of the GR have also been identified, we encourage groups and institutions seeking to develop “communities of practice” to build on the GR experience.

Acknowledgments

We acknowledge the contributions of more than 100 Grand Rounds speakers, CDC subject-matter experts and liaisons, the Grand Rounds communications team, and CDC CE team. This study was funded by the US government.

Footnotes

The authors declare no conflicts of interest.

Supplemental digital content is available for this article. Direct URL citation appears in the printed text and is provided in the HTML and PDF versions of this article on the journal’s Web site (www.JPHMP.com).

Contributor Information

John Iskander, Office of the Associate Director for Science, Centers for Disease Control and Prevention, Atlanta, Georgia.

Mary Ari, Office of the Associate Director for Science, Centers for Disease Control and Prevention, Atlanta, Georgia.

Bin Chen, Laboratory Science Policy and Practice Program Office, Centers for Disease Control and Prevention, Atlanta, Georgia.

Sharon Hall, Scientific Education and Professional Development Program, Centers for Disease Control and Prevention, Atlanta, Georgia.

Neelam Ghiya, Office of the Associate Director for Science, Centers for Disease Control and Prevention, Atlanta, Georgia.

Tanja Popovic, Office of the Associate Director for Science, Centers for Disease Control and Prevention, Atlanta, Georgia.

REFERENCES

1. Centers for Disease Control and Prevention. Public Health Then and Now: Celebrating 50 Years of MMWR at CDC. MMWR 2011;60(suppl):1–124.
2. Gardner JW, Sanborn JS. Years of potential life lost (YPLL)–what does it measure? Epidemiology 1990;1(4):322–329.
3. U.S. Preventive Services Task Force. The Guide to Clinical Preventive Services 2010–2011: Recommendations of the U.S. Preventive Services Task Force. Rockville, MD: Agency for Healthcare Research and Quality; 2010.
4. Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health. How Tobacco Smoke Causes Disease: The Biology and Behavioral Basis for Smoking-Attributable Disease: A Report of the Surgeon General. Atlanta, GA: Centers for Disease Control and Prevention; 2010.
5. World Health Organization. WHO report on the global tobacco epidemic 2008: the MPOWER package. http://www.who.int/tobacco/mpower/2008/en/index.html. Accessed August 23, 2013.
6. Mercer SL, Sleet DA, Elder RW, Cole KH, Shults RA, Nichols JL. Translating evidence into policy: lessons learned from the case of lowering the legal blood alcohol limit for drivers. Ann Epidemiol 2010;20(6):412–420.
7. Frieden TR, Berwick DM. The “Million Hearts” initiative—preventing heart attacks and strokes. N Engl J Med 2011;365(13):e27.
8. Centers for Disease Control and Prevention. CDC Grand Rounds: chlamydia prevention: challenges and strategies for reducing disease burden and sequelae. MMWR Morb Mortal Wkly Rep 2011;60(12):370–373.
9. Centers for Disease Control and Prevention. CDC Grand Rounds: prescription drug overdoses—a U.S. epidemic. MMWR Morb Mortal Wkly Rep 2012;61:10–13.
10. Centers for Disease Control and Prevention. CDC Grand Rounds: childhood obesity in the United States. MMWR Morb Mortal Wkly Rep 2011;60(2):42–46. Erratum in: MMWR Morb Mortal Wkly Rep 2011;60(5):142.
11. Centers for Disease Control and Prevention. CDC Grand Rounds: radiological and nuclear preparedness. MMWR Morb Mortal Wkly Rep 2010;59(36):1178–1181.
12. Centers for Disease Control and Prevention. CDC Grand Rounds: additional opportunities to prevent neural tube defects with folic acid fortification. MMWR Morb Mortal Wkly Rep 2010;59(31):980–984.
13. Centers for Disease Control and Prevention. CDC Grand Rounds: current opportunities in tobacco control. MMWR Morb Mortal Wkly Rep 2010;59(16):487–492.
14. Centers for Disease Control and Prevention. CDC Grand Rounds: dietary sodium reduction—time for choice. MMWR Morb Mortal Wkly Rep 2012;61:89–91.
15. Centers for Disease Control and Prevention. Grand Rounds: the opportunity for and challenges to malaria eradication. MMWR Morb Mortal Wkly Rep 2011;60(15):476–480.
16. Miles MB, Huberman AM. Qualitative Data Analysis. 2nd ed. Newbury Park, CA: Sage; 1994:10–12.
17. Turner NM. Continuing medical education in pediatric anesthesia—a theoretical overview. Paediatr Anaesth 2008;18(8):697–701.
18. Randhawa S. Using the transtheoretical model for outcome evaluation in continuing education. J Contin Educ Nurs 2012;43(4):148–149.
19. Davis D, O’Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA 1999;282(9):867–874.
20. Forsetlund L, Bjørndal A, Rashidian A, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2009;(2):CD003030.
21. Pehrson C, Sorensen JL, Amer-Wahlin I. Evaluation and impact of cardiotocography training programmes: a systematic review. BJOG 2011;118:926–935.
22. Santos M, Vicente T, Monteiro A. Temporalities in evaluation of training courses: standards and restrictions practiced by human resources professionals. Work 2012;41(2):217–226.
