Abstract
Context
The National Comprehensive Cancer Control Program (NCCCP) performance measurement system seeks to understand both the processes that funded programs undertake with their respective coalitions to implement the objectives of their cancer plans and outcomes of those efforts.
Objective
To identify areas of achievement and technical assistance needs of NCCCP awardees.
Design
Program performance was assessed through surveys completed by program directors on performance indicators in 2009 and 2010 and queries from a web-based management information system in 2011 and 2012.
Setting
Programs funded by CDC’s NCCCP.
Participants
69 programs.
Main Outcome Measure(s)
The key performance measures assessed were: inclusion of diverse partners and key sectors in cancer coalitions; partners’ involvement in activities; receiving in-kind resources from partners; using evidence-based interventions and data for setting priorities; conducting program evaluation; using community- or organization-level strategies to address cancer control efforts; and demonstrating progress toward achieving health outcomes.
Results
Most programs reported having active coalitions that represent diverse organizational sectors. Nearly all programs routinely assess the burden of cancer. In-kind resources to implement activities peaked at a median of $64,716 in the second year of the five-year funding cycle and declined in subsequent project years. By year 3, over 70% of programs reported having an evaluation plan. Although programs reported that nearly two-thirds of their interventions were evidence-based, some programs implemented non-evidence-based interventions. A majority of programs successfully used at least one community- or organization-level change strategy. However, many programs did not incorporate objectives linked to health outcomes when reporting progress on implementing interventions.
Conclusions
While NCCCP programs were strong at building and maintaining infrastructure, some programs may need additional technical assistance to increase the adoption of evidence-based interventions, develop solid and responsive evaluation plans, and better link efforts to population-based measures that demonstrate impact toward reducing the burden of cancer.
Keywords: cancer control, program evaluation/standards, program evaluation/utilization, performance measurement, public health administration, neoplasms/prevention and control, Public Health Practice/prevention and control
Introduction
Cancer remains a significant public health problem in the United States. In 2010, approximately 1.5 million incident cases were diagnosed,1 and more than half a million people died from the disease, making it the second leading cause of death overall.2 While overall age-adjusted incidence has been decreasing during the past decade, incidence rates are increasing for some preventable cancer sites, such as melanoma.3 As progress in cancer treatment advances, more Americans are surviving their cancer diagnosis,4 but they often face lower quality of life and other chronic health conditions.5,6
In 1998, the Centers for Disease Control and Prevention (CDC) piloted the National Comprehensive Cancer Control Program (NCCCP) with five state health departments and one tribal health board that had existing state or tribal cancer control plans.7 Currently, CDC funds a total of 65 NCCCP awardees (states, territories, Pacific Island jurisdictions, and tribes/tribal organizations) in 69 comprehensive cancer control (CCC) programs to establish broad-based CCC coalitions, assess the burden of cancer, and develop and implement CCC plans to reduce cancer incidence and mortality and improve the quality and duration of life among cancer survivors.8 CCC coalitions generally rely on volunteers from partner organizations, on the premise that donated time and financial support can be leveraged from these stakeholders to plan and implement interventions from the CCC plan while integrating and coordinating activities to more effectively address the cancer burden. Traditionally, CCC coalitions produce CCC plans that address the cancer control continuum (http://cancercontrol.cancer.gov/od/continuum.html), but they may prioritize plan implementation in a few key areas (e.g., tobacco control or colorectal cancer screening).
Increasingly, public health programs such as the NCCCP are expected to account for their use of Federal funding by demonstrating measurable, results-oriented outcomes.9 Performance measurement is one method of documenting accountability.10 Performance measurement systems are essential to program monitoring and quality improvement efforts and provide a platform to evaluate key programmatic efforts; thus, findings should be actively used to improve programs.11-13 Often, these systems serve as the primary data source for directly evaluating programmatic efforts. Quality improvement activities and performance measurement are now commonplace in numerous health care settings.14 In public health, the Government Performance and Results Act of the 1990s and, more recently, CDC's National Public Health Performance Standards Program and the public health department accreditation movement reflect efforts to improve programmatic performance.9,15,16
In 2007, a set of performance measures was developed and pilot tested with 61 NCCCP awardees17 to measure performance in the five-year funding cycle spanning 2007-2012. To begin the process, data were systematically collected across all awardees to describe attributes of the NCCCP to invested stakeholders, document current progress, and foster quality improvement. The NCCCP performance measurement system is an integral part of ongoing NCCCP evaluation efforts, which seek to understand both the processes that NCCCP awardees undertake with their respective CCC coalitions to develop and implement the objectives of their cancer plans and the resulting health outcomes of those efforts, so that quality may be continually improved. Because the primary objective of NCCCP evaluation efforts is to continually use evaluation findings to improve the program, we undertook an analysis of NCCCP performance measures data from 2008 – 2012 (project years 2 – 5) to enhance our understanding of the current state of the NCCCP, recognize areas of achievement, and identify emerging issues that can be addressed through training and technical assistance as programs continue into a new funding period.
Methods
Data collection
As a condition of funding, NCCCP awardees are expected to report performance measures to CDC annually. Project year 2 represents the period from June 30, 2008, through June 29, 2009, and year 3 represents the period from June 30, 2009, through June 29, 2010; the same convention applies to project years 4 and 5. Development and pilot testing of the NCCCP performance measurement system have been described previously.17 In 2008, the NCCCP performance measurement system was refined to clarify survey questions and strengthen indicators to more accurately measure activities and outcomes. Additional refinement of the survey occurred in 2009 and 2010 in response to feedback from program directors and program staff and the need to collect data on emerging programmatic processes. Consequently, some performance measures were removed and new ones were added, so data for some performance measures were not collected continuously throughout years 2 – 5.
In 2010, a web-based chronic disease management information system (CDMIS) was developed to collect programmatic data from select CDC chronic disease program awardees. The NCCCP was an inaugural member of CDMIS, and the NCCCP module systematically captures data from all NCCCP awardees about staff, CCC coalition members, resources (e.g., leveraged financial resources, donated meeting space, and volunteer time), and planning tools (e.g., surveillance data sources and evaluation plans). CDMIS also includes annual action plans that contain objectives linked to overall five-year project period objectives and describe the programmatic work that forms the basis for interim and annual progress reports. In 2011, CDMIS replaced the previous NCCCP performance measurement system as the data collection tool and was used solely for NCCCP performance measurement in project years 4 and 5. To reduce duplication of effort and the burden on awardees in reporting performance measures, and to ensure continuity in measurement systems, key programmatic contextual information, action plans, progress reports, and performance reporting were integrated throughout the data entry fields of the CDMIS NCCCP module. To facilitate reporting of performance measures and maintain transparency in the reporting process, CDC developed and disseminated to NCCCP awardees a document that mapped all existing NCCCP performance measures to data entry fields in CDMIS. Because only programmatic information was collected from respondents reporting performance measures, institutional review board approval was not required for data collection and analysis. Approval of data collection through CDMIS was obtained from the Office of Management and Budget (OMB Control #0920-0841). Response rates ranged from 98.6% of funded programs in years 2 – 3 to 100% in years 4 – 5.
Performance measures that were assessed
NCCCP performance measures are indicators developed to address activities that funded CCC programs conduct as a condition of funding, which fall under the following domains: 1) assess and enhance current infrastructure; 2) build strong partnerships; 3) assess the burden of cancer (i.e., incidence, mortality, and risk factors); 4) mobilize support for comprehensive cancer control (e.g., receiving in-kind resources from partners such as donated staff time, supplies, or meeting space); 5) implement the cancer plan; 6) conduct evaluation of the cancer plan; 7) use systems and environmental strategies to address the cancer burden; and 8) monitor changes in population-based health outcomes. In this analysis, we focused on a subset of performance measures that represent key activities and measure the impact of the NCCCP (see Table, Supplemental Digital Content 1, for the list of indicators and response options). The performance measures assessed were: 1) partners' involvement in CCC activities; 2) inclusion of organizations that represent underserved/underrepresented populations in cancer coalitions; 3) use of cancer surveillance data for setting priorities and program planning; 4) inclusion of key sectors in cancer coalitions; 5) ability to garner in-kind resources from partners; 6) inclusion of key aspects of the cancer care continuum in implementation activities with partners; 7) use of evidence-based interventions; 8) routine program evaluation; 9) use of environmental and systems change strategies to address cancer control; and 10) demonstration of progress toward achieving preset goals for cancer prevention and control.
Data Management and Analysis
Performance measurement data collected through the initial NCCCP performance measurement system (years 2 – 3) were entered into an MS Access database. Approximately one-third of records were reviewed for data entry errors and corrected if necessary, and all outliers were verified against the original data to identify and correct errors as needed. These data were imported into SAS version 9.2 (SAS Institute, Cary, NC) for analysis. For project years 4 and 5, we developed a set of CDMIS queries to systematically abstract data across all NCCCP awardees; some data from these years were downloaded into MS Excel for additional analysis. Codebooks and a data analysis plan were developed so that analyses across years 4 and 5 were standardized: when assignment of categories was required to summarize data, predetermined codes were applied consistently between years based on inclusion and exclusion criteria (see the sketch below). Sixty-eight NCCCP programs completed performance measurement in the initial NCCCP system in project years 2 and 3. In years 4 – 5, all 69 NCCCP programs completed performance measurement, and all used CDMIS for reporting. Because not all questions were asked every year, data not available for a given year are shown as "not applicable" (N/A). The largest number of questions was asked in year 3, when we developed new indicators to better measure some performance measures; however, many of these new indicators were not included in CDMIS because of the system's deployment schedule.
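As an illustration, a predetermined category assignment of this kind might be implemented as a SAS data step, as sketched below. This is a minimal, hypothetical example: the dataset (cdmis_raw), variable names, and keyword rules are assumptions for illustration and do not reproduce the actual codebook used in the analysis.

```sas
/* Hypothetical sketch: applying predetermined codebook categories to CDMIS
   query output so that coding is identical in years 4 and 5.
   Dataset, variable names, and keyword rules are illustrative only. */
data cdmis_coded;
  set cdmis_raw;                         /* one record per action plan objective */
  length topic_area $40;
  obj_text = upcase(objective_text);     /* case-insensitive keyword matching */
  if index(obj_text, 'TOBACCO') or index(obj_text, 'SMOK') then
    topic_area = 'Tobacco';
  else if index(obj_text, 'COLORECTAL') then
    topic_area = 'Colorectal cancer screening';
  else if index(obj_text, 'BREAST') or index(obj_text, 'CERVICAL') then
    topic_area = 'Breast or cervical cancer screening';
  else topic_area = 'Other';
  drop obj_text;
run;
```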
We calculated descriptive statistics using SAS version 9.2 for performance measures reported in years 2 and 3. For variables collected on a continuous scale (e.g., the monetary amount of in-kind resources), we calculated the mean, median, range, and, where appropriate, the sum. In most analyses, the denominator was the total number of NCCCP programs that reported performance measures in a given year, and the numerator was the number of programs meeting the indicator. However, in project year 2, one NCCCP program was still developing its cancer plan and was therefore excluded from some analyses. In some cases, programs were excluded from the denominator if they did not respond to the question or if the question did not apply to them (e.g., evaluation plan components when no evaluation plan had been developed). For some analyses, such as the use of evidence-based interventions or the proportion of partners implementing the cancer plan, we calculated the mean of the percentages reported by programs.
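The sketch below shows how these summaries might be produced in SAS 9.2. The dataset and variable names (perf_year3, inkind_dollars, met_indicator, pct_evidence_based) are hypothetical stand-ins for the actual analysis files.

```sas
/* Hypothetical sketch of the descriptive analyses described above.
   Dataset and variable names are illustrative, not the original files. */

/* Mean, median, range, and sum of a continuous measure
   (e.g., in-kind resources reported by each program) */
proc means data=perf_year3 n mean median min max sum;
  var inkind_dollars;
run;

/* Proportion of programs meeting a yes/no indicator; programs that did not
   respond or to whom the question did not apply are excluded */
proc freq data=perf_year3;
  where not missing(met_indicator);
  tables met_indicator;   /* 1 = met the indicator, 0 = did not */
run;

/* Mean of the percentages reported by programs
   (e.g., % of implemented interventions that are evidence-based) */
proc means data=perf_year3 mean;
  var pct_evidence_based;
run;
```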
For project years 4 – 5, descriptive statistics were calculated using Excel based on results from the CDMIS queries.
Results
Table 1 presents findings from performance measures that assess infrastructure-related activities: partners’ involvement in CCC activities, inclusion of organizations that represent underserved/underrepresented populations in cancer coalitions, and use of cancer surveillance data for setting priorities and program planning (performance measures 1 – 3). Seventy-nine percent of NCCCP programs conducted at least one in-person meeting with their entire CCC coalition in project year 3, and most reported that workgroups and the executive committee met at least 3 – 4 times during the course of the year (73.5% and 82.4%, respectively). Most programs reported that partners volunteered to take the lead on action items (83.8%) and followed up on action items in a timely manner (73.5%). In project year 2, NCCCP programs reported that an average of 60.4% of their partners implemented at least one priority strategy from the cancer plan. The majority of NCCCP programs with underserved/underrepresented populations residing in their jurisdictions reported having organizations that represent or serve these communities on their coalitions (range: 60.0% – 86.4%). Representation was highest for organizations serving Black or African American populations and lowest for those serving Asian populations. In project years 4 and 5, 50.7% and 56.5% of NCCCP programs, respectively, reported having organizations that represent priority populations and cultural/ethnic organizations as key coalition partners.
Table 1.
NCCCP Programs that met the indicator*

| Performance measure indicator | Year 2 (n=68), n (%) | Year 3 (n=68), n (%) | Year 4 (n=69), n (%) | Year 5 (n=69), n (%) |
|---|---|---|---|---|
| **Partners’ Involvement in CCC Activities** | | | | |
| All-partner meeting convened at least once during this 12-month period (face-to-face) | N/A | 54 (79.4) | N/A | N/A |
| Each workgroup/subcommittee met 3–4 times during this 12-month period (face-to-face or by phone) | N/A | 50 (73.5) | N/A | N/A |
| Executive committee/steering committee met 3–4 times during this 12-month period (face-to-face or by phone) | N/A | 56 (82.4) | N/A | N/A |
| Formal by-laws with written roles and responsibilities are shared routinely with partners | N/A | 48 (70.6) | N/A | N/A |
| Partners provide evidence that they use the CCC Plan | N/A | 51 (75.0) | N/A | N/A |
| Partners volunteer to take the lead on action items identified at meetings | N/A | 57 (83.8) | N/A | N/A |
| Partners report follow-up on action taken in a timely manner | N/A | 50 (73.5) | N/A | N/A |
| CCC program staff members do not lead the majority of CCC activities (i.e., partners lead) | N/A | 45 (66.2) | N/A | N/A |
| Partner organizations implement at least one strategy focused on the priorities of the cancer plan† | 63 (60.4) | N/A | N/A | N/A |
| **Inclusion of organizations that represent underserved/underrepresented populations in cancer coalitions‡** | | | | |
| American Indian/Alaska Native | 25/31 (80.7) | 25/32 (78.1) | N/A | N/A |
| Asian | 22/34 (64.7) | 21/35 (60.0) | N/A | N/A |
| Black or African American | 38/44 (86.4) | 36/46 (78.3) | N/A | N/A |
| Native Hawaiian or Other Pacific Islander | 13/16 (81.3) | 10/14 (71.4) | N/A | N/A |
| Hispanic or Latino | 37/46 (80.4) | 36/47 (76.6) | N/A | N/A |
| CCC programs with coalitions that include at least one organization representing priority populations and cultural/ethnic organizations | N/A | N/A | 35 (50.7) | 39 (56.5) |
| **Performance measures that assess use of cancer surveillance data for setting priorities and program planning** | | | | |
| Routine assessment of cancer burden data | 60/65 (92.3) | 65/67 (97.0) | N/A | N/A |
| Years since last assessment, mean (range) | 1.3 (0–6) | 1.2 (0–7) | N/A | N/A |
| Results of midpoint review were presented to partners during a partnership meeting | N/A | 39/68 (57.4) | N/A | N/A |
| A written report of the results from the midpoint review was provided to partners | N/A | 31/68 (45.6) | N/A | N/A |
| The program updated CCC plan goals or objectives based on a review of data and trends | 28/66 (42.4) | 24/58 (41.4) | N/A | N/A |
| The program met one or more action plan objectives for assessing the burden of cancer | N/A | N/A | 37 (53.6) | 38 (55.1) |
* Data are not available for every indicator throughout the entire time period; indicators not collected for a given year are denoted "N/A". Data displayed are for programs that met the indicator (e.g., indicated "yes" to the survey question).
† Sixty-three programs responded to this question on the proportion of partners implementing cancer plan strategies, with responses ranging from 12.0% to 100%. The data presented are the mean percentage of partners reported by programs. This indicator was originally collected under the domain "implement the cancer plan."
‡ The denominator is limited to programs that reported having a portion of residents in their jurisdiction of a specified racial/ethnic background.
Abbreviations: CCC, Comprehensive Cancer Control; N/A, not applicable.
Nearly all NCCCP programs (92.3% in year 2 and 97.0% in year 3) routinely conducted a comprehensive review of their available cancer surveillance data (incidence, mortality, and prevalence of cancer screening and health behaviors linked to cancer), with a review conducted on average within the past two years in years 2 and 3 (1.3 and 1.2 years, respectively). In year 3, 57.4% of NCCCP programs shared results of a midpoint cancer surveillance data review during a partnership meeting, and 41.4% updated goals and objectives in their cancer plan based upon a review of data and trends. Slightly over half of programs in years 4 and 5 met their objectives for assessing the burden of cancer in their action plans reported to CDC through CDMIS (53.6% and 55.1%, respectively).
Most NCCCP programs reported having CCC coalitions that represent diverse organizational sectors (performance measure 4; Figure 1). Political leaders, business/industry, and other government organizations tended to be reported less frequently, particularly in years 4 and 5. Most reported that they received in-kind resources (i.e. personnel/volunteers, meeting space, travel, etc.) for CCC efforts (performance measure 5); median values of these resources ranged from $40,005 (year 4) to $64,716 (year 2) across project years 2 – 5 (Figure 2).
Table 2 presents the results of performance measures that assess cancer plan implementation and outcomes of CCC activities (performance measures 6 – 8 and 10). Most programs reported addressing screening/early detection (range: 84.1% – 98.5%) and primary prevention of cancer (range: 79.7% – 92.6%) in their implementation activities. Fewer programs focused on diagnosis (range: 34.8% – 76.5%) and palliation/end-of-life care (range: 44.9% – 85.3%). On average, NCCCP programs reported that approximately two-thirds of their implemented interventions were evidence-based (range: 60.4% – 67.5%), and more than 85% had at least one action plan annual objective in CDMIS with an evidence-based or promising practice source identified (years 4 and 5). Most programs reported having, or provided a copy of, a formal evaluation plan (range: 50.7% – 70.6%), and the majority of programs with evaluation plans addressed all NCCCP-required components (range: 62.2% – 72.9%). The component reported least often was addressing the potential effects of selected activities (range: 68.9% – 81.3%). While most NCCCP programs routinely monitored key population-based indicators for cancer control (year 3 range: 54.4% – 76.5%), few included these indicators as project period objectives in their CDMIS action plans in years 4 – 5. The most commonly included population-based indicator was colorectal cancer screening, reported by just over 40% of programs.
Table 2.
NCCCP Programs that met the indicator*

| Performance measure indicator | Year 2 (n=68), n (%) | Year 3 (n=68), n (%) | Year 4 (n=69), n (%) | Year 5 (n=69), n (%) |
|---|---|---|---|---|
| **The continuum of cancer care is addressed in the implementation priorities with their CCC coalitions** | | | | |
| Primary prevention | N/A | 63 (92.6) | 55 (79.7) | 57 (82.6) |
| Screening/early detection | N/A | 67 (98.5) | 58 (84.1) | 59 (85.5) |
| Diagnosis | N/A | 52 (76.5) | 24 (34.8) | 24 (34.8) |
| Treatment | N/A | 53 (77.9) | 35 (50.7) | 34 (49.3) |
| Palliation/end-of-life care | N/A | 58 (85.3) | 31 (44.9) | 33 (47.8) |
| Survivorship | N/A | 63 (92.6) | 45 (65.2) | 48 (69.5) |
| **Evidence-based interventions are used** | | | | |
| Average % of implemented interventions that are evidence-based† | 53 (60.4) | 55 (67.5) | 68 (64.0) | 69 (63.0) |
| At least one action plan annual objective with any evidence-based or promising practice source | N/A | N/A | 61 (88.4) | 63 (91.3) |
| At least one action plan annual objective with practice-based/program experience – your own program‡ | N/A | N/A | 35 (50.7) | 42 (60.9) |
| **Program evaluation is routinely conducted** | | | | |
| A formal written evaluation plan has been developed | 46/67 (68.7) | 48/68 (70.6) | 35/69 (50.7) | 48/69 (69.6) |
| Evaluation plan includes these components: | | | | |
| Stakeholder involvement | 45/45 (100) | 46/48 (95.8) | N/A | N/A |
| Data collection and analysis methods | 43/45 (95.6) | 46/48 (95.8) | N/A | N/A |
| How the goals/objectives link to outcomes | 41/45 (91.1) | 45/48 (93.8) | N/A | N/A |
| Potential effects of selected activities | 31/45 (68.9) | 39/48 (81.3) | N/A | N/A |
| Plans for communication and utilization of findings | 35/45 (77.8) | 41/47 (87.2) | N/A | N/A |
| All elements included | 28/45 (62.2) | 35/48 (72.9) | N/A | N/A |
| **Progress is demonstrated toward achieving preset goals for cancer prevention and control** | | | | |
| NCCCP programs routinely monitor these indicators (years 2–3) and action plan project period objectives include these indicators (years 4–5):§ | | | | |
| Adult smoking | 60 (88.2) | 52 (76.5) | 18 (26.1) | 17 (24.6) |
| Adolescent smoking | 55 (80.9) | 44 (64.7) | 9 (13.0) | 7 (10.1) |
| Obesity‖ | 59 (86.8) | 37 (54.4) | 5 (7.2) | 6 (8.7) |
| Breast cancer screening | 59 (86.8) | 51 (75.0) | 16 (23.2) | 10 (14.5) |
| Cervical cancer screening/human papillomavirus vaccination** | 58 (85.3) | 47 (69.1) | 14 (20.3) | 16 (23.2) |
| Colorectal cancer screening | 55 (80.9) | 48 (70.6) | 29 (42.0) | 31 (44.9) |
* Data are not available for every indicator throughout the entire time period; indicators not collected for a given year are denoted "N/A".
† Data presented are the mean % of all interventions that programs reported were evidence-based.
‡ These data were included to assess how many programs were using other, non-evidence-based sources.
§ In year 2, NCCCP programs reported data on each of these six indicators; the percentages presented are the programs that reported data for these indicators.
‖ Data for years 2 – 3 include adults only; years 4 – 5 include adolescents and adults.
** Years 4 – 5 include human papillomavirus vaccination objectives.
Abbreviations: CCC, Comprehensive Cancer Control; N/A, not applicable.
Fifty-two percent of NCCCP programs in year 4 and 63.8% in year 5 established and met at least one action plan annual objective that used an environmental or system change as a strategy (performance measure 9, Figure 3). Commonly addressed topic areas included tobacco, health disparities, access to care, breast or cervical cancer screening, colorectal cancer screening, and nutrition/physical activity/obesity.
Discussion
Results from these performance measure indicators reveal that, during the 2007-2012 project period, NCCCP programs were successful at developing diverse, active partnerships and routinely using cancer surveillance data to plan and implement cancer control interventions. However, throughout the funding cycle, not all programs consistently used evidence-based interventions, conducted formal evaluations of program activities, or linked efforts to population-based measures that demonstrated impact toward reducing the burden of cancer. For example, even within core infrastructure indicators, some Tribal and Pacific Island jurisdiction programs reported that they did not have adequate cancer surveillance data for their populations (data not shown) to fully assess the cancer burden and monitor their programs. Additionally, while most programs reported using evidence-based or promising practice-based interventions, around half of programs had at least one action plan annual objective in CDMIS that cited "practice-based evidence: your own program" as part of the evidence source for implementing an intervention. In some cases this option was selected along with other evidence-based sources, possibly reflecting a desire on the part of programs to acknowledge their own practice-based experience in implementing evidence-based interventions adapted for their own populations. While most programs reported receiving in-kind resources, the total amount peaked in project year 2 and declined in subsequent project years, which may affect programs' ability to plan and implement activities. The majority of programs did not use population-based health indicators as action plan project period objectives, even though various data sources to monitor health outcomes are available to most programs.
The decreasing trend in in-kind resources reported by CCC programs may indicate that some programs had fewer staff and donated supplies available from partners and the CDC-funded health department to implement cancer control efforts. However, because we did not collect data on the total amount of resources needed to implement the cancer plan, we cannot fully assess this finding. CCC programs are encouraged, but not required, to develop a resource plan, which may serve to promote the cancer plan, engage partners, articulate outcomes, and communicate the need for additional in-kind and leveraged resources.18
One challenge for the NCCCP performance measurement system is the decentralized nature of public health efforts and the question of which entities are ultimately accountable for community health.10 While public health departments may be held accountable for performance, in reality public health efforts are increasingly implemented with partners.11,19 Collaboration can be difficult because partners may have no financial incentive to demonstrate performance and often have their own priorities, which may not always align with those of the health department.19-22 From the outset, collaboration and leveraging shared resources were embodied in the definition of CCC, making accountability and evaluation of NCCCP efforts challenging.
In 2012, the NCCCP ended a five-year project period, and a new five-year project period through 2017 began. Many of the same performance measures have been carried over into this new project period and included in the logic model, but there is now a greater emphasis on collaborating with partners by forming coalition workgroups to implement interventions and on coordinating activities with other chronic disease programs, including those that focus on primary prevention.18 NCCCP awardees are still encouraged to address the cancer burden through environmental and system change strategies.23 For example, programs now include at least one project period objective in their action plan that addresses primary prevention of cancer and uses an environmental or system change strategy. CDC continues to emphasize using evidence-based interventions, and additional guidance and support are provided for interventions in NCCCP priority areas.
For a performance management system to be effective, use of performance measures must be tied to quality improvement efforts.11 Quality improvement in public health focuses on deliberate and defined improvement processes to achieve measurable improvements in the efficiency, effectiveness, performance, accountability, and outcomes of efforts.24 Public health services and systems research is a new field that developed out of a need to better understand how public health infrastructure affects population health outcomes.25,26 A key initiative of this movement is public health department accreditation, which emphasizes continuous quality improvement to ensure effective delivery of public health services.16,24 Similar to the NCCCP performance measures, domains of accreditation include the use of evidence-based practices and the conduct of evaluation. Emerging evidence suggests that local health departments that perform better on the core public health functions may have an increased impact on some community health outcomes.27
CDC continues to use performance measures to inform program design, better target technical assistance, drive public health translational research, provide needed insight into the NCCCP evaluation design, and guide NCCCP awardees to existing resources that may help them improve their programs. Results from performance measures have been incorporated into mid-year technical reviews of program progress and into detailed action plan reviews that use a team approach including staff from a variety of disciplines (e.g., program delivery, epidemiology, evaluation, and health services research). The aim of these reviews is to improve performance by gauging progress and helping programs select appropriate evidence-based interventions that promote efficient use of resources. As new funding opportunities are developed, results from performance measures will likely contribute to expectations for funded programs.
Understanding how CCC programs find and select evidence-based interventions is an ongoing area of study within the NCCCP.28 To aid NCCCP programs and their coalitions with evaluation, a domain that performance measure results indicate needs improvement, an evaluation toolkit and a cancer plan index planning tool have been developed and disseminated.29 These tools were used by one NCCCP program to successfully conduct an outcome evaluation of its cancer plan,30 and the findings may be used to refine future cancer plan implementation efforts. Additionally, an external performance measures advisory workgroup consisting of CCC program directors working in state/tribal/Pacific Island jurisdiction health departments was convened to provide input on data interpretation and considerations for future refinement of measures. Webinars and conference calls covering how CDC collects and uses performance measures continually remind CCC programs that performance measures are central to program monitoring and quality improvement efforts.
Future research may assess whether program performance among CCC programs is linked with desired health outcomes such as improved screening rates, reductions in cancer-related risk factors, and improvements in quality of life and health behaviors. Future research may also focus on which composite of indicators is associated with progress toward achieving health outcomes. Alternative study designs, such as those used in systems design research, along with statistical modeling may help address these questions.12 To achieve long-term cancer-related outcomes, sustaining high-functioning coalitions should remain a priority, and CDC’s national partners are providing technical assistance workshops to coalitions to address this issue.31-33 Determining optimal benchmarks so that programs are in a better position to achieve health outcomes may also need attention. As the literature on coalition functioning evolves, it may be reasonable to align some performance measures with other indicators of coalition effectiveness.34
Strengths and limitations
This study has several strengths and limitations. A major strength is that the use of CDMIS increases the quantity of data that can be collected from NCCCP programs, providing a level of detail that can enhance our understanding of how NCCCP programs function. The use of CDMIS also helps ensure that data are collected from awardees in a systematic fashion. One limitation is that, to meet the strict timelines of CDMIS deployment, not all performance measure indicators were included in the CDMIS application. Another limitation is that changing from a paper-based survey to web-based data entry may have introduced mode effects in the trend data, particularly for measures that are now queried from CDMIS action plans; therefore, trend results should be interpreted with caution. Third, performance measure data are self-reported by programs, and results may be influenced by social desirability bias. Fourth, performance measure indicators provide a general overview of how well NCCCP programs are functioning, but their lack of depth may obscure other issues that could hinder programs’ progress in achieving health outcomes; for example, the quality of a particular intervention (as opposed to whether it is evidence-based) or of an evaluation plan may be obscured.
Conclusion
Results from the NCCCP performance measures indicate that most programs have the core infrastructure in place with their coalition partners to implement cancer control interventions and educate stakeholders on effective environmental or system change strategies for cancer control. In the future, technical assistance efforts may focus on improving the ability of programs to effectively use evidence-based interventions, improving the quality of CDMIS data, conducting continuous program quality improvement through evaluation, and conducting outcome evaluation to monitor the effects of cancer control efforts on population-based measures such as adolescent and adult tobacco use prevalence, obesity prevalence, and colorectal cancer screening. Performance measurement is a cornerstone of continuous quality improvement, and efforts to monitor performance remain a priority within the NCCCP.
Supplementary Material
Acknowledgments
We would like to thank the comprehensive cancer control program directors and coordinators who provided us with valuable input on our reports and data interpretation through the performance measures consultation workgroup. We especially thank Chris Stockmyer, MPH, RD, for her dedication and commitment to our work on performance measures.
The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
Footnotes
Conflicts of interest and source of funding: We have no funding sources to declare or conflicts of interest to report.
References:
1. United States Cancer Statistics: 1999–2010 Incidence and Mortality Web-based Report. Department of Health and Human Services, Centers for Disease Control and Prevention and National Cancer Institute; 2013. www.cdc.gov/uscs. Accessed February 3, 2014.
2. Heron M. Deaths: leading causes for 2009. National Vital Statistics Reports. Vol 61. Hyattsville, MD: National Center for Health Statistics; 2012.
3. Edwards BK, Noone AM, Mariotto AB, et al. Annual Report to the Nation on the status of cancer, 1975-2010, featuring prevalence of comorbidity and impact on survival among persons with lung, colorectal, breast, or prostate cancer. Cancer. 2014;120(9):1290-1314. doi:10.1002/cncr.28509.
4. Centers for Disease Control and Prevention. Cancer survivors--United States, 2007. MMWR. 2011;60(9):269-272.
5. Underwood JM, Townsend JS, Stewart SL, et al. Surveillance of demographic characteristics and health behaviors among adult cancer survivors--Behavioral Risk Factor Surveillance System, United States, 2009. MMWR Surveill Summ. 2012;61(1):1-23.
6. Weaver KE, Forsythe LP, Reeve BB, et al. Mental and physical health-related quality of life among U.S. cancer survivors: population estimates from the 2010 National Health Interview Survey. Cancer Epidemiol Biomarkers Prev. 2012;21(11):2108-2117. doi:10.1158/1055-9965.EPI-12-0740.
7. Rochester PW, Townsend JS, Given L, Krebill H, Balderrama S, Vinson C. Comprehensive cancer control: progress and accomplishments. Cancer Causes Control. 2010;21(12):1967-1977. doi:10.1007/s10552-010-9657-8.
8. Abed J, Reilley B, Butler MO, Kean T, Wong F, Hohman K. Comprehensive cancer control initiative of the Centers for Disease Control and Prevention: an example of participatory innovation diffusion. J Public Health Manag Pract. 2000;6(2):79-92. doi:10.1097/00124784-200006020-00012.
9. U.S. General Accounting Office. Results-oriented government: GPRA has established a solid foundation for achieving greater results. Washington, DC: U.S. General Accounting Office; 2004.
10. DeGroff A, Schooley M, Chapel T, Poister TH. Challenges and strategies in applying performance measurement to federal public health programs. Eval Program Plann. 2010;33(4):365-372. doi:10.1016/j.evalprogplan.2010.02.003.
11. Landrum LB, Baker SL. Managing complex systems: performance management in public health. J Public Health Manag Pract. 2004;10(1):13-18. doi:10.1097/00124784-200401000-00003.
12. Institute of Medicine. For the Public's Health: The Role of Measurement in Action and Accountability. Washington, DC: The National Academies Press; 2011.
13. Centers for Disease Control and Prevention. Framework for program evaluation in public health. MMWR Recomm Rep. 1999;48(RR-11):1-40.
14. Institute of Medicine. Performance Measurement: Accelerating Improvement. Washington, DC: The National Academies Press; 2006.
15. Corso LC, Lenaway D, Beitsch LM, Landrum LB, Deutsch H. The national public health performance standards: driving quality improvement in public health systems. J Public Health Manag Pract. 2010;16(1):19-23. doi:10.1097/PHH.0b013e3181c02800.
16. Riley WJ, Bender K, Lownik E. Public health department accreditation implementation: transforming public health department performance. Am J Public Health. 2012;102(2):237-242. doi:10.2105/AJPH.2011.300375.
17. Rochester P, Porterfield DS, Richardson LC, McAleer K, Adams E, Holden D. Piloting performance measurement for Comprehensive Cancer Control programs. J Public Health Manag Pract. 2011;17(3):275-282. doi:10.1097/PHH.0b013e3181fd4d19.
18. Belle Isle L, Plescia M, La Porta M, Shepherd W. In conclusion: looking to the future of comprehensive cancer control. Cancer Causes Control. 2010;21(12):2049-2057. doi:10.1007/s10552-010-9666-7.
19. Woulfe J, Oliver TR, Zahner SJ, Siemering KQ. Multisector partnerships in population health improvement. Prev Chronic Dis. 2010;7(6):A119.
20. Himmelman AT. On coalitions and the transformation of power relations: collaborative betterment and collaborative empowerment. Am J Commun Psychol. 2001;29(2):277-284. doi:10.1023/A:1010334831330.
21. Brown LD, Feinberg ME, Greenberg MT. Measuring coalition functioning: refining constructs through factor analysis. Health Educ Behav. 2012;39(4):486-497. doi:10.1177/1090198111419655.
22. Chinman MJ, Anderson CM, Imm PS, Wandersman A, Goodman RM. The perceptions of costs and benefits of high active versus low active groups in community coalitions at different stages in coalition development. J Community Psychol. 1996;24(3):263-274.
23. Steger C, Daniel K, Gurian GL, et al. Public policy action and CCC implementation: benefits and hurdles. Cancer Causes Control. 2010;21(12):2041-2048. doi:10.1007/s10552-010-9668-5.
24. Riley WJ, Moran JW, Corso LC, Beitsch LM, Bialek R, Cofsky A. Defining quality improvement in public health. J Public Health Manag Pract. 2010;16(1):5-7. doi:10.1097/PHH.0b013e3181bedb49.
25. Mays GP, Halverson PK, Scutchfield FD. Behind the curve? What we know and need to learn from public health systems research. J Public Health Manag Pract. 2003;9(3):179-182. doi:10.1097/00124784-200305000-00001.
26. Riley WJ, Lownik EM, Scutchfield FD, Mays GP, Corso LC, Beitsch LM. Public health department accreditation: setting the research agenda. Am J Prev Med. 2012;42(3):263-271. doi:10.1016/j.amepre.2011.10.021.
27. Ingram RC, Scutchfield FD, Charnigo R, Riddell MC. Local public health system performance and community health outcomes. Am J Prev Med. 2012;42(3):214-220. doi:10.1016/j.amepre.2011.10.022.
28. Steele CB, Rose JM, Chovnick G, et al. Use of evidence-based practices and resources among comprehensive cancer control programs. J Public Health Manag Pract. 2014 [e-pub ahead of print]. doi:10.1097/PHH.0000000000000053.
29. Rochester P, Adams E, Porterfield DS, Holden D, McAleer K, Steele CB. Cancer Plan Index: a measure for assessing the quality of cancer plans. J Public Health Manag Pract. 2011;17(6):E12-E17. doi:10.1097/PHH.0b013e318215a603.
30. Alberg AJ, Cartmell KB, Sterba KR, Bolick S, Daguise VG, Hebert JR. Outcome evaluation of a state comprehensive cancer control plan: laying the foundation. J Public Health Manag Pract. 2013;19(4):300-307. doi:10.1097/PHH.0b013e31825d208c.
31. Butterfoss FD, Francisco VT. Evaluating community partnerships and coalitions with practitioners in mind. Health Promot Pract. 2004;5(2):108-114. doi:10.1177/1524839903260844.
32. Hohman K, Rochester P, Kean T, Belle-Isle L. The CCC National Partnership: an example of organizations collaborating on comprehensive cancer control. Cancer Causes Control. 2010;21(12):1979-1985. doi:10.1007/s10552-010-9644-0.
33. Wait K, Belle-Isle L, Moore A, Dignan M. A national evaluation to assess local factors influencing coalition performance. Paper presented at: 26th Annual Conference of the American Evaluation Association; 2012; Minneapolis, MN.
34. Zakocs RC, Edwards EM. What explains community coalition effectiveness? A review of the literature. Am J Prev Med. 2006;30(4):351-361. doi:10.1016/j.amepre.2005.12.004.