Rand Health Quarterly. 2012 Mar 1;2(1):17.

Analysis of the Cities Readiness Initiative

Christopher Nelson, Andrew M Parker, Shoshana R Shelton, Edward W Chan, Francesca Pillemer
PMCID: PMC4945299  PMID: 28083239

Short abstract

This article examines (1) the status of communities' capability to deliver medical countermeasures within 48 hours of a federal decision to deploy assets and (2) whether the Cities Readiness Initiative has improved communities' capability to meet that goal.

Abstract

The Centers for Disease Control and Prevention's (CDC's) Cities Readiness Initiative (CRI) provides funding, program guidance, and technical assistance to improve communities' ability to rapidly provide life-saving medications in response to a large-scale bioterrorist attack, naturally occurring disease outbreak, or other public health emergency. Focusing on both capacities and operational capabilities, the authors examine (1) the current status of communities' operational capability to meet CRI program goals related to delivering medical countermeasures within 48 hours of a federal decision to deploy assets and (2) whether there is evidence that CRI has improved communities' capability to meet the 48-hour goal.

Analysis shows that, overall, state capacity appears to be strong; CRI appears to have improved state capacity, but the data are not conclusive. Performance across Metropolitan Statistical Areas varies considerably, as does performance in particular functional areas. The authors also note that testing of operational capabilities has not been conducted at a large enough scale to measure readiness for the 48-hour scenario, recommending that jurisdictions be required to conduct drills at a larger scale. Other proposed recommendations include improving CDC feedback to jurisdictions, attempting to leverage assessments of non-CRI sites as a comparison group, and assessing program cost-effectiveness.


The Cities Readiness Initiative (CRI) provides funding, program guidance, and technical assistance to improve communities' ability to rapidly provide life-saving medications in response to a large-scale bioterrorist attack, naturally occurring disease outbreak, or other public health emergency. Currently, the program operates in each of the 50 states and involves the participation of local “planning jurisdictions” (which have diverse structures, ranging from single public health departments to multiple municipalities working together) in 72 of the nation's largest metropolitan areas. These areas correspond roughly to the federally defined Metropolitan Statistical Areas (MSAs).

In 2010, the Centers for Disease Control and Prevention (CDC) asked the RAND Corporation to conduct an analysis of CRI program data collected by the CDC over the course of the program in order to assess

  • the current status of communities' operational capability to meet the CRI program goal of delivering medical countermeasures to entire MSAs within 48 hours of the federal decision to deploy assets

  • whether there is evidence that CRI has improved communities' capability to meet the 48-hour goal.

The analysis focused on both capacities (i.e., plans, equipment, personnel, partner agreements, and protocols) and operational capabilities (i.e., the ability to use capacities in real-life operational contexts). At CDC's request, the study relied, where possible, on existing data to avoid burdening program participants with new data collection requirements. Capacities were assessed using data from a standardized written assessment tool—the Technical Assistance Review (TAR)—and capabilities were assessed using self-reported data from operational drills. These sources were supplemented by discussions with a small number of stakeholders in participating jurisdictions.

Capacity as Measured by TAR Scores Appears to Be Strong

The TAR measures the completion of a weighted composite of critical planning tasks identified by CDC. There are two closely related versions of the TAR: one for state health departments (the State Technical Assistance Review, or S-TAR) and the other for local planning jurisdictions in the participating MSAs (the Local Technical Assistance Review, or L-TAR).

State capacities. As of 2009–2010, all states' overall scores—the average of all 13 functional areas, weighted by each function's importance—were equal to or above the 79-percent threshold deemed acceptable, with the average state scoring 94 out of 100. Although performance was strong across all functional areas, performance was somewhat lower in three particularly critical areas: coordination and guidance for dispensing,* security, and distribution.
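
To make the scoring concrete, the following is a minimal sketch of how an overall score could be computed as an importance-weighted average of functional-area scores. The area names, scores, and weights are illustrative placeholders, not CDC's actual 13 functional areas or weighting scheme.

```python
# Hypothetical sketch: overall TAR score as an importance-weighted
# average of functional-area scores (each scored 0-100). Area names
# and weights are placeholders, not CDC's actual weighting scheme.

def overall_tar_score(area_scores, weights):
    """Return the importance-weighted average of functional-area scores."""
    total_weight = sum(weights[area] for area in area_scores)
    weighted_sum = sum(score * weights[area] for area, score in area_scores.items())
    return weighted_sum / total_weight

# Three of the 13 functional areas, with invented scores and weights
scores = {"distribution": 88.0, "security": 82.0, "dispensing": 91.0}
weights = {"distribution": 1.5, "security": 1.0, "dispensing": 2.0}
print(round(overall_tar_score(scores, weights), 1))  # -> 88.0
```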

MSA capacities. As of 2009–2010, planning jurisdictions in the average MSA achieved a score of 86 out of 100 (median: 89). (Note that there is no official threshold for the L-TAR.) However, there was more variability among local scores, which are aggregated into an MSA score, than among state scores. Performance was lower in the critical areas of training, exercise, and evaluation; security; and dispensing. After controlling for other factors, MSAs in higher-scoring states and MSAs in states with centralized public health systems performed better on the 2009–2010 TAR than MSAs in lower-scoring states and in states with less-centralized public health systems.
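
As an illustration of the kind of adjusted comparison implied by "after controlling for other factors," the following sketch regresses MSA TAR scores on the state score and a centralization indicator. The variable names and values are invented for illustration; this is not the study's actual model or dataset.

```python
# Hypothetical sketch of an adjusted comparison: regress MSA TAR scores
# on the state TAR score and an indicator for a centralized public
# health system. All values below are invented for illustration.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "msa_tar":     [86, 91, 78, 95, 83, 89, 74, 92],
    "state_tar":   [94, 96, 88, 97, 90, 93, 86, 95],
    "centralized": [1, 1, 0, 1, 0, 1, 0, 1],  # 1 = centralized system
})

model = smf.ols("msa_tar ~ state_tar + centralized", data=df).fit()
print(model.params)  # positive coefficients would match the reported pattern
```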

Operational Testing Has Not Been Conducted at the Scale Required to Test Readiness for the 48-Hour Scenario

Planning jurisdictions conducted and reported data on 1,364 drills in 2008–2009 and on 1,422 drills in 2009–2010. However, few jurisdictions have tested their capabilities at a large scale. For example, many jurisdictions have tested staff call-down procedures, but, in 2009–2010, nearly 90 percent of these tests involved 100 or fewer people, limiting efforts to estimate the capability to contact all needed staff during a large-scale emergency. Similarly, in 2009–2010, only 32 percent of drills that tested dispensing at points of dispensing (PODs) involved 500 or more clients. POD drills with more clients reported higher throughputs, suggesting that larger drills that place more stress on PODs might reveal greater countermeasure dispensing capability than smaller drills have shown.
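
A minimal sketch of this kind of drill-data tabulation follows. The record structure and field names are assumptions for illustration, not the actual CDC reporting schema.

```python
# Hypothetical sketch: summarize drill scale and POD throughput from
# self-reported drill records. Field names are assumed, not CDC's schema.

drills = [
    {"type": "call_down", "participants": 45},
    {"type": "call_down", "participants": 80},
    {"type": "call_down", "participants": 250},
    {"type": "pod_dispensing", "clients": 600, "hours": 4.0},
    {"type": "pod_dispensing", "clients": 120, "hours": 2.0},
]

call_downs = [d for d in drills if d["type"] == "call_down"]
small = sum(1 for d in call_downs if d["participants"] <= 100)
print(f"Call-down drills with 100 or fewer people: {small / len(call_downs):.0%}")

# Throughput (clients served per hour) for each POD dispensing drill
for d in drills:
    if d["type"] == "pod_dispensing":
        print(f"{d['clients']} clients -> {d['clients'] / d['hours']:.0f} per hour")
```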

Significant Growth in Capacities Suggests That CRI Has Had an Impact, but the Data Are Not Conclusive

State TAR scores have improved consistently (the median increased from 85 in 2006–2007 to 95 in 2009–2010),** and the variation among states' scores has narrowed. MSA-level TAR scores showed a similar pattern, with the median increasing from 52 in 2006–2007 to 89 in 2009–2010, although more variability remained among MSAs' performance than among states'. There was also anecdotal evidence both that CRI has improved responses to real incidents and that the program has had spillover effects, in the form of states using the TAR (and similar instruments) to assess non-CRI communities.
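
The year-over-year comparison amounts to tracking the median and spread of scores by program year, as in the following sketch. The score lists are placeholders, not the actual TAR data.

```python
# Hypothetical sketch: median and spread of TAR scores by program year.
# Score lists are placeholders, not the actual TAR data.

from statistics import median, pstdev

scores_by_year = {
    "2006-2007": [52, 68, 85, 90, 75],
    "2009-2010": [89, 92, 95, 97, 91],
}
for year, scores in scores_by_year.items():
    print(f"{year}: median {median(scores)}, SD {pstdev(scores):.1f}")
```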

The fact that greater “exposure” to CRI is associated with considerable increases in TAR scores is consistent with CRI having an effect on preparedness. However, the absence of data from a representative comparison group makes it difficult to rule out the possibility that other factors drove the increases. Thus, the findings must be regarded as suggestive but not conclusive.

Implications and Recommendations

The recommendations presented in this report focus on improving systems for measuring and improving capacities and capabilities at the local and state levels and on enhancing accountability to decisionmakers and the public.

Recommendation 1: Attempt to Validate the TAR

Given heavy reliance on the TAR as a measure of CRI readiness, it is important to (1) assess the extent to which TAR scores represent actual variations in communities' preparedness and (2) confirm that the TAR scores assigned to different states and planning jurisdictions are truly comparable.

Recommendation 2: Continue Refining the Drill-Based Measures by Requiring Jurisdictions to Conduct Drills at a More Realistic Size and Scale

In keeping with Homeland Security Exercise and Evaluation Program guidelines, CDC encourages jurisdictions to conduct increasingly difficult drills that lead to full-scale exercises. Perhaps because of the burdens associated with conducting large exercises, many of the drills are conducted at a smaller scale than would be required by the CRI scenario. Requiring that at least some call-down drills call the entire POD volunteer list, and that PODs in at least some dispensing drills more closely resemble those that would be implemented in a CRI scenario (in terms of their procedures, size, staffing, and throughput), would lead to more-realistic assessments of jurisdictions' capabilities.

Recommendation 3: Improve Performance Feedback to Jurisdictions and Develop Stronger Improvement Tools

Several stakeholders perceived the need for additional performance feedback to jurisdictions in convenient, easy-to-use formats and for tools that would further assist them in closing the performance gaps revealed through the TAR and drill-based measures. CDC should consider reviewing its current tools and feedback procedures in order to better understand the extent and sources of this perceived deficiency and should, as necessary, promote or revise existing tools and develop new ones.

Recommendation 4: Seek to Leverage Assessments of Non-CRI Sites as a Comparison Group

CDC should consider efforts to encourage states to collect data on a broader range of non-CRI communities to support systematic performance comparisons between CRI and non-CRI sites.

Recommendation 5: Assess Cost-Effectiveness

In the future, it would be useful to assess the program's cost-effectiveness (i.e., costs relative to benefits found by this and other studies) in order to inform discussions about whether the program's accomplishments justify the investments made.

Notes

* Local planning jurisdictions are mainly responsible for operating dispensing sites.

** The CRI program operates on a budget year that typically runs from August of one year to August of the following year. Thus, the period from 2006–2007 to 2009–2010 spans four years.

