Translational Behavioral Medicine. 2014 Jan 11;4(1):34–45. doi: 10.1007/s13142-013-0242-z

Fiscal loss and program fidelity: impact of the economic downturn on HIV/STI prevention program fidelity

Joseph A Catania, M Margaret Dolcini, Alice A Gandelman, Vasudha Narayanan, Virginia R McKay

ABSTRACT

The economic downturn of 2007 created significant fiscal losses for public and private agencies conducting behavioral prevention. Such macro-economic changes may influence program implementation and sustainability. We examined how public and private agencies conducting RESPECT, a brief HIV/STI (sexually transmitted infection) counseling and testing intervention, adapted to fiscal loss and how these adaptations impacted program fidelity. We collected qualitative and quantitative data in a national sample of 15 agencies experiencing fiscal loss. Using qualitative analyses, we examined how program fidelity varied with different types of adaptations. Agencies reported three levels of adaptation: agency-level, program-level, and direct fiscal remedies. Private agencies tended to use direct fiscal remedies, which were associated with higher fidelity. Some agency-level adaptations contributed to reductions in procedural fit, leading to negative staff morale and decreased confidence in program effectiveness, which, in turn, contributed to poor fidelity. Findings describe a “work stress pathway” that links program fiscal losses to poor staff morale and low program fidelity.

KEYWORDS: Implementation, Fidelity, HIV/STI behavioral prevention, Economic downturn, Fiscal loss, Adaptations, Work strain

INTRODUCTION

Overview

The persistence of the HIV epidemic among at-risk populations in the U.S. has changed the direction of HIV prevention and stimulated efforts to augment prevention programs with new biomedical strategies (e.g., pre-exposure prophylaxis, improved antiretroviral therapy) and HIV testing approaches (e.g., at-home testing) [1]. There are limitations, however, to an overreliance on biomedical solutions to HIV/STI (sexually transmitted infection) prevention (e.g., cost, adherence [2, 3]). Consequently, the ability to sustain effective behavioral prevention programs is important. In this context, research on the translation of HIV/STI behavioral prevention programs to real-world settings is critical to our understanding of how to successfully sustain program effectiveness. One tenet of translation research is that maintaining program fidelity is essential to sustaining program effectiveness [4]. There are many threats to program fidelity, including the programmatic consequences of organizational adaptations to intra- and extra-agency strains (e.g., financial loss, staff turnover) (see refs. [4, 5]). With regard to HIV/STI behavioral prevention programs, prior work has examined adaptations occurring during the early phase of the translation process (see below), but less attention has been paid to challenges to sustainability.

The present report is based on a national qualitative study of departments of public health (DPHs) and community-based organizations (CBOs) that have experienced financial strain. We examine how agencies have adapted to these strains and the impact of these adaptations on compliance fidelity for RESPECT, an HIV/STI behavioral prevention program. RESPECT is one of the Centers for Disease Control and Prevention (CDC) Diffusion of Effective Behavioral Interventions (DEBIs) programs [6]. RESPECT is an individual-level program that can be delivered in two sessions of 20–30 min each and has demonstrated success in changing behavior and reducing STI prevalence [7–9].

Economic strain

National-level data from local health departments demonstrate that the current recession (2007–present) has had significant effects on public health programs in the United States. Studies have found that (a) a large majority of states have cut fiscal support for public health programs since 2008; (b) fiscal cuts have resulted in job loss, program termination, or program reductions (e.g., in service hours); and (c) among programmatic cuts, substantial declines in population-based primary prevention programs have occurred: 25 % of primary prevention programs were reduced or eliminated in 2008–2009, and approximately 12–17 % were cut annually thereafter [10, 11].

Although national-level data are not available, state-level evidence suggests that there have been macro-level reductions in fiscal support for HIV prevention specifically. California, for instance, has defunded HIV prevention to the point that current programs are terminating and the ability to adopt new HIV prevention efforts is challenged [12]. Moreover, a handful of studies have found that economic factors influence the selection of specific HIV prevention programs, their implementation, and their termination [13–15]. Less is understood about how programs that experience reduced fiscal support adapt to sustain programs while attempting to continue implementation and, in turn, how those adaptations affect program fidelity. The current work examines these issues.

Program fidelity

It is well known that program efficacy at the clinical trials phase of program development does not automatically translate into success in the real world [16]. Even when programs (such as RESPECT) are fielded with the aid of professionally conducted training programs, it may be difficult to sustain good program fidelity in the changing conditions of real-world settings [17]. In general, some program modifications or adaptations are expected and may be beneficial [18–20]. However, eliminating or substantially changing core program components (essential elements in producing behavior change) and program structures (e.g., session length, session order, delivery method) may diminish program efficacy [4, 5, 21].

Program fidelity in its most parsimonious form refers to the presentation of core program components (i.e., the elements essential to producing behavior change) [21]. Changes or adaptations in core program components may diminish program efficacy. Program fidelity may be further conceptualized along a number of dimensions, including compliance fidelity, competence fidelity, and structural fidelity (e.g., adhering to the required session number and length) [4, 5, 21]. Of these, compliance fidelity is fundamental to other forms of fidelity in that compliance with performing core components is essential to program effectiveness. That is, the program is assumed to be efficacious only if all core components are presented as intended by the program developers.

Antecedents of fidelity

Program fidelity may be influenced by a wide range of factors including antecedents (a) at the level of the counselor (e.g., client–counselor exchange or counselor–agency relationship), (b) at the program or intra-agency level (e.g., agency mandates governing session length), (c) at the local community and agency network levels, and/or (d) at the state or national level (e.g., institutions such as Congress and the NIH or resources [tax revenues]) [4, 5, 20–23]. Prior work examining HIV program fidelity has identified a number of fidelity challenges at the program level (e.g., training, client fit, procedural fit) [13, 14, 17, 24–29]. However, past studies have not examined how agencies adapt to economic strains and the influence of those adaptations on program fidelity.

Organizations may vary in the adaptations that are available to them in responding to economic strain. Agencies may adapt through a variety of means including direct fiscal adaptations or through indirect savings by making changes in personnel, client load, or the program proper. How agencies adapt to economic strain, however, may have differential effects on program fidelity. Agencies that adapt to economic strain by reducing their workforce, for instance, may adversely impact staff morale; poor morale may negatively influence job performance, and that, in turn, may reduce program fidelity. That is, program adaptations may impact staff-related variables that subsequently influence working conditions and, consequently, program fidelity. Alternatively, agencies that are able to "borrow" resources from other programs or agencies may avoid layoffs and negative effects on program fidelity (e.g., see [23]).

It is unclear, however, if different types of agencies (e.g., CBO vs. DPH) vary in the kinds of adaptations they select. A CBO, for instance, may lack a direct governmental financial resource to draw upon and may operate under different regulations and accounting rules than a DPH. In addition, it is unclear how adaptations in response to financial pressure affect program fidelity. We will explore variations in adaptations to economic strain by CBOs and DPHs and their influence on program fidelity. We employ a directed qualitative approach that hypothesizes a link between program adaptations and program fidelity. However, because this is an understudied area of inquiry, our broader goal is model development rather than formal hypothesis testing. That is, as a qualitative study, there is an opportunity to identify emergent concepts and relationships beyond those identified in the literature.

METHODS

Overview

Data for the current paper are based on a subsample of agencies obtained from the Translation into Practice (TIP) study, a national, mixed-methods survey of CBOs and DPHs delivering the RESPECT program. We examined only those agencies reporting a fiscal loss for their HIV/STI behavioral prevention programs and expressing motivation to continue RESPECT programming. We examined data from four sources: (a) brief fiscal surveys, (b) semi-structured interviews with agency directors, (c) semi-structured interviews with program delivery staff (counselors), and (d) quantitative client exit surveys. Westat Corporation collaborated on instrument development and conducted interview quality control, sampling, and data collection work. Institutional review boards at Westat Corporation and Oregon State University approved all protocols.

Procedures and samples

Agencies

The initial agency sample frame (n = 30) was constructed from listings of the Academy for Educational Development, the CDC Behavioral Training Centers, and the Kaiser Family Foundation's National HIV Prevention Inventory [30]. Quota sampling was used to fill four cells: urban DPH (n = 7) and CBO (n = 10); and non-urban DPH (n = 7) and CBO (n = 6). Agencies were from 11 states, representative of all regions of the U.S. (Names were deleted to assure confidentiality.) For the present analysis, only agencies reporting a decline in funding support for RESPECT were included (n = 15) (four urban and two rural CBOs; five urban and four rural DPHs). A fiscal loss was defined as any statement of reported declines in fiscal support for the RESPECT behavioral prevention counseling program; this qualitative measure is discussed below and in the Results. Agencies were conducting both single-session and two-session versions of RESPECT in conjunction with standard venipuncture and rapid-testing protocols.

Directors/fiscal officers

Agency directors (n = 15) were interviewed by telephone or in-person April–July 2010. Semi-structured interviews of 45–60 min were conducted under private conditions and were recorded (instrument available from the first author); transcripts were compared to the original digital recordings for quality checking. Directors for the current report were 57 % White, 21 % African American, 22 % other; 57 % were female; M age = 49 years; education = 57 % Master's degree, 36 % Bachelor's degree, 7 % Associate's degree; M time employed at current agency = 11 years; M time administering RESPECT = 3.2 years. Directors who had multiple roles completed hybrid interviews as appropriate. Directors provided access to other personnel including fiscal officers (n = 15), who completed a brief, self-administered interview or budget survey. No identifying data were collected from fiscal officers. Small incentives were provided ($25 gift certificate cards) to individual respondents, and agencies were allowed to keep a computer provided as an aid to tracking client surveys.

Program counselors

Program counselors were enumerated during director interviews. Priority was given to selecting counselors that administrators identified as being core program providers (i.e., provided the program on a regular basis). In agencies with four or fewer core counselors, we recruited and interviewed all counselors. In agencies with more than four core counselors, we randomly selected four for recruitment and interview. If agencies had insufficient numbers of core counselors, we then sampled from the pool of counselors who were providing RESPECT on an ad hoc basis. All counselors agreed to participate. Approximately 87 % of counselors were full-time employees, all were paid workers, and 84 % divided time between RESPECT and other tasks. Counselors had worked at their current agency for an average of 8.5 years and had been conducting RESPECT for approximately 4 years on average (n = 39; M age = 44 years; 67 % female; 44 % White, 21 % African American, 13 % Latino, 21 % other; education = 13 % ≤ High School, 26 % Associate's degree, 39 % Bachelor's, 18 % Master's). Interviews were recorded, and transcriptions were compared to the digital recording for quality checking. Small incentives were provided ($25 gift certificate cards).

Client exit surveys

We obtained an opportunistic sample of anonymous client exit surveys (n = 458) from 14 of the 15 agencies selected for the present paper, with a targeted quota of 30 surveys per agency (obtained M = 32/agency; all clients 18 years or older; M age = 31 years; 46 % female; 56 % White, 24 % African American, 16 % Hispanic, 5 % other; 52 % employed, 23 % unemployed, 25 % student/retired/other). We retained in our analysis the one agency that did not provide client exit data in order to examine its data on other facets of the study. Exit surveys were approximately 5 min in length, available in English and Spanish (94 % of respondents were primarily English-speaking), and completed under private conditions after the first RESPECT session. To ensure confidentiality, no written consent was obtained. Following informed consent, agency staff (trained by Westat Corporation) provided clients with the survey along with an information sheet and envelope and explained the procedures. Clients either completed the survey at the agency or took the survey elsewhere to complete. Clients placed completed surveys in envelopes without identifiers, sealed them, and returned them to agency staff or to a secure drop box. Occasionally, clients preferred to mail the envelopes themselves. Westat collected envelopes from agencies via mail or site visit. The full measure is available from the first author.

Measures

Funding loss

A series of open-ended questions was used to determine if there had been a programmatic fiscal loss; these questions were asked of either the agency director or fiscal officer or both (if agencies had both). For instance, fiscal officers were asked: (a) Have there been any significant changes in your agency's finances since the economic downturn of 2007? (b) Has financial support for behavioral prevention within your HIV/STI testing program changed at your agency since 2007? These questions were probed as needed to determine the degree to which fiscal conditions might have changed and, when possible, specifically how.

Adaptations

Adaptation in the present context refers to the changes or modifications that an organization may make in response to environmental stressors [31, 32]. In this context organizations could respond to changing fiscal circumstances by making any number of adaptations at the organizational level, program level, or resource level. Multiple sources of data were used to determine adaptations that agencies made in adjusting to changing fiscal conditions (e.g., counselor and executive director interviews). Whenever the issue of economic problems for the agency or program arose, interviewers probed for details on how the agency had responded to the problems and the agency's plans for program continuation or termination. Moreover, the interviews contained semi-structured questions to more directly address topics of adaptation and workload. For instance, directors were asked, among other related questions, If funding decreased…, (a) How has this change in funding affected the delivery of [RESPECT]? (b) Do you have plans to continue [RESPECT] beyond the current funding? (c) Has workload increased or decreased for individual employees or for particular groups of employees, such as delivery staff or supervisors [on RESPECT]? Counselors, for example, were asked, (a) Have there been significant changes in your organization since you started delivering [RESPECT] that affect program delivery? How have these changes affected the delivery of [RESPECT]? (b) Did your agency modify or change [RESPECT] to better fit with your agency's financial resources? What specific changes did your agency make, and who made the decisions? Why did your agency make these changes?

Counselor perceptions

Semi-structured questions were asked of counselors to assess perceived program utility and cost-effectiveness, the impact on themselves and other counselors of running the program, and the morale of the counselors, with probes on factors affecting morale. For instance, counselors were asked, (a) Do you feel that [RESPECT] is a useful intervention and practical for your organization to run? (b) Do the personnel working on [RESPECT] feel that it is a useful intervention and practical to deliver? (c) Do you think the program has been cost-effective? (d) Has [RESPECT] affected the morale of the program personnel? How has it affected morale, and what about the program do you think has been having this effect?

Client survey: overview

Client surveys assessed compliance fidelity from the clients' perspective based on what transpired during their first counseling sessions. We focused on the first session since it is fundamental to the counseling process, regardless of how and when test results are given. Moreover, by focusing on the first session, we standardized the fidelity assessment across counselors and agencies. The client survey also assessed behavior and background characteristics. Individual-level compliance scores were computed, and from these an overall agency fidelity index was derived (see below).

Fidelity index: individual-level scoring

We scored data from individual clients for each agency (range = 0–6), with a score of six indicating that the client reported receiving all the assessed elements of three core program components based on the three primary objectives of the RESPECT program [see 33]. Items and scoring are described in Table 1: (a) establishing rapport and conducting a risk assessment, (b) risk prevention analysis (identification of participant behaviors or conditions that facilitate or inhibit healthy actions), and (c) negotiating and agreeing on a risk-reduction plan. The "rapport/risk assessment" component aggregated a number of program elements and included items assessing (a) one-on-one counseling skills important for rapport building (to reflect a high standard for counselor skills, a "yes" response to all items was required to receive 1 point); (b) discussing health goals (1 point); and (c) discussing risk behaviors. Scoring was adjusted so that clients did not get additional points for discussing more than one risk factor; that is, they received one point for discussing either sex or drug risk behaviors or both. As an index, the total score is a simple summation, and there is no assumption that items are correlated (see [34]). Scoring at the individual level indicates whether elements of each program component occurred in broad strokes; this compensates for clients' limited recall of details and possible misreporting of risk behavior. Individual-level scores indicate whether the program components were performed; they address neither the quality of the work nor whether the program was presented to eligible clients (see below).

Table 1.

Fidelity index: survey items

Rapport Building and Risk Assessment [a, b] (3 points)
  Did you feel that the counselor listened to you?
  Did the counselor give you a chance to talk as much as you needed about your concerns and questions?
  Did the counselor understand what you had to say?
  Did you talk about your health goals? For example, health goals might include eating better, drinking less alcohol, or not catching diseases in the next month.
  In your session today, did you discuss your sexual behavior?
  Did you discuss the sexual things you have done recently that may have put you at risk for getting the AIDS virus or other sexual diseases?
  Did you discuss your drug use?
  Did you discuss how your drug use might put you at risk for getting the AIDS virus?

Risk Prevention Analysis [c] (2 points)
  Did you discuss the reasons why you sometimes have sex without a condom?
  Did you discuss how bigger things in life might influence your sexual behavior? [d]
  Did you discuss how bigger things in life might influence your drug use? [d]
  Did you talk about how to change things that might make it difficult for you to reach your health goals?

Negotiated Risk Reduction [c] (1 point)
  Did you agree on something about your sexual behavior that you can do in the next few weeks that would help lower your risk for getting the AIDS virus or other sexual diseases?
  Did you and the counselor agree on something that you can do in the next few weeks that would help lower your risk for getting the AIDS virus from drugs?

We took as positive evidence any discussions of sex or drugs in the context of the various components.

[a] Scoring: to reflect a high standard for counselor skills, a "yes" response to all three counseling-skill items was required to receive 1 point.

[b] Scoring: one point for discussing either sex or drug risk behaviors or both.

[c] Scoring: Risk Prevention Analysis, 1 point if any one of the first three items received a "yes" response, plus 1 point if "yes" to the health-goals item; Negotiated Risk Reduction, 1 point if either item received a "yes" response.

[d] Definition provided: “Bigger things might include losing a job, family problems, your friends, your lifestyle, or what you do for fun.”
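To make the point rules concrete, the sketch below scores one client's survey responses. This is a minimal illustration, not the authors' code: the dictionary keys are hypothetical labels standing in for the Table 1 items (True = a "yes" response), and the point logic follows footnotes [a] through [c].

```python
# Minimal sketch (not the authors' code) of the individual-level fidelity
# scoring described above; dictionary keys are hypothetical item labels.

def rapport_risk_assessment_points(r: dict) -> int:
    """Component (a), up to 3 points (footnotes [a] and [b])."""
    points = 0
    # 1 point only if ALL three counseling-skill items are "yes".
    if r["listened"] and r["chance_to_talk"] and r["understood"]:
        points += 1
    # 1 point for discussing health goals.
    if r["health_goals"]:
        points += 1
    # 1 point for discussing sex or drug risk behaviors (or both);
    # no extra credit for discussing more than one risk domain.
    if r["sex_risk"] or r["drug_risk"]:
        points += 1
    return points

def risk_prevention_points(r: dict) -> int:
    """Component (b), up to 2 points (footnote [c])."""
    points = 0
    if r["condom_reasons"] or r["bigger_things_sex"] or r["bigger_things_drugs"]:
        points += 1
    if r["change_goal_barriers"]:
        points += 1
    return points

def negotiated_risk_points(r: dict) -> int:
    """Component (c), 1 point if either negotiation item is "yes"."""
    return int(r["agreed_sex_plan"] or r["agreed_drug_plan"])

def individual_fidelity_score(r: dict) -> int:
    """Simple summative index, range 0-6; no assumption of correlated items."""
    return (rapport_risk_assessment_points(r)
            + risk_prevention_points(r)
            + negotiated_risk_points(r))
```

A return value of 6 indicates the client reported receiving all assessed elements of the three core components.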

Fidelity index: agency-level scoring

The individual scores do not take into account the policy directive that RESPECT should be delivered primarily to clients with some type of risk factor, nor do they reflect agency-specific compliance. Consequently, our agency-level measure was designed to index (a) overall compliance fidelity across counselors at a given agency, (b) if personnel are applying the program efficiently by targeting at-risk persons over low/no-risk persons, and (c) if personnel are keeping the program logic in mind in working with low/no-risk clients (i.e., knowing that only the risk-assessment component is reasonable to provide). Based on client reports, we categorized clients as low/no-risk versus at-risk, with risk defined broadly by answers to ten items assessing sex- and drug-related risk behaviors (available from first author). At-risk was defined by a “yes” response to any one of the sex- or drug-risk items. This definition is appropriate to the counseling context wherein the role of the counselor is to make a broad categorization of clients by risk when deciding who should receive the program. Agency-level scores were, therefore, constructed to reflect the percentage of clients who received appropriate core components of the program (risk assessment, risk prevention analysis, negotiated risk reduction plan). Thus, at-risk clients who scored 6 on their individual fidelity index, indicating they received all three components, and low/no-risk clients who reported only receiving the risk assessment component were categorized as "high-fidelity." We then computed the percentage of high-fidelity clients for each agency, and these percentages represent the agency scores (ranging from 0 to 100; 100 = consistent reports of high fidelity across clients). We anticipated that counselors would be able to deliver the intervention with a high degree of accuracy because (a) the RESPECT program is a relatively straightforward behavioral intervention and (b) the CDC/Prevention Training Centers have made a substantial effort to train agency personnel.
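As an illustration of the agency-level computation, the sketch below (again with hypothetical field names, not the authors' code) flags each client as high-fidelity or not and converts the flags to a 0–100 agency percentage. Here `score` is the 0–6 individual index from the previous section, and the `got_*` flags indicate which components the client reported receiving.

```python
# Minimal sketch of the agency-level fidelity index described above.
# Field names are hypothetical; assumes a non-empty client list.

def is_at_risk(risk_item_responses: list) -> bool:
    """At-risk = a "yes" to any one of the ten sex- or drug-risk items."""
    return any(risk_item_responses)

def is_high_fidelity(client: dict) -> bool:
    """Apply the program logic for who should receive which components."""
    if client["at_risk"]:
        # At-risk clients should receive all three components (score of 6).
        return client["score"] == 6
    # Low/no-risk clients should receive only the risk-assessment component.
    return (client["got_risk_assessment"]
            and not client["got_risk_prevention"]
            and not client["got_negotiated_plan"])

def agency_fidelity_score(clients: list) -> float:
    """Percentage of high-fidelity clients (0-100)."""
    return 100.0 * sum(is_high_fidelity(c) for c in clients) / len(clients)
```

Under this definition, an agency scores 100 only when every at-risk client reports receiving the full program and every low/no-risk client reports receiving the risk assessment alone.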

Data management and analysis

NVivo was used to facilitate coding of semi-structured interview data, and SPSS and STATA were used for management and analysis of client survey data. Content analyses were conducted to identify themes by using a team approach similar to that described by Stern [35]. Then, using a directed content analysis approach [36], we identified and defined initial codes. We continued to analyze for patterns until no new patterns emerged [37]. Two coders reviewed the themes to check for reliability of interpretation. When there were disagreements in interpretation, coders discussed and came to consensus. For the present analyses, we focused on interview data that addressed themes related to the impact of the fiscal condition of the agency/program on the delivery of the RESPECT program, adaptations of the program related to the changing fiscal environment, and the impact of these types of conditions on counselors' perceptions of the program and their work morale. As noted, we included one agency that failed to provide client data in order to include their data on adaptations.

RESULTS

Overview

Fifteen (50 %) of 30 agencies reported a decline in fiscal support for RESPECT over the past year. All agency administrators indicated that they supported the continuance of the RESPECT program, although delivery staff at some agencies were less supportive. Nine of the 15 agencies were DPHs, with the remainder (n = 6) being CBOs.

Fiscal loss

Although there was variation in the kinds of budgetary information agencies presented, for the present study we defined any reported decline in funding as evidence of program fiscal loss. In many instances, it was difficult for agencies to quantify the losses or clearly define program costs and expenses [38]. It is not surprising, then, that none of the agencies reported using CDC's online budget guidance materials for RESPECT/DEBIs. Based on director and fiscal officer interviews, the losses experienced by the agencies in our study appear to be extensive, and similar for both high- and low-fidelity agencies and for DPHs and CBOs. For example, CBO and DPH agency directors indicated as follows:

Our agency has lost all funding for prevention education activities…now [we are] only funded for the raw cost of HIV testing…all other HIV activities are now temporarily funded…and face complete elimination without an infusion of additional funds. [CBO; Agency N, Fidelity = 88]

The state…cut its prevention fund to the agency 53 %. Donations have been down 35 %. [CBO; Agency M, Fidelity = 85]

Previously, the state…provided a significant portion of the funding for behavioral prevention programs. Those funds have been slashed 80 % since 2007. [DPH; Agency A, Fidelity = 27]

We no longer contract with the state for the behavioral prevention programs…we do receive funding for testing…none for behavioral prevention. [DPH; Agency D, Fidelity = 45]

Adaptations

Agencies reported three general categories of adaptations to a loss in program fiscal support:

  1. Agency-level changes: reducing program staff size (six agencies) or the number of clients served (three agencies), increasing staff workload (six agencies), and decreasing workload (two agencies);

  2. Program-level changes: changes in program structure (one agency) or non-core programmatic components (two agencies) (e.g., reducing session length, eliminating quality assurance procedures); and

  3. Direct fiscal remedies: All adaptations had the goal of reducing agency-program costs; however, some of these are best described as direct fiscal remedies (six agencies). These included inter-agency resource sharing, intra-agency resource sharing (e.g., using HIV testing program resources to support prevention costs), and miscellaneous other fiscal remedies (e.g., rolling costs to the next fiscal year, obtaining external/governmental fiscal support).

Several patterns were observed among agencies' selected adaptations. First, all six agencies that reported agency-level adaptations involving increased staff workload were DPHs; the increase was typically created by staff reductions (in five of the six agencies) (see Table 2). Only one agency (DPH Agency G) reported increased workload with no staff reductions. Second, among the six agencies that used direct fiscal remedies, four were CBOs (see Table 2). CBOs were also two of the three agencies that enacted the agency-level adaptation of reducing the number of clients receiving the RESPECT program. This latter strategy was enacted in conjunction with either intra-agency resource sharing (Agency H) or staff reductions (Agency K). Client reductions would presumably lead to a net reduction in workload.

Table 2.

Agency type, fidelity score, and adaptations to economic downturn

ID   Type   Fidelity    Cut     Reduce    Workload    Workload    Program    Direct
            score [a]   staff   clients   increase    decrease    change     fiscal
A    DPH    27          ×       ×         ×
B    DPH    35          ×                 ×                       × [b]
C    CBO    41                                                               × [c]
D    DPH    45          ×                 ×
E    DPH    48          ×                 ×
F    DPH    49                                        ×           × [b]
G    DPH    57                            ×
H    CBO    60                  ×                                            × [c]
I    DPH    61          ×                 ×
J    DPH    79                                                               × [c]
K    CBO    80          ×       ×                     ×
L    DPH    81                                                               × [c]
M    CBO    85                                                               × [c]
N    CBO    88                                                               × [c]
O    CBO    n/a                                                   × [b]

Note that Agency O did not provide client data.

[a] Fidelity scores range from 0 to 100, with 100 indicating that all clients reporting a risk factor received the full program as measured here.

[b] Program adaptations: Agency B initiated express visits; Agencies F and O changed quality assurance procedures.

[c] Direct fiscal remedies: Agency C, interagency resource sharing; Agencies H, J, and N, intra-program resource sharing (i.e., shifted costs to HIV testing or another program); Agency L, increase in external funding from county; Agency M, deficit rolled into next year's budget.

Adaptations and program fidelity

Table 2 summarizes key agency adaptations arrayed by program fidelity scores. One agency (Agency O) did not provide client fidelity data and is not discussed here. Agency fidelity scores ranged from a low of 27 to a high of 88. We selected a cut-off point of two-thirds (i.e., score = 66) for organizing our presentation. Scores under 66 indicate that more than one third of the clients did not receive the appropriate level of the program. Our cut-off was based on inspection of the data, which showed distinct clusters of adaptations among agencies above and below this cut-off point.

Inspection of Table 2 reveals several patterns: The most noticeable is that five of the nine agencies with fidelity scores below 66 reported making staff reductions to offset funding losses (Agencies A, B, D, E, and I). Moreover, in six of nine agencies in this group, personnel reported experiencing an increase in workload. None of the agencies with fidelity scores >66 reported increasing workload. These patterns imply that factors contributing to higher workload may have some association with reductions in program fidelity.

Three agencies reduced the size of their client populations; one of these had high fidelity (Agency K; score = 80), and two had lower program fidelity (scores <66; Agencies A and H). The higher-fidelity agency decreased workload for counselors while the lower-fidelity agencies increased or saw no change in workload.

Of the five agencies with high fidelity scores (≥79; Agencies J through N), four reported efforts to sustain program resources through more direct fiscal adaptations (e.g., inter-agency sharing, shifting costs to other programs, and obtaining external support). The one agency (Agency K) that did not report making a direct fiscal adaptation reported both staff reductions and a reduction in the number of clients; consequently, counselors at this agency reported that workload had decreased.

Two agencies with lower fidelity scores also reported program changes. Agency B (DPH) initiated a reduction in program delivery time for clients ("express visits"), and Agency F (DPH) dropped some of their quality-assurance procedures, presumably to reduce workload for counselors and supervisors. Personnel at agency F reported a reduction in workload. (Note that Agency O also modified quality assurance, but did not provide fidelity data.)

Three agencies among those with lower fidelity scores (Agencies C, F, and H) did not report increases in workload. Agency C reported having made a direct fiscal adaptation (inter-agency resource sharing) in an effort to offset program losses. Agency F reported reducing workload, possibly by eliminating the workload associated with quality assurance procedures. Agency H cut client load and initiated some intra-agency program resource sharing to offset losses. These three agencies underscore the possibility that antecedents such as workload are not the only contributors to lower fidelity. Modifying quality-assurance components may plausibly have a direct impact on fidelity; however, other antecedents, not addressed here, may also be at play.

Counselor observations

Counselors from lower-fidelity organizations with heavier workloads and fiscal problems indicated that their work conditions were affecting morale. They also reported dissatisfaction with the amount of time spent with clients, noting that the program was impractical at their agency. This latter observation was typical at agencies where workload had increased.

In addition, we want to note that, although all counselors were willing to be interviewed and to discuss many aspects of the agency and the RESPECT program, some were less willing to discuss the issue of morale and working conditions. It is possible that some counselors were concerned about reporting negative working conditions for fear of reprisal, although we cannot directly address this question. Nevertheless, many agency counselors did address working conditions. For instance, a counselor from Agency D, which had cut staff and increased workload, notes a linkage between cutbacks, staff morale, reduced program–client time, and the perception that RESPECT was not practical to conduct:

R: With all the cutbacks, morale is just a little bit low…we wish we had more time [with clients].

I: Given the time that you have, would you say it's practical?

R: No.

[Agency D: Fidelity = 45]

Counselors' statements from two other low-fidelity agencies (B and I) illustrate the following points: (a) the combination of reduced staff size, increased workload, and reduced program delivery time may add to the perception that the program is not functional; and (b) some counselors perceive linkages between program funding loss, staff reductions, and staff morale.

R: The staff that work at this clinic don't feel that [RESPECT is] real valuable in this clinic.

[Agency B: Fidelity = 35]

R2: It's just less money, with less staff trying to do the same thing.

R3: You know, times are tough…cutbacks and all of that…morale is down.

[Agency I: Fidelity = 61]

Counselors from agencies with higher fidelity scores had more positive perspectives and morale (Agencies K, L, and M; fidelity scores = 80, 81, and 85, respectively). These counselors had positive statements to make about the program and/or staff morale and made no negative comments on workload, session length, or other program components. For instance, although Agency K cut staff, it also reduced the number of clients being seen and thereby reduced workload. Agencies L and M made no changes in staffing or client workload, but adapted through direct fiscal remedies; these remedies involved, respectively, obtaining more external funding and rolling deficits over to the next fiscal year.

R: I think [RESPECT has] made a lot of us more confident because of the structure.

[Agency K: Fidelity = 80]

I: [Is RESPECT a] useful intervention?

R: Yes, very.

I: [Has RESPECT] affected the morale of the program personnel?

R: [RESPECT] was…like a learning step for me…and I actually use RESPECT in my life.

[Agency L: Fidelity = 81]

R: I feel like RESPECT has [allowed my] other coworkers to feel better for what they're doing.

[Agency M: Fidelity = 85]

DISCUSSION

Overview

We identified three general categories of adaptations made by public health organizations confronted with reductions in financial support for RESPECT: agency-level, program-specific, and direct fiscal remedies. We found relationships between the type of organization (CBO or DPH) and specific adaptations, and, in turn, identified potential linkages between the types of adaptations, counselor stress/program perceptions, and program fidelity.

Organizational differences in adaptations

Agency directors universally felt positively toward the RESPECT program and supported continuing the program despite fiscal strains. The methods chosen for adapting to fiscal loss, however, varied by type of agency. Overall, DPHs were more likely to have made agency-level adaptations that involved administrative decisions to reduce program staff. Although an immediate and identifiable cost reduction, decreasing staff size resulted in increased workloads for program counselors. In addition, one DPH (Agency B) that reduced staff size appeared to have adjusted for a smaller staff by reducing session length. Presumably, fewer program counselors would be able to see more clients in a given day with shorter sessions. The impact of this adaptation, however, may be a reduction in procedural fit such that counselors are more rushed in their program delivery, and/or cut program components to fit a reduced session length (see [23]). According to the counselors, overwork and time constraints increased work stress (negative morale) and decreased confidence in program effectiveness. Moreover, the data suggest a linkage between these adaptation-related outcomes and poor program fidelity. Our results suggest that DPHs, particularly those that increased workload and/or reduced program-procedural fit, were more likely to be clustered among agencies with lower fidelity scores.

CBOs, on the other hand, were more likely to use direct fiscal remedies as a means of adapting to program fiscal loss. Direct fiscal remedies reflect an effort to leave the current work-program structure and workforce intact and, as a consequence, had less impact on program fidelity. In addition, CBOs infrequently cut staff positions; when they did, they reduced the number of clients counselors saw rather than trimming allotted program delivery time. The net result of these various adaptations was sustained, higher program fidelity. Indeed, CBOs were clustered among higher-fidelity agencies.

These findings raise the question of what organizational factors might contribute to particular patterns of adaptation and, consequently, to differences in the ability to sustain program fidelity. In general, agencies have a limited number of adaptations they can make in adjusting to reduced financial support. The agencies in our study reported seven variants on specific adaptations. DPHs, relative to CBOs, may have access to fewer of these options because DPHs are under governmental mandates and regulations. For example, DPHs function under relatively rigid constraints that limit their ability to restrict the number of qualified clients they serve. These public organizations may also operate under less-flexible accounting standards that inhibit resource sharing with other agencies and across programs. In short, DPHs may typically have little choice but to reduce program staff and non-personnel costs to save money while leaving client load relatively constant. The end result is work overload. The overload may be accommodated for short periods of time; however, the current economic crisis is in its fifth year. Over longer time spans, chronic work overload and related work stress may have corrosive effects on staff performance. Indeed, many DPH staff from agencies with low fidelity scores reported morale problems and less confidence in the program. Furthermore, staff reductions may force secondary adaptations that, for instance, reduce the time available for program delivery (e.g., as with Agency B). Reduced delivery time, as noted previously, leads to a reduction in compliance fidelity.

Although CBOs may have fewer adaptational constraints than DPHs, this does not mean they use a larger number of adaptive variants. Rather, CBOs opt for adaptations that are less programmatically disruptive. In particular, CBOs used direct fiscal remedies and avoided work overloads. These conditions led to maintaining good staff/client ratios and a more optimistic climate with regard to program delivery and effectiveness, resulting in better compliance fidelity.

Lastly, we note that a small minority of agencies did not fit these general patterns. One CBO (Agency C), for instance, reported a direct fiscal adaptation but nevertheless had very low program fidelity. Furthermore, one DPH (Agency G) with moderately low fidelity was found to have increased workload but had not reduced staff size. These cases remind us that other agency, personnel, and program factors besides those discussed here may influence program fidelity. The agency's history of utilizing quality assurance and training programs, the presence of a program champion, and the overall economic stability of the agency and service network are but a small number of additional factors that may impact program fidelity [23]. Moreover, factors aside from workload stress may link staff reductions and program fidelity. For instance, agencies might eliminate more expensive personnel with advanced degrees and training skills in the hope that lower-paid counselors have learned enough to sustain the program. A reduction in human capital over time, with no ability to recoup or maintain training programs, may further contribute to reductions in fidelity.

Theoretical implications

The present study is consistent with social–ecological models in public health that describe interdependent, circumjacent frameworks (e.g., see [39]). These models build on the early social ecological work of Bronfenbrenner [40, 41] that conceptualizes macro, exo, meso, and micro levels of interdependent influences on a given outcome. Although theorists in the translation field do not typically invoke Bronfenbrenner's terminology, prior theoretical work parallels this conceptual terminology (e.g., [19, 39, 42]). For instance, Shediac-Rizkallah and Bone [42] describe national economic (macro), community-environmental (exo), organizational/interprogram (meso), and program-specific and program-personnel (micro) factors that influence sustainability of community-based programs. The present study describes concepts and relationships that are congruent with this social–ecological framework.

Our study qualitatively describes relationships at multiple levels of a social–ecological system that influence program fidelity, and identifies antecedents at all four levels: (a) macro — national-level economic and program funding factors; (b) exo — community-level, interagency networks (e.g., regarding interagency resource sharing); (c) meso — organizational flexibility and agency decisions regarding program continuation, intra-program resource sharing, personnel layoffs and hiring, factors affecting workload, and size and characteristics of the client population; and (d) micro — changes in program structure, core components, and the job performance of counselors that directly impact program fidelity. Although a social–ecological framework is useful for organizing variables and relationships, considerable work is needed in this area to specify models and hypotheses within conceptual levels and to specify relationships between circumjacent levels.

Conceptualizing fidelity

The current work utilizes a definition of program fidelity based on the perspective that investigators can define a set of programmatic core components that are absolutely essential to program efficacy. This is a challenging task because clinical trials often bundle core components together in a single intervention, and it is unclear which components are absolutely essential and which could potentially be adapted or truncated. In terms of the present investigation, it should be noted that the core components that constitute our fidelity measure are presumed to be the essential components. Further, like other measures of program fidelity, we cannot measure every aspect of the program's core components; our methodology allows us to represent some elements but not all (see Limitations). Moreover, some investigators expand the definition of program fidelity to include a wide range of program-related features and antecedents of program compliance. For instance, some investigators include training programs and implementer skills in the definition of fidelity. Clearly, these are antecedents of program compliance and competence fidelity. That is, well-trained, highly skilled employees are necessary for all program components to be delivered with quality, but skills and training by themselves do not make up an intervention. Dane and Schneider [4] address the complexities of defining program fidelity, noting that there is a lack of consensus among investigators as to what constitutes program fidelity (note that Dane and Schneider use the term program integrity, which is synonymous with program fidelity).

Factors affecting program fidelity and sustainability

Scheirer's [23] review of health programs identified several general factors important to program sustainability including (a) maintaining fit between program features and organizational procedures over time, (b) staff perceptions that the program has benefits for clients and for themselves, and (c) interagency support. The present study extends this research to suggest that workload factors are also important when agencies are faced with economic stressors. Moreover, the present research suggests that organizational constraints in mission and accounting practices may moderate the relationships between economic stressors and selection of adaptational strategies. The adaptations selected, in turn, impact counselors and program procedures, eventuating in effects on program fidelity.

New directions

The field of HIV/STI prevention is undergoing a number of historical changes that reflect changes in policy and technology. At the policy level, there is a renewed interest in biomedical prevention [1] that may dampen enthusiasm for behavioral prevention programs. In terms of technological changes, the development of rapid testing and the extension of clinic-based rapid testing technology to over-the-counter, at-home testing pose new challenges to the sustainability of current HIV/STI behavioral prevention programs. Future research on program sustainability should be directed at understanding the impact of these policy and technological changes on HIV behavioral prevention programs.

Limitations

The present study has several limitations. The design of the study is cross-sectional. Since the economic downturn began in 2007, we might expect that some agencies have been making adaptations over a long period of time. Thus, fidelity scores may be a function of longer-term processes or more distant antecedents than we have accounted for. Longitudinal studies would be useful in this regard. As with all non-probability-based samples, generalizability is limited. By using multiple windows on the process of adaptation (directors, counselors, fiscal managers, and client surveys), we have sought to increase the internal validity of the study. Nevertheless, we acknowledge that we may not have identified the full range of adaptations (i.e., not reached saturation); nor can we comment on the prevalence of these adaptive strategies in the population of agencies in the U.S. The present study also does not quantitatively define fiscal loss. The magnitude of loss may impact decisions that agencies make in ways we could not determine. Nevertheless, the losses experienced by both high- and low-fidelity agencies appeared to be relatively severe. Moreover, our agency reports are consistent with the larger picture for prevention programming at the national level [10, 11].

In addition, our self-report client measure of counseling fidelity has limitations. It is not able to assess counseling strategies that clients may not recognize (e.g., a teachable moment), and we did not assess what transpired in session two for those agencies running the two-session version of RESPECT. The first session, however, is fundamental to the success of the second session, and some agencies only run a single-session version of RESPECT in conjunction with rapid testing. Thus, we had substantive reasons for focusing our assessment on the first session. We did not find an association between fidelity scores and whether agencies were running a one- versus two-session version of RESPECT. This may reflect the impact of the Prevention Training Centers' efforts to train agencies on both session formats (Prevention Training Centers are regionally located and provide training on all DEBIs). Moreover, the current measure provides coverage of a wide range of core program elements. Furthermore, alternative measures also have methodological problems. Observations of counseling sessions and counselor reports both have reporting biases (e.g., social desirability and/or Hawthorne effects) [4, 5]. However, studies using observation techniques tend to identify a wider range of fidelity problems and evidence of significant associations with hypothesized antecedents than do measures based on personnel reports of program fidelity (i.e., personnel measures may be more biased toward providing self-flattering reports). Hitt and colleagues [17] found that client exit interviews and observational methods were both sensitive to the effects of training programs designed to enhance program fidelity. Lastly, client fidelity indices have been found to be reliable reports of what transpires in related settings [26, 43, 44].

CONCLUSIONS

The economic downturn has reverberated throughout the public health sector, with substantial cuts taking place in behavioral prevention programming. Although the Prevention and Public Health Fund created by the 2010 Patient Protection and Affordable Care Act may slow this downward trend, the fund has already been significantly reduced, and continued challenges by those politically opposed to the Affordable Care Act may diminish its impact further (e.g., see [45]). Adequate funding is an essential matter. That said, the present study suggests that the short-term and long-term effects of these economic strains may be undermining the ability of agencies to conduct behavioral HIV prevention. Even among agencies that wish to continue behavioral prevention, the methods adopted to remedy economic strains may, at times, reduce program fidelity. As noted previously, to maintain program effectiveness, it is fundamental to sustain program fidelity [4]. Prevention programs must be able to adapt to changing circumstances, and organizations that have greater decisional flexibility may be better able to adapt in ways that sustain program fidelity. Translation research on the DEBI program is at an early stage of development. The present study hopes to inform the next stage of this national translation effort in terms of broader-based empirical studies and potential recommendations for making economic adaptations that avoid declines in program fidelity.

Acknowledgments

We would like to thank Lance Pollack, Ph.D. (University of California San Francisco), Kim Richards, Ph.D., Kathleen Conte, MA, Marcia Macomber, MS (Oregon State University) and Westat Corporation for assistance in data collection, extraction and analytical work, and Carla Cudmore and Natalie Tiexiera for help in manuscript preparation. This research was supported by a grant from NIMH MH085502 to Dr. Dolcini.

Footnotes

Implications

Practice: Agencies experiencing fiscal loss should consider the impact of adaptations on staff morale and program fidelity, and when possible, make choices that enhance implementation.

Policy: Economic factors have had a negative impact on implementation of behavioral HIV prevention programs, placing strain on the public health safety net.

Research: Implementation is influenced by economic factors and further research is needed to identify best practices for sustaining evidence-based programs in the face of fiscal loss.

References

  • 1.Office of National AIDS Policy. National HIV/AIDS Strategy: update of 2011–2012 federal efforts to implement the National HIV/AIDS Strategy. Washington D.C.: White House Office of National AIDS Policy; 2012.
  • 2.Morin SF, Yamey G, Rutherford GW. HIV pre-exposure prophylaxis. BMJ. 2012;345. [DOI] [PubMed]
  • 3.Kippax S, Stephenson N. Beyond the distinction between biomedical and social dimensions of HIV prevention through the lens of a social public health. Am J Public Health. 2012; 102(5): 789-799. [DOI] [PMC free article] [PubMed]
  • 4.Dane AV, Schneider BH. Program integrity in primary and early secondary prevention: are implementation effects out of control? Clin Psychol Rev. 1998;18(1):23–45. doi: 10.1016/S0272-7358(97)00043-3. [DOI] [PubMed] [Google Scholar]
  • 5.Mowbray CT, Holter MC, Teague GB, Bybee D. Fidelity criteria: development, measurement, and validation. Eval Pract. 2003;24(3):315–340. [Google Scholar]
  • 6.Collins C, Harshbarger C, Sawyer R, Hamdallah M. The diffusion of effective behavioral interventions project: development, implementation, and lessons learned. AIDS Educ Prev. 2006;18(4 Suppl A):5–20. doi: 10.1521/aeap.2006.18.supp.5. [DOI] [PubMed] [Google Scholar]
  • 7.Rhodes F, Stein J, Fishbein M, Goldstein R, Rotheram-Borus M. Using theory to understand how interventions work: Project RESPECT, condom use, and the integrative model. AIDS Behav. 2007;11(3):393–407. doi: 10.1007/s10461-007-9208-9. [DOI] [PubMed] [Google Scholar]
  • 8.Kamb ML, Fishbein M, Douglas JM, Jr, et al. Efficacy of risk-reduction counseling to prevent human immunodeficiency virus and sexually transmitted diseases: a randomized controlled trial. JAMA. 1998;280(13):1161–1167. doi: 10.1001/jama.280.13.1161. [DOI] [PubMed] [Google Scholar]
  • 9.Metcalf CA, Douglas JM, Jr, Malotte CK, et al. Relative efficacy of prevention counseling with rapid and standard HIV testing: a randomized, controlled trial (RESPECT-2) Sex Transm Dis. 2005;32(2):130–138. doi: 10.1097/01.olq.0000151421.97004.c0. [DOI] [PubMed] [Google Scholar]
  • 10.National Association of County & City Health Officials. NACCHO Survey of Local Health Departments’ Budget Cuts and Workforce Reductions. 2009, January. http://www.naccho.org/advocacy/upload/JobLossProgramCuts_ResearchBrief_final.pdf. Accessed Verified 1/31/13.
  • 11.National Association of County & City Health Officials. Local Health Department Job Losses and Program Cuts: findings from January/February 2010 Survey. 2010, May. http://www.naccho.org/topics/infrastructure/lhdbudget/upload/Job-Losses-and-Program-Cuts-5-10.pdf. Accessed Verified 1-31-13.
  • 12.Arnold EA, Galindo GR, Gaffney S, Steward WT, Morin SF. Examining the impact of the HIV-related state budget cuts: comparing Alameda, Fresno, and Los Angeles Counties: California HIV/AIDS Research Program; 2010: http://ari.ucsf.edu/programs/policy/state_budget_cuts.pdf. Accessed Verified 1-31-13.
  • 13.Kalichman SC, Hudd K, Diberto G. Operational fidelity to an evidence-based HIV prevention intervention for people living with HIV/AIDS. J Prim Prev. 2010;31(4):235–245. doi: 10.1007/s10935-010-0217-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Kegeles S, Rebchook G, Pollack L, et al. An intervention to help community-based organizations implement an evidence-based HIV prevention intervention: the Mpowerment Project Technology Exchange System. Am J Community Psychol. 2012;49(1):182–198. doi: 10.1007/s10464-011-9451-0. [DOI] [PubMed] [Google Scholar]
  • 15.Dolcini MM, Gandelman AA, Vogan SA, et al. Translating HIV interventions into practice: community-based organizations’ experiences with the diffusion of effective behavioral interventions (DEBIs) Soc Sci Med. 2010;71(10):1839–1846. doi: 10.1016/j.socscimed.2010.08.011. [DOI] [PMC free article] [PubMed] [Google Scholar]
16. Planas LG. Intervention design, implementation, and evaluation. Am J Health Syst Pharm. 2008;65(19):1854–1863. doi: 10.2146/ajhp070366.
17. Hitt JC, Robbins AS, Galbraith JS, et al. Adaptation and implementation of an evidence-based prevention counseling intervention in Texas. AIDS Educ Prev. 2006;18(4 Suppl A):108–118. doi: 10.1521/aeap.2006.18.supp.108.
18. Glasgow RE, Vinson C, Chambers D, Khoury MJ, Kaplan RM, Hunter C. National Institutes of Health approaches to dissemination and implementation science: current and future directions. Am J Public Health. 2012;102(7):1274–1281. doi: 10.2105/AJPH.2012.300755.
19. Wandersman A, Duffy J, Flaspohler P, et al. Bridging the gap between prevention research and practice: the interactive systems framework for dissemination and implementation. Am J Community Psychol. 2008;41(3):171–181. doi: 10.1007/s10464-008-9174-z.
20. Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol. 2008;41(3–4):327–350. doi: 10.1007/s10464-008-9165-0.
21. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: a synthesis of the literature. 2005. http://nirn.fpg.unc.edu/sites/nirn.fpg.unc.edu/files/resources/NIRN-MonographFull-01-2005.pdf. Accessed December 4, 2012.
22. Massatti R, Sweeney H, Panzano P, Roth D. The de-adoption of innovative mental health practices (IMHP): why organizations choose not to sustain an IMHP. Adm Policy Ment Health. 2008;35(1):50–65. doi: 10.1007/s10488-007-0141-z.
23. Scheirer MA. Is sustainability possible? A review and commentary on empirical studies of program sustainability. Am J Eval. 2005;26(3):320–347.
24. Harshbarger C, Simmons G, Coelho H, Sloop K, Collins C. An empirical assessment of implementation, adaptation, and tailoring: the evaluation of CDC's National Diffusion of VOICES/VOCES. AIDS Educ Prev. 2006;18(4 Suppl A):184–197. doi: 10.1521/aeap.2006.18.supp.184.
25. Galbraith JS, Stanton B, Boekeloo B, et al. Exploring implementation and fidelity of evidence-based behavioral interventions for HIV prevention: lessons learned from the Focus on Kids diffusion case study. Health Educ Behav. 2009;36(3):532–549. doi: 10.1177/1090198108315366.
26. Iverson EF, Balasuriya D, Garcia GP, et al. The challenges of assessing fidelity to physician-driven HIV prevention interventions: lessons learned implementing Partnership for Health in a Los Angeles HIV clinic. AIDS Behav. 2008;12(6):978–988. doi: 10.1007/s10461-008-9392-2.
27. Gandelman A, Dolcini M. The influence of social determinants on evidence-based behavioral interventions—considerations for implementation in community settings. Transl Behav Med. 2012;2(2):137–148. doi: 10.1007/s13142-011-0102-7.
28. Veniegas RC, Kao UH, Rosales R. Adapting HIV prevention evidence-based interventions in practice settings: an interview study. Implement Sci. 2009;4(1):76. doi: 10.1186/1748-5908-4-76.
29. Owczarzak J, Dickson-Gomez J. Provider perspectives on evidence-based HIV prevention interventions: barriers and facilitators to implementation. AIDS Patient Care STDS. 2011;25(3):171–179. doi: 10.1089/apc.2010.0322.
30. National Alliance of State and Territorial AIDS Directors, The Kaiser Family Foundation. The national HIV prevention inventory: the state of HIV prevention across the U.S. 2009. http://www.kff.org/hivaids/upload/7932.pdf. Accessed January 31, 2013.
31. Milliken FJ, Dutton JE, Beyer JM. Understanding organizational adaptation to change: the case of work-family issues. Human Resource Planning. 1992:279–295.
32. Rose RA. Organizational adaptation from a rules theory perspective. West J Speech Commun. 1985;49(4):322–340. doi: 10.1080/10570318509374205.
33. Kamb ML, Dillon BA, Fishbein M, Willis KL. Quality assurance of HIV prevention counseling in a multi-center randomized controlled trial, Project RESPECT Study Group. Public Health Rep. 1996;111(Suppl 1):99–107.
34. Streiner DL, Norman GR. Health measurement scales: a practical guide to their development and use. New York: Oxford University Press; 2008.
35. Stern PN. Are counting and coding a cappella appropriate in qualitative research? In: Morse JM, editor. Qualitative nursing research. Thousand Oaks, CA: Sage Publications; 1991. pp. 146–163.
36. Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–1288. doi: 10.1177/1049732305276687.
37. Corbin J, Strauss A. Basics of qualitative research: techniques and procedures for developing grounded theory. 3rd ed. Thousand Oaks, CA: Sage Publications; 2008.
38. Bernell S, Dolcini M, Catania J. The cost of implementing an evidence-based HIV/STI intervention in practice. Paper presented at: Fourth Annual NIH Conference on the Science of Dissemination and Implementation; 2011; Washington, D.C.
39. Golden SD, Earp JA. Social ecological approaches to individuals and their contexts: twenty years of Health Education & Behavior health promotion interventions. Health Educ Behav. 2012;39(3):364–372. doi: 10.1177/1090198111418634.
40. Bronfenbrenner U. Ecological systems theory. In: Vasta R, editor. Annals of child development. Six theories of child development: revised formulations and current issues. London: JAI Press; 1989. pp. 187–249.
41. Bronfenbrenner U. The ecology of cognitive development: research models and fugitive findings. In: Wozniak R, Fischer K, editors. Development in context: acting and thinking in specific environments. Hillsdale, NJ: Erlbaum; 1993. pp. 3–44.
42. Shediac-Rizkallah MC, Bone LR. Planning for the sustainability of community-based health programs: conceptual frameworks and future directions for research, practice and policy. Health Educ Res. 1998;13(1):87–108. doi: 10.1093/her/13.1.87.
43. Ozer EM, Adams SH, Lustig JL, et al. Can it be done? Implementing adolescent clinical preventive services. Health Serv Res. 2001;36(6 Pt 2):150–165.
44. Ozer EM, Adams SH, Lustig JL, et al. Increasing the screening and counseling of adolescents for risky health behaviors: a primary care intervention. Pediatrics. 2005;115(4):960–968. doi: 10.1542/peds.2004-0520.
45. Johnson TD. Prevention and public health fund paying off in communities. Nations Health. 2012;42(6):1–31.
