Published in final edited form as: Eval Health Prof. 2012 Apr 19;36(1):73–92. doi: 10.1177/0163278712442536

Tailoring Evidence-Based Interventions for New Populations: A Method for Program Adaptation Through Community Engagement

Emily K. Chen, M. C. Reid, Samantha J. Parker, Karl Pillemer

Abstract

Evidence-based interventions (EBIs) are an important tool for community health practitioners, but there is often a mismatch between the population in which the EBI was validated and the target population in which it will be used. Methods of planned adaptation identify differences in the new target population and attempt to make changes to the EBI that accommodate these differences without diluting the program’s effectiveness. This article outlines an innovative method for eliciting ideas for program modifications and deciding on program changes. The Method for Program Adaptation through Community Engagement (M-PACE) uses systematic and detailed feedback from program participants to guide adaptation. The authors describe procedures for obtaining high-quality participant feedback and adjudicating recommendations to decide on program changes. M-PACE was developed as part of the adaptation of an evidence-based, arthritis self-management program for older adults. The application and results of the M-PACE method are presented using this case as an example.

Keywords: evidence-based interventions, program adaptation, cultural adaptation, community-based participatory research, self-management programs


Evidence-based interventions (EBIs) represent the gold standard for health promotion programs. Practitioners who attempt to use existing EBIs, however, are often confronted with a mismatch between the characteristics of individuals whom the intervention is intended to benefit (the target population) and the characteristics of individuals who received the intervention during its development and validation. Differences in the culture, language, age, and socioeconomic status of the target population can operate as barriers to successful implementation of an EBI (Kumpfer, Alvarado, Smith, & Bellamy, 2002; Solomon, Card, & Malow, 2006). Some advocates of program fidelity argue that adaptation of a program to accommodate these differences can threaten the effectiveness of an EBI (Calsyn, Tornatzky, & Dittmar, 1977; Elliott & Mihalic, 2004). In contrast, others assert that every implementation of a program is necessarily unique and that rather than asking “Should we permit reinvention?” the practical issue is “How and what is going to change?” (Bauman, Stein, & Ireys, 1991, p. 624; Hall & Loucks, 1978). Indeed, in the dissemination of EBIs in diverse populations, adaptation may produce more effective programs than the original unadapted EBIs, while encouraging a sense of local ownership because the program is culturally tailored (Castro, Barrera, & Martinez, 2004; Kelly, Heckman, et al., 2000).

Because program changes can be seen as inevitable and possibly beneficial, guidelines and protocols for program adaptation abound. Some guidelines offer general advice on what elements of programs are most amenable to modification, such as the CDC’s Red Light/Yellow Light/Green Light guidelines for the adaptation of sexually transmitted infection/HIV prevention programs (highly detailed adaptation “kits” based on this framework now exist for several EBIs; Centers for Disease Control and Prevention and Education Training and Research Associates, 2010; Education Training and Research Associates, 2011). Other guidelines outline step-by-step processes for the selection and adaptation of EBIs (Card, Solomon, & Cunningham, 2011; Krivitsky et al., in press). Although these methods differ in specifics, they all describe stages of the selection and adaptation process, including conducting community needs assessments, choosing an EBI to be modified, identifying differences between the population for which the EBI was designed and the new target population, deciding what to change about the EBI in response to these differences, and pilot testing the adapted program with diverse stakeholders (e.g., prospective participants, practitioners, and community partners).

A weakness of most adaptation methods is that individual steps in the process often lack the detail necessary for others to apply the method. We believe the study of adapting interventions has reached a point where it is now both possible and necessary to focus on articulating separate steps of the process, rather than offering general advice or broad models that cover many steps. This article focuses on two components of the adaptation process that require greater articulation: (1) identifying population differences and (2) deciding what to change about an existing program. We first review how existing adaptation methods approach these two objectives. We then report on guidelines that we developed to improve these components of the adaptation process, which we have termed the Method for Program Adaptation through Community Engagement (M-PACE). This method was developed as part of the adaptation of an evidence-based arthritis self-management program for older adults in New York City. (The larger research project also included outcome evaluations of the original and adapted programs, reported elsewhere; see Parker, Vasquez, et al., 2011.) The application of the M-PACE method is illustrated with examples from that research project.

Planned Adaptation and Stakeholder Input

A key task of program adaptation is identifying differences between the new target population and the community for which the EBI was originally developed. Surprisingly, existing methods do not emphasize feedback from program participants in designing adaptations, relying instead on program theory and recommendations of researchers and practitioners to determine program changes. Although soliciting participant feedback is suggested in several methods (Card et al., 2011; Kumpfer, Pinyuchon, Teixeira de Melo, & Whiteside, 2008; McKleroy et al., 2006; Smith & Caldwell, 2007; Wingood & DiClemente, 2008), it is given lower priority when making program adaptations. (See also Ringwalt & Bliss, 2006, for a program adaptation based on participant feedback.) In contrast, we believe there are strong arguments in favor of generating reactions and suggestions for change from actual program participants drawn from the population for whom the program is intended. Relying on the feedback of individuals who have experienced the EBI can identify important population differences (e.g., cultural, language, educational) that, if addressed, could measurably improve program fit.

An effective method for program adaptation based on participant feedback requires procedures for two activities: (1) exposing participants to the program and soliciting feedback and (2) evaluating participant recommendations to determine which program changes will be made.

Obtaining participant input

If participant feedback is to form the basis for program adaptations, community members must have some experience of the original (unadapted) EBI. Two existing adaptation methods expose participants to the unadapted program. Wingood and DiClemente (2008) propose showing potential participants a section of the original program, followed by interviews and a focus group to capture participants’ reactions and suggestions. This approach has two limitations. First, because these individuals are exposed to only a portion of the program, their reactions and suggestions are limited to the sections that were presented; such feedback may accurately capture participants’ general impressions, but it cannot yield specific reactions to the number of sessions, the progression of content, or application in daily life. Second, surveys or focus groups conducted after exposure to only one program session, as suggested by Wingood and DiClemente, are unlikely to generate highly detailed recommendations for program adaptation.

Kumpfer and colleagues (2008) call for the original EBI to be administered to members of the new target population. A “process evaluation” is then conducted to collect information about the program’s successes and the barriers to its success. This method emphasizes participant feedback as a basis for program adaptation, but it is unclear what methods are used to solicit reactions and recommendations and how these data actually guide decision making about program modifications. In later stages of Kumpfer and colleagues’ method, weekly feedback forms are administered to program leaders to collect suggestions for program adaptations, but responses from participants are not solicited.

In the M-PACE method, described below, we developed procedures for obtaining extensive participant feedback while encouraging full and unbiased responses. M-PACE involves: (1) exposing participants to the complete unadapted EBI; (2) collecting participant and instructor feedback after each session through individual interviews; and (3) conducting focus groups with participants and instructors at the end of the program. The combined use of individual interviews and focus groups captures the opinions of both more and less vocal members of the group and reduces the social desirability bias (Hollander, 2004) that can distort focus group responses. In this way, systematic and comprehensive data collection from each participant generates a pool of recommendations on which program changes are based.

Deciding on Program Modifications

The goal of existing adaptation models is to modify a program in a way that makes it most suitable for the target population without changing the program’s core components—that is, the elements of the intervention thought to be responsible for its effectiveness (Kelly, Heckman, et al., 2000; Solomon et al., 2006). By definition, EBIs are programs that produce stated outcomes when their protocol is followed accurately. Unfortunately, EBIs are often not tested to see which programmatic elements are essential to achieving beneficial outcomes (Elliott & Mihalic, 2004; Kelly, Sogolow, & Neumann, 2000). In the absence of experimental evidence that identifies core components, other methods have been suggested to determine what parts of a program are essential, including those guided by behavioral or social science theory and those that rely on developers of the EBI and/or experienced program staff (Backer, 2002; Kelly, Heckman, et al., 2000; Lee, Altschul, & Mowbray, 2008; McKleroy et al., 2006; Solomon et al., 2006). Despite these options, authoritative understanding of core components is sometimes not possible.

The M-PACE method solicits comprehensive participant feedback as the basis for program adaptation. Once such data are collected, however, a process is required to analyze, adjudicate, and incorporate reactions and suggestions that ultimately culminate in a revised program. Not all suggestions for program changes will be feasible or considered necessary by program adaptors, and some suggestions for change will likely be rejected (Krivitsky et al., in press). The available literature provides limited concrete guidance on how to balance the need for adaptation against the need for fidelity to the original EBI.

Across existing models, regardless of whether adaptation is guided by theory, practice wisdom, or, as here, stakeholder feedback, there is notable variation in who makes the final decisions about changes to the original EBI. This component of the adaptation process is the one most likely to raise issues of authority over decision making. On one hand, community members may have clear priorities and preferences regarding which elements of a program to adapt. These priorities may conflict with prior evidence on program effectiveness or with researchers’ or program developers’ beliefs about what the program requires. Especially when participant feedback is used to inform program adaptation, the question of who decides which suggestions are implemented and which are not may be particularly contentious. In the M-PACE method, we created an organizational structure and process so that decision making is shared among steering committee members, some of whom monitor the integrity of the EBI while others advocate for the changes voiced by participants.

The M-PACE Approach

M-PACE consists of five steps that collect and incorporate participant input to create an adapted EBI. We first describe each step in the method then illustrate the step (under the subheading M-PACE in practice) with a description of how it was applied to the adaptation of an arthritis self-management program (Arthritis Self-Help Program; ASHP) for use by older adults in three senior centers in New York City. The goal of the project was to understand what changes, if any, the ASHP course required to be relevant, well-received, and effective for older, non-Hispanic White, Hispanic (Spanish-speaking, mostly of Caribbean origin), and African American adults living in an urban setting. The ASHP was chosen as an intervention by a preexisting researcher–community partnership focused on older adults in New York City who experience pain due to chronic conditions such as osteoarthritis. (A full description of the intervention and related results are reported elsewhere; Parker, Chen, et al., in press.) We draw on our experience using M-PACE in this research project to comment on the strengths and challenges within the specific application context.

Step 1: Convene an Adaptation Steering Committee

Researcher and community project leaders recruit 10–12 individuals to serve on a steering committee that will oversee the adaptation process. This number was found to be optimal; having more members could make achieving consensus around program change difficult (e.g., by lengthening the adjudication process), whereas having fewer members could jeopardize the ability to gain input from relevant stakeholder groups. The steering committee should consist of researchers, implementers or practitioners, and community members who themselves would benefit from participating in the EBI or have a strong stake in having others benefit (e.g., spouses of individuals who would receive the intervention). It is critical that at least one member of the steering committee be familiar with the theory of change of the selected EBI as well as the research on its effectiveness. As discussed earlier, few EBIs have been empirically tested to identify which parts of the program are essential or the mechanisms by which the EBI produces results. However, many EBIs suggest core components based on theory or practice wisdom. A person affiliated with the creation or validation of the EBI, or someone with extensive experience using the EBI, would be an ideal member of the steering committee.

The M-PACE model differs from stakeholder or community advisory committees, which simply provide input to researchers who ultimately make the decisions. The M-PACE Steering Committee, in contrast, draws on principles of community-based participatory research (CBPR; Israel et al., 2003; Minkler & Wallerstein, 2003) to involve researchers, program developers, and community members as equal-status partners. Because M-PACE dictates that the steering committee make decisions by consensus, the equal status of all members of the committee should be made explicit in the earliest stages of committee formation.

M-PACE in Practice

The steering committee for the ASHP adaptation included the directors of the three senior center research sites, two program directors from the senior centers, a member of the Visiting Nurse Service of New York City, and an individual with chronic arthritis pain who regularly attended one of the partner senior centers. Experts on the EBI on the steering committee were a national Arthritis Foundation board member and two ASHP instructors. The four members of the research team (a geriatrician with research experience in pain management, a sociologist with expertise in the design of interventions for older populations, a health education researcher with particular expertise regarding the ASHP, and a gerontologist with experience delivering health promotion programs in community settings) were also members of the steering committee.

The steering committee met in person at least once each month. Monthly meetings were supplemented by meetings of an adaptation subcommittee (charged with implementing the changes accepted by the steering committee), as well as by occasional specific tasks performed by individual steering committee members.

Step 2: Implement the Unadapted Program to Generate Recommendations for Program Change

A fundamental feature of M-PACE is conducting the entire unadapted program with participants (rather than exposing them to sections of a program or just the program materials). Prior to initiating the program, the steering committee familiarizes itself with the original (unadapted) EBI to coordinate a high-quality implementation of the program. The steering committee should meet 1 or 2 times to review the program curriculum and instructional materials of the EBI to ensure that all members are familiar with the program’s protocol, core components, theory of change, and impact on outcomes.

The unadapted program is administered under the same conditions (recruitment, setting, timing, and personnel) that are planned for the adapted program and with fidelity to the original EBI. Participants recruited to take part in the unadapted program are solicited from the same population for which the adapted program is being developed. Recruitment differs slightly from normal program recruitment in that all participants must be willing (and able to provide informed consent) to participate in the research and evaluation component of the program, which includes the activities outlined below. If possible, modest compensation should be given to participants for time spent on program evaluation activities.

M-PACE in Practice

The original ASHP was conducted and evaluated 3 times consecutively at each senior center (with class sizes ranging from 10 to 14 participants) from July 2008 to March 2009, resulting in 113 total participants. Care was taken to ensure fidelity to the original program, including having a member of the research team serve as a cotrainer who monitored fidelity. The number of participants was sufficient to reach thematic saturation regarding suggestions for programmatic change. Participants were required to be 60 years of age or older, to have self-identified arthritis or an arthritis-related disorder, and to speak English or Spanish. Recruitment was done by senior center staff through various methods, including posting flyers, making verbal announcements, and placing notices in monthly senior center newsletters. In compensation for time spent on evaluation and feedback sessions, participants were paid up to a total of 70 dollars (10 dollars each for the six weekly phone interviews and the final focus group).

Step 3: Systematically Obtain Evaluations of Program Components

A key component of M-PACE is the systematic solicitation of feedback on specific program components using standard social science techniques. M-PACE uses survey research and focus group methodology to triangulate participant and facilitator responses to the unadapted program.

Survey

Obtaining extensive, high-quality feedback from participants requires soliciting responses to every major program component from all participants soon after they experience it. The steering committee uses an instrument to assess likes, dislikes, and reactions to all topics covered within a given program module. The survey should include closed- and open-ended questions to simultaneously encourage responses from all participants while capturing longer or more detailed reactions and ideas. Using some questions with scaled responses (e.g., On a scale from 1 to 10, how would you rate [program activity or lesson]?) allows summary measures to be combined across participants. The use of open-ended questions for qualitative analysis is also necessary, both to allow participants to raise any issues or ideas and to create a collection of tangible suggestions for program change.
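To make the idea of combining summary measures concrete, the following minimal Python sketch pools hypothetical 1–10 ratings by session and program component and reports a mean for each. The data, component names, and choice of software are illustrative assumptions, not part of the M-PACE protocol itself.

```python
from collections import defaultdict
from statistics import mean

# Each record: (participant_id, session_number, program_component, 1-10 rating).
# All values below are invented for illustration.
responses = [
    ("P01", 1, "action planning", 9),
    ("P02", 1, "action planning", 7),
    ("P03", 1, "action planning", 8),
    ("P01", 1, "exercise demonstration", 6),
    ("P02", 1, "exercise demonstration", 8),
    ("P03", 1, "exercise demonstration", 5),
]

# Pool ratings by (session, component) so they can be summarized across participants.
pooled = defaultdict(list)
for _participant, session, component, rating in responses:
    pooled[(session, component)].append(rating)

for (session, component), ratings in sorted(pooled.items()):
    print(f"Session {session}, {component}: mean = {mean(ratings):.1f} (n = {len(ratings)})")
```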

Shortly after each program session, an interviewer contacts each participant by telephone and administers the instrument. Telephone calls ideally should be audio tape-recorded so that responses to open-ended questions can be transcribed; at a minimum, careful notes should be taken. Although this amount of data collection may seem burdensome, frequent assessments close to the time of exposure increase the chances that participants will accurately recall feelings and reactions to program material. Suggestions will also be specific to elements of a program (e.g., a given session or activity), rather than being generalized over many program sessions.

Focus group

The second mode of participant feedback is a focus group with all participants immediately after the unadapted program has been completed. General questions about program improvement are asked to solicit feedback and ideas. An advantage of M-PACE is that specific suggestions from the weekly phone calls can be presented to the group to gauge how widely each idea is endorsed. The combination of both individual structured interviews with the focus group discussion helps ensure that more vocal individuals are not disproportionately represented, and that ideas from a small number of individuals (from weekly individual interviews) can be evaluated against the reactions of the larger group. The focus group should be recorded and transcribed, or detailed notes should be taken.
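Gauging endorsement requires only a simple tally. The sketch below counts hypothetical show-of-hands votes on suggestions drawn from the weekly interviews; the suggestions, group size, and counts are invented for illustration.

```python
# Hypothetical show-of-hands endorsement tally for suggestions raised in the
# weekly interviews. Suggestions and vote counts are illustrative only.
group_size = 12
endorsements = {
    "use larger print on handouts": 11,
    "meet twice per week": 4,
    "distribute local exercise class schedules": 9,
}

# Report each suggestion with its share of the group, most endorsed first.
for suggestion, count in sorted(endorsements.items(), key=lambda kv: -kv[1]):
    print(f"{suggestion}: {count}/{group_size} ({count / group_size:.0%}) endorsed")
```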

Program facilitator feedback

The facilitators of the unadapted EBI are also an important source of suggestions. Facilitator responses are collected through the same combination of structured weekly individual telephone interviews and a postprogram focus group with all instructors. The weekly survey instrument used for instructors parallels the one used for participants, focusing on how well the instructor thought the class liked, understood, and found useful the topics and activities within the program module.

M-PACE in Practice

Survey

Within a day or two of each class, research assistants telephoned participants to solicit feedback in a short interview (approximately 10 min) that was audio tape-recorded. Participants were asked what they liked most and least about the week’s class. They were also asked to rate how useful the class content was to them on a scale of 1–10. Open-ended questions about program content were also asked, such as “Please tell me what you thought about the section of the class that covered healthy eating” and “Did you find these materials helpful or not?”

Focus group

At the end of each 6-week program, a seventh meeting was held to conduct a focus group with participants of each class. Two researchers from the steering committee moderated the groups, asking open-ended questions to obtain additional suggestions for program modification. Questions posed to all groups included: “What would an ideal arthritis self-management program look like to you?” and “Do you have additional comments or suggestions about how to improve the program for older adults?”

Specific program suggestions garnered from the weekly individual participant interviews were raised to the group to see how widely each idea was endorsed. Participants were presented with each suggestion and then asked to vote by a show of hands to indicate whether they agreed with the proposed modification. All focus groups were audio tape-recorded.

Program facilitator feedback

The six ASHP instructors were telephoned weekly after each class to generate additional recommendations for program adaptation. The instructors were asked to review each activity completed during that week’s session and comment on the most and least successful aspects of the class. The research team also met with the instructors after all courses had been completed to solicit suggestions from the group as a whole. All interviews and focus groups were audio tape-recorded.

Step 4: Summarize Stakeholder Feedback

The contents of the weekly surveys and focus groups are compiled by designated members of the steering committee. The distribution of responses for each quantitative item is displayed in simple graphs or charts. Open-ended responses are transcribed and a full list of statements is compiled. This list is sorted into categories by theme, with a count of mentions for each theme. Results of the focus group are also summarized and included with the results of the individual surveys. These documents are distributed to all members of the steering committee. The steering committee may request additional analyses, which can be presented at a subsequent meeting.
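As an illustration of the theme-counting part of this step, the sketch below tallies mentions per theme from a list of hand-coded statements. The statements and theme labels are hypothetical, and any spreadsheet or qualitative analysis package could produce the same tally.

```python
from collections import Counter

# Each transcribed open-ended statement has been hand-coded with a theme label.
# Statements and themes below are invented for illustration.
coded_statements = [
    ("I could not read the handouts", "larger print"),
    ("More on what foods help arthritis", "more nutrition content"),
    ("The print was too small for me", "larger print"),
    ("Sessions ran long", "shorter sessions"),
    ("Cover diet in more depth", "more nutrition content"),
    ("Please enlarge the materials", "larger print"),
]

# Count mentions per theme to accompany the compiled list of statements.
mentions = Counter(theme for _statement, theme in coded_statements)
for theme, count in mentions.most_common():
    print(f"{theme}: {count} mention(s)")
```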

M-PACE in Practice

Two researchers on the steering committee synthesized stakeholder feedback. Participant interviews and focus groups were transcribed and entered into qualitative data analysis software (NVivo), allowing the researchers to identify and count suggestions and opinions that occurred throughout participants’ responses. Focus group transcripts were read by all members of the research team and a list of program recommendations was compiled. Suggestions that emerged in interviews with instructors were summarized by one member of the research team who reviewed the audio tape-recordings. The instructors reviewed and finalized the list of suggestions at a postprogram meeting with the research team.

Step 5: Adjudicate Program Feedback to Select Program Modifications

The steering committee meets to review all feedback and make choices about how to adapt the EBI. The committee evaluates each response to the open-ended questions, and every suggestion and idea raised in the focus group, against three criteria: importance, feasibility, and congruence. The importance of a suggestion is the degree to which it is perceived to be a change that could improve program effectiveness and reach in the new target population and address the concerns of multiple participants. Feasibility is approached from several perspectives, including those of the participants, representatives of the host site, and program instructors; for example, each suggestion is evaluated on how burdensome the change would be to participants, instructors, or program sponsors. Each idea for program adaptation is also judged for congruence, that is, whether it works with, works against, or does not interfere with the core components of the EBI.

Suggestions for change that fall outside of the scope of the EBI, such as addressing specific concerns about weight loss during an arthritis self-management course, may challenge the steering committee to be clear about the purpose of the selected intervention. The goal of adaptation is to work within an existing EBI to meet any unique needs or desires of a specific community, not to invent a new program based on every participant concern. After evaluating each suggestion using these criteria, the steering committee seeks consensus on whether to adapt a specific element of the program in response to a recommended change. When consensus to accept a suggestion is not obtained, no change is made. Careful notes should be taken to document the rationale behind the adoption or rejection of each suggestion.
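Schematically, the adjudication rule can be expressed as follows. This sketch treats the three criteria as simple yes/no judgments and models consensus as a unanimous vote, the operationalization the ASHP steering committee adopted (described in the application below); the data structure and example suggestion are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    important: bool  # could improve effectiveness or reach for the new population
    feasible: bool   # acceptable burden on participants, instructors, host sites
    congruent: bool  # works with, or at least does not disturb, core components

def adjudicate(suggestion: Suggestion, votes: list[bool]) -> bool:
    """Accept a change only if all three criteria hold and no member objects."""
    meets_criteria = suggestion.important and suggestion.feasible and suggestion.congruent
    return meets_criteria and all(votes)

s = Suggestion("distribute local exercise class schedules", True, True, True)
print(adjudicate(s, [True] * 10))           # unanimous committee: change accepted
print(adjudicate(s, [True] * 9 + [False]))  # one objection blocks consensus
```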

The success of this stage of program adaptation depends on the diverse perspectives represented on the steering committee and on the mechanism of consensus. Researchers, program developers, or experienced users of the EBI should be mindful of whether suggested program changes interfere with, bolster, or do not interact with the core program components that are thought to make the program effective. Steering committee members who represent potential program users are likely to have insight into participants’ concerns about the program that are not apparent to researchers and program developers.

After the suggestions are adjudicated, the list of accepted program changes is compiled. Although some accepted suggestions for program change may involve minimal effort (e.g., make the print size of handouts larger), the addition or deletion of content may present a challenge for the steering committee. If many additions are accepted, content may also have to be removed if lengthening sessions or increasing the number of meetings is not feasible for the host site or desirable for participants. Like all modifications, removing content should be done with care, using the same consensus-based process with which stakeholder feedback was adjudicated. If additional materials are to be developed for the adapted program, or implementing a revision (e.g., reproducing all handouts for a low literacy consumer) requires extensive work, several steering committee members (including a person with expertise in the EBI) might form a subcommittee to perform the tasks. The completed adapted program is finally reviewed by all steering committee members for approval.

M-PACE in Practice

Steering committee members met in person on two separate occasions to review and adjudicate all suggested program changes. One week before the meeting, documents summarizing the interviews and focus groups with participants and instructors were distributed via e-mail to all committee members, who were asked to review the documents in advance.

To adjudicate the recommendations, the steering committee discussed each suggestion then voted on the change based on an evaluation of its importance, feasibility, and congruence. The steering committee had previously decided that a unanimous vote was needed to adopt a recommended change, so that a strong objection from any individual could block consensus and prevent a change from being made. Although individual steering committee members had different expertise, experience, and opinions about the needs of an adapted program, they agreed that a modification must simultaneously meet all three criteria to be adopted.

Perceived importance was considered because some recommendations resonated with the experience of the steering committee in working with the target population. Even though some recommendations were infrequently endorsed, their relative importance compelled the steering committee to accept them. For example, only two participants recommended distributing information about local exercise classes as part of the course. This recommendation was deemed to be a potentially important addition to the program (as well as being feasible to implement and congruent with program theory) and was accepted by the steering committee.

Feasibility was considered from the perspective of instructors, participants, host sites, and program experts. Suggestions to add new topics to the curriculum that would require extensive teaching time were rejected by program experts as not feasible, because adding these items would have required other more important topics to be covered in less detail (or omitted altogether) to make time for a new module. Feasibility of suggestions was also judged by senior center staff on the steering committee, who understood the limitations of community facilities to host ASHP. For example, several participants suggested that the class meet twice a week. However, this recommendation was deemed impractical because host sites are often heavily scheduled with other activities, making space for the additional meeting time unavailable.

Congruence with program theory and the original evidence-based program was the third critical concern for the group. The initial steering committee meetings that reviewed the EBI also included discussion of fidelity and the importance of retaining core program components during adaptation. The hypothesized core components and mechanisms of the ASHP are summarized by Lorig and Holman (2003), using the term core self-management skills. These skills are problem solving, decision making, resource utilization, partnering with one’s health care providers, and “taking action,” the act of executing a planned behavioral change (Lorig & Holman, 2003), all of which are taught or practiced in the ASHP. Although these individual skills are thought to be necessary for positive outcomes, the underlying mechanism for behavior change is thought to be self-efficacy, that is, a person’s confidence in his or her ability to plan and carry out actions (Lorig & Holman, 2003; Lorig, Ung, Chastain, Shoor, & Holman, 1989). The ASHP includes five strategies to enhance efficacy: (1) weekly action plans, (2) feedback from other participants and record-keeping, (3) modeling of efficacy behaviors by peers and instructors, (4) reinterpretation of symptoms through education about the linkages of pain with depression and fatigue, and (5) persuasion by peers and instructors to plan and accomplish realistic goals (Lorig, 2002).

Steering committee members embraced the importance of congruence with program theory and, for each recommendation, considered whether or not it reinforced the central message of self-efficacy that the ASHP promotes. A suggestion was considered to be congruent if it clarified or enhanced core components of the ASHP, or if it was thought to be a neutral addition or deletion that would not interfere with these programmatic elements.

For example, the adapted program allowed instructors to limit individual sharing to three to five people rather than all participants if sharing was taking too much time. Although individual sharing of the past week’s challenges and achievements supports problem solving and strengthens the development of self-efficacy, both important processes in the ASHP (Lorig & Holman, 2003), the steering committee honored the feedback that repetitious or verbose sharing detracted from the class experience. The steering committee did not eliminate individual sharing, but chose a middle path between participant preferences and the original ASHP protocol.

Each of the two adjudication meetings lasted approximately 4 hr. The second half of each meeting focused on how to operationalize the accepted recommendations. For example, to address the recommendation that the course provide additional attention to diet and nutrition, the steering committee decided that supplemental materials be created and distributed as handouts. An implementation subcommittee (composed of two community agency members, one researcher, and one program content expert) volunteered to continue meeting in order to make the required changes to program materials. The full steering committee met a third time (by telephone) to review all changes made to the program, including content additions, content deletions, and program delivery changes, and to accept the complete adapted program.

Discussion

The M-PACE method, as applied in the adaptation of the ASHP for racially diverse older adults in New York City, was successful in its ability to elicit recommendations from participants and to provide a process for adjudicating and implementing stakeholder feedback. Soliciting ideas in both individual and group formats provided multiple opportunities for participants and instructors to raise reactions and ideas. Regular data collection through weekly interviews also yielded more detailed and session-specific feedback than would have been possible from a single interview at the end of the 6-week class. Consistent with CBPR principles, creating a steering committee that involved community members and researchers as equal-status partners and used a consensus procedure proved to be an effective method of adjudicating program modifications.

This implementation of M-PACE benefited from a highly motivated and dedicated steering committee. Two conditions helped make this possible. First, steering committee members drawn from agency staff participated as part of their paid employment, so the time spent in meetings and working on this project was part of the normal work day. Steering committees whose members are not compensated for their time may find it challenging to focus intensively on the data analysis and consideration of suggestions that M-PACE requires. Second, many steering committee members had participated in a related community–researcher partnership and had preexisting relationships of mutual respect and trust. Steering committees newly formed to conduct a program adaptation will need time to build the rapport that was already in place in the example presented above.

Intensive data collection from so many program participants and compensation for staff both add to the cost of the method. Program participants were compensated, if modestly, for the time spent being interviewed. Appropriate percentages of staff time were also paid by the federal grant that funded this research. Compared to a program adaptation that is done by one or two individuals using no stakeholder feedback, we acknowledge that M-PACE may be a more costly method for determining how to adapt an EBI. Because cost certainly affects the feasibility of undertaking program adaptation, future research should estimate the cost of program adaptation in light of outcomes associated with the adapted program.

The EBI adapted using this method was a 6-week course that was implemented a total of 9 times to collect extensive stakeholder feedback. The necessity of running the full unadapted program and intensively collecting feedback makes M-PACE most suitable for EBIs of shorter duration that focus on a single user group. For example, soliciting feedback from all participants of a school-based behavioral intervention that uses teacher training, student skill-building, and family counseling would be triply resource-intensive. In addition, the ASHP was available in English and Spanish, making it possible to administer the program to participants in an entirely unadapted form.

M-PACE was developed for the adaptation of the ASHP, an EBI for which the core components have been identified. We believe that M-PACE could be successfully used in the adaptation of EBIs without previously identified core components, provided that the steering committee determine a provisional set of essential program elements before undertaking adjudication of stakeholder feedback. Useful advice for determining a program’s core components is offered by Card and colleagues (2011). However, we acknowledge that the current research does not address this case and M-PACE does not include a method for determining core components.

Although increasing attention has focused on the development of program adaptation tools (Krivitsky et al., in press), no gold standard strategy for adaptation currently exists. Moreover, there are many ways in which an adapted program could be considered an improvement over an original program. One adaptation method might prove superior in improving the user experience so as to boost attendance while retaining outcome gains. Another method might be most suitable for adapting programs to new settings or for determining the minimum dosage required for sustained improvement. Indeed, it is likely that certain approaches may work well in some settings and with certain EBIs, but not in (or with) others. To advance understanding of how EBIs can be adapted for local circumstances and improved according to clearly stated goals, future research is needed that compares different adaptation methods, with outcomes that include cost and other relevant endpoints such as program reach. The field must also consider how to quantify the benefits of adapting an existing program. For example, research related to the current project compared participant outcomes from the unadapted and adapted ASHP curricula, finding that the adapted program produced similar improvements in outcomes but had better attendance than the unadapted program (Reid, Kwon, Parker, Chen, & Pillemer, 2011). Efforts such as these could inform a dialogue within the field about which types of adaptation approaches are best suited for specific settings, EBIs, and desired program improvements.

A growing consensus suggests that adaptation with fidelity is possible, and there are now several models for the comprehensive selection, evaluation, and adaptation of EBIs (Card et al., 2011; Krivitsky et al., in press). Moreover, systematic, planned adaptation of EBIs for new target populations has the potential to extend the reach of evidence-based public health. We developed M-PACE for researchers, practitioners, and program developers who wish to base the adaptation of an EBI on stakeholder feedback. By answering Solomon and colleagues’ (2006) call for developing “service provider-focused adaptation tools” (p. 183), researchers and practitioners can continue to contribute to the body of intentional and systematic adaptation methods that will further the goals of evidence-based practice. Our experience with M-PACE suggests that focusing intensively on, and innovating within, individual steps of the adaptation process can result in detailed and therefore actionable guidelines for program modification. The field has advanced to the point where empirically based guidelines that provide sufficient detail to ensure replicability are needed for individual phases of program adaptation, so that programs can retain or enhance their reach and effectiveness while maintaining fidelity.

Acknowledgments

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship and/or publication of this article: This research project was supported by grants from the National Institute of Nursing Research (R21NR010200) and the National Institute on Aging: An Edward R. Roybal Center Grant (P30AG022845).

Footnotes

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

  1. Backer TE. Program fidelity and adaptation in substance abuse prevention. Rockville, MD: Department of Health and Human Services; 2002. (Conference ed.).
  2. Bauman LJ, Stein REK, Ireys HT. Reinventing fidelity: The transfer of social technology among settings. American Journal of Community Psychology. 1991;19:619–639. doi:10.1007/BF00937995
  3. Calsyn R, Tornatzky LG, Dittmar S. Incomplete adoption of innovation: The case of goal attainment scaling. Evaluation. 1977;4:128–130.
  4. Card JJ, Solomon J, Cunningham SD. How to adapt effective programs for use in new contexts. Health Promotion Practice. 2011;12:25–35. doi:10.1177/1524839909348592
  5. Castro FG, Barrera M Jr, Martinez CR Jr. The cultural adaptation of prevention interventions: Resolving tensions between fidelity and fit. Prevention Science. 2004;5:41–45. doi:10.1023/b:prev.0000013980.12412.cd
  6. Centers for Disease Control and Prevention and Education Training and Research Associates. Promoting science-based approaches: Adaptation guidelines. 2010 Apr 1. Retrieved from http://www.cdc.gov/TeenPregnancy/Docs/AdaptationGuidelines.docx
  7. Education Training and Research Associates. Making adaptations to science-based pregnancy and STD/HIV prevention programs. 2011. Retrieved from http://www.etr.org/recapp/index.cfm?fuseaction=pages.AdaptationsHome
  8. Elliott DS, Mihalic S. Issues in disseminating and replicating effective prevention programs. Prevention Science. 2004;5:47–53. doi:10.1023/b:prev.0000013981.28071.52
  9. Hall GE, Loucks SF. Innovation configurations: Analyzing the adaptations of innovations. Austin: University of Texas, Research and Development Center in Teacher Education; 1978. (Rep. No. 3049).
  10. Hollander JA. The social contexts of focus groups. Journal of Contemporary Ethnography. 2004;33:602–637.
  11. Israel BA, Schulz AJ, Parker EA, Becker AB, Allen AJ III, Guzman JR. Critical issues in developing and following community-based participatory research principles. In: Minkler M, Wallerstein N, editors. Community-based participatory research for health. San Francisco, CA: Jossey-Bass; 2003. pp. 53–76.
  12. Kelly JA, Heckman TG, Stevenson LY, Williams PN, Ertl T, Hays RB, Spink Neumann M. Transfer of research-based HIV prevention interventions to community service providers: Fidelity and adaptation. AIDS Education and Prevention. 2000;12:87–98.
  13. Kelly JA, Sogolow ED, Neumann MS. Future directions and emerging issues in technology transfer between HIV prevention researchers and community-based service providers. AIDS Education and Prevention. 2000;12:126–141.
  14. Krivitsky L, Parker SJ, Pal A, Shengelia R, Meckler L, Reid MC. A review of health promotion and disease prevention program adaptations: How are programs adapted? In: Wethington E, Dunifon R, editors. Translational research for improving outcomes across the life course. American Psychological Association Press; pp. 73–99. (in press).
  15. Kumpfer KL, Alvarado R, Smith P, Bellamy N. Cultural sensitivity and adaptation in family-based prevention interventions. Prevention Science. 2002;3:241–246. doi:10.1023/a:1019902902119
  16. Kumpfer KL, Pinyuchon M, Teixeira de Melo A, Whiteside HO. Cultural adaptation process for international dissemination of the strengthening families program. Evaluation & the Health Professions. 2008;31:226–239. doi:10.1177/0163278708315926
  17. Lee SJ, Altschul I, Mowbray CT. Using planned adaptation to implement evidence-based programs with new populations. American Journal of Community Psychology. 2008;41:290–303. doi:10.1007/s10464-008-9160-5
  18. Lorig K. The arthritis self-help course leader’s manual. Atlanta, GA: Arthritis Foundation; 2002.
  19. Lorig KR, Holman HR. Self-management education: History, definition, outcomes, and mechanisms. Annals of Behavioral Medicine. 2003;26:1–7. doi:10.1207/S15324796ABM2601_01
  20. Lorig K, Ung E, Chastain R, Shoor S, Holman H. Development and evaluation of a scale to measure perceived self-efficacy in people with arthritis. Arthritis & Rheumatism. 1989;32:37–44. doi:10.1002/anr.1780320107
  21. McKleroy VS, Galbraith JS, Cummings B, Jones P, Harshbarger C, Collins C, ADAPT Team. Adapting evidence-based behavioral interventions for new settings and target populations. AIDS Education and Prevention. 2006;18:59–73. doi:10.1521/aeap.2006.18.supp.59
  22. Minkler M, Wallerstein N, editors. Community-based participatory research for health. San Francisco, CA: Jossey-Bass; 2003.
  23. Parker SJ, Chen EK, Pillemer K, Filiberto D, Laureano E, Piper J, Reid MC. Participatory adaptation of an evidence-based, arthritis self-management program: Making changes to improve program fit. Family and Community Health. doi:10.1097/FCH.0b013e318250bd5f (in press).
  24. Parker SJ, Vasquez R, Chen EK, Henderson CR Jr, Pillemer K, Robbins L, Reid MC. A comparison of the arthritis foundation self-help program across three race/ethnicity groups. Ethnicity & Disease. 2011;21:444–450.
  25. Reid MC, Kwon R, Parker S, Chen E, Pillemer K. Comparing an adapted (vs. original) self-management pain program: Is adaptation always necessary? Paper presented at the Annual Meetings of the Gerontological Society of America; Boston, MA; 2011 Nov.
  26. Ringwalt C, Bliss K. The cultural tailoring of a substance use prevention curriculum for American Indian youth. Journal of Drug Education. 2006;36:159–177. doi:10.2190/369L-9JJ9-81FG-VUGV
  27. Smith E, Caldwell L. Adapting evidence-based programs to new contexts: What needs to be changed? Journal of Rural Health. 2007;23:37–41. doi:10.1111/j.1748-0361.2007.00122.x
  28. Solomon J, Card JJ, Malow RM. Adapting efficacious interventions: Advancing translational research in HIV prevention. Evaluation and the Health Professions. 2006;29:162–194. doi:10.1177/0163278706287344
  29. Wingood GM, DiClemente RJ. The ADAPT-ITT model: A novel method of adapting evidence-based HIV interventions. Journal of Acquired Immune Deficiency Syndromes. 2008;47:S40–S46. doi:10.1097/QAI.0b013e3181605df1
