Published in final edited form as: J Public Health Dent. 2011 Winter;71(Suppl 1):S69–S79. doi: 10.1111/j.1752-7325.2011.00241.x

Qualitative methods to ensure acceptability of behavioral and social interventions to the target population

Guadalupe X Ayala 1, John P Elder 1
PMCID: PMC3758883  NIHMSID: NIHMS504923  PMID: 21656958

Abstract

This paper introduces qualitative methods for assessing the acceptability of an intervention. Acceptability refers to determining how well an intervention will be received by the target population and the extent to which the new intervention or its components might meet the needs of the target population and organizational setting. In this paper, we focus on two common qualitative methods for conducting acceptability research and their advantages and disadvantages: focus groups and interviews. We provide examples from our own research and other studies to demonstrate the use of these methods for conducting acceptability research and how one might adapt this approach for oral health research. Finally, we present emerging methods for conducting acceptability research, including the use of community-based participatory research, as well as the utility of conducting acceptability research for assessing the appropriateness of measures in intervention research.

INTRODUCTION

This paper introduces qualitative methods for assessing the acceptability of an intervention. Too often, interventions are developed without sufficient understanding of how the target population will receive intervention activities (Ayala, Elder, Campbell, Engelberg, Olson et al., 2001; Brieger, Nwankwo, Ezike, Sexton, Breman, et al., 1997). This oversight risks implementing interventions that do not work well, heightens the target population's distrust of future research involvement, and diminishes the possibility of sustaining promising interventions in community settings (Abelson, 1997; Boote, Telford, & Cooper, 2002). The use of interventions that are culturally appropriate and acceptable to the target population can help reduce health disparities in the U.S. healthcare system (Cooper, Hill, & Powe, 2002). Involving the target population and organizational settings in the process of assessing the acceptability of an intervention is a critical component of behavioral and community intervention development research (Bernal, 2006; Perry, 1999).

Acceptability refers to determining how well an intervention will be received by the target population and the extent to which the new intervention or its components might meet the needs of the target population and organizational setting (also called adoptability, Green & Kreuter, 1999; Steckler & Linnan, 2002). In early stages of intervention development work, formative research methods such as focus groups and interviews can be conducted to determine how to rank various determinants on their urgency, importance, and changeability to inform the acceptability of intervention strategies. In later stages of intervention development, these same methods can be used with the target audience to assess the acceptability of materials in terms of literacy level, content, presentation, and delivery (Freimuth & Mettger, 1990). These steps are critical for developing effective interventions and complement other types of intervention development studies, such as component or pilot studies (Bartholomew, Parcel, Kok, & Gottlieb, 2006; Bernal, 2006).

In this paper, we focus on two common qualitative methods for conducting acceptability research and their advantages and disadvantages: focus groups and interviews. We provide examples from our own research and other studies to demonstrate the use of these methods for conducting acceptability research and how one might adapt this approach for oral health research. Finally, we present emerging methods for conducting acceptability research, including the use of community-based participatory research, as well as the utility of conducting acceptability research for assessing the appropriateness of measures in intervention research.

ASSESSING ACCEPTABILITY

There are several standard qualitative methods for assessing the acceptability of an intervention for the target population and setting. Two common methods are focus groups and interviews. Each approach provides considerable opportunity for discussion between the researchers and the target population. Focus groups and interviews also allow the researchers to probe further on particular topics as they come up in the discussion. This type of interaction generally results in a deeper understanding of forces in a community that may impede or facilitate effective intervention implementation. We describe each method below. We conclude this section by providing an additional perspective on this issue by considering the level of community involvement in acceptability research as seen in community-based participatory research (Israel, Schulz, Parker, Becker, Allen et al., 2003).

Focus groups

In a focus group, a moderator leads approximately eight to ten people in an open discussion of a topic. Focus groups have been described as one of the most widely used qualitative research tools in the applied social sciences (Sussman, Burton, Dent, Stacy, & Flay, 1991). Focus groups are useful for designing health interventions, pretesting intervention materials, and establishing acceptable procedures for delivering an intervention. Several key issues require consideration when conducting focus groups, including their structure, the focus group guide, the facilitation process, and what data to collect and how. We conclude this section by providing an example from our research of a variant of this approach that has yielded rich data to inform a healthy eating intervention.

Structure

Group size, homogeneity, setting, facilitator characteristics, and the number of groups comprise the structure of the group (see Krueger, 2000, for a six-volume focus group kit). First, eight to ten respondents is the maximum number recommended for a focus group. Smaller groups may become dominated by one or two more vocal individuals, and larger groups often are difficult to manage. Second, focus groups generate more open dialogue if they are relatively homogeneous and the members are not familiar with each other prior to the group interaction. Group homogeneity on important characteristics such as gender and ethnicity creates a more comfortable environment for the participants, especially when sensitive topics are discussed (e.g., sexual behavior, racism/discrimination). Creating separate, multiple, homogeneous groups (e.g., separate groups for alcoholics and occasional drinkers, clinic users and non-users, delinquent students and “A” students, etc.) allows for “case-control” comparisons. These types of comparisons allow the researcher to home in on factors that influence acceptability. On the other hand, heterogeneity in the group may provide evidence of important group differences to consider in the intervention (e.g., the influence of grade level on asthma management at school among middle school students; details provided below; Ayala, Miller, King, Riddle, Zagami, & Willis, 2006). Third, the group setting should be comfortable, easily accessible, and reasonably private. For example, certain populations may benefit from on-site childcare. Fourth, selection of a facilitator is an important decision in this process. An individual who is a member of the target population, or intimately familiar with it, may help build trust in group members, who thus may be more likely to contribute to the discussion. However, familiarity may also inhibit discussion if there is a risk of encountering the person in an everyday setting (Cook & Rice, 2006). Finally, the number of groups needed depends on several factors including but not limited to: how many participants are in each group; variability in the groups’ responses and how soon saturation of themes is achieved; and how many different topics are covered during the focus group. Although resources often dictate how many focus groups will be conducted (Basch, DeCicco, & Malfetti, 1989; Kohler, Dolce, Manzella, Higgins, Brooks, et al., 1993), when variability in responses is low, two groups per topic should be sufficient.

Focus group guide

Developing a focus group guide is essential to home in on the research objectives. The guide consists of a general question-by-question outline used by a moderator who is thoroughly informed about the project. The moderator asks the questions flexibly while considering the dynamics of the group. To aid in establishing rapport, questions should proceed from general and non-threatening topics to more specific and potentially controversial topics. This also allows the moderator to probe responses to general questions with more specific items related to the topic of interest (Basch et al., 1989). Table 1 provides an example of a focus group guide used in a study to assess the acceptability of various intervention activities to promote healthy eating in a family-based intervention. The guide proceeds from a fun, warm-up activity and introductory questions to more specific questions on various aspects of the intervention structure. The purpose of the warm-up activity is to help everyone in the group feel comfortable speaking and to encourage their participation later.

Table 1.

Sample focus group guide

FUN ACTIVITY
1. Let’s start with a fun activity. I am going to ask each of you to fill in this sentence.
  • When I want to learn how to do something, I ________________________________.

  • Three words to describe the person who has helped me most in life are ____________.

INTRODUCTORY QUESTIONS
2. What comes to mind when you think of making changes to your family’s eating habits?
  • PROBE: Your own eating habits?

3. How can we help you make changes in your family’s eating habits?
  • PROBE: What type of information would be most helpful to receive?

TRANSITION QUESTIONS
4. Thinking about your own family, if we assigned someone to work with you - like a counselor or a promotora - what type of person would be most helpful?
  • PROBE: What specific things could s/he do that would be most helpful?

  • PROBE: How old should the person be?

  • PROBE: Should the person be male or female?

  • PROBE: Should the person have kids? Be married?

  • PROBE: Should the person be Latino/Hispanic/Mexican?

DEPTH QUESTIONS
We plan to provide 10 sessions of learning activities on how to become a healthier family.
5. How can we involve the entire family in these 10 learning activities?
  • PROBE: In what ways might involvement be different for dads?

  • PROBE: How about for kids?

6. If you were in the program, what type of contact would you like to have with the people implementing the program?
  • PROBE: Face to face? Telephone? Mail? Email? Combination?

  • PROBE: If contact is face-to-face, where should these contacts take place?

  • PROBE: How long should each contact last? Should this differ by type of contact?

  • PROBE: How often should we have contact with your family?

7. What types of incentives would motivate people to GET involved in the program?
  • PROBE: What types of incentives would help them STAY involved in the program?

8. What do you think is going to make it hard for families to participate in the program?
  • PROBE: What would help families overcome these barriers?

MATERIAL ASSESSMENT ACTIVITY
9. We have a few examples here to demonstrate how health information could be given to people: (a) children’s handbook, (b) recipe cards, (c) stickers, (d) health education brochure.
  • PROBE: What do you like best about these materials?

  • PROBE: What do you NOT like about these materials?

  • PROBE for feedback on fonts, font size, colors, images, and types of graphics

SCENARIO-BASED ACTIVITY
Imagine this scenario: Your family signed up to be part of a healthy lifestyle program because you realized that all of you were ‘filling out’ and did not have much energy to do things together. As a parent, you are worried about the future of your children and their ability to succeed in life, like finishing school and getting a good job. The program involves home visits, and the person coming to your home will bring some materials like a video and other interactive things.
10. What would this home visit look like if the whole family were involved?
11. Who should be present at the home visit?
  • PROBE: Only the mother? Only the mother and father? Only the mother and child?

  • PROBE: If we could only talk to one person, who should that person be? Why?

Good ideas. Now we’ll describe a few different ways we think the program could happen. I will read both examples and then ask you to react to them.
12. EXAMPLE 1: The promotora comes in and meets with your family for about ten minutes. Then everyone watches a 10-minute video that is similar to a telenovela about a family. In the video, there is a family conflict and then the conflict is resolved. The video shows what the mom does, what the dad does, and what the kids do related to the conflict and their involvement in the resolution. Then the promotora engages your family in a discussion about what your family would do in a similar situation, and how you could make the same kinds of changes. The promotora facilitates the discussion. Before she leaves, your family sets goals for the following visit and she gives you some important information to read.
13. EXAMPLE 2: The promotora comes in and only wants to meet with the mom. They watch a video together and then spend some time talking. They walk around the house together and the promotora is making notes on her clipboard about things in the home that are related to eating and exercise. Before the promotora leaves, she gives the mom some paperwork to fill out. As she is walking out the door, the promotora looks at the other family members present and asks them to support the mom in the changes she is trying to make in the home so that the family is healthier.
  • PROBE: What did you like about the first example?

  • PROBE: What did you like about the second example?

  • PROBE: Which approach would work best for your family?

CLOSING QUESTION
We have asked a lot of questions of you. Now we want to turn the tables a bit. What questions do you have for us related to this program?
Thank you to everyone who made an effort to be here tonight. With your comments we will be better able to design a program that is relevant to the families in ….

Blending open- and closed-ended questions, and specific or general ones, to obtain key information from the participants about the acceptability of an intervention is an essential moderator skill (Krueger, 2000). Open-ended, general questions are best used at the beginning of the focus group, when the facilitator is trying to get the participants to begin thinking about the topic (e.g., “What do you think about when I say nutrition?”). Specific open-ended questions are used to gather information about particular behaviors or attitudes (Berg, 1998). Closed-ended questions, whether specific or general, are generally used at the end of the focus group (Krueger, 2000). These types of questions attempt to get at very specific information about preferences for various types of delivery channels (e.g., internet versus newsletter) and intervention components. The facilitator should end the focus group by asking the participants if anything was left out of the discussion, to elicit any remaining important elements related to the topic of interest.

Data collection and analysis

Focus group data collection can take several forms. In most cases, researchers will audio- or video-record a focus group and transcribe it, verbatim if possible to minimize important omissions (Weinberger et al., 1998). In addition, a note taker is often present to capture non-verbal behavior, for example, an extremely positive or negative affective response to a question or topic area. With videos, this same information can be captured post-hoc. Finally, researchers often provide participants with a short survey to capture demographic and other important characteristics related to the issue in question.

To prepare for focus group analysis, the transcripts are integrated with the notes from the note taker. For example, a transcript will be annotated with a comment such as [“Group reacted positively to the idea of accessing the information off the web”]. Transcripts are then coded using either an inductive or a deductive approach. The assumption of an inductive approach is that the most important themes will emerge from the specific group discussions, and this information is then organized into a codebook. To accomplish this, at least two individuals (coders), knowledgeable on the topic or study, read the transcripts independently, generate themes, and make note of good quotes that illustrate the themes. Themes are organized into a codebook, which the coders then apply independently to code the remaining transcripts. They come together after coding is completed to compare notes. More recently, researchers have begun to use qualitative data analysis software to manage the extraction of themes and quotes from transcribed discourse. Examples of software used for this purpose are NVivo (http://www.qsrinternational.com/#tab_you), ATLAS.ti (http://www.atlasti.com/), HyperRESEARCH (http://www.researchware.com/), and MAXQDA (http://www.maxqda.com/).
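
To make the workflow concrete, the short Python sketch below illustrates only the bookkeeping side of inductive coding: applying an agreed-upon codebook of themes and keywords to transcript segments and tallying matching quotes. The themes, keywords, and segments are hypothetical illustrations, not from our studies, and simple keyword matching is at best a rough stand-in for the judgment a human coder (or a package such as NVivo or ATLAS.ti) brings to this task.

```python
# Minimal sketch: applying a keyword-based codebook to transcript segments.
# Themes, keywords, and segments are hypothetical; real qualitative coding
# relies on human judgment, with software only managing the tagged segments.
from collections import defaultdict

codebook = {
    "delivery_preference": ["home visit", "telephone", "mail", "internet"],
    "family_involvement": ["father", "dad", "kids", "whole family"],
    "barriers": ["time", "transportation", "childcare", "cost"],
}

def code_segments(segments):
    """Tag each transcript segment with every theme whose keywords it mentions."""
    tagged = defaultdict(list)
    for segment in segments:
        text = segment.lower()
        for theme, keywords in codebook.items():
            if any(kw in text for kw in keywords):
                tagged[theme].append(segment)
    return tagged

segments = [
    "A home visit would work best because we have no transportation.",
    "The dad should be there too, not just the mom.",
]
for theme, quotes in code_segments(segments).items():
    print(f"{theme}: {len(quotes)} segment(s)")
```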

This inductive approach to coding focus group discussions is distinct, both in methods and in purpose, from the deductive coding approach commonly used in clinical research. In the deductive coding approach, coding of discussions is theory-driven, with coders judging the degree to which discussion topics or processes fit the theoretical framework proposed a priori. This approach typically involves at least two coders evaluating transcripts independently, guided by a standardized coding scheme, with some process for resolving disagreements between coders. Psychometrically sound coding systems have been developed to measure a variety of health-relevant behaviors and social processes (Brennan & Hays, 1992). These coding systems can be powerful tools for understanding the determinants of health behaviors, for measuring possible mediators, moderators, or mechanisms of action of health interventions, and for monitoring the degree to which interventions are delivered as they are intended to be delivered (i.e., with “fidelity”). Deductive coding systems tend not to be used in establishing the acceptability of an intervention to a target population.
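
Because deductive coding schemes rely on independent coders, interrater agreement is typically quantified with statistics such as kappa (Brennan & Hays, 1992). The sketch below computes Cohen's kappa from scratch for two coders who each assigned one code per transcript segment; the code labels and data are hypothetical. For production analyses, an established implementation such as sklearn.metrics.cohen_kappa_score would normally be preferred.

```python
# Illustrative computation of Cohen's kappa for two coders who assigned one
# code per transcript segment; segment codes here are hypothetical.
from collections import Counter

coder_a = ["barrier", "support", "barrier", "other", "support", "barrier"]
coder_b = ["barrier", "support", "other",   "other", "support", "barrier"]

def cohen_kappa(a, b):
    n = len(a)
    # Observed proportion of agreement between the two coders.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Agreement expected by chance, from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(a), Counter(b)
    labels = set(a) | set(b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohen_kappa(coder_a, coder_b):.2f}")  # 0.75 for this toy data
```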

Variant on traditional focus groups

In our research, we often use a panel series design to conduct focus groups (Ayala et al., 2001; 2005a; in press). In a panel series design, the researcher recruits 8–10 individuals, dyads, or families; participants must consent to attend several focus groups that occur consecutively over the course of several weeks. The topics discussed build on each other and often conclude with specific recommendations for the proposed intervention. Table 2 provides an example from our research demonstrating what topics were considered in a series of focus groups conducted with adolescents with asthma (see additional study details below). What resulted from this panel series of focus groups was rich data on which intervention components would be most acceptable to the adolescents and how they might best be implemented within a school setting.

Table 2.

Panel series focus group format and corresponding sample focus group questions

Session 1: Asthma management and barriers:
What are some reasons why kids don’t manage their asthma very well, like take their asthma medications like they are supposed to? [Probe for: lack of knowledge, lack of access to medication, lack of motivation, lack of perceived response efficacy]
Session 2: Developmental issues:
How has the management of your asthma changed since you were a kid? [Probes: What are you doing the same? What are you doing differently?]
Session 3: Support for asthma care:
How are the following people involved with your asthma care? [Probes: parents, siblings, other family members, friends, teachers, coaches, doctor, school nurse]
Session 4: Intervention preferences:
What word would you use to describe something that is cool?
Session 5: Critique of existing materials:
[Lay out sample brochures]. Nobody took the asthma brochures we made! What did we do wrong? How should we change them? [Probe for: color, size, title, content, pictures.] [Probe: Should they be on paper, on DVD, on CD, on the web?]
Session 6: Intervention brainstorming:
If someone gave you this kit to take home, what should it include? What would you do with it (where would you put it)?
Session 7: Review sample materials:
How would you change these materials? [Probe for: more/less information, more/fewer pictures]

Panel series focus groups have some advantages, including obtaining richer data and containing costs. Participating in multiple focus group discussions facilitates obtaining more in-depth information from participants, given that they become more familiar with the focus group process and thus less distracted by it. Recruitment of focus group participants is not an inconsequential activity in terms of costs. A panel series design involves fewer participants and thus reduces personnel time spent identifying and recruiting participants.

Interviews

During an interview, the interviewer engages an individual (i.e., the informant) in a discussion about the proposed intervention. This individual may represent a member of the target audience for whom the intervention is intended, or may be an organizational representative to assess the potential adoptability of an intervention. Interviews have been used successfully to design cancer prevention interventions in beauty salons (Solomon, Linnan, Wasilewski, Lee, Katz et al., 2004) and healthy eating interventions in grocery stores (Ayala, Laraia, Kepka & Ornelas, 2005b). As with focus groups, there are several key issues to consider when conducting interviews including the type of interview to conduct (key informant, intercept), the structure of the interview guide, the interview process, and how data are analyzed.

Types of interviews

Key Informant Interviews

Key informants are individuals who, because of their professional training, affiliation with particular organizations, or status within a target population (e.g., spokesperson for a homeless population), can provide important information about the acceptability of the intervention to the target population and the feasibility of implementation within an organizational setting. Religious and political leaders, business and labor representatives, and administrators of social service organizations are often asked to provide these insights. Good informants also include individuals who may be somewhat critical of their community, as these individuals tend to be more observant and reflective about their environment (Berg, 1998). However, more important and more difficult to identify are those individuals who hold informal positions of authority in the community, yet who may not have a specific title or position of authority. These individuals are often “opinion leaders” and may have information about the community that organized leaders may not know. Identifying informal “opinion leaders” may be difficult to do at the outset. It may prove fruitful to first ask several community members to identify “opinion leaders” who are aware of the priorities and potential of the community. To ensure that informants are knowledgeable about the issues relevant to the research project, interview questions can be included to establish the competence of the informants (Poggie, 1972). For example, key informants were identified for a study examining the influence of the food environment on diet by asking them about the existence and location of several important food outlets known to the investigative team (Ayala et al., 2005a). The accuracy with which the informant answered these questions served as a validity check on the informant’s status as a key informant.

Intercept Interviews

Intercept interviews are used to assess the reaction of the target population to potential intervention products and materials. Intercept interviews are often conducted with target population individuals at the point (time and location) they are most likely to be exposed to an intervention. Thus, they are an ideal and convenient way to assess acceptability. In general, these interception points (e.g., in a clinic waiting room, in a supermarket) should also be “high traffic” areas, allowing researchers to contact large numbers of people in a short period of time. For example, intercept interviews have been used successfully to assess reactions to program components in a zoological park (Mayer et al., 2001). In this evaluation, visitors to a zoo were asked about the use of sunscreen and wearing a hat to protect against skin cancer, as they exited the park. This provided additional information about the usefulness of various program activities before a sun safety intervention was fully implemented in the zoo stores and buses (Mayer et al. 2001).

Interview Structure

Researchers have identified and utilized three major categories of interviews: standardized/formal/structured, unstandardized/informal/unstructured, and semi-standardized/focused/semi-structured (Berg, 1998). Structured interviews consist of a schedule of precise questions that are administered in a predetermined order to each informant. The objective is to provide each informant with the same set of stimuli, thus allowing comparability of responses to the stimuli across all informants. Informants do not engage in a conversation with the interviewer; instead, they respond verbally to the interviewer’s questions or other stimuli as they are presented to them. Unstructured interviews involve the use of an interview guide with a few open-ended questions, with the assumption that the interviewer will probe further on information deemed most relevant to informing acceptability. Unstructured interviews rely heavily on knowing how to probe effectively to stimulate an informant to provide more information. Examples of “probes” include: (a) echoing the informant’s most recent response, (b) providing verbal affirmation or approval of the informant’s response, (c) asking the informant directly for more information, (d) asking open-ended questions that “work off” of a response, or (e) simply remaining silent and waiting for the informant to continue his/her response (e.g., Bernard, 1994). The key is to elicit more information without injecting the interviewer’s own preconceptions into the interaction. Unstructured interviews may provide unique information that might not have emerged with more formal querying.

The use of the semi-structured interview is based on a data collection “compromise.” Use of a structured interview may not allow the interviewer the freedom to probe further on issues related to the program activities in question that are not part of the standard interview schedule. On the other hand, use of an unstructured interview may give the informant too little guidance when responding to questions posed by the interviewer, leading both parties to digress during the interview. Thus, a semi-structured interview brings the best of both alternatives to the interview process, which may be why it is the most widely used type of interview in formative research studies (Higgins, O’Reilly, Tashima, Crain, Beeker, et al., 1996; Hubbell, Chavez, Mishra, Magana, & Valdez, 1995; Ledda, Walker, & Basch, 1997). Researchers have used semi-structured interviews to assess the availability of cancer screening services and treatment (Hubbell et al., 1995) and reactions to intervention materials (Ledda et al., 1997), thus providing the researchers with information on the acceptability of intervention components.

Interview process

Similar to focus groups, the interviewer engages the informant by asking a series of questions, from general questions about the topic to specific questions about key intervention strategies. During the interview, the interviewer may use stimuli to elicit responses, such as presenting a set of plausible intervention activities to the informant. Interviews can range from 10 minutes for intercept interviews to over two hours for key informant interviews, depending on the number of questions asked and how talkative the informant is. With key informant interviews, many interviewers prefer to audiotape the interviews rather than simply relying on the interviewer’s notes, as this reduces the potential for biased recall on the part of the interviewer. On some occasions, however, informants will not allow the interview to be recorded. In these cases, the interviewer should make careful notes of the key points in the conversation and create a summary of the interview as soon as possible after it ends. If appropriate, the interviewer may ask the informant to review the summary and add any information not recalled by the interviewer.

Interviews can be conducted face-to-face or by phone (Rosenthal & Rosnow, 1991). Face-to-face interviews, although optimal in terms of developing rapport with the respondent and allowing for easy clarification of questions, also tend to be more costly, more time-consuming, and less effective for obtaining information on sensitive topics with some populations (Marín & Marín, 1989). Interviewer bias is more likely to occur in face-to-face interviews, as the interviewer may adjust the wording of a question to fit the respondent or may record only a portion of the responses (Shaughnessy & Zechmeister, 1997). Telephone interviews have some advantages over face-to-face interviews. The interviewer does not need to travel to the respondent’s home and can schedule several interviews in one day. Also, refusal rates generally are lower over the phone (Rosenthal & Rosnow, 1991). However, phone interviews are restricted to those who have full access to a phone, and the informant may have a more difficult time understanding material that is presented only verbally.

A final consideration is deciding where to conduct the interviews. Requiring informants to come to a central location (e.g., clinic, church, or office) may bias the sample toward individuals who have a certain degree of time and motivation to participate, as well as the necessary transportation and childcare. Conversely, conducting interviews where behaviors naturally occur will not only improve the generalizability of your sample but also present the opportunity to address contextually relevant stimuli to inform an intervention. For example, to design a healthy eating intervention that addressed food purchasing behavior, interviews were conducted with participants in the grocery store to better understand how features of the store influenced their decisions about what to purchase (Ayala et al., 2005b). Despite the advantages of interviewing informants in their natural setting, such interviews often require additional resources. In some communities, it may be necessary to send two interviewers to a respondent’s home for safety, the interviewer needs to be prepared for interruptions, and the researchers need to budget for additional travel costs (Bernard, 1994).

Data collection and analysis

The analysis of open-ended interview data closely follows the analysis of focus group data. Audiotaped interviews are transcribed, coded, and assessed for relevant themes and recommendations. The analysis of closed-ended data may be summarized in tables for all informants or may be reported by meaningful groups (e.g., whether men and women differ in the type of signage they prefer to elicit a behavior).
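
As a minimal illustration of the closed-ended case, the sketch below cross-tabulates a hypothetical set of intercept interview responses, signage preference by gender, mirroring the example in the text. The data and category labels are invented for illustration, and pandas is simply one common tool for this kind of summary table.

```python
# Hypothetical closed-ended interview data: signage preference by gender,
# summarized with a simple cross-tabulation.
import pandas as pd

responses = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "signage": ["poster", "shelf tag", "poster", "floor decal",
                "poster", "floor decal", "shelf tag", "poster"],
})

# Rows are genders, columns are signage types; margins adds row/column totals.
print(pd.crosstab(responses["gender"], responses["signage"], margins=True))
```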

Variants on interviews

Interviews can include other formats for obtaining self-reported information. For example, after having some difficulty recruiting fathers to participate in focus groups to inform the development of a family-based healthy eating intervention, we designed a card sort task for them (Ayala et al., in press). In this formative research study, fathers were recruited from several community settings (e.g., a mall, outside a large chain store) and asked to participate in an intercept interview that involved a card sort task. During the task, men who identified as fathers of children between 7 and 13 years old were asked to rank order 16 roles they play as a father and to identify those they find most challenging. The card sort task yielded information on the relevance of these roles and informed the characteristics of the father portrayed in the intervention video, thereby producing a video that was much more acceptable to the target population.
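
A card sort yields, for each participant, an ordering of items that can then be aggregated across participants. The sketch below uses hypothetical role labels and rankings (not the 16 roles from the actual study) to show one simple way such data can be summarized: averaging each item's rank across participants.

```python
# Hypothetical aggregation of card sort rankings: each participant's ordering
# of roles (most to least important) is converted to ranks and averaged.
rankings = [
    ["provider", "teacher", "disciplinarian", "playmate"],
    ["teacher", "provider", "playmate", "disciplinarian"],
    ["provider", "playmate", "teacher", "disciplinarian"],
]

mean_rank = {}
for role in rankings[0]:
    # Rank 1 = sorted first (most important); average across participants.
    mean_rank[role] = sum(r.index(role) + 1 for r in rankings) / len(rankings)

for role, rank in sorted(mean_rank.items(), key=lambda kv: kv[1]):
    print(f"{role}: mean rank {rank:.2f}")
```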

Comparison of focus groups and interviews

The focus group has an advantage over interviews because it allows members of the group to engage in collective brainstorming of ideas, issues, and solutions to a problem, sometimes called a “synergistic group effect” (Sussman et al., 1991). Participants may even test the behavior in this group setting. We observed this in a series of focus groups examining the extent to which classmates encouraged or impeded the performance of asthma management behaviors designed to occur at school and around classmates (Ayala et al., 2006). Focus groups are also preferred over interviews because they are more cost-effective in both time and resources expended. Many more individuals can be studied through a format that gathers eight to ten people at one time rather than individually.

Newer methods for ensuring intervention acceptability

In this paper, we limited our discussion to focus groups and interviews. However, there are several less well-studied methods that hold promise for helping researchers design acceptable interventions. For example, we used mapping techniques to identify an appropriate location to conduct a healthy eating intervention (Ayala et al., 2005a). In this study, we provided mothers and their children with maps of their community and asked them to indicate where their home was, what locations they frequented on a weekly basis, and for what purposes. This helped to identify tiendas (small Latino food stores) as an acceptable location to intervene with the target population on a nutrition issue, given the frequency (8 times a month) and purpose of these trips (food shopping). Another promising approach is the use of Photovoice, a technique for understanding an issue from the perspective of the target population using photographs and facilitated discussions of these photographs (Wang & Burris, 1997). To examine the influence of migration and acculturation to the U.S. among newly immigrated adolescents in North Carolina, we engaged a group of youth to identify relevant themes associated with their experience coming to the U.S., take pictures to reflect these themes, and then present selected photographs to stakeholders to inform programmatic and policy changes (Streng, Rhodes, Ayala, Eng, Arceo et al., 2004).

Community Based Participatory Research

Acceptability research is designed to obtain useful information for examining the extent to which an intervention meets the needs of the target population and organizational setting. How this is achieved can differ on many dimensions, including the level of community involvement. Community Based Participatory Research (CBPR; Viswanathan et al., 2004) is a philosophy, rooted in the work of Paulo Freire, and a set of methods (Israel et al., 2003) for conducting research informed by community engagement at all stages of the research process. Thus, it embodies acceptability research in the most fundamental of ways. A full description of CBPR is beyond the scope of this article. We provide a brief introduction here, and encourage the reader to seek additional information from any of several books (Israel et al., 2003; Minkler & Wallerstein, 2008; Wallerstein & Duran, 2006) and websites (http://depts.washington.edu/ccph/commbas.html; http://muse.jhu.edu/journals/progress_in_community_health_partnerships_research_education_and_action/).

Community based participatory research is a shared decision-making approach to the conduct of community-based research. Decisions are generally shared between academic institutions and community partners (e.g., health care systems, public health departments, community agencies, for-profit companies). Often, community advisory boards, comprised of community members, are formed to represent the community’s interests in a research study (Viswanathan et al., 2004). As an example, the San Diego Prevention Research Center (SDPRC), which we direct, uses community based participatory research methods, informed by its Community Engagement Committee as well as the National Community Committee, the organization comprised of community representatives of all 37 Prevention Research Centers in the national network. In implementing its activities, the SDPRC is “a joint effort involving researchers and community representatives in all phases of the research process. The joint effort engages community members, employs local knowledge in the understanding of health problems and the design of interventions, and invests community members in the processes and products of research” (www.cdc.gov/prc). The SDPRC’s dynamic Community Engagement Committee comprises representatives from two city recreation departments, a school district, a school, a health clinic, a county office of health and human services, two key social service agencies, and a public housing service center. Four focus groups conducted with members of this committee over a 4-month period, followed by monthly meetings for over five years, informed the development and implementation of an intervention that improved the health of community residents (reduced blood pressure, decreased waist circumference).

Although CBPR embodies the principles of acceptability research, it is not without its challenges. CBPR typically requires additional time and resources to establish partnerships, with many important issues needing to be negotiated, in order to ensure equity among the partners while optimizing the value of the expertise each partner brings to the research enterprise. Another difficult challenge is negotiating priorities in terms of organizational practices and expectations of the participating partners. For instance, community partners committed to providing health services broadly and quickly may clash with funding systems that require a targeted, conservative approach to testing services before wide distribution. Also, the time-limited nature of most research funding means that academic and community partners often face the difficult task of finding ways to sustain services that are shown, at least initially, to be successful.

Finally, some funders may allow little flexibility in terms of research design, health targets and measurement approaches, creating tension between the “CBP” base of the practitioner and the “R” base of the scientist. Full and early transparency with respect to the nature and demands of funding may prevent or at least minimize this tension, and allow for a win-win compromise to evolve. For example, in two of our past studies, targeted migrant communities emphasized to us that their social and health priorities were English language literacy in one case (Candelaria, Woodruff, & Elder, 1996) and home and child safety in the other (Campbell, Ayala, Litrownik, Slymen, Zavala, et al., 2001). Our NIH funding, however, was for heart disease prevention and teen smoking prevention (respectively). To optimize acceptability while meeting our funding mandates, we implemented the CHD program through regular English as a Second Language (ESL) classes, and the smoking prevention intervention was tested against a home-and-child safety attention placebo control. Both produced important and statistically significant changes (Elder, Candelaria, Woodruff, Criqui, Talavera, et al., 2000; Litrownik, Elder, Campbell, Ayala, Slymen et al., 2000).

RELEVANCE TO ORAL HEALTH INTERVENTION RESEARCH

To illustrate a complete application of the methods discussed above, we describe our Breathe Easy study, in which we designed an intervention to improve the asthma management behaviors of middle school students. Data from 42 focus groups with middle school students and 10 interviews with school personnel and parents were used to design the Breathe Easy intervention. In this study, students with asthma in two different middle schools were invited to participate in seven consecutive focus groups (i.e., a panel series). The focus groups met weekly for seven weeks and engaged in a series of discussions that progressed from talking about asthma management and barriers (Session 1) to brainstorming intervention channels (Session 6). The final session involved obtaining feedback on intervention materials that were developed based on the previous six discussion groups with students at both schools. See Table 2 for the entire series of focus group discussions. Interviews with school personnel, including school nurses, and parents helped determine how best to involve them in a school-based intervention.

Findings from the focus groups and interviews helped to determine a number of important features of the Breathe Easy intervention, including some that were not anticipated by the research team. For example, we learned that an intervention held during the students’ lunch hour would be the most acceptable time and location for youth-directed activities. This was deemed important for minimizing the transportation barriers associated with attending after school. Despite our initial reluctance to deliver such a short intervention dose, we implemented the intervention in this manner and achieved excellent participation among the youth (Terpstra, Johnson, & Ayala, in press).

The Breathe Easy study can serve as a guide for researchers interested in developing interventions that are responsive to the needs of the target population and the organizational setting. Combining focus groups with the target population and interviews with members of the organizational setting can help to design interventions that are feasible and relevant to the target population.

ASSESSING ACCEPTABILITY OF MEASURES IN INTERVENTION RESEARCH

In addition to assessing the acceptability of intervention components or strategies, researchers interested in conducting behavioral or social intervention research should also consider assessing the acceptability of measures used to assess moderators, mediators, and outcomes. One technique for ensuring that questionnaires, surveys, or other self-report instruments are acceptable to a target population is called “cognitive interviewing” (Willis, DeMaio & Harris-Kojetin, 1999). This technique uses several strategies to understand four aspects of how study participants respond to the study measures. These four aspects are: 1) Comprehension, e.g., how participants comprehend the intent and meaning of terms in study questions; 2) Retrieval, e.g., what types of information participants need to recall, and what types of strategies are used to recall relevant information; 3) Decision processes, e.g., the level of motivation required to complete study measures and tendencies to give socially-acceptable responses; and 4) Response processes, e.g., how well participants’ answers map onto response categories provided in the study questionnaire. For a more detailed discussion on mediators and moderators in behavioral intervention research, see the MacKinnon and Luecken article in this issue.

The most common methods used to conduct cognitive interviewing are think-aloud interviewing and verbal probing. As the label suggests, think-aloud interviewing involves having participants think aloud while they complete the study measures, in order to uncover potential problems with study questions. Typically, an interviewer reads each study question and guides the participant to describe what he or she is thinking while trying to answer the question. In his comprehensive guide to cognitive interviewing, Willis (1999) describes an example of a participant’s response to the question, “How many times have you talked to a doctor in the last 12 months?” By eliciting how the participant arrived at his answer, the interviewer discovers that the question needs to specify more clearly whether it refers only to doctor visits related to the participant’s own health, and what types of doctors to count in the response. The interviewer also observed that the participant had difficulty recalling with certainty the timing of his doctor visits, suggesting that the 12-month timeframe might be too long.

Verbal probing can also be used to ensure that measures are acceptable and appropriate for a target population. In verbal probing, an interviewer asks specific questions about the study measure, either item-by-item or at the end of the study measures. Willis provides examples of probes to address different aspects of acceptability. For example, to assess comprehension of a particular phrase, an interviewer might ask, “What does the ‘term X’ mean to you?” Example probes are also suggested for assessing participants’ confidence in their responses, how participants recall information, and how participants would phrase the same question in their own words. Whether assessed via think-aloud interviews, verbal probing, or in other ways (see Forsyth & Lessler, 1991, for a comprehensive list of cognitive interviewing methods), the data are meant to be used to confirm or improve study measures. For example, if most participants in the cognitive interviews described difficulty with a question, changes to that question are likely needed. In other cases, it may be sufficient justification to change a question if only one participant described difficulty with it. The psychometric properties of the study measure also factor into the decision to make changes, so that the bar for changing items in a new measure might be much lower than the bar for changing items in an established measure with established norms.

Two final considerations when designing written materials are the participants’ reading level and language preferences. In our research, we have used the SMOG Readability Formula to determine the grade level of written materials (McLaughlin, 1969); a small computational sketch of this formula follows the translation steps below. Several websites are available (e.g., http://www.wordscount.info/hw/smog.jsp) to facilitate assessment of written materials simply by cutting and pasting the content into the website and obtaining a grade-level score. Generally, we aim for a reading level of sixth grade or less, given that a person who reads at or above a material’s grade level will understand 90–100% of the information. Other tools exist, such as the Flesch-Kincaid Grade Level Index that accompanies Microsoft Word; however, we have found this less sensitive to lower reading levels. Second, researchers interested in conducting intervention research with populations whose dominant language is other than English need to prepare materials in multiple languages. This is the case for our research; all materials are available either in Spanish only or in both Spanish and English. As such, written materials must be translated. Below are the steps we take to prepare materials in two languages:

  1. Translate all materials from the original English version. Replication of all or part of the translation by a second translator is ideal.

  2. Ask the translators to make the concepts understandable by the target population. If the target population uses multiple dialects of the same language, it may be necessary to have translators who speak the different dialects translate the materials and agree on the best translation.

  3. To ensure conceptual and linguistic equivalence, translators should consider changing the words to get across the same meanings. For example, use of the term “vigorous” to refer to physical activity may not be understood as easily as other terms such as “very hard”. For all written information, make sure the underlying concept is retained in the translation.

  4. Have a group of bilingual people who are similar to the target population review the translated and original English materials. Ask the group to ensure that the translation will be acceptable to the target population.

  5. If time and funding permit, it is recommended that two different translators translate the new version back into English (back translation). The new English materials are then compared with the original English materials to ensure conceptual equivalence. It is more important to ensure conceptual and linguistic equivalence than to match exact wording.
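
As noted above, the SMOG calculation is simple enough to script. The sketch below applies the standard SMOG formula, grade = 1.0430 × sqrt(polysyllables × 30 / sentences) + 3.1291 (McLaughlin, 1969). Note that the vowel-group syllable counter is a crude heuristic, so results will differ somewhat from dictionary-based tools such as the website cited earlier.

```python
# Sketch of the SMOG grade computation (McLaughlin, 1969).
# grade = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291
import math
import re

def count_syllables(word):
    # Rough heuristic: each run of vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291

sample = ("Brush your teeth twice a day. Floss every night. "
          "Visit the dentist regularly for a checkup.")
print(f"SMOG grade: {smog_grade(sample):.1f}")
```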

DISCUSSION

The goal of this paper was to provide the reader with specific recommendations for assessing the acceptability of intervention components and strategies, and of accompanying measures. Assessing acceptability is a critical early step in program development, yet it lacks a broadly agreed-upon, scientifically sound set of methods (Winett, King, & Altman, 1989). Among the most tested methods are focus groups and interviews, which are useful for assessing potential acceptability. In both formats, the respondent provides a series of at first general, then increasingly specific reactions to key intervention strategies. The focus group format’s advantage is that it allows group members to engage in collective brainstorming; however, interviews may yield information about more sensitive topics. Community mapping and Photovoice comprise two of the more innovative methods used to assess intervention acceptability, while “cognitive interviewing” may be effective for assessing evaluation acceptability. Appropriate readability levels and accurate, appropriate language translation obviously will enhance acceptability. The richness and validity of acceptability data may be strengthened by methods in Community Based Participatory Research (CBPR), in that community involvement is critical at all stages of the PIE (planning, implementation, and evaluation) sequence. The challenge for scientists studying community interventions is to balance the “CBP” with the “R” aspects of this process.

Whether time-tested or innovative, we believe that acceptability research is advancing in sophistication and will continue to gain scientific respect over the next decade. For researchers committed to studying behavioral or social interventions to improve health and oral health, but who do not have this expertise, we encourage them to seek out advice from or even collaborate with researchers who can assist with the methods discussed in this article.

Acknowledgments

This paper was commissioned by the National Institute of Dental and Craniofacial Research. Additional support was provided by grants from the National Institutes of Health (Ayala: NCI R01 CA138894-01A1; Elder: NIDDK R01 DK084331-01A1; R01 DK072994-05), the Centers for Disease Control and Prevention (Elder and Ayala: U48DP001917-01), the American Cancer Society (Ayala: RSGPB 113653), and the Peers for Progress Network (Ayala and Elder: SOOOII24OIGEL).

References

  1. Abelson AG. Managing students with behavior disorders: Perceived efficacy of interventions. Psychological Reports. 1997;80(3 Pt 2):1167–1170. doi:10.2466/pr0.1997.80.3c.1167.
  2. Ayala GX, Elder JP, Campbell NR, Engelberg M, Olson S, Moreno C, Serrano V. Nutrition communication for a Latino community: Formative research foundations. Family and Community Health. 2001;24(3):72–87. doi:10.1097/00003727-200110000-00009.
  3. Ayala GX, Ibarra L, Arredondo E, Horton L, Hernandez E, Parada H, Slymen D, Rock C, Engelberg M, Elder JP. Promoting healthy eating by strengthening family relations: The Entre Familia: Reflejos de Salud intervention. In: Elk R, editor. Cancer Disparities: Causes and Evidence-Based Solutions. Springer; (in press).
  4. Ayala GX, Laraia B, Kepka D, Ornelas I. Evidence supporting tiendas as a setting for health promotion: Characteristics of tiendas and tienda managers. Paper presented at the annual meeting of the American Public Health Association; Philadelphia, PA; 2005b.
  5. Ayala GX, Maty S, Cravey A, Webb L. Mapping social and environmental influences on health: A community perspective. In: Israel BA, et al., editors. Multiple methods for conducting community-based participatory research for health. 2005a. pp. 188–209.
  6. Ayala GX, Miller D, King D, Riddle C, Zagami E, Willis S. Asthma in middle schools: What students want in a program. Journal of School Health. 2006;76:208–214. doi:10.1111/j.1746-1561.2006.00098.x.
  7. Bartholomew LK, Parcel GS, Kok G, Gottlieb NH. Planning Health Promotion Programs: Intervention Mapping. San Francisco, CA: Jossey-Bass; 2006.
  8. Basch CE, DeCicco IM, Malfetti JL. A focus group study on decision processes of young drivers: Reasons that may support a decision to drink and drive. Health Education Quarterly. 1989;16(3):389–396. doi:10.1177/109019818901600307.
  9. Berg BL. Qualitative Research Methods for the Social Sciences. 3rd ed. Boston, MA: Allyn and Bacon; 1998.
  10. Bernal G. Intervention development and cultural adaptation research with diverse families. Family Process. 2006;45(2):143–151. doi:10.1111/j.1545-5300.2006.00087.x.
  11. Bernard HR. Research Methods in Anthropology: Qualitative and Quantitative Approaches. 2nd ed. Thousand Oaks, CA: Sage; 1994.
  12. Boote J, Telford R, Cooper C. Consumer involvement in health research: A review and research agenda. Health Policy. 2002;61(2):213–236. doi:10.1016/s0168-8510(01)00214-7.
  13. Brennan PF, Hays BJ. Focus on psychometrics: The kappa statistic for establishing interrater reliability in the secondary analysis of qualitative clinical data. Research in Nursing & Health. 1992;15(2):153–158. doi:10.1002/nur.4770150210.
  14. Brieger WR, Nwankwo E, Ezike VI, Sexton JD, Breman JG, Parker JK. Social and behavioural baseline for guiding implementation of an efficacy trial of insecticide impregnated bednets for malaria control at Nsukka, Nigeria. International Quarterly of Community Health Education. 1997;16:47–61. doi:10.2190/43HT-6MEH-MTDE-HBV2.
  15. Campbell NR, Ayala GX, Litrownik AJ, Slymen DJ, Zavala F, Elder JP. Evaluation of a first aid/home safety program for Hispanic migrant adolescents. American Journal of Preventive Medicine. 2001;20(4):258–265. doi:10.1016/s0749-3797(01)00300-2.
  16. Candelaria J, Woodruff SI, Elder JP. Language for Health: Nutrition education curriculum for low-English literate adults. Journal of Nutrition Education. 1996;28:266.
  17. Cook KS, Rice E. Social exchange theory. In: Handbook of Social Psychology. 2006. pp. 53–76.
  18. Cooper LA, Hill MN, Powe NR. Designing and evaluating interventions to eliminate racial and ethnic disparities in health care. Journal of General Internal Medicine. 2002;17(6):477–486. doi:10.1046/j.1525-1497.2002.10633.x.
  19. Elder JP, Candelaria J, Woodruff SI, Criqui M, Talavera G, Rupp J. Results of Language for Health: Cardiovascular disease nutrition education for Latino English-as-a-second-language students. Health Education and Behavior. 2000;27(1):50–63. doi:10.1177/109019810002700106.
  20. Forsyth BH, Lessler JT. Cognitive laboratory methods: A taxonomy. In: Biemer PP, Groves RM, Lyberg LE, Mathiowetz NA, Sudman S, editors. Measurement Errors in Surveys. New York: Wiley; 1991.
  21. Freimuth VS, Mettger W. Is there a hard to reach audience? Public Health Reports. 1990;105(3):232–238.
  22. Green LW, Kreuter MW. Health Promotion Planning: An Educational and Ecological Approach. 3rd ed. Mountain View, CA: Mayfield; 1999.
  23. Higgins DL, O’Reilly K, Tashima N, Crain C, Beeker C, Goldbaum G, Elifson CS, Galavotti C, Guenther-Grey C. Using formative research to lay the foundation for community level HIV prevention efforts: An example from the AIDS community demonstration project. Public Health Reports. 1996;111(Suppl):28–35.
  24. Hubbell FA, Chavez LR, Mishra SI, Magana JR, Valdez RB. From ethnography to intervention: Developing a breast cancer control program for Latinas. Journal of the National Cancer Institute Monographs. 1995;18:109–115.
  25. Israel BA, Schulz AJ, Parker EA, Becker AB, Allen A, Guzman JR. Critical issues in developing and following community-based participatory research principles. In: Minkler M, Wallerstein N, editors. Community-Based Participatory Research for Health. San Francisco, CA: Jossey-Bass; 2003. pp. 56–73.
  26. Kohler CL, Dolce JJ, Manzella BA, Higgins D, Brooks CM, Richards JM, Bailey WC. Use of focus group methodology to develop an asthma self-management program useful for community-based medical practices. Health Education Quarterly. 1993;20(3):421–429. doi:10.1177/109019819302000311.
  27. Krueger RA. Focus Groups: A Practical Guide for Applied Research. 3rd ed. Thousand Oaks, CA: Sage; 2000.
  28. Ledda MA, Walker EA, Basch CE. Development and formative evaluation of a foot self-care program for African-Americans with diabetes. Diabetes Educator. 1997;23(1):48–51. doi:10.1177/014572179702300105.
  29. Litrownik AJ, Elder JP, Campbell NR, Ayala GX, Slymen DJ, Parra-Medina D, Zavala F, Lovato C. Evaluation of a tobacco and alcohol use prevention program for Hispanic migrant adolescents: Promoting the protective factor of parent-child communication. Preventive Medicine. 2000;31:124–133. doi:10.1006/pmed.2000.0698.
  30. Marín G, Marín BV. A comparison of three interviewing approaches to studying sensitive topics with Hispanics. Hispanic Journal of Behavioral Sciences. 1989;11:330–340.
  31. Mayer JA, Lewis EC, Eckhardt L, Slymen D, Belch G, Elder J, Engelberg M, Eichenfield L, Achter A, Nichols T, Walker K, Kwon H, Talosig M, Gearen C. Promoting sun safety among zoo visitors. Preventive Medicine. 2001;33(3):162–169. doi:10.1006/pmed.2001.0875.
  32. Minkler M, Wallerstein N. Introduction to CBPR: New issues and emphases. In: Minkler M, Wallerstein N, editors. Community-Based Participatory Research for Health: From Process to Outcomes. San Francisco: Jossey-Bass; 2008. pp. 47–66.
  33. McLaughlin G. SMOG grading: A new readability formula. Journal of Reading. 1969;12(8):639–646.
  34. Perry CL. Creating Health Behavior Change: How to Develop Community-Wide Programs for Youth. Thousand Oaks, CA: Sage; 1999.
  35. Poggie J. Toward quality control in key informant data. Human Organization. 1972;31:23–30.
  36. Rosenthal R, Rosnow RL. Essentials of Behavioral Research: Methods and Data Analysis. 2nd ed. New York: McGraw-Hill; 1991.
  37. Shaughnessy JJ, Zechmeister EB. Research Methods in Psychology. New York: McGraw-Hill; 1997.
  38. Solomon FM, Linnan LA, Wasilewski Y, Lee AM, Katz ML, Yang J. Observational study in ten beauty salons: Results informing development of the North Carolina BEAUTY and Health Project. Health Education and Behavior. 2004;31(6):790–807. doi:10.1177/1090198104264176.
  39. Streng JM, Rhodes S, Ayala GX, Eng E, Arceo R, Phipps S. Realidad Latina: Latino adolescents, their school, and a university use photovoice to examine and address the influence of immigration. Journal of Interprofessional Care. 2004;18(4):403–415. doi:10.1080/13561820400011701.
  40. Steckler A, Linnan L. Process Evaluation for Public Health Interventions and Research. San Francisco: Jossey-Bass; 2002.
  41. Sussman S, Burton D, Dent CW, Stacy AW, Flay BR. Use of focus groups in developing an adolescent tobacco use cessation program: Collection of norm effects. Journal of Applied Social Psychology. 1991;21:1772–1782.
  42. Terpstra J, Johnson ML, Ayala GX. A school-based intervention trial to increase caregiver support for asthma management in middle school-aged youth. Health Education and Behavior. (in press). doi:10.3109/02770903.2012.656866.
  43. Viswanathan M, Ammerman A, Eng E, et al., editors. Community-Based Participatory Research: Assessing the Evidence. Rockville, MD: Agency for Healthcare Research and Quality; 2004.
  44. Wallerstein N, Duran B. Using community-based participatory research to address health disparities. Health Promotion Practice. 2006;7(3):312–323. doi:10.1177/1524839906289376.
  45. Wang C, Burris MA. Photovoice: Concept, methodology, and use for participatory needs assessment. Health Education and Behavior. 1997;24(3):369–387. doi:10.1177/109019819702400309.
  46. Weinberger M, Ferguson JA, Westmoreland G, Mamlin LA, Segar DS, Eckert GJ, Greene JY, Martin DK, Tierney WM. Can raters consistently evaluate the content of focus groups? Social Science & Medicine. 1998;46(7):929–933. doi:10.1016/s0277-9536(97)10028-4.
  47. Willis G, DeMaio T, Harris-Kojetin B. Is the bandwagon headed to the methodological promised land? Evaluation of the validity of cognitive interviewing techniques. In: Sirken M, Herrmann D, Schechter S, Schwarz N, Tanur J, Tourangeau R, editors. Cognition and Survey Research. New York: Wiley; 1999.
  48. Winett RA, King AC, Altman DG. Health Psychology and Public Health: An Integrative Approach. Elmsford, NY: Pergamon Press; 1989.
