Author manuscript; available in PMC 2012 Dec 6.
Published in final edited form as: J Sex Res. 1999 Feb;36(1):16–24. doi: 10.1080/00224499909551963

Interview Mode and Measurement of Sexual Behaviors: Methodological Issues

James N Gribble 1, Heather G Miller 1, Susan M Rogers 1, Charles F Turner 1
PMCID: PMC3516293  NIHMSID: NIHMS332001  PMID: 23226876

Abstract

Studies of sexual and other sensitive behaviors are often fraught with reporting biases. When interviewer-administered questionnaires (IAQs) are used to collect data, respondents may underreport sensitive behaviors and overreport normative behaviors. Self-administered questionnaires (SAQs) pose problems of their own: they require that respondents be literate and able to follow skip patterns. In recent years, the development of computerized technologies--audio computer-assisted self-interviewing (audio-CASI) and its telephone counterpart (T-ACASI)--has begun to overcome some of the limitations of IAQs and SAQs. These technologies provide a more private mode of data collection, standardized delivery of all questions, automated skip patterns, and range checks. Tested in a number of studies, audio-CASI and T-ACASI have been found to be effective in reducing response bias and can thus contribute to a better understanding of the prevalence and patterns of sexual and other sensitive behaviors.


Surveys--or, more generally, the method of asking questions and recording answers--remain one of the most commonly used techniques for obtaining data on sexual behavior. Although much attention tends to focus on the statistical information derived from surveys, behind the cross-tabulations, logistic regression, or other analytical models used by researchers lies a human encounter between two individuals, an interviewer and a respondent. The situational, cognitive, social, and psychological factors that arise within that interpersonal exchange affect the answers that are given and the data that are thereby generated (Turner, 1985). To understand the behaviors of interest in research on fertility, contraceptive practices, and STD transmission, one must confront the uncertainties introduced when this question-and-answer process is used to gather data on sensitive sexual behaviors. This task has become increasingly important given the increase in the number of surveys asking such sensitive questions and the increased reliance on data from these surveys in determining appropriate public policy.

In this review, we have drawn extensively on the contributions of the authors of several jointly authored works. These works include Gribble, Rogers, Miller, and Turner (1998); Turner, Danella, and Rogers (1995); Turner, Ku, Sonenstein, and Pleck (1996); Turner, Miller, Smith, Cooley, and Rogers (1996); Turner, Miller, and Rogers (1997); Turner, Ku, Rogers, Lindberg, Pleck, and Sonenstein (1998); Rogers, Miller, and Turner (1998); and Miller, Gribble, Mazade, Rogers, and Turner (1998). With the exception of this statement, we do not note the numerous places where we have drawn on these works.

In the collection of data on sexual behavior, a variety of approaches have been used. Some studies use interviewer-administered questionnaires (e.g., Fox, Odaka, Brookmeyer, & Polk, 1987; Martin, 1987; McCusker et al., 1988; Winkelstein, Lyman, & Padian, 1987) while others use self-administered questionnaires (e.g., Joseph et al., 1987; Marmor et al., 1982; McKusick, Hortsman, & Coates, 1985). Because sexual behavior is complex, survey designs may require elaborate skip and branch instructions so that the information gathered is tailored to the sexual histories of individual respondents. This approach may yield survey questionnaires that are too complex for reliable administration as a self-administered paper questionnaire. Reporting sexual behavior in a face-to-face interview, however, may be embarrassing to the respondent, and may cause some respondents to conceal important aspects of their sexual histories. Therefore, the issue of reporting bias becomes central to the assessment of the different modes of data collection: how the properties of a specific mode influence reporting bias and how modes compare in their effects on reporting bias (Groves, 1989).

When estimating the prevalence of illicit or stigmatizing behaviors, survey methodologists believe that a net negative bias occurs. The negative bias occurs because the number of respondents who deny engaging in those sensitive behaviors that, in fact, they have engaged in is expected to be larger than the number of respondents who report sensitive behaviors that they have not engaged in (Bradburn & Sudman, 1979; Catania, Gibson, Chitwood, & Coates, 1990; Fay, Turner, Klassen, & Gagnon, 1989; Miller, Turner, & Moses, 1990; Turner, Lessler, & Gfroerer, 1992). The challenge facing researchers is to reduce the bias in estimates of the prevalence of high-risk behaviors. By experimenting with the modes of survey administration, it is possible to estimate the relative level of reporting bias associated with one mode versus another.

In this review, we examine the effectiveness of different modes of administering surveys of both sexual behaviors and ancillary behaviors that increase the risk for the transmission of HIV and other STDs. Although the use of alcohol and most drugs is not a direct source of transmission of HIV and other STDs, data from several studies provide prevalence estimates of such behaviors, which have been shown to be associated with sexual risk taking (Stall, 1988; Stall, McKusick, Wiley, Coates, & Ostrow, 1986).

Comparison of In-Person Interviews and Self-Administered Questionnaires

In-person surveys using interviewer-administered questionnaires and those using self-administered questionnaires are two of the most common ways of conducting surveys. With an interviewer-administered questionnaire (IAQ), the interviewer reads each question aloud and the respondent provides a response. While this mode may produce higher interviewer credibility than other modes (Catania et al., 1990), the IAQ is likely to produce high levels of reporting bias for sensitive questions. IAQs allow the interviewer to probe ambiguous responses, but they may pose a threat to respondents, who may then underreport or deny certain sensitive behaviors. In addition, with an IAQ respondents have less control over the pace of the interview and are more prone to incorrectly choosing a response category presented early in a list over one presented later in the list (referred to as a “primacy” effect; see Krosnick & Alwin, 1987; Tourangeau & Smith, 1996).

Self-administered questionnaires (SAQs) provide an alternative approach for conducting in-person interviews. By reducing the fear of embarrassment or disclosure, SAQs provide a more private and less threatening means of reporting sensitive behaviors (Catania, McDermott, & Pollack, 1986). An increasing number of studies have investigated the effect of the mode of interview in surveys of self-reported data, often reporting a mode effect for a number of sexual behaviors.

A significant mode effect was found among women who were offered SAQs to report on abortion. When comparing data from abortion providers to data on abortions contained in three major national survey programs--National Survey of Family Growth (NSFG), National Longitudinal Survey of Labor Market Participation (NLS-Y), and Kantner and Zelnik’s national surveys of young women--Jones and Forrest (1992) estimated that the self-reported levels in the 1988 NSFG included only 35 percent of the abortions reported by providers during the 1984–1987 period. Women in the 1988 wave of the NSFG were also given an SAQ to offer a second, private opportunity to report abortions (London & Williams, 1990), which, according to Jones and Forrest, increased the reporting of abortions to 71 percent of the level reported by providers.

Limitations of SAQs

Although some variation is seen across studies, in-person SAQs generally offer a number of advantages over in-person IAQs. In addition to the substantial reduction in reporting bias, SAQs can be less expensive to administer when given to many people simultaneously in group settings such as school classes, HIV testing centers, or STD clinics (Catania et al., 1990). In-person surveys that include an SAQ component, however, can still be a high-cost method of data collection. Moreover, although SAQs provide a private setting for reporting sexual and other sensitive behaviors, there are limitations to their use. Because sexual behavior is complex, survey instruments that collect such data are designed with contingent questioning (branching or skip patterns) that may be too complex for respondents to follow in a self-administered form (Lessler & Holt, 1987).

Crucial to the effectiveness of SAQs is the respondents’ ability to read and comprehend the questions. According to the National Center for Education Statistics (1993), the reading skills of a sizable segment of the population of the United States are limited. Populations of special interest in the study of sexual behavior, including people whose history of STDs or drug use places them at higher risk of HIV infection, often have severe literacy problems. In studies of intravenous drug users in Baltimore, for instance, AIDS researchers estimated that between 30% and 50% of study participants could not reliably complete a self-administered survey form.1

In addition to problems related to the correct administration of SAQs, data-quality problems can also arise. Respondents may be capable of responding to the questionnaire, but may refuse to answer a particular question. Reviews of surveys of sexual behavior found that refusal rates on questions about masturbation in SAQs ranged from almost 7% to 19% and from 6% to 13% on questions regarding frequency of vaginal intercourse (Bradburn et al., 1978; Catania et al., 1986; Johnson & DeLamater, 1976). Similarly, Rogers and Turner (1991) analyzed SAQ data on sexual behavior and found that questions on male-male contact were not answered by 19% of men completing the sexual behavior SAQs used in the 1989 and 1990 General Social Surveys conducted by the National Opinion Research Center. Attempting to impute values when such a large percentage of the data are missing can be extremely labor intensive and time consuming (e.g., see Fay et al., 1989; Turner, Miller, & Moses, 1989). In addition, respondents may provide answers to all questions in an SAQ, but the answers may be logically inconsistent. For example, Cox, Witt, Traccarella, and Perez-Michael (1992) reported that of 946 respondents in the 1988 National Household Survey of Drug Abuse who indicated in one or more questions that they had used cocaine, more than 14% gave responses in other places that indicated that they had never used cocaine.
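
As an illustration of the kind of cross-item consistency check described above, the sketch below flags respondents whose answers contradict one another. The field names and response codes are hypothetical and are not the NHSDA's actual variables; this is only a minimal sketch of the idea.

```python
# Hypothetical cross-item consistency check in the spirit of Cox et al. (1992);
# the field names and response codes are invented for illustration.
def inconsistent_cocaine_report(record):
    """Flag a respondent who reports ever using cocaine in one item
    but denies any lifetime use in another."""
    says_ever_used = record.get("cocaine_ever") == "yes"
    says_never_used = record.get("cocaine_lifetime_frequency") == "never"
    return says_ever_used and says_never_used

respondents = [
    {"id": 1, "cocaine_ever": "yes", "cocaine_lifetime_frequency": "never"},
    {"id": 2, "cocaine_ever": "yes", "cocaine_lifetime_frequency": "1-2 times"},
    {"id": 3, "cocaine_ever": "no",  "cocaine_lifetime_frequency": "never"},
]
print([r["id"] for r in respondents if inconsistent_cocaine_report(r)])  # [1]
```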

Data Collection with Computers

Since as early as the mid-1980s (e.g., Waterton & Duffy, 1984), automated self-interviewing with computers has been used in attempts to reduce underreporting of sensitive or stigmatized behaviors. Both computer-assisted personal interviewing (CAPI) and computer-assisted telephone interviewing (CATI) have been used to automate survey procedures, but in neither case has the respondent-interviewer interaction been fundamentally changed by the introduction of the new technology. In CAPI and CATI surveys, the measurement process continues to rely on an interviewer asking questions and a respondent providing answers. It is therefore not surprising that the evidence regarding the effect of CAPI and CATI technologies on the willingness of respondents to provide accurate responses to survey questions is both weak and inconsistent (Turner et al., in press).

The development of computer-assisted self-interview (CASI) technology has fundamentally altered the interview context for measurements of sexual and other sensitive behaviors. With CASI technologies, the role of the interviewer is minimized and the key elements of measurement are standardized across the interview. In the video-CASI technology, respondents view the questions on a computer screen and enter their answers by using the computer keyboard. Video-CASI allows the respondent to interact privately with the computer, but requires that the respondent be able to read and understand the question appearing on the screen. Video-CASI has been explicitly tested in several methodological studies discussed below (O’Reilly, Hubbard, Lessler, Biemer, & Turner, 1994; Tourangeau & Smith, 1996; Rogers, Miller, Forsyth, Smith, & Turner, 1996), and it is an alternative interview mode that is routinely available in most audio-CASI and CAPI systems.

Audio-CASI Data Collection Systems

Audio-CASI is a more advanced technology that adds audio features to those provided by video-CASI. Using portable laptop computers, respondents listen to questions through headphones and enter answers by pressing labeled keys. The audio component uses recorded human voices of natural quality rather than synthesized speech, and questions play back without significant delay. Unlike video-CASI and other interviewing modes, audio-CASI does not require that the respondent be literate. Moreover, because audio-CASI relies on prerecorded questions, the technology allows multilingual administration without requiring that the interviewers be multilingual. Audio-CASI thus offers the advantages of computer-based interviewing, provides a private mode of collecting data, and is completely standardized, with every respondent hearing every question in the same way.

The relative preference for audio-CASI was studied by Rogers et al. (1996), who compared CASI using audio and video together (i.e., full audio-CASI) with CASI using audio alone in a study that included questions on a range of sexual behaviors (e.g., anal sex, masturbation, oral sex, same-gender sex, and STDs). In a sample of 194 respondents between 18 and 45 years of age living in Baltimore County, Maryland, the audio+video CASI mode performed significantly better: interviews took less time to complete, fewer questions were played more than once, and respondents backed up to a previous question less often. In an overall assessment of the two modes, respondents who indicated a preference selected the audio+video mode by a margin of 9 to 1. In general, the audio+video mode was preferred on most dimensions: it was easier to use and understand, more interesting, better for asking sensitive questions about sex and illegal activities, and better for eliciting honest answers. The only perceived drawback of the audio+video mode was that respondents thought it offered less privacy than the audio-only mode -- due, we suspect, to the perceived possibility that someone else might see the video screen.

A comparison of CAPI, CASI, and audio-CASI was carried out by Tourangeau and Smith (1996) using an area probability sample of more than 300 adults in Cook County, Illinois. They hypothesized that CASI and audio-CASI would increase reported use of illicit drugs and decrease the disparity between the average number of sex partners reported by men and women. Their experiment also varied the context (permissive or restrictive views about sexual activity) and the format (one open format and two closed formats) of the sex partner items. For the three recall periods (1 year, 5 years, and lifetime), audio-CASI elicited the highest mean number of reported sex partners, and both forms of CASI interviewing reduced the disparity between men and women in the number of sex partners reported. More respondents indicated that they had had oral sex and anal sex under the audio-CASI mode than with CASI or CAPI. Audio-CASI also yielded the highest reported levels of marijuana and cocaine use during the past month, past year, and lifetime, with statistically significant differences for lifetime use of marijuana and marginally significant differences for lifetime use of cocaine and for marijuana use in the past year. Tourangeau and Smith concluded that both audio-CASI and CASI provide a greater sense of privacy and make participants feel that the study is both legitimate and of scientific value; however, the auditory input provided in audio-CASI and CAPI interviews facilitates understanding of the questions and completion of the questionnaire.

Audio-CASI in National Surveys of Sexual Behavior

Today, the use of audio-CASI is growing explosively. In January 1995, the in-person audio-CASI technology developed at the Research Triangle Institute (RTI) was field tested in two major national surveys: the National Survey of Adolescent Males, or NSAM (new cohort: N = 1,741 males, ages 15 to 19) and the National Survey of Family Growth, or NSFG (Cycle V, N = 10,000 females, ages 15 to 44). Data from these surveys indicate that both field interviewers and respondents had little trouble with the audio-CASI technology and that audio-CASI yields substantially higher levels of reporting of sensitive behaviors.

National Survey of Adolescent Males

Since 1988, NSAM has tracked the sexual, contraceptive, and HIV risk-related behaviors of a national probability sample of US men who were between the ages of 15 and 19 in 1988. Follow-up interviews took place in 1991 and 1995. In 1995, a new cohort of 1,729 young men between the ages of 15 and 19 was also interviewed. In the 1988 and 1991 rounds of the NSAM, an SAQ was used at the end of the interview to ask questions about sensitive behaviors, such as use of illicit drugs, male-male sexual contacts, and violence (Sonenstein, Pleck, & Ku, 1991). Puzzling results, especially the low estimates of male-male sexual contacts, prompted concerns about underreporting bias in these rounds. This concern, coupled with the desire to increase privacy (both actual and perceived), led to the inclusion of a methodological experiment that included 1,690 respondents (97.7% of those interviewed) in the 1995 round (Turner, Ku, et al., 1996). Questions on sensitive behaviors were administered either in an SAQ at the end of the IAQ or in an audio-CASI mode.

Results on the effect of mode of survey administration for measurement of male-female and male-male sexual behaviors are presented in Table 1 (Turner et al., 1998). The reporting of male-female sexual contacts was generally not affected by mode of interview administration. With the exception of having had sex with a prostitute, which was highly significant (crude OR = 3.65), none of the differences in reported levels of male-female sexual contact were statistically significant.

TABLE 1.

Alternate estimates of prevalence of male-female and male-male sexual behaviors among 1995 NSAM respondents obtained by using different methods of questioning.

MEASUREMENT | PAPER SAQ PREVALENCE (per 100) | AUDIO-CASI PREVALENCE (per 100) | CRUDE ODDS RATIO | ADJUSTED ODDS RATIO (a)

Male-Female Sexual Contacts
Ever had sex with a prostitute (b) | 0.7 | 2.5 | 3.65** | 4.24**
Ever been paid for sex (c) | 1.6 | 3.8 | 2.36^ | 2.60
Sexual intercourse with female within last year (d, e) | 49.6 | 47.8 | 0.93 | 1.24
5+ lifetime female partners (d, e) | 16.4 | 18.4 | 1.15 | 1.46^
Condom use at last sex (among males reporting sex) (d) | 64.4 | 64.0 | 0.98 | 1.01
Ever had anal intercourse with female | 10.3 | 11.4 | 1.13 | 1.26
Ever made girl pregnant (d, e) | 7.9 | 6.5 | 0.81 | 0.98
Ever fathered a child (d, e) | 4.6 | 2.4 | 0.51 | 0.59
Ever had vaginal, oral, or anal intercourse with female (f) | 68.1 | 63.9 | 0.83 | 0.81

Male-Male Sexual Contacts
Ever masturbated another male | 1.4 | 2.6 | 1.94 | 2.25^
Ever been masturbated by another male | 0.9 | 3.5 | 3.79* | 4.23^
Ever had receptive oral sex with another male (your mouth on his penis) | 0.5 | 2.3 | 5.08* | 5.68^
Ever had insertive oral sex with another male (your penis in his mouth) | 1.1 | 3.1 | 2.83 | 3.50^
Ever had insertive anal sex with another male (your penis in his rectum or butt) | 1.0 | 1.9 | 1.85 | 2.41
Ever had receptive anal sex with another male (his penis in your rectum or butt) | 0.1 | 0.8 | 7.91** | 7.85*
Any male-male sex | 1.5 | 5.5 | 3.84** | 4.20**
(a) Odds ratio adjusted for covariates: race (White, Black, or other as a residual category), whether the respondent has health insurance, age, whether he currently attends school, and whether he reported, in the interviewer-administered portion of the survey, ever having had sexual intercourse with a female.

(b) Although we have listed contact with a prostitute under male-female behaviors, the question was not gender-specific. It is possible that some contacts were with a male prostitute.

(c) A total of 59 respondents reported ever being paid for sex; of those, 88% reported being paid by a female or females, 7% by a male or males, and 5% by both. An additional 11 respondents in the paper SAQ (not included in the Table 1 estimate) reported they had never been paid for sex, yet noted the gender of the person(s) who paid them in the subsequent question.

(d) This question from the experiment repeats a question on heterosexual contact that was previously asked in the interviewer-administered portion of the survey. Because respondents may have felt compelled to answer consistently, responses to this question could be subject to a consistency bias that might have attenuated the effect of interview mode.

(e) In the SAQ, respondents who reported no sexual activity were instructed to skip a series of questions on specific sexual practices. For this analysis, these respondents were recoded as not reporting the particular behavior.

(f) Estimated prevalence is for responding yes to one (or more) of four questions asking whether a respondent had engaged in vaginal sex, insertive or receptive oral sex with a female, or anal sex with a female.

^ p ≤ .15 for two-tailed test of the null hypothesis that OR = 1.0.

* p ≤ .05 for two-tailed test of the null hypothesis that OR = 1.0.

** p ≤ .01 for two-tailed test of the null hypothesis that OR = 1.0.

Unlike its effect on the reporting of male-female sexual contacts, audio-CASI yielded a number of significant differences in the reporting of male-male sexual contacts. Among audio-CASI respondents, 5.5 percent reported some type of male-male sex, which is significantly higher (p < 0.001) than the 1.5 percent reported by respondents given the paper-and-pencil SAQ. The higher prevalence of male-male sex reported by audio-CASI respondents is more consistent with prevalences based on retrospective data reported by adult men when asked about their male-male sexual behavior during adolescence (4 percent to 9 percent).
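
As a rough check on how the crude odds ratios in Table 1 relate to the reported prevalences, the sketch below converts the two prevalences for any male-male sex into an odds ratio; it reproduces the tabled value up to rounding (the published figure was presumably computed from unrounded counts).

```python
def odds_ratio_from_prevalences(p_exposed, p_reference):
    """Crude odds ratio comparing two prevalences expressed as proportions."""
    odds_exposed = p_exposed / (1.0 - p_exposed)
    odds_reference = p_reference / (1.0 - p_reference)
    return odds_exposed / odds_reference

# Any male-male sex: 5.5 per 100 under audio-CASI vs. 1.5 per 100 under the paper SAQ.
print(round(odds_ratio_from_prevalences(0.055, 0.015), 2))
# ~3.82; Table 1 reports 3.84, the difference reflecting rounding of the prevalences.
```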

For each of the six specific male-male sexual contacts included in the NSAM (active and passive masturbation, and insertive and receptive oral and anal sex), audio-CASI respondents reported higher levels of the behavior than SAQ respondents, with crude odds ratios ranging from 1.85 to 7.91. Five of the seven odds ratios were statistically significant, contributing to a consistent pattern of higher reported levels of male-male sex elicited with audio-CASI than with paper-and-pencil SAQs.

Turner et al. (1998) also estimated adjusted odds ratios, controlling for race (Black, White, Other), respondent’s age, whether the respondent was attending school, whether the respondent reported during the earlier IAQ portion of the interview that he had ever had sex with a female, and whether the respondent was covered by health insurance. In general, the estimated adjusted odds ratios were higher than the crude odds ratios, and the adjusted odds ratio for the global indicator of any male-male sexual contact remained highly significant.
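
Adjusted odds ratios of this kind are conventionally estimated with a logistic regression of the reported behavior on interview mode plus the covariates. The sketch below illustrates the idea on simulated data with a subset of the covariates; the variable names, coding, and data are assumptions for illustration, not the NSAM data or the authors' actual analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.integers(15, 20, size=n)            # ages 15-19, as in the NSAM cohort
in_school = rng.integers(0, 2, size=n)
has_insurance = rng.integers(0, 2, size=n)
audio_casi = rng.integers(0, 2, size=n)       # 1 = audio-CASI, 0 = paper SAQ

# Simulate a rare behavior whose reporting is boosted under audio-CASI.
log_odds = -3.0 + 1.2 * audio_casi + 0.05 * (age - 17) + 0.1 * in_school
reported = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

X = sm.add_constant(np.column_stack([audio_casi, age, in_school, has_insurance]))
fit = sm.Logit(reported, X).fit(disp=False)
adjusted_or = np.exp(fit.params[1])           # odds ratio for audio-CASI vs. paper SAQ
print(round(adjusted_or, 2))                  # should land near exp(1.2) ≈ 3.3
```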

National Survey of Family Growth

The 1995 NSFG collected repeated measurements using IAQs and audio-CASI. Women in the 1995 NSFG first reported on their history of abortions, number of sexual partners, and a wide variety of other topics to a female interviewer who used CAPI technology to record their answers. In the second phase of the interview, a small number of the same questions were repeated, but this time the respondent used audio-CASI. Thus, all women who completed the NSFG face-to-face IAQ had a second chance to provide responses in the audio-CASI mode, which was hypothesized to produce a more complete reporting of sensitive events such as abortion.

Analyses of the data indicated that among sexually active women, the odds of reporting an abortion were approximately 1.3 times greater when audio-CASI was used to collect data than with an IAQ (Kinsey, Thornberry, Carson, & Duffer, 1995). Audio-CASI had a larger effect among Black women than White women: 7.3 percent of Black females who reported no abortions in the IAQ mode reported one or more abortions when administered the audio-CASI version; for White women, the comparable percentage was 4.2 percent. Similarly, 10.3 percent of Black women reporting one abortion in the IAQ mode reported two or more in the audio-CASI mode, compared with 5.0 percent of White women.

These results may actually understate the impact of audio-CASI on reporting bias. In the NSFG study, all respondents gave their responses to a human interviewer before giving responses in the audio-CASI mode. Under that design, women may have been motivated to provide consistent responses, or to avoid admitting that they had provided false information to the human interviewer. Had the order of mode presentation been varied experimentally, respondents may have felt less need to provide consistent responses, and the advantage of audio-CASI over the IAQ in obtaining data on abortion history may have been even larger.

Telephone Interviews

Because of the high costs associated with sending interviewers to thousands of households throughout the country, many surveys of AIDS-related and other sensitive behaviors have been conducted using telephone survey techniques. However, one of the drawbacks of this approach is that telephone interviewing often cannot access some of the very high-risk populations, such as intravenous drug users and street youth, because both groups are highly transient and are less likely to have residential telephones (Catania et al., 1990). Geographic differences have been found in telephone ownership: Fewer households in the South and in rural areas have telephones. Socioeconomic and demographic factors also influence the likelihood that a person has a telephone: Blacks, people under age 25, divorced or separated people, unemployed and low-income people, and people with low educational attainment are less likely to have telephones (Aquilino & LoSciuto, 1989; Frank, 1985; Thornberry & Massey, 1978; Thornberry & Massey, 1988). If the risk behaviors of these groups differ significantly from those of people with telephones, then telephone surveys may incorrectly estimate the prevalence of AIDS risk behaviors and levels of knowledge about AIDS.

The effectiveness of telephone interviews in collecting data on sensitive behaviors appears to be higher than that of in-person IAQs. Bradburn and Sudman (1979) found that telephone interviews yielded the highest percentage of completed interviews on drunken driving when compared with a standard IAQ, a randomized-response IAQ, and an SAQ. Czaja (1987) compared telephone interviews and in-person IAQs about sexual behaviors and found that respondents were equally likely to answer questions about some sexual behaviors in both modes, but more likely to report sensitive behaviors on the telephone than in an in-person IAQ. Although the respondents were not randomly assigned to conditions, Czaja’s work suggests that response bias is lower with telephone interviews than with in-person IAQs.

Although telephone interviews may have higher completion rates and less reporting bias than in-person IAQs, they are not without their own problems. Gfroerer and Hughes (1992) found significant differences between telephone interviews and self-administered questionnaires in the estimated prevalences of lifetime and past-year marijuana and cocaine use. Most notable was a 121% relative increase in reported cocaine use in the past 12 months among participants who completed the National Household Survey of Drug Abuse (NHSDA) SAQ compared with telephone respondents. Aquilino (1994) also found sizable differences in reported use of illicit drugs and alcohol across modes of survey administration, with estimated prevalences of marijuana, cocaine, crack, and alcohol use higher among SAQ respondents than among telephone respondents.

Telephone-Audio-CASI Data Collection System

In the past five years, an advance in telephone survey technology, the telephone audio computer-assisted self-interview (T-ACASI) system, has been developed by researchers at RTI (Turner, Miller, et al., 1996). With T-ACASI, it is possible to conduct complex, fully private call-in and call-out interviews. In a call-in survey, the respondent initiates the interview by calling a number that is answered by the T-ACASI interviewing system. In a call-out survey, a human telephone interviewer calls the respondent and subsequently transfers the call to the T-ACASI system. In addition to privacy, T-ACASI offers the advantages of other computer-assisted survey technologies: branching through a complex questionnaire, consistency and range checks, and production of data files. Like audio-CASI, T-ACASI provides a completely standardized measurement system by ensuring that every respondent hears the same question asked in exactly the same way. Because questions and response categories are presented aurally to participants, traditional survey constraints regarding literacy and English language skills are eliminated.
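
The automated branching and range checks that T-ACASI and other computer-assisted systems provide can be illustrated with a minimal sketch; the questions, answer codes, and skip rule below are invented for illustration and are not those of the RTI system.

```python
# Hypothetical questionnaire fragment: the items, valid ranges, and skip rule
# are illustrative only.
QUESTIONS = {
    "Q1": {"text": "Have you had sex in the past 12 months? (1=yes, 2=no)",
           "valid": {1, 2},
           "next": lambda ans: "Q2" if ans == 1 else "END"},
    "Q2": {"text": "With how many partners? (enter 1-95)",
           "valid": set(range(1, 96)),
           "next": lambda ans: "END"},
}

def administer(get_answer):
    """Walk the questionnaire, re-asking any item whose keyed answer fails the range check."""
    responses, qid = {}, "Q1"
    while qid != "END":
        item = QUESTIONS[qid]
        answer = get_answer(item["text"])
        if answer not in item["valid"]:   # automated range check
            continue                      # re-ask the same question
        responses[qid] = answer
        qid = item["next"](answer)        # automated skip/branching
    return responses

# A respondent keying 2 (no) to Q1 is routed past Q2 automatically.
print(administer(lambda prompt: 2))       # {'Q1': 2}
```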

The T-ACASI system developed by RTI is a logical extension of the audio-CASI technology (Turner et al., in press)2. In the T-ACASI system, computers equipped with a hardware interface can handle incoming and outgoing calls. The system can be implemented on laptops in the field, on desktop computers in clinics, or from residences where respondents call into the system. Administration of a survey can be moved from one environment to another by simply copying the relevant software and the digitized voice files.

The call-out T-ACASI system relies on live interviewers to screen households, recruit participants, and explain the purpose of the study. Once a participant is recruited and has completed nonsensitive portions of the questionnaire, the call is transferred to a T-ACASI computer. The participant then hears prerecorded questions and response categories, and answers by pushing touch-tone telephone buttons. The T-ACASI portion is entirely private. When the subject has completed the recorded portion of the survey, the call can be sent back to the interviewer to close out the interview.

The T-ACASI system has been pilot tested in two studies that have yielded favorable results. The first implementation of the system used the questionnaire from the National AIDS Behavior Survey (Catania et al., 1992). The pilot study used a crossover experimental design to test respondents’ reactions to the new technology (see Turner, Miller, et al., 1996). Respondents first answered introductory questions (Section A), asked by the telephone interviewer, covering nonsensitive personal characteristics and attitudes.3 The sensitive questions, which included heterosexual and same-gender sexual experiences, HIV serostatus, and drug use, were presented in two sections (Sections B and C), with half of the sensitive questions asked by standard telephone methods and the other half asked by T-ACASI. Respondents were randomly assigned to one of two experimental conditions: (a) questions in Section B administered by a human interviewer and questions in Section C administered by T-ACASI, or (b) questions in Section B administered by T-ACASI and questions in Section C administered by a human interviewer. In the last segment of the interview (Section D), a human interviewer asked respondents a series of questions evaluating their experience with each mode of interviewing.

The pilot study was not only a test of the feasibility of implementing T-ACASI, but also a test of two other explicit hypotheses: (a) respondents would feel more comfortable reporting sensitive sexual behaviors to a computer in a T-ACASI interview than to a human in a standard telephone interview, and (b) respondents would be more likely to report engaging in stigmatized or sensitive behaviors (e.g., anal intercourse) and less likely to report normative behaviors (e.g., always using condoms) in the more private T-ACASI interview mode than in a standard telephone interview with a human interviewer.

Published results (Turner, Miller, et al., 1996) from the preliminary analysis of the first 142 cases (see Table 2) suggest that T-ACASI is both feasible and likely to yield high-quality data on sensitive behaviors. Among those who judged one method superior, T-ACASI was thought to provide better protection of respondents’ privacy by a 9 to 2 margin, and T-ACASI was felt to be a better way to collect information on sensitive behaviors by a margin of 3 to 1. T-ACASI was also thought to be more likely to elicit honest reporting of sexual and drug use behaviors by a margin of 4 to 1 among respondents who had an opinion. Among respondents who reported a preference, T-ACASI was thought to provide a more comfortable environment for answering sensitive questions by a 2 to 1 margin.

TABLE 2.

Estimates of Prevalence of Sensitive Behaviors Obtained from Telephone Interviews using: (1) Human Interviewers, and (2) Telephone Audio-CASI (T-ACASI)

MEASUREMENT | HUMAN INT.: ESTIMATE per 100 (Base N) | T-ACASI: ESTIMATE per 100 (Base N) | ODDS RATIO | P

Anal Intercourse
Ever had anal intercourse * | 25.4 (67) | 42.0 (50) | 2.13 | 0.03
Had anal intercourse in past 6 months | 3.0 (67) | 12.0 (50) | 4.43 | 0.03

Oral Sex
Given oral sex (since age 18) | 79.7 (59) | 79.5 (73) | 0.99 | ns
Received oral sex (since age 18) | 89.8 (59) | 89.0 (73) | 0.92 | ns

Limited Sexual Experience
Had no sex partners since age 18 | 1.6 (61) | 7.6 (79) | 4.93 | 0.09
Had no sex in last 5 years | 4.8 (62) | 11.4 (79) | 2.53 | 0.15
Did not have sex in past 6 months | 1.5 (67) | 8.0 (50) | 5.74 | 0.01
Had sex fewer than 10 times in past 6 months | 22.7 (67) | 41.3 (50) | 2.51 | 0.01

Condom Use
Never used a condom in lifetime | 8.1 (62) | 18.4 (76) | 2.57 | 0.07
Used condom every time had sex in past 6 months | 14.8 (54) | 6.8 (44) | 0.42 | 0.14
Used condom almost every time or every time when having sex in past 6 months | 27.8 (54) | 15.9 (44) | 0.49 | 0.14

Stability and Quality of Relationships
Most recent sexual relationship lasted less than 6 months | 5.8 (52) | 21.3 (61) | 4.42 | 0.01
Never discussed sex life with most recent partner | 1.9 (52) | 14.8 (61) | 8.83 | 0.03
Discussed sex life less than once a month with most recent partner | 28.8 (52) | 49.2 (61) | 2.39 | 0.03
Ever had a one-night stand since age 18 | 59.0 (61) | 64.4 (73) | 1.26 | ns

NOTE. P-values are those for statistical tests of association in 2-way tabulations of Interview Mode by Question Response. In cases where the response distributions have more than two categories, these p-values do not apply to each individual odds ratio. See source for further details of statistical tests.

*

Unpublished analyses of final sample indicate that this result was attenuated in the complete sample.

Although the sample sizes in the preliminary analysis were relatively small, the mode effect was significant (or approached significance) for many of the most sensitive questions in the survey (see Table 2). For example, anal sex among heterosexuals, which is thought to be the most sensitive behavior reported by a sizable proportion of the U.S. population (see Laumann, Gagnon, Michael, & Michaels, 1994), was reported at a significantly higher prevalence with T-ACASI (42.0 percent) than with a human interviewer (25.4 percent)4. Using T-ACASI, respondents were also more likely to admit that they had never used a condom, that their most recent sexual relationship lasted less than 6 months, and that they had very limited sexual experience (that is, no sexual partners in adulthood, no sexual intercourse in the past 6 months, or intercourse fewer than 10 times in the past 6 months).
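
The cell counts behind these comparisons can be recovered, approximately, from the prevalences and base Ns in Table 2. The sketch below does so for ever having had anal intercourse and recomputes the odds ratio; because the reconstruction starts from rounded percentages, it is a check on the tabled odds ratio rather than a re-analysis.

```python
# Reconstruct approximate 2 x 2 counts for "ever had anal intercourse" from the
# prevalences and base Ns reported in Table 2, then recompute the odds ratio.
def yes_no_counts(prevalence_pct, base_n):
    yes = round(prevalence_pct / 100 * base_n)
    return yes, base_n - yes

human_yes, human_no = yes_no_counts(25.4, 67)     # about 17 yes, 50 no
tacasi_yes, tacasi_no = yes_no_counts(42.0, 50)   # 21 yes, 29 no

odds_ratio = (tacasi_yes / tacasi_no) / (human_yes / human_no)
print(round(odds_ratio, 2))   # ~2.13, matching the odds ratio reported in Table 2
```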

Normative behaviors also appear to be overreported less often in T-ACASI mode than with human interviewers. For example, only 6.8 percent of respondents indicated they had “used a condom every time they had sex in the past 6 months” when the T-ACASI mode was used, compared to 14.8 percent when responses were provided directly to a human interviewer.

Currently, T-ACASI is being used in two large-scale studies of sexual and other sensitive behaviors. One is a large-scale national survey of sexual behavior conducted as a randomized field experiment, in which one-half of respondents will be interviewed using standard telephone survey procedures and one-half will be interviewed using T-ACASI. The other is an experimental test of the impact of T-ACASI on the reporting by gay men of HIV status, sexual behavior, drug use, and other sensitive behaviors, which is being conducted as a methodological add-on to the Urban Men’s Health Survey.

Conclusions

In recent years, the technologies available for conducting interviews about sexual behavior, illicit drug use, and other sensitive subjects have developed rapidly. These new means of data collection appear to reduce reporting bias, which can substantially distort measurements of activities that are embarrassing, stigmatizing, or illegal. What people may not be willing to admit to a human interviewer, they are much more likely to report in a more private setting, such as a self-administered questionnaire. However, the depth of information that is needed about certain high-risk behaviors can make an SAQ extremely complex and difficult to complete. Moreover, many individuals who engage in high-risk behaviors have limited literacy skills, which may prevent them from responding accurately to an SAQ. Audio-CASI technology is a step towards providing a private, self-administered means of collecting data on sensitive behaviors that does not depend on the respondent’s ability to read. Data from the 1995 NSAM indicate that audio-CASI is even more effective than paper-and-pencil SAQs in reducing reporting bias related to a number of sensitive behaviors, at least for adolescent males.

In-person interviews can become prohibitively expensive when it is necessary to reach thousands of households to find a sufficient number of participants who engage in certain low-prevalence, high-risk behaviors, especially behaviors related to HIV transmission. Telephone interviews have often been used in these large-scale studies. But telephone interviewing has the inherent limitation of not being able to reach households without a telephone, which may lead to an underrepresentation of high-risk groups. And although the interviewer-respondent interaction is not face-to-face, telephone interviewing can still produce significant reporting bias. Telephone audio-CASI (T-ACASI) technology is a new means of collecting data that creates privacy for the respondent, thus contributing to a reduction in reporting bias. With a T-ACASI system, the participant’s phone call is transferred to an automated system that plays prerecorded questions and response categories, to which the participant responds by pushing touch-tone telephone buttons. Pilot studies indicate that most respondents prefer T-ACASI to a human telephone interviewer on several dimensions. Preliminary results also indicate that participants report significantly higher levels of stigmatized or illicit activities and lower levels of normative behaviors when interviewed using T-ACASI technology.

The advent of computer-based self-interviewing technologies -- such as audio-CASI and T-ACASI -- is changing the ways in which sexual and other sensitive data are collected. By reducing response bias in research measurements, these new technologies may improve our understanding of the prevalence and patterns of sexual and other sensitive behaviors.

Acknowledgments

Preparation of this review was supported by grants from the National Institute of Child Health and Human Development and the National Institute on Aging (R01-HD/AG31067-04) and from the National Institute of Mental Health (R01-MH56318-01).

Footnotes

1

David Celentano, School of Hygiene and Public Health, Johns Hopkins University, in discussion with the steering committee for the Multisite Trial of Behavior Interventions to Halt the Spread of HIV, sponsored by the National Institute of Mental Health, February 1991.

2

Early experimentation with T-ACASI began at the Bureau of Labor Statistics during the late 1980s under the rubric of Touch-Tone data entry (TTDE) surveys (Clayton, 1991; Clayton & Harrell, 1989; Werking & Clayton, 1990; Werking, Clayton, Rosen, & Winter, 1988; Werking, Tupek, & Clayton, 1988).

3

The sample was restricted to people between the ages of 18 and 45. The pilot study used a composite sample with two strata. The first and largest stratum (target n = 200) was recruited from a probability sample of households with listed telephones in Cook County, Illinois. The second stratum (target n = 50) was made up of patients recruited from the Wake County STD clinic in Raleigh, NC. Results presented here are based on preliminary analyses of the first 142 interviews conducted in the pilot study.

4

Unpublished analyses of the completed study indicate that this effect was attenuated in the complete sample.

References

1. Aquilino W. Interviewer mode effects in surveys of drug and alcohol use: A field experiment. Public Opinion Quarterly. 1994;58:210–240.
2. Aquilino W, LoSciuto L. Effects of mode of data collection on the validity of reported drug use. Conference Proceedings: Health Survey Research Methods (DHHS Publication No. PHS 89-3447). Washington, DC: U.S. Government Printing Office; 1989.
3. Bradburn NM, Sudman S. Improving interview method and questionnaire design. San Francisco: Jossey-Bass; 1979.
4. Bradburn N, Sudman S, Blair E, Stocking C. Question threat and response bias. Public Opinion Quarterly. 1978;42:221–234.
5. Catania JA, Coates TJ, Stall R, Turner H, Peterson J, Hearst N, Dolcini MM, Hudes H, Gagnon J, Wiley J, Groves R. Prevalence of AIDS-related risk factors and condom use in the United States. Science. 1992;258:1101–1106. doi: 10.1126/science.1439818.
6. Catania JA, Gibson DR, Chitwood DD, Coates TJ. Methodological problems in AIDS behavioral research: Influences on measurement error and participation bias in studies of sexual behavior. Psychological Bulletin. 1990;108:339–362. doi: 10.1037/0033-2909.108.3.339.
7. Catania JA, McDermott L, Pollack L. Questionnaire response bias and face-to-face interview sample bias in sexuality research. The Journal of Sex Research. 1986;22:52–72.
8. Clayton RL. Developing CASI data collection methods in the current employment statistics survey. Paper presented at the CASIC Methodology Panel at the U.S. Bureau of the Census; Suitland, MD; 1991 Apr.
9. Clayton R, Harrell L. Developing a cost model of alternative data collection methods: MAIL, CATI, and TDE. Proceedings of the American Statistical Association, Survey Research Methods Section; 1989.
10. Cox B, Witt M, Traccarella M, Perez-Michael A. Inconsistent reporting of drug use in the 1988 NHSDA. In: Turner CF, Lessler JT, Gfroerer JC, editors. Survey measurement of drug use: Methodological issues (DHHS Publication No. 92-1929). Washington, DC: Government Printing Office; 1992.
11. Czaja R. Asking sensitive behavioral questions in telephone interviews. International Quarterly of Community Health Education. 1987;8:23–32. doi: 10.2190/XT6W-31CX-TD87-E643.
12. Fay RE, Turner CF, Klassen AD, Gagnon JH. Prevalence and patterns of same-gender sexual contact among men. Science. 1989;243:334–348. doi: 10.1126/science.2911744.
13. Fox R, Odaka NJ, Brookmeyer R, Polk BF. Effect of HIV antibody disclosure on subsequent sexual activity in homosexual men. AIDS. 1987;1:241–246.
14. Frank B. Telephone surveying for drug abuse: Methodological issues and an application. In: Rouse B, Kozel N, Richards L, editors. Self-report methods of estimating drug use (NIDA Research Monograph No. 57, DHHS Publication No. ADM 85-1402). Washington, DC: U.S. Government Printing Office; 1985.
15. Gfroerer JC, Hughes AL. Collecting data on illicit drug use by phone. In: Turner CF, Lessler JT, Gfroerer JC, editors. Survey measurement of drug use: Methodological issues (DHHS Publication No. 92-1929). Washington, DC: Government Printing Office; 1992.
16. Gribble JN, Rogers SM, Miller HG, Turner CF. Measuring AIDS-related behaviors in older populations: Methodological issues. Research on Aging. 1998;20:798–821. doi: 10.1177/0164027598206009.
17. Groves R. Actors and questions in telephone and personal interview surveys. Public Opinion Quarterly. 1989;43:190–205.
18. Johnson W, DeLamater J. Response effects in sex surveys. Public Opinion Quarterly. 1976;40:165–181.
19. Jones EF, Forrest JD. Under-reporting of abortion in surveys of U.S. women: 1986–1988. Demography. 1992;29:113–126.
20. Joseph JG, Montgomery SB, Emmons CA, Kessler RC, Ostrow DG, Wortman CB, O’Brien K, Eller M, Eshleman S. Magnitude and determinants of behavioral risk reduction: Longitudinal analysis of a cohort at risk for AIDS. Psychological Health. 1987;1:73–96.
21. Kinsey SH, Thornberry JS, Carson CP, Duffer AP. Respondent preferences toward audio-CASI and how that affects data quality. Paper presented at the Conference of the American Association for Public Opinion Research; Fort Lauderdale, FL; 1995.
22. Krosnick J, Alwin D. An evaluation of a cognitive theory of response order effects in survey measurement. Public Opinion Quarterly. 1987;52:526–538.
23. Laumann E, Gagnon J, Michael R, Michaels S. Social organization of sexuality. Chicago: University of Chicago Press; 1994.
24. Lessler JT, Holt M. Using response protocols to identify problems in the U.S. Census long form. Proceedings of the American Statistical Association, Survey Research Methods Section; 1987. pp. 262–266.
25. London KA, Williams LB. Comparison of abortion under-reporting in an in-person interview and a self-administered questionnaire. Paper presented at the annual meeting of the Population Association of America; Toronto; 1990 May.
26. Marmor M, Laubenstein L, William DC, Friedman-Kien AE, Byrum RD, D’Onofrio S, Dubin N. Risk factors for Kaposi’s sarcoma in homosexual men. Lancet. 1982;1:1083–1087. doi: 10.1016/s0140-6736(82)92275-9.
27. Martin JL. The impact of AIDS on gay male sexual behavior patterns in New York City. American Journal of Public Health. 1987;77:578–581. doi: 10.2105/ajph.77.5.578.
28. McCusker J, Stoddard AM, Mayer KH, Zapka J, Morrison C, Saltzman SP. Effects of HIV antibody test knowledge on subsequent sexual behaviors in a cohort of homosexually active men. American Journal of Public Health. 1988;78:462–467. doi: 10.2105/ajph.78.4.462.
29. McKusick L, Horstman W, Coates TJ. AIDS and sexual behavior reported by gay men in San Francisco. American Journal of Public Health. 1985;75:493–496. doi: 10.2105/ajph.75.5.493.
30. Miller HG, Gribble JN, Mazade L, Rogers SM, Turner CF. Abortion and breast cancer risk: Fact or artifact? In: Stone A, editor. Science of self-report. Mahwah, NJ: Lawrence Erlbaum Associates; in press.
31. Miller HG, Turner CF, Helzlsouer KJ, Zenilman JM, Newcomb PA. Abortion, breast cancer, and reporting bias. Unpublished proposal; 1997.
32. Miller HG, Turner CF, Moses LE. AIDS: The second decade. Washington, DC: National Academy Press; 1990.
33. Millstein S, Irwin C. Acceptability of computer-acquired sexual histories in adolescent girls. The Journal of Pediatrics. 1983;103:815–819. doi: 10.1016/s0022-3476(83)80493-4.
34. National Center for Education Statistics. Adult literacy in America: A first look at the results of the National Adult Literacy Survey. Washington, DC: U.S. Department of Education; 1993.
35. O’Reilly J, Hubbard J, Lessler J, Biemer P, Turner CF. Audio and video computer-assisted self-interviewing: Preliminary tests of new technologies for data collection. Journal of Official Statistics. 1994;10:197–214.
36. Rogers SM, Miller HG, Forsyth BH, Smith TK, Turner CF. Audio-CASI: The impact of operational characteristics on data quality. ASA/AAPOR Joint Proceedings; Alexandria, VA; 1996.
37. Rogers SM, Miller HG, Turner CF. Effects of interview mode on bias in survey measurements of drug use: Do respondent characteristics make a difference? Substance Use and Misuse [formerly International Journal of Addictions]. 1998;33:2179–2200. doi: 10.3109/10826089809069820.
38. Rogers SM, Turner CF. Patterns of same-gender sexual contact among men in the U.S.A.: 1970–1990. The Journal of Sex Research. 1991;28:491–519.
39. Sonenstein FL, Pleck JH, Ku LC. Levels of sexual activity among adolescent males in the United States. Family Planning Perspectives. 1991;23:162–167.
40. Stall R. The prevention of HIV infection associated with drug and alcohol use during sexual activity. Advances in AIDS and Substance Abuse. 1988;7:73–88. doi: 10.1300/j251v07n02_07.
41. Stall R, McKusick L, Wiley J, Coates TJ, Ostrow DG. Alcohol and drug use during sexual activity and compliance with safe sex guidelines for AIDS: The AIDS Behavioral Research Project. Health Education Quarterly. 1986;13:359–371. doi: 10.1177/109019818601300407.
42. Thornberry OT, Massey JT. Correcting for undercoverage bias in random digit dialed national health surveys. Proceedings of the American Statistical Association, Survey Research Methods Section; 1978.
43. Thornberry OT, Massey JT. Trends in United States telephone coverage across time and subgroups. In: Groves RM, Biemer PP, Lyberg LE, Massey JT, Nicholls WL, Waksberg J, editors. Telephone survey methodology. New York: John Wiley and Sons; 1988.
44. Tourangeau R, Smith T. Asking sensitive questions: The impact of data collection mode, question format, and question context. Public Opinion Quarterly. 1996;60:275–304.
45. Turner CF. Why do surveys disagree? Some preliminary hypotheses and some disagreeable examples. In: Turner CF, Martin E, editors. Surveying subjective phenomena. New York: Russell Sage; 1985. pp. 159–214.
46. Turner CF, Danella R, Rogers S. Sexual behavior in the United States, 1930–1990: Trends and methodological problems. Sexually Transmitted Diseases. 1995;22:173–190. doi: 10.1097/00007435-199505000-00009.
47. Turner CF, Forsyth BH, O’Reilly J, Cooley PC, Smith TK, Rogers SM, Miller HG. Automated self-interviewing and the survey measurement of sensitive behaviors. In: Couper MP, Baker RP, Bethlehem J, Clark CZ, Martin J, Nicholls W, O’Reilly JM, editors. Computer-assisted survey information collection. New York: Wiley and Sons; 1998.
48. Turner CF, Ku L, Rogers SM, Lindberg LD, Pleck JH, Sonenstein FL. Adolescent sexual behavior, drug use, and violence: Increased reporting with computer survey technology. Science. 1998;280:867–873. doi: 10.1126/science.280.5365.867.
49. Turner CF, Ku L, Sonenstein FL, Pleck JH. Impact of audio-CASI on bias in reporting male-male sexual contacts: Preliminary results from the 1995 National Survey of Adolescent Males. In: Warnecke RB, editor. Health survey research methods: Conference proceedings (DHHS Publication No. PHS 96-1013). Hyattsville, MD: National Center for Health Statistics; 1996.
50. Turner CF, Lessler JT, Devore JW. Effects of mode of administration and wording on reporting of drug use. In: Turner CF, Lessler JT, Gfroerer JC, editors. Survey measurement of drug use: Methodological issues (DHHS Publication No. 92-1929). Washington, DC: Government Printing Office; 1992.
51. Turner CF, Lessler JT, Gfroerer JC. Improving measurements of drug use: Future directions for research and practice. In: Turner CF, Lessler JT, Gfroerer JC, editors. Survey measurement of drug use: Methodological issues (DHHS Publication No. 92-1929). Washington, DC: Government Printing Office; 1992.
52. Turner CF, Miller HG, Moses LE, editors. AIDS, sexual behavior, and intravenous drug use. Washington, DC: National Academy Press; 1989.
53. Turner CF, Miller HG, Rogers SM. Survey measurement of sexual behaviors: Problems and progress. In: Bancroft J, editor. Researching sexual behavior. Bloomington: Indiana University Press; 1997.
54. Turner CF, Miller HG, Smith TK, Cooley PC, Rogers SM. Telephone audio computer-assisted self-interviewing (T-ACASI) and survey measurements of sensitive behaviors: Preliminary results. In: Banks R, Fairgrieve J, Gerrard L, Orchard T, Payne C, Westlake A, editors. Survey and statistical computing 1996. Chesham, Bucks, UK: Association for Survey Computing; 1996.
55. Waterton JJ, Duffy JC. A comparison of computer interviewing techniques and traditional methods in the collection of self-report alcohol consumption data in a field survey. International Statistical Review. 1984;52:173–182.
56. Werking GS, Clayton RL. Enhancing the quality of time critical estimates through the use of mixed mode CATI/CASI collection. Proceedings of the Statistics Canada Symposium 90: Measurement and Improvement of Data Quality; 1990.
57. Werking GS, Clayton RL, Rosen R, Winter D. Conversion from mail to CATI in the current employment statistics survey. Proceedings of the American Statistical Association, Survey Research Methods Section; 1988.
58. Werking GS, Tupek A, Clayton R. CATI and touchtone self-response applications for establishment surveys. Journal of Official Statistics. 1988;4:349–362.
59. Winkelstein W Jr, Lyman DM, Padian NS. Sexual practices and risk of infection by the AIDS-associated retrovirus: The San Francisco Men’s Health Study. Journal of the American Medical Association. 1987;257:321–325.
