Abstract
Privacy, achieved through self-administered modes of interviewing, has long been assumed to be a prerequisite for obtaining unbiased responses to sexual identity questions, given their potentially sensitive nature. This study uses data collected as part of a split-ballot field test embedded in the National Health Interview Survey (NHIS) to examine the association between survey mode (computer-assisted personal interviewing (CAPI) versus audio computer-assisted self-interviewing (ACASI)) and sexual minority identity reporting. Bivariate and multivariate analyses tested for differences in sexual minority identity reporting and item nonresponse by survey mode, as well as for moderation of such differences by sociodemographic characteristics and the interviewing environment. No significant main effects of interview mode on sexual minority identity reporting or nonresponse were found. In subgroup analyses, two significant mode effects on sexual minority reporting emerged out of 35 comparisons, and one significant mode effect emerged for item nonresponse. We conclude that asking the NHIS sexual identity question using CAPI does not result in estimates that differ systematically and meaningfully from those produced using ACASI.
Keywords: Sexual orientation, mode of administration, question sensitivity, item nonresponse, field experiment
1. Introduction
In recent years, there has been a call for more research on the health of lesbian, gay, bisexual, and transgender (LGBT) persons. In 2011, the Institute of Medicine (IOM 2011) published a seminal report that assessed the overall state of science on sexual minority health and identified a number of gaps in the scientific literature on this population. It noted, for instance, that more research is needed on inequities in health care, and that such research depends on the collection of sexual orientation data in community, state, and national health surveys. In line with the IOM’s call for ongoing collection of sexual orientation data in federally funded surveys, Healthy People 2020, the federal initiative that sets ten-year national objectives for improving the health of Americans, set an explicit objective of increasing the number of population-based data collection systems that can be used to monitor LGBT health (Healthy People 2020). A number of federal surveys collect sexual orientation data, including the National Health Interview Survey (NHIS), the National Survey of Family Growth, and the National Health and Nutrition Examination Survey. The NHIS, with the largest sample size of those three, began collecting data on the sexual identity of adult respondents in 2013.
An important design feature of the NHIS is face-to-face (rather than telephone or mail-based) interviewing (FTFI). To collect data for the NHIS, an interviewer visits respondents in their homes and administers the survey questions aided by a laptop computer, otherwise referred to as computer-assisted personal interviewing (CAPI). (To convert some reluctant respondents and/or to complete missing portions of the interview, telephone follow-up is permissible.) There are several reasons for administering the NHIS face-to-face. Interviewers’ ability to address respondents’ questions and concerns about the survey can lead to higher cooperation rates (Groves et al. 2004), potentially reducing nonresponse bias in key survey statistics. Compared to other modes of administration, FTFI tends to produce higher response rates (Hox and De Leeuw 1994; Sykes and Collins 1988), lower item nonresponse rates (Brazier et al. 1992; De Leeuw and Van der Zouwen 1988), and longer verbal responses (De Leeuw and Van der Zouwen 1988; Sykes and Collins 1988). It also allows for longer, more complex interviews (Dialsingh 2008; Fowler 1993), and enables the collection of observational data on the part of the interviewers (Fowler 1993). During FTFI, interviewers can also assist in clarifying terms, probing, and motivating respondents to provide complete and accurate responses.
While FTFI offers many advantages over other modes of survey data collection, there are drawbacks to using it to collect sensitive information. A question is considered sensitive if it “raises concerns about disapproval or other consequences (such as legal sanctions) for reporting truthfully, or if the question itself is seen as an invasion of privacy” (Tourangeau and Smith 1996, 276). Asking such questions face-to-face may lead to greater nonresponse and deliberate misreporting than when such questions are included in self-administered formats.
Questions on sexual behavior, attraction, and identity, the three facets of sexual orientation, are generally considered sensitive. Take, for example, the following excerpt from a “best practices” document on asking sexual orientation survey questions: “Survey administrators need to be aware that (Lesbian, Gay and Bisexual, LGB) individuals are socially stigmatized, and disclosure of a gay, lesbian, or bisexual orientation (or same-sex sexual behavior or attraction) can have meaningful negative consequences for individuals with respect to workplace, family, and social outcomes” (Sexual Minority Assessment Research Team (SMART) 2009, 17). The report goes on to emphasize privacy as a guiding principle for collecting sexual orientation data and recommends self-administered modes of interviewing such as paper-and-pencil (PAPI), audio computer-assisted self-interviewing (ACASI), and telephone-ACASI (T-ACASI, also known as interactive voice response) (SMART 2009). However, there is little research evaluating these recommendations with regard to asking questions on sexual identity.
To address a gap in the scientific community’s knowledge about the impact of survey data collection mode on sexual identity reporting, we present results from a field test conducted with the NHIS. The primary goal of this field test was to inform selection of a data collection mode for fielding the sexual identity questions beginning with the 2013 NHIS. To this end, adult respondents participating in the field test were randomly assigned to receive a ten-minute battery of questions on sexual identity and other topics of differing levels of sensitivity (e.g., neighborhood attachment, mental health, financial worries, sleep, HIV testing) in either ACASI or CAPI (the standard mode of administration for the NHIS). The specific research questions we set out to answer and report on here include:
Do estimates of the prevalence of sexual minorities (i.e., gay/lesbian and bisexual) differ by whether sexual identity questions are asked via CAPI or ACASI? If so, does ACASI produce a higher estimate of sexual minorities, as suggested by the literature?
Does the impact of mode of administration on the reporting of sexual identity vary by subgroups defined by respondent sociodemographics and characteristics of the interviewing environment?
Do item nonresponse rates differ by mode? If so, are the rates lower in ACASI compared to CAPI?
Before addressing these questions, we briefly summarize the literature on the difficulties inherent in obtaining accurate answers in response to sensitive questions, and then detail the findings of research on mode differences in the collection of data on sexual behavior, sexual attraction and sexual identity. We then describe the NHIS field test and the statistical analyses designed to address our research questions. Following the results of our analyses, we conclude by discussing the implications of our findings for the survey collection of sexual identity data.
2. Literature
2.1. Asking Sensitive Questions
As noted above, Tourangeau and Smith (1996) define a sensitive question as one that “raises concerns about disapproval or other consequences (such as legal sanctions) for reporting truthfully or if the question itself is seen as an invasion of privacy” (Tourangeau and Smith 1996, 276). Of particular concern to survey researchers are systematic misreporting and item nonresponse, especially refusal responses (Bradburn 1983; Fowler 1995; Tourangeau et al. 2000) that may occur when respondents are confronted with sensitive questions. Beyond studies documenting high item nonresponse to income questions (Dahlhamer et al. 2003, Dahlhamer et al. 2004; Juster and Smith 1997; Moore et al. 1999), few studies have formally addressed the link between question sensitivity and item nonresponse. Shoemaker et al. (2002) had students rate the sensitivity of survey questions and found that question sensitivity was positively related to item refusals, while Tourangeau and Yan (2007) identified what appeared to be a positive relationship between question sensitivity and item nonresponse, although the authors noted that a formal measure of sensitivity was not used.
With regard to misreporting, it has been demonstrated that asking sensitive questions can and does elicit systematic under- or over-reporting on a range of topics including abortion (Fu et al. 1998), substance use (Aquilino 1994; Gfroerer and Hughes 1992; Turner et al. 1992), and voter turnout (Bernstein et al. 2001; McDonald 2003). Tourangeau and Yan (2007) concluded that misreporting about sensitive topics is fairly common in surveys, that the extent of misreporting is contingent on whether the respondent has anything embarrassing to report, and that the level of misreporting is responsive to certain survey design features. They also conclude that misreporting is a motivated process in which respondents alter their responses to avoid embarrassing themselves, particularly in the presence of an interviewer or other people. Hence, survey design features found to be effective in reducing motivated misreporting include self-administered data collection modes and providing respondents with a private setting in which to answer (Tourangeau and Smith 1996; Tourangeau and Yan 2007). Few empirical analyses report differences in responses to sensitive questions across different types of self-administered modes (Couper et al. 2003; Tourangeau et al. 2000), but removal of the interviewer from the interview setting consistently reduces misreporting on sensitive questions (Tourangeau and Yan 2007).
Consistent with “best practice” documents and research on the impacts of sensitive questions, questions about sexual identity and the other facets of sexual orientation (sexual attraction and sexual behavior) may be best suited for private survey settings and self-administered modes of data collection, such as computer-assisted self-interviewing (CASI), ACASI, and T-ACASI. In the next section, we review the existing literature on mode effects when asking about sexual orientation.
2.2. Mode Effects with Questions on Sexual Orientation
Sexual orientation consists of three distinct constructs: sexual behavior, sexual attraction, and sexual identity. Although heterosexuality is indeed a sexual orientation, in this review we consider only studies that have examined mode differences in reporting of sexual minority identities, attractions, and behaviors. In addition, this review focuses on those whose sexual minority status is defined either by their identity as a gay/lesbian or bisexual person, or by their attraction to or sexual behavior with persons of the same sex. Finally, we focus exclusively on studies comparing self-administered to interviewer-administered modes of data collection, as these are the most pertinent to our research.
A small number of studies have examined mode effects when asking about sexual identity. Midanik and Greenfield (2008) compared responses to questions on sexual identity, sexual behavior, and sexual and physical abuse between T-ACASI and CATI with the 2005 National Alcohol Survey and found that a significantly greater percentage of adults answering with T-ACASI identified as bisexual or homosexual. However, in multivariate analyses, significant differences held only for adults aged 40 or older. Interestingly, no differences in reporting of same-sex sexual behavior by mode were identified. Among patients at a sexually transmitted disease (STD) clinic, a significantly greater percentage identified as gay, lesbian, or bisexual when answering in ACASI compared to FTFI (Ghanem et al. 2005). Finally, field testing of a sexual identity question for inclusion in Office for National Statistics (United Kingdom) surveys revealed CASI to produce higher (but not statistically significantly different) estimates of people with a sexual minority identity (gay/lesbian or bisexual) than CAPI. In three CASI trials, prevalence estimates of sexual minorities ranged from 1.4% to 2.5%, with a combined estimate of 1.9%. (In the third trial, the interviewer could administer the sexual identity question in CAPI if the respondent did not want to use the laptop to enter their answers to the sexual identity question and other sensitive items.) For the fourth and final trial, CAPI produced an estimate of 1.6% (Malagoda and Traynor 2008).
Studies examining reporting of sexual attraction by mode found a similar pattern. For example, Caltabiano and Dalla-Zuanna (2012) found that a higher percentage of respondents to a self-administered questionnaire (SAQ) reported same-sex attraction compared to those who answered by CATI. In addition, and in one of the few studies examining item nonresponse, the authors found that CATI elicited considerably more refusal responses to the same-sex attraction question than the SAQ. Both reported mode effects held in multivariate analyses. Based on data collected as part of the National STD and Behavior Measurement Experiment (NSBME), Villarroel et al. (2006) found more reporting of same-gender attraction (as well as same-gender sexual experiences and same-gender genital contact) among respondents answering by T-ACASI compared to respondents answering by CATI, effects that held in multivariate analyses. Item nonresponse rates to the gender attraction question, however, did not differ significantly by mode.
A greater number of studies exploring mode effects with sexual orientation reporting have focused on same-sex sexual behaviors. Other than the Villarroel et al. (2006) and Midanik and Greenfield (2008) studies reported earlier, these studies have relied on clinic or community samples. Simoes et al. (2006) explored mode differences in sexual behavior reporting among a sample of adults seeking treatment for drug and alcohol abuse. Controlling for age, education, race, and marital status, they found that ACASI elicited more reports of male-male sexual behavior (men having sex with men, or MSM) than FTFI. Similar results were observed among a sample of syringe-exchange program participants, with ACASI producing higher reported rates of same-sex sexual behavior than FTFI (Des Jarlais et al. 1999). Likewise, among patients of an STD clinic, Kurth et al. (2004) found that ACASI elicited significantly more reporting of same-sex sexual encounters among both men and women compared to a clinician-administered health interview. ACASI also produced a lower item nonresponse rate to the sexual behavior questions than the clinician interview, although the authors noted that this difference may have been due to factors other than item sensitivity (e.g., data entry error). Finally, Potdar and Koenig (2005) explored mode effects among two contrasting samples of urban men aged 18–22 from India: college students and slum residents not attending college. Among the college students, SAQ and ACASI both produced more reports of same-sex oral sex than FTFI. In addition, ACASI produced a significantly higher percentage of 2+ same-sex partner reports than did FTFI. Among the slum residents, ACASI elicited significantly higher reports of same-sex oral sex compared to FTFI, although FTFI elicited a significantly higher percentage of respondents reporting same-sex anal intercourse compared to ACASI.
Other studies using clinic and community samples found no differences in same-sex sexual behavior reporting by mode. A study of patients aged 15–39 at an urban STD clinic found no statistically significant differences in the percentage of respondents reporting same-sex sexual experiences in ACASI versus FTFI (Rogers et al. 2005). Similarly, a study of patients at an Australian sexual health clinic found no difference in the percentage reporting same-sex sexual behavior nor in the mean number of same-sex sexual partners reported across CASI and FTFI (Tideman et al. 2007). A study of perinatally HIV-exposed youth aged 9–16 attending an urban medical clinic identified no mode differences in responses to questions about same-sex sexual behavior (Dolezal et al. 2012). Furthermore, Jaya et al. (2008) found no mode differences in reports of same-sex sexual intercourse among economically disadvantaged youth in urban India.
In sum, research on the impact of survey mode on the reporting of sexual orientation has generally found that self-administered modes such as CASI, ACASI, and T-ACASI elicit more reports of gay/lesbian and bisexual self-identities, same-sex and bisexual sexual attraction, and, to a lesser extent, more reports of same-sex sexual behaviors than interviewer-administered modes (see Table 1). When significant effects have not been identified, the trend is generally toward greater reporting in the self-administered modes. However, many of the studies utilized very small, specialized, or international samples (e.g., clinic patients, youth in India), potentially limiting the generalizability of findings to large-scale, U.S. data collections. In addition, only a handful of these studies have looked at mode effects with regard to item nonresponse rates to questions on sexual orientation, with results being somewhat mixed. In the next section we describe the field test designed to address the question of whether ACASI would yield a greater percentage of adults identifying as a sexual minority than CAPI in the NHIS, a large-scale, general purpose, nationally representative health survey.
Table 1.
Summary of studies examining mode effects with questions on sexual orientation.
| Study | Measure | Population | Effect (%) | Item nonresponse (%) |
|---|---|---|---|---|
| Sexual attraction | ||||
| Caltabiano and Dalla-Zuanna (2012) | Same sex attraction | Weighted convenience sample (SAQ)/representative national sample (CATI), aged 18–69 in Italy (n = 3058 SAQ, 8285 CATI) | SAQ > CATI (6.9 vs. 3.0) | SAQ < CATI (1.3 vs. 11.6) |
| Villarroel et al. (2006) | Same sex attraction | U.S. nationally representative sample, and Baltimore representative sample, aged 18–45 (n = 1543 US, 744 Baltimore) | T-ACASI > CATI (17.8 vs. 12.8) | T-ACASI = CATI (1.5 vs. 1.2) |
| Sexual identity | ||||
| Midanik and Greenfield (2008) | Lesbian/gay/bisexual identity | U.S. nationally representative sample aged 18+ (n = 563 T-ACASI, 559 CATI) | T-ACASI > CATI among adults aged 40+ (bisexual: 2.5 vs. 0.6; homosexual: 1.9 vs. 0.9) | |
| Ghanem et al. (2005) | Lesbian/gay/bisexual identity | Baltimore STD clinic respondents aged 18–65 (n = 671 both modes) | ACASI > FTFI (3.0 vs. 1.0) | |
| Malagoda and Traynor (2008) | Lesbian/gay/bisexual identity | Nationally representative sample aged 16+ in Britain (n = 6422 CASI, 3429 CAPI) | CASI = CAPI (1.9 vs. 1.6) | |
| Sexual behavior | ||||
| Nationally representative samples | ||||
| Villarroel et al. (2006) | Same sex sexual behavior | U.S. nationally representative sample, and Baltimore representative sample, aged 18–45 (n = 1543 US, 744 Baltimore) | T-ACASI > CATI (same-gender sexual experiences: 14.2 vs. 9.1; same-gender genital contact: 10.3 vs. 7.0) | |
| Midanik and Greenfield (2008) | Same sex sexual behavior | U.S. nationally representative sample aged 18+ (n = 563 T-ACASI, 559 CATI) | T-ACASI = CATI (both genders: 6.9 vs. 4.7; same gender: 1.0 vs. 1.5) | |
| Clinic and community samples | ||||
| Simoes et al. (2006) | Male-male sexual behavior | Men aged 18+ seeking treatment for drug and alcohol abuse in Brazil (n = 367 ACASI and 368 FTFI) | ACASI > FTFI (12.6 vs. 5.7) | |
| Des Jarlais et al. (1999) | Same sex sexual behavior | Participants in syringe exchange programs in 4 U.S. cities (n = 724 ACASI and 757 FTFI) | ACASI > FTFI (10.0 vs. 5.0) | |
| Kurth et al. (2004) | Same sex sexual behavior | Patients ages 14+ at an urban, public STD clinic in the U.S. (n = 609 in both modes) | ACASI > clinician interview (men: 36.9 vs. 28.7; women: 19.6 vs. 11.5) | ACASI < clinician interview (men: 2.5 vs. 7.3; women: 0.7 vs. 6.2) |
| Rogers et al. (2005) | Same sex sexual behavior | Patients ages 15–39 at an urban U.S. STD clinic (n = 677 ACASI and 673 FTFI) | ACASI = FTFI (men: 10.1 vs. 8.5; women: 26.6 vs. 21.5) | |
| Tideman et al. (2007) | Same sex sexual behavior and median number of same sex partners | Patients at a sexual health clinic in Melbourne Australia (n = 255 CASI and 356 FTFI) | CASI = FTFI (men: 37.0 vs. 34.0; women: 11.0 vs. 7.0) | |
| Dolezal et al. (2012) | Ever any same sex sexual behavior | Urban, ethnic-minority, perinatally HIV-exposed medical clinic patients ages 9–16 in New York City (n = 135 ACASI and 139 FTFI) | ACASI = FTFI (baseline: 4.0 vs. 4.0; follow-up: 5.0 vs 11.0) | |
| Potdar and Koenig (2005) | Male-male oral sex | Unmarried male college students ages 18–22 in India (n = 300 ACASI, 300 SAQ and 300 FTFI) | ACASI, SAQ > FTFI (5.0 vs. 2.3 vs. 0.7) | |
| | 2+ same sex partners | | ACASI > FTFI (8.3 vs. 4.3) | |
| | Male-male oral sex | Unmarried slum residents not attending college ages 18–22 in India (n = 300 ACASI and 300 FTFI) | ACASI > FTFI (6.0 vs. 2.0) | |
| | Same-sex anal intercourse | | FTFI > ACASI (7.3 vs. 4.3) | |
| Jaya et al. (2008) | Ever sexual intercourse with someone of the same sex | 15–19 year old economically disadvantaged residents of a neighborhood in Delhi, India (n = 1058 FTFI and 523 ACASI) | FTFI = ACASI (boys: 6.6 vs. 6.2; girls: 0.4 vs. 1.4) | |
Note. SAQ = self-administered questionnaire; CATI = computer-assisted telephone interviewing; ACASI = audio computer-assisted self-interviewing; T-ACASI = telephone audio computer-assisted self-interviewing; FTFI = face-to-face interviewing; CASI = computer-assisted self-interviewing; CAPI = computer-assisted personal interviewing.
3. Data and Methods
3.1. National Health Interview Survey
The National Health Interview Survey (NHIS) is a multi-purpose survey of the health of the civilian, noninstitutionalized household population of the United States. Conducted by the National Center for Health Statistics (NCHS), the survey has been in the field continuously since 1957. Utilizing a multistage, clustered sample design, the NHIS produces nationally representative data on health insurance coverage, health care access and utilization, health status, health behaviors, and other health-related topics. The data are collected by trained interviewers with the U.S. Census Bureau using CAPI. Each year, interviews are conducted in roughly 35,000 households, yielding data on approximately 85,000–100,000 persons. Most interviews are conducted face-to-face in or immediately outside of respondents’ homes.
The core survey instrument contains four main components: Household Composition, Family, Sample Child, and Sample Adult. For the household composition module, a household respondent provides basic sociodemographic information on all members of the household. Within each family, the family module is completed by a family respondent who provides health information on each member of the family. Additional health information is subsequently collected from the parent or guardian of one randomly selected child under age 18 (the “sample child”), and from one randomly selected adult (the “sample adult”) aged 18 years or older. For the field test behind the analyses presented here, the ACASI module was located toward the end of the sample adult interview.
3.2. Description of the Field Test
Implemented between August 1 and October 15, 2012, the field test had three primary goals: 1) to test the ACASI instrument using normal interviewing protocols with a nationally representative sample, 2) to evaluate response rates for the newly-developed NHIS sexual identity question and the effect of adding the sexual identity question on response to the NHIS, and 3) to compare estimates of sexual identity and sexual minority status between ACASI and CAPI. The test was designed to achieve a final minimum sample size of 5,000 completed interviews.
3.2.1. Sample Design
To achieve a nationally representative sample for the test, previously worked and unworked sample addresses from the 2006–2010 NHIS were utilized. To facilitate mode comparisons of sexual identity estimates, a split-ballot experiment was conducted in which 60% of sample adults were randomly assigned to receive the sexual identity questions by ACASI and 40% by CAPI. To ensure that both the ACASI and the CAPI samples were nationally representative, random assignment of mode took place at the time of sample formation. No information identifying the assigned mode was present on a case, so interviewers could not determine the mode beforehand.
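As a rough illustration of this design feature, the sketch below assigns a hypothetical address list to the two modes at the point of sample formation. It assumes a simple independent 60/40 random draw; the actual Census Bureau assignment procedure (e.g., any stratification or systematic selection within the sample) is not described here.

```python
import numpy as np

# Minimal sketch: assign each sampled address to a mode before fieldwork begins,
# so interviewers cannot know the mode in advance. The address count and seed
# are arbitrary; a simple Bernoulli(0.6) draw stands in for the actual procedure.
rng = np.random.default_rng(seed=2012)
n_addresses = 10_000
assigned_mode = np.where(rng.random(n_addresses) < 0.60, "ACASI", "CAPI")

print(round((assigned_mode == "ACASI").mean(), 3))  # approximately 0.60
```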
3.2.2. Field Test Implementation
After receiving a self-study and one-day classroom training on ACASI and the purposes of the test, roughly 475 U.S. Census Bureau interviewers were assigned caseloads to complete over a two-month period. Interviewers were asked to work these cases as they would their regular NHIS cases. Normal NHIS interviewing protocols were in place, including refusal conversion and telephone follow-up to complete missing portions of the interview. Interviewers were given permission to complete the module including the sexual identity question by telephone if it was not possible to obtain the data otherwise. This applied to both the CAPI and ACASI paths. While this placed constraints on our ability to isolate the effects of mode, telephone follow-up is and will continue to be a regular part of NHIS interviewing. Hence, it was decided that the field test should reflect current and future interviewing procedures rather than produce a true test of mode effects. In addition to telephone follow-up, interviewers were allowed to conduct ACASI cases by CAPI if respondents were reluctant to use the computer and would otherwise break off the interview. Roughly 13% of the interviews on each path were completed primarily by telephone. An additional 6% of interviews on the ACASI path were completed in CAPI.
At the conclusion of the test, 5,445 interviews had been completed at least to the beginning of the sexual identity module, exceeding the initial goal of 5,000. Of these, 3,210 sample adults were randomly assigned to ACASI and 2,235 to CAPI. The final family response rate for the field test was 77.3%. The final sample adult response rates were 64.9% for the CAPI path and 64.0% for the ACASI path. Overall, the field test achieved response rates that were slightly higher than the final response rates for the 2012 NHIS (family: 76.8%; sample adult: 61.2%).
3.2.3. Design of the ACASI Module
Because the NHIS is a general health survey with a diverse array of respondents, respondents with little to no computer experience and/or low levels of literacy would still need to be able to complete the ACASI module. Therefore, a simple three-key interface was developed. When the respondent was presented with a question, he/she would press the Space bar to scroll through the available response options, with a circle appearing around the currently selected response. Once the desired answer was circled, he/she would press the Enter key to select and retain that answer. When a respondent wanted to back up to review or change a previous answer, he/she would use the Tab key. Audio recordings of the question and response options automatically played when each question appeared on the screen. Recordings and question text were available in either English or Spanish.
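The behavior of the three-key interface can be summarized in a short sketch. The code below is a minimal illustration of the cycling logic described above, not the actual field-test instrument; the function name, key labels, and the assumption that the first option starts out circled are ours.

```python
def answer_question(options, get_key):
    """Sketch of the three-key ACASI interface: Space cycles the circle through
    the response options, Enter retains the circled option, and Tab signals a
    request to back up to the previous question."""
    idx = 0  # assume the first response option starts out circled
    while True:
        key = get_key()
        if key == "SPACE":
            idx = (idx + 1) % len(options)  # move the circle to the next option
        elif key == "ENTER":
            return options[idx]             # select and retain the circled answer
        elif key == "TAB":
            return None                     # caller returns to the previous question

# Simulated key presses: two presses of Space circle the third option, Enter selects it.
keys = iter(["SPACE", "SPACE", "ENTER"])
print(answer_question(["Option A", "Option B", "Option C"], lambda: next(keys)))  # Option C
```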
3.2.4. Survey Instrument
The sexual identity questions were included in the Adult Selected Items (ASI) section of the Sample Adult interview. Both the CAPI and ACASI versions of the ASI section were available in English and Spanish. Since the questions were not translated into other languages, interviewers were asked to skip the ASI section (both ACASI and CAPI) if the respondent was not comfortable answering in either English or Spanish. The ASI section appeared toward the end of the Sample Adult interview and followed questions on access to health care and health care utilization. The section began with questions on computer use, satisfaction with health care, and neighborhood tenure and attachment. The sexual identity questions then followed. The remainder of the module consisted of questions on financial worries, sleep, mental health, and HIV testing.
In CAPI, interviewers proceeded seamlessly from the prior section into the ASI module. For ACASI, interviewers explained to respondents that they would complete the next set of questions on their own. Respondents were asked in which language, English or Spanish, they would like to complete the questions. The interviewer then plugged headphones into the computer, turned the computer to the respondent, and proceeded to give a short tutorial on how to use the keyboard to enter responses and advance to the next question. Respondents were then asked to don the headphones and begin. At the outset of the ACASI module, the interviewer’s instructions were reinforced with a short set of practice questions. Throughout, respondents could wear the headphones or simply read the questions. Either way, they were asked to leave the headphones plugged in so that the audio recordings would not play aloud. Once the respondent completed the questions, an exit screen appeared and asked them to return the laptop computer to the interviewer.
3.3. The Sexual Identity Question
The development and testing of a sexual identity question was an extensive effort carried out over an 11-year period. A total of 377 in-depth cognitive interviews were conducted by the NCHS Questionnaire Design Research Laboratory to better understand the interpretive and response process patterns people use to answer questions on sexual identity. A thorough description of the process is beyond the scope of this article, but readers may refer to the cognitive testing report by Miller and Ryan (2011). The resulting question used with the ACASI module read as follows:
Do you think of yourself as:
Gay
Straight, that is, not gay
Bisexual
Something else
I don’t know the answer
Female ACASI respondents received a version where the first response option read “Gay or lesbian” and the second response option read “Straight, that is, not gay or lesbian.” Although the goal was to keep question wording consistent across modes, some minor revisions were necessary in CAPI. To provide as much privacy as possible during face-to-face administration, the decision was made to use a flashcard listing the sexual identity response categories. To accommodate the use of a flashcard, the wording of the question stem was slightly different in CAPI compared to ACASI. When the main sexual identity question was reached, the interviewer would hand the flashcard to the respondent and read the following text: “Which of the following best represents how you think of yourself?” Looking at the flashcard, the respondent was asked to report the number associated with the most appropriate answer. The response categories on the flashcard were identical to those that appeared on the computer screen for the ACASI respondents, with separate flashcards for male and female respondents. Respondents who answered gay, lesbian, or bisexual were considered to be sexual minorities.
3.4. Other Measures
For all mode comparisons of sexual minority estimates and item nonresponse, results are presented overall and for a select set of respondent sociodemographic and interviewing environment characteristics. Sociodemographic measures include age (18–44 versus 45 and older); sex; race and ethnicity (non-Hispanic white versus other); education (less than a high school diploma/General Educational Development high school equivalency diploma (GED) versus high school diploma/GED and higher); employment status (working versus not working); marital status (never married vs. other); reported health status (excellent/very good health versus poor/fair/good health); whether or not the respondent has a functional limitation; total family income from the prior calendar year (less than USD50,000 versus USD50,000 or more); whether or not the residence is owned/being bought or rented/some other arrangement; whether or not the residence is located in the central city of a metropolitan statistical area (MSA); and whether or not the residence is located in the West region. Interviewing environment measures included whether or not other family members aged 17 or older were present during the entire interview (including the ASI section) (yes or unknown); the number of contact attempts required to complete the interview (a commonly used measure to characterize the difficulty of the case; 1–2, 3–4, 5 or more attempts); whether or not the case was re-assigned to a different interviewer; whether or not householders expressed time constraints and/or privacy-related concerns prior to or during the interview; and location of the interview (inside the home, outside the home). Interviews completed by telephone were excluded from the “location of the interview” measure.
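For readers who want to see how such measures translate into an analytic file, the sketch below dichotomizes a few of them. The column names and category labels are hypothetical; the actual NHIS field-test variable names and codes differ.

```python
import pandas as pd

# Illustrative recode of a respondent-level data frame into a subset of the
# dichotomous analytic measures described above (hypothetical column names).
def build_analysis_measures(df: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=df.index)
    out["age_18_44"] = (df["age"] <= 44).astype(int)
    out["less_than_hs"] = (df["education"] == "Less than high school/GED").astype(int)
    out["never_married"] = (df["marital_status"] == "Never married").astype(int)
    out["income_50k_plus"] = (df["family_income"] >= 50_000).astype(int)
    out["west_region"] = (df["region"] == "West").astype(int)
    # Contact attempts grouped as 1-2, 3-4, and 5 or more.
    out["contact_attempts_grp"] = pd.cut(
        df["contact_attempts"], bins=[0, 2, 4, float("inf")], labels=["1-2", "3-4", "5+"]
    )
    return out
```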
3.5. Statistical Procedures
Consistent with an intent-to-treat analysis, sample adult interviews randomly assigned to one mode but conducted in another were retained in their assigned mode group. Analyses performed with these cases removed did not substantially alter the results presented here.
To assess whether randomization had been successfully achieved, we compared the CAPI and ACASI groups on a set of sociodemographic and interviewing environment characteristics. Next, we compared overall estimates of sexual minority status by administration mode, including crude and adjusted odds ratios from logistic regressions. Covariates included in the multiple logistic regressions, listed in the footnote under Table 4, were found to be significantly associated with sexual minority status (P < 0.15) in bivariate analyses.
Table 4.
Percentage of sample adults identifying as a sexual minority (gay/lesbian or bisexual) by interview mode: NHIS sexual identity field test, 2012 (weighted).
| | ACASI n | ACASI % | ACASI 95% CI | CAPI n | CAPI % | CAPI 95% CI | OR (ACASI vs. CAPI) | 95% CI | AOR^a | 95% CI |
|---|---|---|---|---|---|---|---|---|---|---|
| Identified as a sexual minority | 3,031 | 2.2 | 1.56, 2.84 | 2,162 | 2.4 | 1.59, 3.14 | 0.93 | 0.59, 1.45 | 0.92 | 0.59, 1.44 |
Note. CI = confidence interval; OR = odds ratio; AOR = adjusted odds ratio.
^a The following covariates were included in the multivariate logistic regression: age, gender, education, marital status, reported health status, total family income from the prior calendar year, whether the residence is owned/being bought or rented/some other arrangement, whether or not the residence is in the West region, whether or not the residence is in the central city of an MSA, total count of contact attempts on the household, whether or not householder(s) expressed privacy or trust concerns, and whether or not householder(s) expressed time constraints.
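As a concrete, simplified illustration of the models behind Table 4, the sketch below fits crude and covariate-adjusted logistic regressions of sexual minority status on interview mode. It uses synthetic stand-in data and ordinary maximum likelihood, ignoring the sample weights and complex survey design that SUDAAN handles in the published estimates; the variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per sample adult with a 0/1 sexual minority
# indicator, the assigned mode, and two example covariates.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "mode": rng.choice(["ACASI", "CAPI"], size=n, p=[0.6, 0.4]),
    "sexual_minority": rng.binomial(1, 0.025, size=n),
    "age_18_44": rng.binomial(1, 0.5, size=n),
    "income_50k_plus": rng.binomial(1, 0.45, size=n),
})

# Crude model: interview mode only, with CAPI as the reference category.
crude = smf.logit(
    "sexual_minority ~ C(mode, Treatment(reference='CAPI'))", data=df
).fit(disp=False)

# Adjusted model: mode plus covariates (only two shown here for brevity).
adjusted = smf.logit(
    "sexual_minority ~ C(mode, Treatment(reference='CAPI')) + age_18_44 + income_50k_plus",
    data=df,
).fit(disp=False)

# Exponentiated coefficients give the odds ratios and 95% confidence intervals.
print(np.exp(crude.params), np.exp(crude.conf_int()), sep="\n")
print(np.exp(adjusted.params), np.exp(adjusted.conf_int()), sep="\n")
```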
We then compared estimates of sexual minority status by mode within sociodemographic subgroups and types of interviewing environment. For each subgroup, we present the crude odds ratio for mode from a bivariate logistic regression of sexual minority status. If a significant association was identified within a subgroup, we then estimated a logistic regression model with sexual minority status as the dependent variable and mode, the sociodemographic or interviewing environment measure under analysis (e.g., age), and an interaction term for the two as covariates. This allowed us to further assess whether the impact of mode on sexual minority reporting was homogeneous across subpopulations or whether certain subpopulations were particularly sensitive to mode.
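A minimal sketch of this moderation test follows, again on synthetic data with hypothetical variable names: the product term between mode and the subgroup measure carries the test of whether the mode effect differs across subgroups.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; 'income_50k_plus' plays the role of the subgroup
# measure under analysis (hypothetical variable names).
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "mode": rng.choice(["ACASI", "CAPI"], size=n, p=[0.6, 0.4]),
    "sexual_minority": rng.binomial(1, 0.025, size=n),
    "income_50k_plus": rng.binomial(1, 0.45, size=n),
})

# 'a * b' in the formula expands to both main effects plus their interaction.
# A significant interaction coefficient indicates that the effect of mode on
# sexual minority reporting is not homogeneous across the income subgroups.
interaction_model = smf.logit(
    "sexual_minority ~ C(mode, Treatment(reference='CAPI')) * income_50k_plus",
    data=df,
).fit(disp=False)
print(interaction_model.summary())
```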
We next compared sexual identity item nonresponse rates by mode, where item nonresponse included not only refusals but also “something else” and “I don’t know the answer” responses. For ACASI, “refused” included respondents who failed to provide a response to the question. We present crude and adjusted odds ratios from logistic regression models. Covariates included in the multiple logistic regressions, listed in the footnote under Table 7, were found to be significantly associated with sexual identity nonresponse (P < 0.15) in bivariate analyses.
Table 7.
Item nonresponse rate to sexual identity question by interview mode: NHIS sexual identity field test, 2012 (weighted).
| | ACASI n | ACASI % | ACASI 95% CI | CAPI n | CAPI % | CAPI 95% CI | OR (ACASI vs. CAPI) | 95% CI | AOR^a | 95% CI |
|---|---|---|---|---|---|---|---|---|---|---|
| Item nonresponse rate to sexual identity question | 3,150 | 2.7 | 2.00, 3.42 | 2,223 | 2.3 | 1.40, 3.12 | 1.20 | 0.77, 1.89 | 1.31 | 0.83, 2.07 |
Note. CI = confidence interval; OR = odds ratio; AOR = adjusted odds ratio.
^a The following covariates were included in the multivariate logistic regression: age, sex, race/ethnicity, education, employment status, marital status, reported health status, family income, own/rent resident, MSA status, region, presence of others, location of interview, number of contact attempts, case reassigned to different interviewer, householder(s) expressed privacy concerns, and householder(s) expressed time-related concerns.
Finally, we compared sexual identity item nonresponse rates by mode within sociodemographic subgroups and types of interviewing environment. We again present the crude odds ratio for mode from a bivariate logistic regression of sexual identity nonresponse. If a significant association between mode and item nonresponse was identified within a subgroup, we estimated a logistic regression model with item nonresponse as the dependent variable and mode, the sociodemographic/interviewing environment measure under analysis (e.g., number of contact attempts on the household), and an interaction term for the two as covariates.
We present 95% confidence intervals for all estimates. CAPI is the reference category for all mode odds ratios. All analyses were performed using SAS-callable SUDAAN version 11.0.1 to account for the complex sample design of the NHIS. Finally, to mimic normal NHIS production procedures and to ensure that estimates from each data collection mode were generalizable to the U.S. adult, civilian noninstitutionalized population aged ≥ 18 years, all analyses (unless otherwise noted) used final sample adult weights adjusted for nonresponse and calibrated to population control totals.
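For orientation, the sketch below shows the general form of a weighted prevalence estimate with an approximate confidence interval. It is deliberately simplified: it uses Kish's effective sample size in place of the Taylor series linearization that SUDAAN applies to the stratified, clustered NHIS design, and the data are synthetic.

```python
import numpy as np

# Simplified weighted prevalence estimate with an approximate 95% interval.
rng = np.random.default_rng(2)
y = rng.binomial(1, 0.022, size=3000)      # 0/1 sexual minority indicator (synthetic)
w = rng.uniform(0.5, 3.0, size=3000)       # final sample adult weights (synthetic)

p_hat = np.average(y, weights=w)           # weighted prevalence
n_eff = w.sum() ** 2 / np.sum(w ** 2)      # Kish effective sample size
se = np.sqrt(p_hat * (1 - p_hat) / n_eff)  # rougher than a design-based standard error
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)
print(round(100 * p_hat, 1), tuple(round(100 * x, 2) for x in ci))
```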
4. Results
4.1. Sample Equivalency
To determine whether the field test provides a valid assessment of mode differences, if any, in sexual minority reporting and item nonresponse, we compared the two mode groups on 13 respondent sociodemographic and social environmental measures (see Table 2). As shown, the distributions were similar by mode, with no significant differences being identified. The similar sample compositions by mode bolster our confidence in the subsequent results.
Table 2.
Characteristics of sample adults who reached the sexual identity questions by interview mode: NHIS sexual identity field test, 2012 (weighted).
| | ACASI % | ACASI 95% CI | CAPI % | CAPI 95% CI | χ² p-value |
|---|---|---|---|---|---|
| Gender | | | | | 0.87 |
| Male | 48.0 | 45.83, 50.24 | 48.3 | 45.98, 50.59 | |
| Female | 52.0 | 49.76, 54.17 | 51.7 | 49.41, 54.02 | |
| Age | | | | | 0.85 |
| 18–24 | 12.9 | 11.10, 14.73 | 12.3 | 10.07, 14.58 | |
| 25–44 | 34.5 | 31.67, 37.40 | 35.5 | 32.51, 38.46 | |
| 45–64 | 35.4 | 32.93, 37.86 | 34.6 | 32.15, 36.99 | |
| 65+ | 17.2 | 14.96, 19.35 | 17.6 | 15.43, 19.80 | |
| Race/ethnicity | | | | | 0.87 |
| Hispanic | 14.9 | 11.40, 18.48 | 14.8 | 11.59, 18.09 | |
| Non-Hispanic white | 66.6 | 61.20, 72.05 | 66.3 | 60.82, 71.74 | |
| Non-Hispanic black | 11.6 | 8.75, 14.41 | 11.4 | 8.65, 14.12 | |
| Non-Hispanic other | 6.9 | 4.35, 9.35 | 7.5 | 5.10, 9.90 | |
| Education | | | | | 0.77 |
| Less than high school | 14.8 | 12.52, 17.00 | 15.1 | 12.80, 17.42 | |
| High school/GED | 27.2 | 24.81, 29.64 | 26.8 | 24.06, 29.45 | |
| Some college | 30.7 | 28.24, 33.07 | 32.1 | 28.95, 35.22 | |
| Bachelor’s+ | 27.4 | 23.64, 31.08 | 26.1 | 22.15, 29.95 | |
| Employment status | | | | | 0.07 |
| Working | 59.7 | 56.81, 62.64 | 62.4 | 59.43, 65.31 | |
| Not working | 40.3 | 37.36, 43.20 | 37.6 | 34.69, 40.57 | |
| Marital status | | | | | 0.36 |
| Never married | 20.8 | 18.42, 23.24 | 22.0 | 19.28, 24.74 | |
| Married or cohabiting | 61.7 | 59.07, 64.32 | 60.0 | 57.07, 63.00 | |
| Divorced | 10.8 | 9.52, 12.16 | 12.0 | 10.35, 13.56 | |
| Widowed | 6.6 | 5.53, 7.74 | 6.0 | 4.98, 7.02 | |
| Reported health status | | | | | 0.70 |
| Excellent/very good | 58.5 | 55.83, 61.12 | 59.8 | 56.80, 62.80 | |
| Good | 27.9 | 25.86, 29.86 | 27.2 | 24.90, 29.46 | |
| Poor/fair | 13.7 | 11.89, 15.44 | 13.0 | 10.95, 15.10 | |
| Family income | | | | | 0.81 |
| < USD20,000 | 15.9 | 13.88, 17.92 | 16.8 | 14.38, 19.23 | |
| USD20,000 – < USD50,000 | 29.5 | 26.54, 32.53 | 28.3 | 25.81, 30.89 | |
| USD50,000 – < USD100,000 | 28.3 | 25.65, 30.98 | 28.6 | 25.86, 31.34 | |
| ≥ USD100,000 | 18.4 | 14.88, 21.99 | 17.8 | 14.55, 20.99 | |
| Unknown | 7.8 | 6.45, 9.18 | 8.5 | 6.71, 10.25 | |
| Own or rent | | | | | 0.56 |
| Own or buying | 64.6 | 60.36, 68.82 | 65.4 | 60.99, 69.81 | |
| Rent or some other arrangement | 35.4 | 31.18, 39.64 | 34.6 | 30.19, 39.01 | |
| MSA status | | | | | 0.73 |
| MSA, central city | 30.3 | 23.45, 37.15 | 30.6 | 23.85, 37.35 | |
| MSA, non-central city | 52.8 | 44.95, 60.75 | 53.2 | 45.58, 60.90 | |
| Non-MSA | 16.9 | 10.85, 22.85 | 16.2 | 10.27, 22.04 | |
| Region | | | | | 0.19 |
| Northeast | 21.2 | 12.15, 30.35 | 20.0 | 11.56, 28.50 | |
| Midwest | 24.0 | 14.14, 33.89 | 22.9 | 13.87, 32.02 | |
| South | 32.5 | 23.96, 41.01 | 33.4 | 24.67, 42.10 | |
| West | 22.2 | 13.98, 30.52 | 23.6 | 15.06, 32.22 | |
Note. CI = confidence interval.
4.2. Estimates of Sexual Minority Status by Mode
Table 3 presents response distributions for the sexual identity question by mode. A higher percentage of adults identified as gay or lesbian (1.4%) in CAPI compared to ACASI (0.9%), while a slightly higher percentage of adults identified as bisexual in ACASI (1.2%) compared to CAPI (1.0%). However, neither difference reached statistical significance. Compared to CAPI, ACASI also yielded a slightly higher percentage of adults answering “something else” or “I don’t know the answer”, as well as a slightly higher percentage refusing to answer. Again, the differences were not statistically significant. Since the overall number of adults identifying as a sexual minority was small (ACASI = 79, CAPI = 57), subsequent analyses focus on a dichotomous measure of sexual minority status (gay/lesbian or bisexual versus straight).
Table 3.
Responses to main sexual identity question by interview mode: NHIS sexual identity field test, 2012 (weighted).
| | ACASI n | ACASI % | ACASI 95% CI | CAPI n | CAPI % | CAPI 95% CI |
|---|---|---|---|---|---|---|
| Sexual identity | ||||||
| Gay or lesbian | 36 | 0.9 | 0.43, 1.38 | 28 | 1.4 | 0.73, 1.99 |
| Straight, that is, not gay or lesbian | 2,952 | 94.6 | 93.58, 95.58 | 2,105 | 95.2 | 94.01, 96.42 |
| Bisexual | 43 | 1.2 | 0.80, 1.63 | 29 | 1.0 | 0.53, 1.38 |
| Something else | 14 | 0.4 | 0.11, 0.60 | 8 | 0.3 | 0.05, 0.56 |
| Don’t know | 70 | 1.9 | 1.31, 2.54 | 33 | 1.3 | 0.71, 1.92 |
| Refused^a | 38 | 1.0 | 0.61, 1.44 | 21 | 0.9 | 0.28, 1.44 |
Note. CI = confidence interval.
^a Included in the “refused” category for ACASI participants are 33 cases where the respondent skipped the question.
Table 4 presents estimates of sexual minority status by mode. Overall, ACASI produced a slightly lower estimate (2.2%) of sexual minorities than CAPI (2.4%), although the difference was not statistically significant.
Table 5 shows estimates of sexual minority status by mode and respondent sociodemographics. Again, the goal of these analyses is to test whether the effect of interview mode is homogeneous across subpopulations. For example, “Is the impact of survey mode on reporting a sexual minority status equivalent for men and women?” While the CAPI estimates were, on average, slightly higher (in 20 of 26 comparisons), only one statistically significant difference was identified. Adult respondents from families with annual incomes of USD50,000 or more were significantly more likely to identify as a sexual minority in ACASI (2.2%) than in CAPI (0.8%) (unadjusted odds ratio (UOR) = 2.95, 95% confidence interval (CI) = 1.16–7.50). A logistic regression of sexual minority reporting in which interview mode, total family income, and their interaction were included as covariates yielded a significant interaction term (p < .001), lending support to non-equivalence in the impact of survey mode on reporting a sexual minority status across income subgroups.
Table 5.
Percent sexual minority by select sociodemographics and interview mode: NHIS sexual identity field test, 2012 (weighted).
| | ACASI n | ACASI % | ACASI 95% CI | CAPI n | CAPI % | CAPI 95% CI | OR (ACASI vs. CAPI) | 95% CI |
|---|---|---|---|---|---|---|---|---|
| Age | ||||||||
| 18–44 | 1,405 | 2.5 | 1.53, 3.46 | 1,011 | 3.5 | 2.15, 4.79 | 0.71 | 0.39–1.30 |
| 45+ | 1,626 | 1.9 | 1.06, 2.78 | 1,151 | 1.3 | 0.41, 2.27 | 1.44 | 0.63–3.31 |
| Sex | ||||||||
| Male | 1,366 | 1.8 | 1.09, 2.48 | 931 | 2.1 | 1.10, 3.09 | 0.85 | 0.45–1.62 |
| Female | 1,665 | 2.6 | 1.61, 3.54 | 1,231 | 2.6 | 1.48, 3.76 | 0.98 | 0.56–1.71 |
| Race/ethnicity | ||||||||
| Non-Hispanic white | 1,810 | 1.9 | 1.17, 2.67 | 1,263 | 2.3 | 1.39, 3.13 | 0.85 | 0.47–1.51 |
| Other | 1,221 | 2.8 | 1.39, 4.16 | 899 | 2.6 | 0.93, 4.24 | 1.08 | 0.46–2.50 |
| Education | ||||||||
| Less than a high school diploma/GED | 1,269 | 2.5 | 1.40, 3.59 | 903 | 3.3 | 1.73, 4.88 | 0.75 | 0.41–1.35 |
| High school diploma/GED or more | 1,756 | 2.0 | 1.33, 2.65 | 1,253 | 1.7 | 1.05, 2.38 | 1.16 | 0.70–1.94 |
| Employment status | ||||||||
| Working | 1,732 | 2.5 | 1.63, 3.37 | 1,282 | 2.3 | 1.30, 3.23 | 1.11 | 0.65–1.90 |
| Not working | 1,267 | 1.7 | 0.99, 2.48 | 879 | 2.5 | 1.20, 3.86 | 0.68 | 0.33–1.38 |
| Marital status | ||||||||
| Never married | 712 | 4.2 | 2.32, 6.10 | 539 | 4.4 | 2.44, 6.42 | 0.95 | 0.47–1.93 |
| Other | 2,312 | 1.7 | 1.05, 2.28 | 1,616 | 1.8 | 0.99, 2.60 | 0.93 | 0.51–1.69 |
| Reported health status | ||||||||
| Excellent/very good | 1,708 | 1.8 | 1.05, 2.54 | 1,236 | 2.0 | 1.10, 2.85 | 0.91 | 0.49–1.70 |
| Good/fair/poor | 1,322 | 2.8 | 1.60, 3.92 | 923 | 3.0 | 1.49, 4.42 | 0.93 | 0.50–1.73 |
| Family income | ||||||||
| < USD50,000 | 1,577 | 2.5 | 1.64, 3.31 | 1,151 | 3.4 | 2.14, 4.71 | 0.72 | 0.41–1.24 |
| ≥ USD50,000 | 1,228 | 2.3 | 1.18, 3.33 | 835 | 0.8 | 0.20, 1.36 | 2.95 | 1.16–7.50 |
| Own or rent residence | ||||||||
| Own or buying | 1,787 | 1.5 | 0.82, 2.11 | 1,238 | 1.0 | 0.50, 1.58 | 1.42 | 0.70–2.85 |
| Rent or some other arrangement | 1,236 | 3.6 | 2.02, 5.11 | 920 | 4.9 | 2.91, 6.88 | 0.72 | 0.35–1.46 |
| MSA status | ||||||||
| MSA, central city | 1,049 | 3.4 | 2.08, 4.67 | 784 | 3.7 | 2.00, 5.39 | 0.91 | 0.48–1.72 |
| Other | 1,982 | 1.7 | 1.03, 2.35 | 1,378 | 1.8 | 0.95, 2.61 | 0.95 | 0.52–1.73 |
| Region | ||||||||
| West | 735 | 1.4 | 0.42, 2.47 | 576 | 1.8 | 0.80, 2.79 | 0.80 | 0.31–2.05 |
| Other | 2,296 | 2.4 | 1.66, 3.15 | 1,586 | 2.5 | 1.57, 3.51 | 0.95 | 0.57–1.56 |
Note. CI = confidence interval; OR = odds ratio.
Next, we turned to measures of the interviewing environment (Table 6) under the hypothesis that ACASI would elicit greater (and presumably more accurate) reporting of sexual minority status in more challenging, less private interviewing situations (e.g., with reluctant respondents). Only one significant difference emerged by mode. The percentage of adults identifying as a sexual minority was significantly greater in CAPI (3.5%) than in ACASI (1.4%) when one or more householders expressed privacy concerns (UOR = 0.40, 95% CI = 0.19–0.83). As with total family income, a logistic regression of sexual minority status with interview mode, expression of privacy concerns, and their interaction included as covariates produced a significant interaction term (p < .05). Here again, there appears to be non-equivalence in the impact of survey mode on reporting a sexual minority status across subgroups defined by whether or not householders expressed privacy concerns.
Table 6.
Percent sexual minority by select interview environment measures and interview mode: NHIS sexual identity field test, 2012 (weighted).
| | ACASI n | ACASI % | ACASI 95% CI | CAPI n | CAPI % | CAPI 95% CI | OR (ACASI vs. CAPI) | 95% CI |
|---|---|---|---|---|---|---|---|---|
| Presence of others | ||||||||
| Yes | 989 | 2.5 | 1.33, 3.69 | 692 | 3.0 | 1.55, 4.43 | 0.84 | 0.44–1.59 |
| Unknown | 2,042 | 2.0 | 1.29, 2.64 | 1,449 | 1.9 | 1.09, 2.69 | 1.04 | 0.60–1.82 |
| Location of interview | ||||||||
| Inside respondent’s home | 1,913 | 1.9 | 1.27, 2.62 | 1,361 | 2.3 | 1.28, 3.35 | 0.84 | 0.47–1.51 |
| Outside respondent’s home | 743 | 3.1 | 1.50, 4.73 | 511 | 2.4 | 1.00, 3.79 | 1.31 | 0.59–2.89 |
| Number of contact attempts | ||||||||
| 1–2 | 1,310 | 2.0 | 1.16, 2.84 | 921 | 2.9 | 1.57, 4.24 | 0.68 | 0.35–1.34 |
| 3–4 | 746 | 1.5 | 0.50, 2.47 | 578 | 1.5 | 0.56, 2.45 | 0.99 | 0.43–2.29 |
| 5+ | 971 | 3.0 | 1.61, 4.43 | 656 | 2.4 | 0.89, 3.82 | 1.29 | 0.56–2.98 |
| Case reassigned to different interviewer | ||||||||
| Yes | 717 | 1.7 | 0.83, 2.61 | 522 | 1.8 | 0.44, 3.19 | 0.95 | 0.40–2.23 |
| No | 2,314 | 2.3 | 1.54, 3.15 | 1,640 | 2.5 | 1.65, 3.43 | 0.92 | 0.54–1.58 |
| Householder(s) expressed privacy-related concerns | ||||||||
| Yes | 525 | 1.4 | 0.54, 2.27 | 410 | 3.5 | 1.75, 5.19 | 0.40 | 0.19–0.83 |
| No | 2,492 | 2.4 | 1.66, 3.09 | 1,738 | 2.1 | 1.30, 2.99 | 1.11 | 0.67–1.84 |
| Householder(s) expressed time-related concerns | ||||||||
| Yes | 897 | 2.9 | 1.44, 4.27 | 662 | 2.9 | 1.32, 4.51 | 0.98 | 0.43–2.23 |
| No | 2,120 | 1.9 | 1.22, 2.62 | 1,486 | 2.2 | 1.24, 3.06 | 0.89 | 0.48–1.65 |
Note. CI = confidence interval; OR = odds ratio.
4.3. Item Nonresponse
For the next set of analyses we examined item nonresponse as a measure of data quality. Table 7 presents item nonresponse rates (all responses other than those coded as sexual minority or straight) to the main sexual identity question by mode. While the overall item nonresponse rate was slightly higher in ACASI (2.7%) compared to CAPI (2.3%), the difference did not reach statistical significance (UOR = 1.20, 95% CI = 0.77–1.89; AOR = 1.27, 95% CI = 0.83–1.96).
Table 8 presents item nonresponse rates by mode and respondent sociodemographics. Are the effects of mode on item nonresponse to the sexual identity question equivalent across subgroups? Consistent with the overall rate, we observed slightly higher nonresponse to the main sexual identity question in ACASI compared to CAPI across the 12 respondent sociodemographic measures. In all, ACASI produced a higher item nonresponse rate for 19 of 26 comparisons. However, none of the observed differences were statistically significant.
Table 8.
Item nonresponse rate to sexual identity question by select sociodemographics and interview mode: NHIS sexual identity field test, 2012 (weighted).
| | ACASI n | ACASI % | ACASI 95% CI | CAPI n | CAPI % | CAPI 95% CI | OR (ACASI vs. CAPI) | 95% CI |
|---|---|---|---|---|---|---|---|---|
| Age | ||||||||
| 18–44 | 1,454 | 2.4 | 1.55, 3.27 | 1,035 | 1.8 | 0.75, 2.84 | 1.35 | 0.76, 2.39 |
| 45+ | 1,696 | 3.0 | 2.05, 3.91 | 1,188 | 2.7 | 1.56, 3.82 | 1.11 | 0.64, 1.92 |
| Sex | ||||||||
| Male | 1,422 | 2.7 | 1.67, 3.66 | 957 | 2.5 | 1.15, 3.95 | 1.05 | 0.52, 2.10 |
| Female | 1,728 | 2.8 | 1.80, 3.70 | 1,266 | 2.0 | 1.07, 2.92 | 1.39 | 0.79, 2.46 |
| Race/ethnicity | ||||||||
| Non-Hispanic white | 1,847 | 1.4 | 0.79, 1.96 | 1,287 | 1.5 | 0.64, 2.37 | 0.91 | 0.45, 1.84 |
| Other | 1,303 | 5.4 | 3.91, 6.84 | 936 | 3.7 | 2.20, 5.30 | 1.46 | 0.94, 2.25 |
| Education | ||||||||
| Less than a high school diploma/GED | 1,344 | 4.2 | 2.89, 5.47 | 939 | 3.5 | 1.97, 5.01 | 1.21 | 0.75, 1.94 |
| High school diploma/GED or more | 1,799 | 1.6 | 0.92, 2.34 | 1,277 | 1.3 | 0.61, 1.97 | 1.27 | 0.62, 2.59 |
| Employment status | ||||||||
| Working | 1,789 | 2.1 | 1.38, 2.87 | 1,313 | 1.7 | 1.01, 2.47 | 1.23 | 0.72, 2.08 |
| Not working | 1,357 | 3.6 | 2.41, 4.76 | 909 | 3.1 | 1.65, 4.62 | 1.15 | 0.66, 2.01 |
| Marital status | ||||||||
| Never married | 738 | 2.6 | 1.42, 3.84 | 559 | 2.6 | 1.18, 4.05 | 1.00 | 0.47, 2.13 |
| Other | 2,404 | 2.7 | 1.92, 3.51 | 1,655 | 2.1 | 1.15, 3.09 | 1.29 | 0.76, 2.18 |
| Reported health status | ||||||||
| Excellent/very good | 1,768 | 2.3 | 1.52, 3.05 | 1,268 | 2.1 | 1.13, 2.98 | 1.11 | 0.62, 2.00 |
| Good/fair/poor | 1,381 | 3.3 | 2.18, 4.45 | 952 | 2.6 | 1.43, 3.73 | 1.29 | 0.76, 2.19 |
| Family income | ||||||||
| < USD50,000 | 1,646 | 3.3 | 2.20, 4.48 | 1,184 | 2.8 | 1.30, 4.26 | 1.21 | 0.68, 2.14 |
| ≥ USD50,000 | 1,261 | 1.8 | 0.95, 2.68 | 850 | 1.3 | 0.50, 2.07 | 1.42 | 0.67, 3.03 |
| Missing | 243 | 4.4 | 2.25, 6.54 | 189 | 4.9 | 2.06, 7.64 | 0.90 | 0.40, 2.05 |
| Own or rent residence | ||||||||
| Own or buying | 1,839 | 2.0 | 1.32, 2.73 | 1,268 | 2.0 | 0.83, 3.25 | 0.99 | 0.50, 1.99 |
| Rent or some other arrangement | 1,303 | 4.0 | 2.68, 5.27 | 949 | 2.5 | 1.31, 3.79 | 1.58 | 0.96, 2.61 |
| MSA status | ||||||||
| MSA, central city | 1,104 | 3.8 | 2.21, 5.48 | 810 | 2.1 | 1.00, 3.21 | 1.86 | 0.98, 3.35 |
| Other | 2,046 | 2.2 | 1.49, 2.94 | 1,413 | 2.3 | 1.18, 3.48 | 0.95 | 0.54, 1.66 |
| Region | ||||||||
| West | 774 | 4.5 | 2.51, 6.39 | 600 | 2.9 | 1.59, 4.20 | 1.56 | 0.94, 2.61 |
| Other | 2,376 | 2.2 | 1.50, 2.92 | 1,623 | 2.1 | 1.01, 3.12 | 1.07 | 0.58, 1.96 |
Note. CI = confidence interval; OR = odds ratio.
Mode comparisons by the interviewing environment measures yielded similar results (see Table 9). Thirteen comparisons across six variables produced one significant difference: interviews conducted inside respondents’ homes led to a higher nonresponse rate to the main sexual identity question in ACASI (2.6%) compared to CAPI (1.4%) (UOR = 1.79, 95% CI = 1.07–2.99). However, a logistic regression of item nonresponse to the sexual identity question with interview mode, location of interview, and the interaction of the two as covariates did not yield a significant interaction term (p = .06).
Table 9.
Item nonresponse rate to the sexual identity question by select interview environment measures and interview mode: NHIS sexual identity field test, 2012 (weighted).
| | ACASI n | ACASI % | ACASI 95% CI | CAPI n | CAPI % | CAPI 95% CI | OR (ACASI vs. CAPI) | 95% CI |
|---|---|---|---|---|---|---|---|---|
| Presence of others | ||||||||
| Yes | 1,022 | 2.3 | 1.37, 3.30 | 732 | 1.8 | 0.75, 2.85 | 1.31 | 0.66, 2.61 |
| Unknown | 2,128 | 3.0 | 2.00, 3.96 | 1,491 | 2.6 | 1.50, 3.74 | 1.14 | 0.70, 1.87 |
| Location of interview^a | | | | | | | | |
| Inside respondent’s home | 1,982 | 2.6 | 1.70, 3.41 | 1,385 | 1.4 | 0.72, 2.18 | 1.79 | 1.07, 2.99 |
| Outside respondent’s home | 768 | 2.3 | 0.93, 3.68 | 528 | 3.1 | 1.19, 4.95 | 0.74 | 0.31, 1.81 |
| Number of contact attempts | ||||||||
| 1–2 | 1,358 | 2.6 | 1.57, 3.72 | 945 | 2.0 | 1.03, 2.98 | 1.32 | 0.71, 2.46 |
| 3–4 | 781 | 3.6 | 2.05, 5.19 | 596 | 3.4 | 0.94, 5.80 | 1.08 | 0.45, 2.60 |
| 5+ | 1,007 | 2.1 | 1.09, 3.07 | 675 | 1.7 | 0.77, 2.54 | 1.26 | 0.61, 2.59 |
| Case reassigned to different interviewer | ||||||||
| Yes | 745 | 2.7 | 1.42, 3.89 | 537 | 1.6 | 0.60, 2.57 | 1.69 | 0.78, 3.69 |
| No | 2,405 | 2.7 | 1.92, 3.53 | 1,686 | 2.5 | 1.47, 3.49 | 1.10 | 0.68, 1.79 |
| Householder(s) expressed privacy-related concerns | ||||||||
| Yes | 567 | 5.0 | 3.11, 6.79 | 428 | 4.6 | 1.81, 7.34 | 1.09 | 0.56, 2.12 |
| No | 2,569 | 2.2 | 1.56, 2.92 | 1,780 | 1.7 | 1.02, 2.44 | 1.30 | 0.77, 2.20 |
| Householder(s) expressed time-related concerns | ||||||||
| Yes | 942 | 2.9 | 1.58, 4.29 | 678 | 1.7 | 0.32, 3.01 | 1.78 | 0.70, 4.55 |
| No | 2,194 | 2.6 | 1.85, 3.41 | 1,530 | 2.5 | 1.52, 3.55 | 1.04 | 0.63, 1.71 |
Note. CI = confidence interval; OR = odds ratio.
^a Telephone interviews are excluded from this measure.
5. Discussion
5.1. Sexual Minority Status
While the larger literature on sexual orientation reporting consistently finds that self-administered interview modes elicit more reporting of sensitive and socially undesirable information, we found no statistically significant difference in the overall percentage of adults identifying as a sexual minority (gay/lesbian or bisexual) in ACASI (2.2%) compared to CAPI (2.4%) in the NHIS field test. In addition, no significant mode differences in sexual minority estimates were observed within 24 of 26 sociodemographic and interview environment subgroups. The two exceptions were the following: 1) adult respondents from higher-income families, unlike adults from lower-income families, were significantly more likely to identify as a sexual minority in ACASI than in CAPI; and 2) among adults in households where privacy concerns were expressed, a higher percentage identified as a sexual minority in CAPI than in ACASI. This second finding is counterintuitive, and neither of these exceptions to the pattern of null results can be easily explained. In both CAPI and ACASI the information is entered into a computer, so concerns about electronic data security cannot explain the difference. Given the number of comparisons performed, these findings may simply reflect Type I error. If we had set the p-value cutoff for statistical significance lower to account for the multiple comparisons, we would have found no significant differences.
Assuming that significant mode differences reveal question sensitivity, these findings suggest that the sexual identity question was not considered sensitive by field test participants. As a further potential indication that respondents did not find the question sensitive, there were only three breakoffs (i.e., cases in which the respondent quit the survey) at the sexual identity question across the two modes: two in ACASI and one in CAPI.
There could be a number of explanations for the lack of mode differences in sexual minority estimates, including features of the survey design and larger societal trends. From a question design perspective, it is important to recall that the flashcard used with the CAPI version of the sexual identity question was designed to maximize privacy in a face-to-face setting. With the flashcard, the two parties can navigate the question without the respondent directly disclosing their sexual identity to the interviewer and without the interviewer reading the response options aloud. When the flashcard is handed to the respondent, they are asked to report the number on the card that corresponds to their answer, not their actual sexual identity. In both the ACASI and CAPI versions, the question text does not use explicit terminology or otherwise allude to the fact that the question is attempting to capture the respondent’s sexual identity. To what extent this design minimized differences in estimates between CAPI and ACASI is difficult to measure. The flashcard is not always easy to use, especially in interviews conducted on doorsteps or in other difficult interviewing environments, and it is unknown to what extent interviewers used it even when the interview setting was conducive to its use. With that said, another mode comparison study that used a flashcard in its CAPI mode also found no significant difference between CAPI and CASI in the percentage of persons reporting a sexual minority identity (Malagoda and Traynor 2008).
The minimal differences by mode could also be explained by decreased stigma associated with a sexual minority identity. If sexual minority respondents have less reason to fear embarrassment or reprisal from interviewers and/or third parties for revealing their sexual identity, they would have less reason to conceal it in face-to-face interviews. Unlike sexual behavior and attraction, which are still considered private matters, sexual identity is increasingly perceived as a standard demographic characteristic that can be shared in social contexts (Fredriksen-Goldsen and Muraco 2010; Rosenfeld 1999). Indeed, the phenomenon of “coming out” (disclosing one’s sexual identity) is a well-known and well-studied one (e.g., Legate et al. 2012; McGarrity and Huebner 2014). In contrast, there is no analogous widespread phenomenon of revealing one’s sexual attractions or activities to colleagues or family members. In addition, although nationally representative trend data are scarce, research using convenience samples suggests that LGB people may be coming out earlier and in more contexts today than they did in the past (Pew Research Center 2013; Floyd and Bakeman 2006). As a result of these changes, the sensitivity of sexual identity questions may have decreased. Alternatively, it may be that the public’s declining trust in the ability of the government, or any institution, to keep computerized information secure and confidential means that even an ACASI instrument no longer provides the assurance of privacy it once did, in which case both the ACASI and CAPI estimates would be underestimates.
Such a discrepancy between the sensitivity of questions on sexual identity and those on attraction and behavior could explain why studies examining sexual attraction reporting by mode (Caltabiano and Dalla-Zuanna 2012; Villarroel et al. 2006) and some studies examining sexual behavior reporting by mode (e.g., Villarroel et al. 2006; Potdar and Koenig 2005) found mode differences. Likewise, in the two studies that found differences in sexual identity reporting by mode (Midanik and Greenfield 2008; Ghanem et al. 2005), the survey instruments also asked about sexual behavior, and that implicit linking of identity to behavior may explain the mode effects found in those studies but not here. The temporal gap between those studies and the present one could also explain the difference.
5.2. Item Nonresponse
While few empirical assessments of the link between sensitivity and item nonresponse have been performed, it is widely assumed that sensitive items produce more item nonresponse (Tourangeau and Yan 2007), an assumption largely attributable to high refusal rates on income questions (Dahlhamer et al. 2003; Dahlhamer et al. 2004). The very limited literature examining this issue with sexual orientation questions has found either lower or roughly equal item nonresponse in self-administered modes compared with interviewer-administered surveys (Caltabiano and Dalla-Zuanna 2012; Kurth et al. 2004; Villarroel et al. 2006). As with sexual minority reporting, there was no overall difference in item nonresponse to the sexual identity question by mode (CAPI = 2.3%, ACASI = 2.7%). In addition, while slightly higher sexual identity item nonresponse rates were observed in ACASI across a number of the sociodemographic and interview environment subgroups, none of these differences reached statistical significance.
Given that results indicated a lack of perceived sensitivity, the absence of significant differences in item nonresponse by mode is consistent with that finding. However, we also believe the design of the ACASI screens for this field test contributed to the lack of significant mode differences in item nonresponse rates (Dahlhamer et al. 2013). An “implicit filter” design (Derouvray and Couper 2002) was adopted whereby “don’t know” and “refused” options were not presented on the screen. The goal was to mimic a face-to-face interview as closely as possible: an interviewer in the CAPI setting does not offer explicit “don’t know” and “refused” options to the respondent. If the respondent attempted to skip the question in ACASI (pressing Enter without selecting a response), he/she was routed to a follow-up question that provided an option to return to the main question; explicit “refused” and “don’t know” options were provided on that follow-up screen. Here again, the design attempted to mimic CAPI interviewing, in which interviewers are trained to probe respondents in an attempt to convert don’t know responses.
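The screen flow just described can be summarized as a small routing routine. The sketch below is our own simplified rendering under the description above, not the actual instrument specification; the function and option names are hypothetical.

```python
def ask_sexual_identity_acasi(get_main_response, get_followup_response):
    """Simplified, hypothetical rendering of the implicit-filter routing.

    get_main_response returns a substantive answer code, or None if the
    respondent pressed Enter without selecting a response; the main screen
    shows no explicit "don't know" or "refused" options.
    """
    while True:
        answer = get_main_response()
        if answer is not None:
            return answer                 # substantive response recorded
        # Skipped: route to a follow-up screen that offers a way back to the
        # question plus explicit "refused" and "don't know" options.
        choice = get_followup_response()  # "return", "refused", or "don't know"
        if choice == "return":
            continue                      # re-present the main question
        return choice                     # record the nonresponse code


# Example: the respondent skips once, elects to return, then answers "2".
answers = iter([None, 2])
followups = iter(["return"])
print(ask_sexual_identity_acasi(lambda: next(answers), lambda: next(followups)))  # -> 2
```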
5.3. Limitations
This study was subject to at least six limitations. First, we lacked the statistical power necessary to detect modest differences in sexual minority estimates by mode. Given the effective sample sizes and a sexual minority estimate of 2.4% in CAPI, we had less than 20% power to detect a half percentage point difference in estimates by mode and 80% power only for a difference of 1.8 percentage points (a back-of-the-envelope version of this calculation is sketched at the end of this subsection). The problem was further compounded for subgroup comparisons. Second, the small number of adults identifying as a sexual minority precluded us from exploring mode differences in gay/lesbian responses separately from bisexual responses. Third, while the CAPI and ACASI instruments were available in Spanish, there was insufficient sample to explore associations between mode of interview and sexual minority status by language of interview. Fourth, we approached the field test and subsequent data analysis from an “intent-to-treat” perspective. For roughly 17% of cases in which the sexual identity module was completed, the questions were asked in a mode different from the one assigned. While separate analyses removing these cases revealed no substantive differences in the conclusions reported here, we cannot say with certainty that our results would be the same had strict adherence to the experimental protocol been maintained. Fifth, the use of a flashcard with the sexual identity question in the CAPI administration may have afforded CAPI respondents a level of privacy approaching that of ACASI, contributing to the null findings of this study; however, we do not have information on how often the flashcard was actually used. Sixth, both unit and item nonresponse may have influenced the results reported here. The final sample adult response rates for the two mode paths were 64.9% (CAPI) and 64.0% (ACASI). It is possible that subgroups who are more sensitive to mode effects with regard to sexual minority reporting had lower propensities to respond to the survey as a whole or to the sample adult interview. Nonresponse rates to the sexual identity question were 2.3% for CAPI and 2.7% for ACASI, rates similar to or higher than the percentage of adults identifying as a sexual minority. Furthermore, item nonresponse to the sexual identity question was lower among better educated, employed, and more affluent adults, which may have minimized the effect of survey mode in this test. As Drydakis (2014) notes, sexual minorities with higher socioeconomic status may be more open and forthcoming about their sexual identity.
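A back-of-the-envelope version of the power statement above can be reproduced with a standard two-proportion calculation. The group sizes below are placeholders rather than the field test’s design-adjusted effective sample sizes, so the result is only indicative.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Cohen's h for a half-percentage-point difference around the 2.4% CAPI estimate.
h = abs(proportion_effectsize(0.024, 0.029))

# Placeholder group sizes (NOT the study's effective sample sizes).
n_capi, n_acasi = 2200, 3100

power = NormalIndPower().power(effect_size=h, nobs1=n_capi,
                               alpha=0.05, ratio=n_acasi / n_capi)
print(round(power, 2))  # roughly 0.2: little chance of detecting so small a difference
```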
5.4. Conclusion
In conclusion, the findings from this field test may be of interest to other general health surveys that want to add questions on sexual orientation but cannot afford the additional costs that may come with more complex mixed-mode designs. Our results suggest that one or more questions on sexual identity can be integrated within the existing design structure of such surveys with little implication for social desirability bias. With that said, this research represents one of a small number of studies that have attempted to isolate the effects of administration mode on responses to a question on sexual identity. Future experimental research could attempt to replicate our findings and/or extend them by exploring other interviewer- and self-administered modes (e.g., CATI, T-ACASI, web surveys) and/or by experimentally manipulating the context in which sexual identity questions are asked.
6. References
- Aquilino W. 1994. “Interview Mode Effects in Surveys of Drug and Alcohol Use.” Public Opinion Quarterly 58: 210–240. DOI: 10.1086/269419.
- Bernstein R, Chadha A, and Montjoy R. 2001. “Over-reporting Voting: Why it Happens and Why it Matters.” Public Opinion Quarterly 65: 22–44. DOI: 10.1086/320036.
- Bradburn N. 1983. “Response Effects.” In Handbook of Survey Research, edited by Rossi P, Wright J, and Anderson A, 289–328. New York: Academic Press.
- Brazier JE, Harper R, Jones NMB, O’Cathain A, Thomas KJ, Usherwood T, and Westlake L. 1992. “Validating the SF-36 Health Survey Questionnaire: New Outcome Measure for Primary Care.” British Medical Journal 305(6846): 160–164. DOI: 10.1136/bmj.305.6846.160.
- Caltabiano M and Dalla-Zuanna G. 2012. “A Comparison of Survey Techniques on Sensitive Sexual Behavior in Italy.” Journal of Sex Research 50(6): 537–547. DOI: 10.1080/00224499.2012.674573.
- Couper MP, Singer E, and Tourangeau R. 2003. “Understanding the Effects of Audio-CASI on Self-Reports of Sensitive Behavior.” Public Opinion Quarterly 67: 385–395. DOI: 10.1086/376948.
- Dahlhamer J, Dixon J, Doyle P, Eargle J, Griffin DH, and McGovern P. 2003. “Quality at the Item Level: Terms, Methods, and Guidelines for Cross-Survey Comparisons.” In Proceedings of the Federal Committee on Statistical Methodology Research Conference, Sheraton Crystal City Hotel, VA (November 2003): 17–19. Available at: https://www.researchgate.net/profile/James_Dahlhamer/publication/265540556_Quality_at_the_Item_Level_Terms_Methods_and_Guidelines_for_Cross-Survey_Comparisons_1/links/54b441750cf26833efd0123b.pdf (accessed November 2019).
- Dahlhamer J, Dixon J, Doyle P, Eargle J, and McGovern P. 2004. “Quality at the Item Level: Decomposing Item and Concept Response Rates.” Proceedings of the European Conference on Quality and Methodology in Official Statistics (Q2004), May 24–26, Mainz, Germany.
- Dahlhamer JM, Galinsky A, Joestl S, Cynamon M, Madans J, and Cain V. 2013. “Minor Design Changes with Major Impacts: Testing Explicit Versus Implicit Don’t Know and Refused Options in Audio Computer-Assisted Self-Interviewing.” Presented at the AAPOR 68th Annual Conference, May 17, Boston, Massachusetts, U.S.A.
- De Leeuw ED and van der Zouwen J. 1988. “Data Quality in Telephone and Face-to-Face Surveys: A Comparative Meta-Analysis.” In Telephone Survey Methodology, edited by Groves RM, Biemer PP, Lyberg LE, Massey JT, Nicholls WL II, and Waksberg J, 283–299. New York: Wiley.
- Des Jarlais DC, Paone D, Milliken J, Turner CF, Miller H, Gribble J, Shi Q, Hagan H, and Friedman SR. 1999. “Audio-Computer Interviewing to Measure Risk Behaviour for HIV among Injecting Drug Users: A Quasi-Randomised Trial.” The Lancet 353(9165): 1657–1661. DOI: 10.1016/S0140-6736(98)07026-3.
- Derouvray C and Couper MP. 2002. “Designing a Strategy for Reducing ‘No Opinion’ Responses in Web Surveys.” Social Science Computer Review 20(1): 3–9.
- Dialsingh I. 2008. “Face-to-Face Interviewing.” In Encyclopedia of Survey Research Methods, edited by Lavrakas P, 260–262. Thousand Oaks, CA: SAGE Publications, Inc. DOI: 10.4135/9781412963947.
- Dolezal C, Marhefka SL, Santamaria EK, Leu C-S, Brackis-Cott E, and Mellins CA. 2012. “A Comparison of Audio Computer-Assisted Self-Interviews to Face-to-Face Interviews of Sexual Behavior among Perinatally HIV-Exposed Youth.” Archives of Sexual Behavior 41: 401–410. DOI: 10.1007/s10508-011-9769-6.
- Drydakis N. 2014. “Sexual Orientation and Labor Market Outcomes.” IZA World of Labor 111: 1–10. DOI: 10.15185/izawol.111.
- Floyd FJ and Bakeman R. 2006. “Coming-Out across the Life Course: Implications of Age and Historical Context.” Archives of Sexual Behavior 35(3): 287–296. DOI: 10.1007/s10508-006-9022-x.
- Fowler F. 1993. Survey Research Methods. Newbury Park, CA: Sage Publications.
- Fowler F. 1995. Improving Survey Questions: Design and Evaluation. Thousand Oaks, CA: Sage Publications.
- Fredriksen-Goldsen KI and Muraco A. 2010. “Aging and Sexual Orientation: A 25-Year Review of the Literature.” Research on Aging 32(3): 372–413. DOI: 10.1177/0164027509360355.
- Fu H, Darroch JE, Henshaw SK, and Kolb E. 1998. “Measuring the Extent of Abortion Underreporting in the 1995 National Survey of Family Growth.” Family Planning Perspectives 30: 128–133 & 138. DOI: 10.2307/2991627.
- Ghanem KG, Hutton HE, Zenilman JM, and Erbelding EJ. 2005. “Audio Computer Assisted Self Interview and Face to Face Interview Modes in Assessing Response Bias among STD Clinic Patients.” Sexually Transmitted Infections 81: 421–425. DOI: 10.1136/sti.2004.013193.
- Gfroerer J and Hughes A. 1992. “Collecting Data on Illicit Drug Use by Phone.” In Survey Measurement of Drug Use: Methodological Studies, edited by Turner C, Lessler J, and Gfroerer J, 277–295. Washington, DC: U.S. Government Printing Office.
- Groves RM, Fowler FJ, Couper MP, Lepkowski JM, Singer E, and Tourangeau R. 2004. Survey Methodology. Hoboken, NJ: John Wiley & Sons, Inc. DOI: 10.2307/27590808.
- Healthy People 2020. (n.d.) “Lesbian, Gay, Bisexual, and Transgender Health.” Available at: https://www.healthypeople.gov/2020/topics-objectives/topic/lesbian-gay-bisexual-and-transgender-health (accessed November 2019).
- Hox JJ and de Leeuw ED. 1994. “A Comparison of Nonresponse in Mail, Telephone, and Face-to-Face Surveys.” Quality and Quantity 28(4): 329–344. DOI: 10.1007/BF01097014.
- IOM (Institute of Medicine). 2011. The Health of Lesbian, Gay, Bisexual, and Transgender People: Building a Foundation for Better Understanding. Washington, DC: The National Academies Press. DOI: 10.17226/13128.
- Jaya MJH, Hindin MJ, and Ahmed S. 2008. “Differences in Young Peoples’ Reports of Sexual Behaviors According to Interview Methodology: A Randomized Trial in India.” American Journal of Public Health 98(1): 169–174. DOI: 10.2105/AJPH.2006.099937.
- Juster FT and Smith JP. 1997. “Improving the Quality of Economic Data: Lessons from HRS and AHEAD.” Journal of the American Statistical Association 92: 1268–1278. DOI: 10.1080/01621459.1997.10473648.
- Kurth AE, Martin DP, Golden MR, Weiss NS, Heagerty PJ, Spielberg F, Handsfield HH, and Holmes KK. 2004. “A Comparison between Audio Computer-Assisted Self-Interviews and Clinician Interviews for Obtaining the Sexual History.” Sexually Transmitted Diseases 31(12): 719–726. DOI: 10.1097/01.olq.0000145855.36181.13.
- Legate N, Ryan RM, and Weinstein N. 2012. “Is Coming Out Always a ‘Good Thing’? Exploring the Relations of Autonomy Support, Outness, and Wellness for Lesbian, Gay, and Bisexual Individuals.” Social Psychological and Personality Science 3(2): 145–152. DOI: 10.1177/1948550611411929.
- Malagoda M and Traynor J. 2008. “Developing Survey Questions on Sexual Identity: Report on National Statistics Omnibus Trial 4.” London: Social Economic Micro-Analysis and Reporting Division, Office for National Statistics.
- McDonald MP. 2003. “On the Over-Report Bias of the National Election Study Turnout Rate.” Political Analysis 11: 180–186. DOI: 10.1093/pan/mpg006.
- McGarrity LA and Huebner DM. 2014. “Is Being Out about Sexual Orientation Uniformly Healthy? The Moderating Role of Socioeconomic Status in a Prospective Study of Gay and Bisexual Men.” Annals of Behavioral Medicine 47(1): 28–38. DOI: 10.1007/s12160-013-9575-6.
- Midanik LT and Greenfield TK. 2008. “Interactive Voice Response Versus Computer-Assisted Telephone Interviewing (CATI) Surveys and Sensitive Questions: The 2005 National Alcohol Survey.” Journal of Studies on Alcohol and Drugs 69: 580–588. DOI: 10.15288/jsad.2008.69.580.
- Miller K and Ryan JM. 2011. Design, Development and Testing of the NHIS Sexual Identity Question. Cognitive Testing Report. Hyattsville, MD: Questionnaire Design Research Laboratory, Office of Research and Methodology, National Center for Health Statistics. Available at: https://wwwn.cdc.gov/qbank/report/Miller_NCHS_2011_NHIS%20Sexual%20Identity.pdf (accessed June 2013).
- Moore J, Stinson L, and Welniak E. 1999. “Income Reporting in Surveys: Cognitive Issues and Measurement Error.” Chapter 10 in Cognition and Survey Research, edited by Sirken M, Herrmann D, Schechter S, Schwarz N, Tanur J, and Tourangeau R. New York: John Wiley & Sons.
- Pew Research Center. 2013. “A Survey of LGBT Americans: Attitudes, Experiences and Values in Changing Times.” Washington, DC: The Pew Research Center. Available at: https://www.pewsocialtrends.org/files/2013/06/SDT_LGBT-Americans_06-2013.pdf (accessed June 2014).
- Potdar R and Koenig MA. 2005. “Does Audio-CASI Improve Reports of Risky Behavior? Evidence from a Randomized Field Trial among Young Urban Men in India.” Studies in Family Planning 36(2): 107–116. DOI: 10.1111/j.1728-4465.2005.00048.x.
- Rogers SM, Willis G, Al-Tayyib A, Villarroel MA, Turner CF, Ganapathi L, Zenilman J, and Jadack R. 2005. “Audio Computer Assisted Interviewing to Measure HIV Risk Behaviors in a Clinic Population.” Sexually Transmitted Infections 81: 501–507. DOI: 10.1136/sti.2004.014266.
- Rosenfeld D. 1999. “Identity Work among Lesbian and Gay Elderly.” Journal of Aging Studies 13(2): 121–144. DOI: 10.1016/s0890-4065(99)80047-4.
- Shoemaker P, Eicholz M, and Skewes E. 2002. “Item Nonresponse: Distinguishing between Don’t Know and Refuse.” International Journal of Public Opinion Research 14: 193–201. DOI: 10.1093/ijpor/14.2.193.
- Simoes AA, Bastos FI, Moreira RI, Lynch KG, and Metzger DS. 2006. “A Randomized Trial of Audio Computer and In-Person Interview to Assess HIV Risk among Drug and Alcohol Users in Rio de Janeiro, Brazil.” Journal of Substance Abuse Treatment 30: 237–243. DOI: 10.1016/j.jsat.2005.12.002.
- SMART (Sexual Minority Assessment Research Team). 2009. Best Practices for Asking Questions about Sexual Orientation on Surveys. Los Angeles: The Williams Institute, University of California School of Law. Available at: https://escholarship.org/uc/item/706057d5 (accessed November 2019).
- Sykes W and Collins M. 1988. “Effect of Mode of Interview: Experiments in the UK.” In Telephone Survey Methodology, edited by Groves RM, Biemer PP, Lyberg LE, Massey JT, Nicholls WL II, and Waksberg J, 301–320. New York: Wiley.
- Tideman RL, Chen MY, Pitts MK, Ginige S, Slaney M, and Fairley CK. 2007. “A Randomised Control Trial Comparing Computer-Assisted with Face-to-Face Sexual History Taking in a Clinical Setting.” Sexually Transmitted Infections 83: 52–56. DOI: 10.1136/sti.2006.020776.
- Tourangeau R, Rips L, and Rasinski K. 2000. The Psychology of Survey Response. Cambridge: Cambridge University Press. DOI: 10.1017/CBO9780511819322.
- Tourangeau R and Smith TW. 1996. “Asking Sensitive Questions: The Impact of Data Collection Mode, Question Format, and Question Context.” Public Opinion Quarterly 60: 275–304. DOI: 10.1086/297751.
- Tourangeau R and Yan T. 2007. “Sensitive Questions in Surveys.” Psychological Bulletin 133(5): 859–883. DOI: 10.1037/0033-2909.133.5.859.
- Turner CF, Lessler JT, and Devore J. 1992. “Effects of Mode of Administration and Wording on Reporting of Drug Use.” In Survey Measurement of Drug Use: Methodological Studies, edited by Turner C, Lessler J, and Gfroerer J, 177–220. Rockville, MD: National Institute on Drug Abuse.
- Villarroel MA, Turner CF, Eggleston E, Al-Tayyib A, Rogers SM, Roman AM, Cooley PC, and Gordek H. 2006. “Same-Gender Sex in the United States: Impact of T-ACASI on Prevalence Estimates.” Public Opinion Quarterly 70(2): 166–196. DOI: 10.1093/poq/nfj023.
