Abstract
The accuracy of self-report data may be marred by a range of cognitive and motivational biases, including social desirability response bias. The current study used qualitative interviews to examine self-report response biases among participants in a large randomized clinical trial in Vietnam. A sample of study participants was reinterviewed. The vast majority reported being truthful and emphasized the importance of rapport with the study staff for achieving veridical data. However, some stated that rapport may lead to underreporting of risk behaviors in order not to disappoint study staff. Other factors that appeared to influence the accuracy of self-reports included fear that the information might be divulged, desire to enroll in the study, length of the survey, and memory. Several methods can be employed to reduce response biases, and future studies should systematically address response bias and include methods to assess whether approaches and survey items are effective in improving the accuracy of self-report data.
Keywords: self-reports, social desirability bias, measurement, risk behaviors, substance use
The topic of response biases in self-report data in the field of HIV prevention is like the weather; everyone talks about it, but little is done about it. Given the number of studies and the amount of funding in HIV research, it is surprising that there have been relatively few systematic efforts to understand and address biases in self-reports. Many HIV intervention and prospective studies in domestic and international settings have reported sharp drops in HIV risk behaviors after the baseline assessment in all study conditions (Healthy Living Project Team, 2007; Johnson et al., 2008; Koblin, Chesney, Coates, & EXPLORE Study Team, 2004; NIMH Collaborative HIV/STD Prevention Trial Group, 2010). In such studies it is often difficult to disentangle whether behaviors actually changed, the reporting about the behaviors changed, or both. In intervention studies, self-reported behavior change in both experimental and control groups may mask actual behavior change in effective interventions and/or exaggerate behavior change in less effective programs.
It is not known if these drops in reported behaviors are actual changes due to HIV testing and counseling at baseline (administered regardless of treatment arm), the delivery of assessments leading to changes in risk behaviors, or whether the behaviors do not change but the reports of the behavior change are a result of social desirability response bias or changes in perceptions of one’s behaviors. Changes in participants’ perceptions of their own risk behaviors may be a result of greater or less attention focused on the behaviors or reconceptualization of the behavior such that a different internal metric is used to calculate the frequency of the behavior.
Prior studies of social desirability response bias have revealed two dimensions: Impression Management, which includes actions to alter how others perceive the individual, and Self-Deception, which comprises attitudes and behaviors motivated to enhance self-perceptions (Pauls & Stemmler, 2003). Some HIV-related studies have attempted to measure or adjust for social desirability response bias, but it is often measured as a psychological trait (a stable disposition) rather than a state (existing in the context of the specific assessment). Studies that have controlled for social desirability have not tended to find that it has a significant impact on the interpretation of study results (Latkin, Vlahov, & Anthony, 1993).
Although there have been few systematic studies examining the sources of self-report biases in the field of HIV prevention and care, there is a substantial literature on methods to improve self-reports through the use of Audio Computer-Assisted Self-Interview (ACASI). Several studies have found a tendency to report greater risk behaviors with the use of ACASI as compared to face-to-face interviews (Anastario, Chun, Soto, & Montano, 2013; Islam et al., 2012; Langhaug et al., 2011; Yeganeh et al., 2013). However, investigators have also reported that in some populations and countries ACASI may not improve the accuracy of self-reports or may yield inconsistent responses (Gorbach et al., 2013; Minnis et al., 2009; NIMH Collaborative HIV/STD Prevention Trial Group, 2007; Phillips, Gomez, Boily, & Garnett, 2010). It is likely that the population, topics, and context affect reporting differences.
The timeframe for reporting on risk behaviors to minimize recall bias has also been studied. For self-reports of sexual risk behaviors, a three-month window has been found to be superior to longer timeframes (Weinhardt, Forsyth, Carey, Jaworski, & Durant, 1998). Another approach to enhancing the accuracy of self-reports is the timeline follow-back approach, in which the interviewer asks about behaviors on each day during a time period (Hjorthoj, Hjorthoj, & Nordentoft, 2012). The advantage of this approach is that participants do not aggregate the behaviors. The drawback is that it is time consuming, in both the duration and frequency of interviews, and these data are often aggregated in subsequent analyses anyway. A more recent approach is through mHealth technology, which allows for the capture of real-time self-reports of behaviors (Catalani, Philbrick, Fraser, Mechael, & Israelski, 2013; Labrique, Kirk, Westergaard, & Merritt, 2013). Although mHealth is a viable method to collect real-time data, it is likely that many epidemiological and intervention studies will continue to rely on ACASI and face-to-face surveys. Moreover, the use of mHealth devices for data collection does not obviate social desirability response bias or the other cognitive and motivational biases that influence self-reports.
In evaluating the outcomes of an intervention to reduce HIV risk behaviors and enhance medical care among people who inject drugs (PWID) in Vietnam, we observed an unexpected and significant drop in self-reported substance use and HIV risk behaviors in all study conditions. Consequently, we conducted a set of qualitative interviews to assess potential causes of these changes. Based on our findings, we also developed a set of recommendations on how to enhance the accuracy of self-reported HIV risk behaviors.
Methods
Participants who had previously completed a face-to-face survey on risk behaviors as part of a community-based stigma and secondary HIV prevention study, which has been reported elsewhere (Gol et al., in press), were eligible to participate, as the consent forms in the parent study allowed re-contact for future studies. All interviews were conducted in 2014, after the intervention component of the study had ended in 2013. Data collection for the intervention study included several steps to reduce response biases, including use of a 3-month timeframe, separating interview and intervention staff, assuring confidentiality and a private location for interviews, and attempting to normalize risk behaviors by prefacing some questions with statements such as, “Some people share syringes when they don’t have one available; when was the last time you shared a syringe?”
Individuals who agreed to be interviewed and recorded received compensation of 50,000 VND (≈ US$3). The study protocols were approved by the Institutional Review Boards at the University of North Carolina and the Thai Nguyen Center for Preventive Medicine. Participants included 10 men. The interview guide focused on whether the respondents thought that other participants had provided truthful answers, whether they themselves had been truthful, and why they or other participants may have fabricated answers. Interviews were conducted in Vietnamese by three highly trained ethnographers who had over 10 years of experience and had previously worked with this population. Interviews took place in private patient rooms at the study site and lasted about 60 minutes. A semi-structured interview guide was used, and the interviews were audio recorded, transcribed, and translated into English. Transcripts were imported into Atlas.ti qualitative analysis software for coding and analysis. Two coders developed thematic codes. However, as the analyses focused on events and examples, which could be infrequent, rather than on themes, we did not use consensus coding.
Results
The participants were male, with a mean age of 38, and living with HIV. In the qualitative interviews, the vast majority of participants reported that they had been truthful in their responses to the quantitative risk behavior surveys. There was an overwhelmingly positive attitude toward the project. Participants felt that they had been treated with respect by the staff and consequently were honest. The attitudes about the project and participants’ experiences can be summed up by one participant who stated: “I talk to you because you are friendly and open like my sister. I respect the project, and I respect the invitation, so I tell the truth, so I think that I should come here.” Another participant said: “I felt very comfortable. I was not afraid at all. For example, if I answered the police, then I may have some fear and it is different from the interviewer, with this I am responding very truly.”
As this study focused on potentially inaccurate responses, the interviewers asked participants in detail about whether they gave accurate responses and whether they thought other participants provided truthful information. Although most participants said that they themselves had been truthful, some suggested that others in the study may not have always been truthful. A few participants gave estimates of the percentage of participants who would not tell the truth. When the interviewer asked, “For example, if there are 10 persons, how many of them will tell the truth and how many people will lie?” one participant responded, “Probably about 70% [will tell the truth],” and another said 8 out of 10 participants would tell the truth.
The interviewers then asked about the reasons other participants may not tell the truth. Issues that emerged from these discussions were shame at not protecting their family once infected, desire to please the counselors, ensuring that they were eligible for the study, and, to a lesser degree, desire to shorten the interview.
Rapport with the study staff was seen as a factor that influenced reporting of risk behaviors. As observed in this exchange, greater rapport may lead to more truthful responses:
Q: Now I would like to ask about your case. For example, when you start to join the study, I ask you how many sexual partners you have, and would you tell the truth when we talk at the first time?
A: It is not sure if it is the truth….
Q: And after a certain period of participating in the study and you met me for several times, will you tell the truth or you tell much more?
A: I think that I will tell the truth.
Rapport can also impede accurate reporting of risk behavior. For example, when asked why other participants may not tell the truth one respondent stated, “They already knew that they were infected, and they still did that [engaged in risk behaviors] then they might transmit to others, for example like that. They are afraid that it would be thought so, afraid that the counselors would think so.”
In an admission of not telling the truth, one participant stated that he did not want to report behavior that went against the advice of the study staff: “You [other staff members] have spent hours talking to me, so when you asked me whether I used condom when I had sex with my wife the night before, I would answer ‘yes’ even though I did not use it. To tell the truth, I have been advised five or seven times already, that is not to mention the newspapers and TV. It is cruel to answer ‘no’.” These findings suggest that rapport may lead to both under- and over-reporting of risk.
In addition to rapport potentially leading to either increased or decreased reports of risk behaviors, perceptions of study enrollment criteria may also influence responses. One respondent observed, “One guy came and replied that he didn’t use drugs, and then the project staff didn’t let him have a test and allowance.” Failure to meet eligibility criteria and enroll in the study may lead participants to rescreen and alter their responses, or to tell others the reasons they perceived they were ineligible, which may influence others’ self-report data.
Although participants were repeatedly assured of confidentiality, concerns may linger as seen in this participant’s statement: “They [may be] afraid that for some reasons if by chance the center informed their family, wife, children that they used more or like this, something like that.”
In responding to questions about how the length of the survey may influence responses, a few participants indicated that the duration of the survey may affect answers. When asked whether the length of the interview might tire participants, one participant stated: “Yes, definitely yes. As I think the topic is all about this then people will be bored, tired and do not want to reply.” In addition to fatigue and boredom, memory may influence reporting, as seen in this exchange:
Q: Did you tell exactly or almost exactly?
A: About ninety percent of accuracy.
Q: Why was there ten percent of inaccuracy?
A: Perhaps I forgot. For example, I was asked how many times I used drug every month, and my answer was ten times. In fact, it could be nine times or eleven times.
There is also the possibility that interacting with other participants in the study may have had an impact on self-reports. One respondent stated that a few study participants discussed the content of the surveys with others in the study. He reported that they asked about the survey questions but did not discuss how to answer them. We do not know how these discussions may have influenced responses.
Discussion
In this study we implemented several methods to enhance the accuracy of self-reports. In general, study participants reported that they had excellent rapport with the interviewers and were honest in their responses; however, a few participants stated that they did not always answer questions truthfully. They also reported that a minority of participants may not have answered the survey questions accurately. Desire to please the study staff was perceived as a reason for inaccurate reports. Wanting to enroll in the study, the length of the survey, and memory were also reported as factors that may influence self-reports.
Although we were able to document some inaccurate self-reports, it is difficult to determine the impact that these inaccurate reports may have on intervention outcome analyses and the identification of determinants of risk behaviors. However, as outcome analyses are based on inferential statistical tests that assess the probability of the results differing from chance, response biases, even at low levels, can lead to both Type I and Type II errors.
To address response bias, it may be helpful for investigators to shift from the perspective of assuming that self-report data are valid unless proven otherwise to asking why participants should provide accurate information (when it is cognitively feasible to do so). These behaviors are personal and potentially embarrassing, and reporting them may indicate failure to follow instructions from health care professionals and diminish participants’ self-concept.
There are several recommendations for improving self-reports of HIV risk behaviors. Studies should not provide incentives to underreport risk behaviors. If a survey takes 15 minutes for someone who reports no drug or sexual risk behaviors but one hour for individuals who report risk behaviors, there may be a strong incentive to under-report risk. This incentive can be reduced by making the survey the same length for all participants. For those who are not currently engaged in a risk behavior, interviewers can ask about the last 3 months in which they were engaging in the behavior. Providing participants with information about the length of the survey, and informing them that the survey will be about the same length regardless of their level of risk behaviors, may reduce the incentive for under-reporting of risk.
There may be an incentive to provide inaccurate data at screening, often over-reporting of risk. Many studies screen based on highly selective self-report criteria. Once individuals know the study eligibility criteria, there may be an incentive to report behaviors that fit the criteria. There are two potential methods of reducing this incentive. The first is to mask the criteria by adding numerous filler questions not related to the actual criteria, such as left- or right-handedness, height, weight, or food preferences. Moreover, these questions can change over time to reduce the chances of deducing the study criteria. Another approach is to enroll a certain percentage of individuals who do not meet all of the study criteria. In randomized clinical trials (RCTs), these data can be used as a second comparison group. This approach has an ethical dimension; data should not be collected unless they are scientifically useful. But such data can also protect individuals in a study, since the act of enrolling is then less likely to divulge specific information. For example, enrolling a percentage of HIV-uninfected individuals in a risk reduction study for those who are infected may reduce the probability that the study will be perceived as only for the HIV infected, and hence that enrollment leads to de facto disclosure. For RCTs of behavioral interventions, investigators may also want to assess whether they can broaden their enrollment criteria. Although there are statistical power considerations in assessing risk reduction when a sample has low levels of risk, homogeneous high-risk groups are unlikely to be found in community samples, and many NGOs and health departments do not have the resources to provide interventions to narrowly defined high-risk groups.
One theme that emerged from the data was that the desire to please, or at least not displease, the staff may influence self-reports of risk behaviors. Most often when studies involve counseling on risk reduction, participants are informed that the risk behaviors are undesirable and bad for their health. On follow-up surveys, participants are asked again about these behaviors. If the counseling sessions are successful, they will enhance the social desirability bias associated with risk behaviors. Consequently, social desirability response bias may be stronger at follow-up assessments. A common approach to address this issue is to separate the intervention staff from the assessment staff. However, the degree to which this approach ameliorates the increase in social desirability biases associated with voluntary counseling and testing (VCT) and other intervention activities is not known. One potential approach to reduce this bias, which we have utilized, is to normalize the risk behaviors by prefacing the question to acknowledge that it is understandable why people may engage in the behavior. An example of a preface for injection drug use risk is: “There are many reasons people may share injection equipment, such as fear of police, desire to evenly split drugs, or being in a rush.” An example of a sexual risk preface is: “We understand that it is difficult for people to always use condoms.”
These data suggest that investigators who research sensitive topics may benefit from using qualitative methods to catalog the issues related to the accuracy of self-reporting. Based on qualitative research, a preface to a sexual risk question could state that “sometimes in the heat of the moment, after drinking, or with a close partner” it is difficult to use condoms. Of course, such prefaces could lead to over-reporting of risk. Another method to increase reporting of risk is to alter the response categories (Cappelleri, Jason Lundy, & Hays, 2014). If the categories are skewed toward the high-frequency end, individuals may think that the behavior is more normative and be more likely to report a greater frequency of the behavior. For example, the highest response category for sharing syringes could be changed from “several times a week” to “several times a day.” This approach, however, can result in over-reporting. Even a subtle change in wording, such as from “how often do you use a condom” to “how seldom do you use a condom,” is likely to alter responses.
Rapport requires a balance. Too little rapport may lead to individuals not wanting to divulge personal information, whereas too much rapport may lead to hesitancy in reporting continued risk behaviors after VCT or other risk reduction interventions. To promote rapport, the organizational culture of a project should foster treating participants with respect and dignity and sanction staff who do not.
It is well documented that a survey should not begin with sensitive questions, but this approach may not be sufficient in settings where respondents are not used to surveys or to talking about risk behaviors (Rockwood, Sangster, & Dillman, 1997). Open-ended questions can help establish rapport (Dillman, 1991; Mitchell et al., 2007; Vallano & Compo, 2011). Although open-ended questions can be time consuming, focused ones require little time. Innocuous brief open-ended questions include, “What is your favorite TV show or food?”
To enhance motivation to provide accurate information, it can be useful to explicitly state why participants should provide accurate information. For example, an introduction to a survey may state, “We realize that people come here for a variety of reasons. Some people may benefit from the program, some enjoy talking to the staff, and some come here for the money. Regardless of your reasons, we hope that you will help us to develop programs that may improve the health of people in your community. To do this we need you to answer the questions honestly. We would rather you not answer a question than not tell us the truth. We also realize that some of the questions are very personal. The only reason we are asking them is so we can better understand what is going on with you and others in the community and to develop better programs.”
To enhance the accuracy of self-report data, it can also be valuable to obtain feedback during the piloting of the surveys about the social acceptability of items and potential motivations to provide inaccurate data. Potential questions to assess this include, “How do you think people you know (or friends) would feel about answering questions on this survey?” and “Which questions do you think people may not answer truthfully?” One can also ask about ways to improve the survey to obtain more accurate information. During the study, it can be helpful to ask participants what other people in the community are saying about the study and the risk behavior survey.
To improve self-report data, it is useful to have methods to verify the accuracy of the data. It is possible that altering the format and wording of survey questions can lead to a lower level of false negatives but a higher level of false positives. By asking risk behavior questions in several formats, it is possible to assess which questions are more strongly associated with outcomes such as incident HIV and STIs. Another approach is having risk partners report on the same behaviors. mHealth and Ecological Momentary Assessment (EMA) devices are promising technologies that may help to improve self-reports due to greater privacy and the potential to report on behaviors more frequently.
Several study limitations should be noted. This was a convenience sample, and we do not know how the findings among substance users in one province in Vietnam may generalize to other groups and geographic areas. It is also likely that many biases that influence self-reports are not conscious and hence are not reflected in the findings of the current study. We also do not know which biases have the greatest impact on self-reports or the magnitude of the impact of the factors described in this study.
Although one can use a range of techniques to enhance rapport and reduce incentives for consciously inaccurate reporting of risk behaviors, it is incumbent on investigators to develop methods to systematically improve self-report measures. This requires either separate validity and reliability studies or embedding such studies in ongoing research projects; the latter is likely more cost-effective. Funding agencies may want to consider encouraging and supplementing such investigations, especially in large research networks that could institute systematic approaches to enhance self-reports. Large amounts of resources and time are spent improving and validating laboratory assays, yet little is spent on enhancing self-report measures. If 5–10% of survey items were devoted to improving validity and reliability, it is likely that our measures would be much improved. Investigators also need to foster a behavioral and social science HIV prevention and care research culture that values the development and dissemination of methods and approaches to improve self-report data.
Acknowledgments
This project was supported with funding from the National Institutes of Health (DA022962; AI094189; DA032217).
References
- Anastario M, Chun H, Soto E, Montano S. A trial of questionnaire administration modalities for measures of sexual risk behaviour in the uniformed services of Peru. International Journal of STD & AIDS. 2013;24(7):573–577. doi:10.1177/0956462413476273
- Cappelleri JC, Jason Lundy J, Hays RD. Overview of classical test theory and item response theory for the quantitative assessment of items in developing patient-reported outcomes measures. Clinical Therapeutics. 2014;36(5):648–662. doi:10.1016/j.clinthera.2014.04.006
- Catalani C, Philbrick W, Fraser H, Mechael P, Israelski DM. mHealth for HIV treatment & prevention: A systematic review of the literature. The Open AIDS Journal. 2013;7:17–41. doi:10.2174/1874613620130812003
- Dillman D. The design and administration of mail surveys. Annual Review of Sociology. 1991;17:225–249.
- Gol VF, Frangakis C, Minh NL, Latkin C, Hi TV, Mo TT, … Quan VM. Efficacy of a multi-level intervention to reduce injecting and sexual risk behaviors among HIV-infected people who inject drugs in Vietnam: A four-arm randomized controlled trial. PLoS One. In press. doi:10.1371/journal.pone.0125909
- Gorbach PM, Mensch BS, Husnik M, Coly A, Masse B, Makanani B, … Forsyth A. Effect of computer-assisted interviewing on self-reported sexual behavior data in a microbicide clinical trial. AIDS and Behavior. 2013;17(2):790–800. doi:10.1007/s10461-012-0302-2
- Healthy Living Project Team. Effects of a behavioral intervention to reduce risk of transmission among people living with HIV: The Healthy Living Project randomized controlled study. Journal of Acquired Immune Deficiency Syndromes. 2007;44(2):213–221. doi:10.1097/QAI.0b013e31802c0cae
- Hjorthoj CR, Hjorthoj AR, Nordentoft M. Validity of timeline follow-back for self-reported use of cannabis and other illicit substances: Systematic review and meta-analysis. Addictive Behaviors. 2012;37(3):225–233. doi:10.1016/j.addbeh.2011.11.025
- Islam MM, Topp L, Conigrave KM, van Beek I, Maher L, White A, … Day CA. The reliability of sensitive information provided by injecting drug users in a clinical setting: Clinician-administered versus audio computer-assisted self-interviewing (ACASI). AIDS Care. 2012;24(12):1496–1503. doi:10.1080/09540121.2012.663886
- Johnson WD, Diaz RM, Flanders WD, Goodman M, Hill AN, Holtgrave D, … McClellan WM. Behavioral interventions to reduce risk for sexual transmission of HIV among men who have sex with men. Cochrane Database of Systematic Reviews. 2008;(3):CD001230. doi:10.1002/14651858.CD001230.pub2
- Koblin B, Chesney M, Coates T, EXPLORE Study Team. Effects of a behavioural intervention to reduce acquisition of HIV infection among men who have sex with men: The EXPLORE randomised controlled study. Lancet. 2004;364(9428):41–50. doi:10.1016/S0140-6736(04)16588-4
- Labrique AB, Kirk GD, Westergaard RP, Merritt MW. Ethical issues in mHealth research involving persons living with HIV/AIDS and substance abuse. AIDS Research and Treatment. 2013;2013:189645. doi:10.1155/2013/189645
- Langhaug LF, Cheung YB, Pascoe SJ, Chirawu P, Woelk G, Hayes RJ, Cowan FM. How you ask really matters: Randomised comparison of four sexual behaviour questionnaire delivery modes in Zimbabwean youth. Sexually Transmitted Infections. 2011;87(2):165–173. doi:10.1136/sti.2009.037374
- Latkin CA, Vlahov D, Anthony JC. Socially desirable responding and self-reported HIV infection risk behaviors among intravenous drug users. Addiction. 1993;88(4):517–526. doi:10.1111/j.1360-0443.1993.tb02058.x
- Minnis AM, Steiner MJ, Gallo MF, Warner L, Hobbs MM, van der Straten A, … Padian NS. Biomarker validation of reports of recent sexual activity: Results of a randomized controlled study in Zimbabwe. American Journal of Epidemiology. 2009;170(7):918–924. doi:10.1093/aje/kwp219
- Mitchell K, Wellings K, Elam G, Erens B, Fenton K, Johnson A. How can we facilitate reliable reporting in surveys of sexual behaviour? Evidence from qualitative research. Culture, Health & Sexuality. 2007;9(5):519–531. doi:10.1080/13691050701432561
- NIMH Collaborative HIV/STD Prevention Trial Group. The feasibility of audio computer-assisted self-interviewing in international settings. AIDS. 2007;21(Suppl 2):S49–58. doi:10.1097/01.aids.0000266457.11020.f0
- NIMH Collaborative HIV/STD Prevention Trial Group. Results of the NIMH collaborative HIV/sexually transmitted disease prevention trial of a community popular opinion leader intervention. Journal of Acquired Immune Deficiency Syndromes. 2010;54(2):204–214. doi:10.1097/QAI.0b013e3181d61def
- Pauls CA, Stemmler G. Substance and bias in social desirability responding. Personality & Individual Differences. 2003;35(2):263.
- Phillips AE, Gomez GB, Boily MC, Garnett GP. A systematic review and meta-analysis of quantitative interviewing tools to investigate self-reported HIV and STI associated behaviours in low- and middle-income countries. International Journal of Epidemiology. 2010;39(6):1541–1555. doi:10.1093/ije/dyq114
- Rockwood T, Sangster R, Dillman D. The effect of response categories on questionnaire answers. Sociological Methods & Research. 1997;26(1):118–140.
- Vallano J, Compo N. A comfortable witness is a good witness: Rapport-building and susceptibility to misinformation in an investigative mock-crime interview. Applied Cognitive Psychology. 2011;25(6):960–970.
- Weinhardt LS, Forsyth AD, Carey MP, Jaworski BC, Durant LE. Reliability and validity of self-report measures of HIV-related sexual behavior: Progress since 1990 and recommendations for research and practice. Archives of Sexual Behavior. 1998;27(2):155–180. doi:10.1023/a:1018682530519
- Yeganeh N, Dillavou C, Simon M, Gorbach P, Santos B, Fonseca R, … Nielsen-Saines K. Audio computer-assisted survey instrument versus face-to-face interviews: Optimal method for detecting high-risk behaviour in pregnant women and their sexual partners in the south of Brazil. International Journal of STD & AIDS. 2013;24(4):279–285. doi:10.1177/0956462412472814