Abstract
The Centers for Disease Control and Prevention (CDC) recommend routine HIV screening in clinical settings, including emergency departments (EDs), because earlier diagnosis enables treatment before symptoms develop and delivery of interventions to reduce continued transmission. However, patients frequently decline testing.
This study delivered a brief computer-based video intervention, approximately 16 minutes in total, to 160 patients who had declined HIV tests in a high volume, urban ED. One third of participants (n=53) accepted an HIV test post-intervention. Interviews with a subset of participants (n=40) show that before the video, many were unaware HIV testing could be conducted without drawing blood, or that results could be delivered in 20 minutes.
Keywords: HIV, emergency department, video, technology, decline
Since 2006, the Centers for Disease Control and Prevention (CDC) have called for routine HIV screening of patients in all types of US clinical settings, including non-traditional venues such as emergency departments (EDs). (1) EDs frequently serve populations that have little or no other access to care. Persons with undiagnosed HIV may be identified within EDs and referred for treatment long before they experience symptoms. (1)
New York State now requires that hospital (including ED) and primary care providers offer HIV testing to all patients 13 to 64 years of age, with limited exceptions. (2) The ED where data were collected for the current study offers opt-in HIV testing to patients in accordance with New York State law. However, far more patients decline than accept. In 2012, the year study data were collected, 88.5 percent (n=137,430) of patients who were offered an HIV test at the ED study site declined. (3)
Patients who decline testing, in New York and beyond, frequently cite low perceived self-risk. Christopoulos et al. interviewed patients who declined ED testing and found that some who said they declined because they were not at risk also said they did not want to know their HIV status and preferred uncertainty to the consequences of being diagnosed with HIV. (4) Moreover, Czarnogorski et al. compared the HIV prevalence in discarded, de-identified blood samples from ED patients who declined testing and were not known to be HIV positive with the prevalence among ED patients who accepted testing during the same period. Although the most common reason for not accepting a test was patients’ reported belief that they were not at risk, the prevalence of HIV infection among those who declined testing was 2.74 times higher than among patients who accepted HIV testing. (5) Thus, interventions for people who elect not to test appear urgently needed. (5)
Uses of technology to increase test rates
In recent years, EDs have examined how computer-based video could potentially increase HIV test rates. Merchant et al. (6) established video as an effective method to deliver pre-test information about HIV testing to patients who accepted HIV testing offered by hospital staff. In a 2009 study of ED patients who accepted HIV testing offered by hospital staff, Calderon et al. (7) found greater post-intervention knowledge increases among patients who watched an educational video compared to patients who met with a counselor. In a 2011 study, Calderon et al. found higher post-intervention knowledge scores and test uptake among adolescent ED patients who watched a video compared to adolescent ED patients who received in-person HIV counseling. (8) Aronson and Bania (9) documented significant increases in HIV test uptake among ED patients who were offered a test after watching a short video on a computer, compared to ED patients who were offered a test by hospital staff at triage. However, none of the above studies was designed to specifically target patients who had already declined HIV tests. In EDs, as in many other settings, patients who do not accept the first offer of an HIV test may not be presented with another chance to learn their status. Therefore, we hypothesized that an intervention encouraging reluctant patients to test for HIV could potentially reach high-risk patients who would otherwise be highly unlikely to test.
What types of video designs most effectively encourage reluctant patients to test for HIV?
Fundamental questions also remain as to how videos designed to promote HIV testing can be made most effective. The Information-Motivation-Behavioral Skills (IMB) model (10) posits that information alone is not enough to change behavior; instead, content must elicit adequate motivation to act. (10) However, fundamental design questions emerge when attempting to develop video-based interventions based on IMB. For example, what should the people who appear onscreen look like, and what type of emotional response should a video message aim to elicit in viewers?
The Aronson and Bania (9) study randomized patients into 4 groups and showed each group a different video. Patients saw either a White healthcare provider speaking with a White patient about the importance of HIV testing, or a Black healthcare provider speaking with a Black patient. Additionally, the study randomized participants to see healthcare providers speaking in positive terms (describing the benefits of HIV testing) or in negative terms (emphasizing the dangers of not testing). Differences in treatment group efficacy emerged by participant race. The study found significantly greater increases in knowledge and intent to use a condom during vaginal sex among Black/African American participants who watched videos depicting White people, compared to Black/African American participants randomly assigned to see Black people onscreen. (9) The study also found White participants were significantly more likely to accept an HIV test after viewing videos with positive emotional content, compared to White participants who watched negative emotional content. (9)
The findings above lead to additional questions about how videos can be optimized for greatest effectiveness in high-volume clinical settings. For example, could the intervention’s effectiveness be increased by keeping the onscreen healthcare providers’ race constant, and randomly assigning participants to see either male or female providers and patients onscreen?
What intervention components other than video may encourage testing?
Because Aronson and Bania (9) did not conduct qualitative interviews with participants who completed the intervention, much remains unknown about why ED patients responded to the videos as they did. For example, did the structure of the computer-based intervention itself, or the knowledge participants gained from watching the video, play a greater role in decisions about testing than the characteristics of the people who appeared onscreen? We hypothesized that mixed-methods examinations of what intervention elements did, or did not, contribute to participants’ decisions to test could inform more efficacious future efforts to promote testing. Building upon the studies described above, which used technology to help ED staff offer HIV testing to more patients or to offer information to patients who had already accepted a test, the current study sought to examine whether a computer-based intervention could increase HIV test uptake among patients who initially declined tests offered by ED staff at triage, and if so, how participant feedback could inform more efficacious interventions in the future. Specifically, the objectives of our preliminary trial were to examine:
1. Whether patients who declined HIV testing in the main treatment areas of a high volume, urban ED would be willing to receive a computer-based video intervention designed to increase HIV testing. If yes, would they complete the intervention, and would they accept an HIV test offered by computer at the end?
2. Which intervention components encouraged participants to test? This includes quantitative and qualitative examinations of potential relationships between knowledge change, video content, and participants’ decisions to test, as well as comparative examinations of the multiple video designs to determine whether participants would be more likely to accept an HIV test after watching a particular video configuration.
Method and Sample
From June through August 2012, our research team evaluated a computer-based video intervention for patients in the main treatment areas of a high-volume, New York City ED that receives approximately 170,000 patient visits per year. The intervention randomly assigned participants into 4 groups. Each group was shown a different video to explore how widely accepted models of behavior change (e.g., IMB) can be applied to develop more effective technology-based interventions that not only present information but also provide a means to help individuals change behavior. Accordingly, the research team developed a set of videos for the current pilot study designed to address barriers to testing in the ED, including baseline knowledge deficits about how tests work (information) and low perceived self-risk (motivation to test). To provide participants with the means to act (behavioral skills), computers offered an HIV test at the end of the intervention. IMB does not specifically address whether video content should depict people who are gender concordant with intervention recipients, nor does it specifically recommend emphasizing the benefits of testing versus the dangers of not testing. We therefore created separate videos with males or females onscreen, discussing the importance of HIV testing in positive or negative terms, to examine which would result in greater HIV test uptake.
Participants
Research assistants (RAs) recruited a convenience sample of 160 adult ED patients. Participants were 65% female. Approximately 45 percent were Black or African American (16.2% Black Latino, n=26, and 28.8% Black non-Latino, n=46). Approximately 39 percent were White (15.6% White Latino, n=25, and 23.8% White non-Latino, n=38).
Procedure
Only patients whose medical records indicated they declined a test at triage were approached to participate in the study. Participants who were intoxicated, unconscious, or otherwise unable to provide informed consent were excluded from participation, as were those in the most emergent need of care (e.g., gunshot victims). Patients were eligible if they were older than 17 years, not prisoners, and not known to be HIV positive. RAs were instructed to ask all eligible patients in the ED if they would like to participate in the pilot study.
RAs approached eligible patients individually, in hospital beds or exam rooms, and asked if they would like to know more about the study. Those who said they were interested were given written informed consent documents describing what they would be asked to do for the study, and emphasizing that participation was voluntary. Consent forms also explained that some participants who completed the intervention would be asked if they would agree to be interviewed about their experience with the computer-based video, and that participants could complete the intervention without being interviewed, depending on their preference. Participants did not receive any incentives to complete the computer-based intervention. Participants who were interviewed received $25 gift cards. Patients who provided informed consent were handed a set of headphones along with a handheld computer presenting intervention software custom-designed for the current pilot study. Patients completed the computer-based intervention in the hospital beds or exam rooms where they were already receiving treatment.
Intervention Design
The intervention software delivered pre- and post-intervention data collection instruments and very short educational videos, approximately 2 minutes and 40 seconds long. Once participants completed the pre-intervention items, the computers automatically displayed the video. The first author developed an algorithm enabling the software to dynamically randomize an even number of participants into each treatment group, and to evenly distribute male and female participants across treatment groups. Each group was shown a different video depicting either a male doctor speaking with a male patient, or a female doctor speaking with a female patient. The videos also depicted the doctors describing the dangers of not testing (negative version) or the benefits of testing (positive version). In all versions, the doctors appear in an ED treatment room and inform patients that the only way to know their HIV status is to test. All videos also depict the doctor removing a rapid oral HIV test paddle from a sealed packet, administering the oral test, placing the paddle in the test solution, then telling the patient results would be available in 20 minutes. Both doctors who appear on camera are senior attending physicians in the ED where study data were collected, and are demographically concordant in terms of age and race. Both doctors are White, as are both onscreen patients.
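The randomization code itself is not published with the paper. As an illustration only, the Python sketch below shows one way such balanced, gender-stratified block assignment could be implemented; the condition labels are placeholders, not the study’s actual condition names.

```python
import random
from collections import defaultdict

# Placeholder labels for the four video conditions described above.
CONDITIONS = ["male_positive", "male_negative", "female_positive", "female_negative"]

class BalancedRandomizer:
    """Block randomization: within each participant-gender stratum, assign the
    four conditions in shuffled blocks, so group sizes never differ by more
    than one and male/female participants are spread evenly across groups."""

    def __init__(self, seed=None):
        self._rng = random.Random(seed)
        self._blocks = defaultdict(list)  # one partially used block per stratum

    def assign(self, participant_gender: str) -> str:
        block = self._blocks[participant_gender]
        if not block:  # refill with a freshly shuffled block of all four conditions
            block[:] = self._rng.sample(CONDITIONS, k=len(CONDITIONS))
        return block.pop()

randomizer = BalancedRandomizer(seed=2012)
print(randomizer.assign("female"))  # e.g., "male_negative"
```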
To help participants focus their attention on the intervention content instead of on the delivery technology, the videos and the intervention were developed in accordance with empirically derived theories of multimedia learning (please see Aronson et al., 2012 (11) for additional detail). For example, to avoid distracting participants and splitting their attention away from essential content, the intervention did not employ any music or colorful screen backgrounds. All onscreen text appeared in large, easy-to-read black type against a white screen. The video segments used close-up shots to direct participants’ attention to important details (e.g., the demonstration of an HIV test).
The research team and the physicians who appear onscreen in the intervention developed the video dialog through a collaborative process. The video content was designed to specifically answer each item in the pre- and post-intervention knowledge tests. Thus, the intervention was designed so that participants would answer a set of pre-test knowledge questions, watch a video addressing each item on the pre-test, and then answer the questions a second time during a post-test immediately following the video. In educational terms, this is called an advance organizer: learners are given suggestions of what to look for, or what to think about, before they are presented with new information. In this case, the knowledge pre-test was intended not only to measure what participants knew at the start of the intervention, but to focus their attention on key content before watching the video. For more on this strategy, please see Gagné et al., 2005.
When participants responded to all post-test items, the computers asked participants if they would like an HIV test. Once participants had responded to the onscreen offer of an HIV test, the computer displayed a set of acceptability measures (described in the next section). When participants completed the acceptability items, the RAs collected the computers and headphones. All patient responses were sent via wireless connection to an off-site, password-protected server, and stored as data for later analysis using SPSS. No data were stored on the individual computers. Additionally, all data collected by the intervention were anonymous: no patient names, medical record numbers, or other identifying data were stored by the current study. If participants accepted the onscreen offer of an HIV test, RAs verbally informed the patient’s physician. An ED staff member then came to the patient and administered a rapid oral HIV test. All testing was performed by hospital staff; the current study did not record any test results.
RAs asked a subsample of participants who completed the intervention (25% of the total sample, n=40) if they would agree to be interviewed about their experience. The research team developed a guide to help RAs ensure a diverse qualitative sample of patients in terms of age, race/ethnicity, and gender. Additionally, software developed for the study tracked interview respondents by these demographics. Participants who agreed to be interviewed provided additional written consent and were asked a series of semi-structured interview questions, such as “why did you decide to accept an HIV test after watching the video?” All interviews were recorded digitally, and conducted immediately after patients completed the computer-based intervention. After each interview, the RAs used a software interface, custom-designed specifically for this study, to upload the audio files to the same password-protected database as the quantitative data. Each participant’s data were identified by a unique, 13 digit anonymous ID generated by the software.
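The ID-generation code is not described beyond its output format. As a purely hypothetical illustration, a unique 13-digit anonymous identifier of this kind could be produced as follows:

```python
import secrets

def new_anonymous_id(issued_ids: set) -> str:
    """Generate a random 13-digit ID with no link to patient identity,
    retrying in the (unlikely) event of a collision with an issued ID."""
    while True:
        candidate = "".join(secrets.choice("0123456789") for _ in range(13))
        if candidate not in issued_ids:
            issued_ids.add(candidate)
            return candidate
```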
Measures
Would patients agree to participate, and if so, to test?
The pilot study recorded the number of patients who agreed to participate, and how many completed the intervention. The study also recorded how long participants took to respond to the pre-test items, watch the video segments, and respond to the post-test items, including the offer of an HIV test. HIV testing at the end of the intervention was measured by response to the question “Would you like an HIV test now?” displayed on the computer screen. Possible responses were “yes” or “no.”
Efficacy of components: knowledge and condom intent
Knowledge of HIV prevention and testing was measured using a shortened version of the 18-item Brief HIV Knowledge Questionnaire. (12) The measure was developed to provide a brief and reliable method to assess HIV knowledge among high-risk populations, including low income men and women, and has been tested extensively. (12) The current study used 6 of the 18 knowledge-test items. A subset of items was selected to ensure patients could complete the full intervention, including the pre- and post-intervention knowledge tests, without disrupting staff workflows. Additionally, because patients were not offered an incentive to participate in the current research, the study materials were designed with fewer pre- and post-intervention data collection items, shortening the time needed to complete the intervention.
Efficacy of components: qualitative interviews
RAs used an interview guide developed for the study (described above) that contained a series of open-ended questions asking participants what they thought about the design and content of the videos, and what, if anything, in the video influenced their decision to accept or decline an HIV test at the end of the intervention.
Acceptability
Acceptance of the intervention was measured using an on-screen instrument employing visual analog scales in which patients clicked on a series of lines to rate how useful the intervention was, how much new information the intervention provided, how easy it was to use, and how much they understood the content. Each acceptance item contained a single question above a horizontal line that was labeled with a negative response at the start of the line, and a positive response at the end. For example, the question “How useful was the program you just completed?” appeared above a line labeled with “not useful” at the beginning and “very useful” at the end. Versions of this assessment were successfully used in prior evaluations of computer-based behavioral health interventions (e.g. Marsch et al., 2007).
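As an illustration of how a visual analog scale of this kind can be scored on the 0 to 100 range reported in the Results, the sketch below maps a click position to a score; the coordinate handling is an assumption, not the study’s code.

```python
def vas_score(click_x: float, line_start_x: float, line_end_x: float) -> float:
    """Convert the x-coordinate of a click on the scale line to a 0-100 score,
    where 0 is the negative anchor (e.g., "not useful") and 100 the positive
    anchor (e.g., "very useful")."""
    fraction = (click_x - line_start_x) / (line_end_x - line_start_x)
    return 100.0 * min(max(fraction, 0.0), 1.0)  # clamp clicks just off the line

# e.g., a click three quarters of the way along a line from x=100 to x=500:
print(vas_score(400, 100, 500))  # 75.0
```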
Analysis
Descriptive statistics were used to describe the sample and HIV test uptake. Paired t-tests were conducted to examine mean differences in pre- and post-intervention HIV knowledge. Chi-square analyses were also conducted to examine differences in HIV testing among different intervention groups following the intervention. A priori power analyses were conducted using G*Power (version 3.1) to estimate the expected power and effect size for the key proposed analyses. For the paired t-tests examining mean differences in pre- and post-intervention knowledge with the proposed total sample size, we expected to have a power of 0.95 at an alpha level of 0.05 and to detect a small effect size (Cohen’s d) of 0.29. For the examination of test uptake by video condition using chi-square tests, we expected to have a power of 0.91 at an alpha level of 0.05 and to detect a medium effect size of 0.30.
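For reference, these G*Power figures can be approximately reproduced from the noncentral t and chi-square distributions; a sketch in Python using SciPy, with the stated n = 160, alpha = .05, d = 0.29, and w = 0.30:

```python
from scipy import stats

n, alpha = 160, 0.05

# Power of a two-sided paired t-test to detect Cohen's d = 0.29.
d, df = 0.29, n - 1
crit = stats.t.ppf(1 - alpha / 2, df)            # critical value under H0
nc = d * n ** 0.5                                # noncentrality parameter
power_t = stats.nct.sf(crit, df, nc) + stats.nct.cdf(-crit, df, nc)
print(f"paired t-test power: {power_t:.2f}")     # ~0.95

# Power of a chi-square test (4 video groups, df = 3) to detect w = 0.30.
w, df_chi = 0.30, 3
crit_chi = stats.chi2.ppf(1 - alpha, df_chi)
power_chi = stats.ncx2.sf(crit_chi, df_chi, n * w ** 2)
print(f"chi-square power: {power_chi:.2f}")      # ~0.91
```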
All qualitative interviews were transcribed, then entered into QSR NVivo software for coding and analysis. A broad a priori thematic coding scheme was created following the main points of the interview guide (e.g., “reasons for accepting a test post-intervention”). Two experienced qualitative researchers conducted two rounds of independent coding of the first 10 transcripts; after these preliminary rounds yielded 90% or greater agreement, the coders divided the remaining 30 transcripts and coded them independently. To check for continued inter-rater reliability, the qualitative researchers jointly coded transcripts 20, 30, and 40. A check of inter-coder reliability following completion of all coding showed that the average agreement between coders was 97.6 percent. Narratives from the “reasons for accepting a test post-intervention” report were analyzed for themes emerging across transcripts; for example, not having previously known that results could be received in only 20 minutes, or that HIV tests could be conducted with an oral swab instead of a blood draw. Quotes illustrating each theme emerging from the “reasons for accepting a test post-intervention” code are presented below.
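Percent agreement of the kind reported here is straightforward to compute; a minimal sketch follows (the study itself used QSR NVivo for coding, and the codes shown are invented examples):

```python
def percent_agreement(codes_a: list, codes_b: list) -> float:
    """Percentage of transcript segments to which two coders assigned the same code."""
    if len(codes_a) != len(codes_b):
        raise ValueError("Both coders must code the same segments")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)

# e.g., 3 of 4 segments coded identically -> 75.0
print(percent_agreement(["test", "risk", "risk", "swab"],
                        ["test", "risk", "fear", "swab"]))
```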
Results
Quantitative
Of the 160 patients who consented to take part in the study, 155 (96.9%) completed the intervention. RAs did not track the number of patients who were asked to participate in this pilot study but declined. The average time spent watching the video was 3.1 (± 4.0) minutes. The average time spent on the entire intervention (watching the video and responding to pre- and post-test questions, including the offer of an HIV test) was 15.7 (± 22.1) minutes. When the computers asked participants if they would like an HIV test, 33.1% (n=53) said yes.
Chi-square analyses were conducted to explore differences in HIV testing post-intervention across treatment type. No statistically significant differences were found by individual video (X2 = 0.46, df = 3, p = .928); by males vs. females onscreen (X2 = 0.52, df = 1, p = .820); or by positive vs. negative emotional content (X2 = 0.20, df = 1, p = .653).
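Such comparisons are standard contingency-table tests. The sketch below shows the form of the analysis with hypothetical cell counts, since the paper reports only the test statistics, not the per-group accept/decline numbers:

```python
import numpy as np
from scipy import stats

# Hypothetical accepted/declined counts for the four video conditions.
counts = np.array([
    [13, 26],   # male onscreen, positive tone
    [14, 25],   # male onscreen, negative tone
    [13, 26],   # female onscreen, positive tone
    [13, 26],   # female onscreen, negative tone
])
chi2, p, df, _ = stats.chi2_contingency(counts)
print(f"X2 = {chi2:.2f}, df = {df}, p = {p:.3f}")
```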
Total knowledge scores across all six items were calculated pre- and post-intervention. Higher scores indicate more correct answers; the highest possible score was 6 and the lowest was 0. The mean pre-test score was 5.1 (± 0.947) and the mean post-test score was 5.6 (± 1.01). Post-test scores were significantly higher than pre-test scores across the entire sample (p < .001). See Table I for more detail.
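The pre/post comparison is a standard paired t-test; for illustration only, with hypothetical per-participant totals on the 0 to 6 scale:

```python
import numpy as np
from scipy import stats

# Hypothetical pre- and post-intervention knowledge totals for 8 participants.
pre = np.array([5, 4, 6, 5, 5, 6, 4, 5])
post = np.array([6, 5, 6, 6, 5, 6, 5, 6])
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```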
Table I.

| Question | Pre-Test Percent Accuracy | Post-Test Percent Accuracy | Percent Change [95% CI] | T-Test Results |
|---|---|---|---|---|
| Taking a test for HIV one week after having sex will tell a person if she or he has HIV. (F) | 79.4% | 90.6% | 11.2% [9.0%, 18.2%] | t = −2.294, p = .023* |
| People who have been infected with HIV quickly show serious signs of being infected. (F) | 90.6% | 94.4% | 3.8% [0%, 5.0%] | t = −.277, p = .783 |
| There is a female condom that can help decrease a woman’s chance of getting HIV. (T) | 65.6% | 88.1% | 22.5% [19.0%, 23.0%] | t = −5.717, p < .001** |
| A person can get HIV from oral sex. (T) | 74.4% | 89.4% | 15.0% [14.0%, 19.0%] | t = −4.855, p < .001** |
| Pulling out the penis before a man climaxes/cums keeps a woman from getting HIV during sex. (F) | 93.1% | 96.3% | 3.2% [2.0%, 4.2%] | t = .904, p = .367 |
| A person will NOT get HIV if she or he is taking antibiotics. (F) | 94.4% | 93.1% | −1.3% [−6.0%, 0%] | t = 1.907, p = .057 |

Correct answers are shown in parentheses (T = true, F = false).
* Paired t-test results are statistically significant at the p < .05 level.
** Paired t-test results are statistically significant at the p < .001 level.
Mean scores on the acceptability items were generally high. Participants reported the intervention was easy to understand (90 out of 100, ± 12), easy to use (88 out of 100, ± 14), and useful (74 out of 100, ± 26). However, despite the significant increases in knowledge noted above, scores on the acceptability items indicated that participants did not report that the intervention presented much new information (54 out of 100, ± 35).
Qualitative
Interviews with participants who accepted an HIV test after completing the intervention indicate that learning specific information from the video contributed to their decisions to test. Of the 40 participants interviewed, 15 (37.5%) accepted an HIV test after watching a video. Of these, six said that prior to watching the video segments, they did not know HIV test results could be available in 20 minutes, and that learning this encouraged them to test:
A lot of people probably still have misconceptions about the test because I know they did make it a lot easier now. That's on the video with the swabbing. And I know I was hearing that before, but it's like, okay, I'm going to have to go for a blood test, and then I'm going to have to sit and wait and wait and wait. While you wait for a week to ten days to hopefully get back the results, and then if it's something that you don't want to hear, that's just it. Okay, this is—it's a misconception because this is the test, you know, swab around your mouth. In 20 minutes I'm going to know the results. People might not know that. #25 Black, non-Latino male, age 31
Three of the 15 participants who were interviewed after agreeing to test said they learned from the video that HIV tests could be administered without drawing blood, and that learning about oral swab tests during the intervention contributed to their decisions to test.
When I saw that I had the opportunity to retake it after learning that it was the swab test, then that was definitely helpful and, and, as you could see, I went to go take the test afterwards. #20 Black, non-Latino, female, age 32
Discussion
Would patients agree to participate and test?
The primary goals of the current pilot study were to examine whether patients in a high volume, urban ED who had declined HIV tests offered at triage would be willing to receive a computer-based video intervention intended to increase test uptake, and whether participants would accept an HIV test offered by computer. The finding that patients did agree to participate, that almost all participants completed the intervention, and that approximately one-third (33.1%, n=53) accepted an HIV test at the end is highly encouraging. It indicates not only that a very brief intervention can be implemented in the main treatment areas of an exceptionally high volume clinical setting, but that the intervention appears to encourage testing among patients who declined HIV tests offered at triage.
The finding that approximately 97 percent of participants (96.9%, n=155) completed the entire intervention appears to indicate the preliminary feasibility of the intervention design. The completion rate also suggests participants found the intervention at least minimally acceptable: all participants could withdraw from the study at any time, yet almost all completed it. Given that patients were eligible for the study only if they declined an HIV test at triage, these preliminary data, along with the finding that 33 percent of participants agreed to HIV testing after watching a video, suggest the intervention and methodology are highly promising.
If, as our qualitative interviews suggest, a straightforward onscreen demonstration of a rapid oral HIV test can encourage HIV testing among reluctant patients, comparably brief technology-based intervention designs may prove highly valuable in both clinical and non-clinical settings. The absence of significant differences in HIV test rates by video condition suggests that the demographic characteristics of the people who appear in an educational video, and the emotional tone of their message, may not in themselves determine an intervention’s success. Other elements, in this case the onscreen demonstration of an HIV test or the offer of an HIV test by a computer, may have contributed more strongly to participants’ decisions to test after watching a video. Our team is now developing follow-up study designs to examine these possibilities more thoroughly. Data from our current and upcoming studies may inform technology-based interventions to facilitate additional health behaviors among hard-to-reach and underserved populations nationwide.
Acknowledgements
This work was partially supported by grant #R03 DA031603 and by P30 Center grants #P30DA029926 and #P30DA011041 from the National Institute on Drug Abuse.
Madiha Tariq and Kate Haley collected data. Kevicha Echols, Ph.D. contributed to data analysis. Mary Ann Greene, MS contributed to the literature review.
References
- 1. Branson BM, Handsfield HH, Lampe MA, Janssen RS, Taylor AW, Lyss SB, et al. Revised recommendations for HIV testing of adults, adolescents, and pregnant women in health-care settings. MMWR Recomm Rep. 2006 Sep 22;55(RR-14):1–17. PMID: 16988643.
- 2. New York State Department of Health. Frequently asked questions regarding the HIV testing law. 2012 [cited 2014 May 8]. Available from: http://www.health.ny.gov/diseases/aids/providers/testing/law/faqs.htm.
- 3. Mealy A. Aronson ID, editor. 2014.
- 4. Christopoulos KA, Weiser SD, Koester KA, Myers JJ, White DA, Kaplan B, et al. Understanding patient acceptance and refusal of HIV testing in the emergency department. BMC Public Health. 2012;12:3. doi: 10.1186/1471-2458-12-3. PMID: 22214543; PMCID: PMC3267671.
- 5. Czarnogorski M, Brown J, Lee V, Oben J, Kuo I, Stern R, et al. The prevalence of undiagnosed HIV infection in those who decline HIV screening in an urban emergency department. AIDS Res Treat. 2011;2011:879065. doi: 10.1155/2011/879065. PMID: 21738860; PMCID: PMC3124124.
- 6. Merchant RC, Clark MA, Mayer KH, Seage GR 3rd, DeGruttola VG, Becker BM. Video as an effective method to deliver pretest information for rapid human immunodeficiency testing. Acad Emerg Med. 2009 Feb;16(2):124–35. doi: 10.1111/j.1553-2712.2008.00326.x. PMID: 19120050; PMCID: PMC2633421.
- 7. Calderon Y, Leider J, Hailpern S, Haughey M, Ghosh R, Lombardi P, et al. A randomized control trial evaluating the educational effectiveness of a rapid HIV posttest counseling video. Sex Transm Dis. 2009 Apr;36(4):207–10. doi: 10.1097/OLQ.0b013e318191ba3f. PMID: 19265735; PMCID: PMC2982699.
- 8. Calderon Y, Cowan E, Nickerson J, Mathew S, Fettig J, Rosenberg M, et al. Educational effectiveness of an HIV pretest video for adolescents: a randomized controlled trial. Pediatrics. 2011 May;127(5):911–6. doi: 10.1542/peds.2010-1443. PMID: 21482613; PMCID: PMC3081187.
- 9. Aronson ID, Bania TC. Race and emotion in computer-based HIV prevention videos for emergency department patients. AIDS Educ Prev. 2011 Apr;23(2):91–104. doi: 10.1521/aeap.2011.23.2.91. PMID: 21517659.
- 10. Fisher JD, Fisher WA. Theoretical approaches to individual-level change in HIV risk behavior. In: Peterson JL, DiClemente RJ, editors. Handbook of HIV Prevention. New York: Kluwer Academic/Plenum Press; 2000. pp. 3–55.
- 11. Aronson ID, Marsch LA, Acosta CA. Using findings in multimedia learning to inform technology-based behavioral health interventions. Transl Behav Med. 2012:1–10. doi: 10.1007/s13142-012-0137-4.
- 12. Carey MP, Schroder KE. Development and psychometric evaluation of the brief HIV Knowledge Questionnaire. AIDS Educ Prev. 2002 Apr;14(2):172–82. doi: 10.1521/aeap.14.2.172.23902. PMID: 12000234; PMCID: PMC2423729.