Abstract
Although the Centers for Disease Control and Prevention recommends routine HIV testing in emergency departments and other facilities, many patients are never offered testing, and those who are offered testing frequently decline. In response, our team developed and evaluated a series of differently configured technology-based interventions to explore how we can most effectively increase HIV testing among reluctant patients. The current study examines how different videos (onscreen physician vs. onscreen community member) and different intervention configurations (enabling some participants to select a video while others were assigned a video or shown bullet-point text) could potentially increase self-efficacy to test for HIV among patients who had never tested. Analyses of data from 285 emergency department patients in New York City who declined HIV testing offered by hospital staff indicated significant differences in self-efficacy depending on participants' history of previous testing, whether they were enabled to select a video or were assigned a video, and which video they watched. Participants who reported no previous testing reported significantly lower pre-test self-efficacy than those who had tested at least once before. Among those who had not previously tested, the greatest pre–post increases in self-efficacy were reported by participants who were randomly enabled to select an intervention video and chose to watch the video depicting a physician. Our findings highlight the importance not only of intervention content but also of how that content is delivered to specific participants. These findings may inform more effective technology-based behavioral health interventions.
Keywords: HIV testing, technology-based intervention, self-efficacy, New York City
HIV persists as a serious global public health issue, including in New York City (NYC). Recent data indicate that nearly 2,000 people in NYC were diagnosed with HIV in 2018 alone and that over 100,000 people are currently living with HIV in NYC (AidsVu, 2020). Indeed, for years NYC was considered the epicenter of the HIV epidemic (Frieden 2005), and multiple social and cultural factors continue to influence transmission (Leonard et al. 2014; Remien et al. 2015). Research has confirmed that early HIV testing is necessary both for prevention and for accessing effective treatment (Kako et al. 2013), and thoughtfully developed health education efforts can play a significant role in facilitating increased testing (Aronson et al. 2015), particularly among those who may be hesitant to test or have initially declined testing.
Prior studies
Technology-based interventions have been used to address barriers to HIV testing (Aronson et al. 2013); however, fundamental questions remain about what makes a specific use of technology effective for a particular population, and why. Our team previously implemented a series of studies using differently configured, custom-coded computer-based interventions to increase HIV testing among emergency department (ED) patients in NYC (Aronson et al. 2016). EDs provide important points of contact for patients who may not have regular access to care, and also offer immediate opportunities for referral to treatment for those who test HIV positive. To prevent patients with undiagnosed HIV from passing through an ED without being tested, the CDC has recommended routine universal HIV testing of patients in EDs and other facilities since 2006 (Branson et al. 2006). Unfortunately, many eligible ED patients are never offered testing, and in New York State, where care providers are required to offer testing to all patients with limited exceptions (New York State Department of Health 2018), far more patients decline HIV tests than accept them. This is especially problematic because prior research indicates that ED patients who decline testing are significantly more likely to have undiagnosed HIV than patients who accept testing (Czarnogorski et al. 2011).
In multiple trials, our interventions increased HIV test rates by approximately 30 percent, including increased testing among patients who had initially declined tests offered by hospital staff upon their arrival in the ED (Aronson et al. 2015, 2020). While we consider this a very promising result, in hopes of developing more effective future interventions we returned to the study data to examine why the interventions may have encouraged testing among some patients but not others. In the exploratory analysis presented in this paper, we reviewed trial data to examine how participants who agreed to test for HIV post-intervention might have differed from those who did not. Because our analysis was exploratory in nature, we looked for trends within our data that could inform additional, more fully powered studies. This led us to examine differences in self-efficacy to test for HIV by participants' history of HIV testing, which, as explained below, revealed significant differences within our sample.
In addition, our team has focused for some time on how particular configurations of technology may have increased effectiveness among different populations within our sample. As noted in a prior publication (Aronson et al. 2020), intervention videos designed to increase HIV testing frequently match the demographics of people in video segments to the demographics of the target audience. However, ED patients in a combined sample from 3 of our prior studies, all of whom initially declined testing (N = 560), were not more likely to test for HIV after viewing short video segments in which the people onscreen were demographically matched to the participant in terms of race and gender, compared to those who watched people in a video who were intentionally not matched to the participants’ race and gender. Instead, patients were more likely to test for HIV if they reported problem substance use (e.g. trying but failing to quit using a specific substance) at the start of the intervention during an automated substance use screening.
Nonetheless, determining who should appear onscreen in a technology-based intervention has been, and remains, an important consideration as we seek the best ways to promote HIV testing. Therefore, we implemented a 2015 trial to compare the effectiveness of an intervention video depicting a community member with an equivalent intervention video depicting a doctor, and also examined the effectiveness of enabling some participants to select which video they watched, compared to other participants who were randomly assigned a video or were shown bullet-point text explaining the same information. In a preliminary analysis of the 2015 trial data, no statistically significant differences in HIV testing emerged by treatment group: 8.3% of those who were assigned a video depicting a physician, 9.3% of those who were assigned a video depicting a community member, 6.0% of those who were enabled to select a video, and 6.7% of those who were shown bullet-point text instead of a video accepted an HIV test post-intervention.
In contrast, initial analyses indicate participants' decision to test for HIV post-intervention was driven largely by their history of prior HIV testing. An analysis of test rates shows that patients who had previously tested for HIV 5 or more times were more likely to accept an HIV test offered at the end of the intervention than participants who reported fewer prior HIV tests or no previous testing (OR = 1.86; 95% CI: 1.05–3.32; p = 0.03).
Study purpose
The above finding informed our exploratory analyses and led us to an important new question that has yet to be answered in the existing literature: how can we reach participants who initially decline testing and have never tested for HIV? Effectively addressing undiagnosed HIV entails encouraging testing by those who have previously tested and may have been exposed to HIV after their last test. It also entails encouraging others to test for the first time.
Thus, we conducted a secondary analysis of data from the 2015 study to examine which components may have encouraged testing among those who had not previously learned their HIV status, tying this exploration back to our original questions of who should appear onscreen in an intervention video and which configurations might prove most effective. The current study therefore aims to explore how the role of the person depicted in a video (community member versus physician) can potentially encourage testing, or at least diminish reluctance to test, among patients who have never tested for HIV.
Study design rationale
Based on a review of existing literature and our team's previous work, it remains unclear whether a well-informed community member or an expert, such as a physician, is more effective in a technology-based intervention. A meta-analysis of in-person interventions by Durantini et al. (2006) found that medical experts at times deliver more convincing arguments and provide information more effectively, yet the similarity of community members to an intervention recipient may lead to increased credibility and understanding. Other in-person studies (e.g. Broadhead et al. 2002; Latkin 1998) have successfully trained community members to deliver health education and encourage HIV-preventive behaviors, but the effectiveness of HIV-prevention education delivered by physicians versus community members who are willing to discuss their own HIV status has not been directly compared in either in-person or technology-based interventions.
A computer-based intervention called the CARE tool, which has been used successfully in a number of settings including EDs (Spielberg et al. 2011), enabled participants to select an onscreen ‘counselor’ to guide them through the content. Choices varied by race/ethnicity and gender, and by role (community member or clinician). However, the CARE tool did not record which onscreen counselor each participant selected, which precludes analyses of potential relationships between counselor selections and outcomes (e.g. whether the selection of one counselor over another resulted in different rates of HIV testing). Nor did the CARE tool randomize some participants into groups that could select who appeared onscreen and others into groups that were assigned an onscreen counselor. Thus, the CARE study design precludes examining how enabling some participants to select who appears onscreen, while assigning others to view pre-selected videos, might influence outcomes (i.e. are participants who can select which video they watch more likely to test for HIV post-intervention than participants who are assigned a specific video?).
Methods
Research assistants (RAs) recruited a convenience sample of 300 patients in a high-volume ED in New York City. Participants were eligible if medical records indicated they: declined an HIV test offered at triage; were aged 18 years or older; and were not known to be HIV positive. Patients were excluded from the study if they were: a prisoner; classified by ED staff as in most urgent need of medical care or experiencing a severe psychological problem; intoxicated; unconscious; or otherwise unable to provide written consent in English. RAs approached all eligible patients, and all participants provided written informed consent. After providing consent, participants used handheld tablet computers to complete the pre- and post-test assessments and the intervention. The present study includes data from all participants who completed the full intervention, including pre- and post-test measures (n = 285). Only 5% of the original sample (n = 15) did not complete all components and were therefore excluded. All consent forms were approved by the governing institutional review boards.
Participants were randomized into four groups: assigned video, community member; assigned video, physician; select video (community member or physician); and control (no video; bullet-point text displayed onscreen). The intervention software included an algorithm developed by the Principal Investigator to ensure an equal number of male and female participants was assigned to each treatment. Participants viewed all intervention materials via tablet computer in the treatment areas of the ED where they were recruited.
RAs provided headphones to ensure participants could hear the video dialogue and to protect participant privacy.
Study materials
The physician video and the community member video were both brief (approximately 1.6 minutes each), included on-camera demonstrations of a rapid oral HIV test, and explained that results could be available in 20 minutes. Both videos depicted African American men who appeared to be in their 20s or 30s. The community member video depicted an actual HIV-positive community member who emphasized the importance of testing by disclosing his positive HIV status on camera and explaining that no one could look at him and know he is positive. The physician video depicted an ED physician who emphasized the importance of testing by explaining that the only way people can know their status is to test.
If participants were randomized into the community member or physician video conditions, the tablets automatically displayed the video immediately after they completed the pre-test data collection items. If participants were randomized into the select video condition, the tablets displayed a still image of the physician on one side of the screen, wearing a white coat and stethoscope, and displayed a still image of the community member on the other side of the screen, wearing casual clothing, accompanied by text explaining participants could click on either image to choose a video. If participants were randomly assigned to the no video (control) condition, the tablet computer displayed bullet-point text describing the availability of treatment and the importance of HIV testing.
Measures
Intervention software collected basic demographic data (age, race/ethnicity, and gender), substance use and sexual risk (which have been described previously), HIV test history (whether the participant had ever tested, and if so, how many times), as well as pre–post measures, created by our team, of self-efficacy to test for HIV and potentially handle a positive result. These pre–post self-efficacy measures included six Likert-type items, each using a 10-point scale, asking participants how confident they were that they: could understand what an HIV test result means; would know what to do to get care if they tested positive; could handle the emotional impact of a positive test result; could handle the effects of a positive HIV diagnosis on relationships with family; could handle the effects of a positive HIV diagnosis on relationships with romantic and sexual partners; and could decide who to tell about a positive HIV diagnosis.
At the end of the intervention, the tablets asked each participant if they would like an HIV test. Potential responses were yes or no.
Data treatment, analysis, and power
Data were cleaned and organized in Microsoft Excel and then imported into R (version 4.0.0) for analyses. Descriptive statistics were used to describe sample characteristics. Chi-square tests were used to examine the relationship between likelihood to test for HIV and exposure to the different treatments. As described above, we originally conceptualized the study as a comparison of four treatment groups: Assigned Community Member, Assigned Physician, Choose Video, and No Video. However, upon preliminary examination of our data we realized that by assigning some participants to watch a given video and enabling other participants to select a video, our design actually created five treatment groups: Assigned Community Member, Assigned Physician, Chose Community Member, Chose Physician, and No Video. Although this resulted in a slightly underpowered analysis, it was the most accurate way to reflect the true study conditions. Moreover, our findings based on this five-group analysis remain informative for understanding the relationship between whether participants were able to choose their video and who they ultimately viewed during the intervention.
Our first goal was to explore differences in pre-test self-efficacy scores and delta (pre–post change) scores between participants who reported never having been tested before (n = 36) and those who had been tested at least once (n = 249). Prior to conducting our analyses, we evaluated our data for two key assumptions: normality and homogeneity of variance. Because the data were not normally distributed and the variances observed across the treatment groups were heterogeneous, we chose tests that are robust to these violations. We implemented Welch's t-tests to investigate the above-mentioned differences. We also explored these differences across the five treatment groups, first using the whole sample (n = 285) and again among the subset of participants who reported never having been tested before (n = 36). Given the nature of the data and in line with suggested best practice, we utilized the results of the Welch's ANOVA tests to inform our final results.
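As a point of reference, Welch's t statistic and its degrees of freedom can be computed directly from each group's mean, sample variance, and size. The following Python function is an illustrative sketch of the standard Welch formulas only; it is not the study's actual R analysis code, and the p-value lookup (which requires the t distribution) is omitted to keep the sketch dependency-free:

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's unequal-variances t statistic and degrees of freedom.

    t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b), with the
    Welch-Satterthwaite approximation for degrees of freedom. A p-value
    would be obtained from the t distribution with df degrees of
    freedom (lookup omitted here).
    """
    n_a, n_b = len(sample_a), len(sample_b)
    m_a, m_b = mean(sample_a), mean(sample_b)
    v_a, v_b = variance(sample_a), variance(sample_b)  # sample variances
    se_sq = v_a / n_a + v_b / n_b
    t = (m_a - m_b) / se_sq ** 0.5
    # Welch-Satterthwaite degrees of freedom
    df = se_sq ** 2 / ((v_a / n_a) ** 2 / (n_a - 1) + (v_b / n_b) ** 2 / (n_b - 1))
    return t, df
```

For example, `welch_t([1, 2, 3], [4, 5, 6])` returns t ≈ −3.67 with df = 4; a negative t indicates that the first group's mean is lower, as with the never-tested group's pre-test scores.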
Results
Sample description
Participants were 65.3% female (n = 186) and 34.7% male (n = 99). Nearly half of the participants identified as Black or African American (32.2% Black non-Latino (n = 92) and 16.8% Black Latino (n = 48)), and nearly a quarter (22.8%; n = 65) identified as White (9.8% White Latino (n = 28) and 13.0% White non-Latino (n = 37)). Overall, 81.1% of participants reported some type of substance use, including alcohol, during the past 3 months, and 84.2% reported having sex in the past 12 months. In total, 91 participants (31.9%) agreed to test for HIV after completing the intervention.
Comparison of post-intervention test rates
Chi-square analyses of post-intervention test rates did not reveal significant differences by treatment group, whether we examined testing by four treatment groups (assigned community member, assigned physician, choice of video, no video), χ2 (3, N = 285) = 3.727, p = 0.292, or by five treatment groups (assigned community member, assigned physician, chose physician, chose community member, no video), χ2 (4, N = 285) = 3.745, p = 0.442.
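For readers unfamiliar with the test, the chi-square statistic above is computed from the observed and expected counts of a treatment-by-outcome contingency table. The sketch below is illustrative Python, not the study's R analysis code; with 4 treatment groups and 2 outcomes (tested vs. declined), df = 3, so the statistic must exceed roughly 7.815 to reach significance at α = .05:

```python
def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c contingency table, given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of row and column factors
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df
```

A table whose rows are proportional to one another yields a statistic of 0 (no association); for example, `chi_square([[10, 10], [10, 10]])` returns (0.0, 1).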
Changes in self-efficacy
Analyses of pre-test self-efficacy scores revealed significant differences between those who reported previously having tested for HIV (M = 9.10, SD = 1.49), and those who reported never having tested for HIV (M = 7.31, SD = 2.23), Welch’s t = −4.68, p < .001. As detailed in Table 1, patients who tested for HIV at least once prior to participating in our study reported significantly greater self-efficacy in their ability to understand an HIV test result compared to participants who had never tested. Participants also reported significant differences in pre-test self-efficacy to handle the effects of a potential HIV diagnosis on relationships with family members (Welch’s t = −2.226, p < .05) and on relationships with romantic partners (Welch’s t = −2.277, p < .05), based on whether or not they had previously tested for HIV.
Table 1.
Pre self-efficacy score by previous HIV testing (n = 285).
| Pre self-efficacy item: How confident are you … | Never tested for HIV before (n = 36) | Tested for HIV at least once before (n = 249) | Welch t statistic | Welch test p-value |
|---|---|---|---|---|
| That you can understand what an HIV test result means | 7.31 (2.23) | 9.10 (1.49) | −4.68 | <.001*** |
| That if you test positive for HIV, you know what to do to get health care for HIV infection | 6.39 (2.89) | 7.30 (2.93) | −1.76 | 0.085 |
| That if you test positive for HIV, you could handle the emotional impact of the HIV diagnosis | 4.11 (2.78) | 5.00 (2.33) | −1.75 | 0.086 |
| That if you test positive for HIV, you could handle the effects of the HIV diagnosis on your relationships with family | 4.25 (2.81) | 5.39 (3.32) | −2.23 | 0.031* |
| That if you test positive for HIV, you could handle the effects of the HIV diagnosis on your relationships with romantic and sexual partners | 4.00 (2.88) | 5.21 (3.52) | −2.28 | 0.027* |
| That if you test positive for HIV, you could decide who to tell about your HIV diagnosis | 5.89 (3.45) | 6.81 (3.27) | −1.50 | 0.140 |
Note: Standard deviations reported in parentheses.
Analyses also indicated significant differences in pre–post change on the self-efficacy item examining ability to understand the results of an HIV test between those who reported never testing for HIV and those who reported previous HIV testing. Participants who had never tested before (M = 1.00, SD = 1.72) reported significantly greater increases in self-efficacy to understand an HIV test result after watching a video than participants who had tested for HIV prior to our study (M = 0.08, SD = 1.42), Welch's t = 3.068, p = 0.004.
Among participants who had not previously tested for HIV, significant differences were detected among the five treatment groups on self-efficacy to handle the emotional impact of an HIV diagnosis (Welch test statistic = 4.27, p = 0.021) and its effects on romantic relationships (Welch test statistic = 3.63, p = 0.035). A post hoc pairwise analysis showed a significant difference on handling the emotional impact between participants who were assigned the video depicting a community member (M = −0.38, SD = 1.19) and those who chose to watch the video depicting a physician (M = 3.75, SD = 1.71), Games–Howell test statistic = 4.12, p = 0.044. Interestingly, participants who were assigned to watch the video of a community member reported decreases in response to two items (Table 2). However, due to the limited sample size, we did not investigate this phenomenon further.
Table 2.
Change in self-efficacy score by treatment group, participants who report no previous HIV testing (N = 36).
| Self-efficacy item: How confident are you … | Assigned Community Member (n = 8) | Assigned Physician (n = 8) | Chose Community Member (n = 7) | Chose Physician (n = 4) | No Video (n = 9) | Welch test statistic | Welch test p-value |
|---|---|---|---|---|---|---|---|
| That you can understand what an HIV test result means | 1.63 (2.62) | 1.00 (1.20) | 0.57 (0.98) | 1.75 (2.36) | 0.44 (1.33) | 0.57 | 0.687 |
| That if you test positive for HIV, you know what to do to get health care for HIV infection | 0.13 (1.13) | 1.75 (2.25) | 1.43 (2.99) | 3.50 (2.08) | 0.67 (1.80) | 2.45 | 0.102 |
| That if you test positive for HIV, you could handle the emotional impact of the HIV diagnosis | −0.38 (1.19) | 0.38 (2.00) | 1.29 (2.22) | 3.75 (1.71) | 0.67 (1.41) | 4.27 | 0.021* |
| That if you test positive for HIV, you could handle the effects of the HIV diagnosis on your relationships with family | 0.00 (0.54) | 0.75 (2.19) | 1.57 (2.30) | 3.75 (2.63) | 0.44 (1.42) | 2.48 | 0.104 |
| That if you test positive for HIV, you could handle the effects of the HIV diagnosis on your relationships with romantic and sexual partners | −0.13 (0.84) | 0.63 (1.41) | 2.14 (2.73) | 2.50 (1.29) | 0.67 (1.00) | 3.63 | 0.035* |
| That if you test positive for HIV, you could decide who to tell about your HIV diagnosis | 0.25 (0.46) | 1.00 (1.77) | 0.00 (1.92) | 0.00 (0.82) | −0.11 (2.03) | 0.47 | 0.756 |
Note: Standard deviations reported in parentheses.
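The Games–Howell procedure used for the post hoc comparisons pairs a Welch-type standard error with the studentized range distribution. As an illustrative sketch (not the study's analysis code; the conventional studentized-range formulation is assumed, and the p-value lookup against the studentized range distribution is omitted), the pairwise statistic can be computed from each group's summary statistics:

```python
def games_howell_q(mean_a, var_a, n_a, mean_b, var_b, n_b):
    """Games-Howell pairwise statistic q and Welch degrees of freedom
    for two groups summarized by mean, sample variance, and group size.

    q is referred to the studentized range distribution with the number
    of groups and df degrees of freedom to obtain a p-value (omitted).
    """
    sa, sb = var_a / n_a, var_b / n_b  # per-group variance of the mean
    q = abs(mean_a - mean_b) / ((sa + sb) / 2) ** 0.5
    # Welch-Satterthwaite degrees of freedom for this pair of groups
    df = (sa + sb) ** 2 / (sa ** 2 / (n_a - 1) + sb ** 2 / (n_b - 1))
    return q, df
```

For example, two groups with means 0 and 2, sample variances of 1, and n = 5 each yield q ≈ 4.47 with df = 8.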
Discussion
The present study provides insight critical to our understanding of how to promote HIV testing in NYC and, importantly, of the factors that contribute to a patient's decision to decline testing in a hospital setting. Specifically, our findings from a sample of 285 ED patients who declined HIV testing offered by hospital staff indicated that these participants reported significant differences in self-efficacy depending on their history of previous testing, whether they were enabled to select a video or were assigned a video, and which video they watched. Participants who reported no previous testing reported significantly lower pre-test self-efficacy than those who had tested at least once before. And among those who had not previously tested, the greatest pre–post increases in self-efficacy were reported by participants who were randomly enabled to select an intervention video and chose to watch the video depicting a physician.
It is unsurprising that people who had previously tested for HIV reported greater pre-test self-efficacy than those who had not: these participants had already experienced the testing process and therefore reported feeling better prepared to go through it. Similarly, it could be expected that participants who had never tested before reported significantly greater increases in self-efficacy after watching a video than those who had already tested, if for no other reason than that they started with lower self-efficacy scores.
What we find most interesting about the current results is the degree to which participants who had never tested for HIV reported statistically significant differences, by treatment group, in the degree to which their self-efficacy changed from the start of the intervention to the end. Participants who were enabled to select which video they watched, and who then chose video of a physician, reported the highest gains in self-efficacy, followed by participants who were enabled to select a video and who chose to watch video of a community member. These gains were significantly greater, by treatment group, in response to questions measuring self-efficacy to handle the emotions related to a potential HIV test result, and to discuss a potential HIV diagnosis with a romantic partner.
Some of this may be due to the content of each video. While the videos were mostly the same (e.g. both demonstrated an oral swab HIV test and discussed the importance of re-testing after potential exposures to HIV), the community member disclosed his positive HIV status on camera and emphasized that no one could look at him and know he is positive. This could have frightened some viewers, which might explain why participants who were assigned to watch the community member reported decreases in self-efficacy to handle the emotions of a potential HIV diagnosis and the effects of an HIV diagnosis on relationships with romantic and sexual partners. However, when participants who had the ability to select a video chose to watch the same community member video, they reported increases in response to the same measures.
These differences suggest that choice impacts the effectiveness of a video. The fact that participants had notably different responses based on whether they were assigned to watch a particular video or were enabled to select their intervention video appears to indicate the same content led to different outcomes depending on the participants’ ability, or lack of ability, to decide what they watched. In response to multiple items the participants who had never tested for HIV and who chose the community member video reported the second highest self-efficacy increases, while participants who had never tested and were assigned to watch the exact same community member video reported self-efficacy decreases in response to two items and no change on a third (please see Table 2 for more detail). This finding suggests the importance of not only examining which intervention components are most effective, but of examining how the same component may be more or less effective depending on how it is delivered (e.g. whether a single video has a greater contribution to intervention outcomes based on whether it is selected or assigned) and thus warrants further research.
Findings from the current study also suggest that patients who were unsure about testing may have been looking for reassurance from an expert. This could explain why, among those who had never tested for HIV, participants who selected the physician video reported the greatest increases in self-efficacy. It could also explain why participants who had never tested and were assigned the physician video still reported greater increases in self-efficacy than participants who were assigned to watch the video of the community member. This would align with findings from Parmar et al. (2018) that a computer-generated onscreen virtual agent dressed in a physician's white coat was perceived by patients as more professional, trustworthy, reassuring, and ‘appropriate for the job’ compared with the same agent dressed in professional clothing without a white coat. Again, further research is warranted to examine potential relationships between participants' ability to select an onscreen interventionist and the interventionist's (or agent's) appearance, including demographic characteristics, wardrobe, and perceived role (community member, physician, etc.).
Lastly, the differences described above do not apply to all self-efficacy items. On the items that asked participants about their ability to understand the results of an HIV test, or whether they knew what to do to get care for HIV, analyses do not indicate significant differences among the five groups of participants who had not previously tested. Additionally, among those who had not tested, participants in all of the video conditions reported substantial increases in their ability to understand an HIV test result, while participants in the no-video control condition (who were shown bullet-point text) reported a smaller increase. This is likely because both intervention videos demonstrated how an oral swab HIV test is performed and explained how to interpret the results. Thus, whether enabled to select a video or not, all participants who watched a video appear to have gained a more thorough understanding of the test process and experienced a related increase in self-efficacy.
This lack of between-group difference (as compared to the responses to the emotion and romantic/sexual partner measures described above) may also be explained by noting that the ‘understand’ item measures a cognitive construct, while the emotion and romantic partner items measure more emotional constructs. It may emerge that the increased engagement that stems from participant choice, combined with the increased reassurance and perceived authority afforded by an onscreen physician, has greater influence over emotion-related responses than over increases in knowledge and understanding. This finding may prove especially important to the design of future interventions, because decisions to test for HIV can be highly emotional for patients who have never tested before and may need additional reassurance, especially those who fear a positive result.
Limitations
A chief limitation of the current study is the small sample size of participants who have never tested for HIV (n = 36) compared to those who reported testing at least once previously (n = 249). The data set comes from a study designed to examine the effectiveness of different video configurations on participants who declined HIV testing offered by ED staff, and was not designed to compare differences by participants’ HIV test history. An additional, related, limitation is that we did not create videos, or other intervention materials, to specifically increase self-efficacy among first time testers. Fortunately, these limitations can be addressed in future studies. Also, our study was conducted with participants recruited in a single ED in New York City, and therefore, may not be generalizable to populations in areas outside New York, or even to other EDs elsewhere in the city.
Conclusions
Encouraging patients in New York ED settings who have never tested for HIV to learn their status remains a key goal as public health practitioners and clinicians seek to reduce new HIV infections in NYC, in part by addressing undiagnosed HIV so that people do not unknowingly transmit the virus. If we view HIV testing not as a single decision but as a process through which people can move from not testing toward accepting an HIV test (e.g. the Transtheoretical Model; Prochaska, Redding, and Evers 2015), our findings may have particular applicability. Even if a patient elects not to test after completing a single intervention, it may be possible to move that person closer to eventual testing via subsequent interventions. Accordingly, our team is designing additional interventions aimed at learning how different components and configurations can be optimized to encourage HIV testing among those who decline tests when offered and, most importantly, among those who have never tested before.
Acknowledgments
The current study was funded by grants from the National Institutes of Health, including NICHD R42 HD088325, NIDA P30 DA029926, and NIDA P30 DA011041. Dr. Aronson is a co-founder of Digital Health Empowerment, a small business that creates technology-based behavioral health interventions. No Digital Health Empowerment products were used as part of the current research.
Funding
This work was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development [R42HD088325]; National Institute on Drug Abuse [P30DA011041, P30DA029926, R34DA037129].
Footnotes
Disclosure statement
No potential conflict of interest was reported by the author(s).
References
- AIDSvu. 2020. “Local Data: New York City.” AIDSvu [cited 2021 April 19]. Available from: https://aidsvu.org/local-data/united-states/northeast/new-york/new-york-county/new-york-city/.
- Aronson ID, Marsch LA, Rajan S, Koken J, and Bania TC. 2015. “Computer-based Video to Increase HIV Testing among Emergency Department Patients Who Decline.” AIDS and Behavior 19 (3): 516–522. doi: 10.1007/s10461-014-0853-5.
- Aronson ID, Rajan S, Marsch LA, and Bania TC. 2013. “How Patient Interactions with a Computer-based Video Intervention Impact Decisions to Test for HIV.” Health Education & Behavior 41 (3): 259–266.
- Aronson ID, Cleland CM, Perlman DC, Rajan S, Sun W, and Bania TC. 2016. “Feasibility of a Computer-Based Intervention Addressing Barriers to HIV Testing Among Young Patients Who Decline Tests at Triage.” Journal of Health Communication 21 (9): 1039–1045.
- Aronson ID, Cleland CM, Rajan S, Marsch LA, and Bania TC. 2020. “Computer-Based Substance Use Reporting and Acceptance of HIV Testing Among Emergency Department Patients.” AIDS and Behavior 24 (2): 475–483.
- Branson BM, Handsfield HH, Lampe MA, Janssen RS, Taylor AW, and Lyss SB. 2006. “Revised Recommendations for HIV Testing of Adults, Adolescents, and Pregnant Women in Health-care Settings.” MMWR Recommendations and Reports 55 (RR-14): 1–17.
- Broadhead RS, Heckathorn DD, Altice FL, Van Hulst Y, Carbone M, Friedland GH, O’Connor PG, and Selwyn PA. 2002. “Increasing Drug Users’ Adherence to HIV Treatment: Results of a Peer-driven Intervention Feasibility Study.” Social Science & Medicine 55 (2): 235–246. doi: 10.1016/S0277-9536(01)00167-8.
- Czarnogorski M, Brown J, Lee V, Oben J, Kuo I, Stern R, and Simon G. 2011. “The Prevalence of Undiagnosed HIV Infection in Those Who Decline HIV Screening in an Urban Emergency Department.” AIDS Research and Treatment 2011: 879065. doi: 10.1155/2011/879065.
- Durantini MR, Albarracin D, Mitchell AL, Earl AN, and Gillette JC. 2006. “Conceptualizing the Influence of Social Agents of Behavior Change: A Meta-analysis of the Effectiveness of HIV-prevention Interventionists for Different Groups.” Psychological Bulletin 132 (2): 212–248. doi: 10.1037/0033-2909.132.2.212.
- Frieden TR. 2005. “Can the HIV/AIDS Epidemic in New York City Be Stopped?” The PRN Notebook. https://www.prn.org/index.php/transmission/article/hiv_aids_epidemic_in_new_york_city_52
- New York State Department of Health. 2018. “Overview of NYS HIV Testing Law for Health Care Providers.” https://www.health.ny.gov/diseases/aids/providers/testing/docs/testing_toolkit.pdf
- Kako PM, Stevens PE, Mkandawire-Valhmu L, Kibicho J, Karani AK, and Dressel A. 2013. “Missed Opportunities for Early HIV Diagnosis: Critical Insights from Stories of Kenyan Women Living with HIV.” International Journal of Health Promotion and Education 19 (3). doi: 10.1080/14635240.2012.750070.
- Latkin CA. 1998. “Outreach in Natural Settings: The Use of Peer Leaders for HIV Prevention among Injecting Drug Users’ Networks.” Public Health Reports 113 (Suppl 1): 151–159.
- Leonard NR, Rajan S, Gwadz MV, and Aregbesola T. 2014. “HIV Testing Patterns among Urban YMSM of Color.” Health Education & Behavior 41 (6): 673–681. doi: 10.1177/1090198114537064.
- Parmar D, Olafsson S, Utami D, and Bickmore T. 2018. “Looking the Part: The Effect of Attire and Setting on Perceptions of a Virtual Health Counselor.” Paper presented at IVA ‘18: International Conference on Intelligent Virtual Agents, Sydney, NSW, Australia.
- Prochaska JO, Redding CA, and Evers KE. 2015. “The Transtheoretical Model and Stages of Change.” In Glanz K, Rimer BK, and Viswanath K (Eds.), Health Behavior: Theory, Research, and Practice (pp. 125–148). San Francisco, CA: Jossey-Bass/Wiley.
- Remien RH, Bauman LJ, Mantell JE, Tsoi B, Lopez-Rios J, Chhabra R, … DiCarlo A. 2015. “Barriers and Facilitators to Engagement of Vulnerable Populations in HIV Primary Care in New York City.” JAIDS Journal of Acquired Immune Deficiency Syndromes 69 (Suppl 1): S16–24. doi: 10.1097/QAI.0000000000000577.
- Spielberg F, Kurth AE, Severynen A, Hsieh Y-H, Moring-Parris D, Mackenzie S, and Rothman R. 2011. “Computer-facilitated Rapid HIV Testing in Emergency Care Settings: Provider and Patient Usability and Acceptability.” AIDS Education and Prevention 23 (3): 206–221. doi: 10.1521/aeap.2011.23.3.206.
