Author manuscript; available in PMC: 2008 Jun 3.
Published in final edited form as: J Acquir Immune Defic Syndr. 2008 Mar 1;47(Suppl 1):S28–S33. doi: 10.1097/QAI.0b013e3181605c77

Defining, Designing, Implementing, and Evaluating Phase 4 HIV Prevention Effectiveness Trials for Vulnerable Populations

Jeffrey A. Kelly, Freya Spielberg, Timothy L. McAuliffe
PMCID: PMC2409151  NIHMSID: NIHMS45225  PMID: 18301131

Summary

The efficacy of behavioral HIV prevention interventions has been convincingly demonstrated in a large number of randomized controlled phase 3 research outcome trials. Little research attention has been directed toward studying the effectiveness of the same interventions when delivered by providers to their own clients or community members, however. This article argues for the need to conduct phase 4 effectiveness trials of HIV prevention interventions that have been found efficacious in the research arena. Such trials can provide important information concerning the impact of interventions when applied in heterogeneous “real-world” circumstances. This article raises design issues and methodologic questions that need to be addressed in the conduct of phase 4 trials of behavioral interventions. These issues include the selection and training of service providers engaged in such trials, maintenance of fidelity to intervention protocol in provider-delivered interventions, determination of intervention core elements versus aspects that require tailoring, selection of relevant phase 4 study outcomes, interpretation of findings indicative of field effectiveness, sustainability, and other aspects of phase 4 trial design.

Keywords: effectiveness trial, HIV prevention, phase 4 trial, methodology


Behavioral HIV primary prevention interventions have been shown to reduce risk behaviors in a wide range of community populations, including men who have sex with men (MSM), injection drug users (IDUs), women, adolescents, patients treated for sexually transmitted diseases (STDs) or seen in health clinics, and persons with other risk issues. Often utilizing randomized controlled trial (RCT) designs, a large body of research literature has established the efficacy of culturally tailored behavioral interventions—derived from several theoretic frameworks and using different delivery modalities—directed toward individuals, couples, small groups, social networks, and even subsets of entire community populations. Several meta-analyses, reviews, and compendia1–6 have summarized evidence-based HIV prevention behavioral intervention research that met high standards for study design and produced interventions deemed ready for use by service providers. How rapidly applied efforts were mobilized to develop and rigorously evaluate interventions designed to protect persons from HIV/AIDS stands as one of the most noteworthy achievements in the history of the behavioral and psychologic sciences.

For advances in our scientific understanding of HIV prevention interventions to contribute to the public health goal of curtailing the disease, at least 2 additional steps must now be taken. The first is evaluating optimal strategies for disseminating evidence-based HIV prevention interventions, once they are found efficacious, from the research arena to the front-line service providers who carry out applied programs in their own communities. Although publication of intervention outcome findings in journals is a traditional academic end product, the true audience for HIV prevention intervention research is service providers, whose applied programs can benefit if informed by the results of HIV behavioral research. Advances have been made in identifying training delivery modalities that can help AIDS service organizations adopt evidence-based HIV prevention intervention models.7,8 Much remains to be learned, however, about how best to transfer interventions developed and tested in the research arena so that they meet the needs of the service providers who, it is hoped, will ultimately use them.9,10

A second step necessary for translating research advances made in HIV prevention science to their public health application is determining the effects that are produced when evidence-based interventions are implemented by service providers in the field as opposed to researchers in carefully controlled research trials.

WHAT ARE PHASE 4 EFFECTIVENESS TRIALS OF BEHAVIORAL INTERVENTIONS?

Research in the medical therapeutics arena—often involving medications—has traditionally been conceptualized in terms of study phases that range from initial basic science discovery research through studies meant to test mechanisms of action, dose/response relations, and safety (phase 1); to preliminary studies of efficacy (phase 2); to definitive studies of efficacy using larger samples and well-controlled outcome evaluation research designs (phase 3); and, finally, to wide-scale intervention application in the field under diverse, heterogeneous, real-world conditions beyond the confines of a highly controlled phase 3 trial (phase 4). Medical therapeutics are usually taken through all these study phases, with phase 4 findings—the “effectiveness” of the intervention when applied by real providers in the real world—considered the final test of an approach found “efficacious” in well-controlled phase 3 research trials. Phase 4 trials are the essential culmination of intervention development and deployment. It is, after all, an intervention’s performance when used by providers in the field that determines its true public health benefit.

Although there is a long and intensive history of research evaluating the effects of HIV prevention behavioral interventions, few of these evaluations have been carried beyond phase 3 studies that examine intervention effects under highly controlled conditions. Our knowledge concerning the impact of HIV prevention interventions is generally derived from studies that followed carefully specified intervention protocols; that employed highly trained research staff (or counselors trained to function as research staff surrogates) who were taught, and closely monitored, to deliver the intervention with fidelity; that enrolled study volunteers motivated and willing to complete intervention and assessment protocols; and that used participation and intervention supports (eg, incentive payments, high-quality delivery support resources) common in the research arena but rare in the service provision sector. Further, participants enrolled in phase 3 trials of HIV prevention interventions are usually carefully screened for study eligibility and therefore represent a select subsample of all persons to whom the intervention is ultimately likely to be offered in real-life service provision settings. For these reasons, we now know more about the impact of HIV prevention interventions in the context of phase 3 efficacy trials than about their effectiveness when offered by providers under more genuine and diverse circumstances.

Although there is general consensus that phase 4 trials should involve implementation of an intervention with larger samples and under conditions more heterogeneous and realistic than those that characterize a highly controlled phase 3 experimental trial,11 there is little precedent for determining the methodologies and designs needed to carry out field effectiveness studies of behavioral interventions. Before the field can determine the effectiveness of evidence-based HIV prevention interventions found efficacious in phase 3 research, several critical design and methodologic issues need to be resolved.

To conduct phase 4 trials of behavioral HIV prevention interventions, dissemination and design frameworks are needed for engaging and training service providers in how to deliver to their own clients an intervention whose efficacy has already been established. Effectiveness trials require that the intervention being tested is offered by applied providers to their own clients or to community populations in real-world field settings. In an HIV prevention effectiveness trial, providers are likely to be AIDS service organizations, nongovernmental organizations (NGOs), public health departments, schools, clinics, or other agencies that carry out programs serving populations for whom the intervention is believed to be useful. Initial considerations when planning a phase 4 trial of an HIV prevention intervention include determining the types of providers who are going to implement the intervention; assessing provider capacity, resources, skills, and motivation for systematic intervention implementation and outcome measure data collection; and determining the appropriateness of the providers’ client populations as recipients of the intervention that is to be delivered.

To what extent should providers who deliver an HIV prevention intervention in a phase 4 effectiveness trial be monitored for delivery adherence, and to what extent should quality control (QC) procedures be used to detect and correct provider “drift” from protocol in intervention delivery? Even brief HIV prevention behavioral interventions involve considerable presentation of information; active interaction between a client and the deliverer of the intervention; and, often, staged exercises intended to influence the client’s risk reduction skills, attitudes, motivations, beliefs, and intentions. Behavioral interventions are sufficiently complex that the facilitators who deliver interventions in phase 3 trials are usually intensively trained in all content and procedures, follow manualized guides, are frequently monitored to measure their adherence to the intervention’s protocol and methods, and are retrained whenever deviations from protocol (sometimes called drift) are observed. These procedures serve to maintain high levels of fidelity in intervention delivery in phase 3 trials.

If the same oversight were attempted in a phase 4 trial, one could argue that the intervention under evaluation is not being tested in real-world circumstances and that the trial remains in phase 3, with overly tight controls imposed on intervention delivery. Conversely, the absence of QC monitoring and periodic provider retraining could lead to a circumstance in which the delivered intervention is fundamentally different—perhaps in unknown ways—from the one that was intended. Thus, it would not be possible to specify accurately what intervention was being delivered. Planners of phase 4 behavioral trials are confronted with the need to maintain oversight of intervention delivery sufficient to ensure that the approach being implemented is the one that was planned while, at the same time, allowing for realistic adaptation and tailoring by providers. Presumably, a phase 4 trial of an HIV prevention behavioral intervention would begin by carefully training providers in how to implement the model correctly with their own clients. The question that needs to be conceptualized is how closely to monitor and correct any subsequent drift in providers’ intervention delivery to ensure that the intended intervention is really being offered while not violating the conceptual underpinnings of a phase 4 (as opposed to phase 3) trial. The collection of ongoing intervention delivery process data permits one to characterize how well an intervention adhered to a protocol or how it may have been changed.

The success of phase 4 trials depends on the design and structure of the preceding phases. Table 1 describes key components of design, goals, considerations, phase-dependent outcomes, and potential funding mechanisms for each study phase. In each research phase, we suggest working with the same populations and in the same settings in which the program is ultimately going to be disseminated. In phase 1, an iterative design process and qualitative evaluation allow a theory-driven model to be refined and operationalized in a manner consistent with end-user priorities. The goal in phase 2 is to estimate potential effect size and to gather further information on intervention acceptability and feasibility. With the effect size estimated, phase 3 and phase 4 trials can be designed to determine the impact of an HIV intervention at the individual (phase 3) and population (phase 4) levels.

Table 1.

Structure and Design of Behavioral Intervention Research and Evaluation

Phase 1 research
  Design: Qualitative iterative design process
  Goal: To design and pilot a theory-based HIV prevention intervention
  Approximate sample size: 20 per setting and iterative design
  Outcomes: Acceptability and usability/feasibility among potential clients and service providers
  Potential funding mechanisms: NIH R21, R38, SBIR; CDC RFA, SBIR; foundation

Phase 2 research
  Design: Longitudinal quantitative pilot
  Goal: To determine potential effect sizes for a new HIV prevention intervention
  Approximate sample size: 150 per arm per setting
  Outcomes: Behavioral outcomes (unprotected vaginal or anal sex or needle/works sharing with a partner of unknown or discordant status) and biologic outcomes (HIV or STDs) to define prevalence and potential effect size
  Potential funding mechanisms: NIH R38, R01; CDC RFA, SBIR; foundation

Phase 3 research
  Design: Longitudinal randomized controlled trial (individuals are randomized)
  Goal: To test the efficacy of a new behavioral intervention
  Approximate sample size: The sample size required to detect significant differences is determined from phase 2 results; larger sample sizes allowing the measurement of biologic outcomes should be funded whenever possible
  Outcomes: Comparison of behavioral and biologic outcomes between intervention and control groups
  Potential funding mechanisms: NIH R01, U10; CDC RFA; foundation

Phase 4 evaluation
  Design: Longitudinal randomized controlled trial by time frame (day, week, year)
  Goal: To test the effectiveness and cost of a new behavioral intervention when scaled up in real-world settings
  Approximate sample size: Sample size calculations take into account phase 3 results and cluster correlation
  Outcomes: Acceptance rates, process outcomes, cost of delivery, and behavioral/biologic outcomes of intervention vs. standard of care on a population basis
  Potential funding mechanisms: New combined funding mechanisms between service providers and funders of research

Postmarketing evaluation
  Design: Postmarketing surveillance
  Goal: To evaluate sustainable dissemination of effective interventions
  Approximate sample size: Market dependent
  Outcomes: Safety and quality; measures of dissemination; measures of financial sustainability
  Potential funding mechanisms: Social enterprise models that rely on public-private partnerships and a commitment to ongoing quality improvement

NIH indicates National Institutes of Health; RFA, request for applications; SBIR, small business innovation research.

Phase 4 research seeks to determine the impact of an intervention when disseminated by service providers in real-world settings. In phase 3, individuals are typically randomized to an intervention or control condition, and incentives are generally provided to ensure intervention participation and longitudinal follow-up. The use of consents and incentives may lead to a different population participating in the study than would accept services in the absence of a research study. In phase 4 trials, one way to allow accurate collection of process, outcome, and cost data is to randomize clients to intervention or standard-of-care services based on a discrete time period such as a day, week, or month.14 The use of incentives to increase initial acceptance of intervention services, or the provision of additional staff, is discouraged in phase 4 designs unless these are part of the usual standard of care or are used only for follow-up data collection.
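As a concrete illustration of this time-period randomization scheme, the sketch below assigns whole service weeks to either the tested intervention or standard-of-care services, so that every client seen during a given week receives that week's condition. The condition names, the weekly unit, and the 26-week duration are illustrative assumptions, not specifics from the design cited above.

```python
import random
from datetime import date, timedelta

def assign_weeks(start: date, n_weeks: int, seed: int = 2008) -> dict:
    """Assign each service week to the intervention or to standard-of-care
    services, balancing the number of weeks given to each condition."""
    conditions = ["intervention", "standard_of_care"] * (n_weeks // 2)
    conditions += ["intervention"] * (n_weeks % 2)  # handle odd week counts
    random.Random(seed).shuffle(conditions)  # fixed seed: auditable schedule
    return {start + timedelta(weeks=i): c for i, c in enumerate(conditions)}

# Hypothetical example: a 26-week evaluation period starting in January
schedule = assign_weeks(date(2008, 1, 7), 26)
for week_start in sorted(schedule)[:3]:
    print(week_start, schedule[week_start])
```

Because the unit of assignment is the calendar week rather than the individual client, no consent-driven selection occurs at intake, and the same schedule can drive the routine collection of process and cost data for both conditions.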

If an intervention being tested in a phase 3 trial cannot be delivered without extensive training, or if it is too costly, the likelihood of successful scale-up and dissemination in a sustainable way is low. Thus, when choosing interventions for phase 3 and phase 4 trials, we suggest ensuring that the ultimate target populations have been included in all phases of the research; that the intervention is grounded in qualitative data from clients in real-world settings; that a training system has been developed with high potential for efficient and consistent replication; that the intervention is expected to be more effective or less costly than existing effective models; and that plans for scaling, dissemination, and ongoing funding are well thought out. There may also be circumstances in which proceeding directly from phase 2 to phase 4 provides the most relevant data at the least cost. This is particularly the case for interventions based on proven theories that are being adapted for new populations, disseminated through new technologies, tested in new settings, or simply targeted to different risk groups.

In designing phase 4 trials, it is critical to distinguish between the essential “core elements” of an intervention that cannot be changed and other intervention characteristics that can (or must) be tailored. One way to resolve the tension between intervention delivery consistency and variation is to identify the core elements that must be present for a delivered intervention to constitute what it is meant to be. Core elements may involve critical content, intensity or duration, procedures used, or other aspects of the model that, if changed, would fundamentally change the intervention from that which had been shown efficacious in prior phase 3 trials. The determination of an intervention’s core elements could be based on empiric component analysis to determine its “active ingredients,”12 theoretic principles underlying the intervention, elements that were always present in past evaluations in which the intervention was found efficacious, or cumulative practical experience with the intervention.9 The core elements may then be differentiated from other characteristics of intervention delivery that can be tailored to meet provider needs and to ensure intervention relevance across client populations more diverse than those studied in earlier efficacy outcome studies.

Critical core elements may also be studied within the context of the phase 4 trial itself by using a time-period randomization methodology; for example, key aspects of the core content can be varied and compared with the full strategy delivered on random weeks. Likewise, an iterative phase 4 design can be used in which services that isolate different core elements are varied during randomized time periods and evaluated to determine which core elements have the greatest impact on service acceptance, effectiveness, and cost, as sketched below.
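A minimal sketch of such an iterative component-isolation schedule follows. The three condition names are hypothetical examples of arms that might each isolate a single core element; they are not components named in this article.

```python
import random

def schedule_conditions(n_weeks: int, conditions: list, seed: int = 7) -> list:
    """Rotate conditions that isolate different core elements across
    randomized weeks, balancing weeks per condition as evenly as the
    calendar allows."""
    reps, extra = divmod(n_weeks, len(conditions))
    schedule = conditions * reps + conditions[:extra]
    random.Random(seed).shuffle(schedule)
    return schedule  # schedule[i] is the condition delivered in week i

# Hypothetical arms: the full intervention, a version omitting the
# skills-building exercises, and the usual standard of care
weeks = schedule_conditions(
    24, ["full_intervention", "no_skills_exercises", "standard_of_care"])
```

Comparing acceptance, outcome, and cost data across the three schedules would then indicate whether the omitted element is, in fact, a core element.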

The iterative phase 4 design model for effectiveness trials varies time frames for conditions, in contrast to historical designs in which individuals are randomized and study arms are compared. Advantages of an iterative design strategy include the feasibility of integrating the design into a functioning service delivery program; potential cost sharing between service funders and research funders; reduction of the biases that can arise when individually randomized participants do not receive their hoped-for strategy;13 and the potential to collect data on service acceptance, program effectiveness among people who are likely to use it, and the real costs of providing such services.

What constitute appropriate study outcomes in a phase 4 behavioral intervention trial, and how can they best be measured? One aim of a phase 4 trial is to determine whether the effectiveness of an intervention when used by providers in the field is comparable to, is better than, or is worse than the same intervention when tested under the controlled circumstances of a phase 3 trial. To address this question in the most direct possible manner, it would seem ideal to use outcome measures, assessment procedures, and follow-up periods in the phase 4 trial that closely match those used in earlier efficacy studies of the same intervention. If providers’ real-world clients are assessed in the same way and with the same measures as were participants in earlier efficacy studies, it should be possible to compare directly the effect sizes produced in the provider-delivered (relative to the earlier researcher-delivered) intervention.

If used alone, however, this approach fails to take full advantage of the potential scientific contributions of phase 4 effectiveness trials, which, we would argue, should also attempt to address broader questions about intervention acceptance, impact, safety, and cost. Interventions taken to the field, especially when larger sample sizes are used, can potentially be designed to assess impact on public health outcome indicators broader than self-reported behavior change alone. Reductions in STD and HIV incidence, increased (or decreased) levels of client health service utilization, reduction of unwanted or early pregnancy, and similar health outcomes can rarely be assessed directly, except in the largest of phase 3 studies, because of limited statistical power to detect such indicators of impact. As interventions are taken to the field in phase 4 trials, larger client sample sizes may afford increased statistical power to measure such public health effects.
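A standard two-proportion sample size approximation (a general statistical result, not a calculation taken from this article) makes the power constraint concrete; the 8% and 5% incidence figures below are purely illustrative.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05,
              power: float = 0.80) -> int:
    """Approximate per-arm sample size to detect a difference between two
    proportions (two-sided z test, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # term for desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Illustrative: detecting a drop in annual STD incidence from 8% to 5%
print(n_per_arm(0.08, 0.05))  # about 1,060 clients per arm
```

Samples of this size are rarely attainable in a phase 3 efficacy trial but become plausible when an intervention is fielded across an entire service program.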

An additional goal of an effectiveness trial is determining the acceptability of the intervention to the clients who receive it and to the providers who deliver it. Assessed outcomes in this domain can include client acceptance rates; client and staff satisfaction and perceived benefit; and potential unanticipated adverse events that may occur at frequencies too low to be detected in earlier stage trials. Because phase 4 trials are intended to examine the impact of an intervention on populations more heterogeneous and less rigidly selected than research participants in phase 3 trials, effectiveness studies also afford the opportunity to examine whether different outcomes are achieved among clients with differing demographic, risk, or other background characteristics.

With what should phase 4 trial outcomes be compared to reach conclusions about their effectiveness? If the intent of a phase 4 field trial is to ascertain the effectiveness of an evidence-based intervention when offered by providers to their own clients and in their own communities, a trial design question that arises involves determining with what to compare observed phase 4 intervention outcomes. Phrased differently, providers in a phase 4 trial might administer pre- and postintervention measures to their own clients who receive the intervention being evaluated. With whom or with what are potential changes found on these measures compared to determine whether the tested intervention is, in fact, effective?

Several comparison strategies are available. One is to determine the pre- to postintervention risk reduction effect sizes produced in the provider-delivered intervention and to examine descriptively the magnitude of those effect sizes compared with the effects produced by the same intervention in earlier controlled efficacy outcome studies. This approach does not take into account temporal changes that may have an impact on study outcomes, however. Another strategy is to compare the intervention’s outcomes with those produced by other programs (including present standard-of-prevention services) also offered by the provider agency. The latter approach necessitates the systematic collection of outcome data both from clients who receive the intervention being tested and from clients who participate in the comparison program. A rigorous application of this approach uses the design described earlier, in which standard-of-care and tested intervention services are randomized by day, week, or month. This approach provides comparison outcome data that control for temporal changes and allows accurate collection of the process outcomes necessary for realistic cost comparisons.14 Although data collection from a comparison group considerably increases the scope of a phase 4 trial, it also permits one to reach conclusions about the relative effectiveness of the tested intervention compared with other programs in current use.
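For the first strategy, a simple standardized pre/post effect size can be computed from provider-collected measures and set descriptively beside the published phase 3 effect size. The sketch below uses a standardized mean change statistic as one reasonable choice; the six clients' counts of unprotected acts are invented for illustration.

```python
from statistics import mean, stdev

def prepost_effect_size(pre: list, post: list) -> float:
    """Standardized mean change for paired pre/post scores:
    mean(change) / SD(change). Negative values indicate risk reduction
    when higher scores reflect more risk behavior."""
    changes = [b - a for a, b in zip(pre, post)]
    return mean(changes) / stdev(changes)

# Invented data: unprotected acts reported for the past 2 months
pre = [6, 3, 8, 4, 5, 7]
post = [2, 1, 5, 4, 3, 2]
d = prepost_effect_size(pre, post)
print(round(d, 2))  # -1.52; compare descriptively with the phase 3 value
```

Because this comparison is descriptive rather than inferential, it speaks only to whether field effects are in the same range as research-setting effects, which is why the time-period randomized comparison group remains the more rigorous option.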

In addition, to maintain high-quality programs, data collection should be efficiently incorporated into ongoing systems. National efforts are currently underway to require routine data collection, such as in the Program Evaluation and Monitoring System (PEMS) for all organizations receiving HIV funding from the Centers for Disease Control and Prevention (CDC).15 Most service providers presently lack adequate infrastructure to manage data collection and reporting, however. New systems that allow real-time data collection on tablet personal computers (PCs),16 cell phones,17 or personal digital assistants (PDAs)18 could facilitate data collection, QC, and evaluation reports, minimizing staff time needed for data entry and report preparation. Such new technologies can enhance prospects for phase 4 evaluations of interventions with little negative impact on program delivery.

How large should a phase 4 trial be, and how can phase 4 trials be made feasible in an era of research funding constraints? There is general consensus that phase 4 effectiveness trials should seek to enroll diverse, heterogeneous samples larger than those typically enrolled in phase 3 outcome studies.11 The sample size requirement of an effectiveness trial is determined, as in any other study, by the expected effect size of the intervention on measures of primary interest and by the nature of comparisons to be made with other standards of care. Additional considerations must also be taken into account, however. If an effectiveness trial enrolls participants attending different clinics, accessed in different settings, or seen by different counselors, participant data may not constitute independent observations because clients are nested within clinics, settings, or counselors. In such instances, effectiveness trial statistical power is determined not just by the total number of enrolled clients but by the number of clinic, setting, or counselor units represented. Additionally, sample size determination is influenced by whether there are plans to analyze outcomes in relation to differences in client risk, background, or demographic characteristics. Sufficient sample sizes are needed to support such subanalyses.
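The clustering penalty described above is conventionally quantified with the design effect, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intraclass correlation. The sketch below applies this standard formula; the client counts and the 0.05 ICC are assumed values for illustration only.

```python
from math import ceil

def design_effect(cluster_size: float, icc: float) -> float:
    """DEFF = 1 + (m - 1) * ICC for roughly equal-sized clusters, where m
    is the average number of clients per clinic, setting, or counselor."""
    return 1 + (cluster_size - 1) * icc

def clustered_n(n_individual: int, cluster_size: float, icc: float) -> int:
    """Total sample size needed once clustering is accounted for, given
    the N an individually randomized design would require."""
    return ceil(n_individual * design_effect(cluster_size, icc))

# Assumed figures: a phase 3 result suggests 400 clients would suffice;
# clients are nested within counselors seeing ~25 clients each (ICC 0.05)
print(clustered_n(400, 25, 0.05))  # -> 880, more than double
```

The example shows why adding more counselors or sites, rather than more clients per counselor, is often the more efficient way to recover statistical power in a clustered phase 4 design.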

What measures should be used to assess sustainability? Collection of real cost and population-based effectiveness outcomes may yield data that allow health care funders to prioritize spending. In addition, it may be worthwhile to conduct phase 4 effectiveness studies in collaboration with “social enterprise” businesses—organizations that trade in goods or services and link that trade to a social mission—to ensure sustainable dissemination of effective interventions. Such for-profit or not-for-profit enterprises have clearly defined business models to promote sustainable and scalable services. Some social enterprises use mixed financing models for service delivery: higher cost services are provided to populations with resources to subsidize the care of people without resources. A successful example of this model is David Green’s Project Impact,19 which aims to make medical technology and health care services (eg, cataract surgery, low-cost hearing aids, other basic medical technology) accessible and affordable to people in the developing world in a self-sustaining manner. Private and sliding-scale delivery models produce enough profit to allow providers also to offer services to persons without resources. Other social enterprise models pair lucrative businesses with HIV prevention delivery programs,20 relying on market demand and standard business structures to scale, disseminate, and sustain their programs. In place of traditional nonprofit models that rely solely on grant funding and leave organizations in a constant struggle for survival, new social enterprises must be developed for the sustainable dissemination of effective HIV prevention interventions.

Finally, AIDS service providers in the United States and abroad are increasingly expected by program funders to use evidence-based interventions in their activities.21 Given that service providers are often being expected to conduct the same kinds of interventions in their service programs that researchers may seek to study in phase 4 trials, there are excellent prospects for researcher/provider collaboration. Historically, these types of collaborations have been limited by funding streams for service provision that commonly allow only 5% of the budget to be spent on evaluation. To facilitate well-designed effectiveness studies that leverage existing dollars spent on service provision, new collaborations are required between funders of research and funders of services to foster joint applications from researchers and service delivery partners. This type of collaboration can leverage the expertise and dollars of both communities, ultimately resulting in well-designed field effectiveness studies of large-scale HIV prevention interventions.

CONCLUSIONS

Great advances have been made in the development of theory-based culturally tailored behavioral interventions to help persons reduce risk for contracting HIV infection. Interventions that have been shown to be efficacious in well-controlled experimental studies must be disseminated to service providers, and the effectiveness of the interventions studied under diverse “real-world” conditions must be evaluated. The development of approaches, methodologies, and analytic models for designing and implementing phase 4 effectiveness trials should allow the field to reach firm conclusions about the impact and benefits of behavioral HIV prevention interventions when used by service providers.

Acknowledgments

Preparation of this article was supported by grant P30-MH52776 from the National Institute of Mental Health.

REFERENCES

1. Albarracin D, Durantini MR, Earl A. Empirical and theoretical conclusions of an analysis of outcomes of HIV prevention interventions. Curr Dir Psychol Sci. 2006;15:73–78.
2. Exner TM, Seal DW, Ehrhardt AA. A review of HIV prevention interventions for women. AIDS Behav. 1997;1:93–124.
3. Herbst JH, Sherba RT, Crepaz N, et al; HIV/AIDS Prevention Research Synthesis Team. A meta-analytic review of HIV behavioral interventions for reducing sexual risk behavior of men who have sex with men. J Acquir Immune Defic Syndr. 2005;39:228–241.
4. Holtgrave DR, Curran JW. What works and what remains to be done in HIV prevention in the United States. Annu Rev Public Health. 2006;27:261–275. doi:10.1146/annurev.publhealth.26.021304.144454.
5. Kalichman SC, Carey MP, Johnson BT. Prevention of sexually transmitted HIV infection: a meta-analytic review of the behavioral outcome literature. Ann Behav Med. 1996;18:6–15. doi:10.1007/BF02903934.
6. Centers for Disease Control and Prevention. Compendium of HIV Prevention Interventions with Evidence of Effectiveness. Atlanta, GA: CDC; November 1999; revised August 31, 2001.
7. Kelly JA, Somlai AM, DiFranceisco WJ, et al. Bridging the gap between the science and service of HIV prevention: transferring effective research-based HIV prevention interventions to community AIDS service providers. Am J Public Health. 2000;90:1082–1088. doi:10.2105/ajph.90.7.1082.
8. Kelly JA, Somlai AM, McAuliffe TL, et al. Distance communication transfer of HIV prevention interventions to service providers. Science. 2004;305:1953–1955. doi:10.1126/science.1100733.
9. Kelly JA, Heckman TG, Stevenson LY, et al. Transfer of research-based HIV prevention interventions to community service providers: fidelity and adaptation. AIDS Educ Prev. 2000;12(Suppl A):87–98.
10. Kelly JA, Sogolow ED, Neumann MS. Future directions and emerging issues in technology transfer between HIV prevention researchers and community-based service providers. AIDS Educ Prev. 2000;12(Suppl A):126–141.
11. Flay BR. Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Prev Med. 1986;15:451–474. doi:10.1016/0091-7435(86)90024-1.
12. Kalichman SC, Rompa D, Coley B. Experimental component analysis of a behavioral HIV/AIDS prevention intervention for inner-city women. J Consult Clin Psychol. 1996;64:687–693. doi:10.1037//0022-006x.64.4.687.
13. Spielberg F, Reidy B, Marlatt A. The importance of choice: an RCT of client-centered counseling vs. written educational materials with oral fluid testing at a needle exchange. Presented at: International Society for Sexually Transmitted Disease Research; July 2007; Seattle, WA.
14. Spielberg F, Branson BM, Goldbaum GM, et al. Choosing HIV counseling and testing strategy for outreach settings: a randomized trial. J Acquir Immune Defic Syndr. 2005;38:348–355.
15. Thomas CW, Smith BD, Wright-DeAguero L. The Program Evaluation and Monitoring System: a key source of data for monitoring evidence-based HIV prevention program processes and outcomes. AIDS Educ Prev. 2006;18(Suppl A):74–80. doi:10.1521/aeap.2006.18.supp.74.
16. Mackenzie SLC, Kurth AE, Spielberg F, et al. Patient and staff perspectives on the use of a computer counseling tool for HIV and sexually transmitted infection risk reduction. J Adolesc Health. 2007;40:572.e9–572.e16. doi:10.1016/j.jadohealth.2007.01.013.
17. Curioso WH. Cell phones as a health care intervention in Peru: the Cell-PREVEN project. Globalization and Health. October 12, 2006. Available at: http://www.globalizationandhealth.com/content/2/1/9/comments. Accessed December 2, 2007.
18. Krishnamurthy R, Frolov A, Wolkon A, et al. Application of preprogrammed PDA devices equipped with global GPS to conduct paperless household surveys in rural Mozambique. AMIA Annual Symposium Proceedings. Bethesda, MD: American Medical Informatics Association; 2006:991.
19. Westall A, Chalkley D. Social Enterprise Futures. London, England: Smith Institute; 2007. Available at: http://www.socialenterprise.org.uk/cms/documents/Smith_Institute_Social_Enterprise.pdf. Accessed December 2, 2007.
20. D’Agnes T. From Condoms to Cabbages: An Authorized Biography of Mechai Viravaidya. Bangkok, Thailand: Bangkok Post Books; 2001.
21. Collins C, Harshbarger C, Sawyer R, et al. The Diffusion of Effective Behavioral Interventions Project: development, implementation, and lessons learned. AIDS Educ Prev. 2006;18:5–20. doi:10.1521/aeap.2006.18.supp.5.
