Author manuscript; available in PMC 2016 Dec 1. Published in final edited form as: Addict Disord Their Treat. 2015 Dec;14(4):211–219. doi: 10.1097/ADT.0000000000000047

Recruitment techniques for alcohol pharmacotherapy clinical trials: A cost-benefit analysis

D Andrew Tompkins 1, Jessica A Sides 2, Joseph A Harrison 1, Eric C Strain 1
PMCID: PMC4704795  NIHMSID: NIHMS575266  PMID: 26752979

Abstract

Objectives

Alcohol use disorders (AUDs) represent a large public health burden with relatively few efficacious pharmacotherapies. Randomized controlled trials (RCTs) for new AUD therapies can be hampered by ineffective recruitment, leading to increased trial costs. The current analyses examined the effectiveness of recruitment efforts during two consecutive outpatient RCTs of novel AUD pharmacotherapies conducted between 2009 and 2012.

Methods

During an initial phone screen, participants identified an ad source for learning about the study. Qualified persons were then scheduled for in-person screens. The present analyses examined demographic differences amongst the eight ad sources utilized. Recruitment effectiveness was determined by dividing the number of persons meeting criteria for an in-person screen by the total number of callers from each ad source. Cost-effectiveness was determined by dividing total ad source cost by number of screens, participants randomized, and completers.

Results

A total of 1,813 calls resulted in 1,005 completed phone screens. The most common ad source was TV (34%), followed by print (29%), word-of-mouth (11%), flyer (8%), internet (5%), radio (5%), bus ad (2%), and billboard (1%). Participants reporting bus ads (46%), the billboard (44%), or print ads (34%) were significantly more likely than those reporting other sources to meet criteria to be scheduled for in-person screens. The most cost-effective ad source was print ($2,506 per completer), while bus ads were the least cost-effective ($13,376 per completer).

Conclusions

Recruitment in AUD RCTs can be successful using diverse advertising methods. The present analyses favored use of print ads as most cost-effective.

Keywords: alcohol use disorders, clinical trials, recruitment, advertisement methods

INTRODUCTION

Alcohol use disorders (AUDs) are highly prevalent in the United States and worldwide. The World Health Organization estimates that 76 million persons meet diagnostic criteria for AUDs and 2.5 million individuals die yearly from alcohol-related causes 1. Given the global burden of disease and the heterogeneous causes of AUDs across individuals, it is imperative that optimal and individualized treatments are available to meet the needs of patients in a wide variety of settings. Although four accepted medications are available for the treatment of AUDs in the United States and Europe (disulfiram, oral naltrexone, extended-release naltrexone, and acamprosate) 2, these medications do not work for the majority of patients in clinical practice, and new pharmacotherapies are needed.

Randomized controlled trials (RCTs) are the gold standard in proving efficacy and obtaining regulatory approval for the use of novel therapies in the treatment of disorders. However, RCTs are expensive and can be hampered by slow recruitment. In fact, poor participant recruitment is one of the most common barriers faced in addiction treatment clinical trials 3. Although ease of recruitment can be optimized in the design phase (e.g., use of less restrictive inclusion/exclusion criteria, limited matching strategies), safety and other protocol-specific concerns may limit the use of these methods.

Recruitment methods in trials of AUD pharmacotherapies have received relatively little attention in the scientific literature, even though governmental agencies like the National Institute on Alcohol Abuse and Alcoholism (NIAAA) and industry have made a strong financial commitment to funding these trials 4. One study examined telephone calls regarding participation in an AUD clinical trial and found that people referred by friends/family, yellow pages, or newspaper ads had the highest likelihood of meeting criteria for an in-person screen compared with TV, radio, and the internet 5. However, that study did not examine recruitment effectiveness beyond the phone screen qualification, nor did it examine cost-effectiveness. In addition, that study examined persons with a wide variety of substance use disorders and did not focus exclusively on AUDs. A secondary analysis of an Australian pharmacotherapy RCT found no significant differences in treatment retention or drinking outcomes between four recruitment strategies – inpatient referral, outpatient referral, live media (i.e., TV and radio), or print media 6. The two largest AUD trials in the United States, MATCH (Matching Alcoholism Treatments to Client Heterogeneity) and COMBINE (Combining Medications and Behavioral Interventions), have published only descriptions of recruitment methods without any objective data on their efficacy or cost-effectiveness, which would help to guide future trial planning 7, 8.

Therefore, the purpose of this analysis was to examine the effectiveness of eight different advertising methods in attracting eligible participants in two consecutive NIAAA funded Phase II clinical trials for the treatment of very heavy drinking. Effectiveness was examined in terms of (1) attraction of analyzable study participants (persons who met study entry criteria, were randomized, and completed the trials) and (2) costs associated with the advertising methods.

MATERIAL AND METHODS

The parent clinical trials were conducted in accordance with Good Clinical Practices (International Conference on Harmonisation, 1996) and were registered on clinicaltrials.gov (NCT00970814 and NCT01146613). Written informed consent was obtained from all participants before engaging in study related activities. Eligibility criteria as well as a complete description of the clinical trials are described in detail elsewhere 9, 10. Both trials had similar inclusion/exclusion criteria and lasted for 16 weeks. Separate IRB approval waiving the right to written informed consent was obtained to collect and analyze the present phone screen and enrollment data.

Participants

Both clinical trials were multisite projects. Data presented here are from one site in Baltimore, MD – the Behavioral Pharmacology Research Unit (BPRU). As recruitment effectiveness was not a primary outcome for the parent trials, data regarding advertising cost-effectiveness from the other sites were unavailable for analysis. All participants were recruited between September 2009 and January 2012 from the greater Baltimore-Washington, DC metropolitan area. Interested persons called the BPRU, which had one phone line exclusively designated for these studies. Research assistants then completed a brief set of standardized questions asking about drinking habits and willingness to participate in a 16-week outpatient clinical trial. During this screen, callers were asked how they heard about the study (“ad source”). If calls were not answered immediately, an attempt was made to return all calls within 48 hours. Persons who met eligibility criteria on the phone screen were scheduled for an in-person screening visit. Each RCT included a 14-day observation period prior to randomization, during which a person had to self-report at least one heavy drinking day (5 or more drinks per drinking day for men, 4 or more for women) in the levetiracetam ER RCT or at least two heavy drinking days in the subsequent varenicline RCT. Completers in this analysis were defined as persons who completed all 16 weeks of study visits following randomization.
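
To make the eligibility logic concrete, the following minimal Python sketch applies the sex-specific heavy-drinking thresholds and the 14-day observation requirement described above. The function names, diary format, and example values are hypothetical illustrations and are not part of the study software.

```python
def is_heavy_drinking_day(drinks: int, sex: str) -> bool:
    """A heavy drinking day: >= 5 drinks for men, >= 4 for women (per the text above)."""
    threshold = 5 if sex.lower() == "male" else 4
    return drinks >= threshold


def meets_observation_criterion(daily_drinks, sex, required_heavy_days):
    """daily_drinks: self-reported drinks on each of the 14 observation days.
    required_heavy_days: 1 (levetiracetam ER RCT) or 2 (varenicline RCT)."""
    heavy_days = sum(is_heavy_drinking_day(d, sex) for d in daily_drinks)
    return heavy_days >= required_heavy_days


# Hypothetical 14-day diary for a male caller with one heavy drinking day (6 drinks)
diary = [0, 2, 6, 0, 1, 0, 0, 3, 0, 0, 2, 0, 1, 0]
print(meets_observation_criterion(diary, "male", required_heavy_days=1))  # True
print(meets_observation_criterion(diary, "male", required_heavy_days=2))  # False
```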

Recruitment Plan

BPRU’s recruitment plan for both studies was developed in consultation with an outside advertising agency familiar with the local media market. Advertisements were designed to target heavy drinkers at places and times that they were likely to consider getting help for their drinking, e.g. late night TV ads for people who have consumed too much alcohol, early morning radio and TV ads for people with a hangover getting ready for work, or billboards on heavily trafficked roads. Baltimore, MD is a relatively small city with a population of 620,000, but the 2008 Baltimore-Washington, DC metropolitan area had a population of 8.2 million 11. This metropolitan area has urban, suburban, and rural subparts, and the advertising strategy recruited from all three subparts in these clinical trials.

Recruitment sources included (1) television ads on network affiliates, (2) print ads in local specialized free publications, subscription magazines catering to suburban audiences, and the daily periodical in Baltimore, (3) bus ads in Baltimore, (4) a billboard (located adjacent to the main highway leading into the downtown Baltimore area), (5) flyers posted by a team of BPRU research assistants around Baltimore, (6) internet (www.getcontrol.org, Google AdWords, www.clinicaltrials.gov, and Baltimore area business email list serve), (7) radio, and (8) word-of-mouth. Participants did not receive a financial benefit for word-of-mouth referrals. Recruitment was monitored in weekly meetings attended by the PI, study coordinator, research assistants, post-doctoral fellow, and student interns. Changes in advertising strategy were made on a monthly basis. Billboard and radio ads were not purchased for the varenicline RCT based upon review of recruitment data following the levetiracetam ER RCT.

Advertisement Costs

Ads were placed both by BPRU’s centralized recruitment team (majority of print ads, internet and flyers) and by the outside ad agency (radio, billboard, bus, some print ads, and television). Costs reported here included charges for production and running of ads as well as fees charged by the ad agency for placement, and were maintained at the site as part of the ongoing budgetary monitoring process. NIAAA produced and distributed a television commercial and professional website (www.getcontrol.org) exclusively to be used in these clinical trials; these production costs were paid by NIAAA and not the site. There were minimal costs associated with individualizing the television commercial for the Johns Hopkins site (e.g., adding the Hopkins name, logo and contact information), which were included in the present analyses.

Data Analysis

If a participant reported more than one advertising modality, the first source reported was used. Differences in demographics across advertising sources were analyzed with Fisher exact and chi-square tests for categorical variables and one-way analysis of variance (ANOVA) for continuous variables. Recruitment effectiveness was determined by dividing the number of persons meeting criteria for an in-person screen by the total number of callers from each ad source. A chi-square test then examined differences in recruitment effectiveness amongst the ad sources. Cost-effectiveness was examined by dividing the total cost of the advertising method by the number of screens, persons randomized, and completers.
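
A minimal sketch of these calculations, assuming hypothetical caller counts, qualification counts, and costs (not the study data), might look like the following in Python with SciPy:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: callers per ad source and how many qualified for an IPS
callers   = {"source_a": 400, "source_b": 250, "source_c": 150}
qualified = {"source_a": 135, "source_b": 60,  "source_c": 40}

# Recruitment effectiveness = qualified callers / total callers, per source
effectiveness = {s: qualified[s] / callers[s] for s in callers}
print(effectiveness)

# Chi-square test on qualified vs. not-qualified counts across sources
table = [[qualified[s], callers[s] - qualified[s]] for s in callers]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.3f}")

# Cost-effectiveness = total ad-source cost / number of screens, randomizations,
# or completers (illustrative figures)
cost = 20_000
for label, n in [("phone screen", 400), ("in-person screen", 135),
                 ("randomized", 40), ("completer", 30)]:
    print(f"${cost / n:,.0f} per {label}")
```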

RESULTS

Participants Screened

There were a total of 1,813 calls during the course of active recruitment, of which 1,005 (55%) resulted in a completed phone screen (Figure 1). Of the 1,005, 274 (27%) met screening criteria and were scheduled for an in-person screen. Sixty-eight percent (187/274) of these persons attended the scheduled screen. Thirty-six percent of those who attended an in-person screen (67/187) met inclusion/exclusion criteria and were randomized, and 78% of randomized participants were retained (completed all 16 weeks of the trial).
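
The funnel percentages above follow directly from the reported counts; the short sketch below reproduces the arithmetic (the completer count is back-calculated from the 78% retention figure and is therefore approximate):

```python
calls          = 1813
phone_screens  = 1005
scheduled_ips  = 274
attended_ips   = 187
randomized     = 67
retention_rate = 0.78   # reported retention among randomized participants

print(f"completed phone screens: {phone_screens / calls:.0%}")          # ~55%
print(f"scheduled for IPS:       {scheduled_ips / phone_screens:.0%}")  # ~27%
print(f"attended IPS:            {attended_ips / scheduled_ips:.0%}")   # ~68%
print(f"randomized:              {randomized / attended_ips:.0%}")      # ~36%
print(f"approx. completers:      {round(randomized * retention_rate)}") # ~52
```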

FIGURE 1. CONSORT diagram of all calls received.

Demographics

Phone screens

The only demographic variable collected during phone screens was age (in order to make an initial determination of eligibility). Relative to the average age of persons assessed during a phone screen (46.3 years, SD 11.1), participants reporting TV (50.6, SD 9.4), billboard (48.2, SD 10.8), or radio (47.4, SD 9.2) ad sources were slightly older. Persons recruited from the internet (38.0, SD 13.0), word of mouth (41.1, SD 12.0), or who did not identify an ad source (42.6, SD 12.9) were slightly younger. Age differed significantly across ad sources on ANOVA (F=15.82, df=8, p=0.002).
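
For illustration, a one-way ANOVA of this kind could be run as follows; the age samples and group sizes below are simulated for the example (loosely based on the means and SDs above) and are not the study data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Simulated age samples per ad source; group sizes are invented
ages_by_source = {
    "tv":            rng.normal(50.6, 9.4, 340),
    "billboard":     rng.normal(48.2, 10.8, 15),
    "radio":         rng.normal(47.4, 9.2, 50),
    "internet":      rng.normal(38.0, 13.0, 50),
    "word_of_mouth": rng.normal(41.1, 12.0, 110),
}

f_stat, p_value = f_oneway(*ages_by_source.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```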

In-person screens (IPS) and randomizations

Persons who participated in an in-person screen provided information on their sex, race, years of education, marital status, and annual household income. Of those persons attending an in-person screen, there was no significant difference between ad sources as a function of sex (p=0.072) (Table 1). However, there were significant differences between ad sources for age (p<0.001), race (p<0.001), years of education (p<0.001), marital status (p<0.001), and annual household income (p<0.001) (Table 1). Of the 187 participants who completed an in-person screen, 67 were randomized into an RCT. The significant demographic differences between ad sources for these 67 participants were race (p<0.001), marital status (p<0.001), and income (p<0.001).

Table 1. Demographic Differences between Ad Sources for In-Person Screens (IPS) and Persons Randomized (Rand).

| Characteristic | Print Ad IPS (N=62) | Print Ad Rand (N=19) | TV IPS (N=60) | TV Rand (N=24) | Word of Mouth IPS (N=21) | Word of Mouth Rand (N=8) | Internet IPS (N=10) | Internet Rand (N=4) | Flyer IPS (N=9) | Flyer Rand (N=3) | Radio IPS (N=8) | Radio Rand (N=4) | Bus Ad IPS (N=7) | Bus Ad Rand (N=2) | Billboard IPS (N=3) | Billboard Rand (N=2) | Unknown IPS (N=7) | Unknown Rand (N=1) | Total IPS (N=187) | Total Rand (N=67) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Sex (% Female) | 26 | 21 | 35 | 42 | 5 | 13 | 10 | 0 | 22 | 33 | 0 | 0 | 29 | 50 | 0 | 0 | 29 | 0 | 24 | 25 |
| Race (% Non-Caucasian) | 73 | 79 | 17 | 17 | 48 | 50 | 30 | 50 | 56 | 33 | 13 | 0 | 43 | 50 | 0 | 0 | 71 | 100 | 44* | 42* |
| Age – years (SD) | 45.7 (10.4) | 42.3 (11) | 50.7 (9.9) | 49.3 (9.2) | 37.2 (12.8) | 34.9 (5.4) | 40.2 (13.0) | 40.8 (15.5) | 47.1 (11.3) | 40.1 (17.8) | 44.6 (9) | 47.3 (13) | 45.3 (11.4) | 44.5 (3.5) | 42.5 (7.5) | 41.2 (10.2) | 43.7 (13.3) | 32 (0) | 46.0* (11.4) | 43.9 (11) |
| Education – years (SD) | 12.9 (2) | 13.4 (2.1) | 14.7 (2.2) | 14.4 (1.2) | 12.5 (2.1) | 12.9 (2.1) | 13.8 (2.3) | 11.8 (1.3) | 13.4 (2.4) | 13.3 (2.3) | 14.4 (2.2) | 14.3 (3.1) | 14 (2.3) | 13 (1.4) | 15.7 (1.5) | 15 (1.4) | 12.7 (2.6) | 11 (0) | 13.6 (2.3) | 13.6 (1.9) |
| Marital status (%): Never married | 45 | 53 | 13 | 8 | 67 | 63 | 20 | 25 | 33 | 33 | 0 | 0 | 29 | 0 | 0 | 50 | 71 | 100 | 33* | 30* |
| Marital status (%): Married/partnered | 11 | 21 | 50 | 67 | 0 | 0 | 40 | 50 | 33 | 67 | 63 | 75 | 57 | 100 | 67 | 50 | 29 | 0 | 30* | 45* |
| Marital status (%): Divorced/separated | 40 | 26 | 27 | 25 | 19 | 25 | 30 | 25 | 22 | 0 | 37 | 25 | 14 | 0 | 33 | 0 | 0 | 0 | 29* | 24* |
| Household income (%): $0–15,000 | 32 | 21 | 7 | 4 | 38 | 63 | 10 | 25 | 22 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 29 | 0 | 22* | 16* |
| Household income (%): $15,001–30,000 | 39 | 53 | 12 | 4 | 24 | 25 | 10 | 0 | 11 | 33 | 25 | 50 | 43 | 50 | 0 | 0 | 14 | 0 | 26* | 25* |
| Household income (%): $30,001–45,000 | 16 | 11 | 10 | 4 | 14 | 12 | 20 | 50 | 22 | 33 | 0 | 0 | 14 | 0 | 0 | 0 | 29 | 0 | 15* | 10* |
| Household income (%): $45,000–90,000 | 6 | 11 | 25 | 33 | 5 | 0 | 30 | 25 | 11 | 33 | 25 | 0 | 29 | 0 | 33 | 50 | 0 | 0 | 17* | 19* |
| Household income (%): >$90,000 | 2 | 5 | 33 | 50 | 0 | 0 | 20 | 0 | 11 | 0 | 50 | 50 | 14 | 50 | 67 | 50 | 29 | 100 | 20* | 27* |
* p < 0.05. Significance was determined by one-factor ANOVA for continuous variables and chi-square or Fisher exact tests for categorical variables. Some percentages do not add up to 100% due to missing data for that demographic characteristic or rounding.

Recruitment Effectiveness by Ad Source

The most frequently mentioned ad source for all callers was TV (34%), followed by print (29%), word-of-mouth (11%), flyer (8%), internet (5%), radio (5%), bus ad (2%), and billboard (1%) (Figure 2a). Only 51 participants (5%) failed to provide a source of hearing about the study. There was a significant difference between ad sources in the percentage of total callers from each source qualifying and being scheduled for an in-person screen [χ2(8, N=1005)=23.9, p=0.002]. Print ads produced the largest number of callers who qualified and were scheduled for an in-person screen (N=100; 34% of callers reporting that source), followed by TV (N=87; 25%) and word of mouth (N=28; 26%) (Figure 2b). Although representing only 3% of all phone screens, participants reporting bus ads or the billboard as their ad source were the most likely to qualify for an in-person screen (46% and 44%, respectively). Flyers and word of mouth accounted for 18% of all phone screens; only 17% of persons reporting flyers qualified for an in-person screen, whereas word of mouth was more successful (26%).
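
As a rough check of how these source-level qualification rates are derived, the sketch below recomputes them from the published share of phone screens per source; because it starts from rounded percentages, the results are approximate:

```python
total_screens    = 1005
share_of_screens = {"print": 0.29, "tv": 0.34, "word_of_mouth": 0.11}  # from Figure 2a
scheduled_ips    = {"print": 100,  "tv": 87,   "word_of_mouth": 28}

for src, share in share_of_screens.items():
    approx_callers = total_screens * share
    rate = scheduled_ips[src] / approx_callers
    print(f"{src}: ~{rate:.0%} of callers scheduled for an IPS")
# print ~34%, TV ~25%, word of mouth ~25% (reported as 26%; the difference
# reflects rounding of the published percentages used here)
```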

FIGURE 2. Recruitment by ad source. IPS = in-person screen. A) The breakdown of all phone screens by ad source, with TV and print ads accounting for 64% of total phone screen volume. B) Scheduled IPS = the percentage of phone screens from an ad source that qualified and were scheduled for an IPS. Randomized = the percentage of persons scheduled for an IPS from an ad source who were randomized into an RCT. Completed = the percentage of persons randomized from an ad source who completed the study.

Of those who qualified for an in-person screen, there were no significant differences on Fisher’s exact test between ad sources in the percentage of persons randomized (p=0.583) or the percentage of persons who completed the study (p=0.32).

Cost Effectiveness of Ad Sources

A total of $175,000 was spent on advertising for the BPRU site over the course of the two RCTs. Of all the ad sources, the most money was spent on TV ads (approximately $75,000). Although TV ads generated the highest number of interested callers at a relatively low cost ($216 per phone screen), the majority of callers did not qualify for the study, and the costs per in-person screen ($1,245), person randomized ($3,113), and study completer ($4,151) were much higher than for print ads (though lower than for most other modalities) (Figure 3). Of all ad sources that cost money, print ads led to the second-largest number of calls and were the most cost-effective ($128 per phone screen, $606 per in-person screen, $1,978 per person randomized, and $2,505 per completer). The single billboard, displayed for two months during the levetiracetam ER RCT, was the third most expensive modality used ($21,662), amounting to 12% of the total combined advertising budget. The billboard was placed at a strategic spot on the main artery into the downtown area but led to only nine phone screens, three in-person screens, and two completers ($10,831 per completer). Interestingly, two participants in the varenicline RCT reported learning about the study from seeing the billboard several months earlier during the prior levetiracetam ER RCT. One of those persons was a varenicline RCT completer.
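
Two of these cost-per-outcome figures can be reproduced directly from the numbers reported in this section; the sketch below shows the arithmetic (the TV spend is described only as approximately $75,000, so the per-screen figure differs slightly from the reported $216):

```python
tv_spend, tv_callers = 75_000, 346          # "approximately $75,000" and N=346 callers
print(f"TV: ${tv_spend / tv_callers:,.0f} per phone screen")
# ~$217 with these rounded inputs; reported as $216

billboard_spend, billboard_completers = 21_662, 2
print(f"Billboard: ${billboard_spend / billboard_completers:,.0f} per completer")
# $10,831, matching the figure reported above
```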

FIGURE 3. Cost-effectiveness by ad source for phone screens, in-person screens, randomization, and study completion. Bus ads were not cost-effective in recruiting eligible study participants, with each completer costing in excess of $13,000 in advertising expenses alone. Print ads were the most cost-effective, with each completer costing approximately $2,500.

Although there was no cost for the study or clinicaltrials.gov websites to investigators, $8,300 was spent in placing email list serve ads to area business professionals during the levetiracetam ER RCT and $1,020 on Google AdWords during the varenicline RCT. No phone screens resulted from the list serve; all screens reporting the internet as the ad source came from the study website or Google AdWords.

DISCUSSION

Eligible participants for pharmacological intervention clinical trials of AUDs can be successfully recruited through a comprehensive and targeted advertising strategy. In this analysis, more than half of the advertising dollars for two AUD pharmacotherapy trials were spent on two sources – TV and print ads. Not surprisingly, the majority of completed phone screens reported hearing about the study from these two sources. However, when examining the effectiveness of recruiting eligible participants, print ads produced the largest number of callers qualifying for in-person screens, as well as the second-largest number of persons randomized. Of those ad sources that cost money, print ads were also the most cost-effective strategy (approximately $2,500 per completer in advertising dollars). In contrast, the professionally developed TV commercial resulted in a high number of interested callers (N=346), but only 17% of those calls led to in-person screens. Additionally, the TV costs presented in this study are an underestimate, as the site investigators did not pay commercial production costs. Bus ads were expensive and the least cost-effective strategy for study recruitment, providing only one completer across both RCTs.

Analysis of recruitment outcomes should not be exclusively concerned with costs and timelines. Recruitment strategies should also be judged by the final study population: was it similar to the target population of heavy drinkers, and are the statistical inferences generalizable? In this study, the majority of print ads were in the local free weekly newspapers, and these ads reached participants who were usually of lower socioeconomic status (SES), not in a committed relationship, and African American (Table 1) – demographic variables known to influence inclusion and outcomes in AUD pharmacologic clinical trials 12–15. Additionally, most in-person screens reporting TV as their ad source were Caucasian, perhaps influenced by the commercial; although it was shot in black-and-white, the ad featured a middle-aged male with uncertain racial features. Nevertheless, the final site population contained 42% non-Caucasian participants, a wide spread of SES, and a good distribution of marital statuses – increasing the demographic variability in the two larger multi-site study populations 9, 10. This analysis provides evidence for the use of diverse advertising sources when trying to recruit a study sample similar to the target population of AUD clinical trials. Use of one ad source would have limited the diversity of study participants and the generalizability of the study findings.

These results do have limitations. First, persons may have needed to see advertisements in several locations or several times before finally calling for a screen, as product advertising theory suggests 16. These analyses allowed a person to give only one ad source and did not assess frequency of exposure to an ad before calling. Although a person may have seen both the billboard and the TV commercial, he or she may have listed TV as the ad source because it was seen most proximal to the phone screen. Second, BPRU and other Baltimore area drug abuse research programs have historically used the local free weekly newspapers to advertise RCTs, so people seeking treatment for AUDs and open to clinical trial participation may already know to look there. Branching out into live media (TV and radio) may take several years to increase the number of qualified phone screens from these strategies as awareness of these advertising sources grows, as proposed by the theory of effective frequency in advertising 17. There certainly was a trend for TV to become more effective at reaching qualified persons: only 11% of TV phone screens led to in-person screens in the first (levetiracetam ER) RCT, whereas this number increased to 19% in the second (varenicline) RCT. In addition, the impact of the internet on recruitment more than doubled in the varenicline RCT, with 35 phone screens listing the internet as the ad source (compared with 16 in the levetiracetam ER RCT). As access to the internet increases 18 – even among persons with alcohol and other drug use disorders 19 – it may become an important tool for reducing the cost of recruitment in future AUD RCTs. Third, this was an analysis of one site in two multisite clinical trials; these results may not be generalizable to other research locations, even within these same clinical trials. Fourth, these analyses were done across studies using two distinct pharmacological agents. Although the inclusion/exclusion criteria were similar across studies, the varenicline RCT required that a person had never tried this medication in the past and that the person have two very heavy drinking days during the screening period. These slight differences may have influenced study recruitment, as there are documented racial differences in use of smoking cessation pharmacotherapies such as varenicline 20. Finally, the exact same ad sources were not used in both studies. Data from the levetiracetam ER RCT showed poor recruitment efficacy for the billboard and radio ads; consequently, more money was spent on TV and print ads during the varenicline RCT, and a billboard and radio ads were not purchased (although they may have had enduring effects on recruitment for the second study).

In conclusion, recruitment of a diverse and generalizable study population in AUD clinical trials likely requires a multi-pronged advertising strategy. Knowledge of local media markets is important, and consultation with an ad agency may help ensure adequate study power and a study population that closely resembles the target population. The most effective ad source for recruiting eligible participants in an alcohol RCT may be print ads, especially those in local free weekly papers. Higher-cost billboard, bus, and radio ads may not be cost-effective and should be used cautiously, only if other sources are unavailable or have proven ineffective.

Acknowledgments

Source of Funding: This research was funded by the National Institute on Drug Abuse (NIDA) 5T32 DA07209, K24 DA023186, and K23 DA029609, as well as two subcontracts from the National Institute on Alcohol Abuse and Alcoholism (NIAAA) administered by Fast-Track Drugs and Biologics, LLC.

We would like to thank the nursing and research staff at BPRU who assisted with the conduct of the clinical trial, specifically Jenna Cohen, Kym Nelson, and Brendan Burke.

Footnotes

Conflicts of Interest

The authors have no relevant conflicts of interest to declare.

References

1. World Health Organization. Global status report on alcohol 2004. Geneva: WHO; 2004.
2. Edwards S, Kenna GA, Swift RM, Leggio L. Current and promising pharmacotherapies, and novel research target areas in the treatment of alcohol dependence: a review. Curr Pharm Des. 2011;17:1323–1332. doi:10.2174/138161211796150765.
3. Ashery RS, McAuliffe WE. Implementation issues and techniques in randomized trials of outpatient psychosocial treatments for drug abusers: recruitment of subjects. Am J Drug Alcohol Abuse. 1992;18:305–329. doi:10.3109/00952999209026069.
4. Litten RZ, Egli M, Heilig M, et al. Medications development to treat alcohol dependence: a vision for the next decade. Addict Biol. 2012;17:513–527. doi:10.1111/j.1369-1600.2012.00454.x.
5. Sayre SL, Evans M, Hokanson PS, et al. “Who gets in?” Recruitment and screening processes of outpatient substance abuse trials. Addict Behav. 2004;29:389–398. doi:10.1016/j.addbeh.2003.08.010.
6. Morley KC, Teesson M, Sannibale C, Haber PS. Sample bias from different recruitment strategies in a randomised controlled trial for alcohol dependence. Drug Alcohol Rev. 2009;28:222–229. doi:10.1111/j.1465-3362.2008.00022.x.
7. Zweben A, Barrett D, Berger L, Murray KT. Recruiting and retaining participants in a combined behavioral and pharmacological clinical trial. J Stud Alcohol Suppl. 2005;15:72–81. doi:10.15288/jsas.2005.s15.72.
8. Zweben A, Donovan DM, Randall CL, et al. Issues in the development of subject recruitment strategies and eligibility criteria in multisite trials of matching. J Stud Alcohol Suppl. 1994;12:62–69. doi:10.15288/jsas.1994.s12.62.
9. Litten RZ, Ryan ML, Fertig JB, et al. A double-blind, placebo-controlled trial assessing the efficacy of varenicline tartrate for alcohol dependence. J Addict Med. 2013;7:277–286. doi:10.1097/ADM.0b013e31829623f4.
10. Fertig JB, Ryan ML, Falk DE, et al. A double-blind, placebo-controlled trial assessing the efficacy of levetiracetam extended-release in very heavy drinking alcohol-dependent patients. Alcohol Clin Exp Res. 2012;36:1421–1430. doi:10.1111/j.1530-0277.2011.01716.x.
11. US Census Bureau. State and Metropolitan Area Data Book: 2010. 7th ed. Washington, DC: US Census Bureau; 2010.
12. Hughes JC, Cook CCH. The efficacy of disulfiram: a review of outcome studies. Addiction. 1997;92:381–395.
13. Kranzler HR, Van Kirk J. Efficacy of naltrexone and acamprosate for alcoholism treatment: a meta-analysis. Alcohol Clin Exp Res. 2001;25:1335–1341.
14. Humphreys K, Weisner C. Use of exclusion criteria in selecting research subjects and its effect on the generalizability of alcohol treatment outcome studies. Am J Psychiatry. 2000;157:588–594. doi:10.1176/appi.ajp.157.4.588.
15. Burlew AK, Weekes JC, Montgomery L, et al. Conducting research with racial/ethnic minorities: methodological lessons from the NIDA Clinical Trials Network. Am J Drug Alcohol Abuse. 2011;37:324–332. doi:10.3109/00952990.2011.596973.
16. Vakratsas D, Ambler T. How Advertising Works: What Do We Really Know? J Market. 1999;63:26–43.
17. Jones JP. What Does Effective Frequency Mean in 1997? J Advert Res. 1997;37:14–20.
18. International Telecommunications Union (ITU). Global ICT Developments. Available at: http://www.itu.int/en/ITU-D/Statistics/Pages/stat/default.aspx. Accessed January 8, 2014.
19. McClure EA, Acquavita SP, Harding E, Stitzer ML. Utilization of communication technology by patients enrolled in substance abuse treatment. Drug Alcohol Depend. 2013;129:145–150. doi:10.1016/j.drugalcdep.2012.10.003.
20. Ryan KK, Garrett-Mayer E, Alberg AJ, Cartmell KB, Carpenter MJ. Predictors of cessation pharmacotherapy use among black and non-Hispanic white smokers. Nicotine Tob Res. 2011;13:646–652. doi:10.1093/ntr/ntr051.
