Author manuscript; available in PMC: 2023 Sep 1.
Published in final edited form as: Addict Neurosci. 2023 Apr 11;7:100090. doi: 10.1016/j.addicn.2023.100090

Review of strategies to investigate low sample return rates in remote tobacco trials: A call to action for more user-centered design research

Roger Vilardaga a,b,*, Johannes Thrul c,d,e, Anthony DeVito a, Darla E Kendzor f, Patricia Sabo a, Tatiana Cohab Khafif a,g
PMCID: PMC10327900  NIHMSID: NIHMS1909959  PMID: 37424632

Abstract

Remote collection of biomarkers of tobacco use in clinical trials poses significant challenges. A recent meta-analysis and scoping review of the smoking cessation literature indicated that sample return rates are low and that new methods are needed to investigate the underlying causes of these low rates. In this paper we conducted a narrative review and heuristic analysis of the human factors approaches reported to evaluate and/or improve sample return rates among 31 smoking cessation studies recently identified in the literature. We created a heuristic metric (with scores from 0 to 4) to evaluate the level of elaboration or complexity of the user-centered design strategy reported by researchers. Our review of the literature identified five types of challenges typically encountered by researchers (ordered from most to least common): usability and procedural, technical (device related), sample contamination (e.g., polytobacco use), psychosocial factors (e.g., the digital divide), and motivational factors. Our review of strategies indicated that 35% of the studies employed user-centered design methods, with the remaining studies relying on informal methods. Only 6% of all reviewed studies reached a level of 3 on our user-centered design heuristic metric, and none reached the highest level of complexity (i.e., 4). This review examined these findings in the context of the larger literature, discussed the need to address the role of health equity factors more directly, and concluded with a call to action to increase the application and reporting of user-centered design strategies in biomarker research.

Keywords: Smoking cessation, Clinical trials, Biomarkers, Biochemical verification, Remote studies, Sample return rates, User-centered design, Health equity

Introduction

Biomarkers are of key strategic importance in clinical science, allowing the objective measurement of proximal pathogenic processes without the need to measure distal clinical outcomes, such as cancer, cardiovascular disease, or other serious health consequences [1]. Their use was envisioned as a tool to improve the efficiency of the clinical research enterprise [1]. In the area of tobacco control, and more specifically in intervention studies that aim to change tobacco use behaviors, biomarkers of tobacco use (commonly referred to as biochemical verification of smoking status) have been used as a surrogate endpoint for risk of cancer or tobacco-related disease [2].

Biochemical verification of smoking status has fueled much controversy over the decades [3–5]. Some researchers have emphasized the public health dimension of tobacco science and made the case that biochemical verification might offer little advantage over self-reports in studies with low demand characteristics [4,5]. Other researchers have argued that participants may misrepresent their smoking status and therefore that biomarkers are necessary [3]. While this debate is still ongoing, the most recent expert consensus from a taskforce of the Society for Research on Nicotine and Tobacco (SRNT) [2] recommended biochemical verification for all cessation studies, with the understanding that in some study designs (e.g., very large clinical trials, minimal-contact interventions), this verification process may not be feasible.

As new modalities of intervention delivery emerge (e.g., digital interventions) and more trials are conducted remotely, improving the quality of remote biomarker collection procedures has become increasingly important. A recent meta-analysis and scoping review of the literature by Thrul et al. [6] evaluated studies reporting the use of remotely obtained biomarkers of smoking status. This meta-analysis showed that cotinine is the most common biochemical verification method and that only 47% of individuals who self-reported abstinence were biochemically confirmed. Two additional findings from this meta-analysis were that mean sample return rates ranged from 65% to 77%, and that none of the strategies that studies employed to increase adherence to remote biomarker collection, or their combination (i.e., monetary incentives, participant training, and reminders), were associated with improved sample return rates. In light of these findings, the authors of the meta-analysis concluded that there is a large knowledge gap with regard to (a) the underlying causes of low sample return rates and (b) the most effective strategies to improve remote biomarker collection.

While this meta-analysis is an important contribution on the topic of remote collection of tobacco biomarkers, it did not examine in detail the human factors involved in the interaction with these novel devices or sample collection methods. Examining these factors could identify potential underlying causes of low sample return rates or possible approaches to prospectively improve them. Further, the Advanced Research Projects Agency for Health (ARPA-H), a new federal agency created to accelerate outcomes in biomedical and health research, recently made the case that these methods are key to addressing the challenges of intervention design and delivery in diverse populations [7]. Therefore, the goal of the current review was to evaluate the degree to which the studies reviewed by Thrul et al. [6] employed user-centered design methodologies to either assess the causes of low sample return rates or to iteratively design strategies to improve those rates. This paper will discuss these findings in the context of the larger tobacco and user-centered design literature and make a call to action for their application in tobacco research. Implications of this call to action for health equity will be discussed.

Overview of review methods

We used data from a recent scoping review and meta-analysis of remote biochemical verification procedures published in Nicotine and Tobacco Research [6]. This meta-analysis identified 82 publications that reported procedures for remote biochemical verification of smoking status. Only studies specifically designed to assess combustible tobacco use were included in the original study. For a detailed description of the study selection process, see Thrul et al. [6]. Our current review focused on all studies that reported specific problems obtaining biomarker samples from participants or low sample return rates, regardless of the type of methodology utilized to evaluate or address those problems. We also extracted all causes identified in the literature for low sample return rates and grouped them into categories.

User-centered design research aims to triangulate data from a rich array of sources to explain and generate hypotheses about the user experience of an individual interacting with a system [8]. Qualitative, quantitative, and observational data are the most common data sources in this approach. Another component that characterizes this approach is the use of rapid cycles of iterative testing to refine a procedure or system. Therefore, among the articles that reported specific problems obtaining biomarker samples or low sample return rates, we determined whether they: (a) conducted any formal or informal research to identify reasons for the missing samples or reported problems (Yes/No); or (b) collected any user-centered design data (Yes/No). Among those that collected any user-centered design data, we subsequently classified them based on whether they: (c) utilized a qualitative method (Yes/No; e.g., open-ended questions, interviews); (d) utilized a quantitative method (Yes/No; e.g., usability or satisfaction survey questions); (e) utilized an observational method (Yes/No; e.g., direct observation of a user following the sample collection procedure); or (f) iteratively applied the knowledge extracted from the user-centered design method to improve the procedure or method (Yes/No). Note that this heuristic was applied regardless of whether or not the authors described their approach as a user-centered design strategy.

Finally, based on criteria (c), (d), (e), and (f) we created a heuristic score of the level of elaboration or complexity of the user-centered design strategy used by the studies included in our review. The total score ranged from 0 to 4, with 4 being the highest and 0 the lowest level of complexity.
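Put operationally, the score simply counts how many of the four components, (c) through (f), a study reported. The following is a minimal sketch of this scoring rule; the field and function names are our own illustration, not the original coding sheet:

```python
# Minimal sketch of the 0-4 heuristic described above: one point for each
# user-centered design component a study reported. Field names are our own
# illustration; the criteria follow (c)-(f) above.
from dataclasses import dataclass

@dataclass
class StudyCoding:
    qualitative: bool    # (c) e.g., open-ended questions, interviews
    quantitative: bool   # (d) e.g., usability or satisfaction survey items
    observational: bool  # (e) e.g., direct observation of sample collection
    iterative: bool      # (f) findings fed back to refine the procedure

def ucd_complexity(coding: StudyCoding) -> int:
    """Score a study 0-4, one point per component present."""
    return sum([coding.qualitative, coding.quantitative,
                coding.observational, coding.iterative])

# Example: qualitative + quantitative data plus iterative refinement
# (the pattern of the highest-scoring studies in our review) scores 3.
print(ucd_complexity(StudyCoding(True, True, False, True)))  # -> 3
```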

Review results

Overview of findings

Thrul et al. [6] classified the 82 articles as smoking cessation studies (n = 42; e.g., randomized trials, pilot studies, and quasi-experiments, but excluding contingency management studies), contingency management (CM) studies (n = 32), or other studies (n = 14; e.g., feasibility, validation, or secondary analysis studies). Overall, among the 82 articles identified by Thrul et al. [6] in their systematic search, the percentage of studies that reported specific problems obtaining biomarker samples from participants was 38% (31 studies). When we examined the percentage of studies reporting problems by type of study, we found that problems were reported by 36% (15 studies) of smoking cessation studies, 31% (10 studies) of CM studies, and 50% (7 studies) of other studies.

Our review of the causes for low return rates of biomarkers of tobacco use identified in the literature yielded five categories, ordered here from most to least common: (1) usability and procedural causes; (2) technical or device-related causes; (3) sample contamination due to polytobacco use or other substances; (4) digital divide or psychosocial factors; and (5) individual motivational factors. See Table 1 for a list of all the causes identified in our review and their classification.

Table 1.

Causes identified to explain problems with collection of biomarker samples or low sample return rates.

Types of Problems Encountered | Causes Identified for Low Sample Return Rates (from oldest to newest publication year)
Usability and Procedural Challenges
 • Invalid addresses, vials broken, cotton rolls had not been used in some cases, insufficient volume of saliva, and sample contamination (Etter et al., 2005) [60].
 • Limited time allowed to participants to provide the sample, complete the accompanying brief survey, and mail these items (Hennrikus et al., 2005) [61].
 • Biochemical verification as a potential burden to participants (Fix et al., 2010) [62].
 • Patients commented on follow-up phone calls that they were “turned off” by the urinary cotinine strip (Duffy et al., 2016) [63].
 • Participants reported that collecting and testing saliva using video was difficult (Kong 2017) [14].
 • Low acceptability ratings about the convenience of the app and carrying devices during the day (McClure et al., 2018) [64].
 • Participants desired more flexibility in time to complete sessions (McClure et al., 2018) [64].
 • Inability to confirm addresses, trace package delivery, and retrieve missing devices (Herbec et al., 2019) [15].
 • Clinicians and staff found it difficult to integrate the sample collection procedure into their infectious disease clinics routine (Gryaznov et al., 2020) [65].
 • Breath test did not show the participant’s mouth (Valencia et al., 2020) [9].
 • Participants indicated challenges using the CO monitoring device (Gryaznov et al., 2020) [65].
 • Lack of a mailing address for receiving sample collection kits (Vogel et al., 2020) [66].
 • Installation of smoking cessation and CO monitoring apps was time consuming (Gryaznov et al., 2020) [65].
 • Participants struggled with smartphone use and technology (Medenblik et al., 2021) [12].
 • Participants had long delays between completing the test (Urine) and photographing the result (Garg et al., 2021) [23].
 • Participants had missing iCO Smokerlyzer or forgot smartphone (Schwaninger et al., 2021) [67].
 • Participants did not have time to provide a urine sample at the time of the video check-in or did not fully saturate the saliva test (Joyce et al., 2021) [68].
 • Potential burden on participants to provide biochemical verification (Meacham et al., 2021) [69].
 • Deviations from sampling fidelity (e.g., breath exhalation into the monitor was not visible; Dallery et al., 2015) [10].
Technical (device)
 • Undefined technical issues (Dallery et al., 2013) [70].
 • Samples were partially evaporated, thus increasing the cotinine concentration in the sample (Cunningham et al., 2016) [71].
 • Consistent complaints about the potential inaccuracy of the device (Tan et al., 2018) [72].
 • Technical difficulties connecting to Facebook and downloading an app (Tan et al., 2018) [72].
 • Malfunction of the mobile CO device (Nomura et al., 2019) [73].
 • Unreadable cotinine strips possibly due to technical errors (May et al., 2019) [21].
 • NicAlert recorded a non-smoking result in one participant who admitted to smoking once within the prior 14 days (May et al., 2019) [21].
 • Problems with automated facial recognition (Kendzor et al., 2020) [11].
 • Malfunctions of saliva test kit (Vogel et al., 2020) [74].
 • Audio collection: corrupted file, background noise (Valencia et al., 2020) [9].
 • Large number of mobile phone models and operating systems created compatibility problems with the CO device (Gryaznov et al., 2020) [65].
 • CO monitors occasionally recorded high CO numbers (>130 ppm; Medenblik et al., 2021) [12].
 • Problems establishing the connection between a remote CO device and a smartphone, and submitting CO test videos (Bloom et al., 2021) [13].
 • Issues with unreadable test strips and need to send multiple kits to participants (Meacham et al., 2021) [69].
 • Missing or not working iCO Smokerlyzer devices (Schwaninger et al., 2021) [67].
 • Issues with electronic watch not holding a charge long enough to detect cigarette use (Joyce et al., 2021) [68].
Sample Contamination
 • Saliva samples were insufficient or contaminated (Aveyard et al., 2003) [75].
 • Presence of other forms of nicotine use (Cha et al., 2017) [76].
 • Lack of biochemical verification due to current NRT use (May et al., 2019) [21].
 • Participants’ test results could have been influenced by other tobacco use not assessed at follow-up
 • Growing popularity of e-cigarettes (Vogel et al., 2020) [66].
 • Cotinine testing could not differentiate smoking from other sources of nicotine exposure, such as high-nicotine e-cigarettes (Vogel et al., 2020) [66].
 • Environmental factors affecting the readings of a remote CO monitor, such as air pollution, secondhand smoke, or the use of cannabis (Valencia et al., 2020) [9].
 • Presence of smokeless tobacco, heavy secondhand smoke exposure, delays between self-report and testing, and certain food items that may result in a positive cotinine test (eggplant, tomatoes, peppers) (Meacham et al., 2021) [69].
 • Use of e-cigarettes among study participants (Meacham et al., 2021) [69].
 • Other forms of tobacco use not assessed at follow-up (Garg et al., 2021) [23].
Digital Divide and Psychosocial Factors
 • Depressed participants may be less likely to return a saliva sample (Etter et al., 2005) [60].
 • Participants did not own a computer or did not have internet service (Reynolds et al., 2015) [77].
 • Participants had defective mobile phones (Tan et al., 2018) [72].
 • Participants did not have adequate computer access, and ability to download programs and upload photographs (May et al., 2019) [21].
 • Problems with internet bandwidth for video conferencing and video streaming (May et al., 2019) [21].
 • Not all participants could set up a video chat session with the investigator (Garrison et al., 2020) [78].
 • Participants did not have a phone or camera to capture and send results (Garg et al., 2021) [23].
 • Scheduling and transportation barriers to complete breath tests (Garg et al., 2021) [23].
Motivational factors
 • Inadequate levels of incentives for study participants from different countries (Fix et al., 2010) [62].
 • Low levels of incentives (Cunningham et al., 2016) [79].
 • Lack of incentives in app design such as interactive features, gamification to encourage more sustained use of CO monitor (Tan et al., 2018) [72].
 • Lack of incentives for sample submission (Herbec et al., 2019) [15].
 • Lack of immediate reinforcement (e.g., compensation) after the sample return session (Garrison et al., 2020) [78].

Note. CO = Carbon Monoxide; NRT = Nicotine Replacement Therapy.

Among the 31 studies that reported problems obtaining biochemical verification samples or low sample return rates, 17 studies (55%) investigated those problems by informal means. Eleven studies (35%) utilized at least one user-centered design method to examine the underlying causes of the problems observed, or to proactively examine the usability and/or user experience of the biochemical verification method. Among the 11 studies that applied user-centered design methods, 55% (n = 6) utilized qualitative methods, 82% (n = 9) utilized quantitative methods (e.g., satisfaction ratings on a Likert scale), and 18% (n = 2) utilized observational methods. Finally, 18% (n = 2) of the studies utilized user-centered design data to improve or iterate on the subsequent biomarker collection procedure.

In terms of our heuristic score of the complexity or elaboration of the user-centered design strategy employed by the reviewed studies, scores ranged from 0 to 3: 6% of the studies scored 3, 10% scored 2, 19% scored 1, and 65% scored 0. The mean score was 0.61 (SD = 0.95). See Table 2 for all the studies included in the review, their sample return rates of biomarkers of tobacco use, the causes identified, and the methods they employed to evaluate them.

Table 2.

Overview of studies collecting remote biomarkers of smoking cessation, problems identified during sample collection, and methods to identify their causes.

Author, Year | Design, Sample Size, and Study Type | Biomarker and Ver. Method | Population of Indiv. who Smoke | Return Samples (%) and Causes Identified for Low Return Rates | Informal Methods (Yes/No) | Any User-Centered Design Method (Yes/No) | Qualitative Method (Yes/No) | Quantitative Method (Yes/No) | Observational Method (Yes/No) | Iterative UCD Testing (Yes/No) | Level of Complexity Score (0–4)
Medenblik et al., 2021[12] Pilot, N = 13, CM Study CO; Video Specific; Smokers with alcohol use disorder Return rates: Cohort 1: 32%–49%; Cohort 2: 59%–67%
Causes Reported:

 • Participants struggled with smartphone use and technology
 • CO monitors occasionally recorded high CO numbers (>130 ppm)
No Yes Yes Yes No Yes 3
Kendzor et al., 2020[11] Pilot, N = 16, CM Study CO; Photo Specific; low SES smokers Return rates: Weeks 1–4: 76%; Week 8: 60%; Week 12: 30%
Causes Reported:

 • Problems with automated facial recognition
No Yes Yes Yes No Yes 3
Kong et al., 2017[14] Pilot, N = 15, CM Study CO and Cotinine; Video, Photo, In-person. Specific; Adolescents Return rates: CO 86%; Cot 67%
Causes Reported:

 • Participants reported that collecting and testing saliva using video was difficult
No Yes Yes Yes No No 2
Valencia et al., 2020[9] Pilot, N = 10, CM Study CO; App: Web platform Specific; pregnant women who smoked Return rates: 50%
Causes Reported:

 • Breath test did not show the participant’s mouth
 • Environmental factors affecting the readings of a remote CO monitor, such as air pollution, secondhand smoke, or the use of cannabis
 • Audio collection: corrupted file, background noise
No Yes No Yes Yes No 2
Bloom et al., 2021[13] Pilot, N = 50, CM Study CO; Video General Return rates: 42%
Causes Reported:

 • Problems establishing the connection between a remote CO device and a smartphone, and submitting CO test videos
No Yes Yes Yes No No 2
Dallery et al., 2015[10] RCT, N = 43, CM Study CO; Video General Return rates: Not Reported
Causes Reported:

• Deviations from sampling fidelity (e.g., breath exhalation into the monitor was not visible)
No Yes No No Yes No 1
Garg et al., 2021[23] Feas. study; N = 270; Secondary Analysis CO and Cotinine; Photo and in-person Specific; low income smokers Return rates: 46%
Causes Reported:

 • Other forms of tobacco use not assessed at follow-up
 • Participants did not have a phone or camera to capture and send results
 • Participants had long delays between completing the test (Urine) and photographing the result
 • Scheduling and transportation barriers to complete breath tests
No Yes No Yes No No 1
Herbec et al., 2019[15] Secondary analysis of RCT, N = 59 CO; App General Return rates: 25%
Causes Reported:

 • Inability to confirm addresses, trace package delivery, and retrieve missing devices
 • Lack of incentives for sample submission
No Yes No Yes No No 1
Tan et al., 2018[72] Val. Study, N = 15 CO; App General Return rates: 13 participants used the device with median frequency of 9 (range 2–30) over the two-week observation period
Causes Reported:

 • Lack of incentives in app design such as interactive features, gamification to encourage more sustained use of CO monitor
 • Consistent complaints about the potential inaccuracy of the device
 • Technical difficulties connecting to Facebook and downloading an app
 • Participants had defective mobile phones
No Yes Yes No No No 1
McClure et al., 2018[64] Feas. study, N = 16 CO; Photo Return rates: Random assessments (non-smoking EMA + CO sessions) were completed 73% of the time
Causes Reported:

 • Low acceptability ratings about the convenience of the app and carrying devices during the day
 • Participants desired more flexibility in time to complete sessions
No Yes No Yes No No 1
Meacham et al., 2021[69] RCT, N = 179 Cotinine; Photo Specific; young adult smokers with heavy drinking Return rates: 71%
Causes Reported:

 • Issues with unreadable test strips and need to send multiple kits to participants
 • Potential burden on participants to provide biochemical verification
 • Presence of smokeless tobacco, heavy secondhand smoke exposure, delays between self-report and testing, and certain food items that may result in a positive cotinine test (eggplant, tomatoes, peppers)
 • Use of e-cigarettes among study participants
No Yes Yes No No No 1
Cha et al., 2017[76] Pilot, N = 247 Cotinine; Mail-in sample General Return rates: 71%
Causes Reported:

• Presence of other forms of nicotine
Yes No No No No No 0
Aveyard et al., 2003[75] RCT, N = 2471 Cotinine; In person and mail-in sample General Return rates: 41%
Causes Reported:

• Saliva samples were insufficient or contaminated
Yes No No No No No 0
Cunningham et al., 2016[71] RCT, N = 999 Cotinine; Mail-in sample General Return rates: 70% (reported for SR quitters only)
Causes Reported:

 • Samples were partially evaporated, thus increasing the cotinine concentration in the sample
 • Low levels of incentives
Yes No No No No No 0
Hennrikus et al., 2005[61] RCT, N = 2095 Cotinine; Mail-in sample Specific; Hospitalized individuals Return rates: 72%
Causes Reported:

• Limited time allowed to participants to provide the sample, complete the accompanying brief survey, and mail these items
Yes No No No No No 0
Garrison et al., 2020[78] RCT, N = 325 CO; Video General Return rates: 73%
Causes Reported:

 • Lack of immediate reinforcement (e.g., compensation) after the sample return session
 • Not all participants could set up a video chat session with the investigator
Yes No No No No No 0
Vogel et al., 2020[66] RCT, N = 165 Cotinine; Photo Specific; Sexual minority young adults Return rates: Not Reported
Causes Reported:

 • Lack of a mailing address for receiving sample collection kits
 • Growing popularity of e-cigarettes
 • Cotinine testing could not differentiate smoking from other sources of nicotine exposure, such as high-nicotine e-cigarettes
 • Malfunctions of saliva test kit
Yes No No No No No 0
Schwaninger et al., 2021[67] RCT, N = 162 CO; App; General Return rates: Not Reported
Causes Reported:

 • Missing or not working iCO Smokerlyzer devices
 • Participants had missing iCO Smokerlyzer or forgot smartphone
Yes No No No No No 0
Etter et al., 2005[60] Feas. study, N = 392 Cotinine; Mail-in sample General Return rates: 84%
Causes Reported:

 • Invalid addresses, vials broken, cotton rolls had not been used in some cases, insufficient volume of saliva, and sample contamination
 • Depressed participants may be less likely to return saliva sample
Yes No No No No No 0
Gryaznov et al., 2020[65] RCT, N = 81 CO; App; Web platform Specific; smokers with HIV Return rates: Not Reported
Causes Reported:

 • Large number of mobile phone models and operating systems created compatibility problems with the CO device
 • Clinicians and staff found it difficult to integrate the sample collection procedure into their infectious disease clinics routine
 • Participants indicated challenges using the CO monitoring device
 • Installation of smoking cessation and CO monitoring apps was time consuming
Yes No No No No No 0
Fix et al., 2010[62] Feas. study, N = 400 Cotinine; Mail-in sample General Return rates: 52%
Causes Reported:

 • Biochemical verification as a potential burden to participants
 • Inadequate levels of incentives for study participants from different countries
Yes No No No No No 0
Joyce et al., 2021[68] Pilot 1: n = 27; Pilot 2: n = 8; Pilot 3: n = 4 Cotinine; Mail-in sample for urine, video for saliva, SmokeBeat wrist sensor detected hand to mouth movement Specific; pregnant and postpartum Medicaid members who smoked Return rates: Pilot 1: Intervention 71%, Control 86%; Pilot 2: Pay-to-wear 75%, Pay-to-quit 100%; Pilot 3: Not Reported
Causes Reported:

 • Issues with electronic watch not holding a charge long enough to detect cigarette use
 • Participants did not have time to provide a urine sample at the time of the video check-in or did not fully saturate the saliva test
Yes No No No No No 0
Dallery et al., 2013[70] RCT, N = 77 CO; Video General Return rates: Non-contingent 90%; Contingent 92%
Causes Reported:

• Undefined technical issues
Yes No No No No No 0
Duffy et al., 2016[63] RCT, N = 1336 Cotinine; Mail-in Nic Alert test Specific; hospitalized patients Return rates: 33%
Causes Reported:

• Patients commented on follow-up phone calls that they were “turned off” by the urinary cotinine strip
Yes No No No No No 0
Reynolds et al., 2015[77] RCT, N = 62 CO; Video Specific; adolescents Return rates: Entire sample 46%; Active Treatment 38%; Control Treatment 69%
Causes Reported:

• Participants did not own a computer or did not have internet service
Yes No No No No No 0
Nomura et al., 2019[73] RCT, N = 115 CO; App General Return rates: Not Reported
Causes Reported:

• Malfunction of the mobile CO device
Yes No No No No No 0
May et al., 2019[21] Pilot, N = 17 Cotinine; General Return rates: 93%
Causes reported:

 • Unreadable cotinine strips possibly due to technical errors
 • NicAlert recorded a non-smoking result in one participant who admitted to smoking once within the prior 14 days
 • Participants did not have adequate computer access, and ability to download programs and upload photographs
 • Problems with internet bandwidth for video conferencing and video streaming
 • Lack of biochemical verification due to current NRT use
Yes No No No No No 0
Ferketich et al., 2014[80] RCT, N = 214 Cotinine; Mail-in sample Specific; Individuals covered by Medicaid Return rates: 55%
Causes reported: Not Reported
Yes No No No No No 0
Stoops et al., 2009[81] RCT, N = 68 CO; Video General Return rates: Abstinence contingent group 68%; Yoked control group 67%
Causes Reported: Not Reported
No No No No No No 0
Simon et al., 1997[82] RCT, N = 324 Cotinine; Mail-in sample; blood was in person Specific; Individuals who received surgery at VA and were hospitalized Return rates: Not Reported
Causes Reported: Not Reported
No No No No No No 0
Curry et al., 1995[19] RCT, N = 1137 Cotinine; In-person and mail-in sample General Return rates: 67%
Causes Reported: Not Reported
No No No No No No 0
TOTAL 55%[1] 35%[1] 55%[2] 82% 18% 18% M = 0.61

Note: UCD = user-centered design; RCT = Randomized Controlled Trial; CM = Contingency Management; Val. = Validation; Feas. = Feasibility; CO = Carbon Monoxide; Ver. = Verification; Indiv. = Individual; Num. = Number; 1: Percent of articles classified as “Yes” relative to the total number of articles that reported problems (n = 31); 2: Percent of articles classified as “Yes” relative to the total number of articles that used a user-centered design method (n = 11); M = Mean.

Discussion of findings

Overall, the number of studies reporting sample collection problems was very low. Only about one third of the smoking cessation (e.g., intervention trials) and CM studies identified by Thrul et al. [6] reported any specific sample collection problems leading to low sample return rates. This rate was higher among feasibility and secondary analysis studies, at least half of which reported specific problems with biomarker collection or low sample return rates. Also, while 35% of studies in this review used some form of user-centered design strategy, the wide range of strategies offered by this methodology was generally underutilized, and in some cases the strategies used (i.e., observational methods) were not applied for the intended purposes of this methodology. For example, observational data in user-centered design research allow researchers to evaluate common usability errors, such as mistakes when following procedures or using a system. In our review of these studies, only two [9,10] explicitly used direct observational data to identify problems in remote collection of biomarkers. However, it is important to note that in these CM studies program incentives had to be paired with biomarker data. Thus, observational data were primarily used to prevent participants' attempts to falsify their smoking status to obtain an incentive, and not to improve the sample collection procedure. Not surprisingly, we noted that some of the studies with the highest level of elaboration or complexity from a user-centered design perspective were pilot CM studies [9,11–14].

In what follows, we will discuss four general themes that emerged during our review of the above literature, and we will lay out some potential frameworks and recommendations.

General reliance on informal data collection procedures

Among the 31 studies reviewed, 55% collected usability data from informal sources. Informal data (e.g., study logs kept by study coordinators) were typically collected during scheduled phone interviews, video-conference calls, or in-person visits. While these sources of data can shed light on sample collection challenges and inform future procedures, a common observation among the studies we reviewed was that the reasons for low sample return rates remained unclear. Several authors pointed out the need to systematically collect higher quality data that would allow researchers to identify the underlying causes of those low rates [15]. Informal means of data collection are unlikely to provide a reliable body of knowledge to comprehensively evaluate the different variables involved in low sample return rates, or enough information to design alternative strategies to increase those rates.

No discussion of implementation and dissemination factors

Among the studies reviewed, those that collected data about the usability, user experience, and overall acceptability of the sample collection procedure were in a better position to assess the potential for implementation and dissemination of the intervention in real world settings [16]. For example, health organizations interested in implementing a CM smoking cessation intervention will look closely at the added cost of specific remote methods of biomarker detection. Take HCA Healthcare [17], for example, one of the largest healthcare organizations in the United States. While the $65 per-unit cost of an iCOquit (Bedfont Scientific Ltd) monitor seems low, a cotinine strip costs about a third as much at $18 per test, a difference that becomes substantial in the context of HCA Healthcare's potential 4.8 million annual encounters with smokers (i.e., $312M versus $86M). At the same time, a cotinine test takes 10–20 min to provide results (as opposed to immediate results with a CO monitor), and may hence lead to a less powerful (less reinforcing) CM intervention, which can ultimately lead to fewer patients with positive intervention outcomes. On the other hand, patients might consider the cotinine procedure more cumbersome (i.e., lower usability) and less pleasant (i.e., a negative user experience). The interaction between the usability of a novel intervention or procedure and the institutional context in which it could be implemented has recently been noted in the implementation science literature, which has started to integrate these two disciplines to address those interactions [16]. In sum, both the procedural aspects of an intervention, as well as how the intervention is used and perceived by the target population, can significantly outweigh initial cost considerations.
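As a back-of-the-envelope check, the cost comparison above can be reproduced directly from the figures cited in the text; the snippet below illustrates that arithmetic only (variable names are ours, and it is not a cost model):

```python
# Reproduce the cost comparison above from the figures cited in the text.
# Unit prices and the encounter estimate come from the example; everything
# else is illustrative.
encounters = 4_800_000            # potential annual encounters with smokers
co_monitor_unit_cost = 65         # iCOquit monitor, $ per unit
cotinine_strip_unit_cost = 18     # cotinine strip, $ per test

co_total = encounters * co_monitor_unit_cost            # $312,000,000
cotinine_total = encounters * cotinine_strip_unit_cost  # $86,400,000

print(f"CO monitors:     ${co_total:>13,}")
print(f"Cotinine strips: ${cotinine_total:>13,}")
print(f"Price ratio:     {co_monitor_unit_cost / cotinine_strip_unit_cost:.1f}x")
```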

Another dissemination and implementation challenge, recently documented by investigators in the context of the COVID-19 pandemic, has to do with how clinicians could eventually adopt the use of certain biochemical verification devices in clinical settings [18]. Manta et al. [18] reviewed current biometric monitoring devices for the collection of vital signs in clinical care. The authors discussed the opportunity provided by these technologies to improve clinical care, but also noted that clinicians tend to lack the experience or precise protocols to navigate the remote administration of these devices, and that the devices themselves may lack basic usability standards. This, in turn, can lead to a lack of confidence in the results on the part of the clinician/researcher collecting those measures. The authors also noted that the large proliferation of remote Bluetooth devices and multiple validation standards (e.g., FDA, trade-specific organizations) may lead to mistrust on the part of the clinician/researcher about the results of such devices.

Lack of measures of psychological, psychosocial, and health equity factors

Few of the studies discussed in the current review directly or indirectly addressed psychosocial factors that may affect sample return rates, or the preferences of participants for certain sample procedures. One study mentioned "lack of cooperation" from participants [19], but contextual factors to understand this lack of cooperation were not described. Another study emphasized the importance of building a positive rapport with study participants [15], but the study did not suggest strategies to standardize this process. One potential model to follow could be the supportive accountability coaching model for digital health proposed by Mohr [20]. This model aims to establish clear expectations between users and coaches, and a sense that coaches are trustworthy, benevolent, and have domain expertise. It could be used to develop technical coaching protocols that train staff to more effectively help users resolve specific technical barriers. Another psychosocial factor identified in our review was that some participants lacked the technological literacy to effectively use some biomarker collection approaches [21]. However, no study systematically evaluated participants' technology literacy, a topic that has recently been discussed in the smoking cessation literature [22]. Finally, some studies explicitly identified psychosocial factors such as transportation barriers and socioeconomic status as barriers to specific sample collection procedures [23]. However, these factors, which are directly tied to the health equity of the sample collection procedure, were barely discussed in the studies we reviewed.

Different levels of emphasis on identity verification

Deception in online studies has been identified by researchers as an important concern. In a study by Devine et al., the authors found that 75% of participants withheld information to meet inclusion criteria for a clinical trial [24]. Other authors have warned the field about this challenge [25,26]. Similar concerns have been raised in remote clinical trials of smoking cessation interventions [22,27,28]. Heffner et al. [27] laid out specific examples of deception in digital smoking cessation studies, suspicious patterns of data that should be flagged for verification, and a detailed list of recommendations to prevent deception.

Among the studies that we reviewed, those that put the strongest emphasis on identity verification were CM studies that utilized incentives. In those studies, direct video calls and recorded videos required time-consuming staff procedures, yet these were preferred over less time-consuming strategies (e.g., pictures). However, some authors developed face recognition software to satisfy this requirement and make their procedure more scalable [11]. These authors developed artificial intelligence software to avoid the need for study staff to personally verify the identity of participants and the validity of the samples. With some exceptions [11–14], the usability and user experience of specific identity verification procedures, and their impact on recruitment and retention, were not reported.

A potential framework to evaluate the level of identity evidence provided by remote studies is the one provided by the National Institute of Standards and Technology (NIST) Digital Identity Guidelines, Special Publication 800-63A [29]. This framework outlines how to evaluate the quality of different types of identity evidence and classifies them as weak, fair, strong, or superior. The goal of identity verification is to confirm and establish a linkage between the claimed identity and the real-life existence of the subject presenting the evidence. Proofing requirements, including remote interactions with the subject (e.g., video-conference), may involve a biometric comparison between the individual and a driver's license or passport. However, as stated earlier, this level of security may not be necessary in all remote trials, and factors such as the stage of development of the intervention and the purpose of the trial could guide the level of identity evidence needed.
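To make this concrete, a remote-trial protocol could record the strength of its identity evidence using the categories named in SP 800-63A and check it against a per-trial requirement. The sketch below is hypothetical; in particular, the mapping of trial types to required evidence strength is our own assumption, not part of the NIST guideline:

```python
# Hypothetical bookkeeping sketch using the evidence-strength categories
# named in NIST SP 800-63A. The trial-type-to-strength mapping below is an
# assumption for illustration, not part of the guideline.
from enum import IntEnum

class EvidenceStrength(IntEnum):
    WEAK = 1
    FAIR = 2
    STRONG = 3
    SUPERIOR = 4

# Assumed policy: incentive-bearing CM trials warrant stronger identity
# evidence than early-phase usability pilots, since falsified samples have
# direct financial consequences.
REQUIRED_STRENGTH = {
    "early_phase_pilot": EvidenceStrength.WEAK,
    "remote_rct": EvidenceStrength.FAIR,
    "contingency_management": EvidenceStrength.STRONG,
}

def meets_requirement(trial_type: str, evidence: EvidenceStrength) -> bool:
    """Check collected identity evidence against the trial's assumed policy."""
    return evidence >= REQUIRED_STRENGTH[trial_type]

# E.g., a recorded video call verified against a photo ID might count as STRONG:
print(meets_requirement("contingency_management", EvidenceStrength.STRONG))  # True
```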

A call to action for user-centered design in biomarker research

As shown in our review, an important consideration behind what we typically describe as feasibility and acceptability outcomes is that the effective use of remote biomarker sample collection requires well refined and tested procedures. Software companies dedicate a large share of their research and development budgets to evaluating the user experience of their software, or to finding the most seamless and effective approach to "onboard" users with a new software product or device. While this might seem outside the scope of a clinical trial, the same human factors that influence the marketability of a device also influence the feasibility of a novel biomarker or biomarker collection method. Hence, there is a need to increase the application of user-centered design research to test and refine biomarker sample collection procedures.

User-centered design research is a discipline that took form during the 1980s and emerged from the human factors, psychology, and engineering fields [30]. The discipline gained formal status with the first ISO standard [31] and includes a broad range of design research methods, such as design ideation, prototyping, usability testing, think-aloud methods, qualitative interviews, and data triangulation [32]. User-centered design research has been widely used by the computer hardware and software industry to design, develop, and evaluate hardware products and user interfaces. It has a strong engineering emphasis, best captured by the 'DUB' motto: Design, Use, and Build. For that reason, this approach provides an ideal set of tools for the design and evaluation of procedures to collect remote biomarkers.

This research approach purposely collects quantitative, qualitative, and observational data and integrates the results with the goal of designing new solutions to human or engineering problems. In this way, user-centered design research is well suited to examine potential barriers and facilitators of biomarker use. Examples include the views of certain minority groups towards the use of biomarkers, the credibility and value of the results themselves, and unanticipated barriers and factors affecting adherence to biomarker collection procedures. As demonstrated by several studies in our review [11,12,14], the use of traditional self-report measures (e.g., detailed surveys of phone quality and data plans) can be combined with qualitative interviews (e.g., rapid thematic analysis). The integration of these methods can also increase our understanding of how racial and ethnic minorities and other vulnerable populations might be disproportionally affected by biomarker data collection procedures. Factors such as these will influence the potential of any biomarker device to wirelessly collect biomarkers in real world settings. User-centered design research is key to thoroughly examining these contextual factors. Further, these research methods align with both the ORBIT model [33,34] and NIH's stage model [35], which emphasize the importance of establishing key rigor benchmarks in early-phase studies prior to moving forward with large randomized trials.

In recent years, several studies have emerged in the literature describing user-centered design research applied to the process of biomarker collection. For example, Harte et al. [36] proposed a modified user-centered design framework to evaluate connected health systems in patients at fall risk that could be generalizable to other devices and medical populations. The authors emphasized the key role of user-centered design research in determining the safety of medical devices and the fact that the FDA requires evidence of user involvement during the design of such devices when reviewing premarket submissions [37]. Other studies have applied user-centered design research to address specific usability challenges, such as the feasibility of repeated remote collection of a digital biomarker in patients with Alzheimer's disease [38], and the feasibility and acceptability of a remote EEG device for patients with epilepsy [39]. Finally, some authors have emphasized the need to evaluate the large proliferation of remote biomarkers with user-centered design research and have noted that small early-phase studies are rarely published, creating a "file drawer effect" that undermines what the field can learn in the medium and long term [40].

An illustration of studies integrating user-centered design and clinical trial methods

Below we illustrate three types of studies that utilized user-centered design methodology with different degrees of complexity. The purpose of this illustration is to show with specific examples that user-centered design strategies can be flexibly and feasibly incorporated into clinical trial protocols without compromising a trial's ability to collect traditional clinical trial outcomes.

Combining quantitative and qualitative data, and iterative refinements

Two studies in our review had a level of complexity of 3, one degree below the highest in our heuristic metric. These studies used quantitative and qualitative measures and met an additional criterion: they used their usability data to iteratively refine the sample collection procedure. In the first study, Kendzor et al. [11] evaluated the feasibility of novel face recognition software combined with a portable CO monitor to deliver financial incentives in a CM intervention. The investigators developed a "Perceptions of the Intervention Survey" that included two items to quantitatively evaluate the sample collection procedure (i.e., "The smoking monitor was easy to use", and "It was difficult to blow into the smoking monitor while keeping my face in front of the smartphone screen"). In addition, researchers asked at the end of each day via an app survey whether participants had experienced any problems with the end-of-day assessment routine. If they answered "yes" to this question, they were then offered an open-ended question to provide qualitative information about the problem. The study was divided into two phases, where feedback during the first phase was used to improve the study procedures in the second phase. Results of their user-centered design data indicated that 26% of participants reported that the portable monitor was either "never" correct or was correct "some of the time." In addition, 25% of participants reported that the portable monitor "was difficult to find" to complete their study assessments. As a result of these assessments, over the course of the study the investigators refined the procedures for using their facial recognition software and provided detailed information to participants about how to avoid potential problems.

In the second study, Medenblik et al. [12] tested, in two pilot cohorts, a mobile contingency management intervention combined with cognitive-behavioral therapy among individuals with alcohol and tobacco use disorders. The study utilized a usability survey that asked two questions about the sample collection procedure embedded in their CM app: "How easy to understand was the Contingency Management app?" and "How easy to use was the Contingency Management app?" In addition, study coordinators were instructed to collect and document usability data during their contacts with participants throughout the study, and a brief qualitative interview was conducted and transcribed at the end of the study. The study was structured in two separate cohorts, where specific qualitative and quantitative feedback from cohort 1 was utilized to improve the sample collection procedure in cohort 2. Investigators identified problems with unusually high levels of CO (CO > 130 parts per million) and subsequently adjusted the software to correct this issue. They also identified the need to provide additional support to participants in uploading their samples to a website portal, and developed an instructional video, incorporated into their app, describing in detail the process for submitting videos with the results of their alcohol and smoking samples.
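The software adjustment described above amounts to a validity gate on incoming readings. Below is a minimal sketch of such a gate, assuming (as the study's report implies) that values above 130 ppm are implausible; the function and threshold parameter are our own illustration:

```python
# Minimal sketch of a validity gate for expired-air CO readings, of the
# kind implied by the software fix above. The 130 ppm ceiling comes from
# the problem the study reported; the function itself is illustrative.
def plausible_co_reading(ppm: float, ceiling: float = 130.0) -> bool:
    """Return True if a CO reading is physiologically plausible."""
    return 0.0 <= ppm < ceiling

# Readings that fail the gate would be flagged for re-collection rather
# than fed into the contingency management payout logic.
assert plausible_co_reading(12.0)
assert not plausible_co_reading(145.0)
```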

Combining quantitative and qualitative or observational data, without iterative refinements

Three studies had a level of complexity of 2 in our user-centered design metric. These studies utilized a combination of quantitative, qualitative, or observational methods, but did not apply their user-centered design data to iterate on and refine their sample collection procedure. In one study, Bloom et al. [13] reported the results of a pilot trial of a digital serious game intervention for smoking cessation. This study included three items that examined user preferences about the sample collection method: (a) whether the CO cut-off of 6 parts per million (ppm) should be lower, higher, or was just right; (b) whether there were any days during the intervention period on which they smoked but their CO was 6 ppm or lower (false negatives); and (c) whether a baseline positive CO test of 7 ppm or more should be required prior to the intervention. In addition, researchers added open-ended questions that asked about technical issues that participants may have had with the breathalyzer device. Finally, the study included user experience interviews at the end of the intervention period. Results showed that most participants believed that a CO sample should be collected prior to the intervention (78%) and that the cut-off for abstinence was appropriate (73%). The study identified several specific procedural and technical challenges to submitting biomarker samples.

The second study, by Kong et al. [14], collected qualitative and quantitative usability data to evaluate the feasibility of a procedure to collect CO and cotinine samples remotely in a CM study among adolescent smokers. Participants had to separately rate on a Likert scale how easy or difficult it was to collect either CO (i.e., "How easy or difficult was it to provide the CO video?") or cotinine samples (i.e., "How easy or difficult was it to provide the saliva video?") during study participation. Finally, the study included an open-ended question in which participants could provide specific feedback (i.e., "What would make the [CO video/saliva video] recording process easier?"). Data suggested that participants found the CO sample collection procedure feasible, but consistently reported difficulties with the saliva cotinine procedure. Qualitative data supported and elaborated on the survey data, indicating that participants found the process of sending video samples on time challenging and frustrating. Differences in the feasibility of collecting CO versus cotinine samples were discussed, and the investigators argued that CO samples may have had higher return rates due to the immediacy of the contingency management reinforcer.

In a third study, by Valencia et al. [9], investigators examined the feasibility of a remote smartphone-based CO system to deliver a CM intervention for smoking cessation. The study measured via self-report the ease of use of the CO sample collection procedure and collected observational data about participants' performance with the front-facing camera of the device. Sixty-three percent of participants indicated that the sample collection procedure was "extremely easy to use," with the remainder indicating that it was "easy to use." Observational data from videos, pictures, and audio recordings indicated that background noises affected the accuracy of the system. However, most participants used the front-facing camera correctly. Investigators pointed out that more research was needed to identify potential barriers to improving sample collection rates, and more specifically, the social context of participants at the time they were asked to provide a sample (e.g., the presence of family or friends).

Using a single source of data: quantitative, qualitative, or observational data

Six studies (19%) had a level of complexity of 1 in our metric. Among them we would like to highlight two studies that utilized useful or creative designs to evaluate sample collection procedures. Garg et al. [23] conducted a feasibility study that offered low-income smokers a choice among different biochemical verification methods. One unique aspect of this study is that it assessed user preferences, which is a common user-centered design goal. Overall, the study concluded that in-home urine tests were preferable to in-office breath tests. The study focused on a quantitative analysis of participant choices, and no qualitative data were collected to examine why those choices were made. The authors recognized the need to understand the underlying causes of those choices and argued that transportation barriers could have been a factor favoring in-home urine over in-office breath tests. Digital divide factors linked to the socioeconomic status of participants (e.g., participants who did not have a smartphone with a camera) were recognized as possible causes that may have impacted user preferences.

Another example of this type of approach is a study by Herbec et al. [15], in which the authors conducted a secondary analysis of a larger trial testing a digital intervention where a remote CO breathalyzer was used to biochemically verify abstinence. The larger trial incorporated a quantitative user-centered design component to examine the usability of the remote CO device and procedure. Specifically, it incorporated three self-report questions within the digital intervention asking about its acceptability (i.e., "Did you find the CO monitor an acceptable way to assess your abstinence status?"), ease of use (i.e., "Did you find the CO monitor easy to use?"), and correct use (i.e., "Do you think you managed to use the CO monitor properly?"). In addition, the study "opportunistically collected" reasons for missing CO samples, but it did not state by whom or how. Results indicated that 100% of participants found the sample procedure acceptable, 93% considered it easy to use, and 80% considered they had used the device correctly. The article argued that some of the challenges were the lack of practical means to trace package delivery and retrieve unused devices, but it also pointed out that the study lacked a systematic method to collect feedback and reasons for the missing samples.

Health equity considerations

The rigor of biomarkers cannot be separated from the larger cultural and social context of the individuals who provide those samples. Racial and ethnic minorities experience significant health inequities in all areas of health and medicine [41]. These patients consistently have poor access to psychiatric services and experience poor treatment outcomes [41–44]. Hence, the same social determinants of health that are responsible for these health disparities (e.g., low income, familiarity with technology, education) naturally have an impact on access to the technologies required for the remote collection of biomarkers. Our review identified a few studies highlighting this problem: low-quality devices limited sample collection for some individuals due to the impossibility of implementing certain verification procedures (see section 'Digital Divide and Psychosocial Factors' in Table 1). In other studies, participant devices led to data transmission errors by impacting the functionality of the software utilized to process and transmit the data [21]. While the gap in smartphone adoption is rapidly closing among racial and ethnic minorities compared to their White and Anglo American counterparts [45,46], digital divide factors have been identified as a new barrier to the dissemination of digital interventions [47,48] and of remote smoking cessation treatments [22]. This digital divide is a new social determinant of health [49] that may increase health inequities among underserved patient populations [48]. As noted by Dahne et al. [22], this digital divide could easily offset the promise of remote clinical trials to ensure the diversity and representativeness of our clinical trial samples. Likewise, the continuity of a patient's monthly voice and data plan will affect their ability to transmit biomarker results accurately and in a timely manner.

Health equity considerations in the remote collection of biomarkers have been pointed out by several researchers in recent years. For example, one of the articles that we reviewed indicated that 70% of African American participants in that study preferred a remote urine test over an office-based carbon monoxide (CO) test [23], and the study identified digital divide and transportation barriers as possible causes of this result. This could negatively affect the collection of samples from African American smokers in current research protocols relying on CO devices. Another recent study found that cannabis use and dual use with tobacco products were more likely among African Americans [50], which may bias the results of trials relying on expired-air CO for biochemical verification. High misclassification rates in biochemical verification among non-white individuals have also been noted by the most recent SRNT taskforce [2]. Factors such as racism and implicit bias could play an important role in explaining the root causes of these differences [51]. Finally, in a paper discussing remote collection of digital biomarkers [52], the authors pointed out that because many biomarker validation studies rely on samples of data obtained from high-quality devices, often owned by individuals with high socioeconomic status, this creates an inherent bias in our body of research. This artifact, coupled with the fact that racial and ethnic minorities are poorly represented in research studies of digital biomarkers [52], contributes to potential inequities in this important area of research.

Methods closely related to user-centered design research, such as mixed-methods research, have been increasingly utilized to address health inequities [53]. In a study by Williams et al. [54], the authors identified a series of key stakeholder barriers to the use of a novel digital intervention among adults with multiple chronic conditions. Some of the barriers included inadequate levels of digital literacy required to utilize the system, the inadvertent use of jargon in the instructions supplied for using the device, patient concerns about the legitimacy of automated message reminders, and lack of compliance with charging devices prior to completing the digital task. In another article [55], the authors described a research protocol outlining a community participatory study that would combine user-centered design and mixed methods research to develop a novel digital intervention to increase physical activity and cardiovascular health among African American women. Finally, the integration of user-centered design, mixed methods, and implementation science has been proposed as a key strategy to translate evidence-based interventions to community settings [56]. These community-centered studies hold great promise for the future.

Study limitations

This review had several limitations. First, while the set of articles we selected for review was the result of a systematic meta-analysis and scoping review of the most recent literature on remote collection of biomarkers of tobacco use [6], that meta-analysis noted that low sample return rates were not comprehensively and consistently reported in the literature. It is therefore possible that our results did not capture the work of researchers who used user-centered design strategies to improve their sample collection procedures but did not explicitly report them in the scientific literature. For example, some researchers may have informally used qualitative feedback and iterative strategies to improve their study procedures but felt that these data were not necessarily reportable. A second limitation is our use of a rating scale to assess the level of complexity of the user-centered design strategy used by researchers. User-centered design is a methodology that provides a wide range of tools, which need to be tailored to the circumstances of each study and require considerable expertise to interpret. Consequently, the scale we applied might not have fully captured the complexity of the user-centered design strategies employed by these studies. However, we think that our emphasis on four clearly distinct components of this methodology (the presence of qualitative, quantitative, and observational data, as well as the use of rapid-cycle iterations) provided a useful heuristic for an initial review of the current literature; future studies may consider more elaborate metrics. Third, biochemically verified abstinence may not be the primary outcome in every study, in which case collecting biomarker samples and obtaining high sample return rates may not be a study priority; however, this was not the case for the majority of studies included in our review. Finally, on the basis of our review results, we framed this article as a call to action for more user-centered design research to evaluate low sample return rates of biomarkers of tobacco use in clinical trials. There is, however, an extensive literature using other methodologies to evaluate and/or improve low survey response rates more generally, such as token monetary incentives [57], mixed-mode push-to-web surveys [58], or bogus pipeline methods [59]. While these methodologies could also be used to examine and improve low return rates, we believe the methods presented here offer a new approach that is important to highlight given the human factors involved in the remote collection of biomarkers, which essentially constitute a human-device interaction.
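As a purely illustrative aid (not the instrument used in this review), the following Python sketch shows one way such a 0-to-4 heuristic could be operationalized, assuming each of the four components contributes one point; the field names and study records are hypothetical.

    # Illustrative sketch only: score a study from 0 to 4 by counting which
    # of the four user-centered design components it reports.
    # Component names mirror the four elements described above; the study
    # records are hypothetical, not data from the reviewed articles.
    UCD_COMPONENTS = (
        "qualitative_data",       # e.g., interviews, open-ended feedback
        "quantitative_data",      # e.g., usability surveys, task metrics
        "observational_data",     # e.g., direct observation of sample collection
        "rapid_cycle_iterations", # e.g., iterative refinement of procedures
    )

    def ucd_score(study: dict) -> int:
        """Return the 0-4 heuristic score: one point per reported component."""
        return sum(1 for component in UCD_COMPONENTS if study.get(component, False))

    # A hypothetical study reporting interviews and usability surveys, but no
    # direct observation or iterative refinement, would score 2.
    example = {"qualitative_data": True, "quantitative_data": True}
    print(ucd_score(example))  # prints 2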

Conclusion

This review of the published literature on strategies to remotely verify smoking cessation showed that the field has generated little knowledge about the underlying human factors affecting low sample return rates of biomarkers of tobacco use. Our review also showed that the approaches currently used to examine the causes of these low rates are unlikely to generate useful knowledge and novel solutions. User-centered design strategies have recently emerged in the literature as a rigorous approach to identifying and addressing these causes. However, among the few studies that used these methods, the range of user-centered design strategies could be broadened with direct observation of sample collection procedures and with more iterative strategies. Finally, user-centered design, integrated with mixed-methods and community-based approaches, holds promise for addressing health equity factors in the collection of biomarkers in tobacco research. A concerted effort is needed to systematically evaluate, report on, and utilize user-centered design methods to study the use of remote biochemical verification procedures and their impact on health equity outcomes.

Acknowledgments

This work was supported by the following grants from the National Institute on Drug Abuse: R01DA047301, K02DA054304; and the National Cancer Institute: R01CA246590, R01CA251451.

Footnotes

Declaration of Competing Interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

No data was used for the research described in the article.

References

  • [1] Biomarkers Definitions Working Group, Biomarkers and surrogate endpoints: preferred definitions and conceptual framework, Clin. Pharmacol. Ther 69 (3) (2001) 89–95, doi: 10.1067/mcp.2001.113989.
  • [2] Benowitz NL, Bernert JT, Foulds J, et al., Biochemical verification of tobacco use and abstinence: 2019 update, Nicotine Tob. Res 22 (7) (2020) 1086–1097, doi: 10.1093/ntr/ntz132.
  • [3] Glasgow RE, Mullooly JP, Vogt TM, et al., Biochemical validation of smoking status: pros, cons, and data from four low-intensity intervention trials, Addict. Behav 18 (5) (1993) 511–527, doi: 10.1016/0306-4603(93)90068-K.
  • [4] Patrick DL, Cheadle A, Thompson DC, Diehr P, Koepsell T, Kinne S, The validity of self-reported smoking: a review and meta-analysis, Am. J. Public Health 84 (7) (1994) 1086–1093.
  • [5] Velicer WF, Prochaska JO, Rossi JS, Snow MG, Assessing outcome in smoking cessation studies, Psychol. Bull 111 (1) (1992) 23–41, doi: 10.1037/0033-2909.111.1.23.
  • [6] Thrul J, Howe CL, Devkota J, Alexander A, Allen AM, Businelle MS, Hébert ET, Heffner JL, Kendzor DE, Ra CK, Gordon JS, A scoping review and meta-analysis of the use of remote biochemical verification methods of smoking status in tobacco research, Nicotine Tob. Res (2022) ntac271, doi: 10.1093/ntr/ntac271. Epub ahead of print.
  • [7] ARPA-H, National Institutes of Health (NIH). Accessed April 3, 2023. https://www.nih.gov/arpa-h.
  • [8] Albert W, Tullis T, Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics, 2nd edition, Morgan Kaufmann, 2013.
  • [9] Valencia S, Callinan L, Shic F, Smith M, Evaluation of the Momba Live Long remote smoking detection system during and after pregnancy: development and usability study, JMIR Mhealth Uhealth 8 (11) (2020) e18809, doi: 10.2196/18809.
  • [10] Dallery J, Meredith S, Jarvis B, Nuzzo PA, Internet-based group contingency management to promote smoking abstinence, Exp. Clin. Psychopharmacol 23 (3) (2015) 176–183, doi: 10.1037/pha0000013.
  • [11] Kendzor DE, Businelle MS, Waring JJC, et al., Automated mobile delivery of financial incentives for smoking cessation among socioeconomically disadvantaged adults: feasibility study, JMIR Mhealth Uhealth 8 (4) (2020) e15960, doi: 10.2196/15960.
  • [12] Medenblik AM, Calhoun PS, Maisto SA, et al., Pilot cohorts for development of concurrent mobile treatment for alcohol and tobacco use disorders, Subst. Abuse 15 (2021) 11782218211030524, doi: 10.1177/11782218211030524.
  • [13] Bloom EL, Japuntich SJ, Pierro A, Dallery J, Leahey TM, Rosen J, Pilot trial of QuitBet: a digital social game that pays you to stop smoking, Exp. Clin. Psychopharmacol 30 (5) (2022) 642–652, doi: 10.1037/pha0000487.
  • [14] Kong G, Goldberg A, Dallery J, Krishnan-Sarin S, An open-label pilot study of an intervention using mobile phones to deliver contingency management of tobacco abstinence to high school students, Exp. Clin. Psychopharmacol 25 (5) (2017) 333–337, doi: 10.1037/pha0000151.
  • [15] Herbec A, Brown J, Shahab L, West R, Lessons learned from unsuccessful use of personal carbon monoxide monitors to remotely assess abstinence in a pragmatic trial of a smartphone stop smoking app—a secondary analysis, Addict. Behav. Rep 9 (2019) 100122, doi: 10.1016/j.abrep.2018.07.003.
  • [16] Dopp AR, Parisi KE, Munson SA, Lyon AR, A glossary of user-centered design strategies for implementation experts, Transl. Behav. Med 9 (6) (2019) 1057–1064, doi: 10.1093/tbm/iby119.
  • [17] HCA Healthcare. Accessed August 1, 2022. https://hcahealthcare.com/.
  • [18] Manta C, Jain SS, Coravos A, Mendelsohn D, Izmailova ES, An evaluation of biometric monitoring technologies for vital signs in the era of COVID-19, Clin. Transl. Sci 13 (6) (2020) 1034–1044, doi: 10.1111/cts.12874.
  • [19] Curry SJ, McBride C, Grothaus LC, Louie D, Wagner EH, A randomized trial of self-help materials, personalized feedback, and telephone counseling with nonvolunteer smokers, J. Consult. Clin. Psychol 63 (6) (1995) 1005–1014, doi: 10.1037//0022-006x.63.6.1005.
  • [20] Mohr DC, Cuijpers P, Lehman K, Supportive accountability: a model for providing human support to enhance adherence to eHealth interventions, J. Med. Internet Res 13 (1) (2011) e30, doi: 10.2196/jmir.1602.
  • [21] May R, Walker F, de Burgh S, Bartrop R, Tofler GH, Pilot study of an internet-based, simulated teachable moment for smoking cessation, J. Smok. Cessat 14 (3) (2019) 139–148, doi: 10.1017/jsc.2018.32.
  • [22] Dahne J, Tomko RL, McClure EA, Obeid JS, Carpenter MJ, Remote methods for conducting tobacco-focused clinical trials, Nicotine Tob. Res 22 (12) (2020) 2134–2140, doi: 10.1093/ntr/ntaa105.
  • [23] Garg R, McQueen A, Wolff J, et al., Comparing two approaches to remote biochemical verification of self-reported cessation in very low-income smokers, Addict. Behav. Rep 13 (2021) 100343, doi: 10.1016/j.abrep.2021.100343.
  • [24] Devine EG, Waters ME, Putnam M, et al., Concealment and fabrication by experienced research subjects, Clin. Trials 10 (6) (2013) 935–948, doi: 10.1177/1740774513492917.
  • [25] Kramer J, Rubin A, Coster W, et al., Strategies to address participant misrepresentation for eligibility in Web-based research, Int. J. Methods Psychiatr. Res 23 (1) (2014) 120–129, doi: 10.1002/mpr.1415.
  • [26] Teitcher JEF, Bockting WO, Bauermeister JA, Hoefer CJ, Miner MH, Klitzman RL, Detecting, preventing, and responding to “fraudsters” in internet research: ethics and tradeoffs, J. Law Med. Ethics 43 (1) (2015) 116–133, doi: 10.1111/jlme.12200.
  • [27] Heffner JL, Watson NL, Dahne J, et al., Recognizing and preventing participant deception in online nicotine and tobacco research studies: suggested tactics and a call to action, Nicotine Tob. Res 23 (10) (2021) 1810–1812, doi: 10.1093/ntr/ntab077.
  • [28] LePine SE, Peasley-Miklus C, Farrington ML, Young WJ, Bover Manderski MT, Hrywna M, Villanti AC, Ongoing refinement and adaptation are required to address participant deception in online nicotine and tobacco research studies, Nicotine Tob. Res 25 (1) (2023) 170–172, doi: 10.1093/ntr/ntac194.
  • [29] NIST Special Publication 800-63A. Accessed July 26, 2022. https://pages.nist.gov/sp800-63a.html.
  • [30] Moggridge B, Designing Interactions, 1st edition, The MIT Press, Cambridge, MA, 2007.
  • [31] ISO 13407:1999, Human-Centred Design Processes for Interactive Systems, International Organization for Standardization. Accessed November 8, 2017. https://www.iso.org/standard/21197.html. Archived at http://www.webcitation.org/6uyJdsDdC.
  • [32] Rubin J, Chisnell D, Spool J, Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests, 2nd edition, Wiley, Indianapolis, IN, 2008.
  • [33] Czajkowski SM, Powell LH, Adler N, et al., From ideas to efficacy: the ORBIT model for developing behavioral treatments for chronic diseases, Health Psychol. 34 (10) (2015) 971–982, doi: 10.1037/hea0000161.
  • [34] Czajkowski SM, Hunter CM, From ideas to interventions: a review and comparison of frameworks used in early phase behavioral translation research, Health Psychol. 40 (12) (2021) 829–844, doi: 10.1037/hea0001095.
  • [35] Onken LS, Carroll KM, Shoham V, Cuthbert BN, Riddle M, Reenvisioning clinical science: unifying the discipline to improve the public health, Clin. Psychol. Sci 2 (1) (2014) 22–34, doi: 10.1177/2167702613497932.
  • [36] Harte R, Glynn L, Rodríguez-Molinero A, et al., A human-centered design methodology to enhance the usability, human factors, and user experience of connected health systems: a three-phase methodology, JMIR Hum. Factors 4 (1) (2017) e8, doi: 10.2196/humanfactors.5443.
  • [37] Hooper D, User-centered design for medical devices: if it isn’t documented it doesn’t exist, mddionline.com. Published September 7, 2012. Accessed July 26, 2022. https://www.mddionline.com/design-engineering/user-centered-design-medical-devices-if-it-isnt-documented-it-doesnt-exist.
  • [38] McWilliams EC, Barbey FM, Dyer JF, et al., Feasibility of repeated assessment of cognitive function in older adults using a wireless, mobile, dry-EEG headset and tablet-based games, Front. Psychiatry 12 (2021) 574482, doi: 10.3389/fpsyt.2021.574482.
  • [39] Biondi A, Laiou P, Bruno E, et al., Remote and long-term self-monitoring of electroencephalographic and noninvasive measurable variables at home in patients with epilepsy (EEG@HOME): protocol for an observational study, JMIR Res. Protoc 10 (3) (2021) e25309, doi: 10.2196/25309.
  • [40] Godfrey A, Vandendriessche B, Bakker JP, et al., Fit-for-purpose biometric monitoring technologies: leveraging the laboratory biomarker experience, Clin. Transl. Sci 14 (1) (2021) 62–74, doi: 10.1111/cts.12865.
  • [41] Office of the Surgeon General (US), Center for Mental Health Services (US), National Institute of Mental Health (US), Mental Health: Culture, Race, and Ethnicity: A Supplement to Mental Health: A Report of the Surgeon General, Substance Abuse and Mental Health Services Administration (US), 2001. Accessed August 20, 2020. http://www.ncbi.nlm.nih.gov/books/NBK44243/.
  • [42] Institute of Medicine (US) Committee on Quality of Health Care in America, Crossing the Quality Chasm: A New Health System for the 21st Century, National Academies Press (US), 2001. Accessed August 20, 2020. http://www.ncbi.nlm.nih.gov/books/NBK222274/.
  • [43] Institute of Medicine (US) Committee on Understanding and Eliminating Racial and Ethnic Disparities in Health Care, Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care, Smedley BD, Stith AY, Nelson AR (Eds.), National Academies Press (US), Washington, DC, 2003.
  • [44] President’s New Freedom Commission on Mental Health: report to the President: roster of Commissioners. Accessed August 24, 2020. https://govinfo.library.unt.edu/mentalhealthcommission/reports/FinalReport/downloads/downloads.html.
  • [45] Demographics of Mobile Device Ownership and Adoption in the United States, Pew Research Center: Internet, Science & Tech, 2020. Accessed June 10, 2020. https://www.pewresearch.org/internet/fact-sheet/mobile/.
  • [46] Perrin A, Turner E, Smartphones Help Blacks, Hispanics Bridge Some – But Not All – Digital Gaps With Whites, Pew Research Center. Published August 20, 2019. Accessed June 21, 2021. https://www.pewresearch.org/fact-tank/2019/08/20/smartphones-help-blacks-hispanics-bridge-some-but-not-all-digital-gaps-with-whites/.
  • [47] Comstock J, With many regulatory barriers clear, remaining roadblocks for digital therapeutics are cultural, logistical, MobiHealthNews. Published September 10, 2020. Accessed October 11, 2020. https://www.mobihealthnews.com/news/emea/many-regulatory-barriers-clear-remaining-roadblocks-digital-therapeutics-are-cultural.
  • [48] Spanakis P, Peckham E, Mathers A, Shiers D, Gilbody S, The digital divide: amplifying health inequalities for people with severe mental illness in the time of COVID-19, Br. J. Psychiatry 219 (4) (2021) 529–531, doi: 10.1192/bjp.2021.56.
  • [49] Benda NC, Veinot TC, Sieck CJ, Ancker JS, Broadband internet access is a social determinant of health!, Am. J. Public Health 110 (8) (2020) 1123–1125, doi: 10.2105/AJPH.2020.305784.
  • [50] Boyle RG, Sharma E, Lauten K, D’Silva J, St Claire AW, Examining use and dual use of tobacco products and marijuana among Minnesota adults, Subst. Use Misuse 56 (11) (2021) 1586–1592, doi: 10.1080/10826084.2021.1936049.
  • [51] Ashford RD, Brown AM, Curtis B, Substance use, recovery, and linguistics: the impact of word choice on explicit and implicit bias, Drug Alcohol Depend 189 (2018) 131–138, doi: 10.1016/j.drugalcdep.2018.05.005.
  • [52] Goergen CJ, Tweardy MJ, Steinhubl SR, et al., Detection and monitoring of viral infections via wearable devices and biometric data, Annu. Rev. Biomed. Eng 24 (2022) 1–27, doi: 10.1146/annurev-bioeng-103020-040136.
  • [53] Glasgow RE, Askew S, Purcell P, et al., Use of RE-AIM to address health inequities: application in a low-income community health center-based weight loss and hypertension self-management program, Transl. Behav. Med 3 (2) (2013) 200–210, doi: 10.1007/s13142-013-0201-8.
  • [54] Williams K, Markwardt S, Kearney SM, et al., Addressing implementation challenges to digital care delivery for adults with multiple chronic conditions: stakeholder feedback in a randomized controlled trial, JMIR Mhealth Uhealth 9 (2) (2021) e23498, doi: 10.2196/23498.
  • [55] Tamura K, Vijayakumar NP, Troendle JF, et al., Multilevel mobile health approach to improve cardiovascular health in resource-limited communities with Step It Up: a randomised controlled trial protocol targeting physical activity, BMJ Open 10 (12) (2020) e040702, doi: 10.1136/bmjopen-2020-040702.
  • [56] Dopp AR, Parisi KE, Munson SA, Lyon AR, Integrating implementation and user-centred design strategies to enhance the impact of health services: protocol from a concept mapping study, Health Res. Policy Syst 17 (2019) 1, doi: 10.1186/s12961-018-0403-0.
  • [57] Abdelazeem B, Hamdallah A, Rizk MA, et al., Does usage of monetary incentive impact the involvement in surveys? A systematic review and meta-analysis of 46 randomized controlled trials, PLoS One 18 (1) (2023) e0279128, doi: 10.1371/journal.pone.0279128.
  • [58] Lynn P, Evaluating push-to-web methodology for mixed-mode surveys using address-based samples, Surv. Res. Methods 14 (1) (2020) 19–30, doi: 10.18148/srm/2020.v14i1.7591.
  • [59] Adams J, Parkinson L, Sanson-Fisher RW, Walsh RA, Enhancing self-report of adolescent smoking: the effects of bogus pipeline and anonymity, Addict. Behav 33 (10) (2008) 1291–1296, doi: 10.1016/j.addbeh.2008.06.004.
  • [60] Etter JF, Neidhart E, Bertrand S, Malafosse A, Bertrand D, Collecting saliva by mail for genetic and cotinine analyses in participants recruited through the Internet, Eur. J. Epidemiol 20 (10) (2005) 833–838, doi: 10.1007/s10654-005-2148-7.
  • [61] Hennrikus DJ, Lando HA, McCarty MC, et al., The TEAM project: the effectiveness of smoking cessation intervention with hospital patients, Prev. Med 40 (3) (2005) 249–258, doi: 10.1016/j.ypmed.2004.05.030.
  • [62] Fix BV, O’Connor R, Hammond D, et al., ITC “spit and butts” pilot study: the feasibility of collecting saliva and cigarette butt samples from smokers to evaluate policy, Nicotine Tob. Res 12 (3) (2010) 185–190, doi: 10.1093/ntr/ntp191.
  • [63] Duffy SA, Ronis DL, Karvonen-Gutierrez CA, et al., Effectiveness of the Tobacco Tactics program in the Trinity Health System, Am. J. Prev. Med 51 (4) (2016) 551–565, doi: 10.1016/j.amepre.2016.03.012.
  • [64] McClure EA, Tomko RL, Carpenter MJ, Treiber FA, Gray KM, Acceptability and compliance with a remote monitoring system to track smoking and abstinence among young smokers, Am. J. Drug Alcohol Abuse 44 (5) (2018) 561–570, doi: 10.1080/00952990.2018.1467431.
  • [65] Gryaznov D, Chammartin F, Stoeckle M, et al., Smartphone app and carbon monoxide self-monitoring support for smoking cessation: a randomized controlled trial nested into the Swiss HIV cohort study, J. Acquir. Immune Defic. Syndr 85 (1) (2020) e8–e11, doi: 10.1097/QAI.0000000000002396.
  • [66] Vogel EA, Ramo DE, Meacham MC, Prochaska JJ, Delucchi KL, Humfleet GL, The Put It Out Project (POP) Facebook intervention for young sexual and gender minority smokers: outcomes of a pilot, randomized, controlled trial, Nicotine Tob. Res 22 (9) (2020) 1614–1621, doi: 10.1093/ntr/ntz184.
  • [67] Schwaninger P, Berli C, Scholz U, Lüscher J, Effectiveness of a dyadic buddy app for smoking cessation: randomized controlled trial, J. Med. Internet Res 23 (9) (2021) e27162, doi: 10.2196/27162.
  • [68] Joyce CM, Saulsgiver K, Mohanty S, et al., Remote patient monitoring and incentives to support smoking cessation among pregnant and postpartum Medicaid members: three randomized controlled pilot studies, JMIR Form. Res 5 (9) (2021) e27801, doi: 10.2196/27801.
  • [69] Meacham MC, Ramo DE, Prochaska JJ, et al., A Facebook intervention to address cigarette smoking and heavy episodic drinking: a pilot randomized controlled trial, J. Subst. Abuse Treat 122 (2021) 108211, doi: 10.1016/j.jsat.2020.108211.
  • [70] Dallery J, Cassidy RN, Raiff BR, Single-case experimental designs to evaluate novel technology-based health interventions, J. Med. Internet Res 15 (2) (2013) e22, doi: 10.2196/jmir.2227.
  • [71] Giacobbi P Jr, Hingle M, Johnson T, Cunningham JK, Armin J, Gordon JS, See Me Smoke-Free: protocol for a research study to develop and test the feasibility of an mHealth app for women to address smoking, diet, and physical activity, JMIR Res. Protoc 5 (1) (2016) e12, doi: 10.2196/resprot.5126.
  • [72] Tan NC, Mohtar Z Bte Mohd, Koh EYL, et al., An exhaled carbon monoxide self-monitoring device linked to social media to support smoking cessation: a proof of concept pilot study, Proc. Singap. Healthc 27 (3) (2018) 187–192, doi: 10.1177/2010105818757257.
  • [73] Nomura A, Tanigawa T, Muto T, et al., Clinical efficacy of telemedicine compared to face-to-face clinic visits for smoking cessation: multicenter open-label randomized controlled noninferiority trial, J. Med. Internet Res 21 (4) (2019) e13520, doi: 10.2196/13520.
  • [74] Vogel EA, Belohlavek A, Prochaska JJ, Ramo DE, Development and acceptability testing of a Facebook smoking cessation intervention for sexual and gender minority young adults, Internet Interv. 15 (2019) 87–92, doi: 10.1016/j.invent.2019.01.002.
  • [75] Aveyard P, Griffin C, Lawrence T, Cheng KK, A controlled trial of an expert system and self-help manual intervention based on the stages of change versus standard self-help materials in smoking cessation, Addiction 98 (3) (2003) 345–354, doi: 10.1046/j.1360-0443.2003.00302.x.
  • [76] Cha S, Ganz O, Cohn AM, Ehlke SJ, Graham AL, Feasibility of biochemical verification in a web-based smoking cessation study, Addict. Behav 73 (2017) 204–208, doi: 10.1016/j.addbeh.2017.05.020.
  • [77] Reynolds B, Harris M, Slone SA, et al., A feasibility study of home-based contingency management with adolescent smokers of rural Appalachia, Exp. Clin. Psychopharmacol 23 (6) (2015) 486–493, doi: 10.1037/pha0000046.
  • [78] Garrison KA, Pal P, O’Malley SS, et al., Craving to quit: a randomized controlled trial of smartphone app-based mindfulness training for smoking cessation, Nicotine Tob. Res 22 (3) (2020) 324–331, doi: 10.1093/ntr/nty126.
  • [79] Cunningham JA, Kushnir V, Selby P, Tyndale RF, Zawertailo L, Leatherdale ST, Effect of mailing nicotine patches on tobacco cessation among adult smokers: a randomized clinical trial, JAMA Intern. Med 176 (2) (2016) 184–190, doi: 10.1001/jamainternmed.2015.7792.
  • [80] Ferketich AK, Pennell M, Seiber EE, et al., Provider-delivered tobacco dependence treatment to Medicaid smokers, Nicotine Tob. Res 16 (6) (2014) 786–793, doi: 10.1093/ntr/ntt221.
  • [81] Stoops WW, Dallery J, Fields NM, et al., An internet-based abstinence reinforcement smoking cessation intervention in rural smokers, Drug Alcohol Depend 105 (1–2) (2009) 56–62, doi: 10.1016/j.drugalcdep.2009.06.010.
  • [82] Simon JA, Solkowitz SN, Carmody TP, Browner WS, Smoking cessation after surgery. A randomized trial, Arch. Intern. Med 157 (12) (1997) 1371–1376.
