Abstract
Objective:
This research evaluated the effects of two methodological factors (i.e., delivery modality and incentives) on attrition, data quality, depth of processing, and perceived value of a personalized normative feedback (PNF) intervention targeting drinking reduction in college students. We expected in-lab (vs. remote) participation would be associated with lower attrition, better data quality, and greater depth of processing and intervention value. We further expected that being offered an incentive (vs. not) would be associated with less attrition and better data quality, but lower depth of processing and intervention value. Finally, we expected depth of processing and intervention value to be related to reductions in drinking among PNF participants.
Method:
Heavy drinking college students (N=498) participated in a 2 (in-person vs. remote delivery) × 2 (incentive [$30 gift card] vs. no incentive) × 2 (PNF vs. attention-control) design. Follow-ups occurred remotely three and six months later; all participants were compensated with a $15 gift card per completed follow-up.
Results:
In-lab participants and those offered an incentive were less likely to drop out of the study. In-lab participants gave higher quality data at baseline and reported greater depth of processing and higher intervention value. PNF was related to reductions in drinking, but depth of processing and intervention value were not, nor was the interaction with PNF.
Conclusion:
Results suggest several benefits of motivating students to come into the lab and a few benefits of offering an incentive, but neither appears to be a necessary requirement for PNF brief interventions to work.
Keywords: Alcohol, personalized normative feedback, college students, remote interventions, web-based interventions
Alcohol Use in College and PNF as a Brief Intervention
Excessive alcohol consumption among college students is widespread and problematic. According to recent reports, 35% of college students report having been drunk in the past 30 days and a third (33%) of students report a heavy drinking episode (5 or more drinks on one occasion) in the previous two weeks (Schulenberg et al., 2020). This is particularly concerning given that heavy episodic drinking is associated with a range of negative outcomes including poor class attendance, hangovers, engaging in risky sexual behavior, sexual assault, eating disorders, depression, trouble with authorities, injuries, and fatalities (Abbey, Clinton-Sherrod, McAuslan, Zawacki, & Buck, 2003; Dunn, Larimer, & Neighbors, 2002; Geisner, Larimer, & Neighbors, 2004; Hingson, Edwards, Heeren, & Rosenbloom, 2009; Kaysen, Neighbors, Martell, Fossos, & Larimer, 2006). To offset the potential immediate and long-term consequences associated with alcohol use among college students, extensive research has focused on the development, evaluation, and dissemination of interventions to curb alcohol consumption and reduce rates of alcohol-related problems among college students (Carey et al., 2012; Cronce et al., 2018; NIAAA, 2002, 2015).
Personalized feedback interventions have emerged as among the most common individually focused, empirically supported interventions for problematic alcohol use in college students. These interventions provide users with personalized information on use/severity profile, risk factors, negative consequences, and normative comparisons (Bandura, 1997; Miller & Rollnick, 2012). Personalized normative feedback (PNF) is a critical component of these interventions and is largely recognized as a robust therapeutic tool in motivating behavior change (Dotson et al., 2015; Reid & Carey, 2015). PNF interventions make use of consistently documented findings that students tend to overestimate the drinking of their peers and that these overestimations are strongly associated with their own drinking (Borsari & Carey, 2003; Neighbors et al., 2007). Specifically, PNF reduces drinking among heavy drinkers by correcting normative misperceptions and by providing an accurate comparison of one’s drinking relative to others (Dotson et al., 2015; Neighbors et al., 2016). PNF interventions have garnered considerable support for reducing problematic alcohol use among college students (e.g., Dotson et al., 2015; Lewis & Neighbors, 2006; Riper et al., 2009; Walters & Neighbors, 2005).
Several methodological factors likely contribute to the efficacy of these interventions. For example, PNF is typically delivered via a web-based platform on a computer. This flexible means of administration allows for both in-lab and remote delivery (Rodriguez et al., 2015). Additionally, monetary incentives, which may impact interest in participation, degree of engagement, and intervention efficacy, have been reported as a method to help recruit and retain PNF participants. Ultimately, both the delivery modality of intervention content (in-lab vs. remote) and incentive structure may affect one’s engagement with and attention to PNF intervention material, which may in turn impact alcohol use outcomes.
Brief Intervention Design Considerations
Delivery Modality
In-lab and remotely administered computer-delivered web-based interventions for alcohol use among college students have become increasingly common since they were first introduced in the late 1990s (Hester & Delaney, 1997; LaBrie et al., 2013; Lewis & Neighbors, 2007; Lewis et al., 2014; Martens et al., 2013; Neighbors et al., 2010; Neighbors et al., 2016; Young & Neighbors, 2019). Computer-based delivery of intervention components offers a cost-effective approach that requires fewer resources, ensures high fidelity, and enables the rapid diffusion and widespread adoption of science-based interventions; this approach appeals to both research scholars and clinicians and will likely lead to the greatest public health impact (Carroll et al., 2008). Notably, this approach has also permitted intervention delivery flexibility such that intervention materials can be administered in a designated research setting (e.g., a research lab) or a setting of the participants’ choosing (e.g., a dorm room).
Theoretical and empirical work suggests that computer-based interventions are more efficacious when delivered in person than when delivered remotely. For example, justification of effort in the context of cognitive dissonance theory describes a phenomenon whereby individuals tend to place more value upon things that require greater cost or more effort to obtain (Aronson & Mills, 1959; Festinger, 1957; Harmon-Jones & Mills, 2019). As interventions administered in the lab require more effort and planning on the part of intervention recipients, this theoretical model would support greater efficacy for in-lab versus remotely-delivered PNF. Additionally, empirical support has been demonstrated for behavioral change following interventions that require more effort. Specifically, participants assigned to a three-week higher-effort weight loss intervention lost more weight than participants whose weight-loss program required less effort (Axsom & Cooper, 1985). In the context of computer-delivered interventions, attending an in-lab session requires more effort than completing the session remotely. Additional effort and planning for receiving a web-based intervention in person may include scheduling an appointment, negotiating travel/parking, finding a specific unfamiliar location, and interacting with research staff. Receiving a web-based intervention remotely requires comparatively little effort.
Web-based interventions delivered in a designated location also may be accompanied by other characteristics that facilitate greater effort, engagement, and attention relative to remote delivery. For example, these locations are designed to ensure that students complete interventions privately, without unexpected interruptions, noise, or other distractions. Interventions completed remotely may be completed at any hour, with no structure to limit distractions. The absence of a person affiliated with the intervention in the case of remote delivery may also reduce any sense of accountability to take the intervention seriously. Indeed, most students (62.3%) report simultaneously engaging in another activity (e.g., watching television; texting; emailing; browsing websites), and about half of these report engaging in multiple activities, while participating in a remotely-delivered web-based PNF intervention (Lewis & Neighbors, 2015). The differential effect of PNF intervention delivery location on engagement and processing of intervention materials likely translates to differences in alcohol-related outcomes.
While some evidence suggests an advantage for personalized feedback interventions delivered in a lab setting compared to those delivered remotely (Rodriguez et al., 2015), empirical studies directly comparing the effectiveness of a web-based intervention delivered in-lab to that of the same intervention delivered remotely are lacking. The absence of direct comparisons between in-lab and remotely administered web-based interventions is a critical gap in the literature, as no firm conclusions can be drawn about the potential disparity in the efficacy of PNF interventions delivered in different settings. If the effects of web-based interventions are significantly weaker when delivered remotely versus in person in the context of randomized controlled trials, their likely reduced impact when implemented in real-world settings outside of the research context is important knowledge for university stakeholders and policymakers.
Incentive Effects
Beyond the location in which interventions are received, incentive structure is another design characteristic that may impact the results of research evaluating web-based feedback interventions for drinking. Drawing from motivational theories of behavior change such as cognitive evaluation theory (CET; Deci et al., 1975; Deci & Ryan, 1985), external rewards, such as financial incentives, can decrease intrinsic motivation (Quoidbach et al., 2010; Tang & Hall, 1995) and thereby result in less engagement with intervention materials, diminished treatment response, and lower value placed on the intervention (Deci et al., 1999). This theoretical model may explain why incentives can result in less effective treatments. That is, making and sustaining behavior change likely requires internal effort and drive (i.e., intrinsic motivation); as such, incentives may not provide the elements necessary to promote behavior change such as reduced alcohol use. Further, in the event that an incentive results in initial behavior change, these gains are unlikely to be maintained long-term once the financial incentive is removed. In the context of PNF intervention engagement and participation, incentivized participants may use their incentive as justification for their participation with or without actual behavior change, whereas those who are not incentivized may have to generate an alternative justification (e.g., potential health benefits).
As such, incentives may be paramount to initial treatment engagement but may not translate to maintained reductions in alcohol use over time. Indeed, initial work has demonstrated that college students are more likely to participate in a PNF treatment when monetary incentives are offered (Neighbors et al., 2018). Although promising, different incentive structures, and specifically non-incentivized participation, have the potential to introduce participation bias that may obscure findings (Hsieh & Kocielnik, 2016; Sharp, Pelletier, & Lévesque, 2006), particularly among college students. For example, given that college students are more apt to participate when an incentive is present, those who participate without an incentive may exhibit characteristics, such as personality traits and drinking behaviors, that are not representative of the larger college student population (DeCamp & Manierre, 2016). Thus, potential inherent differences between incentivized and non-incentivized participation in a PNF intervention may contribute to differential treatment effects. Yet, the extent to which incentives affect outcomes of a PNF intervention over time remains to be empirically tested.
Possible Outcomes of Design Considerations
Attrition
Study attrition is a major concern in clinical research (Flick, 1988). Alcohol intervention studies targeting young adults have an average attrition rate of about 23% with an average follow-up of about five months (Tanner-Smith & Lipsey, 2015). Such high rates of attrition carry serious implications for data analysis, interpretation, and conclusions that can be inferred from data (Marcellus, 2004). Emerging work has suggested that intervention delivery location and incentive structure may be critical factors related to study attrition (Neighbors et al., 2018). In the case of intervention delivery, transportation, advance planning, and other logistical factors for in-lab intervention administration may contribute to increased participant dropout, whereas the ease and convenience of remote delivery may promote retention. Conversely, justification of effort would suggest that the increased effort required to come into the lab may result in lower dropout. Even further, dropout may not differ as a function of modality. One recent ecological momentary assessment (EMA) study evaluated whether participants who chose to complete EMA reports online would evince different completion rates over time compared to participants who chose to complete their reports in person (Carr et al., 2020). There were no differences in response rates as a function of modality, and although event-contingent response rates fell sharply over time, they were not different by modality.
Additionally, monetary incentives serve to promote retention and improve follow-up rates in randomized controlled trials (Bower et al., 2014; Khadjesari et al., 2011). Given critical profile differences between those who complete versus those who drop out of intervention studies (i.e., those lost to follow-up tend to be younger, heavier drinkers, and less educated or not enrolled in college; Edwards & Rollnick, 1997; Suffoletto et al., 2014), scientific work is needed to better understand how methodological factors relate to attrition within the context of a PNF intervention for college students. This work is needed to help inform appropriate intervention methods that will limit inherent biases that may present in data where attrition is high and improve generalizability of findings that can directly inform university alcohol control policies.
Data Quality
Careless responding (i.e., answering questions without paying full attention), also referred to as satisficing, is relatively common in research with college student samples, with one study showing 10–12% of responses qualifying as careless (Meade & Craig, 2012). As responding to surveys requires increased cognitive effort, participants may be inclined to respond in a manner that minimizes cognitive effort at the cost of response accuracy. Careless responding can be detected by attention check items that instruct the participant to respond with a specific answer to an item. A response other than what is specified is indicative of careless responding (Meade & Craig, 2012). High rates of careless responding impact the integrity of the data, bring into question the validity of the findings, and interfere with treatment efficacy. For example, one study that assessed attentiveness to a remotely-delivered PNF intervention found that the intervention was more effective at reducing drinks consumed per week at three-month follow-up for individuals who were more attentive to the feedback (Lewis & Neighbors, 2014). The noise introduced with careless responding also reduces experimental power, therefore requiring experimenters to allocate more resources to ensure adequate power to detect effects (Oppenheimer et al., 2009). The extent to which methodological factors may be relevant to data quality in the context of a PNF intervention, however, remains unknown.
Justification of effort may be an important theoretical consideration. Individuals who participate at their own convenience (i.e., remotely) do not need to justify the relatively little effort needed to engage in the study and therefore are less likely to focus and attend to the intervention material. Alternatively, designs requiring more effort (i.e., attending an in-lab session) may lead participants to subconsciously justify the effort by inferring increased internal motivation and therefore respond more carefully. Similarly, as stated above, lower intrinsic motivation resulting from being incentivized for participation might contribute to both decreased intervention effects and lower data quality, as there is limited internal motivation to make behavior change or attend to information that may assist with behavior change. Associations between low motivation and careless responding or satisficing have been reported (Krosnick, 1991). We also believe that data quality may be enhanced among participants completing the study in the lab because this environment offers fewer distractions and primes a research mindset. It is unknown how incentives will affect the quality of the data; while it is possible that participants may provide better data to the extent that they are being incentivized for it, it is also possible that participants may believe they will be incentivized regardless of data quality, and so provide lower quality data. In this study, we explore careless responding as a function of both where the intervention was delivered and whether incentives were offered.
Depth of Processing and Value of Intervention Material
Depth of processing refers to how one cognitively interprets, assimilates, and encodes information. Depth of processing is a critical component of treatment efficacy (Harvey et al., 2014), with greater depth resulting in more strongly encoded information that is likely to have a stronger effect on behavior. In the context of a PNF intervention, participants who complete the intervention in the lab would theoretically be more likely to attribute their motives for participation to genuine interest and value to the treatment (Floyd et al., 2009), which would facilitate more thoughtful and deliberate consideration of the intervention content. In contrast, participants who complete the intervention at any convenient time or location with access to a computer will not need to justify the little effort it takes, and therefore would be less likely to view the experience as important, valuable, and/or worthy of full engagement. In short, the expense of effort for participating in an in-lab intervention should cause participants to place more value on the intervention experience relative to those who expend less effort by participating remotely. It is also possible that participants completing a study in the lab value the intervention more or pay more attention to the material because of the reduced distractions or due to the research environment (e.g., research assistants, the laboratory setting).
Current Research
Grounded in cognitive evaluation theory, this research evaluates two factors that influence brief interventions from a methodological perspective – intervention delivery remotely versus in-lab and the presence of incentives – on attrition, data quality, depth of processing, perceived intervention value, and ultimately, on intervention efficacy. We derived hypotheses based on findings from previous studies. Specifically, we hypothesized that participating in the lab (vs. remotely) and being offered an incentive (vs. not) would be associated with less attrition from the study (Aim 1; Hypotheses 1a and 1b, respectively) and higher quality data (Aim 2; Hypotheses 2a and 2b). In other words, students who participate in the lab and who are offered money to participate should be more likely to stay in the study and should answer more quality check questions correctly.
Consistent with justification of effort, we expected that participating in the lab and not being offered an incentive would be associated with greater depth of processing of intervention material (Aim 3; Hypotheses 3a and 3b) and greater perceived value of the intervention (Aim 3; Hypotheses 4a and 4b). In other words, coming into the lab and not being offered money to participate should lead participants to process the information at a deeper level and to more highly value the feedback they receive. Ultimately, we hypothesized that only among PNF participants, greater depth of processing and intervention value would be associated with greater reductions in drinking (Aim 3; Hypotheses 5a and 5b). We expected this because the personalized normative feedback that participants receive in the PNF condition is explicitly related to correcting misperceptions around alcohol use, whereas attention control feedback does not discuss drinking. Our conceptual model for Aim 3 is presented in Figure 1.
Figure 1.
Aim 3: Evaluating condition effects on depth of processing and intervention value, and whether depth of processing and intervention value are associated with reduced drinking differentially among PNF participants. Covariates (not included in the image for parsimony) include a baseline drinking latent variable and gender.
Method
Participants
Individuals were recruited from a large, public university in the southern United States. To participate in the study, individuals had to be 18–26 years old and report at least one heavy drinking episode (4/5+ drinks per occasion for a woman/man) and at least one negative alcohol consequence in the last month. A total of 498 college students (56.9% female) aged 18–26 (M = 21.74, SD = 2.28) participated. Participants’ racial backgrounds included: 47.1% White/Caucasian, 0.6% American Indian/Alaskan Native, 10.0% Black/African American, 22.9% Asian, 0.2% Native Hawaiian/Pacific Islander, 10.2% Multi-Ethnic, and 9.0% Other. Further, 32.8% of the sample identified as Hispanic/Latinx.
Procedure
The study consisted of a brief, web-based intervention with a 2 (personalized normative feedback [PNF] intervention vs. attention control) × 2 (intervention delivery in-lab vs. remote) × 2 (incentive offered vs. no incentive offered) design. In the PNF condition, participants received standard PNF feedback, which presents participants’ own drinking data (e.g., drinking frequency, typical amount consumed) from their baseline survey alongside their perceived norms and actual normative data from same-sex peers at their university to identify discrepancies in their perceived norms (and their own drinking) compared to actual data provided by peers. In the attention control condition, participants received personalized normative feedback on their texting, video game playing, and music downloading behaviors.
Recruitment materials and informed consent documents were different based on assigned condition for the delivery modality and incentive variables. After screening into the study, participants in the remote condition were directed to the survey link to complete baseline and the intervention assessment, while participants in the in-lab condition were directed to a scheduler to choose a time to complete their baseline and intervention session. Additionally, participants in the incentive condition were offered and received a $30 gift card after completing the baseline assessment and intervention session. Importantly, participants in the “no incentive” condition did not know that they would receive compensation. In other words, recruitment materials did not mention an incentive for the baseline and intervention procedure. Instead, participants were told in the consent form, “In exchange for your participation, you will receive feedback about your health-related behaviors as compared to other students at no cost.” Following the baseline and intervention session, participants were invited to complete two remote follow-up assessments occurring three and six months later. All participants received $15 gift cards for completing each of the follow-up assessments. Although they were not told they would receive it, participants in the no-incentive condition received a $30 gift card upon study completion. They were provided this regardless of follow-up assessment completion. All procedures were approved by the university’s Institutional Review Board.
Measures
Demographics.
Participants were asked to report basic demographic information such as their age, sex, ethnicity, and racial background.
Data quality (careless responding).
Informed by guidelines for online survey assessments (Meade & Craig, 2012), three attention check questions were presented throughout each assessment to ensure that participants were paying attention. An example item is, “In order to answer this question correctly, please select ‘More than 10 times.’” We coded whether participants missed any attention check questions (0=did not answer any incorrectly, 1=answered at least one incorrectly) at each timepoint; higher scores represent more careless responding or lower quality data.
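The binary coding scheme described above can be sketched as follows. This is a minimal illustration, assuming hypothetical item wording and response values, not the study's actual materials.

```python
# Minimal sketch of the careless-responding indicator described above.
# Attention-check items and "instructed" answers are hypothetical.

def careless_flag(responses, instructed):
    """1 = answered at least one attention check incorrectly, 0 = none missed."""
    return int(any(r != c for r, c in zip(responses, instructed)))

# Three attention checks at one timepoint; `instructed` holds the answers
# that each item's instructions told the participant to select.
instructed = ["More than 10 times", "Never", "Strongly agree"]
attentive_participant = ["More than 10 times", "Never", "Strongly agree"]
careless_participant = ["More than 10 times", "2-3 times", "Strongly agree"]

print(careless_flag(attentive_participant, instructed))  # 0
print(careless_flag(careless_participant, instructed))   # 1
```

A participant is flagged at a timepoint if any of the three checks at that timepoint is missed, matching the 0/1 coding used in the analyses.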
Depth of processing.
Depth of processing of the feedback was assessed immediately following the intervention content (i.e., post-intervention). Depth of processing of the intervention content was assessed with seven items developed by the researchers. These items measured the extent to which participants were paying attention to the feedback that was presented. This assessment contained one open-ended question, asking participants to list what they remembered about the feedback. The remaining items consisted of questions about how the feedback affected their thoughts about their drinking. The seven items were, “I was attentive when viewing the personalized feedback”, “I was distracted when viewing the personalized feedback” (reverse scored), “The personalized feedback I viewed online made me think about my drinking”, “The personalized feedback I received reduced the amount I intend to drink”, “I liked the personalized feedback I received”, “The personalized feedback I received seemed accurate”, and “I found the personalized feedback I received compelling.” Response options were Strongly disagree (1) to Strongly agree (7). We removed two items related to drinking to ensure that both the PNF and attention control conditions were responding to the material presented in the intervention. This resulted in five items (α = .72).
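As a hedged sketch, the reverse scoring on a 1–7 scale and the Cronbach's alpha computation behind reliability figures such as the α = .72 above can be written as follows; the response matrix is fabricated for illustration only.

```python
# Sketch of reverse scoring and internal-consistency scoring; data fabricated.

def reverse_7pt(score):
    """Reverse-score an item on a 1-7 Likert scale (1 <-> 7, 2 <-> 6, ...)."""
    return 8 - score

def cronbach_alpha(items):
    """Cronbach's alpha: items is one list of scores per item, equal lengths."""
    k, n = len(items), len(items[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(col) for col in items) / var(totals))

# Five retained items, four fabricated respondents; the "distracted" item
# is reverse scored before entering the scale.
distracted_raw = [2, 1, 3, 2]
items = [
    [6, 7, 5, 6],                              # attentive
    [reverse_7pt(x) for x in distracted_raw],  # 6, 7, 5, 6 after reversal
    [5, 6, 4, 5],
    [6, 6, 5, 6],
    [5, 7, 4, 5],
]
print(round(cronbach_alpha(items), 2))  # 0.96 for this fabricated matrix
```

The alpha here is high only because the fabricated columns are nearly parallel; it is not the study's value.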
Intervention value.
Perceived value of the intervention was also assessed immediately post-intervention. Value of the intervention was assessed with items the researchers developed specifically for this project. Seven items probed the extent to which individuals saw benefit in the information presented in the intervention. Items included: “I am glad to have learned this information,” “I found this information to be valuable,” “This information was very useful for my own knowledge,” “I do not find this information to be helpful” (reverse scored), “I do not find this information to be accurate” (reverse scored), “The feedback does not relate to my drinking” (reverse scored), and “I think others would find this information valuable.” Response items ranged from Strongly disagree (1) to Strongly agree (7) (α = .89).
Drinking.
Alcohol use was assessed at baseline and follow-up in five ways: three assessing consumption and two assessing alcohol-related consequences. First, the Timeline Follow-back (TLFB; Sobell & Sobell, 1992) was used to assess alcohol use over the prior 30 days. The number of drinks consumed in the past month was summed to create a total past-month drinks score. Second, the Daily Drinking Questionnaire (DDQ; Collins et al., 1985) assessed the average number of drinks consumed each day of the week over the previous three months. Responses were summed to create a total of the average number of drinks consumed per week. Third, the Quantity/Frequency/Peak Alcohol Use Index (Dimeff, 1999) measured peak drinks, or the number of drinks consumed during an individual’s heaviest drinking occasion in the past month. Response options for the number of peak drinks consumed range from 0 to 25+ drinks. Consequences related to alcohol use were assessed using the Brief Young Adult Alcohol Consequences Questionnaire (BYAACQ; Kahler, Strong, & Read, 2005) and the Rutgers Alcohol Problem Index (RAPI; White & Labouvie, 1989). The BYAACQ asks participants to indicate whether they have experienced any of the 24 consequences listed. An example item is, “I have taken foolish risks when I have been drinking” (α = .85 for baseline and α = .90 for 3-month follow-up). Three additional items assessing common consequences were adapted from the Young Adult Alcohol Problems Screening Test (YAAPST; Hurlbut & Sher, 1992) and added to the BYAACQ. Responses are coded 0 for No and 1 for Yes and then a summed score is created. The RAPI consists of 23 items. Two additional items were included and assessed the frequency of driving after 2 and after more than 4 drinks. Participants were asked how frequently they experienced a range of negative consequences as a result of their drinking. An example item is, “neglected your responsibilities.” Response options ranged from “Never” (coded 0) to “More than 10 times” (coded 4). Reliability was α = .90 for baseline and α = .92 for 3-month follow-up.
Analysis Plan
Analytic techniques differed based on aim. However, in all models, gender (coded 0=female, 1=male) and PNF condition (0=attention control, 1=PNF) were included as covariates. Additionally, in all models, delivery modality was coded 0 (In-lab) and 1 (Remote) and incentive was coded 0 ($0) and 1 ($30).
For tests on attrition (Aim 1), we were interested in how delivery modality and incentives were associated with dropout from the study at three and six months. We expected that those who completed the study in the lab and those who were offered an incentive would be less likely to drop out (i.e., more likely to complete the follow-up surveys). For these analyses, all participants who completed the baseline assessment were included in the models. A multilevel logistic regression model was used to evaluate hypotheses related to attrition which specified dropout (at 3- and 6-month follow-up) as the outcome. Because dropout was operationalized as a multilevel variable, each person had a score for whether they dropped out of each of the two follow-ups (coded 0=completed, 1=did not complete).
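The person-within-wave data structure implied above can be sketched as a long-format reshape, with two rows per participant (one per follow-up). Field names here are hypothetical illustrations, not the study's actual variable names.

```python
# Sketch: long ("multilevel") layout for the dropout outcome.
# Field names are hypothetical.

def to_long(participants):
    rows = []
    for p in participants:
        for wave in (3, 6):  # follow-up months
            rows.append({
                "pid": p["pid"],
                "wave": wave,
                "remote": p["remote"],        # 0 = in-lab, 1 = remote
                "incentive": p["incentive"],  # 0 = $0, 1 = $30
                # 0 = completed this follow-up, 1 = dropped out of it
                "dropout": 0 if wave in p["completed"] else 1,
            })
    return rows

sample = [
    {"pid": 1, "remote": 0, "incentive": 1, "completed": {3, 6}},
    {"pid": 2, "remote": 1, "incentive": 0, "completed": {3}},
]
long_rows = to_long(sample)
print(len(long_rows))  # 4 rows: 2 participants x 2 waves
```

A multilevel logistic regression would then model the binary `dropout` rows with a random effect for `pid` to account for the two observations per person.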
Hypotheses around Aim 2 were that participants who completed the study in the lab and who were offered an incentive would provide higher quality data (i.e., these students would be more likely to answer attention check questions correctly). We coded whether participants missed any attention check questions (0=did not answer any incorrectly, 1=answered at least one incorrectly) at each timepoint. Because both follow-up timepoints were completed remotely by all participants, and because all participants were compensated for follow-up assessments, time was coded 0 (Baseline) and 1 (3- and 6-month follow-up). Whether participants missed any attention check questions served as the outcome variable in multilevel logistic regression models. Specifically, we examined main effects of delivery modality and incentive on the likelihood of missing any attention check questions as well as whether these effects changed over time. In the cases of significant delivery modality or incentive × time interactions, we followed up by testing marginal effects (e.g., differences between in-lab and remote at baseline and follow-up; Long & Freese, 2006).
Objectives around Aim 3 were twofold: first, to examine effects of delivery modality and incentive on depth of processing of intervention content and perceived value of the intervention; and second, to evaluate how these are associated with changes in drinking from baseline to three-month follow-up and whether these associations were moderated by PNF condition. We used generalized structural equation modeling (SEM) to evaluate hypotheses. This conceptual model is displayed in Figure 1. We created latent drinking variables for both baseline and follow-up drinking using all five observed drinking variables (i.e., TLFB, drinks per week, peak drinks, brief YAACQ, and RAPI). We then specified an SEM path model wherein the three study independent variables (PNF, delivery modality, and incentive) were predictors of depth of processing and intervention value, which were then specified as predictors of 3-month latent follow-up drinking. We also included direct effects and the depth of processing × PNF and intervention value × PNF interactions to test Hypotheses 5a/5b. Interaction constituent variables were centered. Gender and baseline drinking were included as covariates.
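The centering step for the interaction constituents can be sketched as follows; the scores below are fabricated and the variable names are hypothetical.

```python
# Sketch: mean-center interaction constituents before forming product terms,
# as done for the depth of processing x PNF interaction. Data are fabricated.

def center(xs):
    """Subtract the sample mean so the scores are deviations from the mean."""
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

depth = [4.0, 5.5, 6.0, 3.5, 5.0]  # fabricated depth-of-processing scores
pnf = [0, 1, 1, 0, 1]              # 0 = attention control, 1 = PNF

depth_c = center(depth)
depth_x_pnf = [d * p for d, p in zip(depth_c, pnf)]  # product term for the model
print(round(sum(depth_c), 10))  # 0.0 (centered scores sum to zero)
```

Centering the constituents before forming products reduces nonessential collinearity between the main-effect and interaction terms in the SEM.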
Results
Descriptive Statistics and Baseline Differences
We tested for differences in demographic characteristics and baseline alcohol consumption and problems across conditions. No differences emerged in race (χ2(42)=40.73, p=.527) or age (t=.58, p=.561). No differences emerged for baseline alcohol consumption or the RAPI measure of consequences; however, participants in the remote condition reported fewer consequences via the BYAACQ than participants in the in-person condition (Z=−2.27, p=.023).
Aim 1: Effects of Delivery Modality and Incentives on Attrition
We first examined rates of attrition in the study. Of the 498 participants who completed baseline, 418 participants (83.9%) completed at least one follow-up timepoint. Further examination revealed that 333 participants (66.9%) completed both follow-up timepoints, 85 participants (17.1%) completed one follow-up, and 80 participants did not complete either follow-up (16.1%).
To evaluate how the study conditions affected follow-up completion, we conducted multilevel logistic regression analysis testing completion of follow-ups as a function of delivery modality and incentive. Results are presented in Table 1 and revealed significant main effects of both delivery modality and incentive, suggesting that those completing the intervention remotely were more likely to drop out and those offered an incentive were less likely to drop out. In support of Hypothesis 1a, participants were 5.4 times more likely to drop out if they completed the baseline and intervention procedure remotely compared to in the lab. Additionally, supporting Hypothesis 1b, participants were more likely to complete follow-ups if they were offered an incentive at baseline; specifically, participants offered an incentive were 2.89 (i.e., 1/.346) times more likely to complete the follow-up surveys.
Table 1.
Aim 1: Multilevel Logistic Regression Model Evaluating Delivery Modality and Incentive Effects on Study Dropout
| | b | SE(b) | Z | p | OR | OR LLCI | OR ULCI |
|---|---|---|---|---|---|---|---|
| Gender | .847 | .388 | 2.19 | .029 | 2.333 | 1.091 | 4.988 |
| PNF | −.313 | .379 | −.83 | .409 | .731 | .348 | 1.536 |
| Delivery Modality | 1.689 | .403 | 4.19 | <.001 | 5.415 | 2.457 | 11.933 |
| Incentive | −1.061 | .383 | −2.77 | .006 | .346 | .163 | .734 |
Note. Outcome = Study Dropout: Coded 0 (did not drop out) or 1 (dropped out). OR = odds ratio. LLCI = Lower level 95% confidence interval. ULCI = Upper level 95% confidence interval. PNF is coded 0=attention control, 1=PNF. Delivery modality is coded 0=In-lab, 1=Remote; Incentive is coded 0=$0, 1=$30; Gender is coded 0=Woman, 1=Man. Significant effects are bolded.
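The odds ratios reported above follow directly from the Table 1 log-odds coefficients, as a quick arithmetic check shows:

```python
import math

# Log-odds coefficients for dropout, from Table 1
b_modality = 1.689    # remote vs. in-lab
b_incentive = -1.061  # incentive vs. no incentive

or_modality = math.exp(b_modality)    # dropout odds ratio, remote vs. in-lab
or_incentive = math.exp(b_incentive)  # dropout odds ratio, incentive vs. none

print(round(or_modality, 2))          # 5.41: remote participants' higher dropout odds
print(round(1 / or_incentive, 2))     # 2.89: inverted, odds of completing follow-ups
```

Inverting an odds ratio below 1 (here 1/.346) simply re-expresses the same effect in the opposite direction, which is why the text reports incentivized participants as 2.89 times more likely to complete follow-ups.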
Aim 2: Effects of Delivery Modality and Incentives on Data Quality
Multilevel logistic regression models were used to evaluate effects of delivery modality and incentive on the likelihood of answering attention check questions incorrectly (i.e., careless responding) over time. We then examined whether the effects of delivery modality and incentive were moderated by (i.e., different over) time. This is important because delivery modality and incentive design differences were only present at baseline (i.e., all follow-up assessments were remote and all participants were compensated the same amount for them). Results are presented in Table 2 and reveal a significant main effect of time, suggesting that overall, participants were more likely to answer attention check questions incorrectly at follow-up compared to baseline. This is consistent with expectations given that both follow-up assessments were completed remotely by all participants. Additionally, there was a significant main effect of delivery modality, which suggests that overall, people in the remote condition were more likely to answer attention check questions incorrectly.
Table 2.
Aim 2: Multilevel Logistic Regression Model Evaluating Delivery Modality and Incentive Effects over Time on Data Quality
| | b | SE(b) | Z | p | LLCI | ULCI |
|---|---|---|---|---|---|---|
| Gender | .265 | .268 | .99 | .324 | −.261 | .790 |
| PNF | −.014 | .264 | −.05 | .957 | −.533 | .504 |
| Time | 1.266 | .345 | 3.67 | <.001 | .590 | 1.941 |
| Incentive | −.041 | .354 | −.12 | .908 | −.735 | .653 |
| Delivery modality | 1.093 | .364 | 3.00 | .003 | .379 | 1.808 |
| Incentive × time | .564 | .381 | 1.48 | .139 | −.184 | 1.311 |
| Delivery modality × time | −1.071 | .385 | −2.78 | .005 | −1.825 | −.317 |
Note. LLCI = Lower level 95% confidence interval. ULCI = Upper level 95% confidence interval. PNF is coded 0=attention control, 1=PNF. Delivery modality is coded 0=In-lab, 1=Remote; Incentive is coded 0=$0, 1=$30; Gender is coded 0=Woman, 1=Man. Time is coded 0=Baseline, 1=Follow-up (3- and 6-month). Significant effects are bolded.
Importantly, there was a significant delivery modality × time interaction, which is illustrated in Figure 2. Marginal effects revealed a significant difference between in-lab and remote participants at baseline, b=.103, p=.002, but not at follow-up, b=.003, p=.939. This difference at baseline supports Hypothesis 2a. Neither the main effect of incentive nor the incentive × time interaction was significant; thus, Hypothesis 2b was not supported.
Figure 2.
Aim 2: Delivery modality × time interaction on data quality (i.e., the likelihood of answering any attention check questions incorrectly).
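The shape of this interaction can be read off the Table 2 coefficients. Note these are log-odds gaps, not the probability-scale marginal effects reported in the text (which require the model intercept), but the pattern is the same: a sizable remote disadvantage at baseline that essentially vanishes at follow-up.

```python
# Log-odds coefficients from Table 2
b_modality = 1.093        # remote vs. in-lab, at baseline (time = 0)
b_mod_x_time = -1.071     # delivery modality x time interaction

gap_baseline = b_modality                  # remote-minus-in-lab gap at baseline
gap_followup = b_modality + b_mod_x_time   # same gap at follow-up (time = 1)

print(round(gap_baseline, 3))   # 1.093: remote participants miss more checks
print(round(gap_followup, 3))   # 0.022: gap closes once everyone is remote
```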
Aim 3: Depth of Processing and Value and Effects on Drinking
In Aim 3, we explored depth of processing related to intervention content and perceived intervention value. We expected that those completing the intervention in the lab and without an incentive would report greater depth of processing and intervention value (Hypotheses 3a/3b and 4a/4b). We also expected that associations of depth of processing and intervention value with changes in drinking would be moderated by PNF condition, such that depth of processing and intervention value would be related to reductions in drinking only among participants who received PNF (Hypotheses 5a/5b). The conceptual model is presented in Figure 1.
The drinking latent variables held together well, with all indicators loading significantly on the latent factors for both baseline and follow-up drinking. Table 3 provides zero-order correlations among study variables, including the observed drinking variables at both timepoints, depth of processing, and intervention value. Interestingly, only drinks per week at baseline was negatively correlated with intervention value.
Table 3.
Correlations among Study Variables
| | 1. | 2. | 3. | 4. | 5. | 6. | 7. | 8. | 9. | 10. | 11. | 12. | 13. | 14. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. 3M Dropout | ||||||||||||||
| 2. 6M Dropout | .54*** | |||||||||||||
| 3. BL TLFB | .06 | .13** | ||||||||||||
| 4. BL Drinks per Week | .10* | .17*** | .70*** | |||||||||||
| 5. BL Peak Drinks | .09* | .14** | .56*** | .56*** | ||||||||||
| 6. BL BYAACQ | .02 | .09 | .51*** | .45*** | .34*** | |||||||||
| 7. BL RAPI | .07 | .13** | .44*** | .41*** | .21*** | .71*** | ||||||||
| 8. PI Depth of Processing | .04 | −.04 | .06 | .00 | .03 | .06 | .01 | |||||||
| 9. PI Intervention Value | .01 | −.07 | −.01 | −.10* | −.01 | .08 | .01 | .61*** | ||||||
| 10. 3M TLFB | -- | .12* | .63*** | .55*** | .40*** | .37*** | .36*** | .01 | −.06 | |||||
| 11. 3M Drinks per Week | -- | .05 | .59*** | .53*** | .37*** | .36*** | .33*** | −.03 | −.09 | .79*** | ||||
| 12. 3M Peak Drinks | -- | .13* | .53*** | .43*** | .48*** | .31*** | .25*** | .05 | −.05 | .72*** | .66*** | |||
| 13. 3M BYAACQ | -- | .20*** | .45*** | .38*** | .28*** | .56*** | .48*** | −.08 | −.08 | .61*** | .53*** | .54*** | ||
| 14. 3M RAPI | -- | .22*** | .44*** | .41*** | .29*** | .45*** | .51*** | −.08 | −.05 | .58*** | .58*** | .50*** | .76*** | |
| Mean (N/% for Dropout) | 118/24% | 127/26% | 24.41 | 9.05 | 6.81 | 6.48 | 6.10 | 4.63 | 5.02 | 20.18 | 6.79 | 5.55 | 4.43 | 4.07 |
| SD | -- | -- | 22.54 | 7.49 | 3.68 | 4.71 | 6.90 | 1.16 | 1.15 | 22.03 | 7.03 | 3.97 | 4.79 | 6.02 |
| Range | 0–1 | 0–1 | 0–147 | 0–50 | 0–25 | 0–25 | 0–48 | 1–7 | 1–7 | 0–147 | 0–50 | 0–25 | 0–27 | 0–49 |
Note. Dropout was coded 0=completed follow-up; 1=did not complete follow-up. 3M=3 month. 6M=6 month. BL=Baseline. PI=Post-intervention. TLFB=Timeline follow-back. BYAACQ=Brief Young Adult Alcohol Consequences Questionnaire. RAPI=Rutgers Alcohol Problem Index.
* p < .05.
** p < .01.
*** p < .001.
Table 4 presents results from the SEM model. Results supported Hypotheses 3a and 4a by indicating that participants reported lower depth of processing, b = −.453, p < .001, and intervention value, b = −.495, p < .001, when completing the intervention remotely. However, incentive was not associated with depth of processing or intervention value, indicating no support for Hypotheses 3b and 4b. Interestingly, while not hypothesized, PNF participants reported greater depth of processing and intervention value compared to control.
Table 4.
Aim 3 Generalized Structural Equation Model
| Outcome | Predictor | b | SE(b) | Z | p |
|---|---|---|---|---|---|
| Depth of Processing (Mediator 1) | BL Drinking | .000 | .003 | .01 | .990 |
| | Gender | .253 | .126 | 2.03 | .045 |
| | PNF | .253 | .122 | 2.08 | .037 |
| | Delivery modality | −.466 | .119 | −3.91 | <.001 |
| | Incentive | −.155 | .119 | −1.31 | .192 |
| Intervention Value (Mediator 2) | BL Drinking | −.002 | .003 | −.52 | .603 |
| | Gender | −.126 | .132 | −.96 | .337 |
| | PNF | .385 | .127 | 3.03 | .002 |
| | Delivery modality | −.488 | .125 | −3.91 | <.001 |
| | Incentive | .039 | .124 | .31 | .755 |
| 3-Month Follow-up Latent Drinking Variable (Outcome) | BL Drinking | .032 | .003 | 11.21 | <.001 |
| | Gender | .064 | .099 | .65 | .518 |
| | PNF | −.299 | .096 | −3.12 | .002 |
| | Delivery modality | .049 | .095 | .52 | .603 |
| | Incentive | −.009 | .093 | −.10 | .923 |
| | Depth of processing | .091 | .057 | 1.60 | .110 |
| | Intervention value | −.055 | .055 | −1.00 | .316 |
| | PNF × Depth of processing | .141 | .111 | 1.27 | .205 |
| | PNF × Intervention value | .067 | .107 | .63 | .530 |
Note. BL Drinking = Baseline latent drinking variable. PNF is coded −0.5=Attention control, 0.5=PNF (centered as it is a predictor in moderation analyses); Delivery modality is coded 0=In-lab, 1=Remote; Incentive is coded 0=$0, 1=$30; Gender is coded 0=Woman, 1=Man. Depth of processing and intervention value are centered. We specified a negative binomial distribution for the follow-up drinking outcome variable. Significant effects are bolded.
Finally, we examined whether depth of processing and intervention value were associated with changes in drinking, and whether these associations differed by PNF condition. We expected PNF participants to reduce their drinking more when they reported greater depth of processing and intervention value, but did not expect the same pattern among attention-control participants, as the attention-control feedback was unrelated to alcohol use. Results showed that depth of processing and intervention value were not associated with changes in drinking, nor did these associations differ by PNF condition. As such, Hypotheses 5a/5b were not supported.
Discussion
The present study provides a novel examination of important methodological factors in brief alcohol intervention studies for college students and how they are associated with attrition, data quality, depth of processing, and perceived value of the intervention. Results highlighted the importance of delivery modality. Specifically, we found that participating in the lab was associated with a lower likelihood of dropping out of the study, more correct attention-check responses (i.e., better quality data), greater depth of processing of intervention content, and greater perceived intervention value. Delivery modality also interacted with time to predict data quality: in-lab participants answered significantly fewer attention check questions incorrectly than those who participated remotely, but only at baseline, when the modality difference was present.
Being paid for intervention participation was associated only with a lower likelihood of dropping out of the study. As such, our results demonstrate that when accounting for whether the session was completed remotely or in the lab, the presence of an incentive was relatively unimportant. Finally, the PNF intervention was associated with greater depth of processing, greater perceived intervention value, and decreased drinking over time. PNF has received much empirical support for reducing drinking, but this is the first study to demonstrate its effect on depth of processing and perceived intervention value.
While we did not directly assess distractions, data from the present study are consistent with previous findings indicating that those who participate remotely may be engaged in other activities while viewing the personalized feedback (Lewis & Neighbors, 2015). It is likely that the distractions offered by remote participation are mitigated substantially when a participant completes an intervention in a more formal setting. Our results suggest that regardless of the feedback they received, participants processed the information better and valued it more when it was delivered in the lab. This finding is also consistent with the justification of effort effect, as in-lab participants were required to expend more effort to participate. Alternatively, there may be a selection bias in that those who had the resources (e.g., bus fare, easy access to campus, time) to attend an in-lab session may also have had the mental resources to engage more deeply with the material. That depth of processing and intervention value were not differentially related to drinking for PNF versus attention-control participants is also worthy of discussion. While it appears that asking participants to come to the lab may result in better processing of the intervention material, this did not translate into stronger intervention effects as we predicted. Interventionists should weigh the benefits found here of encouraging more in-depth processing against the costs of limiting access to those who have the resources to attend an in-person session, particularly given that we did not find differential intervention efficacy.
The finding that incentives were related to lower attrition from the study is consistent with previous work with this dataset showing that participants were more likely to participate when they were offered incentives (Neighbors et al., 2018). In that way, incentives do motivate both participation and staying in the study. However, it appears that this motivation did not play a role in perceptions about intervention value or depth of processing. That is, participants who saw the feedback content did not discount it simply because they were paid to participate. Perhaps an intervention that required participants to describe their motivations for decreasing drinking would be undermined by the presence of incentives, as it requires an examination of motivations, but the nature of PNF does not depend on the motivation of the participants. The facts related to misperceived norms may be less susceptible to being discounted by the overjustification effect. Moreover, the effects of incentives on intrinsic motivation generally apply to activities in which intrinsic motivation exists initially; very few college students enjoy research surveys and alcohol interventions for their own sake. Furthermore, incentives were not related to careless responding. It is unlikely participants were thinking about the money they would receive during the entire hour of participation. It is more likely that the incentives motivated participation and kept them in the study. The decreased attrition for those who received payment suggests that future studies might derive benefit from offering incentives, and may not have to worry about lack of intrinsic motivation decreasing the way participants process the material.
This study has important implications for alcohol intervention research. Efficacy of in-lab intervention conditions has been used as a roadmap for expectations of effectiveness in real-world settings. However, the way intervention studies are delivered influences the efficacy of interventions. It is also important to note that many intervention trials pay their participants, which is not only an effective way to increase participation, but may be helpful for implementation of interventions as it does not result in lower effectiveness of the intervention and was associated with less attrition.
A broader question raised by the findings is how to integrate established social psychological effects into studies that evaluate alcohol interventions. It is interesting that justification of effort (Aronson & Mills, 1959; Harmon-Jones & Mills, 2019) helped with study retention, data quality, processing of intervention content, and perceived value of the intervention. In this way, it is useful to consider effort justification when designing intervention trials or implementing interventions in practice. That is, interventions that are slightly more difficult to participate in do have some benefits, despite our lack of effects on drinking. Social psychological research could be useful for developing and refining effective interventions (e.g., Walton & Wilson, 2018), but heretofore has been largely disregarded in intervention research.
Limitations and Future Directions
While providing a novel exploration of methodological factors and how they are associated with important outcomes for brief alcohol interventions, this study does have some notable limitations. One important limitation is that it used a PNF intervention, and findings may not generalize to other types of alcohol interventions (e.g., BASICS). As previously mentioned, an intervention study that explores motivations may differ importantly from the present study in the effects of incentives on the intervention. It is also possible that participants who were willing to attend an in-lab session differed from their remote counterparts in social desirability or other characteristics, and that reports of depth of processing and value of the intervention were affected by social desirability. Unfortunately, we have no way to test whether this was a contributing factor. Another limitation of this study is that depth of processing and intervention value were not associated with drinking outcomes (nor as a function of PNF condition). While it is possible that this was simply because participants did not need to process or value their participation to be responsive to it, another concerning possibility is the inadequacy of the measures to assess what we were targeting. Further, the depth of processing measure is new and evinced relatively low reliability. Future research should consider assessing depth of processing in other ways or with multiple measures. Similarly, there are other ways to test for careless responding (e.g., "straightlining" patterns [Kim et al., 2019], survey completion time) which should be explored in future studies. Another limitation is the operationalization of alcohol use; future research with three or more measures of alcohol-related problems may wish to determine whether delivery modality or incentives influence the efficacy of the PNF intervention via depth of processing or intervention value differently for alcohol consumption and problems.
Finally, future work should examine these factors with real-world interventions that do not pay participants (e.g., college wellness interventions) or mandated interventions.
In conclusion, this study provides the first exploration of the effects of delivery modality and incentives on attrition, data quality, depth of processing, and intervention value in a brief alcohol intervention for college students. Of particular interest in the modern remotely-driven world, participating remotely was associated with greater attrition and lower quality data. This work informs appropriate methods for intervention trials that will limit biases present in data and improve generalizability of the findings of those trials, increasing the ability of intervention work to effectively inform university alcohol control policies. Moreover, the effects of delivery modality and incentives are crucial in any study that evaluates an intervention, whether for gambling, smoking, or any other problematic behavior, especially in a world moving toward virtual interventions (e.g., telehealth).
Public Health Significance.
This research investigates how interventions are delivered (remotely or in-lab) and whether participants are paid as two important methodological factors in brief alcohol interventions. Participants who came into the lab were less likely to drop out, provided higher quality data, and processed the intervention material more deeply and valued it more; however, the intervention was not more efficacious for these participants, and incentives were associated only with lower dropout. This work suggests particular benefits to data quality for researchers who ask participants to come into the lab, despite this benefit not extending to reductions in drinking.
Acknowledgments
This research was supported by National Institute on Alcohol Abuse and Alcoholism Grant R21AA022369.
References
- Aronson E, & Mills J. (1959). The effect of severity of initiation on liking for a group. The Journal of Abnormal and Social Psychology, 59(2), 177–181. 10.1037/h0047195 [DOI] [Google Scholar]
- Axsom D, & Cooper J. (1985). Cognitive dissonance and psychotherapy: The role of effort justification in inducing weight loss. Journal of Experimental Social Psychology, 21(2), 149–160. 10.1016/0022-1031(85)90012-5 [DOI] [Google Scholar]
- Bandura A. (1997). Self-efficacy: The exercise of control (1st ed.). Freeman. [Google Scholar]
- Borsari B, & Carey KB (2003). Descriptive and injunctive norms in college drinking: A meta-analytic integration. Journal of Studies on Alcohol, 64(3), 331–341. 10.15288/jsa.2003.64.331 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bower P, Brueton V, Gamble C, Treweek S, Smith CT, Young B, & Williamson P. (2014). Interventions to improve recruitment and retention in clinical trials: a survey and workshop to assess current practice and future priorities. Trials, 15(1), 1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Carey KB, Scott-Sheldon LA, Elliott JC, Garey L, & Carey MP (2012). Face-to-face versus computer-delivered alcohol interventions for college drinkers: a meta-analytic review, 1998 to 2010. Clinical Psychology Review, 32(8), 690–703. 10.1016/j.cpr.2012.08.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Carr DJ, Adia AC, Wray TB, Celio MA, Pérez AE, & Monti PM (2020). Using the Internet to access key populations in ecological momentary assessment research: Comparing adherence, reactivity, and erratic responding across those enrolled remotely versus in-person. Psychological Assessment, 32(8), 768–779. 10.1037/pas0000847 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Carroll K, Ball S, Martino S, Nich C, Babuscio T, Nuro K, Gordon M, Portnoy G, & Rounsaville B. (2008). Computer-assisted delivery of cognitive-behavioral therapy for addiction: A randomized trial of CBT4CBT. American Journal of Psychiatry, 165(7), 881–888. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cronce JM, Toomey TL, Lenk K, Nelson TF, Kilmer JR, & Larimer ME (2018). NIAAA’s College Alcohol Intervention Matrix. Alcohol Research: Current Reviews, 39(1), 43–47. [DOI] [PMC free article] [PubMed] [Google Scholar]
- DeCamp W, & Manierre MJ (2016). Money will solve the problem: Testing the effectiveness of conditional incentives for online surveys. Survey Practice, 9(1), 1–9. [Google Scholar]
- Deci EL, Cascio WF, & Krusell J. (1975). Cognitive evaluation theory and some comments on the Calder and Staw critique. Journal of Personality and Social Psychology, 31(1), 81–85. 10.1037/h0076168 [DOI] [Google Scholar]
- Deci EL, Koestner R, & Ryan RM (1999). A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychological Bulletin, 125. 10.1037/0033-2909.125.6.627 [DOI] [PubMed] [Google Scholar]
- Deci EL, & Ryan RM (1985). Intrinsic motivation and self-determination in human behavior. Plenum. 10.1007/978-1-4899-2271-7 [DOI] [Google Scholar]
- Dotson KB, Dunn ME, & Bowers CA (2015). Stand-alone personalized normative feedback for college student drinkers: A meta-analytic review, 2004 to 2014. PLoS One, 10(10). [DOI] [PMC free article] [PubMed] [Google Scholar]
- Edwards AGK, & Rollnick S. (1997). Outcome studies of brief alcohol intervention in general practice: The problem of lost subjects. Addiction, 92(12), 1699–1704. 10.1111/j.1360-0443.1997.tb02890.x [DOI] [PubMed] [Google Scholar]
- Festinger L. (1957). A theory of cognitive dissonance. Stanford University Press. [Google Scholar]
- Flick SN (1988). Managing attrition in clinical research. Clinical Psychology Review, 8(5), 499–515. [Google Scholar]
- Floyd KS, Harrington SJ, & Santiago J. (2009). The effect of engagement and perceived course value on deep and surface learning strategies. Informing Science: the International Journal of an Emerging Transdiscipline, 12, 181–190. [Google Scholar]
- Harmon-Jones E, & Mills J. (2019). An introduction to cognitive dissonance theory and an overview of current perspectives on the theory. In Harmon-Jones E. (Ed.), Cognitive dissonance: Reexamining a pivotal theory in psychology., 2nd ed. (pp. 3–24). American Psychological Association. 10.1037/0000135-001 [DOI] [Google Scholar]
- Harvey AG, Lee J, Williams J, Hollon SD, Walker MP, Thompson MA, & Smith R. (2014). Improving outcome of psychosocial treatments by enhancing memory and learning. Perspectives on Psychological Science, 9(2), 161–179. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hester RK, & Delaney HD (1997). Behavioral self-control program for Windows: Results of a controlled clinical trial. Journal of Consulting and Clinical Psychology, 65(4), 686–693. 10.1037/0022-006X.65.4.686 [DOI] [PubMed] [Google Scholar]
- Hsieh G, & Kocielnik R. (2016, February). You get who you pay for: The impact of incentives on participation bias. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 823–835). [Google Scholar]
- Hurlbut, Stephanie C, & Sher, Kenneth J. (1992). Assessing Alcohol Problems in College Students. Journal of American College Health, 41, 49–58. [DOI] [PubMed] [Google Scholar]
- Khadjesari Z, Murray E, Kalaitzaki E, White IR, McCambridge J, Thompson SG, Wallace P, & Godfrey C. (2011). Impact and costs of incentives to reduce attrition in online trials: Two randomized controlled trials. Journal of Medical Internet Research, 13(1), e26–e26. 10.2196/jmir.1523 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim Y, Dykema J, Stevenson J, Black P, & Moberg DP (2019). Straightlining: Overview of measurement, comparison of indicators, and effects in mail–web mixed-mode surveys. Social Science Computer Review, 37(2), 214–233. [Google Scholar]
- Krosnick JA (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5(3), 213–236. [Google Scholar]
- LaBrie JW, Lewis MA, Atkins DC, Neighbors C, Zheng C, Kenney SR, Napper LE, Walter T, Kilmer JR, Hummer JF, Grossbard J, Ghaidarov TM, Desai S, Lee CM, & Larimer ME (2013). RCT of web-based personalized normative feedback for college drinking prevention: Are typical student norms good enough? Journal of Consulting and Clinical Psychology, 81(6), 1074–1086. 10.1037/a0034087 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lewis MA, & Neighbors C. (2007). Optimizing Personalized Normative Feedback: The Use of Gender-Specific Referents. Journal of Studies on Alcohol and Drugs, 68(2), 228–237. 10.15288/jsad.2007.68.228 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lewis MA, & Neighbors C. (2015). An examination of college student activities and attentiveness during a web-delivered personalized normative feedback intervention. Psychology of Addictive Behaviors, 29(1), 162–167. 10.1037/adb0000003 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lewis MA, Patrick ME, Litt DM, Atkins DC, Kim T, Blayney JA, Norris J, George WH, & Larimer ME (2014). Randomized controlled trial of a web-delivered personalized normative feedback intervention to reduce alcohol-related risky sexual behavior among college students. Journal of Consulting and Clinical Psychology, 82(3), 429–440. 10.1037/a0035550 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Long JS, & Freese J. (2006). Regression models for categorical dependent variables using Stata (Vol. 7). Stata press. [Google Scholar]
- Mantzari E, Vogt F, Shemilt I, Wei Y, Higgins JP, & Marteau TM (2015). Personal financial incentives for changing habitual health-related behaviors: a systematic review and meta-analysis. Preventive Medicine, 75, 75–85. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Marcellus L. (2004). Are we missing anything? Pursuing research on attrition. Canadian Journal of Nursing Research Archive, 82–98. [PubMed] [Google Scholar]
- Martens MP, Smith AE, & Murphy JG (2013). The efficacy of single-component brief motivational interventions among at-risk college drinkers. Journal of Consulting and Clinical Psychology, 81(4), 691–701. 10.1037/a0032235 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Meade AW, & Craig SB (2012). Identifying careless responses in survey data. Psychological methods, 17(3), 437–455. 10.1037/a0028085 [DOI] [PubMed] [Google Scholar]
- Miller WR, & Rollnick S. (2012). Motivational interviewing: Helping people change. Guilford press. [Google Scholar]
- Neighbors C, Lee CM, Lewis MA, Fossos N, & Larimer ME (2007). Are social norms the best predictor of outcomes among heavy-drinking college students? Journal of Studies on Alcohol and Drugs, 68(4), 556–565. 10.15288/jsad.2007.68.556 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Neighbors C, Lewis MA, Atkins DC, Jensen MM, Walter T, Fossos N, Lee CM, & Larimer ME (2010). Efficacy of web-based personalized normative feedback: A two-year randomized controlled trial. Journal of Consulting and Clinical Psychology, 78(6), 898–911. 10.1037/a0020766 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Neighbors C, Lewis MA, LaBrie J, DiBello AM, Young CM, Rinker DV, Litt D, Rodriguez LM, Knee CR, Hamor E, Jerabeck JM, & Larimer ME (2016). A multisite randomized trial of normative feedback for heavy drinking: Social comparison versus social comparison plus correction of normative misperceptions. Journal of Consulting and Clinical Psychology, 84(3), 238–247. 10.1037/ccp0000067 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Neighbors C, Rodriguez LM, Garey L, & Tomkins MM (2018). Testing a motivational model of delivery modality and incentives on participation in a brief alcohol intervention. Addictive Behaviors, 84, 131–138. 10.1016/j.addbeh.2018.03.030 [DOI] [PMC free article] [PubMed] [Google Scholar]
- NIAAA. (2002). A call to action: Changing the culture of drinking at US colleges. Author; Rockville, MD. [Google Scholar]
- NIAAA. (2015). Planning alcohol interventions using NIAAA’s CollegeAIM (Alcohol Intervention Matrix). Author; Rockville, MD. [Google Scholar]
- Oppenheimer DM, Meyvis T, & Davidenko N. (2009). Instructional manipulation checks: Detecting satisficing to increase statistical power. Journal of Experimental Social Psychology, 45(4), 867–872.
- Quoidbach J, Dunn EW, Petrides KV, & Mikolajczak M. (2010). Money giveth, money taketh away: The dual effect of wealth on happiness. Psychological Science, 21(6), 759–763. 10.1177/0956797610371963
- Reid AE, & Carey KB (2015). Interventions to reduce college student drinking: State of the evidence for mechanisms of behavior change. Clinical Psychology Review, 40, 213–224. 10.1016/j.cpr.2015.06.006
- Riper H, van Straten A, Keuken M, Smit F, Schippers G, & Cuijpers P. (2009). Curbing problem drinking with personalized-feedback interventions: A meta-analysis. American Journal of Preventive Medicine, 36(3), 247–255.
- Rodriguez LM, Neighbors C, Rinker DV, Lewis MA, Lazorwitz B, Gonzales RG, & Larimer ME (2015). Remote versus in-lab computer-delivered personalized normative feedback interventions for college student drinking. Journal of Consulting and Clinical Psychology, 83(3), 455–463. 10.1037/a0039030
- Sharp EC, Pelletier LG, & Lévesque C. (2006). The double-edged sword of rewards for participation in psychology experiments. Canadian Journal of Behavioural Science, 38(3), 269.
- Suffoletto B, Kristan J, Callaway C, Kim KH, Chung T, Monti PM, & Clark DB (2014). A text message alcohol intervention for young adult emergency department patients: A randomized clinical trial. Annals of Emergency Medicine, 64(6), 664–672. 10.1016/j.annemergmed.2014.06.010
- Tang S-H, & Hall VC (1995). The overjustification effect: A meta-analysis. Applied Cognitive Psychology, 9(5), 365–404. 10.1002/acp.2350090502
- Tanner-Smith EE, & Lipsey MW (2015). Brief alcohol interventions for adolescents and young adults: A systematic review and meta-analysis. Journal of Substance Abuse Treatment, 51, 1–18. 10.1016/j.jsat.2014.09.001
- Vlaev I, King D, Darzi A, & Dolan P. (2019). Changing health behaviors using financial incentives: A review from behavioral economics. BMC Public Health, 19(1), 1–9.
- Young CM, & Neighbors C. (2019). Incorporating writing into a personalized normative feedback intervention to reduce problem drinking among college students. Alcoholism, Clinical & Experimental Research, 43(5), 916–926. 10.1111/acer.13995