PLOS One. 2022 Dec 20;17(12):e0279347. doi: 10.1371/journal.pone.0279347

Does receiving a SARS-CoV-2 antibody test result change COVID-19 protective behaviors? Testing risk compensation in undergraduate students with a randomized controlled trial

Christina Ludema 1,#, Molly S Rosenberg 1,*,#, Jonathan T Macy 2, Sina Kianersi 1, Maya Luetke 1, Chen Chen 1, Lilian Golzarri-Arroyo 3, Erin Ables 3, Kevin Maki 2, David B Allison 1
Editor: Hermano Alexandre Lima Rocha 4
PMCID: PMC9767325  PMID: 36538498

Abstract

Background

Risk compensation, or matching behavior to a perceived level of acceptable risk, can blunt the effectiveness of public health interventions. One area of possible risk compensation during the SARS-CoV-2 pandemic is antibody testing. While antibody tests are imperfect measures of immunity, results may influence risk perception and individual preventive actions. We conducted a randomized controlled trial to assess whether receiving antibody test results changed SARS-CoV-2 protective behaviors.

Purpose

Assess whether objective information about antibody status, particularly for those who are antibody negative and likely still susceptible to SARS-CoV-2 infection, increases protective behaviors. Secondarily, assess whether a positive antibody test results in decreased protective behaviors.

Methods

In September 2020, we enrolled 1076 undergraduate students, used fingerstick tests for SARS-CoV-2 antibodies, and randomized participants to receive their results immediately or delayed by 4 weeks. Two weeks later, participants completed a survey about their engagement in 4 protective behaviors (mask use, social event avoidance, staying home from work/school, ensuring physical distancing). We estimated differences between conditions for each of these behaviors, stratified by antibody status. For negative participants at baseline, we also estimated the difference between conditions for seroconversion over 8 weeks of follow-up.

Results

For the antibody negative participants (n = 1029) and antibody positive participants (n = 47), we observed no significant differences in protective behavior engagement between those who were randomized to receive test results immediately or after 4 weeks. For the baseline antibody negative participants, we also observed no difference in seroconversion outcomes between conditions.

Conclusions

We found that receiving antibody test results did not lead to significant behavior change in undergraduate students whether the SARS-CoV-2 antibody result was positive or negative.

Introduction

College campuses feature environments that are high risk for SARS-CoV-2 transmission, characterized by indoor locations with close contact (e.g., dorms, classrooms) and academic and social mixing that results in high contact rates [1]. Although student populations are at lower risk for severe COVID-19 disease themselves, increased infections among younger populations are often associated with corresponding increases among older, higher-risk populations [2]. Understanding the motivations and drivers of practicing SARS-CoV-2 protective behaviors among university student populations has consequences both for campus infection prevention and control and for the health of the broader community.

Many universities that pursued in-person education in 2020, and had the resources to do so, implemented SARS-CoV-2 preventive measures including mask mandates, isolation and quarantine procedures, reduced classroom density, online learning, regular testing regimens for active infection using RT-PCR and/or antigen tests, and vaccine mandates [3–5]. The main goals of regular testing were to control disease spread through quarantine of close contacts of individuals who tested positive and to track the prevalence of infection in the campus community. However, students may have understood a positive SARS-CoV-2 test to mean that they could relax preventive behaviors, assuming that a prior infection conferred immunologic protection. SARS-CoV-2 antibody tests, though less frequently used than tests of active infections, are another source of information about prior infection status that may influence risk perceptions.

Learning the results of a SARS-CoV-2 test may influence individual behavior through several plausible mechanisms. Risk compensation theory postulates that individuals have some amount of risk that they are willing to assume and will change their behaviors to match that level of risk [6]. Relatedly, behavioral disinhibition theory posits that feelings of protection against one health concern may cause people to engage in behaviors that put them at risk for other health issues. A necessary condition for these behavioral pathways to operate with SARS-CoV-2 tests is that people must have working perceptions that a relationship between prior infection and protection against future infections exists. These perceptions are likely related to overall COVID-19 and immunological knowledge [7–9].

A number of public health interventions have been questioned for potentially causing risk compensation, including wearing a bicycle helmet to prevent head injury [10, 11], seat belt mandates to prevent traffic fatalities [12], pre-exposure prophylaxis [13] and male circumcision [14] to prevent HIV infection, and HPV vaccination to prevent cervical cancer [15, 16]. However, rigorous randomized studies have shown that though an individual may make riskier choices, the interventions in question still have an overall beneficial effect on the population outcomes they are designed to prevent. Given the many influences on SARS-CoV-2 protective behaviors that may also correlate with seeking information about prior infection status, a randomized trial is the most appropriate design to avoid confounding by these factors. Understanding the impact of information about past SARS-CoV-2 infection on preventive behavior is essential to managing viral control and for learning more about expected behavior after natural infection and vaccination. The SARS-CoV-2 vaccine has an overwhelmingly beneficial effect on lowering risk of serious COVID-19 disease and mortality [17]. However, understanding how much, if any, risk compensation might occur after natural infection or vaccination has important potential consequences for disease control.

In this study, we aimed to answer the research question, ‘Does learning the results of an antibody test change SARS-CoV-2 protective behaviors?’, with a randomized controlled trial (RCT) in a sample of undergraduate students during the Fall 2020 semester. Aligned with our hypothesis that behavior change would be differential by antibody status (i.e., those with a positive antibody test may change their behavior in a way that would be different than those with a negative antibody test), we assessed results separately for antibody-positive and antibody-negative participants.

Methods

Study setting

This study was conducted among undergraduate students enrolled at Indiana University’s (IU) flagship Bloomington campus in Fall 2020. IU Bloomington is a public university located in southern Indiana with a large undergraduate student population of around 33,000 students. During the Fall 2020 semester, IU Bloomington had strict COVID-19 protection measures in place, including a mask mandate, classroom and dorm de-densification to support physical distancing, restrictions on event sizes, and mandatory, random asymptomatic RT-PCR SARS-CoV-2 testing.

Study participants and eligibility criteria

We drew a simple random sample of all IU Bloomington undergraduate students enrolled at the beginning of the Fall 2020 semester, yielding 7,499 students. Students in the sample were eligible to participate if they were (1) aged 18 years or older, (2) a current IU Bloomington undergraduate student, and (3) currently residing in Monroe County, Indiana. Of those sampled, 4,069 potential participants met the inclusion criteria for the study. All sampled students were contacted by email with a study invitation and a link to detailed information about the study objectives and procedures. The study team offered potential participants the opportunity for email or telephone consultations to answer any additional questions about study participation. After reviewing study information, interested and eligible students provided written informed consent remotely [18, 19]. The IU Human Subjects and Institutional Review Board provided ethical approval for this study protocol (Protocol #2008293852). We also prospectively registered this study protocol in the ClinicalTrials.gov database (#NCT04620798) and the data collection protocol was followed without any changes [20].

Data collection

After enrolling in the study, all participants completed an online baseline survey capturing socio-demographic and behavioral data as well as information on SARS-CoV-2 testing and infection history. Participants attended an in-person clinic visit for baseline antibody testing between September 14 and 30, 2020, and again for a second antibody test between November 8 and 14, 2020. Indiana University had a shortened semester for on-campus activities in Fall 2020, and this second set of dates aligned with the last weeks students were physically on campus. Participants were instructed not to attend their in-person visits if they were experiencing COVID-19 symptoms, had tested positive for SARS-CoV-2 in the two weeks before their appointment, or had been directed to isolate or quarantine.

All antibody testing was conducted using the BGI Colloidal Gold IgM/IgG rapid assay kit [21]. Trained nursing staff, who were blinded to the randomized group, collected fingerstick blood samples, and the lateral flow test kits displayed results of antibody positivity for both IgG and IgM SARS-CoV-2 antibodies within 5 minutes. At the time of the visit, research staff read the antibody test results and immediately entered them into the study database. Randomized conditions and results delivered were determined by this initial test read. Participants did not learn their antibody test results at the time of the baseline study visit. They were informed their results would be communicated to them by email within 4 weeks. At the randomly assigned time for participants to receive their results, they were emailed a secure link with password protection to access their results (positive or negative). This link also included a brief educational message about the importance of maintaining COVID-19 protective behaviors regardless of antibody test status (see S1 Fig).

We took several steps to improve and better understand the accuracy of the antibody test results. First, a team of trained research staff independently conducted a second review of the test results using high-resolution digital photographs of the test kits. Discrepancies between these assessments were resolved by consensus based on careful review of the digital photographs. Second, to assess the sensitivity and specificity of the antibody test kits, we conducted an independent laboratory assessment with banked hospital-based samples that were known to be SARS-CoV-2 antibody negative (n = 100) and known SARS-CoV-2 antibody positive blood samples (n = 94). The test kits correctly ascertained 100% (100/100) of the known negative samples, and 63.8% (60/94) of the known positive samples. This sensitivity is consistent with estimates from other lateral flow point-of-care antibody tests [22]. Importantly, the moderately low sensitivity of our antibody tests should not influence the results of this study given the randomized design and the objective to understand behavior change after receiving test results.
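The sensitivity and specificity above can be reproduced directly from the validation counts. The following is a minimal Python sketch (the function name is ours; the paper reports only the point estimates), using a Wilson score interval, which behaves better than the simple Wald interval when an estimate sits at or near 100%, as with the specificity here:

```python
import math

def proportion_with_wilson_ci(successes, n, z=1.96):
    """Point estimate and 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, center - half, center + half

# Validation counts reported above: 60/94 known positives, 100/100 known negatives
sensitivity, sens_lo, sens_hi = proportion_with_wilson_ci(60, 94)
specificity, spec_lo, spec_hi = proportion_with_wilson_ci(100, 100)
```

This gives sensitivity 63.8% (Wilson 95% CI roughly 54% to 73%) and specificity 100% (lower bound roughly 96%), consistent with the counts quoted in the text.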

After the baseline antibody test, participants were followed up in parallel every two weeks with online surveys assessing their engagement in key COVID-19 protective behaviors using a scale from the World Health Organization COVID-19 survey tool [23]. A total of four surveys were administered over the 8 weeks of follow-up. All online surveys were developed in and delivered using REDCap electronic data capture software [24, 25]. REDCap also supported all other data entry and database management aspects of the study. Participants were compensated up to $30 for completion of all study procedures.

Randomization

Participants were randomly assigned to receive their baseline antibody test results either immediately (within 24 hours) or delayed (in 4 weeks). We used stratified block randomization, with a block size of 10, to obtain an equal number of participants in the two conditions (1:1 allocation) within strata of baseline SARS-CoV-2 antibody status. An independent statistician generated a random sequence using SAS software [26]. We programmed REDCap to randomly allocate participants to either the immediate or the delayed condition based on the allocation sequence at the time study staff entered baseline antibody test results. Allocation was concealed from all investigators and field staff as it was not possible to predict or decipher the next allocation performed by REDCap.
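The sequence itself was generated in SAS by an independent statistician; purely to illustrate the permuted-block scheme described above (block size 10, 1:1 allocation, one independent sequence per antibody stratum), here is a Python sketch with all names ours:

```python
import random

def permuted_block_sequence(n, block_size=10, arms=("immediate", "delayed"), seed=2020):
    """Generate a 1:1 permuted-block allocation list for one stratum.

    Each block contains block_size//2 assignments per arm in random order,
    so the running allocation never drifts more than block_size//2 from 1:1.
    """
    rng = random.Random(seed)  # fixed seed makes the sequence reproducible
    sequence = []
    while len(sequence) < n:
        block = list(arms) * (block_size // len(arms))  # 5 of each arm
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n]

# One independent sequence per stratum of baseline antibody status
allocation = {
    "antibody_negative": permuted_block_sequence(1029, seed=1),
    "antibody_positive": permuted_block_sequence(47, seed=2),
}
```

Within every completed block of 10, exactly 5 participants fall in each arm, which is what keeps the two conditions balanced within each serostatus stratum.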

Key measures

Our exposure of interest was the randomized timing of antibody test result distribution: immediate (within 24 hours) versus delayed (4 weeks).

The primary outcomes of interest were engagement in four key COVID-19 protective behaviors two weeks after the baseline antibody test. In an online survey, participants were asked to quantify their level of engagement with the following behaviors in the past 7 days [23]: 1) avoiding social events, 2) staying at home from work/school, 3) wearing a face mask in public, and 4) ensuring physical distance in public. Response options were always, very often, sometimes, rarely, and never. All outcomes were dichotomized for primary analysis into: Always and Very Often versus Sometimes, Rarely, and Never. We conducted a sensitivity analysis to assess the robustness of our results to the choice of dichotomizing the behavioral outcomes. In this sensitivity analysis, we converted the Likert responses into a continuous variable by assigning numeric values (1 = never, 5 = always) to the responses and summing across all four protective behaviors (possible range = 4 to 20).
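The two outcome codings can be made concrete with a short sketch (variable and function names are ours; the survey items themselves come from the WHO tool cited above):

```python
# Numeric values used in the sensitivity analysis (1 = never ... 5 = always)
LIKERT_VALUES = {"never": 1, "rarely": 2, "sometimes": 3, "very often": 4, "always": 5}

BEHAVIORS = ("avoided social events", "stayed home from work/school",
             "wore a mask in public", "ensured physical distance in public")

def dichotomize(response):
    """Primary coding: 'Always'/'Very Often' -> 1, all other responses -> 0."""
    return 1 if response.lower() in ("always", "very often") else 0

def summed_score(responses):
    """Sensitivity-analysis coding: sum of the 1-5 values across the four
    behaviors, giving a possible range of 4 (all 'never') to 20 (all 'always')."""
    return sum(LIKERT_VALUES[responses[b].lower()] for b in BEHAVIORS)
```

The dichotomized outcomes feed the modified Poisson models, while the summed score feeds the linear-regression sensitivity analysis described under Statistical analysis.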

As a secondary outcome, we measured SARS-CoV-2 seroconversion for participants who tested negative for SARS-CoV-2 antibodies at the baseline round of testing. If a participant tested newly positive for SARS-CoV-2 antibodies at the endline study visit after 8 weeks of follow-up, they were considered to have experienced the seroconversion outcome.

We also collected data on key covariates to characterize the study sample, and to stratify the sample for analysis. The variables we used to characterize the study sample were: age (in years), sex at birth (male/female), gender identity (man/woman/gender non-conforming), race (white/Asian/Black/multi-racial/other), school year, residence (on- or off-campus), and Greek organization affiliation (yes/no).

We used baseline antibody test result (positive/negative) to stratify the sample for analysis as we expected behavior change to be differential by antibody status. Participants with positive test results for either IgG or IgM antibodies were considered antibody-positive, while those negative for both IgG and IgM antibodies were considered antibody-negative. We expected participants who tested positive for SARS-CoV-2 antibodies to reduce protective behaviors after receiving results based on the belief that antibodies might provide protection against future infection. We expected participants who tested negative for SARS-CoV-2 antibodies to exhibit no change or a small increase in protective behaviors after receiving results given that a negative status could serve as a reminder that they are still susceptible to infection.

Statistical analysis

We used Pearson’s chi-square tests to assess the independence of demographic distributions between the two treatment conditions. We ensured that outcome categories were mutually exclusive and that all expected cell counts were at least 5.
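For a 2x2 table the Pearson statistic has one degree of freedom, so its p-value needs nothing beyond the standard library: the chi-square survival function at 1 df equals erfc(sqrt(x/2)). As a check on the reported analysis, this sketch (our function; no continuity correction) reproduces the Greek-affiliation p-value in Table 1 to rounding:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, p) via the 1-df survival function."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat, math.erfc(math.sqrt(stat / 2))

# Greek affiliation counts from Table 1:
# delayed 409 no / 129 yes, immediate 403 no / 129 yes
stat, p = chi2_2x2(409, 129, 403, 129)  # p rounds to 0.918, as in Table 1
```

Larger tables (e.g., the 5-level age variable) need the chi-square distribution at higher degrees of freedom, but the 1-df case illustrates the test actually applied to each 2x2 comparison.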

We specified modified Poisson regression models to conduct intention-to-treat analyses, estimating risk ratios for the associations between randomly assigned treatment conditions (immediate vs. delayed antibody test results) and participation in four protective behaviors at two weeks from baseline. We specified separate models for each of the four behavioral outcomes: social event avoidance, staying home from work or school, mask use in public, and physical distancing in public. For the sensitivity analyses around the dichotomized outcome coding decision, we specified linear regression models to estimate the associations between randomly assigned treatment condition and the summed Likert score across all four protective behaviors, separately for those who tested positive and those who tested negative at baseline.

A modified Poisson regression model was also specified to compare the risk of SARS-CoV-2 seroconversion at the end of the trial (8 weeks after baseline) between immediate and delayed conditions, among the subsample of those with a negative antibody test result at baseline.
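The effect estimates themselves come from modified Poisson models fit in SAS, but for a single binary treatment with no covariates the crude risk ratio captures essentially the same contrast. As an illustration only (the function and names are ours, not the paper's code), here is the seroconversion comparison computed from the Table 1 counts with a log-scale Wald interval:

```python
import math

def risk_ratio(events_1, n_1, events_0, n_0, z=1.96):
    """Unadjusted risk ratio (arm 1 vs arm 0) with a log-scale Wald CI.

    This is not the modified Poisson model used in the paper, just the
    closed-form crude estimate for a 2x2 comparison.
    """
    r1, r0 = events_1 / n_1, events_0 / n_0
    rr = r1 / r0
    se = math.sqrt(1 / events_1 - 1 / n_1 + 1 / events_0 - 1 / n_0)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Seroconversion counts from Table 1: immediate 20/406, delayed 22/402
rr, lo, hi = risk_ratio(20, 406, 22, 402)
```

This crude estimate (about 0.90, 95% CI roughly 0.50 to 1.62) is close to the model-based 0.94 (0.52, 1.71) reported in Table 2, as expected for an unadjusted randomized comparison.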

Analysts were blinded to the group allocation by enlisting an independent statistician to hold the key to which group (1 or 2) was randomized to immediate versus delayed test results. The reliability of the analysis was confirmed by a second independent analysis. Data were analyzed using SAS 9.4 statistical software [26]. All figures were plotted using R version 4.1.1 [27].

Results

Between September 14, 2020 and September 30, 2020, 1,397 participants (response rate of 34.4%) consented to participate in this study from a sample of 4,069 randomly selected, eligible IU Bloomington undergraduate students. We report here the results from the 1,076 participants (77% of those who consented) who completed the baseline survey and baseline antibody test (Fig 1 and Table 1). The median age of participants was 20 years (IQR 19–21) and the ages of study participants largely aligned with traditional undergraduate student ages of 18–21 (90.6%). The majority of study participants identified as women (64%). The study sample was also majority white (79%), with lower representation of Asian (8%), Black (1%), and multi-racial (8%) students. Participants were fairly evenly distributed across the four years of traditional class standing (freshmen through seniors). About one-third (32%) of participants lived on-campus and about one-quarter (24%) were affiliated with Greek student organizations. As expected, none of these demographic variables differed significantly by randomized delayed versus immediate treatment condition.

Fig 1. CONSORT participant flow diagram.


Table 1. Characteristics of the n = 1076 Indiana University undergraduate students enrolled in the trial, Fall 2020.

Total (n = 1076) Delayed condition (n = 540) Immediate condition (n = 536) P-value
N (%) N (%) N (%)
Sociodemographic covariates
Age 0.766
 18 years old 208 (20.6) 97 (19.3) 111 (21.9)
 19 years old 224 (22.2) 115 (22.9) 109 (21.5)
 20 years old 228 (22.6) 119 (23.7) 109 (21.5)
 21 years old 255 (25.3) 123 (24.5) 132 (26.0)
 22+ years old 95 (9.4) 48 (9.6) 47 (9.3)
 Missing n = 66 n = 38 n = 28
Sex at birth 0.179
 Male 382 (35.7) 181 (33.7) 201 (37.6)
 Female 689 (64.3) 356 (66.3) 333 (62.4)
 Missing n = 5 n = 3 n = 2
Gender identity 0.162
 Male 379 (35.4) 179 (33.3) 200 (37.5)
 Female 680 (63.5) 354 (65.9) 326 (61.1)
 Non-conforming 12 (1.1) 4 (0.7) 8 (1.5)
 Missing n = 5 n = 3 n = 2
Race 0.938
 Asian 80 (7.5) 37 (6.9) 43 (8.1)
 Black 13 (1.2) 7 (1.3) 6 (1.1)
 Multi-racial 85 (7.9) 42 (7.8) 43 (8.1)
 Other 46 (4.3) 22 (4.1) 24 (4.5)
 White 847 (79.1) 430 (79.9) 417 (78.2)
 Missing n = 5 n = 2 n = 3
Undergraduate school year 0.514
 First year 236 (22.1) 113 (21) 123 (23.1)
 Second year 246 (23) 127 (23.7) 119 (22.3)
 Third year 264 (24.7) 137 (25.5) 127 (23.8)
 Fourth year 297 (27.8) 143 (26.6) 154 (28.9)
 Fifth year or more 27 (2.5) 17 (3.2) 10 (1.9)
 Missing n = 6 n = 3 n = 3
Residence 0.326
 Off-campus 733 (68.4) 375 (69.8) 358 (67.0)
 On-campus 338 (31.6) 162 (30.2) 176 (33.0)
 Missing n = 5 n = 3 n = 2
Greek affiliation status 0.918
 No 812 (75.9) 409 (76.0) 403 (75.8)
 Yes 258 (24.1) 129 (24.0) 129 (24.3)
 Missing n = 6 n = 2 n = 4
Outcomes
Avoided social events (2 weeks) 0.437
 Always or Very Often 445 (49.6) 236 (50.9) 209 (48.3)
 Never, Sometimes, or Rarely 452 (50.4) 228 (49.1) 224 (51.7)
 Not Applicable n = 87 n = 43 n = 44
 Missing n = 92 n = 33 n = 59
Stayed at home from work/school (2 weeks) 0.172
 Always or Very Often 337 (39.8) 185 (42.1) 152 (37.4)
 Never, Sometimes, or Rarely 509 (60.2) 255 (58.0) 254 (62.6)
 Not Applicable n = 137 n = 66 n = 71
 Missing n = 93 n = 34 n = 59
Wore a mask in public (2 weeks) 0.269
 Always or Very Often 963 (98.3) 493 (97.8) 470 (98.7)
 Never, Sometimes, or Rarely 17 (1.7) 11 (2.2) 6 (1.3)
 Not Applicable n = 0 n = 0 n = 0
 Missing n = 96 n = 36 n = 60
Physical distancing in public (2 weeks) 0.923
 Always or Very Often 879 (89.4) 452 (89.3) 427 (89.5)
 Never, Sometimes, or Rarely 104 (10.6) 54 (10.7) 50 (10.5)
 Not Applicable n = 0 n = 0 n = 0
 Missing n = 93 n = 34 n = 59
Seroconversion (8 weeks) 0.726
 No 766 (94.8) 380 (94.5) 386 (95.1)
 Yes 42 (5.2) 22 (5.5) 20 (4.9)
 Not Applicable n = 49 n = 24 n = 25
 Missing n = 219 n = 114 n = 105

Overall, 49 participants tested positive for SARS-CoV-2 antibodies at baseline (4.6%), and we observed 42 new seroconversions over the course of the 8 weeks of follow-up. The distribution of the four behavioral outcomes was generally balanced at baseline, with higher engagement reported for certain behaviors: wearing a face mask (98.3% always or very often) and ensuring physical distancing (89.6% always or very often) were more prevalent than staying home from work/school (43.2%) and avoiding social events (57.0%). The distribution of engagement in each behavior at baseline, 2 weeks (time of primary behavioral endpoints), 4 weeks, 6 weeks, and 8 weeks (endline) is displayed in Fig 2, stratified by randomized immediate vs. delayed treatment condition and by serostatus.

Fig 2. Protective behaviors by baseline sero-status in delayed and immediate randomized arms.


In the overall study sample, we found no significant differences between treatment conditions in any of the four behavioral outcomes (Table 1). Two weeks after antibody test results were reported to participants in the immediate results condition, chi-square tests indicated that participants in this condition did not report significantly higher or lower engagement in wearing face masks, staying home from work and school, avoiding social events, or ensuring physical distancing in public. Similarly, no significant differences were observed between study conditions for the seroconversion outcome, with 22 delayed condition and 20 immediate condition participants experiencing seroconversion (6% vs. 5% respectively).

Regression models stratified by baseline serostatus did not reveal any significant behavioral differences by study condition (Table 2) for participants receiving either positive or negative antibody test results. Taking face mask use as a representative example, for seronegative participants, receiving antibody test results was not associated with higher or lower face mask engagement [RR (95% CI): 1.01 (1.00, 1.03)]. Similar results were observed for our smaller sample of seropositive participants [RR (95% CI): 0.91 (0.80, 1.04)]. We also did not observe significant differences in seroconversion risk by timing of antibody test results among those who were seronegative at baseline. Participants in the immediate condition did not exhibit a higher or lower risk of seroconversion at 8 weeks, compared to the delayed condition [RR (95% CI): 0.94 (0.52, 1.71)]. Assumptions for all modified Poisson models were assessed and met. In the sensitivity analysis using the summed Likert responses as the outcome, we also did not find any significant differences between protective behaviors at 2-week follow-up between those who received test results immediately compared to those randomized to the delayed arm (S1 Table and S2 Fig).

Table 2. Associations between treatment condition (immediate vs delayed antibody test results) and key behavioral and biological outcomes, stratified by baseline antibody status.

Outcomes RR (95% CI)¹ RD (95% CI)¹
Negative antibody test results (n = 1029)²
 Social event avoidance (2 weeks)³ 0.96 (0.84, 1.09) -0.02 (-0.09, 0.04)
 Staying home from work/school (2 weeks)³ 0.87 (0.74, 1.04) -0.05 (-0.12, 0.01)
 Mask use in public (2 weeks)³ 1.01 (1.00, 1.03) 0.01 (-0.002, 0.03)
 Physical distancing in public (2 weeks)³ 1.01 (0.97, 1.06) 0.01 (-0.03, 0.05)
 Seroconversion (8 weeks) 0.94 (0.52, 1.71) -0.003 (-0.03, 0.03)
Positive antibody test results (n = 47)²
 Social event avoidance (2 weeks)³ 0.75 (0.33, 1.72) -0.11 (-0.43, 0.21)
 Staying home from work/school (2 weeks)³ 1.40 (0.60, 3.25) 0.14 (-0.20, 0.48)
 Mask use in public (2 weeks)³ 0.91 (0.80, 1.04) -0.09 (-0.21, 0.03)
 Physical distancing in public (2 weeks)³ 0.82 (0.60, 1.11) -0.16 (-0.40, 0.07)
 Seroconversion (8 weeks) - -

¹ Reference group refers to the delayed antibody test results condition.

² Wald p-values for the interaction between serostatus and intervention arm were: 0.57 (social event avoidance), 0.28 (staying home from work/school), 0.11 (mask use in public), and 0.18 (physical distancing in public).

³ Comparing the probability of “Very Often/Always” vs. all other response categories.

Discussion

In this study, we found no evidence for risk compensation among undergraduate students after receiving SARS-CoV-2 antibody test results. Students who were randomized to receive their antibody test results immediately did not report engaging in different levels of COVID-19 risk behaviors compared to students who were randomized to receive their antibody test results later. This lack of association held across all four behaviors we examined (staying home from school/work, avoiding social events, wearing face masks, and physical distancing), and for the SARS-CoV-2 seroconversion outcome. Importantly, stratified analysis by baseline antibody serostatus did not reveal differences in findings by whether participants received a positive or negative antibody test result.

Our finding that behavior remained largely unchanged after receipt of antibody test results aligns with findings from prior studies of risk compensation in other settings. Five interventions are often recognized for their potential to cause risk compensation: bicycle helmets, seatbelts, voluntary medical male circumcision and pre-exposure prophylaxis for HIV prevention, and HPV vaccination. Exhaustive reviews in each of these areas have consistently concluded that there is, in fact, little evidence for increased risk-taking after intervention exposure [10–13, 16, 28]. In spite of the weak record for risk compensation in other settings, many have raised concerns about the threat of risk compensation in the COVID-19 pandemic [29]. Vaccines and face mask mandates have been two areas of concern for risk compensation, but very few empirical studies have tested these associations. However, there is at least some suggestive evidence that risk compensation may, in fact, occur in settings with face mask mandates [30]. With the lack of risk compensation observed in our study, we provide additional empirical data to better shape our understanding of behavioral disinhibition in the COVID-19 pandemic.

The context of the study setting and study design have important implications for interpretation of our findings. First, changes in risk perception among those who tested positive for antibodies may not have translated into changed behaviors due to COVID preventive measures taken at the university and community level. Indiana University and Monroe County, where it is located, both had mask mandates and bans on large group social gatherings during the study period. Second, in some social circles, social norms reinforced protective behaviors as being the socially responsible course of action. Those external restrictions on behavior may exert a larger influence than an internal assessment of risk. Third, study participants were provided clear guidance to continue to practice COVID-19 protective behaviors regardless of their antibody test results (see S1 Fig), which may also account for our findings of no observable behavior change. Relatedly, it is possible that risk behaviors did not differ between study arms because participants did not otherwise have strong perceptions of a relationship between serostatus and protection against future infections. Finally, as results were disseminated by email, we cannot confirm that everyone in each treatment condition received their results on the expected schedule, or at all. However, operational and survey data we collected indicated that participants received their test results at similar rates in both arms, alleviating concerns of differential intervention uptake (see Fig 1).

Results from this study likely do not generalize to university settings with different COVID-19 policies in place. However, the results are likely to be generalizable to other young adult populations at other large, public universities similar to the one where this study was conducted. We used random sampling from the IU student population and enrolled a large sample into the randomized trial. Though we observed a relatively low response rate, it is comparable to other response rates in university settings [31]. Nevertheless, to assume that our study population stands in for our larger target population, we have to assume that the enrolled study population had risk compensation responses similar to those of students who did not enroll. Undergraduate women, in particular, were overrepresented in our study population relative to the student body. Other demographic variables, however, generally tracked with those observed in the student body. After enrollment, we maintained a relatively high follow-up rate for the 2-week behavioral outcomes (>90%), but had more limited completeness of follow-up for the secondary seroconversion outcome at 8 weeks (79%). Reasons for the higher attrition at this later timepoint may have involved the higher burden and stricter COVID-19 protocols associated with the in-person study visit required for serology testing, and the timing at the end of the in-person semester coinciding with more limited availability of our student population.

Inference from this study is strengthened by the rigorous randomized study design used. By randomly assigning half of our participants to receive their results immediately, we were able to isolate the effect of receiving the results and minimize the potential for confounding to bias our results. Using a randomized trial design is critical because seeking testing for SARS-CoV-2, either for acute infection [32] or antibody testing, requires both personal motivation and access to services which render comparisons between those who receive testing and those who do not subject to significant possible bias. These factors, and those related to them, would be major sources of confounding in an observational study comparing those who did and did not receive antibody tests. However, although we had a large sample overall (n = 1076), only 47 participants tested positive for SARS-CoV-2 antibodies at baseline, limiting the precision with which we measured the trial effects in that subgroup. The relatively low number of positive tests we observed could be accurately capturing the low seropositivity at this early stage of the pandemic (September-November 2020). Negative antibody tests would also be expected for participants who have experienced a SARS-CoV-2 infection but not yet developed antibodies, or whose antibodies have already waned below detectable levels. Of course, some of our study participants may have falsely tested negative, due to the moderately low test sensitivity (64%) or researcher error in administering or reading the tests. In both of these scenarios, the measurement error is unlikely to be differential by trial arm as the study team was blinded to trial arm allocation.

Our findings provide reassuring evidence that the receipt of SARS-CoV-2 antibody test results is unlikely to cause major changes in protective behaviors, assuming the results generalize beyond a university student population. Although individuals may update their risk perceptions with information on antibody-positive or antibody-negative status, those updated risk perceptions do not appear to result in risk compensation. These findings may also be informative for other experiences or interventions in the context of the COVID-19 pandemic, such as natural infection, vaccination, and mask mandates. Future work on COVID-19 risk compensation should focus on extending our findings to these COVID-relevant exposures, in different populations, and in later periods of the pandemic.

Supporting information

S1 Fig. Educational material provided to participants at time of antibody test result return.

(TIF)

S2 Fig. Sensitivity analysis for association between treatment condition (immediate vs delayed antibody test results) and mean frequency of engagement in protective behaviors at 2 weeks (primary endpoints), and across additional timepoints over 8 weeks of follow-up, stratified by baseline antibody status.

(TIF)

S1 Table. Sensitivity analysis for association between treatment condition (immediate vs delayed antibody test results) and mean frequency of engagement in protective behaviors at 2 weeks, stratified by baseline antibody status.

(DOCX)

S1 File. Deidentified minimal dataset.

(CSV)

S2 File. SAS code to produce study results.

(SAS)

S1 Data

(DOCX)

S1 Checklist

(PDF)

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

This trial was supported by a charitable donation to the Indiana University Foundation, and antibody tests were donated by the United Arab Emirates. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Gressman PT, Peck JR. Simulating COVID-19 in a university environment. Mathematical biosciences. 2020;328:108436. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Boehmer TK, DeVies J, Caruso E, van Santen KL, Tang S, Black CL, et al. Changing age distribution of the COVID-19 pandemic—United States, May–August 2020. Morbidity and Mortality Weekly Report. 2020;69(39):1404. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Paltiel AD, Zheng A, Walensky RP. Assessment of SARS-CoV-2 screening strategies to permit the safe reopening of college campuses in the United States. JAMA network open. 2020;3(7):e2016818-e. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Honein MA, Barrios LC, Brooks JT. Data and policy to guide opening schools safely to limit the spread of SARS-CoV-2 infection. JAMA: the journal of the American Medical Association. 2021;325(9):823–4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.American College Health Association. Considerations for reopening institutions of higher education in the COVID-19 era. ACHA; 2020.
  • 6.Pless B. Risk compensation: Revisited and rebutted. Safety. 2016;2(3):16. [Google Scholar]
  • 7.Hamilton HR, Peterson JL, DeHart T. COVID-19 in college: Risk perception and planned protective behavior. J Am Coll Health. 2022:1–6. Epub 2022/05/14. doi: 10.1080/07448481.2022.2071623 . [DOI] [PubMed] [Google Scholar]
  • 8.Lee AR, Gonzalez A, Garcia JM, Martinez LS, Oren E. COVID-19 risk perceptions, self-efficacy, and prevention behaviors among California undergraduate students. J Am Coll Health. 2022:1–10. Epub 2022/07/12. doi: 10.1080/07448481.2022.2089843 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Fadel T, Travis J, Harris S, Webb G. The roles of experiences and risk perception in the practice of preventative behaviors of COVID-19. Pathog Glob Health. 2022;116(1):30–7. Epub 2021/07/28. doi: 10.1080/20477724.2021.1957595 ; PubMed Central PMCID: PMC8812785. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Esmaeilikia M, Radun I, Grzebieta R, Olivier J. Bicycle helmets and risky behaviour: A systematic review. Transportation research part F: traffic psychology and behaviour. 2019;60:299–310. [Google Scholar]
  • 11.Haider AH, Saleem T, Bilaniuk JW, Barraco RD. An evidence based review: efficacy of safety helmets in reduction of head injuries in recreational skiers and snowboarders. The journal of trauma and acute care surgery. 2012;73(5):1340. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Houston DJ, Richardson LE. Risk compensation or risk reduction? Seatbelts, state laws, and traffic fatalities. Social Science Quarterly. 2007;88(4):913–36. [Google Scholar]
  • 13.Owens DK, Davidson KW, Krist AH, Barry MJ, Cabana M, Caughey AB, et al. Preexposure prophylaxis for the prevention of HIV infection: US Preventive Services Task Force recommendation statement. JAMA: the journal of the American Medical Association. 2019;321(22):2203–13. [DOI] [PubMed] [Google Scholar]
  • 14.Gao Y, Yuan T, Zhan Y, Qian H-Z, Sun Y, Zheng W, et al. Association between medical male circumcision and HIV risk compensation among heterosexual men: a systematic review and meta-analysis. The Lancet Global Health. 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Madhivanan P, Pierre-Victor D, Mukherjee S, Bhoite P, Powell B, Jean-Baptiste N, et al. Human papillomavirus vaccination and sexual disinhibition in females: a systematic review. American journal of preventive medicine. 2016;51(3):373–83. [DOI] [PubMed] [Google Scholar]
  • 16.Kasting ML, Shapiro GK, Rosberger Z, Kahn JA, Zimet GD. Tempest in a teapot: A systematic review of HPV vaccination and risk compensation research. Human vaccines & immunotherapeutics. 2016;12(6):1435–50. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Chung H, He S, Nasreen S, Sundaram M, Buchan S, Wilson S, et al. Effectiveness of BNT162b2 and mRNA-1273 COVID-19 vaccines against symptomatic SARS-CoV-2 infection and severe COVID-19 outcomes in Ontario, Canada. 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.De Sutter E, Zaçe D, Boccia S, Di Pietro ML, Geerts D, Borry P, et al. Implementation of electronic informed consent in biomedical research and stakeholders’ perspectives: systematic review. Journal of medical Internet research. 2020;22(10):e19129. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Rothwell E, Wong B, Rose NC, Anderson R, Fedor B, Stark LA, et al. A randomized controlled trial of an electronic informed consent process. Journal of Empirical Research on Human Research Ethics. 2014;9(5):1–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Rosenberg M. Longitudinal COVID-19 Antibody Testing in Indiana University Undergraduate Students clinicaltrials.gov2020. Available from: https://clinicaltrials.gov/ct2/show/NCT04620798.
  • 21.BGI. 2021 [10/8/2021]. Available from: https://www.bgi.com/global/.
  • 22.Bastos ML, Tavaziva G, Abidi SK, Campbell JR, Haraoui L-P, Johnston JC, et al. Diagnostic accuracy of serological tests for covid-19: systematic review and meta-analysis. BMJ (Clinical research ed). 2020;370. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.World Health Organization. Monitoring knowledge, risk perceptions, preventive behaviours and trust to inform pandemic outbreak response: Survey tool and guidance. 2020.
  • 24.Kianersi S, Luetke M, Ludema C, Valenzuela A, Rosenberg M. Use of research electronic data capture (REDCap) in a COVID-19 randomized controlled trial: a practical example. BMC medical research methodology. 2021;21(1):1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.REDCap. 2021 [10/8/2021]. Available from: https://www.project-redcap.org/
  • 26.SAS. SAS Version 9.4. Cary, NC: SAS Institute Inc.
  • 27.R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2020.
  • 28.Weiss HA, Dickson KE, Agot K, Hankins CA. Male circumcision for HIV prevention: current research and programmatic issues. AIDS (London, England). 2010;24(0 4):S61. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Mantzari E, Rubin GJ, Marteau TM. Is risk compensation threatening public health in the covid-19 pandemic? BMJ (Clinical research ed). 2020;370. [DOI] [PubMed] [Google Scholar]
  • 30.Yan Y, Bayham J, Richter A, Fenichel EP. Risk compensation and face mask mandates during the COVID-19 pandemic. Scientific reports. 2021;11(1):1–11. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Rosenberg M, Townes A, Taylor S, Luetke M, Herbenick D. Quantifying the magnitude and potential influence of missing data in campus sexual assault surveys: A systematic review of surveys, 2010–2016. Journal of American college health. 2019;67(1):42–50. [DOI] [PubMed] [Google Scholar]
  • 32.Perry BL, Aronson B, Railey AF, Ludema C. If you build it, will they come? Social, economic, and psychological determinants of COVID-19 testing decisions. PloS one. 2021;16(7):e0252658. [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Hermano Alexandre Lima Rocha

4 Sep 2022

PONE-D-22-18389: No evidence for risk compensation in undergraduate students after SARS-CoV-2 antibody test results: a randomized controlled trial (PLOS ONE)

Dear Dr. Rosenberg,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Oct 19 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Hermano Alexandre Lima Rocha

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following in the Competing Interests section: 

   "CL, MR, JT, SK, ML, CC, LGA, and EA have declared that no competing interests exist.

We have read the journal's policy and KM has the following competing interests:

1. Consulting regarding development of therapeutics for Covid-19 (not related to the current manuscript) and payments made to a private clinic in which I am a partner for conduct of a Covid treatment trials (not related to the current manuscript) 

2. Payments made to a private clinic in which I am a partner for conduct of a vaccine trial (not related to the current manuscript) "

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. 

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

3. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

"Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. 

Additional Editor Comments:

Dear authors

Hope you are doing well

Please follow the reviewers' suggestions and send the manuscript back to us with the required amendments and a letter detailing your responses.

Best wishes


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: No main concerns on this article. Some comments:

- informed consent process: the remote consent administration may not be as effective for transferring correct information to students; probably in this case a telephone contact to better understand students' perception and knowledge of COVID would have been more effective

- apart from test accuracy, could you comment on the understanding of the relation between a positive test and the real protection from COVID infection? Students may not be aware of this relation and this relation is not so strong

- could you comment on the reasons why 23% of students were lost to follow-up?

- could you comment on the small number of participants who tested positive?

Reviewer #2: In general, the manuscript is well-written and describes an interesting study of risk compensation. The following represent relatively minor criticisms.

It would be most useful to evaluate the data and code used for analysis.

* Analysis issues:

The analysis is performed on dichotomized Likert scale responses. The protocol does specify this analysis, but does not specify the dichotomization criteria prospectively. Given the distribution of the data, it does seem unlikely that the choice of dichotomization would matter, but a more principled approach to the analysis would respect the specific structure of the response. After all, if the data were not going to be analyzed as Likert scaled values, then why collect them as such? Alternatively, the protocol should have pre-specified the dichotomization criteria.

The authors do present an alternative analysis treating the Likert scale responses as interval scaled as a sensitivity analysis. This is comforting but does not necessarily completely ameliorate the issue.

A reasonable analysis for these data would be an ordinal logistic regression analysis. A basic approach would be to use a proportional odds model (McCullagh and Nelder, 1989). McCullagh and Nelder also present a slight extension useful for testing the proportional odds assumption versus monotonically changing odds.

For an excellent exposition on the nuts and bolts of implementing the proportional odds model, assessing goodness of fit, and interpreting the results, see Harrell (2010). Professor Harrell has also made numerous resources available online.
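The core of the proportional odds model the reviewer recommends can be sketched in a few lines: each Likert category probability is a difference of cumulative logistic probabilities sharing one slope across all cutpoints. This is a language-agnostic illustration (the reviewer's references use R); the cutpoints and treatment effect below are hypothetical values, not fitted estimates from this trial.

```python
import math

def cumulative_logit_probs(thetas, beta, x):
    """Proportional odds model: P(Y <= j) = logistic(theta_j - beta * x).

    thetas: increasing cutpoints (K-1 of them for K ordinal categories)
    beta:   a single slope shared across all cutpoints -- the proportional
            odds assumption
    x:      scalar covariate (e.g. 1 = immediate results, 0 = delayed)
    Returns the K category probabilities.
    """
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(t - beta * x) for t in thetas] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Hypothetical values for a 4-level Likert response (illustration only)
thetas = [-1.0, 0.5, 2.0]  # assumed cutpoints
beta = 0.3                 # assumed treatment effect on the cumulative log-odds
p_treated = cumulative_logit_probs(thetas, beta, x=1)
p_control = cumulative_logit_probs(thetas, beta, x=0)
```

Testing the proportional odds assumption then amounts to asking whether a single beta suffices across cutpoints, versus cutpoint-specific slopes.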

Furthermore, is there any specific reason not to include all data in a single analysis with two factors (seropositivity status and intervention) and their interaction effect? The imbalance might mitigate against this, but it would still be of interest to evaluate the interaction term.

And, along those lines, although it is probably only feasible (if at all) with the dichotomized version of the data (or with the Likert scale data treated as interval scaled), a full model would consider three fixed effects (seropositivity status, intervention, and time) and their interactions. However, this model would require a method for handling the repeated measures across time for each subject. This could most likely be carried out using the 'nlme' R package (Pinheiro and Bates, 2017). However, convergence is always an issue for these binomial responses; treating the responses as interval scaled should meet with no issues. Several R packages extend this modeling in a Bayesian framework, and other frameworks such as Stan (Carpenter et al, 2017) could also be used to perform such modeling.

An interesting alternative approach to the sensitivity analysis could be carried out using the concept of specification curve analysis (Simonsohn, Simmons, and Nelson, 2020). In a nutshell, the idea is to specify all of the alternative paths that the analysis could have taken to assess the impact of the one path that was actually chosen. It would be quite feasible to implement this analysis using the 'specr' R package (Masur and Scharkow, 2020).
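The specification-curve idea above reduces to enumerating every combination of analytic decisions and running the analysis once per combination. A minimal language-agnostic sketch of the enumeration step (the reviewer's 'specr' reference does this in R); the decision points and their options below are hypothetical, loosely mirroring the alternatives discussed for this trial's analysis.

```python
from itertools import product

# Hypothetical analytic decision points (illustration only)
choices = {
    "outcome_coding": ["dichotomized", "interval", "ordinal"],
    "dichotomization_cut": [">=often", ">=sometimes"],
    "subgroup": ["seronegative", "seropositive", "pooled"],
}

def enumerate_specifications(choices):
    """Return every combination of analytic decisions as a list of dicts."""
    keys = list(choices)
    return [dict(zip(keys, combo)) for combo in product(*choices.values())]

specs = enumerate_specifications(choices)
print(len(specs))  # 3 * 2 * 3 = 18 candidate specifications
```

Each resulting dict would parameterize one run of the estimation step; plotting the 18 effect estimates sorted by size yields the specification curve.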

* Interpretation issues

The study sample represents a subset of students who self-selected to join the study (34.4% of randomly selected, eligible IU-B undergraduate students). It could be that in this subset there is no observed risk compensation, but that in the complementary subset of students that did not self-select into the study there is some observed risk compensation. To be able to widen the scope of inference likely requires some assumptions about the comparability of these two subsets. There seems to be no real way to avoid this so it must be addressed in the limitations.

The authors state "However, the results are likely to be generalizable to other young adult populations at other predominantly white universities similar to the one where this study was conducted." (lines 339-340). Is race a deciding factor here? If so, please provide some citations that would support this.

Not only did states differ in their approaches to handling COVID-19 issues broadly, but even cities within states differed, as did universities within states. And, students at different universities could come from a more or less heterogeneous background. It seems as though all of these effects make it more difficult to generalize rather than more easy.

The authors mention "potential for confounding" in line 347. Please list these potential issues, if only briefly or at a high level.

In Figure 2, please show all available data rather than only three time points. Also, the bars in these figures appear offset in a way that makes them hard to interpret.

Although the stacked bar approach is not ideal, it is hard to suggest an alternative. One might attempt grouping bar charts but this is not likely to end well.

Given the linear model approach that was executed in the appendix, please also provide a plot of the least squares estimates over time for each combination of seropositivity status and intervention.

* References

Carpenter, B., Gelman, A., Hoffman, M.D., Lee, D., Goodrich, B., Betancourt, M., Brubaker, M., Guo, J., Li, P. and Riddell, A., 2017. Stan: A probabilistic programming language. Journal of statistical software, 76(1).

Harrell, F. E. (2010). Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression, and Survival Analysis. Springer Series in Statistics.

Masur, P. K. and Scharkow, M. (2020). specr: conducting and visualizing specification curve analyses. https://cran.r-project.org/web/packages/specr/

McCullagh, P. and Nelder, J. A. (1989). Generalized linear models.

Pinheiro, J., Bates, D., DebRoy, S., Sarkar, D., Heisterkamp, S., Van Willigen, B., & Maintainer, R. (2017). Package ‘nlme’. Linear and nonlinear mixed effects models, version, 3(1).

Simonsohn, U., Simmons, J. P., and Nelson, L. D. (2020). Specification curve analysis. Nature Human Behaviour, 4(11), 1208-1214.

Reviewer #3: Thank you for asking me to review this article. Highly transmittable infectious diseases such as COVID-19 are public health emergencies of international concern. There is still no definitive cure for some of these highly transmittable illnesses. Immunization and breaking the chain of infection is the only successful approach to mitigate their spread. However, while on one side immunization coverage is conditioned by people's acceptance of these vaccines, on the other side natural infection or vaccination has important potential consequences for preventive strategies and for disease control. In this context, the aim of the paper under review is to assess whether objective information about antibody status, particularly for those who are antibody negative and likely still susceptible to SARS-CoV-2 infection, increases protective behaviors and, moreover, whether a positive antibody test results in decreased protective behaviors.

The subject under study is certainly important, especially in the historical period we are experiencing. The article presents interesting results but it must be further improved.

Title: it can be improved; highlight the object of the study.

Abstract. I encourage the authors to add more detail about their core contributions in the abstract.

Introduction: The authors should make clearer what is the gap in the literature that is filled with this study. The authors must better frame their study within the vast body of literature that also addressed the issue of knowledge concerning COVID-19 that can affect the implementation of control measures in different groups of population (refer to articles with DOI: https://doi.org/10.3390/ijerph182010872).

Methods: The survey was conducted using a non-standard tool. The use of an unreliable instrument is a serious and irreversible limitation. A validation process must be performed to evaluate the tool. What about reliability, intelligibility and validation index? Was a pilot study performed?

The enrolment procedure must be specified. How did the authors choose the way to select the sample? This can be a major source of bias. How did they avoid selection bias? The authors do not propose a minimum sample size. Without numerical identification of the reference population, the validity of the study is unclear. A non-representative sample is by itself a nonsense survey.

Statistical analysis: I suggest inserting a measure of the magnitude of the effect for the comparisons. Please consider including effect sizes.

Discussion: I also suggest expanding this section. Emphasize the contribution of the study to the literature. The discussion must be updated to address knowledge about the disease (see the above-mentioned reference), which can be an important confounding factor for this study. The authors should add more practical recommendations for the reader, based on their findings. Also, the section on limitations and future research is very short; the authors could elaborate on that.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Manuela Monti

Reviewer #2: No

Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Dec 20;17(12):e0279347. doi: 10.1371/journal.pone.0279347.r002

Author response to Decision Letter 0


26 Oct 2022

We thank the reviewers for their valuable and constructive suggestions on how to improve our paper “Does receiving a SARS-CoV-2 antibody test result change COVID-19 protective behaviors? Testing risk compensation in undergraduate students with a randomized controlled trial” (Ms. No.: PONE-D-22-18389). Below, we explain point-by-point how we have addressed each of the reviewers’ comments in the revised version of our paper.

Editorial Comments

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Response: We have now aligned our manuscript and references with the PLOS ONE style template. Thank you!

2. Thank you for stating the following in the Competing Interests section:

"CL, MR, JT, SK, ML, CC, LGA, and EA have declared that no competing interests exist.

We have read the journal's policy and KM has the following competing interests:

1. Consulting regarding development of therapeutics for Covid-19 (not related to the current manuscript) and payments made to a private clinic in which I am a partner for conduct of a Covid treatment trials (not related to the current manuscript)

2. Payments made to a private clinic in which I am a partner for conduct of a vaccine trial (not related to the current manuscript) "

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

Thank you for the guidance around competing interests.

Response: We have updated our competing interests statement as requested in the attached cover letter. Please note that we have also added a competing interests statement for coauthor DBA.

We confirm here and in the cover letter that the competing interests listed for KM and DBA do not alter our adherence to PLOS ONE policies.

3. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

"Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

Response: We have now created and included a deidentified minimal dataset with this submission in file ‘S1 File.’

We note this change in the manuscript and the cover letter and send our thanks for updating the data availability statement on our behalf.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Response: We have now made these edits to align our supporting information files with the journal naming requirements.

R1 Comments

1. informed consent process: the remote consent administration may not be as effective for transferring correct information to students; probably in this case a telephone contact to better understand students' perception and knowledge of COVID would have been more effective

Response: We agree with the reviewer about the importance of excellent communication in the informed consent process. We used a remote consent procedure that was reviewed and approved by our Institutional Review Board. We want to assure the reviewer that we took several additional steps to communicate about the study with potential participants. Our study team offered telephone or email consultations with potential study participants on demand, to ensure questions about study participation were answered and that participants understood the relative risks and benefits of study participation. Our protocol also included active solicitation of participant questions at all in-person study visits (prior to serology testing).

Although remote consenting protocols do come with some challenges, a growing body of literature indicates that remote consenting can facilitate strong participant understanding (see, for example, citations below).

1. Rothwell, Erin, et al. "A randomized controlled trial of an electronic informed consent process." Journal of Empirical Research on Human Research Ethics 9.5 (2014): 1-7.

2. De Sutter, Evelien, et al. "Implementation of electronic informed consent in biomedical research and stakeholders' perspectives: systematic review." Journal of Medical Internet Research 22.10 (2020): e19129.

We now cite this supporting literature in our ‘Study participants and eligibility criteria’ section, and have added information about the additional communication modes to which potential participants had access:

“All sampled students were contacted by email with a study invitation and a link to detailed information about study objective and procedures. The study team offered potential participants the opportunity for email or telephone consultations to answer any additional questions about study participation. After reviewing study information, interested and eligible students provided written informed consent remotely with an electronic signature.[15, 16] The IU Human Subjects and Institutional Review board provided ethical approval for this study protocol (Protocol #2008293852).”

2. apart from test accuracy, could you comment on the understanding of the relation between a positive test and the real protection from COVID infection? Students may not be aware of this relation and this relation is not so strong

Response: This is a great question. Our study, to our knowledge, is the first to assess the potential for risk compensation after SARS-CoV-2 antibody testing, and we did not specifically query perceptions around the link between serostatus and risk of infection in our survey. So, a possible explanation for the null effects we observed is that participants did not actually have strong perceptions of the relationship between serostatus and risk for future infection. This would be one possible reason why risk compensation was not observed. We have now added a description of this potential explanation to the third paragraph of the discussion:

“…, study participants were provided clear guidance to continue to practice COVID-19 protective behaviors regardless of their antibody test results (see S1 Fig), which may also account for our findings of no observable behavior change. Relatedly, it is possible that risk behaviors did not differ between study arms because participants did not otherwise have strong perceptions of a relationship between serostatus and protection against future infections…”

3. could you comment on the reasons of 23% of students lost to FUP?

Response: Thanks for the opportunity to clarify. Our primary behavioral outcomes were assessed via electronic survey 2 weeks after baseline. At this time point, we retained over 91% of our sample (missing n=92, 8.5%). We were not able to collect reasons for missing this survey wave, but we hope that this relatively low missingness rate reassures the reviewer that our results are unlikely to be sensitive to missing data.

It was only for our secondary biologic outcome that we used the endline antibody test data to calculate seroconversion outcomes for participants after 8 weeks. For this outcome, the missingness rate was higher at 21%. Note that this excludes participants who tested positive for SARS-CoV-2 antibodies at baseline, as they were not at risk for seroconversion (n=49). Although we did not collect data on reasons for missing the second antibody testing visit, we can speculate on a few contextual explanations: 1) this visit required an in-person fingerstick, a higher burden than an electronic survey; 2) we requested that participants with COVID symptoms or under COVID quarantine/isolation reschedule, but had only limited openings for these appointments; 3) the window for the 8-week study visit fell in the last 2 weeks of the modified Fall 2020 semester with in-person instruction at Indiana University, so participants may have had competing activities that limited their availability for the in-person study visit (exams, parties, preparation for moving back home, etc.).

To address this important point, we have added the following text to the third paragraph of the discussion:

“After enrollment, we maintained a relatively high follow-up rate for the 2-week behavioral outcomes (>90%), but had more limited completeness of follow-up for the secondary seroconversion outcome at 8-weeks (79%). Reasons for the higher attrition at this later timepoint may have involved the higher burden and stricter COVID protocols with the in-person study visit required for the serology testing, and the timing at the end of the in-person semester coinciding with more limited availability of our student population.”

4. could you comment on the small number of participants tested positive?

Response: Thanks for this question. It was helpful for us to reflect on the possible explanations for the relatively low seropositivity at baseline (n=49) and number of new seroconversions under observation (n=42).

These numbers are somewhat low, but it is important to place them in the context of the early stage of the pandemic (September-November 2020), a time when this student population was just returning to campus for the first time since March 2020. It is possible that our numbers accurately capture the relatively low seropositivity. Other 'true negative' reasons that could contribute to our relatively low observed seropositivity are that some participants could have experienced a SARS-CoV-2 infection but not yet developed antibodies, or could have had antibodies that had already waned below detectable levels.

Of course, there is also the possibility that some of our participants received false negative test results. This could have happened due to the moderately low antibody test sensitivity (64%) or to researcher error in reading or administering the tests.

To better characterize these potential explanations in the manuscript, we have now added the following text to the 5th paragraph of the discussion:

“The relatively low number of positive tests we observed could be accurately capturing the low seropositivity at this early stage of the pandemic (September-November 2020). Negative antibody tests would also be expected for participants who have experienced a SARS-CoV-2 infection but not yet developed antibodies, or whose antibodies have already waned below detectable levels. Of course, some of our study participants may have falsely tested negative, due to the moderately low test sensitivity (64%) or researcher error in administering or reading the tests. In both of these scenarios, the measurement error is unlikely to be differential by trial arm as the study team was blinded to trial arm allocation.”
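As a rough back-of-the-envelope illustration of this measurement-error point (the serostatus counts and the specificity below are hypothetical, not study data, and the function name is ours), one can compute how many positive results an imperfect test would be expected to return:

```python
# Illustrative sketch: how an imperfect antibody test shrinks the observed
# count of positives. Sensitivity of 0.64 is the figure cited above; the
# specificity and serostatus counts are assumed for illustration only.

def expected_observed_positives(n_true_pos: int, n_true_neg: int,
                                sensitivity: float, specificity: float) -> float:
    """Expected number of positive test results given true serostatus counts."""
    true_positives = n_true_pos * sensitivity          # correctly detected
    false_positives = n_true_neg * (1 - specificity)   # misclassified negatives
    return true_positives + false_positives

# Hypothetical example: 75 truly seropositive students among 1,076 enrolled,
# with the reported sensitivity (0.64) and an assumed specificity of 0.99.
observed = expected_observed_positives(75, 1076 - 75, 0.64, 0.99)
print(round(observed))  # prints 58: fewer observed positives than true positives
```

Under these assumed inputs, roughly a third of true positives would be missed, consistent with the possibility that observed counts understate true seropositivity.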

R2 Comments

1. It would be most useful to evaluate the data and code used for analysis.

Response: Thanks for flagging this for us. With our revised submission, we have now included a deidentified minimal dataset ('S1 File.csv') and code ('S2 File.sas') underpinning our analyses.

2. The analysis is performed on dichotomized Likert scale responses. The protocol does specify this analysis, but does not specify the dichotomization criteria prospectively. Given the distribution of the data, it does seem unlikely that the choice of dichotomization would matter, but a more principled approach to the analysis would respect the specific structure of the response. After all, if the data were not going to be analyzed as Likert scaled values, then why collect them as such? Alternatively, the protocol should have pre-specified the dichotomization criteria.

The authors do present an alternative analysis treating the Likert scale responses as interval scaled as a sensitivity analysis. This is comforting but does not necessarily completely ameliorate the issue.

A reasonable analysis for these data would be an ordinal logistic regression analysis. A basic approach would be to use a proportional odds model (McCullagh and Nelder, 1989). McCullagh and Nelder also present a slight extension useful for testing the proportional odds assumption versus monotonically changing odds.

For an excellent exposition on the nuts and bolts of implementing the proportional odds model, assessing goodness of fit, and interpreting the results, see Harrell (2010). Professor Harrell has also made numerous resources available online.

Response: As suggested, we ran proportional odds regression models for our four behavioral outcomes by serostatus. The proportional odds assumption was met for all models with the exception of the mask use outcome in the seropositive stratum (n=47). The p-value associated with the Score Test for this outcome/stratum was <0.0001. We suspect this was due in part to the limited sample size overall and within the less frequent response options for masking.

For the remaining outcome models that met the proportional odds assumption (score test p-value >0.05), we ran the proportional odds models to produce summary odds ratios to compare the odds of responding across the Likert categories. The proportional odds models we were able to run supported our primary results and the OLS sensitivity analysis results. We did not observe any statistically significant differences in behavioral outcomes by intervention arm:

Negative antibody test results (n=1029)

Social event avoidance: OR 0.98 (0.73, 1.32)

Staying home from work/school: OR 0.78 (0.61, 1.00)

Mask use in public: OR 0.97 (0.68, 1.40)

Physical distancing in public: OR 0.93 (0.72, 1.19)

Positive antibody test results (n=47)

Social event avoidance: OR 0.83 (0.19, 3.67)

Staying home from work/school: OR 1.39 (0.40, 4.84)

Physical distancing in public: OR 0.63 (0.19, 2.05)

We have decided not to include these results in our sensitivity analysis section, but would be open to doing so if the reviewer and/or editor feels strongly in favor. The reasons we used to decide against inclusion are:

1. Not meeting the proportional odds assumption across all subgroups

2. We prefer to maintain consistency in showing risk ratio and risk difference outcomes as opposed to the odds ratios produced by the proportional odds models

3. Risk ratios are more interpretable effect estimates than odds ratios and are appropriate for our longitudinal data structure. They also better handle data with very common outcomes, as ours are. Odds ratios produce inflated estimates relative to RRs in these situations.
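To illustrate the third point with hypothetical risks (not taken from the trial data), a short sketch comparing the two effect measures when the outcome is common:

```python
# Illustrative sketch: for common outcomes, the odds ratio sits farther from
# the null than the risk ratio. The risks below are hypothetical.

def risk_ratio(p1: float, p0: float) -> float:
    """Ratio of outcome risks between two arms."""
    return p1 / p0

def odds_ratio(p1: float, p0: float) -> float:
    """Ratio of outcome odds between two arms."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Suppose 90% of one arm and 85% of the other report mask use (a common outcome).
p_arm1, p_arm0 = 0.90, 0.85
print(round(risk_ratio(p_arm1, p_arm0), 2))  # prints 1.06
print(round(odds_ratio(p_arm1, p_arm0), 2))  # prints 1.59
```

With these assumed risks, the odds ratio lies roughly ten times farther from 1.0 than the risk ratio, which is the inflation described above.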

Thanks again for pointing us to this modeling strategy.

3. Furthermore, is there any specific reason not to include all data in a single analysis with two factors (seropositivity status and intervention) and their interaction effect? The imbalance might mitigate against this, but it would still be of interest to evaluate the interaction term.

Response: Thanks for the suggestion, and we agree with the reviewer that the study results could be strengthened by including a formal test of interaction between serostatus and intervention arm. We have now run the models as suggested, and include the Wald p-values for this interaction as footnotes to Table 2. Importantly, we observed null effects at 2-weeks for all behavioral outcomes and there was no evidence for statistical difference in effect by serostatus. The text of the footnote added to Table 2 reads as follows:

“Wald p-values for the interaction between serostatus and intervention arm were: 0.60 (social event avoidance), 0.27 (staying home from work/school), 0.08 (mask use in public), and 0.15 (physical distancing in public)”

4. And, along those lines, although it is probably only feasible (if at all) with the dichotomized version of the data (or with the Likert scale data treated as interval scaled), a full model would consider three fixed effects (seropositivity status, intervention, and time) and their interactions. However, this model would require a method for handling the repeated measures across time for each subject. This could be carried out using the 'nlme' R package, most likely (Pinheiro and Bates, 2017). However, convergence is always an issue for these binomial responses. But, treating the responses as intervals scaled should meet with no issues. There are several R packages that extend this modeling in a Bayesian framework as well as other frameworks such as STAN (Carpenter et al, 2017) which could also be used to perform such modeling.

Response: We agree with the reviewer that it would be very interesting to understand the three-way interaction between serostatus, intervention arm, and time. Unfortunately, our data were not structured or distributed to be able to answer this question. Specifically, we only have two time points for the SARS-CoV-2 serology. Importantly, after the second round of serology, no further behavioral outcomes were assessed and results were immediately returned to participants. Thus, we only have a single timepoint for which the randomized intervention applied and for which the behavioral outcomes were assessed.

5. An interesting alternative approach to the sensitivity analysis could be carried out using the concept of specification curve analysis (Simonsohn, Simmons, and Nelson, 2020). In a nutshell, the idea is to specify all of the alternative paths that the analysis could have taken to assess the impact of the one path that was actually chosen. It would be quite feasible to implement this analysis using the 'specr' R package (Masur and Scharkow, 2020).

Response: We appreciate this suggestion and learned a lot in doing some research into specification curve analysis.

We do think, however, that this suggested analysis would not add much value with our particular dataset. In thinking through the 'world' of potential model specifications related to the outcome, we found that we do not have many other reasonable options to test. Cutoffs at different Likert values in behavioral outcome responses are not indicated due to sparse data in some of the response options. The alternate model specifications we were able to run [OLS and proportional odds models (see response to comment 2 above)] both produced results aligned with the null results we observed in our primary dichotomized analysis.

The robustness of our findings across these specified models seems to suggest that there would be little added value from a specification curve analysis.

6. The study sample represents a subset of students who self-selected to join the study (34.4% of randomly selected, eligible IU-B undergraduate students). It could be that in this subset there is no observed risk compensation, but that in the complementary subset of students that did not self-select into the study there is some observed risk compensation. To be able to widen the scope of inference likely requires some assumptions about the comparability of these two subsets. There seems to be no real way to avoid this so it must be addressed in the limitations.

Response: This is a great point. We had a relatively low response rate from our randomly sampled study population (34.4%), leading to concerns that they may not represent the target population of all Indiana University undergraduate students.

In general, the demographic distribution among our enrolled study participants does provide some reassurance on this point. In most respects, our study population reflected the demographic makeup of the underlying student body: by age, racial/ethnic background, on- vs. off-campus residence, and Greek affiliation. However, undergraduate women were overrepresented in our study population (64%) relative to the broader IU undergraduate population (50%). This gender difference would be important if women responded to antibody test information differently than men.

More generally, to assume that our study population stands in for our larger target population, we have to assume that the enrolled study population has similar risk compensation responses as the people who did not enroll. We think that this assumption is generally reasonable, but acknowledge that we are not able to empirically test it.

We hope these edits to the 4th paragraph of the discussion provide a more transparent/thoughtful discussion of these concerns:

“We used random sampling from the IU student population and enrolled a large sample into the randomized trial. Though we observed a relatively low response rate, it is comparable to other response rates in university settings [28]. Nevertheless, to assume that our study population stands in for our larger target population, we have to assume that the enrolled study population has similar risk compensation responses as those who did not enroll. Undergraduate women, in particular, were overrepresented in our study population relative to the student body. Other demographic variables, however, generally tracked with those observed in the student body.”

7. The authors state "However, the results are likely to be generalizable to other young adult populations at other predominantly white universities similar to the one where this study was conducted." (lines 339-340). Is race a deciding factor here? If so, please provide some citations that would support this.

Response: We agree with the reviewer that the sentence as written may have been overly specific. We have now edited the sentence to read:

“However, the results are likely to be generalizable to other young adult populations at other large, public universities similar to the one where this study was conducted.”

Thanks for the suggestion.

8. Not only did states differ in their approaches to handling COVID-19 issues broadly, but even cities within states differed, as did universities within states. And, students at different universities could come from a more or less heterogeneous background. It seems as though all of these effects make it more difficult to generalize rather than more easy.

Response: We agree with the reviewer that the generalizability of our findings depends at least partially on the COVID-19 policy environment. In addition to the edits we incorporated to the 4th paragraph of the discussion as outlined in response to R2, comment 6 above, the first two sentences of the paragraph now read:

“Results from this study likely do not generalize to university settings with different COVID-19 policies in place. However, the results are likely to be generalizable to other young adult populations at other large, public universities similar to the one where this study was conducted.”

9. The authors mention "potential for confounding" in line 347. Please list these potential issues, if only briefly or at a high level.

Response: Thanks for the opportunity to clarify this! We have now edited the first part of the 5th paragraph of the Discussion to more clearly lay out the potential for confounding were this study to be conducted observationally:

“Inference from this study is strengthened by the rigorous randomized study design used. By randomly assigning half of our participants to receive their results immediately, we were able to isolate the effect of receiving the results and minimize the potential for confounding to bias our results. Using a randomized trial design is critical because seeking testing for SARS-CoV-2, either for acute infection or antibody testing, requires both personal motivation and access to services which render comparisons between those who received testing and those who did not subject to significant possible bias. These factors, and those related to them, would be major sources of confounding in an observational study comparing those who did and did not receive antibody tests.”

10. In Figure 2, please show all available data rather than only three time points. Also, the bars in these figures appear offset in a way that makes them hard to interpret.

Although the stacked bar approach is not ideal, it is hard to suggest an alternative. One might attempt grouping bar charts but this is not likely to end well.

Response: Thanks for the suggestions for strengthening Figure 2. We have now added all follow-up timepoints to the graphs.

We have also tried to improve the interpretability and clarity of the graphs by removing the gridlines and ensuring the axes are consistent for each subgroup within each behavioral outcome reported. We hope this helps, and agree with the reviewer that the alternative (grouped bar charts) would not be as effective at conveying the information.

11. Given the linear model approach that was executed in the appendix, please also provide a plot of the least squares estimates over time for each combination of seropositivity status and intervention.

Response: We now include a supplemental figure (S2 Fig) that shows the linear model sensitivity analysis results over time – thanks for the suggestion. Although we agree with the reviewer that the added timepoints provide additional context for readers, we would like to emphasize that our study design necessitated the behavioral outcomes be interpreted at 2 weeks of follow-up only. This is because our delayed arm participants received their test results at 4 weeks of follow-up. So, at the 4-week timepoint and all subsequent timepoints, there was no exposure difference between arms (both arms had received their test results).

R3 Comments

1. Title: it can be improved, highlight the object of the study.

Response: Thank you. We have now edited the title to read:

“Does receiving a SARS-CoV-2 antibody test result change COVID-19 protective behaviors? Testing risk compensation in undergraduate students with a randomized controlled trial”

2. Abstract. I encourage the authors to add more detail about their core contributions in the abstract.

Response: Thanks for the opportunity to explain more clearly our core contribution. We’ve changed the Abstract text accordingly:

“Risk compensation, or matching behavior to a perceived level of acceptable risk, can blunt the effectiveness of public health interventions. One area of possible risk compensation during the SARS-CoV-2 pandemic is antibody testing. While antibody tests are imperfect measures of immunity, results may influence risk perception and individual preventive actions. We conducted a randomized controlled trial to assess whether receiving antibody test results changed SARS-CoV-2 protective behaviors.”

3. Introduction: The authors should make clearer what is the gap in the literature that is filled with this study. The authors must better frame their study within the vast body of literature that also addressed the issue of knowledge concerning COVID-19 that can affect the implementation of control measures in different groups of population (refer to articles with DOI: https://doi.org/10.3390/ijerph182010872).

Response: Please see the response to (2) above in clarifying the gap in the literature on the topic of risk compensation. We have also edited text in the 4th paragraph of the Introduction to make our contributions more explicit:

“Given the many influences on SARS-CoV-2 protective behaviors, a randomized trial is the most appropriate design to avoid confounding by these factors. Understanding the impact of information about past SARS-CoV-2 infection on preventive behavior is essential to managing viral control and for learning more about expected behavior post natural infection and vaccination. The SARS-CoV-2 vaccine has an overwhelmingly beneficial effect on lowering risk of serious COVID-19 disease and mortality [17]. However, understanding how much, if any, risk compensation might occur after natural infection or vaccination has important potential consequences for disease control.”

We agree that knowledge about COVID-19 is an important contributor to people’s uptake of preventive behavior and have edited and added citations in the third paragraph of the Introduction as follows:

“Learning the results of a SARS-CoV-2 test may influence individual behavior through several plausible mechanisms. Theoretically, risk compensation postulates that individuals have some amount of risk that they are willing to assume and will change their behaviors to match that level of risk [6]. Relatedly, behavioral disinhibition theory posits that feelings of protection against one health concern may cause people to engage in behaviors that put them at risk for other health issues. A necessary condition for these behavioral pathways to operate with SARS-CoV-2 tests is that people must have working perceptions that a relationship between prior infection and protection against future infections exists. These perceptions are likely related to overall COVID-19 and immunological knowledge.[7-9]”

4. Methods: The survey was conducted using a non-standard tool. The use of an unreliable instrument is a serious and irreversible limitation. A validation process must be performed to evaluate the tool. What about reliability, intelligibility and validation index? Was a pilot study performed?

Response: We used a portion of the World Health Organization’s survey tool for “Monitoring knowledge, risk perceptions, preventive behaviours and trust to inform pandemic outbreak response” to assess the protective behavior outcomes and have changed the text to report the origin of the survey questions. Conducting a pilot study, while important for tool validation in non-urgent settings, would have precluded timely conduct of the study. The text we added to the 6th paragraph of the methods section is reproduced here:

“…participants were followed-up in parallel every two weeks with online surveys assessing their engagement in key COVID-19 protective behaviors using a scale from the World Health Organization COVID-19 survey tool.”

Citation:

World Health Organization. Monitoring knowledge, risk perceptions, preventive behaviours and trust to inform pandemic outbreak response: Survey tool and guidance. 2020.

5. The enrolment procedure must be specified. How did the authors choose the way to select the sample? This can represent a great bias origin. How did they avoid the selection bias? The authors do not propose a minimum sample size. Without the numerical identification of the reference population is not clear the validity of the study. A non-representative sample is by its self a non-sense-survey.

Response: We clarified that we conducted a simple random sample of IU undergraduate students and include here the edited text and surrounding information that explains our sampling procedure.

“We conducted a simple random sample of all IU Bloomington undergraduate students enrolled at the beginning of the Fall 2020 semester, yielding 7,499 students from the sampling frame. Students in the sample were eligible to participate if they were (1) aged 18 years or older, (2) a current IU Bloomington undergraduate student, and (3) currently residing in Monroe County, Indiana. Of those sampled, 4,069 potential participants met the inclusion criteria for the study. All sampled students were contacted by email with a study invitation and a link to detailed information about study objective and procedures. The study team offered potential participants the opportunity for email or telephone consultations to answer any additional questions about study participation. After reviewing study information, interested and eligible students provided written informed consent remotely with an electronic signature.”

We take the concerns about the representativeness of our sample seriously and have made edits to the 4th paragraph of our discussion to more thoroughly and transparently address the potential threats to external validity. Please see our response to R2, comment 6 above for more details.

6. Statistical analysis: I suggest to insert a measure of the magnitude of the effect for the comparisons. Please consider to include effect sizes.

Response: We apologize if we are misunderstanding the reviewer’s suggestion here, but we include both relative risks (RRs) and risk differences (RDs) as measures of effect for our primary and secondary outcomes in Table 2. We discuss these effect estimates in terms of magnitude (point estimates) and precision (95% CIs) in the table and text. Please let us know if there was something else you were suggesting.

7. Discussion: I also suggest expanding. Emphasize the contribution of the study to the literature. The discussion must be updated with the discussion regarding knowledge about the diseases (see the above mentioned reference) that can be an important confounding factors for this study. The Authors should add more practical recommendations for the reader, based on their findings. Also, the section of limitations and future search is also very short, the Authors could elaborate on that.

Response: Thanks for these suggestions to improve the discussion!

We place our findings in the current literature in the second paragraph of the discussion:

“Our finding that behavior remained largely unchanged after receipt of antibody test results aligns with findings from prior studies of risk compensation in other settings. Four interventions are often recognized for their potential to cause risk compensation: bicycle helmets, seatbelts, voluntary medical male circumcision and pre-exposure prophylaxis for HIV prevention, and HPV vaccination. Exhaustive reviews in each of these areas have consistently concluded that there is, in fact, little evidence for increased risk-taking after intervention exposure [10-13, 16, 28]. In spite of the weak record for risk compensation in other settings, many have raised concerns about the threat of risk compensation in the COVID-19 pandemic [29]. Vaccines and face mask mandates have been two areas of concern for risk compensation, but very few empirical studies have tested these associations. However, there is at least some suggestive evidence that risk compensation may, in fact, occur in settings with face mask mandates [30]. With the lack of risk compensation observed in our study, we provide additional empirical data to better shape our understanding of behavioral disinhibition in the COVID-19 pandemic.”

We now revisit the important point about COVID-19 knowledge and risk perception in the 3rd paragraph of the discussion:

“Relatedly, it is possible that risk behaviors did not differ between study arms because participants did not otherwise have strong perceptions of a relationship between serostatus and protection against future infections.”

We have added substantial discussion of study limitations in the 4th and 5th paragraphs, more thoroughly covering concerns about generalizability, loss to follow-up, confounding, and low seropositivity.

Attachment

Submitted filename: Ab RCT ms_Response.docx

Decision Letter 1

Hermano Alexandre Lima Rocha

6 Dec 2022

Does receiving a SARS-CoV-2 antibody test result change COVID-19 protective behaviors? Testing risk compensation in undergraduate students with a randomized controlled trial

PONE-D-22-18389R1

Dear Dr. Rosenberg,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Hermano Alexandre Lima Rocha

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Congratulations on your fine submission. We wish you all the best.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: (No Response)

Reviewer #3: No

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: (No Response)

Reviewer #3: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: (No Response)

Reviewer #3: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: (No Response)

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: No additional comments. In my opinion, the requested information has been exhaustively addressed, especially with respect to my concerns.

Reviewer #2: The authors have done an excellent job of thoroughly responding to all comments. The implementation of the analysis in SAS is nicely done as well.

Reviewer #3: The main problems of questionnaire validation and an unrepresentative sample remain. This undermines the paper.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Manuela Monti

Reviewer #2: No

Reviewer #3: No

**********

Acceptance letter

Hermano Alexandre Lima Rocha

12 Dec 2022

PONE-D-22-18389R1

Does receiving a SARS-CoV-2 antibody test result change COVID-19 protective behaviors? Testing risk compensation in undergraduate students with a randomized controlled trial

Dear Dr. Rosenberg:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Hermano Alexandre Lima Rocha

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Fig. Educational material provided to participants at time of antibody test result return.

    (TIF)

    S2 Fig. Sensitivity analysis for association between treatment condition (immediate vs delayed antibody test results) and mean frequency of engagement in protective behaviors at 2 weeks (primary endpoints), and across additional timepoints over 8 weeks of follow-up, stratified by baseline antibody status.

    (TIF)

    S1 Table. Sensitivity analysis for association between treatment condition (immediate vs delayed antibody test results) and mean frequency of engagement in protective behaviors at 2 weeks, stratified by baseline antibody status.

    (DOCX)

    S1 File. Deidentified minimal dataset.

    (CSV)

    S2 File. SAS code to produce study results.

    (SAS)

    S1 Data

    (DOCX)

    S1 Checklist

    (PDF)

    Attachment

    Submitted filename: Ab RCT ms_Response.docx

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting Information files.

