PLoS One. 2021 Jan 28;16(1):e0244865. doi: 10.1371/journal.pone.0244865

Understanding the relationship between safety beliefs and knowledge for cognitive enhancers in UK university students

Ngoc Trai Nguyen, Tim Rakow, Benjamin Gardner, Eleanor J Dommett*
Editor: Angel Blanch
PMCID: PMC7842904  PMID: 33508011

Abstract

Background

Cognitive enhancers (CE) are prescription drugs taken, either without a prescription or at a dose exceeding that which is prescribed, to improve cognitive functions such as concentration, vigilance or memory. Previous research suggests that users believe the drugs to be safer than non-users do, and that users believe they have sufficient knowledge to judge safety. However, to date no research has compared the information sources used and the safety knowledge of users and non-users.

Objectives

This study compared users and non-users of CE in terms of i) their sources of knowledge about the safety of CE and ii) the accuracy of their knowledge of possible adverse effects of a typical cognitive enhancer (modafinil); and iii) how the accuracy of knowledge relates to their safety beliefs.

Methods

Students (N = 148) from King’s College London (UK) completed an anonymous online survey assessing safety beliefs, sources of knowledge and knowledge of the safety of modafinil; and indicated whether they used CE, and, if so, which drug(s).

Results

The belief that the drugs are safe was greater in users than non-users. However, both groups used comparable information sources and had similar, relatively poor drug safety knowledge. Furthermore, despite users more strongly believing in the safety of CE, there was no relationship between their beliefs and knowledge; in contrast, non-users did show correlations between beliefs and knowledge.

Conclusion

These data suggest that the differences in safety beliefs about CE between users and non-users do not stem from use of different information sources or more accurate safety knowledge.

Introduction

Cognitive enhancers (CE), commonly referred to as smart drugs, are prescription drugs taken by individuals, either without a prescription or at a dose exceeding that which is prescribed, to improve cognitive functions such as concentration, vigilance or memory [1]. CE were originally developed to treat a range of disorders including Attention Deficit Hyperactivity Disorder (ADHD), Alzheimer’s disease and narcolepsy [2], by targeting various deficits in cognitive functioning such as attention, aberrant learning, and absence of top-down cognitive control [3]. However, they are increasingly being used by healthy individuals to enhance cognition, even though questions remain around their ability to do so within non-clinical populations [2, 4, 5]. One population for whom use of CE is thought to be particularly prevalent is university students, who seek to enhance functioning to improve academic performance [2, 6–8]. The reported prevalence of CE use among students varies markedly, ranging from 5–35% in US studies [8], 1–16% in Continental Europe [9–11], and 9–17% in studies of UK students [12–14].

Several studies have examined ethical, legal and social issues surrounding CE and identified concerns regarding coercion, drug abuse, morality, illegality, fairness and equality [15, 16]. Studies looking at attitudes towards CE indicate that attitudes are impacted by views on these issues, as well as other factors such as stress, learning approaches, competitiveness and awareness of the possibility of using certain drugs to enhance cognition [3, 17, 18]. Safety concerns have also been raised [19]. Perhaps unsurprisingly, studies have found users to be less concerned about the safety of CE than non-users [4, 20], and perceptions of the severity of possible health risks have been inversely associated with willingness to use CE [21–27]. We recently demonstrated that perceived harmlessness and the belief that an individual knows enough about CE to use them safely were significant positive predictors of attitudes towards CE, which in turn predicted CE use in UK university students [12]. However, to date, no study has examined where users and non-users obtain safety information, or the accuracy of their knowledge of drug safety. Furthermore, the relationship between beliefs about the safety of CE and actual knowledge of drug safety remains unclear.

The studies conducted to date indicate the value of further investigating beliefs and knowledge about safety in key populations. When making any health-related decision, including whether or not to use CE, we might assume that individuals will conduct a cost-benefit analysis of a specific course of action. However, a large body of evidence from decision psychology suggests that, often, people do not take this consequentialist approach in their decisions [28]. Rather, they rely heavily on emotional responses to stimuli and events, in which negative emotions such as fears, anxieties and worries are particularly important. Consequently, the features of a decision that elicit these emotions (e.g., risks, and how they are presented) are particularly influential. A related proposal is that negative stimuli may be more salient than positive ones, creating a general negativity bias within our decision-making [29, 30]. This negativity bias is supported by a range of evidence, including eye-tracking during reading of risk and benefit health information [31], and may explain why interventions that successfully change risk perceptions are more likely to result in health behaviour change than those which focus on benefits. In keeping with these general features of people’s decision making, most studies looking at CE tend to focus on perceptions of risk and safety rather than knowledge or evaluations of benefits. There are likely to be several specific reasons for this. Firstly, the perceived safety of drugs can be defined as the absence of risk awareness or risk knowledge [32]. Secondly, there is evidence that the benefits of CE are doubted and perceived as highly variable, even in the student populations where prevalence is high [33], meaning the benefits may be even less informative in this case. Thirdly, when off-label use is considered, including that of the cognitive enhancer modafinil, key professionals (e.g. physicians, regulators) focus on risks, not benefits [34, 35].

Based on the evidence that safety is a key issue in research into CE and the prominence of risk over benefits in both theoretical models and health interventions, the present study addresses three novel research questions: comparing users and non-users of CE in terms of i) their sources of knowledge about the safety of CE and ii) the accuracy of their knowledge of possible adverse effects of a typical CE; and iii) examining how the accuracy of this knowledge relates to safety beliefs.

Method

All procedures were approved by the Institutional Research Ethics Committee (HR15/162824) at King's College London. Written consent was obtained via the anonymous online platform prior to participants accessing the survey.

Participants and procedure

Eligible participants (full-time students at the host UK university, aged 18 years or over) completed an online survey. The study was advertised via email circulars to all students at the host institution, asking for participants to complete a survey about perceptions of safety and risk of CE. The study was conducted via an anonymous, online questionnaire hosted by Qualtrics. Study adverts featured a URL linking to the study information and a consent form. Consenting participants were granted access to the online questionnaire, which took approximately 30 minutes to complete. Those who completed the questionnaire were offered entry into a £100 (~$130/€113) Amazon.co.uk voucher prize draw. Data were collected between January and May within the academic year.

Survey measures

Given the novelty of studying these psychological constructs in relation to CE, few measures existed for assessing them. Therefore, unless otherwise indicated, the items described below were designed for this study. A copy of the full survey items can be found in S1 File.

Sample characteristics

Participants were asked to state their gender, age, and the type of qualification they were studying for at the time of the survey, selecting from: undergraduate degree (e.g. BSc), taught postgraduate degree (e.g. MSc) and postgraduate research degree (e.g. PhD). They were asked whether they had taken any of the following with the intent of improving their study results during their current qualification: methylphenidate, amphetamine, modafinil, beta-blockers, rivastigmine. Listings included only drug names rather than brand names, in line with previous research [3, 12]. These drugs were all chosen based on their frequent citation in the literature on CE use [36–38] and recent work showing use of these drugs to enhance cognition in student populations [2]. Participants who indicated they had taken any of these CE were asked to indicate which they took and the frequency with which they used them (from less than once per year to more than once per week).

Safety beliefs

Participants rated their agreement with three statements (1 = strongly agree, 7 = strongly disagree) regarding safety: i) It is important to know whether smart drugs are safe to use (‘Safety Importance’); ii) I know enough about smart drugs to judge whether they are safe to use (‘Sufficient Safety Knowledge’) [3, 12]; iii) I think smart drugs are safe to use (‘Safe to Use’) [3]. Each of these safety belief items was reverse scored, such that a high score indicated strong agreement, and entered into the analysis as a separate variable.
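Reverse scoring of a 1–7 agreement item simply subtracts the response from 8. A minimal sketch in Python (the variable names are illustrative, not taken from the study materials):

    import pandas as pd

    # Three hypothetical responses on the 1 = strongly agree ... 7 = strongly disagree scale
    df = pd.DataFrame({"safe_to_use": [1, 4, 7]})

    # Reverse score so that a high value indicates strong agreement
    df["safe_to_use_reversed"] = 8 - df["safe_to_use"]
    print(df["safe_to_use_reversed"].tolist())  # [7, 4, 1]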

Sources of information

Participants identified the sources they used when considering the safety of CE from: personal experience, experiences of peers, information from websites, social media, NICE guidelines, scientific research, and other (please specify). For each source, irrespective of whether they had used it, they then rated its reliability (1 = extremely unreliable, 7 = extremely reliable).

Knowledge of drug effects

Participant knowledge of the effects of modafinil was assessed using three questions that could be answered using the typical information found in the drug safety leaflet accompanying prescription medication. Note that modafinil was selected because our previous research indicated it was the most commonly used of the listed CE at the host institution, meaning that participants were most likely to be familiar with this drug [12]. First, participants selected ‘true’ or ‘false’ to indicate which of five medical conditions (e.g. diabetes, migraines) modafinil should not be taken with (‘Not Safe’). Second, participants identified, in the same way, which of nine conditions would require careful monitoring if modafinil is taken (e.g. depression, heart problems) (‘Monitor’). Third, participants indicated the frequency with which fifteen known side effects (e.g. headache, chest pain) of modafinil occur, choosing from three options (1 in 10, 1 in 100, or 1 in 1000 individuals) (‘Side Effects’).
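To illustrate how accuracy scores of this kind can be computed, the sketch below scores the five ‘Not Safe’ items against an answer key. The key and the responses here are hypothetical; the real key follows the modafinil drug safety leaflet:

    # Hypothetical answer key for the five 'Not Safe' true/false items
    answer_key = {"condition_1": True, "condition_2": False, "condition_3": True,
                  "condition_4": False, "condition_5": True}

    def score_not_safe(responses):
        """Return the number of items answered correctly (0-5)."""
        return sum(responses[item] == correct for item, correct in answer_key.items())

    # One hypothetical participant's responses, scoring 3 out of 5
    participant = {"condition_1": True, "condition_2": True, "condition_3": True,
                   "condition_4": False, "condition_5": False}
    print(score_not_safe(participant))  # 3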

Data analysis

Initial analysis was conducted to characterise the sample by calculating percentages for each level of the categorical variables and the mean for age. Chi-square analysis (gender, qualification) or an independent-sample t-test (age) was used to establish whether there were differences between users and non-users on these characteristics. Percentage data were calculated for frequency of use. Given that previous studies found beliefs around safety to differ between users and non-users, we also characterised safety beliefs for the whole cohort and compared the two groups on each belief (Safety Importance; Sufficient Safety Knowledge; Safe to Use). The mean (M) and standard deviation (SD) were calculated for all three safety belief measures, and Pearson’s correlation coefficients were calculated for the relationships between them. Finally, binary logistic regression was used to establish whether the safety beliefs predicted CE use.
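A minimal sketch of this pipeline in Python (using scipy and statsmodels; the file and column names are our illustrative assumptions, with ‘gender’ and ‘user’ coded 0/1):

    import pandas as pd
    from scipy import stats
    import statsmodels.api as sm

    df = pd.read_csv("survey.csv")  # hypothetical file, one row per participant

    # Group comparisons of sample characteristics
    chi2, p_gender, dof, _ = stats.chi2_contingency(pd.crosstab(df["gender"], df["user"]))
    t, p_age = stats.ttest_ind(df.loc[df["user"] == 1, "age"],
                               df.loc[df["user"] == 0, "age"])

    # Binary logistic regression: do the safety beliefs (plus gender) predict use?
    X = sm.add_constant(df[["gender", "safety_importance",
                            "sufficient_knowledge", "safe_to_use"]])
    print(sm.Logit(df["user"], X).fit().summary())
    # Note: statsmodels reports McFadden's pseudo-R2 by default; the Nagelkerke
    # R2 shown in Table 3 requires a separate calculation.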

Aim 1 was to compare users and non-users in terms of their sources of knowledge about the safety of CE. To do so, three analyses were conducted. Firstly, the total number of sources consulted was calculated and compared between the two groups using an independent-sample t-test. Secondly, for each of the sources, a Chi-square analysis examined whether there were differences between users and non-users. Finally, the perceived reliability of the different sources was compared between users and non-users with a mixed-measures ANOVA followed by post-hoc paired-sample t-tests. Aim 2 was to compare the accuracy of safety knowledge in users and non-users of CE. To do this, the total number of correct answers was calculated for each of the three safety questions (Not Safe, Monitor, Side Effects) and independent-sample t-tests were then used to establish group differences. For these and other between-subjects tests of mean difference we assume homogeneity of variance in the population; analyses reported in S2 File show that no conclusions change if this assumption is not made. Aim 3 was to examine how the accuracy of participants’ knowledge related to their safety beliefs. To achieve this, Pearson’s correlation coefficients were calculated for the relationship between each type of accuracy and each safety belief, for the entire cohort and then for users and non-users separately; a sketch of this analysis follows.
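The Aim 3 correlations, continuing the hypothetical dataframe from the sketch above, might look as follows (the column names remain illustrative assumptions):

    from scipy import stats

    belief_cols = ["safety_importance", "sufficient_knowledge", "safe_to_use"]
    knowledge_cols = ["not_safe_correct", "monitor_correct", "side_effects_correct"]

    # Belief-knowledge correlations for the whole cohort and for each group
    for label, subset in [("all", df), ("users", df[df["user"] == 1]),
                          ("non-users", df[df["user"] == 0])]:
        for belief in belief_cols:
            for knowledge in knowledge_cols:
                r, p = stats.pearsonr(subset[belief], subset[knowledge])
                print(f"{label}: {belief} x {knowledge}: r = {r:.3f}, p = {p:.3f}")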

Results

Sample characteristics

Two hundred and twenty-two participants expressed an interest in the study and 204 gave consent to participate and accessed the anonymous questionnaire, of whom 148 (73%) completed it. All 148 were included in subsequent analysis. The majority of those completing the survey were female (78%), a slightly higher proportion than for the overall student population at the host institution, where 66% of students are female. Most participants were studying for an undergraduate qualification (62%), but taught postgraduate (26%) and postgraduate research (12%) students were also represented. These proportions are representative of the overall student body at the host institution, where approximately 60% of students are undergraduates. The mean age of participants was 23.9 years (SD = 4.88), as expected for a student population. Twenty-one percent (N = 30) of participants reported use of CE during their current qualification. Users were significantly more likely to be male (χ2 (1) = 4.91, p = 0.027; 34.4% of males; 16.5% of females). There was no difference in qualification between users and non-users (χ2 (2) = 5.53, p = 0.063). There was also no mean difference in the age of users and non-users (t(146) = 1.21, p = 0.227, 95% CI -0.78, 3.29). The most used drug was modafinil and no users reported multiple drug use (Table 1). Frequency of use varied considerably; the most common frequency was once per term (30%), followed by once per month (20%). More frequent use was also found in a substantial proportion (once per week 16.7%; more than once per week 16.7%), with fewer participants reporting infrequent use (once per year 3.3%; less than once per year 13.3%).

Table 1. Twenty-one percent of the whole sample reported use of CEs.

Cognitive Enhancer Percentage Reporting Use (%)
Modafinil 70.0
Beta-blockers 13.3
Amphetamine 10.0
Methylphenidate 3.3
Rivastigmine 3.3

The breakdown of use across the five commonly used drugs within this group is shown here.

When considering all participants (i.e. users and non-users together), mean data indicate there was a strong belief that it was important to know about the safety of CE. Scores on the remaining belief statements suggest a wider range of beliefs and less overall agreement. There were significant correlations between two of the three possible pairings of safety beliefs (Table 2), the strongest of these reflecting that those with higher self-rated knowledge were more likely to rate CE as safe.

Table 2. Correlations between safety knowledge and beliefs (reversed).

Variable  Scale range  Mean (SD)  1  2  3  4  5
1. Safety Importance 1–7 6.59 (0.71) -
2. Sufficient Safety Knowledge 1–7 3.36 (1.69) -.213** - - - -
3. Safe to Use 1–7 3.97 (1.43) -.081 .398** - - -
4. ‘Not safe’ Accuracy 0–5 3.18 (1.02) -.117 .258** .232** - -
5. ‘Monitoring’ Accuracy 0–9 6.91 (1.31) .174* .033 .144 .120 -
6. ‘Side Effects’ Accuracy 0–15 5.75 (2.46) -.004 .015 -.192* -.156 .057

N = 148

* p < 0.05

** p < 0.01

Logistic regression analysis (Table 3) found that, together, the three safety beliefs, along with gender (included because gender differences were found in the present study), accounted for 36.7% of the variance in CE use (χ2 (4) = 39.11, p<0.001). Within the model, the belief that CE were safe to use was the only significant predictor of use, such that for each additional 1-point increase on the 1–7 scale the odds of being a CE user increased by more than two-and-a-half times. To further illustrate the substantial size of this effect: of the 50 participants giving a response below the scale midpoint (‘1’ to ‘3’) for ‘safe to use’ only 4% used CE, while of the 58 participants giving a response above the scale midpoint (‘5’ to ‘7’) 43% used CE (with a 63% prevalence of use among those responding ‘6’ or ‘7’).

Table 3. Logistic regression of safety beliefs about CE as predictors of CE use.

Variable R2 B S.E. Wald χ2 OR 95% CI
Constant .367 -4.401 2.624 2.813 .012
Gender -.903 .524 2.970 .405 .145–1.132
Safety Importance -.219 .351 .389 .803 .404–1.598
Sufficient Safety Knowledge .157 .165 .907 1.170 .847–1.616
Safe to Use 1.016** .280 13.122 2.761 1.594–4.784

Note that the table presents the total R2 Nagelkerke statistic

** p<0.001.
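As a check, the odds ratio and its confidence interval for ‘Safe to Use’ follow directly from the coefficient and standard error reported in Table 3:

    OR = e^B = e^1.016 ≈ 2.76
    95% CI = e^(B ± 1.96 × S.E.) = e^(1.016 ± 1.96 × 0.280) ≈ (1.59, 4.78)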

Aim 1: Which information sources do users and non-users draw on?

While the mean number of sources used was higher for users (M = 3.43, SD = 1.14) than non-users (M = 2.97, SD = 1.21), this difference was not statistically significant (t(145) = 1.91, p = 0.058, 95% CI -0.95, 0.01), although the effect was small-to-medium in size (d = 0.39). Thus, we did not detect a difference between users and non-users in the mean number of information sources used. Using G*Power software, we determined that our sample size was sufficient for 0.95 power to detect a mean difference of one additional data source in the population using a two-tailed test (see S2 File for full details). Therefore, if a difference of that scale or larger were to exist in the population, it is highly unlikely that we would have failed to detect it.
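The published power analysis used G*Power; an equivalent check can be run in Python. The pooled SD of roughly 1.2 is our approximation from the group SDs reported above, not the authors’ exact G*Power configuration:

    from statsmodels.stats.power import TTestIndPower

    effect_size = 1.0 / 1.2  # a one-source mean difference as Cohen's d, assuming SD ~ 1.2

    power = TTestIndPower().power(effect_size=effect_size, nobs1=30,
                                  ratio=118 / 30, alpha=0.05,
                                  alternative="two-sided")
    print(f"power = {power:.2f}")  # ~0.95 or higher for 30 users vs 118 non-users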

The percentage of those using the different types of information source is shown in Fig 1, along with ratings of reliability for all sources. Note that 4.4% reported using ‘Other’ sources, but examination of the specified ‘other’ sources revealed no consistent sources and therefore, given the low proportion of participants using them, this category was excluded from further analysis. Chi-square tests revealed that, for most sources, there were no statistically significant differences between users and non-users in the proportion drawing on the source (experience of peers χ2 (1) = 3.36, p = 0.067; websites χ2 (1) = 2.47, p = 0.116; social media χ2 (1) = 2.00, p = 0.157; NICE guidelines χ2 (1) = 1.93, p = 0.165; scientific research χ2 (1) = 0.79, p = 0.384). The only exception was personal experience (χ2 (1) = 24.09, p<0.001), which was more likely to be used by users than by non-users (73.3% vs. 25.4%, respectively). A mixed-measures ANOVA, with source as a within-subjects factor incorporating six levels (i.e. all sources except those labelled as ‘Other’) and use status as a between-subjects factor, was used to examine perceived reliability scores for the different sources. This revealed no significant effect of use status (F(1,145) = 0.221, p = 0.639, η2p = 0.002) but a significant main effect of source (F(3.41,493.98) = 92.12, p<0.001, η2p = 0.389). There was no use status x source interaction (F(3.41,493.98) = 1.16, p = 0.327, η2p = 0.008). Paired-sample t-tests (corrected α = 0.003) showed there were significant differences between all sources (p<0.001) except personal experience and websites (t(147) = 1.92, p = 0.057, 95% CI -0.10, 0.65) and experience of peers and websites (t(146) = 2.84, p = 0.005, 95% CI -0.70, -0.13).

Fig 1. The frequency of use and perceived reliability for different sources of information about the safety of CE shown by use status.


Bars indicate the percentage using a source whilst lines indicate the perceived reliability of the sources. Note that the most frequently used source (websites) was only the fourth most reliable.

Accuracy of safety knowledge (Aim 2) and its relation to safety beliefs (Aim 3)

Considering all participants together, accuracy for the ‘Side Effects’ information was lowest at 38.3% correct (SD = 16.4%), followed by ‘Not Safe’ information with 63.6% correct (SD = 20.4%). The ‘Monitor’ information was most accurate, with an average of 76.8% correctly identified (SD = 14.5%). For none of these measures did the mean number correct differ significantly between users and non-users (Fig 2): Not Safe, t(146) = 1.35, p = 0.179, 95% CI -0.69, 0.13; Monitor, t(146) = 0.72, p = 0.471, 95% CI -0.72, 0.34; Side Effects, t(146) = 1.38, p = 0.171, 95% CI -0.30, 1.68. The confidence intervals for these mean differences are for scales with a maximum possible score of 5, 9 and 15, respectively; the 95% CIs therefore span 16.4%, 11.8% and 13.2% of each scale-range, respectively. We used G*Power software to determine the sample sizes required for 0.8 power to detect an absolute mean difference of 10% of the scale-range for each knowledge score. Our sample size exceeds the sample size required for this level of statistical power for the Monitor and Side Effects scores, though an additional 56 participants would be required to achieve this degree of power for the Not Safe scale (see S2 File); a sketch of this calculation appears after Fig 2.

Fig 2. Accuracy of safety knowledge regarding modafinil, broken down into the three categories typically found in drug safety leaflets, does not differ significantly between users and non-users.

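The required-sample calculation can be sketched in the same way, taking the whole-sample SDs from Table 2 as stand-ins for the population SDs (a simplifying assumption on our part) and keeping the observed 30:118 group ratio:

    from statsmodels.stats.power import TTestIndPower

    # (scale maximum, whole-sample SD) for each knowledge score, from Table 2
    scales = {"Not Safe": (5, 1.02), "Monitor": (9, 1.31), "Side Effects": (15, 2.46)}

    for name, (scale_max, sd) in scales.items():
        d = (0.10 * scale_max) / sd  # a 10%-of-scale-range difference as Cohen's d
        n_users = TTestIndPower().solve_power(effect_size=d, power=0.80,
                                              alpha=0.05, ratio=118 / 30)
        print(f"{name}: d = {d:.2f}, required n (users) = {n_users:.0f}")

Under these assumptions, the Not Safe scale requires roughly 41 users (about 200 participants in total, broadly consistent with the additional 56 reported above), while the Monitor and Side Effects scales need fewer participants than were tested.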

Pearson’s correlation revealed several significant correlations between the accuracy of safety knowledge and safety beliefs when all participants were considered (Table 2). Notably, ‘Safe to Use’ beliefs correlated positively with ‘Not Safe’ knowledge but negatively with ‘Side Effects’ knowledge, indicating that participants who felt CE were safe to use had less knowledge of side effects. Those with strong ‘Sufficient Safety Knowledge’ beliefs, i.e. those who felt they knew enough to judge safety, did have better ‘Not Safe’ knowledge, but no other correlations with knowledge were found. Finally, participants with stronger ‘Safety Importance’ beliefs did have better ‘Monitoring’ knowledge, but no relationships existed for the other knowledge types. Interestingly, when considering users only, we find no correlations between safety beliefs and accuracy for this group (-.118 < r < .124, all p > .513). By contrast, all correlations noted above for the whole group remained when non-users were considered alone, except the relationship between ‘Safe to Use’ and ‘Side Effects’, which did not reach significance (r = -.169, p = 0.068).

Discussion

Building on previous research demonstrating that beliefs about the safety of CE can predict attitudes and, in turn, use of CE [12], the current study aimed to compare users and non-users of CE in terms of i) their sources of knowledge about the safety of CE and ii) the accuracy of their knowledge of possible adverse effects of a typical CE; and iii) to examine how the accuracy of their knowledge relates to their safety beliefs. Before considering the findings in relation to these aims, it is helpful to note that both users and non-users felt that it was similarly important to know about the safety of CE, indicating that the two groups do not differ substantially in the value that they place on safety information. However, users reported stronger agreement that they knew enough about CE to judge safety and (to a substantial degree) that they are safe to use. These findings are in line with previous research [4, 12, 20]. It is typically assumed that the belief that CE are unsafe prevents individuals from taking them, whilst those who perceive them as safe are more likely to take them [10, 22, 23], but the exact relationship between safety beliefs and use is unclear. An alternative explanation for this association between CE use and safety beliefs could arise from the partial dissociation between risk perception and risk taking [39] in combination with the phenomenon of cognitive dissonance [40]. While, in general, we avoid behaviours that we regard as unsafe, this is not always true for all individuals and all situations. For example, imagine someone takes CE to get them through an academic ‘crisis’ even though they have concerns about drug safety. Dissonance theory predicts they would then adjust their beliefs or attitudes to make them more consistent with their recent actions; in this instance, this would mean developing a new belief that CE are not unsafe. Thus, under this account, CE use drives beliefs about safety, rather than the reverse. The reduction of dissonance (discomfort) that is achieved by aligning beliefs with actions may occur without recourse to external sources of information. The nature of the relationship between safety beliefs and CE use therefore still warrants further exploration.

For Aim 1, we found no significant difference in the number of sources used by users and non-users. We note here, however, that our study was not highly powered to detect small differences. Nonetheless, based on the confidence interval reported for this mean difference, it is unlikely that the true difference between users and non-users in the number of information sources is large (e.g., that, in the population of interest, users actually use 1 or 2 more sources on average). The sources students most commonly used when thinking about safety were scientific research and websites. Interestingly, whilst they rated the former as highly reliable, websites were not rated as particularly reliable, ranking only fourth out of the six sources for which ratings were given, indicating a mismatch between preferred sources and perceived reliability. For most sources there were no differences between users and non-users in terms of whether the source was used for safety information. The only exception was personal experience which, almost by definition, was higher in users. This additional source may explain the stronger belief in the safety of CE among users, although this would seem at odds with the perceived reliability of personal experience, which was rated only third of the six sources. An alternative to source differences as an explanation for the differing beliefs is that motivated reasoning is employed when assessing safety information [41], such that users may be more likely to reach the conclusion that the drugs are safe when presented with exactly the same information as non-users. This has been shown to be the case with caffeine, such that coffee drinkers perceive fewer health risks relative to non-coffee drinkers [42], and it may therefore warrant further investigation with regard to CE.

The relatively high use of peer experiences as a source of information may reflect the ready availability and vividness of such anecdotal information [43]. Anecdotes can be an important influence on decisions, including decisions about one’s own healthcare when statistical information is also available [44], especially when delivered in a compelling manner. For example, research has shown that evaluative comments about university courses delivered face-to-face by a few (1 to 4) upper-level students had greater influence on students’ intentions to enrol in those courses than written summaries of course evaluations from a much larger number of students (26 to 132) who took the course the previous semester [45]. This was also found when the face-to-face comments of (ostensible) students were scripted to match the summaries of course evaluations in the written reports. Borgida and Nisbett argued that peer experiences are influential because they are vivid (having the character of first-hand experience) and concrete (and therefore prompt action), and because people intuitively respond to small samples of information as if they were representative of large-sample information, i.e., as if they were a reliable source [45]. Thus, while it may be unsurprising that our participants cited peer experiences as a common source of information, what is learnt from the current study is that the participants were explicit in attributing only modest reliability to this source.

For Aim 2, the data showed that knowledge of drug safety was not particularly high. Accuracy was 63.6% for the true/false questions about conditions with which modafinil should not be taken and 76.8% for those about conditions requiring careful monitoring, while accuracy in estimating the frequency of known side effects (38.3%) was close to the chance level (33.3%) for these three-option questions. Despite users believing that CE are safe to use in comparison to non-users, users were no more (or less) accurate in their responses to the safety knowledge questions. In terms of the relationship between accuracy of knowledge and safety beliefs (Aim 3), we found no significant relationships between safety beliefs and actual safety knowledge for users, indicating that their belief that the drugs are safe to use is not based on more accurate safety knowledge of the drugs. Notably, for non-users there was a significant correlation between the belief that the drugs were safe to use and knowledge of which conditions modafinil should not be taken with; this knowledge also correlated with the belief that they knew enough to judge safety. Additionally, knowledge of the conditions for which careful monitoring is required correlated with the belief that safety is important. This indicates that for non-users there are some positive correlations between safety beliefs and safety knowledge. This is consistent with previous studies showing that perceptions of the severity of possible health risks are inversely associated with willingness to use CE [21–27], and indicates that, as well as severity, the frequency of side effects may be important.

In addition to providing data to address the specific aims of the study, we report the prevalence and frequency of CE use. We found that 21% of students surveyed were using CE. This prevalence is considerably higher than the 11% [14] and 9.4% [13] reported in previous UK studies, but it is in line with our previous research at this institution [12]. The higher rate of use among our participants may reflect the fact that the current study was conducted in a more competitive environment than the previous studies; it has previously been shown, outside of the UK, that use of CE is around two times higher at universities with more competitive admission criteria [46]. Another explanation for the increased prevalence in the current study is that use of CE is increasing, as has been suggested [13, 47, 48]. The current study also showed a relatively high frequency of use, with only a minority of participants being occasional users. This high frequency of use is somewhat at odds with the low levels of consistent use reported by Singh et al. [13], but is more in line with the media coverage of pharmaceutical cognitive enhancement, which emphasises that sustained use of CE is highly prevalent among UK students [49].

Limitations

There are several limitations to this study that should be acknowledged. Firstly, the study relied on self-report and therefore only captures information that participants were willing to report, which may not represent true beliefs and behaviour in relation to drug use [50]. However, it has been suggested that self-report can be reliable provided that the information is known to respondents and that the questions are unambiguous, relate to recent activity, require a serious and thoughtful response, and will not lead to embarrassing or threatening disclosures [51, 52]. We believe these conditions were met in the current study because, although drug use could represent an embarrassing or threatening disclosure, the anonymity of the survey would have reduced this risk. Secondly, whilst the overall sample size was sufficient for statistical analysis, the study used a sample from one institution and students self-selected to participate, meaning that the results may not generalise to other populations. Thirdly, although our sample size allowed for very good statistical power to detect large effects, the study was not well powered to detect small ones (see S2 File). We may therefore have missed differences between CE users and non-users that, while relatively small, may nonetheless be important for understanding how beliefs and knowledge relate to CE use. For example, whilst no statistically significant differences in knowledge were found between users and non-users, the 95% CIs for these effects imply that a 10% difference in knowledge is plausible for knowledge of safety (non-users better) or side effects (users better). We therefore recommend that future studies using similar methods to ours take our sample size (~150) as a minimum target. Relatedly, we note that the one analysis where we did not achieve 80% power to detect an effect of this size was the knowledge scale with the lowest degree of precision (5 questions, generating 6 levels of performance). Having fewer questions for this measure than for the others may have reduced its signal-to-noise ratio, and reduced power accordingly. Therefore, future studies should, if feasible, aim for a more fine-grained measure of knowledge. Fourthly, whilst this study considered the types of sources users and non-users consulted for safety information, we did not investigate this further in terms of the specific sources or the types of information extracted from them. This should be considered in future research. Finally, the current study focused on risks only and did not ask participants about the perceived benefits of using CE. Whilst this approach is supported by both theoretical models and applied health interventions [29, 53], future research could be expanded to consider benefits.

Conclusions

A growing body of evidence has shown that users of CE believe the drugs to be safer than non-users do [4, 12, 20] and that users more strongly believe that they know enough about the drugs to use them safely [12]. In the present study we have shown that the two groups consult similar sources of information when considering the safety of CE, indicating that differences in beliefs about safety are unlikely to be fully explained by the use of different sources of information. Furthermore, both groups held comparable and relatively poor knowledge of the safety of the most commonly reported CE, modafinil. Whilst for non-users there were correlations between safety beliefs and safety knowledge, no significant relationships existed for users. This indicates that users' stronger beliefs that CE are safe to use, and that they have sufficient safety knowledge to make judgements, are not based on more accurate safety knowledge. Given the lack of differences in sources of information and accuracy of knowledge, future research should consider the processes that mediate the relationship between evidence and beliefs, for example by examining the role of motivated reasoning in CE beliefs.

Supporting information

S1 File. Survey questions.

(DOCX)

S2 File. Understanding the relationship between safety beliefs and knowledge for cognitive enhancers in UK university students.

(DOCX)

Data Availability

The dataset supporting this research is openly available from the King's College London research data repository at https://doi.org/10.18742/RDM01-690.

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1. Hildt E. Cognitive enhancement – A critical look at the recent debate. In: Cognitive enhancement. Netherlands: Springer; 2013. pp. 1–14.
  • 2. Sahakian BJ, Morein-Zamir S. Pharmacological cognitive enhancement: treatment of neuropsychiatric disorders and lifestyle use by healthy people. Lancet Psychiat. 2015;2(4):357–62.
  • 3. Schelle KJ, Olthof BM, Reintjes W, Bundt C, Gusman-Vermeer J, van Mil AC. A survey of substance use for cognitive enhancement by university students in the Netherlands. Front Syst Neurosci. 2015;9:10. doi: 10.3389/fnsys.2015.00010
  • 4. Finger G, da Silva ER, Falavigna A. Use of methylphenidate among medical students: a systematic review. Rev Assoc Med Bras. 2013;59(3):285–9. doi: 10.1016/j.ramb.2012.10.007
  • 5. Repantis D, Schlattmann P, Laisney O, Heuser I. Modafinil and methylphenidate for neuroenhancement in healthy individuals: A systematic review. Pharmacol Res. 2010;62(3):187–206. doi: 10.1016/j.phrs.2010.04.002
  • 6. Greely H, Sahakian B, Harris J, Kessler RC, Gazzaniga M, Campbell P, et al. Towards responsible use of cognitive-enhancing drugs by the healthy. Nature. 2008;456:702–5.
  • 7. Sahakian B, Morein-Zamir S. Professor's little helper. Nature. 2007;450(7173):1157–9. doi: 10.1038/4501157a
  • 8. Smith ME, Farah MJ. Are prescription stimulants "smart pills"? The epidemiology and cognitive neuroscience of prescription stimulant use by normal healthy individuals. Psychol Bull. 2011;137(5):717–41. doi: 10.1037/a0023825
  • 9. Ott R, Biller-Andorno N. Neuroenhancement among Swiss students – A comparison of users and non-users. Pharmacopsychiatry. 2014;47(1):22–8. doi: 10.1055/s-0033-1358682
  • 10. Franke AG, Bonertz C, Christmann M, Huss M, Fellgiebel A, Hildt E, et al. Non-medical use of prescription stimulants and illicit use of stimulants for cognitive enhancement in pupils and students in Germany. Pharmacopsychiatry. 2011;44(2):60–6. doi: 10.1055/s-0030-1268417
  • 11. Castaldi S, Gelatti U, Orizio G, Hartung U, Moreno-Londono AM, Nobile M, et al. Use of cognitive enhancement medication among northern Italian university students. J Addict Med. 2012;6(2):112–7. doi: 10.1097/ADM.0b013e3182479584
  • 12. Champagne J, Gardner B, Dommett EJ. Modelling predictors of UK undergraduates’ attitudes towards smart drugs. Trends Neurosci Educ. 2019;14:33–9. doi: 10.1016/j.tine.2019.02.001
  • 13. Singh I, Bard I, Jackson J. Robust resilience and substantial interest: a survey of pharmacological cognitive enhancement among university students in the UK and Ireland. PLoS One. 2014;9(10):e105969. doi: 10.1371/journal.pone.0105969
  • 14. Holloway K, Bennett T. Prescription drug misuse among university staff and students: A survey of motives, nature and extent. Drug Educ Prev Polic. 2012;19(2):137–44.
  • 15. Forlini C, Racine E. Added stakeholders, added value(s) to the cognitive enhancement debate: Are academic discourse and professional policies sidestepping values of stakeholders? AJOB Primary Research. 2012;3(1):33–47.
  • 16. Forlini C, Racine E. Autonomy and coercion in academic "cognitive enhancement" using methylphenidate: Perspectives of key stakeholders. Neuroethics. 2009;2(3):163–77.
  • 17. Ram S, Hussainy S, Henning M, Stewart K, Jensen M, Russell B. Attitudes toward cognitive enhancer use among New Zealand tertiary students. Subst Use Misuse. 2017;52(11):1387–92. doi: 10.1080/10826084.2017.1281313
  • 18. Adamopoulos P, Ho H, Sykes G, Szekely P, Dommett EJ. Learning approaches and attitudes toward cognitive enhancers in UK university students. J Psychoactive Drugs. 2020:1–7. doi: 10.1080/02791072.2020.1742949
  • 19. Santoni de Sio F, Faber N, Savulescu J, Vincent N. Why less praise for enhanced performance? Moving beyond responsibility-shifting, authenticity, and cheating to a nature of activities approach. 2016. doi: 10.1007/s11948-015-9715-4
  • 20. Ilieva I, Boland J, Farah MJ. Objective and subjective cognitive enhancing effects of mixed amphetamine salts in healthy people. Neuropharmacology. 2013;64:496–505. doi: 10.1016/j.neuropharm.2012.07.021
  • 21. Eickenhorst P, Vitzthum K, Klapp BF, Groneberg D, Mache S. Neuroenhancement among German university students: Motives, expectations, and relationship with psychoactive lifestyle drugs. J Psychoactive Drugs. 2012;44(5):418–27. doi: 10.1080/02791072.2012.736845
  • 22. Sattler S, Forlini C, Racine E, Sauer C. Impact of contextual factors and substance characteristics on perspectives toward cognitive enhancement. PLoS One. 2013;8(8):e71452. doi: 10.1371/journal.pone.0071452
  • 23. Sattler S, Mehlkop G, Graeff P, Sauer C. Evaluating the drivers of and obstacles to the willingness to use cognitive enhancement drugs: the influence of drug characteristics, social environment, and personal characteristics. Subst Abuse Treat Prev Policy. 2014;9.
  • 24. Sattler S, Sauer C, Mehlkop G, Graeff P. The rationale for consuming cognitive enhancement drugs in university students and teachers. PLoS One. 2013;8(7):e68821. doi: 10.1371/journal.pone.0068821
  • 25. Sattler S, Wiegel C. Cognitive test anxiety and cognitive enhancement: The influence of students' worries on their use of performance-enhancing drugs. Subst Use Misuse. 2013;48(3):220–32. doi: 10.3109/10826084.2012.751426
  • 26. Sweeney S. The use of prescription drugs for academic performance enhancement in college aged students. Social Work Student Papers. 2010:48.
  • 27. Franke AG, Lieb K, Hildt E. What users think about the differences between caffeine and illicit/prescription stimulants for cognitive enhancement. PLoS One. 2012;7(6):e40047. doi: 10.1371/journal.pone.0040047
  • 28. Loewenstein GF, Weber EU, Hsee CK, Welch N. Risk as feelings. Psychol Bull. 2001;127(2):267–86. doi: 10.1037/0033-2909.127.2.267
  • 29. Baumeister RF, Bratslavsky E, Finkenauer C, Vohs KD. Bad is stronger than good. Rev Gen Psychol. 2001;5(4):323–70.
  • 30. Rozin P, Royzman EB. Negativity bias, negativity dominance, and contagion. Pers Soc Psychol Rev. 2001;5(4):296–320.
  • 31. Heard CL, Rakow T, Foulsham T. Understanding the effect of information presentation order and orientation on information search and treatment evaluation. Med Decis Making. 2018;38(6):646–57. doi: 10.1177/0272989X18785356
  • 32. Gamma A, Jerome L, Liechti ME, Sumnall HR. Is ecstasy perceived to be safe? A critical survey. Drug Alcohol Depend. 2005;77(2):185–93. doi: 10.1016/j.drugalcdep.2004.08.014
  • 33. Partridge B, Bell S, Lucke J, Hall W. Australian university students' attitudes towards the use of prescription stimulants as cognitive enhancers: perceived patterns of use, efficacy and safety. Drug Alcohol Rev. 2013;32(3):295–302. doi: 10.1111/dar.12005
  • 34. Banjo OC, Nadler R, Reiner PB. Physician attitudes towards pharmacological cognitive enhancement: safety concerns are paramount. PLoS One. 2010;5(12):e14322. doi: 10.1371/journal.pone.0014322
  • 35. Lenk C, Duttge G. Ethical and legal framework and regulation for off-label use: European perspective. Ther Clin Risk Manag. 2014;10:537–46. doi: 10.2147/TCRM.S40232
  • 36. Sheeran P, Harris PR, Epton T. Does heightening risk appraisals change people's intentions and behavior? A meta-analysis of experimental studies. Psychol Bull. 2014;140(2):511–43. doi: 10.1037/a0033065
  • 37. Giles J. Alertness drug arouses fears about 'lifestyle' misuse. Nature. 2005;436(7054):1076. doi: 10.1038/4361076b
  • 38. Wilens TE, Adler LA, Adams J, Sgambati S, Rotrosen J, Sawtelle R, et al. Misuse and diversion of stimulants prescribed for ADHD: A systematic review of the literature. J Am Acad Child Adolesc Psychiatry. 2008;47(1):21–31. doi: 10.1097/chi.0b013e31815a56f1
  • 39. Blais A-R, Weber EU. A domain-specific risk-taking (DOSPERT) scale for adult populations. Judgm Decis Mak. 2006;1(1):33–47.
  • 40. Festinger L. A theory of cognitive dissonance. Stanford, CA: Stanford University Press; 1957.
  • 41. Kunda Z. The case for motivated reasoning. Psychol Bull. 1990;108(3):480–98. doi: 10.1037/0033-2909.108.3.480
  • 42. Wertz JM, Sayette MA. A review of the effects of perceived drug use opportunity on self-reported urge. Exp Clin Psychopharmacol. 2001;9(1):3–13. doi: 10.1037/1064-1297.9.1.3
  • 43. Jenni K, Loewenstein G. Explaining the identifiable victim effect. J Risk Uncertain. 1997;14(3):235–57.
  • 44. Fagerlin A, Wang C, Ubel PA. Reducing the influence of anecdotal reasoning on people's health care decisions: is a picture worth a thousand statistics? Med Decis Making. 2005;25(4):398–405. doi: 10.1177/0272989X05278931
  • 45. Borgida E, Nisbett RE. The differential impact of abstract vs. concrete information on decisions. J Appl Soc Psychol. 1977;7(3):258–71.
  • 46. McCabe SE, Knight JR, Teter CJ, Wechsler H. Non-medical use of prescription stimulants among US college students: Prevalence and correlates from a national survey. Addiction. 2005;100(1):96–106. doi: 10.1111/j.1360-0443.2005.00944.x
  • 47. Knapton S. 'Smart drug' taken by one in four students really does boost performance. The Telegraph; 2015.
  • 48. Marsh S. Universities must do more to tackle use of smart drugs, say experts. The Guardian; 2017.
  • 49. Cadwalladr C. Students used to take drugs to get high. Now they take them to get higher grades. The Guardian; 2018.
  • 50. Davis CG, Thake J, Vilhena N. Social desirability biases in self-reported alcohol consumption and harms. Addict Behav. 2010;35(4):302–11. doi: 10.1016/j.addbeh.2009.11.001
  • 51. Kuh GD. The National Survey of Student Engagement: Conceptual framework and overview of psychometric properties. Bloomington, IN: Indiana University, Center for Postsecondary Research; 2001. Available from: http://nsse.iub.edu/pdf/psychometric_framework_2002.pdf
  • 52. Owston R, Lupshenyuk D, Wideman H. Lecture capture in large undergraduate classes: What is the impact on the teaching and learning environment? Online Submission. 2011.
  • 53. Bright GM. Abuse of medications employed for the treatment of ADHD: results from a large-scale community survey. Medscape J Med. 2008;10(5):111.

Decision Letter 0

Angel Blanch

6 May 2020

PONE-D-20-07694

Understanding the relationship between safety beliefs and knowledge for cognitive enhancers in UK university students

PLOS ONE

Dear Dr Dommett,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

It was very difficult to find reviewers willing to assess the manuscript. I was able, however, to collect feedback from a reviewer who provided what I consider as useful feedback to revise your manuscript. Please, see the comments at the bottom of this letter. Because this can be considered as a major review, please notice that a resubmission will require another round of reviews involving additional reviewers, and that the final outcome of this process is uncertain at this point.

We would appreciate receiving your revised manuscript by Jun 20 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Angel Blanch, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include a copy, in both the original language and English, as Supporting Information.

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This study examined how beliefs about the safety of cognitive enhancers (CEs), knowledge about their safety, and sources of relevant knowledge differed between users and non-users of CEs. One hundred forty-eight university students completed an online questionnaire. The results showed that 21% of the students had used CEs, and they highly evaluated the safety of CEs compared to non-users. On the other hand, sources of information on CE safety did not differ for users and non-users, suggesting that the higher safety levels perceived by users was not because information sources were different from those of non-users. There was also no significant difference between the groups regarding knowledge of CE safety. In addition, there was no significant correlation between safety beliefs and safety knowledge among CE users.

This study addresses an interesting issue, and the finding that the basis for the safety beliefs of CE users is weak has considerable social significance. However, there are concerns, mainly about the validity of the results. The authors should consider the following three issues and make the necessary corrections:

1) Since the findings of this study are based on null results, the interpretation of the results should be performed carefully. In this study, the sample of CE users was only 30 of 148 students who cooperated by completing the online survey. What is the statistical power in this case? One of the important findings of this study is that the number of sources and the degree of utilization of each source regarding CE safety knowledge did not differ between CE users and non-users. However, in some analyses, the p-value was shown to be close to a significant value. Therefore, there is the possibility that the p-value did not become significant because of insufficient statistical power. Thus, the evidence is too weak to conclude that there was no difference in the sources of safety knowledge for CE users and non-users. The result that there was no difference in safety knowledge between the groups should be also considered carefully. The authors should show the results of the power analysis to indicate that the size of the sample was sufficient for detecting a significant difference. Otherwise, data should be collected from more participants (especially CE users) based on sample size determination. The author stated in line 377 that “the overall sample size was sufficient for statistical analysis,” but the basis for this reasoning is unclear.

2) Although the authors aimed to compare CE users and non-users, there is no table or graph showing the data for each. Although the Results section explains whether the differences were significant, it is important to show which group scored higher. The authors should present data by group.

3) Human behavior is determined by considering not only risks but also benefits. However, in this study, there were three questions focused only on the risks associated with modafinil; they were included to examine participants’ knowledge regarding the safety of CEs. This approach was also used to examine beliefs about CEs. The authors should explain why they focused only on risks and not benefits.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Jan 28;16(1):e0244865. doi: 10.1371/journal.pone.0244865.r002

Author response to Decision Letter 0


10 Jun 2020

The information below is a copy of the Response to Reviewers document.

Response to Reviewers

Reviewer #1: This study examined how beliefs about the safety of cognitive enhancers (CEs), knowledge about their safety, and sources of relevant knowledge differed between users and non-users of CEs. One hundred forty-eight university students completed an online questionnaire. The results showed that 21% of the students had used CEs, and that users rated the safety of CEs more highly than non-users did. On the other hand, sources of information on CE safety did not differ between users and non-users, suggesting that the greater safety perceived by users was not because their information sources differed from those of non-users. There was also no significant difference between the groups regarding knowledge of CE safety. In addition, there was no significant correlation between safety beliefs and safety knowledge among CE users. This study addresses an interesting issue, and the finding that the basis for the safety beliefs of CE users is weak has considerable social significance. However, there are concerns, mainly about the validity of the results. The authors should consider the following three issues and make the necessary corrections:

1) Since the findings of this study are based on null results, the interpretation of the results should be performed carefully. In this study, the sample of CE users comprised only 30 of the 148 students who completed the online survey. What is the statistical power in this case? One of the important findings of this study is that the number of sources, and the degree of utilization of each source, regarding CE safety knowledge did not differ between CE users and non-users. However, in some analyses the p-value was close to the significance threshold. Therefore, it is possible that the p-value did not reach significance because of insufficient statistical power. Thus, the evidence is too weak to conclude that there was no difference in the sources of safety knowledge between CE users and non-users. The result that there was no difference in safety knowledge between the groups should also be considered carefully. The authors should show the results of a power analysis to indicate that the sample size was sufficient for detecting a significant difference. Otherwise, data should be collected from more participants (especially CE users) based on sample size determination. The authors stated in line 377 that “the overall sample size was sufficient for statistical analysis”, but the basis for this claim is unclear.

Thank you for this comment, and for the reminder to report and discuss the precision of our estimation in the manuscript. In keeping with current recommendations (Cumming, 2014), we now include confidence intervals for each of the mean differences between users and non-users that we report. This means that readers can see the likely limits of the effects that we report, and can judge whether any effects that significance tests may have failed to detect might be sufficiently large to warrant further investigation. You will see from the confidence intervals that our sample size of 148 allowed for reasonable precision in estimation, such that it is highly unlikely that we have failed to detect large effects. Additionally, we now acknowledge and discuss that we had limited power to detect small effects. In keeping with the reviewer's comment above, we do so in relation to the non-significant effect for which the p-value came closest to .05. We also discuss statistical power more generally in our limitations section and include recommendations for other researchers.
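As an illustration of the intervals referred to above, a 95% CI for a mean difference between two independent groups can be computed directly. The following is a minimal Python sketch, assuming numpy and scipy, using simulated scores rather than the study data:

# Minimal sketch: 95% CI for a difference between two independent
# group means, using a pooled standard error (simulated data only).
import numpy as np
from scipy import stats

def mean_diff_ci(a, b, confidence=0.95):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a.mean() - b.mean()
    df = len(a) + len(b) - 2
    # Pooled variance: assumes homogeneity of variance across groups.
    pooled_var = ((len(a) - 1) * a.var(ddof=1)
                  + (len(b) - 1) * b.var(ddof=1)) / df
    se = np.sqrt(pooled_var * (1.0 / len(a) + 1.0 / len(b)))
    half_width = stats.t.ppf((1 + confidence) / 2, df) * se
    return diff - half_width, diff + half_width

rng = np.random.default_rng(0)
users = rng.normal(3.4, 1.1, 30)        # hypothetical users group
non_users = rng.normal(3.0, 1.2, 118)   # hypothetical non-users group
print(mean_diff_ci(users, non_users))   # lower and upper 95% limits

The half-width of such an interval is the margin of error: the narrower the interval, the greater the power to detect a population effect of a given size, a link the authors make explicit later in this exchange.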

2) Although the authors aimed to compare CE users and non-users, there is no table or graph showing the data for each. Although the Results section explains whether the differences were significant, it is important to show which group scored higher. The authors should present data by group.

We have now replaced the original Figure 1 with a figure that divides the data by group. This provides by-group data for Aim 1. We have also added an additional figure (Figure 2) to provide by-group data relating to Aim 2. Aim 3 data are provided overall in a table, and correlations by group are given in the text. Therefore, by-group data are now presented for all aims.

3) Human behavior is determined by considering not only risks but also benefits. However, in this study, there were three questions focused only on the risks associated with modafinil; they were included to examine participants’ knowledge regarding the safety of CEs. This approach was also used to examine beliefs about CEs. The authors should explain why they focused only on risks and not benefits.

We have now included a paragraph within the introduction explaining our rationale for this approach. To briefly summarise here: when studying drug safety, including the safety of CE, the focus is typically on risks rather than benefits, and the benefits are not well understood, even in key populations. Additionally, theoretical models, supported by substantial experimental data, indicate that risks or potential losses weigh more heavily than benefits or gains, and that health behaviour interventions that modulate risk perception, rather than perceived benefits, are more likely to be effective. However, we have also noted in the discussion that this restriction to risks is a limitation of the current study.

Additional Requirements

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at:

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

We have made amendments to the manuscript to bring this in line with the requirements.

2. Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include a copy, in both the original language and English, as Supporting Information.

We have now provided a full copy of the survey used as supporting information and made this clear from the text.

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

This is still the case. Our institutional repository will only include data from published work. Therefore, on acceptance, we will deposit the data with them and provide the DOI.

Reference

Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25(1), 7–29. https://doi.org/10.1177/0956797613504966

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Angel Blanch

20 Jul 2020

PONE-D-20-07694R1

Understanding the relationship between safety beliefs and knowledge for cognitive enhancers in UK university students

PLOS ONE

Dear Dr. Dommett,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

This version of the manuscript has been re-evaluated by the same Reviewer who did the initial review. As you will see in the comments appended below, the Reviewer was unconvinced that the concerns raised initially were properly addressed and recommends rejecting the manuscript at this point. After my own reading of the manuscript, however, I do think that these concerns could be addressed in another revised version of the study.

Please submit your revised manuscript by Sep 03 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Angel Blanch, Ph.D.

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This study investigated the relationship between safety beliefs about the use of cognitive enhancers (CE), the sources of knowledge relating to their safety, and actual safety knowledge among users and non-users. Of the three points to which I drew attention in the first review, the second and third were appropriately addressed, but the first was only partially answered, leaving an inadequacy in this paper.

In the first review, I recommended that the authors either show, via a power analysis, that the present sample size was sufficient for detecting a significant difference, or collect additional data, particularly on CE users, based on sample size determination. However, they neither implemented my recommendations nor explained their rationale for not doing so; instead, they reported 95% confidence intervals (CIs) for the mean differences. While I agree that it is beneficial to report the 95% CI, this revision does not resolve the issue that I pointed out, since the CI is an index of the precision of the mean differences rather than an index of statistical power.

One of the main objectives of this study was to clarify whether there were differences in the sources of knowledge on the safety of CE between users and non-users. However, the results showed no significant differences between the two groups in the number of reported sources. Regarding this result, the authors argued that even if a difference actually existed in the number of sources between CE users and non-users, it could have been just one or two, based on the 95% CI of the mean difference (lines 328–333), which they did not consider a large difference. However, this interpretation is arbitrary because no basis is given for considering such a difference not to be large.

More importantly, the revised version of Figure 1 shows that, compared to CE non-users, users were more likely to use the experiences of peers and websites as sources of information on CE safety. The opposite trends can be seen for social media, NICE guidelines, and scientific research. The p-value of these differences was around 0.15 in several cases, which suggests some meaningful differences, although the effects are small. While this is an important trend relating to the objective of this study, the reliability of this difference is unclear because the statistical power was weak. Since the authors were unable to provide a clear answer relating to the study’s main objective, I feel that, as of now, this study is not publication-ready.

To overcome the above-mentioned ambiguity, the authors should carry out the research again and resubmit it after increasing the number of participants.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Jan 28;16(1):e0244865. doi: 10.1371/journal.pone.0244865.r004

Author response to Decision Letter 1


16 Oct 2020

Please see the Response to Reviewers document to see this in full (i.e., with the colour coding referred to):

Response to Reviewers

Reviewer #1: This study investigated the relationship between safety beliefs about the use of cognitive enhancers (CE), the sources of knowledge relating to their safety, and actual safety knowledge among users and non-users. Of the three points to which I drew attention in the first review, the second and third were appropriately addressed, but the first was only partially answered, leaving an inadequacy in this paper. In the first review, I recommended that the authors either show, via a power analysis, that the present sample size was sufficient for detecting a significant difference, or collect additional data, particularly on CE users, based on sample size determination. However, they neither implemented my recommendations nor explained their rationale for not doing so; instead, they reported 95% confidence intervals (CIs) for the mean differences. While I agree that it is beneficial to report the 95% CI, this revision does not resolve the issue that I pointed out, since the CI is an index of the precision of the mean differences rather than an index of statistical power.

One of the main objectives of this study was to clarify whether there were differences in the sources of knowledge on the safety of CE between users and non-users. However, the results showed no significant differences between the two groups in the number of reported sources. Regarding this result, the authors argued that even if a difference actually existed in the number of sources between CE users and non-users, it could have been just one or two, based on the 95% CI of the mean difference (lines 328–333), which they did not consider a large difference. However, this interpretation is arbitrary because no basis is given for considering such a difference not to be large.

More importantly, the revised version of Figure 1 shows that, compared to CE non-users, users were more likely to use the experiences of peers and websites as sources of information on CE safety. The opposite trends can be seen for social media, NICE guidelines, and scientific research. The p-value of these differences was around 0.15 in several cases, which suggests some meaningful differences, although the effects are small. While this is an important trend relating to the objective of this study, the reliability of this difference is unclear because the statistical power was weak. Since the authors were unable to provide a clear answer relating to the study’s main objective, I feel that, as of now, this study is not publication-ready. To overcome the above-mentioned ambiguity, the authors should carry out the research again and resubmit it after increasing the number of participants.

Response: We are pleased that the reviewer noted we had addressed two of their concerns with this revision. Regarding the concern about power, we have made some further amendments to the manuscript, but we respectfully disagree with the reviewer about this issue and explain our reasoning fully below.

In keeping with current recommendations for best practice (e.g., Cumming, 2014), we included confidence intervals for effects wherever it was straightforward to do so. There is a direct link between confidence intervals and power calculations because both provide information about possible effects in the population, and both rely on the standard error: the width of a confidence interval is determined by the standard error, as is the power to detect a population effect of a given size. Therefore, when, for example, an analysis has high power to detect a small effect, the confidence interval for that effect will be very narrow; when the power to detect a moderate or large effect is poor, the confidence interval will be very wide. To illustrate, suppose we have a confidence interval for a mean difference of width 9.99 units, and the true effect is 10.00 units. By definition of the CI, we expect 95/100 studies to have confidence intervals that include the true effect of 10.00. It follows from the interval width that those 95/100 studies will all return significant results, because each such confidence interval must exclude zero (an interval that is 9.99 wide cannot include both 0 and 10.00). The probability of obtaining a significant effect therefore exceeds 0.95, and it follows from the definition of statistical power that the power to detect this effect exceeds .95. We regret not making that link explicit in our previous version, but have done so in our current revision, as follows (pages 13 and 14; red font represents the main revision to this portion):

While the mean number of sources used was higher for users (M = 3.43, SD = 1.14) than non-users (M = 2.97, SD = 1.21), this difference was not statistically significant (t(145) = 1.91, p=0.058, 95% CI -0.95, 0.01) although the effect was small-to-medium in size (d=0.40). Thus, we did not detect a difference between users and non-users in the mean number of information sources used. An implication of the width of the 95% CI for this difference (width just below 1) is that our study had excellent statistical power (>.95) to detect a difference of 1 additional data source.

For none of these measures did the mean number correct differ significantly between users and non-users (Fig 2): Not Safe, t(146)=1.35, p=0.179, 95% CI -0.69, 0.13; Monitor, t(146)=0.72, p=0.471, 95% CI -0.72, 0.34; Side Effects, t(146)=1.38, p=0.171, 95% CI -0.30, 1.68. The confidence intervals for these mean differences are for scales with a maximum possible score of 5, 9 and 15, respectively, and therefore the 95% CIs span 19.8%, 11.8% and 13.2% of each scale-range, respectively. These spans represent population effects for mean differences for which our sample had excellent statistical power (>.95), while for effects which might be considered small (differences in the region of 6% to 10% of the scale-range) our CIs imply modest statistical power (≈.50).

Notice that in the second extract we have expressed the potential effect as a proportion of the scale range. In doing so, we align with the reviewer’s comment that any designation of an effect as ‘small’, ‘medium’ or ‘large’ is either conventional or arbitrary. This way of describing the potential effect is much more conducive to readers exercising their own judgment in these matters. Personally, we would regard differences of 15-20% of these scales as important, and differences of around 6-10% as modest. However, this way of presenting the information allows readers to exercise their own judgment.

We respectfully disagree with the reviewer’s view that p-values around 0.15 might be taken as evidence of an effect. We prefer to adopt a more conservative and conventional interpretation of p-values, which is that we simply should not speculate about the possible presence or direction of an effect in such circumstances. This is very much in keeping with the weight of reviewer opinion in our experience, wherein we have regularly seen authors chastised by reviewers for describing effects with p = .07 or p = .08 as ‘approaching significance’ or ‘marginally significant’. We may indeed have missed a small effect, but p ≈ .15 cannot be taken as evidence that we should be confident in the direction of any such effect, should it exist. This conservatism is in line with current concerns about reproducibility and replication in psychological science, where many effects that were identified with p = .03 or .04 have subsequently been found to be non-replicable (e.g., Munafo, Nosek, Bishop et al., 2017). Perhaps also relevant to mention here, the (non-significant) mean difference of around one-half of a source is entirely consistent with the significant difference that we do find and report, whereby approximately an additional 50% of CE users use personal information as a source of information.

Related to these issues, the reviewer argues that we have not met one of the main objectives of our study: “One of the main objectives of this study was to clarify whether there were differences in the sources of knowledge on the safety of CE between its users and non-users.” This does not quite match how we expressed our objective concerning sources of knowledge. We said that we aimed to compare sources of information between users and non-users, and demonstrably we have done this. Additionally, the reviewer seems to interpret our discussion to mean that the true difference in the number of information sources could have been one or two. However, we said that it is unlikely to be as large as one or two. We thought we had been clear in what we had written; nonetheless, we hope the additional comment in the Results about this CI (see above) provides further clarity.
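To make the link between interval width and power concrete, the worked example above (a CI of width 9.99 units with a true effect of 10.00 units) can be checked by simulation. The following is a minimal Python sketch, assuming numpy and using a normal approximation to the sampling distribution:

# Simulation of the worked example: with a 95% CI of width 9.99 and a
# true effect of 10.00, almost every replication excludes zero.
import numpy as np

rng = np.random.default_rng(42)
true_effect = 10.00
se = 9.99 / (2 * 1.96)                      # SE implied by the CI width
estimates = rng.normal(true_effect, se, 100_000)

lower = estimates - 1.96 * se
upper = estimates + 1.96 * se
coverage = np.mean((lower <= true_effect) & (true_effect <= upper))
power = np.mean((lower > 0) | (upper < 0))  # CI excludes zero

print(round(coverage, 3))   # close to 0.95, by construction
print(round(power, 3))      # exceeds 0.95, as argued above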

Cumming, G. (2014). The new statistics: why and how. Psychological Science, 25(1), 7–29. https://doi.org/10.1177/0956797613504966

Munafo, M. R., Nosek, B. A., Bishop, D. V. M., et al. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1, 0021. https://doi.org/10.1038/s41562-016-0021

Attachment

Submitted filename: Response to Reviewers Round 2.docx

Decision Letter 2

Angel Blanch

30 Oct 2020

PONE-D-20-07694R2

Understanding the relationship between safety beliefs and knowledge for cognitive enhancers in UK university students

PLOS ONE

Dear Dr. Dommett,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

This version of the manuscript has been evaluated by a new Reviewer (#2), who provided some further suggestions. Please see the specific comments at the bottom of this letter. As you will see, there were some major concerns with the statistical approach used to analyze the data. These concerns should be addressed in another revised version of your study.

Please submit your revised manuscript by Dec 14 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Angel Blanch, Ph.D.

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: The present questionnaire-based study compared users and non-users of cognitive enhancers (CE) in terms of their knowledge about CE use, the accuracy of that knowledge, and their safety beliefs.

The introduction is written in a comprehensive manner, citing relevant literature, and the methods are well described. The issues that threaten the quality of the study are found only in the applied statistical procedures.

Major issues:

The analyzed sample consisted of 148 participants, of which 21% identified themselves as CE users. Although this may be representative of the studied population, there is a disproportion in the size of the compared groups. This raises concerns about the homogeneity of variance across the compared groups, which is an important assumption of the ANOVAs used.

The manuscript should therefore explicitly state how the authors dealt with this limitation, and whether and why the statistical methods used are appropriate.

The different group sizes are also related to another, previously discussed, issue: statistical power. When I used the widely recommended (e.g., Cumming, 2014) G*Power software (available at tiny.cc/gpower3) to calculate the power of the study given the reported data, the power was much lower than the value reported in the manuscript (0.95). Please note that I do not recommend post hoc power calculations. However, it would considerably increase the quality of the manuscript if the authors included information about the probability of finding the predicted difference, given the study parameters, that is not based only on the confidence intervals.

Minor issues:

There are a few cases of misplaced punctuation and inconsistency in citation style.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Hana H. Kutlikova

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Jan 28;16(1):e0244865. doi: 10.1371/journal.pone.0244865.r006

Author response to Decision Letter 2


14 Dec 2020

Note that this is clearer in the attachment because of the statistics, but we have copied it here as well.

Thank you for the review of the previous version of our manuscript. We have now fully revised our manuscript in line with the reviewer comments. In tandem with these changes to the main manuscript, we now provide additional details of power calculations and statistical analyses in Supporting Information (S2). Please see below for our response to each reviewer point.

1. The analyzed sample consisted of 148 participants, of which 21% identified themselves as CE users. Although this may be representative of the studied population, there is a disproportion in the size of the compared groups. This raises concerns about the homogeneity of variance across the compared groups, which is an important assumption of the ANOVAs used. The manuscript should therefore explicitly state how the authors dealt with this limitation, and whether and why the statistical methods used are appropriate.

Thank you for this comment. Our Data Analysis section now confirms explicitly that equal variances were assumed for these analyses, and points readers to our new Supporting Information (S2), which reports a robustness check of each between-subjects test of mean differences that does not assume equal variances:

For these and other between-subjects tests of mean difference we assume homogeneity of variance in the population. Analyses reported in the Supporting Information (S2) show that no conclusions change if this assumption is not made.

The Supporting Information (S2) explains that the assumption of equal variances in the population is reasonable because we had no a priori reason to depart from this assumption, and because the observed differences between groups in sample variance were small. Nonetheless, the point is well made that any issues arising from heterogeneity of variance are exacerbated when the sample sizes are unequal. Therefore, we think it prudent to include analyses in our Supporting Information (S2) demonstrating that tests assuming unequal variances yield the same conclusions as tests assuming equal variances.
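A robustness check of this kind is straightforward to reproduce. The following is a minimal Python sketch, assuming scipy, using scores simulated from the group sizes, means and SDs reported for the number of information sources rather than the actual study data:

# Robustness check: Student's t-test (equal variances assumed) versus
# Welch's t-test (equal variances not assumed), on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
users = rng.normal(3.43, 1.14, 30)        # simulated smaller group
non_users = rng.normal(2.97, 1.21, 118)   # simulated larger group

t_student, p_student = stats.ttest_ind(users, non_users, equal_var=True)
t_welch, p_welch = stats.ttest_ind(users, non_users, equal_var=False)

# If both tests lead to the same conclusion, the homogeneity-of-variance
# assumption is not driving the result.
print(f"Student: t = {t_student:.2f}, p = {p_student:.3f}")
print(f"Welch:   t = {t_welch:.2f}, p = {p_welch:.3f}")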

2. The different group sizes are also related to another, previously discussed, issue: statistical power. When I used the widely recommended (e.g., Cumming, 2014) G*Power software (available at tiny.cc/gpower3) to calculate the power of the study given the reported data, the power was much lower than the value reported in the manuscript (0.95). Please note that I do not recommend post hoc power calculations. However, it would considerably increase the quality of the manuscript if the authors included information about the probability of finding the predicted difference, given the study parameters, that is not based only on the confidence intervals.

Thank you for suggesting that we use G*Power software. This has allowed us to provide details of statistical power in a format that will be more familiar to readers. We agree with the reviewer that there is limited value in a post hoc power calculation (e.g., one based on the observed effect size). We have therefore based our power calculations on the power to detect effects in the population that we would not want to miss. These were specified as unstandardised mean differences, because these have a meaningful and concrete interpretation in the context of our study (see Baguley, 2009). This required that we use sample data to estimate the standard deviation of scores in the population (as is almost always the case in the behavioural sciences). These calculations took account of the unequal sample sizes. Our Supporting Information (S2) describes this process in detail.

We have replaced the previous description of power in the Results with descriptions based on the power calculations in G*Power. These confirm that most analyses had very good statistical power to detect the effects that we specified for the calculations.

On page 13, we now write:

Using G*Power software, we determined that our sample size was sufficient for 0.95 power to detect a mean difference of one additional data source in the population using a two-tailed test (see Supporting Information (S2) for full details). Therefore, if a difference of that scale or larger were to exist in the population, it is highly unlikely that we would fail to detect a difference.

And on page 15:

We used G*Power software to determine the sample sizes required for 0.8 power to detect an absolute mean difference of 10% of the scale-range for each knowledge score. Our sample size exceeds the sample size required for this level of statistical power for the monitor and side effects scores, though an additional 56 participants would be required to achieve this degree of power for the not safe scale (see Supporting Information (S2)). As indicated in these additions, fuller details are given in the Supporting Information (S2).

We have also added this short comment to the Limitations sub-section (page 21) because our re-calculation of power highlighted a potential additional route to improve measurement precision and statistical power.

Relatedly, we note here that the one analysis where we did not achieve 80% power to detect a large effect was for the knowledge scale with the lowest degree of precision (5 questions, generating 6 levels of performance). Having fewer questions for this measure than for the others may have reduced its signal-to-noise ratio, and reduced power accordingly. Therefore, future studies should, if feasible, aim for a more fine-grained measure of knowledge.

For transparency, we have appended the protocols for each power analysis at the end of this letter.

3. There are a few cases of misplaced punctuation and inconsistency in citation style.

We have checked the manuscript for punctuation and style. Additionally, we have corrected one rounding error in the reporting of an effect size (d = 0.40 corrected to d = 0.39 on page 13).

Kind regards,

Eleanor Dommett and colleagues

Reference

Baguley, T. (2009). Standardized or simple effect size: What should be reported? British Journal of Psychology, 100(3), 603–617.

Protocols for power calculations using G*Power

1. Sample size required for 0.95 power to detect a mean difference of 1 additional information source (estimated standardised effect of d = 0.834)

[1] -- Monday, December 14, 2020 -- 12:52:07

t tests - Means: Difference between two independent means (two groups)

Analysis: A priori: Compute required sample size

Input: Tail(s) = Two

Effect size d = 0.834

α err prob = 0.05

Power (1-β err prob) = 0.95

Allocation ratio N2/N1 = 3.9

Output: Noncentrality parameter δ = 3.6466543

Critical t = 1.9806260

Df = 116

Sample size group 1 = 24

Sample size group 2 = 94

Total sample size = 118

Actual power = 0.9511731

2. Sample size required for 0.8 power to detect a mean difference in knowledge score (not safe) of 10% of the response scale (mean difference of 0.5, estimated standardised effect of d = 0.491).

[2] -- Monday, December 14, 2020 -- 13:03:29

t tests - Means: Difference between two independent means (two groups)

Analysis: A priori: Compute required sample size

Input: Tail(s) = Two

Effect size d = 0.491

α err prob = 0.05

Power (1-β err prob) = 0.80

Allocation ratio N2/N1 = 3.933

Output: Noncentrality parameter δ = 2.8102965

Critical t = 1.9717774

Df = 202

Sample size group 1 = 41

Sample size group 2 = 163

Total sample size = 204

Actual power = 0.7986900

3. Sample size required for 0.8 power to detect a mean difference in knowledge score (monitor) of 10% of the response scale (mean difference of 0.9, estimated standardised effect of d = 0.686).

[3] -- Monday, December 14, 2020 -- 13:05:00

t tests - Means: Difference between two independent means (two groups)

Analysis: A priori: Compute required sample size

Input: Tail(s) = Two

Effect size d = 0.686

α err prob = 0.05

Power (1-β err prob) = 0.8

Allocation ratio N2/N1 = 3.933

Output: Noncentrality parameter δ = 2.8150771

Critical t = 1.9830375

Df = 104

Sample size group 1 = 21

Sample size group 2 = 85

Total sample size = 106

Actual power = 0.7964569

4. Sample size required for 0.8 power to detect a mean difference in knowledge score (side effects) of 10% of the response scale (mean difference of 1.5, estimated standardised effect of d = 0.612).

[4] -- Monday, December 14, 2020 -- 13:06:57

t tests - Means: Difference between two independent means (two groups)

Analysis: A priori: Compute required sample size

Input: Tail(s) = Two

Effect size d = 0.612

α err prob = 0.05

Power (1-β err prob) = 0.8

Allocation ratio N2/N1 = 3.933

Output: Noncentrality parameter δ = 2.8362270

Critical t = 1.9783804

Df = 130

Sample size group 1 = 27

Sample size group 2 = 105

Total sample size = 132

Actual power = 0.8037949
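For readers without access to G*Power, protocol 1 can be reproduced approximately in Python. The following is a minimal sketch, assuming the statsmodels package; the other protocols follow the same pattern with their own effect sizes and allocation ratios:

# Approximate reproduction of protocol 1: a priori sample size for
# d = 0.834, alpha = .05, power = .95, allocation ratio N2/N1 = 3.9,
# two-tailed independent-samples t-test.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n1 = analysis.solve_power(effect_size=0.834, alpha=0.05, power=0.95,
                          ratio=3.9, alternative='two-sided')
n1 = math.ceil(n1)          # round up to whole participants
n2 = math.ceil(n1 * 3.9)
print(n1, n2, n1 + n2)      # expected: approximately 24, 94, 118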

Attachment

Submitted filename: Response to Reviewers Dec 20.docx

Decision Letter 3

Angel Blanch

18 Dec 2020

Understanding the relationship between safety beliefs and knowledge for cognitive enhancers in UK university students

PONE-D-20-07694R3

Dear Dr. Dommett,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Angel Blanch, Ph.D.

Academic Editor

PLOS ONE


Acceptance letter

Angel Blanch

20 Jan 2021

PONE-D-20-07694R3

Understanding the relationship between safety beliefs and knowledge for cognitive enhancers in UK university students

Dear Dr. Dommett:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Angel Blanch

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Survey questions.

    (DOCX)

    S2 File. Understanding the relationship between safety beliefs and knowledge for cognitive enhancers in UK university students.

    (DOCX)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers Round 2.docx

    Attachment

    Submitted filename: Response to Reviewers Dec 20.docx

    Data Availability Statement

    The dataset supporting this research is openly available from the King's College London research data repository at http://doi.org/doi:10.18742/RDM01-690.

