Author manuscript; available in PMC: 2017 Oct 1.
Published in final edited form as: J Empir Res Hum Res Ethics. 2016 Sep 19;11(4):291–298. doi: 10.1177/1556264616668974

Getting It ‘Right’: Ensuring Informed Consent for an Online Clinical Trial

Alinne Z Barrera 1,2, Laura B Dunn 3, Alexandra Nichols 1, Sonia Reardon 1, Ricardo F Muñoz 1,2,4
PMCID: PMC5334448  NIHMSID: NIHMS812640  PMID: 27630213

Abstract

Ethical principles in conducting technology-based research require effective and efficient methods of ensuring adequate informed consent. This study examined how well participants understood the informed consent form for an online postpartum depression trial. Pregnant women (N=1,179) who consented to the trial demonstrated an understanding of the purpose (86.1%) and procedures of the study (75.8%), as well as the minimal risks associated with answering sensitive questions online (79%). Almost all (99.6%) understood that psychological treatment was not offered. Relative to participants with a lifetime or no history of depression, participants with current depression were more likely to incorrectly indicate that participation would replace current psychological treatment (19.6% vs. 13.5% vs. 10.4%, respectively) and that there were no risks associated with participation (29.6% vs. 17.6% vs. 16.7%, respectively). Findings provide initial evidence that most individuals who seek online psychological resources are informed consumers.

Keywords: pregnancy, maternal, informed consent, clinical trials, online, Internet, research ethics, prevention, postpartum depression


The use of technology-based tools for the management of health-related issues and the delivery of psychological services has grown significantly in recent years. Researchers now depend on technology beyond the data analytic phase of investigations, incorporating such tools into the design, delivery, evaluation, and dissemination of evidence-based, fully digital interventions (Emery, 2014). As in other domains, the number of technology-based mental health applications, both private and public, is likely to continue growing rapidly.

Conducting research online offers investigators many benefits, such as the ability to include large, diverse samples of participants from multiple geographical regions at low cost. Technology also provides the opportunity to standardize experimental conditions, reducing concern over “drift” from intervention protocols and other types of human error and, because online trials are scalable, over inadequate statistical power. Furthermore, the speed with which participants can connect and comfortably complete research procedures may aid recruitment efforts, reduce barriers to examining sensitive issues, and extend reach to underserved and marginalized communities (Emery, 2014).

Despite the multiple strengths inherent in online research protocols and data collection, new and old methodological concerns remain. Sampling bias, data security, and concerns over the scientific validity of the study design and outcome data are just a few of the issues that have been raised (e.g., Emery, 2014; Kraut et al., 2004; Nosek, Banaji, & Greenwald, 2002; Warrell, 2014). Additionally, researchers who design and conduct internet-based research, and the institutional review boards (IRBs) that review these protocols, have struggled with how to operationalize ethical principles to provide adequate safeguards for study participants. Ethical principles in conducting internet-based research call for effective and efficient methods of ensuring adequate informed consent. However, it is not readily apparent how to determine whether participants have sufficient understanding of the purpose, voluntary nature, risks, and benefits of an internet-based study. End User Licensing Agreements (EULAs) and Terms of Service contracts are widely used to obtain consent for the use of technology and related applications (Luzak, 2014). However, most users indicate acceptance of these lengthy documents without reading them, a practice that may desensitize users to reading informed consent forms in health studies. Users may scroll to the bottom of the document and make a decision without having read or seriously considered the details of the agreement (Varnhagen et al., 2005). Furthermore, some worry that clinical populations with multiple potential vulnerabilities (e.g., low education or literacy, mental illness-related symptoms such as decreased motivation and impaired concentration) may more easily agree to participate without carefully weighing the consequences of doing so (Sieber, 2012). Another concern is that participants in online research may hold a “therapeutic misconception” (Lidz et al., 2015); that is, they may fail to grasp the investigative nature of the study (and research-specific procedures such as randomization), may believe that they are receiving treatment tailored to their individual needs, or may not realize that they may not personally benefit from participation.

Few online studies have employed the kinds of novel consent methods explored in this paper, which restructure how researchers obtain consent in the first place and screen participants’ understanding of the consent document (Nguyen et al., 2015). Rather than limiting the response options to “I agree” alone, researchers can provide options that demonstrate that agreeing is more involved than simply clicking or marking a checkbox. Documenting how well participants understand the details of participation through quizzes and feedback gives researchers the opportunity to clarify key misunderstandings or concerns about participating in a research investigation when a live person is not available to address these issues (Pace & Livingston, 2005; Sieber, 2012).
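To make this approach concrete, the following minimal Python sketch shows how a single consent query item might be scored and immediately followed by corrective feedback, shown regardless of the answer given. The option text, the feedback wording, and all class and function names are illustrative assumptions, not material from the trial described here.

```python
# Minimal sketch of a consent query item with corrective feedback.
# Item options, feedback text, and names are illustrative only.
from dataclasses import dataclass


@dataclass
class ConsentQueryItem:
    question: str
    options: dict      # option key -> option text shown to the participant
    correct_keys: set  # option keys counted as correct
    feedback: str      # corrective feedback shown to every respondent


def administer(item: ConsentQueryItem, selected: set) -> int:
    """Score the selected options and display feedback whether or not the answer was correct."""
    points = len(selected & item.correct_keys)  # one point per correct option chosen
    print(item.feedback)
    return points


purpose_item = ConsentQueryItem(
    question="Why are we doing this study?",
    options={"a": "To provide online psychotherapy",
             "b": "To test an online program designed to help prevent postpartum depression"},
    correct_keys={"b"},
    feedback="This study tests a prevention program; it does not provide psychotherapy.",
)
print(administer(purpose_item, selected={"b"}))  # scores 1 point
```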

Pregnant women, and potentially their unborn children, are considered a vulnerable population under the federal regulations that govern IRBs. Thus, given the growing interest in the development and dissemination of technology-based health resources, gaining a better understanding of how well pregnant women understand the consent form is of vital importance. Given the relatively limited online self-help resources available to perinatal women, especially those in developing nations, and the multiple vulnerabilities associated with this population (e.g., pregnancy-related symptoms of depression), this study examined whether participants would misunderstand the information detailed in the informed consent form of a randomized controlled trial (RCT). Among participants who agreed to participate in an online intervention for the prevention of postpartum depression, we examined three ethically relevant questions: What was the overall understanding of key aspects of research participation (e.g., risks, benefits, voluntary nature of research) among pregnant women agreeing to participate in an online trial? Would participants have a “therapeutic misconception” regarding research participation (e.g., would they believe that the study was aimed at providing prenatal care or psychotherapy)? Are there differences in response accuracy and in initial commitment to participate in the larger trial based on demographic and depression characteristics?

Methods

Data for this report were extracted from the baseline assessment of a larger trial that examined the efficacy of an online intervention for the prevention of postpartum depression (PPD; see Barrera, Wickham, & Muñoz, 2015). Participants were recruited online using sponsored links (see Barrera, Kelman, & Muñoz, 2014) and were directed to the online consent form if they were female, pregnant (at any stage), and over 18 years of age. To ensure recruitment of the target population, interest in using the study materials for personal use was also part of the eligibility criteria.

To consent to participate, participants were required to click “Yes, I am interested in participating in this study” and to “sign” by entering a unique password that was generated in real time. Additionally, participants were required to submit a valid email address to verify their identity. Once participants consented, they were directed to answer four questions to assess whether they understood the limits of participation as detailed in the consent form. Participants were encouraged to answer the questions to the best of their ability and were informed that their responses would not affect participation in the larger study. Regardless of how they responded, all participants were provided corrective feedback on each item before the next consent query item was presented. Demographic, depression, and pregnancy history questionnaire items followed and completed the baseline assessment.
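The consent gate described above can be summarized in the short sketch below. This is a hedged illustration rather than the trial’s actual code: the password format, the email pattern, and the function names are all assumptions.

```python
# Sketch of the consent gating logic: an explicit "Yes", a real-time signature
# password, and a valid email address are all required before the comprehension
# items are presented. Names and formats are illustrative assumptions.
import re
import secrets

EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


def generate_signature_password() -> str:
    """Generate a unique password in real time for the participant to 'sign' with."""
    return secrets.token_urlsafe(8)


def can_proceed_to_consent_quiz(clicked_yes: bool, entered_password: str,
                                issued_password: str, email: str) -> bool:
    """Return True only if all three consent requirements are satisfied."""
    return (clicked_yes
            and entered_password == issued_password
            and EMAIL_PATTERN.match(email) is not None)


issued = generate_signature_password()
if can_proceed_to_consent_quiz(True, issued, issued, "participant@example.com"):
    print("Present the four consent query items, each followed by corrective feedback.")
```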

Stratified randomization in the larger trial was conducted in real time. To be randomized, participants were required to complete sufficient items on the depression questionnaires and to click the “submit survey” button at the end of the baseline assessment. Failure to complete either step resulted in a participant not being randomized. All study procedures were reviewed and approved by the Institutional Review Boards of the University of California, San Francisco and Palo Alto University.
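As a sketch of how the real-time eligibility gate and stratified assignment described at the start of the preceding paragraph might look (the stratification variables and allocation scheme are not specified in this section, so the stratum label, the permuted-block approach, and the condition names below are assumptions):

```python
# Sketch only: real-time eligibility gate plus permuted-block assignment
# within a hypothetical stratum. Labels and scheme are assumptions.
import random
from collections import defaultdict

CONDITIONS = ["intervention", "control"]  # two-arm trial; labels assumed
BLOCK_SIZE = 4
_blocks = defaultdict(list)  # stratum -> remaining assignments in the current block


def eligible_for_randomization(completed_depression_items: bool, clicked_submit: bool) -> bool:
    # Both steps were required; failing either meant the participant was not randomized.
    return completed_depression_items and clicked_submit


def randomize(stratum: str) -> str:
    """Draw the next assignment for this stratum, refilling the block as needed."""
    if not _blocks[stratum]:
        block = CONDITIONS * (BLOCK_SIZE // len(CONDITIONS))
        random.shuffle(block)
        _blocks[stratum] = block
    return _blocks[stratum].pop()


if eligible_for_randomization(completed_depression_items=True, clicked_submit=True):
    print(randomize(stratum="current_mde"))
```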

Participants

Of the 2,960 pregnant women who entered data in the larger study between January and December 2009, the following participants were excluded from this report: 1,059 (35.8%) who immediately exited the study website without indicating consent; 325 (11%) with incomplete consent query data; and 397 (13.4%) with incomplete depression data. The remaining sample consisted of 1,179 participants, of whom 83% were randomized to one of the two conditions evaluated.

Measures

Demographic characteristics

Participants provided information on language, age, sex, ethnicity, race, marital status, education, employment and country of origin. Pregnancy history included number of weeks pregnant and number of prior pregnancies.

Assessment of understanding of informed consent

Four multiple-choice items were created to assess understanding of key aspects of informed consent specific to this trial and consistent with ethical guidelines. The four items (see Figure 1) inquired about participants’ understanding of the purpose of the study (Why are we doing this study?), the voluntary nature of research participation (Are you required to answer all questions presented to you?), potential risks (What are some possible side effects or risks associated with participating in this study?), and potential benefits of participation (What are some of the benefits of participating in this study?). Each correct answer choice was coded as 1 point. Because two items asked participants to “check all that apply,” total scores could range from 0 to 7. Regardless of how participants answered, the correct response for each item was presented immediately after they indicated their answer choice.
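The 0-7 total can be made concrete with the sketch below: one point per correct answer choice, with the risk item contributing up to two points and the benefit item up to three. The option labels are placeholders, not the actual response options shown in Figure 1.

```python
# Sketch of the 0-7 comprehension score: one point per correct answer choice.
# Option labels are placeholders; the actual options appear in Figure 1.
CORRECT_CHOICES = {
    "purpose": {"prevention_research"},                                      # 1 point possible
    "voluntary": {"not_required_to_answer_all"},                             # 1 point possible
    "risks": {"discomfort_answering_sensitive_items", "data_interception"},  # 2 points possible
    "benefits": {"benefit_1", "benefit_2", "benefit_3"},                     # 3 points possible
}  # maximum total = 7


def comprehension_score(responses: dict) -> int:
    """responses maps an item key to the set of options the participant selected."""
    return sum(len(responses.get(item, set()) & correct)
               for item, correct in CORRECT_CHOICES.items())


# A participant who identifies the purpose, the voluntary nature, one of the two
# risks, and two of the three benefits scores 5 of 7.
example = {
    "purpose": {"prevention_research"},
    "voluntary": {"not_required_to_answer_all"},
    "risks": {"discomfort_answering_sensitive_items"},
    "benefits": {"benefit_1", "benefit_2"},
}
print(comprehension_score(example))  # -> 5
```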

Figure 1. Informed Consent Understanding Assessment Items. Note: Correct responses in bold.

Mood Screener – Current/Lifetime Version

The Mood Screener assesses the presence of five or more of the nine Major Depressive Episode (MDE) symptoms experienced within a 2-week or longer period, either at any point during the respondent’s lifetime (past MDE) or during the past two weeks (current MDE; Miller & Muñoz, 2005, p. 141; Muñoz, 1998). MDE symptom criteria were consistent with DSM-IV-TR (APA, 2000) diagnostic criteria. To meet severity criteria for a positive MDE, symptoms must also interfere with daily activities “a lot.” The Mood Screener has demonstrated good psychometric properties relative to diagnostic screeners and clinical interviews with Spanish speakers (Muñoz et al., 1999; Vázquez et al., 2008).
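A minimal sketch of the screening rule described above (argument names are illustrative; the instrument’s actual item wording is not reproduced here):

```python
# Positive MDE screen: five or more of the nine symptoms, present for a
# two-week (or longer) period, plus interference with daily activities "a lot".
def meets_mde_criteria(num_symptoms_endorsed: int,
                       duration_two_weeks_or_more: bool,
                       interferes_a_lot: bool) -> bool:
    return (num_symptoms_endorsed >= 5
            and duration_two_weeks_or_more
            and interferes_a_lot)


# Example: six of nine symptoms for at least two weeks with marked interference.
print(meets_mde_criteria(6, True, True))  # True
```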

Center for Epidemiologic Studies-Depression Scale (CES-D)

The CES-D is a 20-item self-report instrument that assesses the presence of depressive symptoms during the past week (Radloff, 1977). Total scores range from 0 to 60, with higher scores indicating more severe depressive symptoms. A score of 16 or above has been designated as the cutoff for significant depressive symptoms (Weissman et al., 1977).
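As a simple illustration of the scoring rule (this sketch assumes the 20 item values are already coded 0-3, with any reverse-scored items handled beforehand):

```python
# CES-D total score and conventional cutoff. Assumes items are already coded 0-3.
CESD_CUTOFF = 16  # 16 or above flags significant depressive symptoms


def cesd_total(item_scores: list) -> int:
    if len(item_scores) != 20 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("CES-D requires 20 items coded 0-3")
    return sum(item_scores)  # possible range 0-60


scores = [1, 2, 0, 3, 1, 2, 1, 0, 2, 1, 1, 2, 0, 1, 2, 1, 0, 1, 2, 1]
total = cesd_total(scores)
print(total, total >= CESD_CUTOFF)
```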

Data Analysis

Descriptive statistics were calculated to examine participant characteristics and consent query accuracy. Group differences in understanding of the consent query items by MDE history and randomization status were compared using t tests and chi-square analyses. All data were analyzed using SPSS for Windows 20.0 (IBM Corp., Armonk, NY, USA).
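The analyses were conducted in SPSS; for readers who prefer open-source tools, the sketch below shows equivalent chi-square and t-test comparisons in Python using SciPy. The observed counts and group values are hypothetical placeholders, not the study’s data.

```python
# Equivalent open-source sketch of the chi-square and t-test comparisons.
# The observed counts and group values below are hypothetical placeholders.
from scipy.stats import chi2_contingency, ttest_ind

# Rows: no MDE history, lifetime MDE, current MDE; columns: correct, incorrect.
observed = [[500, 60],
            [120, 20],
            [160, 40]]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")

# Comparison of a continuous variable (e.g., age) between randomized and
# non-randomized participants.
randomized_ages = [27, 29, 31, 25, 28, 30]
non_randomized_ages = [26, 24, 27, 25, 26, 23]
t, p_t = ttest_ind(randomized_ages, non_randomized_ages)
print(f"t = {t:.2f}, p = {p_t:.3f}")
```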

Results

Participant Characteristics

Table 1 provides demographic characteristics of the 1,179 women included in this report. Participants’ mean age was 27.6 years (SD = 5.6 years); approximately half (52.5%) were married or living with a partner; the majority (81.5%) had a university-level education or higher; and 60% were not currently employed. A majority identified their ethnic background as Latina/Hispanic (81.9%) and completed the study in Spanish (86.6%).

Table 1.

Demographic characteristics of pregnant women (N=1,179)

Variable                                    All (N=1,179), % or M (SD)
Age, M (SD)                                 27.6 (5.6)
Spanish-speaking                            86.6
Race (n=976)
  African                                   3.2
  Alaska Native/Native American             4.5
  Asian                                     4.1
  European/white                            30.9
  Mestiza                                   41.4
  Mixed/Other                               15.9
Ethnicity (n=1,024)
  Latina/Hispanic                           81.9
Top 5 countries of residence (n=1,167)
  Mexico                                    18.8
  Chile                                     14.6
  Venezuela                                 10.5
  Colombia                                  9.5
  Spain                                     9.5
Marital status (n=1,175)
  Married/live with partner                 52.5
  Single                                    39.7
  Separated/divorced/widowed                7.7
Education (n=1,035)
  1-12 years                                18.6
  University level                          72.3
  Advanced degree                           9.2
Employment (n=1,171)
  Employed full- or part-time               40.0
  Not employed                              60.0
Annual income (US dollars)
  Don’t know/did not answer                 39.9
  ≤ $10,000                                 38.3
  > $10,001                                 21.8
First pregnancy (n=1,164)                   58.0
Weeks pregnant, M (SD) (n=1,176)            16.8 (9.7)
CES-D score, M (SD) (n=1,008)               27.2 (14.0)
MDE history
  None                                      70.7
  Lifetime                                  12.1
  Current                                   17.2

General Understanding of Consent Procedures

The average number of correct responses across all participants was 4.75 (SD=1.21, range=0-7). Most participants correctly understood why the study was being conducted (86.1%) and that they did not have to answer every question during the study (75.8%). Potential risks of participation were correctly identified in full by 56% (i.e., both risk responses) and in part by 23% (i.e., one of the two risk responses), while 21% incorrectly indicated that there were no risks associated with participation. With regard to the potential benefits of research participation, 18.7% correctly identified all three benefits that were listed and 74.6% identified two of the three; few (0.4%) participants indicated that the trial would provide online treatment (an incorrect response).

Response Accuracy Based on Participant Characteristics and Commitment to Participate

An examination of participant responses (correct vs. incorrect) revealed some significant differences based on MDE history (none, lifetime, current). Participants meeting criteria for a current MDE were more likely to incorrectly identify the purpose of the study as replacing current psychological treatment than those with a lifetime or no MDE history (19.6% vs. 13.5% vs. 10.4%, respectively; χ2(2) = 11.59, p=.003). Similarly, those endorsing a current MDE were more likely to indicate that there were no risks associated with participation (29.6%) than those with a lifetime or no history of MDE (17.6% and 16.7%, respectively; χ2(4) = 11.69, p=.02).

A comparison of demographic and depression characteristics based on whether participants completed the baseline assessment and were randomized, a proxy measure for participation commitment (n=978 randomized, n=201 not randomized), revealed that randomized participants tended to be older (M=27.8, SD=5.6 vs. M=26.4, SD=5.5; p=.002), were more likely to be employed (61.3% vs. 53.8%, p=.05), and reported higher rates of lifetime (12.4% vs. 10.9%, p=.02) and current MDE (18.5% vs. 10.9%, p=.02). There were no significant differences in language, ethnicity, race, marital status, education, or self-reported depression scores.

Compared to those who were not randomized, a higher proportion of randomized participants answered each of the four informed consent items correctly. Most randomized participants (87.7%), compared to 78.1% of non-randomized participants, correctly indicated that participation in the trial would not replace their health providers (p<.001), and more correctly indicated that they did not have to answer every item (77.5% vs. 67.7%, p=.003). In terms of the risks of participation, 18.8% of randomized participants (vs. 31.8% of non-randomized) indicated that there were no risks associated with participation, while 25.1% of randomized participants indicated full understanding of the multiple risks of participation (compared to 12.9% of non-randomized; p<.0001). The benefits of participation as outlined in the consent form were understood by almost all participants, with only a small proportion (1.0% of non-randomized vs. 0.3% of randomized participants) incorrectly stating that the benefit was to receive online treatment. The proportion who fully understood the benefits (i.e., identified all rather than only some of the correct responses) was nearly three times higher among randomized than non-randomized participants (21% vs. 7.5%, p<.0001).

Discussion

Although much has been written about the need to adhere to ethical principles that guide informed consent when engaging in technology-based data collection, few studies to date have examined participants’ real-time understanding of consent for participation in such research protocols. The present study, which examined understanding of informed consent among pregnant women agreeing to participate in an online depression prevention trial, suggests that while many participants had a good overall understanding, there were also several areas of potential concern.

The majority of women included in this study accurately identified the purpose of the online trial. Almost one-quarter of the women incorrectly answered the second item (i.e., “Are you required to answer all questions presented to you?”). Because of the wording of this item, it is not clear whether incorrect responses represented a failure to appreciate that study participation was voluntary or the influence of factors such as social desirability (i.e., believing that they “should” respond that they needed to answer every question during the study). This is an important distinction to clarify for potential participants, given that some research investigations may involve procedures that are uncomfortable or aversive to those who agree to participate, such as discussing personal information that may evoke a negative emotional reaction or undergoing invasive medical procedures. For participants in this study, recounting their depression history may have been a sensitive and emotionally difficult personal experience.

Pregnant women were able to accurately identify all or some of the benefits of participation. Although they received corrective feedback when their understanding of the consent form was incomplete or inaccurate, future studies need to determine why participants are unable to identify all actual benefits and how this level of understanding impacts participation or engagement in research trials. For example, comprehension of the benefits of participation may be an artifact of the language used, translations, or the non-material nature of the benefits to participants (e.g., better transition to motherhood).

Making an informed decision to consent to participate in any kind of treatment or study also requires an adequate understanding of the risks of participation. Most women understood some or all of the risks associated with participation. In general, the risks of participation were minimal and familiar (e.g., interception of data) and, in combination, may have been perceived as the usual risks of participating in an online program. Regardless, given that one in five consenting participants did not consider there to be any risks, researchers who rely on automated consent procedures should emphasize the nature of the risks of participation, especially when those risks have the potential to do harm. The simple procedure of testing participants’ comprehension of risks and benefits, and of providing feedback, is key to protecting this and other vulnerable populations (Barchard & Williams, 2008).

A major finding from this study was that few consenting participants were under the impression that the online trial would provide an opportunity to obtain treatment. This is of great importance given the detrimental impact of untreated maternal depression and the fact that therapeutic misconception has been cited as an issue in perinatal research (Brandon, Shivakumar, Lee, Inrig, & Sadler, 2009). In fact, concerns about this very issue were raised by the original IRB committee and motivated the researchers to ensure that participants fully understood the boundaries of participation and the distinction between the research study and standard clinical treatment. The present finding should provide some reassurance to IRBs and researchers concerned about therapeutic misconception in online behavioral health interventions.

We want to call attention to two novel findings. First, given the prevalence of depression among those seeking online psychological resources and the risk for depression among perinatal women, we examined how well participants understood the information presented in the online consent form within the context of their depression history. Women meeting criteria for a current MDE were more likely to indicate that there were no risks associated with participation and endorsed, at a higher rate, the idea that participation would include access to online treatment. To a lesser degree, women with a lifetime history of MDE responded in a similar pattern. Thus, understanding of the consent form appears to be affected by the recency of a woman’s depression history. Given the already vulnerable nature of this population, it is concerning that these women may have been more prone to misunderstand details of the consent form in a way that left the impression that online treatment would be provided. Alternatively, it is possible that women who were depressed, or who had recently been depressed, may have been more desperately seeking or in need of treatment and hopeful that this trial would be a viable option for them.

Second, it is unclear whether randomized participants, who responded to the consent items more accurately, were more committed to the study from the very beginning, read and answered the study questions more carefully, or felt higher interest in completing the questions accurately because they were already committed to being a part of the larger trial. Either way, these data suggest that motivation and drive to participate in the activity described in an informed consent form, whether treatment or research participation, may influence how carefully one considers the decision to provide consent. To our knowledge, this is a novel finding in the context of online clinical trials.

There are several limitations that need to be considered when interpreting the findings of this study. First, the consent query questions were limited to four multiple-choice items. Our insight into the depth of participants’ understanding of the informed consent form is therefore limited to the constructs and content of each question. We were unable to ask follow-up questions to probe the source or level of any misunderstanding. It is thus possible that participants misunderstood the question wording or misread items, rather than misunderstanding the specific aspect of the research protocol. Conversely, participants may have misunderstood other aspects of the research that we did not assess. However, we opted for a brief assessment given the need to limit any additional burden on participants. Finally, findings from this study should be considered within the context of the global reach and diversity (especially ethnic/racial identity and language) of the sample, which are also major strengths of this study given the limited availability of global maternal mental health research.

In conclusion, we recommend that researchers conducting innovative, technology-based clinical research incorporate measures to assess for participant understanding of key elements of informed consent. As online research applications continue to expand, researchers should continue to investigate how well participants understand online consent forms and should seek to develop effective methods to enhance this understanding. As potential research volunteers continue to gain familiarity and comfort with the mechanisms of online data collection and participation in technology-based programs, investigators and clinicians must ensure that ethical safeguards for participants—including effective informed consent procedures—continue to be upheld.

Best Practices

Ethical principles of human subjects research call for clear and detailed descriptions of participation, which are then approved by committees of experts whose primary aim is to ensure the implementation of these standards. Despite these efforts, it is evident that users will, on occasion, misinterpret or misunderstand the details of participating in research investigations. Additionally, exposure to End User Licensing Agreements and Terms of Service contracts is a ubiquitous part of accessing technology-based tools, and these access mechanisms can present unique challenges to safeguarding the ethical standards of informed consent. Accurate understanding and interpretation of consent to participate are particularly important when there is potential for negative consequences. Researchers and clinicians can assess these aspects of participation through simple and brief follow-up questions or quizzes of participants’ understanding. Prior health and mental health experiences should be considered as factors that may influence interpretation of the risks and benefits of participation. The level of commitment and motivation to engage with the tools being offered is an additional factor to consider when ensuring a full understanding of what it means to agree to use technology-based tools.

Research Agenda

To expand knowledge of ethical considerations in the use of technology-based tools, researchers and clinicians who implement Internet interventions could examine users’ understanding of informed consent as an added feature of their programs. In-depth qualitative analysis (i.e., beyond survey assessments) of areas of misunderstanding would also enhance awareness of how informed consent forms could be structured to increase clarity and understanding among users. Additional methods of assessing understanding should be examined and considered. Ethical concerns specific to different target populations and to the settings where technology-based tools are implemented could be further examined and integrated into informed consent procedures. Recommendations for how to word consent forms could be developed by Internet research organizations to provide “standard practice” models.

Educational Implications

This study suggests that Internet users who agree to participate in online programs of research generally do understand informed consent and appreciate the constraints that research participation places on clinical care. However, clinicians, researchers, and mental health staff who design, study, and implement technology-based tools should be instructed not to assume that participants will read or understand all components of the informed consent form. Training on sociocultural and environmental factors that may influence understanding of informed consent procedures should be considered as part of the design and implementation of Internet interventions and related technology tools.

Acknowledgments

Sources of support: National Institute of Mental Health (F32MH077371; PI: Barrera); Robert Wood Johnson Health Disparities Seed Grant (PI: N. Adler) to A.Z. Barrera

References

  1. American Psychiatric Association . Diagnostic and statistical manual of mental disorders. 4th. Author; Washington, DC: 2000. text rev. [Google Scholar]
  2. Barchard KA, Williams J. Practical advice for conducting ethical online experiments and questionnaires for United States psychologists. Behavior Research Methods. 2008;40(4):1111–1128. doi: 10.3758/BRM.40.4.1111. [DOI] [PubMed] [Google Scholar]
  3. Barrera AZ, Kelman AR, Muñoz RF. Keywords to recruit Spanish- and English-speaking participants: evidence from an online postpartum depression randomized controlled trial. Journal of Medical Internet Research. 2014;16(1):e6. doi: 10.2196/jmir.2999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Barrera AZ, Wickham RE, Muñoz RF. Online prevention of postpartum depression for Spanish- and English-speaking pregnant women: A pilot randomized controlled trial. Internet Interventions: The Application of Information Technology in Mental and Behavioural Health. 2015;2(3):257–265. doi: 10.1016/j.invent.2015.06.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Brandon AR, Shivakumar G, Lee SC, Inrig SJ, Sadler JZ. Ethical issues in perinatal mental health research. Current Opinion in Psychiatry. 2009;22(6):601–606. doi: 10.1097/YCO.0b013e3283318e6f. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Emery K. So you want to do an online study: Ethics considerations and lessons learned. Ethics & Behavior. 2014;24(4):293–303. [Google Scholar]
  7. Kraut R, Olson J, Banaji M, Bruckman A, Cohen J, Couper M. Psychological research online: Report of Board of Scientific Affairs' Advisory Group on the Conduct of Research on the Internet. The American Psychologist. 2004;59(2):105–117. doi: 10.1037/0003-066X.59.2.105. [DOI] [PubMed] [Google Scholar]
  8. Lidz CW, Albert K, Appelbaum P, Dunn LB, Overton E, Pivovarova E. Why is therapeutic misconception so prevalent. Cambridge Quarterly Of Healthcare Ethics: CQ: The International Journal of Healthcare Ethics Committees. 2015;24(2):231–241. doi: 10.1017/S096318011400053X. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Luzak J. Privacy Notice for Dummies? Towards European Guidelines on How to Give 'Clear and Comprehensive Information' on the Cookies' Use in Order to Protect the Internet Users' Right to Online Privacy. Journal of Consumer Policy. 2014;37(4):547–559. [Google Scholar]
  10. Miller W, Muñoz RF. Controlling your drinking. Guilford Press; New York: 2005. [Google Scholar]
  11. Muñoz RF. Preventing major depression by promoting emotion regulation: a conceptual framework and some practical tools. Int. J. Ment. Health Promot. 1998:23–40. Inaugural Issue. [Google Scholar]
  12. Muñoz RF, McQuaid JR, González GM, Dimas J, Rosales VA. Depression screening in a women's clinic: using automated Spanish- and English-language voice recognition. Journal of Consulting and Clinical Psychology. 1999;67(4):502–510. doi: 10.1037//0022-006x.67.4.502. [DOI] [PubMed] [Google Scholar]
  13. Nguyen Thanh T, Nguyen Tien H, Le Thi Bich T, Nguyen Phuoc L, Nguyen Thi Huyen T, Kenji H, Juntra K. Participants’ understanding of informed consent in clinical trials over three decades: systematic review and meta-analysis. Bulletin of the World Health Organization. 2015;93(3):186–198H. doi: 10.2471/BLT.14.141390. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Nosek BA, Banaji MR, Greenwald AG. E-Research: Ethics, security, design, and control in psychological research on the Internet. Journal of Social Issues. 2002;58(1):161. [Google Scholar]
  15. Pace LA, Livingston MM. Protecting human subjects in Internet research. Electronic Journal of Business Ethics & Organizational Studies. 2005;10:35–41. [Google Scholar]
  16. Sieber JE. Research with vulnerable populations. In: Knapp SJ, Gottlieb MC, Handelsman MM, VandeCreek LD, Knapp SJ, Gottlieb MC, et al., editors. APA handbook of ethics in psychology, Vol 2: Practice, teaching, and research. American Psychological Association; Washington, DC, US: 2012. pp. 371–384. [Google Scholar]
  17. Radloff LS. The CES-D scale: a self-report depression scale for research in the general population. Appl. Psychol. Meas. 1977;1:385–401. [Google Scholar]
  18. Varnhagen CK, Gushta M, Daniels J, Peters TC, Parmar N, et al. How informed is online informed consent? Ethics & Behavior. 2005;15(1):37–48. doi: 10.1207/s15327019eb1501_3. [DOI] [PubMed] [Google Scholar]
  19. Vázquez FL, Muñoz RF, Blanco V, López M. Validation of Muñoz's Mood Screener in a nonclinical Spanish population. European Journal of Psychological Assessment. 2008;24(1):57–64. [Google Scholar]
  20. Warrell JG, Jacobsen M. Internet Research Ethics and the Policy Gap for Ethical Practice in Online Research Settings. Canadian Journal of Higher Education. 2014;44(1):22–37. [Google Scholar]
  21. Weissman MM, Sholomskas D, Pottenger M, Prusoff BA, Locke BZ. Assessing depressive symptoms in five psychiatric populations: A validation study. American Journal of Epidemiology. 1977;106(3):203–14. doi: 10.1093/oxfordjournals.aje.a112455. [DOI] [PubMed] [Google Scholar]
