International Journal of Methods in Psychiatric Research. 2014 Feb 24;23(2):184–191. doi: 10.1002/mpr.1421

Recruitment of mental health survey participants using Internet advertising: content, characteristics and cost effectiveness

Philip J Batterham
PMCID: PMC6878492; PMID: 24615785

Abstract

Postal and telephone survey research is threatened by declining response rates and high cost. Online recruitment is becoming more popular, although there is little empirical evidence about its cost‐effectiveness or the representativeness of online samples. There is also limited research on optimal strategies for developing advertising content for online recruitment. The present study aimed to assess these aspects of online recruitment. Two mental health surveys used advertisements within a social network website (Facebook) to recruit adult Australian participants. The initial survey used advertisements linking directly to an external survey website, and recruited 1283 participants at $9.82 per completed survey. A subsequent survey used advertisements linking to a Facebook page that featured links to the external survey, recruiting 610 participants at $1.51 per completion. Both surveys were more cost‐effective than similar postal surveys conducted previously, which averaged $19.10 per completion. Online and postal surveys both had somewhat unrepresentative samples. However, online surveys tended to be more successful in recruiting hard‐to‐reach populations. Advertising using “problem” terminology was more effective than “positive” terminology, while there was no significant effect of altruistic versus self‐gain terminology. Online recruitment is efficient, flexible and cost‐effective, suggesting that online recruitment has considerable potential for specific research designs. Copyright © 2014 John Wiley & Sons, Ltd.

Keywords: recruitment, Internet, mental health, survey research, advertising

Introduction

Recruitment of community‐based participants for research purposes is becoming increasingly difficult. Refusal rates to household surveys have increased over the past 20 years (Brick and Williams, 2013). Furthermore, obtaining well‐identified samples that are representative of the population is limited by factors such as reductions in landline telephone connections (Blumberg and Luke, 2012), tighter privacy restrictions on access to administrative data (e.g. electoral roll data) and increasing costs of postage. Technologically‐based approaches to recruitment of community‐based samples are consequently becoming more common. These include recruitment through email address lists (e.g. Koo and Skinner, 2005), Internet research panels (e.g. Baker et al., 2003) and online advertising (e.g. Munoz et al., 2009; Ramo and Prochaska, 2012; Morgan et al., 2013). Each of these methods shows promise, although recruitment rates tend to be variable. However, there is very little research investigating whether participants recruited through the Internet are representative of the general population, relative to postal or phone assessments. Although Ramo and Prochaska (2012) and Morgan et al. (2013) found online advertising to be cost‐effective, at $4–12 per completed survey, there is a need for comparative data from traditional recruitment methods. Ramo and Prochaska (2012) also examined the effectiveness of specific advertising images. However, systematic content investigation that focuses on the framing of terminology is likely to result in more effective recruitment campaigns.

Community‐based samples are traditionally obtained by recruiting participants through the post or telephone calls. The costs of both methods can be considerable, in terms of both overheads (materials, postage, call centres) and personnel (logistical management of mail‐outs, processing of returned surveys, employment of telephone operators). These additional costs have tended to be justified by the representativeness of the resulting sample, particularly when names and addresses of potential respondents can be obtained from population registers such as electoral rolls. However, telephone and postal surveys are becoming more difficult and less representative, while the costs of online surveys are becoming negligible and survey software is becoming sophisticated and user‐friendly. There is also growing evidence that online data collection can be accurate, reliable, valid and representative (Meyerson and Tryon, 2003; Gosling et al., 2004; Chang and Krosnick, 2009). Online surveys do not, however, circumvent the problems of recruitment: the vast scale of the Internet can hinder the identification of well‐defined populations and promote skepticism about the veracity of information. It is therefore important to assess whether respondents recruited online are representative of the target population and, more importantly, whether surveys using online recruitment are less representative than similar surveys that recruit using traditional methods such as post or telephone.

The content of recruitment advertising can be classified in a number of ways. Two categorizations that have previously been useful in examining advertisements for volunteering behaviours are whether the advertisement wording is positive or negative, and whether it aims to motivate by promoting altruistic (communal) benefits or individual benefits. Previous research has found that egoistic advertising focusing on self‐gain is more effective than altruistic advertising for recruiting volunteers (Bennett and Kottasz, 2001). Focus on a negative outcome, or “loss” framing, has also been found to be more effective than positive “gain” framing (Lindenmeier, 2008). Previous research on online advertising for smoking cessation has demonstrated loss‐framed advertising to be more effective in attracting respondents than gain‐framed advertising (Graham et al., 2012). In the mental health context, negative or “loss” framing terminology may focus on “mental health problems” rather than “emotional well‐being”. Similarly, gaining feedback about personal health is one way to present an egoistic or “self‐gain” benefit, while a focus on participating in research to help others to improve their health would represent an altruistic benefit.

The present study aimed to: (i) test the cost‐effectiveness of online advertising for online survey recruitment relative to previously conducted postal surveys, (ii) examine whether the use of online recruitment also resulted in reduced representativeness, and (iii) systematically test whether the content of the advertising could be optimized to increase response rates and thereby reduce online recruitment costs. The social networking site Facebook, which provides a number of metrics to assess advertising performance, was used for the online advertising. The primary advantage of recruiting from social network sites is that advertising can be broadly targeted, without the keyword targeting required in search engine advertising (e.g. Google ads). A common metric of cost per completed survey was used for comparison with postal surveys. It was hypothesized that the costs of online surveys would be considerably lower than those of postal surveys, although the online sample was hypothesized to be less representative than the postal samples. Using a metric of the number of clicks on advertisement links per completed survey, the study examined two content attributes in a 2 × 2 factorial design: (i) use of problem (mental health problems) versus positive (emotional well‐being) terminology, and (ii) use of altruistic (help others) versus self‐gain (test yourself) terminology. Based on previous research on volunteering, it was hypothesized that problem framing (mental health problems) coupled with self‐gain terminology (self‐test) would have the highest response rates (Bennett and Kottasz, 2001; Lindenmeier, 2008).

Method

Participants and procedure

Participants were recruited for an online survey that aimed to assess new methods of mental health screening and examine risk factors for suicidal ideation. Facebook advertising was used during July 2012, with four advertisements presented for separate periods of approximately three days each. The maximum price per click was set at AU$1 and the target audience was adults who identified as aged ≥ 18 and living in Australia. The advertisements linked directly to an external survey website, which was hosted on a secure server at the Australian National University and used the open‐source Linux‐based software LimeSurvey, version 1.9. The survey took approximately 20–30 minutes and included online informed consent and service referral options in addition to the assessment content. No incentive or specific feedback was provided to participants. After the period of evaluating the advertisement content finished in July 2012, the advertisements with the lowest number of clicks per completion were used for ongoing recruitment until September 2012; a total of 1283 completed surveys and 610 incomplete surveys were recorded over that period. Human research ethics approval (protocol #2012/310) was obtained from the Science and Medical Delegated Ethics Review Committee at the Australian National University.

The four advertisements were developed based on the 2 × 2 factorial design of “problem” versus “positive” terminology, and “altruistic” versus “self‐gain” terminology. The four advertisements were:

  1. “Mental Health Survey: Volunteer 30 minutes of your time right now to help people with mental health problems” (“problem”, “altruistic”)

  2. “Mental Health Survey: Participate in a study examining your mental health by completing a 30 minute survey now” (“problem”, “self‐gain”)

  3. “Emotional Health Survey: Volunteer 30 minutes of your time right now to help others improve their wellbeing” (“positive”, “altruistic”)

  4. “Emotional Health Survey: Participate in a study examining your emotional wellbeing by completing a 30 minute survey” (“positive”, “self‐gain”).

The same image was used in the four advertisements, consisting of the logo of the Australian National University to provide legitimacy for the study.

To test an alternative recruitment method that aimed for greater cost‐effectiveness, a second survey was conducted in October–November 2012, with content similar to the original survey. The advertisement for the second survey linked to a Facebook “page” that encouraged “likes” and displayed prominent links to the external survey site. This survey targeted the same population (Australian adults aged ≥ 18) and used the same advertisement wording. However, pricing for internally‐linked advertisements tends to be cheaper than for externally‐linked advertisements, and Facebook targets such advertisements to individuals who are more likely to “like” the page.

Measures

The effectiveness of the content of the four advertisements was compared based on the likelihood that someone who clicked on an advertisement went on to complete a survey. This was assessed using the ratio of advertisement clicks (the number of times the link on the advertisement was clicked) to the number of completed surveys, over the period that the advertisement was displayed. The click‐through rates, that is, the percentage of people who clicked on an advertisement when it was displayed to them in Facebook, were also compared to identify whether any of the advertisements were more likely to be clicked. Among individuals who started the survey, the rate of survey completion was also compared across advertising conditions.
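For illustration, these metrics can be computed directly from raw advertisement counts. The following is a minimal Python sketch using the Table 1 figures for the first advertisement; the impression count is a hypothetical value for illustration only, as raw impressions were not reported:

```python
def ad_metrics(impressions: int, clicks: int, completions: int, incompletes: int) -> dict:
    """Compute the advertisement performance metrics described in the text."""
    return {
        "clicks_per_completion": clicks / completions,
        "completion_rate_pct": 100 * completions / clicks,
        "incomplete_rate_pct": 100 * incompletes / (completions + incompletes),
        "click_through_rate_pct": 100 * clicks / impressions,
    }

# Counts for the "problem"/"altruistic" advertisement (Table 1);
# impressions=1_800_000 is a hypothetical figure, not from the study.
print(ad_metrics(impressions=1_800_000, clicks=492, completions=51, incompletes=18))
```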

Cost‐effectiveness was based on the cost per completed survey across the entire recruitment period. This was calculated as the total spent on advertising divided by the number of completed surveys. This estimate was then compared to three previously‐conducted postal surveys, with the costs for these calculated as the sum of printing costs, postal costs and the cost to obtain names and addresses from the Australian Electoral Commission. To be conservative, data entry costs were not included in the costs of postal surveys, although these costs can be considerable. The staff time of preparing the surveys was also not accounted for, as preparation time may be similar for printed surveys (questionnaire layout and design) and online surveys (programming survey elements). The three comparison postal surveys were (i) a 20 minute cross‐sectional survey assessing prototypes to screen for mental health problems in the adult population of Sydney, Australia (Christensen et al., 2011), (ii) a 15 minute screening survey for a trial of online cognitive behavioural therapy for generalized anxiety disorder in the Australian population of 18 to 30 year‐olds in Sydney, Australia (Christensen et al., 2010), and, (iii) a 15 minute cross‐sectional survey testing a new measure of anxiety stigma in the adult population of Sydney, Australia, and nearby rural areas (Griffiths et al., 2011).
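As a worked example, the cost metric reduces to a simple division of total spend by completed surveys. The sketch below, under the same assumptions, reproduces the Table 2 figures (AU$) for three of the surveys:

```python
# Cost per completed survey: total advertising (or mail-out) spend
# divided by the number of completed surveys. Figures from Table 2 (AU$).
surveys = {
    "online, external link": (12_600, 1283),
    "online, internal page link": (920, 610),
    "postal, prototypes screening": (34_100, 2976),
}
for name, (cost, completions) in surveys.items():
    print(f"{name}: ${cost / completions:.2f} per completion")
# -> 9.82, 1.51 and 11.46, respectively
```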

The representativeness of the online survey participants was assessed based on self‐reported characteristics from each survey: age group, gender, education, and a range of mental health problems. Presence of major depression (past two weeks), panic disorder (past four weeks) and generalized anxiety disorder (past two weeks) were assessed using the Patient Health Questionnaire (Spitzer et al., 1999) and the Generalized Anxiety Disorder‐7 (Spitzer et al., 2006), scored using the specified algorithms to identify possible caseness. These scales have high precision in identifying the respective disorders relative to Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM‐IV) diagnoses in the general population. Selected characteristics were available from two of the postal surveys (Christensen et al., 2011; Griffiths et al., 2011), while data from the third survey were based on selective sampling of young adults and were not yet available (Christensen et al., 2010).

Analysis

Differences in content effectiveness were assessed using Fisher's exact test for the contingency table of completions and non‐completions (based on number of clicks) across the two content classifications (problem versus positive; altruistic versus self‐gain). Cost effectiveness was assessed descriptively by tabulating estimated costs per completed survey. Representativeness was assessed by comparing characteristics of the study sample to available national data. Specifically, demographic distributions were compared to the general Australian population using census data (Australian Bureau of Statistics, 2012a, 2012b) with differences assessed using Z tests. Rates of mental health problems were compared to those observed in the Australian National Survey of Mental Health and Well‐being (Australian Bureau of Statistics, 2008) using Z tests. Census data in Australia are collected using self‐report forms provided directly to households by field staff, while the National Survey data were collected using face‐to‐face interviews.
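For concreteness, both analyses can be sketched in Python with SciPy. The contingency counts below are pooled from Table 1; the Z test example compares a sample proportion against a national benchmark treated as a fixed population value, which is an assumption of this sketch rather than a description of the exact procedure used:

```python
from scipy.stats import fisher_exact, norm

# Fisher's exact test on completions vs. non-completions by framing,
# pooling the Table 1 counts: "problem" ads had 1000 clicks and 107
# completions; "positive" ads had 1065 clicks and 87 completions.
table = [[107, 1000 - 107],   # "problem": completions, non-completions
         [87, 1065 - 87]]     # "positive": completions, non-completions
odds_ratio, p_value = fisher_exact(table)

# One-sample Z test of a sample proportion against a population value,
# e.g. 25.1% depression caseness among n = 1283 online respondents vs.
# 4.1% in the National Survey (treated here as fixed, an assumption).
p_hat, p0, n = 0.251, 0.041, 1283
z = (p_hat - p0) / ((p0 * (1 - p0) / n) ** 0.5)
p_two_sided = 2 * norm.sf(abs(z))
print(p_value, z, p_two_sided)
```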

Results

Advertising content

Table 1 shows the click‐to‐completion ratios for each of the four advertisements, presented both as the number of clicks per completion and, inversely, as the percentage of clicks that resulted in a completed survey. The most effective advertisements were those that required the fewest clicks per survey completion. The “mental health problem” advertisements outperformed the “emotional well‐being” advertisements: based on Fisher's exact test, the completion rate was significantly higher for the “problem” (9.3 clicks/completion) than for the “positive” (13.2 clicks/completion) terminology (p = 0.013). However, there was no significant difference between the altruistic (11.4 clicks/completion) and self‐gain (10.2 clicks/completion) terminology (p = 0.436). The positive/altruistic advertisement, which had the greatest number of clicks per completion, also had the highest rate of surveys that were started but not completed. There were, however, no differences in the click‐through rates across the four advertisements, indicating comparable likelihoods that Facebook users would choose to click on each of the advertisements.

Table 1.

Survey completion outcomes for the four advertisements

| Content | Clicks | Completed surveys | Started but incomplete surveys | Incomplete rate (%) | Completion rate (%) | Clicks per completion | Click‐through rate (%) |
|---|---|---|---|---|---|---|---|
| “Problem”/“altruistic” | 492 | 51 | 18 | 26.1 | 10.4 | 9.6 | 0.027 |
| “Problem”/“self‐gain” | 508 | 56 | 26 | 31.7 | 11.0 | 9.1 | 0.026 |
| “Positive”/“altruistic” | 275 | 16 | 14 | 46.7 | 5.8 | 17.2 | 0.026 |
| “Positive”/“self‐gain” | 790 | 71 | 24 | 25.3 | 9.0 | 11.1 | 0.027 |

Cost effectiveness

Cost‐effectiveness data are presented in Table 2; all figures are given in Australian dollars. The cost of advertising for the first survey, which used a direct link to the survey page, was $12,600, at $9.82 per completed survey. The second survey linked to a Facebook page, which contained prominent links to the external survey; the mean cost of this advertising method was $1.51 per completed survey. Both of these methods compared favourably with previously conducted postal surveys, which had a mean cost per completed survey of $19.10. The cheapest postal survey was sent to adults in a Sydney, Australia electorate of high socio‐economic status. The most expensive postal survey was sent to young adults (18–30 years) in a range of electorates in Sydney, Australia. The cost‐effectiveness of the postal surveys was closely related to their response rates: 10.3% for the most expensive survey of young adults (Christensen et al., 2010), 12.4% for the stigma survey (Griffiths et al., 2011), and 21.3% for the relatively inexpensive prototypes survey (Christensen et al., 2011).

Table 2.

Cost effectiveness of the online survey relative to previous postal surveys

| Study | Cost of mail‐out/online advertising (AU$) | Completed surveys | Cost per completion (AU$) |
|---|---|---|---|
| Mental health screening with external link (present study; online) | 12,600 | 1283 | 9.82 |
| Mental health screening with internal page link (present study; online) | 920 | 610 | 1.51 |
| Prototypes screening survey (postal survey; Christensen et al., 2011) | 34,100 | 2976 | 11.46 |
| Generalized anxiety disorder trial screening survey (postal survey; Christensen et al., 2010) | 261,300 | 12,329 | 21.19 |
| Generalized anxiety disorder stigma survey (postal survey; Griffiths et al., 2011) | 8800 | 618 | 14.24 |

Representativeness of samples

Table 3 shows the characteristics of the two online survey samples, along with comparison characteristics for two of the postal surveys (Christensen et al., 2011; Griffiths et al., 2011) and population characteristics of Australian adults. All of the differences between survey samples and population statistics were statistically significant (p < 0.05) based on Z tests, except for the proportions of young adults and high school graduates in the online internally‐linked survey and the proportion of older adults in postal survey 2. The online surveys tended to have an overrepresentation of young adults, while this group was underrepresented in the postal surveys. There was a slight underrepresentation of older adults, particularly in the online internal survey; however, proportions of older adults were generally consistent across survey modes. Females were overrepresented in all surveys, particularly in the online internal survey, although the online external survey had the most gender‐balanced sample. Surveys also tended to be biased toward well‐educated participants and those who spoke only English at home, irrespective of mode, although these effects tended to be more pronounced in postal surveys. There were elevated rates of mental health problems in all survey samples, although this was most evident in the online surveys, particularly the survey using internal advertising.

Table 3.

Representativeness of samples based on selected characteristics

| Characteristic | Online survey, external link | Online survey, internal page link | Postal survey 1 | Postal survey 2 | Census data | National Survey data |
|---|---|---|---|---|---|---|
| Sample size | 1283 | 635 | 2976 | 618 | ~21.5 million | ~8880 |
| Young adult (18–24, %) | 32.9 | 10.3 | 7.0 | 7.3 | 9.8 | N/A |
| Older adult (≥ 60, %) | 16.4 | 12.6 | 16.5 | 20.3 | 19.6 | N/A |
| Gender (% female) | 57.5 | 83.3 | 60.8 | 62.2 | 50.3 | N/A |
| High school or greater education (%) | 85.4 | 77.2 | 93.3¹ | 79.1 | 74.2² | N/A |
| Speak language other than English at home (%) | 15.1 | 6.7 | 5.5§ | N/A | 19.3 | N/A |
| Major depression caseness (%) | 25.1 | 33.5 | 6.9 | 11.1 | N/A | 4.1 |
| Generalized anxiety disorder caseness (%) | 20.2 | 26.3 | 5.0 | 20.2 | N/A | 2.7 |
| Panic disorder caseness (%) | 13.6 | 22.2 | 1.4 | N/A | N/A | 2.6 |

Note: Postal survey 1 data from Christensen et al. (2011), using prototype measures projected to diagnostic criteria; ¹data available only from the 326 participants who completed a second‐phase assessment. Postal survey 2 data from Griffiths et al. (2011), using the Goldberg Depression and Anxiety Scales with a cutoff of six. July 2012 census data from Australian Bureau of Statistics (2012b); ²education data from Australian Bureau of Statistics (2012a). National Survey data from Australian Bureau of Statistics (2008). N/A, data not available.

Discussion

This study had three aims: testing the cost‐effectiveness, representativeness and content of online recruitment. Regarding the first aim, this study demonstrated that online advertising for survey recruitment tends to be considerably more cost‐effective than postal surveys. This finding may also translate to online surveys being more cost‐effective than telephone surveys, given the high personnel costs involved in phone surveys. In particular, advertisements that were internally linked within the social networking website were the most cost‐effective method of recruitment. This finding may partly reflect Facebook's pricing strategy and the way Facebook targets internally linked advertisements. However, it is also likely to be associated with the ability of Facebook users to share the internal Facebook page and survey link within their own social networks, at no additional cost. Advertisements linking to external sites do not encourage such sharing.

Nevertheless, cost‐effectiveness must be balanced against how representative the resulting sample is of the target population, which was the focus of the second aim of the study. The findings suggest that people with a mental health problem are more likely to complete mental health surveys than the remainder of the population. This difference appears to be amplified for online surveys, in which there was greater representation of people with mental health problems. Depending on the research design, such overrepresentation may or may not be problematic. For example, in developing measures to assess mental health it may be desirable, while in prevalence research such biases may be a critical flaw. Despite the reputation of Internet samples as unrepresentative (Schonlau et al., 2009), the online surveys tended to have similar representativeness to postal surveys with respect to age, gender, education and cultural diversity. Furthermore, online surveys may be particularly effective for accessing populations that have traditionally been hard to reach, with adequate representation of culturally diverse populations and high representation of young adults. Previous research using online recruitment has also noted this benefit in recruiting substance users (Ramo and Prochaska, 2012), young females (Fenner et al., 2012; Jones et al., 2012) and immigrants (Baltar and Brunet, 2012). Reasonable uptake of social networking among older adults was also reflected in the online samples.

The final aim of the study was to examine whether advertising content could be modified to optimize recruitment effectiveness. The content of the advertising appears to be a crucial aspect of cost‐effective recruitment. This study found that the term “mental health problems” was significantly more effective than “emotional well‐being” in attracting volunteers to complete a survey. One explanation for the difference in “positive” versus “problem” completion rates may lie in the correspondence between the advertisement and the survey title (“Assessing mental health”) and subsequent content. Alternatively, there may have been differences in comprehension: people may not have understood the meaning of emotional well‐being, or may have found the terminology less tangible than “mental health”. “Emotional well‐being” may also be less pertinent to individuals who have experienced mental health problems personally or in significant others. Nevertheless, the key recommendation from the analysis of advertising content is that messages should be closely aligned with the survey content, without masking or downplaying the survey material.

A secondary recommendation is that focusing on disorder, disease or, more generally, problems may be more effective than a positive focus on well‐being in attracting volunteers who are likely to complete a survey. This supports previous research indicating that “loss” framing (focus on a negative outcome) is more effective than positive “gain” framing (Lindenmeier, 2008; Graham et al., 2012). However, due to the brevity required by Facebook advertising, the wording of the present advertisements was not strictly defined in terms of “loss” and “gain”, particularly advertisement #2, which focused only on “mental health”. Future research could examine a range of other terms that place greater emphasis on poor mental health outcomes, such as “mental illness”. There were slightly higher levels of completion for the self‐gain advertisements than the altruistic advertisements. However, in contrast to previous research on egoistic versus altruistic advertising (Bennett and Kottasz, 2001), the difference between these two messages was not significant in terms of either completion rates or click‐through rates. The lack of difference may indicate that lengthier advertisements are required to find differences between the promotion of personal rather than communal benefits of voluntary survey completion.

While this study represents a timely and comprehensive examination of online recruitment, there were some limitations that could not be addressed by the current research design. Firstly, outcomes may be different for topics other than mental health and in research on other populations. Among other factors, stigmatizing attitudes in the community toward people with mental health problems (Corrigan et al., 2000; Crisp et al., 2000) are likely to have important influences on the type of people who volunteer for mental health surveys. Nevertheless, little research has examined the content of advertising for recruitment into research studies using the present methods. Secondly, the postal surveys used for comparison to the online survey were briefer than the online survey and used different measures. However, shorter surveys tend to result in higher response rates and therefore lower costs, such that the observed cost differences may be conservative estimates. The measures used in the various surveys presented were somewhat diverse and based on self‐report. Nevertheless, the patterns consistently indicated elevated rates of mental health problems in online and postal surveys, with these effects favouring postal surveys. Although there seems to be higher interest in mental health surveys among individuals with mental health problems, further research is required to gain a better understanding of why individuals volunteer to participate in such research.

Thirdly, some costs of the postal and online surveys were not included in the cost‐effectiveness analyses. Although the process of selecting survey items is similar regardless of modality, designing paper‐based surveys, applying for names and addresses from sources such as electoral commissions, and managing the mail‐out can make the postal process considerably lengthier. Conversely, online surveys require installation of survey software and maintenance of servers, leading to increased personnel costs, although free survey software was used in the present study and can be used for multiple studies. While advertising to an internal Facebook page was the most cost‐effective method, the development and maintenance of a separate page within Facebook required a little additional time. Fourthly, caseness for major depression, generalized anxiety disorder and panic disorder was determined on the basis of validated epidemiological scales in the online and mail surveys, while clinical diagnostic interviews were used in the National Survey of Mental Health and Well‐being; this may explain some of the discrepancy in prevalence data. Fifthly, further research is required to assess whether the lack of representativeness in online samples is more strongly associated with characteristics of the online population or with characteristics of those who volunteer for research. Finally, the effectiveness of online recruitment for cross‐sectional surveys may not directly translate to longitudinal studies or clinical trials.

Although not directly evaluated in the current study, the time taken to develop an online survey and collect data online seems to be an additional benefit of online surveys. In contrast to postal surveys, the current online surveys were designed, tested and deployed in a matter of days, no waiting time for return post was required, large‐scale recruitment could be completed within a few weeks, and no data entry was required. Furthermore, implicit in the current study is the added benefit that online advertising can be directly monitored in real‐time, enabling ongoing optimization of recruitment messages. Additional research using similar methodology may lead to greater insight into the motivations of individuals who volunteer for research. Testing further combinations of attributes, such as combining altruistic and self‐gain messages, may also lead to more effective recruitment.

The efficiency, flexibility and cost‐effectiveness of online recruitment, coupled with the present evidence that postal surveys are not particularly more representative than online surveys, suggests that online survey research has considerable potential.

Declaration of interest statement

The author has no competing interests.

Acknowledgements

The author gratefully acknowledges the College of Medicine, Biology and Environment, the Australian National University, which funded this study through an early career fellowship support grant. The author is also grateful to Helen Christensen and Kathy Griffiths for permission to use data from their postal survey research. PB is supported by National Health and Medical Research Council Early Career Fellowship 1035262.

References

  1. Australian Bureau of Statistics . (2008) 2007 National Survey of Mental Health and Wellbeing: Summary of Results, Canberra: Australian Bureau of Statistics. [Google Scholar]
  2. Australian Bureau of Statistics . (2012a) 1301.0 – Year Book Australia, 2012, Canberra: Australian Bureau of Statistics. [Google Scholar]
  3. Australian Bureau of Statistics . (2012b) 3101.0 – Australian Demographic Statistics, Jun 2012, Canberra: Australian Bureau of Statistics. [Google Scholar]
  4. Baker L., Wagner T.H., Singer S., Bundorf M.K. (2003) Use of the Internet and e‐mail for health care information: results from a national survey. JAMA: The Journal of the American Medical Association, 289, 2400–2406, DOI: 10.1001/jama.289.18.2400 [DOI] [PubMed] [Google Scholar]
  5. Baltar F., Brunet I. (2012) Social research 2.0: virtual snowball sampling method using Facebook. Internet Research, 22(1), 57–74. [Google Scholar]
  6. Bennett R., Kottasz R. (2001) Advertisement style and the recruitment of charity volunteers. Journal of Nonprofit & Public Sector Marketing, 8(2), 45–63. [Google Scholar]
  7. Blumberg S.J., Luke J.V. (2012) Wireless Substitution: Early Release of Estimates From the National Health Interview Survey, January–June 2012, Atlanta, GA: Centers for Disease Control and Prevention. [Google Scholar]
  8. Brick J.M., Williams D. (2013) Explaining rising nonresponse rates in cross‐sectional surveys. Annals of the American Academy of Political and Social Science, 645, 36–59, DOI: 10.1177/0002716212456834 [DOI] [Google Scholar]
  9. Chang L., Krosnick J.A. (2009) National surveys via RDD telephone interviewing versus the Internet: comparing sample representativeness and response quality. Public Opinion Quarterly, 73, 641–678, DOI: 10.1093/poq/nfp075 [DOI] [Google Scholar]
  10. Christensen H., Griffiths K.M., Mackinnon A.J., Kalia K., Batterham P.J., Kenardy J., Eagleson C., Bennett K. (2010) Protocol for a randomised controlled trial investigating the effectiveness of an online e health application for the prevention of generalised anxiety disorder. BMC Psychiatry, 10, 25, DOI: 10.1186/1471-244X-10-25 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Christensen H., Batterham P.J., Grant J.B., Griffiths K.M., Mackinnon A.J. (2011) A population study comparing screening performance of prototypes for depression and anxiety with standard scales. BMC Medical Research Methodology, 11, 154, DOI: 10.1186/1471-2288-11-154 [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Corrigan P.W., River L.P., Lundin R.K., Uphoff Wasowski K., Campion J., Mathisen J., Goldstein H., Bergman M., Gagnon C., Kubiak M.A. (2000) Stigmatizing attributions about mental illness. Journal of Community Psychology, 28(1), 91–102. [Google Scholar]
  13. Crisp A.H., Gelder M.G., Rix S., Meltzer H.I., Rowlands O.J. (2000) Stigmatisation of people with mental illnesses. British Journal of Psychiatry, 177(1), 4–7. [DOI] [PubMed] [Google Scholar]
  14. Fenner Y., Garland S.M., Moore E.E., Jayasinghe Y., Fletcher A., Tabrizi S.N., Gunasekaran B., Wark J.D. (2012) Web‐based recruiting for health research using a social networking site: an exploratory study. Journal of Medical Internet Research, 14, e20, DOI: 10.2196/jmir.1978 [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Gosling S.D., Vazire S., Srivastava S., John O.P. (2004) Should we trust web‐based studies? A comparative analysis of six preconceptions about Internet questionnaires. American Psychologist, 59(2), 93–104. [DOI] [PubMed] [Google Scholar]
  16. Graham A.L., Fang Y., Moreno J.L., Streiff S.L., Villegas J., Munoz R.F., Tercyak K.P., Mandelblatt J.S., Vallone D.M. (2012) Online advertising to reach and recruit Latino smokers to an internet cessation program: impact and costs. Journal of Medical Internet Research, 14, e116, DOI: 10.2196/jmir.2162 [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Griffiths K.M., Batterham P.J., Barney L., Parsons A. (2011) The Generalised Anxiety Stigma Scale (GASS): psychometric properties in a community sample. BMC Psychiatry, 11, 184, DOI: 10.1186/1471-244X-11-184 [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Jones L., Saksvig B.I., Grieser M., Young D.R. (2012) Recruiting adolescent girls into a follow‐up study: benefits of using a social networking website. Contemporary Clinical Trials, 33, 268–272, DOI: 10.1016/j.cct.2011.10.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Koo M., Skinner H. (2005) Challenges of internet recruitment: a case study with disappointing results. Journal of Medical Internet Research, 7, e6, DOI: 10.2196/jmir.7.1.e6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Lindenmeier J. (2008) Promoting volunteerism: effects of self‐efficacy, advertisement‐induced emotional arousal, perceived costs of volunteering, and message framing. Voluntas: International Journal of Voluntary and Nonprofit Organizations, 19(1), 43–65. [Google Scholar]
  21. Meyerson P., Tryon W.W. (2003) Validating internet research: a test of the psychometric equivalence of internet and in‐person samples. Behavior Research Methods, Instruments, & Computers, 35(4), 614–620. [DOI] [PubMed] [Google Scholar]
  22. Morgan A.J., Jorm A.F., Mackinnon A.J. (2013) Internet‐based recruitment to a depression prevention intervention: lessons from the Mood Memos study. Journal of Medical Internet Research, 15, e31, DOI: 10.2196/jmir.2262 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Munoz R.F., Barrera A.Z., Delucchi K., Penilla C., Torres L.D., Perez‐Stable E.J. (2009) International Spanish/English Internet smoking cessation trial yields 20% abstinence rates at 1 year. Nicotine and Tobacco Research, 11, 1025–1034, DOI: 10.1093/ntr/ntp090. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Ramo D.E., Prochaska J.J. (2012) Broad reach and targeted recruitment using Facebook for an online survey of young adult substance use. Journal of Medical Internet Research, 14, e28, DOI: 10.2196/jmir.1878 [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Schonlau M., van Soest A., Kapteyn A., Couper M. (2009) Selection bias in Web surveys and the use of propensity scores. Sociological Methods & Research, 37, 291–318, DOI: 10.1177/0049124108327128 [DOI] [Google Scholar]
  26. Spitzer R.L., Kroenke K., Williams J.B. (1999) Validation and utility of a self‐report version of PRIME‐MD: the PHQ primary care study. Primary care evaluation of mental disorders. Patient Health Questionnaire. JAMA: The Journal of the American Medical Association, 282(18), 1737–1744. [DOI] [PubMed] [Google Scholar]
  27. Spitzer R.L., Kroenke K., Williams J.B., Lowe B. (2006) A brief measure for assessing generalized anxiety disorder: the GAD‐7. Archives of Internal Medicine, 166(10), 1092–1097. [DOI] [PubMed] [Google Scholar]
