Author manuscript; available in PMC 2017 Oct 1.
Published in final edited form as: Fam Relat. 2016 Aug 26;65(4):550–561. doi: 10.1111/fare.12206

A Comparison of Three Online Recruitment Strategies for Engaging Parents

Jodi Dworkin 1, Heather Hessel 1, Kate Gliske 1, Jessie H Rudi 1
PMCID: PMC5552070  NIHMSID: NIHMS884411  PMID: 28804184

Abstract

Family scientists can face the challenge of effectively and efficiently recruiting normative samples of parents and families. Utilizing the Internet to recruit parents is a strategic way to find participants where they already are, enabling researchers to overcome many of the barriers to in-person recruitment. The present study was designed to compare three online recruitment strategies for recruiting parents: e-mail Listservs, Facebook, and Amazon Mechanical Turk (MTurk). Analyses revealed differences in the effectiveness and efficiency of data collection. In particular, MTurk resulted in the most demographically diverse sample, in a short period of time, with little cost. Listservs reached a large number of participants and resulted in a comparatively homogeneous sample. Facebook was not successful in recruiting a general sample of parents. Findings provide information that can help family researchers and practitioners be intentional about recruitment strategies and study design.

Keywords: Data collection, Facebook, MTurk, online recruitment, parenting


The task of effectively and efficiently recruiting parents to participate in research can be challenging; researchers are left to figure out where to find and how to recruit parents who are not experiencing a particular challenge or crisis. For instance, newspaper advertisements used to be a common method for recruiting research samples (Karney & Bradbury, 1995), but only 20% of adults in the United States now report getting their news via print papers (Mitchell, Gottfried, Barthel, & Shearer, 2016). Similarly, the use of phone books and random-digit dialing of households has become an unviable sample recruitment strategy, given that only 39.1% of households with children had a landline telephone in 2015 (and that number is dropping precipitously, down from 53% in 2012; Blumberg & Luke, 2016). However, online data collection is in many ways the modern-day equivalent of random-digit dialing (Chang & Krosnick, 2009).

A broad range of potential participants can be reached via the Internet. For example, 87% of adults in the United States are online regularly (Pew Research Center, 2014), and 76% of them use social networking sites (Pew Research Center, 2015). Thus, utilizing the Internet to recruit families is a potentially cost-effective and time-efficient way to find participants where they already are, enabling researchers to overcome many of the barriers to in-person recruitment (Benfield & Szlemko, 2006).

Most previous research has compared online recruitment methods with offline recruitment methods (e.g., Riva, Teruzzi, & Anolli, 2003; Vial, Starks, & Parsons, 2015; Ward, Clark, Zabriskie, & Morris, 2014), with limited methodological discussion of online research methods themselves (Fielding, Lee, & Blank, 2008). The technology tools available to accomplish recruitment goals range from basic resources, such as e-mail Listservs and social networking sites, to more sophisticated strategies, such as online labor markets. The present study was designed to compare the relative effectiveness of these three types of recruitment strategies with regard to demographic diversity, speed of data collection, and cost, in order to advance online recruitment strategies and study design. Before describing the method we employed for the present study, we review the literature on recruitment strategies in family research and describe the three key online recruitment tools in detail.

Recruitment Strategies in Family Science

Family researchers have raised questions about the sample one can expect to build from varying recruitment strategies. For example, Karney et al. (1995) found differences in demographics, personality, and marital satisfaction between couples recruited via newspaper ads and those recruited through a database of individuals registered for marriage licenses. In a second study, they found demographic differences between couples who responded to a mailed solicitation and those who did not respond. They concluded that sampling techniques had a greater impact than self-selection bias on the demographic makeup of the sample. Tamis-LeMonda, Briggs, McClowry, and Snow (2008) posited that African American parents have been particularly disadvantaged by the sampling techniques typically employed, as those techniques have resulted in narrow interpretations of their parenting styles. Tamis-LeMonda et al. (2008) articulated the challenge of gathering longitudinal data with mobile and underresourced families, as well as the need for culturally sensitive recruitment strategies, for example, extensive outreach to community staff and relationship building that enhances recruitment outcomes. Here we take up the limited existing discussion of the challenges and opportunities that specific online tools present for recruiting samples for online social science survey research.

Recruitment Strategies That Utilize Online Technologies

There are three main recruitment strategies that utilize online technologies: (a) e-mail Listservs, (b) social networking sites, and (c) online labor markets. Although these strategies are similar in that they use communication technologies, each of them has particular characteristics.

E-mail Listservs

A Listserv is a commonly used communication tool among individuals who are members of a group and share a common interest, activity, or other characteristic, such as being a parent. E-mail Listservs are electronic mailing lists developed by organizations to distribute messages to subscribers. They provide access to a large number of potential research participants and are therefore an extremely cost-effective online recruitment tool (Wright, 2005). However, there may be challenges associated with getting permission to post to Listservs, which may accept posts only from group members or consider messages posted from someone outside the group to be spam. In addition, because some Listserv participants receive messages in a daily or weekly “digest” format rather than in real time, or have configured their mail system to send Listserv messages to a separate mailbox to be reviewed at a later point in time, recruitment messages via Listserv may have a slower response and a lower response rate than paid advertisements.

Social networking sites

Social networking has evolved from rudimentary sites that simply link individuals together into more sophisticated systems that facilitate sharing between people and groups of people and blur the lines among marketing, social connections, and personal interest groups. Social networking can be viewed as a new digital “town square” that connects people in diverse social systems. Facebook continues to be the most popular social networking site in the United States; 72% of online adults are members (Pew Research Center, 2015).

As of June 2016, Facebook reported having 1.71 billion monthly active users worldwide; 66% used the social networking website daily, and 84.5% of those daily users lived outside of the United States and Canada (Facebook, 2016b). Facebook's Advertising Program (Facebook, 2016a) enables researchers to develop paid advertisements that are distributed to targeted demographics in the Facebook community. The ads appear in prominent locations (e.g., Facebook's News Feed) on the website for users who match the targeted demographics; clicking on the advertisement redirects the user to an external website chosen by the advertiser, such as to an online survey in the present context. Advertisers can select a billing option according to either how many times their advertisement is displayed (cost per thousand impressions, or CPM) or how many people click on the advertisement (cost per click, or CPC); however, with CPC, advertisers are charged for every click regardless of whether it comes from a unique user, on the assumption that repeated viewings of an ad are beneficial for businesses. It is unclear whether this is beneficial for researchers trying to reach as many unique visitors as possible.
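To make the billing difference concrete, the short sketch below (using purely hypothetical figures, not data from the present study) compares what a campaign would cost under per-click versus per-thousand-impressions billing, and what each implies for the cost of reaching unique visitors when some clicks are repeats:

```python
# Hypothetical comparison of Facebook ad billing models (illustrative numbers only).

impressions = 100_000        # times the ad is displayed
clicks = 900                 # total clicks, including repeat clicks by the same user
unique_clicks = 750          # clicks from distinct users (CPC does not distinguish these)
cpc_rate = 0.40              # assumed cost per click, in USD
cpm_rate = 3.00              # assumed cost per 1,000 impressions, in USD

cpc_cost = clicks * cpc_rate                 # CPC bills every click, repeat or not
cpm_cost = (impressions / 1000) * cpm_rate   # CPM bills displays, regardless of clicks

print(f"CPC total: ${cpc_cost:.2f} -> ${cpc_cost / unique_clicks:.2f} per unique visitor")
print(f"CPM total: ${cpm_cost:.2f} -> ${cpm_cost / unique_clicks:.2f} per unique visitor")
```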

Facebook ads are moderately successful at collecting representative samples (e.g., Arcia, 2014; Nelson, Hughes, Oakes, Pankow, & Kulasingam, 2014; Ramo & Prochaska, 2012), and they do so with costs that are often less than half the price of traditional print recruitment costs for the same recruitment outcomes (Lohse & Wamboldt, 2013). Some studies have found that samples recruited through Facebook ads are representative of the population being studied (e.g., Nelson et al., 2014), and others have been able to effectively sample difficult-to-reach populations (e.g., Ramo & Prochaska, 2012). However, there is large variation in how many participants researchers are able to recruit, the cost per participant, the diversity of the sample, and the length of time required for recruitment. For instance, the amount spent per viable participant has been reported as low as $1.36 in a study that recruited more than 1,000 individuals (Nelson et al., 2014) and as high as $33.33 for nine individuals, of which ultimately none was eligible (Kapp, Peters, & Oliver, 2013). Facebook ads appear to be most successful when used to target a specific group of people with an easily identifiable characteristic, such as vegetarians or vegans (Hoffman, Stallings, Bessinger, & Brooks, 2013), women in early pregnancy (Arcia, 2014), or individuals younger than age 30 (e.g., 18- to 25-year-old smokers; Ramo & Prochaska, 2012).

Online labor markets

The expansion of the Internet into professional life has resulted in the creation of a virtual workforce, unrestricted by geographic distance. A large, globally distributed workforce has emerged as a result of the number of people looking for short-term jobs that they can complete anywhere and at any time. Various online labor markets have been created that meet the needs and desires of these employees. In an online labor market, employers set a compensation amount for the successful completion of tasks that would previously have been done in person. Online labor markets can facilitate this method of crowdsourcing by enabling communication between employer and employee, and by handling the payment process, all managed online (Horton, Rand, & Zeckhauser, 2011).

Amazon Mechanical Turk (MTurk) is an online labor market that brings together temporary employers (“requesters”) and employees (“workers” or “Turkers”). MTurk was launched by Amazon in 2005 as a crowdsourcing tool that makes it easy for requesters to post advertisements for short-term human intelligence tasks (HITs) that workers can complete for an agreed-on compensation amount. The worker is paid upon successful completion of the task; payment logistics are handled by Amazon, which charges a 20% commission (10% at the time of data collection for the present study). Because a typical HIT is low intensity and has a short duration, the compensation amount tends to be small, usually measured in pennies. Nonetheless, Amazon reports employing more than 500,000 workers in more than 190 countries (Amazon, n.d.). Amazon pays international workers with an Amazon gift card, but in 2009 began offering Indian workers the option of being paid in Indian rupees. The percentage of Indian workers noticeably increased after this change (Ipeirotis, 2010).

MTurk is used for short online tasks that require human processing; common tasks include filling out surveys, transcription, and image labeling. Since its launch in 2005, social science researchers have used MTurk to recruit research subjects to complete questionnaires and participate in online experiments (Steelman, Hammer, & Limayem, 2014). This use has led some to question who MTurk workers are and how well they represent the larger population (Peer, Vosgerau, & Acquisti, 2014).

Because compensation for each HIT is generally low, studies have examined the extent to which finances are a primary motivator for MTurk workers. Most workers in both India and the United States participate to earn money (Horton et al., 2011); one study found that money was an important motivator for 61.4% of participants (Paolacci, Chandler, & Ipeirotis, 2010). Workers in India are also motivated by wanting to develop new skills, and workers in the United States have indicated that they perform HITs for fun and entertainment (Horton et al., 2011; Paolacci et al., 2010). More than half of both Indian and United States workers said that performing HITs on MTurk was a fruitful way to spend their free time (Ipeirotis, 2010; Paolacci et al., 2010).

Initial research suggests that MTurk may be a feasible, low-cost method of recruiting a large, diverse sample of research participants (Schleider & Weisz, 2015). In general, studies have demonstrated that the MTurk subject pool is more representative than typical convenience samples obtained through traditional recruitment methods (e.g., Ipeirotis, 2010; Ross, Irani, Silberman, Zaldivar, & Tomlinson, 2010; Shapiro, Chandler, & Mueller, 2013; Steelman et al., 2014). Specifically, MTurk samples tend to be more diverse in age, geography, and race than general Internet samples (Buhrmester, Kwang, & Gosling, 2011). Studies have also found that the self-reported demographics from workers are reliable (e.g., Mason & Suri, 2012). In replications of classic studies of judgment and decision making, logistic regression and chi-square analyses revealed that MTurk samples did not provide responses that were statistically different from participants in the original studies (Horton et al., 2011; Paolacci et al., 2010).

To address concerns about participants filling out surveys with random responses, researchers have experimented with inserting attention-check questions into online surveys. These questions, placed either in the instructions or intermingled with the questions themselves, are one way to determine whether respondents are paying attention. Results have been mixed on whether attention checks accomplish what the researcher intends them to, with some studies showing a benefit (Goodman, Cryder, & Cheema, 2013), and others arguing that removing data on the basis of attention checks produces biased results (Chandler, Mueller, & Paolacci, 2014; Downs, Holbrook, Sheng, & Cranor, 2010; Peer et al., 2014). Recent research has found that MTurk workers perform better on attention-check items than traditional subject pool samples such as those found at universities (Hauser & Schwarz, 2016).

Method

To better understand the relative advantages and disadvantages of various research sampling approaches, three unique recruitment strategies were examined: (a) e-mail Listservs, (b) Facebook ads, and (c) MTurk. The e-mail Listservs were used to recruit participants to complete a different survey from that used for the Facebook and MTurk recruitment approaches. However, both surveys were designed to recruit parents, took approximately 15–20 minutes to complete, and included many of the same questions. There is no reason to think the survey itself had an impact on recruitment, as both surveys targeted parents and addressed parents' use of technology for parenting. The survey used for e-mail Listservs was administered using the university's survey tool. The survey used for participants recruited via Facebook and MTurk was administered using Qualtrics, an online survey tool optimized for use on mobile devices such as smartphones and tablets, thereby increasing accessibility for participants who do not own computers or laptops. Given the similarities in survey administration, length, and topic, the main difference between the recruitment strategies was the entry point to the survey.

We focused on data that enabled us to measure the effectiveness and efficiency of data collection. Effectiveness is the degree to which the recruitment objectives were achieved; efficiency is a consideration of cost and the ability to complete data collection in the best possible way, with minimal waste of time and effort. Both effectiveness and efficiency were assessed using the university's survey tool and Qualtrics; both survey tools provided extensive information about survey respondents, including geographic coordinates, how many potential respondents reached the informed consent page, and how many people completed the survey. Effectiveness was also assessed by exploring the quality of the collected data, including missing data and the accuracy of responses to attention-check questions for MTurk participants. These data also allowed us to calculate cost per participant and cost per click. In addition to data from the university's survey tool and Qualtrics, the length of time needed to recruit a sample size appropriate for the study was considered.
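As an illustration of the efficiency metrics just described, the sketch below computes cost per click, cost per participant, and a completion rate from the kinds of counts a survey tool typically reports; the function name and the input values are placeholders, not figures from this study:

```python
# Illustrative sketch of the efficiency metrics described above (cost per click,
# cost per participant, completion rate); input numbers are placeholders.

def recruitment_metrics(total_cost, clicks, consent_page_views, completed_surveys):
    """Return simple efficiency indicators for one recruitment strategy."""
    return {
        "cost_per_click": total_cost / clicks if clicks else None,
        "cost_per_participant": total_cost / completed_surveys if completed_surveys else None,
        "completion_rate": completed_surveys / consent_page_views if consent_page_views else None,
    }

# Example with made-up values for a single hypothetical campaign:
print(recruitment_metrics(total_cost=500.00, clicks=1200,
                          consent_page_views=130, completed_surveys=10))
```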

E-mail Listservs

To recruit parents using e-mail Listservs, a recruitment e-mail describing the study was sent to demographically diverse e-mail lists of professionals across the United States who work with families, including Cooperative Extension (and eXtension); early education groups within state departments of education; U.S. Department of Agriculture (USDA) initiatives such as Children, Youth, and Families at Risk (CYFAR) projects; National Institute of Food and Agriculture (NIFA) divisions and initiatives; and other statewide and national networks that reach families and professionals with parenting resources. Professionals who subscribed to these Listservs were asked to forward the recruitment e-mail to the parents and families with whom they worked. This allowed for a broad national reach and meant that parents would receive the participation request from a familiar name. The recruitment e-mail included a brief description of the goals of the study along with a link to the online survey. Data were collected between May and November 2010. Because professionals were asked to forward the recruitment message, it is not possible to know how many parents were reached or to compute a response rate. The only incentive for participation was the opportunity to be entered into a drawing for one of ten $100 Amazon gift cards.

Facebook Ads

Two advertisements were displayed through Facebook's paid-advertising program. Each ad included the text, “Are you a parent of a HS/college student? Take a 15 min survey for a chance to win great prizes!” next to an image of the university logo alone or paired with the university mascot. To incentivize participation, participants had a chance to win an iPad mini or one of two $100 Amazon gift cards upon completion of the survey. E-mail addresses were collected in a separate survey that was not linked to data collection. Utilizing Facebook's ability to target demographic groups, two distinct ad sets were compiled: the first targeted a broad group of parents residing in the United States, a pool of approximately 52 million Facebook users (an estimate calculated by Facebook), and the second targeted a racially and ethnically diverse set of parents living in the United States (Hispanic, Asian, and African American parents in particular), thereby restricting the pool to an estimated 3.4 million Facebook users. Facebook automatically identifies the top-performing ads and displays them at a higher rate than poorly performing ads. The Facebook Ad Manager provided metrics about the ad campaign.

MTurk

Two rounds of data collection were conducted via MTurk in July 2014. The HITs, or tasks, were advertised to recruit parents of high school and college students in the United States and India. Survey participants were limited to workers with a high MTurk reputation (defined by MTurk as workers who have successfully completed 50 or more HITs from other requesters with a 90% or better approval rate). High-reputation workers tend to produce higher-quality data (e.g., higher internal reliability on survey scales, answer more attention-check questions correctly) than do low-reputation workers (Peer et al., 2014). To add further quality control to the data, three additional attention-check questions were embedded in the survey. These questions were designed to ensure that participants were paying attention and not simply clicking through to complete the task and earn their payment. An example of an attention-check question is “To demonstrate that you are reading the questions, please select Yes below.” Although some participants will inevitably choose the requested answer to individual attention-check items by chance, an incorrect response to any one attention-check item across the survey can help to identify respondents whose pattern of responses may be invalid and therefore warrant careful scrutiny. Each participant received $1 upon survey completion. In addition, parents could enter their e-mail address into a drawing for an iPad mini and one of two $100 Amazon gift cards. E-mail addresses were collected in a separate survey that was not linked to data collection.
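The sketch below illustrates one way the attention-check screening described above could be implemented after data collection; the column names, answer key, and responses are hypothetical, and the flag is intended to prompt closer scrutiny rather than automatic exclusion:

```python
# One way to flag respondents who miss any attention-check item, following the
# screening logic described above. Column names and answers are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "worker_id": ["A1", "A2", "A3"],
    "attn_1": ["Yes", "Yes", "No"],      # requested answer: "Yes"
    "attn_2": ["Agree", "Agree", None],  # requested answer: "Agree"
    "attn_3": ["3", "4", "3"],           # requested answer: "3"
})
answer_key = {"attn_1": "Yes", "attn_2": "Agree", "attn_3": "3"}

# Count missed or skipped attention checks per respondent.
n_missed = sum((responses[item] != answer).astype(int) for item, answer in answer_key.items())
responses["n_missed"] = n_missed
responses["flag_for_review"] = responses["n_missed"] > 0

print(responses[["worker_id", "n_missed", "flag_for_review"]])
```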

Results

Analyses were exploratory and purposely designed to understand the strengths and challenges of three different online recruitment methods, with a particular emphasis on the effectiveness and efficiency of the different recruitment methods. The results focus on demographic diversity of the sample obtained by each method, the quality of the data collected, and the cost (considering both incentives and direct costs).

Table 1 provides a detailed comparison of the samples resulting from each recruitment method; demographic characteristics about the general population of the United States are included for comparison. The length of time to recruit participants varied between studies; this was intentional, the result of the time needed to recruit a sample size appropriate for each study.

Table 1. Demographic Characteristics of Listservs, Facebook, and MTurk Samples Collected for Current Study.

Characteristic | U.S. population (a) | Listservs | Facebook | MTurk U.S. sample | MTurk Indian sample
Length of recruitment | – | 7 months | 6 weeks | 17 hours | 17 hours
Parent participants (N) | – | 2,240 | 10 | 409 | 207
Total cost of recruitment | – | $1,000.00 | $500.00 | $650.00 (U.S. and Indian samples combined)
Cost per parent | – | $0.45 | $50.00 | $1.06 (U.S. and Indian samples combined)
Age: M (SD) | – | 42.0 (9.8) | 48.5 (10.1) | 42.3 (7.7) | 39.9 (8.6)
Female | 50.8% | 88.0% | 70.0% | 59.8% | 40.3%
Race/ethnicity (b)
 White or Caucasian | 74.8% | 83.3% | 40.0% | 73.1% | 0.5%
 Asian | 5.6% | 2.4% | 0.0% | 5.6% | 93.7%
 Black or African American | 13.6% | 1.9% | 30.0% | 10.0% | 0.0%
 Hispanic or Latin American | 16.3% | 2.1% | 30.0% | 4.2% | 0.5%
 American Indian or Alaska Native | 1.7% | 0.8% | 0.0% | 3.7% | 3.4%
 Native Hawaiian or other Pacific Islander | 0.4% | 0.0% | 0.0% | 1.0% | 0.0%
 Mixed race | 2.9% | 1.3% | 0.0% | 2.2% | 1.9%
Country
 United States | – | 98.1% | 100.0% | 100.0% | 0.0%
 India | – | 0.13% | 0.0% | 0.0% | 100.0%
 Other | – | 0.58% | 0.0% | 0.0% | 0.0%
Marital status
 Married or living with partner | 52.4% (c) | 87.0% | 80.0% | 76.2% | 97.6%
 Divorced or separated | 10.1% | 12.1% | 20.0% | 13.4% | 0.0%
 Single | 31.8% | 0.4% | 0.0% | 7.6% | 1.9%
 Widowed | 5.7% (d) | 0.0% | 0.0% | 2.7% | 0.5%
Employment status
 Full-time | 49.0% | 55.4% | 30.0% | 64.3% | 69.6%
 Part-time | 10.9% | 20.0% | 20.0% | 18.8% | 23.7%
 Do not work outside home | – | 14.0% | 10.0% | 8.6% | 4.8%
 Unemployed | – | 3.5% | 0.0% | 6.8% | 0.0%
Annual household income
 Less than $30,000 | 28.6% | 7.0% | 30.0% | 23.0% | 62.3%
 $30,000 to <$50,000 | 18.1% | 12.8% | 20.0% | 29.4% | 16.9%
 $50,000 to <$75,000 | 17.0% | 19.2% | 10.0% | 18.8% | 8.2%
 $75,000 to <$100,000 | 11.5% | 19.1% | 30.0% | 16.6% | 4.3%
 $100,000 or more | 24.7% | 31.2% | 0.0% | 10.0% | 1.0%
Education
 High school degree or less | 42.0% | 3.6% | 20.0% | 12.0% | 8.8%
 Business, technical, or vocational school | 4.1% | 4.9% | 0.0% | 6.1% | 2.9%
 Some college, no 4-year degree | 24.7% | 16.0% | 30.0% | 28.7% | 14.6%
 College graduate | 18.9% | 36.2% | 30.0% | 35.4% | 42.9%
 Postgraduate training (master's, doctorate) | 10.4% | 39.2% | 20.0% | 17.4% | 30.7%

Note. (a) For U.S. population data, some data are from individuals aged 16 and older and others are from individuals aged 18 and older (Bureau of Labor Statistics, 2016; U.S. Census Bureau, 2010, 2014, 2015a, 2015b). (b) Hispanic/Latino is a separate category for the full population. (c) Married only. (d) Including "living with partner."

Data provided in Table 1 suggest that Listservs were the most cost-effective strategy, yielding the largest sample at the lowest cost per participant. Because the main goal of that study was to collect a large sample, data collection took the most time; the Listserv sample also had the largest percentage of missing data (ranging from 0.8% to 5.6% missing per survey item). In contrast, Facebook was efficient in neither time nor cost. The per-participant cost was high, and the ads yielded few participants despite paid placement and participant incentives: Over 50 days the ads reached 142,885 possible participants and drew a total of 1,265 clicks; 131 surveys were started and only 10 were completed, at a cost of $49.95 per parent participant. However, nine of the 10 participants recruited through Facebook provided complete data. MTurk was the fastest way to collect data and resulted in the most demographically diverse (e.g., race, socioeconomic status, gender) sample of parents, including a large number of participants from India (see Table 1). In addition, missing data were low, ranging from 0.0% to 4.5% per survey item. As a result, most participants were approved to be compensated $1.00. The only respondents who were not compensated were those who reported that their child was younger than age 8. Because the HIT called for parents of high school and college students, it was reasonable to assume these parents were not able to accurately report on the parenting behaviors asked about in the survey (e.g., behaviors that are likely to begin in middle childhood, such as parental monitoring). Further, although some parents with children who were not in high school or college were compensated for their time, their data were not used in analyses. The financial cost associated with MTurk was reasonable for a moderate sample size in a cross-sectional research study.
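The Facebook figures reported above can be re-derived as a simple recruitment funnel; the sketch below uses the published counts, with the total ad spend inferred from the reported cost of $49.95 per completed survey:

```python
# Re-deriving the Facebook ad funnel from the counts reported above.
# Ad spend is inferred from the reported $49.95 per completed survey (10 completes).
reach, clicks, started, completed = 142_885, 1_265, 131, 10
ad_spend = 49.95 * completed

click_through_rate = clicks / reach          # share of people who saw the ad and clicked
start_rate = started / clicks                # share of clicks that led to a started survey
completion_rate = completed / started        # share of started surveys that were finished
cost_per_participant = ad_spend / completed  # cost per completed parent survey

print(f"CTR {click_through_rate:.2%}, start {start_rate:.2%}, "
      f"completion {completion_rate:.2%}, ${cost_per_participant:.2f}/participant")
```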

The three samples also differed demographically. The MTurk sample was the most balanced between mothers and fathers, both among respondents in the United States and especially among Indian respondents, almost 60% of whom were fathers; Listserv and Facebook respondents were primarily mothers. The percentage of parents who were married (between 80% and 90%) was comparable across the three recruitment strategies. Facebook and MTurk were more effective than Listservs at recruiting socioeconomically diverse parents; parents recruited through Listservs were primarily middle or upper class. Less than one-third of participants recruited through Facebook reported working full-time, whereas more than half of those recruited via MTurk and Listservs did. Parents recruited through Listservs were also more likely than parents recruited through Facebook and MTurk to have earned a postgraduate degree and to have a high income (see Table 1). The demographics of the MTurk sample from the United States most closely aligned with the population of the United States with regard to gender, race/ethnicity, and marital status.

To better understand similarities and differences in data quality across the three recruitment methods, missing data were examined as described earlier, as were the attention-check questions used in the version of the survey completed by MTurk participants. Of those parents who completed the survey, 74.6% answered all three attention checks correctly; 16.5% missed or skipped one of the attention-check questions, 7.1% missed or skipped two or more attention-check questions, and 1.9% of participants missed or skipped all three questions. Older parents (F(2, 603) = 4.86, p = .008) and mothers (χ2(2, N = 607) = 9.66, p = .008) were more likely to answer all three attention checks correctly, as were White or Caucasian parents (χ2(10, N = 609) = 49.30, p < .001) and those living in the United States (χ2(2, N = 626) = 7.19, p = .027).
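For readers who want to reproduce this style of comparison, the sketch below shows how analogous one-way ANOVA and chi-square tests could be run with SciPy; the data are randomly generated placeholders, not the study data:

```python
# Sketch of the kinds of tests reported above (one-way ANOVA for age,
# chi-square for categorical demographics) using placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical ages of parents grouped by number of attention checks answered correctly
ages_all_correct = rng.normal(45, 9, 450)
ages_missed_one = rng.normal(43, 9, 100)
ages_missed_more = rng.normal(40, 9, 60)
f_stat, p_anova = stats.f_oneway(ages_all_correct, ages_missed_one, ages_missed_more)

# Hypothetical contingency table: rows = gender, columns = attention-check group
table = np.array([[260, 55, 25],    # mothers
                  [190, 45, 35]])   # fathers
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Chi-square: chi2 = {chi2:.2f}, df = {dof}, p = {p_chi2:.3f}")
```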

Discussion, Implications, and Limitations

One of the benefits of the digital era for researchers is the ability to recruit participants quickly and inexpensively. In less than one day, MTurk yielded more than 600 demographically diverse parents who completed an online survey, with a low per-person cost and high-quality data. This has profound implications for researchers and areas of study that may not have the funding to enable large-scale recruitment using more traditional sampling methods.

However, there is still a need to be intentional about whether and which online recruitment method is most appropriate. For example, although Facebook ads might work well at recruiting young adults who smoke cigarettes (Ramo & Prochaska, 2012), they would likely be less effective at recruiting a general sample of older adults, because only 56% of adults age 65 and older in the United States use social networking sites (Pew Research Center, 2015). Researchers should be cautious about using online recruitment strategies for populations that may not be readily accessible online.

The ability of Listservs to reach a select population may result in a sample that is not representative of all parents, but this may not always be problematic. For example, these data were collected to understand parents’ online behavior, and the Listserv sample may be representative of parents who are online and actively engaged with the particular Listserv used to recruit.

Individuals use social networking sites like Facebook to connect with people they already know or groups of people with whom they have something in common. Parents of high school and college students, it seems, are not a sufficiently cohesive group to target through social networking sites. In addition, using the ad feature in Facebook assumes that individuals self-identify in a way that connects with the study criteria. For example, users need to indicate they are a parent in their profile in order to be part of the potential pool of parent participants. Setting up a Facebook page without paying for ads may be adequate for recruiting members of a specific population, for instance, parents who have a child with autism, as these parents may be actively seeking support online. For recruiting a general sample of families, however, it seems that social networking websites may not be the best approach. That said, Facebook is one of many social networking websites; other types of social networking sites such as Pinterest, Twitter, or LinkedIn may be more effective for recruitment depending on the intended target sample. Further, attractive and creative marketing of surveys could increase the chance that individuals share a recruitment ad with potential participants within their own social networks.

Our findings indicate that data can effectively and efficiently be collected via online labor markets such as MTurk, and that the sample reached a population that is perhaps more demographically diverse than one might achieve with face-to-face methods. The use of an online labor market like MTurk eliminates some of the problems presented by other online recruitment methods such as Listservs and social networking sites. Privacy concerns are mitigated by the division between survey completion (via the researcher's website) and payment (via Amazon). Researchers do not have access to names or other direct identifiers of MTurk workers through Amazon.

The use of online strategies, and online labor markets such as MTurk in particular, to engage research participants raises new questions about recruitment. Our results indicate that, for most researchers, MTurk is a good choice that will produce samples with demographics that better reflect the population of the United States than samples typically found when recruiting via Listservs or through college classes (see Table 1; Buhrmester et al., 2011; Steelman et al., 2014). However, relying on virtual workers who are guaranteed money as compensation—however little—raises questions about the motivation and attention of workers.

The motivation for MTurk workers to complete surveys is different from what is found in other recruitment methods. Traditionally, survey takers like those who complete a survey through a Listserv or Facebook ad may be motivated to participate because of personal interest (e.g., parents of children with certain disabilities) or obligation (e.g., college students taking an undergraduate psychology class). Money is the most important motivation for MTurk participation (Horton et al., 2011), and workers are virtually guaranteed compensation for completion of a survey, with occasional exceptions at the discretion of the researcher. For example, a participant completing a survey without acknowledging informed consent or without meeting inclusion criteria may be denied compensation.

Because of the potential for less personal investment in research with tools such as MTurk, it could be argued that MTurk workers might not pay close attention to their work. As noted earlier, results have been mixed on whether attention checks accomplish what the researcher intends them to do (Chandler et al., 2014; Goodman et al., 2013; Peer et al., 2014), although recent research has found that MTurk workers perform better on attention-check items than do traditional subject-pool samples such as those found at universities (Hauser & Schwarz, 2016). Although including attention checks may seem like good survey construction, it is not always clear how to handle missing or incorrect answers to those questions. Our analyses suggest that differences exist between respondents who correctly answered all three attention-check questions and those who missed or skipped two or three: those who missed or skipped two or three questions were younger, more likely to be male, and more likely to be located outside the United States than those who answered all three. When using an online labor market like MTurk, requesters can limit the pool of respondents to high-reputation workers, as in the current study, which may alleviate concerns about attention and produce valid and reliable results. Additional research is needed to more fully understand the characteristics of subgroups of MTurk workers, but it is unlikely that limiting the pool of respondents to high-reputation workers influenced the demographics of the sample in the present study (Peer et al., 2014).

Across recruitment methods, the reputation of the organization conducting the survey may also be a factor in an individual's decision to participate, especially if privacy or anonymity is a concern. Recruitment through a trusted Listserv may increase individuals’ willingness to respond. MTurk workers use discussion boards such as MTurkforum.com and turkernation.com to communicate about topics such as how reliably requesters pay and how well they estimate the time needed to complete their work (Chandler et al., 2014). In contrast with MTurk workers, participants recruited through Listservs or social media are often compensated with less certain or tangible rewards, such as the possibility of winning a prize or good feelings associated with contributing to the common good.

There are also other important costs to recruitment, including the length of time needed to recruit the necessary sample and how much time it takes to manage the recruitment efforts (e.g., having to send multiple reminders to e-mail lists). The importance of time and financial cost will vary depending on the research questions and the resources available to the researcher. Although Listservs resulted in the largest sample with the lowest cost, this approach was time intensive. Facebook was also time intensive, but it had a large financial cost as well, with a very low return on investment. MTurk was moderate on both counts and resulted in the desired sample. For researchers unfamiliar with open, online marketplaces such as MTurk, understanding how MTurk works and setting up HITs appropriately can be the most time-intensive aspect of launching a survey. Researchers can learn and pilot MTurk HITs on the MTurk sandbox, which is a mock MTurk website where requesters can test HITs before officially launching on MTurk. This can be a worthwhile endeavor to ensure the HIT displays correctly and with the desired parameters before beginning data collection.
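As one illustration of what piloting in the sandbox can look like, the minimal sketch below assumes the boto3 MTurk client and AWS credentials configured for a requester account; the survey URL, reward, and durations are placeholders, and the qualification thresholds mirror the 90% approval and 50-HIT criteria described in the Method section:

```python
# Minimal sketch of posting a survey HIT to the MTurk *sandbox* for piloting,
# assuming the boto3 MTurk client. Values below are illustrative placeholders.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/parent-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Parenting survey (15-20 minutes)",
    Description="Complete a short survey about parenting and technology use.",
    Reward="1.00",
    MaxAssignments=50,
    AssignmentDurationInSeconds=60 * 60,
    LifetimeInSeconds=7 * 24 * 60 * 60,
    Question=external_question,
    QualificationRequirements=[
        {"QualificationTypeId": "000000000000000000L0",  # percent assignments approved
         "Comparator": "GreaterThanOrEqualTo", "IntegerValues": [90]},
        {"QualificationTypeId": "00000000000000000040",  # number of HITs approved
         "Comparator": "GreaterThanOrEqualTo", "IntegerValues": [50]},
    ],
)
print("Sandbox HIT created:", hit["HIT"]["HITId"])
```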

Technological advances have opened up a new realm of recruitment options for research, including the use of e-mail, social media, and online labor markets, and these online recruitment strategies provide a viable means for obtaining a geographically diverse, and even global, sample. Each new option comes with its own set of advantages and disadvantages around issues that researchers care about, such as financial cost, independence of the data, selection bias, homogeneity, and time to recruit. Researchers should carefully consider their online recruitment options on the basis of the requirements of their study and understand that there are limits to the effectiveness of recruiting participants in every online space. Launching a recruitment strategy without adequately understanding the technology can also be problematic. For example, setting up HITs in MTurk without understanding the norms may result in developing a poor reputation in the MTurk community, which would have an effect on response rate.

Implications for Family Life Professionals

In addition to the many implications for researchers, MTurk can be a useful tool for family life professionals. MTurk can provide access to individuals and families who have typically been difficult for family life professionals to reach (e.g., rural families, homebound older adults), for both needs assessment and evaluation. Additionally, MTurk workers can be hired to write reviews, descriptions, and blog entries for websites; provide editing and transcription; rate the accuracy of search engine results; provide feedback about videos and photos, advertisements, other media and recruitment flyers, and program descriptions; or provide feedback about whether people or materials in photos are appropriate, relatable, and culturally sensitive. They can also tag photos or videos with keywords that practitioners can use to recruit a particular audience (Dworkin, Brar, Hessel, & Rudi, 2016).

MTurk can also enable professionals to gather survey data or in-depth qualitative interviews in a cost-effective manner and from geographically diverse participants, homebound participants, participants without college degrees, and minority groups (Williamson, 2014). For example, professionals seeking to support families with children with a rare health condition, or military families who are geographically dispersed, could be reached through MTurk.

Conclusion

We compared the results of three different online recruitment methods designed to recruit a normative sample of parents of high school and college students. Analyses revealed that Listservs resulted in a large, low-cost, but homogeneous sample; Facebook ads resulted in a high-cost and virtually nonexistent sample; and the MTurk online labor market resulted in a medium-cost, moderately diverse sample of parents. The recruitment options available to researchers and family life professionals are constantly changing, providing new opportunities for research and practice that may be more or less effective than the options that preceded them (e.g., newspaper advertisements, random-digit dialing). Researchers and family life professionals should be informed and intentional when considering the trade-offs associated with each method.

Acknowledgments

This research was supported by the Minnesota Agricultural Experiment Station. One of the co-authors was supported by a fellowship on NIH grant T32 MH010026.

References

1. Amazon. (n.d.). Requester. Retrieved from https://requester.mturk.com/tour
2. Arcia A. Facebook advertisements for inexpensive participant recruitment among women in early pregnancy. Health Education & Behavior. 2014;41:237–241. doi: 10.1177/1090198113504414
3. Benfield JA, Szlemko WJ. Internet-based data collection: Promises and realities. Journal of Research Practice. 2006;2(2). Retrieved from http://jrp.icaap.org/index.php/jrp
4. Blumberg SJ, Luke JV. Wireless substitution: Early release of estimates from the National Health Interview Survey, July–December 2015. Atlanta, GA: National Center for Health Statistics; 2016. Retrieved from http://www.cdc.gov/nchs/data/nhis/earlyrelease/wireless201605.pdf
5. Buhrmester M, Kwang T, Gosling SD. Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science. 2011;6(1):3–5.
6. Bureau of Labor Statistics. Labor force statistics from the current population survey: Household data seasonally adjusted. 2016. Retrieved from http://www.bls.gov/web/empsit/cpseea06.htm
7. Chandler J, Mueller P, Paolacci G. Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods. 2014;46:112–130. doi: 10.3758/s13428-013-0365-7
8. Chang L, Krosnick JA. National surveys via RDD telephone interviewing versus the Internet: Comparing sample representativeness and response quality. Public Opinion Quarterly. 2009;73:641–678. doi: 10.1093/poq/nfp075
9. Downs JS, Holbrook MB, Sheng S, Cranor LF. Are your participants gaming the system? Screening Mechanical Turk workers. In: Proceedings of the 28th International Conference on Human Factors in Computing Systems. New York, NY: Association for Computing Machinery; 2010. pp. 2399–2402.
10. Dworkin J, Brar P, Hessel H, Rudi J. MTurk 101: An introduction to Amazon Mechanical Turk for extension professionals. 2016. Manuscript submitted for publication.
11. Facebook. Facebook ads. 2016a. Retrieved from https://www.facebook.com/business/products/ads/
12. Facebook. Stats. 2016b. Retrieved from http://newsroom.fb.com/company-info
13. Fielding NG, Lee RM, Blank G, editors. The Sage handbook of online research methods. London, UK: Sage; 2008.
14. Goodman JK, Cryder CE, Cheema A. Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples. Journal of Behavioral Decision Making. 2013;26:213–224. doi: 10.1002/bdm.1753
15. Hauser DJ, Schwarz N. Attentive Turkers: MTurk participants perform better on online attention checks than do subject pool participants. Behavior Research Methods. 2016;48(1):400–407. doi: 10.3758/s13428-015-0578-z
16. Hoffman SR, Stallings SF, Bessinger RC, Brooks GT. Differences between health and ethical vegetarians: Strength of conviction, nutrition knowledge, dietary restriction, and duration of adherence. Appetite. 2013;65:139–144. doi: 10.1016/j.appet.2013.02.009
17. Horton JJ, Rand DG, Zeckhauser RJ. The online laboratory: Conducting experiments in a real labor market. Experimental Economics. 2011;14:399–425. doi: 10.3386/w15961
18. Ipeirotis PG. The new demographics of Mechanical Turk [Blog post]. A Computer Scientist in a Business School. 2010 Mar 9. Retrieved from http://www.behind-the-enemy-lines.com/2010/03/new-demographics-of-mechanical-turk.html
19. Kapp JM, Peters C, Oliver DP. Research recruitment using Facebook advertising: Big potential, big challenges. Journal of Cancer Education. 2013;28:134–137. doi: 10.1007/s13187-012-0443-z
20. Karney BR, Bradbury TN. The longitudinal course of marital quality and stability: A review of theory, methods, and research. Psychological Bulletin. 1995;118:3–34. doi: 10.1037/0033-2909.118.1.3
21. Karney BR, Davila J, Cohan CL, Sullivan KT, Johnson D, Bradbury TN. An empirical investigation of sampling strategies in marital research. Journal of Marriage and Family. 1995;57:909–920. doi: 10.2307/353411
22. Lohse B, Wamboldt P. Purposive Facebook recruitment endows cost-effective nutrition education program evaluation. JMIR Research Protocols. 2013;2(2). doi: 10.2196/resprot.2713
23. Mason W, Suri S. Conducting behavioral research on Amazon's Mechanical Turk. Behavior Research Methods. 2012;44:1–23. doi: 10.3758/s13428-011-0124-6
24. Mitchell A, Gottfried J, Barthel M, Shearer E. The modern news consumer: Pathways to news. Washington, DC: Pew Research Center; 2016 Jul 7. Retrieved from http://www.journalism.org/2016/07/07/pathways-to-news/
25. Nelson EJ, Hughes J, Oakes JM, Pankow JS, Kulasingam SL. Estimation of geographic variation in human papillomavirus vaccine uptake in men and women: An online survey using Facebook recruitment. Journal of Medical Internet Research. 2014;16(9):e198. doi: 10.2196/jmir.3506
26. Paolacci G, Chandler J, Ipeirotis PG. Running experiments on Amazon Mechanical Turk. Judgment and Decision Making. 2010;5:411–419.
27. Peer E, Vosgerau J, Acquisti A. Reputation as a sufficient condition for data quality on Amazon Mechanical Turk. Behavior Research Methods. 2014;46:1023–1031. doi: 10.3758/s13428-013-0434-y
28. Pew Research Center. Internet user demographics. 2014 Jan. Retrieved from http://www.pewinternet.org/data-trend/internet-use/latest-stats
29. Pew Research Center. Social networking use. 2015 Jul. Retrieved from http://www.pewresearch.org/data-trend/media-and-technology/social-networking-use
30. Ramo DE, Prochaska JJ. Broad reach and targeted recruitment using Facebook for an online survey of young adult substance use. Journal of Medical Internet Research. 2012;14(1):e28. doi: 10.2196/jmir.1878
31. Riva G, Teruzzi T, Anolli L. The use of the Internet in psychological research: Comparison of online and offline questionnaires. CyberPsychology & Behavior. 2003;6:73–80. doi: 10.1089/109493103321167983
32. Ross J, Irani L, Silberman MS, Zaldivar A, Tomlinson B. Who are the crowdworkers? Shifting demographics in Mechanical Turk. Paper presented at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI); Atlanta, GA; 2010 Apr 10.
33. Schleider JL, Weisz JR. Using Mechanical Turk to study family processes and youth mental health: A test of feasibility. Journal of Child and Family Studies. 2015;24:3235–3246. doi: 10.1007/s10826-015-0126-6
34. Shapiro DN, Chandler J, Mueller PA. Using Mechanical Turk to study clinical populations. Clinical Psychological Science. 2013;1:213–220. doi: 10.1177/2167702612469015
35. Steelman ZR, Hammer BI, Limayem M. Data collection in the digital age: Alternatives to student samples. MIS Quarterly. 2014;38:355–378.
36. Tamis-LeMonda CS, Briggs RD, McClowry SG, Snow DL. Challenges to the study of African American parenting: Conceptualization, sampling, research approaches, measurement, and design. Parenting. 2008;8:319–358. doi: 10.1080/15295190802612599
37. U.S. Census Bureau. Profile of the general population and housing characteristics: 2010 demographic profile data. 2010. Retrieved from http://factfinder.census.gov/faces/tableservices/jsf/pages/productview.xhtml?src=bkmk
38. U.S. Census Bureau. Educational attainment of the population 18 years and over, by age, sex, race, and Hispanic origin: 2014. 2014. Retrieved from http://www.census.gov/hhes/socdemo/education/data/cps/2014/tables.html
39. U.S. Census Bureau. CPS 2015 annual social and economic supplement: Selected characteristics of households, by total money income in 2014. 2015a. Retrieved from http://www.census.gov/hhes/www/cpstables/032015/hhinc/hinc01_000.htm
40. U.S. Census Bureau. Marital status of the population 15 years old and over by sex, race and Hispanic origin: 1950 to present. 2015b. Retrieved from https://www.census.gov/hhes/families/data/marital.html
41. Vial AC, Starks TJ, Parsons JT. Relative efficiency of field and online strategies in the recruitment of HIV-positive men who have sex with men. AIDS Education and Prevention. 2015;27:103–111. doi: 10.1521/aeap.2015.27.2.103
42. Ward P, Clark T, Zabriskie R, Morris T. Paper/pencil versus online data collection: An exploratory study. Journal of Leisure Research. 2014;46:84–105.
43. Williamson V. On the ethics of crowd-sourced research. 2014 Aug 10. Retrieved from http://scholar.harvard.edu/files/williamson/files/mturk_ps_081014.pdf
44. Wright KB. Researching Internet-based populations: Advantages and disadvantages of online survey research, online questionnaire authoring software packages, and web survey services. Journal of Computer-Mediated Communication. 2005;10(3). doi: 10.1111/j.1083-6101.2005.tb00259.x
