Prev Sci. 2016 May 24;17:765–778. doi: 10.1007/s11121-016-0664-1

Human Subjects Protection and Technology in Prevention Science: Selected Opportunities and Challenges

Anthony R Pisani 1,, Peter A Wyman 1, David C Mohr 2, Tatiana Perrino 3, Carlos Gallo 2, Juan Villamar 2, Kimberly Kendziora 4, George W Howe 5, Zili Sloboda 6, C Hendricks Brown 2
PMCID: PMC4938846  PMID: 27220838

Abstract

Internet-connected devices are changing the way people live, work, and relate to one another. For prevention scientists, technological advances create opportunities to promote the welfare of human subjects and society. The challenge is to obtain the benefits while minimizing risks. In this article, we use the guiding principles for ethical human subjects research, and proposed changes to the Common Rule regulations, as a basis for discussing selected opportunities and challenges that new technologies present for prevention science. The benefits of conducting research with new populations, and at new levels of integration into participants’ daily lives, are presented, along with five challenges and the technological and other solutions that can strengthen the protections we provide: (1) achieving adequate informed consent with procedures that are acceptable to participants in a digital age; (2) balancing opportunities for rapid development and broad reach with gaining adequate understanding of population needs; (3) integrating data collection and intervention into participants’ lives while minimizing intrusiveness and fatigue; (4) setting appropriate expectations for responding to safety and suicide concerns; and (5) safeguarding newly available streams of sensitive data. Our goal is to promote collaboration between prevention scientists, institutional review boards, and community members to safely and ethically harness advancing technologies to strengthen the impact of prevention science.

Keywords: Human subjects, Prevention, Technology, Ethics, Common rule regulations, IRB


Advances in Internet-enabled connectivity and computing offer new opportunities (Brown et al. 2013; Mohr et al. 2014) across all phases of the prevention research cycle (Kellam et al. 1999), from generative research to the dissemination and implementation of interventions. Technological advances also create new opportunities to promote the welfare of human subjects. Proposed changes to modernize the federal rules that govern human subjects research in the USA (US Department of Health and Human Services 2015) underscore the need to re-evaluate and update our operating procedures (Behnke 2006) in prevention science to encourage broad and well-informed research participation in a digital age.

In this article, we begin by briefly reviewing key guiding principles pertaining to prevention research with human subjects and proposed changes to the rules that govern such research in the United States. After establishing this context, we focus on benefits afforded by technology to promote these principles through the following: (1) research with hard-to-reach populations through new venues and delivery platforms and (2) data collection and interventions that reach more broadly and deeply into participants’ lives. We then outline five human subjects challenges that prevention researchers must surmount to realize the gains that technology-enabled research promises, and offer technological and other solutions to these challenges that can empower participants and strengthen the protections that we provide. The challenges are as follows: (1) achieving adequate informed consent with procedures that are acceptable to participants in a digital age; (2) balancing opportunities for rapid development and broad reach with gaining adequate understanding of population needs; (3) integrating data collection and intervention into participants’ lives while minimizing intrusiveness and fatigue; (4) setting appropriate expectations for responding to safety and suicide concerns; and (5) safeguarding newly available streams of sensitive data. Our goals are to help prevention scientists harness advancing technologies to strengthen human subjects protections and expand the impact of prevention science.

Salient Human Subjects Challenges in Prevention Science

The Belmont Report (US Department of Health and Human Services 1979) was written in response to ethical malfeasance and articulated three guiding principles: justice, respect for persons, and beneficence. These principles now govern human subjects research across federal agencies that fund or conduct research (US Department of Health and Human Services 1991) and serve as a guide for integrating Internet-based technology into prevention research practices. Below, we briefly review these principles as they pertain to prevention research and discuss current efforts by the US government to modernize the federal rules that codify them.

Justice emphasizes equitable selection of participants, so that both the burdens and benefits of research are fairly distributed. In prevention research, concerns about justice arise in several ways. For example, research has historically under-represented women and minorities (Yancey et al. 2006). The term “scientific equity” describes the need for equality and fairness in the scientific knowledge produced, which can lead to empirically driven policies to overcome disparities (Brown et al. 2013; Perrino et al. 2014). One threat to justice in prevention science is the risk of inadvertently increasing stigma against a target subpopulation, as happened when research on alcohol use among the Inupiat Indians of Barrow, Alaska (Foulks 1989), was leaked and sensationalized, causing economic and other harms (Hodge 2012). This problem arises anew with new communication technologies because they simultaneously provide avenues for promoting justice by reaching underserved and marginalized populations, such as lesbian, gay, bisexual, and transgender (LGBT) youth (Silenzio et al. 2009), while unearthing controversial, possibly stigmatizing data.

Respect for persons centers on research participants’ autonomy, as well as the researcher’s obligation to protect those with diminished autonomy (e.g., children or disabled individuals). Meaningful interaction is at the heart of an informed consent process that achieves respect for persons. Informing participants about the requirements and risks of participation is necessary, but assuring “understood consent” (Bhutta 2004) is the true goal. The researcher bears responsibility for determining a candidate’s competence, comprehension, and appropriateness for the study.

The population health focus of prevention research can present special challenges for protecting autonomy and ensuring informed consent. Sometimes, formal consent of every individual affected by a large-scale intervention is unachievable, as with group-based preventive interventions that target entire communities (Wyman et al. 2014). As discussed below, respect for persons can be advanced by new technology-enabled methods for disseminating study information, ensuring comprehension, and monitoring ongoing consent and opt-out decisions. At the same time, technology use raises ethical dilemmas when it allows researchers access to information about participants’ lives with minimal interaction, as when interventions are embedded deeply and invisibly into daily routines.

Beneficence emphasizes the obligation to maximize possible benefits and minimize possible harms to individuals and society, including a loss of knowledge if the research is not undertaken at all. A key aspect of beneficence is the mandate to “do no harm.” Prevention researchers face particular challenges here because their activities frequently provide intervention to large populations in which most members are asymptomatic and will not become disordered. A leading prevention science paradigm focuses on reducing malleable risk processes antecedent to disorders (Kellam et al. 1999) in broad populations (i.e., universal prevention). Evidence that some universal interventions can be beneficial to certain population subgroups while harmful to others (e.g., universal middle school substance abuse prevention; Sloboda et al. 2009) underscores the tension between adherence to the principle of beneficence and the need to seek and understand strong conceptual theories. The problem is not new. Similar challenges have appeared in programs targeting adolescents at risk for eating disorders by exposing low-risk youth to information which may undermine healthy eating habits (e.g., learning that purging is used by some in an attempt to manage weight; O’Dea and Abraham 2000). As discussed below, the ability to monitor progress remotely and respond to safety concerns promotes beneficence in prevention research. Conversely, conducting research with participants remotely introduces risks of a mismatch with population needs and, therefore, a potential risk of iatrogenesis.

Respect for privacy and confidentiality are core to human subjects protection and cut across all three of the Belmont principles. Privacy is the right to control access to information about oneself. As a personal right, it is defined subjectively, making standards fluid and culture bound. The regulations governing privacy are based on “reasonable expectations” that participants are not being observed or recorded or that information that they provide will not be made public (US Department of Health and Human Services 1991). In prevention research, privacy is most often threatened in the recruitment process, when potential participants may be approached based on information or in contexts that they do not wish to be known, such as a health clinic or juvenile court records, or information posted to a social media website.

Confidentiality refers to the protection of identifiable information. As discussed below, advances in technology make it possible to protect certain aspects of participants’ privacy and confidentiality more effectively than ever, while paradoxically putting other personal information at greater risk of disclosure. Modern security measures can guard against accidental disclosure and data theft, but large streams of data mean greater risk of participants being identified based on combinations of reported or released information, known as deductive disclosure (Sieber 2006).

Federal Policy for the Protection of Human Subjects: NPRM

Although the core principles of human subjects protection remain relevant and binding over time, the application of these principles must evolve with the state of science, society, and technology. In September 2015, sixteen federal agencies issued a Notice of Proposed Rulemaking (NPRM, US Department of Health and Human Services 2015) with the aim of modernizing, strengthening, and streamlining the federal policy, known as the Common Rule Regulations (US Department of Health and Human Services 1991). The NPRM, along with an executive summary of the goals and provisions, is available on the HHS website (US Department of Health and Human Services 2015).

While extensive policy-making steps lie ahead and ramifications are not yet known, the Notice of Proposed Rulemaking (NPRM) signals the direction in which human subjects regulation is going (e.g., Emanuel 2015; Hudson and Collins 2015) and therefore merits discussion with regard to technology in prevention science. Three broad areas are relevant. First, the NPRM proposes to make informed consent documents more transparent and concise and would require researchers to post consent documents on a public government website. New standards present an opportunity for updating online consent processes, which heretofore have mostly mimicked the length and density of traditional paper documentation. Computer and phone screens provide a blank canvas for attractive and interactive audio or visual consent presentations, potentially aiding transparency and communication. But, as discussed below, challenges remain in keeping participant burden low and interactions brief. Second, a number of proposed changes could reduce institutional review board (IRB) oversight of lower-risk online interventions, potentially increasing the degree to which prevention researchers will be trusted to self-monitor. For example, the NPRM creates new categories of excluded research, including “benign interventions with adults” and “secondary use of identifiable private information that was collected for non-research purposes.” The NPRM would also reduce IRB oversight for other low-risk research, including the proposal to eliminate continuing review for many studies. Third, for low-risk studies where confidentiality is the primary concern, the NPRM would decrease the IRB role and shift the burden of participant protection to data security teams—allowing IRBs to focus their attention on higher-risk studies. Changes along these lines could increase interaction between prevention researchers and data security experts and increase the need for researchers to have the requisite background knowledge to evaluate options offered by technical experts.

Opportunities: New Venues, Delivery Platforms, and Populations

Researchers now have access to populations of individuals around the world via Internet-networked communication devices. Delivering effective prevention programs to minority, marginalized, and geographically remote populations holds enormous potential for reducing health disparities and promoting justice and scientific equity in human subjects research (Brown et al. 2013; Muñoz 2010; Perrino et al. 2013). By the end of 2014, 40 % of the world’s population was expected to have wired-broadband Internet access (International Telecommunications Union 2014). Mobile access is increasing even faster, with mobile-broadband subscriptions expected to reach 2.3 billion globally within a year of that report (International Telecommunications Union 2014). Adoption of mobile Internet-connected devices among US minority groups is especially rapid. Although legitimate concerns exist that some subgroups could get left behind, minority and marginalized groups that might have missed benefits from previous technologies appear to be participating robustly in the mobile revolution. A greater proportion of African Americans and Latinos than Whites use their mobile devices as their primary means of accessing social networking, email, and entertainment. The gaps in overall Internet access between Whites and minorities (Smith 2015) and young and elderly (Gilleard et al. 2015; Smith 2014) are disappearing, and other marginalized groups such as immigrants, migrant farm workers, and homeless youth are active users of mobile technology (Price et al. 2013; Rice et al. 2011; Welcoming Center for New Pennsylvanians 2012). Broader participation in prevention research benefits society, since scientific knowledge will be more widely generalizable.

Online recruitment and intervention occur primarily via three online platforms, each of which has distinct advantages for reducing disparities: public websites and services, online software retailers, and social media. A growing number of studies have demonstrated the efficacy of delivering interventions on these platforms (Mohr et al. 2013a). First, self-help websites can attract populations seeking interventions. For example, Muñoz and colleagues reached individuals across the English- and Spanish-speaking world with a public self-help website that has proven successful in reducing smoking (Muñoz et al. 2006). The website was free and open to the public and invited voluntary “opt-in” participation in research on the program. Websites dedicated to particular health issues provide opportunities to identify and recruit at-risk individuals amenable to online interventions. For example, Mood Gym (Christensen et al. 2004) teaches cognitive behavioral skills to prevent depression. Crisis text services, such as Crisis Text Line (Crisis Text Line 2015) and the Veterans Crisis Line (US Department of Veterans Affairs 2014), attract new populations of at-risk individuals and generate vast quantities of data that researchers can use to understand the needs of individuals in crisis and discover new ways to help them in the short and long term.

Second, online retail sites, such as the Apple App Store, give access to large, active customer bases. These are new venues for research and intervention delivery. A growing number of researchers are releasing their research applications on these stores. Some require potential participants to contact a research coordinator to unlock the app, while others invite people to participate in the study but allow those who do not want to consent to continue to use the apps. Apple released ResearchKit, an open-source software framework that supports in-phone consenting and manages assessments (Ritter 2015); companies have since announced plans to port the framework to the Android smartphone operating system (e.g., Patel 2015).

The format for software-based interventions can vary widely—from highly text-based versions of existing health-promotion programs (e.g., a virtual behavior therapy coach; Rizvi et al. 2011) to graphics-based video games (e.g., a diabetes management game for children that involves running from and chasing monsters; Garde et al. 2015). The nexus of prevention and commerce has created new opportunities for partnerships between academic researchers and commercial entities, bringing resources to accelerate the development of research-supported and evaluated interventions (Mohr et al. 2013a), reducing disparities (justice), and increasing the public health impact (beneficence).

Third, social media applications (Facebook, Twitter, Instagram, Qzone, and Weibo) have fundamentally changed how people connect socially, creating new virtual communities. These communities offer access to difficult-to-reach populations and the opportunity to study network effects, which have demonstrated importance in prevention (Valente 2010). For example, suicide prevention researchers used network mapping over MySpace to identify and contact a “hidden population” of LGBT youth at risk for suicide (Silenzio et al. 2009). Network recruitment methods include “respondent-driven sampling” (Homan et al. 2013) to identify and recruit hard-to-reach, at-risk populations. Similarly, sexual health researchers have used Grindr, a messaging application geared toward gay and bisexual men, to target and recruit men who have sex with men (Burrell et al. 2012; Landovitz et al. 2013; Rice et al. 2012). Remote recruiting avoids the stigma that some participants experience with in-person recruitment, thereby reducing potential harm and burden (beneficence). In addition to these existing platforms, the National Institutes of Health announced new funding for mobile health research infrastructure (National Institutes of Health 2015). This infrastructure, along with NPRM rules that would exclude several new categories of low-risk research from IRB review, is likely to accelerate mobile health research and ensure that mobile-mediated participant recruitment, consent, and intervention will become increasingly broad and common.

Opportunities: Interventions and Data Collection: Anywhere and “Everyware”

Networked communication devices—computers, smartphones, and sensing devices (e.g., geolocation or biosensors) that send and receive information over the Internet—open up new possibilities for promoting the welfare of human subjects. Everyware (Greenfield 2006) refers to a state of society and technology (rather than any particular class of hardware) in which networked devices become so ubiquitously embedded into everyday objects that information processing “dissolves in behavior.” Currently a theoretical extreme, Everyware identifies an ongoing trend. Passive sensing and data collection capability is already in many everyday objects—thermostats that detect movement and living patterns, light bulbs that change color when your spouse pulls into the driveway, and watches that track heart rate and activity. This deep integration of software provides ever-increasing ways to both collect and distribute prevention information. First-generation research is underway, for example, to use phone-based activity sensor (accelerometer) data to detect and respond to depression cues, such as decreased movement/activity (Saeb et al. in press), and contact lenses to continuously monitor blood sugar levels (Otis and Parviz 2014). For the intervention opportunities to be realized, an enormous amount of newly available, individualized data will have to be mined, interpreted, and responded to, but the technological capability is there.

Prevention research that leverages a broad array of Everyware devices will have distinct advantages for monitoring and responding to concerns about safety, iatrogenesis, and implementation quality, while simultaneously reducing participant burden. First, networked devices can give investigators access to an ongoing stream of information about participant risk, safety, and responses to intervention—generating early decision points. Such “streaming” could allow researchers to detect adverse responses or safety risks more quickly, rather than learning of poor responses only after endpoint data are collected and analyzed. In the case of safety or suicide risks, networked devices can be used to communicate key data, such as location, identity, and symptoms in emergency situations. Natural language-processing researchers have begun testing software to detect suicide concerns in text-based communication and to alert proctors immediately (Dinakar et al. 2015; Pestian et al. 2010). Other safety concerns, such as online bullying, can be detected in a similar fashion (Dinakar et al. 2012). Practical challenges remain; however, it is at least theoretically possible for researchers to use passively collected streams of data to mitigate risks, adjusting or initiating additional communication while a trial is still in progress. In most of our current non-technologic preventive interventions, we generally make only one type of error: not recognizing risk when it is there. Technology-based monitoring may potentially reduce this failure-to-recognize type of error but could increase false positives and result in other types of harm, including stigma.
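
As an illustration of the kind of text monitoring described above, the minimal Python sketch below flags messages for human review using simple keyword rules. It is not the cited detection software, which relies on trained natural language processing models; the phrase list, function names, and alert handler here are all hypothetical.

```python
# Minimal sketch of automated risk-flagging in a text stream (illustrative only;
# deployed systems use validated classifiers, not simple keyword rules).
import re

# Hypothetical phrase list; a real system would use a trained, validated model.
RISK_PATTERNS = [r"\bkill myself\b", r"\bend it all\b", r"\bno reason to live\b"]

def flag_message(message: str) -> bool:
    """Return True if a message matches any risk pattern."""
    return any(re.search(p, message, re.IGNORECASE) for p in RISK_PATTERNS)

def monitor_stream(messages, alert):
    """Scan an incoming message stream and invoke a human-review alert."""
    for msg in messages:
        if flag_message(msg):
            alert(msg)  # escalate to a trained responder; never auto-intervene

if __name__ == "__main__":
    monitor_stream(
        ["had a rough day", "there's no reason to live anymore"],
        alert=lambda m: print("FLAGGED for human review:", m),
    )
```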

In a similar vein, technologies can help researchers promote implementation quality, even for programs delivered in local communities. Implementation scientists are developing methods that use smartphone microphones, voice recognition, and computational linguistics to study and enhance the fidelity and competence with which community implementers deliver intervention components. Transcripts of family visits are scanned automatically for linguistic patterns that are linked to high fidelity (Gallo et al. 2015). Further developing such capabilities is critical because the welfare of participants depends on the quality and safety of prevention program delivery.

Finally, specific tailoring and “unobtrusive measures” (Webb 2000) have the potential to reduce participant fatigue and burden—key aspects of beneficence. Even universal interventions can be personalized because access is at an individual level. Tailoring can reduce time wasted on irrelevant or mismatched material. For example, we anticipate that technology-supported tailoring will allow scaling of highly effective parenting programs (e.g., Pantin et al. 2009; Wolchik et al. 2013). These programs have been challenging to deliver to large portions of the population because they require specific matching to family needs. Participant burden can also be reduced with computerized adaptive testing, which greatly diminishes the number of items necessary to achieve reliability (Gibbons et al. 2008), as well as with passive data collection. Ambient, wearable, or implantable devices allow participants to “set it and forget it.” Access to multiple streams of real-time data has the potential to deliver interventions at the most impactful times and places, requiring participants’ time and attention only in a targeted fashion (Pejovic and Musolesi 2014). Such use of real-time data to inform real-time delivery of an intervention is known as a just-in-time, adaptive intervention (JITAI; Nahum-Shani et al. 2014). For example, we envision just-in-time intervention to prevent the spread of HIV among intravenous drug users using geolocation sensors (Brown et al. 2013).
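
To make the JITAI concept concrete, here is a minimal Python sketch of a trigger rule that prompts a participant only when passively sensed activity drops well below a personal baseline, and never during quiet hours. The data structure, threshold, and quiet-hours window are illustrative assumptions, not elements of any cited system.

```python
# Illustrative just-in-time adaptive intervention (JITAI) trigger: request the
# participant's attention only when passive sensing suggests it is warranted.
from dataclasses import dataclass
from statistics import mean

@dataclass
class ActivitySample:
    steps: int  # hourly step count from a phone or wearable sensor
    hour: int   # hour of day, 0-23

def should_prompt(history: list[ActivitySample], latest: ActivitySample,
                  quiet_hours=range(22, 24)) -> bool:
    """Trigger a supportive prompt when activity falls well below the personal
    baseline, but never during quiet hours."""
    if latest.hour in quiet_hours:
        return False
    baseline = mean(s.steps for s in history) if history else 0
    return baseline > 0 and latest.steps < 0.3 * baseline
```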

Challenge No. 1: Achieving Adequate Informed Consent with Procedures that Are Acceptable to Participants in a Digital Age

Changes to commonly used informed consent practices are needed to achieve the benefits mentioned above and the transparency and reduced burden that the NPRM aims for. Traditionally, participants are presented with a single detailed document containing a study’s purpose, risks, benefits, data storage, confidentiality, and compensation plans. Comprehensive consent is requested at first contact with the participant.

Although providing highly detailed information upfront theoretically promotes autonomy, applying this approach in the digital setting often clashes with user expectations. This can result in reduced participation and biased samples, which reduce scientific value and waste participants’ resources—a failure to promote beneficence. For example, about half of participants who downloaded a mood management app from the Google Play store (Center for Behavioral Intervention Technologies 2015) as part of a quality improvement project refused to sign an in-app consent (by typing in their name) that would have allowed researchers to collect usage data (D. Mohr, personal communication, May 18, 2015). Opt-in research participation involving publicly available websites and apps is increasingly common, making it a good target for standardization, with procedures that are acceptable to participants in a digital age.

Second, presenting consent information in a comprehensive manner at first contact can result in diminished comprehension. The most common strategy for achieving consent in online studies is to present an extensive “click-through” agreement (an information screen with a button to signal agreement), but these can be problematic. Research has demonstrated that users are disinclined to read long-form text on a computer screen and tend to misjudge their comprehension compared to the same text printed out (Ackerman and Goldsmith 2011). Instead, people reading electronic content on smaller devices employ a “scan and skip” approach. Less than 50 % of adults presented with a typical click-through actually read the entire document before clicking to continue (Böhme and Köpsell 2010).

Researchers at Facebook, Inc. and Cornell University investigated the network spread of positivity and negativity by manipulating some users’ “News Feeds” (the running list of posts by friends) and measuring the valence of users’ subsequent posts (Kramer et al. 2014). Even though the study “was consistent with Facebook’s Data Use Policy to which all users agree” (Kramer et al. 2014, p. 8789), the study still caused controversy. There was widespread criticism in the media, scrutiny from Congress and the Federal Trade Commission, and an eventual expression of concern from the journal’s publisher. The Cornell University IRB did not review the study because faculty had access to results, but not individual data (Cornell University Media Relations Office 2014). The lead researcher later acknowledged, “I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.” (Kramer 2014)

Highly publicized episodes like this one undermine public confidence, and the NPRM seeks to increase transparency to avoid such problems. In the case of Facebook, the click-through for research participation was very general and hidden in the initial user registration process. Click-through agreements may be appropriate when the potential risks of participation are minimal or when more elaborate consent procedures would be burdensome, off-putting, or infeasible. But, even in these cases, researchers can take steps to improve comprehension and decrease the chance of error or manipulation (Kunz et al. 2001). Best practices include the following: allowing participants to view informed consent information in digestible chunks and in easy-to-understand ways, giving participants the ability to review terms after starting participation, offering participants a choice between assent and rejection (not just “click here to accept” but also a “do not accept” button), labeling buttons in meaningful ways (“yes” or “I agree” rather than “continue,” “submit,” or “enter”), and providing participants with notice of the consequences of assent or rejection.

“Critical Junctures” Approach

One alternative to both simple click-through agreements and overly burdensome initial consent procedures is what we call a critical junctures approach. Consent need not be a comprehensive, one-time occurrence. Although a comprehensive initial consenting process is appropriate for some studies (e.g., a trial of a new networked glucose monitor or surgical procedure), many studies could protect participants better by providing information and soliciting consent as the participant reaches critical junctures. Critical junctures, identified for each study a priori as part of human subjects protocols, could include the start of data collection, the start of an intervention module, the first time that a certain type of data is stored (e.g., GPS data), or the close of the study. In this way, consent information is integrated with other communications taking place within the natural flow of an intervention. This approach is consistent with current practices on many mobile devices. For example, on both Android and Apple iOS devices, users give permission for an app to use a microphone, camera, or GPS sensor just prior to its first use—not in one long consent screen the first time that the app is launched. In the case of Facebook, the company could include a broad consent to research in its terms of service agreement that allows users to select types of research that they would like to opt into, then present a simple dialog box when it wishes to conduct a new study or use data collected in the ordinary course of business for scientific purposes.
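
A minimal Python sketch of how juncture-based consent might be implemented, assuming a simple dialog callback. The class, juncture names, and prompt text are hypothetical, but the pattern mirrors the mobile permission prompts described above: each sensitive capability is blocked until the participant consents at the moment it is first needed.

```python
# Sketch of a "critical junctures" consent gate: consent is requested the first
# time each juncture is reached, and every decision is retained for audit.
class JunctureConsent:
    def __init__(self, ask):
        self.ask = ask        # callable that presents a brief consent dialog
        self.decisions = {}   # juncture -> True/False, logged for audit

    def permitted(self, juncture: str, explanation: str) -> bool:
        """Request consent at a juncture's first use; remember the decision."""
        if juncture not in self.decisions:
            self.decisions[juncture] = self.ask(juncture, explanation)
        return self.decisions[juncture]

consent = JunctureConsent(ask=lambda j, text: input(f"{text} (yes/no): ") == "yes")
if consent.permitted("gps_storage", "Store your GPS location for this study?"):
    pass  # begin storing location data only after an affirmative decision
```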

For studies with experimental conditions or greater intrusiveness, key elements of concern can be highlighted in pictorial or video form, at critical junctures. Long-form consent documents can be provided on request, while still presenting online information in brief bullets, infographics, or videos. These changes require researchers and IRBs to define and agree on elements needing special attention and those that may be omitted. Because the cost of making and revising mixed-media consent materials is higher than text, an iterative process incorporating IRB review may be required. Evolving electronic tools offer another approach to consent. For example, EduConsent™ (Systemedicus Inc. 2015) is an iPad-based system using continuous video documentation of a consent session: participants view videos and respond to questions at significant steps to demonstrate understanding. Although not yet widely available, adoption of such tools could markedly reduce variation across researchers and increase true informed consent.

Challenge No. 2: Balancing Opportunities for Rapid Development and Broad Reach with Gaining Adequate Understanding of Population Needs

Greater access necessarily implies the potential for greater harm when an intervention does not match the needs of a population. Thus, matching tech-enabled prevention programs to population needs and culture is critical. Unfortunately, remote interaction can distance researchers from the communities that they hope to serve. Geographical and social context may be missing, leading to cultural mismatches, misunderstandings, and conflicts with family or community values. Fewer direct contacts with research participants in their natural environment also mean fewer natural mechanisms for detecting and correcting these problems. High dropout rates in online interventions (Muñoz et al. 2006) are difficult to interpret when participants are not being observed directly. Lack of follow-through might simply indicate “window shopping” behavior, similar to examining but not purchasing a self-help book in a bookstore, but there is a potential for unaddressed harms.

Limiting interactions to computer-mediated exchanges has the potential to erode the human connection, empathy, and engagement between researchers and participants that have been fundamental in prevention science (Rohrbach 2014). Time spent in the field, learning with and from individuals in a target population, builds trust between researchers and participants, engendering a sense of obligation and accountability in researchers to serve the needs of the communities in which we work. Personal contact can have scientific value as well, leading to unexpected discoveries, new ideas, and personal rewards, which often fuel the best scientific work.

Researchers using remote interventions can take steps to avoid misunderstandings and mismatches and promote the human connections with participants that motivate beneficence. First, having a well-specified conceptual model articulating targets based on empirically identified needs in a population is essential (Fishbein et al. 2001). Online methods for recruiting and studying populations of interest can aid in shaping this conceptual model. When full-scale studies of population needs are not feasible, researchers can still use the Internet to collect preliminary data. Crowdsourcing services (e.g., Mechanical Turk, an online marketplace of distributed workers who respond to open calls to perform small tasks for pay) and commercial survey respondent pools (e.g., FluidSurvey) provide researchers with ready-made platforms to gather “quick and dirty” preliminary data through questionnaires and response harvesting (Mason and Suri 2012). Because these latter methods are a form of convenience sampling, researchers should be clear about sampling frames generated by these methods.

Second, direct experience and fieldwork with representative members of the target population during development can spark ideas for enhancing the benefit of an intervention (beneficence). The Play2Prevent Elm City Stories development process exemplifies the scientific and ethical value of extensive community-based development work (Hieftje et al. 2012):

Researchers at Yale University Play2Prevent Lab developed Elm City Stories, a video game designed to prevent risky teen behavior leading to HIV infection. To gain insights into the living environments, neighborhoods, and risky situations that their audience faced, they engaged teens in novel and creative activities such as “Photo feedback project” (teens taking and sharing pictures of their homes, neighborhoods, favorite hairstyles, peers, important adults), “Storytelling graphic illustration” (projective storytelling), and “My Life” (visual timeline of future aspirations and life goals). These activities directly informed the artwork, scenarios, and prevention strategies that appear in the resulting video game.

Although funding for such elaborate development is not always available, the principles of user-centered design (Abras et al. 2004) can guide the development process to whatever extent is feasible, assuring that the needs, wants, and limitations of the target community are addressed at each stage. Usability testing involves direct observation of representative members interacting with an application or website (Brinck et al. 2001). During early design phases, in-person user testing under controlled conditions is recommended, especially for applications targeting high-risk populations. At all stages, direct observation, using remote video (e.g., Skype) when necessary, permits investigators to evaluate usability and see where the systems fail to meet users’ expectations.

Higher cost and longer timelines are generally the most significant barriers to staged development. The electronic marketplace moves quickly, and researchers developing interventions must balance accelerated development with adequate testing (Nilsen et al. 2012). One way to achieve this balance is to use a process that we proposed elsewhere, called Continuous Evaluation of Evolving Behavioral Intervention Technologies (CEEBIT; Mohr et al. 2013a, b). CEEBIT is a proposed methodology that continuously monitors the use and clinical outcomes of multiple intervention technologies and could be used to test, prune, and refine different versions of the same intervention. It allows researchers to monitor and evaluate the effectiveness of a range of applications and to eliminate the less efficacious ones. CEEBIT has the potential to improve the match and usefulness of an intervention by allowing step-wise validation and modification in small segments. However, for behavioral intervention technologies to evolve and improve rapidly—and for researchers to study and understand participant responses to different iterations—IRBs will need to appreciate the function and value of iteration and avoid requiring researchers to “lock down” their intervention at the point of consent.
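
The sketch below illustrates one crude way such continuous pruning could work, under a simplifying assumption: retire any deployed version whose mean outcome is clearly inferior to the current best. This is an illustration of the general idea only; the published CEEBIT methodology specifies its own statistical criteria.

```python
# Rough sketch of continuous pruning across concurrently deployed intervention
# versions (illustrative; not the published CEEBIT decision rules).
from statistics import mean, stdev
from math import sqrt

def prune_versions(outcomes: dict[str, list[float]], z: float = 1.96) -> list[str]:
    """Return versions to retire: those whose mean outcome trails the best
    version by more than z standard errors of the difference."""
    means = {v: mean(xs) for v, xs in outcomes.items()}
    best = max(means, key=means.get)
    retired = []
    for v, xs in outcomes.items():
        if v == best or len(xs) < 2 or len(outcomes[best]) < 2:
            continue
        se = sqrt(stdev(xs) ** 2 / len(xs)
                  + stdev(outcomes[best]) ** 2 / len(outcomes[best]))
        if means[best] - means[v] > z * se:
            retired.append(v)
    return retired

# Example: version "B" trails "A" markedly and would be retired.
# prune_versions({"A": [7, 8, 9, 8], "B": [3, 4, 2, 3]}) -> ["B"]
```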

Challenge No. 3: Integrating Data Collection and Intervention into Participants’ Lives While Minimizing Intrusiveness and Fatigue

Interventions that integrate seamlessly into the daily lives of research participants, especially through passive data collection and automatic tailoring, reduce risks of fatiguing participants and wasting time on unneeded elements. But, the risks of continuous monitoring and intervention are not yet known. As consent documents become increasingly streamlined as the NPRM envisions, it will be neither feasible nor desirable to inform participants of every possible risk in consent documents. Thus, researchers will need to be proactive about protecting participants from hidden risks.

Researchers from several different laboratories are developing human activity recognition systems to respond to problems as diverse as falls among the elderly, obesity, and smoking. Systems that use various combinations of accelerometers, gyroscopes, and depth video sensors (cameras that detect depth and 3D distance) are being tested to detect unsteadiness or actual falls and alert the older adult, family members, or caregivers. Dental implants (http://www.medicalnewstoday.com/articles/266402.php) and wearable jaw motion sensors and cameras (Fontana and Sazonov 2013; Sazonov et al. 2013) are being tested to recognize specific jaw movements, hand-to-mouth activity, and food intake for eventual use in promoting weight loss and smoking cessation.
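
As a concrete illustration of the sensing logic involved, the following Python sketch flags a possible fall from three-axis accelerometer samples by looking for a near free-fall interval followed by an impact spike. The thresholds and window are placeholder assumptions; the systems under study fuse multiple sensors and validated models.

```python
# Naive accelerometer-based fall detection: a fall often appears as near
# free-fall followed by a sharp impact spike. Thresholds are placeholders.
from math import sqrt

def magnitude(ax: float, ay: float, az: float) -> float:
    """Overall acceleration in g, from a three-axis accelerometer reading."""
    return sqrt(ax**2 + ay**2 + az**2)

def detect_fall(samples: list[tuple[float, float, float]],
                free_fall_g: float = 0.4, impact_g: float = 2.5) -> bool:
    """Flag a fall if a low-g interval is followed shortly by a high-g impact."""
    mags = [magnitude(*s) for s in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g and any(x > impact_g for x in mags[i:i + 10]):
            return True  # alert the older adult, family, or caregivers
    return False
```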

To be most helpful, passive data collection would need to continuously monitor participants. But, the risks associated with continuous monitoring—even with full consent—are not well understood. As these examples illustrate, continuous monitoring and intervention alter the traditional meaning and commitment attached to being a research study participant. Always-on monitoring removes the boundary between research and the rest of life. Participating in such research is very different from the experience of accepting an interventionist into your home or receiving phone calls from staff administering measures. Both involve potential intrusions, but the experience is qualitatively different. Similarly, methods mentioned above, which monitor using smartphone microphones and speech recognition, could be perceived negatively by both participants and implementers as invading privacy in a “big brother is always watching” fashion. Moreover, these devices capture data from other people in the environment who have not given consent. In other interventions, participants’ awareness of constant background monitoring of sound, video, location, or activity could make participants self-conscious or change their normal behavior in ways that are not yet well understood.

Tailoring interventions through requests for input or personalized “push” notifications, while potentially useful, also has the potential for harm. Researchers have documented the harmful effects of interruptions and information overload on productivity, memory, and emotions (Bailey and Konstan 2006). Interventions that increase human-machine interaction, requiring active responses to information and prompts, could thus introduce new stresses. For example, during the pre-intervention development stages of a text messaging intervention for teenagers (ARP), school staff and parents expressed concerns that ill-timed messages from the prevention program could cause interpersonal conflicts, such as texting prevention messages during a family meal. These anecdotal data led to our decision to avoid texting during school, dinner, and late night hours, even though teenagers stated that they preferred receiving late-night texts. Such trade-offs reflect the ethical tension between providing the most helpful and effective intervention and avoiding potential harm—trade-offs that are best explored and evaluated when researchers take the time to interact directly with stakeholders in developmental phases.
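
A scheduling rule like the one we adopted can be expressed in a few lines. The Python sketch below suppresses prevention texts during school, dinner, and late-night windows and holds them for the next permitted time; the specific hour ranges and function names are illustrative, not the actual study protocol.

```python
# Simple sketch of the quiet-hours rule described above: suppress prevention
# texts during school, dinner, and late-night windows. Windows are illustrative.
from datetime import datetime

BLOCKED_HOURS = set(range(8, 15)) | {18} | set(range(22, 24)) | set(range(0, 7))

def ok_to_text(now: datetime) -> bool:
    """Return True only outside school, dinner, and late-night windows."""
    return now.hour not in BLOCKED_HOURS

def send_or_queue(message: str, now: datetime, send, queue):
    """Deliver immediately when allowed; otherwise hold for the next window."""
    (send if ok_to_text(now) else queue)(message)
```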

In the absence of clear data about the effect of continuous monitoring and of push notifications, researchers can take steps to reduce intrusion and burden and to support participant autonomy (respect for persons). First, researchers should provide a straightforward way for participants to temporarily pause data collection and participation. Second, researchers should be as transparent as possible about data collection and limit it to what is absolutely necessary. Third, because standards for privacy are fluid and culture bound, researchers testing always-on interventions should consult with members of the intended participant group and incorporate measures of fatigue and privacy invasion into all stages of development and field testing. Community advisory boards and other forms of direct input from community members can connect the researchers and IRBs to community standards, revealing values and expectations about privacy, intrusion, and safety. This helps align IRB concerns with participants’ foreseeable and reasonable expectations. For example, online behaviors indicate a willingness to exchange personal information for convenience, services, and personalization. There is a growing awareness that data we all contribute to linked systems provide benefits to society (Pentland 2013). Simultaneously, concerns about online privacy are growing, and a majority of individuals polled report feeling uncomfortable with a perceived loss of control over their personal information (Raine et al. 2013). In the social context of these countervailing and evolving tendencies, applications collecting movement or geolocation data might be familiar and acceptable to many and anathema to others. Thus, navigating these issues requires consultation with members of the intended participant group; researchers should not assume that community standards are already known.

Challenge No. 4: Setting Appropriate Expectations for Responding to Safety and Suicide Concerns

The principles of beneficence and respect for persons underscore the importance of ensuring the safety and well-being of research participants known to be vulnerable (e.g., suicide attempt history), as well as those who become so while participating (Fisher et al. 2002). We have a scientific and ethical responsibility to include people at risk for suicide and other risky behaviors in research to discover and test prevention opportunities and to assure scientific equity for these individuals (Pearson et al. 2001; Perrino et al. 2014). Yet, the scope of responsibility for monitoring and responding to safety risks when interventions are delivered over networked devices is still evolving, and care must be taken to set realistic expectations. The standards developed through small traditional clinical trials may not fit with large-scale online interventions.

Ginger.io is a private company that is developing algorithms to detect health-related patterns in smartphone sensor data. Ginger.io is currently investigating Mood Matters, a depression prevention program that uses activity and communication to alert individuals and their healthcare providers about fluctuations in depression and provides recommendations for responding. Individuals who use the program provide initial self-report information to train the program’s algorithms. The program collects and analyzes data behind the scenes and issues notifications when it determines that the user would benefit from taking action (such as contacting a friend or family member, exercising, or completing exercises assigned by a therapist or “coach”).

What responsibilities do researchers using Ginger.io have for detecting, evaluating, and actively responding to suicide-related material? One primary decision is establishing cut points in measures that would trigger real-time interventions. Where a known-vulnerable population is targeted, such as adolescents with identified risk (e.g., suicide attempts), concerns and monitoring increase since the base rate and consequences of suicidal ideation and behavior are likely to be higher; however, researchers must also weigh the risks of too much direct monitoring: false positives, unwanted intrusions, and perceived invasion of privacy. Well-intentioned suicide prevention efforts by Facebook and its partners have been criticized along these lines. For example, a consumer watchdog agency publicized (PRNewswire 2015) an instance where a Facebook user was supposedly hospitalized inappropriately as a result of a friend using the “Report post” function that Facebook and its suicide prevention partners announced in 2015 (Facebook 2015). The veracity of the story is unconfirmed, but the media attention to this supposed incident reflects concerns about unintended negative consequences that could result from a social media company partnering with suicide prevention advocates to respond to suicide concerns.
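
One way to operationalize a cut-point decision while limiting false positives is to require sustained elevation rather than a single spike. The Python sketch below is a hypothetical illustration of that trade-off; the cut point, window, and score scale are assumptions, not validated values.

```python
# Illustrative cut-point logic for real-time alerts: require elevated scores on
# consecutive assessments, trading a slower response for fewer false positives.
def should_alert(scores: list[float], cut_point: float = 15.0,
                 consecutive: int = 2) -> bool:
    """Alert only when the last `consecutive` scores all exceed the cut point."""
    recent = scores[-consecutive:]
    return len(recent) == consecutive and all(s >= cut_point for s in recent)

# Example: one spike alone does not trigger; sustained elevation does.
assert not should_alert([8.0, 16.0])        # single elevated score
assert should_alert([8.0, 16.0, 17.5])      # two consecutive elevated scores
```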

If safety risks could be reliably detected, what responses are even possible? As with any human subjects decision, the answers will depend on the risks, benefits, and feasibility of options. For in-person interventions, safety protocols usually include standard responses (providing crisis intervention, telephone support, and referrals to emergency and mental health services). No such set of standard options yet exists for remote, asynchronous interactions. For example, in the USA, many states require in-state licensing to provide services, so intervening across state lines poses unique challenges. Furthermore, monitoring and responding to safety concerns might not be feasible for interventions capable of reaching very large participant groups. For example, in an ongoing randomized trial testing a universal suicide prevention program (Wyman et al. 2010) in 40 high schools, nearly 20,000 student participants in two different states completed an online assessment of suicidal thoughts and behaviors in the past 12 months. Monitoring and intervening with high-risk individuals were not feasible. In this case, researchers provided information about accessing mental health resources to all participants within the online survey.

Another safety issue arises from technology failure rates. Websites or services that participants rely on could experience outages. It is reasonable to expect that technologies developed on small research budgets will experience “bugs,” downtime, or even dramatic failures from time to time. When these occur, the consequences range from client annoyance and frustration to more serious failures in providing needed services. Mitigating frustrations is fairly straightforward: set appropriate expectations about problems and provide technical support. For high-risk populations and newer interventions, the authors believe that researchers should budget enough to provide technical support and “customer assistance” for the duration of a research study and should specify the support available after the study is completed. Technology failures that could hinder access to needed interventions or information, on the other hand, usually require contingency planning. For example, one app currently in use provides mobile storage of a suicide safety plan and emergency contacts. Another analyzes glucose monitor results. Although the goals are quite different, failure of either of these apps to function could have serious negative consequences for the user. Researchers seeking to study the safety-planning app could provide recommendations and instructions for keeping a backup copy of plans and emergency contacts. Researchers studying new software for analysis of blood glucose could require patients to demonstrate that they have a secondary means of testing before enrolling them.

Challenge No. 5: Safeguarding Newly Available Streams of Data

The NPRM states the following:

Society is in an information age. In all facets of one’s life information... is generated, stored, shared, analyzed, and often provides tremendous societal value. People share information about themselves with large numbers of people with the click of a button, and this trend of rapid and widespread sharing is only likely to grow. The increase in concern about unauthorized and inadvertent information disclosure, in combination with newer research techniques that increase the volume and nature of identifiable data suggest the need for the Common Rule to more explicitly address data security and privacy protection. (US Department of Health and Human Services 2015, §I.C.)

Public trust in prevention researchers depends on our ability to protect highly sensitive data from unintentional disclosure, and new regulations will require a greater degree of attention to security and privacy. Technology-heavy interventions and “big-data” explorations, especially those capturing or analyzing contextual information in the background, can generate sensitive data, both on the device and transmitted remotely to investigators. Sensor and geolocation data are especially sensitive and, in the wrong hands, could be abused or even lead to danger for the participant. Deductive disclosure (Sieber 2006) refers to the identification of participants based on triangulating combinations of reported or released information. Data that are geographically referenced, longitudinal, or multilevel (e.g., student, teacher, school, district or patient, clinic, community) are at higher risk for deductive disclosure. Adhering to reporting standards is one important way that researchers can guard against deductive disclosure (Inter-University Consortium for Political and Social Research 2012; Samarati and Sweeney 1998), but making use of evolving computational tools will help researchers address this concern more effectively.

New tools are available to protect participant confidentiality in single studies as well as in multi-study syntheses. These tools will be of great use to researchers as government requirements for data security are standardized and articulated, as the NPRM envisions. First, computational solutions can conceal individuals in a population. For example, we (CHB, CG) used a computer program to remove identifiers in a large dataset of text messages and log notes collected by a community partner (Wang et al., Automatic Classification of Communication Logs into Implementation Stages Via Text Analysis, unpublished). Embedded in the text were names and locations that could potentially have been used to identify individuals and link them to actions that they took, which was prohibited in the IRB agreement. Prior to analyzing these data, we “scrubbed” this dataset by automatically (a) sorting and enumerating every word; (b) identifying all names of persons, organizations, email addresses, and physical locations; (c) permuting these names randomly; and (d) replacing them with unique identifiers, such as PERSON1424 and LOCATION3449. This process was carried out “in-house” by data management personnel; in other words, the identified data were scrubbed before they left the agency that collected them. The table of names and tokens remains with the data collector. This and similar methods for automatic scrubbing allow rich analysis of the entire text, including the tokens, their relationships to one another, and the contexts in which they appear. The process de-identifies information in a cost-efficient, fast manner while maintaining accuracy and richness (Saygin et al. 2006). Scrubbing identifying information can also be done with audio and video recordings of participants. Computational approaches are under development to automatically detect and replace audio-visual information, such as a participant’s voice or face, with a fuzzy/blurred signal that still allows meaningful analysis (e.g., Bitouk et al. 2008; Chan et al. 2013; Gutta et al. 2005).
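
The following Python sketch walks through a simplified version of the scrubbing steps just described: known identifying strings are mapped to randomly numbered tokens, and the name-to-token table stays in-house. It assumes a supplied list of names for clarity; a production pipeline would detect names automatically (e.g., with named-entity recognition), and all names shown are fictional.

```python
# Simplified sketch of in-house scrubbing: replace identifying strings with
# randomly numbered tokens (e.g., PERSON1424) and retain the lookup table,
# which never leaves the agency that collected the data.
import random
import re

def scrub(text: str, names: list[str], label: str = "PERSON") -> tuple[str, dict]:
    """Replace each known name with a randomly numbered token; return the
    scrubbed text and the name-to-token table."""
    ids = random.sample(range(1000, 10000), len(names))  # permuted, non-sequential
    table = {n: f"{label}{i}" for n, i in zip(names, ids)}
    for name, token in table.items():
        text = re.sub(re.escape(name), token, text)
    return text, table

scrubbed, table = scrub("Maria met Jordan at the clinic.", ["Maria", "Jordan"])
# scrubbed -> e.g., "PERSON4821 met PERSON1307 at the clinic."
```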

A second, more extensive method to protect identities is the use of a “hash function,” which is a way to provide an encrypted digital representation based on a combination of identifying information. An example of this is the globally unique identifier (GUID) used by the National Institute of Mental Health (NIMH) to allow linkages of individuals across different datasets in the National Database on Autism Research (Johnson et al. 2010). An algorithm is used to encrypt data drawn from each participant’s birth certificate, including full name, date, and place of birth. This yields a unique GUID that cannot be decrypted to recover the original information. Other studies that enter the same information would generate the same GUID, so individuals can be linked across different studies and analyzed without any indication of who the person is.
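
A conceptual Python sketch of the hashing idea: normalized birth-certificate fields are passed through a one-way digest so that the same person yields the same code in any study. This is illustrative only; the actual NDAR GUID algorithm and its safeguards differ from what is shown here.

```python
# Conceptual one-way identifier: the same normalized inputs always produce the
# same code, enabling cross-study linkage without exposing identity.
import hashlib

def pseudo_guid(full_name: str, birth_date: str, birth_place: str) -> str:
    """One-way digest of normalized identifying fields (illustrative only)."""
    normalized = "|".join(
        s.strip().upper() for s in (full_name, birth_date, birth_place)
    )
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:16]

# Formatting differences normalize away, so linkage across studies is stable.
assert pseudo_guid("Ana Ruiz", "1990-02-14", "El Paso TX") == \
       pseudo_guid(" ana ruiz ", "1990-02-14", "el paso tx")
```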

Third, merging records on the same individuals across multiple datasets can be accomplished through the actions of a trusted broker (Boyd et al. 2009). As an intermediary, the broker links records across datasets based on relevant criteria (e.g., exclude all patients who “opt out” of using their medical record for research summaries), strips off identifiers or variables not permitted under an IRB-approved agreement, and makes the resulting data available to permitted researchers. We (PW, CHB) have used a trusted broker system to link longitudinal panel data from youth who are asked about their suicide ideation and behavior and social networks, thereby retaining anonymity.
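
A minimal Python sketch of the broker’s role, assuming two flat datasets keyed by a shared identifier; the field names and opt-out mechanism are hypothetical. Only the broker ever sees the identifier, which is stripped before release.

```python
# Minimal trusted-broker linkage: join on an identifier, drop opt-outs, and
# release only de-identified linked records to permitted researchers.
def broker_link(dataset_a: list[dict], dataset_b: list[dict],
                opted_out: set, id_field: str = "ssn") -> list[dict]:
    """Join two datasets on an identifier, drop opt-outs, strip the identifier."""
    b_by_id = {row[id_field]: row for row in dataset_b}
    linked = []
    for row in dataset_a:
        pid = row[id_field]
        if pid in opted_out or pid not in b_by_id:
            continue
        merged = {**row, **b_by_id[pid]}
        merged.pop(id_field)  # identifiers never leave the broker
        linked.append(merged)
    return linked
```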

One important limitation in planning data security protections is the lack of security of participants’ own devices. On the research institution side, disclosure by accident, hardware theft, or system intrusion can be mitigated by strong information security practices; however, the greatest threat to data security for participants is often the data stored on the participant’s own device. While traditional efficacy studies separate data collection from interventions, software-based interventions often place outcome data collection within the application itself, in part to reduce barriers to providing such data. Since mobile devices are often used in public venues or shared among family members or friends, interactions may be revealed inappropriately. Researchers cannot tell whose eyes are on the device at any given moment, opening the participant up to unintended exposure. Furthermore, many people do not secure their mobile devices from others in case of loss or theft (e.g., with strong passcodes and automatic or remote erasure). Researchers can require passcodes to open applications, but this may interfere with preferred interaction “styles,” creating barriers to effectiveness and adoption. Thus, researchers have little control over this threat to confidentiality other than by alerting participants and reminding them at critical junctures about data stored on the device. Such risks are not unique to mobile-mediated interventions—behavioral interventions have long used workbooks and journals that could be discovered—nevertheless, the concern may be heightened because mobile devices are more attractive to thieves, and digital information can be more easily exported and distributed (e.g., posted on a website) by a malicious person.

Evolving Human Subjects Procedures to Match Current Needs: Progress Through Flexibility and Collaboration

Prevention science is in the midst of a technological revolution, and the USA is on the cusp of the first major update to federal human subjects policy since 1991 (Hudson and Collins 2015). This is an apt moment to consider the human subjects opportunities and challenges presented by technology in prevention research. Protecting human subjects aligns with the goals of prevention science—to maximize benefits and minimize risks to a population—and good prevention science is inseparable from proper human subjects protection. As the Facebook informed-consent controversy illustrates (see above), researchers must be knowledgeable, sensitive, and proactive above and beyond what may or may not be required by IRB review. In that case, the Cornell IRB concluded that no review was required because their faculty member had access only to results, not to any individual, identifiable data (Cornell University Media Relations Office 2014). As partnerships with big-data companies increase and IRB oversight of lower-risk studies decreases (under proposed new rules of the NPRM), such situations will become more common. Thus, prevention scientists, IRBs, commercial partners, and community members all have a vested interest in the integrity and flexibility of procedures designed to protect human subjects.

Table 1 summarizes selected goals, ethical tensions, and questions that prevention researchers, IRBs, and community members should consider together when planning a study involving technology. While the tensions are not fundamentally new, finding solutions to them in a new context is a critical challenge that we now face. Research on the impact of proposed solutions on comprehension, participation, and scientific productivity is needed. Prevention researchers can aid the development of an evidence base by reporting and examining their protocol decisions in empirical studies. Publicly posted consent documents, as proposed by the NPRM, could facilitate such research. Creative, realistic, and evidence-informed solutions could have benefits beyond studies that use technology by addressing concerns about IRB inflexibility and conservatism that predate the Internet. Thus, new technologies challenge us to update old assumptions and operating principles so that prevention science can continue to advance the well-being of research participants and their communities.

Table 1.

Technology and human subjects protection: goals, ethical tensions, and protocol considerations

Human subjects goal: Achieving adequate informed consent with procedures that are acceptable to participants in a digital age
Ethical principles in tension: Full comprehension and active consent (autonomy/respect for persons); matching participants’ expectations and preferences for computer-mediated interactions (respect for persons); scientific and social value of representative samples with diverse participants (justice, beneficence)
Considerations for human subjects protocols: Have the researchers identified “critical” junctures where consent might be achieved for specific procedures or stages? Have researchers followed best practices for presenting online documentation? Have researchers considered the risk of participant dropout when planning online consent procedures?

Human subjects goal: Balancing opportunities for rapid development and broad reach with gaining adequate understanding of population needs
Ethical principles in tension: Preventive interventions accessible anywhere, anytime, to anyone (justice, beneficence); avoiding and detecting harm in interventions deployed remotely and asynchronously (respect for persons)
Considerations for human subjects protocols: Do early stages of development include input from, and direct contact with, the intended population? Are there mechanisms for detecting cultural mismatches or other harms to subgroups? Do research protocols allow flexibility for rapid iteration in response to feedback and discoveries?

Human subjects goal: Integrating data collection and intervention into participants’ lives while minimizing intrusiveness and fatigue
Ethical principles in tension: Reducing participant burden through passive data collection and personalization (respect for persons); increasing impact through frequent interaction (beneficence); protecting participants from the stress of always-on monitoring or intervention (privacy, beneficence)
Considerations for human subjects protocols: Can participants “pause” data collection or their participation without having to withdraw from the study? Have researchers provided adequate justification for all measures?

Human subjects goal: Setting appropriate expectations for responding to safety and suicide concerns
Ethical principles in tension: Protecting vulnerable/at-risk participants (beneficence, respect for persons); including at-risk individuals in research that could benefit them (justice); deploying interventions at massive scale to achieve broad impact for science and society (beneficence)
Considerations for human subjects protocols: Have researchers communicated the timing and scope of response that participants can expect in a crisis? What information and resources will participants be given in advance, and in the moment of a reported crisis? Can participants seek or be directed to help within the app/website/device?

Human subjects goal: Safeguarding newly available streams of data
Ethical principles in tension: Responsibility to take advantage of available data sources to promote public and individual health (beneficence); responsibility to protect the confidentiality of research participants (confidentiality, respect for persons)
Considerations for human subjects protocols: Have researchers explored technological solutions and trusted brokers for deidentifying sensitive data? Do researchers have reporting guidelines that reduce the risk of deductive disclosure (see the sketch following this table)? Have researchers considered and warned participants about the implications of a lost or stolen device?
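
As a concrete illustration of the deductive-disclosure consideration in the final row, researchers can screen a dataset for k-anonymity (Samarati and Sweeney 1998) before release: every combination of quasi-identifiers that will be reported must describe at least k participants. The sketch below is a minimal, hypothetical check with invented field names and thresholds, not a complete de-identification pipeline.

```python
"""Minimal k-anonymity check on quasi-identifiers before data release
(after Samarati & Sweeney 1998). Records and field names are hypothetical."""

from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k=5):
    """True if every combination of quasi-identifier values occurs at
    least k times, so no reported cell isolates fewer than k people."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

records = [
    {"age_band": "14-15", "zip3": "146", "gender": "F", "ideation": 1},
    {"age_band": "14-15", "zip3": "146", "gender": "F", "ideation": 0},
    # ... remaining participants ...
]

if not is_k_anonymous(records, ["age_band", "zip3", "gender"], k=5):
    print("Generalize or suppress quasi-identifiers before release.")
```

A dataset that fails the check can be generalized (e.g., widening age bands) or have rare cells suppressed before release, the generalization and suppression operations that Samarati and Sweeney describe.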

Acknowledgments

The authors wish to thank Jean Poduska and Bill Morrison for their contributions to this work.

Compliance with Ethical Standards

Funding

The authors gratefully acknowledge support from the National Institute of Mental Health (K23MH101449, R01MH091452, R01MH040859, P20MH090318, R01MH095753, R01MH100482), the Centers for Disease Control (R49CE002093), and the National Institute on Drug Abuse (P30DA027828).

Conflict of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

IRB approval is not applicable for this type of study.

Informed Consent

Informed consent is not applicable for this type of study.

References

  1. Abras, C., Maloney-Krichmar, D., & Preece, J. (2004). User-centered design. Thousand Oaks: Sage Publications.
  2. Ackerman, R., & Goldsmith, M. (2011). Metacognitive regulation of text learning: On screen versus on paper. Journal of Experimental Psychology: Applied, 17, 18. doi:10.1037/a0022086
  3. Bailey, B. P., & Konstan, J. A. (2006). On the need for attention-aware systems: Measuring effects of interruption on task performance, error rate, and affective state. Computers in Human Behavior, 22, 685–708. doi:10.1016/j.chb.2005.12.009
  4. Behnke, S. (2006). Ethics rounds: APA’s ethical principles of psychologists and code of conduct: An ethics code for all psychologists. Monitor on Psychology, 37, 66.
  5. Bhutta, Z. A. (2004). Beyond informed consent. Bulletin of the World Health Organization, 82, 771–777.
  6. Bitouk, D., Kumar, N., Dhillon, S., Belhumeur, P., & Nayar, S. K. (2008). Face swapping: Automatically replacing faces in photographs. ACM Transactions on Graphics (TOG), 27, 39. doi:10.1145/1360612.1360638
  7. Böhme, R., & Köpsell, S. (2010). Trained to accept?: A field experiment on consent dialogs. Paper presented at the Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
  8. Boyd, A. D., Saxman, P. R., Hunscher, D. A., Smith, K. A., Morris, T. D., Kaston, M., . . . Rajeev, N. (2009). The University of Michigan Honest Broker: A Web-based service for clinical and translational research and practice. Journal of the American Medical Informatics Association, 16, 784–791.
  9. Brinck, T., Gergle, D., & Wood, S. D. (2001). Usability for the web: Designing web sites that work. Burlington: Morgan Kaufmann.
  10. Brown, C. H., Mohr, D. C., Gallo, C. G., Mader, C., Palinkas, L., Wingood, G., . . . Jacobs, C. (2013). A computational future for preventing HIV in minority communities. Journal of Acquired Immune Deficiency Syndromes, 63, S72–S84. doi:10.1097/QAI.0b013e31829372bd
  11. Burrell, E. R., Pines, H. A., Robbie, E., Coleman, L., Murphy, R. D., Hess, K. L., . . . Gorbach, P. M. (2012). Use of the location-based social networking application GRINDR as a recruitment tool in rectal microbicide development research. AIDS and Behavior, 16, 1816–1820. doi:10.1007/s10461-012-0277-z
  12. Center for Behavioral Intervention Technologies. (2015). IntelliCare: Mental health apps for the 21st century. Retrieved May 19, 2015, from https://intellicare.cbits.northwestern.edu/
  13. Chan, C. H., Tahir, M. A., Kittler, J., & Pietikainen, M. (2013). Multiscale local phase quantization for robust component-based face recognition using kernel fusion of multiple descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 1164–1177. doi:10.1109/TPAMI.2012.199
  14. Christensen, H., Griffiths, K. M., & Jorm, A. F. (2004). Delivering interventions for depression by using the internet: Randomised controlled trial. BMJ, 328(7434), 265.
  15. Cornell University Media Relations Office. (2014). Media statement on Cornell University’s role in Facebook ‘emotional contagion’ research. Retrieved April 18, 2016, from http://mediarelations.cornell.edu/2014/06/30/media-statement-on-cornell-universitys-role-in-facebook-emotional-contagion-research/
  16. Crisis Text Line. (2015). Crisis text line. Retrieved May 15, 2015, from http://www.crisistextline.org/
  17. Dinakar, K., Jones, B., Havasi, C., Lieberman, H., & Picard, R. (2012). Common sense reasoning for detection, prevention, and mitigation of cyberbullying. ACM Transactions on Interactive Intelligent Systems (TiiS), 2, 18.
  18. Dinakar, K., Chen, J., Lieberman, H., & Picard, R. (2015). Mixed-initiative real-time topic modeling & visualization for crisis counseling. Paper presented at the Twentieth ACM International Conference on Intelligent User Interfaces, Atlanta, GA.
  19. Emanuel, E. J. (2015). Reform of clinical research regulations, finally. New England Journal of Medicine.
  20. Facebook. (2015). Facebook safety. Retrieved June 5, 2015, from https://www.facebook.com/fbsafety/posts/817724748265365
  21. Fishbein, M., Hennessy, M., Kamb, M., Bolan, G. A., Hoxworth, T., Iatesta, M., . . . Zenilman, J. M. (2001). Using intervention theory to model factors influencing behavior change: Project RESPECT. Evaluation & the Health Professions, 24, 363–384.
  22. Fisher, C. B., Pearson, J. L., Kim, S., & Reynolds, C. F. (2002). Ethical issues in including suicidal individuals in clinical research. IRB: Ethics & Human Research, 24, 9–14. doi:10.2307/3563804
  23. Fontana, J. M., & Sazonov, E. S. (2013). Evaluation of chewing and swallowing sensors for monitoring ingestive behavior. Sensor Letters, 11(3), 560.
  24. Foulks, E. F. (1989). Misalliances in the Barrow alcohol study. American Indian and Alaska Native Mental Health Research, 2, 7–17. doi:10.5820/aian.0203.1989.7
  25. Gallo, C., Pantin, H., Villamar, J., Prado, G., Tapia, M., Ogihara, M., . . . Brown, C. H. (2015). Blending qualitative and computational linguistics methods for fidelity assessment: Experience with the Familias Unidas preventive intervention. Administration and Policy in Mental Health and Mental Health Services Research, 42, 574–585. doi:10.1007/s10488-014-0538-4
  26. Garde, A., Umedaly, A., Abulnaga, S. M., Robertson, L., Junker, A., Chanoine, J. P., . . . Dumont, G. A. (2015). Assessment of a mobile game (“MobileKids Monster Manor”) to promote physical activity among children. Games for Health Journal, 4, 149–158.
  27. Gibbons, R. D., Weiss, D. J., Kupfer, D. J., Frank, E., Fagiolini, A., Grochocinski, V. J., . . . Immekus, J. C. (2008). Using computerized adaptive testing to reduce the burden of mental health assessment. Psychiatric Services, 59, 361.
  28. Gilleard, C., Jones, I., & Higgs, P. (2015). Connectivity in later life: The declining age divide in mobile cell phone ownership. Sociological Research Online, 20, 3. doi:10.5153/sro.3552
  29. Greenfield, A. (2006). Everyware: The dawning age of ubiquitous computing. Berkeley: New Riders.
  30. Gutta, S., Trajkovic, M., Colmenarez, A. J., & Philomin, V. (2005). Method and apparatus for automatic face blurring. Google Patents.
  31. Hieftje, K., Rosenthal, M. S., Camenga, D. R., Edelman, E. J., & Fiellin, L. E. (2012). A qualitative study to inform the development of a videogame for adolescent human immunodeficiency virus prevention. Games for Health: Research, Development, and Clinical Applications, 1, 294–298. doi:10.1089/g4h.2012.0025
  32. Hodge, F. S. (2012). No meaningful apology for American Indian unethical research abuses. Ethics & Behavior, 22, 431–444. doi:10.1080/10508422.2012.730788
  33. Homan, C. M., Silenzio, V., & Sell, R. (2013). Respondent-driven sampling in online social networks. In A. M. Greenberg, W. G. Kennedy, & N. D. Bos (Eds.), Social computing, behavioral-cultural modeling and prediction (pp. 403–411). Berlin: Springer-Verlag.
  34. Hudson, K. L., & Collins, F. S. (2015). Bringing the common rule into the 21st century. New England Journal of Medicine.
  35. International Telecommunications Union. (2014). The world in 2014: ICT facts and figures. Geneva: International Telecommunications Union.
  36. Inter-university Consortium for Political and Social Research. (2012). Guide to social science data preparation and archiving: Best practice throughout the data life cycle (5th ed.). Ann Arbor: ICPSR.
  37. Johnson, S. B., Whitney, G., McAuliffe, M., Wang, H., McCreedy, E., Rozenblit, L., & Evans, C. C. (2010). Using global unique identifiers to link autism collections. Journal of the American Medical Informatics Association, 17, 689–695. doi:10.1136/jamia.2009.002063
  38. Kellam, S. G., Koretz, D., & Moscicki, E. K. (1999). Core elements of developmental epidemiologically based prevention research. American Journal of Community Psychology, 27, 463–482. doi:10.1023/A:1022129127298
  39. Kramer, A. (2014). Facebook post dated June 29, 2014. Retrieved February 27, 2015, from https://www.facebook.com/akramer/posts/10152987150867796
  40. Kramer, A. D., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111, 8788–8790.
  41. Kunz, C. L., Del Duca, M. F., Thayer, H., & Debrow, J. (2001). Click-through agreements: Strategies for avoiding disputes on validity of assent. Business Lawyer, 57, 401–429.
  42. Landovitz, R. J., Tseng, C.-H., Weissman, M., Haymer, M., Mendenhall, B., Rogers, K., . . . Shoptaw, S. (2013). Epidemiology, sexual risk behavior, and HIV prevention practices of men who have sex with men using GRINDR in Los Angeles, California. Journal of Urban Health, 90, 729–739.
  43. Mason, W., & Suri, S. (2012). Conducting behavioral research on Amazon’s Mechanical Turk. Behavior Research Methods, 44, 1–23. doi:10.3758/s13428-011-0124-6
  44. Mohr, D. C., Burns, M. N., Schueller, S. M., Clarke, G., & Klinkman, M. (2013). Behavioral intervention technologies: Evidence review and recommendations for future research in mental health. General Hospital Psychiatry, 35, 332–338. doi:10.1016/j.genhosppsych.2013.03.008
  45. Mohr, D. C., Cheung, K., Schueller, S. M., Hendricks Brown, C., & Duan, N. (2013). Continuous evaluation of evolving behavioral intervention technologies. American Journal of Preventive Medicine, 45, 517–523. doi:10.1016/j.amepre.2013.06.006
  46. Mohr, D. C., Schueller, S. M., Montague, E., Burns, M. N., & Rashidi, P. (2014). The behavioral intervention technology model: An integrated conceptual and technological framework for eHealth and mHealth interventions. Journal of Medical Internet Research, 16, e146. doi:10.2196/jmir.3077
  47. Muñoz, R. (2010). Using evidence-based internet interventions to reduce health disparities worldwide. Journal of Medical Internet Research, 12, e60. doi:10.2196/jmir.1463
  48. Muñoz, R., Lenert, L. L., Delucchi, K., Stoddard, J., Perez, J. E., Penilla, C., & Perez-Stable, E. J. (2006). Toward evidence-based internet interventions: A Spanish/English web site for international smoking cessation trials. Nicotine & Tobacco Research, 8, 77–87. doi:10.1080/14622200500431940
  49. Nahum-Shani, I., Smith, S. N., Tewari, A., Witkiewitz, K., Collins, L. M., Spring, B., & Murphy, S. (2014). Just-in-time adaptive interventions (JITAIs): An organizing framework for ongoing health behavior support (Methodology Center technical report). State College: Penn State University.
  50. National Institutes of Health. (2015). Mobilizing research: A research resource to enhance mHealth research (U2C). Retrieved May 18, 2015, from http://grants.nih.gov/grants/guide/rfa-files/RFA-OD-15-129.html
  51. Nilsen, W., Kumar, S., Shar, A., Varoquiers, C., Wiley, T., Riley, W. T., . . . Atienza, A. A. (2012). Advancing the science of mHealth. Journal of Health Communication, 17(Suppl 1), 5–10. doi:10.1080/10810730.2012.677394
  52. O’Dea, J. A., & Abraham, S. (2000). Improving the body image, eating attitudes, and behaviors of young male and female adolescents: A new educational approach that focuses on self-esteem. International Journal of Eating Disorders, 28, 43–57.
  53. Otis, B., & Parviz, B. (2014). Introducing our smart contact lens project. Retrieved May 19, 2015, from http://googleblog.blogspot.com/2014/01/introducing-our-smart-contact-lens.html
  54. Pantin, H., Prado, G., Lopez, B., Huang, S., Tapia, M. I., Schwartz, S. J., . . . Branchini, J. (2009). A randomized controlled trial of Familias Unidas for Hispanic adolescents with behavior problems. Psychosomatic Medicine, 71, 987.
  55. Patel, C. (2015). Android ResearchKit—an open source library to create medical research apps. Retrieved December 1, 2015, from http://blog.appliedinformaticsinc.com/android-researchkit/
  56. Pearson, J. L., Stanley, B., King, C. A., & Fisher, C. B. (2001). Intervention research with persons at high risk for suicidality: Safety and ethical considerations. Journal of Clinical Psychiatry, 62, 17–26.
  57. Pejovic, V., & Musolesi, M. (2014). Anticipatory mobile computing for behaviour change interventions. Paper presented at the Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication.
  58. Pentland, A. (2013). How big data can transform society for the better. Scientific American, 309.
  59. Perrino, T., Howe, G., Sperling, A., Beardslee, W., Sandler, I., Shern, D., . . . Brown, C. H. (2013). Advancing science through collaborative data sharing and synthesis. Perspectives on Psychological Science, 8, 433–444.
  60. Perrino, T., Beardslee, W., Bernal, G., Brincks, A., Cruden, G., Howe, G., . . . Brown, C. H. (2014). Toward scientific equity for the prevention of depression and depressive symptoms in vulnerable youth. Prevention Science, 1–10.
  61. Pestian, J., Nasrallah, H., Matykiewicz, P., Bennett, A., & Leenaars, A. (2010). Suicide note classification using natural language processing: A content analysis. Biomedical Informatics Insights, 2010, 19. doi:10.4137/BII.S4706
  62. Price, M., Williamson, D., McCandless, R., Mueller, M., Gregoski, M., Brunner-Jackson, B., . . . Treiber, F. (2013). Hispanic migrant farm workers’ attitudes toward mobile phone-based telehealth for management of chronic health conditions. Journal of Medical Internet Research, 15.
  63. PRNewswire. (2015). Facebook’s new suicide prevention program leads to false imprisonment of man in mental institution; Consumer Watchdog calls for suspension until safeguards in place. Retrieved June 5, 2015, from http://www.prnewswire.com/news-releases/facebooks-new-suicide-prevention-program-leads-to-false-imprisonment-of-man-in-mental-institution-consumer-watchdog-calls-for-suspension-until-safeguards-in-place-300049390.html
  64. Rainie, L., Kiesler, S., Kang, R., & Madden, M. (2013). Anonymity, privacy, and security online. Washington, DC: Pew Research Center.
  65. Rice, E., Lee, A., & Taitt, S. (2011). Cell phone use among homeless youth: Potential for new health interventions and research. Journal of Urban Health, 88, 1175–1182. doi:10.1007/s11524-011-9624-z
  66. Rice, E., Holloway, I., Winetrobe, H., Rhoades, H., Barman-Adhikari, A., Gibbs, J., . . . Dunlap, S. (2012). Sex risk among young men who have sex with men who use Grindr, a smartphone geosocial networking application. Journal of AIDS & Clinical Research.
  67. Ritter, S. (2015). Apple’s ResearchKit development framework for iPhone apps enables innovative approaches to medical research data collection. Journal of Clinical Trials, 5, e120.
  68. Rizvi, S. L., Dimeff, L. A., Skutch, J., Carroll, D., & Linehan, M. M. (2011). A pilot study of the DBT coach: An interactive mobile phone application for individuals with borderline personality disorder and substance use disorder. Behavior Therapy, 42, 589–600. doi:10.1016/j.beth.2011.01.003
  69. Rohrbach, L. A. (2014). Design of prevention interventions. In Defining prevention science (pp. 275–291). New York: Springer.
  70. Saeb, S., Zhang, M., Karr, C. J., Schueller, S. M., Corden, M. E., Kording, K. P., & Mohr, D. C. (in press). Smartphone sensor correlates of depressive symptom severity in daily-life behavior. Journal of Medical Internet Research.
  71. Samarati, P., & Sweeney, L. (1998). Protecting privacy when disclosing information: k-anonymity and its enforcement through generalization and suppression. Technical report, SRI International.
  72. Saygin, Y., Hakkani-Tur, D., & Tur, G. (2006). Sanitization and anonymization of document repositories. In E. Ferrari & B. Thurasisingham (Eds.), Web and information security (pp. 133–148). Hershey: IGI Global.
  73. Sazonov, E., Lopez-Meyer, P., & Tiffany, S. (2013). A wearable sensor system for monitoring cigarette smoking. Journal of Studies on Alcohol and Drugs, 74(6), 956–964.
  74. Sieber, J. E. (2006). Introduction: Data sharing and disclosure limitation techniques. Journal of Empirical Research on Human Research Ethics, 1, 47–50. doi:10.1525/jer.2006.1.3.47
  75. Silenzio, V., Duberstein, P. R., Tang, W., Lu, N., Tu, X., & Homan, C. M. (2009). Connecting the invisible dots: Reaching lesbian, gay, and bisexual adolescents and young adults at risk for suicide through online social networks. Social Science & Medicine, 69, 469–474. doi:10.1016/j.socscimed.2009.05.029
  76. Sloboda, Z., Stephens, R. C., Stephens, P. C., Grey, S. F., Teasdale, B., Hawthorne, R. D., . . . Marquette, J. F. (2009). The adolescent substance abuse prevention study: A randomized field trial of a universal substance abuse prevention program. Drug and Alcohol Dependence, 102, 1–10.
  77. Smith, A. (2014). Older adults and technology use. Washington, DC: Pew Research Center.
  78. Smith, A. (2015). US smartphone use in 2015. Retrieved May 13, 2015, from https://web.archive.org/web/20160516184355/http://www.pewinternet.org/2015/04/01/us-smartphone-use-in-2015/
  79. Systemedicus Inc. (2015). Electronic informed consent for clinical trials. Retrieved February 27, 2015, from http://systemedicus.com/category/educonsent-clinical-trials
  80. U.S. Department of Health and Human Services. (2015). NPRM for federal policy for the protection of human subjects. Retrieved December 15, 2015, from https://www.federalregister.gov/articles/2015/09/08/2015-21756/federal-policy-for-the-protection-of-human-subjects
  81. U.S. Department of Veterans Affairs. (2014). Veterans Text | Veterans Crisis Line. Retrieved May 13, 2015, from https://web.archive.org/save/_embed/https://www.veteranscrisisline.net/TextTermsOfService.aspx
  82. US Department of Health and Human Services. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. Washington, DC.
  83. US Department of Health and Human Services. (1991). Federal policy for the protection of human subjects (‘common rule’). Federal Register. Retrieved December 15, 2015, from http://www.hhs.gov/ohrp/humansubjects/commonrule/index.html
  84. Valente, T. W. (2010). Social networks and health: Models, methods, and applications. New York: Oxford University Press.
  85. Webb, E. J. (2000). Unobtrusive measures (Vol. 2). Sage.
  86. Welcoming Center for New Pennsylvanians. (2012). Digital diaspora: How immigrants are capitalizing on today’s technology. Retrieved November 24, 2015, from http://www.welcomingcenter.org/sites/default/files/digital_diaspora_final_report_-_nov_2012.pdf
  87. Wolchik, S. A., Sandler, I. N., Tein, J.-Y., Mahrer, N. E., Millsap, R. E., Winslow, E., . . . Reed, A. (2013). Fifteen-year follow-up of a randomized trial of a preventive intervention for divorced families: Effects on mental health and substance use outcomes in young adulthood. Journal of Consulting and Clinical Psychology, 81, 660.
  88. Wyman, P. A., Brown, C. H., LoMurray, M., Schmeelk-Cone, K., Petrova, M., Yu, Q., . . . Wang, W. (2010). An outcome evaluation of the Sources of Strength suicide prevention program delivered by adolescent peer leaders in high schools. American Journal of Public Health, 100(9), 1653–1661.
  89. Wyman, P. A., Henry, D., Knoblauch, S., & Brown, C. H. (2014). Designs for testing group-based interventions with limited numbers of social units: The dynamic wait-listed and regression point displacement designs. Prevention Science, 1–11. doi:10.1007/s11121-014-0535-6
  90. Yancey, A. K., Ortega, A. N., & Kumanyika, S. K. (2006). Effective recruitment and retention of minority research participants. Annual Review of Public Health, 27, 1–28. doi:10.1146/annurev.publhealth.27.021405.102113
