Journal of Law and the Biosciences. 2023 Jul 14;10(2):lsad021. doi: 10.1093/jlb/lsad021

Terms and conditions apply: an ethical analysis of mobile health user agreements in research

Luke Gelinas, Walker Morrell, and Barbara E Bierer
PMCID: PMC10347671  PMID: 37456712

Abstract

Mobile health (mHealth) technologies raise unique risks to user privacy and confidentiality that are often embedded in lengthy and complex Privacy Policies, Terms of Use, and End User License Agreements. We seek to improve the ethical review of these documents (‘user agreements’) and their risks in research using mHealth technologies by providing a framework for identifying when these risks are research risks, categorizing the key information in these agreements under relevant ethical and regulatory categories, and proposing strategies to mitigate them. MHealth user agreements typically describe the nature of the data collected by mHealth technologies, why or for what purposes user data are collected and shared, who will have access to the different types of data collected, and may include exculpatory language. The risks raised by data collection and sharing typically increase with the sensitivity and identifiability of the data and vary by whether data are shared with researchers, the technology developer, and/or third-party entities. The most important risk mitigation strategy is disclosure of the key information found in user agreements to participants during the research consent process. In addition, researchers should prioritize mHealth technologies with favorable risk–benefit balances.

Keywords: clinical research, ethics, IRB, mobile health, privacy policy, technology

I. INTRODUCTION

Mobile technologies pervade modern life: 97 per cent of Americans own a cellphone, and more than 5 billion people worldwide have a mobile device.1 The benefits of using these technologies are evident and include access to information, ease of communication, efficiency, mobility, productivity, and flexibility.2 The risks, however, are often underappreciated, buried in Privacy Policies and Terms of Use (also referred to as ‘Terms of Service’, ‘Terms and Conditions’, and the like), to which individuals must agree as a condition of use.3 Privacy Policies and Terms of Use serve different purposes. Privacy Policies exist to inform users of what will be done with the data collected by the technology and what privacy protections apply; provision of these disclosures to the public is often mandated by state and regulatory authorities and generally serves the interests of users.4,5 Terms of Use, by contrast, set forth the conditions and rules that must be followed in using the technology and generally exist to limit the liability of the technology developer. Nonetheless, both types of documents contain information that can be relevant to an assessment of the technology’s overall risks and benefits.

There are distinctive challenges involved in assessing the risks contained in Privacy Policies and Terms of Use, which for ease of reference we will refer to as ‘user agreements’. Most obviously, user agreements often include permissions of which users may be, or are likely to be, unaware, such as the sale of personal and sensitive information, including geographical location data, and access to personal website browsing and purchasing histories.6 People are likely to be unaware of these permissions because few people actually read the user agreements in which they are disclosed—and understandably so. User agreements are dense and arcane7—one study estimated that it would take the average individual 244 hours per year to read the privacy policies for every website they visited8—and their complex legal language typically requires a college-level education to understand.9 Empirical evidence on readership rates among the public confirms what we know from experience: the vast majority of individuals have become habituated to accepting these materials, and the risks therein, without reading them.10

The challenges to privacy and confidentiality posed by these agreements are brought into sharp relief for mobile technologies used in the realms of medicine and health, due to the sensitivity of the personal health information collected11 and the ethical and legal importance granted to informed consent in these spheres.12 In the context of clinical research, regulatory and ethical norms require oversight bodies to ensure that there are adequate privacy and confidentiality protections for research participants. But even when user agreements are read and understood, identifying and evaluating their risks may be no small task. A Privacy Policy, for example, may disclose that research data collected using a device or mobile application may, or may not, be de-identified before being shared with third parties. However, even if data sets are initially de-identified, the risks of re-identification will very likely not be addressed in the user agreement and will remain unappreciated. Many people are unlikely to know whether or how easily the data collected from them admit of re-identification or what other data triangulated with them would permit re-identification.13 Further, technical savvy may be needed to unearth the extent to which personal data not connected to the research, such as information about the user’s contacts or internet browsing history, are being collected from their device or how these data make triangulation and re-identification easier.14

Dynamics such as these have the potential to problematize the use of mobile technologies in research, where there is a particularly strong emphasis on understanding the risks of research as a prerequisite for valid informed consent.15 Failure to communicate the risks associated with the use of mHealth technologies in a way that is understandable to participants, and to provide adequate privacy and confidentiality safeguards generally, may further erode public trust in these technologies—which is already by some counts suffering,16 for understandable reasons17,18—and, by extension, research involving them.19

User agreements also pose challenges that are more narrowly regulatory in nature. For one, each of the two regulatory frameworks that govern research in the USA—the Food and Drug Administration (FDA) regulations that govern research with investigational drugs and devices and the Department of Health and Human Services (HHS) regulations (often called the ‘Common Rule’) that govern federally conducted and funded research with humans—contains prohibitions on exculpatory language in informed consent forms and materials. User agreements do, however, typically contain exculpatory language. Insofar as user agreements can be considered part of informed consent materials, they may run afoul of this regulatory requirement (see Section III).20 Further, mHealth technologies may themselves be regulated as medical devices by the FDA. This raises the stakes. In studies where an mHealth technology meets the regulatory definition of a medical device (e.g., when it is being assessed for safety or efficacy in treating a medical condition, as in a study designed to assess a mobile application or ‘app’ that delivers AI-driven psychotherapy for anxiety), the study will require either pre-review by the FDA and an investigational device exemption, or judgment by a research ethics board or Institutional Review Board (IRB) that use of the device does not meet the definition of significant risk found in FDA regulations.21 Indeed, anecdotally, sponsors often submit device studies to IRBs first and rely on IRBs to decide whether FDA pre-review is required. In these contexts, IRBs have strong reasons to conduct rigorous and accurate assessments of device risk. For mHealth technologies, these risks are to be gleaned in part from user agreements, further underscoring the importance of principled and reliable approaches to reviewing them.

Perhaps recognizing these challenges, regulatory and advisory bodies have addressed certain aspects of these topics. FDA, for example, has issued guidance addressing mobile health, wellness, and medical apps, explaining when these devices are the subject of FDA regulatory oversight and providing examples of how different types of apps may be regulated.22 Broader ethical assessments of mHealth research have also been undertaken.23,24 This work is valuable and, where relevant, informs our analysis in what follows. That said, prior work has been limited in scope, focusing on subsets of specific technologies (e.g., wearables) or populations (e.g., older people), and is not focused specifically on user agreements and the ethical complexities they raise. There currently exists, to our knowledge, no general or overarching framework for assessing the ethical issues arising from user agreements in research.25

Perhaps in part because of this, uncertainty persists over the responsibilities of sponsors, investigators, oversight bodies, and IRBs for assessing and mitigating the risks contained in user agreements. This uncertainty, in turn, produces variability in the approaches taken to them.26 There is the potential for disagreement over when user agreements warrant scrutiny in research contexts to begin with; over who is primarily responsible for that scrutiny—for identifying and mitigating the risks; and over what particular mitigation strategies are needed or best.27 Exacerbating matters, oversight bodies, no less than research participants and other members of the public, may themselves not always possess the technological expertise or the legal and informatic background needed to fully grasp the privacy and confidentiality implications of these documents.28

This article aims to provide a general framework for assessing user agreements with the goal of enabling research stakeholders to undertake confident and efficient ethical review of research involving them. Section II describes the different ways in which mHealth technologies may be used in research; provides criteria for determining when the risks contained in user agreements are research risks; and distinguishes several potential review strategies. Section III advances a systematic approach to isolating the key information in user agreements, based on the idea that the identifiability and sensitivity of data should be governing factors when evaluating proposed data uses. We then assess the general risks of data sharing with three entities in particular: researchers, the developer of the platform, and third-party entities beyond the developer (Section IV). In Section V, we outline substantive and actionable risk-mitigation strategies, before concluding with some brief reflections on the potential limitations of IRBs in Section VI.

II. USER AGREEMENT RISKS AS RESEARCH RISKS

MHealth technologies can play different roles in research, in ways that make a difference to the appropriate assessment and oversight of user agreement documents. Some studies employing mHealth platforms restrict enrollees to individuals who already possess an mHealth technology and have already accepted an associated user agreement, prior to and independent of their research participation. In these cases, assuming the version of the technology used in the research does not vary from the version already in use, the risks embedded in user agreements are generally risks that the individual already accepts in everyday life, not risks incurred because of their participation in research.29 At the same time, a previously acquired mHealth technology may be put to new uses in research and involve collection of data that would not otherwise have been collected. While we do not believe that user agreements necessarily require scrutiny from researchers and IRBs in these cases, it may nonetheless be advisable to inform research participants that they may incur additional risks by virtue of using the mHealth platform for the proposed research data collection, and why (see further below and Section V).30

By contrast, other studies ask participants to obtain and use mHealth technologies upon enrolling, as part of the research, be it a version of a program already in public use or a version specifically developed for the research. In these cases, the risks embedded in the associated user agreements are risks individuals incur because they are research participants. Arguably, these user agreements require scrutiny from researchers and IRBs.31 In what follows, we restrict our focus to this second type of situation—those where user agreements embed genuine research risks.
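To make the preceding criteria easier to apply in practice, the sketch below encodes them as a simple decision procedure (the same logic is elaborated in Table 1). This is a minimal illustration only; the function and input names are our own hypothetical labels, not drawn from any regulation or guidance.

```python
from enum import Enum

class UserAgreementRiskStatus(Enum):
    NOT_RESEARCH_RISKS = "not research risks: no study-specific action required"
    RESEARCH_RISKS_DISCLOSE = ("research risks: at a minimum, disclose the additional "
                               "data collection and its risks to participants")
    RESEARCH_RISKS_MITIGATE = "research risks: apply mitigation strategies (see Table 1)"

def classify_user_agreement_risks(accepted_prior_to_research: bool,
                                  study_collects_different_data: bool) -> UserAgreementRiskStatus:
    """Rough encoding of the two questions discussed above.

    accepted_prior_to_research: did participants accept the user agreement
        before and independent of enrolling in the study?
    study_collects_different_data: will the technology collect data for the
        study that it would not collect outside the research?
    """
    if not accepted_prior_to_research:
        # Participants accept the agreement only because they enroll.
        return UserAgreementRiskStatus.RESEARCH_RISKS_MITIGATE
    if study_collects_different_data:
        # Agreement already accepted, but research puts the technology to new uses.
        return UserAgreementRiskStatus.RESEARCH_RISKS_DISCLOSE
    return UserAgreementRiskStatus.NOT_RESEARCH_RISKS

# Example: a study-specific app that participants install upon enrollment.
print(classify_user_agreement_risks(accepted_prior_to_research=False,
                                    study_collects_different_data=True))
```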

US federal regulations outline the responsibilities of IRBs and researchers regarding the ethical review and conduct of research.32 IRBs are responsible for ensuring the protection and welfare of research participants through, among other things, determining that the risks of research participation are minimized and appropriately balanced with anticipated benefits; the informed consent of each participant is obtained and appropriately documented; and the privacy of participants and the confidentiality of their data are safeguarded.33 Sponsors and investigators are responsible for providing the IRB with the study materials and information necessary to execute these regulatory responsibilities.34 Despite this, the actual standards and processes for reviewing mHealth user agreements are not specified or fully determined by the US regulations or regulatory guidance.

A number of views are possible about the status of user agreements in research and the appropriate role of the IRB, sponsors, and investigators in reviewing them. On one end of the spectrum is the view that user agreements deserve the same level of scrutiny as other study materials (e.g., scientific protocols and consent documents) and that any research risks they contain should be analyzed and treated similarly to other research risks.35 Under this view, every user agreement that participants are asked to accept as part of research participation should be reviewed in detail by investigators and/or the IRB, and the privacy and confidentiality risks contained in those documents should factor into the overall risk–benefit assessment of the research.36 Further, the risks communicated in user agreements should be disclosed to participants in consent materials.

On the other end of the spectrum, it might be argued that the risks contained in user agreements are minimal and not significant enough to warrant intensive review nor to factor into a study’s risk–benefit assessment at all. Such a view might appeal to the idea, present in the US regulations, that ‘minimal risk’ is to be understood in terms of the risks inherent in everyday life.37 Given the ubiquity of mHealth technologies and their corresponding privacy risks in contemporary life, and the apparent willingness of society to accept those risks in non-research contexts, it can appear that the risks contained in mHealth user agreements are everyday risks, or at least do not exceed them. If so, their risks are unlikely to impact a study’s overall risk–benefit balance, and the obligation to review each one in research contexts is weakened or removed entirely.

In between the two extremes of ‘always review’ and ‘never review’, various approaches are possible. For example, it could be held that, while the general risks are unlikely to vary enough between user agreements to merit scrutinizing each particular one, participants should always be made aware of their presence in research, and perhaps their general privacy and confidentiality implications, as part of informed consent for research participation. This disclosure can alert participants to the presence of potential risks embedded in user agreements and allow them to decide for themselves whether to further scrutinize them.

Another approach likewise holds that the general default position should be not to review user agreements but that intensive review of certain agreements may be warranted depending on other facts about the study, such as the nature or sensitivity of the data being collected and/or the vulnerability of the research population. This type of view can be combined with the previous idea that the default position should be to alert participants to the presence of user agreements in participant consent materials, even if they are not reviewed in detail by the investigator or IRB. We believe that an approach of this latter type may be most promising. That said, the analysis that follows is intended to be relevant to and, we hope, helpful for any approach where the contents of at least some user agreements are taken to merit review and/or disclosure in participant consent materials and where decisions must be made about the nature and scope of the review and the content and extent of the disclosure.

III. ISOLATING AND UNDERSTANDING THE RISKS OF MHEALTH USER AGREEMENTS

When user agreements are reviewed, investigators and IRBs may find it challenging to identify and distill the information relevant for their purposes. Further, it may not always be clear what actions can or should be taken in response to potential risks. In what follows, we propose several categories of key information common to user agreements and outline substantive and process-based considerations that can aid in the review of user agreements and the formulation of appropriate risk mitigation.

The information embedded in user agreements can be sorted into at least four categories, each of which is relevant for research oversight by virtue of implicating user privacy and confidentiality. The first category is the nature of the data collected by mHealth technologies. By this, we mean the type of data collected (e.g., number of steps, heart rate, blood pressure, and pulse) as well as the way in which those data are collected. An mHealth technology may collect data actively and/or passively. Active data collection occurs when participants actively and intentionally provide data to an mHealth technology, such as by responding to a survey on a smartphone or clicking ‘record’ and speaking into a mobile device’s microphone.38 Participants may not fully appreciate the risks raised by providing these data, but they are likely cognizant that the mHealth technology is collecting them.

By contrast, participants may be entirely unaware of the data that an mHealth technology collects passively, by running in the background without their direct or intentional involvement.39 Examples of passive data collection include geolocation tracking, passive audio/video capture, physiological data collection, and the collection of internet browsing history. In some cases, information about which data will be collected passively by an mHealth technology is only disclosed in its user agreements.

The second category of key information disclosed in user agreements is why or for what purposes user data are collected and shared. This overlaps with the third category of key information, who will have access to the different types of data collected. Data collected by an mHealth technology may be used by the investigator and study staff for research purposes, such as assessing study objectives and exploratory or secondary use. Unless a technology is specifically designed for research, however, it is unlikely that its user agreements will address research uses of the data collected. Instead, this will or should be detailed in the study protocol and consent materials. At the same time, data collected for research purposes may also serve non-research purposes. Additionally, some data may be collected exclusively for non-research purposes by the developer of the platform; investigators and their study staff may not have access to these data at all. The collection of data for non-research purposes will not be detailed in the study protocol; disclosure of such data collection will be limited to the user agreements of the relevant technologies.

Data collected for non-research purposes may include data used to support the operation of the technology, such as data used for network or server support.40 Data may also be collected and used for commercial purposes, such as improving business outcomes or sale to third parties.41 In both cases, a number of entities could have access to participant data, including the developer of the platform, third-party vendors involved in its operation and maintenance, and distinct commercial entities unconnected with the platform.42 These entities will not typically be subject to the same restrictions on data use and sharing as research sponsors and investigators. While some non-research uses may be undertaken with the aim of improving the platform’s security and increasing user privacy and confidentiality (e.g., sharing user data with the intention of diagnosing a glitch or crash with security implications), data sharing will often raise risks to participant privacy and confidentiality that participants would not have incurred but for their research participation, and that may not be readily apparent to them.43 For example, the more widely someone’s personal data are shared (or sold), the greater the risks of data breach, re-identification, and downstream harms such as identity theft and fraud.

Fourth, and finally, user agreements often contain exculpatory language: language that protects the technology developer from being held liable if use of the technology results in harm to the user.44 As noted earlier, exculpatory language is prohibited in research regulated by the FDA and the HHS.45 This makes its presence important for research stakeholders to identify and address. Insofar as user agreements for technologies used in research may reasonably be considered part of informed consent materials, they would be subject to the regulatory prohibition on such language.46

The main types of risks associated with data collection by mHealth platforms are privacy and confidentiality risks. The weight of these risks can be largely understood in terms of the sensitivity and identifiability of the data collected. Sensitive data are data that the average individual would reasonably prefer to keep private and not share with others, the paradigm cases of which are protected health information and information that, if disclosed, could lead to societal stigmatization, financial harm, decreased ability to gain certain types of insurance or employment, and implications for civil and criminal liability.47 Identifiable data are typically understood as data that contain direct identifiers (e.g., name, home address, email address, geolocation data, and IP address) and that can be used to identify a specific individual.48

The collection of data that are directly identifiable raises the greatest risk to participant privacy and confidentiality and generally deserves the most scrutiny from researchers and IRBs.49 While collection of data that are not identifiable generally deserves less scrutiny, it may still merit significant attention, depending on further details. One way to de-identify data is to anonymize them completely, destroying any link between the data and the user’s identifiers, as well as any other elements unique to an individual that would permit their identification.50 A second way is to detach the data from identifiers while retaining a link between them that permits re-identification.51 This form of de-identification, sometimes referred to as pseudonymization, is common in research, with researchers ‘coding’ data (i.e., assigning a unique, random code to each individual participant’s data set) and maintaining a file or document that links each unique code to participant identities, usually in a separate and secure location.52 Both anonymization and pseudonymization may also involve physical security measures that seek to ensure that an individual’s data cannot be combined with other data sets for the purpose of re-identification.
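To illustrate the difference, the following minimal sketch shows coding (pseudonymization) as just described: direct identifiers are replaced with random codes, and the key linking codes to identities is kept in a separate file. The record fields and file name are hypothetical and purely illustrative.

```python
import csv
import secrets

def pseudonymize(records, id_field="email"):
    """Replace a direct identifier with a random code in each record.

    Returns (coded_records, key), where `key` maps each code back to the
    original identifier. Anyone holding the key can re-identify participants,
    which is why it must be stored separately and securely from the coded data.
    """
    key, coded = {}, []
    for rec in records:
        code = secrets.token_hex(8)  # unique, random participant code
        key[code] = rec[id_field]    # the link is retained only in the key
        coded.append({**{k: v for k, v in rec.items() if k != id_field},
                      "participant_code": code})
    return coded, key

# Hypothetical mHealth export: heart-rate readings tied to user emails.
records = [
    {"email": "a@example.org", "heart_rate": 72},
    {"email": "b@example.org", "heart_rate": 88},
]
coded, key = pseudonymize(records)

# The coded data set can be shared with analysts; the key is written to a
# separate, access-controlled location (a separate file here, for illustration).
with open("key_file.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["participant_code", "email"])
    writer.writerows(key.items())
```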

In general, collection and use of fully anonymized data raise fewer privacy and confidentiality risks than coded data,53 since for the latter, but not the former, there is always the possibility that the link may be inadvertently shared with others or intentionally breached in a way that allows for re-identification.54 That said, both anonymized and coded data sets may permit re-identification when they are conjoined or triangulated with other data sets, including publicly available data or other information collected from the same individual.55 While this risk is usually remote, it deserves consideration in proportion to the sensitivity of the data and the potential ease with which they could be triangulated with other data in the service of re-identification. In some cases, researchers or the IRB may need to seek technological expertise to understand or fully appreciate the ease or likelihood of re-identification.56 In addition, institutional general counsel can be a valuable resource for IRBs and researchers with questions about how best to interpret the content of user agreements or their regulatory and legal implications. Indeed, it may be prudent for IRBs to work with general counsel when formulating their policy or general approach to reviewing user agreements.
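The triangulation risk just noted can be made concrete with a toy example: even after names and contact details are removed, quasi-identifiers such as ZIP code and birth year may match exactly one record in an outside data set, re-identifying the participant. All names and values below are invented solely for illustration.

```python
# A "de-identified" research extract (no names or contact details).
deidentified_study_rows = [
    {"zip": "02139", "birth_year": 1961, "diagnosis": "anxiety"},
    {"zip": "94105", "birth_year": 1990, "diagnosis": "hypertension"},
]

# A hypothetical publicly available roster containing the same quasi-identifiers.
public_roster = [
    {"name": "Pat Doe", "zip": "02139", "birth_year": 1961},
    {"name": "Sam Roe", "zip": "94105", "birth_year": 1985},
]

def reidentify(study_rows, roster):
    """Return study rows that match exactly one roster entry on quasi-identifiers."""
    hits = []
    for row in study_rows:
        matches = [p for p in roster
                   if p["zip"] == row["zip"] and p["birth_year"] == row["birth_year"]]
        if len(matches) == 1:  # a unique match re-identifies the participant
            hits.append({**row, "name": matches[0]["name"]})
    return hits

print(reidentify(deidentified_study_rows, public_roster))
# -> [{'zip': '02139', 'birth_year': 1961, 'diagnosis': 'anxiety', 'name': 'Pat Doe'}]
```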

IV. MHEALTH DATA SHARING: A GRADATION OF RISK

The ethical significance of sharing data with various parties turns in part on whether the data are identifiable, as well as on the issues discussed in the last section. Any sharing of identifiable data, including with the mHealth technology developer and third-party entities, involves a loss of participant privacy and confidentiality and requires risk mitigation. By contrast, sharing de-identified mHealth data need not be intrinsically ethically concerning and will only require mitigation techniques in proportion to the sensitivity of the data and the ease with which they could be used to re-identify participants. Within this basic framework, sharing data with researchers is likely to pose the least risk to participants; sharing with the developer of mHealth technologies poses greater risk; and sharing with third-party vendors and entities beyond the developer poses the most risk. We unpack these in turn.

Certain research with identifiable data is regulated as human subjects research, namely, research conducted or funded by the HHS or under the purview of the FDA. In these cases, sharing identifiable mHealth data with other researchers, for the purpose of research, will typically be subject to oversight and familiar regulatory protections.57 These protections include IRB review, ensuring that the risks are minimized and reasonable in relation to the benefits, and informed consent.58 Sharing de-identified data among researchers, at least in the USA, may not be regulated as human subjects research or receive the same protection.59 Nonetheless, research with de-identified data is often undertaken to further the public good in some way, such as performing secondary analyses that might advance the understanding of a disease. In addition, research investigators and sponsors arguably stand in a fiduciary or quasi-fiduciary relation with research participants and have a strong interest in transparency and maintaining public trust in research—considerations that provide motivation to safeguard the confidentiality of de-identified participant data sets.

These same considerations do not always or clearly apply to data sharing with commercial developers of mHealth platforms and third-party commercial entities. For developers, the plans for using and sharing data are typically described in user agreements, permitting IRBs to review those plans, assess for risk, and apply appropriate protections. However, it is unlikely that user agreements must or do outline all possible future data uses and sharing.60 Further, user agreements can and do change, permitting uses of data that were not envisioned earlier,61 and revised user agreements are rarely reviewed by sponsors, investigators, or IRBs. Any subsequent non-research use of mHealth data that is not covered in the user agreement operative at the time of the original research would likely not receive IRB oversight, even if the data shared were identifiable. In addition, developers and third-party vendors collect and use data primarily to improve their products and/or advance their own economic self-interests. As such, they relate to research participants primarily as businesses to consumers, rather than as fiduciaries. This makes the ethical stakes and risks of sharing data with developers higher than for sharing with researchers.

Developers may share or sell data to third-party entities, including vendors contracted by the developer to provide services for the platform and other entities who simply wish to obtain and use data for commercial and advertising purposes.62 User agreements may at times contain statements that limit or explicitly prohibit developers from sharing data with third parties, or limit certain third-party vendors from sharing data with other vendors, and researchers and IRBs should check for their presence as a mitigating factor, when appropriate. Often, however, user agreements will not describe the data usage and sharing policies of the external entities with whom data are shared, and it will generally be impossible to assess, at a study-specific level, the full downstream risks of participant data sharing with third-party entities. IRBs would thus be reasonable to assume that, when data are permitted to be shared with third-party entities and vendors, those entities may, in turn, use and share it without restriction. Data sharing with third-party entities in the context of mHealth research thus deserves significant scrutiny.

V. MITIGATION STRATEGIES

At a very general level, there are three types of mitigation strategies available to researchers and IRBs with respect to the risks of user agreements: (i) disclose the risks of user agreements in research consent materials, (ii) seek changes to features of user agreements considered unacceptably risky, and (iii) decline to use the relevant mHealth platform. These strategies, particularly the first and second, are not mutually exclusive. It may, in some cases, be reasonable both to disclose the risks of user agreements in consent materials and seek to change certain aspects of them, although (as discussed below) the effectiveness of requesting and securing changes to user agreements may be limited.

In our view, the most important strategy for mitigating the risks of user agreements in research is clear and forthright disclosure of them and their risks in research informed consent materials. This approach may seem redundant insofar as accepting a user agreement, and the attendant opportunity to read it, is already a condition of research participation. Even so, there are good reasons to disclose the risks of user agreements in consent materials. First, as noted earlier, we know from experience and empirical research that a vanishingly small number of people actually read user agreements, and there is no reason to think that readership rates improve when they are part of research.63 Including the risks of user agreements in consent materials makes it much more likely that participants will be aware of and consider them. Second, empirical research has also shown that user agreements are written at a high level of complexity that is difficult for many to understand.64 Disclosing user agreement risks in consent materials provides the opportunity to simplify the language of user agreements and ensure that it meets common standards of comprehensibility, reading level, and health literacy. Finally, as noted earlier, the implications of certain information contained in user agreements, and the underlying risks and potential downstream harms posed to participants, may not always be obvious or plain on their face.65 For example, a user agreement may disclose information about collection of geolocation data, but participants may not realize that such data would easily permit inferences about their patterns of movement or place of work or residence. These implications of user agreement disclosures can be straightforwardly explained in consent materials, leading to better participant understanding of risks they might not otherwise appreciate.

When deciding what to disclose in consent materials, researchers and IRBs should start by considering the basic categories of information delineated in Section III. Consent materials should include the types of data being collected; whether these data are collected actively or passively; for what purpose the data are collected; and with whom the data will be shared. Privacy and confidentiality implications should be explained as appropriate. Consent materials should also note the presence of any exculpatory language and explain whether, or how, this language impacts the rights of participants, given the regulatory prohibition against exculpatory language. Table 1 contains a comprehensive set of recommendations and points to consider for determining which elements of user agreements merit consent disclosure.

Table 1.

Points to Consider in the Evaluation of mHealth Technology User Agreements

Points to consider
Are user agreement risks research risks?
  • Have research participants agreed to the user agreements for the mHealth technology prior to and independent of the research?

    • If yes, will the mHealth technology collect data from research participants that DIFFERS from the data collected if these individuals were NOT enrolled in the study?

      • If yes, the user agreement risks are research risks:

        • At a minimum, disclose to participants that the mHealth technology collects different data than envisioned when they accepted the user agreements.

      • If no, the user agreement risks are NOT research risks.

    • If no, the user agreement risks are research risks:

      • Take action to mitigate user agreement risks (see other points to consider in Table 1).

Key information
  • Nature of the data being collected

    • Determine whether the mHealth technology will collect data beyond the data needed to conduct the research

    • For each type of data collected by the mHealth technology, determine whether it is

      • Collected actively or passively;

      • Potentially sensitive in regard to societal stigmatization, insurability, employability, financial well-being, and/or civil or criminal liability; and

      • Immediately identifiable, identifiable if linked with other data, or anonymized.

  • Why or for what purposes data are collected and shared and who will have access to the different types of data collected

    • For each type of data collected by the mHealth technology, determine

      • Why the data are collected; and

      • With whom data are shared:

        • Researchers;

        • The mHealth technology developer; and/or

        • Third-party entities.

  • Exculpatory language

    • Determine whether the user agreements contain exculpatory language.

Mitigation strategies
  • Disclose to participants:

    • Nature of the data being collected

      • The types of data collected and the potential uses to which these data may be put;

      • Whether each type of data is collected actively or passively;

      • Whether and how each type of data is potentially sensitive; and

      • The identifiability of each type of data, including the potential for triangulation with other data and re-identification.

    • Why or for what purposes data are collected and shared and who will have access to the different types of data collected

      • Why the data are collected:

        • Research purposes;

        • Operational support of the mHealth technology; and/or

        • Commercial purposes.

      • With whom data are shared:

        • Researchers;

        • The mHealth technology developer; and/or

        • Third-party entities.

          • Disclose that third-party entities may use and share data in unknown ways.

    • Exculpatory language

      • The existence of exculpatory language; and

      • Whether (or how) the exculpatory language affects their rights.

  • Consider whether the following can be disabled via the technology interface or by contacting the technology developer:

    • The collection of data that is not needed for the research;

    • Data sharing with the technology developer; and

    • Data sharing with third-party entities.

  • Consider replacing the mHealth technology with a different technology that has a more favorable risk–benefit profile.

While clearly explaining the risks of user agreements to participants in consent materials will often sufficiently mitigate those risks, in some situations, researchers and IRBs may decide that more protective measures are needed. For example, a study that collects sensitive data about sexual orientation, gender identity, and drug use from a vulnerable group, such as displaced or homeless adolescents, and proposes to permit the open-ended sharing of identifiable data sets with third-party entities, may require mitigation strategies beyond consent disclosure. In these situations, efforts should first be made to disable (or, if participants are using their own technology, request that participants disable) any features of the technology that may be responsible for problematic data collection and sharing practices, insofar as this is compatible with study endpoints and the mHealth interface permits such customization. Of course, even with mitigation strategies, these risks to participant confidentiality should be considered in the overall risk–benefit assessment of the study in question.

When concerns cannot be adequately addressed in this way, the investigator and IRB should attempt to contact the developer of the mHealth technology, communicate their concerns, and ask whether it is willing to modify its data collection and sharing practices, and the user agreement, in a way that removes or mitigates those concerns. This strategy stands the best chance of success when the mHealth platform itself is an object of assessment in the study or has been developed specifically for the study, and the developer is the sponsor of the research or otherwise closely involved in conducting it. In many situations, however, the mHealth technology is not the object of assessment—it is simply being used as a data collection tool, which may already be commercially available—and the developer of the mHealth platform is not a sponsor of the research or otherwise invested in it. In these cases, convincing the developer to change the user agreement will typically be more challenging, since the platform is used in various contexts beyond the research, and it may not be in the developer’s best interests to devise a distinct user agreement tailored specifically for the research under consideration.

When requests to change a user agreement are ineffective, the final strategy for addressing outstanding concerns is to seek to replace the specific mHealth platform with one that does not raise the same issues and is more favorable from a risk–benefit perspective. This approach will be more feasible when the mHealth platform is used for a common function. For example, if a study is using a wearable sensor to measure movement patterns or gait, there are likely to be several such sensors that would permit capture of the relevant endpoints. Substitution will be less feasible the more customized the use of the mHealth platform is, since fewer alternatives may exist that can perform the research function.

Researchers and IRBs share responsibility for these types of comparative assessments. At the institutional level, a record of these assessments and summaries of the risks of different mHealth technologies could be maintained and updated as needed (e.g., as user agreements are changed), to help structure and streamline future decisions about which mHealth technologies to select in research contexts. Ideally, consortia or other multistakeholder initiatives invested in research regulation and conduct could reach consensus on the comparative privacy and confidentiality risks of different mHealth platforms and issue recommendations on how to prioritize the choice of these technologies in different research settings. This would help researchers and IRBs uphold their obligations to safeguard the privacy and other interests of research participants while providing greater incentive for mHealth developers to be attentive to the research community’s ethical values and priorities. By identifying the salient points of ethical concern, the framework developed here can assist in this endeavor.

VI. CONCLUSION

User agreements for mHealth technologies are long, complex, and unfamiliar to investigators and IRBs, complicating efforts to review them for research oversight. By identifying the key categories of information in user agreements and outlining salient ethical issues and mitigation strategies, we hope to help investigators and IRBs overcome these barriers. We stress, however, that navigating the challenges posed by user agreements may sometimes require a level of technological expertise that researchers and IRBs lack. In our view, the long-term approach to addressing this matter should involve IRBs making efforts to bring people with the relevant technical expertise—data scientists, IT specialists, and software developers—into the IRB. In the meantime, whenever researchers and IRBs are uncertain about the content of user agreements or their potential privacy and confidentiality implications, they should take advantage of the regulatory provision allowing consultation with external experts.66 This can help to ensure that the privacy interests and rights of participants are adequately protected in particular cases and that longer-term public trust in research uses of mHealth technologies, and research generally, is preserved and promoted.

ACKNOWLEDGMENTS

We thank Aaron Kirby and Robert Romanchuk for helpful discussions of user agreements in clinical research.

Footnotes

1

Mobile Fact Sheet, Pew Research Center (Apr. 7, 2021), https://www.pewresearch.org/internet/fact-sheet/mobile/; Laura Silver, Smartphone Ownership Is Growing Rapidly Around the World, but Not Always Equally, Pew Research Center (Feb. 5, 2019), https://www.pewresearch.org/global/2019/02/05/smartphone-ownership-is-growing-rapidly-around-the-world-but-not-always-equally/ (accessed July 21, 2022).

2

C. Lee Ventola, Mobile Devices and Apps for Health Care Professionals: Uses and Benefits, 39 Pharm. Ther. 356, 362 (2014).

3

Luke Hutton et al., Assessing the Privacy of mHealth Apps for Self-Tracking: Heuristic Evaluation Approach, 6 JMIR Mhealth Uhealth e185 (2018).

4

Cal. Bus. & Prof. Code, Division 8, Chapter 22 (Internet Privacy Requirements), https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?division=8.&chapter=22.&lawCode=BPC (accessed July 1, 2021).

5

GDPR, Article 13: Information to Be Provided Where Personal Data Are Collected from the Data Subject, GDPR.eu (Nov. 14, 2018), https://gdpr.eu/article-13-personal-data-collected/ (accessed July 1, 2021).

6

Id.; Jennifer Valentino-DeVries et al., Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret, N.Y. Times, Dec. 10, 2018.

7

Esma Aïmeur et al., When Changing the Look of Privacy Policies Affects User Trust: An Experimental Study, 58 Comput Hum Behav 368 (2016).

8

Aleecia M. McDonald & Lorrie Faith Cranor, The Cost of Reading Privacy Policies, 4 J Law Pol’y Inf Soc 540 (2008).

9

Ali Sunyaev et al., Availability and Quality of Mobile Health App Privacy Policies, 22 J Am. Med. Inform. Assoc. e28 (2015); Kevin Litman-Navarro, We Read 150 Privacy Policies. They Were an Incomprehensible Disaster, N.Y. Times, June 12, 2019; Ewa Luger et al., Consent for All: Revealing the Hidden Complexity of Terms and Conditions, in CHI ‘13: Proc. of the SIGCHI Conf. on Hum. Factors in Computing Sys. 2687 (2013); Adam Powell et al., The Complexity of Mental Health App Privacy Policies: A Potential Barrier to Privacy, 6 JMIR Mhealth Uhealth e158 (2018).

10

Jonathan A. Obar & Anne Oeldorf-Hirsch, The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services, 23 Info. Commc'n & Soc'y 128 (2020); Yannis Bakos et al., Does Anyone Read the Fine Print? Consumer Attention to Standard-Form Contracts, 43 J. Leg. Stud. 1 (2014); Aïmeur et al., supra note 7, at 369.

11

Deven McGraw et al., Privacy and Confidentiality in Pragmatic Clinical Trials, 12 Clin. Trials 520, 521 (2015).

12

45 C.F.R. § 46.116 (2020); Joseph Millum & Danielle Bromwich, Informed Consent: What Must Be Disclosed and What Must Be Understood?, 21 Am J Bioeth 46 (2021); National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, The Belmont Report (1979).

13

Aïmeur et al., supra note 7, at 369.

14

Quinn Grundy et al., Data Sharing Practices of Medicines Related Apps and the Mobile Ecosystem: Traffic, Content, and Network Analysis, 364 BMJ l920, 1, 2, 5 (2019).

15

45 C.F.R. § 46.116 (2020); Millum & Bromwich, supra note 12; National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, supra note 12.

16

Gandhi M. & Wang T., Digital Health Consumer Adoption: 2015, Rock Health, https://rockhealth.com/reports/digital-health-consumer-adoption-2015/ (accessed Mar. 12, 2023).

17

Beilinson J., Glow Pregnancy App Exposed Women to Privacy Threats, Consumer Reports Finds (2016), https://www.consumerreports.org/mobile-security-software/glow-pregnancy-app-exposed-women-to-privacy-threats/ (accessed Mar. 12, 2023).

18

Kramer DB, Fu K. Cybersecurity concerns and medical devices: lessons from a pacemaker advisory. J Am Med Assoc. 2017;318(21):2077–2078.

19

Anna C. Mastroianni, Sustaining Public Trust: Falling Short in the Protection of Human Research Participants, 38 Hastings Cent Rep 8, 9 (2008).

20

Off. for Hum. Rsch. Prot., Attachment B - Clarifying Requirements in Digital Health Technologies Research, HHS.gov (2020), https://www.hhs.gov/ohrp/sachrp-committee/recommendations/april-7-2020-attachment-b/index.html (accessed July 21, 2022).

21

FDA, Information Sheet: Guidance for IRBs, Clinical Investigators, and Sponsors, Significant Risk and Nonsignificant Risk Medical Device Studies (2006), https://www.fda.gov/media/75459/download (accessed July 21, 2022).

22

FDA, Mobile Medical Applications: Guidance for Industry and Food and Drug Administration Staff (2015), https://research.unc.edu/wp-content/uploads/sites/61/2016/10/Mobile-Medical-Applications-FDA-Guidance-9-25-2013.pdf; FDA, Policy for Device Software Functions and Mobile Medical Applications: Guidance for Industry and Food and Drug Administration Staff (2022), https://www.fda.gov/media/80958/download (accessed July 21, 2022).

23

Nebeker C., Parrish E.M., Graham S. (2021) ‘The AI-Powered Digital Health Sector: Ethical and Regulatory Considerations When Developing Digital Mental Health Tools for the Older Adult Demographic.’ In: Jotterand F., Ienca M. (eds) Artificial Intelligence in Brain and Mental Health: Philosophical, Ethical & Policy Issues. Advances in Neuroethics. Springer, Cham. https://doi.org/10.1007/978-3-030-74188-4_11

24

M. Behnke, S. Saganowski, D. Kunc and P. Kazienko, ‘Ethical Considerations and Checklist for Affective Research with Wearables,’ in IEEE Transactions on Affective Computing, 2022, doi: 10.1109/TAFFC.2022.3222524.

25

Off. for Hum. Rsch. Prot., Attachment B - Clarifying Requirements in Digital Health Technologies Research, HHS.gov (2020), https://www.hhs.gov/ohrp/sachrp-committee/recommendations/april-7-2020-attachment-b/index.html.

26

Camille Nebeker et al., Ethical and Regulatory Challenges of Research Using Pervasive Sensing and Other Emerging Technologies: IRB Perspectives, 8 AJOB Empir. Bioeth. 266, 272 (2017); Sarah Moore et al., Consent Processes for Mobile App Mediated Research: Systematic Review, 5 JMIR Mhealth Uhealth e126 (2017).

27

Nebeker et al., supra note 26, at 272.

28

Nebeker et al., supra note 26, at 269, 270; Cynthia E. Schairer et al., How Could Commercial Terms of Use and Privacy Policies Undermine Informed Consent in the Age of Mobile Health?, 20 AMA J. Ethics e864, e866 (2018).

29

Off. for Hum. Rsch. Prot., supra note 20.

30

Id.

31

Id.

32

45 C.F.R. § 46 (2020).

33

45 C.F.R. § 46.111 (2020).

35

Off. for Hum. Rsch. Prot., supra note 20.

36

Id.

37

45 C.F.R. § 46.102 (2020).

38

Luke Gelinas et al., Navigating the Ethics of Remote Research Data Collection, 18 Clin. Trials 606, 607 (2021).

39

Id.

40

Grundy et al., supra note 14, at 5.

41

Id.

42

Id.

43

Id.

44

Nebeker et al., supra note 26, at 272; Off. for Hum. Rsch. Prot., supra note 20.

45

21 C.F.R. § 50.20 (2020); 45 C.F.R. § 46.116 (2020).

46

Off. for Hum. Rsch. Prot., supra note 20.

47

Lygeia Ricciardi, Protecting Sensitive Health Information in the Context of Health Information Technology, The Consumer Partnership for eHealth (June 2010), http://go.nationalpartnership.org/site/DocServer/Sensitive-Data-Final_070710_2.pdf?docID=7041.

48

2 C.F.R. § 200.79 (2021).

49

Gregory S. Nelson, Practical Implications of Sharing Data: A Primer on Data Privacy, Anonymization, and De-Identification, SAS Global Users Group Paper IB06, 19 (2015).

50

Id. at 12; 2016 J.O. (L 119).

51

2016 J.O. (L 119); Ira S. Rubinstein & Woodrow Hartzog, Anonymization and Risk, 91 Wash. Law Rev. 703, 710 (2016).

52

2016 J.O. (L 119); Rubinstein & Hartzog, supra note 51, at 710; Raphaël Chevrier et al., Use and Understanding of Anonymization and De-Identification in the Biomedical Literature: Scoping Review, 21 JMIR Mhealth Uhealth e13484 (2019).

53

Nelson, supra note 49, at 19.

54

Rubinstein & Hartzog, supra note 51, at 710, 711.

55

Rubinstein & Hartzog, supra note 51, at 704, 709.

56

Nebeker et al., supra note 26, at 270.

57

45 C.F.R. § 46 (2020).

58

45 C.F.R. § 46.111 (2020).

59

45 C.F.R. §46.104 (2020).

60

Gioacchino Tangari et al., Mobile Health and Privacy: Cross Sectional Study, 373 BMJ n1248, 6 (2021); Kit Huckvale et al., Unaddressed Privacy Risks in Accredited Health and Wellness Apps: A Cross-Sectional Systematic Assessment, 13 BMC Med. 214 (2015).

61

Hutton et al., supra note 3.

62

Grundy et al., supra note 14, at 1, 5.

63

Obar & Oeldorf-Hirsch, supra note 10, at 135–140; Bakos et al., supra note 10, at 19–22; Aïmeur et al., supra note 7, at 369.

64

Sunyaev et al., supra note 9; Litman-Navarro, supra note 9; Luger et al., supra note 9, at 2691; Powell et al., supra note 9.

65

Grundy et al., supra note 14, at 2, 5.

66

45 C.F.R. § 46.107 (2020).

Contributor Information

Luke Gelinas, Advarra, Inc., Columbia, MD, USA; Multi-Regional Clinical Trials Center of Brigham & Women’s Hospital and Harvard (MRCT Center), Boston and Cambridge, MA, USA.

Walker Morrell, Multi-Regional Clinical Trials Center of Brigham & Women’s Hospital and Harvard (MRCT Center), Boston and Cambridge, MA, USA.

Barbara E Bierer, Multi-Regional Clinical Trials Center of Brigham & Women’s Hospital and Harvard (MRCT Center), Boston and Cambridge, MA, USA; Harvard Medical School, Boston, MA, USA; Brigham and Women’s Hospital, Boston, MA, USA.


Articles from Journal of Law and the Biosciences are provided here courtesy of Oxford University Press

RESOURCES