Abstract
Background
There is a growing interest in creating large-scale repositories that store genetic, behavioral, and environmental data for future, unspecified uses. The All of Us Research Program is one example of such a repository. Its participants will get access to their personal data and the results of the studies that used them. However, little is known about what researchers should return to participants and how to return it in a way that participants find valuable and meaningful.
Methods
To better understand the concept of “return of value” and the practice of returning valuable study information, we conducted semi-structured telephone interviews with 44 stakeholders with diverse perspectives on this topic. All interviews were transcribed and coded thematically to identify the most salient themes, to explore differences between returning different types of study results, and to describe differences and similarities in perspectives of different stakeholder groups.
Results
We found that one size does not fit all when it comes to returning value to participants: the decisions about return of results are affected by participant preferences, researchers’ concerns about feasibility, the types of data collected, their level of granularity, and available options for supporting result interpretation.
Conclusions
Our findings suggest that the key to operationalizing return of value and to identifying ways to return valuable information to study participants may be to find a point of equilibrium between criteria that may affect usefulness and feasibility. The point of equilibrium may vary by study, by participants’ backgrounds and preferences, by their health literacy and access to regular healthcare, and by the resources available to professionals controlling the data. Future studies should explore the factors that determine the point of equilibrium between feasibility and usefulness.
Keywords: All of Us Research Program, Data repository, Research ethics, Research participants, Return of results, Return of value
Introduction
Recent advances in our abilities to collect, store, process, share, analyze, and interpret large amounts of different types of data, including genetic information and digital data (e.g., fitness apps, wearable devices), are revolutionizing the way researchers conduct biomedical research and providers deliver care (Belle et al. 2015, Manogaran et al. 2017). In addition to conducting research projects that collect data to test specific hypotheses, there is a growing interest in creating research infrastructures that provide access to large amounts of different types of data, collected not for the purposes of testing a specific hypothesis advanced by a specific research team, but for unspecified future use by a wide range of investigators, including academic and citizen scientists.
The All of Us Research Program (AoURP) – a key element of the Precision Medicine Initiative guided by the National Institutes of Health (NIH) – is the most recent example of a large-scale national effort to leverage advances in genomics and computational sciences to accelerate biomedical discoveries that support the development of tailored treatment and prevention strategies (Lyles et al. 2018). The AoURP is a data repository designed to collect and centrally store information from a million or more individuals who provide genetic, biometric, behavioral health, and wireless sensor data that are then merged with the data from their electronic health record (EHR) (Department of Health and Human Services 2018). Arguably, the AoURP represents the future of “big data” biomedical research, in which large data repositories that include different types of data collected from individuals from diverse social, racial/ethnic, ancestral, geographic, and economic backgrounds, representing different age groups and health statuses, can be used for research that does not focus only on a specific disease. Although such disease-agnostic repositories offer unique benefits to researchers interested in developing personalized treatment options that could work for different populations, their success depends on the willingness of diverse populations, many of whom are currently underrepresented in research, to provide different types of data for future unspecified uses. A key assumption underlying such initiatives is that participants will be willing to enroll, remain engaged, and provide additional data if needed (Khodyakov et al. 2018).
To generate participant interest and long-term engagement in the AoURP, this initiative plans to give participants their information back and to share the results of studies that used their data (All of Us Research Program 2018). By doing so, the AoURP is following the principles of participant-centered research (Aungst et al. 2003) and trying to create a value proposition for participant enrollment and long-term engagement. A recent systematic review of the literature on data sharing suggests that the general public may be willing to share genomic data and samples for future unspecified use (Garrison et al. 2016). Nonetheless, it is not clear what may motivate participants to share genetic, physical measurement, EHR, claims, sensor, lifestyle, attitudinal, and environmental data for unspecified future research uses and what benefits they may want to receive in return for their participation. For instance, a 2015 survey of U.S. adults showed that people have different motivations for sharing personal health data. Survey participants cited learning about their own health as the most important incentive, followed by receiving payment for their time and obtaining health care. Roughly three-quarters of survey participants stated that lab and genetic testing results are the types of information they want to receive. Sixty percent stated that they would like to receive information about other research studies related to their health, and 57% stated they wanted to know how their health compared to the health of others in the study (Kaufman et al. 2016). With such a range of opinions, studies may need guidance on what results should be returned to participants and how.
Appropriate procedures for returning research results are an area of ethical uncertainty, especially in genetic research (Jarvik et al. 2014, Burke, Evans, and Jarvik 2014, McGuire et al. 2013, McGuire, Caulfield, and Cho 2008, Fabsitz et al. 2010, Haga and Beskow 2008, McEwen, Boyer, and Sun 2013). Various working groups have issued guidelines for reporting individual and/or aggregate research results to study participants (Fabsitz et al. 2010, Prucka et al. 2015), highlighting the importance of returning clinically valid, actionable, and reliable research results and secondary findings to participants who have elected to receive them. The National Academies of Sciences, Engineering, and Medicine (NASEM) recently published guidance for a new research paradigm detailing when it is appropriate to return individual research results to participants to ensure research transparency and participant engagement, while complying with the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule, the Clinical Laboratory Improvement Amendments of 1988 (CLIA), and the Common Rule (National Academies of Sciences 2018). The guidance includes a conceptual framework for decisions about returning individual research results, with “value to participant” and “feasibility” for a project as key components. According to the NASEM framework, the justification for the return of results is stronger when both the value and the feasibility of returning results are high (National Academies of Sciences 2018).
The NASEM guidance establishes a foundation for further development of the concept of return of value, which can be used to consider what information might be valuable for participants to receive, including results that may “inform clinical decision making, life or reproductive planning, and other decisions that may affect health and quality of life,” as well as those that “may have personal value to participants by providing a newfound understanding about a health condition” (National Academies of Sciences 2018). Because studies like the AoURP collect more than genetic data, they will have more diverse kinds of results that could be returned to participants and may therefore be better able to meet the wide range of preferences for the types of results participants want to receive and consider to be of personal value. At the same time, the range and granularity of data collected may increase the number of challenges related to the ethical return of research results. Key questions in this uncharted territory include the type, timing, and means of returning results to participants (Wong, Hernandez, and Califf 2018).
In this manuscript, we expand on the NASEM framework by describing stakeholders’ perspectives on what information collected and research results obtained may be repurposed and shared with participants of studies like the AoURP, and how, to ensure research transparency and participant engagement. In so doing, we take a step towards further unpacking the concept of return of value, focusing specifically on what researchers may need to do with the information they collect from participants to make it useful to them. In particular, we distinguish between the types of data, levels of information aggregation and granularity, and levels of support for data interpretation that have been considered in the literature as important variables affecting options for returning individual results (Wolf 2013, Wong, Hernandez, and Califf 2018).
Methods
Between June and December 2017, we conducted a series of semi-structured telephone interviews with 44 experts and stakeholders about their perspectives on data sharing and return of results, among other topics, in large-scale initiatives such as the AoURP. Our goal was to solicit the perspectives of individuals who had previous experiences with human subjects’ protection, data privacy and security, biobanks, stakeholder-engaged projects, and longitudinal studies. Our interviewees represented universities, health provider organizations, community organizations, non-profit research institutes, and private companies.
To identify the most knowledgeable interviewees, we reached out to (1) members of relevant AoURP working groups, (2) authors of recent publications on the ethics of genetic research, (3) individuals representing community-based organizations attending stakeholder engagement conferences about the AoURP, and (4) individuals involved with commercial and research biobanks. We used a purposive approach to sampling participants (Ritchie et al. 2013, 113–117) to assemble a maximum variation sample (Sandelowski 1995) that included individuals who had relevant experience but represented a range of different perspectives on the study topic, prioritizing those who might represent the perspective of more than one of the four groups above. We stopped our recruitment efforts once we achieved data saturation, or informational redundancy, meaning that our interviews were no longer yielding relevant new information requiring the creation of new codes in the codebook (see below) (Saunders et al. 2018).
A team of three experienced qualitative researchers conducted all interviews by telephone using a semi-structured interview guide informed by the literature on return of results and the 2015 survey of the US population about potential AoURP design features (Kaufman et al. 2016). Verbal consent was obtained prior to the start of each interview. The interview protocol included open-ended questions about what responsible return of results meant, the project’s responsibilities to return different types of results, and the pros and cons of different options for returning results, among others. All interviews were audio recorded and transcribed verbatim. Each stakeholder was given a random participant ID (PID) number. Both the RAND and Scripps Institutional Review Boards (IRBs) determined this study to be exempt from review.
The researchers jointly developed the codebook, which included codes derived deductively from the main interview questions and inductively from the responses provided during the interviews. The codebook included codes for the pros and cons of returning different types of data, the levels of data granularity, and the level of support to assist with result interpretation. After the main codes were developed, two coders independently coded two transcripts. They reviewed the coded transcripts and discussed a small number of disagreements until reaching consensus. One coder coded the remaining interview transcripts; the other coder reviewed all coded transcripts to ensure adherence to the codebook. Once all interview transcripts were coded, we identified the most salient themes, looked for differences between returning study results that relied on survey, biometric, and genetic data, and searched for differences and similarities in stakeholder perspectives on key themes.
Results
Participants
Out of 77 individuals whom we approached, 44 (57%) completed a one-hour interview. Females were more likely than males to respond to our interview invitation (65% vs. 37%, p=.019).
The majority of our interviewees were female; we interviewed more project administrators than researchers. Roughly one-third were university employees, and one quarter were employed by a health provider organization. Table 1 describes the types of stakeholders we interviewed.
Table 1.
Stakeholder Descriptions
| Types of Organizations | PhD Level Researchers, Female | PhD Level Researchers, Male | Project Administrators, Female | Project Administrators, Male | Total |
|---|---|---|---|---|---|
| Health Provider Organization | 2 | 0 | 8 | 1 | 11 |
| University | 9 | 0 | 6 | 0 | 15 |
| Non-Profit Research Institute | 2 | 0 | 2 | 2 | 6 |
| Private Entity | 0 | 3 | 3 | 1 | 7 |
| Community Organization | 0 | 0 | 3 | 2 | 5 |
| Total | 13 | 3 | 22 | 6 | 44 |
Below, we present the results of our interviews, organized around the main theme of what research stakeholders think may make results more valuable to participants. Overall, interviewees indicated that the information participants may find valuable may depend on the data used to generate it (e.g., genetic vs non-genetic data), the level of information granularity returned to participants (e.g., individual vs aggregate results), and the level of support provided for interpreting results (e.g., no support, contextualization of results, support from a geneticist or a physician). In presenting the results of our thematic analyses, we highlight interviewees’ views of the pros and cons of different options for returning results. We also cite illustrative statements from interviewees, who are referred to only by a participant ID (PID) code.
Level of Information Granularity
Returning Individual-Level Data
Returning individual-level data in studies like the AoURP means giving individual participants access to their own data, including responses to all survey questions, biometric information (e.g., blood pressure, waist circumference, height, weight), as well as genome sequences, if applicable. Almost all of our interviewees (37 out of 44) did not think that returning individual-level data to participants would be useful without giving them some support for interpretation (see below) because participants may not know what to do with these data. Providing support for interpretation, however, was perceived as not feasible for a large-scale project: “I think we want to provide people whatever is relatively easy to provide, not necessarily personalized” (PID12). Indeed, return of results should not be about “returning information that requires interpretation to individuals, unless we feel comfortable that we can provide that interpretation and provide the resources needed to do something with that” (PID29).
Those who did not think that returning individual data was a good option also felt that doing so could divert resources from the primary goal of producing generalizable knowledge. In addition, some interviewees suggested that returning individual data could lead to diagnostic misconception among participants, who might think that the primary purpose of the research is producing personalized, actionable clinical information rather than yielding generalizable knowledge:
People will sign up for this study just to get their individual results back…This is a bad idea. It is a bad use of researchers’ resources. But if people truly understand that this initiative is about gathering data from a million people to try and learn things that will help a lot of patients, then what people are agreeing to is contributing to that generation of generalizable knowledge (PID13).
A smaller number of interviewees, however, argued that participants should have “access to every piece of data about themselves” (PID16), whether or not researchers know what participants may “do with it.” In discussing genomic data, some interviewees argued that access to data “is the [fundamental] right” that all participants should have because “samples are taken from their bodies” (PID56). Most stakeholders who favored returning all data to research participants had previously participated in similar research initiatives.
To summarize, our interviewees generally did not consider returning individual-level data to be either of high value to participants or feasible from the researcher perspective. Nonetheless, some viewed the return of individual-level data, especially genomic data, as a moral obligation of researchers.
Returning Aggregate-Level Data
Another option is to return aggregated survey, biometric, or genetic results. Those supporting the return of aggregate-level data to participants felt that, for a large research program, doing so was more feasible than returning individual study findings because it could often be accomplished by sharing overall study findings. As interviewee PID10 put it,
I consider that [returning] aggregate data [is] very similar to [returning] the rest of the [study] results. So, I say the aggregate data is very similar in nature because some of it is only really usable in aggregate form.
All our interviewees agreed that overall study findings should be shared with participants; doing so seemed to them to be feasible and to add value. Interviewees identified returning aggregate-level results as appropriate because it may require little or no additional interpretation or packaging of results included in study publications. As interviewee PID18 said:
[Those] who do the community engagement studies work into the studies that we have Town Hall meetings, publish in the small local newspapers, and write it in a way that people understand. But I think that everyone should be asked to take some accountability for getting the information back…even if it’s in the form of executive summary.
Interviewee PID20 suggested that “all the papers [should be published as] ‘open access’ [articles] and [there should be] a variety of ways to communicate those results.”
Level of Support for Interpreting Results
Although sharing aggregate data is useful because it increases comprehension of study findings, some of our interviewees felt that sharing individual-level data may not necessarily be valuable without providing some support to help participants interpret what the data mean. They indicated that this is particularly true of genetic data. Indeed, in discussing options for supporting interpretation of individual results, our interviewees distinguished between genetic and non-genetic (survey and biometric) data.
Providing No Interpretation
Raw non-genetic data
Some interviewees said that returning raw non-genetic data to participants is an easy, “non-controversial” (PID36), and highly feasible option that could spark their interest in their own health and encourage long-term engagement with the study. As interviewee PID32 put it, “When [AoURP participants] come in and they get their physical measurements done, they get a sheet with their weight, height, all of the physical measurements that they had done, their BMI…their blood pressure. And they can choose to take that document to their doctor” to help better understand what it means to them. Doing so can also create an impression that participants are getting something back right away in return for providing their data, which could be an important engagement tactic. It is worth noting that those who supported the return of raw, non-genetic data were project administrators from a range of organizations, including non-profit organizations, universities, and health provider organizations.
However, other interviewees said that returning raw non-genetic data was of minimal value and potentially confusing. One interviewee even indicated that some participants could misinterpret routine biometric screening results. As interviewee PID23 said:
Even being able to return…your blood pressure, which seems straight up, isn’t always, because what if their blood pressure was in those wiggly ranges where it’s a little high but not high enough?
PID39 indicated that misinterpretation of data would be a particular concern for underserved populations with low health literacy or without stable access to health care because
You just don’t know who you’re talking to…so if you start sending information out and you don’t know who you’re talking to or how they might use that information, I’m not sure that that’s the best way to do that.
Those who believed that returning raw, non-genetic data could be problematic included both researchers and project administrators who had previous experience with AoURP-like initiatives.
Raw genomic data
Some interviewees felt that returning raw genomic data may be more valuable to participants than returning their biometric screening data. Our interviewees mentioned a number of benefits of returning raw genetic data, including the expectation that although interpreting sequenced genomic data may not be possible today, having one’s full genome sequenced may become more valuable over time. In general, those supporting access to uninterpreted individual genetic data had participated in similar initiatives and commented on the value of having access to this information. Some stated that AoURP-like studies should give participants their raw genomic data so that they can take these data elsewhere for interpretation: “We are going to give you your data, you are going to be able to see it, you are going to be able to talk about it with your healthcare provider [about what they mean to you], and we stop there” (PID20).
Other interviewees, most of whom were researchers, saw returning uninterpreted genetic data as problematic because of the unintended consequences that can result from misinterpretation and the need for more support to help participants understand the implications of these data. As interviewee PID15 said:
Looking at things like the discovery of the BRCA gene and the whole Angelina Jolie effect* that people were talking about, there was a lot of misunderstanding about what those results meant. Some people thought that if you tested positive for the BRCA gene, you were definitely going to get cancer and you were definitely going to die.
Indeed, those disagreeing with returning raw genetic information not only saw it as being of very little value for those who cannot interpret it, but also suggested that it could lead to potentially harmful decision-making.
Contextualizing Individual Results
Contextualizing survey results
Contextualizing individual non-genetic results was generally deemed more valuable and engaging than providing raw data. Interviewees also felt that it was not difficult to do. For example, interviewee PID36 suggested that understanding how an individual’s results compare to results of other study participants could be useful:
Let’s just say you’ve just told us that you’re a two-pack a day smoker. It would be good to see that in context of the overall cohort…the folks enrolled in All of Us, 15% are smokers at your level. Couple few percent smoke more, a lot smoke less or not at all. Being able to contextualize something like that or maybe even more importantly things that are not about habits but just about health, so weight or blood pressure…a little bit of contextualization goes a long way and helps people understand where they are compared to other folks.
Other interviewees suggested that researchers could include information about relative risk factors for individuals who are similar to a given participant (e.g., the relative risk of dying from smoking is higher for men than for women), which is different from stating what a participant’s personal risk factor is. Using environmental variables can also help explain health outcomes. According to interviewee PID15,
Providing some additional context in terms of how your environment can impact your health in terms of obesity, diabetes…[may increase the study’s] value to the participants and make that more useful to them in understanding their own health.
Contextualizing biometric and genetic data
Our interviewees felt that contextualization of biometric information may include a discussion of the health implications associated with biometric results that fall into certain ranges (e.g., indicating the range of normal blood pressure readings) or a comparison with other participants with the same demographics whose results fall into a given range (e.g., saying that a given participant is taller than 40% of study participants). In turn, contextualization of genetic data may include a discussion of the evidence on genetic markers in the sequenced data that are known to be associated with certain diseases.
While putting survey results into context may be straightforward, contextualizing genetic data may create privacy risks and lead participants to take unwarranted actions. For example, interviewee PID18 noted that putting personal results into context raises issues of privacy and personal implications, especially for certain populations, such as the:
American Indian community who have said, ‘If you are going to do genetic testing, what if you find out that I am not Indian, or I am not Indian enough?’ It has obvious economic implications for somebody who is living on the reservation and accessing services [and] has huge cultural implication.
Interviewee PID39 was also hesitant to return contextualized genetic results to participants:
…because you just don’t know how that end user is going to interpret or try to use that information, and it may provoke, for example, people running to their doctors with all kinds of questions that may be appropriate and may not be appropriate, but it raises a sense of alarm when there really isn’t.
Providing Support for Interpreting Results
Our interviewees felt that providing support for interpreting study results would increase their value to participants. There was consensus that physicians are well equipped to help participants interpret their non-genetic results. For example, some interviewees suggested that physicians could look at participant survey responses and provide feedback to participants about how different responses related to their environment, lifestyle, or behavior may influence their health. For biometric results, physicians could use participants’ age, gender, and race to discuss their weight, height, blood pressure, and waist circumference to better help them understand the health implications of those results.
Moreover, while many interviewees felt that those participating in AoURP-like studies “should have the ability to contact someone to discuss” their genetic results, they disagreed about whether this should be done by genetic counselors or by physicians. In general, genetic counselors were considered to be better equipped than physicians to help participants interpret genetic results. Providing the support of a geneticist in person, over the phone, or electronically can help participants better understand the health implications of their genetic sequencing. Indeed, interviewee PID06 argued that:
Genetic counselors are the professionals who are in the best position to be able to explain genetic components…the program has a responsibility to provide that environment.
Our interviewees who favored providing the support of a geneticist tended to be researchers and project administrators who had previously participated in similar initiatives.
Some, however, voiced concerns about the feasibility of providing the support of geneticists, including the high cost of providing genetic counseling, the lack of sufficient genetic counselors, and the potentially poor fit of genetic counselor training for interpreting research results. Moreover, some stakeholders felt that genetic counselors may not know participants’ medical histories:
It’s more problematic when you have somebody who does not have a relationship with the patient or…is not familiar with their full medical history…to interpret results for them. It confuses the issue for participants, and it makes it harder if the program tells them one kind of generic interpretation but their doctor is actually telling them something different (PID25).
Some of our interviewees were skeptical about the feasibility of engaging physicians in interpreting genetic results. Interviewee PID34 mentioned that physicians “may not necessarily even be equipped to interpret genomic data” and that they often think that interpreting genetic information is “too much for them to take on.” Interviewee PID41 also worried that physicians may not even know what to do with the results of genetic tests that just say:
‘Oh, we found this risk gene; you should talk to your doctor,’ but then not giving them any context with which to talk to their doctor and then putting the doctor in the position of saying, ‘Well, I don’t know what to do with this information.’
Most of those who voiced these concerns were researchers and project administrators with prior experience in similar initiatives.
Discussion
Providing participants access to information about their health and sharing results of the studies that used their data may be a strong motivator to join and stay involved with an initiative like the AoURP and reflects the “larger cultural transition toward more engagement, collaboration, and transparency between investigators and research participants” (National Academies of Sciences 2018). Nonetheless, there is a lack of guidance on how research repositories should share individual data and/or individual-level results with participants. In such a context of ethical uncertainty, it may be important to consider the ways in which researchers can feasibly return the information and results to participants so that the latter can perceive a return of value from their participation in initiatives like the AoURP.
Many of our interviewees said that researchers had a moral obligation to return all data to participants as a way of ensuring research transparency. However, they had quite different opinions about what kinds of data could be feasibly returned to participants and what value different types of data might have. For example, interviewees agreed that aggregate-level data, which might take the form of overall study findings, could readily be shared with participants; nonetheless, aggregate data may not have much value to participants. Providing copies of overall study findings to participants is a standard practice in community-based participatory research (Minkler 2004, Hall 1992, Ansley and Gaventa 1997, Israel et al. 1998, Chen et al. 2010), and this practice may be relatively feasible even for large-scale studies. Indeed, a growing number of researchers express support for returning aggregate results to participants in genomic research as a gesture of gratitude for their participation, to demonstrate accountability for completion of the project, and to facilitate public understanding of, and trust in, science (Beskow et al. 2012). While providing overall study findings may help a given community work towards achieving a specific goal, seeing only aggregate results may have limited value for individual participants, who might find a personalized report containing their own health information to be more valuable, especially if they participate in a study about precision medicine and personalized health.
The above examples seem to validate the importance of the NASEM conceptual framework’s choice of feasibility and value to participants as two key variables to consider when making choices about returning results. Our results, however, suggest that the NASEM framework can be augmented by acknowledging that the results that participants may find valuable may depend on the type of data used to generate them (e.g., genetic vs non-genetic data), the level of information granularity returned to participants (e.g., individual vs aggregate results), and the level of support provided for interpreting results (e.g., no support, contextualization of results, support from a geneticist or a physician).
In general, our interviewees felt that the type of data returned may affect the perceived value of results. We found that returning survey findings may not be as valuable to participants as other types of information, but that the perceived value may vary according to participants’ health literacy and/or access to routine medical care. Reports of blood pressure or body mass index (BMI) may be of particular interest to underserved populations who may not seek medical care on a regular basis. Moreover, interviewees indicated that the value of the type of data returned can depend on the current state of scientific knowledge available to contextualize the results. Results from genetic sequencing could be of high value and have health importance for participants if the body of evidence for the relevance of particular genomic markers becomes more comprehensive and robust in the future.
Our interviewees also indicated that the degree of granularity of the information and the availability of support for interpreting results shape both feasibility and perceived value to participants. The majority of those we interviewed felt that, although valuable, returning individual-level genetic data would not be very feasible from a research perspective and could distract research efforts from their primary goal of producing generalizable information. They also expressed concern that participants might not know how to interpret raw results of biometric and genetic screenings, noting that this might be particularly true for participants with low health literacy or those without insurance coverage or a regular health care provider to explain what the data meant for their health.
Our interviewees also thought that study participants would find it useful to know how their non-genetic individual results compared with those of other study participants. Such contextualization of their data could help participants understand more clearly how the environment and their own health habits affect their wellbeing. Stakeholders did not view contextualizing survey data as controversial, but contextualizing genetic data raised privacy issues and other cultural concerns, specifically that certain groups may have higher risk factors for certain diseases, which may lead to stigmatization. Moreover, many interviewees were not confident that physicians are the best sources of help with interpreting genetic information, but they noted the shortage of genetic specialists who may be better positioned to interpret genetic data appropriately. Given the AoURP’s scope, reliance on physicians to interpret individual study findings may also prove problematic, especially for participants who sign up for a study via a website and may not have a primary care physician who can help with result interpretation, or if providers start feeling that the program is “diverting attention away from the patient base that you have an obligation to care for” (PID37).
We also found that previous experiences and professional backgrounds of our interviewees might have affected their perceptions of what results might be of value to participants. To illustrate, among our interviewees, researchers in particular feared that returning raw genetic data might lead to unintended consequences if patients misinterpret the results. They argued that providing support to help participants understand their individual results is a best practice supported in the literature that could be accomplished by using plain language, offering different ways of obtaining additional information, explaining how participants’ own results compare to those of others, providing relative risks related to participants’ socio-cultural-environmental surroundings, or providing access to a geneticist or clinician to help participants gain a deeper understanding of genetic results (Jarvik et al. 2014, McEwen, Boyer, and Sun 2013, Beskow and Burke 2010, Wolf 2013, National Academies of Sciences 2018).
Although our study provides empirical data to support and augment the NASEM’s framework, it has several limitations. First, the majority of our interviewees were women, who were also more likely than men to respond to our interview invitation. Although we did not see any substantial differences in perspectives between the men and women we interviewed, future studies on this topic should oversample men.
Second, we did not gather the perspective of individuals participating in studies like the AoURP, and our interviewees may have erroneous ideas about what would be valuable to participants. Nonetheless, all our interviewees were eligible to participate in the AoURP and thus could be considered potential participants. Finally, although helpful for the conceptual development of the notion of return of value, our results may not be generalizable to another group of research stakeholders.
While not without limitations, the purposive, maximum variation sampling approach we used is standard in qualitative research. It allowed us to identify the range of and variation in stakeholder views on what data should and can be returned to study participants, based on the type of data, levels of data granularity, and the support available to participants for interpreting the data. For example, aggregate study results are relatively straightforward to return, but stakeholders did not view them as particularly useful to participants. In contrast, raw genetic data could be useful to participants in the future, but returning them to participants was fraught with concerns about privacy and misinterpretation that affect what data interpretation support could be offered to participants.
Conclusion
In this study, we highlighted the perspectives of research and research management stakeholders on the return of results in AoURP-like studies, which we think will be the future of research. An important message emerged from our interviews: one size does not fit all when it comes to returning value. Our findings suggest that the key to operationalizing return of value may be to find a point of equilibrium between criteria that affect usefulness and feasibility. In addition to the NASEM conceptual framework’s prioritization of feasibility and value to participants, the type of data, its level of granularity, and the options for supporting result interpretation should be carefully considered when determining what results should be offered to participants, especially for large-scale initiatives designed to recruit diverse participants who may not have health insurance or a primary care provider. The point of equilibrium may vary by study, by participants’ backgrounds and preferences, by their health literacy and access to regular healthcare, and by the resources available to professionals controlling the data.
Future studies should empirically test the utility of the augmented NASEM framework for identifying options for returning value to participants, using data collected from different research stakeholders, including research participants. Doing so can help identify best practices for returning results to participants and explore the factors that determine the point of equilibrium between feasibility and usefulness. With the creation of large data repositories like the AoURP and the increased engagement of participants in research as study partners (Sabatello and Appelbaum 2017), it is important to understand the conditions under which different options for returning results may work best, what factors may make the return of results easier or more difficult, and what educational interventions for participants and their healthcare providers may be needed to help them better understand the clinical implications of research results. Increasing participants’ and clinicians’ research literacy may enable the exchange of information during clinical encounters and facilitate the delivery of evidence-based, patient-centered, and personalized care. And while much has been learned about return of results from genomics-focused research, we know much less about the ethical, legal, and social implications of returning results from studies that combine genomic and non-genomic data sources (e.g., sensor and social network data).
ACKNOWLEDGMENTS
The authors would like to thank all study participants for sharing their perspectives, Steven Steinhubl for useful suggestions on earlier versions of this manuscript, Mary Vaiana for editorial assistance, Ingrid Maples for helping with reference formatting, and two anonymous reviewers for their careful reviews of the manuscript.
FUNDING: This work was supported by the National Institutes of Health under Grant 1U24OD023176-01.
Footnotes
CONFLICTS OF INTEREST: The authors report no conflicts of interest.
ETHICAL APPROVAL: Both the RAND and Scripps Institutional Review Boards (IRBs) determined this study to be exempt from review.
* This interviewee is referring to patients making rash clinical decisions, such as getting a double mastectomy, based on genome sequencing results indicating elevated risk.
References
- All of Us Research Program. 2018. “All of Us Research Program - Operational protocol.” National Institutes of Health. Accessed October 1, 2018. https://allofus.nih.gov/sites/default/files/aou_operational_protocol_v1.7_mar_2018.pdf.
- Ansley Fran, and Gaventa John. 1997. “Researching for democracy & democratizing research.” Change 29 (1):46–53.
- Aungst Jessica, Haas Amy, Ommaya Alexander, and Green Lawrence W. 2003. Exploring challenges, progress, and new models for engaging the public in the clinical research enterprise: Clinical research roundtable workshop summary. Washington, DC: National Academies Press.
- Belle Ashwin, Thiagarajan Raghuram, Soroushmehr SM, Navidi Fatemeh, Beard Daniel A, and Najarian Kayvan. 2015. “Big data analytics in healthcare.” BioMed Research International 2015.
- Beskow Laura M, and Burke Wylie. 2010. “Offering individual genetic research results: Context matters.” Science Translational Medicine 2 (38):1–5.
- Beskow Laura M, Burke Wylie, Fullerton Stephanie M, and Sharp Richard R. 2012. “Offering aggregate results to participants in genomic research: Opportunities and challenges.” Genetics in Medicine 14 (4):490–96.
- Burke Wylie, Evans Barbara J, and Jarvik Gail P. 2014. “Return of results: Ethical and legal distinctions between research and clinical care.” American Journal of Medical Genetics Part C: Seminars in Medical Genetics 166 (1):105–111.
- Chen Peggy G, Diaz Nitza, Lucas Georgina, and Rosenthal Marjorie S. 2010. “Dissemination of results in community-based participatory research.” American Journal of Preventive Medicine 39 (4):372–378.
- Department of Health and Human Services. 2018. “National Institutes of Health: All of Us Research Program.” Accessed May 1, 2018. https://allofus.nih.gov/about/about-all-us-research-program.
- Fabsitz Richard R, McGuire Amy, Sharp Richard R, Puggal Mona, Beskow Laura M, Biesecker Leslie G, Bookman Ebony, Burke Wylie, Burchard Esteban Gonzalez, and Church George. 2010. “Ethical and practical guidelines for reporting genetic research results to study participants: Updated guidelines from a National Heart, Lung, and Blood Institute working group.” Circulation: Genomic and Precision Medicine 3 (6):574–580.
- Garrison Nanibaa’ A, Sathe Nila A, Antommaria Armand H Matheny, Holm Ingrid A, Sanderson Saskia C, Smith Maureen E, McPheeters Melissa L, and Clayton Ellen W. 2016. “A systematic literature review of individuals’ perspectives on broad consent and data sharing in the United States.” Genetics in Medicine 18 (7):663–671.
- Haga Susanne B, and Beskow Laura M. 2008. “Ethical, legal, and social implications of biobanks for genetics research.” Advances in Genetics 60:505–544.
- Hall Budd L. 1992. “From margins to center? The development and purpose of participatory research.” The American Sociologist 23 (4):15–28.
- Israel Barbara A, Schulz Amy J, Parker Edith A, and Becker Adam B. 1998. “Review of community-based research: Assessing partnership approaches to improve public health.” Annual Review of Public Health 19 (1):173–202.
- Jarvik Gail P, Amendola Laura M, Berg Jonathan S, Brothers Kyle, Clayton Ellen W, Chung Wendy, Evans Barbara J, Evans James P, Fullerton Stephanie M, and Gallego Carlos J. 2014. “Return of genomic results to research participants: The floor, the ceiling, and the choices in between.” The American Journal of Human Genetics 94 (6):818–826.
- Kaufman David J, Baker Rebecca, Milner Lauren C, Devaney Stephanie, and Hudson Kathy L. 2016. “A survey of U.S. adults’ opinions about conduct of a nationwide Precision Medicine Initiative® cohort study of genes and environment.” PLoS One 11 (8):e0160461.
- Khodyakov Dmitry, Bromley Elizabeth, Evans Sandra Kay, and Sieck Katharine. 2018. “Best practices for participant and stakeholder engagement in the All of Us Research Program.” RAND Corporation. https://www.rand.org/pubs/research_reports/RR2578.html.
- Lyles Courtney R, Lunn Mitchell R, Obedin-Maliver Juno, and Bibbins-Domingo Kirsten. 2018. “The new era of precision population health: Insights for the All of Us Research Program and beyond.” Journal of Translational Medicine 16 (1):211.
- Manogaran Gunasekaran, Thota Chandu, Lopez Daphne, Vijayakumar V, Abbas Kaja M, and Sundarsekar Revathi. 2017. “Big data knowledge system in healthcare.” In Internet of Things and Big Data Technologies for Next Generation Healthcare, 133–157. Springer.
- McEwen Jean E, Boyer Joy T, and Sun Kathie Y. 2013. “Evolving approaches to the ethical management of genomic data.” Trends in Genetics 29 (6):375–382.
- McGuire Amy L, Caulfield Timothy, and Cho Mildred K. 2008. “Research ethics and the challenge of whole-genome sequencing.” Nature Reviews Genetics 9 (2):152.
- McGuire Amy L, Robinson Jill Oliver, Ramoni Rachel B, Morley Debra S, Joffe Steven, and Plon Sharon E. 2013. “Returning genetic research results: Study type matters.” Personalized Medicine 10 (1):27–34.
- Minkler Meredith. 2004. “Ethical challenges for the “outside” researcher in community-based participatory research.” Health Education & Behavior 31 (6):684–697.
- National Academies of Sciences, Engineering, and Medicine. 2018. Returning individual research results to participants: Guidance for a new research paradigm. Washington, DC: The National Academies Press.
- Prucka Sandra K, Arnold Lester J, Brandt John E, Gilardi Sandra, Harty Lea C, Hong Feng, Malia Joanne, and Pulford David J. 2015. “An update to returning genetic research results to individuals: Perspectives of the industry pharmacogenomics working group.” Bioethics 29 (2):82–90.
- Ritchie Jane, Lewis Jane, McNaughton Nicholls Carol, and Ormston Rachel. 2013. Qualitative research practice: A guide for social science students and researchers. London: SAGE Publications Ltd.
- Sabatello Maya, and Appelbaum Paul S. 2017. “The precision medicine nation.” Hastings Center Report 47 (4):19–29.
- Sandelowski Margarete. 1995. “Sample size in qualitative research.” Research in Nursing & Health 18 (2):179–183.
- Saunders Benjamin, Sim Julius, Kingstone Tom, Baker Shula, Waterfield Jackie, Bartlam Bernadette, Burroughs Heather, and Jinks Clare. 2018. “Saturation in qualitative research: Exploring its conceptualization and operationalization.” Quality & Quantity 52 (4):1893–1907. doi: 10.1007/s11135-017-0574-8.
- Wolf Susan M. 2013. “Return of individual research results and incidental findings: Facing the challenges of translational science.” Annual Review of Genomics and Human Genetics 14:557–577.
- Wong Charlene A, Hernandez Adrian F, and Califf Robert M. 2018. “Return of research results to study participants: Uncharted and untested.” JAMA 320 (5):435–436.