Barnes and colleagues (2024) suggest the use of blockchain technologies combined with generative artificial intelligence (AI) for consent procedures in biobanking. They argue that harnessing these technologies facilitates a “demonstrated consent” that accurately treats consent as a continuous process rather than a one-time decision. Such a mechanism, they argue, combines the benefits of broad and granular (e.g., dynamic) consent. Participants opt into a larger research program, are informed about studies that use their data, and can opt out of studies that are not to their liking—all with the ease of a click.
While we agree with the authors’ analysis of the problems associated with current consent procedures, including the importance of individual-level control and privacy protections, their proposal bears the risk of techno-solutionist tokenism. “Techno-solutionism” refers to the presentation of a seemingly simple, high-tech solution to a complex and socially, politically, and economically intractable problem (Saetra and Selinger 2024). The authors seem to be aware of this issue. They consider the negative environmental impact of blockchain technology, for example, and the risk that technology-based solutions will unjustly—and in some respects ineffectively—replace human-led consent processes. However, a blockchain/generative AI-based system poses three additional research-related harms: (1) entrenching the digital divide; (2) reinforcing a framework of atomistic individualism that hinders the effective collective, solidarity-based oversight that should complement individual-level control; and (3) exacerbating researchers’ (and tech designers’) moral disengagement from research participants and their communities rather than enhancing reflection on harm and responsibility.
The tech divide is alive and kicking
Barnes and colleagues argue that integrating new technologies into consent processes can improve the agency of participants. The proposed blockchain/generative AI-based system, they posit, will allow participants to keep track of how their data are being used and to readjust their consent preferences throughout the process, e.g., by delineating new areas of study for which they do not want their data to be used. While the interoperable data infrastructure of the new model would indeed be an advantage for biobanks, this is arguably not the case for many research participants. The model ignores the heterogeneity of needs and digital capabilities among research participants and the ramifications for the claimed human agency. Simply stated, the ease of a click is not equally easy, accessible, or available to everyone, and the agency that participants gain through this blockchain/generative AI-based system is hardly wholesale. While well-resourced participants may be able to bear this cost for the promise of more agency, the promise also demands time and labor of participants who are short on the former and overburdened by the latter.
As with other technologies, the utility of the platform for research participants is likely to vary by age, socioeconomic status, and geographic location (Raihan et al. 2024). Digital literacy—that is, the ability to navigate digital space safely and effectively—is similarly not equally distributed. These features of the tech divide have direct ramifications for both individual participants and groups of potential participants. Individually, internet connectivity, as well as the availability and functionality of participants’ devices, is likely to limit the ability of at least some participants to follow up on their donated data. On a group level, many underserved populations are directly or indirectly excluded from acquiring digital devices and literacy. For example, the design of digital devices, apps, and websites is often done without a priori consideration of disability accessibility (Botelho 2021), notwithstanding national and international laws requiring otherwise. And while a blockchain/generative AI-based system would presumably be freely available to research participants, our studies indicate that digital inaccessibility would require research participants with disabilities who want to exercise their agency rights to pay out-of-pocket to adjust such platforms to their needs. Importantly, these are not merely technical or practical difficulties to be addressed; they go to the heart of how we think about inclusion and respect for participants in genomic (and other) research. Creating a blockchain/generative AI-based system that does not address access challenges could thus further exacerbate existing inequities, discourage participation, and deny the downstream benefits of continuous consent to those experiencing digital exclusion.
Individualism at the price of solidarity
A second possible harm of a blockchain/generative AI-based system for data sharing lies in its exclusive focus on individual autonomy in bio-data decisions, without regard to the benefits that research participants can gain from group- and community-level exchange and deliberation. While individual consent is important, it cannot address power asymmetries: it needs to be complemented by effective collective oversight. Left unaddressed, the approach by Barnes and colleagues may retain existing problems of consent and its narrowly defined individualistic framework—contrary to the growing recognition of the importance of community voices and potential group harms in genomic studies.
It is well established that many participants have limited understanding and recollection of study information, despite the consent process (Felt et al. 2009), and that consent forms are technocratic artifacts for the recording of consent rather than assurances of meaningful consent procedures (Jacob 2007). Barnes and colleagues’ blockchain/generative AI-based system would exacerbate this issue, as it focuses only on choices that individuals make in isolation. Moreover, while the proposed changes in consent procedures address the currently unaccounted-for possibility that individual preferences may change over time, they account for neither the possibility of group harm to individuals who are not research participants (Chapman et al. 2025), nor “the vast power asymmetries between data processors and data subjects” (Prainsack et al. 2022). Thus, beyond individual consent, a new approach to consent and data sharing must also consider these broader implications and incorporate measures to facilitate both individual-level control and group-level deliberation (Eitenberger et al. 2025). Engaging relevant communities to understand their views on how their data should and should not be used—while also examining how community-based perspectives may differ from those of individuals, and how to reconcile these differences—could enhance protections for both individuals and communities.
Datafication of human agency downgrades researchers’ moral responsibility
Although the proposed blockchain/generative AI-based system holds potential for increased data interoperability and, hence, ease of use for data users, for research participants this technocratic solution would come at the cost of social, interpersonal researcher-participant interaction. Human interaction, however, is crucial for research efforts. Barnes and colleagues themselves stress that an initial in-person interaction should remain in place to ensure that participants indeed provide informed consent—yet without explaining how and whether such in-person interactions would be realized more broadly within their model, where research participants interface with computers. Human interaction is also instrumental for gaining insight into participants’ views, concerns, barriers, and experiences as research participants, and it is indispensable for trust-building, especially with communities that have historically been wronged in research and society (Emmel et al. 2007). Yet the blockchain/generative AI-based system does not seem to distinguish true deliberation from merely recording the preferences of individual participants.
The expectation that a technology-based platform can replace human interaction in engagement about research goals and consent preferences raises other issues. In our study on polygenic risk scores and return of results, we found significant differences in how clinicians and patients/community members view constructive approaches to translational genomic research (Sabatello et al. 2024). Patients/community members indicated the need for ongoing interaction (texts, phone calls, and study-specific support groups) with research staff who are personally known to them. Clinicians, in contrast, preferred automated approaches through the medical records, avatars that send texts, or other online portals.
The ease of a click is unlikely to dissipate the disconnect between these two approaches; if anything, it will exacerbate it. The new technology offers individual-based preferences for the types of studies in which a research participant agrees for their data to be used. However, it is unclear whether it offers research participants the opportunity to explain their choices and, significantly, whether it has a built-in process to ensure that researchers requesting access to these data are made aware of participants’ rationales—especially those of participants who decline consent—and are required to actively engage with these rationales, including their personal responsibility in forging ahead with a potentially harmful study that may not befit the community. As scholars have found (Nichol et al. 2023), such disengagement—i.e., moral distance between actions and harms or responsibility—arises also among AI-tech developers. It is critical that a blockchain/generative AI-based system that promises individual-based agency in decisions to share data for presumably well-intended re-uses also includes safeguards to ensure that declining consent can extend to benefit the larger community.
In reforming consent processes from the ground up, efforts should focus on carefully weighing the risks and benefits of data donation, storage, and use (Prainsack et al. 2022). This requires close collaboration with communities; a “demonstrated consent” cannot and should not be accomplished merely through technological platforms. Techno-solutionist fixes promote efficiency on a large scale but may come at the price of high social, environmental, and financial burdens on some research participants. These underlying issues need to be addressed; new technologies such as the blockchain/generative AI-based system can, and should, serve as a catalyst for larger conversations around solidarity and true participant engagement.
Acknowledgement
This work was supported in part by NIH/NHGRI grant R01HG010868 and a Fulbright Austria Scholarship Grant (2024).
Footnotes
Disclosure Statement
Maya Sabatello is an IRB member of the All of Us Research Program and Barbara Prainsack is Chair of the European Group on Ethics in Science and New Technologies, which advises the European Commission. Their contributions reflect their personal views as academic researchers.
Contributor Information
Magdalena Eitenberger, Center for Precision Medicine and Genomics, Department of Medicine, Columbia University, New York, USA; Department of Political Science, University of Vienna, Vienna, Austria.
Barbara Prainsack, Department of Political Science, University of Vienna, Austria.
Maya Sabatello, Center for Precision Medicine and Genomics, Department of Medicine; Associate Professor of Medical Sciences (in Medical Humanities and Ethics), Division of Ethics, Department of Medical Humanities and Ethics; co-Director, Precision Medicine & Society Seminar Speaker/Workshop, Columbia University, New York, USA.
References
- Botelho Fernando H. F. 2021. “Accessibility to Digital Technology: Virtual Barriers, Real Opportunities.” Assistive Technology 33 (sup1): 27–34. 10.1080/10400435.2021.1945705.
- Chapman Carolyn Riley, Quinn Gwendolyn P., Natri Heini M., Berrios Courtney, Dwyer Patrick, Owens Kellie, Heraty Síofra, and Caplan Arthur L.. 2025. “Consideration and Disclosure of Group Risks in Genomics and Other Data-Centric Research: Does the Common Rule Need Revision?” The American Journal of Bioethics 25 (2): 47–60. 10.1080/15265161.2023.2276161.
- Eitenberger Magdalena, Baugh Mika, McDonald Katherine E., and Sabatello Maya. 2025. “Beyond Individual Responsibility: Group Harms in Genomic (Data-Centric) Research Ethics Require Structural, Justice-Oriented Solutions.” The American Journal of Bioethics 25 (2): 77–79. 10.1080/15265161.2024.2441719.
- Emmel Nick, Hughes Kahryn, Greenhalgh Joanne, and Sales Adam. 2007. “Accessing Socially Excluded People — Trust and the Gatekeeper in the Researcher-Participant Relationship.” Sociological Research Online 12 (2): 43–55. 10.5153/sro.1512.
- Felt Ulrike, Bister Milena D., Strassnig Michael, and Wagner Ursula. 2009. “Refusing the Information Paradigm: Informed Consent, Medical Research, and Patient Participation.” Health (London, England: 1997) 13 (1): 87–106. 10.1177/1363459308097362.
- Jacob Marie-Andrée. 2007. “Form-Made Persons: Consent Forms as Consent’s Blind Spot.” PoLAR: Political and Legal Anthropology Review 30 (2): 249–68. 10.1525/pol.2007.30.2.249.
- Nichol Ariadne A., Halley Meghan C., Federico Carole A., Cho Mildred K., and Sankar Pamela L.. 2023. “Not in My AI: Moral Engagement and Disengagement in Health Care AI Development.” Pacific Symposium on Biocomputing 28:496–506.
- Prainsack Barbara, El-Sayed Seliem, Forgó Nikolaus, Szoszkiewicz Łukasz, and Baumer Philipp. 2022. “Data Solidarity: A Blueprint for Governing Health Futures.” The Lancet Digital Health 4 (11): e773–74. 10.1016/S2589-7500(22)00189-3.
- Raihan Mohammad M. H., Subroto Sujoy, Chowdhury Nashit, Koch Katharina, Ruttan Erin, and Turin Tanvir C.. 2024. “Dimensions and Barriers for Digital (in)Equity and Digital Divide: A Systematic Integrative Review.” Digital Transformation and Society ahead-of-print (ahead-of-print). 10.1108/DTS-04-2024-0054.
- Sabatello Maya, Bakken Suzanne, Chung Wendy K., Cohn Elizabeth, Crew Katherine D., Kiryluk Krzysztof, Kukafka Rita, Weng Chunhua, and Appelbaum Paul S.. 2024. “Return of Polygenic Risk Scores in Research: Stakeholders’ Views on the eMERGE-IV Study.” HGG Advances 5 (2): 100281. 10.1016/j.xhgg.2024.100281.
- Saetra Henrik Skaug, and Selinger Evan. 2024. “Technological Remedies for Social Problems: Defining and Demarcating Techno-Fixes and Techno-Solutionism.” Science and Engineering Ethics 30 (6): 60. 10.1007/s11948-024-00524-x.
