[Preprint]. 2024 Sep 28:arXiv:2410.12793v1. [Version 1]

Environment Scan of Generative AI Infrastructure for Clinical and Translational Science

Betina Idnay 1, Zihan Xu 2, William G Adams 3, Mohammad Adibuzzaman 4, Nicholas R Anderson 5, Neil Bahroos 6, Douglas S Bell 7, Cody Bumgardner 8, Thomas Campion 2,51, Mario Castro 9, James J Cimino 10, I Glenn Cohen 11, David Dorr 4, Peter L Elkin 12, Jungwei W Fan 13, Todd Ferris 14, David J Foran 15, David Hanauer 16, Mike Hogarth 17, Kun Huang 18, Jayashree Kalpathy-Cramer 19, Manoj Kandpal 20, Niranjan S Karnik 21, Avnish Katoch 22, Albert M Lai 23, Christophe G Lambert 24, Lang Li 25, Christopher Lindsell 26, Jinze Liu 27, Zhiyong Lu 28, Yuan Luo 29, Peter McGarvey 30, Eneida A Mendonca 31, Parsa Mirhaji 32, Shawn Murphy 33, John D Osborne 34, Ioannis C Paschalidis 35, Paul A Harris 36, Fred Prior 37, Nicholas J Shaheen 38, Nawar Shara 30, Ida Sim 39, Umberto Tachinardi 40, Lemuel R Waitman 41, Rosalind J Wright 42, Adrian H Zai 43, Kai Zheng 44, Sandra Soo-Jin Lee 45, Bradley A Malin 36, Karthik Natarajan 1, W Nicholson Price II 46, Rui Zhang 47, Yiye Zhang 2, Hua Xu 48,*, Jiang Bian 49,*, Chunhua Weng 1,50,*, Yifan Peng 2,51,*
PMCID: PMC12478420  PMID: 41031081

Abstract

This study reports a comprehensive environmental scan of the generative AI (GenAI) infrastructure in the national network for clinical and translational science across 36 institutions supported by the Clinical and Translational Science Awards (CTSA) Program led by the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH) in the United States. With the rapid advancement of GenAI technologies, including large language models (LLMs), healthcare institutions face unprecedented opportunities and challenges. This research explores the current status of GenAI integration, focusing on stakeholder roles, governance structures, and ethical considerations, by administering a survey among leaders of health institutions (i.e., representing academic medical centers and health systems) to assess institutional readiness and approaches toward GenAI adoption. Key findings indicate a diverse range of institutional strategies, with most organizations in the experimental phase of GenAI deployment. The study highlights significant variations in governance models, with a strong preference for centralized decision-making but notable gaps in workforce training and ethical oversight. Moreover, the results underscore the need for a more coordinated approach to GenAI governance, emphasizing collaboration among senior leaders, clinicians, information technology staff, and researchers. Our analysis also reveals concerns regarding GenAI bias, data security, and stakeholder trust, which must be addressed to ensure the ethical and effective implementation of GenAI technologies. This study offers valuable insights into the challenges and opportunities of GenAI integration in healthcare, providing a roadmap for institutions aiming to leverage GenAI for improved quality of care and operational efficiency.

Keywords: Clinical and Translational Research, GenAI, LLM

1. INTRODUCTION

The burgeoning advancement of generative AI (GenAI) offers transformative potential for healthcare systems globally. GenAI employs computational models to generate new content based on patterns learned from existing data. These models, exemplified by large language models (LLMs), can produce content across various modalities such as text, images, video, and audio. 1–5 GenAI’s ability to generate human-comprehensible text has enabled the exploration of diverse applications in healthcare that involve the sharing and dissemination of expert knowledge, ranging from clinical decision support to patient engagement. 6,7 Integrating GenAI into healthcare can enhance diagnostic accuracy, personalize treatment plans, and improve operational efficiency. For instance, GenAI-driven diagnostic tools can analyze medical images and electronic health records (EHRs) to detect diseases, often surpassing the accuracy of human experts. 8–13 GenAI applications can streamline administrative processes, reduce clinicians’ documentation burden, and enable clinicians to spend more time on direct patient care. 14,15 However, implementing GenAI technologies in healthcare poses several challenges. Issues such as trustworthiness, data privacy, algorithmic bias, and the need for robust regulatory frameworks are critical considerations that must be addressed to ensure the responsible and effective use of GenAI. 16,17

Given these promising advancements and associated challenges, understanding the current institutional infrastructure for implementing GenAI in healthcare is crucial. Various stakeholders (e.g., clinicians, patients, researchers, regulators, industry professionals) have different roles and responsibilities in GenAI implementation, ranging from ensuring patient safety and data security to driving innovation and regulatory compliance, and may hold varying attitudes toward GenAI applications that influence their acceptance and utilization of these technologies. Failure to consider these diverse perspectives may hinder the widespread adoption and effectiveness of GenAI technologies.

Previous studies have examined stakeholder perspectives on AI adoption to some extent. For example, Scott et al. found that while various stakeholders generally had positive attitudes towards AI in healthcare, especially those with direct experience, significant concerns persisted regarding privacy breaches, personal liability, clinician oversight, and the trustworthiness of AI-generated advice. 18 These concerns are reflective of AI technologies in general. Specific to GenAI, Spotnitz et al. surveyed healthcare providers and found that while clinicians were generally positive about using LLMs for assistive roles in clinical tasks, they had concerns about generating false information and propagating training data bias. 19

Despite these insights, there remains a gap in understanding the infrastructure required for GenAI integration in healthcare institutions, particularly from the perspective of institutional leadership. The Clinical and Translational Science Awards (CTSA) Program, funded by the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH) in the United States, supports a nationwide consortium of medical research institutions at the forefront of clinical and translational research and practice. 20 By examining the GenAI infrastructure within CTSA institutions, we can gain valuable insights into how GenAI is being adopted in cutting-edge research environments and help set benchmarks for the broader healthcare community. Furthermore, understanding the challenges faced by CTSA institutions in this context is crucial for developing strategies that promote fair and accessible GenAI implementation. 8,21

In this study, we aim to conduct an environmental scan of the infrastructure for GenAI within CTSA institutions by surveying CTSA leaders to comprehensively understand its current integration status. We also highlight opportunities and challenges in achieving equitable GenAI implementation in healthcare by identifying key stakeholders, governance structures, and ethical considerations. We acknowledge the dual roles that respondents may represent, whether in their capacity as leaders within academic institutions (i.e., CTSA), healthcare systems, or both. Hence, we use the term “healthcare institutions” to encompass the broad range of leadership representation and capture a more complete picture of GenAI integration across research-focused and healthcare-delivery institutions. The insights gained from this study can inform the development of national policies and guidelines to ensure the ethical use of GenAI in healthcare; identifying successful GenAI implementation strategies can serve as best practices for other institutions; highlighting gaps in the current GenAI infrastructure can guide future investments and research priorities; and ultimately, a robust GenAI infrastructure can enhance patient care through more accurate diagnoses, personalized treatments, and efficient healthcare delivery.

2. RESULTS

The US CTSA network comprises over 60 hubs. We sent email invitations to 64 CTSA leaders, each responding on behalf of a unique CTSA site; 42 confirmed participation, and we ultimately received 36 complete responses, an 85.7% completion rate. Only fully completed responses were included in the analysis; the six unfinished responses (0–65% progress) were excluded. The survey questions are available in Supplementary File A. Of the 36 completed responses, 15 (41.7%) represented only a CTSA, and 21 (58.3%) represented a CTSA and its affiliated hospital.

2.1. Stakeholder Identification and Roles

Figure 1 shows that senior leaders were the most involved in GenAI decision-making (94.4%), followed by information technology (IT) staff, researchers, and physicians. Cochran’s Q test revealed significant differences in stakeholder involvement (Q = 165.9, p < 0.0001). Post-hoc McNemar tests (see Methods) with Bonferroni correction showed senior and departmental leaders were significantly more involved than business unit leaders, nurses, patients, and community representatives (all corrected p < 0.0001). Nurses were also less engaged than IT staff (corrected p < 0.0001) (See Supplementary Table 1).

Figure 1: Which stakeholder groups are involved in your organization’s decision-making and implementation of GenAI?
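To make the omnibus and post-hoc procedure concrete, the following is a minimal sketch (not the study’s code) using the statsmodels implementations of Cochran’s Q and McNemar tests on a binary involvement matrix; the group names, random data, and seed are illustrative assumptions, not the survey responses.

```python
import numpy as np
from itertools import combinations
from statsmodels.stats.contingency_tables import cochrans_q, mcnemar

rng = np.random.default_rng(0)
groups = ["senior_leaders", "it_staff", "researchers", "nurses"]
# Hypothetical data: 36 institutions x 4 stakeholder groups (1 = involved).
X = rng.integers(0, 2, size=(36, len(groups)))

# Omnibus test: do involvement proportions differ across groups?
q = cochrans_q(X)
print(f"Cochran's Q = {q.statistic:.1f}, p = {q.pvalue:.4f}")

# Post-hoc pairwise McNemar tests with Bonferroni correction.
pairs = list(combinations(range(len(groups)), 2))
for i, j in pairs:
    # Build the 2x2 table of agreement/disagreement between the two indicators.
    table = np.zeros((2, 2), dtype=int)
    for a, b in zip(X[:, i], X[:, j]):
        table[a, b] += 1
    p = mcnemar(table, exact=True).pvalue
    p_bonf = min(1.0, p * len(pairs))  # Bonferroni: multiply by the number of comparisons
    print(f"{groups[i]} vs {groups[j]}: corrected p = {p_bonf:.4f}")
```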

We further split our analysis by whether institutions have formal committees or task forces overseeing GenAI governance, to provide insight into how governance models may affect GenAI adoption. 77.8% (28/36) of respondents reported having formal committees or task forces overseeing GenAI governance, 19.4% (7/36) did not, and 2.8% (1/36) were unsure. To simplify the comparison and focus on the distinction between institutions with and without established governance structures, we grouped the latter two for analysis. Institutions without formal committees did not involve patients or community representatives as stakeholders in GenAI decision-making and implementation (Figure 1).

Further, the decision-making process for implementing GenAI (Figure 2) was primarily led by cross-functional committees (80.6%), with clinical leadership also playing a key role (50.0%). At institutions without formal committees, decision-making was more often led by clinical leadership. Specific mentions included the dean, CTSA and innovation teams, researchers, and health AI governance committees. Cochran’s Q test revealed significant differences in leadership involvement (Q = 46.8, p < 0.0001), especially between cross-functional committees and both regulatory bodies and other stakeholders (corrected p < 0.0001) (See Supplementary Table 2).

Figure 2: Who leads the decision-making process for implementing GenAI applications in your organization?

2.2. Decision-Making and Governance Structure

The decision-making process for adopting GenAI in healthcare institutions varied (Figure 3). A centralized (top-down) approach was used by 61.1% (22/36) of respondents, while 8.3% (3/36) mentioned alternative methods, such as decisions based on the tool’s nature or a mix of centralized and decentralized approaches.

Figure 3: How are decisions regarding adopting GenAI made in your healthcare institution?

Thematic analysis of statements about governance structures in organizations with formal committees identified two major themes (Figure 4). “AI Governance and Policy” reflects institutions’ structured approaches to ensure responsible GenAI implementation. Institutions often establish multidisciplinary committees to integrate GenAI policies with existing frameworks, aligning AI deployment with organizational goals and regulatory requirements and focusing on legal and ethical compliance. “Strategic Leadership and Decision Making” highlights the crucial role of leadership in GenAI initiatives. High-level leaders drive GenAI integration through strategic planning and resource allocation, with integrated teams from IT, research, and clinical care fostering a culture of innovation and collaboration. Excerpts on these governance practices are detailed in the Supplementary Table 3.

Figure 4: Thematic analysis of governance and leadership structures in GenAI deployment across CTSA institutions with featured responses.

2.3. Regulatory and Ethical Considerations

Regulatory body involvement in GenAI deployment varied widely across institutions (Figure 5). Federal agencies were engaged in 33.3% (12/36) of organizations. A significant portion (55.6%) identified other bodies, including institutional review boards (IRBs), ethics committees, community advocates, and state agencies. Internal governance committees and university task forces were also explicitly mentioned.

Figure 5: Which regulatory bodies are involved in overseeing the deployment of GenAI in your organization?

Regarding ethical oversight (Figure 6), 36.1% (13/36) of respondents reported an ethicist’s involvement in GenAI decision-making; 27.8% (10/36) mentioned an ethics committee, while 19.4% (7/36) reported neither, and 16.7% (6/36) were unsure. Ethical considerations were ranked based on importance (Figure 7), with “Bias and fairness” (mean rank 2.31) and “Patient Privacy” (mean rank 2.36) being the top priorities.

Figure 6: Do you have an ethicist or an ethics committee involved in the decision-making process for implementing GenAI technologies in your organization?

Figure 7: Please rank the following ethical considerations from most important (1) to least important (6) when decision-makers are deciding to implement GenAI technologies.
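As a small illustration of the mean-rank calculation behind Figure 7 (lower mean rank indicates higher priority), here is a sketch with hypothetical rankings; the item names and values are illustrative, not the survey data.

```python
import pandas as pd

# Each row is one respondent's ranking (1 = most important ... 6 = least important).
ranks = pd.DataFrame({
    "Bias and fairness": [1, 2, 3, 1, 2],
    "Patient privacy":   [2, 1, 2, 3, 1],
    "Data security":     [3, 3, 1, 2, 4],
})

# Average each item's rank across respondents; lower mean rank = higher priority.
mean_ranks = ranks.mean().sort_values()
print(mean_ranks)
```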

2.4. Stage of Adoption

Institutions were at varying stages of GenAI adoption (Figure 8), with 75.0% (27/36) in the experimentation phase, focusing on exploring AI’s potential, building skills, and identifying areas for value addition. Integration of GenAI with existing systems and workflows drew mixed responses (Figure 9), with 50.0% (18/36) rating it as neutral.

Figure 8: What is the stage of GenAI adoption in your organization?

Figure 9: How well do GenAI applications integrate with your existing systems and workflows?

Workforce familiarity with LLMs also varied (Figure 10), with 36.1% (13/36) of respondents reporting slight familiarity and 25.0% (9/36) reporting moderate familiarity. Workforce training on LLMs was uneven: only 36.1% (13/36) had received training, 44.4% (16/36) had considered but not received it, and 19.4% (7/36) had neither received nor considered it. The demand for further training was evident, with 83.3% (30/36) rating it as desirable or higher (Figure 11). The respondents who rated further LLM training for their workforce as undesirable were from institutions without a formal committee.

Figure 10: How familiar are members of the workforce with the use of LLMs in your organization?

Figure 11: How desirable is it for the workforce to receive further LLM training?

Vendor collaboration was crucial, with 69.4% (25/36) of institutions partnering with vendors (from one to twelve per institution) to implement GenAI solutions. Notable vendors included DAX Copilot, Microsoft Azure AI, Amazon Web Services, Epic Systems, and various startups. Some respondents noted that discussions are often confidential or that they lacked comprehensive information on enterprise-wide vendor engagements. Additionally, 25.0% (9/36) had considered vendor collaboration but not engaged, while only 5.6% (2/36) had neither considered nor pursued such partnerships.

2.5. Budget Trends

Regarding funds allocation for GenAI projects, 50.0% (18/36) of respondents reported ad-hoc funding, mostly at institutions with formal committees (Figure 12). Most institutions without formal committees reported that no funds had been allocated for GenAI projects (62.5%; 5/8). Since 2021, 36.1% (13/36) of respondents were unsure about budget changes, 19.4% (7/36) noted that the budget had remained roughly the same, and 44.4% (16/36) reported budget increases ranging from 10% to over 300% (Figure 13).

Figure 12: Have funds been allocated for GenAI projects?

Figure 13: Compared to 2021, how has the budget allocated to GenAI projects in your organization changed?

2.6. Current LLM usage

Institutions adopted LLMs with varied strategies (Figure 14): 61.1% (22/36) used a combination of open and proprietary LLMs, 11.1% (4/36) used open LLMs only, and 25.0% (9/36) used proprietary LLMs only. Only 2.8% (1/36) reported not using any LLMs. Significant differences existed among the types of LLMs used (Q = 28.7, p < 0.0001). Post-hoc tests revealed a significant difference between using both open and proprietary LLMs and using open LLMs only (corrected p = 0.0032) (See Supplementary Table 4), indicating a notable preference for combining LLM types at some institutions. No significant differences were found among specific open or proprietary LLM types (Q = 2.4, p = 0.4936), suggesting that institutions did not exhibit strong preferences for particular open or proprietary models. Institutions adopting open LLMs prioritized technical architecture and deployment (61.1%), followed by customization and integration features (50.0%; Figure 15). Some institutions focused on research and experimentation comparing open to proprietary LLMs, with interests in medical education and cost-effectiveness. Technical architecture and deployment were prioritized over clinician or patient buy-in (corrected p = 0.0024) (See Supplementary Table 5).

Figure 14: Which of the LLMs are you currently using?

Figure 15: You indicated that your organization is using open LLMs (blue) or proprietary LLMs (red). What factors influenced your decision to develop internally/to go with commercial solutions?

Regarding GenAI deployment (Figure 16), private cloud and on-premises self-hosting were the most common approaches (63.9% each), suggesting that many institutions maintain both environments in parallel rather than a single hybrid deployment. Some institutions specified using local supercomputing resources or statewide high-performance computing infrastructure. Statistical analysis (Q = 42.6, p < 0.0001) indicated a preference for more controlled environments, with private cloud and on-premises self-hosting significantly more favored than public cloud (corrected p = 0.0022 and p = 0.0060, respectively) (See Supplementary Table 6).

Figure 16: What AI deployment options does your organization currently use?

For institutions adopting proprietary LLMs, the critical factors in decision-making included technical architecture and deployment (61.1%) and scalability and performance (Figure 15). Respondents noted the importance of ease of deployment, especially in partnerships with vendors like Epic Systems and Oracle, and the advantage of existing Health Insurance Portability and Accountability Act (HIPAA) Business Associate Agreements with providers like Microsoft. Statistical analysis (Q = 57.4, p < 0.0001) revealed significant differences, particularly between technical architecture and deployment and both AI monitoring and reporting and AI workforce development (both corrected p = 0.0113). Scalability and performance were prioritized significantly more than LLM output compliance and AI monitoring and reporting (both corrected p = 0.0405) (See Supplementary Table 7).

Finally, LLMs were applied across diverse domains, with common uses in biomedical research (66.7%), medical text summarization (66.7%), and data abstraction (63.9%, Figure 17). Co-occurrence analysis showed frequent overlaps in these areas (See Supplementary Table 8). Medical imaging analysis was the most common use case for institutions without formal committees overseeing GenAI governance. LLM use differed significantly between data abstraction and each of drug development, machine translation, and scheduling, and likewise between biomedical research and those three use cases (corrected p-values < 0.05) (See Supplementary Table 9).

Figure 17: Which of the following use cases are you currently using LLMs for?
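To illustrate the co-occurrence analysis applied to multiple-answer questions, below is a minimal sketch on hypothetical data: given a 0/1 indicator matrix of institutions by use cases, X.T @ X counts how many institutions selected each pair of use cases. The use-case names, random data, and seed are illustrative assumptions.

```python
import numpy as np
import pandas as pd

use_cases = ["biomedical_research", "text_summarization", "data_abstraction"]
rng = np.random.default_rng(1)
# Hypothetical data: 36 institutions x 3 use cases; 1 = the use case was selected.
X = pd.DataFrame(rng.integers(0, 2, size=(36, 3)), columns=use_cases)

# Diagonal entries count how many institutions selected each use case;
# off-diagonal entries count how many selected both use cases in a pair.
cooccurrence = X.T @ X
print(cooccurrence)
```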

2.7. LLM Evaluation

Respondents prioritized accuracy and reproducible, consistent answers when evaluating LLMs for healthcare (Figure 18), each receiving the highest mean rating of 4.5 (See Supplementary Table 10). Healthcare-specific models and security and privacy risks were also deemed important, though responses varied. An analysis of variance (ANOVA) revealed significant differences among the importance ratings (F = 3.4, p = 0.0031), and post-hoc Tukey’s honestly significant difference (HSD) tests showed a significant difference between accuracy and explainability and transparency (p = 0.0299).

Figure 18: On a scale from 1 to 5, please rate the importance of each of the following criteria when evaluating LLMs. 1 means “Not at all Important,” and 5 means “Extremely Important”.
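A hedged sketch of the ANOVA and post-hoc Tukey HSD comparison described above, using hypothetical 1–5 ratings rather than the survey data; the criterion names, random values, and seed are illustrative assumptions.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
criteria = ["accuracy", "explainability", "security"]
# Hypothetical data: 36 respondents rate each criterion on a 1-5 scale.
ratings = {c: rng.integers(1, 6, size=36) for c in criteria}

# Omnibus one-way ANOVA across criteria.
f_stat, p = f_oneway(*ratings.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p:.4f}")

# Tukey's HSD for all pairwise comparisons, controlling the family-wise error rate.
values = np.concatenate(list(ratings.values()))
labels = np.repeat(criteria, 36)
print(pairwise_tukeyhsd(values, labels))
```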

Regarding potential roadblocks to adopting GenAI in healthcare, regulatory compliance issues were rated as the most significant concern, with a mean rating of 4.2 (Figure 19; see Supplementary Table 11 for mean ratings). While ‘Too expensive’ and ‘Not built for healthcare and life science’ were rated as lesser concerns, they still posed challenges for some respondents, though there were no significant differences among these ratings (F = 2.0, p = 0.0606).

Figure 19: On a scale of 1 to 5, please rate how significant the following potential limitations or roadblocks are to your roadmap for current generative AI technology, with 1 being not important and 5 being very important.

2.8. Projected Impact

Participants rated the anticipated impact of LLMs on various use cases over the next 2–3 years (Figure 20), with the highest mean ratings for natural language query interface, information extraction, and medical text summarization (4.5 each), followed by transcribing medical encounters (4.3). Data abstraction (4.3) and medical image analysis (4.2) were also highly rated, while synthetic data generation, scheduling (3.5 each), and drug development (3.4) received lower ratings (See Supplementary Table 12). Additional use cases, such as medical education and decentralized clinical trials, suggest an expanding scope for LLM applications.

Figure 20: On a scale of 1 to 5, please rate how much you think LLMs will impact each use case over the next 2–3 years. 1 means very negative, and 5 means very positive.

Further, respondents reported increased operational efficiency (44.4%) as the most commonly observed improvement, with faster decision-making processes noted by 13.9% (Figure 21). However, none reported improved patient outcomes. Other reported improvements included increased patient satisfaction and enhanced research capacity, although some noted it was too early to prove such benefits. Significant differences among these improvements were observed (Q = 38.9, p < 0.0001), particularly between better patient engagement and improved patient outcomes (corrected p = 0.0026) (See Supplementary Table 13).

Figure 21: What improvements, if any, have you observed since implementing Generative AI (GenAI) solutions in your healthcare institution?

Regarding GenAI implementation concerns (Figure 22), data security was identified as a major issue by 52.8% of respondents, followed by a lack of clinician trust (50.0%) and AI bias (44.4%). Cochran’s Q test confirmed variability in these concerns (Q = 33.3, p < 0.001). Other challenges included the time required to train models, lack of validation tools, inadequate provider training, and concerns about organizational trust. Some respondents also noted that their observations were based on internal experiences, with no implementations yet in production.

Figure 22: What drawbacks or negative impacts, if any, have you observed since implementing GenAI solutions?

2.9. Enhancement Strategies

Respondents identified several strategies for testing and improving LLMs in healthcare, with human-in-the-loop being the most common (83.3%; Figure 23). Human-in-the-loop differed significantly from quantization and pruning and from reinforcement learning from human feedback (RLHF) 22 (corrected p < 0.005), as well as from adversarial testing 23 (corrected p < 0.0001) and guardrails (corrected p = 0.0067) (See Supplementary Table 14).

Figure 23: Which steps do you take to test and improve your LLM models?

In evaluating deployed LLMs (Figure 24), the most common assessments focused on hallucinations or disinformation (50.0%) and robustness (38.9%). However, 19.4% (7/36) of respondents indicated no evaluations had been conducted. Cochran’s Q test revealed significant variation in the importance of these evaluations (Q = 77.1, p < 0.0001), with post-hoc analysis showing significant differences between explainability and prompt injection (i.e., a technique in which crafted prompts or questions trick the GenAI into bypassing its specified restrictions, revealing weaknesses in how it understands and responds to information), and between fairness and both ideological leaning and prompt injection (corrected p = 0.0040) (See Supplementary Table 15).

Figure 24: What type(s) of evaluations have your deployed LLM solutions undergone?

Integrating GenAI into healthcare presents several challenges (Figure 25), with technical architecture and deployment cited most frequently (72.2%). Interestingly, AI workforce development was the most common challenge for institutions without a formal committee. Data lifecycle management was noted as a critical limitation by 52.8% (19/36) of respondents. Challenges often overlapped, with technical architecture and deployment closely linked to security, scalability, and regulatory compliance issues. Additional gaps were also highlighted, such as the absence of a training plan and a limited workforce. Significant variability was observed (Q = 45.4, p < 0.0001), with post-hoc analysis indicating that technical architecture and deployment were more prevalent than LLM output compliance (i.e., the trustworthiness of the LLM output) and scalability and performance (corrected p = 0.0269) (See Supplementary Table 16).

Figure 25: What challenges, if any, have you faced in integrating GenAI with existing systems?

2.10. Additional Insights into GenAI Integration

Nine respondents provided additional insights into the complexities of integrating GenAI into healthcare. They emphasized the challenges posed by the rapid pace of technological change, which complicates long-term investment and integration decisions. Organizational approaches to GenAI vary; some institutions aggressively pursue it, while others have yet to implement it on a broader scale despite individual use. The integration of GenAI has improved collaboration between researchers, physicians, and administrators, but slow decision-making and a significant gap in AI workforce skills remain critical issues. The evolving nature of AI initiatives makes it difficult to fully capture current practices, highlighting the need for a comprehensive approach that addresses technological, organizational, and workforce challenges.

3. DISCUSSION

This study provides a snapshot of GenAI integration within CTSA institutions, focusing on key stakeholders, governance structures, ethical considerations, and associated challenges and opportunities. Table 1 summarizes the key recommendations from the findings. Senior leaders, IT staff, and researchers are central to GenAI integration, with significant involvement from cross-functional committees highlighting the multidisciplinary collaboration required for effective implementation. However, the findings suggest minimal involvement of nurses, patients, and community representatives in current GenAI decision-making, raising concerns about inclusiveness, which is essential to aligning these technologies with the needs of all stakeholders. 18,24 Most institutions adopt a centralized, top-down governance structure, which streamlines decision-making but may limit flexibility for departmental needs. 25 While formal committees or task forces suggest emerging governance frameworks, the variability across institutions indicates that best practices are still evolving.

Table 1:

Summary of Key Findings and Recommendations for GenAI Implementation in Healthcare.

Key Finding | Recommendation
Stakeholder Involvement | Involve senior leaders, IT staff, researchers, clinicians, and patients to ensure a representative and effective decision-making process.
Governance Structure | Establish formal GenAI governance committees to ensure structured oversight.
Decision-Making | Cross-functional committees should lead decision-making for GenAI adoption, balancing stakeholder involvement.
Popular Enhancement Strategies | Use human-in-the-loop and supervised fine-tuning as primary enhancement strategies for LLM models.
Cloud Architecture Preferences | Prefer private cloud or on-premises hosting to maintain control over security, scalability, and regulatory compliance in GenAI deployment.
Ethical Considerations | Prioritize bias and fairness, patient privacy, and data security when integrating GenAI into healthcare institutions.
Budget Allocation | Encourage institutions to establish systematic funding mechanisms for GenAI projects to support long-term investments.
LLM Usage | Adopt a combination of open and proprietary LLMs, depending on the technical and scalability requirements of the institution.
Workforce Training | Implement comprehensive training programs to enhance GenAI literacy and bridge skill gaps within the healthcare workforce.
Projected Impact and Improvements | Focus on operational efficiency and decision-making speed while addressing the gap in direct improvements to patient outcomes.

According to the respondents, ethical and regulatory oversight of GenAI implementation varies across institutions, with some involvement from federal agencies, IRBs, and ethics committees. Prioritization of ethical considerations such as patient privacy, data security, and fairness in AI algorithms reflects the awareness of the significant challenges in deploying GenAI in healthcare. Our findings also reveal variability in the reported involvement of regulatory bodies, with less frequent mentions of engagement from local health authorities. However, we did not collect detailed information on the specific roles of these agencies or distinguish between different types of regulatory engagement. This limitation suggests a need for more explicit and consistent oversight frameworks to address the unique risks associated with GenAI. Despite these gaps, this study emphasizes the importance of developing comprehensive policies and guidelines to navigate the ethical landscape of GenAI technologies in healthcare.

Collaboration with vendors is common among CTSA institutions, with partnerships reported with major technology providers such as Microsoft Azure AI, Amazon Web Services, Oracle, and Epic Systems. However, the variability in the extent of these collaborations and the lack of comprehensive information on enterprise-wide vendor engagements suggest challenges in coordinating AI implementation efforts across institutions. Further, the ad-hoc funding allocation for GenAI projects indicates that AI integration is still in its infancy, with institutions likely testing the waters before committing to substantial investments. Implementing LLMs in healthcare settings presents significant challenges, particularly in technical architecture, deployment, customization, and security, requiring a comprehensive and coordinated approach across departments for successful integration. 26

To evaluate their GenAI technologies, some institutions use strategies such as human-in-the-loop oversight, supervised fine-tuning, and interpretability tools to enhance transparency and reliability, while also employing de-biasing techniques to mitigate biases and ensuring that GenAI outputs are continuously monitored and refined by human experts. 27,28 Evaluation practices emphasize robustness and accuracy, with assessments for hallucinations, disinformation, and bias crucial to ensuring that GenAI systems function effectively in real-world healthcare settings. 29,30 However, the lack of comprehensive evaluations at some institutions suggests early-stage LLM adoption and potential shortcomings, highlighting the need for institutions to build resources and expertise before widespread adoption.

The respondents are optimistic about the projected impact of LLMs on healthcare, particularly in areas like medical text summarization, query interfaces, and information extraction, which are expected to streamline workflows, enhance information access, and improve documentation efficiency. 31,32 However, the gap between anticipated benefits and actual outcomes, such as the limited direct improvements in patient outcomes, highlights ongoing challenges. This discrepancy emphasizes the need for a focused evaluation of how GenAI tools can directly impact patient health and care quality. Emerging LLM applications in medical education, decentralized trials, and digital twin technologies suggest an expanding scope for these tools. While their impact in specialized domains like drug development remains uncertain, recent evidence points to promising advancements that could enhance the utility of LLMs in this area. 33 Despite the enthusiasm, significant concerns about data security, clinician trust, high maintenance costs, AI bias, and lack of patient trust complicate LLM integration into healthcare institutions.

Evaluations within institutions prioritize accuracy, reliability, and security, with respondents emphasizing the critical need for dependable and secure AI outputs to maintain trust and patient safety. 34 Legal and reputational risks, along with the need for explainability and transparency, are also highly rated, indicating a significant focus on the ethical and legal implications of AI deployment. However, the importance of these criteria varies, reflecting diverse contexts and priorities across institutions. Despite high expectations for LLMs, the study identified significant roadblocks and considerations for widespread adoption (Table 2). These challenges underscore a complex landscape in which multiple factors must be managed simultaneously.

Table 2:

Summary of Key Challenges in GenAI Implementation Across CTSA Institutions

Challenge | Description
Stakeholder Inclusion | Nurses, patients, and community representatives have limited involvement in the decision-making processes, particularly in institutions without formal committees.
Governance Structure | Variability in governance models, with some institutions lacking formal GenAI oversight committees, may impact structured decision-making.
Leadership in Decision-Making | Institutions without formal committees rely more on clinical leadership rather than cross-functional committees, potentially affecting the balance of stakeholder input.
Ethical Oversight | Varying degrees of involvement of ethicists and ethics committees can create gaps and disparities in fairness, privacy, and data security in the broad scientific community for clinical and translational science.
Workforce Readiness | Variability in workforce familiarity with LLMs, with some institutions having insufficient training and preparedness for GenAI integration.
Training and Skill Gaps | Significant gaps in formal GenAI training plans, with many institutions struggling to build internal capabilities to manage GenAI tools effectively.
Technical Integration | Difficulties in integrating GenAI into existing systems, with mixed responses about how well these technologies integrate into current workflows.
Funding and Resources | Many institutions rely on ad-hoc funding mechanisms for GenAI projects, creating uncertainty in long-term resource allocation and support for AI initiatives.
Vendor Collaboration | Limited transparency and variability in vendor collaborations, with some institutions facing challenges coordinating enterprise-wide AI implementation.
Data Security and Trust | Major concerns regarding the security of GenAI systems and lack of clinician trust, particularly in institutions without formal governance structures.
AI Bias and Mistrust | Concerns about bias in GenAI outputs and mistrust from clinicians and patients could affect the adoption and effective use of GenAI technologies.
Compliance and Legal Risks | Regulatory compliance and accuracy are major concerns, with institutions needing to navigate legal and reputational risks associated with GenAI deployment.

Further, the study reveals that most institutions are still in the experimentation phase of GenAI adoption, exploring the technology’s potential and building the necessary skills for its practical adoption. Mixed levels of familiarity with LLMs among the workforce and stakeholders indicate a significant need for further AI workforce training and clinician engagement to enhance GenAI literacy, ensuring that key stakeholders can manage GenAI effectively. Without proper training, healthcare professionals may struggle to fully leverage these tools, potentially leading to inefficiencies, errors, or privacy or security violations (e.g., inappropriately uploading data). 35,36 Previous work suggests a multifaceted and multi-sectoral approach to address these gaps and facilitate knowledge sharing, including implementing structured training programs, offering hands-on workshops, developing mentorship opportunities, and partnering with vendors to provide tailored training specific to the healthcare setting. 37 This opens the possibility that NCATS and other NIH institutes may want to consider collaborative initiatives to address the questions raised in this research. Additionally, the CTSA network’s emphasis on knowledge sharing could facilitate smoother GenAI adoption across institutions, 38 particularly for late adopters. By encouraging the dissemination of best practices and lessons learned from early adopters, 39 the CTSA network can help institutions with fewer resources or those facing governance challenges navigate the complexities of GenAI implementation more efficiently.

The study has limitations, including variability in respondents’ knowledge and the evolving nature of GenAI practices, which may not capture ongoing progress or changes beyond the survey period. Additionally, the reliance on responses from senior leaders, who may not have full visibility into all aspects of GenAI integration within their institutions, introduces the risk of misreporting or incomplete information. The focus on CTSA institutions may limit the generalizability of the findings to other healthcare organizations, particularly for institutions with fewer resources where these implementation and governance challenges may be especially difficult to address. The survey also did not distinguish between live GenAI systems and those still in development, which limits our ability to assess the operational readiness and deployment status of these tools fully across institutions. Additionally, reliance on self-reported data introduces possible biases.

In conclusion, the study highlights the complex and evolving landscape of GenAI integration in CTSA institutions. By identifying successful strategies and highlighting areas for improvement, this research provides an actionable roadmap for institutions seeking to navigate the complexities of AI integration in healthcare to ensure ethical, equitable, and effective implementation, ultimately contributing to advancing patient care and the broader goals of precision medicine.

4. METHODS

4.1. Study Design

This study uses an online survey to conduct an environmental scan of the GenAI infrastructure within CTSA institutions, using multiple-choice, ranking, rating, and open-ended questions to understand GenAI integration, including stakeholder roles, governance structures, and ethical considerations.

4.2. Survey Instrument Development

The survey, administered through the Qualtrics platform (Qualtrics, Provo, UT), was intended to take approximately 15 minutes to complete. Initially developed through a comprehensive review of current literature on AI in healthcare, the survey covered topics such as stakeholder roles, governance structures, ethical considerations, AI adoption stages, budget trends, and LLM usage. The survey was reviewed by experts (SL, BM, KN, WP, RZ, YZ) in health informatics, clinical practice, ethics, and law, who provided feedback that informed revisions to improve clarity and comprehensiveness. A small group piloted the final version to identify any remaining issues. The survey questions are available in the Supplementary File A.

4.3. Participant Recruitment

Participants were recruited in July 2024 through targeted outreach to key stakeholders at CTSA sites using purposive and snowball sampling. 40 Email invitations were sent to senior leaders involved in GenAI implementation and decision-making within the CTSA network, with follow-up reminders to maximize response rates.

4.4. Data Collection

Data were collected from July to August 2024. CTSA leaders who responded to the initial invitation received a follow-up email with the survey link. A PDF version of the survey was provided to help participants prepare by reviewing questions offline before completing the survey online. Participants could return to the survey if necessary.

4.5. Data Analysis

Quantitative data from the survey were analyzed using various methods. Multiple-choice and multiple-answer questions were summarized with frequency distributions and percentages; multiple-answer questions were additionally analyzed using co-occurrence and pattern analysis to identify common selections and combinations among stakeholder groups. Cochran’s Q test identified overall differences among response proportions, with post-hoc analysis using pairwise McNemar tests with Bonferroni corrections. 41 Ranking questions were analyzed by calculating mean ranks, with lower mean ranks indicating higher importance. Likert-scale items were summarized using measures of central tendency and dispersion, with an ANOVA test to check for significant differences in ratings across different use cases, followed by Tukey’s HSD test for post-hoc pairwise comparisons while controlling the family-wise error rate. 42
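To make the descriptive summaries concrete (the omnibus and post-hoc tests are sketched in the Results above), here is a minimal sketch of the frequency and Likert-scale summaries on hypothetical responses; the item values are illustrative, not the survey data.

```python
import pandas as pd

# Multiple-choice item: frequency distribution and percentages.
approach = pd.Series(["centralized", "centralized", "decentralized", "other", "centralized"])
print(approach.value_counts())                 # counts per response option
print(approach.value_counts(normalize=True))   # proportions per response option

# Likert-scale item: central tendency and dispersion of 1-5 ratings.
likert = pd.Series([4, 5, 3, 4, 2, 5, 4])
print(f"mean = {likert.mean():.2f}, median = {likert.median():.1f}, sd = {likert.std():.2f}")
```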

Qualitative data from open-ended survey questions were analyzed using thematic analysis. 43 This process involved coding the data to identify common themes and patterns. Two researchers (BI, ZX) independently coded the data, and a third researcher (YP) resolved disagreements through consensus.

Supplementary Material

Supplement 1

ACKNOWLEDGMENTS

This work was supported by the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH) under grant numbers UL1TR002384, UM1TR004789, UL1TR001412, UL1TR001449, ULTR002345, UM1TR004404, UL1TR001866, UM1TR004909, and UL1TR001873; National Library of Medicine (NLM) of NIH under grant numbers T15LM007079, T15LM012495, R25LM014213; and the National Institute on Alcohol Abuse and Alcoholism of NIH grant numbers R21AA026954 and R33AA0226954. This study was also funded in part by the Department of Veterans Affairs and NIH Intramural Research Program. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

Footnotes

COMPETING INTERESTS

I.G. Cohen is a member of the Bayer Bioethics Council, the Chair of the ethics advisory board for Illumina, and an advisor for World Class Health. He was also compensated for speaking at events organized by Philips with the Washington Post, by the Doctors Company, and attending the Transformational Therapeutics Leadership Forum organized by Galen Atlantica. He has been retained as an expert in health privacy, gender-affirming care, and reproductive technology lawsuits.

DATA AVAILABILITY

The data supporting the findings of this study, with identifying information removed to ensure confidentiality, are available from the corresponding author upon reasonable request. The authors declare that all other data supporting the findings of this study are available within the paper and its supplementary information files.

References

  • [1].Huang J, Neill L, Wittbrodt M, Melnick D, Klug M, Thompson M, Bailitz J, Loftus T, Malik S, Phull A, Weston V, Heller JA, Etemadi M. Generative Artificial Intelligence for Chest Radiograph Interpretation in the Emergency Department. JAMA Netw Open. 2023. 2 Oct;6(10):e2336100. doi: 10.1001/jamanetworkopen.2023.36100. [DOI] [Google Scholar]
  • [2].Matsubayashi CO, Cheng S, Hulchafo I, Zhang Y, Tada T, Buxbaum JL, Ochiai K. Artificial intelligence for gastric cancer in endoscopy: From diagnostic reasoning to market. Dig Liver Dis. 2024. Jul;56(7):1156–1163. doi: 10.1016/j.dld.2024.04.019. [DOI] [PubMed] [Google Scholar]
  • [3].Saaran V, Kushwaha V, Gupta S, Agarwal G. A Literature Review on Generative Adversarial Networks with Its Applications in Healthcare. In: Sharma H, Saraswat M, Yadav A, Kim JH, Bansal JC, editors. Congress on Intelligent Systems: Proceedings of CIS 2020, Volume 1. vol. 1334 of Advances in Intelligent Systems and Computing. Singapore: Springer Singapore; 2021. p. 215–225. doi: 10.1007/978-981-33-6981-8_18. [DOI] [Google Scholar]
  • [4].Kazerouni A, Aghdam EK, Heidari M, Azad R, Fayyaz M, Hacihaliloglu I, Merhof D. Diffusion models in medical imaging: A comprehensive survey. Med Image Anal. 2023. Aug;88(102846):102846. doi: 10.1016/j.media.2023.102846. [DOI] [Google Scholar]
  • [5].Wei R, Mahmood A. Recent advances in variational autoencoders with representation learning for biomedical informatics: A survey. IEEE Access. 2021;9:4939–4956. doi: 10.1109/access.2020.3048309. [DOI] [Google Scholar]
  • [6].Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S, Herbert-Voss A, Krueger G, Henighan T, Child R, Ramesh A, Ziegler DM, Wu J, Winter C, Hesse C, Chen M, Sigler E, Litwin M, Gray S, Chess B, Clark J, Berner C, McCandlish S, Radford A, Sutskever I, Amodei D. Language Models are Few-Shot Learners. arXiv [csCL]. 2020. 28 May. [Google Scholar]
  • [7].Touvron H, Lavril T, Izacard G, Martinet X, Lachaux MA, Lacroix T, Rozière B, Goyal N, Hambro E, Azhar F, Rodriguez A, Joulin A, Grave E, Lample G. LLaMA: Open and efficient foundation language models. arXiv. 2023. doi: 10.48550/ARXIV.2302.13971. [DOI] [Google Scholar]
  • [8].Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, Aldairem A, Alrashed M, Bin Saleh K, Badreldin HA, Al Yami MS, Al Harbi S, Albekairy AM. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023. 22 Sep;23(1):689. doi: 10.1186/s12909-023-04698-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [9].Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019. Jan;25(1):44–56. doi: 10.1038/s41591-018-0300-7. [DOI] [PubMed] [Google Scholar]
  • [10].Kim J, Leonte KG, Chen ML, Torous JB, Linos E, Pinto A, Rodriguez CI. Large language models outperform mental and medical health care professionals in identifying obsessive-compulsive disorder. NPJ Digit Med. 2024. 19 Jul;7(1):193. doi: 10.1038/s41746-024-01181-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [11].Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022. Jan;28(1):31–38. doi: 10.1038/s41591-021-01614-0. [DOI] [PubMed] [Google Scholar]
  • [12].Hao B, Hu Y, Adams WG, Assoumou SA, Hsu HE, Bhadelia N, Paschalidis IC. A GPT-based EHR modeling system for unsupervised novel disease detection. J Biomed Inform. 2024. Sep;157(104706):104706. doi: 10.1016/j.jbi.2024.104706. [DOI] [Google Scholar]
  • [13].Amini S, Hao B, Yang J, Karjadi C, Kolachalama VB, Au R, Paschalidis IC. Prediction of Alzheimer’s disease progression within 6 years using speech: A novel approach leveraging language models. Alzheimers Dement. 2024. Aug;20(8):5262–5270. doi: 10.1002/alz.13886. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [14].Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, Wang Y, Dong Q, Shen H, Wang Y. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017. Dec;2(4):230–243. doi: 10.1136/svn-2017-000101. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [15].Secinaro S, Calandra D, Secinaro A, Muthurangu V, Biancone P. The role of artificial intelligence in healthcare: a structured literature review. BMC Med Inform Decis Mak. 2021. 10 Apr;21(1):125. doi: 10.1186/s12911-021-01488-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [16].Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019. 25 Oct;366(6464):447–453. doi: 10.1126/science.aax2342. [DOI] [PubMed] [Google Scholar]
  • [17].Polevikov S. Advancing AI in healthcare: A comprehensive review of best practices. Clin Chim Acta. 2023. 1 Aug;548:117519. doi: 10.1016/j.cca.2023.117519. [DOI] [Google Scholar]
  • [18].Scott IA, Carter SM, Coiera E. Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health Care Inform. 2021. Dec;28(1). doi: 10.1136/bmjhci-2021-100450. [DOI] [Google Scholar]
  • [19].Spotnitz M, Idnay B, Gordon ER, Shyu R, Zhang G, Liu C, Cimino JJ, Weng C. A Survey of Clinicians’ Views of the Utility of Large Language Models. Appl Clin Inform. 2024. Mar;15(2):306–312. doi: 10.1055/a-2281-7092. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [20].Liverman CT, Schultz AM, Terry SF, Leshner AI, editors. The CTSA Program at NIH: Opportunities for Advancing Clinical and Translational Research. Washington (DC): National Academies Press; 2013. [Google Scholar]
  • [21].Yin J, Ngiam KY, Teo HH. Role of Artificial Intelligence Applications in Real-Life Clinical Practice: Systematic Review. J Med Internet Res. 2021. 22 Apr;23(4):e25759. doi: 10.2196/25759. [DOI] [Google Scholar]
  • [22].Stiennon N, Ouyang L, Wu J, Ziegler DM, Lowe R, Voss C, Radford A, Amodei D, Christiano P. Learning to summarize from human feedback. arXiv [csCL]. 2020. 2 Sep. [Google Scholar]
  • [23].Hao B, Shen G, Chen R, Farris CW, Anderson SW, Zhang X, Paschalidis IC. Distributionally robust image classifiers for stroke diagnosis in accelerated MRI. In: Lecture Notes in Computer Science. Lecture notes in computer science. Cham: Springer Nature Switzerland; 2023. p. 768–777. doi: 10.1007/978-3-031-43904-9_74. [DOI] [Google Scholar]
  • [24].Sujan MA, White S, Habli I, Reynolds N. Stakeholder perceptions of the safety and assurance of artificial intelligence in healthcare. Saf Sci. 2022. Nov;155(105870):105870. doi: 10.1016/j.ssci.2022.105870. [DOI] [Google Scholar]
  • [25].Argyres NS. Technology strategy, governance structure and interdivisional coordination. J Econ Behav Organ. 1995. Dec;28(3):337–358. doi: 10.1016/0167-2681(95)00039-9. [DOI] [Google Scholar]
  • [26].Denecke K, May R, LLMHealthGroup, Rivera Romero O. Potential of large language models in health care: Delphi study. J Med Internet Res. 2024. 13 May;26(1):e52399. doi: 10.2196/52399. [DOI] [Google Scholar]
  • [27].Bakken S. AI in health: keeping the human in the loop. J Am Med Inform Assoc. 2023. 20 Jun;30(7):1225–1226. doi: 10.1093/jamia/ocad091. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [28].Mosqueira-Rey E, Hernández-Pereira E, Alonso-Ríos D, Bobes-Bascarán J, Fernández-Leal A. Human-in-the-loop machine learning: a state of the art. Artif Intell Rev. 2023. 17 Apr;56(4):3005–3054. doi: 10.1007/s10462-022-10246-w. [DOI] [Google Scholar]
  • [29].Williamson SM, Prybutok V. The era of artificial intelligence deception: Unraveling the complexities of false realities and emerging threats of misinformation. Information (Basel). 2024. 23 May;15(6):299. doi: 10.3390/info15060299. [DOI] [Google Scholar]
  • [30].Menz BD, Modi ND, Sorich MJ, Hopkins AM. Health disinformation use case highlighting the urgent need for artificial intelligence vigilance: Weapons of mass disinformation: Weapons of mass disinformation. JAMA Intern Med. 2024. 1 Jan;184(1):92–96. doi: 10.1001/jamainternmed.2023.5947. [DOI] [PubMed] [Google Scholar]
  • [31].Tripathi S, Sukumaran R, Cook TS. Efficient healthcare with large language models: optimizing clinical workflow and enhancing patient care. J Am Med Inform Assoc. 2024. 20 May;31(6):1436–1440. doi: 10.1093/jamia/ocad258. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [32].Gebreab SA, Salah K, Jayaraman R, Habib ur Rehman M, Ellaham S. LLM-based framework for administrative task automation in healthcare. In: 2024 12th International Symposium on Digital Forensics and Security (ISDFS). IEEE; 2024. p. 1–7. doi: 10.1109/isdfs60797.2024.10527275. [DOI] [Google Scholar]
  • [33].Yang J, Walker KC, Bekar-Cesaretli AA, Hao B, Bhadelia N, Joseph-McCarthy D, Paschalidis IC. Automating biomedical literature review for rapid drug discovery: Leveraging GPT-4 to expedite pandemic response. Int J Med Inform. 2024. Sep;189(105500):105500. doi: 10.1016/j.ijmedinf.2024.105500. [DOI] [Google Scholar]
  • [34].Choudhury A, Chaudhry Z. Large language models and user trust: Focus on healthcare. arXiv [csCY]. 2024. 15 Mar. [Google Scholar]
  • [35].Hazarika I. Artificial intelligence: opportunities and implications for the health workforce. Int Health. 2020. 1 Jul;12(4):241–245. doi: 10.1093/inthealth/ihaa007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [36].Charow R, Jeyakumar T, Younus S, Dolatabadi E, Salhia M, Al-Mouaswas D, Anderson M, Balakumar S, Clare M, Dhalla A, Gillan C, Haghzare S, Jackson E, Lalani N, Mattson J, Peteanu W, Tripp T, Waldorf J, Williams S, Tavares W, Wiljer D. Artificial intelligence education programs for health care professionals: Scoping review. JMIR Med Educ. 2021. 13 Dec;7(4):e31043. doi: 10.2196/31043. [DOI] [Google Scholar]
  • [37].Frehywot S, Vovides Y. An equitable and sustainable community of practice framework to address the use of artificial intelligence for global health workforce training. Hum Resour Health. 2023. 13 Jun;21(1):45. doi: 10.1186/s12960-023-00833-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [38].Fishman AY, Lounsbury DW, Lechuga C, Patena J, Marantz P, Kim M, Keller MJ. Moving from prove to improve: A collaborative continuous quality improvement process for advancing Clinical and Translational Science. J Clin Transl Sci. 2024;8(1):1–21. doi: 10.1017/cts.2024.555. [DOI] [Google Scholar]
  • [39].Escobar-Rodríguez T, Romero-Alonso M. The acceptance of information technology innovations in hospitals: differences between early and late adopters. Behaviour & Information Technology. 2014. 2 Nov;33(11):1231–1243. doi: 10.1080/0144929X.2013.810779. [DOI] [Google Scholar]
  • [40].Biernacki P, Waldorf D. Snowball sampling: Problems and techniques of chain referral sampling. Sociol Methods Res. 1981. Nov;10(2):141–163. doi: 10.1177/004912418101000205. [DOI] [Google Scholar]
  • [41].Stephen D, Adruce SAZ. Cochran’s Q with pairwise McNemar for dichotomous multiple responses data: A practical approach. Int J Eng Technol. 2018. 2 Aug. doi: 10.14419/ijet.v7i3.18.16662. [DOI] [Google Scholar]
  • [42].Mircioiu C, Atkinson J. A comparison of parametric and non-parametric methods applied to a Likert scale. Pharmacy (Basel). 2017. 10 May;5(2):26. doi: 10.3390/pharmacy5020026. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [43].Nowell LS, Norris JM, White DE, Moules NJ. Thematic Analysis: Striving to Meet the Trustworthiness Criteria. International Journal of Qualitative Methods. 2017. 1 Dec;16(1):1609406917733847. doi: 10.1177/1609406917733847. [DOI] [Google Scholar]


