Author manuscript; available in PMC: 2026 Feb 4.
Published before final editing as: Prev Sci. 2026 Jan 14:10.1007/s11121-025-01863-2. doi: 10.1007/s11121-025-01863-2

Addressing Health Disparities through Community Engagement in Artificial Intelligence-Driven Prevention Science

Emily E Haroz 1, Novalene Goklish 1, Adrienne Dillard 3, Roy Adams 4, Sheana S Bull 5, Ricardo F Gonzalez-Fisher 6, Pamela Valenza 7, Spero M Manson 2,#, J Roland Thorpe Jr 8,#
PMCID: PMC12865688  NIHMSID: NIHMS2132235  PMID: 41533193

Abstract

Artificial intelligence and machine learning (AI/ML) in prevention science may improve or perpetuate health inequities. Community engagement is one proposed strategy thought to empirically mitigate bias in AI/ML tools. We outline how to incorporate community engagement at every stage of model development and implementation. Borrowing from a framework for phases of prevention research, we describe the value and application of engaging communities to help shape more rigorous and relevant applications of AI/ML for prevention science. We provide concrete examples from real-world applications, including suicide prevention efforts with Indigenous communities, chronic disease prevention for Hispanic and Latino populations, and a community-driven effort to leverage AI/ML to improve allocation of resources focused on social determinants of health for Native Hawaiians. This work aims to provide applied examples of how community engagement has been incorporated into AI/ML development and implementation, with the goal of encouraging those in the prevention science field to consider the voices of the community as the use of such tools grows. Engaging with the community around AI/ML is critical to ensure these tools reach populations in need and advance health equity for all.


“No kakou, na kakou” – “for us, by us.”

Introduction

The rapid growth of Artificial Intelligence (AI) and Machine Learning (ML) is transforming public health approaches on a global scale. These technologies are being extensively implemented to enhance the efficiency and quality of health services and are now used to enhance risk identification, augment intervention development and training, and serve many other purposes (Amisha et al. 2019; Shaheen 2021). AI/ML approaches offer unique advantages for prevention science, including the ability to process large datasets to identify population-based insights, to tailor interventions to a variety of risk profiles or contexts, and to support delivery of prevention services at scale (Amisha et al. 2019). Artificial intelligence encompasses any computational approach designed to mimic human cognitive functions, while machine learning represents a specific subset that learns patterns from data without explicit programming. Examples of other AI approaches include rule-based systems, generative models, and natural language processing. The necessity of AI/ML in prevention science stems from fundamental limitations of traditional approaches. Prevention science increasingly confronts complex, multi-factorial health challenges that require commensurate methodological approaches. While increasing attention has been paid to study designs that address this complexity (Lich et al. 2013), less has been focused on the underlying data-analytic methods. Traditional analytic methods struggle to identify non-linear relationships and interaction effects among social determinants, behavioral factors, and health outcomes that drive population-level disparities.
AI/ML approaches address these limitations by enabling analysis of complex, high-dimensional data to identify at-risk populations earlier, personalize prevention strategies based on individual risk profiles, and predict intervention responsiveness across diverse contexts—capabilities essential for advancing precision prevention at scale (Fishbein & Dariotis, 2019).

However, these powerful capabilities introduce unique risks. Recent years have seen a surge in the use of AI/ML models in public health and prevention science, accompanied by a growing realization of the need for quality assurance when using these approaches (Food 2019). As these tools are more widely applied, biases that lead to inequities in services have become apparent. In their paper examining a widely used AI/ML tool guiding the provision of care to the sickest patients, Obermeyer et al. (2019) found that, at the same level of algorithm-generated risk score, Black patients were considerably sicker than White patients yet less likely to receive needed care (Obermeyer et al. 2019). In another example, Coley et al. (2021) showed that a suicide risk model used in a large healthcare system vastly underperformed for American Indian/Alaska Native (AI/AN) patients compared to other racial and ethnic groups (Coley et al. 2021).

While bias in healthcare and preventative services extends beyond AI/ML-based tools, these tools can inherit and amplify bias in unique ways (Chen et al. 2021; Ferryman and Pitcan 2018). Barriers to study participation and healthcare access, along with differences in care-seeking behaviors, result in datasets that overrepresent certain groups (Gianfrancesco et al. 2018; Hing and Burt 2009; Weber et al. 2017). Accumulated disparities and biases in the measurement of outcomes result in biased prediction targets that reflect different levels of need for different patients (Mullainathan and Obermeyer 2021). Metrics used to optimize and evaluate algorithms may obscure when models fail to work well for subpopulations (Subbaswamy et al. 2021). Even sound algorithms may produce biased outcomes if problems or deployment settings are not chosen equitably.

Researchers have suggested several strategies to address equity issues in AI/ML applications in health. Substantial work has gone into mathematical formalizations of fairness (Mehrabi et al. 2022), as well as quantitative and qualitative methods to audit algorithms for bias (Chen et al. 2018; Moons et al. 2019; Subbaswamy et al. 2021; Wang et al. 2022). Efforts are underway to diversify the pool of publicly available health data (Daneshjou et al. 2022; Mapes et al. 2020), as well as the pool of AI/ML researchers (“AIM-AHEAD” n.d.; Shanker 2022). Some authors have pointed to the importance of regulatory oversight to mitigate bias and ensure fairness (Rajkomar et al. 2018; Thomasian et al. 2021). In their review, Berdahl and colleagues (Berdahl et al. 2023) identified strategies to advance health equity, including improving data quantity and quality, evaluating disparities, increasing model transparency, enhancing governance, and involving the community.
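To make the auditing strategy concrete, a disaggregated performance check — the kind of analysis that surfaced the disparities reported by Obermeyer et al. and Coley et al. — can be sketched as below. The data, group labels, and threshold here are entirely synthetic and hypothetical, chosen only to illustrate the technique:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_report(y_true, y_score, group, threshold=0.5):
    """Per-group AUC, sensitivity (TPR), and false-positive rate (FPR).
    Assumes every group contains both cases and non-cases."""
    y_true, y_score, group = map(np.asarray, (y_true, y_score, group))
    report = {}
    for g in np.unique(group):
        m = group == g
        pred = y_score[m] >= threshold
        report[g] = {
            "auc": roc_auc_score(y_true[m], y_score[m]),
            "tpr": pred[y_true[m] == 1].mean(),  # sensitivity within group
            "fpr": pred[y_true[m] == 0].mean(),
        }
    return report

# Synthetic example: scores that rank cases well in group A but poorly in B.
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
score = np.array([0.2, 0.3, 0.8, 0.9, 0.6, 0.4, 0.7, 0.3])
grp = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
for g, stats in subgroup_report(y, score, grp).items():
    print(g, {k: round(float(v), 2) for k, v in stats.items()})
```

An overall AUC pooled across groups would hide the failure in group B; reporting each metric by group makes the disparity visible before deployment.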

A growing body of literature acknowledges the importance of community engagement to the equitable development of precision approaches to public health and medicine (Dolan et al. 2023) and to improving machine learning models’ rigor and relevance (Birhane et al. 2022; Kulynych et al. n.d.). Yet, there remains little concrete guidance on how to achieve effective community engagement in AI/ML development processes. This paper addresses this gap by providing a phase-based framework for systematically integrating community engagement throughout AI/ML development in prevention science. Drawing from established prevention science phases and contemporary community engagement principles (Leve et al., 2024; Boyd et al., 2023), we demonstrate how community voice can strengthen rigor, relevance, and equity at each stage—from conceptualization through sustainability. We illustrate this framework through real-world applications with American Indian, Hispanic, and Native Hawaiian communities, showing how different engagement approaches can be adapted to local contexts while maintaining core equity principles. Ultimately, this work aims to move beyond theoretical acknowledgment of community engagement’s importance to practical guidance for prevention scientists developing AI/ML tools.

Community engagement to advance health equity

Advancing health equity is a priority across prevention initiatives, with community engagement serving as a cornerstone strategy. Community engagement exists on a continuum with increasing levels of involvement, impact, trust, and communication flow, progressing from outreach to consultation, involvement, collaboration, and finally to shared leadership (Leve et al., 2024). This approach requires moving beyond university-driven research agendas toward mutually defined or even community-driven agendas, emphasizing cultural humility in creating “mutually respectful, equal, and dynamic partnerships between academic and underrepresented communities” (Leve et al., 2024).

Community engagement and Community-Based Participatory Research (CBPR) represent distinct but complementary approaches. While community engagement encompasses a spectrum of involvement levels, CBPR is a specific research orientation that “equitably involves community members, organizational representatives, and academic researchers in all aspects of the research process” with shared decision-making throughout (Israel et al., 2018). Both approaches are effective in reducing health inequities (Ortiz et al., 2020) and are considered the “gold standard for prevention science that advances health equity” (Boyd et al., 2023).

Successful community engagement can significantly improve the relevance, rigor, and reach of research (Balazs and Morello-Frosch 2013). Stewart et al. (2015) and Pratt and de Vries (2018) highlight the importance of community involvement in research design and implementation, particularly as it enhances acceptability and increases willingness to participate in research studies. Similarly, recent articles highlight the importance of community engagement in responding to the COVID-19 pandemic, including helping to reduce mortality from the disease (Haroz et al. 2022; Manson and Buchwald 2021; Santibañez et al. 2019).

The integration of AI/ML tools in prevention science introduces unique equity challenges that require explicit community engagement strategies. Public health institutions must “prioritize equity in AI design and implementation, minimizing the risk of reinforcing existing health disparities” through training datasets that are “inclusive and representative of diverse populations” (Panteli et al., 2025). Fisher and Rosella (2022) identify explicit consideration of equity and bias as one of six strategic priorities for successful AI implementation, emphasizing the need to “carefully evaluate and assess potential sources of bias throughout development” and “engage with the community” as essential practices. Without community engagement, AI systems risk perpetuating mistrust from marginalized communities (Leve et al., 2024).

Effective community engagement in AI/ML development requires sustained investment in building trust and capacity. This includes ensuring community members have input into research processes and recognizing that “without a history of positive collaboration with communities…researchers miss individual representation, but also their input” (Leve et al., 2024). Contemporary approaches must incorporate explainable AI technologies to “ensure decisions made by AI systems are understandable and fair” (Panteli et al., 2025), while fostering transparent partnerships that address community priorities and values throughout the development lifecycle.

How can community engagement be used to develop AI-based prevention science tools?

While existing literature recognizes the importance of community engagement in AI/ML development (Berdahl et al. 2023; Birhane et al. 2022), there remains a critical gap in systematic frameworks for prevention science applications. Our work addresses this gap by providing a phase-based framework illustrated through real-world case studies, demonstrating how community engagement can be integrated throughout AI/ML development in prevention science contexts.

Drawing from the phases of prevention research, we propose viewing AI/ML development through similar stages: (1) conceptualization, (2) design of prevention strategy, (3) efficacy testing, (4) effectiveness testing, (5) dissemination and implementation, and (6) sustainability. Community engagement strengthens each stage, ensuring that AI-assisted preventative interventions address genuine needs while remaining responsive to community priorities.

During the Conceptualization Phase, engagement with communities facilitates identifying priority health conditions that require attention and the contextual factors that may affect appropriate utilization of the tool. Short-term engagement strategies employed at this phase include establishing a community advisory board (CAB), understanding baseline AI literacy, and conducting stakeholder interviews. Long-term engagement strategies involve providing training in data science or related fields to equip community members with the skills needed to develop and guide their projects effectively.

In the Design Phase, community input can shape the tool’s user interface, functionality, and planned workflow to make these domains more intuitive, accessible, and acceptable for diverse users or for a particular group of users. This phase may involve working with stakeholders to determine how to convey the tool’s information in an accessible and appropriate manner. Again, short-term strategies may involve user-centered design cycles or stakeholder interviews. Longer-term solutions may include developing and investing in infrastructure to make this process easier and more efficient.

Moving into the Evaluation Phase, community engagement helps ensure that appropriate data are included, that results are interpreted consistent with community values and perspectives, and that the tool is helpful for specific population segments through priority subgroup analyses. Evaluation results should be reviewed with advisory boards and local stakeholders, and communities can pose further questions of the evaluation data prior to implementation. Here, short-term and long-term engagement strategies mirror those of earlier phases and include additional training to help adapt these methods to community contexts.

During the Dissemination and Implementation Phase, insights from the community can guide strategies for effective dissemination and implementation to ensure that the community receives the benefits. Community-engaged implementation science tools and strategies may be particularly useful in this phase (Berdahl et al., 2023).

The Sustainability Phase encompasses two critical dimensions often overlooked in AI/ML literature. Implementation sustainability involves the organizational, financial, and governance structures needed to maintain tools over time. Algorithmic sustainability addresses a unique challenge: maintaining model performance as underlying disease distributions change due to intervention. This dual challenge is particularly complex in community-engaged contexts where algorithmic updates must preserve community trust and oversight. Continuous community feedback remains essential to assess long-term impact, audit for performance disparities and safety, make decisions about algorithmic updates, and ensure continued community benefit.
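One minimal, widely used check for the distribution drift that algorithmic sustainability must contend with is the Population Stability Index (PSI), comparing a retained baseline sample of a model input or risk score against newer data. The sketch below is illustrative, not from any of the projects described here; the 0.1/0.2 cutoffs are conventional rules of thumb:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample (`expected`)
    and a newer sample (`actual`) of one model input or score.
    Note: `actual` values outside the baseline's range are ignored by
    np.histogram, so extreme shifts should also be checked directly."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e, _ = np.histogram(expected, bins=edges)
    a, _ = np.histogram(actual, bins=edges)
    e = np.clip(e / e.sum(), 1e-6, None)  # avoid log(0) in empty bins
    a = np.clip(a / a.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

# Rule of thumb: PSI < 0.1 stable, 0.1-0.2 watch, > 0.2 investigate/retrain.
baseline = np.linspace(0, 1, 1000)  # stand-in for historical risk scores
drifted = baseline ** 2             # same range, shifted distribution
print(f"no drift: {psi(baseline, baseline):.3f}  drifted: {psi(baseline, drifted):.3f}")
```

When a PSI check flags drift, the decision about whether and how to recalibrate would, under the framework above, go back to community advisory structures rather than being made unilaterally by the technical team.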

By integrating community engagement throughout these stages, the processes used to develop AI/ML-enabled tools become more inclusive, responsive, accurate, and responsible, ultimately leading to equitable and impactful innovations across all segments of the population. The result is a cyclical process that allows for further refinement of tools to ensure that the highest-quality approaches are delivered in the service of prevention.

Example of community engagement to develop and launch an AI/ML suicide-risk tool

Here we highlight the development and implementation of the Native-RISE (Risk Identification and Enhanced Care) system to illustrate how community engagement, framed within the phases of prevention, can be used throughout the AI/ML tool development and implementation process. Suicide rates among American Indian/Alaska Natives (AI/ANs) are disproportionately high, a trend that has worsened over time (CDC 2021; Stone et al. 2023). While many AI/ML suicide-risk models exist, almost none have been developed specifically for AI/ANs, a gap that the Native-RISE project aims to address (Cwik et al. 2016).

In 2017, tribal partners worked with long-standing university partners (Johns Hopkins Center for Indigenous Health; JHCIH) to support development of a tool to identify individuals at high risk in the Native community. Working with JHCIH researchers, members of the community obtained a tribal resolution to support analysis of local data to help in this process (Figure 2, conceptualization). AI/ML tools were selected as a potential method to meet the community’s needs.

Figure 2. Community engagement in Native-RISE as applied to the phases of prevention research for AI/ML-supported strategies

The resulting work examined ten years of local surveillance data to identify individuals at risk of a suicide attempt within 12 months of their last referral for suicide-related behaviors to a local suicide surveillance and case management system. Results indicated that a machine learning algorithm (logistic regression with 73 features) using these data produced an Area Under the Curve (AUC) of 0.87 (95% CI ± 0.04). This substantially outperformed assigning high-risk status based on a past suicide attempt, our best known single risk-factor indicator and what was being used in practice (AUC = 0.57; 95% CI ± 0.08) (Haroz et al. 2019). While this was a significant improvement, translation to practice was key (Figure 2, design phase). We interviewed program staff, asking how they currently evaluated risk and how this type of tool could be helpful. Generally, staff welcomed the tool, indicated preferences for a high- vs. low-risk classification, and noted that much of their risk assessment is done through real-time clinical observation when meeting with a person, rather than through past history. Information on how staff approach suicide risk informed the workflow and how to set cutoffs defining high- vs. low-risk groups. In this instance, case managers favored specificity (fewer false positives) so that the risk model would be trustworthy, with the understanding that their clinical judgment would compensate for some loss in sensitivity (i.e., more false negatives) (Figure 2; Efficacy Phase) (Haroz et al. 2021).
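The specificity-first cutoff choice that case managers favored can be sketched as a simple procedure: pick the smallest score cutoff at which a target fraction of known non-cases falls below the flagging threshold, then report the sensitivity that remains. This is an illustration with hypothetical numbers, not the Native-RISE implementation:

```python
import numpy as np

def threshold_for_specificity(y_true, y_score, target=0.95):
    """Smallest cutoff (flag if score >= cutoff) at which at least
    `target` of known non-cases fall below the cutoff. Ties at the
    cutoff can push realized specificity slightly below target."""
    y_true = np.asarray(y_true)
    neg = np.sort(np.asarray(y_score)[y_true == 0])
    k = int(np.ceil(target * len(neg)))      # non-cases that must score below the cutoff
    if k >= len(neg):
        return np.nextafter(neg[-1], np.inf) # cutoff just above the top non-case score
    return neg[k]

# Hypothetical example: require specificity of at least 0.75.
y = np.array([0, 0, 0, 0, 1, 1])
scores = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.9])
cutoff = threshold_for_specificity(y, scores, target=0.75)
flagged = scores >= cutoff
print(f"cutoff={cutoff:.2f}  specificity={1 - flagged[y == 0].mean():.2f}  "
      f"sensitivity={flagged[y == 1].mean():.2f}")
```

The key point of the Native-RISE example is that the `target` value in a procedure like this is not purely a technical choice: it encodes the community's preferred trade-off between false alarms and missed cases.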

In the next step (Figure 2, Evaluation Phase), we evaluated the algorithm’s performance in real time, conducting a prognostic study of the model’s performance at identifying those at risk of another suicide-related event among 400 individuals. The risk model continued to improve risk identification, was associated with an increase in reach of care for individuals at highest risk, and, when used by staff, helped to reduce the risk of suicidal behaviors for these highest-risk cases (Haroz et al. 2023). Results of all of these studies were disseminated with manuscript review and approval from the Tribe. Subsequent dissemination (e.g., conferences, papers) included people from the community, who represent a critical voice in how this work was conducted and in its implications locally and for the broader scientific field. Finally, based on these results and ongoing communication with other community partners, we sought and obtained an external grant to expand this work to the health system. During the upcoming Implementation Phase (Figure 2), we are working with the Indian Health Service, the provider of health services to the more than 2.2 million AI/AN people who are members of federally recognized tribes, to expand the reach of this work (R01MH128518). As the model is rolled out, ongoing monitoring will include biannual reviews of model performance disaggregated by key demographic factors, with community advisory board members and staff from the local communities participating in interpretation of results and decisions about recalibration thresholds. Such community-engaged auditing ensures that technical performance metrics are evaluated against community values regarding acceptable trade-offs between different types of errors.

Example of community engagement to adapt and utilize an AI/ML screening and self-management-support chatbot

English- and Spanish-speaking Latino populations in Colorado experience disparities in screening, timely diagnosis, and access to treatment for a range of chronic health concerns (Thorpe et al. n.d.). Increasing access to screening, timely diagnosis, and treatment, and promoting engagement in self-management activities, are recommended strategies to address these inequities. Text-message-based AI conversational chatbots are emerging as a potential tool to facilitate this access.

However, there are complexities in working across languages and cultures, and we have little data about chatbots’ acceptability and feasibility in these communities. Our work aimed to explore how an AI chatbot could be co-developed with Latino communities. The goal was to adapt existing AI chatbots, designed to facilitate access to screening and self-management for diabetes, hypertension, high cholesterol, and cancer, so that they would resonate with a primarily Spanish-speaking, new-immigrant, low-income patient population seeking care through a federally qualified health center, Tepeyac Community Health Center (Tepeyac) (“Tepeyac Community Health Center” n.d.).

We used an iterative design and development process. During the Conceptualization and Design Phases, we collaborated with two local programs, “Ventanillas de Salud” and “Servicios de La Raza,” which offer in-person blood-pressure and blood-sugar screening and referrals for care to people seeking consular services. We conducted in-depth interviews with 18 of these programs’ clients, asking whether they were familiar with chatbots; what they thought of them; whether chatbots could be used to access information and support for healthy behaviors; and what strategies might encourage chatbot use. We learned that any chatbot system would need to be branded with the name of a healthcare organization to demonstrate endorsement and engender trust; to be offered in Spanish; to offer users prompts and shortcuts to help them understand the system and minimize typing; and to include relatable content to encourage engagement and ongoing use. These suggestions were integrated into the chatbot system. For example, if a client asked about physical activity, the chatbot would include suggestions of types of movement that, based on our qualitative interviews, resonate more for Latinos, such as dancing (salsa, merengue, hip hop!), bird watching, and playing tag with the user’s children (see Appendix for further examples).

The Design Phase involved generating, categorizing, and creating variations for each anticipated “intent” (i.e., the specific topics we believed people wanted to learn or ask about screening, diagnosis, and treatment), designing the system to revert to a fixed choice of responses if no match was found, and using data-augmentation techniques to update the system’s library of questions and variations. Additionally, the provider team at Tepeyac was consulted for medical accuracy and cultural relevance.
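The intent-matching design described above — match a user query against stored variations of each intent and revert to a fixed-choice menu when nothing is close enough — can be sketched as follows. The intents, example variations, and similarity threshold here are hypothetical, not the deployed Tepeyac system, whose architecture is described in Bull et al.:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical intents and variations; a deployed library would be far larger.
INTENTS = {
    "screening": ["where can I get a blood pressure check",
                  "how do I get screened for diabetes"],
    "self_management": ["how do I manage my blood sugar",
                        "tips for lowering cholesterol"],
}

phrases, labels = [], []
for intent, variations in INTENTS.items():
    phrases += variations
    labels += [intent] * len(variations)

vec = TfidfVectorizer().fit(phrases)
X = vec.transform(phrases)

def classify(query, min_similarity=0.3):
    """Return the best-matching intent, or None to trigger the
    fixed-choice fallback menu when no stored variation is close enough."""
    sims = cosine_similarity(vec.transform([query]), X)[0]
    best = sims.argmax()
    return labels[best] if sims[best] >= min_similarity else None

print(classify("how can I manage my blood sugar levels"))  # matches self_management
print(classify("what is the weather tomorrow"))            # None -> fallback menu
```

Data augmentation in this design amounts to adding newly observed question variations to `INTENTS` and refitting, which is how the system’s library of questions grows over time.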

During the Evaluation Phase, the chatbot’s natural language processing and intent-classification algorithms achieved 78% accuracy in matching user queries to appropriate responses. The technical architecture employed approaches similar to those described in Bull et al. (2024, 2025) for health-focused chatbot development, with full technical specifications to be detailed in future publications.

Access to the system was made available on the Tepeyac website on August 15, 2023, and remains functional today. As part of ongoing monitoring, we log all engagements with the chatbot, track each communication, and log incorrect responses. As of January 1, 2024, there were 554 queries to the system from 172 unique users. The average number of messages per user was 3.2, suggesting people ask multiple questions.

The chatbot implementation incorporates the following ethical safeguards. Users access the system on the website anonymously, without any login requirements. The system records user IP addresses, which can be considered protected health information (PHI), although an IP address is not necessarily unique to a user who shares a computer. Users are notified that they should call 911 for a medical emergency or 988 for a mental health emergency. Every user is notified upon first accessing the system that the chatbot is intended to provide general information only.

Users are informed upon first accessing the system that logs documenting questions and answers are retained only on the clinic servers behind a secure firewall. Decisions as to whether and how long to keep these logs are the purview of clinics, because they represent patient engagement and may inform clinical decision-making regarding patient education in the future.

This work has been classified as exempt from IRB review, given its emphasis on patient education and the minimal risk to privacy or confidentiality, and thus represents an example of how clinics might use an AI chatbot for quality improvement.

Example of community engagement to develop an AI/ML tool for Community Health Workers

As an example of deep engagement during the Conceptualization Phase, in the spring of 2022, Kula no nā Po’e Hawai’i o Papakōlea, Kewalo, Kalāwahine (KULA), a Hawai’i community-based nonprofit, sought to develop an AI/ML tool for Community Health Workers (CHWs) serving Native Hawaiian/Pacific Islander (NHPI) populations. The goal is to draw upon KULA datasets to address health disparities within the Hawaiian homestead region of Papakōlea. Specifically, KULA aims to apply AI/ML to the Native Hawaiian Homestead Health Survey (HHS) dataset to create an AI-powered application for CHWs that improves program coverage and resource recommendations.

The project is rooted in a cultural safety research praxis, HILINA’I, a model of community-based safety for Indigenous communities and communities of color that builds trust and cultural safety when engaging in research (Ka’opua et al. 2017). KULA uses this framework to ensure that the community of Papakōlea is engaged on its own terms to advance self-sufficiency and self-determination while preserving Native Hawaiian values, traditions, and culture. The project is driven by the concept: “No kakou, na kakou.”

During the Conceptualization Phase, a Community Advisory Board was formed to guide activities. CHWs and KULA staff were introduced to AI/ML through training and a group modeling exercise, with feedback gathered throughout the workshop. This information, coupled with a larger process involving the National Association of Pasifika Organizations (NAOPO) that relied on “talanoa - talk story” sessions held with 15 local organizations to gather information on communities’ knowledge of, experience with, and attitudes toward using AI/ML, has yielded important community-driven lessons, including: (1) Prioritize needs as perceived by the community itself; (2) Establish relationships to foster trust and understanding and to address cultural safety concerns; (3) Develop sustainable and practical solutions to health inequities by incorporating the wisdom, knowledge, and experiences of community members, leaders, and advocates; (4) Engage with the community from the outset to facilitate transparent collaboration; (5) Collaborate with the community to create solutions that are truly meaningful to them, rather than imposing preconceived notions; (6) Encourage both the community and AI/ML experts to invest in learning each other’s “language” to facilitate effective communication and understanding; (7) Foster reciprocal relationships and concurrent learning among all research partners; and (8) Humanize the process of ML model building to make it comprehensible and engaging for the community, thereby increasing willingness to participate. These lessons are foundational to engaging communities in developing AI/ML tools that will most benefit communities with the highest needs. Technical specifications for the planned AI/ML application will be detailed in future publications as the project advances through implementation phases.

Conclusion

Our framework demonstrates how community engagement can be systematically integrated across all phases of AI/ML-driven tool development for prevention. Each case study illustrates a different approach: Native-RISE’s focus on community-driven priorities and expansion into health systems, the chatbot’s adaptation across languages and cultural contexts, and KULA’s community-driven development. These diverse pathways suggest that successful implementation, sustainability, and scaling require maintaining core community engagement principles while adapting methods to local contexts, infrastructure, and community capacity.

Community engagement is critical to developing AI/ML tools for use in prevention science. By incorporating diverse perspectives at every stage of model development and implementation, these tools become more closely aligned to real-world needs and challenges of end-users. Authentic community engagement takes time but engenders trust. Ultimately, integrating community insights into AI/ML enabled prevention tools is not just a matter of enhancing functionality but is essential in advancing health equity.

Figure 1. Phases of prevention research for AI/ML-supported strategies

Figure 3. Community engagement in development of the Tepeyac Chatbot

Funding

This publication was supported in part by the AIM-AHEAD Coordinating Center (SM Manson & R Thorpe, MPIs), itself supported by the Office of the Director, National Institutes of Health Common Fund under award number 1OT2OD032581-01. Work reported here is also supported by the National Institute of Mental Health (NIMH), grant number R01MH128518 (EH Haroz, PI). The work is solely the responsibility of the authors and does not necessarily represent the official view of AIM-AHEAD, NIMH, or the National Institutes of Health.

Table A1:

Examples of changes to chatbot messages to increase cultural relevance and engagement

Topic (English): How can I make physical activity more fun?
Topic (Spanish): ¿Cómo puedo hacer que la actividad fisica sea más divertida?
English response: Dancing (salsa, merengue, hip hop!), walking with a friend, and bird watching are all things you can do to stay active. If you have kids in your life, playing tag and going to the playground work, too.
Spanish response: Bailar salsa, merengue, hip hop!; Caminar con un amigo y observar el paisaje a tu alrededor, es todo lo que puedes hacer para mantenerte activo. Si tienes hijos, “el juego de la mancha” o ir con ellos al parque de juegos, tambien es una buena opcion!
How culturally tailored: Focus is on types of movement that resonate for Latinos

Topic (English): How do I manage my blood sugar levels?
Topic (Spanish): ¿Cómo manejo mis niveles de azúcar en la sangre?
English response: Did you know that both Halle Berry and Tom Hanks have type II diabetes? This means they have to control what they eat and make sure to stay active as two key ways to manage their blood sugar and keep it at a healthy level. I can help you with ideas on how--just ask!
Spanish response: ¿Sabías que tanto Isabel Pantoja, la Jueza Sotomayor como Don Francisco tienen diabetes tipo II? Esto significa que tienen que controlar lo que comen y asegurarse de mantenerse activos como tambien de controlar su azúcar en la sangre y mantenerlo en un nivel saludable. Puedo ayudarte con ideas sobre cómo lograrlo ¡solo pregunta!
How culturally tailored: Role models identified in the English content are different from those in the Spanish content; the latter includes people better known to the Latino community

Topic (English): What are some steps I can take to reduce stress?
Topic (Spanish): ¿Cuáles son algunos pasos que puedo tomar para reducir el estres?
English response: You might know people like Maria & Jorge who have promised their kids that they will have a family dinner at their house every week—even if it is just to share sopa. You can make this plan too as a way to reduce stress.
Spanish response: Posiblemente conoces a María, una gran líder de Colorado, que tiene un saludable habito para reducir el estres. Maria suele planear junto con sus hijos, una cena familiar al menos una vez al mes y con sus amigos mas cercanos, se reunen para charlar durante los fines de semana. También puedes hacer este plan como una forma de reducir el estrés.
How culturally tailored: Focus is on types of activities and food that can resonate for diverse communities

Topic (English): How can I access a cancer support group in the Denver area?
Topic (Spanish): ¿Cómo puedo acceder a un grupo de apoyo para pacientes con cáncer en Denver?
English response: People living in the Denver area have a couple of choices for support groups. Will you commit to one of these today? The XXX Church located at XXX., Arvada, CO offers a cancer support group. Also, Coffee & Conversations is an online option. Contact XXX at (303) XXX-XXXX.
Spanish response: Las personas que viven en Colorado tienen varias opciones en grupos de apoyo. ¿Te comprometerás con uno de esos grupos hoy? Haga clic aquí para obtener información sobre grupos de apoyo en espanol.
How culturally tailored: The support group identified in the English content does not offer options for exclusive Spanish speakers; we tailored information for Spanish speakers to access support in their preferred language

Figure A1.

User interface for the AI-chatbot based on community feedback and preferences

Footnotes

Disclosure

None

Ethical Approval

Each study reviewed in this article obtained ethical approvals prior to engaging participants. No primary data are reported in this manuscript.

Informed Consent

Each study reviewed in this article obtained informed consent as required by the review boards overseeing the respective studies. No primary data are reported in this manuscript.

References

  1. AIM-AHEAD. (n.d.). AIM-AHEAD. https://www.aim-ahead.net/. Accessed 24 December 2023
  2. Amisha, Malik P, Pathania M, & Rathaur VK (2019). Overview of artificial intelligence in medicine. Journal of family medicine and primary care, 8(7), 2328–2331. 10.4103/jfmpc.jfmpc_440_19 [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Balazs CL, & Morello-Frosch R (2013). The Three R’s: How Community Based Participatory Research Strengthens the Rigor, Relevance and Reach of Science. Environmental justice, 6(1). 10.1089/env.2012.0017 [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Berdahl CT, Baker L, Mann S, Osoba O, & Girosi F (2023). Strategies to Improve the Impact of Artificial Intelligence on Health Equity: Scoping Review. JMIR AI, 2(1), e42936. 10.2196/42936 [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Birhane A, Isaac W, Prabhakaran V, Diaz M, Elish MC, Gabriel I, & Mohamed S (2022). Power to the People? Opportunities and Challenges for Participatory AI. In Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (pp. 1–8). New York, NY, USA: Association for Computing Machinery. 10.1145/3551624.3555290 [DOI] [Google Scholar]
  6. Centers for Disease Control and Prevention, National Center for Health Statistics. (2021). 1999–2020 Wide Ranging Online Data for Epidemiological Research (WONDER), Multiple Cause of Death files. http://wonder.cdc.gov/ucd-icd10.html
  7. Chen I, Johansson FD, & Sontag D (2018). Why is my classifier discriminatory? Advances in neural information processing systems, abs/1805.12002. https://proceedings.neurips.cc/paper/2018/hash/1f1baa5b8edac74eb4eaa329f14a0361-Abstract.html [Google Scholar]
  8. Chen IY, Pierson E, Rose S, Joshi S, Ferryman K, & Ghassemi M (2021). Ethical Machine Learning in Healthcare. Annual Review of Biomedical Data Science, 4(1), 123–144. 10.1146/annurev-biodatasci-092820-114757 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Coley RY, Johnson E, Simon GE, Cruz M, & Shortreed SM (2021). Racial/Ethnic Disparities in the Performance of Prediction Models for Death by Suicide After Mental Health Visits. JAMA psychiatry, 78(7), 726–734. 10.1001/jamapsychiatry.2021.0493 [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Cwik MF, Tingey L, Maschino A, Goklish N, Larzelere-Hinton F, Walkup J, & Barlow A (2016). Decreases in Suicide Deaths and Attempts Linked to the White Mountain Apache Suicide Surveillance and Prevention System, 2001–2012. American journal of public health, 106(12), 2183–2189. 10.2105/AJPH.2016.303453 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Cyril S, Smith BJ, Possamai-Inesedy A, & Renzaho AMN (2015). Exploring the role of community engagement in improving the health of disadvantaged populations: a systematic review. Global health action, 8, 29842. 10.3402/gha.v8.29842 [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Daneshjou R, Vodrahalli K, Novoa RA, Jenkins M, Liang W, Rotemberg V, et al. (2022). Disparities in dermatology AI performance on a diverse, curated clinical image set. Science advances, 8(32), eabq6147. 10.1126/sciadv.abq6147 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Dolan DD, Cho MK, & Lee SS-J (2023). Innovating for a Just and Equitable Future in Genomic and Precision Medicine Research. The American journal of bioethics: AJOB, 23(7), 1–4. 10.1080/15265161.2023.2215201 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Ferryman K, & Pitcan M (2018). Fairness in precision medicine. https://apo.org.au/sites/default/files/resource-files/2018-02/apo-nid134536_1.pdf
  15. Fishbein DH, & Dariotis JK (2019). Personalizing and optimizing preventive intervention models via a translational neuroscience framework. Prevention Science, 20(1), 10–20. 10.1007/s11121-017-0851-8 [DOI] [PubMed] [Google Scholar]
  16. Food and Drug Administration (FDA). (2019). Proposed regulatory framework for modifications to Artificial Intelligence/Machine Learning (AI/ML)-based Software as a Medical Device (SaMD). Department of Health and Human Services (United States). https://apo.org.au/node/228371. Accessed 18 December 2023 [Google Scholar]
  17. Gianfrancesco MA, Tamang S, Yazdany J, & Schmajuk G (2018). Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data. JAMA internal medicine, 178(11), 1544–1547. 10.1001/jamainternmed.2018.3763 [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Haroz EE, Goklish N, Walsh CG, Cwik M, O’Keefe VM, Larzelere F, et al. (2023). Evaluation of the Risk Identification for Suicide and Enhanced Care Model in a Native American Community. JAMA psychiatry, 80(7), 675–681. 10.1001/jamapsychiatry.2022.5068 [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Haroz EE, Grubin F, Goklish N, Pioche S, Cwik M, Barlow A, et al. (2021). Designing a Clinical Decision Support Tool That Leverages Machine Learning for Suicide Risk Prediction: Development Study in Partnership With Native American Care Providers. JMIR public health and surveillance, 7(9), e24377. 10.2196/24377 [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Haroz EE, Kemp CG, O’Keefe VM, Pocock K, Wilson DR, Christensen L, et al. (2022). Nurturing Innovation at the Roots: The Success of COVID-19 Vaccination in American Indian and Alaska Native Communities. American journal of public health, 112(3), 383–387. 10.2105/AJPH.2021.306635 [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Haroz EE, Walsh CG, Goklish N, Cwik MF, O’Keefe V, & Barlow A (2019). Reaching Those at Highest Risk for Suicide: Development of a Model Using Machine Learning Methods for use With Native American Communities. Suicide & life-threatening behavior, sltb.12598. 10.1111/sltb.12598 [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Hing E, & Burt CW (2009). Are there patient disparities when electronic health records are adopted? Journal of health care for the poor and underserved, 20(2), 473–488. 10.1353/hpu.0.0143 [DOI] [PubMed] [Google Scholar]
  23. Agency for Toxic Substances and Disease Registry. (2022, July 26). Community Engagement. https://www.atsdr.cdc.gov/communityengagement/index.html. Accessed 18 December 2023
  24. Israel BA, Schulz AJ, Parker EA, & Becker AB (1998). Review of community-based research: Assessing partnership approaches to improve public health. Annual Review of Public Health, 19(1), 173–202. 10.1146/annurev.publhealth.19.1.173 [DOI] [PubMed] [Google Scholar]
  25. Ka’opua LSI, Tamang S, Dillard A, & Puni Kekauoha B (2017). Decolonizing knowledge development in health research cultural safety through the lens of Hawaiian Homestead residents. Journal of indigenous social development, 5(2). https://journalhosting.ucalgary.ca/index.php/jisd/article/view/58467. Accessed 2 April 2024 [Google Scholar]
  26. Kellam SG, & Langevin DJ (2003). A framework for understanding “evidence” in prevention research and programs. Prevention science: the official journal of the Society for Prevention Research, 4(3), 137–153. 10.1023/a:1024693321963 [DOI] [PubMed] [Google Scholar]
  27. Kulynych B, Madras D, Milli S, Raji ID, Zhou A, & Zemel R (n.d.). Participatory Approaches to Machine Learning. Presented at the International Conference on Machine Learning Workshop. https://participatoryml.github.io/. Accessed 2 April 2024 [Google Scholar]
  28. Lich KH, Ginexi EM, Osgood ND, & Mabry PL (2013). A call to address complexity in prevention science research. Prevention Science, 14(3), 279–289. 10.1007/s11121-012-0285-2 [DOI] [PubMed] [Google Scholar]
  29. Manson SM, & Buchwald D (2021). Bringing Light to the Darkness: COVID-19 and Survivance of American Indians and Alaska Natives. Health equity, 5(1), 59–63. 10.1089/heq.2020.0123 [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Mapes BM, Foster CS, Kusnoor SV, Epelbaum MI, AuYoung M, Jenkins G, et al. (2020). Diversity and inclusion for the All of Us research program: A scoping review. PloS one, 15(7), e0234962. 10.1371/journal.pone.0234962 [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Mehrabi N, de Lichy C, McKay J, He C, & Campbell W (2022, January 24). Towards Multi-Objective Statistically Fair Federated Learning. arXiv [cs.LG]. http://arxiv.org/abs/2201.09917 [Google Scholar]
  32. Moons KGM, Wolff RF, Riley RD, Whiting PF, Westwood M, Collins GS, et al. (2019). PROBAST: A Tool to Assess Risk of Bias and Applicability of Prediction Model Studies: Explanation and Elaboration. Annals of internal medicine, 170(1), W1–W33. 10.7326/M18-1377 [DOI] [PubMed] [Google Scholar]
  33. Mullainathan S, & Obermeyer Z (2021). On the inequity of predicting A while hoping for B. AEA papers and proceedings. American Economic Association, 111, 37–42. 10.1257/pandp.20211078 [DOI] [Google Scholar]
  34. Obermeyer Z, Powers B, Vogeli C, & Mullainathan S (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. 10.1126/science.aax2342 [DOI] [PubMed] [Google Scholar]
  35. Ortiz K, Nash J, Shea L, Oetzel J, Garoutte J, Sanchez-Youngman S, & Wallerstein N (2020). Partnerships, Processes, and Outcomes: A Health Equity–Focused Scoping Meta-Review of Community-Engaged Scholarship. Annual review of public health, 41(1), 177–199. 10.1146/annurev-publhealth-040119-094220 [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Pratt B, & de Vries J (2018). Community engagement in global health research that advances health equity. Bioethics, 32(7), 454–463. 10.1111/bioe.12465 [DOI] [PubMed] [Google Scholar]
  37. Rajkomar A, Hardt M, Howell MD, Corrado G, & Chin MH (2018). Ensuring Fairness in Machine Learning to Advance Health Equity. Annals of internal medicine, 169(12), 866–872. 10.7326/M18-1990 [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Santibañez S, Davis M, & Avchen RN (2019). CDC Engagement With Community and Faith-Based Organizations in Public Health Emergencies. American journal of public health, 109(S4), S274–S276. 10.2105/AJPH.2019.305275 [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Shaheen MY (2021, September 25). Applications of Artificial Intelligence (AI) in healthcare: A review. 10.14293/s2199-1006.1.sor-.ppvry8k.v1 [DOI] [Google Scholar]
  40. Shanker A (2022). Advancing Health Equity and Biomedical Researcher Diversity: A New AIM-AHEAD Consortium. Cancer Health Disparities, 6. https://www.companyofscientists.com/index.php/chd/article/view/208. Accessed 2 April 2024 [Google Scholar]
  41. Stewart MK, Felix HC, Olson M, Cottoms N, Bachelder A, & Smith J (2015). Community Engagement in Health-Related Research: A Case Study of a Community-Linked Research Infrastructure, Jefferson County, Arkansas, 2011–2013. Preventing chronic disease, 12, E115. 10.5888/pcd12.140564 [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Stone DM, Mack KA, & Qualters J (2023). Notes from the Field: Recent Changes in Suicide Rates, by Race and Ethnicity and Age Group - United States, 2021. MMWR. Morbidity and mortality weekly report, 72(6), 160–162. 10.15585/mmwr.mm7206a4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Subbaswamy A, Adams R, & Saria S (13–15 Apr 2021). Evaluating Model Robustness and Stability to Dataset Shift. In Banerjee A & Fukumizu K (Eds.), Proceedings of The 24th International Conference on Artificial Intelligence and Statistics (Vol. 130, pp. 2611–2619). PMLR. https://proceedings.mlr.press/v130/subbaswamy21a.html [Google Scholar]
  44. Tepeyac Community Health Center. (n.d.). Tepeyac Community Health Center. https://www.tepeyachealth.org/. Accessed 2 April 2024
  45. Thomasian NM, Eickhoff C, & Adashi EY (2021). Advancing health equity with artificial intelligence. Journal of public health policy, 42(4), 602–611. 10.1057/s41271-021-00319-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Thorpe KE, Chin KK, Cruz Y, Innocent MA, & Singh L (n.d.). The United States Can Reduce Socioeconomic Disparities By Focusing On Chronic Diseases. Health Affairs Forefront. 10.1377/forefront.20170817.061561 [DOI] [Google Scholar]
  47. Wang HE, Landers M, Adams R, Subbaswamy A, Kharrazi H, Gaskin DJ, & Saria S (2022). A bias evaluation checklist for predictive models and its pilot application for 30-day hospital readmission models. Journal of the American Medical Informatics Association: JAMIA, 29(8), 1323–1333. 10.1093/jamia/ocac065 [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Weber GM, Adams WG, Bernstam EV, Bickel JP, Fox KP, Marsolo K, et al. (2017). Biases introduced by filtering electronic health records for patients with “complete data.” Journal of the American Medical Informatics Association: JAMIA, 24(6), 1134–1141. 10.1093/jamia/ocx071 [DOI] [PMC free article] [PubMed] [Google Scholar]
