Abstract
Artificial intelligence (AI) tools and techniques are undoubtedly being used in bioinformatics education, reflecting broader trends in education. However, many instructors and learners may be unaware of the full scope of potential uses for these tools within bioinformatics education, as well as effective practices for using them. Building on discussions held at the 6th Global Bioinformatics Education Summit, this perspective article provides insights about ways that AI might be used to generate or adapt instructional content, provide personalized help for learners, and automate assessment and grading. Additionally, we highlight AI skills that are important for bioinformatics learners to develop in order to effectively use AI as a bioinformatics learning tool. We highlight currently available tools in the quickly evolving AI landscape and suggest ways that instructors or learners might use such tools. Furthermore, we discuss key considerations and challenges associated with integrating AI into bioinformatics education, including ethical implications, potential biases, and the need to critically evaluate AI-generated content. Finally, we highlight the need for further research to better understand how AI tools are being used in practice and empower their effective and responsible use in bioinformatics education.
1 Introduction
Artificial intelligence (AI) has been used in educational settings for decades (Zawacki-Richter et al. 2019). However, recent advances in generative AI (GenAI) and natural language processing have expanded educators’ views about AI’s potential in education. Instructors and learners are experimenting with these tools as a way to develop educational materials, provide personalized support to learners, and equip learners with skills that they will need for the modern workforce. However, many instructors and learners may still be unaware of how these tools can improve their educational experience, especially for domain-specific applications. This review addresses these potential opportunities for bioinformatics education, which we define broadly as efforts to train individuals in concepts and skills necessary to process, analyze, and interpret biological data. In some cases, these efforts aim to help learners with computational or quantitative backgrounds to apply their skills in biology-related contexts. In other cases, the training aim is to help biology-oriented learners expand their skills and leverage the ongoing deluge of biological data. In cases where learners have prior training in biology, computing, and quantitative disciplines, the training aim might be to unite these concepts and skills in interesting and useful ways.
While many bioinformatics educators and practitioners are likely familiar with AI methodologies, fewer may fully appreciate their pedagogical potential. One goal of this paper is to raise awareness about how AI might enhance the learning and training process. Another goal is to identify available evidence to support the use of AI tools in education and to discuss potential advantages and disadvantages of their use. As often as possible, we connect our observations to bioinformatics specifically; however, we draw heavily from the broader education literature.
In May 2024, we and others gathered in New York City for the 6th Global Bioinformatics Education Summit. In a session titled, “How to use AI in training,” participants discussed their experiences using—or potential to use—diverse types of AI tools for bioinformatics education. Here we highlight four topics related to using AI tools: (i) generating or adapting instructional content, (ii) automating assessment and grading, (iii) providing personalized help to learners, and (iv) teaching learners to use AI. We use instructor to refer to any individual who provides bioinformatics training. Such training might occur in formal settings—as in higher-education institutions or secondary schools—or less-formal settings, such as short-form workshops, tutorials, or online courses. Instructors include faculty, staff scientists in academia or industry, postdoctoral researchers, graduate students, freelancers, and other domain experts. We use learner to refer to any individual who participates in bioinformatics training. Learners include students matriculated in degree programs, professionals seeking continuing education, researchers expanding their skill sets, and individuals pursuing self-directed learning through online resources or community initiatives. We have structured this article according to the above topics and draw on our summit discussions for insights. We believe these topics are important for bioinformatics educators in diverse settings around the world, as reflected by summit participation from individuals working at academic institutions, industry organizations, nonprofits, or governments on five continents.
2 Generating or adapting instructional content
Bioinformatics education requires diverse instructional content, including course outlines, instructional materials (e.g. lecture slides, handouts, and hands-on exercises), coding tutorials, and rubrics. Instructors typically develop these materials themselves, ensuring that the materials align with learning objectives, are grounded in real-world biological problems, and promote active engagement. Ideally, the content follows a logical progression, starting with foundational concepts and advancing to more complex topics and skills. These steps require a significant amount of time and mental effort, which can be amplified when the instructor lacks depth of expertise in the content areas of instruction. Therefore, these processes are ripe for the application of AI tools. In particular, GenAI—including large language models (LLMs)—offers opportunities to accelerate and improve these processes.
As an alternative to creating materials de novo, instructors can write GenAI prompts, requesting help in formulating ideas and pedagogical strategies. Instructors can use GenAI tools as “sounding boards,” providing feedback on approaches they are considering, exploring new perspectives, and helping to verify the correctness of their content. In some cases, GenAI tools can generate new content by adapting existing materials. For example, a hands-on exercise that addresses a particular learning objective might already exist (in a public repository or shared by a colleague); however, the materials might be from a non-biology discipline or from a different biology subdiscipline. Instructors may find GenAI tools helpful in refining those materials and/or contextualizing them. As another example, an instructor could refine existing code snippets by generating line-by-line explanations or characterizing common coding mistakes (MacNeil et al. 2022).
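As a concrete sketch of the code-refinement use case, the snippet below builds a prompt that asks an LLM to explain a piece of code line by line. The prompt wording, audience description, and example code are our own illustrative assumptions; the resulting string could be sent to any chat-style LLM.

```python
# Sketch: build a prompt asking an LLM to explain code line by line.
# The prompt wording and audience description are illustrative assumptions.

def line_by_line_prompt(snippet: str, audience: str = "biology students new to Python") -> str:
    """Return a prompt requesting a line-by-line explanation of `snippet`."""
    numbered = "\n".join(
        f"{i}: {line}" for i, line in enumerate(snippet.splitlines(), start=1)
    )
    return (
        f"You are helping {audience}.\n"
        "Explain the following code one line at a time, noting common mistakes.\n"
        "Respond as 'line number: explanation'.\n\n"
        + numbered
    )

# Example: a short Biopython-style snippet (for illustration only).
snippet = (
    "from Bio import SeqIO\n"
    "for rec in SeqIO.parse('reads.fasta', 'fasta'):\n"
    "    print(len(rec.seq))"
)
prompt = line_by_line_prompt(snippet)
# `prompt` could then be sent to any chat-style LLM API.
```

Instructors might adapt the prompt to emphasize common misconceptions they have observed in their own courses.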
In addition to generating static materials, GenAI tools may be useful for interactive learning. For example, an instructor can create a chatbot that provides instructional support to learners. In doing so, the instructor does not directly generate the content but facilitates its creation by training the chatbot on course materials. Chatbots can either be built from scratch or adapted from publicly available frameworks for educators. They can be tailored to a particular content area through custom training or retrieval-augmented generation. In OpenAI’s ecosystem, Custom GPTs offer a way to adapt GPT models with custom instructions and reference content. Several Custom GPTs specific to bioinformatics and genomics have been created (Table 1). These chatbots promise to empower learners with diverse skills to understand bioinformatics-related code, analysis workflows, and best practices.
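The retrieval step at the heart of a retrieval-augmented chatbot can be sketched as follows. A production system would use learned text embeddings and a vector store; here, bag-of-words cosine similarity and a few invented course passages stand in so the idea is self-contained.

```python
# Minimal retrieval step of a retrieval-augmented chatbot: given a learner's
# question, find the most relevant course passage to include in the LLM
# prompt. Bag-of-words similarity is a stand-in for learned embeddings, and
# the course passages are invented examples.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Simple bag-of-words counts; punctuation stripped, case folded.
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, passages: list) -> str:
    """Return the course passage most similar to the learner's question."""
    q = vectorize(question)
    return max(passages, key=lambda p: cosine(q, vectorize(p)))

# Invented course passages for illustration.
materials = [
    "FASTQ files store sequencing reads with per-base quality scores.",
    "BLAST aligns a query sequence against a database of known sequences.",
    "A p-value quantifies evidence against the null hypothesis.",
]
context = retrieve("What does a FASTQ quality score mean?", materials)
# `context` would be prepended to the question in the chatbot's LLM prompt.
```

The design choice to retrieve from instructor-curated materials, rather than relying on the model’s general training, is what grounds the chatbot’s answers in the course content.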
Table 1.
This table highlights specific tools relevant to bioinformatics education, providing examples of their potential uses for instructors and learners and important considerations when using them.
| Category | Example tools | Uses | Cautions |
|---|---|---|---|
| General-purpose LLM chatbots | | | |
| Fine-tuned LLM chatbots | | | |
| Specialized LLMs | | | |
| Custom LLM-based chatbots | | | |
Each row builds upon the previous one, meaning that the uses listed in earlier rows often apply to tools in later rows, which may introduce additional or specialized functionalities. This list is not exhaustive: multiple versions of these tools may exist and many other tools are available. We do not endorse any specific tool or guarantee improvements in effectiveness, efficiency, or regulatory compliance. Instead, this table serves as a starting point for exploring current capabilities and identifying tools that may support bioinformatics education.
When generating educational content and deciding how to engage with learners, instructors can improve inclusivity and accessibility by delivering content through multiple media, such as written materials, hands-on activities, and discussion (CAST Inc 2025). Summit participants noted that if GenAI tools can reduce some barriers to learning for diverse learners, these tools would address a great need within the bioinformatics community. For example, GenAI can mitigate language biases by generating translated materials for non-native speakers; facilitate inclusive learning environments for remote learners by mimicking group learning (an LLM acts as a group member); cater to different learning styles; and improve learning access by generating text-to-speech or speech-to-text solutions for visually or hearing impaired learners, as reviewed elsewhere (Gibson 2024). However, this benefit is not a guarantee; AI is also known to reflect human biases, amplify disparities, and reinforce accessibility barriers (Quinn 2021, Nadis 2023, Venkit et al. 2023, Glickman and Sharot 2025). Inclusive use of AI in bioinformatics education requires thoughtful design and the involvement of individuals from diverse backgrounds in the design and testing of AI-enabled pedagogical tools.
3 Providing personalized help to learners
Bioinformatics learners often face obstacles when mastering content and skills, both in traditional teaching contexts and self-directed learning. For example, learners might struggle with understanding complex algorithms, interpreting large datasets, or applying programming skills to biological problems. They may also find it challenging to keep up with rapidly evolving tools and resources. Such obstacles may discourage learners and reduce their self-efficacy. However, in a classroom environment, instructors may have insufficient time to assess each learner’s needs and provide personalized help. In self-directed learning experiences, such as reading books or consuming online material, it is rarely feasible for content creators to provide personalized help. GenAI tools offer opportunities to address these challenges by enabling personalized and adaptive experiences, tailored to individual learners’ needs and paces. By taking advantage of these resources, learners may move closer to the ideal of “self-regulated learning,” which is characterized by greater autonomy, motivation, and intentionality (Garg and Rajendran 2024, Ng et al. 2024).
Learners can use LLM chatbots to enhance their understanding of topics they are currently studying or to extend their knowledge to more advanced topics (Yilmaz and Karaoglan Yilmaz 2023). For example, ChatGPT has been shown to effectively respond to queries on bioinformatics-related topics like data mining, genetics, and figure interpretation (Hu et al. 2024). The colloquial language used by LLM chatbots allows learners to ask questions in a form that is similar to how they might ask an instructor, and if prompted to do so, the chatbot can respond in ways that are suitable for each learner’s specific needs and skill levels. Chatbots can help learners identify knowledge gaps, create personalized lesson plans, or generate material that helps them prepare for assessments; the latter might include practice questions or hands-on exercises (Ellis and Slade 2023, Mollick and Mollick 2023, Yilmaz and Karaoglan Yilmaz 2023). However, current chatbots may fall short for questions and other interactions that require deep reasoning.
In addition to facilitating conceptual learning, LLM chatbots can help with hands-on skills. For example, chatbots can recommend tools for completing analysis tasks and identify advantages and disadvantages of those tools; in some cases, they may be able to guide learners through the process of using them. For tasks that require programming, chatbots can provide personalized help and increase learners’ self-efficacy (Yilmaz and Karaoglan Yilmaz 2023). For example, ChatGPT has been demonstrated to respond effectively to prompts for generating Python code for basic bioinformatics programming tasks (Piccolo et al. 2023). When learners encounter errors, such tools can help with explaining error messages and debugging (Chen et al. 2023, Sun et al. 2024). These capabilities may also support relatively advanced tasks. For example, plugins such as GitHub Copilot may serve as “pair programmers” to accelerate the processes of understanding, writing, and optimizing code.
To complement learners’ efforts to obtain personalized help from AI tools, instructors can facilitate this process. For example, instructors can guide learners to available AI tools and encourage their effective use, cautioning them to avoid overreliance on the tools. Instructors can recommend high-quality, publicly available chatbots or create custom chatbots that are tailored to a course’s content or learning objectives (Ng et al. 2024). To facilitate self-regulation, instructors can encourage or require learners to complete formative exercises without using AI tools—or attempt to generate exercises that AI tools cannot solve—thus enabling the learners to evaluate their own learning. As discussed below, instructors can also teach students best practices for using these tools.
Learners, instructors (and/or content creators), and AI tools each have something to contribute to the learning process. Leveraging the strengths of each can create a more effective and engaging learning environment (Li et al. 2024).
4 Automating assessment and grading
Assessments play critical roles in educational settings (Dixson and Worrell 2016). Instructors use formative assessments to provide feedback during the learning process. Such feedback can help learners identify areas for improvement and help instructors adjust their teaching strategies in response to learners’ progress. Summative assessments help instructors evaluate the extent to which learning outcomes have been met; they typically result in a rating or score that quantifies the learner’s performance. In the context of bioinformatics education, assessments might include programming exercises, projects, written reports, and multiple-choice questions. When assessing these contributions, instructors might evaluate written or oral summaries against a rubric, verify that computer code functions properly and follows formatting requirements, or provide feedback on ideas and modes of presentation. However, providing useful feedback can be time consuming and tedious, which may prevent learners from receiving such feedback or receiving it in a timely manner. Creating and delivering summative assessments can be similarly challenging. AI tools have potential to reduce these barriers (González-Calatayud et al. 2021, Mollick and Mollick 2023).
With the overarching goal of ensuring that learners receive useful and timely feedback on assessments, we discuss techniques for facilitating this process (Zawacki-Richter et al. 2019). In some settings, instructors can use AI tools to customize assessments based on learners’ strengths and weaknesses (e.g. adaptive testing, on-demand chatbots). For programming exercises and other tasks that involve computations, many instructors have access to software tools that automate the process of delivering instructions, feedback, and grades to learners (Messer et al. 2023, Piccolo 2025). Machine-learning research and tools within this domain have often focused on assessing code correctness (functionality or methodology). Less research emphasis has been placed on using machine-learning techniques to evaluate whether learners’ code is maintainable, readable, and well documented (Messer et al. 2023). We believe the latter skills are important to evaluate because they require deep levels of understanding. Aside from assessing hands-on computing skills, AI techniques can help with assessing higher-order cognitive skills. For example, instructors can deliver open-response assessment items in large courses and use LLM tools to summarize learners’ responses and automate assessment and feedback (Gao et al. 2024). Such approaches may enhance the consistency and objectivity of assessments. AI-powered tools may reduce instructors’ reliance on teaching assistants to manually assess learners’ submissions or the use of multiple-choice assessments, which can limit opportunities for nuanced feedback and deeper evaluation of critical thinking skills. Alternatively, AI tools may aid in increasing the quality of multiple-choice distractors, potentially in a data-driven manner based on real learners’ misconceptions.
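As a minimal sketch of the automated-feedback tools described above, the following autograder runs a learner's submission and scores a required function against instructor-defined test cases. The exercise (GC content), the function name, and the feedback wording are illustrative assumptions; a real deployment would also sandbox untrusted code.

```python
# Sketch of an autograder for a programming exercise: run the learner's
# submission, then score a required function against test cases. The exercise
# (GC content) and feedback wording are illustrative assumptions.

def grade_submission(code: str, cases) -> dict:
    """Execute a learner's code, then score its gc_content function against test cases."""
    namespace = {}
    try:
        exec(code, namespace)  # a real deployment would sandbox untrusted code
    except Exception as e:
        return {"score": 0.0, "feedback": [f"Code failed to run: {e}"]}
    fn = namespace.get("gc_content")
    if not callable(fn):
        return {"score": 0.0, "feedback": ["Define a function named gc_content(seq)."]}
    passed, feedback = 0, []
    for seq, expected in cases:
        try:
            if abs(fn(seq) - expected) < 1e-6:
                passed += 1
            else:
                feedback.append(f"gc_content({seq!r}) should return {expected}.")
        except Exception as e:
            feedback.append(f"gc_content({seq!r}) raised an error: {e}")
    return {"score": passed / len(cases), "feedback": feedback}

# A correct example submission, as a learner might upload it.
submission = """
def gc_content(seq):
    seq = seq.upper()
    return (seq.count('G') + seq.count('C')) / len(seq)
"""
result = grade_submission(submission, [("ATGC", 0.5), ("GGCC", 1.0)])
```

The per-case feedback messages could themselves be expanded by an LLM into fuller explanations before being returned to the learner.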
By using AI tools to save time on tasks that have previously demanded considerable time and energy, instructors may redirect these resources to other tasks (which might also be aided by AI), such as generating practice assessments; meeting one-on-one with learners to assess competencies in more nuanced manners; reviewing assessment instruments and rubrics to improve their alignment with learning outcomes; identifying learners most at risk of performing poorly and helping those learners in the early stages of learning; evaluating their own teaching through summarizing course evaluations; enhancing content (e.g. interlacing dynamic multimedia content with real-time assessment); and identifying and addressing cases of academic dishonesty.
AI tools may also be useful for learners to self-assess. For example, learners can use AI to positively influence their own motivation, increase their engagement with material (e.g. personalized learning plans and content), reduce anxiety, and obtain feedback on written text (Owan et al. 2023).
5 Teaching learners to use artificial intelligence
As these examples illustrate, AI tools have the potential to transform teaching and learning in bioinformatics. However, as studies conducted in other computing-education settings have found, learners’ baseline use of AI tools does not always promote effective learning (Ghimire and Edwards 2024, Sheese et al. 2024). For bioinformatics learners to maximally—and equitably—benefit from AI tools in the classroom, they need to learn how to use these tools effectively. This begins with making AI literacy a pillar of bioinformatics curricula. Literacy is an emancipatory pedagogical concept: it empowers learners to access and share knowledge in an evolving society. Long and Magerko (2020) identified 17 competencies in their consolidated definition of AI literacy across heterogeneous stakeholders. Using this as a foundation, we identify priorities for AI literacy in bioinformatics training: (i) understanding how AI tools work, (ii) recognizing what tasks are well-suited for these tools, (iii) developing good practices to use these tools, and (iv) reflecting on the global impact and ethical costs—intellectual, environmental, labor—of AI tools (Ahmad et al. 2023, Crawford 2024).
These learning objectives can be incorporated into bioinformatics training through exercises such as “prompt engineering” (Denny et al. 2024). Instructors can provide explanations and examples of how to develop clear prompts and how to iterate and refine prompts given AI tools’ responses and the required bioinformatics task (Sun et al. 2024). Training learners to formulate effective prompts for LLMs not only improves their ability to use these AI tools but also requires understanding how the tools work. These exercises also provide opportunities for iteration, error resolution, and self-reflection about their problem-solving strategy and limitations of AI, refining learners’ understanding of when these tools are best deployed. Framing such exercises in bioinformatics contexts can create opportunities to practice core bioinformatics analyses (Shue et al. 2023), while also building other important skills for the field, such as navigating uncertainty in the deployment of black-box models (Bearman and Ajjawi 2023) and critical thinking to design and debug code (Styve et al. 2024).
Learners should also be introduced to how AI tools’ inherent limitations may introduce inaccuracy or inequity. One hands-on pedagogical strategy is to ask learners to identify potential limitations of the AI tools and design experiments to evaluate the consequences. For example, learners may evaluate an AI tool’s ability to write accurate code in widely used versus less commonly used programming languages; to propose robust analysis steps for multi-ancestry versus European-only datasets; or to write code that can be run in low-resource computing environments. Learners can also investigate the costs of their use of AI tools for bioinformatics analyses. This might include estimating energy and water consumption that results from employing GenAI models (Lannelongue et al. 2021, Budennyy et al. 2022), identifying potentially inappropriate uses of others’ intellectual property in GenAI responses, or learning about the labor, land, and other social costs of developing these tools (Solaiman et al. 2024).
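As one illustration of such a cost-estimation exercise, learners might combine per-query figures with their own usage counts. The constants below are placeholders to be replaced with estimates sourced from the literature; they are assumptions, not measurements.

```python
# Placeholder per-query figures; learners should replace these with estimates
# sourced from the literature. They are assumptions, not measurements.
WH_PER_QUERY = 3.0         # assumed energy per LLM query, in watt-hours
ML_WATER_PER_QUERY = 30.0  # assumed cooling water per query, in millilitres

def course_footprint(learners: int, queries_per_learner: int) -> dict:
    """Estimate total queries, energy (kWh), and water (litres) for a course."""
    queries = learners * queries_per_learner
    return {
        "queries": queries,
        "kwh": queries * WH_PER_QUERY / 1000,
        "litres_water": queries * ML_WATER_PER_QUERY / 1000,
    }

print(course_footprint(learners=40, queries_per_learner=50))
```

Comparing such estimates against everyday reference points (e.g. household electricity use) can help learners contextualize the results.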
Educating bioinformatics learners on how to use AI can empower beginners and advanced bioinformaticians alike to be informed and responsible users of these tools, while they build an AI skill set essential for the modern workforce and society at large. If implemented thoughtfully, this can even democratize bioinformatics education by giving learners the tools for sustained, catalytic learning beyond what they may have access to at their own institutions (Williams et al. 2023).
6 Examples
To make these concepts more concrete and illustrate how GenAI could be used in real-world bioinformatics-education contexts, we have provided examples (Additional Data File S1, available as supplementary data at Bioinformatics Advances online) for the above categories. In each example, we indicate whether an instructor, a learner, or both might perform the specified task. Where applicable, we provide the GenAI prompt and the model’s response. Finally, we provide evaluative comments about the responses. This list is not intended to be comprehensive but rather to highlight some potential use cases. Overall, we conclude that the generated responses could serve as effective starting points for additional refinement.
7 Discussion
In this perspective, we provide an overview of the current landscape of AI in bioinformatics education, alongside practical recommendations for further innovation. We acknowledge, however, that these topics do not encompass all potential uses of AI in bioinformatics education and that tools and techniques in these areas are rapidly evolving. Even as newer technologies emerge, opening new possibilities, we believe that certain core considerations will remain critical and relevant.
Notably, the uses of AI in bioinformatics education that we review here are motivated by pedagogical best practices. Rather than replacing the efforts of instructors or learners, these tools can augment their efforts. For example, using AI chatbots for personalized learning does not replace instructors. Human instruction is still important to the learning process, but chatbots provide a means for active learning to continue outside of person-to-person instruction. Additionally, an instructor’s bioinformatics domain expertise remains essential to ensure the quality of AI-generated content; instructors should not rely solely on AI tools for tasks such as content creation and ideation (Mollick and Mollick 2023, Yadav 2023).
The emergence of AI tools for education should motivate instructors to learn more about how these tools work and best practices for using them. While it may seem straightforward to use AI tools in educational contexts, our experiences suggest that obtaining high-quality results is not trivial in reality. For example, seemingly small differences in prompts can result in dramatically different outputs. Thus, effective prompt generation is an important skill that requires practice for both instructors and learners. Developing this skill can prevent instructors and learners from feeling frustrated or discouraged when using AI. Effective strategies include providing clear instructions; using short, sequential prompts; providing contextual information; illustrating through examples; and specifying desired output formats (Wang et al. 2024).
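These strategies can be made concrete for learners as a reusable prompt template. The field names and wording below are our own illustration, not a standard.

```python
def build_prompt(task, context="", examples=(), output_format=""):
    """Assemble a prompt from the strategies above: clear instructions,
    contextual information, illustrative examples, and a desired output format."""
    parts = [f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    for ex in examples:
        parts.append(f"Example: {ex}")
    if output_format:
        parts.append(f"Respond as: {output_format}")
    return "\n".join(parts)

prompt = build_prompt(
    task="Explain what a BAM file is.",
    context="Audience: first-year biology students with no programming background.",
    examples=["A FASTA file stores DNA or protein sequences as plain text."],
    output_format="one plain-language paragraph, no jargon",
)
```

Keeping such templates in a shared prompt library lets instructors and learners iterate on wording without rebuilding prompts from scratch.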
While AI tools offer valuable support for both instructors and students, they also come with a time investment. If the cost of using these tools outweighs the benefits, their adoption may be impractical for educators. Additionally, we emphasize the need to train instructors in AI-related skills. Although certain tasks may feel intuitive to some, others will require more structured guidance. Beyond crafting effective prompts, instructors may need support in selecting the most appropriate tools for specific tasks; evaluating AI-generated content; helping students interact productively with AI and critically evaluate AI-generated content; and using these technologies in ethically responsible ways. Such training could occur through a range of formats, including conference tutorials, independent workshops, peer interactions, institutional initiatives, and the creation of prompt libraries (Prompt Library 2025).
Building a strong, foundational understanding of AI tools can increase educators’ awareness of the tools’ limitations. AI tools are subject to stochasticity—the same prompt can produce different results—which can lead to variability in the information that learners receive. Instructors should account for this variability in the design of activities and assignments involving AI. GenAI tools mimic a conversational tone without expressing uncertainty, which can lull learners into a false confidence about the tools’ accuracy. Without proper training to navigate the “black-box” nature of these tools, learners may use them blindly, without thinking critically about the outputs, which could undermine the instructor’s learning objectives.
The use of AI tools in education raises ethical concerns, including environmental impact. Many tools are trained on large corpora of materials from the Internet but fail to cite those sources in their outputs. Thus, when instructors use GenAI to create course materials, they may unknowingly infringe on others’ intellectual property. Other ethical issues more directly impact learners’ experiences. For example, LLMs are mainly trained on English corpora and thus produce higher-quality outputs in response to English prompts (Zhang et al. 2023), indicating that these tools may disadvantage learners from different language backgrounds. AI tools are inaccessible to many learners who lack Internet access or face socioeconomic barriers (Bulathwela et al. 2024). Moreover, commercial AI tools have privacy policies that may leave learners’ personal and educational data vulnerable (Tlili et al. 2023). If incorporating AI tools alienates learners in these ways, it ultimately undermines their learning.
Nonetheless, we propose that bioinformatics, an interdisciplinary endeavor, is ideally situated to integrate AI into its training in meaningful ways. This will require thoughtful implementation of free or low-cost tools, trained on diverse corpora, to serve diverse learners, using AI sandboxes to protect learners’ data. It will require iterative evaluation to ensure that the tools are having the intended impact on core AI literacy and bioinformatics learning outcomes, especially before they are integrated into high-stakes assessments (Hornberger et al. 2023).
Researchers and practitioners in the field may find it worthwhile to undertake collective efforts, such as developing high-quality prompts for common bioinformatics educational tasks. As bioinformatics instructors test AI tools in their courses, they should not only reflect on their own experiences but also ask learners for feedback on how the tools impact their learning. For example, do learners find AI-generated assessment feedback valuable? Does questioning a chatbot mimic the experience of questioning an instructor? It is essential for the results of such investigations to be disseminated to the community. There is currently a dearth of literature about applications of AI tools in bioinformatics education; for this review, we drew from literature in both bioinformatics and related fields such as computer science. As AI tools continue to evolve, bioinformatics educators can play a critical role in testing and validating emerging tools and developing best practices and shared resources designed for this specific domain.
Acknowledgments
We thank Dr. Julia Lima Fleck for reviewing worked example #3.
Contributor Information
Stephen R Piccolo, Department of Biology, Brigham Young University, Provo, UT, 84602, United States.
Aparna Nathan, Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, United States.
Michelle D Brazas, Adaptive Oncology, Ontario Institute for Cancer Research, Toronto, ON M5G 0A3, Canada.
Manoj Kandpal, Center for Clinical and Translational Science, The Rockefeller University, New York, NY, 10065, United States.
Aida T Miró-Herrans, Academic Research Consulting and Services, George A. Smathers Libraries, University of Florida, Gainesville, FL, 32610, United States.
Adam J Kleinschmit, Department of Natural and Applied Sciences, University of Dubuque, Dubuque, IA, 52001, United States.
Susan McClatchy, The Jackson Laboratory, Bar Harbor, ME, 04609, United States.
Pertunia Mutheiwana, Computational Biology Division, Department of Integrative Biomedical Sciences, Faculty of Health Sciences, University of Cape Town, Observatory, Cape Town, 7925, South Africa.
Dusanka Nikolic, Wellcome Connecting Science, Wellcome Genome Campus, Hinxton CB10 1SA, United Kingdom.
Luciana I Gallo, Instituto de Fisiología, Biología Molecular y Neurociencias, CONICET, Departamento de Fisiología y Biología Molecular y Celular, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Buenos Aires, C1428EGA, Argentina.
Rolanda Sunaye Julius, Computational Biology Division, Department of Integrative Biomedical Sciences, Faculty of Health Sciences, University of Cape Town, Observatory, Cape Town, 7925, South Africa.
Marta Lloret-Llinares, EMBL’s European Bioinformatics Institute, Hinxton CB10 1SD, United Kingdom.
Nicola Mulder, Computational Biology Division, Department of Integrative Biomedical Sciences, Faculty of Health Sciences, University of Cape Town, Observatory, Cape Town, 7925, South Africa.
Danielle Presgraves, The Jackson Laboratory, Bar Harbor, ME, 04609, United States.
Sonal Shewaramani, National Microbiology Laboratory Branch, Public Health Agency of Canada, Winnipeg, MB R3E 3R2, Canada.
Jorge Xool-Tamayo, Departamento de Innovación Biomédica, Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), Ensenada, Baja California, 22860, México.
Frédéric J J Chain, Department of Biological Sciences, University of Massachusetts Lowell, Lowell, MA, 01854, United States.
Silvia Arantza Sanchez Guerrero, Bachillerato Tecnológico de Educación y Promoción Deportiva (BTED), Plantel Tetla de la Solidaridad, Prolongación República de Brasil s/n 2da secc, Teotlalpan, C.P. 90430, Tetla de la Solidaridad, Tlaxcala, México.
Author contributions
Stephen Piccolo (Conceptualization [equal], Investigation [equal], Project administration [equal], Supervision [equal], Writing—original draft [equal], Writing—review & editing [equal]), Aparna Nathan (Investigation [equal], Project administration [equal], Supervision [equal], Writing—original draft [equal], Writing—review & editing [equal]), Michelle D. Brazas (Investigation [equal], Supervision [equal], Writing—original draft [equal], Writing—review & editing [equal]), Manoj Kandpal (Investigation [equal], Supervision [equal], Writing—original draft [equal], Writing—review & editing [equal]), Aida T. Miró-Herrans (Investigation [equal], Supervision [equal], Writing—original draft [equal], Writing—review & editing [equal]), Adam J. Kleinschmit (Investigation [equal], Writing—original draft [equal], Writing—review & editing [equal]), Susan McClatchy (Investigation [equal], Writing—original draft [equal], Writing—review & editing [equal]), Pertunia Mutheiwana (Investigation [equal], Writing—original draft [equal], Writing—review & editing [equal]), Dusanka Nikolic (Investigation [equal], Writing—original draft [equal], Writing—review & editing [equal]), Luciana I. Gallo (Investigation [equal], Writing—review & editing [equal]), Rolanda Sunaye Julius (Investigation [equal], Writing—review & editing [equal]), Marta Lloret-Llinares (Investigation [equal], Writing—review & editing [equal]), Nicola Mulder (Investigation [equal], Writing—review & editing [equal]), Danielle Presgraves (Investigation [equal], Writing—original draft [equal]), Sonal Shewaramani (Investigation [equal], Writing—review & editing [equal]), Jorge Xool-Tamayo (Investigation [equal], Writing—review & editing [equal]), Frédéric J.J. Chain (Writing—review & editing [equal]), and Silvia Arantza Sanchez Guerrero (Investigation [equal])
Supplementary data
Supplementary data are available at Bioinformatics Advances online.
Conflict of interest
None declared.
Funding
This work was supported by the USA National Institutes of Health [grant number R03HL168983 to S.R.P.]; Ontario Institute for Cancer Research with funds from the Government of Ontario (Canada) [to M.D.B.]; the USA National Center for Advancing Translational Sciences, National Institutes of Health, Clinical and Translational Science Award Program [grant number UL1 TR001866 to M.K.]; the Argentinean Agencia Nacional de Promoción Científica y Tecnológica [PICT 2018-04706, PICT 2019-02041, PICT-GRF-TII-00436 to L.I.G.]; the Bill and Melinda Gates Foundation [grant number INV-079147 to N.M.]; and the USA National Science Foundation [grant number 2144259 to F.J.J.C.].
References
- Ahmad SF, Han H, Alam MM et al. Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanit Soc Sci Commun 2023;10:311–4. 10.1057/s41599-023-01787-8
- Bearman M, Ajjawi R. Learning to work with the black box: pedagogy for a world with artificial intelligence. Brit J Educational Tech 2023;54:1160–73. 10.1111/bjet.13337
- Budennyy SA, Lazarev VD, Zakharenko NN et al. eco2AI: carbon emissions tracking of machine learning models as the first step towards sustainable AI. Dokl Math 2022;106:S118–28. 10.1134/S1064562422060230
- Bulathwela S, Pérez-Ortiz M, Holloway C et al. Artificial intelligence alone will not democratise education: on educational inequality, techno-solutionism and inclusive tools. Sustainability 2024;16:781. 10.3390/su16020781
- CAST Inc. About the Guidelines 3.0 Update. 2025. https://udlguidelines.cast.org (7 July 2025, date last accessed).
- Chen E, Huang R, Chen H-S et al. GPTutor: a ChatGPT-powered programming tool for code explanation. arXiv:2305.01863 [cs], 2023. http://arxiv.org/abs/2305.01863
- Crawford K. Generative AI’s environmental costs are soaring—and mostly secret. Nature 2024;626:693. 10.1038/d41586-024-00478-x
- Denny P, Leinonen J, Prather J et al. Prompt problems: a new programming exercise for the generative AI era. In: Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1, Portland, OR, pp.296–302. ACM, 2024. 10.1145/3626252.3630909
- Dixson DD, Worrell FC. Formative and summative assessment in the classroom. Theory Pract 2016;55:153–9. 10.1080/00405841.2016.1148989
- Ellis AR, Slade E. A new era of learning: considerations for ChatGPT as a tool to enhance statistics and data science education. J Stat Data Sci Educ 2023;31:128–33.
- Gao R, Merzdorf HE, Anwar S et al. Automatic assessment of text-based responses in post-secondary education: a systematic review. Comput Educ: Artif Intell 2024;6:100206.
- Garg A, Rajendran R. Analyzing the role of generative AI in fostering self-directed learning through structured prompt engineering. In: Generative Intelligence and Intelligent Tutoring Systems: 20th International Conference, ITS 2024, Thessaloniki, Greece, June 10–13, 2024, Proceedings, Part I, pp.232–43. Berlin, Heidelberg: Springer-Verlag, 2024. 10.1007/978-3-031-63028-6_18
- Ghimire A, Edwards J. Coding with AI: how are tools like ChatGPT being used by students in foundational programming courses. In: Olney AM, Chounta I-A, Liu Z, Santos OC, Bittencourt II (eds), Artificial Intelligence in Education. Cham, Switzerland: Springer Nature, 2024, 259–67. 10.1007/978-3-031-64299-9_20
- Gibson R. The Impact of AI in Advancing Accessibility for Learners with Disabilities. 2024. https://er.educause.edu/articles/2024/9/the-impact-of-ai-in-advancing-accessibility-for-learners-with-disabilities (7 July 2025, date last accessed).
- Glickman M, Sharot T. How human–AI feedback loops alter human perceptual, emotional and social judgements. Nat Hum Behav 2025;9:345–59. 10.1038/s41562-024-02077-2
- González-Calatayud V, Prendes-Espinosa P, Roig-Vila R. Artificial intelligence for student assessment: a systematic review. Appl Sci 2021;11:5467. 10.3390/app11125467
- Hornberger M, Bewersdorff A, Nerdel C. What do university students know about artificial intelligence? Development and validation of an AI literacy test. Comput Educ: Artif Intell 2023;5:100165.
- Hu G, Liu L, Xu D. On the responsible use of chatbots in bioinformatics. Genom Proteom Bioinform 2024;22:qzae002. 10.1093/gpbjnl/qzae002
- Lannelongue L, Grealey J, Inouye M. Green algorithms: quantifying the carbon footprint of computation. Adv Sci (Weinh) 2021;8:2100707.
- Li H, Xu T, Zhang C et al. Bringing generative AI to adaptive learning in education. arXiv:2402.14601 [cs], 2024. http://arxiv.org/abs/2402.14601
- Long D, Magerko B. What is AI literacy? Competencies and design considerations. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI ’20, pp.1–16. New York, NY: Association for Computing Machinery, 2020. 10.1145/3313831.3376727
- MacNeil S, Tran A, Mogil D et al. Generating diverse code explanations using the GPT-3 large language model. In: Proceedings of the 2022 ACM Conference on International Computing Education Research—Volume 2, pp.37–9. Lugano and Virtual Event, Switzerland: ACM, 2022. 10.1145/3501709.3544280
- Messer M, Brown NCC, Kölling M et al. Machine learning-based automated grading and feedback tools for programming: a meta-analysis. In: Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1, ITiCSE 2023, pp.491–7. New York, NY: Association for Computing Machinery, 2023. 10.1145/3587102.3588822
- Mollick ER, Mollick L. Using AI to implement effective teaching strategies in classrooms: five strategies, including prompts. March 2023. https://papers.ssrn.com/abstract=4391243
- Nadis S. How machine-learning models can amplify inequities in medical diagnosis and treatment. August 2023. https://news.mit.edu/2023/how-machine-learning-models-can-amplify-inequities-medical-diagnosis-treatment-0817
- Ng DTK, Wei Tan C, Leung JKL. Empowering student self-regulated learning and science education through ChatGPT: a pioneering pilot study. Brit J Educational Tech 2024;55:1328–53. 10.1111/bjet.13454
- Owan VJ, Abang KB, Idika DO et al. Exploring the potential of artificial intelligence tools in educational measurement and assessment. EURASIA J Math Sci Tech Ed 2023;19:em2307. 10.29333/ejmste/13428
- Piccolo SR, Denny P, Luxton-Reilly A et al. Evaluating a large language model’s ability to solve programming exercises from an introductory bioinformatics course. PLoS Comput Biol 2023;19:e1011511. 10.1371/journal.pcbi.1011511
- Piccolo SR, Tuft E, Tatlow PJ et al. CodeBuddy: a programming assignment management system for short-form exercises. J Open Res Softw 2025;13:1.
- Prompt Library. https://www.aiforeducation.io/prompt-library (20 June 2025, date last accessed).
- Quinn. Artificial intelligence and the rights of persons with disabilities—Report of the Special Rapporteur on the rights of persons with disabilities. Technical report, 2021. https://www.ohchr.org/en/documents/thematic-reports/ahrc4952-artificial-intelligence-and-rights-persons-disabilities-report (20 June 2025, date last accessed).
- Sheese B, Liffiton M, Savelka J et al. Patterns of student help-seeking when using a large language model-powered programming assistant. In: Proceedings of the 26th Australasian Computing Education Conference, ACE ’24, pp.49–57. New York, NY: Association for Computing Machinery, 2024. 10.1145/3636243.3636249
- Shue E, Liu L, Li B et al. Empowering beginners in bioinformatics with ChatGPT. Quant Biol 2023;11:105–8. 10.15302/J-QB-023-0327
- Solaiman I, Talat Z, Agnew W et al. Evaluating the social impact of generative AI systems in systems and society. arXiv:2306.05949 [cs], June 2024. http://arxiv.org/abs/2306.05949
- Styve A, Virkki OT, Naeem U. Developing critical thinking practices interwoven with generative AI usage in an introductory programming course. In: 2024 IEEE Global Engineering Education Conference (EDUCON), pp.1–8, May 2024. 10.1109/EDUCON60312.2024.10578746
- Sun D, Boudouaia A, Zhu C et al. Would ChatGPT-facilitated programming mode impact college students’ programming behaviors, performances, and perceptions? An empirical study. Int J Educ Technol High Educ 2024;21:14. 10.1186/s41239-024-00446-5
- Tlili A, Shehata B, Agyemang Adarkwah M et al. What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn Environ 2023;10:15. 10.1186/s40561-023-00237-x
- Venkit PN, Srinath M, Wilson S. Automated ableism: an exploration of explicit disability biases in sentiment and toxicity analysis models. arXiv:2307.09209 [cs], 2023. http://arxiv.org/abs/2307.09209
- Wang J, Cheng Z, Yao Q et al. Bioinformatics and biomedical informatics with ChatGPT: year one review. Quant Biol 2024;12:345–59. 10.1002/qub2.67
- Williams JJ, Tractenberg RE, Batut B et al. An international consensus on effective, inclusive, and career-spanning short-format training in the life sciences and beyond. PLoS One 2023;18:e0293879. 10.1371/journal.pone.0293879
- Yadav G. Scaling evidence-based instructional design expertise through large language models. arXiv:2306.01006 [cs], 2023. http://arxiv.org/abs/2306.01006
- Yilmaz R, Karaoglan Yilmaz FG. The effect of generative artificial intelligence (AI)-based tool use on students’ computational thinking skills, programming self-efficacy and motivation. Comput Educ Artif Intell 2023;4:100147. 10.1016/j.caeai.2023.100147
- Zawacki-Richter O, Marín VI, Bond M et al. Systematic review of research on artificial intelligence applications in higher education—where are the educators? Int J Educ Technol High Educ 2019;16:39. 10.1186/s41239-019-0171-0
- Zhang X, Li S, Hauer B et al. Don’t trust ChatGPT when your question is not in English: a study of multilingual abilities and types of LLMs. arXiv:2305.16339 [cs], 2023. http://arxiv.org/abs/2305.16339