Abstract
Introduction
As artificial intelligence (AI) increasingly integrates into health workplaces, evidence suggests AI can exacerbate gender inequity. Health professional programmes have a role to play in ensuring graduates grasp the challenges of working in an AI‐mediated world.
Approach
Drawing from feminist scholars and empirical evidence, this conceptual paper synthesises current and future ways in which AI compounds gender inequities and, in response, proposes foci for an integrated approach to teaching about AI and equity.
Analysis
We propose three concerns. Firstly, multiple literature reviews suggest that the gender divide is embedded within AI technologies from both process (AI development) and product (AI output) perspectives. Next, there is emerging evidence that AI is reinforcing already entrenched health workforce inequities, where certain types of roles are seen as being the domain of certain genders. Finally, AI may distance health professionals from the embodied, agentic patient by diverting attention to a gendered digital twin.
Implications
Responding to these concerns is not simply a matter of teaching about bias but needs to promote an understanding of AI as a sociotechnical phenomenon. Healthcare curricula could usefully provide clinically relevant educational experiences that illustrate how AI intersects with inequitable gendered knowledge practices. Students can be directed to: (1) explore doubts when working with AI‐generated data or decisions; (2) refocus on caring through prioritising embodied connections; and (3) consider how to negotiate gendered workplaces in a time of AI.
Conclusion
The intersection of gender equity and AI provides an accessible, illustrative case about how changing knowledge practices have the potential to embed inequity and how health professional education programmes might respond.
Short abstract
In a world where AI appears to be exacerbating gender inequities in healthcare, the authors outline why and how health professional programmes might meaningfully respond.
1. INTRODUCTION
Healthcare cannot avoid the rise of artificial intelligence (AI). AI has been present in our everyday social interactions for years 1 and algorithmic‐intensive practices are already integrated into many health workplaces within decision‐making, administrative and image‐making applications. 2 , 3 Evidence drawn from in situ implementations, while currently limited and somewhat mixed, suggests that AI can improve clinical care, particularly around decision‐making. 4 However, as scientific knowledge is already a gendered affair, 5 there is a clear and present danger that AI, as a knowledge‐generating technology, may exacerbate current gender and other inequities. This is a matter for health professional education: our graduates need capabilities to work in a world where AI is commonplace. 6 , 7 , 8 They must grasp AI's potential to simultaneously support clinical practice and negatively impact their patients, the healthcare system and themselves.
Educational researchers have traditionally focussed on the technology underpinning AI, with ethics as an ‘add on’ to these core concerns. 9 But even a cursory glance at news headlines reveals serious problems: chatbots producing racial slurs 10 or racial profiling in the justice system. 11 These appear not readily resolved through technological fixes, 12 as AI generally reflects the society it is drawn from. Indeed, from a sociotechnical perspective, where social and technical systems co‐construct one another, AI technology cannot be disentangled from its social contexts. 13
This entwining of AI within society suggests learning to work with AI cannot simply be a matter of ‘using’ an AI tool in an ethical way. Rather, health professionals must grasp the relationship between AI and the whole of society, and this means health professional programmes must provide insights into the broader social implications of these technologies. In this conceptual paper, which focuses on the intersection between gender equity and AI in healthcare, we wish to go beyond calls for increasing what our students need to know about AI as a technology. 14 , 15 We propose that graduates should have a deep understanding of how knowledge is constituted in an AI‐mediated world.
AI can be considered a set of technologies that are ‘designed, developed and deployed’ to work with knowledge. 16 From this perspective, we claim that AI must form an integral part of health professional programmes—not just as a technology to be used but acknowledged as a form of knowledge production. Indeed, many of the ethical concerns about AI in the health professions literature relate to how AI knowledge is produced or how AI affects knowledge production in general. For example, the Association for Medical Education in Europe (AMEE) Guide to Ethical Use of AI 17 describes potential problems such as excessive data collection with inappropriate consent; risks to data privacy and ownership; lack of transparency and accountability of algorithms; and bias, including gender. Other authors warn against the dehumanisation of healthcare and healthcare education. 18 Educational scholars identify how the corporate platforms enmeshed in schools and universities drive curricula 19 ; they also articulate the ‘racializing and colonial logic of/in AI’ 20 and report the impacts of racial profiling in online proctoring. 21
Despite these deep worries, we are not against the use of AI in healthcare. Hoeyer 22 describes, in his book on data paradoxes, how ‘data intensification has created both less work and more work; how data both empower and disempower staff and patients; how data both uncover patient concerns and cover up patient concerns; and how data intensity both tightens organizational control and generates new forms of organizational disintegration’. In line with this paradox, we acknowledge that AI is neither wholly one thing nor another, but that we must teach about it within our educational systems, including its negative effects.
This paper conceptualises how health professional education can address the challenges of gender inequity and AI in healthcare. As with any conceptual paper, 23 , 24 we do not seek to comprehensively report evidence, but rather, we draw from theory and a critical review of the literature to propose foci for health professional education programmes. Gender equity provides a useful, tangible focus for any discussion about AI and equity, as gender divides are present globally and are embedded in most cultures. Thus, gender acts as a pervasive and ubiquitous illustration of the challenges facing ‘invisible’ identity positions. We acknowledge that other intersecting identity positions 25 such as race or class or sexual orientation offer equally meaningful points of departure.
Conversations about AI often occur in differentiated journals with independent readerships—broadly those who consider AI as a kind of technological tool and those who wish to make some kind of social critique. We seek to address both audiences by combining feminist scholarship and current evidence to conceptualise how AI is currently producing inequity for those who are less ‘visible’. Our position is framed by posthuman feminist scholars including Haraway, Braidotti and Wajcman. 26 , 27 , 28 At the same time, we draw on systematised reviews where available and rich illustrative qualitative data drawn from rigorous studies to support our claim that AI is enshrining inequitable knowledge practices in healthcare. And we do more than critique: we propose a model for health professional education to develop graduates who can work effectively in an AI‐mediated workplace.
This manuscript is structured in three parts. First, we outline our working definitions and then turn to three predominant concerns that together describe the challenges of gendered knowledge practices associated with AI, with particular reference to healthcare practice. Next, we turn to education, exploring how curricula should adapt for AI‐mediated clinical practice through themes of evidence, emotions and labour. We close with an illustrative example.
2. DIFFICULT DEFINITIONS
Any paper about gender and AI must navigate the tricky territory of definitions. Here, we do not take gender as an essentialising characteristic, but as constructed socially. 29 A useful starting frame for discussing gender and technology is Haraway's 27 cyborg myth. The cyborg is often taken to mean that women should embrace the machine, but this is not the full story. Haraway provides the cyborg as a ‘monstrous’ aspiration for us—neither woman nor machine—a being that operates beyond gender. Thus, the cyborg offers a fluid, regenerative and partial means to progress beyond a story of domination and subjugation. But it also acknowledges that feminine ways of knowing, working and being are less valued and less visible than masculine ones. Indeed, Haraway 27 describes women's relationship to their own health as: ‘struggles over meanings and means of health to environments pervaded by high‐technology producers and processes’.
Definitions of AI are highly contested, 13 and we do not provide a singular frame. Rather, we note that AI can be defined in three different ways 30 : by its underlying technologies (how AI is constructed), its capabilities (what AI does) or by its relationships (how people and AI work together). We provide our focus from each of these viewpoints.
From a technological perspective—how AI is constructed—we are particularly interested in forms of AI that rely on finding patterns in data. Such AI technologies rely on predictive algorithms that can be prompted to interrogate a corpus of data and produce outputs based on statistical manipulation. The corpus, commonly called big data, consists of pre‐existing texts and datasets, including sound, images, numbers and words. The machine learning algorithms can be trained to learn and produce patterns (e.g. images of cats) within the corpus that have meaning for people. Such a definition includes generative AI tools such as large language models (e.g. ChatGPT) and image generators (e.g. Midjourney), but also any form of algorithmic interrogation of data.
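For readers less familiar with how such pattern finding works in practice, the following minimal sketch (in Python, using a deliberately tiny, entirely invented corpus and labels chosen for illustration) shows the basic mechanic: a model is fitted to statistical regularities in pre‐existing data, and its outputs can only ever reflect whatever that corpus happens to contain.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# A toy, hypothetical corpus: short texts paired with labels the model should learn to predict.
corpus = [
    "fluffy whiskers purring on the sofa",
    "whiskers and a twitching tail by the window",
    "wagging tail fetching the ball in the park",
    "barking at the postman and fetching sticks",
]
labels = ["cat", "cat", "dog", "dog"]

# Turn each text into word counts, then fit a simple statistical model to those patterns.
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(corpus)
model = LogisticRegression()
model.fit(X, labels)

# The output for new text is a prediction derived purely from patterns in the corpus:
# whatever the corpus over- or under-represents shapes what the model can "know".
print(model.predict(vectoriser.transform(["purring by the window"])))  # likely ['cat']
```

Clinical AI systems are vastly larger and more sophisticated, but the dependence on the corpus is the same.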
From a capability perspective—what the AI can do—we focus on AI's role as a ‘tool of knowledge’. 16 In other words, AI technologies produce outputs that entail meaning and interpretation. Examples of capabilities include diagnosing clinical conditions in medical images, writing clinical notes and predicting what a person will want to see in their social media feed.
From a relational perspective, we align with Johnson and Verdicchio's 13 view of AI systems as: ‘sociotechnical ensembles … combinations of artefacts, human behaviour, social arrangement and meaning’. By this, we hold that AI and humans are always working together. 30 , 31 That is to say, an AI's prediction about a patient's clinical condition cannot happen without a patient, doctor and healthcare setting—just like a prediction about a social media feed depends on a person actually engaging with social media.
3. THE CASE FOR AI PERPETUATING GENDERED KNOWLEDGE PRACTICES
From these foundational views of gender and AI, we now turn to recent literature on AI and gender equity, with a particular emphasis on how the feminine is overlooked or erased. We describe three concerns: (1) how the gender divide is embedded within AI technologies; (2) how AI may be exacerbating health workforce inequities; and (3) how AI (re)constructs gendered bodies.
3.1. The gender divide embedded within AI technologies
The gender divide within AI technologies can be understood in two ways: in the AI outputs that are produced and the AI processes that underpin this production. Systematic reviews into gender bias within AI present a sobering picture of how the feminine and the female are represented in AI outputs. 32 , 33 Hall and Ellis 32 note that algorithms tend to represent women within traditional, often lower status, roles. For example, AI‐generated images of ‘medical students’ are disproportionately likely to depict white men. 34 Moreover, the difficulties of intersectional representation are of grave concern—for example, criminal justice AIs impact Black women more than others. 32 Similar types of disturbing bias concerns are reported in a systematic review of AI decision‐making software. 33 Worryingly, such reviews can, by design, only report bias that can be measured—for example, whether certain occupations are associated with male or female pronouns.
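To make concrete what ‘measurable’ bias means here, the short sketch below (Python; the sentences are invented stand‐ins for AI‐generated text, not data from the cited reviews) counts co‐occurrences of occupations and gendered pronouns, one crude quantitative indicator of the kind such reviews aggregate.

```python
from collections import Counter
from itertools import product

# Hypothetical sentences standing in for AI-generated output to be audited.
generated_sentences = [
    "The surgeon finished his shift and reviewed his notes.",
    "The nurse adjusted her schedule before her rounds.",
    "The engineer explained his design to the team.",
    "The receptionist confirmed her bookings for the day.",
]

occupations = ["surgeon", "nurse", "engineer", "receptionist"]
pronouns = {"masculine": {"he", "his", "him"}, "feminine": {"she", "her", "hers"}}

counts = Counter()
for sentence in generated_sentences:
    words = {w.strip(".,").lower() for w in sentence.split()}
    for occupation, (gender, forms) in product(occupations, pronouns.items()):
        if occupation in words and words & forms:
            counts[(occupation, gender)] += 1

# A skewed table of counts is one crude, measurable indicator of gendered association.
for (occupation, gender), n in sorted(counts.items()):
    print(f"{occupation:12s} {gender:9s} {n}")
```

Audits of real systems use far larger corpora and more sophisticated measures, but the logic is similar; what cannot be counted in this way tends to escape such audits, which is the concern taken up next.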
Measurable gendered output problems likely represent the tip of the iceberg. More subtle forms of textual and visual constructions may exacerbate already existing biases, in particular the gendered nature of what we call knowledge practices. By using this term, we acknowledge the inseparability of what counts as knowledge from its social, material and discursive contexts of production. 35 This means we must also grapple with how knowledge production is dynamic, embodied and a matter of power relations. 26 Moreover, any discussion of knowledge necessarily delineates not knowledge as well. This is illustrated by Perrotta and colleagues' 36 study of writing with generative AI. They describe a participant, Gabrielle, who reports on her experiences of co‐writing a story with generative AI. When Gabrielle frames herself as the protagonist and the AI responds, she experiences ‘mild disappointment that the system made no mention of her heritage in the auto‐generated text, but she is not surprised. She wonders, in a rhetorical tone, “whether or not someone like myself has been included in developing this”’. She deliberately shifts her prompts to male names or ‘upper class’ names. As the AI produces text, she ‘feels she is being forced by the system to renounce her right to a voice’. 37 In short, Gabrielle's experiences have become not knowledge, at least within the terms of AI's outputs.
Gabrielle's instinct that ‘someone like her’ was not involved in producing the AI is insightful. UNESCO's 2019 report 37 on gender divides in digital industries notes an increasing disparity in digital skills between genders, and the AI sector presents particularly grave challenges. The report describes how, in Silicon Valley, the ‘applicant pool for technical jobs in artificial intelligence (AI) and data science is often less than 1 per cent female’. It notes that, at that time, only 10% of Google's machine intelligence employees and only 12% of top machine learning academics were women. In other words, it is likely that 88%–99% of those who work in software research or development around AI knowledge production processes are men.
Feminist scholars describe this as a matter of power relations. 28 , 38 They note how, as working in technology (at least in the Global North) became desirable and powerful, it shifted from its historical place as ‘women's work’ to become a male‐dominated profession. Wajcman and Young 28 state: ‘It really matters who is in the room, and even more so who is absent, when new technology like AI is developed. … increasingly real‐world societal questions are primarily posed as computational problems with technical solutions. Yet even the ways in which these tools select and pose problems foregrounds particular modes of seeing or judging over others. It has proven to be anything but gender neutral’.
This focused review of the evidence suggests AI knowledge practices diminish the female and the feminine at scale. It is often suggested in health professional education literature that AI reflects the biases that already exist. 17 , 39 , 40 But the danger is more than this: it goes beyond a question of ‘bad data’—or a male‐dominated industry. Evidence suggests that AIs may not just reflect biased data; they can exacerbate inequities. 41 In a review of ethical machine learning, Chen and colleagues 42 show how this vicious cycle arises in AI‐based healthcare applications. From the start of the development pipeline, inequities occur. Candidate problems address the concerns of the dominant population (e.g. endometriosis is a less likely candidate for an intervention), while training data and clinical markers of ‘success outcomes’ are drawn from the same select population. These types of oversights are magnified during algorithm development, with the final outcome of making existing health disparities worse.
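As a stylised illustration of one step in that pipeline, the sketch below (Python; the ‘patients’ are simulated and every number is an arbitrary assumption rather than anything reported by Chen and colleagues 42) shows how a model trained on data drawn overwhelmingly from a dominant group will typically perform worse for a group it rarely saw.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Simulate one patient group: a single feature whose link to the outcome differs by group."""
    x = rng.normal(loc=0.0, scale=1.0, size=(n, 1))
    y = (x[:, 0] + shift + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y

# Hypothetical skewed pipeline: 950 patients from the dominant group, 50 from a minoritised group
# whose feature-outcome relationship is systematically different (shift).
x_dom, y_dom = make_group(950, shift=0.0)
x_min, y_min = make_group(50, shift=1.5)

model = LogisticRegression()
model.fit(np.vstack([x_dom, x_min]), np.concatenate([y_dom, y_min]))

# Evaluate on fresh patients from each group: accuracy is typically worse for the
# group that was barely present in the training data.
x_dom_test, y_dom_test = make_group(1000, shift=0.0)
x_min_test, y_min_test = make_group(1000, shift=1.5)
print("dominant group accuracy:   ", model.score(x_dom_test, y_dom_test))
print("minoritised group accuracy:", model.score(x_min_test, y_min_test))
```

The point is not the particular numbers but the mechanism: when candidate problems, training data and outcome markers all come from the same select population, the resulting tool works least well for those already underserved.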
3.2. AI and health workforce: A woman's work is …
The healthcare workforce is full of inequity. Women are more likely to be in lower paid and lower status positions that involve direct care rather than being in higher paid and higher status decision‐making roles. For example, in Australia in 2022, 88% of nurses and midwives were women, 43 while 36% of medical specialists were women. A year earlier, only 14% of Australasian neurosurgeons were women. 44 This suggests some tasks—such as caring, relational patient work—are associated with the feminine (soft, kind and nurturing), while other tasks—such as taking responsibility for clinical decision‐making or high‐risk operative skills—are associated with the masculine (hard, strong and decisive). The previous section made the case that these gender stereotypes appear to be captured and furthered within AI systems.
However, the impact upon labour is not just on how women are represented, but what women's work becomes as AI is introduced into workplaces. For example, Carboni and colleagues' ethnography of digital pathology noted how a woman (the secretary) had to scan physical slides into the computerised system to prepare them for AI, a difficult and thankless task. 45 The authors describe: ‘a secretary was reassigned from the secretariat to a dedicated “digital pathology” room. This “scanning” secretary waited approximately 10 minutes during our first observations before stating how much she disliked her job’. 45 Indeed, in Kusta and colleagues' 46 ethnographic study of digitalised pathology in Denmark, pathologists focussed on the future efficiencies that the AI would bring, despite describing experiences of AI that were slower and less accurate. That is to say, AI appears to increase some forms of menial labour, not decrease it. This increase of labour as a consequence of technology—what is sometimes called ‘fauxtomation’ 47 —has also been noted in teacher education. 21 But who does this extra labour? We recall the introduction of time‐saving technological devices into domestic homes, which was supposed to lead to efficiencies, but the additional ‘found time’ was consumed with the gendered performances of identity with technology. 28 Are women ending up with the invisible (and therefore unpaid) labour of making the AI work?
3.3. AI and (re)constructing gendered bodies: The rise of the digital twin
Bodies are central to healthcare, but embodiment is often seen in opposition to algorithmic‐intensive practices. 18 Yet, the two are entwined and driven by complex power relations. A concern with AI that is not often raised is how digital (re)constructions of women's bodies may be occurring at too far a distance from the women themselves.
A disturbing illustration of this phenomenon is Ebeling's 48 account of how her experiences of miscarriage resulted in an algorithmically prompted ‘ghostly’ child, one that was constructed from her health data (which in the United States can be bought and sold). This ‘child’ kept returning to her in the form of marketing requests, a digitally reconstructed part of her body that continued to exist for half a decade after her loss. She notes: ‘the embodied and societal category of a particular gender is decoupled from the individual person who identifies with a particular gender, and redeployed into a free‐floating marketing category of “male” or “female” (marketing logic is often binary—gender, race and class categories that defy facile categorisation are rendered invisible) that has particular habits, desires, and of course, predictable consumption behaviours that can be targeted with specific marketing messages’. 48 As Ebeling experienced, the data from our bodies become translated, to appropriate Latour's term, 49 into predictive algorithms. These predictive algorithms can dictate what possibilities are afforded for the categories of ‘woman’ or ‘mother’ or ‘girl’.
In line with this, we notice in our own work the potential for harms in clinical practice presented by these types of digital twins. The term originates in manufacturing and refers to the ‘virtual representation of a physical product containing information about said product’. 50 Algorithmic mediation increases the likelihood of these digital twins causing harm. For example, in a survey of how AI was used in Australian clinical practice, a women's health practitioner suggested that use of AI was removing women from the decision‐making moment. They noted that, because staff were interacting with the AI, decisions were being made about care ‘which is not contextualised to what is occurring in the room (i.e. the woman has moved and hence caused loss of contact with the machine) or the woman's wishes …’. 2 The woman—her body and her desires—is no longer present.
4. IMPLICATIONS FOR HEALTH PROFESSIONAL EDUCATION
We have presented the case as to how AI may already be exacerbating the existing acute gender equity divide. So how should health professions education respond? We do not suggest disengaging from AI. Rather, our foundational stance is that we must do more than critique the dominant position. As many feminist scholars note, promoting masculine/feminine or human/machine as binary opposites may reinforce inequities rather than transform them. 26 , 27 Where possible, therefore, we must consider AI and its broader implications for how students come to be health practitioners. Indeed, Braidotti 26 suggests that who we are (ontology) has consequences for how we know (epistemology) and the moral choices we make (ethics). So, our teaching needs to reflect this intertwining of being, knowing and doing. AI must therefore be part of integrated curricular approaches. We should include AI in teaching about patient care, to ensure that AI is not disaggregated from the rest of the curriculum. This could be within case‐based tasks/assessments or in simulations or in the clinical environment. We propose three specific points of emphasis for this integration: (1) learning to doubt; (2) learning to care; and (3) learning to interrogate the gendered nature of clinical work. We then close by exploring how students might interrogate the intersection of gender and AI while on clinical placements.
4.1. AI and evidence: Learning to doubt
As this paper already shows, interrogating AI and gender equity allows health professional educators to confront how inequity already exists within knowledge and healthcare systems. Thus, when teaching about AI, educators should also teach about how evidence is constructed and used. As Chen and colleagues' 42 review suggests, the clinical conditions we try to solve, the solutions we prioritise and the data we gather are the product of an inequitable society. Therefore, teaching about AI cannot be separated from teaching about what counts as evidence and understanding the social determinants of health.
This is not a matter for quick fixes. An easy response to AI perpetuating inequities is to suggest that we must teach about ‘bias’. But removing bias is not possible. Jaton 51 notes that a ‘bias free ML [Machine Learning] algorithm is an oxymoron’. In other words, if AI relies on patterns in the data, these must by definition include bias. Moreover, no system (human or machine) can account for every minoritised identity position. Crenshaw's 25 work on intersectionality raises the real possibility that mitigating one subjugated identity position (say women) can exacerbate the harms for those holding more than one such position (say women of colour). This does not necessarily mean we should not use algorithms or seek to reduce bias with respect to identity positions. However, it emphasises the need to attune our students to both the potential discriminatory nature of human processes and to the real possibility that AI will have inequitable impacts.
The ever‐present bias in AI‐mediated outputs underpins the need to teach evidence‐based practice as framed by ‘judgment not rules’. 52 Li and colleagues 53 similarly call for ‘improving human judgment’ in health professional curricula in response to AI. We extend their proposals by suggesting we move beyond ‘data‐driven decision making’, 53 to embed this teaching about judgement and doubt in the messy world of clinical practice. Healthcare is full of indeterminate, tacit and conflicting information, and data only form one part of the judgement equation.
In a time of AI, humans must increasingly be arbiters of quality in practice 54 and make evaluative judgements about the quality of AI processes and outputs. 55 Lebovitz's 56 ethnography of radiologists working with AI offers a useful insight: ‘[the] radiologists commonly exercised doubt practices to help them cope with the subjectivity and complexity involved in diagnostic decision making. Exercising doubt involved seeking out evidence that would contradict or support tentative diagnosis theories. Doubt practices involved radiologists asking themselves questions, seeking colleagues' opinions, and acquiring additional imaging or patient information. These practices helped radiologists prevent premature conclusions, thoughtfully weigh conflicting information, and thoroughly consider every available detail’. [Italics ours]. We can teach doubt practices about evidence and data—but we suggest that routine doubt can extend to consider the broader social implications of any AI‐based recommendations. Who is advantaged? This includes corporations as well as identity positions—our students should be as alert to the influence of ‘Big Tech’ as they are to ‘Big Pharma’. Otherwise, our students will never truly exercise their judgements about AI in a holistic manner, and we risk treating the perpetuation of inequalities as a tokenistic rather than an integral concern.
4.2. AI and feelings: Learning to care
In 1995, Lie 57 proposed: ‘Technology is a symbol of masculinity’. Around the same time, a study of nursing students underlined how caring, particularly bodily care and emotions, was seen as women's work. 58 As described in previous sections, these associations persist and are perpetuated in AI. The aspiration here (our mythic cyborg) is to teach the emotional, caring aspects of clinical care in an integrated way with learning about AI. As Hodges 59 suggests, we must dispel the myth that technology necessarily threatens humans' ability to demonstrate compassion. Indeed, technologies should be ‘care‐full’. 60
Healthcare (and education) is saturated with emotion, and AI will be a part of this whether we want it to be or not. As learners move into clinical practice, it is important that they understand clinical practice is an emotional business. And herein lies an opportunity; in health professions education, the literature often presents emotions as noise to be ignored or regulated. 61 But by prompting students when learning about AI to tune into their feelings in relation to others (be it machines or patients or teachers or peers), we emphasise that caring is part of healthcare just as AI will be.
Concepts of care can be explored early in the curriculum, and a case‐based approach can offer opportunities to see caring and AI decisions as occurring together. For example, a software company magazine suggests that Taiwanese pharmacists working with AI have increased the number of patients seen in a day from 15 to 30. 62 A professional practice teaching case might focus on a series of patient requests, and learners can investigate issues pertaining to particular prescriptions or patient safety, focussing on gender concerns. At the same time, they can also debate: What do patients need from me and how should I talk with them? Which patients are being advantaged and how can I help those who are being disadvantaged? How should I balance efficiency and patient care? If I see 30 patients instead of 15, how will I feel about my patients and my work and my sense of being a pharmacist?
Hodges 59 proposes that for compassionate care in a time of technology, a doctor should be ‘present’ for a patient. Women's ownership of their own bodies in clinical practice has traditionally been problematic, and as our earlier examples illustrate, AI may be making things worse. In teaching students about working with AI, health professional curricula should help students learn to avoid conflating a digital twin with an embodied person. We suggest the first step towards this presence is to seek connection with the embodied person, rather than their gendered digital twin. A person's decisions, thoughts and feelings about their own body must be respected.
4.3. AI and labour: Learning to negotiate a gendered workplace
In this last section, we return to the challenge of gendered roles in healthcare. As Lombarts and Verghese 63 suggest ‘medicine is not gender neutral—she is male’. In health professional curricula, we cannot expect to address the problems of AI without understanding the real challenges that are rife in health professional workplaces.
Learning about AI can be a means to open up our students' sense of what is possible. Health professional curricula can draw inspiration from social work programmes, where teaching about the social world is as foundational as the biomedicine of the body. This means highlighting sociotechnical understandings of AI, opening up subjectivity and messiness and the paradoxes associated with AI and data. 22 As part of these debates, we can ask students to consider the matter of invisible labour and who performs it. They can interrogate the assumption that AI will take over ‘menial’ tasks 64 while at the very same time creating alternate ‘menial’ tasks, prompting a discussion about how AI makes some forms of labour lower status. 65
5. HOW MIGHT WE TEACH BOTH GENDER EQUITY AND AI TOGETHER?
In the previous sections, we have suggested notions of evidence, caring and labour as focal points for learning about gender equity and AI and the impact upon healthcare practice. We now offer an in‐depth example about how we might draw on these to teach the interrelationship between gender equity and AI. We propose ‘back to base’ experiences as a useful pedagogical technique. Such post‐placement sessions, where learners come together to debrief their time in the clinical environment, can have powerful impacts. 66 , 67
In our illustrative example, a health professional programme offers a small preparatory session prior to placement where students are asked to engage in intentional noticing 68 during their planned clinical experience. Firstly, educators ask students to notice the gendered roles in clinical workplaces by observing the ways in which patients and families interact with different staff. Secondly, they are asked to recognise the algorithmically mediated knowledge in clinical workplaces (such as AI‐supported decision‐making, AI‐mediated clinical administration or phone‐based apps) and to observe effects on practice—and care. Educators may need to alert students to the possibility that their supervisors themselves may be unaware of the role that technology has in their practices. 69 , 70 Immediately after the rotation, students submit contemporaneous notes as the first part of an assessable task. Making such tasks assessable is important because these issues are germane to being a contemporary healthcare practitioner.
The next part of the task would be for students to make sense of these notes post‐placement. One possibility is that after clinical rotations, groups of students—optimally from multiple professions—can come together to discuss their observations. Indeed, an interprofessional setting would be ideal for a conversation about gendered roles in the workplace. Such a session might commence with respected clinical practitioners discussing their experiences of gender and other forms of discrimination in their own environments and jointly speculating how these might be amplified or reduced by technology. After listening to expert stories ‘from the clinic’, the students themselves can form small debriefing groups.
The debriefing conversations can be designed to draw directly from students' clinical experiences. Students can be asked to compare notes and jointly consider: Did you find yourself working with numeric representations of a person (woman) without their presence (i.e. a digital twin)? How did the technology construct this data? What did you think the gaps were in evidence‐based decision‐making? How much did you trust it? Did you find points of connection with patients mediated by technology—what enhanced and/or detracted from care? How did that technology make you feel? Finally, students can elaborate on the gender role inequities they observed in clinical workplaces and then consider whether technology amplified or mitigated them. To finalise their assessment task, students submit annotations to their original notes based on the debrief session.
Our example illustrates how we can bring some of the very big ideas about gender and AI into the day‐to‐day of health professional curricula. The frames of doubt, care and interrogating workplace norms focus the student on the social construction of knowledge through AI in clinical practice and highlight any resultant inequities. We underline the need for any such education to be assessable and for the discussion/feedback to be carefully facilitated. This is the signal that equity is not an optional extra, but rather something that the profession deeply values. 71
6. REFLEXIVE CONSIDERATIONS
We offer some reflexive considerations to aid readers in interpreting the ideas we present. In terms of our position, we consider learning as more than knowledge reproduction. For us, education is about becoming knowledgeable professionals—who make judgements with (and sometimes for) their patients—as ethically, responsibly and compassionately as is possible for humans to do. Our views of AI are grounded in theory—poststructural, feminist theory for the most part—which aligns with our worldview that both technology and knowledge are social—and co‐construct each other. 31 From this perspective, the two of us engage with AI all the time—in search engines, social media and so on—as well as Margaret's frequent but not quite daily use of generative AIs. However, we do hold (to varying degrees) that some things can be measured and presented as representations of reality, and this is why we turn to the systematic review evidence in particular. Finally, we note that as women who live in Australia and Canada, we do not experience the challenging impact of technology facing those in the Global South, 72 and this must be acknowledged. Writing this paper fully opened our eyes to the scale of the challenge. We became sad and angry but also inspired by Haraway's work of more than 40 years ago to dynamically, fluidly and partially work beyond our boundaries. Lastly, we should declare: did we use generative AI in any form in this task? While we did not use it to write or summarise, we did employ it to confirm and extend our list of essential feminist scholars who study technology in society.
7. CONCLUSIONS
A sociotechnical understanding of AI can help address the gender inequities that are necessarily inherent within the knowledge produced by such technologies. While we have conducted a focused analysis of gender discrimination, we also illustrate the challenges faced by other minoritised groups. Feminist thinking allows us to move beyond binaries and consider how knowing, being and choosing cannot be disentangled. Following this thinking, we present AI, gender and knowledge construction as interwoven. This presents an opportunity to bring together traditionally masculine and feminine interests within health professional education programmes, as a means of creating more equitable healthcare.
AUTHOR CONTRIBUTIONS
Margaret Bearman led the conceptualisation, drafting and revision of the article. Rola Ajjawi contributed to conceptualisation and writing, and critically revised for intellectual content.
CONFLICT OF INTEREST STATEMENT
None.
ETHICS STATEMENT
N/A.
ACKNOWLEDGEMENTS
None. Open access publishing facilitated by Deakin University, as part of the Wiley–Deakin University agreement via the Council of Australian University Librarians.
Bearman M, Ajjawi R. Artificial intelligence and gender equity: An integrated approach for health professional education. Med Educ. 2025;59(10):1049‐1057. doi: 10.1111/medu.15657
Funding information None.
DATA AVAILABILITY STATEMENT
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
REFERENCES
1. Elliott A. The culture of AI: everyday life and the digital revolution. Routledge; 2019.
2. Bearman M, Howard SK, Caouette S. Artificial intelligence in day-to-day professional practice: implications for higher education. In: Popenici S, Rudolph J, Ismail F, Tan S, eds. Handbook of AI in higher education. Edward Elgar; 2025. In press.
3. Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: a perspective for healthcare organizations. Artif Intell Med. 2024;151:102861. doi:10.1016/j.artmed.2024.102861
4. Yin J, Ngiam KY, Teo HH. Role of artificial intelligence applications in real-life clinical practice: systematic review. J Med Internet Res. 2021;23(4):e25759. doi:10.2196/25759
5. Walby S. Is the knowledge society gendered? Gend Work Organ. 2011;18(1):1-29. doi:10.1111/j.1468-0432.2010.00532.x
6. McCoy LG, Nagaraj S, Morgado F, Harish V, Das S, Celi LA. What do medical students actually need to know about artificial intelligence? NPJ Digit Med. 2020;3(1):86. doi:10.1038/s41746-020-0294-7
7. Labrague LJ, Aguilar-Rosales R, Yboa BC, Sabio JB. Factors influencing student nurses' readiness to adopt artificial intelligence (AI) in their studies and their perceived barriers to accessing AI technology: a cross-sectional study. Nurse Educ Today. 2023;130:105945. doi:10.1016/j.nedt.2023.105945
8. Rowe M. Artificial intelligence in clinical practice: implications for physiotherapy education. OpenPhysio J. 2019;1-6. doi:10.14426/art/528
9. Zawacki-Richter O, Marín VI, Bond M, Gouverneur F. Systematic review of research on artificial intelligence applications in higher education—where are the educators? Int J Educ Technol High Educ. 2019;16(1):39. doi:10.1186/s41239-019-0171-0
10. Hunt E. Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter. The Guardian. 2016.
11. Buranyi S. Rise of the racist robots—how AI is learning all our worst impulses. The Guardian. 2017.
12. Shamin S. Why Google's AI tool was slammed for showing images of people of colour. Al Jazeera. 2024.
13. Johnson DG, Verdicchio M. Reframing AI discourse. Minds Mach. 2017;27(4):575-590. doi:10.1007/s11023-017-9417-6
14. Krive J, Isola M, Chang L, Patel T, Anderson M, Sreedhar R. Grounded in reality: artificial intelligence in medical education. JAMIA Open. 2023;6(2):ooad037. doi:10.1093/jamiaopen/ooad037
15. Taylor Gonzalez DJ, Djulbegovic MB, Bair H. We need to add prompt engineering education to optimize generative artificial intelligence in medicine. Acad Med. 2024;99(10):1050-1051. doi:10.1097/ACM.0000000000005803
16. Alvarado R. AI as an epistemic technology. Sci Eng Ethics. 2023;29(5):32. doi:10.1007/s11948-023-00451-3
17. Masters K. Ethical use of artificial intelligence in health professions education: AMEE guide no. 158. Med Teach. 2023;45(6):574-584. doi:10.1080/0142159X.2023.2186203
18. van der Niet AG, Bleakley A. Where medical education meets artificial intelligence: 'does technology care?'. Med Educ. 2021;55(1):30-36. doi:10.1111/medu.14131
19. Williamson B, Macgilchrist F, Potter J. Re-examining AI, automation and datafication in education. Learn Media Technol. 2023;48(1):1-5. doi:10.1080/17439884.2023.2167830
20. Zembylas M. A decolonial approach to AI in higher education teaching and learning: strategies for undoing the ethics of digital neocolonialism. Learn Media Technol. 2023;48(1):25-37. doi:10.1080/17439884.2021.2010094
21. Pangrazio L. Data harms: the evidence against education data. Postdigital Sci Educ. 2024;6(4):1049-1054. doi:10.1007/s42438-024-00468-2
22. Hoeyer K. Data paradoxes: the politics of intensified data sourcing in contemporary healthcare. MIT Press; 2023.
23. Jaakkola E. Designing conceptual articles: four approaches. AMS Rev. 2020;10(1):18-26. doi:10.1007/s13162-020-00161-0
24. Gilson LL, Goldberg CB. Editors' comment: so, what is a conceptual paper? Group Org Manag. 2015;40(2):127-130. doi:10.1177/1059601115576425
25. Crenshaw K. Demarginalizing the intersection of race and sex: a black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. Univ Chicago Leg Forum. 1989;1989:139-167.
26. Braidotti R. A theoretical framework for the critical posthumanities. Theory Culture Soc. 2019;36(6):31-61. doi:10.1177/0263276418771486
27. Haraway D. A manifesto for cyborgs: science, technology and socialist feminism in the 1980s. Socialist Rev. 1985;80:65-108.
28. Wajcman J, Young E. Feminism confronts AI. In: Browne J, Cave S, Drage E, McInerney K, eds. Feminist AI: critical perspectives on algorithms, data, and intelligent machines. Oxford University Press; 2023. doi:10.1093/oso/9780192889898.003.0004
29. Butler J. Gender trouble. Routledge; 1990.
30. Bearman M, Ajjawi R. When I say … artificial intelligence. Med Educ. 2024;58(11):1273-1275. doi:10.1111/medu.15408
31. Bearman M, Ajjawi R. Learning to work with the black box: pedagogy for a world with artificial intelligence. Br J Educ Technol. 2023;54(5):1160-1173. doi:10.1111/bjet.13337
32. Hall P, Ellis D. A systematic review of socio-technical gender bias in AI algorithms. Online Inf Rev. 2023;47(7):1264-1279. doi:10.1108/OIR-08-2021-0452
33. Nadeem A, Marjanovic O, Abedin B. Gender bias in AI-based decision-making systems: a systematic literature review. Australas J Inf Syst. 2022;26. doi:10.3127/ajis.v26i0.3835
34. Currie G, Currie J, Anderson S, Hewis J. Gender bias in generative artificial intelligence text-to-image depiction of medical students. Health Educ J. 2024;83(7):732-746. doi:10.1177/00178969241274621
35. Foucault M. The birth of the clinic: an archaeology of medical perception. Vintage Books; 1973:235-238.
36. Perrotta C, Selwyn N, Ewin C. Artificial intelligence and the affective labour of understanding: the intimate moderation of a language model. New Media Soc. 2024;26(3):1585-1609. doi:10.1177/14614448221075296
37. UNESCO. I'd blush if I could. Closing gender divides in digital skills through education. UNESCO; 2019.
38. Young E, Wajcman J, Sprejer L. Mind the gender gap: inequalities in the emergent professions of artificial intelligence (AI) and data science. New Technol Work Employment. 2023;38(3):391-414. doi:10.1111/ntwe.12278
39. Triola MM, Reinstein I, Marin M, et al. Artificial intelligence screening of medical school applications: development and validation of a machine-learning algorithm. Acad Med. 2023;98(9):1036-1043. doi:10.1097/ACM.0000000000005202
40. Tolsgaard MG, Pusic MV, Sebok-Syer SS, et al. The fundamentals of artificial intelligence in medical education research: AMEE guide no. 156. Med Teach. 2023;45(6):565-573. doi:10.1080/0142159X.2023.2180340
41. Prates MOR, Avelar PH, Lamb LC. Assessing gender bias in machine translation: a case study with Google Translate. Neural Comput Applic. 2020;32(10):6363-6381. doi:10.1007/s00521-019-04144-6
42. Chen IY, Pierson E, Rose S, Joshi S, Ferryman K, Ghassemi M. Ethical machine learning in healthcare. Annu Rev Biomed Data Sci. 2021;4:123-144. doi:10.1146/annurev-biodatasci-092820-114757
43. Australian Institute of Health and Welfare; 2024. https://www.aihw.gov.au/reports/workforce/health-workforce
44. Olson S, McAlpine H, Cain SA, Mitchell R, Olsson G, Drummond KJ. International women leaders in neurosurgery: past, present, and what the future must look like. Neurosurg Focus. 2021;50(3):E2. doi:10.3171/2020.12.FOCUS20949
45. Carboni C, Wehrens R, van der Veen R, de Bont A. Eye for an AI: more-than-seeing, fauxtomation, and the enactment of uncertain data in digital pathology. Soc Stud Sci. 2023;53(5):712-737. doi:10.1177/03063127231167589
46. Kusta O, Bearman M, Gorur R, Risør T, Brodersen JB, Hoeyer K. Speed, accuracy, and efficiency: the promises and practices of digitization in pathology. Soc Sci Med. 2024;345:116650. doi:10.1016/j.socscimed.2024.116650
47. Taylor A. The automation charade. Logic(s). 2018.
48. Ebeling M. Healthcare and big data: digital specters and phantom objects. Palgrave Macmillan; 2016.
49. Latour B. Reassembling the social: an introduction to actor-network-theory. Oxford University Press; 2007.
50. Jones D, Snider C, Nassehi A, Yon J, Hicks B. Characterising the digital twin: a systematic literature review. CIRP J Manuf Sci Technol. 2020;29:36-52. doi:10.1016/j.cirpj.2020.02.002
51. Jaton F. Assessing biases, relaxing moralism: on ground-truthing practices in machine learning design and application. Big Data Soc. 2021;8(1):20539517211013569. doi:10.1177/20539517211013569
52. Greenhalgh T, Howick J, Maskrey N. Evidence-based medicine: a movement in crisis? BMJ. 2014;348:g3725. doi:10.1136/bmj.g3725
53. Li D, Kulasegaram K, Hodges BD. Why we needn't fear the machines: opportunities for medicine in a machine learning world. Acad Med. 2019;94(5):623-625. doi:10.1097/ACM.0000000000002661
54. Bearman M, Luckin R. Preparing university assessment for a world with AI: tasks for human intelligence. In: Re-imagining university assessment in a digital world. Springer; 2020:49-63.
55. Bearman M, Tai J, Dawson P, Boud D, Ajjawi R. Developing evaluative judgment for a time of generative artificial intelligence. Assess Eval High Educ. 2024;49(6):893-905. doi:10.1080/02602938.2024.2335321
56. Lebovitz S. Diagnostic doubt and artificial intelligence: an inductive field study of radiology work. In: International Conference on Information Systems (ICIS). Association for Information Systems electronic library; 2019. https://aisel.aisnet.org/icis2019/future_of_work/future_work/11
57. Lie M. Technology and masculinity: the case of the computer. Eur J Womens Stud. 1995;2(3):379-394. doi:10.1177/135050689500200306
58. Poole M, Isaacs D. Caring: a gendered concept. Women's Stud Int Forum. 1997;20(4):529-536. doi:10.1016/S0277-5395(97)00041-1
59. Hodges BD. Introduction: technology, compassion, and the future of healthcare. In: Hodges BD, Paech G, Bennett J, eds. Without compassion, there is no healthcare: leading with care in a technological age. McGill-Queen's Press; 2020:3-30.
60. Sriprakash A, Williamson B, Facer K, Pykett J, Valladares Celis C. Sociodigital futures of education: reparations, sovereignty, care, and democratisation. Oxford Rev Educ. 2024:1-18. doi:10.1080/03054985.2024.2348459
61. Ajjawi R, Olson RE, McNaughton N. Emotion as reflexive practice: a new discourse for feedback practice and research. Med Educ. 2022;56(5):480-488. doi:10.1111/medu.14700
62. Microsoft Source Asia. Taiwan hospital deploys AI copilots to lighten workloads for doctors, nurses and pharmacists; 2024.
63. Lombarts KMJ, Verghese A. Medicine is not gender-neutral—she is male. N Engl J Med. 2022;386(13):1284-1287. doi:10.1056/NEJMms2116556
64. Boscardin CK, Gin B, Golde PB, Hauer KE. ChatGPT and generative artificial intelligence for medical education: potential impact and opportunity. Acad Med. 2024;99(1):22-27. doi:10.1097/ACM.0000000000005439
65. Bearman M, Ryan J, Ajjawi R. Discourses of artificial intelligence in higher education: a critical literature review. High Educ. 2023;86(2):369-385. doi:10.1007/s10734-022-00937-2
66. Harrison J, Molloy E, Bearman M, Ting CY, Leech M. Clinician peer exchange groups (C-PEGs): augmenting medical students' learning on clinical placement. In: Billett S, Newton J, Rogers G, Noble C, eds. Augmenting health and social care students' clinical learning experiences: outcomes and processes. Springer International Publishing; 2019:95-120. doi:10.1007/978-3-030-05560-8_5
67. van Braak M, Schaepkens SP, van Dolder E, et al. What affects you? A conversation analysis of exploring emotions during reflection sessions in Dutch general practitioner training. Front Psychol. 2023;14:1198208. doi:10.3389/fpsyg.2023.1198208
68. Clement T, Bolton J, Griffiths L, Cracknell C, Molloy E. 'Noticing' in health professions education: time to pay attention? Med Educ. 2023;57(4):305-314. doi:10.1111/medu.14978
69. Lees J, Risør T, Sweet L, Bearman M. Digital technology in physical examination teaching: clinical educators' perspectives and current practices. Adv Health Sci Educ Theory Pract. 2024. doi:10.1007/s10459-024-10401-8
70. Lees J, Risør T, Sweet L, Bearman M. Integrating digital technologies into teaching embodied knowledge in the context of physical examination. Med Educ. 2024:1-10. doi:10.1111/medu.15599
71. Knight P. Introduction. In: Knight P, ed. Assessment for learning in higher education. Routledge; 1995. doi:10.1163/9789004329775_002
72. Kwet M. Digital colonialism: US empire and the new imperialism in the global south. Race Class. 2019;60(4):3-26. doi:10.1177/0306396818823172
