Abstract
This study sought to characterize images of cancer patients generated by Artificial Intelligence (AI) text-to-image tools, and assess whether images differed by cancer type or AI tool, to elucidate the potential implications of using AI-generated images in health communication. Two generative AI-based tools, DALL-E and Stable Diffusion, were prompted to produce images of a “cancer patient,” “breast cancer patient,” “lung cancer patient,” and “prostate cancer patient”. Images (N = 320) were coded for perceived demographics, illness features, affect, cancer symbols, setting, and photorealism. Analysis revealed that AI tools commonly depicted cancer patients as White (83.2%) and middle-aged or older (87.5%). Compared to general cancer patient images, breast cancer patients were portrayed as younger, while prostate and lung cancer patients were depicted as older. Breast cancer patients were also more frequently depicted as healthy and displaying positive affect, while lung cancer patients were more often depicted as ill and showing negative affect. Differences were also found between the AI tools, with DALL-E images featuring more racial diversity and being less photorealistic compared to images produced by Stable Diffusion. Because generative AI tools may produce images of cancer patients that are limited on some dimensions of diversity, and in some cases may reinforce stereotypes (eg, breast cancer patients as healthy and happy, lung cancer patients as ill and hopeless), it is critical to consider biases that may exist in these models — and the potential societal implications of using AI-generated images of cancer patients — before these tools are deployed in cancer communication efforts.
Introduction
Researchers are increasingly recognizing the potential for Artificial Intelligence (AI) technologies such as large language models to have a transformative effect on health communication (Dunn et al., 2023). However, less attention has been paid in the field of health communication to the possible impact of AI-based image generators. Generative AI tools that convert text descriptions into images are being used to generate millions of images daily (Bianchi et al., 2023). These text-to-image tools require little technical expertise to operate but can quickly generate impressively detailed, realistic, relevant, and novel images based on text prompts (Ali et al., 2024). Trained on vast datasets of images paired with captions scraped from the internet (Sonmez et al., 2024), these tools use machine learning methods to extract key information from text prompts (eg, the relationship between objects) and then generate an image based on that information (Bird et al., 2023).
There are many potential applications of these tools in health communication efforts (Alenichev et al., 2023). AI image generators can be used to create promotional materials for health organizations, design patient-facing educational materials (Ali et al., 2024), produce medical illustrations to support medical education (Kumar et al., 2024), and make interventions more engaging (Sezgin & McKay, 2024). However, to date, robust health communication research evaluating the use and impact of generative AI tools has largely focused on text-based large language models and associated tools (eg, chatbots, virtual health assistants) (S. Chen et al., 2023; Li et al., 2024; Vilaro et al., 2022), while little attention has been paid to text-to-image tools (Buzzaccarini et al., 2024). The expanding availability and uptake of these tools warrant systematic evaluation of the outputs they produce by health communication scholars.
One concern with AI text-to-image tools is that generated images may perpetuate or even amplify existing biases, stereotypes, and disparities, for example if the training data used in model development are skewed (Ali et al., 2024; Y. Chen et al., 2024; Sun et al., 2024). Prior research has identified gender and racial stereotypes in AI-generated images (Y. Chen et al., 2024; Fraser et al., 2023a). For instance, studies have found that receptionists are more likely to be portrayed as female while most engineers are depicted as male (Cho et al., 2023), and certain attributes (eg, “poor”) are more likely to be associated with darker skin tones (Fraser et al., 2023b). Generated images related to health have similarly been found to lack diversity: one study found that AI models depict surgeons as White and male in the vast majority of instances, significantly under-representing female and non-White surgeons relative to real-world data (Ali et al., 2024). Another study found that AI images generated for “dementia” overrepresented light-skinned individuals and featured visual tropes that could reinforce harmful disease stereotypes (Putland et al., 2023).
Few studies to date have examined AI-generated images in the context of cancer, an important area of research for several reasons. First, scholars increasingly recognize the importance of visuals in health communication messages, including their potential to influence outcomes such as attention, comprehension, recall, and behavior (Gatting et al., 2023; King, 2015). In fact, many guidelines for the development of health education materials recommend the use of images in patient- and public-facing health information resources as a best practice (Gatting et al., 2023).
Second, AI-generated images are increasingly being deployed in industries that shape cultural norms (eg, entertainment media and marketing). It is therefore important to consider how visual portrayals of cancer may impact cultural narratives about the disease. Cultivation theory in health communication posits that with repeated exposure, media narratives can shape worldviews, norms, and perceptions of reality (Romer et al., 2014). The theory is concerned not with the immediate effects of exposure to a message, but rather with the long-term consequences of cumulative exposure to messages and images that may both reflect and shape the ways people think about the world – for example, beliefs about crime as a result of exposure to television narratives about violence (Morgan et al., 2014). As AI images become more ubiquitous, exposure to the narratives and potential biases they contain may similarly influence attitudes toward, and perceptions of, subjects like cancer. The way cancer is depicted in popular media, and its implications for social perceptions of the condition, has been a long-standing area of interest for health communication researchers (Champion et al., 2016). This research has revealed common themes that suggest a persistent cultural framing of the illness. For example, magazine advertisements related to breast cancer have been found to overwhelmingly promote hope and positive experiences (AbiGhannam et al., 2018), and media representations of breast cancer patients tend to portray them as optimistic “fighters” (Champion et al., 2016). Understanding the narratives communicated by visual portrayals of cancer is important, given their potential to impact not only how others view and treat those with a cancer diagnosis but also patients’ own experiences and understanding of their condition. For example, in one study, young cancer survivors reported finding that entertainment narratives often depict cancer experiences unrealistically (eg, underestimating the time spent receiving chemotherapy; showcasing individuals wearing makeup and not experiencing treatment-induced nausea), contributing to emotional distress and internalized stigma (Reffner Collins et al., 2024).
Third, prior research into visual portrayals of cancer in popular media has identified pronounced biases, which could be further reproduced by AI tools. For example, an examination of images in consumer cancer magazines found that the images featured primarily younger (47% under 40), female (61%), White (77%), and healthy-looking (76%) people (Phillips et al., 2011). An analysis of breast cancer images in women’s and fashion magazines similarly found that they tended to depict mostly White (81%), young (81%), and attractive (99%) women with positive facial expressions (88%) and healthy-looking body types (94%) (McWhirter et al., 2012). Finally, a study of breast cancer-related content on Pinterest found that a majority of pins containing an image of a person depicted White (84%) and female (97%) adults (Miller et al., 2019). It is important to assess whether these trends are replicated in AI-generated images, or if AI-generated images introduce unique biases into cancer patient portrayals.
Although AI text-to-image tools could be used for a variety of cancer-related communication purposes (eg, creation of patient education materials, development of stimuli for studies), their utility would be severely limited if they consistently produce images that lack diversity or portray cancer patients in stigmatizing or inauthentic ways. Furthermore, the use of problematic generated images in communications more broadly (eg, news content) could have larger societal implications, as exposure to stereotypical images about a certain group can reinforce negative stereotypes about that group among others and also have an adverse impact on the self-perception of those in the affected group (Bianchi et al., 2023; Jean et al., 2022; Kay et al., 2015; McClure et al., 2011).
In the current study, we examined how two leading AI image generators (DALL-E and Stable Diffusion) depict individuals with cancer to assess the characteristics of these outputs and obtain insight into the utility of these generated images for real-world applications. We generated images of patients with cancer (in general) as well as patients with three common cancers (breast, prostate, and lung) to assess whether there are differences in the way specific cancer patients are portrayed. This in-depth analysis will help inform our understanding of how cancer patients are portrayed by popular AI tools and highlight critical considerations for the use of AI tools in cancer research and practice.
Materials and methods
Sample generation
We used Stable Diffusion and DALL-E 3 to generate 40 images per tool for each of the following prompts: “[a photograph of a] cancer patient,” “[a photograph of a] breast cancer patient,” “[a photograph of a] prostate cancer patient,” and “[a photograph of a] lung cancer patient,” for a total of 320 images. This sampling strategy is in line with previously published studies of AI images (Fraser et al., 2023a, 2023b; Putland et al., 2023). The final prompts were determined through multiple rounds of testing and assessment of outputs on both platforms. The phrase “a photograph of” was added to prompts for DALL-E 3 to obtain images that were more comparable to those generated with Stable Diffusion’s default style setting. Additionally, DALL-E 3 was queried through the ChatGPT interface, and explicit instructions not to alter the prompt (“do not make any modifications or additions to the prompt”) were added to prevent the tool from editing the prompt, thereby ensuring consistency in the generation process. After the initial prompt refinement process, we followed procedures similar to those used in previous content analyses of AI images, in which a standardized prompt is used to generate a large set of images. We intentionally avoided additional prompt engineering to best ascertain the extent of heterogeneity in outputs. The images were generated over a two-week period (March 21, 2024–April 2, 2024).
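For readers who wish to approximate this generation step programmatically, the sketch below uses the OpenAI Images API; this is an assumption for illustration, since the study queried DALL-E 3 through the ChatGPT interface (and Stable Diffusion through its own platform). The function name and batching logic are ours; only the prompts and the 40-images-per-prompt count mirror the procedure described above.

```python
# Minimal sketch (not the study's exact pipeline): batch-generating images
# with the OpenAI Images API. The study used the ChatGPT interface, so this
# programmatic route is an illustrative assumption.
from openai import OpenAI  # assumes the openai>=1.0 SDK and OPENAI_API_KEY set

client = OpenAI()

PROMPTS = [  # standardized prompts from the study
    "a photograph of a cancer patient",
    "a photograph of a breast cancer patient",
    "a photograph of a prostate cancer patient",
    "a photograph of a lung cancer patient",
]
IMAGES_PER_PROMPT = 40  # 40 images per prompt per tool, as reported

def generate_urls(prompt: str, n: int = IMAGES_PER_PROMPT) -> list[str]:
    """Request n images of one standardized prompt and return their URLs."""
    urls = []
    for _ in range(n):  # DALL-E 3 accepts only n=1 per request
        resp = client.images.generate(model="dall-e-3", prompt=prompt,
                                      n=1, size="1024x1024")
        urls.append(resp.data[0].url)
    return urls
```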
Image coding
Features of images were identified and categorized (hereafter referred to as “coded”) by pairs of coders using a codebook iteratively developed by the research team (the final version of the codebook in its entirety is available in Appendix A). Specific codes were informed by prior research on visual representations of cancer patients (Grant & Hundley, 2009; McWhirter et al., 2012; Phillips et al., 2011), other studies of AI-generated images (Fraser et al., 2023b; Putland et al., 2023), and key image features identified in the pilot round of image generation. Codes included demographic characteristics of individuals depicted in the images (eg, perceived race, age, gender), affect, overall health appearance and indicators of illness, setting, presence of cancer symbols (eg, cancer ribbons, the color pink), and level of photorealism (ie, the extent to which the image appears to be a photograph of a real person). The codebook was further refined through an initial coding of 20% of images. All images were then double coded using a Qualtrics survey form built from the final codebook. Images were randomly assigned to pairs of coders who were blinded to the prompt and the AI tool used to generate the image. Given that much of this coding entailed subjective judgments, efforts were made to minimize intercoder variability, including having all coders participate in extensive training and pilot coding debriefings prior to coding the final dataset. During these meetings, coders received additional guidance on how to consistently apply and operationalize the various coding categories. For example, it was clarified that medical scrubs and white lab coats (in addition to hospital gowns) count as “medical clothing,” that rendering errors should not be taken into account when assessing photorealism (coding for that variable should focus only on the style of the image), and that affect should be judged from the individual’s overall facial expression, not just the presence or absence of a smile. Overall, agreement between coders in the final dataset was relatively high, with percent agreement ranging from 62% (for health appearance) to 99% (for cancer ribbon). A third coder adjudicated disagreements between the initial two coders for the final dataset.
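As a concrete illustration of this reliability check, the sketch below computes per-variable percent agreement for double-coded images. The DataFrame layout and paired column names (`*_coder1`/`*_coder2`) are hypothetical stand-ins for the Qualtrics export, which the paper does not specify.

```python
# Sketch: per-variable percent agreement between paired coders, assuming one
# row per image and hypothetical paired columns like "affect_coder1".
import pandas as pd

def percent_agreement(codes: pd.DataFrame, variables: list[str]) -> pd.Series:
    """Percentage of images on which both coders chose the same category."""
    return pd.Series({
        var: (codes[f"{var}_coder1"] == codes[f"{var}_coder2"]).mean() * 100
        for var in variables
    })

# Applied to the study's data, values should fall between roughly 62%
# (health appearance) and 99% (cancer ribbon).
```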
Analysis
Out of the 320 images initially generated, 17 were removed from the sample because they did not contain a person or did not clearly show the person’s facial features, yielding a final analytic sample of 303 images. For analysis, several codes, including health appearance, affect, and photorealism, were converted from 5-point to 3-point scales (eg, combining “slightly negative” with “negative” and “slightly positive” with “positive” affect), and two response categories in the “setting” code were collapsed.
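To make the recoding concrete, below is a minimal sketch of the 5-point-to-3-point collapse for affect (the same pattern applies to health appearance and photorealism); the DataFrame and column names are illustrative, not the study’s actual variable names.

```python
# Sketch: collapse the codebook's 5-point affect scale to the 3-point scale
# used in analysis. Data and column names are illustrative.
import pandas as pd

AFFECT_3PT = {
    "Negative": "Negative", "Slightly negative": "Negative",
    "Neutral or mixed": "Neutral or mixed",
    "Slightly positive": "Positive", "Positive": "Positive",
}

codes = pd.DataFrame({"affect": ["Slightly positive", "Negative",
                                 "Neutral or mixed"]})
codes["affect_3pt"] = codes["affect"].map(AFFECT_3PT)
```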
Summary statistics (frequencies and percentages) were calculated separately by prompt and AI tool. Chi-square tests (or Fisher’s Exact tests when any cell had an expected frequency ≤5) (Kim, 2017) were then used to determine whether there were significant differences in characteristics between 1) the cancer site-specific prompts and the general cancer patient prompt, and 2) the two AI tools. Qualitative insights derived from coders’ observations are also included to highlight additional salient features and trends that emerged during coding.
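The test-selection rule can be expressed as in the sketch below. Note that SciPy’s `fisher_exact` handles only 2×2 tables, so this version covers binary features; the function name is ours, and the example counts are taken from Table 1.

```python
# Sketch of the reported test logic for a 2x2 feature table: chi-square by
# default, Fisher's exact test when any expected cell frequency is <= 5.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def compare_prompts(table: np.ndarray) -> tuple[str, float]:
    """table: 2x2 counts (rows = prompts, cols = feature present/absent)."""
    chi2, p, dof, expected = chi2_contingency(table)
    if (expected <= 5).any():   # switch to Fisher's exact test
        _, p = fisher_exact(table)
        return "fisher", float(p)
    return "chi-square", float(p)

# Example from Table 1: cancer ribbon, breast CP (31 yes, 47 no)
# vs. general CP (8 yes, 72 no).
print(compare_prompts(np.array([[31, 47], [8, 72]])))
```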
Results
Across prompts and tools, generated images tended to portray cancer patients as White and middle-aged or older (Figure 1). Overall, cancer patients were infrequently portrayed as very ill. Although some characteristics were similar across prompts (eg, higher representation of White individuals), significant differences between the general “cancer patient” prompt and each of the cancer site-specific prompts were also noted (Table 1).
Figure 1.
Examples of AI-generated images of cancer patients. Example images were generated by DALL-E and Stable Diffusion; each set was randomly sampled from the 40 outputs generated for each prompt with each tool.
Table 1.
Frequency of image features by prompt (combined across AI image generation tools).
| Feature | Cancer Patient [reference] n = 80 (%) | Breast Cancer Patient n = 78 (%) [p-value a,b] | Prostate Cancer Patient n = 72 (%) [p-value a,b] | Lung Cancer Patient n = 73 (%) [p-value a,b] | Overall (all prompts) n = 303 (%) |
|---|---|---|---|---|---|
| Race | | 0.152b | 0.068b | 0.173b | |
| White | 73 (91.3) | 63 (80.8) | 57 (79.2) | 59 (80.8) | 252 (83.2) |
| Non-White | 2 (2.5) | 6 (7.7) | 2 (2.8) | 3 (4.1) | 13 (4.3) |
| Unclear | 5 (6.3) | 9 (11.5) | 13 (18.1) | 11 (15.1) | 38 (12.5) |
| Age | | **<0.001**b | **<0.001**a | **<0.001**a | |
| Child <18 | 0 (0.0) | 1 (1.3) | 0 (0.0) | 0 (0.0) | 1 (0.3) |
| Young adult 18–39 | 15 (18.8) | 17 (21.8) | 2 (2.8) | 3 (4.1) | 37 (12.2) |
| Middle aged 40–65 | 43 (53.8) | 57 (73.1) | 17 (23.6) | 6 (8.2) | 123 (40.6) |
| Older adult >65 | 22 (27.5) | 3 (3.8) | 53 (73.6) | 64 (87.7) | 142 (46.9) |
| Gender | | 0.106a | **<0.001**a | **<0.001**b | |
| Masculine | 14 (17.5) | 6 (7.7) | 72 (100.0) | 57 (78.1) | 149 (49.2) |
| Feminine | 66 (82.5) | 72 (92.3) | 0 (0.0) | 12 (16.4) | 150 (49.5) |
| Unclear | 0 (0.0) | 0 (0.0) | 0 (0.0) | 4 (5.5) | 4 (1.3) |
| Affect | | **<0.001**a | 0.871a | **<0.001**a | |
| Negative | 12 (15.0) | 0 (0.0) | 13 (18.1) | 51 (69.9) | 76 (25.1) |
| Neutral or mixed | 43 (53.8) | 25 (32.1) | 38 (52.8) | 19 (26.0) | 125 (41.3) |
| Positive | 25 (31.3) | 53 (67.9) | 21 (29.2) | 3 (4.1) | 102 (33.7) |
| Health appearance | | **<0.001**a | **<0.001**a | **<0.001**a | |
| Sick | 17 (21.3) | 0 (0.0) | 2 (2.8) | 41 (56.2) | 60 (19.8) |
| Neutral or unclear | 46 (57.5) | 21 (26.9) | 38 (52.8) | 27 (37.0) | 132 (43.6) |
| Healthy | 17 (21.3) | 57 (73.1) | 32 (44.4) | 5 (6.8) | 111 (36.6) |
| Subject in bed | | **<0.001**a | **0.003**a | **0.022**a | |
| Yes | 23 (28.8) | 0 (0.0) | 6 (8.3) | 9 (12.3) | 38 (12.5) |
| No | 57 (71.3) | 78 (100.0) | 66 (91.7) | 64 (87.7) | 265 (87.5) |
| Head covering | | 0.057a | **<0.001**a | **<0.001**a | |
| Yes | 50 (62.5) | 36 (46.2) | 0 (0.0) | 6 (8.2) | 92 (30.4) |
| No | 30 (37.5) | 42 (53.8) | 72 (100.0) | 67 (91.8) | 211 (69.6) |
| Pink color | | **<0.001**b | **<0.001**b | **<0.001**b | |
| Yes | 41 (51.3) | 64 (82.1) | 3 (4.2) | 3 (4.1) | 111 (36.6) |
| No | 36 (45.0) | 10 (12.8) | 65 (90.3) | 69 (94.5) | 180 (59.4) |
| Not applicable | 3 (3.8) | 4 (5.1) | 4 (5.6) | 1 (1.4) | 12 (4.0) |
| Cancer ribbon | | **<0.001**a | **0.036**b | **0.007**b | |
| Yes | 8 (10.0) | 31 (39.7) | 1 (1.4) | 0 (0.0) | 40 (13.2) |
| No | 72 (90.0) | 47 (60.3) | 71 (98.6) | 73 (100.0) | 263 (86.8) |
| Image setting | | **<0.001**b | 0.519b | 0.861b | |
| Medical setting | 18 (22.5) | 1 (1.3) | 13 (18.1) | 14 (19.2) | 46 (15.2) |
| Indoor, not medical setting | 29 (36.3) | 15 (19.2) | 23 (31.9) | 30 (41.1) | 97 (32.0) |
| Outdoor | 1 (1.3) | 3 (3.8) | 0 (0.0) | 0 (0.0) | 4 (1.3) |
| No background | 32 (40.0) | 59 (75.6) | 36 (50.0) | 29 (39.7) | 156 (51.5) |
| Medical equipment present | | **0.002**a | **0.030**a | 0.368a | |
| Yes | 21 (26.3) | 5 (6.4) | 8 (11.1) | 25 (34.2) | 59 (19.5) |
| No | 59 (73.8) | 73 (93.6) | 64 (88.9) | 48 (65.8) | 244 (80.5) |
| Medical staff present | | **0.007**b | 0.702a | 0.841a | |
| Yes | 8 (10.0) | 0 (0.0) | 5 (6.9) | 9 (12.3) | 22 (7.3) |
| No | 72 (90.0) | 78 (100.0) | 67 (93.1) | 64 (87.7) | 281 (92.7) |
| Subject wearing medical clothing (e.g., hospital gown) | | **0.009**a | 0.631a | 0.204a | |
| Yes | 25 (31.3) | 10 (12.8) | 19 (26.4) | 31 (42.5) | 85 (28.1) |
| No | 55 (68.8) | 68 (87.2) | 53 (73.6) | 42 (57.5) | 218 (71.9) |
| Photorealism | | **0.004**b | **<0.001**a | **0.026**b | |
| Photorealistic | 64 (80.0) | 72 (92.3) | 51 (70.8) | 54 (74.0) | 241 (79.5) |
| Not very photorealistic | 16 (20.0) | 4 (5.1) | 7 (9.7) | 13 (17.8) | 40 (13.2) |
| Not at all photorealistic | 0 (0.0) | 2 (2.6) | 14 (19.4) | 6 (8.2) | 22 (7.3) |
a Statistical significance tested using a Χ2 test. b Statistical significance tested using Fisher’s Exact Test for Count Data. Bold values denote statistical significance at the p < 0.05 level.
Demographics
Most cancer patients (hereafter CPs) portrayed in AI-generated images were White (79.2–91.3% across prompts; 83.2% overall). Notably, most images not coded as “White” were coded as “unclear,” indicating that the race/ethnicity of the individual portrayed was ambiguous. Age representation varied across prompts, with breast CPs portrayed as younger than general CPs (only 3.8% of breast CP images portrayed older adults compared to 27.5% of general CP images), and individuals in both prostate and lung CP images being portrayed as older more frequently than those in general CP images (with 73.6% and 87.7% coded as older adults, respectively). In terms of gender, individuals in general CP images (82.5%) and breast CP images (92.3%) were mostly perceived as feminine, while both lung CP and prostate CP images had higher rates of masculine-seeming individuals (100% for prostate cancer and 78.1% for lung cancer).
Affect
There were differences in affect between breast CP images and the general CP images: none of the breast CP images portrayed individuals with negative affect (vs. 15.0% of general CP images), and a majority conveyed positive affect (67.9% vs. 31.3% for general CP images). Conversely, lung CP images featured much higher rates of negative affect (69.9%) compared to general CP images. Over half of the prostate CP images (52.8%) and general CP images (53.8%) depicted individuals with a neutral affect.
Illness features
There were also notable differences across prompts on health appearance. No breast CP images contained individuals who looked clearly ill, and 73.1% of images featured individuals that looked healthy; whereas in the general CP images, 21.3% of individuals were coded as looking ill and the same percentage were coded as looking clearly healthy. Prostate CP images also had a higher percentage of healthy-looking individuals (44.4%), and a lower percentage of unhealthy-looking individuals (2.8%), compared to the general CP images. In contrast, individuals in lung CP images were portrayed as ill at a much higher rate compared to general CP images (56.2% vs 21.3%). Additionally, 28.8% of general CP images portrayed the subject lying or sitting in bed, which was a less frequent feature in images generated with the site-specific prompts (0% for breast CP; 8.3% for prostate CP; and 12.3% for lung CP). Furthermore, 62.5% of general CP images and 46.2% of breast CP images portrayed an individual with a head covering. In contrast, no prostate CP images and only 8.2% of lung CP images showed head coverings.
Cancer symbols
The color pink was more prevalent in the breast CP images (82.1%) compared to the general CP images (51.3%). In contrast, only approximately 4% of the prostate CP and lung CP images contained the color pink. Additionally, although the inclusion of a cancer ribbon was not common overall (13.2%), breast CP images more frequently contained a ribbon compared to the general CP images (39.7% vs 10.0%), while prostate and lung CP images less frequently contained a ribbon (1.4% and 0%, respectively).
Setting features
Overall, about half of the images (51.5%) had no discernible background (eg, a solid-colored background). Fewer breast CP images featured a medical setting (1.3%) compared to general CP images (22.5%). Overall, medical equipment was present in 19.5% of images and medical staff were seen in 7.3% of images. Compared to general CP images (26.3%), both breast CP images and prostate CP images depicted medical equipment less frequently (6.4% and 11.1%, respectively), and fewer breast CP images included medical staff (0% vs. 10.0% for general CP images).
Photorealism
The majority of images were photorealistic (79.5%), but breast CP images were more frequently rated as photorealistic than general CP images (92.3% vs. 80.0%), whereas prostate and lung CP images were rated as photorealistic less frequently.
Differences between AI text-to-image tools
There were clear differences in the images generated by DALL-E and Stable Diffusion (Table 2). For example, although most DALL-E images featured White individuals (77.6%), there was more racial diversity in these images than in Stable Diffusion images (88.1% of which were coded as White and 0% coded as clearly non-White). There were also differences in age representation: a greater percentage of DALL-E images portrayed younger adults (24.5%) compared to Stable Diffusion images (1.3%). Additionally, Stable Diffusion images more frequently showed subjects in bed (22.5% vs 1.4% in DALL-E images), and more frequently portrayed CPs with head coverings (47.5% vs. 11.2% in DALL-E images). Finally, there was a pronounced difference in terms of photorealism, as 100% of Stable Diffusion images were coded as photorealistic, compared to 56.6% of DALL-E images.
Table 2.
Frequency of image features by AI tool (combined across prompts).
| Feature | DALL-E n = 143 (%) | Stable Diffusion n = 160 (%) | p-value a,b | Overall (all tools) n = 303 (%) |
|---|---|---|---|---|
| Race | | | **<0.001**a | |
| White | 111 (77.6) | 141 (88.1) | | 252 (83.2) |
| Non-White | 13 (9.1) | 0 (0.0) | | 13 (4.3) |
| Unclear | 19 (13.3) | 19 (11.9) | | 38 (12.5) |
| Age | | | **<0.001**a | |
| Child <18 | 1 (0.7) | 0 (0.0) | | 1 (0.3) |
| Young adult 18–39 | 35 (24.5) | 2 (1.3) | | 37 (12.2) |
| Middle aged 40–65 | 45 (31.5) | 78 (48.8) | | 123 (40.6) |
| Older adult >65 | 62 (43.4) | 80 (50.0) | | 142 (46.9) |
| Gender | | | 0.222b | |
| Masculine | 76 (53.1) | 73 (45.6) | | 149 (49.2) |
| Feminine | 64 (44.8) | 86 (53.8) | | 150 (49.5) |
| Unclear | 3 (2.1) | 1 (0.6) | | 4 (1.3) |
| Affect | | | **<0.001**a | |
| Negative | 21 (14.7) | 55 (34.4) | | 76 (25.1) |
| Neutral or mixed | 81 (56.6) | 44 (27.5) | | 125 (41.3) |
| Positive | 41 (28.7) | 61 (38.1) | | 102 (33.7) |
| Health appearance | | | 0.055a | |
| Sick | 20 (14.0) | 40 (25.0) | | 60 (19.8) |
| Neutral or unclear | 66 (46.2) | 66 (41.3) | | 132 (43.6) |
| Healthy | 57 (39.9) | 54 (33.8) | | 111 (36.6) |
| Subject in bed | | | **<0.001**a | |
| Yes | 2 (1.4) | 36 (22.5) | | 38 (12.5) |
| No | 141 (98.6) | 124 (77.5) | | 265 (87.5) |
| Head covering | | | **<0.001**a | |
| Yes | 16 (11.2) | 76 (47.5) | | 92 (30.4) |
| No | 127 (88.8) | 84 (52.5) | | 211 (69.6) |
| Pink color | | | **<0.001**a | |
| Yes | 40 (28.0) | 71 (44.4) | | 111 (36.6) |
| No | 91 (63.6) | 89 (55.6) | | 180 (59.4) |
| Not applicable | 12 (8.4) | 0 (0.0) | | 12 (4.0) |
| Cancer ribbon | | | **0.001**a | |
| Yes | 29 (20.3) | 11 (6.9) | | 40 (13.2) |
| No | 114 (79.7) | 149 (93.1) | | 263 (86.8) |
| Image setting | | | **<0.001**b | |
| Medical setting | 6 (4.2) | 40 (25.0) | | 46 (15.2) |
| Indoor, not medical setting | 26 (18.2) | 71 (44.4) | | 97 (32.0) |
| Outdoor | 4 (2.8) | 0 (0.0) | | 4 (1.3) |
| No background | 107 (74.8) | 49 (30.6) | | 156 (51.5) |
| Medical equipment present | | | 0.849a | |
| Yes | 29 (20.3) | 30 (18.8) | | 59 (19.5) |
| No | 114 (79.7) | 130 (81.3) | | 244 (80.5) |
| Medical staff present | | | 0.348a | |
| Yes | 13 (9.1) | 9 (5.6) | | 22 (7.3) |
| No | 130 (90.9) | 151 (94.4) | | 281 (92.7) |
| Subject wearing medical clothing (e.g., hospital gown) | | | **<0.001**a | |
| Yes | 7 (4.9) | 78 (48.8) | | 85 (28.1) |
| No | 136 (95.1) | 82 (51.3) | | 218 (71.9) |
| Photorealism | | | **<0.001**a | |
| Photorealistic | 81 (56.6) | 160 (100.0) | | 241 (79.5) |
| Not very photorealistic | 40 (28.0) | 0 (0.0) | | 40 (13.2) |
| Not at all photorealistic | 22 (15.4) | 0 (0.0) | | 22 (7.3) |
a Statistical significance tested using a Χ2 test. b Statistical significance tested using Fisher’s Exact Test for Count Data. Bold values denote statistical significance at the p < 0.05 level.
Qualitative observations
In addition to the features summarized above, other salient themes emerged during coding. First, many images (especially of general CPs and breast CPs) featured feminine individuals who aligned with societal beauty ideals, such as being thin and wearing make-up. Several images also featured nudity, where the individual’s breast(s) were visible (see Figure 1, #13). Additionally, some of the lung CP images depicted individuals who appeared to be smoking or holding cigarettes (Figure 1, #28). Images also sometimes combined visual markers signifying CPs (eg, head coverings) and visual markers associated with healthcare providers, such as white coats, medical scrubs, and stethoscopes, that patients would not realistically wear (Figure 1, #11). Many images – particularly those generated by DALL-E – also included pronounced anatomical elements such as internal organs (eg, Figure 1, #23 and #34), highlighting their biomedical emphasis. Finally, while the level of photorealism was relatively high, obvious rendering errors that made the images’ generated nature apparent (eg, too many fingers, anatomically impossible hand placement) were also frequently observed.
Discussion
This analysis sought to characterize images of cancer patients produced by AI text-to-image tools. The first notable finding was the lack of racial diversity in generated images, with most depicting White individuals, and few clearly depicting persons of color. Age and gender representation were also skewed for certain prompts, and while these imbalances may be logical in some cases (eg, higher representation of individuals who are middle-aged or older may reflect the fact that cancer risk increases with age; the lack of feminine individuals in prostate cancer images corresponds with biological risk), in other cases, these distributions do not reflect reality. For example, only 16.4% of lung CPs were coded as feminine, whereas the real-world gender disparity in lung cancer incidence is not nearly as large (Fu et al., 2023; Sharma, 2022). Similar discrepancies with risk statistics have been previously observed with cancer images published in magazines (Phillips et al., 2011), indicating a replication of bias. Lack of demographic representation, particularly regarding race/ethnicity, is problematic because the “erasure” of certain groups can reinforce disparities and contribute to these groups being overlooked in cancer control efforts. It is important to ensure that AI-generated images equitably and accurately represent people across demographic groups (Fraser et al., 2023a). Limited demographic representation in AI-generated images may also have implications for health communication efforts that incorporate these images. Social identity theory (which posits that people define their sense of self in terms of social categories and group memberships) suggests that individuals are more likely to attend to a message if they identify with the individuals portrayed in the images accompanying the message, and research suggests this “identification” is often based on characteristics such as race and gender (Phillips et al., 2011).
Beyond demographics, important patterns were also observed in other image features. Unlike general CP images, most breast CP images depicted patients as healthy (73.1%) and with positive affect (67.9%), whereas lung CPs were more often portrayed as ill (56.2%) and with negative affect (69.9%). These findings show how AI tools may reinforce both “aspirational” cancer experiences as well as negative stereotypes about a cancer diagnosis. Studies have shown that public attitudes toward lung cancer are more negative than attitudes toward breast cancer, and that lung cancer is more frequently associated with despair than breast cancer (Sriram et al., 2015). Whether AI images reinforce or challenge these beliefs is important to assess because the ways in which cancer patients are portrayed can influence both how people experience cancer (eg, sense of identity, expectations) and how people with cancer are treated by others (Putland et al., 2023).
Although lung cancer survival rates are lower than survival rates for breast cancer, consistently portraying lung CPs as gravely ill and hopeless could reinforce the common view that the disease is always fatal, which could contribute to therapeutic nihilism and negatively impact health-seeking behaviors and treatment decisions among individuals who have, or are at risk for, lung cancer (Sriram et al., 2015; Tran et al., 2015). On the other hand, although portraying breast CPs as healthy and happy could foster hope, it may also have negative effects, such as reducing the perceived seriousness of the disease and the urgency of taking preventive action (eg, screening) (McWhirter et al., 2012) or even alienating patients whose real-life experience with breast cancer is not as positive (Bock, 2013; Reffner Collins et al., 2024). These optimistic portrayals of breast CPs may reflect the widespread dissemination of images associated with “pink ribbon culture” promoted by breast cancer advocacy groups, companies, and the mass media, which has been critiqued for “sugarcoating” the disease, emphasizing a forced cheerfulness (McDonnell et al., 2017), and centering individualism, feminine ideals, and an imperative for optimism (Gibson et al., 2014). Although it is not necessarily problematic for any single image to portray a healthy breast CP or an unhappy lung CP, the consistency of these depictions in the images generated may reinforce certain narratives. Ideally, images of CPs should portray the full spectrum of cancer experiences.
Additionally, the frequency of cancer ribbons and the color pink in breast CP images reflect commonly used visual markers of breast cancer (AbiGhannam et al., 2018). The pink ribbon has become an instantly recognizable symbol for breast cancer, and while it may have positive connotations of strength and hope, not everyone may perceive this symbol positively (Harvey & Strahilevitz, 2009). Additionally, the use of pink ribbons may be a way to signify cancer in visual materials without having to use “objectionable” or negative images of cancer, which makes it more palatable for use in communication efforts, including cause-related marketing campaigns (Harvey & Strahilevitz, 2009). As pink is commonly associated with femininity in Western culture, frequent use of pink may reinforce the idea of breast cancer as a “women’s disease” (Wagner, 2005).
Lastly, a substantial number of lung CP images included allusions to smoking, which could further feed the prevailing narrative that lung cancer is a self-inflicted disease caused by a person’s smoking (Tran et al., 2015) and increase the societal stigma associated with lung cancer. It is possible that fear-based tobacco control campaigns have contributed to a prevalent visual association between lung cancer and cigarette smoking, as well as to negative portrayals of lung cancer patients, and that these associations are now being reflected in AI-generated images.
The potential for AI-generated images to increase or reinforce stigma needs to be further evaluated and monitored because stigma can negatively impact lung cancer patients’ health and wellbeing, leading them to feel guilt or shame, experience psychological distress, and avoid or delay seeking medical care (Mazières et al., 2015; Tran et al., 2015). Additionally, exposure to negative representations of a group can reinforce stereotypes and encourage negative attitudes toward members of that group (Harwood, 2020). Work on social identity theory suggests a link between media exposure and intergroup outcomes through the media’s role in creating commonly held norms and conventions: by promoting representations that emphasize particular aspects of a group (eg, many lung cancer patients smoke) while ignoring others, media images and messages can play a role in creating shared norms and activating the use of these constructs in subsequent evaluations (Mastro, 2003; McKinley et al., 2014).
The potential for generative AI systems to reproduce and reinforce biases and stereotypes is concerning because these perspectives, hidden under the guise of technological neutrality, can influence social perceptions of reality (Gorska & Jemielniak, 2023), especially if disseminated on a large scale (Ali et al., 2024; Bianchi et al., 2023). Although modifying the prompts used could help mitigate some of the observed patterns (eg, specifying images of non-White patients to increase racial diversity), prompt engineering is generally quite limited as a solution to the complex and embedded biases in these tools (Bird et al., 2023). More systematic mitigation approaches, such as involving stakeholders in the design of these systems, using higher-quality training data, and enacting guidelines to increase transparency and accountability, are likely needed to help reduce the risk of harm from these tools (Bird et al., 2023). Furthermore, promoting greater AI literacy among communication practitioners, researchers, health care providers, and the general public may also help offset the potential harms of biased AI-generated images. Overly optimistic beliefs in the potential and objectivity of AI systems could cause individuals to perceive AI tools as more impartial or neutral than humans (Helberger et al., 2020; Klingbeil et al., 2024), and consequently be less wary of the potential for bias in the products generated by these systems. In addition, a better understanding of AI tools could help users develop more accurate perceptions of their capabilities, recognize biases in these tools, and make more appropriate decisions about what to do with the outputs they produce (Pinski & Benlian, 2024). Beyond didactic education to promote AI literacy, hands-on training may offer a tangible way to help learners understand how AI tools work. For example, a mobile app (“AiLingo”) that enables users to iteratively configure an image classification model by selecting different training data and assessing changes in the model’s performance was found to increase both subjective and objective AI literacy (Pinski et al., 2024). Finally, training in prompt engineering may help users of AI systems mitigate biases in the absence of more comprehensive solutions.
Limitations
While this study provides some important initial insights on the way CPs are portrayed in AI-generated images, it also has several limitations. First, some coding judgments regarding image characteristics are subjective and necessarily require some degree of interpretation. To improve the reliability of coding decisions, all images were double-coded, discrepancies were adjudicated by a third coder, and all coders participated in a pilot coding phase to ensure that coding categories were clear and applied consistently.
In addition, data were collected over several weeks in the spring of 2024 using the most recent versions of the AI tools then available; the results therefore reflect the outputs of these tools only at a particular moment in time. Given that AI tools are constantly evolving, these results may not generalize to subsequent iterations (Ali et al., 2024). Similarly, our study examined images from only two leading AI models and may not generalize to other tools that employ different algorithms or training data. Lastly, only 40 images were examined per prompt, and it is possible that generating a larger number of images per prompt would have revealed a different set of patterns in the images.
Conclusion
The utilization of generative AI offers researchers and practitioners numerous potential advantages, including the ability to efficiently generate high-quality, customized images at low cost (Buzzaccarini et al., 2024). However, the results of our study suggest that the use of text-to-image models in cancer-related research and practice may be premature, and if they are used, careful consideration must be given to their flaws and potential biases. In the cancer context, users must be aware of the potential lack of diversity in these images, as well as the possibility of perpetuating certain cancer-related stereotypes (both positive and negative) when using AI-generated images in health communication messaging, interventions, or clinical practice. Finally, while this study focused on the portrayal of cancer patients in AI-generated images, the analytic approach outlined herein may be applied to other health conditions to better understand potential biases and narratives embedded in generative text-to-image AI models related to these diseases and the larger social implications of the potential widespread dissemination of the resulting images.
Funding
Data collection for this manuscript was supported by a contract granted by the National Cancer Institute to Westat [contract #HHSN261201800002B].
Appendix A.
Codebook
Enter your coder initials (e.g., AG, NS): _______
Enter the number of the image you are coding: ________
- Does the image contain a person? [If two or more people are in the image, code only for the focal person (the person in the foreground) and add a note about there being multiple persons in Question 19.]
- Yes
- No (if “no” is selected, skip to end of survey)
- Race/ethnicity:
- White
- Non-White [specify below, if able; consider the following race categories: Black/African American; Hispanic/Latinx; American Indian or Alaska Native; Asian; Native Hawaiian or Other Pacific Islander]
- Unclear [select if the focal person in the image does not look clearly White or non-White]
- Skin tone:
- Light [select for very fair skin tone]
- Medium
- Dark
- Gender presentation:
- Feminine
- Masculine
- Unclear
- Age group:
- Child/adolescent (< 18)
- Young adult (18–39)
- Middle aged (> 39 and < 65)
- Older adult (65+) [select if there are clear signs of advanced age, like a significant amount of gray hair or wrinkles]
- Affect – facial expression:
- Negative
- Slightly negative
- Neutral or mixed
- Slightly positive
- Positive
- N/A (e.g. face is covered)
- Presence of head covering:
- Yes
- No
- Overall physical health appearance:
- Very sick
- Somewhat sick
- Neither clearly sick, nor clearly healthy (neutral/can’t tell)
- Somewhat healthy
- Very healthy
- Features of person [select all that apply]:
- Rash
- In bed (sitting or lying down)/in a wheelchair
- Absence of hair [look for total baldness; only code if head covering not present]
- Wearing hospital gown (or similar medical clothing)
- Other (e.g., frailty, sunken eyes, dark circles around eyes, etc., if these features are significant) [specify] ____________
- None of the above
- Image setting:
- Indoor – medical setting (e.g., hospital or clinic)
- Indoor – not a medical setting (e.g., a bedroom)
- Indoor – not clear
- Outdoors
- No identifiable background (e.g., solid background)
- Presence of medical equipment/medical staff [consider the entire image]:
- Medical equipment (including latex gloves, surgical face masks, stethoscopes, IVs)
- Medical staff (someone other than the focal person portrayed as a medical provider)
- None of the above
- Presence of cancer ribbon (but not differently shaped bows):
- Yes
- No
- Pink color significantly present:
- Yes (any shade of pink)
- No
- N/A (picture is black and white)
- Presence of anatomical elements (e.g., DNA, tumor, internal organs):
- Yes
- No
- Level of photorealism:
- Highly photorealistic
- Somewhat photorealistic
- Not very photorealistic
- Not at all photorealistic (e.g., an animation/cartoon)
- Obvious rendering errors present in the image, like extra hands or misspelled words:
- Yes
- No
Add any other details you think are worth mentioning and not captured above (e.g., presence of more than one individual in the image, nudity, person is portrayed as smoking).
Add any notes about your coding for this image (e.g. wasn’t sure how to code for Q7 because …).
Footnotes
Disclosure statement
No potential conflict of interest was reported by the author(s).
Disclaimer
Opinions expressed by the authors are their own, and this material should not be interpreted as representing the official viewpoint of the US Department of Health and Human Services, the National Institutes of Health, or the National Cancer Institute.
Data availability statement
The data analyzed in the current study are available from the corresponding author on reasonable request.
References
- AbiGhannam N, Chilek LA, & Koh HE (2018). Three pink decades: Breast cancer coverage in magazine advertisements. Health Communication, 33(4), 462–468. 10.1080/10410236.2016.1278496
- Alenichev A, Kingori P, & Grietens KP (2023). Reflections before the storm: The AI reproduction of biased imagery in global health visuals. Lancet Global Health, 11(10), e1496–e1498. 10.1016/S2214-109X(23)00329-7
- Ali R, Tang OY, Connolly ID, Abdulrazeq HF, Mirza FN, Lim RK, Johnston BR, Groff MW, Williamson T, Svokos K, Libby TJ, Shin JH, Gokaslan ZL, Doberstein CE, Zou J, & Asaad WF (2024). Demographic representation in 3 leading artificial intelligence text-to-image generators. JAMA Surgery, 159(1), 87–95. 10.1001/jamasurg.2023.5695
- Bianchi F, Kalluri P, Durmus E, Ladhak F, Cheng M, Nozza D, Hashimoto T, Jurafsky D, Zou J, & Caliskan A (2023). Easily accessible text-to-image generation amplifies demographic stereotypes at large scale. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23) (pp. 1493–1504). Association for Computing Machinery. 10.1145/3593013.3594095
- Bird C, Ungless E, & Kasirzadeh A (2023). Typology of risks of generative text-to-image models. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’23) (pp. 396–410). Association for Computing Machinery. 10.1145/3600211.3604722
- Bock S (2013). Staying positive: Women’s illness narratives and the stigmatized vernacular. Health, Culture and Society, 5(1), 150–166. 10.5195/hcs.2013.125
- Buzzaccarini G, Degliuomini RS, Borin M, Fidanza A, Salmeri N, Schiraldi L, Di Summa PG, Vercesi F, Vanni VS, Candiani M, & Pagliardini L (2024). The promise and pitfalls of AI-generated anatomical images: Evaluating Midjourney for aesthetic surgery applications. Aesthetic Plastic Surgery, 48(9), 1874–1883. 10.1007/s00266-023-03826-w
- Champion C, Berry TR, Kingsley B, & Spence JC (2016). Pink ribbons and red dresses: A mixed methods content analysis of media coverage of breast cancer and heart disease. Health Communication, 31(10), 1242–1249. 10.1080/10410236.2015.1050082
- Chen S, Kann BH, Foote MB, Aerts HJ, Savova GK, Mak RH, & Bitterman DS (2023). Use of artificial intelligence chatbots for cancer treatment information. JAMA Oncology, 9(10), 1459–1462. 10.1001/jamaoncol.2023.2954
- Chen Y, Zhai Y, & Sun S (2024). The gendered lens of AI: Examining news imagery across digital spaces. Journal of Computer-Mediated Communication, 29(1), zmad047. 10.1093/jcmc/zmad047
- Cho J, Zala A, & Bansal M (2023). DALL-EVAL: Probing the reasoning skills and social biases of text-to-image generation models. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (pp. 3020–3031). IEEE. 10.1109/ICCV51070.2023.00283
- Dunn AG, Shih I, Ayre J, & Spallek H (2023). What generative AI means for trust in health communications. Journal of Communication in Healthcare, 16(4), 385–388. 10.1080/17538068.2023.2277489
- Fraser KC, Kiritchenko S, & Nejadgholi I (2023a). Diversity is not a one-way street: Pilot study on ethical interventions for racial bias in text-to-image systems. In Proceedings of the 14th International Conference on Computational Creativity (ICCC) (pp. 288–292). Association for Computational Creativity.
- Fraser KC, Kiritchenko S, & Nejadgholi I (2023b). A friendly face: Do text-to-image systems rely on stereotypes when the input is under-specified? In The Thirty-Seventh AAAI Conference on Artificial Intelligence: Creative AI Across Modalities Workshop. 10.48550/arXiv.2302.07159
- Fu Y, Liu J, Chen Y, Liu Z, Xia H, & Xu H (2023). Gender disparities in lung cancer incidence in the United States during 2001–2019. Scientific Reports, 13(1), 12581. 10.1038/s41598-023-39440-8
- Gatting L, Hanna C, & Robb K (2023). Prevalence and characteristics of pictures in cancer screening information: Content analysis of UK print decision support materials. Health Communication, 38(8), 1601–1611. 10.1080/10410236.2021.2022869
- Gibson AF, Lee C, & Crabb S (2014). ‘If you grow them, know them’: Discursive constructions of the pink ribbon culture of breast cancer in the Australian context. Feminism & Psychology, 24(4), 521–541. 10.1177/0959353514548100
- Gorska AM, & Jemielniak D (2023). The invisible women: Uncovering gender bias in AI-generated images of professionals. Feminist Media Studies, 23(8), 4370–4375. 10.1080/14680777.2023.2263659
- Grant JA, & Hundley H (2009). Images of the war on cancer in the Associated Press: Centering survivors and marginalizing victims. American Communication Journal, 11(4), 1–16.
- Harvey JA, & Strahilevitz MA (2009). The power of pink: Cause-related marketing and the impact on breast cancer. Journal of the American College of Radiology, 6(1), 26–32. 10.1016/j.jacr.2008.07.010
- Harwood J (2020). Social identity theory. In Bulck J (Ed.), The international encyclopedia of media psychology (pp. 1–7). John Wiley & Sons. 10.1002/9781119011071.iemp0153
- Helberger N, Araujo T, & de Vreese CH (2020). Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making. Computer Law & Security Review, 39, 105456. 10.1016/j.clsr.2020.105456
- Jean EA, Neal-Barnett A, & Stadulis R (2022). How we see us: An examination of factors shaping the appraisal of stereotypical media images of Black women among Black adolescent girls. Sex Roles, 86(5), 334–345. 10.1007/s11199-021-01269-8
- Kay M, Matuszek C, & Munson SA (2015). Unequal representation and gender stereotypes in image search results for occupations. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15) (pp. 3819–3828). Association for Computing Machinery. 10.1145/2702123.2702520
- Kim H-Y (2017). Statistical notes for clinical researchers: Chi-squared test and Fisher’s exact test. Restorative Dentistry & Endodontics, 42(2), 152–155. 10.5395/rde.2017.42.2.152
- King AJ (2015). A content analysis of visual cancer information: Prevalence and use of photographs and illustrations in printed health materials. Health Communication, 30(7), 722–731. 10.1080/10410236.2013.878778
- Klingbeil A, Grützner C, & Schreck P (2024). Trust and reliance on AI: An experimental study on the extent and costs of overreliance on AI. Computers in Human Behavior, 160, 108352. 10.1016/j.chb.2024.108352
- Kumar A, Burr P, & Young TM (2024). Using AI text-to-image generation to create novel illustrations for medical education: Current limitations as illustrated by hypothyroidism and Horner syndrome. JMIR Medical Education, 10(1), e52155. 10.2196/52155
- Li S, Chen M, Liu PL, & Xu J (2024). Following medical advice of an AI or a human doctor? Experimental evidence based on clinician-patient communication pathway model. Health Communication, 1–13. 10.1080/10410236.2024.2423114
- Mastro DE (2003). A social identity approach to understanding the impact of television messages. Communication Monographs, 70(2), 98–113. 10.1080/0363775032000133764
- Mazières J, Pujol J-L, Kalampalikis N, Bouvry D, Quoix E, Filleron T, Targowla N, Jodelet D, Milia J, & Milleron B (2015). Perception of lung cancer among the general population and comparison with other cancers. Journal of Thoracic Oncology, 10(3), 420–425. 10.1097/JTO.0000000000000433
- McClure KJ, Puhl RM, & Heuer CA (2011). Obesity in the news: Do photographic images of obese persons influence antifat attitudes? Journal of Health Communication, 16(4), 359–371. 10.1080/10810730.2010.535108
- McDonnell TE, Jonason A, & Christoffersen K (2017). Seeing red and wearing pink: Trajectories of cultural power in the AIDS and breast cancer ribbons. Poetics, 60, 1–15. 10.1016/j.poetic.2016.10.005
- McKinley CJ, Mastro D, & Warber KM (2014). Social identity theory as a framework for understanding the effects of exposure to positive media images of self and other on intergroup outcomes. International Journal of Communication, 8, 1049–1068.
- McWhirter JE, Hoffman-Goetz L, & Clarke JN (2012). Can you see what they are saying? Breast cancer images and text in Canadian women’s and fashion magazines. Journal of Cancer Education, 27(2), 383–391. 10.1007/s13187-011-0305-0
- Miller CA, Guidry JP, & Fuemmeler BF (2019). Breast cancer voices on Pinterest: Raising awareness or just an inspirational image? Health Education & Behavior, 46(2_suppl), 49S–58S. 10.1177/1090198119863774
- Morgan M, Shanahan J, & Signorielli N (2014). Cultivation theory in the twenty-first century. In Fortner RS & Fackler PM (Eds.), The handbook of media and mass communication theory (pp. 480–497). 10.1002/9781118591178.ch26
- Phillips SG, Della LJ, & Sohn SH (2011). What does cancer treatment look like in consumer cancer magazines? An exploratory analysis of photographic content in consumer cancer magazines. Journal of Health Communication, 16(4), 416–430. 10.1080/10810730.2010.546484
- Pinski M, & Benlian A (2024). AI literacy for users: A comprehensive review and future research directions of learning methods, components, and effects. Computers in Human Behavior: Artificial Humans, 2(1), 100062. 10.1016/j.chbah.2024.100062
- Pinski M, Haas M-J, & Benlian A (2024). Building metaknowledge in AI literacy: The effect of gamified vs. text-based learning on AI literacy metaknowledge. In Proceedings of the 57th Hawaii International Conference on System Sciences (HICSS) (pp. 5164–5173).
- Putland E, Chikodzore-Paterson C, & Brookes G (2023). Artificial intelligence and visual discourse: A multimodal critical discourse analysis of AI-generated images of “dementia”. Social Semiotics, 35(2), 228–253. 10.1080/10350330.2023.2290555
- Reffner Collins MK, Lazard AJ, Hedrick McKenzie AM, & Varma T (2024). ‘It’s nothing like cancer’: Young adults with cancer reflect on memorable entertainment narratives. Health Communication, 39(3), 552–562. 10.1080/10410236.2023.2174403
- Romer D, Jamieson P, Bleakley A, & Jamieson KH (2014). Cultivation theory: Its history, current status, and future directions. In Fortner RS & Fackler PM (Eds.), The handbook of media and mass communication theory (pp. 115–136). 10.1002/9781118591178.ch7
- Sezgin E, & McKay I (2024). Behavioral health and generative AI: A perspective on future of therapies and patient care. NPJ Mental Health Research, 3(1), 25. 10.1038/s44184-024-00067-w
- Sharma R (2022). Mapping of global, regional and national incidence, mortality and mortality-to-incidence ratio of lung cancer in 2020 and 2050. International Journal of Clinical Oncology, 27(4), 665–675. 10.1007/s10147-021-02108-2
- Sonmez SC, Sevgi M, Antaki F, Huemer J, & Keane PA (2024). Generative artificial intelligence in ophthalmology: Current innovations, future applications and challenges. British Journal of Ophthalmology, 108(10), 1335–1340. 10.1136/bjo-2024-325458
- Sriram N, Mills J, Lang E, Dickson HK, Hamann HA, Nosek BA, Schiller JH, & Gorlova OY (2015). Attitudes and stereotypes in lung cancer versus breast cancer. PLOS ONE, 10(12), e0145715. 10.1371/journal.pone.0145715
- Sun L, Wei M, Sun Y, Suh YJ, Shen L, & Yang S (2024). Smiling women pitching down: Auditing representational and presentational gender biases in image-generative AI. Journal of Computer-Mediated Communication, 29(1), zmad045. 10.1093/jcmc/zmad045
- Tran K, Delicaet K, Tang T, Ashley LB, Morra D, & Abrams H (2015). Perceptions of lung cancer and potential impacts on funding and patient care: A qualitative study. Journal of Cancer Education, 30(1), 62–67. 10.1007/s13187-014-0677-z
- Vilaro MJ, Wilson-Howard DS, Neil JM, Tavassoli F, Zalake MS, Lok BC, Modave FP, George TJ, Odedina FT, Carek PJ, Mys AM, & Krieger JL (2022). A subjective culture approach to cancer prevention: Rural black and white adults’ perceptions of using virtual health assistants to promote colorectal cancer screening. Health Communication, 37(9), 1123–1134. 10.1080/10410236.2021.1910166
- Wagner LC (2005). “It’s for a good cause”: The semiotics of the pink ribbon for breast cancer in print advertisements. Intercultural Communication Studies, 14(3), 209–216.