BMC Medical Ethics
. 2025 Jul 4;26:82. doi: 10.1186/s12910-025-01249-7

Turkish medical oncologists’ perspectives on integrating artificial intelligence: knowledge, attitudes, and ethical considerations

Efe Cem Erdat 1, Filiz Çay Şenler 1
PMCID: PMC12232021  PMID: 40615865

Abstract

Background

Integrating artificial intelligence (AI), especially large language models (LLMs), into oncology has potential benefits, yet medical oncologists’ knowledge, attitudes, and ethical concerns remain unclear. Understanding these perspectives is particularly relevant in Türkiye, which has approximately 1340 practicing oncologists.

Methods

A cross-sectional online survey was distributed via the Turkish Society of Medical Oncology’s channels from October 16 to November 27, 2024. Data on demographics, AI usage, self-assessed knowledge, attitudes, ethical/regulatory perceptions, and educational needs were collected. Quantitative analyses were performed using descriptive statistics, graphics were generated using R v.4.3.1, and open-ended responses were analyzed qualitatively by hand.

Results

Of 147 respondents (representing about 11% of Turkish oncologists), 77.5% reported prior AI use, mainly LLMs, yet only 9.5% had formal AI education. While most supported integrating AI into prognosis estimation, research, and decision support, concerns persisted regarding patient-physician relationships and social perception. Ethical reservations centered on patient management, scholarly writing, and research design. Over 79% deemed current regulations inadequate and advocated ethical audits, legal frameworks, and patient consent. Nearly all were willing to receive AI training, reflecting a substantial educational gap.

Conclusions

Turkish medical oncologists exhibit cautious optimism toward AI but highlight critical gaps in training, clear regulations, and ethical safeguards. Addressing these needs could guide responsible AI integration. Limitations include a single-country perspective. Further research is warranted to generalize findings and assess evolving attitudes as AI advances.

Trial Registration

Not applicable due to cross-sectional survey design.

Supplementary Information

The online version contains supplementary material available at 10.1186/s12910-025-01249-7.

Keywords: Medical oncology, Artificial intelligence, Ethical considerations, Surveys

Background

Artificial intelligence (AI) is rapidly reshaping various facets of healthcare, including oncology. By leveraging machine learning, deep learning, and large language models (LLMs), AI-driven tools hold promise for enhancing diagnostic accuracy, optimizing treatment planning, refining prognosis estimation, and accelerating research [1, 2]. However, the effective integration of AI into routine oncology practice requires not only robust technologies but also clinicians who are equipped with the requisite knowledge, confidence, and ethical awareness to navigate this evolving landscape [3].

Medical oncologists’ acceptance and appropriate use of AI hinge on several factors. These include adequate education, trust in AI outputs, understanding of algorithmic limitations, data privacy protections, and clearly defined accountability structures. While global interest in AI is burgeoning, little is known about oncologists’ baseline familiarity, attitudes, and concerns, particularly in countries with evolving healthcare landscapes. Türkiye, with approximately 1340 practicing medical oncologists, provides a pertinent context for exploring these issues. Insights gathered from their perspectives can inform targeted educational programs, policy frameworks, and best practices that ensure AI augments, rather than undermines, patient-centered oncology care.

This study aimed to assess Turkish medical oncologists’ experiences and perceptions regarding AI. Specifically, we sought to: (1) quantify their exposure to and familiarity with AI tools, particularly LLM; (2) evaluate their attitudes toward AI’s role in clinical practice, research, and patient interactions; (3) identify their ethical and legal concerns; and (4) determine their educational and regulatory needs. By clarifying these dimensions, we can foster a more informed, responsible approach to implementing AI in oncology, ultimately contributing to safer, more effective patient care.

Materials and methods

Study design and participants

A cross-sectional survey design was employed. The target population comprised medical oncologists who were members of the Turkish Society of Medical Oncology. Eligible participants were medical oncology professionals in Türkiye, including residents and fellows who completed mandatory internal medicine training, as well as medical oncology specialists. With approximately 1340 medical oncologists nationwide, the sample of 147 participants represents roughly 11% of this population [4].

Survey instrument

After a preliminary study with qualitative interviews, we developed a questionnaire through expert consultation with ethics board members and AI experts, supplemented by a brief literature review using the search terms (“artificial intelligence” OR “AI”) AND (“ethics” OR “concerns”). The questionnaire encompassed: (1) demographics and professional background; (2) AI usage patterns and formal AI education; (3) self-assessed AI knowledge in domains such as machine learning, deep learning, and natural language processing; (4) attitudes toward AI in diagnosis, treatment planning, prognosis estimation, research, patient follow-up, and clinical decision support; (5) perceptions of AI’s impact on patient-physician relationships, healthcare access, policy development, workload, and job satisfaction; and (6) ethical and regulatory considerations, including perceived ethically concerning activities, the sufficiency of current legal frameworks, and suggested reforms. Because the qualitative pilot study found high interest in LLMs among AI domains, some survey questions specifically addressed LLMs. An English translation of the survey instrument is available in the supplementary appendix.

Data collection

The survey was administered via the Microsoft Forms (Microsoft Corp., Redmond, WA) online platform from October 16 to November 27, 2024. Invitations were distributed through the Turkish Society of Medical Oncology’s social media channels, instant messaging groups, and email network. Participation was voluntary and anonymous, and began only after electronic informed consent.

Statistical and qualitative analysis

Descriptive statistics (frequency, percentage, median, interquartile range (IQR)) summarized the quantitative data. Ordinal regression was used to evaluate factors associated with knowledge levels, concerns, and attitudes. Post hoc calculations indicated 96% power with a 10% margin of error. Qualitative data from open-ended responses were analyzed manually, identifying recurring themes related to ethical concerns, data security, clinical integration, and educational gaps. All statistical analyses were performed, and figures and graphics were generated, in R version 4.4 (R Foundation for Statistical Computing, Vienna, Austria).
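The paper does not specify the formula behind its post hoc calculation, but the stated 10% margin of error for a sample of 147 drawn from roughly 1340 oncologists can be sanity-checked with the standard proportion-based margin of error under a finite-population correction. This is an illustrative sketch, not the authors' actual computation:

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Approximate 95% margin of error for a survey proportion.

    n: sample size; N: finite population size; p: assumed proportion
    (0.5 is the conservative worst case); z: normal critical value.
    """
    se = math.sqrt(p * (1 - p) / n)        # simple-random-sample standard error
    fpc = math.sqrt((N - n) / (N - 1))     # finite-population correction
    return z * se * fpc

# 147 respondents out of ~1340 Turkish medical oncologists
me = margin_of_error(147, 1340)
print(f"margin of error ≈ {me:.1%}")  # ≈ 7.6%, within the stated 10%
```

Under these assumptions the worst-case margin of error is about 7.6%, which is consistent with (somewhat tighter than) the 10% figure reported in the text.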

Ethical considerations

The study was approved by the institutional ethics committee (AUTF-KAEK 2024/635) and conducted in accordance with the Declaration of Helsinki. No personally identifying information was collected.

Results

Participant characteristics

A total of 147 medical oncologists completed the survey, corresponding to approximately 11% of the estimated 1340 medical oncologists practicing in Türkiye [4]. The median age of participants was 39 years (IQR: 35–46), and 63.3% were male. Respondents had a median of 14 years (IQR: 10–22) of medical experience and a median of 5 years (IQR: 2–14) specifically in oncology. Nearly half (47.6%) practiced in university hospitals, followed by 31.3% in training and research hospitals, and the remainder in private or state settings (Table 1). In terms of academic rank, residents/fellows constituted 38.1%, specialists 22.4%, professors 21.1%, associate professors 16.3%, and assistant professors 2.0%. Respondents were distributed across various urban centers, including major cities such as Istanbul and Ankara, as well as smaller provinces, reflecting a broad regional representation of Türkiye’s oncology workforce.

Table 1.

Demographics, AI usage, and education status of participants

Gender, n (%)
 Male 93 (63.3%)
 Female 54 (36.7%)
Age, median (IQR) 39 (35–46)
Years as physician, median (IQR) 14 (10–22)
Years in oncology, median (IQR) 5 (2–14)
Site of practice, n (%)
 University hospital 70 (47.6%)
 Training and research hospital 46 (31.3%)
 Private hospital 20 (13.6%)
 State hospital 8 (5.4%)
 Private clinic 3 (2.0%)
Educational and academic status, n (%)
 Resident, fellow 56 (38.1%)
 Specialist 33 (22.4%)
 Professor 31 (21.1%)
 Associate professor 24 (16.3%)
 Assistant professor 3 (2.0%)
Used any artificial intelligence before, n (%)*
 ChatGPT and other GPT models 114 (77.5%)
 Google Gemini 25 (17.0%)
 Microsoft Bing 16 (10.9%)
 Others** 13 (8.8%)
 Have not used any 33 (22.5%)
Artificial intelligence education status, n (%)
 Not received any education 133 (90.5%)
 Received basic-level education 10 (6.8%)
 Received advanced-level education 3 (2.0%)
 Received intermediate-level education 1 (0.7%)
Will to receive education for artificial intelligence, n (%)
 Yes 139 (94.6%)
 No 8 (5.4%)
Resources used to acquire knowledge about artificial intelligence, n (%)*
 Colleagues 39 (26.5%)
 Academic publications 34 (23.1%)
 Online courses and websites (e.g., Coursera, EDx) 32 (21.8%)
 Popular science publications 29 (19.7%)
 Conferences and workshops 27 (18.4%)
 Other periodicals 7 (4.8%)
 Others*** 8 (5.4%)
 Did not use any resources 57 (38.8%)

*Percentages shown for total participant counts

**Other artificial intelligence tools include Meta LLaMA, X Grok, Google Bard, Perplexity, and Anthropic Claude

***Other resources include social media and non-academic books

IQR Interquartile range

Most participants completed the survey from the Central Anatolia Region of Türkiye (34.0%, n = 50), followed by the Marmara Region (27.2%, n = 40), Aegean Region (17.0%, n = 25), and Mediterranean Region (10.2%, n = 15). The distribution of participants on a regional map of Türkiye is presented in Fig. 1.

Fig. 1.

Fig. 1

Geographical Distribution of Participants by Regions of Türkiye

AI usage and education

A majority (77.5%, n = 114) of oncologists reported prior use of at least one AI tool, and ChatGPT and other GPT-based models were the most frequently used, indicating that LLM interfaces had already penetrated clinical professionals’ workflows to some extent. Other tools such as Google Gemini (17.0%, n = 25) and Microsoft Bing (10.9%, n = 16) saw more limited use, and just a small fraction had tried less common platforms such as Anthropic Claude, Meta Llama-3, or Hugging Face. Despite this relatively high usage of general AI tools, formal AI education was scarce: only 9.5% (n = 14) of respondents had received any formal AI training, primarily at the basic level. Nearly all (94.6%, n = 139) expressed a desire for more education, suggesting that their forays into AI had been largely self-directed and that there was a perceived need for structured, professionally guided learning.

Regarding sources of AI knowledge, 38.8% (n = 57) reported not using any resource, underscoring a gap in continuing education. Among those who did seek information, the most common channels were colleagues (26.5%, n = 39) and academic publications (23.1%), followed by online courses/websites (21.8%, n = 32), popular science publications (19.7%, n = 29), and professional conferences/workshops (18.4%, n = 27). This pattern suggests that while some clinicians attempt to inform themselves about AI through peer discussions or scientific literature, many remain unconnected to formalized educational pathways or comprehensive training programs.

Self-assessed AI knowledge

Participants generally rated themselves as having limited knowledge across key AI domains (Fig. 2A). More than half reported having “no knowledge” or only “some knowledge” in areas such as machine learning (86.4%, n = 127, combined) and deep learning (89.1%, n = 131, combined). Even fundamental concepts such as LLMs and generative AI were unfamiliar to a substantial portion of respondents: nearly half (47.6%, n = 70) had no knowledge of LLMs, and two-thirds (66.0%, n = 97) had no knowledge of generative AI. Similar trends were observed for natural language processing and advanced statistical analyses, reflecting a widespread lack of confidence and familiarity with the technical underpinnings of AI beyond superficial usage.

Fig. 2.

Fig. 2

Overview of Oncologists’ AI Familiarity, Attitudes, and Perceived Impact. (A) Distribution of participants’ self-assessed AI knowledge, (B) attitudes toward AI in various medical practice areas, and (C) insights into AI’s broader impact on medical practice

Attitudes toward AI integration in oncology

When asked to evaluate AI’s role in various clinical tasks (Fig. 2B), respondents generally displayed cautious optimism. Prognosis estimation stood out as one of the areas where AI received the strongest endorsement, with a clear majority rating it as “positive” or “very positive.” A similar pattern emerged for medical research, where nearly three-quarters of respondents recognized AI’s potential in the academic field. In contrast, opinions on treatment planning and patient follow-up were more mixed, with a considerable proportion adopting a neutral stance. Diagnosis and clinical decision support still garnered predominantly positive views, though some participants expressed reservations, possibly reflecting concerns about reliability, validation, and the interpretability of AI-driven recommendations.

Broadening the perspective, Fig. 2C illustrates how participants viewed AI’s impact on aspects like patient-physician relationships, social perception, and health policy. While most believed AI could improve overall medical practices and potentially reduce workload, many worried it might affect the quality of personal interactions with patients or shape public trust in uncertain ways. Approximately half recognized potential benefits for healthcare access, but some remained neutral or skeptical, perhaps concerned that technology might not equally benefit all patient populations or could inadvertently exacerbate existing disparities.

Ethical and regulatory concerns

Tables 2 and 3, along with Figs. 3A–C, summarize participants’ ethical and legal considerations. Patient management (57.8%, n = 85), article or presentation writing (51.0%, n = 75), and study design (25.2%, n = 37) emerged as key activities where the integration of AI was viewed as ethically questionable. Respondents feared that relying on AI for sensitive clinical decisions or academic tasks could compromise patient safety, authenticity, or scientific integrity. A subset of respondents reported utilizing AI in certain domains, including 13.6% (n = 20) for article and presentation writing, and 11.6% (n = 17) for patient management, despite acknowledging potential ethical issues in the preceding question. However, only about half of the respondents who admitted using AI for patient management identified this as an ethical concern. This discrepancy suggests that while oncologists harbor concerns, convenience or lack of guidance may still drive them to experiment with AI applications.

Table 2.

Ethical concerns regarding AI usage in medical practice

Agrees with the use of artificial intelligence in medical practice, n (%)
 Yes 120 (81.6%)
 No 9 (6.1%)
 Unsure 18 (12.3%)
Thinks practitioners should have an active role in artificial intelligence development, n (%)
 Yes 135 (91.8%)
 No 3 (2.0%)
 Unsure 9 (6.2%)
Considers which activities using artificial intelligence ethically concerning*, n (%)
 Patient management 85 (57.8%)
 Article/presentation writing 75 (51.0%)
 Article/presentation editing 37 (25.2%)
 Study design 37 (25.2%)
 Ethics committee application 33 (22.5%)
 Preparation of communication materials 22 (14.9%)
 None of them 29 (17.7%)
Has performed which of the ethically concerning activities using artificial intelligence*,**, n (%)
 Patient management 17 (11.6%)
 Article/presentation writing 20 (13.6%)
 Article/presentation editing 33 (22.4%)
 Study design 15 (10.2%)
 Ethics committee application 10 (6.8%)
 Preparation of communication materials 17 (11.6%)
 None of them 16 (10.9%)

*Percentages shown for total participant counts

**Optional question in the survey

Table 3.

Views on ethical development and regulations for AI

Suggests which measures should be employed to ensure the ethical development and use of artificial intelligence in medical practice*, n (%)
 Ethical audits 111 (75.5%)
 Artificial intelligence education 105 (71.4%)
 Legal regulations 105 (71.4%)
 Obtaining patient consents 91 (61.9%)
 Establishing working groups/commissions 75 (51.0%)
 Others** 2 (1.5%)
Thinks legal regulations for artificial intelligence applications are satisfactory, n (%)
 No 117 (79.6%)
 Yes 3 (2.0%)
 Unsure 27 (18.4%)
Thinks who bears responsibility for a medical error in artificial intelligence-supported care*, n (%)
 Software developer 100 (68.0%)
 Physician 90 (61.2%)
 Health institution 57 (38.8%)
 Patients and relatives (if informed consent applied) 43 (29.3%)
 Artificial intelligence instructor 35 (23.8%)
Thinks what steps should be taken to close the legal gaps related to artificial intelligence in medical practice*, n (%)
 Establishing national and international standards 121 (82.3%)
 Enacting new laws 87 (59.2%)
 Establishing institutions for artificial intelligence oversight 79 (53.7%)
 Making informed consent mandatory for artificial intelligence use 78 (53.1%)
 Updating existing laws 65 (44.2%)

*Percentages shown for total participant counts

**Other responses held that ethical concerns are the major barriers to developments in artificial intelligence and clinical trials

Fig. 3.

Fig. 3

Ethical Considerations, Implementation Barriers, and Strategic Solutions for AI Integration. (A) Frequency distribution of major ethical concerns, (B) heatmap of implementation challenges across technical, educational, clinical, and regulatory categories, and (C) priority matrix of proposed integration solutions, including training and regulatory frameworks. The implementation time and timeline were extracted from the open-ended questions. Timeline: the estimated time needed for implementation; Implementation time: the urgency of implementation. The timeline and implementation time are fully correlated (R² = 1.0)

Moreover, nearly 82% of participants supported using AI in medical practice, yet 79.6% (n = 117) did not find current legal regulations satisfactory. Over two-thirds advocated for stricter legal frameworks and ethical audits. Patient consent was highlighted by 61.9% (n = 91) as a critical step, implying that clinicians want transparent processes that safeguard patient rights and maintain trust. Liability in the event of AI-driven errors also remained contentious: 68.0% (n = 100) held software developers partially responsible, and 61.2% (n = 90) also implicated physicians. This suggests a shared accountability model might be needed, involving multiple stakeholders across the healthcare and technology sectors.

To address these gaps, respondents proposed various solutions. Establishing national and international standards (82.3%, n = 121) and enacting new laws (59.2%, n = 87) were seen as pivotal. More than half favored creating dedicated institutions for AI oversight (53.7%, n = 79) and integrating informed consent clauses related to AI use (53.1%, n = 78) into patient forms. These collective views point to a strong desire among oncologists for a structured, legally sound environment in which AI tools are developed, tested, and implemented responsibly.

Ordinal regression analysis of factors associated with AI knowledge, attitudes, and concerns

For knowledge levels, the ordinal regression model identified formal AI education as the sole significant predictor (ß = 30.534, SE = 0.6404, p < 0.001). In contrast, other predictors such as age (ß = −0.1835, p = 0.159), years as physician (ß = 0.0936, p = 0.425), years in oncology (ß = 0.0270, p = 0.719), and academic rank showed no significant associations with knowledge levels in the ordinal model.

The ordinal regression for concern levels revealed no significant predictors: neither demographic factors, professional experience, academic status, AI education, nor current knowledge levels were associated with the ordinal progression of ethical and practical concerns (p > 0.05).

For attitudes toward AI integration, the ordinal regression identified two significant predictors. Willingness to receive AI education predicted progression toward more positive attitudes (ß = 13.143, SE = 0.6688, p = 0.049), as did actual receipt of AI education (ß = 12.928, SE = 0.6565, p = 0.049). Additionally, higher knowledge levels showed a trend toward more positive attitudes in the ordinal model, although this did not reach significance (ß = 0.3899, SE = 0.2009, p = 0.052).
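Ordinal (proportional-odds) regression coefficients are on the log-odds scale, so they are commonly interpreted as odds ratios via exponentiation, with a Wald 95% confidence interval of exp(ß ± 1.96·SE). As an illustrative sketch (this transformation is standard practice, not a computation reported in the paper), applying it to the knowledge-level coefficient from the attitudes model (ß = 0.3899, SE = 0.2009):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a log-odds coefficient and its standard error to an
    odds ratio with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Knowledge-level coefficient from the attitudes model in Table 4
or_, lo, hi = odds_ratio_ci(0.3899, 0.2009)
print(f"OR = {or_:.3f}, 95% CI {lo:.3f} to {hi:.3f}")
```

The resulting interval (roughly 0.996 to 2.19) spans 1.0, which matches the borderline p = 0.052 reported for this coefficient.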

Table 4 presents the ordinal regression analyses examining predictors of AI knowledge levels, concerns, and attitudes among Turkish medical oncologists.

Table 4.

Ordinal regression results for assessing the factors affecting knowledge levels, attitudes and concerns

Domain* Factor ß SE p**
Knowledge levels Age (per year increase) −0.1835 0.1303 0.159
Years as physician (per year increase) 0.0936 0.1174 0.425
Years in oncology (per year increase) 0.0270 0.0752 0.719
Educational and academic status (compared to associate professor)
 Assistant professor 0.8900 11.048 0.420
 Professor 11.068 0.7912 0.162
 Specialist −0.3128 0.6078 0.607
 Resident, fellow −0.9552 0.6371 0.134
Will to receive AI education (yes vs. no) −0.6132 0.7887 0.437
AI education status (any vs. none) 30.534 0.6404  <.001
Concern levels Age (per year increase) −0.0875 0.1004 0.384
Years as physician (per year increase) 0.0239 0.0922 0.796
Years in oncology (per year increase) 0.0175 0.0544 0.747
Educational and academic status (compared to associate professor)
 Assistant professor −0.7438 11.487 0.517
 Professor 0.1743 0.7193 0.809
 Specialist −0.0620 0.5116 0.904
 Resident, fellow −0.6821 0.5423 0.208
Will to receive AI education (yes vs. no) −0.7755 0.7440 0.298
AI education status (any vs. none) −0.5828 0.6318 0.356
Knowledge levels (per one point increase in Likert scale) 0.0384 0.1992 0.847
Attitude levels Age (per year increase) 0.0637 0.1084 0.557
Years as physician (per year increase) −0.0361 0.0991 0.715
Years in oncology (per year increase) −0.0118 0.0660 0.858
Educational and academic status (compared to associate professor)
 Assistant professor 0.0704 11.448 0.951
 Professor −0.4625 0.7904 0.558
 Specialist −0.1383 0.5611 0.805
 Resident, fellow −0.1709 0.5977 0.775
Will to receive AI education (yes vs. no) 13.143 0.6688 0.049
AI education status (any vs. none) 12.928 0.6565 0.049
Knowledge levels (per one point increase in Likert scale) 0.3899 0.2009 0.052

AI Artificial intelligence, ß Beta estimate in ordinal regression, SE Standard error

*Domains are evaluated with median of each component of Likert scales. One-point increase indicates higher knowledge levels and concerns and more positive attitudes

**Significant p-values are shown in bold

Qualitative insights

The open-ended responses, analyzed qualitatively, revealed several recurring themes reinforcing the quantitative findings. Participants frequently stressed the importance of human oversight, emphasizing that AI should complement rather than replace clinical expertise, judgment, and empathy. Data security and privacy emerged as central concerns, with some respondents worrying that insufficient safeguards could lead to breaches of patient confidentiality. Others highlighted the challenge of ensuring that AI tools maintain cultural and social sensitivity in diverse patient populations. Calls for incremental, well-regulated implementation of AI were common, as was the suggestion that education and ongoing professional development would be essential to ensuring clinicians use AI effectively and ethically.

In essence, while there is broad acknowledgment that AI holds promise for enhancing oncology practice, respondents also recognize the need for clear ethical standards, solid regulatory frameworks, comprehensive training, and thoughtful integration strategies.

Discussion

This national survey of Turkish medical oncologists shows cautious enthusiasm for integrating AI, particularly generative AI and LLMs, into oncology practice; the pilot study for our survey had shown particular interest in LLMs alongside non-generative AI applications in decision-making. While participants acknowledge AI’s potential to enhance decision-making, research, and treatment optimization, they highlight substantial unmet needs in education, ethics, and regulation.

The high percentage of oncologists who have used AI tools—particularly LLMs—illustrates growing interest. Yet the near absence of formal training and the widespread desire for education suggest that professional societies, universities, and regulatory bodies must develop tailored programs. Such training could focus on critical interpretation of AI outputs, data governance, algorithmic bias, and validation processes to ensure that clinicians remain informed, confident users of these tools.

Many participants expressed positive attitudes toward AI’s impact on prognosis estimation and research, which may be attributable to its usefulness for hypothesis generation, literature synthesis, and data analysis. This aligns with global trends, where AI excels at processing vast datasets to identify patterns and guide evidence-based practice [5, 6]. Concerns regarding patient-physician relationships highlight the necessity of preserving a humanistic approach. Additionally, the potential decrease in job satisfaction may be attributed to cultural factors specific to Türkiye, which warrant further investigation. AI should function as a supportive tool rather than a substitute for the empathy, communication, and clinical judgment that are essential to quality oncology care.

Ethical and regulatory challenges emerged prominently. Respondents identified ethically concerning activities involving patient management and academic work, suggesting that misuse or misinterpretation of AI outputs could compromise patient safety and scientific integrity. The main concerns are consistent with recent literature; notably, major large language models such as GPT incorporate ethical reasoning and information about the risks of AI use in clinical practice, which helps communicate those risks [7–11]. Moreover, much of the literature on artificial intelligence ethics emphasizes publication ethics, reflecting recent developments in AI practice and its increasing use [12, 13]. With most participants deeming current legal frameworks inadequate, developing robust standards, clear guidelines, and oversight institutions is essential. Türkiye’s experience, though specific to one country, may reflect broader global needs. International collaborations and harmonized regulations can mitigate uncertainty, clarify liability, and ensure that advances in AI align with ethical principles and patient welfare, as recent literature also suggests for legal principles [14–17].

The ordinal regression findings demonstrate that both wanting AI education and having received it strongly predict positive attitudes toward AI in oncology. Most strikingly, formal AI education increased knowledge levels dramatically, suggesting that even brief training can transform oncologists from novices to knowledgeable users. While education successfully improves both knowledge and attitudes, it does not reduce concerns about AI implementation, which remain consistent across all oncologist groups regardless of experience or training. This indicates that building AI competency through education is essential for acceptance, but addressing ethical and practical concerns will require additional strategies beyond individual training programs.

In the open-ended questions, participants primarily mentioned data security and AI's potential impact on professional practice, such as job loss and reputational harm, which may reflect fear and anxiety about the unknown; some studies from Türkiye are consistent with our results [18–20]. Several respondents flagged potential ethical issues, particularly the illegal trading of data and the lack of confidentiality, as major problems. A few participants reported using non-generative AI technologies, such as radiomics systems, whereas the majority did not; this may be attributed to the widespread prevalence of large language models (LLMs) in practical applications. The open-ended responses also revealed that participants predominantly seek formal education on AI in the immediate future, while recommending that clinical application be postponed. The qualitative analyses indicated that most participants hold negative perceptions and future expectations regarding AI's impact on oncology. Additionally, the majority demonstrated a lack of awareness of non-generative AI systems in oncology practice, although some expressed a desire for widely available AI-augmented risk models.

In oncology research, while many studies focus on large language models (LLMs), others utilize methods such as big data analysis, imaging modalities, and genomic exploration. Advances in machine learning techniques, risk modeling, reasoning, radiogenomics, the availability of extensive datasets, AI-augmented data extraction, and various bioinformatics approaches signify notable progress for oncology practice. These innovations have the potential to greatly enhance cancer care. Nevertheless, it is imperative to address ethical considerations. As the field continues to develop, additional concerns are likely to emerge.

This study’s limitations include its single-country focus and reliance on self-reported data. While Türkiye’s approximately 1340 oncologists provide a meaningful context, findings may not generalize to other countries with different healthcare systems or regulatory environments. The survey did not ask about the purpose of use for particular AI tools, limiting how broadly the results apply across groups. Nonetheless, these insights can inform stakeholders worldwide about common concerns and aspirations surrounding AI in oncology. Additionally, while our survey instrument was developed through expert consultation and preliminary qualitative interviews, we did not conduct formal psychometric validation prior to data collection, which may limit its reliability.

Future research should explore qualitative interviews, focus groups, and longitudinal assessments to capture evolving attitudes and the effects of educational interventions or policy changes over time. Comparative studies across multiple countries and regions would also help clarify cultural and systemic factors influencing AI adoption in oncology.

Conclusions

Turkish medical oncologists recognize AI’s potential to enhance oncology practice but emphasize critical gaps in education, ethical standards, and regulatory frameworks. Their cautious optimism signals a need for proactive measures—comprehensive training, transparent policies, robust oversight, and patient-centered guidelines—that ensure AI augments clinical expertise without eroding trust or professional integrity. Although limited by a single-country perspective, these findings offer valuable lessons for global efforts to integrate AI responsibly into cancer care.

Supplementary Information

Additional file 1. (17.8KB, docx)

Acknowledgements

Not applicable.

Abbreviations

AI

Artificial intelligence

GPT

Generative pre-trained transformer

IQR

Interquartile range

LLM

Large language models

Authors’ contributions

ECE: Survey design, data collection, formal analysis, conceptualization, writing. FÇŞ: Survey design, formal analysis, conceptualization, editing.

Funding

The study was not funded by any organization or corporation.

Data availability

The survey instrument is provided in the supplementary appendix. The collected data will be extracted and translated into English upon reasonable request.

Declarations

Ethics approval and consent to participate

The institutional ethics committee approved the study protocol (Ankara University Faculty of Medicine, Ethics Board of Human Research, approval number: AUTF-KAEK 2024/635, approval date: 10.10.2024). Prior to completing the survey, all participants provided informed consent to participate in the study. The research was conducted in compliance with the Declaration of Helsinki.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Bhinder B, Gilvary C, Madhukar NS, Elemento O. Artificial intelligence in cancer research and precision medicine. Cancer Discov. 2021;11:900–15.
  2. Elemento O, Leslie C, Lundin J, Tourassi G. Artificial intelligence in cancer research, diagnosis and therapy. Nat Rev Cancer. 2021;21:747–52.
  3. Yu K-H, Healey E, Leong T-Y, Kohane IS, Manrai AK. Medical artificial intelligence and human values. N Engl J Med. 2024;390:1895–904.
  4. Turkish Society for Medical Oncology. Medical oncology clinics. https://www.kanser.org/saglik/tibbi-onkoloji-klinikleri. Accessed 12 Dec 2024.
  5. Khan Rony MK, Akter K, Nesa L, Islam MT, Johra FT, Akter F, et al. Healthcare workers’ knowledge and attitudes regarding artificial intelligence adoption in healthcare: a cross-sectional study. Heliyon. 2024;10:e40775.
  6. Hamedani Z, Moradi M, Kalroozi F, Manafi Anari A, Jalalifar E, Ansari A, et al. Evaluation of acceptance, attitude, and knowledge towards artificial intelligence and its application from the point of view of physicians and nurses: a provincial survey study in Iran: a cross-sectional descriptive-analytical study. Health Sci Rep. 2023;6.
  7. Pearson GS. Artificial intelligence and publication ethics. J Am Psychiatr Nurses Assoc. 2024;30:453–5.
  8. Parikh RB, Teeple S, Navathe AS. Addressing bias in artificial intelligence in health care. JAMA. 2019;322:2377–8.
  9. Akinrinmade AO, Adebile TM, Ezuma-Ebong C, Bolaji K, Ajufo A, Adigun AO, et al. Artificial intelligence in healthcare: perception and reality. Cureus. 2023. 10.7759/cureus.45594.
  10. Dergaa I, Chamari K, Zmijewski P, Ben Saad H. From human writing to artificial intelligence generated text: examining the prospects and potential threats of ChatGPT in academic writing. Biol Sport. 2023;40:615–22.
  11. See KC. Using artificial intelligence as an ethics advisor. Ann Acad Med Singap. 2024;53:454–5.
  12. Wiwanitkit S, Wiwanitkit V. Artificial intelligence, academic publishing, scientific writing, peer review, and ethics. Braz J Cardiovasc Surg. 2024;39:e20230377.
  13. Kocak Z. Publication ethics in the era of artificial intelligence. J Korean Med Sci. 2024;39(33):e249.
  14. Stewart C, Wong SKY, Sung JJY. Mapping ethico-legal principles for the use of artificial intelligence in gastroenterology. J Gastroenterol Hepatol. 2021;36:1143–8.
  15. Currie G, Hawk KE. Ethical and legal challenges of artificial intelligence in nuclear medicine. Semin Nucl Med. 2021;51:120–5.
  16. Lang M, Bernier A, Knoppers BM. Artificial intelligence in cardiovascular imaging: “unexplainable” legal and ethical challenges? Can J Cardiol. 2022;38:225–33.
  17. Hedderich DM, Weisstanner C, Van Cauter S, Federau C, Edjlali M, Radbruch A, et al. Artificial intelligence tools in clinical neuroradiology: essential medico-legal aspects. Neuroradiology. 2023;65:1091.
  18. Gherheş V. Why are we afraid of artificial intelligence (AI)? Eur Rev Appl Sociol. 2018;11:6–15.
  19. Civaner MM, Uncu Y, Bulut F, Chalil EG, Tatli A. Artificial intelligence in medical education: a cross-sectional needs assessment. BMC Med Educ. 2022;22:1–9.
  20. Yılmaz C, Erdem RZ, Uygun LA. Artificial intelligence knowledge, attitudes and application perspectives of undergraduate and specialty students of faculty of dentistry in Turkey: an online survey research. BMC Med Educ. 2024;24:1149.
