Digital Health
2025 Dec 2;11:20552076251404507. doi: 10.1177/20552076251404507

From awareness to action: A nationwide survey of infectious diseases and clinical microbiology physicians’ perspectives on artificial intelligence in clinical practice

Ezgi Gülten 1, Okan Derin 2,3, Eyüp Arslan 4, Ceren Atasoy Tahtasakal 2, Fatih Temoçin 5, Funda Memişoğlu 6, Nazlım Aktuğ Demir 7
PMCID: PMC12673049  PMID: 41346940

Abstract

Background

The integration of artificial intelligence (AI) into clinical practice is gaining momentum globally, yet specialty-specific perspectives remain underexplored. This study aimed to assess the awareness, knowledge, attitudes, expectations, and concerns of infectious diseases and clinical microbiology (IDCM) physicians regarding AI applications in their field.

Methods

A cross-sectional, online survey was distributed between May and June 2025 to IDCM physicians across Türkiye. The questionnaire included multiple-choice, Likert-type, and open-ended items assessing sociodemographic characteristics, AI familiarity, clinical use, and perceptions. Descriptive and inferential statistics, along with thematic analysis of qualitative responses, were employed.

Results

In total, 387 IDCM physicians completed the survey. While 0.5% (n = 2) reported prior long-term/extensive AI training, 88.9% (n = 344) agreed that IDCM physicians should be actively involved in AI system development. Notably, 23.0% (n = 89) had already used AI tools, primarily ChatGPT (n = 69, 77.5%). Regarding accountability, 68.2% (n = 264) assigned responsibility for erroneous AI-generated decisions to physicians. Familiarity with AI showed a strong association with academic title (p < .001). Total knowledge scores were significantly higher among university hospital physicians (p < .001), whereas total attitude scores differed across age (p = .003), academic title (p = .001), and years of experience (p = .006). Thematic analysis of 97 open-ended responses revealed high expectations for AI in enhancing decision support, timeliness, and operational efficiency. However, major concerns included ethical risks, algorithmic bias, data reliability, and potential erosion of clinical autonomy.

Conclusions

This study provides comprehensive insights into IDCM physicians’ perspectives on AI. Findings highlight strong interest but limited preparedness, underscoring the need for targeted education, ethical safeguards, and inclusive policy frameworks to ensure responsible AI integration.

Keywords: Artificial intelligence, clinical microbiology, digital health, infectious diseases, physician attitudes

Background

The rapid evolution of artificial intelligence (AI) technologies is transforming the landscape of modern healthcare, offering innovative tools for diagnosis, treatment optimization, risk stratification, and clinical decision support. 1 In the field of infectious diseases and clinical microbiology (IDCM), AI holds particular promise for addressing longstanding challenges such as antimicrobial stewardship, early outbreak detection, infection control surveillance, and laboratory data interpretation. 2 These technologies can facilitate real-time analytics, automate routine processes, and support more personalized and timely interventions. 2 However, the meaningful integration of AI into clinical workflows requires more than technological advancement. It depends fundamentally on the preparedness, awareness, and engagement of end-users, particularly frontline physicians who bear the responsibility of clinical decision-making in uncertain and high-pressure contexts. 3 Several additional concerns continue to challenge the integration of AI into everyday clinical practice. These include issues related to data privacy, algorithmic transparency, potential bias in AI-generated outputs, and limited clinician training or clear ethical and legal accountability frameworks. 4 Addressing these barriers is essential to ensure that AI adoption remains both safe and clinically meaningful.

Understanding how IDCM physicians perceive and engage with AI is essential for guiding responsible implementation. Existing literature has primarily centered on technical feasibility or general practitioner attitudes, often overlooking the specialized demands and decision-making environments of IDCM physicians. 5 As a specialty that operates at the intersection of individualized patient care, public health response, and microbiological diagnostics, IDCM presents distinct opportunities and challenges for AI integration. 6 Therefore, assessing the familiarity, attitudes, and expectations of IDCM physicians is critical not only for identifying practical and acceptable use cases but also for informing the development of ethical, context-sensitive, and clinically impactful AI tools.

To address this gap, the present study aimed to evaluate the current landscape of AI-related awareness, knowledge, and attitudes among IDCM physicians in Türkiye. By surveying a diverse cohort of specialists and residents across institutional settings, the study sought to identify patterns of experience, perceived barriers, and readiness for future AI integration. The findings are intended to inform both local and global strategies for facilitating clinician-centered AI adoption in infectious disease practice.

Methods

This descriptive, cross-sectional study aimed to evaluate the awareness, knowledge, attitudes, expectations, and concerns of IDCM specialists and residents in Türkiye. The study was conducted between May 26, 2025, and June 15, 2025, using an anonymous, voluntary, and online questionnaire distributed digitally via professional networks, institutional mailing lists, and relevant society platforms. Participation was open to all IDCM specialists and residents in Türkiye who were currently working in inpatient healthcare settings. Participants were recruited using a non-probability convenience sampling method. Respondents were required to provide electronic informed consent prior to survey initiation, and incomplete responses were excluded from the final analysis. The minimum required sample size was calculated as 384 using Cochran's formula assuming a large (effectively infinite) population, with a 95% confidence level and a 5% margin of error. Data collection was terminated upon reaching this threshold. At the time of closure, a total of 387 complete responses had been recorded and included in the analysis. According to national data, approximately 4000 IDCM physicians are currently practicing in Türkiye. Thus, the achieved sample of 387 participants corresponds to roughly 10% of this professional population. This study was conducted and reported in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines for cross-sectional studies.
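As a minimal sketch of the sample-size arithmetic described above (the function name is ours; maximum variability, p = 0.5, is assumed, as is conventional when no prior proportion estimate is available):

```python
def cochran_n0(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> float:
    """Cochran's sample-size formula for a large (effectively infinite)
    population: n0 = z^2 * p * (1 - p) / e^2."""
    return (z ** 2) * p * (1 - p) / (e ** 2)

# 95% confidence level (z = 1.96) and a 5% margin of error:
n0 = cochran_n0()
print(round(n0, 2))  # 384.16, consistent with the reported minimum of 384
```

With a finite-population correction for roughly 4000 practicing IDCM physicians the required n would be somewhat lower, so the infinite-population figure used here is the more conservative choice.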

The survey instrument was developed based on a comprehensive review of the literature and iterative feedback from professionals in IDCM. Three IDCM specialists, one medical microbiologist, and one biostatistician with expertise in survey methodology reviewed the draft questionnaire to ensure content validity and methodological robustness. Prior to distribution, the questionnaire was piloted among twelve IDCM physicians (residents and specialists) to evaluate item clarity, wording, and overall structure. Minor revisions were made based on their feedback. Internal consistency was assessed using Cronbach's alpha (α = 0.84 for the attitude domain and α = 0.81 for the knowledge domain). The questionnaire comprised five main sections: (a) sociodemographic characteristics and professional background (e.g., age, gender, academic title, institutional setting, and years of experience), (b) awareness and knowledge of AI, (c) attitudes toward AI applications in IDCM, (d) institutional practices and access to AI tools, and (e) expectations and concerns regarding future AI integration in the field. It included multiple-choice items, 5-point Likert-type scales, and open-ended questions. Attitudinal items were rated on a five-point Likert scale, where 1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, and 5 = strongly agree. Items assessing self-reported knowledge were rated on a five-point scale, where 1 = no knowledge, 2 = limited knowledge, 3 = moderate knowledge, 4 = good knowledge, and 5 = very good knowledge.
Items related to perceived knowledge were designed to evaluate self-reported familiarity with AI applications in domains such as infection diagnosis, antibiotic stewardship, hand hygiene surveillance, isolation practices, outbreak monitoring, and clinical decision support, whereas attitudinal items assessed physicians’ perceptions of AI's role in clinical decision-making, ethical implications, patient privacy, professional autonomy, and regulatory governance. Cumulative scores were calculated to assess participants’ overall knowledge and attitudes regarding AI. The total knowledge score was derived from 12 items (each rated on a 5-point Likert scale), resulting in a total possible score range of 12–60. The total attitude score was calculated from 11 items (each rated on a 5-point Likert scale), yielding a total possible range of 11–55. Higher scores in both domains indicated greater knowledge and more favorable attitudes toward AI. The full content of the survey instrument is available in Supplemental Material 1.

Descriptive statistics were used to summarize categorical variables as frequencies and percentages, and continuous or ordinal variables as means with standard deviations (SD) or medians with minimum and maximum values. Group differences were analyzed using the chi-square test for categorical variables and the Kruskal–Wallis test for continuous or ordinal variables. When expected cell counts were less than five in contingency tables, the Fisher–Freeman–Halton exact test was used instead of the chi-square test. Correlations between continuous variables were assessed using Spearman's rank correlation. Open-ended responses concerning participants’ expectations and concerns were analyzed thematically; thematic analysis was conducted manually by the research team through an inductive approach. The analysis was independently performed by two researchers, and discrepancies in theme identification were resolved through discussion to reach consensus. Recurrent themes were identified, categorized, and exemplified using representative participant quotes. To control for Type I error inflation across multiple statistical tests, corrections for multiple comparisons were applied using the Benjamini–Hochberg False Discovery Rate method. Adjusted p-values < .05 were considered statistically significant. Effect sizes were also calculated to complement p-values and indicate the magnitude of associations or group differences. H(df) and χ²(df) denote the Kruskal–Wallis and chi-square test statistics with corresponding degrees of freedom, respectively.
Specifically, Cramer's V was computed for the chi-square and Fisher–Freeman–Halton exact tests, epsilon squared (ε²) values were calculated for the Kruskal–Wallis analyses, and Spearman's correlation coefficient (r) itself was interpreted as the effect size for correlation analyses. Effect sizes were interpreted according to established conventions: for Cramer's V, values of 0.10, 0.30, and 0.50 were considered to represent small, moderate, and large effects, respectively, particularly for 2 × 2 contingency tables. For ε², values of approximately 0.01, 0.06, and 0.14 were interpreted as small, moderate, and large effect sizes, respectively, and for r, values of 0.10, 0.30, and 0.50 indicated small, moderate, and large correlations, respectively. Effects were described as trivial when p-values exceeded .05, regardless of the effect size magnitude. All statistical analyses were performed using IBM SPSS Statistics v26.0.
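As a generic sketch of the statistical machinery described above (a reconstruction, not the authors' code: the function names and the p-values are ours and purely illustrative, and the H-based effect size is written in the (H − k + 1)/(n − k) form, which reproduces the ε² values reported in Table 5):

```python
import math

def benjamini_hochberg(p_values):
    """Benjamini-Hochberg FDR adjustment: scale the i-th smallest p-value
    by m/i, then enforce monotonicity from the largest rank downward."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, p_values[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

def cramers_v(chi2, n, rows, cols):
    """Cramer's V = sqrt(chi2 / (n * (min(rows, cols) - 1)))."""
    return math.sqrt(chi2 / (n * (min(rows, cols) - 1)))

def h_effect_size(h, n, k):
    """Effect size from the Kruskal-Wallis H statistic with k groups:
    (H - k + 1) / (n - k)."""
    return (h - k + 1) / (n - k)

adj = benjamini_hochberg([0.001, 0.012, 0.030, 0.040, 0.200])  # illustrative p-values
# Previous AI training (3 levels) by age group (4 levels), chi-square = 21.53, n = 387:
v = cramers_v(21.53, 387, rows=4, cols=3)   # ~0.167 (0.166 reported)
# Total knowledge score by institution type (k = 4), H = 16.489, n = 387:
eps = h_effect_size(16.489, 387, k=4)       # ~0.035, matching the reported value
```

On the illustrative inputs, `benjamini_hochberg` returns [0.005, 0.03, 0.05, 0.05, 0.2]: the smaller raw p-values are inflated toward their FDR-adjusted values while their ordering is preserved.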

Results

Demographic characteristics

In total, 387 IDCM physicians participated in the survey. Participants aged 24–35 years constituted 50.6% (n = 196) of the sample, and 71.3% (n = 276) identified as female. University hospitals accounted for 49.6% (n = 192) of institutional affiliations. Among the participants, 166 (42.9%) were residents and 102 (26.3%) were specialists, representing the two largest academic title groups in the study. In terms of professional experience, 43.7% (n = 169) had ≤5 years, and 31.3% (n = 121) had ≥16 years. Demographic characteristics of the participants are presented in Table 1.

Table 1.

Demographic characteristics of the participants (na = 387).

Variable Category n (%)
Age 24–35 years 196 (50.6)
36–45 years 77 (19.9)
46–55 years 73 (18.9)
≥56 years 41 (10.6)
Gender Female 276 (71.3)
Male 108 (27.9)
Prefer not to say 3 (0.8)
Academic title Resident 166 (42.9)
Specialist 102 (26.3)
Assistant professor 24 (6.2)
Associate professor 49 (12.7)
Professor 46 (11.9)
Institution type Private hospital 20 (5.2)
State hospital 53 (13.7)
Training and research hospital 122 (31.5)
University hospital 192 (49.6)
Years of experience 0–5 years 169 (43.7)
6–10 years 44 (11.3)
11–15 years 53 (13.7)
≥16 years 121 (31.3)
a

The number of participants.

General exposure to and familiarity with artificial intelligence

The initial exposure to AI among participants predominantly occurred via social media platforms (n = 292, 75.4%). While 52.5% (n = 203) of the respondents reported basic familiarity with AI, 90.7% (n = 351) had not received any formal AI training. Nonetheless, 79.6% (n = 308) indicated that they would be willing to use AI tools if made accessible by their institutions. Table 2 presents detailed data on participants’ first exposure to AI, their familiarity levels, and previous training and usage patterns.

Table 2.

General knowledge and exposure to artificial intelligence (na = 387).

Item Response options n (%)
First exposure to AIb (among those who have heard of AI, n = 384; percentages are of all 387 respondents) Social media 292 (75.4)
Social circle 13 (3.3)
Scientific publications 22 (5.6)
Scientific meetings/trainings 57 (14.7)
Familiarity with AI Never heard 3 (0.8)
Basic knowledge 203 (52.5)
Moderate knowledge 165 (42.6)
Advanced knowledge 16 (4.1)
Openness to improving oneself in AI Not at all 9 (2.4)
Slightly 35 (9.0)
Moderately 153 (39.5)
Very 119 (30.8)
Definitely 71 (18.3)
Previous AI training No 351 (90.7)
Yes, short-term 34 (8.8)
Yes, long-term/extensive 2 (0.5)
Familiarity with AI software Never heard 87 (22.5)
Basic knowledge 256 (66.1)
Moderate knowledge 41 (10.6)
Advanced knowledge 3 (0.8)
Participation in AI software projects No 381 (98.4)
Yes 6 (1.6)
Use of AI in clinical/professional settings No 298 (77.0)
Yes 89 (23.0)
Institutional AI system availability No 269 (69.5)
Not sure 103 (26.6)
Yes 15 (3.9)
Institutional AI-based clinical decision support system availability No 260 (67.2)
Not sure 111 (28.7)
Yes 16 (4.1)
Intention to use AI tools if institutionally accessible No 16 (4.1)
Yes 308 (79.6)
Undecided 63 (16.3)
a

The number of participants.

b

Artificial intelligence.

Eighty-nine respondents (23.0%) reported prior use of AI tools in clinical or professional contexts, with ChatGPT (OpenAI, USA) being the most frequently referenced platform, acknowledged by 69 (77.5%) participants. Reported applications of ChatGPT spanned a wide range of tasks. In clinical settings, it was used for differential diagnosis, antibiotic selection and dosing, treatment duration decisions, and dermatological lesion identification. In academic and research domains, participants utilized the tool for literature review, data analysis, study design and interpretation, summarization of scientific articles and guidelines, and the construction of tables and decision algorithms. Additionally, ChatGPT was frequently employed in educational and administrative tasks such as preparing presentations, creating visual materials (e.g., posters and invitations), translation, and language editing. Other platforms mentioned by participants included Google Gemini (Google LLC, USA), Perplexity (Perplexity AI, USA), Abacus/DeepAgent (Abacus.AI, USA), Grok (xAI, USA), DeepSeek (DeepSeek AI, China), Genspark (Genspark AI, USA), Notebook LM (Google Research, USA), Napkin.ai (Napkin, Inc., USA), and Claude (Anthropic, USA).

Uncertainty regarding the presence of AI-based systems within institutions was reported by 103 participants (26.6%). Among the 16 participants (4.1%) who confirmed institutional AI availability, reported applications included radiological diagnosis, cancer screening, antibiotic prescribing in sepsis and pneumonia, diagnostic algorithms for various clinical scenarios, electrocardiogram interpretation, pathological diagnosis, and educational purposes.

Attitudes toward AI implementation and governance

A plurality of respondents agreed or strongly agreed that AI could enhance clinical decision-making (n = 163, 42.1%) and contribute to antimicrobial stewardship efforts (n = 189, 48.8%). Frequently reported concerns were ethical implications (n = 197, 51.0%), patient privacy (n = 182, 47.1%), and the potential erosion of professional autonomy (n = 168, 43.4%). Participants strongly emphasized the importance of adapting AI systems to patient-specific data (n = 244, 63.1%) and maintaining the role of AI as a support tool rather than a sole decision-maker (n = 293, 75.7%). Additionally, 59.9% (n = 232) supported integrating AI education into specialty training, while 75.4% (n = 292) endorsed regulation within a legal and ethical framework. The median total attitude score among all participants was 24 (possible range: 11–55). Likert-scale responses reflecting physicians’ attitudes toward AI implementation and governance in IDCM are presented in Table 3.

Table 3.

Likert-scale responsesa reflecting physicians’ attitudes toward artificial intelligence implementation and governance in infectious diseases and clinical microbiology (nb = 387).

Strongly disagree, n (%) Disagree, n (%) Neither agree nor disagree, n (%) Agree, n (%) Strongly agree, n (%)
AIc improves clinical decision-making 8 (2.1) 58 (15.0) 158 (40.8) 109 (28.2) 54 (13.9)
AI strengthens antimicrobial stewardship 17 (4.4) 45 (11.6) 136 (35.2) 127 (32.8) 62 (16.0)
I have a high level of trust in AI systems 25 (6.5) 102 (26.3) 181 (46.8) 57 (14.7) 22 (5.7)
AI may create distance in the physician–patient relationship 39 (10.1) 84 (21.7) 128 (33.1) 92 (23.8) 44 (11.3)
AI has the potential to raise ethical concerns 18 (4.6) 59 (15.2) 113 (29.2) 114 (29.5) 83 (21.5)
AI may pose a threat to patient privacy 26 (6.7) 69 (17.8) 110 (28.4) 113 (29.3) 69 (17.8)
AI may undermine professional autonomy 17 (4.4) 77 (19.9) 125 (32.3) 111 (28.7) 57 (14.7)
AI systems should be tailored to patient-specific data 9 (2.3) 35 (9.0) 99 (25.6) 123 (31.8) 121 (31.3)
AI must be regulated within a legal and ethical framework 7 (1.8) 20 (5.2) 68 (17.6) 109 (28.1) 183 (47.3)
AI education should be integrated into specialty training 13 (3.4) 41 (10.6) 101 (26.1) 112 (28.9) 120 (31.0)
AI should function as a support tool, not a decision-maker 7 (1.8) 20 (5.2) 67 (17.3) 91 (23.5) 202 (52.2)
a

Responses were measured on a 5-point Likert scale: 1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, 5 = strongly agree.

b

The number of participants.

c

Artificial intelligence.

Self-perceived knowledge and application areas

Participants reported low to moderate self-perceived knowledge regarding AI applications across both clinical and operational domains. The highest levels of perceived competence were observed in the areas of infectious disease diagnosis (41.2% moderate to very good knowledge, n = 159) and medical imaging analysis (n = 158, 40.9%). In contrast, knowledge levels were particularly low in domains such as pandemic modeling, hand hygiene monitoring, and monitoring of isolation practices, where 47.5% (n = 184), 47.3% (n = 183), and 46.0% (n = 178) of the respondents indicated no knowledge, respectively. Among all participants, 87 (22.5%) reported the lowest possible knowledge score (1: No knowledge) across all AI domains, whereas only five individuals (1.3%) reported the highest score (5: Very good knowledge) in all assessed areas. The median total knowledge score among all participants was 24 (12–60). Physicians’ perceived knowledge levels regarding the use of AI in different clinical and operational contexts are summarized in Table 4.

Table 4.

Physicians’ perceived knowledge levelsa regarding the use of artificial intelligence in different clinical and operational contexts (nb = 387).

No knowledge, n (%) Limited knowledge, n (%) Moderate knowledge, n (%) Good knowledge, n (%) Very good knowledge, n (%)
Diagnosis of infectious diseases 114 (29.4) 114 (29.4) 97 (25.1) 48 (12.5) 14 (3.6)
Decision support for antibiotic selection 134 (34.6) 113 (29.2) 86 (22.2) 39 (10.1) 15 (3.9)
Analysis of antibiotic consumption and appropriateness 142 (36.7) 124 (32.0) 77 (19.9) 31 (8.0) 13 (3.4)
Hand hygiene training 175 (45.2) 80 (20.7) 71 (18.3) 46 (11.9) 15 (3.9)
Hand hygiene monitoring 183 (47.3) 84 (21.7) 62 (16.0) 44 (11.4) 14 (3.6)
Isolation precautions training 168 (43.4) 94 (24.3) 73 (18.9) 40 (10.3) 12 (3.1)
Monitoring of isolation practices 178 (46.0) 92 (23.8) 65 (16.8) 38 (9.8) 14 (3.6)
Surveillance of healthcare-associated infections 170 (43.9) 89 (23.0) 78 (20.2) 34 (8.8) 16 (4.1)
Pandemic modeling 184 (47.5) 90 (23.3) 63 (16.3) 32 (8.3) 18 (4.6)
Interpretation of microbiological data 151 (39.0) 87 (22.5) 92 (23.8) 42 (10.8) 15 (3.9)
Medical imaging analysis 134 (34.6) 95 (24.5) 80 (20.7) 51 (13.2) 27 (7.0)
Integration of clinical guidelines 165 (42.6) 82 (21.2) 84 (21.7) 43 (11.1) 13 (3.4)
a

Knowledge levels were self-assessed by participants on a 5-point Likert scale, where 1 = no knowledge, 2 = limited knowledge, 3 = moderate knowledge, 4 = good knowledge, and 5 = very good knowledge.

b

The number of participants.

Participants were also asked to identify up to three domains in which AI could provide the greatest benefit in IDCM. Diagnosis was most frequently selected (n = 254, 65.6%), followed by antibiotic consumption analysis (n = 185, 47.8%), monitoring of guideline adherence (n = 160, 41.3%), treatment optimization (n = 154, 39.8%), and infection control (n = 138, 35.6%). Additional areas included educational support (n = 103, 26.6%), infection prevention (n = 63, 16.3%), triage and patient classification (n = 60, 15.5%), and drug interaction detection (n = 1, 0.3%).

Perspectives on responsibility and professional roles

A total of 344 participants (88.9%) agreed that IDCM physicians should be involved in the development of AI systems. Regarding responsibility in the case of erroneous AI-generated decisions, 264 respondents (68.2%) held physicians accountable, 233 (60.2%) cited the healthcare institution, and 100 (25.8%) attributed responsibility to the system developers. In relation to the future role of AI in clinical decision-making, 192 (49.6%) of participants stated that clinical decisions should remain exclusively within the physician's authority. A further 178 (46.0%) respondents supported the use of AI as a consultant, while only 17 participants (4.4%) indicated that AI could assume responsibility for certain clinical decisions.

Subgroup analyses based on age, academic title, years of professional experience, and institution type

Subgroup analyses were conducted to assess differences in AI-related familiarity, prior training, clinical/professional use, and total knowledge and attitude scores across different groups. AI familiarity was significantly associated with academic title (H = 21.76, p < .001, ε² = 0.046, small effect) but not with age (H = 2.40, p = .600, ε² = 0.000, trivial), professional experience (H = 1.56, p = .748, ε² = 0.000, trivial), or institution type (H = 5.99, p = .197, ε² = 0.007, trivial). Openness to improving oneself in AI was associated only with academic title (H = 11.24, p = .024, ε² = 0.019, small effect). Previous AI training differed significantly by age (χ² = 21.53, p = .001, Cramer's V = 0.166, small), academic title (χ² = 22.95, p < .001, Cramer's V = 0.172, small), and years of experience (χ² = 22.52, p = .015, Cramer's V = 0.170, small), but not by institution type (χ² = 4.59, p = .761, Cramer's V = 0.072, trivial). Prior use of AI in clinical or professional contexts did not differ significantly across any of the subgroup variables (all p > .05; Cramer's V = 0.017–0.125, trivial).

Correlation and group comparison analyses were conducted to explore associations between total knowledge and attitude scores and various demographic and professional characteristics. Spearman's rank-order correlation revealed no statistically significant relationship between total knowledge scores and age (r = –.070, p = .168, trivial), academic title (r = .018, p = .726, trivial), or years of professional experience (r = –.043, p = .396, trivial). In contrast, total attitude scores showed small but statistically significant positive correlations with age (r = .153, p = .003, small effect), academic title (r = .163, p = .001, small effect), and professional experience (r = .141, p = .006, small effect), suggesting that older and more experienced participants, as well as those with higher academic ranks, tended to report slightly more favorable attitudes toward AI applications in clinical practice. To assess differences across institutional settings, Kruskal–Wallis tests were performed. The results indicated that total knowledge scores differed significantly by institution type (H = 16.489, p < .001, ε² = 0.035, small effect), whereas total attitude scores did not show a statistically significant difference across institutions (H = 7.643, p = .054, ε² = 0.028, trivial). Together, these findings indicate that while most observed associations were of small magnitude, they nonetheless highlight consistent patterns suggesting that both individual and institutional factors modestly influence familiarity, attitudes, and training exposure related to AI in IDCM. Table 5 summarizes the subgroup analyses, and detailed results are provided in Supplemental Tables S1 to S4.

Table 5.

Summary of subgroup analyses of artificial intelligence related knowledge and attitudes among infectious diseases and clinical microbiology physicians (na = 387).

Subgroup Familiarity with AIb Openness to improving oneself in AI Previous AI training Use of AI in clinical/professional settings Total attitude score Total knowledge score
Age p-valuec .600 .112 .001* .194 .003* .168
H (df)/χ² (df)c 2.40 5.99 21.531 4.714 – –
Effect size (interpretation)c ε² = 0.000 (trivial) ε² = 0.007 (trivial) Cramer's V = 0.166 (small) Cramer's V = 0.110 (trivial) r = .153 (small) r = –.070 (trivial)
Academic title p-value <.001* .024* <.001* .599 .001* .726
H (df)/χ² (df) 21.76 11.24 22.954 2.759 – –
Effect size (interpretation) ε² = 0.046 (small) ε² = 0.019 (small) Cramer's V = 0.172 (small) Cramer's V = 0.084 (trivial) r = .163 (small) r = .018 (trivial)
Years of experience p-value .748 .368 .015* .109 .006* .396
H (df)/χ² (df) 1.56 3.15 22.524 6.054 – –
Effect size (interpretation) ε² = 0.000 (trivial) ε² = 0.000 (trivial) Cramer's V = 0.170 (small) Cramer's V = 0.125 (trivial) r = .141 (small) r = –.043 (trivial)
Type of institution p-value .197 .161 .761 .989 .054 <.001*
H (df)/χ² (df) 5.99 5.15 4.596 0.123 7.643 16.489
Effect size (interpretation) ε² = 0.007 (trivial) ε² = 0.005 (trivial) Cramer's V = 0.072 (trivial) Cramer's V = 0.017 (trivial) ε² = 0.028 (trivial) ε² = 0.035 (small)
a

The number of participants.

b

Artificial intelligence.

c

Group comparisons were performed using the chi-square test for categorical variables (or the Fisher–Freeman–Halton exact test when expected cell counts were low) and the Kruskal–Wallis test for ordinal or continuous variables; correlations between continuous variables were assessed using Spearman's rank correlation. p-values were adjusted for multiple comparisons using the Benjamini–Hochberg False Discovery Rate method, and adjusted p-values < .05 (*) were considered statistically significant. H(df) and χ²(df) denote the Kruskal–Wallis and chi-square test statistics with corresponding degrees of freedom; effect sizes (Cramer's V, ε², and Spearman's r) and their interpretation thresholds are described in the Methods.

Expectations and concerns

A total of 97 open-ended responses regarding the expectations (n = 38, 39.2%) and concerns (n = 59, 60.8%) for AI use in IDCM were thematically analyzed. The most frequently mentioned expectations included AI's role as a clinical decision support tool (n = 24, 63.2%), improving time efficiency (n = 5, 13.2%), and enabling faster diagnosis and treatment (n = 5, 13.2%). Less frequently, participants noted its potential utility in education (n = 2, 5.2%), standardization of clinical guidelines (n = 1, 2.6%), and automation of routine tasks (n = 1, 2.6%). The most prominent concerns regarding AI integration included ethical risks and algorithmic bias (n = 25, 42.4%) and data quality and reliability (n = 25, 42.4%). Additional issues raised were the loss of clinical autonomy (n = 7, 11.8%), limited generalizability of AI outputs (n = 1, 1.7%), and the risk of overreliance on AI systems (n = 1, 1.7%). These themes, along with representative quotes and respondents’ backgrounds, are summarized in Table 6.

Table 6.

Thematic summary of expectations and concerns regarding artificial intelligence (AI) in infectious diseases and clinical microbiology (n = 387 participants). Each representative quote is attributed by the respondent's academic title and institution type.

Expectations
  • Clinical decision support (n = 24): “AI can assist clinicians in selecting appropriate treatment based on surveillance data.” (Resident, University Hospital)
  • Operational efficiency and workload reduction (n = 5): “AI may provide time efficiency.” (Assoc. Prof., State Hospital)
  • Timely diagnosis and therapeutic initiation (n = 5): “AI can enable rapid data analysis and support prompt diagnosis and treatment.” (Specialist, Training and Research Hospital)
  • Support for education and training (n = 2): “AI can enhance health literacy, answer common public questions at the primary care level, and reduce unnecessary hospital visits and medication use, thereby alleviating the burden on healthcare services.” (Resident, Training and Research Hospital)
  • Standardization of clinical practice (n = 1): “AI can provide feedback to clinicians to review treatment plans, especially in cases of prolonged therapy.” (Prof., University Hospital)
  • Automation of repetitive clinical tasks (n = 1): “If no justification is provided for continued antibiotic use, AI can automatically discontinue the order.” (Specialist, State Hospital)

Concerns
  • Ethical concerns, bias, and AI hallucinations (n = 25): “AI may raise ethical concerns and generate fabricated or inaccurate references, potentially compromising the integrity of medical information.” (Assist. Prof., University Hospital)
  • Data integrity and reliability issues (n = 25): “AI may be prone to errors due to faulty data, and there is a risk of misdirection or overriding human clinical judgment.” (Assoc. Prof., Training and Research Hospital)
  • Risk of diminished clinical autonomy (n = 7): “AI may overlook the principle that ‘there are no diseases, only patients,’ emphasizing the need for clinician oversight in every decision.” (Specialist, State Hospital)
  • Concerns over limited generalizability (n = 1): “AI may fail to provide patient-specific insights due to excessive generalization.” (Assoc. Prof., Private Hospital)
  • Overdependence and erosion of critical thinking (n = 1): “AI may lead to challenges stemming from ethical violations and overreliance.” (Assist. Prof., University Hospital)

Discussion

This study adds to the limited body of literature by providing a comprehensive, multi-dimensional assessment of IDCM physicians’ perspectives on AI. As a nationwide survey conducted in Türkiye, it offers valuable insight into the awareness, knowledge, attitudes, expectations, and concerns of IDCM physicians regarding AI integration into clinical practice. In Türkiye, IDCM services operate within a highly centralized and predominantly public healthcare system. National guidelines, infection control committees, and surveillance networks play a decisive role in shaping clinical practice and institutional priorities. This structure creates both advantages and barriers for AI adoption. On one hand, Türkiye's extensive digital health infrastructure—exemplified by e-Nabız, the national electronic health record system that integrates patient data across all public and private healthcare institutions, as well as nationwide electronic prescription and laboratory reporting systems—provides a unique foundation for data-driven innovation. On the other hand, variability in institutional resources, regional disparities in access to technology, and limited structured AI training may hinder consistent implementation across settings. Moreover, the hierarchical organization of clinical decision-making in public hospitals could influence how physicians perceive the autonomy, accountability, and practicality of AI-assisted care. Contextualizing the findings within this framework underscores that successful AI integration in Türkiye will depend on aligning technological development with institutional capacity, regulatory oversight, and frontline clinical realities.

In our study, despite the predominance of basic AI familiarity and limited formal training, the vast majority expressed openness to AI implementation and institutional adoption. This is consistent with previous international studies indicating a high level of interest in AI among healthcare professionals, even in the absence of formal education or hands-on experience. 7 For instance, He et al. 8 and Alnomasy et al. 9 both reported that while most healthcare workers perceive AI as promising, structured training remains scarce. In Germany, 87.8% of anesthesiologists agreed that AI should be integrated into their field, yet only 17% were familiar with its specific applications. 10 Comparable findings have been reported among radiologists, pathologists, and general practitioners in the United States and Europe, indicating a global readiness gap.11,12 These parallels suggest that IDCM physicians in Türkiye share certain similarities with broader international trends, yet the present results specifically reflect their local experiences and professional context. The self-initiated use of ChatGPT and other AI platforms by our study participants underscores the adaptability and proactive stance of IDCM physicians. This pattern of self-directed adoption mirrors trends observed internationally among clinicians experimenting with generative AI tools for similar academic and clinical purposes, but the extent and motivations of such use should be interpreted within the national and specialty-specific scope of this study. 13

Attitudinal responses further revealed a duality of enthusiasm and apprehension. While participants largely recognized the potential of AI in IDCM practice, they also voiced significant concerns. These findings are consistent with prior qualitative investigations in oncology and pathology, where physicians highlighted fears related to algorithmic opacity, medicolegal responsibility, and disruption of the physician-patient relationship.14,15 Notably, our respondents emphasized ethical risks and bias as major barriers to AI integration, reflecting similar sentiments found in surveys of emergency medicine and intensive care clinicians.16,17 This alignment with international concerns indicates that Turkish IDCM physicians share the same core apprehensions as their global peers. The prevalent view that AI should serve as an adjunct rather than a decision-maker aligns with ethical guidelines from organizations such as the American Medical Association, which advocates for “augmented intelligence” that complements clinical expertise rather than substituting it. 18

Our subgroup analyses contribute further depth by illuminating how demographic and professional variables shape AI perceptions. Although the detected associations were generally of small magnitude, these consistent trends highlight meaningful variation in AI familiarity and attitudes across experience and institutional contexts. Even modest effects are relevant in this setting, as they may signal early indicators of emerging digital literacy gaps. Differences across experience levels and institutional types suggest that readiness for AI integration is uneven. These findings parallel international data showing that senior academic clinicians tend to be more informed about AI and exhibit greater openness to its integration in clinical practice, possibly due to increased involvement in research and institutional decision-making. 7 In our study, physicians in university hospitals scored higher on knowledge assessments. This observation reflects broader global trends, as reported in multicountry reviews, where clinicians in tertiary care settings exhibit greater digital readiness owing to their exposure to interdisciplinary innovation ecosystems. 19 However, the lack of preparedness among early-career or non-academic clinicians in our study highlights the need for more inclusive and scalable educational initiatives. This educational gap is similarly emphasized in low- and middle-income countries, suggesting that improving AI literacy is a universal rather than context-limited need. 20

Finally, the thematic analysis of open-ended responses enriched the dataset with nuanced perspectives. Participants envisioned AI as a transformative asset. For example, one participant noted, “AI could help standardize antibiotic use and alert clinicians when treatment durations are unnecessarily prolonged.” Another respondent emphasized its potential to improve efficiency: “It may support faster diagnosis and treatment decisions, saving valuable time in critical cases.” These expectations align with real-world implementations in fields such as infectious disease surveillance and radiology, where AI has been successfully deployed for early outbreak detection and image-based diagnostics.21,22 Yet these aspirations were tempered by critical concerns. Several participants highlighted ethical and professional concerns, such as “AI should not replace the physician's judgment; it must remain a supportive tool,” and “Without proper ethical and legal frameworks, there is a risk that AI will undermine clinical responsibility and patient trust.” Others stressed the need for structured training and human-centered design, stating, “Training is essential—many physicians use AI tools without fully understanding their limitations,” and “AI can never replace the intuition and empathy of a clinician, but it can enhance decision-making when used responsibly.” Notably, participants stressed the lack of transparency and explainability in current AI models—a barrier also documented in surgical and psychiatric domains.23,24 The apprehension surrounding generalizability across diverse patient populations and contexts further emphasizes the need for context-specific, evidence-based development and validation processes. Taken together, these patterns demonstrate that the perspectives of Turkish IDCM physicians are not isolated but part of a broader global narrative on responsible AI integration. 
Importantly, our findings echo the recommendations of leading frameworks, including the World Health Organization's Guidance on Ethics and Governance of Artificial Intelligence for Health, which calls for stakeholder engagement, transparency, and human oversight as prerequisites for responsible AI integration. 25

Relevance for clinical practice

As AI transitions from concept to clinical reality in IDCM, institutions should not only establish structured AI literacy and ethics training but also create pilot integration programs and interdisciplinary oversight mechanisms to ensure safe, transparent, and equitable adoption. Embedding clinician feedback in AI design and evaluation processes will help maintain clinical relevance, usability, and trust. The implications of this study are limited to the Turkish IDCM context; however, the observed trends may offer useful insights for specialties with similar digital-readiness profiles. Moreover, these insights can inform policy makers and healthcare institutions in other low- and middle-income countries facing similar infectious disease burdens, guiding the development of sustainable and context-appropriate digital health ecosystems that strengthen global health equity.

Strengths and limitations

This study presents several notable strengths. First and foremost, to our knowledge, this is the first systematic survey specifically targeting IDCM physicians to explore their awareness, knowledge, attitudes, expectations, and concerns regarding AI applications. While previous international studies have assessed AI perceptions among general practitioners, anesthesiologists, radiologists, and other specialties, no prior research has comprehensively examined the perspectives of IDCM physicians—a group operating at the unique intersection of individualized patient care, infection control, antimicrobial stewardship, and microbiological diagnostics. By addressing this critical gap, our study provides original and specialty-specific insight that contributes meaningfully to the international digital health literature.

Second, the study employed a comprehensive, multidimensional questionnaire informed by existing frameworks and expert consensus, allowing for robust evaluation across both clinical and operational AI domains. Third, inclusion of both residents and specialists from a wide range of institutional settings across Türkiye enhances the representativeness of the sample within the specialty. Fourth, the incorporation of qualitative thematic analysis provided rich, contextualized insights into participants’ expectations and concerns, complementing the quantitative findings.

Several limitations should be considered. First, the cross-sectional design limits the ability to draw causal inferences, as it reflects physician perceptions at a single point in time, which may change over time with the rapid evolution of AI technologies and regulatory frameworks. Second, the voluntary and self-selected nature of survey participation may have introduced selection bias, potentially overrepresenting individuals with existing interest or familiarity with AI. Third, because the study employed a non-probability convenience sampling approach, full representativeness of the national IDCM physician population cannot be guaranteed; the achieved sample of 387 participants corresponds to approximately 10% of the estimated 4000 IDCM specialists and residents in Türkiye. Fourth, the self-reported data are subject to recall and social desirability bias, possibly affecting the accuracy of reported knowledge or experiences. Additionally, inter-rater agreement (e.g., Cohen's kappa) was not calculated during thematic coding, which may limit the quantification of reliability in qualitative analysis. Lastly, as the study was conducted solely among IDCM physicians in Türkiye, generalizability may be limited not only across other medical specialties or countries with differing healthcare systems and AI adoption rates, but also across rural versus urban regions and between public and private institutions.
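For context on the inter-rater agreement statistic mentioned above: Cohen's kappa adjusts the raw proportion of agreement between two coders for the agreement expected by chance. A standard formulation (not applied in this study) is:

```latex
\kappa = \frac{p_o - p_e}{1 - p_e}
```

where \(p_o\) is the observed proportion of coding decisions on which the two raters agree and \(p_e\) is the proportion of agreement expected by chance from each rater's marginal code frequencies; \(\kappa = 1\) indicates perfect agreement and \(\kappa = 0\) agreement no better than chance.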

Conclusion

This study highlights both enthusiasm and ambivalence among IDCM physicians in Türkiye regarding the integration of AI into clinical workflows. While familiarity and formal training remain limited, there is substantial openness to AI adoption, particularly if supported by institutional frameworks and education. Our findings underscore the importance of engaging IDCM physicians in AI development and governance processes, not only as users but also as contributors and evaluators. Future efforts should prioritize targeted training programs, ethical guidelines, and cross-disciplinary collaborations to ensure that AI technologies in IDCM are safe, effective, and aligned with clinical realities. By addressing identified gaps at the national and specialty level, AI can be leveraged to advance infection diagnosis, infection control, and antimicrobial stewardship in a manner consistent with local healthcare structures and resources.

Supplemental Material

sj-docx-1-dhj-10.1177_20552076251404507
sj-docx-2-dhj-10.1177_20552076251404507
sj-docx-3-dhj-10.1177_20552076251404507

Supplemental material for “From awareness to action: A nationwide survey of infectious diseases and clinical microbiology physicians’ perspectives on artificial intelligence in clinical practice” by Ezgi Gülten, Okan Derin, Eyüp Arslan, Ceren Atasoy Tahtasakal, Fatih Temoçin, Funda Memişoğlu and Nazlım Aktuğ Demir in DIGITAL HEALTH.

Acknowledgments

The authors extend their sincere appreciation to the physicians, institutional stakeholders, and AI experts who contributed their time and perspectives throughout the study. We are especially grateful to the IDCM physicians who participated in the survey and shared thoughtful insights into the current and future integration of AI in their clinical practice. We also acknowledge the efforts of the academic collaborators and institutional leaders who supported the dissemination of the survey and facilitated access to key participants. To the clinicians and researchers committed to advancing ethical, equitable, and effective applications of AI in medicine: your continued efforts inspire us to explore, question, and co-create technologies that truly serve patients and providers alike.

Footnotes

Ethical approval: The study was approved by the Ankara University Faculty of Medicine Human Research Ethics Committee (decision number: I05-401-25; date of approval: May 26, 2025) and conducted in accordance with the Declaration of Helsinki.

Informed consent: All participants provided written informed consent via an online consent form prior to participating.

Author contributions: Conceptualization: Ezgi Gülten, Okan Derin, Eyüp Arslan, Ceren Atasoy Tahtasakal, Fatih Temoçin, Funda Memişoğlu, and Nazlım Aktuğ Demir; data curation: Ezgi Gülten; formal analysis: Ezgi Gülten and Okan Derin; investigation: Ezgi Gülten, Okan Derin, Eyüp Arslan, Ceren Atasoy Tahtasakal, Fatih Temoçin, Funda Memişoğlu, and Nazlım Aktuğ Demir; methodology: Ezgi Gülten, Okan Derin, Eyüp Arslan, Ceren Atasoy Tahtasakal, Fatih Temoçin, Funda Memişoğlu, and Nazlım Aktuğ Demir; supervision: Funda Memişoğlu and Nazlım Aktuğ Demir; writing-original draft: Ezgi Gülten, Okan Derin, Eyüp Arslan, Ceren Atasoy Tahtasakal, Fatih Temoçin; writing-review and editing: Funda Memişoğlu and Nazlım Aktuğ Demir. All authors have read and approved the final version of the article.

Funding: The authors received no financial support for the research, authorship, and/or publication of this article.

Declaration of conflicting interests: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Data availability: The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

Supplemental material: Supplemental material for this article is available online.

References

1. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019; 25: 44–56.
2. Theodosiou AA, Read RC. Artificial intelligence, machine learning and deep learning: potential resources for the infection clinician. J Infect 2023; 87: 287–294.
3. Giebel GD, Raszke P, Nowak H, et al. Improving AI-based clinical decision support systems and their integration into care from the perspective of experts: interview study among different stakeholders. JMIR Med Inform 2025; 13: e69688.
4. Goktas P, Grzybowski A. Shaping the future of healthcare: ethical clinical challenges and pathways to trustworthy AI. J Clin Med 2025; 14: 1605.
5. Cesaro A, Hoffman SC, Das P, et al. Challenges and applications of artificial intelligence in infectious diseases and antimicrobial resistance. NPJ Antimicrob Resist 2025; 3: 2.
6. Sarantopoulos A, Mastori Kourmpani C, Yokarasa AL, et al. Artificial intelligence in infectious disease clinical practice: an overview of gaps, opportunities, and limitations. Trop Med Infect Dis 2024; 9: 228.
7. Hoffman J, Hattingh L, Shinners L, et al. Allied health professionals’ perceptions of artificial intelligence in the clinical setting: cross-sectional survey. JMIR Form Res 2024; 8: e57204.
8. He J, Baxter SL, Xu J, et al. The practical implementation of artificial intelligence technologies in medicine. Nat Med 2019; 25: 30–36.
9. Alnomasy N, Pangket P, Alreshidi B, et al. Artificial intelligence in health care: assessing impact on professional roles and preparedness among hospital nurse leaders. Digit Health 2025; 11: 20552076251356362.
10. Henckert D, Malorgio A, Schweiger G, et al. Attitudes of anesthesiologists toward artificial intelligence in anesthesia: a multicenter, mixed qualitative-quantitative study. J Clin Med 2023; 12: 2096.
11. Kharko A, Garcia Sanchez C, Hagström J, et al. General practitioners’ opinions of generative artificial intelligence in the UK: an online survey. Digit Health 2025; 11: 20552076251360863.
12. Obuchowicz R, Lasek J, Wodziński M, et al. Artificial intelligence-empowered radiology-current status and critical review. Diagnostics (Basel) 2025; 15: 282.
13. Lonsdale H, O'Reilly-Shah VN, Padiyath A, et al. Supercharge your academic productivity with generative artificial intelligence. J Med Syst 2024; 48: 73.
14. Li M, Xiong X, Xu B. Attitudes and perceptions of Chinese oncologists towards artificial intelligence in healthcare: a cross-sectional survey. Front Digit Health 2024; 6: 1371302.
15. Ling Kuo RY, Freethy A, Smith J, et al. Stakeholder perspectives towards diagnostic artificial intelligence: a co-produced qualitative evidence synthesis. EClinicalMedicine 2024; 71: 102555.
16. Stewart J, Freeman S, Eroglu E, et al. Attitudes towards artificial intelligence in emergency medicine. Emerg Med Australas 2024; 36: 252–265.
17. van der Meijden SL, de Hond AAH, Thoral PJ, et al. Intensive care unit physicians’ perspectives on artificial intelligence-based clinical decision support tools: preimplementation survey study. JMIR Hum Factors 2023; 10: e39114.
18. American Medical Association. Augmented intelligence development, deployment, and use in health care, www.ama-assn.org/system/files/ama-ai-principles.pdf (2024, accessed 2 August 2025).
19. Chen M, Zhang B, Cai Z, et al. Acceptance of clinical artificial intelligence among physicians and medical students: a systematic review with cross-sectional survey. Front Med (Lausanne) 2022; 9: 990604.
20. Khan MS, Umer H, Faruqe F. Artificial intelligence for low income countries. Humanit Soc Sci Commun 2024; 11: 1422.
21. Liu W. Bracing the artificial intelligence technology in viral infectious disease control. Infect Med (Beijing) 2025; 4: 100186.
22. Wang R, Jiao Z, Yang L, et al. Artificial intelligence for prediction of COVID-19 progression using CT imaging and clinical data. Eur Radiol 2022; 32: 205–212.
23. Brandenburg JM, Müller-Stich BP, Wagner M, et al. Can surgeons trust AI? Perspectives on machine learning in surgery and the importance of eXplainable Artificial Intelligence (XAI). Langenbecks Arch Surg 2025; 410: 53.
24. Joyce DW, Kormilitzin A, Smith KA, et al. Explainable artificial intelligence for mental health through transparency and interpretability for understandability. NPJ Digit Med 2023; 6: 6.
25. World Health Organization. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance, www.who.int/publications/i/item/9789240029200 (2021, accessed 2 August 2025).



Articles from Digital Health are provided here courtesy of SAGE Publications
