Abstract
Objective
This study aims to explore stakeholder opinions on the insurance coverage of artificial intelligence (AI), categorized according to the distinct value elements offered by AI, with a specific focus on patient-centered outcomes (PCOs). PCOs are distinguished from traditional clinical outcomes in that they focus on patient-reported experiences and values such as quality of life, functionality, well-being, physical or emotional status, and convenience.
Materials and Methods
We classified the value elements provided by AI into four dimensions: clinical outcomes, economic aspects, organizational aspects, and non-clinical PCOs. The survey comprised three sections: 1) experiences with PCOs in evaluating AI, 2) opinions on the coverage of AI by the National Health Insurance of the Republic of Korea when AI demonstrated benefits across the four value elements, and 3) respondent characteristics. The opinions regarding AI insurance coverage were assessed dichotomously and semi-quantitatively: non-approval (0) vs. approval (on a 1–10 weight scale, with 10 indicating the strongest approval). The survey was conducted from July 4 to 26, 2023, using a web-based method. Responses to PCOs and other value elements were compared.
Results
Among 200 respondents, 44 (22%) were patients/patient representatives, 64 (32%) were industry/developers, 60 (30%) were medical practitioners/doctors, and 32 (16%) were government health personnel. The level of experience with PCOs regarding AI was low, with only 7% (14/200) having direct experience and 10% (20/200) having any experience (either direct or indirect). The approval rate for insurance coverage for PCOs was 74% (148/200), significantly lower than the corresponding rates for other value elements (82.5%–93.5%; P ≤ 0.034). The approval strength was significantly lower for PCOs, with a mean weight ± standard deviation of 5.1 ± 3.5, compared to other value elements (P ≤ 0.036).
Conclusion
There is currently limited demand for insurance coverage for AI that demonstrates benefits in terms of non-clinical PCOs.
Keywords: Artificial intelligence, Insurance, Coverage, Reimbursement, Payment, Value, Value-based healthcare, Patient-centered outcome, Patient-reported outcome measure, Survey
INTRODUCTION
With its progress and application in medicine continually advancing, artificial intelligence (AI) holds the potential to enhance every facet of healthcare. Numerous algorithms have already gained regulatory approval or certification as medical devices—such as clearance by the U.S. Food and Drug Administration (FDA), the European CE marking, and approval by the Ministry of Food and Drug Safety (MFDS) of the Republic of Korea (ROK) [1,2,3,4,5,6,7]. The depth of discussion on the clinical implementation of AI is steadily expanding [8,9,10,11,12,13,14,15]. However, the integration of AI into everyday clinical practice beyond the research domain is lagging [16,17]. One significant factor influencing the clinical adoption of health technology concerns financial considerations, such as reimbursement and return on investment [16]. Similarly, a significant hurdle to the widespread adoption of AI in practice is the issue of payment and coverage policies [18]. While many countries already have pathways and systems for AI coverage by medical/health insurance (for example, the “Guidelines for the Evaluation of Innovative Medical Technologies for Coverage by National Health Insurance: Artificial Intelligence-Based Innovative Medical Technologies” by the ROK government [19]) and instances of insurance coverage for AI are emerging, there are currently only a small number of global examples of AI insurance coverage, many of which are temporary [20,21,22].
Securing insurance coverage for the use of AI in healthcare hinges on demonstrating its value in improving ultimate patient clinical outcomes when compared to traditional care [21,23]. This tenet aligns with the fundamental principles of value-based healthcare [24]. Initially, AI was hyped for its potential to substantially enhance the clinical outcomes of patients. However, as knowledge and experience have accumulated, it has become evident that the predominant strengths of AI often lie in improving productivity and efficiency in hospital workflows and among healthcare professionals, as well as in enhancing non-clinical patient-centered outcomes (PCOs) rather than clinical outcomes. Two good examples of the enhanced productivity and efficiency enabled by AI technology are AI that microscopically examines lymph nodes for metastasis in oncologic patients, substantially saving time and reducing cognitive burden for pathologists [25], and AI that segments the target lesion contour for radiation therapy, thus substantially reducing procedure times and patient waiting lists [26]. Insurance reimbursement is generally not considered for such AI applications, as institutions or personnel already reap benefits from improved productivity and efficiency [20,27]. PCOs are distinguished from clinical outcomes measured by biomarkers and clinical parameters and focus on patient-reported experiences and values such as quality of life, functionality, well-being, physical or emotional status, and convenience (Table 1; PCOs related to imaging tests as examples are also summarized in Supplements [Section B of survey questionnaire]). Despite their significance in enabling a more holistic healthcare approach, PCOs have been neglected in traditional value-based healthcare, where the primary emphasis is on improved clinical outcomes [28]. Consequently, PCOs have typically not yet been considered in insurance coverage decisions. A recent study by Maruszczyk et al.
[29] reported the absence of guidance on utilizing patient-reported outcome measures (PROMs), which are similar to PCOs, for real-world evidence generation in the context of reimbursement consideration, indirectly indicating the current state of limited consideration of PCOs in insurance coverage.
Table 1. Value elements offered by artificial intelligence and their beneficiaries.
| Value element | Beneficiary | Definition or examples |
|---|---|---|
| Clinical outcomes | Patient | Diagnostic accuracy or treatment outcomes: diagnostic/predictive performance (sensitivity, specificity, and ROC curve area); impact on the rates of disease/health-related states or mortality; survival rate, therapeutic efficacy/effectiveness, or side effects |
| Economic aspects | Insurer | Macroscopic healthcare cost: nationwide healthcare cost for a particular disease; frequency of medical service utilization, e.g., number of imaging tests needed |
| Organizational aspects | Healthcare institution/medical personnel | Efficiency or healthcare operations-related expenses of an institution; fatigue, workload, or efficiency of medical personnel |
| Non-clinical PCOs | Patient | Quality of medical services from the patient’s perspective, such as quality of life, functionality, well-being, physical or emotional status, and convenience |
ROC = receiver operating characteristic, PCO = patient-centered outcome
With a growing awareness of PCOs, as demonstrated by the activities of organizations like the Patient-Centered Outcomes Research Institute (PCORI; https://www.pcori.org/), their importance in holistic healthcare is gaining recognition. Simultaneously, AI’s role in improving PCOs is also being highlighted. AI tools’ ability to reduce the radiation dose for computed tomography (CT) examinations and decrease the scan time required for magnetic resonance imaging (MRI) examinations through quality improvements in image acquisition exemplifies the technology’s positive impact on imaging test-related PCOs [30]. A more recent example, especially with the rapid advances in large language models based on the transformer architecture and foundation model techniques [31,32,33], is the use of AI to enhance information exchange with patients in patient care—an important component of PCOs [34,35]. Unlike AI tools that improve the productivity and efficiency of hospital workflows or healthcare professionals, the positive effects of AI on PCOs directly contribute to patient benefits. When compared to improving productivity and efficiency, this distinction may make AI tools that improve PCOs more eligible for insurance coverage, although they are not currently recognized as such. Therefore, this study aims to survey the opinions of various stakeholders regarding AI insurance coverage, as categorized according to the different value elements provided by AI, with a specific emphasis on PCOs.
MATERIALS AND METHODS
The present study was approved by the Institutional Review Board of the National Evidence-based Healthcare Collaborating Agency (NECA) (IRB No. NECAIRB23-010).
Survey Design
To lay the groundwork for our survey, we revisited the outcomes of a previously conducted systematic literature search [36]. The literature search sought to more broadly gather the aspects needed for a clinical evaluation of AI models in medicine. Carried out on December 18, 2022, and spanning the preceding five years, the search utilized PubMed with the query “(checklist OR guide OR guideline OR tip) AND (reader OR reviewer OR user) AND (“artificial intelligence” OR “machine learning” OR “deep learning”).” Drawing from the systematic literature review, we categorized the value elements offered by AI into four dimensions: clinical outcomes, economic aspects, organizational aspects, and non-clinical PCOs, as shown in Table 1. This categorization is similar to that of the Model for ASsessing the value of Artificial Intelligence in medical imaging (MAS-AI) [37]. It proves practical in addressing insurance coverage, as each category has a distinct group of stakeholders as beneficiaries.
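As an illustration, the PubMed search described above could be reproduced programmatically through the NCBI E-utilities `esearch` endpoint. The sketch below only constructs the request URL; the date parameters and `retmode` are our assumptions based on the stated search date and five-year span, and no request is actually sent.

```python
# Sketch: building the PubMed search described above as an NCBI E-utilities
# esearch URL. The date range and retmode values are illustrative assumptions;
# no HTTP request is sent here.
from urllib.parse import urlencode

query = (
    '(checklist OR guide OR guideline OR tip) '
    'AND (reader OR reviewer OR user) '
    'AND ("artificial intelligence" OR "machine learning" OR "deep learning")'
)

params = {
    "db": "pubmed",
    "term": query,
    "datetype": "pdat",        # filter on publication date
    "mindate": "2017/12/18",   # five years preceding the search
    "maxdate": "2022/12/18",   # date the search was carried out
    "retmode": "json",
}
esearch_url = (
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + urlencode(params)
)
```

In practice, the returned JSON would contain the matching PubMed IDs, which could then be fetched with the companion `efetch` endpoint.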
We subsequently developed the survey questionnaire. Given that the survey targeted Korean respondents, the questionnaire was originally crafted in Korean (an English translation and the original Korean version are provided as Supplements). The survey comprised three sections: the first section focused on respondents’ experiences with PCOs concerning an evaluation of medical AI technology; the second section aimed to gauge opinions on the eligibility of insurance coverage for the four value elements provided by AI; and the third section sought to collect information about the characteristics of the respondents. Questions related to insurance coverage were designed in alignment with the National Health Insurance of the ROK. To address potential unfamiliarity with PCOs among some respondents, we provided a concise and explicit explanation of PCOs in the survey, focusing particularly on PCOs as related to imaging tests (Section B of survey questionnaire in Supplements) [30]. Furthermore, to avoid confusion among the respondents, the survey also included explanations of the four value elements provided by AI, as outlined in Table 1. Within the section on respondent characteristics, we included a specific question inquiring about the nature of the respondents as stakeholders for AI insurance coverage. This question was designed as a multiple-choice query with seven options (DQ3 in Supplements). We employed seven options during data collection to ensure precise information gathering. However, in the subsequent analysis phase, these options were condensed into four categories (see ‘Statistical Analysis’ below).
Conducting the Survey
The survey was administered through the expertise of the survey research firm Hankook Research (Seoul, ROK), utilizing a web-based online survey method. The survey spanned from July 4 to 26, 2023, encompassing the time required to reach the target of 200 respondents. The selection of the target number for the respondents was primarily guided by the research budget. Given the survey’s exploratory nature, along with the absence of relevant prior data for sample size calculations, we opted not to conduct formal sample size calculations regarding the number of respondents.
Recognizing that providing meaningful responses to a survey concerning AI in medicine might necessitate a certain level of experience or familiarity with the subject, we did not open the survey to random respondents; instead, we officially announced the survey through NECA to nine representative professional societies or associations in the ROK related to AI in medicine. These included medical and hybrid medicine-informatics/computer science academic societies (the Korean Academy of Medical Sciences, the Korean Society of Radiology, the Korean Society of Pathologists, the Korean Society of Artificial Intelligence in Medicine, the Korean Society of Medical Informatics, and the Korean Society of Health Informatics and Statistics) as well as industry associations (the Korea Medical Devices Industry Association, the Korea Smart Healthcare Association, and the Korea Digital Health Industry Association). We further reached out to relevant departments dealing with AI in medicine and digital healthcare within government agencies of the ROK, such as the MFDS, NECA, and the Health Insurance and Review Assessment Service (HIRA). We explained the survey’s purpose to the representatives of these organizations and requested that these organizations encourage their members to participate in the survey. However, we did not have control over the specific methods used to encourage survey participation—such as email notifications to individual members or posting announcements on the organization’s website. To promote participation, we incentivized respondents by offering a mobile gift voucher worth KRW 30,000 to those who completed the survey.
Supplementary Literature Analysis
We conducted an additional literature analysis to gain insight into the relative frequencies of clinical research studies on AI technology exploring the four value elements provided by AI. This analysis aimed to produce objective data complementing the subjective survey results. We specifically focused on two journals—Radiology and the Korean Journal of Radiology—for several reasons. First, we aimed to align the literature analysis with the National Health Insurance of the ROK. Consequently, only research studies conducted by Korean authors were considered. Second, these journals are highly regarded publications within the field of radiology, which is the most dominant clinical field regarding AI in medicine. Not only do Korean researchers actively publish in these two journals, but the two publications also have particularly strong presences in the West and the East, respectively, in addition to strong global recognition. We opted not to include European Radiology, another journal of a similar nature representing Europe. This decision was made to prevent potential skewing of results, as it had a significantly higher publication volume and was known to publish AI studies more prominently [38]. Acknowledging the vastness of AI literature, focusing on these two specific journals may offer a more practical approach and provide pilot data—even though a comprehensive analysis was not feasible. To identify eligible articles, we conducted a manual search of the two selected journals from 2021 to the most recent update on December 5, 2023. We screened all articles published within the specified period, without utilizing any search queries. The full text of eligible articles was carefully reviewed to assess whether they presented results on clinical outcomes, economic aspects, organizational aspects, or PCOs associated with AI.
Clinical outcomes were further categorized into diagnostic accuracy and post-accuracy outcomes, with diagnostic yield considered as a post-accuracy outcome parameter [39].
Statistical Analysis
The survey results were analyzed independently by Hankook Research. Categorical results are presented as percentages, while continuous data are expressed as mean ± standard deviation. In the analysis of survey results, respondents were categorized into four distinct stakeholder groups: a) patient/patient representative (comprising patients, caregivers, and NGOs), b) industry/developer, c) medical practitioner/doctor, and d) government health personnel (encompassing experts in government health policy/administration from the MFDS, NECA, or HIRA). The PCO results were compared with those for each of the other three value elements using the McNemar and paired t-tests, as appropriate. The analysis was conducted for the entire respondent pool and separately for each of the four stakeholder groups. P-values < 0.05 were considered statistically significant. Given the exploratory nature of the study, we did not adjust for multiple comparisons. Statistical analysis utilized MedCalc Statistical Software version 22.016 (MedCalc Software bv, Ostend, Belgium).
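As a concrete sketch of the paired comparisons described above, the snippet below runs an exact McNemar test (implemented as a binomial test on the discordant pairs) and a paired t-test. The data are synthetic stand-ins generated with rates resembling those reported, not the actual survey responses.

```python
# Sketch of the paired analyses on synthetic stand-in data (not the real
# survey responses): an exact McNemar test on paired dichotomous approvals
# and a paired t-test on the 0-10 approval weights.
import numpy as np
from scipy.stats import binomtest, ttest_rel

rng = np.random.default_rng(0)
n = 200

# Paired binary approvals (1 = approve) per respondent; approval
# probabilities roughly match the reported overall rates
clinical = rng.binomial(1, 0.935, n)
pco = rng.binomial(1, 0.74, n)

# Exact McNemar test: only discordant pairs carry information
b = int(np.sum((clinical == 1) & (pco == 0)))  # approve clinical, not PCO
c = int(np.sum((clinical == 0) & (pco == 1)))  # approve PCO, not clinical
p_mcnemar = binomtest(min(b, c), b + c, 0.5).pvalue

# Paired t-test on approval weights (0-10 scale), clipped to the scale range
w_clinical = np.clip(rng.normal(7.2, 2.7, n), 0, 10)
w_pco = np.clip(rng.normal(5.1, 3.5, n), 0, 10)
t_stat, p_ttest = ttest_rel(w_clinical, w_pco)
```

With differences of this magnitude at n = 200, both tests yield small P-values, mirroring the pattern of the reported comparisons.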
RESULTS
Characteristics of Survey Respondents
The summarized characteristics of the 200 survey respondents are presented in Table 2. The distribution of respondents across the stakeholder categories was fairly balanced: industry/developer participants and medical practitioners/doctors comprised 32% and 30% of the respondents, respectively, while government health personnel accounted for a smaller proportion (16%) than the other categories. The complete survey results, independently compiled by Hankook Research and including aspects not featured in the main paper, are available in the Supplements (in Korean).
Table 2. Characteristics of survey respondents.
| Characteristic | n (%) |
|---|---|
| Sex | |
| Male | 93 (46.5) |
| Female | 107 (53.5) |
| Age, yr | |
| 20–39 | 96 (48) |
| 40–49 | 63 (31.5) |
| ≥ 50 | 41 (20.5) |
| Stakeholder category | |
| Patient/patient representative | 44 (22) |
| Industry/developer | 64 (32) |
| Medical practitioner/doctor | 60 (30) |
| Government health personnel* | 32 (16) |
| Experience in AI in medicine, yr | |
| < 3 | 101 (50.5) |
| ≥ 3 and < 6 | 48 (24) |
| ≥ 6 and < 9 | 28 (14) |
| ≥ 9 | 23 (11.5) |
| All | 200 (100) |
*Encompassing experts in government health policy/administration from the Ministry of Food and Drug Safety, National Evidence-based Healthcare Collaborating Agency, or Health Insurance and Review Assessment Service.
AI = artificial intelligence
Experience with PCOs in the Evaluation of Medical AI Technology
The experience of the respondents with PCOs in the evaluation of medical AI technology is outlined in Table 3. Overall, respondents exhibited a low level of experience, with only 7% (14/200) having had direct experience and 10% (20/200) having had any experience (either direct or indirect). When individual stakeholder groups were considered separately, all groups—except for government health personnel—demonstrated low levels of experience. Government health personnel, who likely had work-related encounters with PCOs, exhibited a relatively higher level of experience (direct experience being 21.9%).
Table 3. Respondents’ experience with patient-centered outcomes in the evaluation of medical AI technology.
| Stakeholder group | Direct experience | Acquaintance without direct experience | No acquaintance |
|---|---|---|---|
| Patient/patient representative (n = 44) | 2.3 (1/44) | 4.5 (2/44) | 93.2 (41/44) |
| Industry/developer (n = 64) | 6.3 (4/64) | 1.6 (1/64) | 92.2 (59/64) |
| Medical practitioner/doctor (n = 60) | 3.3 (2/60) | 5 (3/60) | 91.7 (55/60) |
| Government health personnel* (n = 32) | 21.9 (7/32) | 0 (0/32) | 78.1 (25/32) |
| All (n = 200) | 7 (14/200) | 3 (6/200) | 90 (180/200) |
Data are presented as the percentage of respondents in each row, with the nominal value in parentheses. The sum of percentages may not be exactly 100% due to rounding.
*Encompassing experts in government health policy/administration from the Ministry of Food and Drug Safety, National Evidence-based Healthcare Collaborating Agency, or Health Insurance and Review Assessment Service.
AI = artificial intelligence
For those without prior exposure to PCOs in the evaluation of medical AI technology, 85.6% (154/180) believed that PCOs would be important in future AI medical technology assessments. Specifically, 78.0% (32/41) of patients/patient representatives, 86.4% (51/59) of industry/developers, 94.5% (52/55) of medical practitioners/doctors, and 76% (19/25) of government health personnel expressed this view.
Eligibility for Insurance Coverage of AI: PCOs vs. Other Value Elements
The percentage of respondents who expressed agreement with granting AI technology the coverage provided by the National Health Insurance, based on the specified value elements, is detailed in Table 4. The findings reveal that when AI is proven to have positive effects or benefits, the overall approval rate for non-clinical PCOs was 74% (148/200), a figure significantly lower than the corresponding rates for other value elements (P ≤ 0.034). This trend remains consistent across all individual stakeholder groups. It is worth noting that conducting robust statistical comparisons in individual stakeholder groups was not feasible due to the limited sample size in each group. Nevertheless, based on the sample values, respondents exhibited the lowest rate of agreement with insurance coverage for non-clinical PCOs, ranging from 66.7%–79.5%. By contrast, the results were notably more favorable for clinical outcomes, recording an overall approval rate of 93.5% (187/200) and ranging from 90.6%–97.7% across individual stakeholder groups.
Table 4. Respondents’ approval of coverage of AI technology by the National Health Insurance when AI has proven positive effects or benefits, categorized according to the value elements provided by AI.
| Stakeholder group | Clinical outcomes | P* | Economic aspects | P* | Organizational aspects | P* | Non-clinical PCOs |
|---|---|---|---|---|---|---|---|
| Patient/patient representative (n = 44) | 97.7 (43/44) | 0.013 | 86.4 (38/44) | 0.371 | 84.1 (37/44) | 0.724 | 79.5 (35/44) |
| Industry/developer (n = 64) | 92.2 (59/64) | 0.027 | 90.6 (58/64) | 0.080 | 81.3 (52/64) | 0.803 | 78.1 (50/64) |
| Medical practitioner/doctor (n = 60) | 93.3 (56/60) | < 0.001 | 83.3 (50/60) | 0.044 | 86.7 (52/60) | 0.031 | 66.7 (40/60) |
| Government health personnel† (n = 32) | 90.6 (29/32) | 0.078 | 75 (24/32) | 1.000 | 75 (24/32) | 1.000 | 71.9 (23/32) |
| All (n = 200) | 93.5 (187/200) | < 0.001 | 85 (170/200) | 0.003 | 82.5 (165/200) | 0.034 | 74 (148/200) |
Data are presented as the percentage of respondents in each row who expressed agreement with the insurance coverage, with the nominal value in parentheses.
*Comparison with non-clinical PCOs, †Encompassing experts in government health policy/administration from the Ministry of Food and Drug Safety, National Evidence-based Healthcare Collaborating Agency, or Health Insurance and Review Assessment Service.
AI = artificial intelligence, PCO = patient-centered outcome
Table 5 displays the weights assigned by respondents to each value element, ranging from 0 to 10, with 0 indicating non-approval and 10 indicating the strongest approval for insurance coverage. The overall strength of approval, reflected by the weights, was significantly lower for non-clinical PCOs, with a mean weight ± standard deviation of 5.1 ± 3.5, compared to other value elements (P ≤ 0.036). While robust statistical testing for each stakeholder group was not feasible, the difference remained consistent across all individual stakeholder groups, where the sample mean weight values for non-clinical PCOs were smaller than those for clinical outcomes and economic aspects and either smaller than or equal to those for organizational aspects.
Table 5. The strength of respondents’ approval (ranging from 0 for no approval to 10 for the strongest approval) of coverage of AI technology by the National Health Insurance when AI has proven positive effects or benefits, categorized according to the value elements provided by AI.
| Stakeholder group | Clinical outcomes | P* | Economic aspects | P* | Organizational aspects | P* | Non-clinical PCOs |
|---|---|---|---|---|---|---|---|
| Patient/patient representative (n = 44) | 7.6 ± 2.4 | < 0.001 | 6.1 ± 3.3 | 0.425 | 5.7 ± 3.2 | 1.000 | 5.7 ± 3.6 |
| Industry/developer (n = 64) | 7.0 ± 2.8 | 0.001 | 6.4 ± 2.7 | 0.029 | 5.7 ± 3.3 | 0.454 | 5.3 ± 3.2 |
| Medical practitioner/doctor (n = 60) | 7.2 ± 2.5 | < 0.001 | 5.9 ± 3.1 | 0.009 | 6.4 ± 3.1 | 0.006 | 4.6 ± 3.7 |
| Government health personnel† (n = 32) | 7.0 ± 3.0 | 0.007 | 5.4 ± 3.5 | 0.566 | 5.0 ± 3.5 | 0.919 | 5.0 ± 3.6 |
| All (n = 200) | 7.2 ± 2.7 | < 0.001 | 6.0 ± 3.1 | < 0.001 | 5.8 ± 3.2 | 0.036 | 5.1 ± 3.5 |
Data are presented as the mean weight ± standard deviation.
*Comparison with non-clinical PCOs, †Encompassing experts in government health policy/administration from the Ministry of Food and Drug Safety, National Evidence-based Healthcare Collaborating Agency, or Health Insurance and Review Assessment Service.
AI = artificial intelligence, PCO = patient-centered outcome
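To make the two summary formats concrete, the toy calculation below shows how a single set of 0–10 responses yields both the dichotomous approval rate (as in Table 4) and the mean ± standard deviation weight (as in Table 5). The ten response values are invented, and the assumption that non-approval zeros enter the mean weight is ours.

```python
# Toy illustration (invented responses, not survey data) of how one 0-10
# response per respondent yields both summary statistics: 0 = non-approval,
# 1-10 = approval with increasing strength. Including the zeros in the
# mean weight is an assumption made for this sketch.
import numpy as np

weights = np.array([0, 0, 3, 7, 10, 5, 0, 8, 6, 9])

approval_rate = float(np.mean(weights > 0))   # dichotomous summary
mean_weight = float(weights.mean())           # semi-quantitative summary
sd_weight = float(weights.std(ddof=1))        # sample standard deviation
```

Here 7 of 10 respondents approve (rate 0.7), while the mean weight of 4.8 also reflects how strongly they approve.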
Relative Frequencies of Clinical Research Studies on AI, Examining the Four Value Elements Provided by AI
The supplementary literature search identified 48 eligible studies (Fig. 1) [40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87]. Table 6 shows the count of studies that addressed the four value elements provided by AI. The literature analysis reveals that only a small proportion of studies explored non-clinical PCOs compared to the much larger number of studies investigating clinical outcomes. Specifically, only 10.4% (5/48) of the studies evaluated non-clinical PCOs, all reporting on reduced radiation exposure during CT examinations, whereas 60.4% (29/48) addressed clinical outcomes. The frequency of studies exploring non-clinical PCOs (10.4% [5/48]) closely mirrors the percentage of survey respondents who reported any experience (either direct or indirect) with PCOs in the evaluation of medical AI technology (10% [20/200]).
Fig. 1. Flow diagram for the supplementary literature analysis. KJR = Korean Journal of Radiology.
Table 6. Relative frequencies of clinical research studies conducted by Koreans on AI, examining the four value elements provided by AI.
| Clinical outcomes | Economic aspects | Organizational aspects | Non-clinical PCOs |
|---|---|---|---|
| 29* (60.4) | 1 (2.1) | 5 (10.4) | 5 (10.4) |
Data are presented as the number of studies, with the percentage (of the total 48 studies) in parentheses.
*Four studies addressed post-accuracy outcomes in addition to diagnostic accuracy.
AI = artificial intelligence, PCO = patient-centered outcome
DISCUSSION
This study investigated the opinions of various stakeholders regarding insurance coverage for AI, as categorized according to different value elements provided by AI, with a specific emphasis on PCOs. Our results indicated that there is currently limited demand for insurance coverage for AI technology that yields positive effects or benefits in terms of non-clinical PCOs. The overall approval rate for insurance coverage for non-clinical PCOs was 74% (148/200), a figure significantly lower than the corresponding rate for organizational aspects at 82.5% (165/200), not to mention the rate for clinical outcomes at 93.5% (187/200). Even among patients/patient representatives, the approval rate for non-clinical PCOs was only 79.5% (35/44). Similarly, the semi-quantitative results concerning the strength of respondents’ approval of insurance coverage for AI technology showed that the strength was significantly lower for non-clinical PCOs (5.1 ± 3.5) than for other value elements, and specifically for organizational aspects (5.8 ± 3.2). Given that insurance reimbursement is generally not provided for AI that brings positive effects or benefits in terms of organizational aspects [20,27], these results indicate that granting insurance coverage for improvements in non-clinical PCOs would likely not be warranted at present.
The results align with the observation that PCOs have been neglected in traditional value-based healthcare, where the primary emphasis is on improved clinical outcomes [28]. Perhaps this neglect is related to the consideration of PCOs still being in its early stages, at least in the field of the clinical evaluation of AI in medicine. The first comprehensive attempt to define the PCOs of imaging tests, to which the majority of AI tools currently available after regulatory approval belong, was made only recently [30]. According to a recent systematic evaluation by Pearce et al. [88] of research protocols for clinical trials of AI technology registered in ClinicalTrials.gov up to 2022, patient-reported outcome measures (PROMs) were used as trial endpoints in only 7% of clinical trials for AI technology. This rate falls behind the 17% rate of using PROMs as trial endpoints across all clinical trials registered on ClinicalTrials.gov between 2007 and 2013 [89]. Our survey and supplementary literature analysis reveal similar patterns, showing that only 10% (20/200) of survey respondents reported any experience (either direct or indirect) with PCOs in the evaluation of medical AI technology, and only 10.4% (5/48) of the clinical studies of AI conducted by Korean authors and published in select representative radiology journals addressed non-clinical PCOs.
On the other hand, it is worth mentioning that the differences in approval strength for AI insurance coverage among the value elements were small, at 2.1 or less on the semi-quantitative 0–10 scale. Although interpreting these numerical values precisely is challenging, as the scoring does not conform to a ratio or interval scale, they appear modest. Additionally, despite the lower approval rate for non-clinical PCOs compared to other value elements, a substantial level of approval (74%) was still evident. Considering these results, along with the importance of PCOs in achieving more holistic healthcare and the potential of AI to improve PCOs, further research to accumulate clinical evidence in this area is crucial. Fortunately, data from the systematic analysis by Pearce et al. [88] also indicated rapid growth in the number of trials of AI health technologies incorporating PROMs. With an increasing awareness of PCOs as vital components of healthcare outcomes, and as more data accumulate regarding PCOs associated with AI use, it would be worthwhile to explore whether the perception of the value associated with AI use changes in future research studies.
In contrast to PCOs, insurance coverage for positive effects or benefits in clinical outcomes was essentially universally agreed upon. The approval rate, slightly falling short of the 100% mark, likely reflects the understanding that improvements in diagnostic or predictive performance do not guarantee enhanced ultimate patient outcomes [90,91]. Therefore, data directly demonstrating improvements in clinical patient outcomes with the use of AI are regarded more highly for deciding insurance coverage than data merely indicating improved accuracy [92].
This study has several limitations. First, as the study was designed and conducted in alignment with the coverage provided by the National Health Insurance of the ROK, the results may not be entirely generalizable to other countries. Opinions regarding the different value elements provided by AI may vary according to the healthcare system, including the health insurance system and the sufficiency/scarcity of healthcare resources in a country [93,94,95,96,97,98,99]. Therefore, the results should be interpreted in conjunction with the healthcare system/status in a particular country. Second, our survey was small in scale due to budget constraints and our intention to conduct a pilot study. Fortunately, our survey had a fairly balanced distribution of respondents across the four stakeholder categories. Therefore, we believe it provides useful pilot results. We recommend follow-up research at a larger scale with the accumulation of more experience with PCOs associated with AI. Third, it would have been ideal to analyze the results according to the level of sufficiency/scarcity of healthcare resources, especially human resources, in the respondents’ practice setting, considering the unique value of AI in mimicking and assisting human health professionals. This factor should be considered in any future large-scale studies.
In conclusion, our results indicate that there is currently limited demand for insurance coverage for AI technology that provides positive effects or benefits in terms of non-clinical PCOs. However, our study also revealed that consideration of PCOs is at an early stage in the clinical evaluation of AI in medicine. Therefore, it would be worthwhile to investigate in future research whether opinions regarding the value associated with AI use change as more data accumulate regarding PCOs associated with AI use.
Footnotes
Conflicts of Interest: Hye Young Jang and Seong Ho Park, who serve as Assistant to the Editor and Editor-in-Chief of the Korean Journal of Radiology, respectively, were not involved in the editorial evaluation or decision to publish this article. The remaining authors have declared no conflicts of interest.
Author Contributions:
- Conceptualization: Ah-Ram Sul, Seong Ho Park.
- Funding acquisition: Ah-Ram Sul, Seong Ho Park.
- Methodology: all authors.
- Project Administration: Ah-Ram Sul.
- Supervision: Seong Ho Park.
- Investigation: all authors.
- Data curation: Hoyol Jhang, So Jin Park, Hye Young Jang.
- Data analysis: Hoyol Jhang, So Jin Park, Hye Young Jang.
- Writing—original draft: Hoyol Jhang, So Jin Park.
- Writing—review & editing: Ah-Ram Sul, Hye Young Jang, Seong Ho Park.
Funding Statement: This research was supported by the National Evidence-based Healthcare Collaborating Agency in the Republic of Korea (grant no. NECA-A-23-002).
Availability of Data and Material
The datasets generated or analyzed during the study are included in this published article and its supplements.
Supplement
The Supplement is available with this article at https://doi.org/10.3348/kjr.2023.1281.
References
- 1.American College of Radiology. ACR data science institute AI central database. [accessed on December 23, 2023]. Available at: https://aicentral.acrdsi.org/
- 2.U.S. Food and Drug Administration. Artificial intelligence and machine learning (AI/ML)-enabled medical devices. [accessed on December 23, 2023]. Available at: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
- 3.The Medical Futurist. FDA-approved A.I.-based algorithms. [accessed on December 23, 2023]. Available at: https://medicalfuturist.com/fda-approved-ai-based-algorithms/
- 4.Health AI Register. Products. [accessed on December 23, 2023]. Available at: https://radiology.healthairegister.com/products/
- 5.Muehlematter UJ, Daniore P, Vokinger KN. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015-20): a comparative analysis. Lancet Digit Health. 2021;3:e195–e203. doi: 10.1016/S2589-7500(20)30292-2.
- 6.Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med. 2020;3:118. doi: 10.1038/s41746-020-00324-0.
- 7.Ministry of Food and Drug Safety. [2022 medical device approval report] [accessed on December 23, 2023]. Available at: https://www.mfds.go.kr/brd/m_218/view.do?seq=33531&srchFr=&srchTo=&srchWord=&srchTp=&itm_seq_1=0&it. Korean.
- 8.U.S. Food and Drug Administration. Good machine learning practice for medical device development: guiding principles. [accessed on December 23, 2023]. Available at: https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles
- 9.Omoumi P, Ducarouge A, Tournier A, Harvey H, Kahn CE, Jr, Louvet-de Verchère F, et al. To buy or not to buy-evaluating commercial AI solutions in radiology (the ECLAIR guidelines). Eur Radiol. 2021;31:3786–3796. doi: 10.1007/s00330-020-07684-x.
- 10.Recht MP, Dewey M, Dreyer K, Langlotz C, Niessen W, Prainsack B, et al. Integrating artificial intelligence into the clinical practice of radiology: challenges and recommendations. Eur Radiol. 2020;30:3576–3584. doi: 10.1007/s00330-020-06672-5.
- 11.Filice RW, Mongan J, Kohli MD. Evaluating artificial intelligence systems to guide purchasing decisions. J Am Coll Radiol. 2020;17:1405–1409. doi: 10.1016/j.jacr.2020.09.045.
- 12.Juluru K, Shih HH, Keshava Murthy KN, Elnajjar P, El-Rowmeim A, Roth C, et al. Integrating AI algorithms into the clinical workflow. Radiol Artif Intell. 2021;3:e210013. doi: 10.1148/ryai.2021210013.
- 13.Lee S, Shin HJ, Kim S, Kim EK. Successful implementation of an artificial intelligence-based computer-aided detection system for chest radiography in daily clinical practice. Korean J Radiol. 2022;23:847–852. doi: 10.3348/kjr.2022.0193.
- 14.Choi H, Sunwoo L, Cho SJ, Baik SH, Bae YJ, Choi BS, et al. A nationwide web-based survey of neuroradiologists’ perceptions of artificial intelligence software for neuro-applications in Korea. Korean J Radiol. 2023;24:454–464. doi: 10.3348/kjr.2022.0905.
- 15.Wu K, Wu E, Theodorou B, Liang W, Mack C, Glass L, et al. Characterizing the clinical adoption of medical AI devices through U.S. insurance claims. NEJM AI. 2024;1:AIoa2300030.
- 16.Davenport TH, Glaser JP. Factors governing the adoption of artificial intelligence in healthcare providers. Discov Health Syst. 2022;1:4. doi: 10.1007/s44250-022-00004-8.
- 17.Rebitzer JB, Rebitzer RS. AI adoption in U.S. health care won’t be easy. [accessed on December 23, 2023]. Available at: https://hbr.org/2023/09/ai-adoption-in-u-s-health-care-wont-be-easy
- 18.Wu G, Segovis CS, Nicola LP, Chen MM. Current reimbursement landscape of artificial intelligence. J Am Coll Radiol. 2023;20:957–961. doi: 10.1016/j.jacr.2023.07.018.
- 19.Health Insurance Review and Assessment Service. [Guidelines for the evaluation of innovative medical technologies for coverage by national health insurance: artificial intelligence-based innovative medical technologies] [accessed on December 23, 2023]. Available at: https://www.hira.or.kr/bbs/157/2023/08/BZ202308259003306.pdf. Korean.
- 20.Lobig F, Subramanian D, Blankenburg M, Sharma A, Variyar A, Butler O. To pay or not to pay for artificial intelligence applications in radiology. NPJ Digit Med. 2023;6:117. doi: 10.1038/s41746-023-00861-4.
- 21.Park SH, Park CM, Choi JI. Health insurance coverage for artificial intelligence-based medical technologies: focus on radiology. J Korean Med Assoc. 2021;64:648–653.
- 22.Parikh RB, Helmchen LA. Paying for artificial intelligence in medicine. NPJ Digit Med. 2022;5:63. doi: 10.1038/s41746-022-00609-6.
- 23.Park SH, Han K. Methodologic guide for evaluating clinical performance and effect of artificial intelligence technology for medical diagnosis and prediction. Radiology. 2018;286:800–809. doi: 10.1148/radiol.2017171920.
- 24.NEJM Catalyst. What is value-based healthcare? [accessed on December 23, 2023]. Available at: https://catalyst.nejm.org/doi/full/10.1056/CAT.17.0558
- 25.Caldonazzi N, Rizzo PC, Eccher A, Girolami I, Fanelli GN, Naccarato AG, et al. Value of artificial intelligence in evaluating lymph node metastases. Cancers (Basel). 2023;15:2491. doi: 10.3390/cancers15092491.
- 26.Senior K. NHS embraces AI-assisted radiotherapy technology. Lancet Oncol. 2023;24:e330. doi: 10.1016/S1470-2045(23)00353-4.
- 27.Schoppe K. Artificial intelligence: who pays and how? J Am Coll Radiol. 2018;15:1240–1242. doi: 10.1016/j.jacr.2018.05.036.
- 28.Kidanemariam M, Pieterse AH, van Staalduinen DJ, Bos WJW, Stiggelbout AM. Does value-based healthcare support patient-centred care? A scoping review of the evidence. BMJ Open. 2023;13:e070193. doi: 10.1136/bmjopen-2022-070193.
- 29.Maruszczyk K, Aiyegbusi OL, Torlinska B, Collis P, Keeley T, Calvert MJ. Systematic review of guidance for the collection and use of patient-reported outcomes in real-world evidence generation to support regulation, reimbursement and health policy. J Patient Rep Outcomes. 2022;6:57. doi: 10.1186/s41687-022-00466-7.
- 30.Thompson MJ, Suchsland MZ, Hardy V, Lavallee DC, Lord S, Devine EB, et al. Patient-centred outcomes of imaging tests: recommendations for patients, clinicians and researchers. BMJ Qual Saf. 2023;32:536–545. doi: 10.1136/bmjqs-2021-013311.
- 31.Jung KH. Uncover this tech term: foundation model. Korean J Radiol. 2023;24:1038–1041. doi: 10.3348/kjr.2023.0790.
- 32.Hong GS, Jang M, Kyung S, Cho K, Jeong J, Lee GY, et al. Overcoming the challenges in the development and implementation of artificial intelligence in radiology: a comprehensive review of solutions beyond supervised learning. Korean J Radiol. 2023;24:1061–1080. doi: 10.3348/kjr.2023.0393.
- 33.Hwang SI, Lim JS, Lee RW, Matsui Y, Iguchi T, Hiraki T, et al. Is ChatGPT a “fire of prometheus” for non-native English-speaking researchers in academic writing? Korean J Radiol. 2023;24:952–959. doi: 10.3348/kjr.2023.0773.
- 34.Amin K, Khosla P, Doshi R, Chheang S, Forman HP. Artificial intelligence to improve patient understanding of radiology reports. Yale J Biol Med. 2023;96:407–417. doi: 10.59249/NKOY5498.
- 35.Giorgino R, Alessandri-Bonetti M, Luca A, Migliorini F, Rossi N, Peretti GM, et al. ChatGPT in orthopedics: a narrative review exploring the potential of artificial intelligence in orthopedic practice. Front Surg. 2023;10:1284015. doi: 10.3389/fsurg.2023.1284015.
- 36.Park SH, Sul AR, Ko Y, Jang HY, Lee JG. Radiologist’s guide to evaluating publications of clinical research on AI: how we do it. Radiology. 2023;308:e230288. doi: 10.1148/radiol.230288.
- 37.Fasterholdt I, Kjølhede T, Naghavi-Behzad M, Schmidt T, Rautalammi QTS, Hildebrandt MG, et al. Model for ASsessing the value of Artificial Intelligence in medical imaging (MAS-AI). Int J Technol Assess Health Care. 2022;38:e74. doi: 10.1017/S0266462322000551.
- 38.Kocak B, Chepelev LL, Chu LC, Cuocolo R, Kelly BS, Seeböck P, et al. Assessment of radiomics research (ARISE): a brief guide for authors, reviewers, and readers from the Scientific Editorial Board of European Radiology. Eur Radiol. 2023;33:7556–7560. doi: 10.1007/s00330-023-09768-w.
- 39.Park HY, Suh CH, Kim SO. Use of “diagnostic yield” in imaging research reports: results from articles published in two general radiology journals. Korean J Radiol. 2022;23:1290–1300. doi: 10.3348/kjr.2022.0741.
- 40.Ahn Y, Lee SM, Noh HN, Kim W, Choe J, Do KH, et al. Use of a commercially available deep learning algorithm to measure the solid portions of lung cancer manifesting as subsolid lesions at CT: comparisons with radiologists and invasive component size at pathologic examination. Radiology. 2021;299:202–210. doi: 10.1148/radiol.2021202803.
- 41.Hong JH, Jung JY, Jo A, Nam Y, Pak S, Lee SY, et al. Development and validation of a radiomics model for differentiating bone islands and osteoblastic bone metastases at abdominal CT. Radiology. 2021;299:626–632. doi: 10.1148/radiol.2021203783.
- 42.Hwang EJ, Lee JS, Lee JH, Lim WH, Kim JH, Choi KS, et al. Deep learning for detection of pulmonary metastasis on chest radiographs. Radiology. 2021;301:455–463. doi: 10.1148/radiol.2021210578.
- 43.Kang NG, Suh YJ, Han K, Kim YJ, Choi BW. Performance of prediction models for diagnosing severe aortic stenosis based on aortic valve calcium on cardiac computed tomography: incorporation of radiomics and machine learning. Korean J Radiol. 2021;22:334–343. doi: 10.3348/kjr.2020.0099.
- 44.Kim DW, Ha J, Lee SS, Kwon JH, Kim NY, Sung YS, et al. Population-based and personalized reference intervals for liver and spleen volumes in healthy individuals and those with viral hepatitis. Radiology. 2021;301:339–347. doi: 10.1148/radiol.2021204183.
- 45.Kim JH, Yoon HJ, Lee E, Kim I, Cha YK, Bak SH. Validation of deep-learning image reconstruction for low-dose chest computed tomography scan: emphasis on image quality and noise. Korean J Radiol. 2021;22:131–138. doi: 10.3348/kjr.2020.0116.
- 46.Kim K, Kim S, Han K, Bae H, Shin J, Lim JS. Diagnostic performance of deep learning-based lesion detection algorithm in CT for detecting hepatic metastasis from colorectal cancer. Korean J Radiol. 2021;22:912–921. doi: 10.3348/kjr.2020.0447.
- 47.Kim M, Kim HS, Kim HJ, Park JE, Park SY, Kim YH, et al. Thin-slice pituitary MRI with deep learning-based reconstruction: diagnostic performance in a postoperative setting. Radiology. 2021;298:114–122. doi: 10.1148/radiol.2020200723.
- 48.Kim UH, Kim MY, Park EA, Lee W, Lim WH, Kim HL, et al. Deep learning-based algorithm for the detection and characterization of MRI safety of cardiac implantable electronic devices on chest radiographs. Korean J Radiol. 2021;22:1918–1928. doi: 10.3348/kjr.2021.0201.
- 49.Lee JG, Kim H, Kang H, Koo HJ, Kang JW, Kim YH, et al. Fully automatic coronary calcium score software empowered by artificial intelligence technology: validation study using three CT cohorts. Korean J Radiol. 2021;22:1764–1776. doi: 10.3348/kjr.2021.0148.
- 50.Lee KC, Lee KH, Kang CH, Ahn KS, Chung LY, Lee JJ, et al. Clinical validation of a deep learning-based hybrid (Greulich-Pyle and modified Tanner-Whitehouse) method for bone age assessment. Korean J Radiol. 2021;22:2017–2025. doi: 10.3348/kjr.2020.1468.
- 51.Lee S, Yim JJ, Kwak N, Lee YJ, Lee JK, Lee JY, et al. Deep learning to determine the activity of pulmonary tuberculosis on chest radiographs. Radiology. 2021;301:435–442. doi: 10.1148/radiol.2021210063.
- 52.Park HS, Jeon K, Cho YJ, Kim SW, Lee SB, Choi G, et al. Diagnostic performance of a new convolutional neural network algorithm for detecting developmental dysplasia of the hip on anteroposterior radiographs. Korean J Radiol. 2021;22:612–623. doi: 10.3348/kjr.2020.0051.
- 53.Shin NY, Bang M, Yoo SW, Kim JS, Yun E, Yoon U, et al. Cortical thickness from MRI to predict conversion from mild cognitive impairment to dementia in Parkinson disease: a machine learning-based model. Radiology. 2021;300:390–399. doi: 10.1148/radiol.2021203383.
- 54.Sung J, Park S, Lee SM, Bae W, Park B, Jung E, et al. Added value of deep learning-based detection system for multiple major findings on chest radiographs: a randomized crossover study. Radiology. 2021;299:450–459. doi: 10.1148/radiol.2021202818.
- 55.Yeoh H, Hong SH, Ahn C, Choi JY, Chae HD, Yoo HJ, et al. Deep learning algorithm for simultaneous noise reduction and edge sharpening in low-dose CT images: a pilot study using lumbar spine CT. Korean J Radiol. 2021;22:1850–1857. doi: 10.3348/kjr.2021.0140.
- 56.Yoo SJ, Yoon SH, Lee JH, Kim KH, Choi HI, Park SJ, et al. Automated lung segmentation on chest computed tomography images with extensive lung parenchymal abnormalities using a deep neural network. Korean J Radiol. 2021;22:476–488. doi: 10.3348/kjr.2020.0318.
- 57.Bae K, Oh DY, Yun ID, Jeon KN. Bone suppression on chest radiographs for pulmonary nodule detection: comparison between a generative adversarial network and dual-energy subtraction. Korean J Radiol. 2022;23:139–149. doi: 10.3348/kjr.2021.0146.
- 58.Chang S, Han K, Lee S, Yang YJ, Kim PK, Choi BW, et al. Automated measurement of native T1 and extracellular volume fraction in cardiac magnetic resonance imaging using a commercially available deep learning algorithm. Korean J Radiol. 2022;23:1251–1259. doi: 10.3348/kjr.2022.0496.
- 59.Choe J, Hwang HJ, Seo JB, Lee SM, Yun J, Kim MJ, et al. Content-based image retrieval by using deep learning for interstitial lung disease diagnosis with chest CT. Radiology. 2022;302:187–197. doi: 10.1148/radiol.2021204164.
- 60.Choi JW, Cho YJ, Ha JY, Lee YY, Koh SY, Seo JY, et al. Deep learning-assisted diagnosis of pediatric skull fractures on plain radiographs. Korean J Radiol. 2022;23:343–354. doi: 10.3348/kjr.2021.0449.
- 61.Heo S, Lee SS, Kim SY, Lim YS, Park HJ, Yoon JS, et al. Prediction of decompensation and death in advanced chronic liver disease using deep learning analysis of gadoxetic acid-enhanced MRI. Korean J Radiol. 2022;23:1269–1280. doi: 10.3348/kjr.2022.0494.
- 62.Hong W, Hwang EJ, Lee JH, Park J, Goo JM, Park CM. Deep learning for detecting pneumothorax on chest radiographs after needle biopsy: clinical implementation. Radiology. 2022;303:433–441. doi: 10.1148/radiol.211706.
- 63.Kim YS, Jang MJ, Lee SH, Kim SY, Ha SM, Kwon BR, et al. Use of artificial intelligence for reducing unnecessary recalls at screening mammography: a simulation study. Korean J Radiol. 2022;23:1241–1250. doi: 10.3348/kjr.2022.0263.
- 64.Lee JH, Kim KH, Lee EH, Ahn JS, Ryu JK, Park YM, et al. Improving the performance of radiologists using artificial intelligence-based detection support software for mammography: a multi-reader study. Korean J Radiol. 2022;23:505–516. doi: 10.3348/kjr.2021.0476.
- 65.Lee JH, Lee D, Lu MT, Raghu VK, Park CM, Goo JM, et al. Deep learning to optimize candidate selection for lung cancer CT screening: advancing the 2021 USPSTF recommendations. Radiology. 2022;305:209–218. doi: 10.1148/radiol.212877.
- 66.Nam JG, Kang HR, Lee SM, Kim H, Rhee C, Goo JM, et al. Deep learning prediction of survival in patients with chronic obstructive pulmonary disease using chest radiographs. Radiology. 2022;305:199–208. doi: 10.1148/radiol.212071.
- 67.Nam JG, Park S, Park CM, Jeon YK, Chung DH, Goo JM, et al. Histopathologic basis for a chest CT deep learning survival prediction model in patients with lung adenocarcinoma. Radiology. 2022;305:441–451. doi: 10.1148/radiol.213262.
- 68.Otgonbaatar C, Ryu JK, Shin J, Woo JY, Seo JW, Shim H, et al. Improvement in image quality and visibility of coronary arteries, stents, and valve structures on CT angiography by deep learning reconstruction. Korean J Radiol. 2022;23:1044–1054. doi: 10.3348/kjr.2022.0127.
- 69.Park HJ, Yoon JS, Lee SS, Suk HI, Park B, Sung YS, et al. Deep learning-based assessment of functional liver capacity using gadoxetic acid-enhanced hepatobiliary phase MRI. Korean J Radiol. 2022;23:720–731. doi: 10.3348/kjr.2021.0892.
- 70.Park J, Shin J, Min IK, Bae H, Kim YE, Chung YE. Image quality and lesion detectability of lower-dose abdominopelvic CT obtained using deep learning image reconstruction. Korean J Radiol. 2022;23:402–412. doi: 10.3348/kjr.2021.0683.
- 71.Park JH, Park I, Han K, Yoon J, Sim Y, Kim SJ, et al. Feasibility of deep learning-based analysis of auscultation for screening significant stenosis of native arteriovenous fistula for hemodialysis requiring angioplasty. Korean J Radiol. 2022;23:949–958. doi: 10.3348/kjr.2022.0364.
- 72.Son W, Kim M, Hwang JY, Kim YW, Park C, Choo KS, et al. Comparison of a deep learning-based reconstruction algorithm with filtered back projection and iterative reconstruction algorithms for pediatric abdominopelvic CT. Korean J Radiol. 2022;23:752–762. doi: 10.3348/kjr.2021.0466.
- 73.Yoo H, Kim EY, Kim H, Choi YR, Kim MY, Hwang SH, et al. Artificial intelligence-based identification of normal chest radiographs: a simulation study in a multicenter health screening cohort. Korean J Radiol. 2022;23:1009–1018. doi: 10.3348/kjr.2022.0189.
- 74.Chae KJ, Lim S, Seo JB, Hwang HJ, Choi H, Lynch D, et al. Interstitial lung abnormalities at CT in the Korean National Lung Cancer Screening Program: prevalence and deep learning-based texture analysis. Radiology. 2023;307:e222828. doi: 10.1148/radiol.222828.
- 75.Hwang EJ, Goo JM, Nam JG, Park CM, Hong KJ, Kim KH. Conventional versus artificial intelligence-assisted interpretation of chest radiographs in patients with acute respiratory symptoms in emergency department: a pragmatic randomized clinical trial. Korean J Radiol. 2023;24:259–270. doi: 10.3348/kjr.2022.0651.
- 76.Hwang HJ, Kim H, Seo JB, Ye JC, Oh G, Lee SM, et al. Generative adversarial network-based image conversion among different computed tomography protocols and vendors: effects on accuracy and variability in quantifying regional disease patterns of interstitial lung disease. Korean J Radiol. 2023;24:807–820. doi: 10.3348/kjr.2023.0088.
- 77.Jeon SK, Lee JM, Joo I, Yoon JH, Lee G. Two-dimensional convolutional neural network using quantitative US for noninvasive assessment of hepatic steatosis in NAFLD. Radiology. 2023;307:e221510. doi: 10.1148/radiol.221510.
- 78.Kim H, Jin KN, Yoo SJ, Lee CH, Lee SM, Hong H, et al. Deep learning for estimating lung capacity on chest radiographs predicts survival in idiopathic pulmonary fibrosis. Radiology. 2023;306:e220292. doi: 10.1148/radiol.220292.
- 79.Kim M, Lee SM, Son IT, Park T, Oh BY. Prognostic value of artificial intelligence-driven, computed tomography-based, volumetric assessment of the volume and density of muscle in patients with colon cancer. Korean J Radiol. 2023;24:849–859. doi: 10.3348/kjr.2023.0109.
- 80.Kim PH, Yoon HM, Kim JR, Hwang JY, Choi JH, Hwang J, et al. Bone age assessment using artificial intelligence in Korean pediatric population: a comparison of deep-learning models trained with healthy chronological and Greulich-Pyle ages as labels. Korean J Radiol. 2023;24:1151–1163. doi: 10.3348/kjr.2023.0092.
- 81.Lee JH, Hong H, Nam G, Hwang EJ, Park CM. Effect of human-AI interaction on detection of malignant lung nodules on chest radiographs. Radiology. 2023;307:e222976. doi: 10.1148/radiol.222976.
- 82.Nam JG, Hwang EJ, Kim J, Park N, Lee EH, Kim HJ, et al. AI improves nodule detection on chest radiographs in a health screening population: a randomized controlled trial. Radiology. 2023;307:e221894. doi: 10.1148/radiol.221894.
- 83.Park H, Yun J, Lee SM, Hwang HJ, Seo JB, Jung YJ, et al. Deep learning-based approach to predict pulmonary function at chest CT. Radiology. 2023;307:e221488. doi: 10.1148/radiol.221488.
- 84.Park HJ, Shin K, You MW, Kyung SG, Kim SY, Park SH, et al. Deep learning-based detection of solid and cystic pancreatic neoplasms at contrast-enhanced CT. Radiology. 2023;306:140–149. doi: 10.1148/radiol.220171.
- 85.Park S, Ye JC, Lee ES, Cho G, Yoon JW, Choi JH, et al. Deep learning-enabled detection of pneumoperitoneum in supine and erect abdominal radiography: modeling using transfer learning and semi-supervised learning. Korean J Radiol. 2023;24:541–552. doi: 10.3348/kjr.2022.1032.
- 86.Woo C, Jo KH, Sohn B, Park K, Cho H, Kang WJ, et al. Development and testing of a machine learning model using 18F-fluorodeoxyglucose PET/CT-derived metabolic parameters to classify human papillomavirus status in oropharyngeal squamous carcinoma. Korean J Radiol. 2023;24:51–61. doi: 10.3348/kjr.2022.0397.
- 87.Yun J, Ahn Y, Cho K, Oh SY, Lee SM, Kim N, et al. Deep learning for automated triaging of stable chest radiographs in a follow-up setting. Radiology. 2023;309:e230606. doi: 10.1148/radiol.230606.
- 88.Pearce FJ, Cruz Rivera S, Liu X, Manna E, Denniston AK, Calvert MJ. The role of patient-reported outcome measures in trials of artificial intelligence health technologies: a systematic evaluation of ClinicalTrials.gov records (1997-2022). Lancet Digit Health. 2023;5:e160–e167. doi: 10.1016/S2589-7500(22)00249-7.
- 89.Vodicka E, Kim K, Devine EB, Gnanasakthy A, Scoggins JF, Patrick DL. Inclusion of patient-reported outcome measures in registered clinical trials: evidence from ClinicalTrials.gov (2007-2013). Contemp Clin Trials. 2015;43:1–9. doi: 10.1016/j.cct.2015.04.004.
- 90.Park SH, Choi JI, Fournier L, Vasey B. Randomized clinical trials of artificial intelligence in medicine: why, when, and how? Korean J Radiol. 2022;23:1119–1125. doi: 10.3348/kjr.2022.0834.
- 91.Park SH, Sul AR, Han K, Sung YS. How to determine if one diagnostic method, such as an artificial intelligence model, is superior to another: beyond performance metrics. Korean J Radiol. 2023;24:601–605. doi: 10.3348/kjr.2023.0448.
- 92.Park SH, Choi J, Byeon JS. Key principles of clinical validation, device approval, and insurance coverage decisions of artificial intelligence. Korean J Radiol. 2021;22:442–453. doi: 10.3348/kjr.2021.0048.
- 93.Bold B, Lkhagvajav Z, Dorjsuren B. Role of artificial intelligence in achieving universal health coverage: a Mongolian perspective. Korean J Radiol. 2023;24:821–824. doi: 10.3348/kjr.2023.0668.
- 94.Dagvasumberel G, Bold B, Dagvasumberel M. The growing problem of radiologist shortage: Mongolia’s perspective. Korean J Radiol. 2023;24:938–940. doi: 10.3348/kjr.2023.0787.
- 95.Goh CXY, Ho FCH. The growing problem of radiologist shortages: perspectives from Singapore. Korean J Radiol. 2023;24:1176–1178. doi: 10.3348/kjr.2023.0966.
- 96.Lai AYT. The growing problem of radiologist shortage: Hong Kong’s perspective. Korean J Radiol. 2023;24:931–932. doi: 10.3348/kjr.2023.0838.
- 97.Meng F, Zhan L, Liu S, Zhang H. The growing problem of radiologist shortage: China’s perspective. Korean J Radiol. 2023;24:1046–1048. doi: 10.3348/kjr.2023.0839.
- 98.Shen SH, Chiou HJ. The growing problem of radiologist shortage: Taiwan’s perspective. Korean J Radiol. 2023;24:1049–1051. doi: 10.3348/kjr.2023.0843.
- 99.Vu LD, Nguyen HTT, Nguyen TN, Pham TM. The growing problem of radiologist shortage: Vietnam’s perspectives. Korean J Radiol. 2023;24:1054–1056. doi: 10.3348/kjr.2023.0829.