Abstract
Background
Innovative assessment strategies are essential for determining clinical understanding in the evolving field of health profession education. Key feature questions (KFQs) have been developed to assess students’ clinical understanding. The purpose of this study is to determine the effectiveness of image-based key feature questions (IB-KFQs) compared with traditional multiple-choice questions (MCQs) in radiology examinations. Additionally, this study aims to examine the correlation between the scores obtained in the two test formats.
Method
This quasi-experimental correlational study was conducted from September to December 2021 at a public medical university in Karachi, Pakistan. Thirty radiology residents from various training years participated in the study. Each resident completed a comprehensive written assessment comprising 50 MCQs and 50 IB-KFQs as part of their internal evaluation at the end of a module.
Results
Of the thirty residents, 28 (93.3%) were female. Cronbach’s alpha, a measure of reliability, was 0.944 for the MCQs and 0.881 for the IB-KFQs. Spearman’s correlation coefficient revealed a positive correlation between the MCQ and IB-KFQ scores (rho = 0.823, p < 0.001). The mean scores were similar for the IB-KFQs (29.24 ± 6.31) and MCQs (28.93 ± 11.41).
Conclusion
The findings of this study indicate that incorporating IB-KFQs alongside MCQs in written assessments of radiology residents is feasible. IB-KFQs offer a focused evaluation of critical skills such as film analysis, interpretation, and report writing. By complementing traditional MCQs, IB-KFQs enhance the assessment process.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12909-024-06487-8.
Keywords: Assessment Strategies, Clinical Competence, Decision-making, Image-Based Key Feature Questions, Multiple Choice Questions, Radiology Residents
Introduction
Assessment procedures constitute the cornerstone for the evaluation of student performance. Innovative assessment strategies are essential for determining clinical understanding in the evolving field of health profession education. Diverse testing approaches ensure that the standard of education is maintained [1]. Traditional assessment methods are often inadequate for evaluating the complex decision-making skills required in clinical scenarios [2]. To fill this gap, key feature questions (KFQs) have been developed as assessment tools [3].
Bordage and Page first introduced the concept of key features in 1987, defining a key feature as a critical step in solving a clinical problem [4]. Their research, along with subsequent studies by Hrynchak et al., highlighted that every clinical scenario contains a few vital elements essential for diagnosis. KFQs target these elements and focus on the points where candidates are most likely to make errors [5]. This method ensures that the assessment goes beyond rote memorization and instead focuses on applying knowledge in a clinically relevant context. It targets the critical aspects of clinical scenarios, optimizing testing time and resources [6]. Additionally, KFQs offer a more realistic and practical assessment than traditional methods, serving as a valuable self-assessment tool for competence. By simulating real clinical scenarios, they require candidates to demonstrate their diagnostic skills in a way that mirrors real practice [7].
In radiology, image-based KFQs (IB-KFQs) integrate clinical reasoning and image interpretation skills and assess a radiologist's ability to identify key diagnostic features within images. This is particularly beneficial in radiology, where accurately identifying essential findings on imaging can significantly impact patient outcomes. This approach has the potential to enhance the validity and reliability of radiology assessments, providing a more accurate measure of a postgraduate's readiness for clinical challenges [8].
A comprehensive framework addressing the knowledge and interpretation skills essential for radiology has been developed, which emphasizes the importance of knowledge, perception, analysis, and synthesis—all crucial aspects of image interpretation and report writing. Radiology residents face the challenge of simultaneously developing these skills. IB-KFQs offer a promising solution, with the potential to effectively track the development of both perceptual and interpretive skills in radiology residents [9].
Assessment strategies must evolve with technological innovations as the field of radiology continues to advance. Any attempt to develop a generic assessment is likely to fail, as clinical reasoning and problem solving are not generic [2]. Therefore, the integration of image-based KFQs into the assessment framework for radiology postgraduates represents a significant step forward in aligning educational outcomes with clinical competencies.
Radiological assessment clearly reveals a link between its primary goal—accurate diagnosis—and improved patient health [10]. Importantly, the patient is ultimately the central focus and beneficiary of healthcare services. Effective radiology practices not only support hospitals and health services but also positively impact the overall economy [11]. Therefore, high-quality teaching, learning, and assessment methods are essential for developing competent and compassionate radiologists who are proficient in film interpretation and report writing. This study aims to contribute to this goal by creating an exam format that incorporates image-based key feature questions, thereby evaluating students’ clinical reasoning and decision-making skills through radiology images and key features and ultimately contributing to improved patient care.
Method
The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the institutional review board of Dow University of Health Sciences (IRB2085/DUHS/Approval/2021/423). A quasi-experimental design with universal sampling was used. The study was carried out among all residents in years one through four of a public sector university’s radiology residency fellowship program. A total of 30 radiology residents participated in the study.
All participants completed both assessment formats sequentially, the MCQs followed by the IB-KFQs. The topics covered in the MCQs and IB-KFQs overlapped within core radiology areas, including chest, abdominal, and musculoskeletal imaging, ensuring that the knowledge assessed by each question type was comparable. Each test comprised 50 questions and lasted 60 minutes. Descriptive statistics were employed to calculate the mean, standard deviation, median, minimum, and maximum scores for both the IB-KFQ and MCQ assessment formats. Internal consistency (reliability) was assessed via Cronbach’s alpha. A passing score criterion of 60% was applied to both assessment formats. The mean scores were compared via Student’s t-test, with a p-value of less than 0.05 indicating statistical significance. The correlation between the IB-KFQ and MCQ scores was measured via Spearman’s correlation coefficient. Furthermore, the difficulty and discriminatory indices were calculated for each format. The item difficulty index is the percentage of correct responses to a test item, calculated as P = R/T, where P is the item difficulty index, R is the number of correct responses, and T is the total number of responses (both correct and incorrect). The discriminatory index measures an item’s ability to distinguish between high and low achievers: the top and bottom 27% of the class by total exam score are identified, the number of correct responses to the item in the low group is subtracted from that in the high group, and the difference is divided by the number of examinees in each group [12, 13].
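To make the item analysis concrete, the following is a minimal Python sketch of these computations. It is not the authors’ code: the response matrix and score vectors are simulated placeholders, the paired t-test is one reasonable reading of "Student’s t-test" given that the same residents took both tests, and the discrimination index is implemented in its common proportion-difference (top 27% minus bottom 27%) form.

```python
import numpy as np
from scipy import stats

# Hypothetical 0/1 response matrix: rows = 30 residents, columns = 50 items
# (1 = correct). Real scoring data would replace this simulation.
rng = np.random.default_rng(0)
responses = (rng.random((30, 50)) < 0.6).astype(int)

def difficulty_index(responses: np.ndarray) -> np.ndarray:
    """P = R/T: proportion of correct responses per item (often reported as a %)."""
    return responses.mean(axis=0)

def discrimination_index(responses: np.ndarray) -> np.ndarray:
    """D: proportion correct in the top 27% of scorers minus that in the bottom 27%."""
    totals = responses.sum(axis=1)         # total score per examinee
    n = max(1, round(0.27 * len(totals)))  # size of each extreme group
    order = np.argsort(totals)
    low, high = responses[order[:n]], responses[order[-n:]]
    return high.mean(axis=0) - low.mean(axis=0)

def cronbach_alpha(responses: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = responses.shape[1]
    item_var = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

P = difficulty_index(responses) * 100  # difficulty expressed as a percentage
D = discrimination_index(responses)
alpha = cronbach_alpha(responses)

# Score comparison between formats, assuming per-resident totals for each test.
mcq_scores = responses.sum(axis=1)                      # illustrative only
kfq_scores = mcq_scores + rng.integers(-3, 4, size=30)  # illustrative only
t_stat, p_val = stats.ttest_rel(mcq_scores, kfq_scores) # paired t-test
rho, p_rho = stats.spearmanr(mcq_scores, kfq_scores)    # Spearman's rho
```

With actual scoring data in place of the simulated arrays, these functions would yield item-level P and D values of the kind summarized in Table 2 below.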
Design of the KFQ question
In this study, faculty members with more than five years of teaching experience who had completed the Certificate in Health Professions Education (CHPE), along with academic radiologists, were invited to participate. A total of six faculty members agreed to take part. Debriefing sessions and panel discussions were carried out among all the subject experts and educators. Initially, a table of specifications (TOS) was prepared, and IB-KFQs were constructed following the Medical Council of Canada guidelines for key feature questions [14]. The difficulty levels of the MCQs and IB-KFQs were kept comparable, as verified by post hoc analysis of the monthly module tests. The IB-KFQs were prepared, checked, and rechecked against the answer keys. Three practice sessions were conducted at the end of each monthly test to familiarize the faculty and residents with the test format and scoring key before the final test was administered.
Examples of an IB-KFQ and an MCQ are provided in Appendix I.
Results
Thirty residents participated in the study, with a median age of 28 years and a female predominance (93.3%). Internal consistency (Cronbach’s alpha) was 0.944 for the MCQs and 0.881 for the IB-KFQs.
The mean scores obtained by the residents on the MCQs and IB-KFQs, out of a total score of 50, are summarized in Table 1.
Table 1.
Descriptive analysis of scores obtained by residents on the MCQs and IB-KFQs, out of a total score of 50 for each assessment format
| Test Format | Mean score ± SD (Percentage) | Minimum score (Percentage) | Maximum score (Percentage) | Median (IQR) |
|---|---|---|---|---|
| MCQs | 28.93 ± 11.41 (57.86% ± 22.82) | 9.0 (18%) | 45 (90%) | 32 (9–45) |
| IB-KFQs | 29.24 ± 6.31 (58.48% ± 12.62) | 17.8 (35.6%) | 41.6 (83.2%) | 27.7 (17.8–41.6) |
The item analysis for both test formats revealed mean difficulty indices (P) of 57.86 ± 23.26 for the MCQs and 58.49 ± 18.56 for the IB-KFQs, indicating moderate item difficulty on average. The mean discriminatory index (D) values were 0.59 ± 0.30 for the MCQs and 0.30 ± 0.19 for the IB-KFQs, indicating good discrimination between high and low scorers (Table 2).
Table 2.
Item analysis of MCQs and IB-KFQs
| Test Format | Difficulty Index (P), Mean ± SD | Discriminatory Index (D), Mean ± SD |
|---|---|---|
| MCQs | 57.86 ± 23.26 | 0.59 ± 0.30 |
| IB-KFQs | 58.49 ± 18.56 | 0.30 ± 0.19 |
Student’s t-test was applied to the mean scores of the two question types, with a p-value < 0.05 considered significant. The mean IB-KFQ score was comparable to the mean MCQ score (29.24 ± 6.31 vs. 28.93 ± 11.41, respectively; p = 0.896).
The correlation between the IB-KFQ and MCQ scores was measured via Spearman’s correlation coefficient. A positive correlation was observed between the MCQ and IB-KFQ scores (rho = 0.823, p < 0.001).
A moderately positive correlation was observed between the number of years spent in the residency program and both the MCQ and IB-KFQ scores (rho = 0.620, p < 0.001 and rho = 0.594, p = 0.001, respectively).
Discussion
Assessment of learning is integral to the quality of an educational system; therefore, the reliability and validity of the assessment techniques used during learning and evaluation are subjects of ongoing research [15]. The premise behind KFQs is that every case has unique, hallmark features that are the keys to the final diagnosis. These questions are designed to stimulate learners’ analytical skills rather than simple recall [16]. Traditionally, radiology postgraduate training and assessment in our setting have depended on an initial theoretical examination consisting of text-based questions, which cannot assess understanding of pertinent imaging findings. Image-based key feature questions bring real-world diagnostic scenarios into assessments, making them more applicable to practical radiology tasks. Incorporating images in KFQs represents a modern shift in medical education, enhancing assessment fidelity compared with traditional text-based MCQs. This study aimed to evaluate whether image-based key feature questions (IB-KFQs) should be included in the assessment of radiology residents’ clinical reasoning and decision-making (CRDM) skills in interpreting images.
Numerous studies have examined the reliability of KFQs, with varying results. Cronbach’s alpha in our study was 0.88 for the IB-KFQs. This is comparable to the findings of Sharma et al., who reported a reliability of 0.98 among oral pathology students, and Trudel et al., who reported 0.95 among colorectal surgeons [17, 18]. Lower Cronbach’s alpha coefficients have been reported in many studies: Zamani et al. [19] reported a reliability of 0.75 among obstetrics and gynaecology students; Zia et al. [3] reported 0.56 in otolaryngology; and Grumer et al. [20] reported 0.53 in neurology. A possible explanation for the improvement in reliability scores is the development of multiple guidelines for writing key feature questions. Additionally, teaching methodology in health professions education has evolved over the years, with a shift towards clinical reasoning rather than memorization and recall [21].
The mean scores on the MCQs and IB-KFQs were comparable in our study, i.e., 28.9 and 29.2 out of 50, respectively. The score on the key feature questions was slightly higher, but the difference was statistically insignificant (p = 0.896). The difficulty and discriminatory indices were also comparable, with average difficulty and good discrimination. This finding indicates that the residents adapted easily to the IB-KFQ format. A possible explanation is that the IB-KFQ format resembles the TOACS film viewing and reporting sessions of the radiology fellowship exam. The use of IB-KFQs in radiology testing can help train residents to write meaningful radiology reports that concisely and clearly convey all relevant information to the clinical practitioner. Our results are in concordance with other studies reporting slightly higher scores on key feature questions [17]. This difference was statistically significant in the study conducted by Zamani et al. among midwifery students (p = 0.005); they concluded that clinical reasoning cannot be adequately assessed with MCQs and requires novel evaluation tools such as key feature questions [19]. We believe that IB-KFQs can be used in combination with MCQs to provide a more comprehensive evaluation of interpretation skills and the application of knowledge. The results of the present study revealed a positive correlation between the MCQ and IB-KFQ scores.
In a study by Alavi-Moghaddam et al., case-based clinical reasoning was assessed via pre- and post-test questions, including KFQs. They concluded that case-based teaching led to improvements in scores [22]. Our questions were also designed via a case-based approach, with sequential step-by-step questions detailing the clinical scenario. We believe that pre- and post-assessment comparisons would offer valuable insight into the effectiveness of IB-KFQs and represent a potential direction for future research.
The main aim of assessment is to determine whether the resident is prepared to address real-life scenarios by integrating the history, physical examination findings, laboratory investigations, and follow-up radiological tests that lead to a final diagnosis and management plan. A number of studies have concluded that case-based approaches paired with pertinent clinical reasoning and key feature questions are effective learning and assessment strategies [23].
Given the considerable spread in training years among participants, the moderately positive correlation between the number of residency years and performance on the KFQs is an interesting finding of our study. One possible explanation is that radiology competencies, particularly those related to image interpretation and clinical decision-making, develop progressively over time but are influenced by multiple factors beyond the number of years in residency alone, such as individual learning experiences, case exposure, and practice frequency. Another important consideration is that the modules being tested are first taught by faculty members over a period of one month in the form of lectures, case-based discussions, and interactive film viewing sessions. Peer teaching is also part of our residency training program. Therefore, all residents from year 1 to year 4 are actively taught the topics before being assessed, which brings the junior residents on par with their senior counterparts. It has been shown that allocating dedicated blocks of academic time can not only support research activities but also promote more focused and effective studying [24].
This finding may indicate the potential of KFQs to identify particular strengths or gaps in diagnostic skills at different stages of training, regardless of time spent in the program. Finally, the structure of the residency program itself, such as rotations or the emphasis on specific skills during different training years, might dilute the strength of the correlation. Future studies could control for these variables to better understand the relationship between residency duration and assessment performance.
Our study has several strengths. First, it is the first study in Pakistan to incorporate case-based questions with radiological images and key feature questions for the assessment of radiology residents. This is a unique intervention in our setup that shows promising results. Second, we designed key feature questions by thoroughly studying the available guidelines. The faculty was primed to write key feature questions, which were proofread by senior faculty members with more than 10 years of experience in health professional education. Third, we conducted three practice sessions to familiarize the residents with IB-KFQs. These residents are committed to learning and passing the FCPS exam; this factor reduces careless responses (response bias). Fourth, we compared the IB-KFQ with the MCQ in terms of reliability (Cronbach’s alpha), difficulty, and discriminatory indices, which were comparable.
Some limitations of our study are its limited sample size and single-centre design. Another limitation is gender bias, with a preponderance of female residents. Studies have shown better clinical reasoning skills in female medical students than in their male counterparts [25, 26]. Another limitation is that the images were not viewed on a diagnostic monitor, thus reducing the overall image quality. Furthermore, only limited images were provided for diagnosis, and advanced image editing techniques were not available.
In addition to these limitations, another important concept in radiologic training and diagnosis is perception. Accurate diagnosis is an intricate perceptual and cognitive process in which the radiologist recognizes, evaluates, and analyzes various imaging findings [27, 28]. Therefore, while IB-KFQs and MCQs are efficient and resource-effective for testing analytical skills, they should ideally be supplemented with assessment tools specifically designed to evaluate perception. Incorporating image-based perception assessments could offer a more comprehensive evaluation of residents’ readiness to perform in real-world clinical settings, where both perception and clinical reasoning are essential [29].
In this study, the MCQs and KFQs were aligned to cover similar radiology topics. The MCQs featured scenario-based questions with five answer options requiring a single best choice, whereas the KFQs included images and emphasized key imaging findings. These structural differences, as well as variation in question content, may have influenced our results; however, we tried to keep the difficulty indices of the two tests comparable. Future research could investigate more closely matched structures to better understand how these differences affect performance.
Innovation in the rapidly changing world of radiology calls for re-assessment of existing teaching methods, enhancement of resident education, and enrichment of the overall residency experience [30]. While IB-KFQs show promise in enhancing the authenticity of assessments by incorporating images, the absence of a statistically significant difference in mean scores suggests a need for further exploration. The positive correlation between the two formats shows that participants who performed well in one format tended to perform well in the other, reinforcing the convergent validity of both methods. The moderate difficulty and lower discriminatory index of the IB-KFQs warrant careful attention to question design and content to enhance their capacity to differentiate among levels of competency.
Conclusion
The integration of image-based key feature questions (IB-KFQs) into radiology residency assessments has emerged as a viable extension of the established multiple choice question (MCQ) format. IB-KFQs show promise as tools for evaluating analytical film interpretation and report writing skills. The IB-KFQs are helpful for the training and assessment of radiologists, as they incorporate case-based real-life situations.
Acknowledgements
I would like to acknowledge the faculty members who helped in writing IB-KFQs and the radiology residents who participated in the study.
Abbreviations
- KFQ
Key Feature Questions
- IB-KFQ
Image-Based Key Feature Questions
- MCQ
Multiple Choice Questions
- CRDM
Clinical Reasoning and Decision Making Skills
- TOS
Table of Specification
- CHPE
Certificate in Health Professions Education
Authors’ contributions
NN: Conceived and designed the study, data collection, data analysis, manuscript writing. KH: Study design, data collection, manuscript review. VB: Study design, data analysis, discussion. NR: Data collection, data analysis, manuscript writing. AA: Study design, data analysis, manuscript review.
Funding
No funding was received for this research.
Data availability
The data that support the findings of this study are not openly available due to reasons of sensitivity and are available from the corresponding author upon reasonable request.
Declarations
Ethics approval and consent to participate
The Ethics Committee of Dow University of Health Sciences approved the study (Ref Number: IRB2085/DUHS/Approval/2021/423).
Informed consent was sought from all study participants.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Hasen Y, Mequanint A, Teshome T, Belete A, Bekele D. Analysis of Clinical Skill Performance During Medical Internship at Department of Surgery, Debre Tabor University. Ethiopia Sriwijaya Journal of Surgery. 2024;7(1):611–7. [Google Scholar]
- 2.Saher AS, Ali AM, Amani D, Najwan F. Traditional Versus Authentic Assessments in Higher Education. Pegem Journal of Education and Instruction. 2022;12(1):283–91. [Google Scholar]
- 3.Zia S, Obaid MS, Ahmed J, Qazi S. Key-feature questions for assessment of clinical reasoning: Reliable and valid or not? RMJ. 2019;44(1):189–91. [Google Scholar]
- 4.Page G, Bordage G, Allen T. Developing key-feature problems and examinations to assess clinical decision-making skills. Acad Med. 1995;70(3):194–201. [DOI] [PubMed] [Google Scholar]
- 5.Hrynchak P, Glover Takahashi S, Nayer M. Key-feature questions for assessment of clinical reasoning: a literature review. Med Educ. 2014;48(9):870–83. [DOI] [PubMed] [Google Scholar]
- 6.Nayer M, Glover Takahashi S, Hrynchak P. Twelve tips for developing key-feature questions (KFQ) for effective assessment of clinical reasoning. Med Teach. 2018;40(11):1116–22. [DOI] [PubMed] [Google Scholar]
- 7.Guedjati MR, Bouhidel JO, Benaldjia H, Derdous C. The Medical Residency Examination in the Era of the Overhaul of Medical Study Programs, a Call for Reform. Reflections for New Docimological Orientations. Acta Scientific Medical Sciences (ISSN: 2582–0931). 2024;8(5).
- 8.Teichgräber U, Ingwersen M, Mentzel HJ, et al. Impact of a Heutagogical, Multimedia-Based Teaching Concept to Promote Self-Determined, Cooperative Student Learning in Clinical Radiology. Rofo. 2021;193(6):701–11. 10.1055/a-1313-7924. [DOI] [PubMed] [Google Scholar]
- 9.Williams LH, Carrigan AJ, Mills M, Auffermann WF, Rich AN, Drew T. Characteristics of expert search behavior in volumetric medical image interpretation. J Med Imaging (Bellingham). 2021;8(4): 041208. 10.1117/1.JMI.8.4.041208. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.De Lange T, Møystad A, Torgersen GR. Increasing clinical relevance in oral radiology: Benefits and challenges when implementing digital assessment. Eur J Dent Educ. 2018;22(3):198–208. [DOI] [PubMed] [Google Scholar]
- 11.Hardy M, Johnson L, Sharples R, Boynes S, Irving D. Does radiography advanced practice improve patient outcomes and health service quality? A systematic review. Br J Radiol. 2016;89(1062):20151066. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Sim SM, Rasiah RI. Relationship between item difficulty and discrimination indices in true/false type multiple choice questions of a para-clinical multidisciplinary paper. Ann Acad Med Singapore. 2006;35:67–71. [PubMed]
- 13.Bhattacherjee S, Mukherjee A, Bhandari K, Rout AJ. Evaluation of Multiple-Choice Questions by Item Analysis, from an Online Internal Assessment of 6th Semester Medical Students in a Rural Medical College. West Bengal Indian J Community Med. 2022;47(1):92–5. 10.4103/ijcm.ijcm_1156_21. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Bordage G, Page G. Guidelines for the development of key feature problems and test cases. Medical Council of Canada. 2012.
- 15.UKEssays. Validity and reliability of assessment in medical education. November 2018. Available from: https://www.ukessays.com/essays/psychology/validity-and-reliability-of-assessment-inmedical-education-psychology-essay.php?vref=1. Accessed 5 July 2024.
- 16.Deolia S, Nambiar K, Padole P, Tlau A, Agrawal A, Reche A. Dental Patient’s Knowledge, Attitude and Practice of Tobacco Use and Perception Towards the Role of Dentists in Tobacco Cessation. JODRDMIMS. 2018;2(4).
- 17.Sharma P, Fulzele P, Chaudhary M, Gawande M, Patil S, Hande A. Introduction of Key Feature Problem Based Questions in Assessment of Dental Students. International Journal of Current Research and Review. 2020;12:56–61. [Google Scholar]
- 18.Trudel JL, Bordage G, Downing SM. Reliability and validity of key feature cases for the self-assessment of colon and rectal surgeons. Ann Surg. 2008;248(2):252–8. 10.1097/SLA.0b013e31818233d3. [DOI] [PubMed] [Google Scholar]
- 19.Zamani S, Amini M, Masoumi SZ, Delavari S, Namaki MJ, Kojuri J. The comparison of the key feature of clinical reasoning and multiple choice examinations in clinical decision makings ability. Biomedical Research-India. 2017;28(3):1115–9. [Google Scholar]
- 20.Grumer M, Brüstle P, Lambeck J, Biller S, Brich J. Validation and perception of a key feature problem examination in neurology. PLoS ONE. 2019;14(10): e0224131. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Challa KT, Sayed A, Acharya Y. Modern techniques of teaching and learning in medical education: a descriptive literature review. MedEdPublish. 2021;10. [DOI] [PMC free article] [PubMed]
- 22.Alavi-Moghaddam M, Zeinaddini-Meymand A, Ahmadi S, Shirani A. Teaching clinical reasoning to medical students: A brief report of case-based clinical reasoning approach. J Educ Health Promot. 2024;13:42. Published 2024 Feb 26. 10.4103/jehp.jehp_355_23. [DOI] [PMC free article] [PubMed]
- 23.Wu F, Wang T, Yin D, et al. Application of case-based learning in psychology teaching: a meta-analysis. BMC Med Educ. 2023;23(1):609. Published 2023 Aug 25. 10.1186/s12909-023-04525-5. [DOI] [PMC free article] [PubMed]
- 24.Waite S, Grigorian A, Alexander RG, et al. Analysis of Perceptual Expertise in Radiology - Current Knowledge and a New Perspective. published correction appears in Front Hum Neurosci. 2019 Aug 13;13:272. 10.3389/fnhum.2019.00272. Front Hum Neurosci. 2019;13:213. Published 2019 Jun 25. 10.3389/fnhum.2019.00213. [DOI] [PMC free article] [PubMed]
- 25.Hege I, Hiedl M, Huth K, Kiesewetter J. Differences in clinical reasoning between female and male medical students. Diagnosis. 2023;10(2):100–4. 10.1515/dx-2022-0081. [DOI] [PubMed] [Google Scholar]
- 26.Croskerry P. Individual variability in clinical decision making and diagnosis. InDiagnosis 2017 Sep 19 (pp. 129–158). CRC Press.
- 27.Degnan AJ, Ghobadi EH, Hardy P, et al. Perceptual and Interpretive Error in Diagnostic Radiology-Causes and Potential Solutions. Acad Radiol. 2019;26(6):833–45. 10.1016/j.acra.2018.11.006. [DOI] [PubMed] [Google Scholar]
- 28.Ul Hassan A, Jan S, Akhtar Z, Jan N, Bhat GM, Yusuf A. Assessment measures to determine role of radiology in undergraduate medical education. International Journal of Health Sciences. 2022;6(S9):3191–3198. 10.53730/ijhs.v6nS9.13249.
- 29.Lam CZ, Nguyen HN, Ferguson EC. Radiology Residents’ Satisfaction With Their Training and Education in the United States: Effect of Program Directors, Teaching Faculty, and Other Factors on Program Success. AJR Am J Roentgenol. 2016;206(5):907–16. 10.2214/AJR.15.15020. [DOI] [PubMed] [Google Scholar]
- 30.Halsted MJ, Perry L, Racadio JM, Medina LS, LeMaster T. Changing radiology resident education to meet today’s and tomorrow’s needs. J Am Coll Radiol. 2004;1(9):671–8. 10.1016/j.jacr.2004.04.002. [DOI] [PubMed] [Google Scholar]