Abstract
Background
Educators agree on the importance of assessing the quality of graduate medical education. In the United States, the Accreditation Council for Graduate Medical Education (ACGME) resident survey is an important part of the accreditation process, yet some studies have questioned its validity.
Objective
We assessed the reliability and acceptance of the ACGME-International (ACGME-I) resident survey in the culturally distinct, nonnative English-speaking resident population of Abu Dhabi in the United Arab Emirates.
Methods
A total of 158 residents in ACGME-I accredited institutions in Abu Dhabi received an online link to the ACGME-I survey. Reliability analysis was conducted using the Cronbach α. A focus group was then held with a convenience sample of 25 residents from different institutions and specialties to understand potential challenges encountered by survey participants.
Results
Completed surveys were received from 116 residents (73.4%). The 39 items in the survey demonstrated high reliability, with a Cronbach α of 0.918. Of the 5 subscales, 4 demonstrated acceptable to very good reliability, ranging from 0.72 to 0.888. The subscale “resources” had lower reliability at 0.584. Removal of a single item increased the Cronbach α to a near-acceptable score of 0.670. Focus group results indicated that the survey met standards for readability, length, and time for completion.
Conclusions
The ACGME-I resident survey demonstrates acceptable reliability and validity for measuring the perceptions of residents in an international residency program. The data derived from the survey can offer an important set of metrics for educational quality improvement in the United Arab Emirates.
What was known
The Accreditation Council for Graduate Medical Education (ACGME) resident survey is used in the United States to collect data on resident perceptions of their learning environment; it is an important component of accreditation.
What is new
This study assesses the reliability and initial validity of the resident survey for use in ACGME-International–accredited programs in the United Arab Emirates.
Limitations
Survey administration to residents in a single international site reduces generalizability.
Bottom line
The ACGME-I resident survey data can offer, with acceptable reliability and validity, metrics for educational quality improvement in the United Arab Emirates.
Introduction
The past 2 decades have seen growing interest in quality improvement in medical training. Although educators worldwide agree on the importance of assessing the quality of graduate medical education (GME), there is relatively less consensus on how to do so. In the United States, regulatory bodies such as the Accreditation Council for Graduate Medical Education (ACGME) have developed standards that assess program quality as part of the accreditation process.1 The ACGME resident survey provides data on resident satisfaction and perceptions of their learning environment and has become an increasingly important component in accreditation.2 In an analysis of 91 073 surveys from residents in more than 5600 programs with 4 or more residents, the survey demonstrated a high degree of internal reliability (Cronbach α of 0.84), leading the ACGME to determine that the survey is a “reliable, valid, and useful tool for evaluating residency programs.”1 However, several studies have since voiced concerns over the validity of inferences based on this instrument. In 1 study, the ACGME survey inaccurately reflected the magnitude of noncompliance in duty hours.3 In a larger study of US general surgery residents, 35% admitted discussing responses with faculty, and 17% received instructions on survey completion.4 More concerning is that 19% of the residents reported having difficulty understanding the questions and 14% admitted to providing untruthful responses.4
Concerns over the quality of postgraduate training are compounded in international teaching hospitals, where differences in culture, resources, and institutional support can foster significant inconsistencies in residency training and program quality. In recent years, many countries, including Singapore, Qatar, and the United Arab Emirates (UAE), have redesigned their GME systems on the competency-based framework of the ACGME-International (ACGME-I).5,6 Like their domestic counterparts, residents in ACGME-I–accredited programs complete an annual resident survey designed and fielded by ACGME-I. Reports of wording and comprehension problems among US residents, together with broader concerns about the validity of inferences based on the survey, prompted us to assess the reliability and validity of the ACGME-I resident survey in the culturally distinct, nonnative English-speaking resident population of the UAE. To our knowledge, this is the first study of the ACGME-I resident survey instrument outside the United States.
Methods
The 6 major teaching institutions in Abu Dhabi and their core residency programs of general surgery, emergency medicine, family medicine, internal medicine, pediatrics, and psychiatry received ACGME-I accreditation in 2013. Survey participants included 158 residents from the 6 core disciplines in 4 Abu Dhabi teaching hospitals. The ACGME-I resident/fellow survey was accessed online.7 As in the United States, the international version is designed to measure residents' perceptions of their training program's compliance with the ACGME Common Program Requirements, and it is nearly identical to the US version of the survey. The instrument consists of 34 questions measuring 6 domains of program compliance: duty hours, faculty, evaluation, educational content, resources, and teamwork. Response formats are either “yes/no” or a modified Likert scale. The survey was transcribed into the online survey platform SurveyMonkey and verified for response functionality; a participation link was delivered electronically to all residents in March 2013.
Participation was voluntary, and residents read an informed consent statement before taking the survey. At the conclusion of the study, data were downloaded from SurveyMonkey into Microsoft Excel, qualitative responses were recoded numerically, and frequency distributions were verified for accuracy against SurveyMonkey outputs. Reliability was assessed by calculating the Cronbach α using SPSS version 21 (IBM Corp).
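Although the analysis was run in SPSS, the underlying computation is straightforward. The following minimal Python sketch is offered only as an illustration of the recoding and reliability steps described above; the item names and responses are hypothetical, not actual survey items.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach alpha for a respondents-by-items table of numeric scores.

    Standard formula: (k / (k - 1)) * (1 - sum of item variances / variance of total score),
    with listwise deletion of incomplete responses.
    """
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical yes/no items recoded to 1/0 (names are illustrative only).
responses = pd.DataFrame({
    "adequate_supervision": ["yes", "yes", "no", "yes", "no"],
    "sufficient_resources": ["yes", "no", "no", "yes", "no"],
    "protected_study_time": ["yes", "yes", "no", "yes", "yes"],
})
recoded = responses.replace({"yes": 1, "no": 0}).astype(float)
print(f"Cronbach alpha: {cronbach_alpha(recoded):.3f}")
```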
After survey administration, a convenience sample of 25 residents from different institutions and disciplines participated in a focus group. The 90-minute session included yes/no questions specific to the domains of the ACGME-I survey, followed by semistructured, open-ended questions intended to identify potential challenges encountered by survey participants. Responses were verified by 2 independent, unblinded reviewers.
The study was approved by the Al Ain Ethics Committee.
Results
The survey response rate was 73.4% (116 of 158). The 39 response items demonstrated high overall reliability, with a Cronbach α of 0.92 (table 1). The Resources domain produced the lowest subscale reliability (α = 0.584), suggesting some heterogeneity of items within this subscale. Analysis of the interitem correlation matrix for the Resources domain showed positive correlations ranging from r = 0.168 to r = 0.575. The most weakly correlated item (r = 0.168) asked whether residents' ability to learn was compromised by others; removing this item increased the domain reliability from 0.584 to a borderline acceptable 0.670.
TABLE 1.
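To make the item-level diagnostics above concrete, a similar sketch (again with hypothetical column names, assuming the Resources items have already been recoded to numeric scores) illustrates the interitem correlation matrix and the change in Cronbach α when a single item is removed.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Standard formula: (k / (k - 1)) * (1 - sum of item variances / variance of total score).
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def subscale_diagnostics(items: pd.DataFrame) -> pd.DataFrame:
    """Mean interitem correlation and 'alpha if item deleted' for each item in a subscale."""
    corr = items.corr()
    return pd.DataFrame({
        # Average each item's correlation with the other items (excluding its self-correlation of 1).
        "mean_interitem_r": (corr.sum() - 1) / (len(corr) - 1),
        # Recompute alpha with the item dropped; a rise flags a weakly related item.
        "alpha_if_deleted": {col: cronbach_alpha(items.drop(columns=col)) for col in items.columns},
    })

# Hypothetical recoded Resources-style items (0/1), for illustration only.
resources = pd.DataFrame({
    "space_for_education":            [1, 1, 0, 1, 1, 0, 1, 0],
    "access_to_references":           [1, 1, 0, 1, 0, 0, 1, 0],
    "learning_compromised_by_others": [0, 1, 1, 0, 1, 0, 0, 1],
})
print(resources.corr())               # interitem correlation matrix
print(subscale_diagnostics(resources))
```

This diagnostic broadly corresponds to the item-deleted reliability output reported by SPSS and is the logic behind identifying the weak Resources item described above.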
Focus group results indicated that the survey met standards for readability, acceptable length, and time needed for completion. Focus group participants also reported that the terminology did not affect understanding for most questions. Areas where participants had some difficulty understanding survey terms are shown in table 2.
TABLE 2.
Discussion
We investigated the reliability and validity of the ACGME-I resident survey in culturally distinct international programs where residents complete medical education in English but English is not their native language. Our results support overall instrument reliability consistent with other literature,1 with an overall Cronbach α of 0.92. The weakest-performing item referred to educational interference by other trainees, including PhD students or nurse practitioners, most of whom are absent from clinical settings in the UAE.
Although reliability is evidence of internal structure validity,8 further validity evidence must be collected over several administrations of the survey to support inferences based on the construct validity of this instrument in the international arena. Further establishing the validity of the ACGME-I resident survey will complement evidence from interviews and other data collection during site visits and, ultimately, from learning outcomes.8,9 Response process concerns may arise if the reading level of the survey is inappropriate for international trainees.10 The use of complex sentences and difficult or unfamiliar vocabulary could result in misinterpretation of a question. In our study, focus group responses revealed that the reading level was appropriate, although 6 trainees (24%) reported confusion with the terms “in-house” and “at-home” calls. When these terms were explained as “first” and “second” call, 3 residents (12%) reported that they had misinterpreted the questions and provided inaccurate responses. The ACGME-I should consider modifying the survey in the UAE to include both terms to improve resident understanding.
The ACGME-I could also generate response process validity evidence by defining standards for survey administration, including administration logistics and guidelines for resident preparation in answering questions. In this study, the focus group participants denied discussing the survey with faculty or receiving any coaching or advice, but it is not known how individual programs will operationalize administration of the survey. Training is needed to achieve standardization, and a web-based learning format could minimize the resources required for such training internationally.
Our study has several limitations. Its administration to residents in a single international site reduces the ability to generalize from the findings. In addition, residents were aware that survey responses were to be used for internal quality improvement and would not be used for accreditation decisions. The authors believe this might have encouraged more truthful responses, but the actual effect is not known.
Strengths of our study include a high response rate and its multicenter design encompassing the various specialty programs offered in Abu Dhabi. Future research should assess whether the discriminant validity of the survey, found in the US study,1 is replicated when the survey is used in international residency programs. Collecting evidence on consequences of the ACGME-I survey is also important in the international setting, where noncompliant survey responses could result in citations or other negative consequences for newly accredited programs. Finally, directors of GME in international teaching hospitals must be properly educated on the appropriate use of the ACGME-I resident survey as a helpful tool for formative feedback to guide ongoing residency program improvement.
Conclusion
Assessing the quality of residency training is of critical importance to international teaching hospitals actively restructuring their training programs. Our findings indicate that the ACGME-I resident survey has the potential to be a reliable and valid tool to measure resident perceptions of their program in an international setting, and it can serve as an important measure for educational quality improvement in teaching hospitals throughout the UAE.
Footnotes
Halah Ibrahim, MD, MEHP, is Senior Consultant, Department of Internal Medicine, and Chair, Academic Affairs, Tawam Hospital, Al Ain, Abu Dhabi, United Arab Emirates; Brenessa Lindeman, MD, MEHP, is Resident, Department of Surgery, Johns Hopkins Hospital; Steven A. Matarelli, PhD, worked for Johns Hopkins Medicine International as Chief Operating Officer for Tawam Hospital during the writing of this article; and Satish Chandrasekhar Nair, PhD, is Director of Clinical Research, Tawam Hospital.
Funding: The authors report no external funding source for this study.
Conflict of Interest: The authors declare they have no competing interests.
References
1. Holt KD, Miller RS, Philibert I, Heard JK, Nasca TJ. Residents' perspectives on the learning environment: data from the Accreditation Council for Graduate Medical Education resident survey. Acad Med. 2010;85(3):512–518. doi: 10.1097/ACM.0b013e3181ccc1db.
2. Holt KD, Miller RS. The ACGME resident survey aggregate reports: an analysis and assessment of overall program compliance. J Grad Med Educ. 2009;1(2):327–333. doi: 10.4300/JGME-D-09-00062.1.
3. Fahy BN, Todd SR, Paukert JL, Johnson ML, Bass BL. How accurate is the Accreditation Council for Graduate Medical Education (ACGME) resident survey? Comparison between ACGME and in-house GME survey. J Surg Educ. 2010;67(6):387–392. doi: 10.1016/j.jsurg.2010.06.003.
4. Sticca RP, MacGregor JM, Szlabick RE. Is the Accreditation Council for Graduate Medical Education (ACGME) resident/fellow survey a valid tool to assess general surgery residency programs' compliance with work hours regulations? J Surg Educ. 2010;67(6):406–411. doi: 10.1016/j.jsurg.2010.09.007.
5. Abdel-Razig S, Alameri H. Restructuring graduate medical education to meet the health care needs of Emirati citizens. J Grad Med Educ. 2013;5(2):195–200. doi: 10.4300/JGME-05-03-41.
6. Huggan P, Samarasekara D, Archuleta S, Khoo SM, Sim JH, Sin CS, et al. The successful, rapid transition to a new model of graduate medical education in Singapore. Acad Med. 2012;87(9):1268–1273. doi: 10.1097/ACM.0b013e3182621aec.
7. The ACGME-I resident/fellow survey. http://www.acgme-i.org/web/dcs/Resident-Survey-General.pdf. Accessed July 15, 2013.
8. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med. 2006;119(2):166.e7–e16. doi: 10.1016/j.amjmed.2005.10.036.
9. Cronbach LJ. Construct validation after thirty years. In: Linn RL, editor. Intelligence: Measurement, Theory and Public Policy. Champaign, IL: University of Illinois Press; 1989:147–171.
10. Downing SM. Validity: on the meaningful interpretation of assessment data. Med Educ. 2003;37(9):830–837. doi: 10.1046/j.1365-2923.2003.01594.x.