Abstract
The overarching goal of medical education is to train clinicians who achieve and maintain competence in patient care. Although the field of medical education research has acknowledged the importance of education in clinical practices and outcomes, most research endeavors continue to focus on learner-centered outcomes, such as knowledge and attitudes. The absence of clinical and patient-centered outcomes in pulmonary and critical care medicine educational research has been attributed to barriers at multiple levels, including financial, methodological, and practical considerations. This Perspective explores clinical outcomes relevant to pulmonary and critical care medicine educational research and offers strategies and solutions that educators can use to accomplish what many consider the “prize” of medical education research: an understanding of how our educational initiatives impact the health of patients.
Keywords: medical education, clinical research, patient outcomes
The overarching goal of medical education is to train clinicians who achieve and maintain competence in patient care. Although the field of medical education research has acknowledged the importance of education in clinical practices and outcomes (1, 2), the majority of research endeavors continue to focus on learner-centered outcomes, such as knowledge and attitudes (3, 4). The absence of clinical and patient-centered outcomes in medical education research has been attributed to barriers at multiple levels, including financial, methodological, and practical considerations (5, 6). This Perspective explores clinical outcomes relevant to pulmonary and critical care medicine educational research, identifies barriers to their integration into educational research agendas, and proposes solutions.
Clinical Outcomes in Medical Education—What Is Attainable?
The National Institutes of Health has defined four main types of clinical outcomes: patient-reported, observer-reported, clinician-reported, and performance outcome measures (Table 1) (7). Researchers can measure all of these outcome types in educational contexts.
Table 1.

Types of clinical outcomes

Type of Clinical Outcome | Definition | Examples
---|---|---
Patient reported | Reported directly by patients, either by self-report or via interview, reflecting the status of a patient’s health condition or level of function | Asthma Therapy Assessment Questionnaire (33)
Observer reported | Observed by people other than health professionals or patients themselves—such as caregivers, parents, or other observers of daily life—when a person cannot report for themselves; observer-reported outcomes do not include medical judgment or other forms of interpretation | Acute otitis media severity of symptom scale (34)
Clinician reported | Observed and reported by trained healthcare professionals, involving assessment of observable clinical findings or events, potentially in combination with biomarker or other objective clinical data | Pediatric Asthma Severity Score (35); Critical-Care Pain Observation Tool (36)
Performance outcome measures | Clinical outcomes measured by standardized tasks performed by patients | Six-minute walk test (37); Cognitive Composition Test (38)
In 2013, Cook and West delineated an argument against emphasizing clinical outcomes research in medical education (8). First, they contended that patient-centered outcomes in medical education research suffer from a dilution effect within the vastness of a healthcare system: how can we establish that a particular educational intervention, rather than other unstudied factors, caused the observed patient outcomes? However, education research focused on teaching discrete tasks that clinicians can deliberately practice counters this dilution effect by linking educational interventions directly to specific clinical outcomes. Examples include obtaining informed consent or placing ultrasound-guided peripheral intravenous lines (performance outcome measures) (9, 10); cultivating communication skills among clinicians, which is associated with improved patient experience scores (patient-reported outcomes) (11); simulation of family presence during resuscitation training experiences, which is associated with increased clinician support for families during actual resuscitations (observer-reported outcomes) (12); and patient education interventions, which are associated with improved medication regimen adherence (clinician-reported outcomes) (13).
Second, Cook and West noted that small sample sizes typically limit the scope of medical education research. They shared the concern that integration of outcomes in education research could influence curricula, such that educators might focus more on clinical outcomes and bypass fundamental principles such as pathophysiology; learners might understand the what but not the why. We believe that the Kirkpatrick model of assessment, which is foundational to medical education (14), allows for progression in the types of outcomes measured, providing a framework to assess clinical outcomes without detracting from more learner-centric metrics and foundational concepts. Although the first two levels of the model (reactions and learning) allow for ready assessment of learner-centered cognitive outcomes, the next two levels (behavior and results) model assessment of how educational interventions affect clinical performance and patient care outcomes (1). Patient-relevant clinical outcomes, as affected by educational interventions, are grounded within these top two levels of the model. This composite of outcomes, measuring both a clinician’s behaviors and performance and those outcomes that are meaningful to patients, spans a spectrum from process-related to patient-centered outcomes (15). It is especially important for the field of medical education to incorporate this entire spectrum, because process-focused outcomes (e.g., first-pass intubation rates) and patient-centered outcomes (e.g., desaturation during intubation) are interconnected and both crucial to measure.
Finally, Cook and West cited the potential for bias in clinical outcome selection by researchers, with prioritization of measurable outcomes over those that are most meaningful. Although we acknowledge this is a potential limitation of clinical outcomes research in medical education, it is not unique to medical education research; such methodological concerns affect all forms of clinical and health services research.
Barriers and Solutions
Well-designed educational studies can measure how educational interventions affect patient-relevant outcomes. Yet multiple barriers prevent the design and implementation of educational research that robustly measures clinical outcomes. In this section, we outline these barriers and propose practical solutions educators can use to integrate these assessments into their research, as summarized in Table 2.
Table 2.
Barriers and solutions to assessing clinical outcomes in medical education research
Barrier | Solution | Example |
---|---|---|
Lack of expertise in assessing clinical outcomes | Collaborate with health outcomes researchers; enroll in dedicated coursework to build skills | A medical education researcher develops a novel airway curriculum for PCCM fellows and assesses for impacts on first-pass intubation success. She partners with a health outcomes researcher to design her study. |
Inadequate funding | Maximize use of available local resources and focus grant applications on clinical relevance | A junior faculty member is in the planning phase of an educational research project evaluating impacts of fellow and faculty implicit biases on pulmonary clinic patient outcomes. He uses a free departmental biostatistician consulting service as he completes an IRB application and applies for both a local clinical care quality grant and an early career educational research grant from a national specialty society. |
Lack of standardized curricula or electronic medical records across different institutions | Leverage contacts through professional societies to conduct multiinstitutional research studies | A multiprogram PCCM fellowship research consortium decides to study ACLS guideline adherence, as well as patient outcomes, for cardiac arrest resuscitations led by fellows from their respective programs. |
Lack of dedicated methodological schema for clinical outcomes medical education research | Use educational schema and frameworks that can organically accommodate clinical outcomes | A critical care medicine educator maps out a multistudy research agenda investigating impacts of a serious illness conversation training curriculum for residents rotating in the intensive care unit. She uses the Kirkpatrick model for assessment (14) to frame her agenda, with multiple studies planned to assess learner perception, knowledge gained, behaviors changed, and clinical outcomes. |
Historical divide between educational and clinical/health services research | Include educational implications in discussions of clinical/health services research and vice versa | A pulmonary, critical care, and sleep medicine division director asks her educational and clinical/health services research faculty to begin to periodically attend each other’s conferences, with the goal of fostering collaboration and highlighting educational implications of the results of clinical trials and quality-improvement projects. |
Lack of systematic data collection systems that have both educational and clinical relevance, as well as the subjectivity of many educational outcomes | Partner with data science and quality-improvement specialists to design or adapt existing EHR data collection strategies prospectively, with focus on deliberately chosen objective outcomes | A PCCM assistant program director seeks to study first-pass intubation success rates for fellows in his program. He partners with information technology specialists at his institution to insert a drop-down input button into the intubation note template. This allows for easy extraction and tracking of that data from the EHR. |
Definition of abbreviations: ACLS = Advanced Cardiovascular Life Support; EHR = electronic health record; IRB = institutional review board; PCCM = pulmonary and critical care medicine.
Barrier: Lack of Expertise in Assessing Clinical Outcomes
Training for expertise in medical education research focuses on the use of learning theories, trainee assessment, and quantitative and qualitative methods. Clinician educators may therefore lack the training and expertise needed to design and implement studies specifically intended to measure clinical outcomes in the context of educational interventions (1).
Solution: Collaborate and Self-Educate
Partnering with a collaborator who has expertise in clinical research can prove invaluable, ideally from the time of study conception, to ensure an optimal approach. Understanding the types of clinical outcomes that educators can query is also critical to conducting these studies (Table 1). Opportunities exist for self-education, including auditing coursework on clinical study design or pursuing clinical research certification, such as from the Association of Clinical Research Professionals (16).
Barrier: Inadequate Funding
The lack of funding for logistical support and protected time in medical education research represents a longstanding and pervasive problem (17, 18). Research that involves measuring clinical outcomes can often be particularly time and resource intensive, especially when educators prospectively assess those involving patient contact (19).
Solution: Focus Grant Applications on Clinical Relevance and Maximize Local Resources
Increased access to dedicated medical education research funding can help educators engage in high-quality, impactful research that assesses clinical outcomes. Applying for clinical research, quality improvement, or implementation science grants, in addition to medical education–focused ones, would allow access to a substantially larger pool of funding. In addition, collaborating with clinical researchers who already have grant funding, or including medical education interventions in larger clinical or quality-improvement grants, can offer more opportunities to conduct such research. Academic institutions often offer seed funding for junior faculty, as well as low- or no-cost statistical and research methodology support. Deliberately describing the benefit to patients from the proposed interventions in grant applications can enhance interest from varied funding sources at institutional and national levels. An emphasis on patient-relevant outcomes (e.g., an educational intervention that aims to reduce intubation-related hypoxemia for intubations performed by critical care fellows), as opposed to process-based outcomes (e.g., an educational intervention that aims to improve trainee adherence to a preintubation checklist), may prove more persuasive to grant selection committees.
Barrier: Lack of Standardized Curricula or Electronic Medical Records across Different Institutions
Generalizability is a critical component of high-quality clinical research. Institutions typically have their own unique curricula, learning and practice contexts, and electronic medical records. These interinstitutional variations represent major threats to the generalizability of the clinical impacts of any given educational intervention.
Solution: Collaborate with Researchers at Other Institutions and Design Interventions with Generalizability in Mind
Leverage contacts through local, national, and international professional societies to conduct multiinstitutional research studies, which increase the generalizability of study findings (20). Aggregating geographically diverse researchers with similar interests into consortia allows for multisite collaborations. Choosing research questions that maximize commonalities in clinical practice, such as assessing outcomes with similar care processes across institutions, is another approach to increase the applicability of study findings to a wide audience. For example, because cardiac arrest resuscitations across institutions follow a standardized practice guideline, the American Heart Association’s Advanced Cardiovascular Life Support clinical algorithm, guideline adherence and patient outcomes can be meaningfully compared across sites.
Barrier: Lack of Dedicated Methodological Schema for Clinical Outcomes Medical Education Research
Medical education researchers examining outcomes such as burnout, communication skills, or the psychological impacts of the learning environment on trainees often rely on frameworks, instruments, and approaches borrowed from established quantitative and qualitative research methodologies. Designing studies that assess clinical outcomes in an educational context, on the other hand, often requires researchers to craft their own methodologies in real time. More commonly, educational researchers inadequately assess, or choose not to assess, the impact of educational interventions on clinical outcomes (21).
Solution: Use Educational Schema and Frameworks That Can Accommodate Clinical Outcomes
Ground research design within existing educational frameworks and schema that can organically accommodate clinical outcomes. Kirkpatrick’s Training Evaluation Model is one framework that allows for the merging of traditional educational and clinical outcomes in a research agenda, as it offers progression from acceptability and feasibility, to trainee knowledge acquisition, to behavior and/or process change, and then, finally, to clinical outcomes (14). In addition, concepts important to learners, such as burnout and communication skills, are increasingly acknowledged in clinical research as impacting patient outcomes. This has led to an emergence of literature using frameworks and instruments of shared importance to medical educators and clinician researchers, narrowing the distance between the two fields (22, 23).
Barrier: Cultural Divide between Educational and Clinical Research
Clinical and educational research are not traditionally considered as related disciplines and are often siloed at the institutional level (1). This historical divide does not account for the extensive impact that medical training has on clinical trials. For example, a 2016 study described poor adoption of low tidal volume ventilation by physicians caring for patients with acute respiratory distress syndrome (ARDS) (24). The most important barrier identified was the lack of recognition of ARDS, something addressable by educational interventions (25). We join others who have called for medical education to bridge gaps of knowledge to improve evidence-based practice implementation (26, 27). This requires clinical researchers to partner with medical educators, ensuring that discoveries are shared and protocols involving education are adopted. Physicians must understand the why behind the what when it comes to healthcare discovery, and medical education often provides the how regarding actual, real-world implementation of clinical advances.
Solution: Bridge the Gap by Attending Each Other’s Conferences and Highlighting Educational Implications of Clinical Trial Results
One way to change culture and bridge gaps between clinical and educational research is for educational and clinical researchers to attend each other’s didactic conferences and journal clubs. Highlighting the educational implications of clinical trial results helps clinical researchers account for educational considerations when designing studies. Engagement on social media has highlighted how clinician educators can be instrumental to disseminating the findings of important studies to a wide clinical audience (28).
Barrier: Data Collection Systems Used in Educational Research Are Often Poorly Equipped to Measure Subjective Educational Outcomes and Lack Clinical Relevance
Lack of systematic data collection has been a historical barrier to educational research (29). Requiring clinical relevance as well compounds the problem. Although numerous factors contribute to this barrier (for example, educational research tends to involve smaller sample sizes), the often subjective nature of educational outcomes is particularly relevant.
Solution: Partner with Data Science and Quality-Improvement Specialists to Design or Adapt Electronic Health Record Data Collection Strategies Prospectively, with a Focus on Deliberately Chosen Objective Outcomes
Hospital data science, bioinformatics, and quality-improvement specialists can design or adapt existing data collection tools to ensure efficient and accurate extraction of data from an electronic health record. Partnering with these individuals can help educational researchers design a data-extraction strategy for the measurement of objective, patient-relevant outcomes associated with educational interventions. Such partnerships have become increasingly common (30). Precedent exists for large, multiinstitutional, collaborative, and longitudinal databases in medical education research. Increased use of these databases could facilitate broader data availability for medical education research and generate evidence with relevance to clinical practice (31).
Navigating New Paths: Examples
In this section, we provide two examples of studies that successfully incorporated clinical outcomes into medical education research. The first is a seminal study from 2008 that outlined an approach to integrate trainee clinical outcomes into graduate medical education program evaluation structures. This framework aligned clinical outcomes with Accreditation Council for Graduate Medical Education competencies and suggested a tiered strategy for evaluation (15). It focused first on national consensus clinical outcome standards if available (e.g., quality standards for asthma management). If consensus standards were unavailable, programs would be asked to choose from national specialty society standards, local or regional initiatives, or program priority areas (in descending order of priority). Importantly, this study provided a lingua franca with which educators across programs and specialties could communicate about clinical outcomes measurement.
The second study is the 2014 I-PASS trial, which studied implementation of a resident handoff improvement initiative at nine pediatric residency programs (32). I-PASS effectively modeled the rigorous study of an educational intervention’s impact on clinical outcomes. The study team assessed the rates of pre/post-intervention medical errors and preventable adverse events, inclusion of key elements in verbal and written handoffs, and duration of oral handoff and resident workflow. In terms of scope and rigor, the I-PASS trial set a high standard for educational research assessing clinical outcomes. But the principles used in the design of the trial also apply to smaller educational studies and incorporate several of the solutions we discussed in this perspective: deliberately chosen clinical outcomes that an intervention could impact, measurement of clinical outcomes using a preexisting gold standard, formation of a multidisciplinary study team, and grant support from stakeholder organizations.
Conclusions
Collaboration between educators and clinical researchers can provide the methodologic skills and resources necessary to integrate clinical outcomes into medical education research. Reflecting on the educator–clinical researcher dyad, we envision a mutualistic relationship between medical educators and clinical researchers. Medical educators, who train clinicians to implement best practices in medical care, can build bridges between evidence-based recommendations and implementation. Working together to align evidence-based recommendations with best educational practices can maximize scope and impact on patient care, as clinician educators train the next generation of physicians, provide direct patient care, and are active participants in administrative health systems. Trainees will also benefit from an introduction to the essential elements of quality-improvement processes and outcomes research. Although barriers to measuring clinical outcomes in medical education exist, we offer strategies and solutions that educators can use to accomplish what many consider the prize of medical education research: an understanding of how our educational initiatives impact the health of patients.
Acknowledgments
The authors thank Dr. Derrick Herman for contributions to the initial concept of this manuscript.
Footnotes
Author disclosures are available with the text of this article at www.atsjournals.org.
References
- 1. Chen FM, Bauchner H, Burstin H. A call for outcomes research in medical education. Acad Med. 2004;79:955–960. doi: 10.1097/00001888-200410000-00010.
- 2. O’Malley PG, Pangaro LN. Research in medical education and patient-centered outcomes: shall ever the twain meet? JAMA Intern Med. 2016;176:167–168. doi: 10.1001/jamainternmed.2015.6938.
- 3. Prystowsky JB, Bordage G. An outcomes research perspective on medical education: the predominance of trainee assessment and satisfaction. Med Educ. 2001;35:331–336. doi: 10.1046/j.1365-2923.2001.00910.x.
- 4. Kalet A. The state of medical education research. Virtual Mentor. 2007;9:285–289. doi: 10.1001/virtualmentor.2007.9.4.medu2-0704.
- 5. Reed D, Price EG, Windish DM, Wright SM, Gozu A, Hsu EB, et al. Challenges in systematic reviews of educational intervention studies. Ann Intern Med. 2005;142:1080–1089. doi: 10.7326/0003-4819-142-12_part_2-200506211-00008.
- 6. Feemster LC, Saft HL, Bartlett SJ, Parthasarathy S, Barnes T, Calverley P, et al.; American Thoracic Society Behavioral Sciences and Health Services Research Assembly and Nursing Assembly. Patient-centered outcomes research in pulmonary, critical care, and sleep medicine: an official American Thoracic Society workshop report. Ann Am Thorac Soc. 2018;15:1005–1015. doi: 10.1513/AnnalsATS.201806-406WS.
- 7. Clinical Outcome Assessment. BEST (Biomarkers, EndpointS, and other Tools) Resource. NCBI Bookshelf. https://www.ncbi.nlm.nih.gov/books/NBK338448/def-item/glossary.clinical-outcome-assessment/
- 8. Cook DA, West CP. Perspective: reconsidering the focus on “outcomes research” in medical education: a cautionary note. Acad Med. 2013;88:162–167. doi: 10.1097/ACM.0b013e31827c3d78.
- 9. Gorgone M, O’Connor TP, Maximous SI. How I teach: ultrasound-guided peripheral venous access. ATS Scholar. 2022;3:598–609. doi: 10.34197/ats-scholar.2022-0029HT.
- 10. Loftus TJ, Alfaro ME, Anderson TN, Murphy TW, Zayko O, Davis JP, et al. Audiovisual modules to enhance informed consent in the ICU: a pilot study. Crit Care Explor. 2020;2:e0278. doi: 10.1097/CCE.0000000000000278.
- 11. Kennedy DM, Fasolino JP, Gullen DJ. Improving the patient experience through provider communication skills building. Patient Exp J. 2014;1:56–60.
- 12. Schafer KM, Kremer MJ. Outcomes of simulation-based experiences related to family presence during resuscitation: a systematic review. Clin Simul Nurs. 2022;65:62–81.
- 13. Taibanguay N, Chaiamnuay S, Asavatanabodee P, Narongroeknawin P. Effect of patient education on medication adherence of patients with rheumatoid arthritis: a randomized controlled trial. Patient Prefer Adherence. 2019;13:119–129. doi: 10.2147/PPA.S192008.
- 14. Alliger GM, Janak EA. Kirkpatrick’s levels of training criteria: thirty years later. Pers Psychol. 1989;42:331–342.
- 15. Haan CK, Edwards FH, Poole B, Godley M, Genuardi FJ, Zenni EA. A model to begin to use clinical outcomes in medical education. Acad Med. 2008;83:574–580. doi: 10.1097/ACM.0b013e318172318d.
- 16. Association of Clinical Research Professionals. About ACRP. https://acrpnet.org/
- 17. Carline JD. Funding medical education research: opportunities and issues. Acad Med. 2004;79:918–924. doi: 10.1097/00001888-200410000-00004.
- 18. Jordan J, Coates WC, Clarke S, Runde D, Fowlkes E, Kurth J, et al. The uphill battle of performing education scholarship: barriers educators and education researchers face. West J Emerg Med. 2018;19:619–629. doi: 10.5811/westjem.2018.1.36752.
- 19. Hakoum MB, Jouni N, Abou-Jaoude EA, Hasbani DJ, Abou-Jaoude EA, Lopes LC, et al. Characteristics of funding of clinical trials: cross-sectional survey and proposed guidance. BMJ Open. 2017;7:e015997. doi: 10.1136/bmjopen-2017-015997.
- 20. Schiller JH, Beck Dallaghan GL, Kind T, McLauchlan H, Gigante J, Smith S. Characteristics of multi-institutional health sciences education research: a systematic review. J Med Libr Assoc. 2017;105:328–335. doi: 10.5195/jmla.2017.134.
- 21. Buja LM. Medical education today: all that glitters is not gold. BMC Med Educ. 2019;19:110. doi: 10.1186/s12909-019-1535-9.
- 22. Burns KEA, Moss M, Lorens E, Jose EKA, Martin CM, Viglianti EM, et al.; Diversity-Related Research Committee of the Women in Critical Care (WICC) Interest Group of the American Thoracic Society. Wellness and coping of physicians who worked in ICUs during the pandemic: a multicenter cross-sectional North American survey. Crit Care Med. 2022;50:1689–1700. doi: 10.1097/CCM.0000000000005674.
- 23. Morales A, Murphy A, Fanning JB, Gao S, Schultz K, Hall DE, et al. Key physician behaviors that predict prudent, preference concordant decisions at the end of life. AJOB Empir Bioeth. 2021;12:215–226. doi: 10.1080/23294515.2020.1865476.
- 24. Weiss CH, Baker DW, Weiner S, Bechel M, Ragland M, Rademaker A, et al. Low tidal volume ventilation use in acute respiratory distress syndrome. Crit Care Med. 2016;44:1515–1522. doi: 10.1097/CCM.0000000000001710.
- 25. Vallabhajosyula S, Trivedi V, Gajic O. Ventilation in acute respiratory distress syndrome: importance of low-tidal volume. Ann Transl Med. 2016;4:496. doi: 10.21037/atm.2016.11.36.
- 26. Giovanni SP, Jennerich AL, Steel TL, Lokhandwala S, Alhazzani W, Weiss CH, et al. Promoting evidence-based practice in acute respiratory distress syndrome: a systematic review. Crit Care Explor. 2021;3:e0391. doi: 10.1097/CCE.0000000000000391.
- 27. Duggal A, Panitchote A, Siuba M, Krishnan S, Torbic H, Hastings A, et al. Implementation of protocolized care in ARDS improves outcomes. Respir Care. 2021;66:600–609. doi: 10.4187/respcare.07999.
- 28. @rbganatra. https://twitter.com/rbganatra/status/1242411075406106624
- 29. Baernstein A, Liss HK, Carney PA, Elmore JG. Trends in study methods used in undergraduate medical education research, 1969–2007. JAMA. 2007;298:1038–1045. doi: 10.1001/jama.298.9.1038.
- 30. King AJ, Kahn JM, Brant EB, Cooper GF, Mowery DL. Initial development of an automated platform for assessing trainee performance on case presentations. ATS Scholar. 2022;3:548–560. doi: 10.34197/ats-scholar.2022-0010OC.
- 31. Cook DA, Andriole DA, Durning SJ, Roberts NK, Triola MM. Longitudinal research databases in medical education: facilitating the study of educational outcomes over time and across institutions. Acad Med. 2010;85:1340–1346. doi: 10.1097/ACM.0b013e3181e5c050.
- 32. Starmer AJ, Spector ND, Srivastava R, West DC, Rosenbluth G, Allen AD, et al.; I-PASS Study Group. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371:1803–1812. doi: 10.1056/NEJMsa1405556.
- 33. Skinner EA, Diette GB, Algatt-Bergstrom PJ, Nguyen TT, Clark RD, Markson LE, et al. The Asthma Therapy Assessment Questionnaire (ATAQ) for children and adolescents. Dis Manag. 2004;7:305–313. doi: 10.1089/dis.2004.7.305.
- 34. Shaikh N, Rockette HE, Hoberman A, Kurs-Lasky M, Paradise JL. Determination of the minimal important difference for the acute otitis media severity of symptom scale. Pediatr Infect Dis J. 2015;34:e41–e43. doi: 10.1097/INF.0000000000000557.
- 35. Gorelick MH, Stevens MW, Schultz TR, Scribano PV. Performance of a novel clinical score, the Pediatric Asthma Severity Score (PASS), in the evaluation of acute asthma. Acad Emerg Med. 2004;11:10–18. doi: 10.1197/j.aem.2003.07.015.
- 36. Gélinas C, Johnston C. Pain assessment in the critically ill ventilated adult: validation of the Critical-Care Pain Observation Tool and physiologic indicators. Clin J Pain. 2007;23:497–505. doi: 10.1097/AJP.0b013e31806a23fb.
- 37. ATS Committee on Proficiency Standards for Clinical Pulmonary Function Laboratories. ATS statement: guidelines for the six-minute walk test. Am J Respir Crit Care Med. 2002;166:111–117. doi: 10.1164/ajrccm.166.1.at1102.
- 38. Tamaru Y, Sumino H, Matsugi A. Usefulness of the Cognitive Composition Test as an early discriminator of mild cognitive impairment. J Clin Med. 2023;12:1203. doi: 10.3390/jcm12031203.
- 38. Tamaru Y, Sumino H, Matsugi A. Usefulness of the Cognitive Composition Test as an early discriminator of mild cognitive impairment. J Clin Med . 2023;12:1203. doi: 10.3390/jcm12031203. [DOI] [PMC free article] [PubMed] [Google Scholar]