Annals of Family Medicine. 2022 Nov-Dec;20(6):559–563. doi: 10.1370/afm.2887

Competencies for the Use of Artificial Intelligence in Primary Care

Winston Liaw,1 Jacqueline K Kueper,2,3 Steven Lin,4 Andrew Bazemore,5 Ioannis Kakadiaris6
PMCID: PMC9705044  PMID: 36443071

Abstract

The artificial intelligence (AI) revolution has arrived for the health care sector and is finally penetrating the far-reaching but perpetually underfinanced primary care platform. While AI has the potential to facilitate the achievement of the Quintuple Aim (better patient outcomes, population health, and health equity at lower costs while preserving clinician well-being), inattention to primary care training in the use of AI-based tools risks the opposite effects, imposing harm and exacerbating inequalities. The impact of AI-based tools on these aims will depend heavily on the decisions and skills of primary care clinicians; therefore, appropriate medical education and training will be crucial to maximize potential benefits and minimize harms. To facilitate this training, we propose 6 domains of competency for the effective deployment of AI-based tools in primary care: (1) foundational knowledge (what is this tool?), (2) critical appraisal (should I use this tool?), (3) medical decision making (when should I use this tool?), (4) technical use (how do I use this tool?), (5) patient communication (how should I communicate with patients regarding the use of this tool?), and (6) awareness of unintended consequences (what are the “side effects” of this tool?). Integrating these competencies will not be straightforward because of the breadth of knowledge already incorporated into family medicine training and the constantly changing technological landscape. Nonetheless, even incremental increases in AI-relevant training may be beneficial, and the sooner these challenges are tackled, the sooner the primary care workforce and those served by it will begin to reap the benefits.

Key words: artificial intelligence, primary care education, AI training, domains of competency

INTRODUCTION

The artificial intelligence (AI) revolution is here, and primary care clinicians must adapt.1-3 This call for change follows years of unrealized promise from electronic health records (EHRs), an era that has led primary care clinicians to regard AI with skepticism.4 In addition to privacy and liability concerns, critics argue that AI can magnify existing biases, is not generalizable, and degrades over time.5-8 These shortcomings underscore the need to train primary care clinicians to work competently with AI to advance the Quintuple Aim of better patient outcomes, population health, and health equity, at lower costs while preserving clinician well-being.9 To accomplish this goal, our workforce needs additional knowledge and skills so that AI can support the primary care functions of continuity, coordination, timeliness, and comprehensiveness.3,10,11

Without training, this goal will not be achieved, and harm may result instead of the intended benefits: AI-based tools will be deployed without rigorous evaluation and created without specification for the unique needs of primary care. Patient safety will be compromised, and clinicians will become dissatisfied, leading to higher costs, greater fragmentation, and more burnout. To avoid this predicament, primary care clinicians must understand basic principles and have opportunities to practice with AI, similar to learning how to use a stethoscope or ultrasound. Training in AI is essential for primary care, the United States’ largest health care delivery platform.12 Because of its coordinating function and whole-person approach, primary care synthesizes data across a fragmented health system. Interpreting these poorly organized data streams is demanding and a source of burnout.13 Given this central role,12,14 groups such as the American Board of Family Medicine, the American Academy of Family Physicians, and the College of Family Physicians of Canada have sought to identify how AI can support primary care and have launched initiatives that bring together AI experts and primary care clinicians to take on these challenges.15-17

Although training has attempted to facilitate technology-enabled primary care, more work is needed. In the United States, family medicine milestones call for residents to use information technology. Little has been done, however, to operationalize these requirements, which face important revisions in 2022.18 Following the expansion of virtual care during the COVID-19 pandemic, professional societies have published competencies to guide telehealth training.19,20 The American Board of Artificial Intelligence in Medicine (ABAIM) recently launched certification in AI, and Mount Sinai started the Department of AI and Human Health.21,22 Despite these developments, clinicians are calling for more training because they feel ill-prepared to thrive within this digital future.23 In the United States and abroad, 3 out of 4 medical students think AI competencies should be taught in medical school.24,25 Unfortunately, medical education has been slow to adapt. In one national study in Ireland, two-thirds of medical students received no training in AI, and over 40% had never heard of the term “machine learning.”25

To fill this gap and pave the way for future curricula on AI for clinicians, we propose 6 competencies (Table 1) that build on those published by others26 and describe how these competencies vary for different learners (Table 2).23

Table 1.

Proposed Competencies for the Use of AI-Based Tools in Primary Care Decision Making

Domain: Foundational knowledge
Bottom line: What is this tool?
Competency: Clinicians will explain the fundamentals of AI, how AI-based tools are created and evaluated, the critical regulatory and socio-legal issues of AI-based tools, and the current and emerging roles of AI in health care.
Hypothetical scenario: The FDA approved an AI tool that provides a differential diagnosis using photographs of skin conditions and medical history. It was developed using 16,000 cases and a convolutional neural network to output prediction scores across 400 skin diseases.

Domain: Critical appraisal
Bottom line: Should I use this tool?
Competency: Clinicians will appraise the evidence behind AI-based tools and assess their appropriate uses via validated evaluation frameworks for health care AI.
Hypothetical scenario: In a retrospective study, the AI tool was superior to primary care clinicians; its use was associated with an improved diagnosis in 1 of every 10 cases. A prospective study in a clinical setting has not yet been done.

Domain: Medical decision making
Bottom line: When should I use this tool?
Competency: Clinicians will identify the appropriate indications for AI-based tools and incorporate their outputs into medical decision making such that effectiveness, value, equity, fairness, and justice are enhanced.
Hypothetical scenario: You decide to use this AI tool to augment your diagnostic ability for skin conditions where the diagnosis is unclear. You use it to inform, not override, your decisions regarding treatment, biopsies, and referrals in a way that boosts accuracy, quality of care, and resource stewardship.

Domain: Technical use
Bottom line: How do I use this tool?
Competency: Clinicians will execute the tasks needed to operate AI-based tools in a manner that supports efficiency and builds mastery.
Hypothetical scenario: You learn to take clinical photographs of skin conditions as required by the AI tool and to generate a differential diagnosis using it. You do this seamlessly and efficiently during physical exams.

Domain: Patient communication
Bottom line: How should I communicate with patients regarding the use of this tool?
Competency: Clinicians will communicate what the tool is and why it is being used, answer questions about privacy and confidentiality, and engage in shared decision making, in a manner that preserves or augments the clinician-patient relationship.
Hypothetical scenario: You discuss with the patient why and how the tool is being used and answer questions regarding privacy, ultimately building trust and confidence.

Domain: Unintended consequences (cross-cutting)
Bottom line: What are the “side effects” of this tool?
Competency: Clinicians will anticipate and recognize the potential adverse effects of AI-based tools and take appropriate actions to mitigate or address unintended consequences.
Hypothetical scenario:
Foundational knowledge: You recognize that a convolutional neural network is a “black box.” As a result, you do not expect the tool to provide a rationale behind the suggested diagnosis, and you remind yourself to guard against cognitive biases that may arise from seeing only the final suggested diagnosis.
Critical appraisal: You understand that Fitzpatrick skin types I and V are under-represented, and type VI is absent, in the data set for this AI tool.a
Medical decision making: You anticipate that the tool will be less accurate for patients with these skin types and adjust your use accordingly, choosing to learn more about patients with these skin types.
Technical use: You take the appropriate steps when the tool delivers an error message.
Patient communication: You explain to the patient why your diagnosis differs from the one suggested by the tool, engaging in a shared decision-making process that engenders trust, confidence, and respect.

AI = artificial intelligence; FDA = Food and Drug Administration.

Note: Fitzpatrick skin type I is pale white skin, while type VI is dark brown or black.

a The Fitzpatrick skin type classifies skin according to the amount of melanin pigment in the skin.

Table 2.

Proposed Artificial Intelligence Competencies by Learner Roles

Domain: Foundational knowledge
Medical students: Explain the fundamentals of AI and how AI-based tools are created and evaluated
Residents (all prior competencies +): Explain the critical regulatory and socio-legal issues surrounding AI-based tools as they relate to practice
Faculty (all prior competencies +): Explain, teach, and shape the current and emerging roles of AI in health care

Domain: Critical appraisal
Medical students: Describe the validated evaluation frameworks for AI-based tools in health care
Residents (all prior competencies +): Appraise the evidence behind AI-based tools
Faculty (all prior competencies +): Explain, teach, and contribute to the critical appraisal of AI-based tools

Domain: Medical decision making
Medical students: Identify the appropriate indications for AI-based tools
Residents (all prior competencies +): Incorporate the outputs of AI-based tools into medical decision making
Faculty (all prior competencies +): Model and study the use of AI in medical decision making to enhance effectiveness, value, and equity

Domain: Technical use
Medical students: Identify the tasks needed to operate AI-based tools
Residents (all prior competencies +): Execute the tasks needed to operate AI-based tools in a manner that builds efficiency and mastery
Faculty (all prior competencies +): Teach others to operate AI-based tools and execute the appropriate steps when these tools fail

Domain: Patient communication
Medical students: Describe features of effective communication with patients regarding AI-based tools, including the rationale for their use, privacy, confidentiality, and shared decision making
Residents (all prior competencies +): Demonstrate effective communication regarding AI-based tools in simulated or real-world settings
Faculty (all prior competencies +): Model effective communication in a manner that preserves or augments the clinician-patient relationship

Domain: Unintended consequences (cross-cutting)
Medical students: Recognize the potential adverse effects of AI-based tools arising from deficiencies in foundational knowledge, critical appraisal, medical decision making, technical use, and patient communication
Residents (all prior competencies +): Take appropriate actions to address potential adverse effects of AI-based tools
Faculty (all prior competencies +): Anticipate and study the potential adverse effects of AI-based tools and model appropriate actions to mitigate them

AI = artificial intelligence.

First, learners need a foundational understanding of AI, including the types of tasks that are amenable to AI, appropriate areas in which to consider its application, and stages in its progression from development to implementation and regulation. They need to nest this understanding in a broader context, including AI’s impact on the role of physicians, the challenges of making clinical decisions with an abundance of data, and technology’s influence on clinician-patient relationships. This competency will provide learners with the language and background needed to complete higher-order tasks.

Second, learners need to develop critical appraisal skills tailored to the unique features of AI. Similar to new medications, tests, and programs, AI-based tools should undergo testing for accuracy, generalizability, effectiveness, and fairness. Although these concepts are already introduced through evidence-based medicine (EBM) curricula,6 appropriate selection and application of AI-based tools require further understanding of their unique challenges, such as inconsistent performance across populations and performance degradation over time (“calibration drift”). Similarly, some AI approaches (such as neural networks) prioritize accuracy over explainability. This lack of transparency becomes important when AI misclassifies a patient and exposes them to unnecessary harm, yet offers no way to determine why the error was made. Likewise, awareness of the range of sources AI can draw from allows learners to appreciate what can happen when data are inaccurate, incomplete, or biased.27

Third, learners need to understand how to incorporate these tools into medical decision making. For example, AI-based tools can now use smartphone cameras to make dermatologic diagnoses.28,29 Training is needed to guide decision making when a patient’s lesion appears benign to the human evaluator but the AI-based tool identifies it as malignant.30 If these tools prove beneficial, their adoption has important implications for equity. Patients in resource-poor communities may lack access to the requisite technology,31 which can exacerbate disparities similar to how telehealth use varied during the COVID-19 pandemic.32

Fourth, learners need the technical skills required to use AI-based tools in a manner that is effective and efficient. Furthermore, the technology needed to use AI-based tools will inevitably fail. When this occurs, clinicians need to know how to react. Otherwise, they will experience frustration, dissatisfaction, and loss of self-efficacy that contributes to burnout.33

Fifth, learners need to understand how to communicate with patients regarding the use of AI-based tools. This includes explaining how and why the tools are being used, answering questions about privacy and confidentiality, and engaging in shared decision making. They also need to recognize the tools’ impact on clinician-patient relationships. Experience with electronic health records has demonstrated that entering data during visits adversely affects the flow of conversation, attention paid to emotional issues, trust, and patient satisfaction.34 Without adequate training, AI could create similar strains on relationships.

Lastly, the application of any technology comes with unintended consequences. When errors occur, biases are introduced, or disagreements arise, clinicians must understand how to adjust their reasoning and communicate relevant information. These limitations serve as an antidote to overconfidence. Just as learners study the limitations of diagnostic tests, they also need to appreciate how these tools contribute to probabilistic thinking as opposed to diagnostic and prognostic certainty. This domain is cross-cutting, as unintended consequences apply to the 5 competencies above.

When applying these competencies, several caveats need to be taken into consideration. First, training opportunities must be integrated across undergraduate medical education, graduate medical education, and continuing medical education (Table 2). Family medicine residencies extending to 4 years are well positioned to add this training despite an already crowded curriculum.35 For programs that remain at 3 years, training in AI can be integrated into existing sessions on health informatics or EBM.

Second, these competencies will change over time and must be tailored to the local context. For example, we anticipate that AI will become more widespread, with tools progressing from the development to the evaluation, validation, and monitoring phases.36 Furthermore, the specific AI-based tools presented to students may differ based on the prevalence of diseases, the high-priority problems, and the resources available within communities. Thus, learners do not need exposure to the breadth of available AI-based tools but rather to concepts and exemplars that can be applied to a wide range of clinical settings.

Third, these competencies serve as a point of departure, and more work is needed before integrating them into training. For example, subcompetencies need to be developed for primary, intermediate, and expert users, similar to the process underway to develop competencies for clinical informatics more broadly.37,38 Ultimately, only a small percentage of primary care clinicians will become expert users; it is not necessary to teach all clinicians to build machine learning models, just as not all clinicians need to know the intricacies of how a magnetic resonance imaging machine works or how to run statistical software. Those who become expert users may benefit from additional training to ensure that their knowledge and skills remain consistent with the current evidence base.
Nevertheless, all primary care clinicians should know how to appraise and apply AI. While we have focused on family physicians, these competencies can also be a starting point for other primary care team members, including nurse practitioners, physician assistants, nurses, and psychologists. For AI to improve care, all health professionals who participate in team-based care delivery will need additional training.39

In 1991, Gordon Guyatt introduced the term EBM to highlight the need to integrate evidence into medical decision making.40 Incorporating EBM into primary care required customization, with primary care educators emphasizing the importance of patient-oriented evidence, information mastery, and primary care research methods.41 While these concepts have informed medical decision making in primary care, one systematic review of EBM curricula found that no studies assessed the influence of these curricula on patient outcomes, that there was no validated tool to assess these curricula, and that a lack of EBM teachers was a barrier to broader dissemination.42,43 These findings highlight the need to adapt AI curricula to primary care. For example, new curricula need to be evaluated so that studies can track whether their use affects burnout and AI knowledge, skills, and attitudes. Developing these curricula for primary care will require the involvement of clinicians, educators, informaticists, and AI experts. Some professional societies, such as ABAIM and the American Medical Informatics Association (AMIA), have foundational curricula that can be adapted for primary care. The challenges are real, but the potential payoff is substantial. Through thoughtful development of these competencies, the primary care workforce can use AI to ensure that this digital revolution realizes its potential for the benefit of patients, clinicians, health systems, and society.

Supplementary Material

Liaw_visualabstract_20.6_v003.png

Footnotes

Conflicts of interest: W.L. received a gift from Humana, Inc. J.K.K. is participating in a fellowship sponsored by the College of Family Physicians of Canada and AMS Healthcare. A.B. is an employee of the American Board of Family Medicine. I.K. is a Board Member of the American Board of Artificial Intelligence in Medicine. S.L. has no conflicts of interest to declare.

REFERENCES

1. Muro M, Liu S. Which Cities Will Drive the Artificial Intelligence Revolution. Brookings Metropolitan Policy Program; 2021. Accessed Feb 2, 2022. https://www.brookings.edu/research/the-geography-of-ai/
2. Lavender J. Investment in AI for healthcare soars. KPMG. Published Nov 3, 2020. Accessed Oct 29, 2021. https://home.kpmg/xx/en/home/insights/2018/11/investment-in-ai-for-healthcare-soars.html
3. Lin SY, Mahoney MR, Sinsky CA. Ten ways artificial intelligence will transform primary care. J Gen Intern Med. 2019; 34(8): 1626-1630. 10.1007/s11606-019-05035-1
4. Blease C, Kaptchuk TJ, Bernstein MH, Mandl KD, Halamka JD, DesRoches CM. Artificial intelligence and the future of primary care: exploratory qualitative study of UK general practitioners’ views. J Med Internet Res. 2019; 21(3): e12802. 10.2196/12802
5. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019; 366(6464): 447-453. 10.1126/science.aax2342
6. James CA, Wheelock KM, Woolliscroft JO. Machine learning: the next paradigm shift in medical education. Acad Med. 2021; 96(7): 954-957. 10.1097/ACM.0000000000003943
7. Wong A, Otles E, Donnelly JP, et al. External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients. JAMA Intern Med. 2021; 181(8): 1065-1070. 10.1001/jamainternmed.2021.2626
8. Davis S, Lasko TA, Chen G, Matheny ME. Calibration drift among regression and machine learning models for hospital mortality. AMIA Annu Symp Proc. 2017; 2017: 625-634.
9. Matheny M, Israni ST, Ahmed M, Whicher D. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. National Academies Press; 2019.
10. Liaw W, Kakadiaris IA. Artificial intelligence and family medicine: better together. Fam Med. 2020; 52(1): 8-10. 10.22454/FamMed.2020.881454
11. Lin S. A clinician’s guide to artificial intelligence (AI): why and how primary care should lead the health care AI revolution. J Am Board Fam Med. 2022; 35(1): 175-184. 10.3122/jabfm.2022.01.210226
12. Committee on Implementing High-Quality Primary Care, Board on Health Care Services, Health and Medicine Division, National Academies of Sciences, Engineering, and Medicine. Implementing High-Quality Primary Care: Rebuilding the Foundation of Health Care. National Academies Press; 2021. 10.17226/25983
13. Shanafelt TD, West CP, Sinsky C, et al. Changes in burnout and satisfaction with work-life integration in physicians and the general US working population between 2011 and 2017. Mayo Clin Proc. 2019; 94(9): 1681-1694. 10.1016/j.mayocp.2018.10.023
14. Basu S, Berkowitz SA, Phillips RL, Bitton A, Landon BE, Phillips RS. Association of primary care physician supply with population mortality in the United States, 2005-2015. JAMA Intern Med. 2019; 179(4): 506-514. 10.1001/jamainternmed.2018.7624
15. The Center for Professionalism & Value in Health Care. Setting a research agenda for the use of artificial intelligence & machine learning in primary care. Published Mar 18, 2021. Accessed Jun 13, 2021. https://professionalismandvalue.org/setting-a-research-agenda-for-the-use-of-artificial-intelligence-machine-learning-in-primary-care/
16. Mitchell D. Academy’s digital assistant pilot advances to stage two. Published Jun 25, 2020. Accessed Aug 18, 2021. https://www.aafp.org/news/practice-professional-issues/20200625sukistagetwo.html
17. Upshur R. Artificial Intelligence, Machine Learning and the Potential Impacts on the Practice of Family Medicine: A Briefing Document. AMS Healthcare; 2019. Accessed Dec 16, 2021. https://www.ams-inc.on.ca/wp-content/uploads/2019/05/AMS-CFPC-paper-PRINT.pdf
18. Accreditation Council for Graduate Medical Education. Family medicine milestones. Accessed Oct 29, 2021. https://acgme.org/specialties/family-medicine/milestones/
19. Association of American Medical Colleges. Telehealth Competencies Across the Learning Continuum. AAMC; 2021. Accessed Oct 26, 2021. https://store.aamc.org/downloadable/download/sample/sample_id/412/
20. Brazelton T. STFM task force releases learning objectives for national telemedicine curriculum. Ann Fam Med. 2021; 19(1): 91.1-91. 10.1370/afm.2665
21. American Board of Artificial Intelligence in Medicine. Published 2021. Accessed Oct 29, 2021. https://abaim.org/
22. Mount Sinai launches department of artificial intelligence and human health. Mount Sinai Health System. Published Oct 11, 2021. Accessed Oct 26, 2021. https://www.mountsinai.org/about/newsroom/2021/mount-sinai-launches-department-of-artificial-intelligence-and-human-health
23. Lomis K, Jeffries P, Palatta A, et al. Artificial intelligence for health professions educators. NAM Perspect. Published online Sep 8, 2021. 10.31478/202109a
24. Pinto Dos Santos D, Giese D, Brodehl S, et al. Medical students’ attitude towards artificial intelligence: a multicentre survey. Eur Radiol. 2019; 29(4): 1640-1646. 10.1007/s00330-018-5601-1
25. Blease C, Kharko A, Bernstein M, et al. Machine learning in medical education: a survey of the experiences and opinions of medical students in Ireland. BMJ Health Care Inform. 2022; 29(1): e100480. 10.1136/bmjhci-2021-100480
26. Sapci AH, Sapci HA. Artificial intelligence education and tools for medical and health informatics students: systematic review. JMIR Med Educ. 2020; 6(1): e19285. 10.2196/19285
27. Lin S, Shah S, Sattler A, Smith M. Predicting avoidable healthcare utilization: practical considerations for AI/ML models in population health. Mayo Clin Proc. 2022; 97(4): 653-657. 10.1016/j.mayocp.2021.11.039
28. Aysa. Skin condition questions? AI-enabled answers. Ask Aysa. Accessed Mar 22, 2021. https://askaysa.com/
29. Bui LY. Using AI to help find answers to common skin conditions. Google Health Blog. Published May 18, 2021. Accessed Nov 9, 2021. https://blog.google/technology/health/ai-dermatology-preview-io-2021/
30. Jacobs M, Pradier MF, McCoy TH Jr, Perlis RH, Doshi-Velez F, Gajos KZ. How machine-learning recommendations influence clinician treatment selections: the example of antidepressant selection. Transl Psychiatry. 2021; 11(1): 108. 10.1038/s41398-021-01224-x
31. Pew Research Center. Demographics of mobile device ownership and adoption in the United States. Pew Research Center: Internet, Science & Tech. Published Apr 7, 2021. Accessed Dec 16, 2021. https://www.pewresearch.org/internet/fact-sheet/mobile/
32. Patel SY, Mehrotra A, Huskamp HA, Uscher-Pines L, Ganguli I, Barnett ML. Variation in telemedicine use and outpatient care during the COVID-19 pandemic in the United States. Health Aff (Millwood). 2021; 40(2): 349-358. 10.1377/hlthaff.2020.01786
33. Gardner RL, Cooper E, Haskell J, et al. Physician stress and burnout: the impact of health information technology. J Am Med Inform Assoc. 2019; 26(2): 106-114. 10.1093/jamia/ocy145
34. Kazmi Z. Effects of exam room EHR use on doctor-patient communication: a systematic literature review. Inform Prim Care. 2013; 21(1): 30-39. 10.14236/jhi.v21i1.37
35. Douglass AB. The case for the 4-year residency in family medicine. Fam Med. 2021; 53(7): 599-602. 10.22454/FamMed.2021.750646
36. Government Accountability Office. Artificial Intelligence in Health Care. Government Accountability Office; 2020. Accessed Aug 18, 2021. https://www.gao.gov/products/gao-21-7sp
37. Desai S, Mostaghimi A, Nambudiri VE. Clinical informatics subspecialists: characterizing a novel evolving workforce. J Am Med Inform Assoc. 2020; 27(11): 1711-1715. 10.1093/jamia/ocaa173
38. Davies A, Mueller J, Hassey A, Moulton G. Development of a core competency framework for clinical informatics. BMJ Health Care Inform. 2021; 28(1): e100356. 10.1136/bmjhci-2021-100356
39. Li RC, Smith M, Lu J, et al. Using AI to empower collaborative team workflows: two implementations for advance care planning and care escalation. NEJM Catal. 3(4): CAT.21.0457. 10.1056/CAT.21.0457
40. Guyatt G; Evidence-Based Medicine Working Group. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA. 1992; 268(17): 2420-2425. 10.1001/jama.1992.03490170092032
41. Slawson DC, Shaughnessy AF. Teaching evidence-based medicine: should we be teaching information management instead? Acad Med. 2005; 80(7): 685-689. 10.1097/00001888-200507000-00014
42. Shaughnessy AF, Gupta PS, Erlich DR, Slawson DC. Ability of an information mastery curriculum to improve residents’ skills and attitudes. Fam Med. 2012; 44(4): 259-264.
43. Halalau A, Holmes B, Rogers-Snyr A, et al. Evidence-based medicine curricula and barriers for physicians in training: a scoping review. Int J Med Educ. 2021; 12: 101-124. 10.5116/ijme.6097.ccc0
