Front Med (Lausanne). 2024 Aug 16;11:1432319. doi: 10.3389/fmed.2024.1432319

Table 2.

Summary of tips and questions to aid implementation.

Each tip pairs a brief description with questions to aid implementation.
Tip 1. Not all assessments are equal: Consider your goals when determining assessment methods, and clearly communicate the boundaries of those methods to learners, faculty instructors, and stakeholders.
  • What do stakeholders wish to see in learners related to the curriculum? Is there a preference toward knowledge, attitudes, or behavioral demonstration? What elements of this relate most directly to program assessment and funding opportunities?

  • What are the current attitudes of learners, faculty, and key stakeholders related to assessment? How might assessment or communication practices change to accommodate these attitudes and further promote acceptability?

  • What existing assessments relate to the curriculum learning objectives? How might these be leveraged to reduce assessment burden?

Tip 2. Do not assume everyone is on the same page: Ensure instructor guidance is applied consistently across assessment elements.
  • What unspoken curriculum exists at the institution? Where might this support or conflict with the key learning objectives of the program?

  • Are there specific faculty who may be strongly influencing unspoken curriculum that is detrimental to learning and performance? If so, consider whether additional training, coaching, or reduced exposure to these faculty may limit threats to the learners’ achievement of the learning objectives.

  • Are there clear instructions for facilitators guiding the learners to ensure the assessments can be consistently applied?

Tip 3. Expertise is not everything: Consider a variety of perspectives and constraints when selecting raters.
  • Does assessment of the key learning objectives require subject matter experts or experienced raters?

  • Do faculty have sufficient time to conduct assessments? Do they have sufficient time to dedicate to training for reliability?

  • What resources are available to compensate evaluators for their time conducting assessments?

Tip 4. Even experts need guidance: Establish well-defined behaviors and specific examples.
  • What are the ways learners can respond within the assessment environment? Are there a relatively low number of predictable response types, or are there a wide variety of possible responses?

  • Has the ideal rater type been selected to perform the assessment? If not, how might the experience of more suitable individuals be leveraged in a more limited capacity to generate guidelines for those who will be performing the evaluations?

Tip 5. Do not assume guides are enough: Train raters thoroughly and hold them accountable for performance.
  • What levels of agreement do stakeholders expect evaluators to reach for inter-rater reliability? What is the lowest acceptable level?

  • How should training sessions be scheduled to maximize attendance and responsiveness? Are in-person meetings needed, or are virtual and/or asynchronous sessions sufficient? How much time can be dedicated to training?

  • Are there sufficient resources to meet with evaluators on a regular basis during the times they are available (e.g., are experienced training staff available after business hours if needed)? What additional resources might be needed to accommodate these schedules (e.g., additional hires, overtime pay, flexible work hour arrangements, etc.)?

Tip 6. No matter how well you have planned, it is wise to revisit: Ensure the layout and logistics comprehensively support the desired assessment strategy.
  • Are evaluators rating based on live performance or video review? Do the assessment forms support completion in that environment? Are sufficient tools available to support learner identification?

  • What is the ratio of learners to evaluators? Is this adequate for the assessment environment?

  • What interprofessional differences are there in competency demonstration, and has this been incorporated into assessment plans? If applicable, what are plans to share assessment data across professional schools?

Tip 7. Development is insufficient: Commit to fostering capacity for iteration, innovation, and optimization.
  • How are the baseline knowledge, skills, and attitudes (KSAs) of students changing upon entry from cohort to cohort?

  • How has the unspoken curriculum developed over time?

  • What changes are needed to accommodate the needs of new learners and new environments?

  • How often do competencies change at the national or international level? What staff might be available to maintain a database of linkages between the competencies, curriculum, and assessment instruments?