American Journal of Pharmaceutical Education
2018 Dec;82(10):7066. doi: 10.5688/ajpe7066

Best Practices on Examination Construction, Administration, and Feedback

Mary Elizabeth Ray,a Kimberly K Daugherty,b Lisa Lebovitz,c Michael J Rudolph,d Veronica P Shuford,e Margarita V DiVallf,g
PMCID: PMC6325455  PMID: 30643316

Abstract

Examinations are typically used in higher education to objectively assess student learning, and they are also used as a frequent assessment tool in the Doctor of Pharmacy curriculum. This paper describes best practices and provides examples for faculty to build reliable and valid examinations, ensure examination security and deter academic misconduct, and enhance student learning and achievement of course objectives. Colleges and schools of pharmacy can incorporate these concepts into comprehensive examination policies and focus faculty development efforts on improving the examination purpose, design, and experience for both faculty and students.

Keywords: assessment, examination, best practice, security, administration

INTRODUCTION

Examinations are a frequent assessment method used in higher education to objectively measure student competency in attaining course learning objectives.1,2 Examinations can serve as powerful motivators by communicating to students which concepts and material are particularly important.2,3 Faculty may then use the results to identify student misconceptions, evaluate learning objectives/activities, and make decisions regarding instructional practices.4

The Accreditation Council for Pharmacy Education standards hold pharmacy programs accountable to ensure the validity of individual student assessments and integrity of student work.5 Among specific requirements and suggestions, faculty should ensure that examinations take place under circumstances that minimize academic misconduct, confirm the identity of students taking the examination, and consider examination validation and enhancement to ensure appropriate student progression.5 Sound principles for examination construction and validation may not be fully understood by all faculty. Additionally, uniform agreement is lacking on the best procedures for examination administration, and whether examinations should be returned to students or retained by faculty. This commentary provides an overview for best practices in examination construction and blueprinting, considerations for ensuring optimal and secure administration of examinations, and guidance for examination reviews and feedback on student performance.

EXAMINATION CONSTRUCTION

For an examination to be psychometrically sound, it must be reliable and valid. An examination is considered reliable if the results it generates are consistent and reproducible, such that a student would perform similarly on multiple versions of the examination. An examination is considered valid if it is both reliable and measures the student’s knowledge and skill(s) that it intends to measure. The goal of validity for examinations is to ensure that a representative sample of the intended learning objectives is measured, and that students have satisfied the minimum performance level to be competent with respect to the stated objectives.1,2

There are four important principles that faculty need to consider when creating a content-valid examination. First, establish the purpose of the examination and take steps to ensure that it measures the desired construct(s). Second, link items on the examination to the course learning objectives and the intended teaching taxonomies. Third, ensure that items are clearly written and well-structured; items that are ambiguous or lack congruence with the objectives may confuse students and directly affect examination scores. The last principle specifies that experts in the field should review the examination to ensure the other three principles have been met.6

Use of Backward Course Design

Backward course design is an effective method that helps faculty determine which learning objectives, outcomes, and competencies should be assessed and how.2,7,8 In backward course design, the faculty identifies desired results (outcomes) first, then determines acceptable evidence to demonstrate outcome achievement, and lastly designs the relevant learning experiences with specific objectives. When using examinations as an assessment strategy, faculty must consider which learning objectives are best assessed using this method.

Once the objectives have been specified, faculty need to determine the type of evidence needed to demonstrate student achievement: whether the examination is primarily formative in nature (intended to provide feedback to the student) or summative (intended to show achievement of the intended outcomes), and the highest level of expected outcome achievement using Bloom’s taxonomy (eg, knowledge, application, synthesis).8 This information may then be used to develop an examination blueprint, also called a test specification document, which lays the necessary foundation for the item development process.

Examination Blueprinting

Examination blueprinting is one method for faculty to ensure that their assessments align with the intended student learning objectives and levels of learning, and should be completed prior to examination construction.2,9 The process should consider the total amount of time allotted for the examination, which should dictate the number of questions, the distribution of items between topics, and item difficulty. The primary purpose of blueprinting is to maximize examination validity; however, despite its usefulness, a 2003 study found that only 15% of 144 United States and Canadian medical schools required course directors to develop assessment blueprints prior to writing assessments.10,11 Blueprinting was also among the least used best practices for examination construction among nursing faculty.12 The pharmacy literature is sparse on the subject of test blueprinting; however, national examinations such as the PCOA and licensing examinations do describe their comprehensive blueprinting methodologies.13,14

When blueprinting, faculty must also consider logistics such as the total number of items and overall length of the examination. Time spent on each item will vary based on the level of difficulty of the question, as well as the volume of reading associated with the question. For example, it takes much less time for a student to answer a question with a simple stem based on strict memorization of fact than to read a long and detailed patient case, think critically about the information, and synthesize an answer. Faculty should include auxiliary information only when it is truly needed to answer the question; it can be frustrating for students to spend examination time reading a long case only to be asked a question they could have answered without it. The National Association of Boards of Pharmacy (NABP) allows three hours to complete the 225-item Pharmacy Curriculum Outcomes Assessment (PCOA) examination (0.8 minutes per question), and six hours to complete a 250-item NAPLEX examination (1.44 minutes per question).13,14 Allocating at least 1 minute per question is a reasonable place to start, and adjustments can be made based on data collected from examinations, particularly if an electronic system is available to collect such data.
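The per-question arithmetic above can be checked with a short sketch. The helper name below is our own illustrative choice; the PCOA and NAPLEX figures are the NABP numbers cited in the text.

```python
# Illustrative sketch: average time available per item when planning
# examination length during blueprinting. The function name is hypothetical;
# the inputs below are the NABP figures cited in the text.

def minutes_per_item(total_minutes: float, num_items: int) -> float:
    """Average time available per examination question."""
    return total_minutes / num_items

print(minutes_per_item(180, 225))  # PCOA: 3 hours for 225 items -> 0.8
print(minutes_per_item(360, 250))  # NAPLEX: 6 hours for 250 items -> 1.44
```

Running the same calculation in reverse (dividing the allotted time by a target of 1 minute per question) gives a reasonable upper bound on examination length.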

Faculty should use blueprinting to reduce two key validity threats: problems with construct representation and construct-irrelevant variance. Issues with construct representation (more specifically misrepresentation, underrepresentation, or overrepresentation) occur when there is biased, under-, or over-sampling of the selected examination content. Faculty can minimize this threat to validity by ensuring proportional representation of content covered within the examination. The blueprint for item allocation and overall weighting should be proportional to the coverage of course content. In a simple scenario where all topics are considered equally important and are delivered using similar instructional methodologies, examination items should be divided among topics proportionally. For example, when writing a 50-item therapeutics examination assessing diabetes, hyperlipidemia, and hypertension with 30%, 30%, and 40% course content coverage respectively, the examination should contain 15 questions each for diabetes and hyperlipidemia, and 20 questions for hypertension. Faculty can support the validity of their examination by planning properly for construct representation.
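The proportional allocation described above can be sketched in a few lines. The function name and the largest-remainder rounding rule are illustrative assumptions, not part of the source; the rounding rule simply guarantees the topic counts sum to the intended examination length when percentages do not divide evenly.

```python
# Illustrative sketch: apportion examination items across topics in
# proportion to content coverage. The function name and the
# largest-remainder rounding scheme are our own assumptions.

def allocate_items(total_items: int, percent_weights: dict[str, int]) -> dict[str, int]:
    """Split total_items across topics weighted by integer percentages."""
    assert sum(percent_weights.values()) == 100
    # Integer floor of each topic's proportional share.
    counts = {t: total_items * p // 100 for t, p in percent_weights.items()}
    # Hand leftover items to the topics with the largest fractional remainders.
    leftover = total_items - sum(counts.values())
    by_remainder = sorted(percent_weights,
                          key=lambda t: total_items * percent_weights[t] % 100,
                          reverse=True)
    for t in by_remainder[:leftover]:
        counts[t] += 1
    return counts

# The 50-item therapeutics example from the text:
print(allocate_items(50, {"diabetes": 30, "hyperlipidemia": 30, "hypertension": 40}))
# {'diabetes': 15, 'hyperlipidemia': 15, 'hypertension': 20}
```

The same function handles awkward totals (eg, 47 items) without losing or inventing questions, which is where ad hoc rounding of a blueprint often goes wrong.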

Construct-irrelevant variance (CIV) occurs when there are flawed item formats, such as when examination items are written at an inappropriate level of difficulty (eg, too hard or too easy) or the wrong question format is chosen. CIV is often seen as a systematic error that either inflates or deflates a test score, which introduces bias.15 Item difficulty will also depend on the level of the learners and the learning objectives being assessed. Using a range of difficulty (minimally difficult to very difficult) is helpful to ensure that the examination can be completed within the allotted time. This is particularly important for examinations with questions from multiple instructors.

The last step in examination construction is test item development.9 Multiple-choice item writing is not easy, and many pharmacy faculty members have never received formal training in this critical skill.16 Developing test items requires a significant amount of practice and appropriate feedback. It is helpful for new faculty to field-test items before they are used on examinations, and to use peer expert reviewers to establish the validity of items.2,9,12 In-depth coverage of examination question writing is beyond the scope of this commentary; however, there are many resources on best practices in item writing, including proper selection of question format, which helps to avoid construct-irrelevant variance.2,3,17

EXAMINATION ADMINISTRATION

Examination Security

After the assessment is created, it requires administration under proper conditions. Policies for examination administration should be designed with input from both faculty and students to outline expectations for student conduct and provide protocols for security to minimize opportunities for dishonesty. Well-developed policies should also identify penalties associated with policy infractions. Unfortunately, when it comes to cheating, cost (ie, potential punishment) versus benefit (ie, better grades) is sometimes an ethical dilemma for students. Ip and colleagues found that 11.8% of pharmacy students admitted to academic dishonesty while in pharmacy school, motivated primarily by fear of failure, stress, and procrastination.18 Cheating as an undergraduate student was the only reliable predictor of cheating in pharmacy school. It is wise for faculty and programs to assume that dishonesty will occur without concerted prevention efforts; consistency among faculty and administrators regarding setting expectations, and reinforcing the examination policy and code of conduct, creates a culture of academic integrity.19

The first step in ensuring examination security is to maximize attendance. At institutions with multiple campuses, examinations should be administered during the same time period. Examination times should ideally be scheduled to avoid conflict with commonly attended professional events, to avoid the need for make-up examinations. Policies limiting the scope of acceptable absences and requiring advance notification with documentation for anticipated absences (eg, meeting brochure) may serve as a deterrent. When an absence cannot be avoided, verifying the student’s identification is key to preventing dishonesty through use of a surrogate, unless the student is well known to the proctor.

To maximize examination integrity, makeup examinations should be administered after the regularly scheduled examination for the course is administered. Testing the same outcomes on a make-up examination through different assessment questions or in a different fashion (eg, essay instead of multiple choice) may reduce requests for make-ups to those absolutely necessary. However, having a different make-up test has several implications such as perception of fairness to all students (eg, extra study time or less desirable examination formats), faculty workload, and scheduling challenges when large numbers of students are excused from the initial test date. A creative measure might include administration of the make-up examination with a proctor off-site, such as when a faculty member attends the same conference with students. Remote proctoring services using technology are also available.20,21

Students must be on time for examinations, both for security purposes and to prevent distracting others who have started the examination. Many programs either set an established cut-off time for entry or employ policies that prohibit entry of new examination takers after the first examination taker has left the room. Though in theory this practice seems logical if one assumes that examination security has been compromised, unless a different examination is offered for makeup, it may make more sense to have the student begin the examination rather than delay further. Penalties such as having less time to complete the examination or automatic point deductions may motivate a tardy student toward future timeliness. For electronic examinations, proctors should have late arrivals open their examinations under observation, to verify that they did not begin the examination early after unauthorized receipt of a password from someone already in the testing room. Students should not be excused from the examination room without good reason; only one student at a time should be permitted to exit, and only after ensuring their materials are secure.

Examination Proctors

The number of trained proctors should be chosen to maximize security and limit distractions. NABP uses a proctor-to-student ratio of 1:25 for the Pharmacy Curriculum Outcomes Assessment, and the College Board uses a ratio of 1:34.22 Regardless of ratio, proctors should be assigned to actively observe, circulate around the room, and attend to students for collection of examination materials or examination upload confirmation. It is also good practice to include at least one faculty member to assist in supervision and assume responsibility in the case of cheating or unforeseen emergencies. The faculty proctor need not have material on the examination, since answering content questions during the examination may create an unfair advantage among students.

Proctors should receive specific training in how to establish and maintain a secure examination environment, as well as how to handle dishonesty and other emergent situations. The most important aspect of such training is that proctors avoid distraction and take the task at hand seriously. Too often, proctors, including faculty proctors, may be found sitting with their heads down, looking at their phones, or grading papers. Proctors, or invigilators (ie, those who are vigilant), should walk up and down the aisles and make eye contact with students.23 Problems with cheating often occur in situations where a vigilant proctor may have made the difference.

Remote examination proctoring is becoming available through a variety of vendors for use during large examination administrations, with distance-learning students, for make-up examinations, or other instances when it is difficult to secure skilled proctors. In remote proctoring, students take examinations while monitored using web-cam computer technology, as well as fingerprint scanners to verify student identity. Software records audio and video of the student, as well as the 360 degree room environment, and may be “tagged” and reviewed when cheating is suspected. Remote proctoring has been shown to serve as a deterrent of cheating, but requires further investigation.20,21

Deterring Academic Misconduct

Random examination seating, either by chart or at the direction of proctors, may enhance security. Students have greater difficulty coordinating collaborative methods of cheating or proactively “planting” unauthorized materials if they do not know where they will sit. Seating charts also allow for seats to be blocked (eg, back rows) to ensure easy movement and viewing by proctors. To decrease the burden of randomizing every time, faculty should prepare several versions of a seating chart for each cohort in advance and choose the final version at examination time. Seating charts may also aid in confirming attendance. For additional security and verification of attendance, students may be asked to present identification, wear ID badges, sign an attendance log, or even provide biometric confirmation.24

Another strategy to deter cheating is through administration of different versions of the examination.25 Examination content and items may be kept identical among versions, but placed on the examination in different order. With paper examinations, avoid using different color paper or large font headings indicating examination version to prevent collusion between students with the same version. Electronic testing allows for randomization of sequence and item choices. Programs using electronic testing may consider the use of privacy screens as an additional, but not foolproof, security measure. If used, it is recommended that specifications and estimated cost for screens are included in the technology requirement policy; there is great variation in type, cost, and viewing angle limits of available options on the market.

Careful consideration should be paid to restricting the items students may have at their seats to those essential for taking the examination (eg, sharpened No. 2 pencil and computer). Bulky coats, hoodies, and hats may be restricted (with exceptions made for religious purposes) to prevent students from hiding notes or other items. Even the simplest of items may be suspect, including pens, mechanical pencils, tissue packets, food, and drinks, which is also why board examinations do not permit these items.14 Certainly, any items that transmit information, such as smart watches and cellphones, should be prohibited. Faculty should consider restricting watches and activity monitors entirely, communicating time via a room clock or the electronic testing software. This alleviates the need for proctors to assess whether each item is suspect, and prevents notes from being hidden beneath the face or band. Similarly, distribution of non-programmable calculators or use of electronic examination calculators is recommended. Lastly, faculty should strongly consider creating and distributing uniform reference materials (eg, calculation formulas) and scratch paper (ideally of an alternate color), so proctors may easily identify them. All restrictions should be outlined within the examination administration policy to ensure consistent expectations among students and proctors. Periodic conversations, surveys, or focus groups with students may further inform faculty and administrators of how effective these measures are at deterring cheating.

All students are expected to adhere to the Code of Conduct of their program, which should address matters of cheating (including aiding someone else) as well as the inappropriate use of technology resources. Students should take measures to protect their own work and be cautious of behaviors that give the appearance of cheating (eg, talking, wandering eyes, possessing restricted items). Having a secure examination environment and good proctoring may deter cheating; however, both students and proctors who witness or become aware of acts of academic dishonesty during an examination or examination review session should be encouraged to report the concern. Students should alert a proctor of suspicious behavior discreetly but immediately, so the situation may be assessed, ideally with the assistance of a faculty member. Of note, Rabi and colleagues determined that classroom atmosphere can influence cheating behaviors and suggested that faculty who appear more approachable and less intimidating can reduce cheating in the classroom.25

Perhaps the best way to deter cheating is to reinforce expectations and apply punishment for infractions. Faculty should take seriously any reports of cheating on examinations, or in any assessment. Sharing de-identified statistics regarding reports of cheating and resultant consequences lends credibility to the importance of reporting for all stakeholders.26 Having no consequences for cheating sends a message to those cheating and others that there is minimal risk in doing so. Those who are honest and do not cheat are left feeling discouraged and their efforts devalued. Interestingly, for cases of cheating, nursing students most frequently suggested receipt of a zero on the assessment, and expulsion from either the program or university.23 Further investigation is needed as to whether pharmacy student opinions regarding punishments for academic dishonesty differ from their nursing counterparts.

Post-examination Review and Feedback

Some institutions have policies and procedures regarding student reviews of completed examinations, and whether students may keep the examinations. In the absence of institutional policy, faculty often debate the pros and cons of returning examinations to students. As part of the student’s educational record under the Family Educational Rights and Privacy Act of 1974 (FERPA), students have a right to access, inspect, and review examinations. However, unless there is no reasonable way for the student to access their records, this right does not entitle them to receive a permanent copy.27

Both the pros and cons of returning examinations must be considered. Pros primarily relate to logistical factors: if examinations are returned for students to keep, faculty may not need to schedule large-group or individual viewing sessions, fewer individual questions may come from students about their performance, and storage room for paper examinations is not necessary. However, these time savings are offset by the need to develop new questions during each course iteration, because there is an inherent risk that students may share old examinations with future student cohorts. Future student performance may also be compromised, since review of old examinations places the focus on a limited cross-section of previously tested material rather than comprehensive review.

Perhaps the biggest drawback to returning examinations without a formalized review process is that students lose an opportunity to learn from and reflect on their performance. Reflection on assessments can help develop a student’s metacognition and improve learning.28 This impact should encourage those faculty who wish to maintain control of examination content to strongly consider providing students with an opportunity for review and guidance to reflect on their performance.

Logistically, examination review may be accomplished in many ways, from inviting students to review and discuss their examination during office hours, to scheduling several small-group review sessions, to holding class-wide optional or mandatory review sessions. The last option may be the most efficient use of time for large-enrollment courses, although faculty must take measures to maintain examination-like security during such sessions, such as having proctors and prohibiting student use of cameras and note-taking. Some electronic testing software allows students to review the full examination, or only those items marked as incorrect, either during a review session or immediately following completion and closure of their examination. A benefit of the latter approach is that students receive immediate feedback while already in a secure testing environment. Consideration must be paid to whether active test takers have the potential to view the examination screens of those in review mode; a privacy screen may be helpful. Immediate electronic examination review should be restricted if make-up examinations are pending.

Electronic testing software can also provide students with personalized learning reports that provide a snapshot of their performance, alone and/or compared to their peers, in areas that instructors have tagged as valuable (eg, programmatic outcomes, specific content area, complexity of learning achieved). Longitudinal reports on individual and class cohort performance are also beneficial so that students can reflect on their own growth over time in the program, and gauge their progress compared to peers.

Regardless of the method of examination delivery or whether full examinations are provided to students, a post-examination reflective activity or assignment is an effective method to enhance metacognition. An “exam wrapper” or cognitive wrapper assignment focuses on examination preparation instead of content. Through the use of guided questions, students self-assess their study skills and set goals for changes needed to improve future examination performance.29 This activity inculcates the most basic habits of lifelong learning, addressing both cognitive and behavioral regulation. Content reflection is even more essential for long-term competency development, which is the overarching outcome in pharmacy programs. A broad example is when the faculty retains the examination but shares cohort performance data by topic during a live review or as a post-examination assignment; students are prompted to describe their pre-examination understanding of the concepts within that topic, review faculty feedback and identify the essential concepts, and then reflect on their knowledge gaps. A more specific example is when faculty return individual examinations to students for an open-book assignment, and students document the essential concepts and explain their thought process for each question (in the case of multiple choice examinations, why the correct answer is correct and the wrong answers are wrong).30

CONCLUSION

Faculty members at pharmacy schools and colleges must consider strategies to ensure examination validity and security. Best practices described in this paper, along with discussions among faculty and students, should inform comprehensive school-wide examination policies that focus on implementing measures to verify student identity and deter academic misconduct. Faculty development should focus on appropriate examination construction and blueprinting, optimization of examination administration, and providing meaningful feedback for metacognitive learning after the examination. Additional sharing of best practices is encouraged to expand the literature in these important areas.

REFERENCES

1. Ahmad RG, Hamed OA. Impact of adopting a newly developed blueprinting method and relating it to item analysis on students’ performance. Med Teach. 2014;36(Suppl 1):S55–S62. doi: 10.3109/0142159X.2014.886014.
2. Shumway JM, Harden RM. AMEE Guide No. 25: the assessment of learning outcomes for the competent and reflective physician. Med Teach. 2003;25(6):569–584. doi: 10.1080/0142159032000151907.
3. Paniagua MA, Swygert KA. Item Writing Manual: Constructing Written Test Questions for the Basic and Clinical Sciences. 4th ed. Philadelphia, PA: National Board of Medical Examiners; 2016. http://www.nbme.org/publications/item-writing-manual.html. Accessed March 13, 2018.
4. Hubbard JK, Potts MA, Couch BA. How question types reveal student thinking: an experimental comparison of multiple-true-false and free-response formats. CBE Life Sci Educ. 2017;16(2):1–13. doi: 10.1187/cbe.16-12-0339.
5. Accreditation Council for Pharmacy Education. Accreditation standards and key elements for the professional program in pharmacy leading to the doctor of pharmacy degree. Standards 2016. https://www.acpe-accredit.org/pdf/Standards2016FINAL.pdf. Accessed March 13, 2018.
6. Bridge PD, Musial J, Frank R, Thomas R, Sawilowsky S. Measurement practices: methods for developing content-valid student examinations. Med Teach. 2003;25(4):414–421. doi: 10.1080/0142159031000100337.
7. Wiggins G, McTighe J. Understanding by Design. 2nd ed. Alexandria, VA: Association for Supervision and Curriculum Development; 2005. Backward design.
8. Daugherty KK. Backward course design: making the end the beginning. Am J Pharm Educ. 2006;70(6):Article 135. doi: 10.5688/aj7006135.
9. Albino J, Young SK, Neumann LM, et al. Assessing dental students’ competence: best practice recommendations in the performance assessment literature and investigation of current practices in predoctoral dental education. J Dent Educ. 2008;72(12):1405–1435.
10. Hamdy H. Blueprinting for the assessment of health care professionals. Clin Teach. 2006;3(3):175–179.
11. Coderre S, Woloschuk W, McLaughlin K. Twelve tips for blueprinting. Med Teach. 2009;31(4):322–324. doi: 10.1080/01421590802225770.
12. Killingsworth EE. Nursing faculty decision making about best practices in test construction, item analysis, and revision [dissertation]. Mercer University; 2013. https://search.proquest.com/docview/1467588588. Accessed May 31, 2018.
13. National Association of Boards of Pharmacy. Registration and Administration Guide for Schools and Colleges of Pharmacy. Mount Prospect, IL: National Association of Boards of Pharmacy; 2017. https://nabp.pharmacy/wp-content/uploads/2018/01/PCOA-Registration-Administration-School-Guide-2018.pdf. Accessed March 13, 2018.
14. National Association of Boards of Pharmacy. NAPLEX and MPJE 2018 Candidate Registration Bulletin. https://nabp.pharmacy/programs/naplex/. Accessed March 13, 2018.
15. Haladyna TM, Downing SM. Construct-irrelevant variance in high-stakes testing. Educ Meas Issues Pract. 2004;23(1):17–27.
16. Caldwell DJ, Pate AN. Effects of question formats on student and item performance. Am J Pharm Educ. 2013;77(4):Article 71. doi: 10.5688/ajpe77471.
17. Cohen RJ, Swerdlik M, Sturman E. Psychological Testing and Assessment. 8th ed. New York, NY: McGraw Hill Education; 2012.
18. Ip EJ, Nguyen K, Shah BM, Doroudgar S, Bidwal MK. Motivations and predictors of cheating in pharmacy school. Am J Pharm Educ. 2016;80(8):Article 133. doi: 10.5688/ajpe808133.
19. Hensley L. To cheat or not to cheat: a review with implications for practice. Community Coll Enterp. 2013;19(2):22–34.
20. Hylton K, Levy Y, Dringus LP. Utilizing webcam-based proctoring to deter misconduct in online exams. Comput Educ. 2016;92-93:53–63.
21. Milone AS, Cortese AM, Balestrieri RL, Pittenger AL. The impact of proctored online exams on the educational experience. Curr Pharm Teach Learn. 2017;9(1):108–114. doi: 10.1016/j.cptl.2016.08.037.
22. College Board. AP Coordinators Manual 2017-18. https://apcentral.collegeboard.org/pdf/ap-coordinators-manual-2017-18.pdf. Accessed March 13, 2018.
23. Brown DL. Cheating must be okay – everybody does it! Nurse Educ. 2002;27(1):6–8. doi: 10.1097/00006223-200201000-00010.
24. Al-Saleem SM, Ullah H. Security considerations and recommendations in computer-based testing. Sci World J. 2014:562787. doi: 10.1155/2014/562787.
25. Rabi SM, Patton LR, Fjortoft N, Zgarrick DP. Characteristics, prevalence, attitudes, and perceptions of academic dishonesty among pharmacy students. Am J Pharm Educ. 2006;70(4):Article 73. doi: 10.5688/aj700473.
26. DiVall MV, Schlesselman LS. Academic dishonesty: whose fault is it anyway? Am J Pharm Educ. 2016;80(3):Article 35. doi: 10.5688/ajpe80335.
27. Feder J. The Family Educational Rights and Privacy Act (FERPA): a legal overview. J Altern Perspect Soc Sci. 2015;6(3):329–335.
28. Medina MS, Castleberry AN, Persky AM. Strategies for improving learner metacognition in health professional education. Am J Pharm Educ. 2017;81(4):Article 78. doi: 10.5688/ajpe81478.
29. Gezer-Templeton PG, Mayhew EJ, Korte DS, Schmidt SJ. Use of exam wrappers to enhance students’ metacognitive skills in a large introductory food science and human nutrition course. J Food Sci Educ. 2017;16(1):28–36.
30. Medina MS, Yuet WC. Promoting academic integrity among health care students. Am J Health Syst Pharm. 2013;70(9):754–757. doi: 10.2146/ajhp120598.

