American Journal of Pharmaceutical Education. 2016 Dec 25;80(10):163. doi: 10.5688/ajpe8010163

Identifying Best Practices for and Utilities of the Pharmacy Curriculum Outcome Assessment Examination

Timothy Y Mok,a Frank Romanellib,c
PMCID: PMC5289719  PMID: 28179712

Abstract

Objective. A review was conducted to determine implementation strategies, utilities, score interpretation, and limitations of the Pharmacy Curriculum Outcomes Assessment (PCOA) examination.

Methods. Articles were identified through the PubMed, American Journal of Pharmaceutical Education, and International Pharmaceutical Abstracts databases using the following terms: “Pharmacy Curriculum Outcomes Assessment,” “pharmacy comprehensive examination,” and “curricular assessment.” Studies containing information regarding implementation, utility, and predictive values for US student pharmacists, curricula, and/or PGY1/PGY2 residents were included. Publications from the journal Academic Medicine, the Accreditation Council for Pharmacy Education (ACPE), and the American Association of Colleges of Pharmacy (AACP) were included for background information and comparison of the predictive utilities of comprehensive examinations in medicine.

Results. Ten PCOA and nine residency-related publications were identified. Based on published information, the PCOA may be best used as an additional tool to identify knowledge gaps for third-year student pharmacists.

Conclusion. Administering the PCOA to students after they have completed their didactic coursework may yield scores that reflect student knowledge. Predictive utility regarding the North American Pharmacist Licensure Examination (NAPLEX) and potential applications is limited, and more research is required to determine ways to use the PCOA.

Keywords: PCOA, Pharmacy Curriculum Outcome Assessment, Comprehensive Assessment, Curricular Assessment, Pharmacy Comprehensive Examination

INTRODUCTION

The Accreditation Council for Pharmacy Education (ACPE) Standards 2016 include a requirement for colleges and schools of pharmacy to use the Pharmacy Curriculum Outcomes Assessment (PCOA) as a component of all doctor of pharmacy (PharmD) degree programs (Standard 24.2, Appendix 3).1 The reasoning for this mandate originates from the perceived need for an annual psychometrically validated, objective assessment across all colleges and schools of pharmacy that would allow for consistent curricular evaluation. Student performance on the examination is intended to aid the ACPE and institutions in benchmarking student knowledge while allowing for comparisons of differing curricula. Examination results can also identify educational gaps and facilitate curricular change.2

Benchmarking and outcome examinations are not new pedagogic concepts. These types of assessments have been employed at several colleges and schools, including the University of Houston College of Pharmacy, which uses its Milemarker Examination.3 In a survey of US colleges and schools of pharmacy regarding their use of a cumulative assessment examination, approximately 50 institutions reported using some form of cumulative assessment, 10 of which indicated they used the PCOA. Vyas et al determined that the creation, implementation, and administration of cumulative assessments across pharmacy schools vary and are resource heavy, and that these tests often lack the external validity required to measure educational competency.4 A follow-up survey of 52 institutions identified by the National Association of Boards of Pharmacy regarding their use of the PCOA was conducted from 2009 through 2014.5 The survey investigated institutions’ intentions for using the PCOA, test administration strategies, and score evaluations and distributions.5 A majority of the respondents intended to use the examination for academic benchmarking (74%) and programmatic assessment (76%).5

Thirty-five programs administered the examination to third-year pharmacy students; less than half of respondents reported using the PCOA as their only form of summative evaluation, and the examination was most commonly given as “no stakes,” that is, without academic consequences for low performance.5 Eight institutions noted remediation interventions for low test performance.5 These interventions were described either as remediation plans still in development or as procedures set by faculty advisors on a case-by-case basis.5 Gortney and colleagues concluded that PCOA implementation varied significantly across colleges and schools of pharmacy.5 This review explores the published literature regarding the implementation, utilities, potential applications, and limitations of the PCOA.

Examination Background

The National Association of Boards of Pharmacy (NABP) developed the PCOA in 2008 as a tool for measuring student performance. It is a 225-item, Rasch-based examination designed to cover four content areas with 28 subtopics. The Rasch method is a psychometric educational model that analyzes categorical data, such as test scores, as a function of both respondent ability and item difficulty.6 The PCOA examination uses varied question formats including single- and multiple-option multiple-choice items and open-response items.7 There are four testing windows available in a given year, with the site of administration determined by the host institution. The examination provides overall and subtopic scores to both the participant and the parent institution. Results for each content area and subtopic are released to the participating institution within four weeks of the conclusion of a testing window.7
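To illustrate the underlying model, the dichotomous Rasch formulation expresses the probability of a correct response as a logistic function of the difference between a respondent's ability and an item's difficulty, both on a logit scale. The following minimal sketch uses hypothetical ability and difficulty values for illustration only; it is not drawn from the PCOA's actual calibration procedure.

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """Dichotomous Rasch model: P(correct) is a logistic function of
    (ability - difficulty), with both parameters on a logit scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Hypothetical values for illustration only
examinee_ability = 0.8   # logits
item_difficulty = 0.2    # logits
print(f"P(correct) = {rasch_probability(examinee_ability, item_difficulty):.2f}")
# When ability equals item difficulty, the probability of a correct response is 0.5.
```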

The NABP analyzes both pretest and test questions based on at least six factors including time analysis and data forensics. Scores are calibrated based on all examinations within an administration window, allowing for more accurate comparisons. In July 2015, the NABP implemented several changes to the PCOA: the total number of questions was increased from 220 to 225, the number of subtopics was decreased from 35 to 28, and the score weightings among the four global examination content areas were shifted. Furthermore, some subtopics were reworded, combined, and/or moved to other content areas (eg, “Medication Dispensing and Distribution Systems” was moved from the Clinical Sciences section to the Social/Behavioral/Administrative Pharmacy Sciences section).7,8 Colleges and schools may use individual scores, overall and domain-level scores, percentile ranks, and scaled national or “normed” scores for analytical, research, or other educational purposes. Beginning in 2016, a summary report of each institution’s scores will be submitted to ACPE.7 This report will contain mean overall and domain-level scores for each participating college or school of pharmacy.

METHODS

Relevant published studies were identified through the PubMed, American Journal of Pharmaceutical Education, and International Pharmaceutical Abstracts databases using terms that included “Pharmacy Curriculum Outcomes Assessment,” “pharmacy comprehensive examination,” and “curricular assessment.” Two hundred sixty-two published papers and abstracts were found. Nineteen of these were included in the study, eight of which pertained to the PCOA and two of which were case reports presented as posters. Studies containing information regarding implementation of the PCOA, utility of the PCOA, and ability to predict success for US student pharmacists, curricula, and/or PGY1/PGY2 residents were included. In addition to the publications found in the literature, 18 publications from the journal Academic Medicine, the Accreditation Council for Pharmacy Education (ACPE), and the American Association of Colleges of Pharmacy (AACP) were included for background information and comparison of the predictive utilities of comprehensive examinations in medicine.

IMPLEMENTATION

While every US college and school of pharmacy must meet ACPE standards for accreditation, room exists to infuse innovation and craft unique curricula. Pharmacy curricular elements vary across the academy and include sundry requirements related to admission, program length, academic terms (eg, block, semester, quarter), and experiential opportunities. With regard to curricular assessment, colleges and schools employ a variety of methods to measure whether a student meets minimal competency. The PCOA is designed to be a benchmarking examination within existing curricula. Examination results are intended to distinguish between levels of test takers by professional year, thus allowing for more robust comparison and analysis.7

In a small study conducted by Scott et al, the PCOA was administered to pharmacy students at the end of professional years 1, 2, and 3 (P1, P2, P3) between 2008 and 2010, with optional examination completion for P4 students in 2008.8 Pearson’s correlation coefficient was used to compare grade point averages (GPAs) and scaled PCOA scores by professional year. The PCOA scores had low to modest correlations with GPAs across all professional years between 2008 and 2010, with Pearson coefficients ranging from 0.25 to 0.49. The most notable correlation between PCOA scores and GPA was for the P3 cohort in 2008 (R=.71). While correlation was greatest in 2008, it weakened in 2009 and 2010 (R=.46 and .26, respectively). A correlation was also found between scaled PCOA scores and professional year, most notably with P3 students (R=.91, .90, and .93 during the 2008, 2009, and 2010 examination periods, respectively).8 The results of this study suggest that the P3 year might be the optimal period during the PharmD program to administer the PCOA. Several study limitations should be considered, including the involvement of only a single institution and potential variations in student motivation, which might have affected performance and thus scores. Additionally, the study employed an older version of the PCOA.
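As a simple illustration of the correlational analysis described above, the sketch below computes Pearson's r between GPAs and scaled PCOA scores. The values are invented for demonstration and do not reproduce the study's data or full methodology.

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Hypothetical, invented data for illustration only
gpas = [3.1, 3.4, 2.9, 3.8, 3.6, 2.7, 3.2]
pcoa_scaled_scores = [310, 345, 290, 390, 360, 275, 320]

r = correlation(gpas, pcoa_scaled_scores)
print(f"Pearson r = {r:.2f}")
```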

As Scott and colleagues found, student performance on the PCOA seems to correlate most closely with P3 students’ GPA, suggesting that administering the examination at the end of the P3 year may be most reflective of student performance. For accelerated programs, it is reasonable to extrapolate from these findings and administer the PCOA immediately after the final year of didactic courses. Administering the PCOA as a cumulative examination in the final didactic year may aid schools in determining curricular content gaps. Additionally, yearly administration at this point in the curriculum may allow for consistent comparisons across differing institutions.8 However, obtaining an accurate representation of an individual’s knowledge using the PCOA relies heavily on how each institution chooses to administer the examination; ACPE Standards only mandate that the examination be administered at least once.1

Another implementation factor to consider is the stakes of the examination. When a school elects to use the PCOA, that institution needs to determine whether to administer the examination as a high- or low-stakes assessment. Examinations are generally considered “high stakes” when tied to other summative assessments (eg, grades, licensure). Administering the PCOA as a high- or low-stakes examination may affect student effort and burden.9-11 High-stakes examinations may increase student effort by conveying the importance of the examination. Even when students exert maximal effort, however, low scores can result from a curriculum that does not wholly align with PCOA objectives. Conversely, administering the PCOA as a low-stakes examination might result in students putting forth less than their best effort.

Wilkes University explored a unique approach to issues surrounding student effort. Waskiewicz et al conducted a study with the goal of determining whether the PCOA could be administered as a low-stakes examination with incentives (eg, a personalized letter from the dean) in order to maximize student motivation.9 The study included only 65 P3 students, who were split into two arms: 32 students received a personalized letter from the dean imploring them to apply their best effort on the examination in order to aid in identifying both curricular and student weaknesses, while 33 students received nonpersonalized letters with the same information. Study investigators sought to determine whether PCOA scores were based on ability/knowledge, motivation/effort, or both. They calculated an “ability score” based on students’ Scholastic Aptitude Test (SAT) math and verbal scores and GPA. Students were matched based on mean scores as a control for ability. “Motivation-effort scores” were calculated for each student using results from the Student Opinion Scale (a validated student motivation questionnaire).9,10 The motivation-effort scores were arranged in order from lowest to highest. A motivation-filtering algorithm was then applied, removing individuals from the cohort when a student’s motivation-effort score deviated from those of the group. Each time motivation filtering was applied, a shift in mean ability scores occurred. The motivation filtering was stopped when the ability score varied significantly from the baseline mean ability score. Remaining student scores were then expected to reflect the content knowledge of the group.9
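The sketch below is one plausible reading of the motivation-filtering procedure described above: students are ordered by motivation-effort score, the least motivated are removed one at a time, and filtering stops once the remaining group's mean ability score shifts too far from the baseline. The data, the tuple structure, and the fixed shift threshold are all hypothetical; the published study used a significance test rather than a fixed cutoff.

```python
from statistics import mean

def motivation_filter(students, ability_shift_threshold=0.1):
    """Drop the lowest motivation-effort scores one at a time, stopping once the
    remaining group's mean ability score deviates from the baseline mean by more
    than the (hypothetical) threshold. `students` holds (motivation_effort, ability)."""
    baseline_ability = mean(ability for _, ability in students)
    remaining = sorted(students)  # lowest motivation-effort first
    while len(remaining) > 1:
        candidate = remaining[1:]  # tentatively remove the least-motivated student
        shift = abs(mean(a for _, a in candidate) - baseline_ability)
        if shift > ability_shift_threshold:
            break  # further filtering would distort the group's ability profile
        remaining = candidate
    return remaining

# Hypothetical (motivation-effort, ability) pairs for illustration only
cohort = [(12, 2.9), (15, 3.4), (18, 3.1), (22, 3.6), (25, 3.3), (28, 3.5)]
print(motivation_filter(cohort))
```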

Motivation filtering was not needed in the group that received personalized letters because no correlation was found between motivation-effort scores and PCOA scores, nor between motivation-effort scores and ability scores. The group that received nonpersonalized letters (the nonincentivized group) required filtering, with 25 scores remaining for the final analysis. Results indicated that, after both the incentive and motivation filtering, PCOA scores would likely reflect students’ knowledge. Limitations of this report include its single-institution design and the experimental approach itself, in which individuals’ scores may be filtered out so that the resulting PCOA score is applicable only to students whose scores were not removed.9 This proposed method for administering the PCOA and analyzing data demonstrated usefulness in terms of lessening student burden (eg, test anxiety, examination fees for repeated testing, excess time spent studying), while potentially producing meaningful scores that could be used in guiding curricular change.

Coyle et al assessed the utility of implementing the PCOA as a high-stakes examination.11 The PCOA replaced the University of Houston’s Milemarker examination in 2014 and was made a mandatory summative assessment to determine the minimal competencies required to commence advanced pharmacy practice experiences (APPEs).11 The examination was administered to P1, P2, and P3 students, and minimal competency for each class differed: the P1 and P2 classes’ minimal competency was defined as the PCOA National Scaled Score (NSS), while the P3 class’s minimal competency was defined as the NSS minus the mean standard deviation of the P3 PCOA class scores at the college.11 The P3 students who did not achieve minimal competency were required to repeat the examination and possibly face a delay in starting their APPEs because of the examination-testing window.
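A minimal sketch of the competency thresholds described above, assuming the P3 cutoff is computed as the national scaled score minus the standard deviation of the college's own P3 scores (the report's phrase "mean standard deviation"). All values are invented for illustration.

```python
from statistics import stdev

# Hypothetical values for illustration only; the NSS and class scores are invented
national_scaled_score = 350                      # NSS for the testing window
p3_class_scores = [310, 365, 402, 338, 355, 390, 327]

p1_p2_threshold = national_scaled_score          # P1/P2 minimal competency
p3_threshold = national_scaled_score - stdev(p3_class_scores)  # P3 minimal competency (assumed interpretation)

below_threshold = [s for s in p3_class_scores if s < p3_threshold]
print(f"P3 threshold: {p3_threshold:.1f}; scores requiring remediation: {below_threshold}")
```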

Analysis revealed a correlation between GPA and examination performance on the PCOA (p<.003) for the P1, P2, and P3 classes. Further analysis revealed that students with GPAs ≤3.5 had a lower PCOA average percentile rank compared with average scores on the third-year Milemarker examination.11 While percentile rank and average scores are not equivalent because examination contents differ, it is important to note that the P3 Milemarker was specifically designed as a measure of the college’s curriculum while the PCOA was not. In the 2014 cohort, 12 students needed to remediate after taking the first PCOA examination, two of whom remediated twice and had to begin their APPEs late.11 The following year, only two students needed to remediate and there was no delay in the students beginning their APPEs.11 Study investigators also determined that those who remediated did not perform as well on their APPEs, although neither the methodology for reaching this conclusion nor the data used were presented.11 Limitations of this study included analysis of data from only a single institution. Additionally, data concerning remediation strategies and cost, and the reasons for having differing definitions of minimal competency, were not provided.

Literature suggests that implementing the PCOA examination near the end of the didactic portion of the curriculum may result in scores that best reflect student performance. Administering the examination as low stakes with incentives or high stakes can help maximize student effort and thus result in scores that accurately reflect performance. More information is still required to determine best approaches to maximize student effort and remediation strategies.

CORRELATION TO NAPLEX

Investigators at North Dakota State University (NDSU) explored other uses of the PCOA beyond simply serving as a means of curricular evaluation. Third-year student pharmacists’ PCOA scores were examined as a predictor of NAPLEX outcomes in 2009 and 2010.12 In addition to determining whether correlation exists between total PCOA and NAPLEX scores, investigators compared each of the four PCOA competency areas to each of the three NAPLEX domains for each respective student.12 The investigators found a positive, mild to moderate relationship between PCOA and NAPLEX scores (R=.3 to .6, p<.001), with only one PCOA component (Social/Behavioral/Administrative Sciences) not correlating specifically with its associated NAPLEX competency area (Area 2: Assess Safe and Accurate Preparation and Dispensing of Medications) (R=.17, p=.08).12 Additionally, investigators found that PCOA scores were positively associated with overall NAPLEX scores (R=.59, p<.001), indicating that higher PCOA scores moderately predicted NAPLEX scores.12

Hutchinson and colleagues also studied the relationship between PCOA and NAPLEX scores.13 The PCOA was administered to P3 students in January 2012 and 2013, and the sum of their Pharmaceutical and Clinical Sciences scores was compared to their respective NAPLEX scores in 2013 and 2014.13 Using Pearson’s correlation, investigators found a moderate correlation between these content areas and NAPLEX scores (R=.57, p<.0005).13 The most significant limitation of these data is a lack of in-depth explanation of study methods. Whether study investigators used incentives to encourage students to enhance their examination performance is unclear. Other confounding factors include whether the study compared initial or subsequent attempts at taking either the PCOA or NAPLEX. In addition, if a student or recent graduate performed below the national mean on either examination, they may have received remediation, and the associated scores could subsequently have been included in the study.

Other limitations to extrapolating data from both of the aforementioned studies include involvement of only one institution, and inherent differences in the nature of the PCOA and the NAPLEX. The PCOA is designed as a knowledge-based assessment while the NAPLEX encompasses some pharmacy praxis assessment and is designed to determine minimal competency to practice.12 Additionally, both the PCOA and NAPLEX have changed since November 2015. Changes to the NAPLEX blueprint reflect evolutions in current pharmacy practice, while the PCOA has been modified to better align with ACPE Standards. Because ACPE Standards are ultimately intended to prepare students for practice, PCOA scores should logically relate to NAPLEX scores.

Notably, neither examination contains an objective structured clinical examination (OSCE) component or other performance-based step intended to measure soft skills. Learning from other standardized assessment systems, such as the Canadian Pharmacist Qualifying Examination and the United States Medical Licensing Examination (USMLE), will be crucial in guiding future modifications to either assessment.14 These parameters may become increasingly critical as postgraduate training, which integrates knowledge and clinical skills, has become a popular choice and may potentially become a prerequisite for employment, especially in direct patient care.15-17 While more information is needed before the PCOA could potentially be used as a predictor of NAPLEX scores, institutions may choose to investigate individual students’ scores to determine areas requiring improvement and to tailor the content of APPEs to address those areas.

POTENTIAL APPLICATIONS

PCOA scores below the national average may be useful in identifying both institutional and individual curricular issues. Conversely, the utility of “higher” PCOA scores has not yet been elucidated. ACPE Standards indicate that PCOA scores, on both institutional and individual levels, can be used for comparison (Standard 24.2).1,2 This implies that a ranking system may result, although the PCOA may not match the objectives of nonprescriptive and unique curricula.18 Since students also receive personalized scores, some have postulated that postgraduate training programs or employers may consider and/or assess applicants’ PCOA scores to secure optimal candidates or employees.19

Some have drawn comparisons between the PCOA and the United States Medical Licensing Examination (USMLE) Step 1.20 Both examinations are intended to test knowledge, do not feature OSCE components, and are scored on a numeric scale. Medical students sit for the USMLE Step 1 after completing their didactic courses and prior to starting rotations or patient care experiences.20 To achieve an accurate and comprehensive evaluation of an institution’s curriculum, pharmacy students would probably also need to take the PCOA examination close to the completion of didactic coursework (just prior to APPEs).20 Some have discussed using the PCOA to implement a “step” model similar to that used in medicine.20 Beyond equating the PCOA to USMLE Step 1, the authors of that proposal suggest further mirroring the USMLE Step examinations: after the PCOA, a two-part examination similar to the USMLE Step 2 could be administered at the conclusion of APPEs.

Subsequently, a third examination to assess clinical skills could be administered during residency, similar to the USMLE Step 3.20 While this concept is innovative, calls exist in medicine to critically appraise the continued use of the USMLE Step 1 as a predictor of residency success. Pharmacy educators should wait until the examination’s utility against other markers of resident success has been validated before colleges and schools of pharmacy follow suit. Observers have suggested that USMLE Step 1 scores weigh heavily when evaluating potential medical residency candidates, yet the examination does not measure clinical abilities or skills, teamwork, professionalism, or fundamental competencies.21

Studies examining use of the USMLE Step 1 as a predictor of subsequent examination performance have reported varying results. In certain specialties, such as orthopedic surgery and pathology, the USMLE Step 1 has demonstrated only weak to moderate correlations with subsequent certifying examination pass rates.22-24 Correlations vary when performance on the USMLE Step 1 is compared to annual in-training service examinations (ITEs). ITEs are designed to identify trainees who may be at risk of not passing certifying examinations. Scores have been reported to be moderately correlated in the specialty areas of surgery and emergency medicine and significantly correlated in obstetrics and gynecology.25-28 Residency programs might be able to better use resources to supplement practitioners early in their careers if USMLE Step 1 scores accurately predicted ITE scores. Naturally, increasing trainees’ ITE scores may increase their chances of passing certifying examinations.

Given the similarity of the broad array of clinical rotations completed by postgraduate year 1 pharmacy residents and internal medicine residents, it may be reasonable to compare assessment data for internal medicine residents with that for pharmacy residents. A recent research report from a single program sought to examine correlations among USMLE Step 1 scores, ITE scores, and performance on the American Board of Internal Medicine Certifying Examination (ABIM-CE).29 This study included 241 resident graduates from the Medical College of Wisconsin between 2004 and 2012, of whom 212 passed the ABIM-CE. Information regarding the residents’ performance on the USMLE Step 1, Step 2, the ITE for each year of residency, and the ABIM-CE was available.

Using Pearson’s correlation, USMLE Step 1 scores were found to be moderately correlated with ITE scores in years 1 and 2, and mildly correlated with year 3 scores (R=.67, .70, and .44, respectively).29 ITE scores during years 1 and 2 also moderately correlated with ABIM-CE scores (R=.66 for both). Year 3 ITE scores were mildly correlated with ABIM-CE scores (R=.48).29 Although the year 3 ITE score was less strongly correlated than years 1 or 2, residents who scored in the bottom quartile had a higher relative risk (RR=10.5, 95% CI 3.1-35.1) of failing the ABIM-CE. USMLE Step 1 scores were also moderately correlated with the ABIM-CE (R=.59).29

Overall the investigators found that possible predictors of failing the ABIM-CE include being in the bottom half of the USMLE Step 1 and lowest quartile of the ITE.29 Possible predictors of passing the ABIM-CE include being in the top quartile of the ITE.29 The study was hindered by a small sample size and limited external validity.29 Although there are natural differences between pharmacy and medicine, both fields engender similar elements such as a sound knowledge base, strong communication skills, abilities to work with teams, and critical thinking aptitude.

While summative scores such as GPA have been a mainstay in objective comparisons of students, postgraduate training programs are cognizant of the fact that GPAs between academic programs are not equivalent because of unique characteristics within individual programs. Besides GPA, postgraduate training programs such as residencies and fellowships are hard-pressed to find other numerical data with which to compare students. As enticing as it may be to use the PCOA as a potential mirror of the USMLE Step 1, the utility of the PCOA in predicting the traits and abilities needed to succeed as a postgraduate trainee is severely limited.

EXAMINATION LIMITATIONS

As institutions decide to implement and then schedule the PCOA within their curriculum, it is important to acknowledge the limitations of the examination. In a statement to ACPE, the Executive Committee of the AACP’s Assessment SIG stated that the tool was validated to measure overall student pharmacist performance, compare institutional scores nationally, track student growth, and provide student feedback.18 The report noted that the PCOA has not been studied for use as an evaluation tool for curricula, as an assessment of summative fundamental knowledge, or as an evaluation of pre-APPE readiness. These comments raise serious issues with regard to the meaningful use of data and evidence generated by the examination, whether it is used by colleges and schools and/or accreditation bodies. Naughton and colleagues detailed an additional limitation of the examination, suggesting that students score higher in areas that were recently taught in the curriculum, thus clouding the evaluation of retained knowledge.12

Colleges and schools also must be cognizant that the PCOA is not directly tailored to any institution’s curriculum, meaning that an individual student’s scores and percentile rankings, as well as an institution’s scores and rankings, will vary from national averages. The two types of data available to the institution are individual students’ scores and the particular institution’s scores. Data are lacking on how to effectively interpret results for students and institutions. Overall, more data currently exist concerning the utility of individual student scores than institutional scores. At the institutional level, there have yet to be reports of programmatic outcomes resulting from curricular changes initiated on the basis of institutional score reports.

Moreover, ACPE will receive institution scores, but how ACPE decides to interpret and use them is also unclear. Scott and colleagues explain that when analyzing results, limitations in score variability can arise from differences in students’ motivation and timing of examination administration.8 On an individual level, Naughton and colleagues found that individual scores had limited correlation to one of the NAPLEX competency areas, Assess Safe and Accurate Preparation and Dispensing of Medications.12 If individual scores fall below national means, the best remediation methods to achieve meaningful outcomes from retaking the examination remain to be elucidated.

Another major limitation of the PCOA is its inability to assess communication skills, professionalism, critical thinking, and clinical decision-making. These aptitudes are considered essential traits or “soft skills” for pharmacy professionals. Institutions will need to decide whether their current assessment plans are adequate for evaluating both fundamental knowledge and soft skills, and how implementing the PCOA will bolster student assessments. It is also imperative to determine any potential financial burdens on students and institutions incurred by test taking.8 If the PCOA is intended to track student growth, then some institutions may elect to administer the examination at multiple periods.11 Currently, the first administration of the PCOA for a given class cohort is free, with subsequent administrations costing $75 per student per examination.7 Additionally, if the examination were administered as a high-stakes examination in which students who do not pass may have a delay in graduating, then both direct and indirect costs would need to be determined. Institutional costs include manpower resources related to actual test administration and management as well as potential remediation systems. Lastly, repeated administration of the examination to a given cohort may over time result in testing bias, which could lead to scores that overestimate the students’ actual knowledge.

ACPE Standards 2016 mandate use of the PCOA as a standardized examination across the academy.1 Scott and colleagues found strong correlations between scaled examination scores and professional year in the pharmacy curriculum, particularly the P3 year.8 Because the PCOA is a knowledge-based examination, it is logical that students who have just finished the didactic portion of the curriculum would be in the best position to complete the examination. Administering the examination to students completing APPEs may pose a logistical issue.8 Additionally, the utility of administering the PCOA to students on APPEs has not been well studied. Availability of the Pre-NAPLEX examination, coupled with the PCOA’s only modest correlation in predicting NAPLEX scores, may be a further reason to defer test taking in this cohort.

Another important factor to consider when an institution decides to use the PCOA is ensuring scores accurately and realistically reflect students’ knowledge. In theory, students must approach the examination with genuine effort for it to provide an adequate assessment of their knowledge. As previously mentioned, some colleges and schools demonstrated some success using personalized notes from senior administrators as a means of eliciting student effort on a low-stakes examination.8,9 Others have reported achieving correlations between PCOA examination performance and students’ GPA, a marker of student effort, using a high-stakes examination.11 Studies examining other innovative strategies to achieve the maximum student effort required for an accurate measurement of knowledge are needed in order to determine the best approach to administering the PCOA.

While the PCOA examination may be intended to assist students in their learning and to longitudinally track outcomes, more studies are needed to detail the success of remediation processes for low scores in PCOA content areas. Remediation strategies in turn should map to better performance on the PCOA examination. Faculties will need to exercise caution with regard to both interpretation and extrapolation of PCOA scores in terms of both the individual student and the associated curricula. Critical analysis is needed to determine whether a student or group of students performed poorly because of a deficit in test-taking strategies or actual knowledge, and/or where gaps exist in the curriculum. Finally, more studies should be undertaken to determine the utility of PCOA scores, and whether these results can be used to predict future performance on APPEs and/or licensure examinations.

Standardized measurements (eg, USMLE Step 1) have been used in medicine to evaluate and compare candidates applying for various residency programs. While it is enticing for pharmacy residency programs to follow suit, caution must be taken to ensure robustness in predicting success before investing time and resources in a process that may become ingrained as regular practice. Studies of the predictive utility of the USMLE Step 1 have found moderate correlations with ITE and ABIM-CE scores.22-29 Results have also shown that individuals who scored in the lower quartiles of Step 1 are at higher risk of failing the certifying examination.29 More studies are needed to establish the correlations and predictive abilities of PCOA scores in terms of resident success before recommending that programs use the PCOA as an evaluation tool for residency candidate eligibility.

SUMMARY

The precise role of the PCOA examination was heavily debated prior to its inclusion within accreditation standards and will likely continue to be debated as Standards 2016 are implemented. Overall, PCOA scores seem modestly useful in identifying areas of weakness for specific candidates, especially when the examination is administered to P3 students. Studies have mostly occurred at single institutions, and very few of these investigations explored similar outcomes. More research is needed to determine best practices surrounding the use of the PCOA as a marker of either individual or institutional curricular success. At a minimum, research should focus on the potential utilities of the PCOA examination in terms of revealing critical student knowledge gaps or correctable deficiencies.

ACKNOWLEDGMENTS

We would like to thank Jeff Cain, EdD, Adjunct Associate Professor, and Leah Simpson, MPA, Director of Assessment, of the University of Kentucky College of Pharmacy for their review and support.

REFERENCES

1. Accreditation Council for Pharmacy Education. Accreditation standards and guidelines for the professional program in pharmacy leading to the doctor of pharmacy degree. Standards 2016. Approved January 15, 2015. https://www.acpe-accredit.org/pdf/Standards2016FINAL.pdf. Accessed November 7, 2015.
2. Directors ABO. ACPE Standards 2016. AACP Interim Meeting. 2015.
3. Szilagyi JE. Curricular progress assessments: the MileMarker. Am J Pharm Educ. 2008;72(5):Article 101. doi: 10.5688/aj7205101.
4. Vyas D, Halilovic J, Kim M-K, Ravnan MC, Rogan EL, Galal SM. Use of cumulative assessments in US schools and colleges of pharmacy. Pharmacy. 2015;3:27–38. doi: 10.3390/pharmacy3020027.
5. Gortney JS, Bray BS, Salinitri FD. Implementation and use of the Pharmacy Curriculum Outcomes Assessment at US schools of pharmacy. Am J Pharm Educ. 2015;79(9):Article 137. doi: 10.5688/ajpe799137.
6. Rasch G. Probabilistic Models for Some Intelligence and Attainment Tests. Copenhagen: Danish Institute for Educational Research; 1960 (expanded edition, with foreword and afterword by BD Wright. Chicago, IL: The University of Chicago Press; 1980).
7. Catizone CA. PCOA registration and administration guide for schools and colleges of pharmacy. National Association of Boards of Pharmacy. 2015.
8. Scott DM, Bennett LL, Ferrill MJ, Brown DL. Pharmacy Curriculum Outcomes Assessment for individual student assessment and curricular evaluation. Am J Pharm Educ. 2010;74(10):Article 183. doi: 10.5688/aj7410183.
9. Waskiewicz RA. Pharmacy students’ test-taking motivation-effort on a low-stakes standardized test. Am J Pharm Educ. 2011;75(3):Article 41. doi: 10.5688/ajpe75341.
10. Thelk AD, Sundre DL, Horst J, Finney SJ. Motivation matters: using the student opinion scale to make valid inferences about student performance. J Gen Educ. 2009;58(3):129–151.
11. Coyle EA, Hatifield CL, Jenkins TL. Using the PCOA as a high stakes exam in assessing pre-APPE readiness. Poster presented at: 116th Annual Meeting of the American Association of Colleges of Pharmacy; 2015; National Harbor, MD.
12. Naughton CA, Friesner DL. Correlation of P3 PCOA scores with future NAPLEX scores. Curr Pharm Teach Learn. 2014;6(6):877–883.
13. Hutchinson DJ, Souza JM, Lenhard LM. Pharmacy curriculum outcomes assessment (PCOA) as a predictor of performance on NAPLEX. Poster presented at: 116th Annual Meeting of the American Association of Colleges of Pharmacy; 2015; National Harbor, MD.
14. Romanelli F. Pharmacist licensure: time to step it up? Am J Pharm Educ. 2010;74(5):Article 91. doi: 10.5688/aj740591.
15. Hester EK, McBane SE, Bottorff MB, et al. Educational outcomes necessary to enter pharmacy residency training. Pharmacotherapy. 2014;34(4):e22–e25. doi: 10.1002/phar.1411.
16. Johnson TJ. Pharmacist work force in 2020: implications of requiring residency training for practice. Am J Health Syst Pharm. 2008;65(2):166–170. doi: 10.2146/ajhp070231.
17. McElhaney A, Weber RJ. Role of pharmacy residency training in career planning: a student’s perspective. Hosp Pharm. 2014;49(11):1074–1080. doi: 10.1310/hjp4911-1074.
18. American Association of Colleges of Pharmacy. Assessment special interest group response to proposed PCOA requirement. 2015. http://www.aacp.org/governance/SIGS/assessment/Assessment%20Docs/Assessment%20SIG%20Response%20to%20Proposed%20PCOA%20Requirement-2.pdf. Accessed November 6, 2015.
19. Berger M. How the PCOA benefits pharmacy students. Pharmacy Times. November 5, 2015.
20. Poirier TI, Devraj R. Time for consensus on a new approach for assessments. Am J Pharm Educ. 2015;79(1):Article 2. doi: 10.5688/ajpe79102.
21. Prober CG, Kolars JC, First LR, Melnick DE. A plea to reassess the role of United States Medical Licensing Examination Step 1 scores in residency selection. Acad Med. 2015;91(1):12–15. doi: 10.1097/ACM.0000000000000855.
22. Dougherty PJ, Walter N, Schilling P, Najibe S, Herkowitz H. Do scores of the USMLE Step 1 and OITE correlate with the ABOS Part I certifying examination? A multicenter study. Clin Orthop Relat Res. 2010;468(10):2797–2802. doi: 10.1007/s11999-010-1327-3.
23. Swanson DB, Sawhill A, Holtzman KZ, et al. Relationship between performance on part I of the American Board of Orthopaedic Surgery Certifying Examination and scores on USMLE steps 1 and 2. Acad Med. 2009;84(10 Suppl):S21–S24. doi: 10.1097/ACM.0b013e3181b37fd2.
24. Picarsic J, Raval JS, Macpherson T. United States Medical Licensing Examination Step 1 two-digit score: a correlation with the American Board of Pathology first-time test taker pass/fail rate at the University of Pittsburgh Medical Center. Arch Pathol Lab Med. 2011;135(10):1349–1352. doi: 10.5858/arpa.2010-0240-EP.
25. Black KP, Abzug JM, Chinchilli VM. Orthopaedic in-training examination scores: a correlation with USMLE results. J Bone Joint Surg Am. 2006;88(3):671–676. doi: 10.2106/JBJS.C.01184.
26. Thundiyil JG, Modica RF, Silvestri S, Papa L. Do United States Medical Licensing Examination (USMLE) scores predict in-training test performance for emergency medicine residents? J Emerg Med. 2010;38(1):65–69. doi: 10.1016/j.jemermed.2008.04.010.
27. Armstrong A, Alvero R, Nielsen P, et al. Do US medical licensure examination step 1 scores correlate with council on resident education in obstetrics and gynecology in-training examination scores and American Board of Obstetrics and Gynecology written examination performance? Mil Med. 2007;172(6):640–643. doi: 10.7205/milmed.172.6.640.
28. Spurlock DR Jr, Holden C, Hartranft T. Using United States Medical Licensing Examination (USMLE) examination results to predict later in-training examination performance among general surgery residents. J Surg Educ. 2010;67(6):452–456. doi: 10.1016/j.jsurg.2010.06.010.
29. Kay C, Jackson J, Frank M. The relationship between internal medicine residency graduate performance on the ABIM certifying examination, yearly in-service training examinations, and the USMLE step 1 examination. Acad Med. 2015;90(1):100–104. doi: 10.1097/ACM.0000000000000500.
