Abstract
Objective. To determine whether a correlation exists between third-year PharmD students’ perceived pharmacy knowledge and actual pharmacy knowledge as assessed by the Pharmacy Curricular Outcomes Assessment (PCOA).
Methods. In 2010 and 2011, the PCOA was administered in a low-stakes environment to third-year pharmacy students at North Dakota State University College of Pharmacy, Nursing, and Allied Sciences (COPNAS). A survey instrument was also administered on which students self-assessed their perceived competencies in each of the core areas covered by the PCOA examination.
Results. The pharmacy students rated their competencies at or slightly above the midpoint of the rating scale. Performance on the PCOA was similar to, but slightly higher than, national averages. Correlations among the 4 content areas (basic biomedical sciences, pharmaceutical sciences, social/administrative sciences, and clinical sciences) mirrored those reported nationally by the National Association of Boards of Pharmacy (NABP). Student performance on the basic biomedical sciences portion of the PCOA was significantly correlated with students’ perceived competencies in the biomedical sciences. No other correlations between actual and perceived competencies were significant.
Conclusion. A lack of correlation exists between what students perceive they know and what they actually know in the areas of pharmaceutical science; social, behavioral, and administrative science; and clinical science. Therefore, additional standardized measures are needed to assess curricular effectiveness and provide comparisons among pharmacy programs.
Keywords: Pharmacy Curricular Outcomes Assessment, competencies, self-assessment, perception
INTRODUCTION
In its Accreditation Standards and Guidelines 2007, the Accreditation Council for Pharmacy Education (ACPE) placed increased emphasis on assessing and evaluating programmatic outcomes using standardized instruments and data to allow comparisons with other programs.1 To assist programs in gathering assessment data for accreditation, the American Association of Colleges of Pharmacy (AACP) and ACPE developed Curriculum Quality Perception Surveys, which are standardized instruments designed to be administered to students, faculty members, preceptors, and alumni.2 Data from the Curriculum Quality Perception Surveys are required to assess 25 of the 30 accreditation standards, including Standard No. 3, Evaluation of Achievement of Mission and Goals, and Standard No. 15, Assessment and Evaluation of Student Learning and Curricular Effectiveness.3
Surveys that capture perceptions and opinions are commonly used in health science research.4 At the same time, research published in the health science literature provides evidence of discordance when perceptions are compared with external measures (ie, reality). For instance, an older review of 14 studies involving health professions training found that students generally were overconfident in their perception of knowledge, with only low to moderate correlations found between students’ perceived knowledge and external measurements.5 Senior-level bachelor of science in pharmacy (BS Pharm) students consistently overestimated their clinical knowledge, as evidenced by comparison with their scores on external assessments.6 Of 3 studies that looked at actual and self-perceived knowledge of diabetes among nurses, 2 found a positive correlation between perceived and actual diabetes knowledge, while the third found no correlation.7-9 Studies comparing the accuracy of physician self-assessment with external assessments also reported weak or no association.10,11 Interestingly, with regard to skill performance, a few studies in the health professions found that those who were the most confident in their skills performed the worst on external assessments.12,13 Using data from perception surveys to evaluate program quality and guide curricular development can be problematic if pharmacy students’ accuracy in self-assessing their knowledge, skills, and ability is flawed.
Examinations are commonly used to validate a student’s knowledge, skills, and ability. The North American Pharmacist Licensure Examination (NAPLEX) essentially assesses whether a student possesses the minimum knowledge necessary to practice pharmacy in the United States. The Pharmacy Curriculum Outcomes Assessment (PCOA) is a national standardized examination created by the National Association of Boards of Pharmacy (NABP) to assess the academic progress of pharmacy students and guide curriculum development.14 Content of the PCOA falls into 4 domains corresponding to the doctor of pharmacy (PharmD) core curriculum as defined by ACPE Standard 13: basic biomedical sciences; pharmaceutical sciences; social, behavioral, and administrative pharmacy; and clinical sciences. Assessment data generated from the PCOA, therefore, may provide a valid and reliable external measurement of a student’s pharmacy knowledge.
The North Dakota State University College of Pharmacy, Nursing, and Allied Sciences (COPNAS) decided to administer the PCOA to students in their third year as a low-stakes, voluntary, free, and formative assessment to determine the value of the PCOA as a measure of curricular effectiveness to meet ACPE Accreditation Standard No. 15. If students’ perceptions of their knowledge were found to be reliable indicators of actual knowledge, the researchers might conclude that the costs (eg, time and money) of administering the PCOA exceeded its benefit. The purpose of this study, therefore, was to determine whether a correlation existed between third-year PharmD students’ perceived and actual pharmacy knowledge as assessed by the PCOA.
METHODS
Institutional Review Board (IRB) approval for this research project was granted by the North Dakota State University IRB. In 2010 and again in 2011, informed consent describing the research was obtained from third-year pharmacy students, who then completed a knowledge perception survey instrument rating their levels of perceived knowledge before taking the PCOA. Each cohort was given verbal information about the PCOA, including test design (eg, number of questions, multiple-choice format), the expected time commitment, a Web site for more information, and an explanation of how the results would be used. The same information was communicated in a follow-up e-mail sent to the students 24 hours later. One week before the examination, an e-mail was sent reminding students of the date and time of the PCOA. To encourage participation and effort, participating students received a free lunch at the conclusion of the examination and individualized feedback regarding their strengths and weaknesses after the examination was scored by NABP. In addition, students scoring at or above the 85th percentile received 2 extra personal days off from their advanced pharmacy practice experiences (APPEs), those scoring in the 70th to 84th percentile received 1 extra personal day off, and students scoring in the 40th to 69th percentile received 1 extra professional day off. After the PCOA results were returned, a drawing to award a free pre-NAPLEX examination to 5 students was held among participants who had completed both the examination and the survey instrument.
In the perception survey, students were asked to rate their perceived level of knowledge for 35 curricular areas using a 5-point Likert scale (1 = low, 5 = high). The 35 curricular areas on the survey instrument directly corresponded to the subtopics tested on the PCOA and used language that mirrored the PCOA subtopic descriptions; the survey took approximately 10 minutes to complete. Both the knowledge perception survey and the PCOA were administered in the last semester of students’ classroom education, before they began their APPEs.
Given the lack of empirical evidence linking student perceptions to actual outcomes in the general health sciences literature, we operated under a null hypothesis of no relationship (correlation) between any student perception indicators and actual outcomes as measured by PCOA scores. To simplify the analysis, we used the PCOA total scaled score and the scores from each of the 4 content domains. The total PCOA score was included primarily for completeness but was expected to be too aggregated to interpret in a meaningful fashion. Instead, primary focus was placed on the major domain scores, as these can be succinctly interpreted and related back to courses or sets of courses in a pharmacy curriculum. Additionally, because the data consisted of a 2-year cohort, we focused primarily on the PCOA scaled scores rather than the corresponding percentile rankings, which are computed relative to students taking the examination in a given year; the scaled scores are more easily interpreted in a multiyear context. (Analyses using the percentile rankings yielded results qualitatively similar to the PCOA score analyses and are not reported; data not presented in the paper are available from the author upon request.) Because the student perception survey instrument closely mimicked the design and language of the PCOA, we aggregated the survey items into 4 content scales by taking a simple average of the subtopic ratings within each content area. For example, we created a biomedical science perception scale for each student by averaging the ratings for the 7 subtopics in that content area (microbiology, anatomy and physiology, pathophysiology, immunology, biochemistry/biotechnology, molecular biology/genetics, and biostatistics). We did not create a combined index of student perception because, like the total PCOA score, it would have been too aggregated to interpret in a meaningful fashion.
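To make this aggregation step concrete, the following is a minimal sketch in Python with pandas (the study’s analyses were actually run in SAS; all column names and ratings here are illustrative placeholders, not the study data) showing how a biomedical science perception scale can be formed as a simple average of its 7 subtopic ratings.

```python
import pandas as pd

# Each row is one student; each column is a 1-5 Likert rating for one of the
# 7 biomedical science subtopics. Names and values are hypothetical.
survey = pd.DataFrame({
    "microbiology":           [3, 4, 2],
    "anatomy_physiology":     [4, 3, 3],
    "pathophysiology":        [3, 3, 4],
    "immunology":             [2, 4, 3],
    "biochem_biotech":        [3, 2, 3],
    "molecular_bio_genetics": [2, 3, 2],
    "biostatistics":          [4, 3, 3],
})

# Biomedical science perception scale: simple per-student average of the
# 7 subtopic ratings, as described in the text.
survey["biomed_perception"] = survey.mean(axis=1)
print(survey["biomed_perception"])
```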
Correlations between PCOA scores and perceived knowledge were computed using both parametric (Pearson) and nonparametric (Spearman) methods. Primary emphasis was given to the Pearson correlations; Spearman correlations are reported only where they differed fundamentally. Student responses for the 2010 and 2011 cohorts were analyzed cumulatively and for each class by year. For simplicity, we focused primarily on the cumulative analysis.
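As an illustration of this step, the sketch below computes Pearson and Spearman correlations between one perception scale and its matching PCOA domain score using scipy; the variable names and randomly generated values are placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Illustrative data only: 132 students, as in the analyzed sample.
rng = np.random.default_rng(0)
biomed_perception = rng.uniform(1, 5, size=132)    # 1-5 perception scale averages
biomed_pcoa_score = rng.normal(365, 50, size=132)  # scaled PCOA domain scores

# Parametric (Pearson) and nonparametric (Spearman) correlations.
r, p_pearson = stats.pearsonr(biomed_perception, biomed_pcoa_score)
rho, p_spearman = stats.spearmanr(biomed_perception, biomed_pcoa_score)
print(f"Pearson r = {r:.3f} (p = {p_pearson:.3f}); "
      f"Spearman rho = {rho:.3f} (p = {p_spearman:.3f})")
```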
Our primary focus was on evaluating the correlation between the pharmacy students’ perceived and actual knowledge. Nonetheless, we conducted a simple empirical analysis to determine whether the performance of COPNAS pharmacy students on the PCOA could be generalized to the population of all pharmacy students who completed the PCOA. First, to determine whether the 2010 and 2011 cohorts were comparable and representative of the pharmacy students progressing through the COPNAS program, we checked for significant differences in PCOA scores between the cohorts using parametric (F test) and nonparametric (Kruskal-Wallis) analysis of variance under the null hypothesis of no mean differences by year. That is, we operated under the null that such biases did not exist and that the data could be pooled and analyzed cumulatively. Second, because we did not have access to PCOA scores for the entire population (nationally), we compared the correlation coefficients among COPNAS students’ actual PCOA scores (across the 4 content domains) with the corresponding national correlations. We also conducted simple z tests under the null hypothesis that the sample mean for a given COPNAS PCOA score equaled the “population” mean, as characterized by the national PCOA average for the corresponding domain score or total score. For percentiles, we conducted z tests under the assumption that the “population” average (mean or median) was the 50th percentile. All data analyses were conducted using the SAS Statistical Software Package, Version 9.2 (SAS Institute Inc, Cary, North Carolina) and a 5% significance level.
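The sketch below illustrates these two generalizability checks under stated assumptions: a one-way ANOVA F test and a Kruskal-Wallis test comparing scores by cohort year, and a two-sided one-sample z test of the pooled sample mean against a national (“population”) mean. The national mean and standard deviation shown are placeholders, not NABP figures.

```python
import numpy as np
from scipy import stats

# Illustrative cohorts (66 students each); values are simulated, not study data.
scores_2010 = np.random.default_rng(1).normal(370, 50, size=66)
scores_2011 = np.random.default_rng(2).normal(360, 50, size=66)

# Parametric and nonparametric tests of the null of no mean difference by year.
f_stat, p_anova = stats.f_oneway(scores_2010, scores_2011)
h_stat, p_kw = stats.kruskal(scores_2010, scores_2011)

# One-sample z test: H0 says the pooled cohort mean equals the national mean.
national_mean, national_sd = 350.0, 55.0   # placeholder "population" values
pooled = np.concatenate([scores_2010, scores_2011])
z = (pooled.mean() - national_mean) / (national_sd / np.sqrt(len(pooled)))
p_z = 2 * stats.norm.sf(abs(z))            # two-sided p value
print(f"ANOVA p = {p_anova:.3f}; Kruskal-Wallis p = {p_kw:.3f}; z-test p = {p_z:.3f}")
```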
RESULTS
One hundred seventy-two students from the 2010 and 2011 cohorts were included in the study. Of those students, 157 (91.3%) completed the knowledge perception survey instrument in its entirety, 138 (80.2%) sat for the PCOA, and 132 (76.7%) completed both the knowledge perception survey instrument and the PCOA. In survey research, a response rate of 50% to 60% or greater is considered sufficient to minimize nonresponse bias.15
Table 1 contains the variable names, definitions, and descriptive statistics for each of the variables included in the analysis. The mean scores and percentile rankings for the 2-year cohort of students were relatively similar across the 4 domains. The lowest mean score was in pharmaceutical sciences (346.3 ± 49.5), while the highest was in clinical sciences (373.2 ± 60.5). Simple z tests failed to reject the null hypothesis of no difference between NDSU and national PCOA scores for each of the corresponding domain averages (p > 0.05 for each). Percentile rankings for scores ranged between 55.1% (pharmaceutical sciences) and 65.7% (biomedical sciences), indicating that the COPNAS cohort scored consistently near the 60th percentile nationally (overall percentile = 60.9%). Z tests applied to these percentile rankings suggested that COPNAS students did not score significantly higher than the national average in the pharmaceutical sciences (p = 0.25), social/administrative sciences (p = 0.05), and clinical sciences (p = 0.06) domains, but did score significantly higher in the biomedical sciences (p < 0.001) and above the 50th percentile overall (p = 0.012).
Table 1.
Variable Names, Definitions, and Descriptive Statistics Used in a Comparison of Pharmacy Students’ Perceived and Actual Knowledge Using the Pharmacy Curricular Outcomes Assessment (n = 132)
The last portion of Table 1 provides descriptive statistics for each of the 4 perception scales. The mean scores for the biomedical sciences (3.1, p = 0.093) and pharmaceutical sciences (3.0, p = 0.933) were not significantly different from the midpoint of the perception scale (3). In contrast, the mean values for the social/administrative sciences (3.2, p < 0.001) and clinical sciences (3.4, p < 0.001) scales were significantly higher than the midpoint.
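The paper does not name the test used to compare the perception means with the scale midpoint; a one-sample t test is one conventional choice and is sketched below under that assumption, with illustrative data.

```python
import numpy as np
from scipy import stats

# Simulated clinical-science perception ratings for 132 students (illustrative).
clinical_perception = np.random.default_rng(3).normal(3.4, 0.6, size=132)

# One-sample t test against the 5-point scale midpoint of 3 (an assumed test).
t_stat, p_value = stats.ttest_1samp(clinical_perception, popmean=3.0)
print(f"mean = {clinical_perception.mean():.2f}, p = {p_value:.4f}")
```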
Table 2 deconstructs our primary variables of interest by year. For each of the perception variables and PCOA scores (and percentiles), there was a high degree of consistency across the 2 time periods. At the 5% level, there were no significant differences by year for any variable except one: students in the 2010 class scored significantly higher than their 2011 counterparts in the biomedical sciences area of the examination (372.1 in 2010 vs 356.4 in 2011, p < 0.001). Given the generally high degree of consistency across years, we chose to focus the remainder of our analysis on the combined 2010-2011 data. However, because of the significant difference in biomedical science scores, we also analyzed PCOA perceptions and scores by year; this analysis yielded results extremely similar to those using the complete panel. In addition, an analysis comparing students’ perceptions with PCOA percentile rankings by year was also consistent with the aggregated data and with the year-by-year analyses of overall PCOA scores.
Table 2.
Differences by Year and Content Area of Pharmacy Students’ Scores on the Pharmacy Curricular Outcomes Assessment (N = 132)
We computed parametric (Pearson) and nonparametric (Spearman) correlations both among content areas (interscale correlations) and between student perceptions and actual competencies as measured by PCOA scores. Because the parametric and nonparametric correlations were extremely similar in sign, magnitude, and significance, we focused primarily on the Pearson correlations. The total PCOA scaled score was significantly correlated with each of the 4 domain scores (p < 0.001). Moreover, the signs and magnitudes of these correlations were all roughly similar to national figures reported by the NABP. For example, the Pearson correlation between the biomedical sciences domain score and the pharmaceutical sciences domain score was 0.623 in the COPNAS data compared with 0.636 in the 2011 national data.16 There were significant correlations between students’ perceived competencies in the biomedical sciences and 3 actual scores (correlations in parentheses): total PCOA score (0.196), biomedical sciences (0.208), and pharmaceutical sciences (0.264). No other correlations between perceived and actual competencies were significant. Moderate but significant correlations existed among students’ perceived competencies in the 4 domain areas; these correlations were all positive and ranged from 0.371 (perceived social/administrative and perceived biomedical competencies) to 0.534 (perceived clinical and perceived pharmaceutical science competencies).
DISCUSSION
Data from the Curriculum Quality Perception Surveys are required to assess 25 of the 30 accreditation standards, including Standard No. 15, Assessment and Evaluation of Student Learning and Curricular Effectiveness.3 Using data from perception survey instruments to evaluate program quality can be problematic if student perceptions are inaccurate. The purpose of this study was to determine whether a correlation exists between COPNAS third-year PharmD students’ perceived and actual pharmacy knowledge as assessed by the PCOA. We chose the PCOA as an external measure of pharmacy knowledge for several reasons. The PCOA is a national standardized progress examination that has been administered to over 9,000 pharmacy students since 2007 (G. Johannes, PCOA Manager, NABP, e-mail, May 6, 2011). Also, national results from the 2007 to 2011 administrations showed that the internal reliability coefficient, reported as a Cronbach alpha for raw scores from the entire assessment, remained high (0.90 to 0.93).16 Finally, content on the examination corresponds to the PharmD core curriculum as identified by ACPE and, theoretically, should match the curriculum of other accredited pharmacy programs.
Because we administered the PCOA as a low-stakes examination, as is generally the case with progress examinations in other pharmacy programs, we were concerned about students’ motivation to perform to the best of their ability.17 Therefore, in addition to the individualized formative feedback on strengths and weaknesses, students were offered small incentives in the form of a free lunch, extra days off from APPEs, and the chance to receive a free pre-NAPLEX practice examination. While these incentives did not guarantee that all students gave maximum effort, they reduced the likelihood that students failed to take the examination seriously. The fact that our scores were near national averages (Table 1) also suggests that COPNAS students took the examination as seriously and made as great an effort as students at other US pharmacy colleges and schools.
Students rated their knowledge near or slightly above the middle of the perception range in each of the 4 domains (Table 1), with the clinical and social science perceptions significantly (p < 0.001) higher than the midpoint. Thus, students appear to be more confident in their knowledge of the clinical and social sciences than of the biomedical and pharmaceutical sciences. Biomedical and pharmaceutical sciences are taught earlier in the COPNAS PharmD program (ie, the first and second years), whereas social and clinical sciences content is presented primarily in the third year, which may explain students’ higher confidence regarding material in those domains. As a group, students’ ratings of their level of knowledge appeared to be consistent, as evidenced by the moderate but significant interscale correlations among student perceptions across the 4 content domains. This consistency in perceptions may reflect the fact that all of the students were educated within the same PharmD program and curriculum.
Comparing students’ perceptions and actual PCOA scores for each of the 4 domains revealed a weak but significant correlation only in the biomedical sciences. In all other cases, there were no significant associations between what students thought they knew about the science of pharmacy and actual knowledge as measured by the PCOA. Although students rated their knowledge of the pharmaceutical sciences near the midpoint of the perception range and their actual mean scaled score in the pharmaceutical science domain was no different from the national mean (Table 1), the correlation failed to reach significance. Likewise, even though students perceived themselves as more knowledgeable in the clinical and social sciences, there was no correlation between what students thought they knew and actual PCOA scores. This implies (but does not prove) that students overestimated their abilities in the clinical and social sciences domains. The lack of correlation, or presence of only a weak correlation, between perceived and externally measured knowledge seen in our study is similar to that reported in studies of students in other health science training programs.5-11 The tendency of pharmacy students to overestimate their own abilities was reported in a study of 117 senior-level BS Pharm students.6 Interestingly, in that study, students with the lowest measured performance consistently overestimated their clinical knowledge by the largest margin, while those scoring the highest actually underestimated their performance.6 These results suggest that more emphasis should be placed on external measurement than on students’ perceptions when evaluating the curricular effectiveness of a PharmD program. The data in Table 1 and Table 2 also suggest, but do not prove, that COPNAS is representative of other ACPE-accredited programs. Assuming this is true, those programs, as well as ACPE, may also find that external measurements such as the PCOA are more informative than survey data.
Evaluating competency by examination is a recognized educational method that can be configured to assess knowledge (eg, paper/pencil examination) or ability and skill performance (eg, simulations). An increasing number of pharmacy programs are developing their own progress examinations to be administered at one or more points in the curriculum to assess knowledge acquisition and retention.18 Potential disadvantages associated with this practice include faculty time commitment, difficulties in validating results, and inability to compare results with other programs. Limitations to using the PCOA as an external measure have been reported and include cost, lack of participation among schools, variability in student motivation, and differences in curriculum structure.19 However, compared with program-specific progress examinations, the PCOA has the potential to provide benchmarking comparisons between programs and may be more cost effective in the long run if more schools participate.
CONCLUSION
The purpose of this study was to determine whether a correlation exists between third-year NDSU PharmD students’ perceived and actual pharmacy knowledge as assessed by the PCOA. The study showed a lack of correlation between what students perceive they know and what they actually know in the areas of pharmaceutical science; social, behavioral, and administrative science; and clinical science. Therefore, student perceptions of curricular quality may be inaccurate, and additional standardized measures are needed to assess curricular effectiveness and allow for comparisons among pharmacy programs. Despite its limitations, the PCOA may provide a better measure of curricular effectiveness and thereby allow for more objective decision making regarding curriculum development and quality improvement.
REFERENCES
1. Accreditation Council for Pharmacy Education. Accreditation standards and guidelines for the professional program in pharmacy leading to the doctor of pharmacy degree. http://www.acpe-accredit.org/pdf/FinalS2007Guidelines2.0.pdf. Accessed March 24, 2012.
2. American Association of Colleges of Pharmacy. Curriculum Quality Perception Surveys. http://www.aacp.org/resources/research/institutionalresearch/Pages/CurriculumQualitySurveysInformation.aspx. Accessed March 24, 2012.
3. Accreditation Council for Pharmacy Education. Data views and standardized tables required for “Rubric” Version 4.0 (Standards 2007, Guidelines 2.0), April 2011. http://www.acpe-accredit.org/deans/resources.asp. Accessed March 24, 2012.
4. Neutens JJ, Rubison L. Research Techniques for the Health Sciences. 3rd ed. San Francisco: Benjamin Cummings; 2002.
5. Gordon MJ. A review of the validity and accuracy of self-assessments in health professions training. Acad Med. 1991;66(12):762-769. doi:10.1097/00001888-199112000-00012.
6. Austin Z, Gregory PA. Evaluating the accuracy of pharmacy students’ self-assessment skills. Am J Pharm Educ. 2007;71(5):Article 89. doi:10.5688/aj710589.
7. Gossain VV, Bowman KA, Rovner DR. The actual and self-perceived knowledge of diabetes among staff nurses. Diabetes Educ. 1993;19(3):215-221.
8. Baxley SG, Brown ST, Pokorny ME. Perceived competence and actual level of knowledge of diabetes mellitus among nurses. J Nurs Staff Dev. 1997;13(2):93-98.
9. Chan MF, Zang Y. Nurses’ perceived and actual level of diabetes mellitus knowledge: results of a cluster analysis. J Clin Nurs. 2007;16(7b):234-242. doi:10.1111/j.1365-2702.2006.01761.x.
10. Davis DA, Mazmanian PE, Fordis M, Harrison RV, Thorpe KE. Accuracy of physician self-assessment compared with observed measures of competence. JAMA. 2006;296(9):1094-1102. doi:10.1001/jama.296.9.1094.
11. Caspi O, McKnight P, Kruse L, Cunningham V, Figueredo AJ, Sechrest L. Evidence-based medicine: discrepancy between perceived competence and actual performance among graduating medical students. Med Teach. 2006;28(4):318-325. doi:10.1080/01421590600624422.
12. Hodges B, Regehr G, Martin D. Difficulties in recognizing one’s own incompetence: novice physicians who are unskilled and unaware of it. Acad Med. 2001;76(10 suppl):S87-S89. doi:10.1097/00001888-200110001-00029.
13. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol. 1999;77(6):1121-1134. doi:10.1037//0022-3514.77.6.1121.
14. National Association of Boards of Pharmacy. PCOA frequently asked questions. http://www.nabp.net/programs/assessment/pcoa/faqs/. Accessed March 24, 2012.
15. Fincham JE. Response rates and responsiveness for surveys, standards, and the journal. Am J Pharm Educ. 2008;72(2):Article 43. doi:10.5688/aj720243.
16. National Association of Boards of Pharmacy. 2009-2011 PCOA administration highlights. http://www.nabp.net/programs/assessment/pcoa/pcoa-highlights/. Accessed March 24, 2012.
17. Kirschenbaum HL, Brown ME, Kalis MM. Programmatic curricular outcomes assessment at colleges and schools of pharmacy in the United States and Puerto Rico. Am J Pharm Educ. 2006;70(1):Article 8. doi:10.5688/aj700108.
18. Plaza CM. Progress examinations in pharmacy education. Am J Pharm Educ. 2007;71(4):Article 66. doi:10.5688/aj710466.
19. Scott DM, Bennett LL, Ferrill MJ, Brown DL. Pharmacy curriculum outcomes assessment for individual student assessment and curricular evaluation. Am J Pharm Educ. 2010;74(10):Article 183. doi:10.5688/aj7410183.


