American Journal of Pharmaceutical Education. 2008 Aug 15;72(4):88. doi: 10.5688/aj720488

A Progress Assessment to Evaluate Pharmacy Students' Knowledge Prior to Beginning Advanced Pharmacy Practice Experiences

Katherine A Kelley, Stuart J Beatty, Julie E Legg, James W McAuley
PMCID: PMC2576427  PMID: 19002286

Abstract

Objective

To develop an assessment that would (1) help doctor of pharmacy (PharmD) students review therapeutic decision making and build confidence in their skills, (2) provide pharmacy practice residents with the opportunity to lead small group discussions, and (3) provide the assessment committee with program-level assessment data.

Design

A case-based interactive assessment was developed and delivered to PharmD students immediately prior to advanced pharmacy practice experiences (APPEs). The assessment used an audience response system to allow immediate feedback, followed by small group discussions led by pharmacy practice residents. Students self-assessed their knowledge and confidence levels and developed personalized learning objectives for APPEs.

Assessment

Eighty-nine percent of students found the assessment useful, and pharmacy practice residents reported that it was helpful in developing precepting skills. The College assessment committee was able to use the data to supplement the College's ongoing curricular mapping process.

Conclusions

An interactive assessment process can help students build confidence for experiential training, provide a learning opportunity for pharmacy residents, and produce program-level data for college assessment purposes. Planned modifications of the assessment include expanding the content areas covered and adding ability-based assessments such as communication skills.

Keywords: audience response system, assessment, ability-based outcomes, confidence, advanced pharmacy practice experience

INTRODUCTION

Progress examinations have been defined as an assessment methodology used to measure the acquisition and retention of knowledge based on a defined set of outcomes or objectives and administered successively across the curriculum.1 In pharmacy education, this defined set of outcomes should be based on program-level outcomes that represent the locally defined, generalist, entry-level practitioner. Assessment measures can be used to fulfill both formative (improvement) and summative (accountability) agendas. These formative and summative measures should be used to improve student learning,2 thus completing the assessment loop.3 In fact, the Accreditation Council for Pharmacy Education (ACPE), through Standards 2007, requires that all programs assess program outcomes, including student learning outcomes.4 Specifically, Guideline 15.1 closely resembles Portanova's definition of progress examinations by suggesting that assessment measures “incorporate periodic, psychometrically sound, comprehensive, knowledge-based, and performance-based formative and summative assessments, including nationally standardized assessments (in addition to graduates' performance on licensure examinations) that allow comparisons and benchmarks with all accredited and peer institutions.” Guideline 15.1 also suggests that assessment plans include student self-assessments. The use of progress examinations in pharmacy education has recently been reviewed extensively.5

Several factors contributed to the development and delivery of the assessment methodology presented here. As noted above, ACPE Standards 2007 requires that all programs assess program outcomes, including student learning outcomes.4 Members of the Ohio State University College of Pharmacy's assessment committee have been working on ways to incorporate assessment methods involving direct measures of student learning. The committee had recently completed a revision of the PharmD program's ability-based outcomes6 and mapped those outcomes to each required course throughout the curriculum. The next step in the process was to begin designing measurement tools to answer the question, “How do we know that our students are achieving these stated outcomes?” In addition to the need for measures of student learning, College faculty members had a general perception that students entering APPEs were very nervous and could benefit from a boost to their confidence prior to beginning their APPEs.

This paper describes the development and implementation of an assessment method that provides a collaborative and supportive learning environment in which students are encouraged to self-reflect and self-assess, while program-level student outcome data are gathered for the purposes of assessment and continuous improvement. The assessment tool described here falls short of the true definition of a progress examination because it is based on a single administration of the assessment; therefore, the term progress assessment will be used instead.

The purpose of this exercise was to help students review basic concepts of key disease states and assist them in self-assessing areas for improvement during APPEs. Secondary objectives included providing hands-on teaching experience for pharmacy practice residents to supplement their teaching workshop, and collecting the assessment data needed to track curricular changes and identify areas for curricular improvement for ACPE accreditation.

The Ohio State University College of Pharmacy offers a 4-year curriculum leading to the PharmD degree. A prior baccalaureate degree is required for entry to the program. The PharmD curriculum consists of 3 years of didactic course work and introductory pharmacy practice experiences (IPPEs), followed by a final year of full-time APPEs. The class size is approximately 120 students per year, of which roughly 80% are in-state residents.7 Ohio State is a public, research-intensive university (Carnegie Classification: very high research activity).8

DESIGN

For their project at the American Association of Colleges of Pharmacy (AACP) Institute in May 2007, a team of Ohio State faculty members (the steering team) chose to assess students' therapeutic decision making via case-based assessment questions. At this annual theme-focused conference, designed to promote “continuous improvement of curricular and pedagogical activities,” college-based teams are encouraged to choose a project of local interest to work on during the 4-day conference. This project was chosen to fill a need for tools that directly measure student abilities. Initial planning was already underway, but the focused nature of the Institute provided the forum to move the project toward implementation. Team members provided critical expertise and perspectives to ensure an optimal design was achieved.

In order to ensure pertinent therapeutic topics were covered, the top 200 prescribed medications of 2006 were used to guide case-based questions on the most common drugs and disease states.9 The following disease states or drug classes were selected for incorporation into patient cases: atrial fibrillation, chronic heart failure, depression, diabetes, gastrointestinal disorders, hypertension, lipids, migraine, and pain management. Medication safety, nonprescription medicines, jurisprudence, and medication use issues were also incorporated into the cases. Instructions were developed by the steering team and distributed to the pharmacy practice faculty colleagues who had been asked to write patient cases. Faculty members were assigned to write a simple or complex case and given examples of each (Table 1). Approximately 17 pharmacy practice faculty members voluntarily participated in the case writing and review process, with each case requiring about 2 hours to complete.

Table 1.

Group Level Performance

[Table 1 appears as an image in the original publication and is not reproduced here.]

aPercent of students selecting the correct answer

bAverage percent of correct answers per case

Each case was written by a faculty “content-expert” and subsequently reviewed by an independent faculty reviewer. The case writers were asked to write multiple-choice questions to accompany their case and provide feedback on why each answer choice was correct or incorrect. Additionally, they were asked to connect each question back to the OSU College of Pharmacy's 100 ability-based outcomes (ABOs).6 Independent faculty reviewers were selected to review the cases for content validity and independently link each case to the ABOs. This exposure to the ABOs was considered an important step to allow faculty members to see the link between this assessment and PharmD program-level outcomes. These linkages were also useful for the assessment committee during the interpretation of the results. The steering team reviewed the cases, comments, and linkages to the ABOs from both the case writers and the independent reviewers. This allowed for a triangulated approach to the development of each case.

Once the cases were written and reviewed, they were loaded into TurningPoint software (Turning Technologies, Youngstown, Ohio). This audience response system enabled the researchers to provide an interactive assessment process that allowed students to receive immediate feedback while at the same time capturing the students' responses for later analysis.10 Prior to the assessment activity, all cases were pilot tested with pharmacy practice residents. The residents provided valuable feedback on content, item difficulty, and the process of using “clickers” to conduct this assessment. Following the pilot test by the residents, minor revisions were made to cases/questions.

A survey instrument was developed to measure student confidence in their skills and knowledge. Students rated their confidence in their ability to perform in 4 skill areas prior to the assessment. They were also asked to rate their confidence in their knowledge about each of the assessed therapeutic areas before and after the assessment. Students self-assessed their confidence using a 7-point semantic differential scale where 1 = not at all confident and 7 = very confident.
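As a rough illustration of how such ratings can be summarized, the sketch below tallies the percentage of students who rated themselves on the confident end of the 7-point scale (5, 6, or 7), the same cut point used in Table 3. The ratings and names in the example are invented for demonstration and are not from the article.

```python
# Illustrative sketch only: summarizing self-reported confidence ratings on the
# 7-point semantic differential scale (1 = not at all confident, 7 = very confident).
# The ratings below are invented sample data, not the study's data.

def percent_confident(ratings, threshold=5):
    """Percent of respondents rating their confidence at `threshold` or higher."""
    return 100 * sum(1 for r in ratings if r >= threshold) / len(ratings)

pre_ratings = [3, 5, 6, 4, 7, 5, 2, 6]    # hypothetical pre-assessment ratings
post_ratings = [4, 6, 6, 5, 7, 5, 4, 6]   # hypothetical post-assessment ratings

print(f"Confident before the assessment: {percent_confident(pre_ratings):.0f}%")
print(f"Confident after the assessment:  {percent_confident(post_ratings):.0f}%")
```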

Audience response devices (clickers) were preassigned to individual students to allow the steering team to track outcomes to specific individuals. The devices were used during a 120-minute APPE orientation held at the end of July, immediately prior to the students beginning their first APPE. Students were first asked to respond to general demographic, self-assessment, and confidence questions; patient cases were then presented, and students were asked to respond to the accompanying questions. The students moved through the assessment activity as a group and were given a time limit for each question, ranging from 30 to 180 seconds depending on the difficulty of the question and the anticipated time students would need to research answers using their PDAs. Some questions had more than one right answer to illustrate the complexities of clinical practice. A moderator (a member of the steering team) led a discussion of the answer choices after each question and explained the reasoning behind the best response.
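To make the data flow concrete, the following hypothetical sketch models the kind of per-question records an audience response system captures when clickers are preassigned, and shows how percent correct per question (as reported in Table 1) can be tallied. The record structure and sample responses are illustrative assumptions, not TurningPoint's actual data format.

```python
# Hypothetical illustration of clicker response data and per-question scoring.
# This is not TurningPoint's real export format; field names are assumptions.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Response:
    device_id: str     # preassigned clicker, traceable to one student
    question_id: str
    answer: str
    correct: bool

responses = [  # invented sample data
    Response("clicker01", "case1_q1", "B", True),
    Response("clicker02", "case1_q1", "C", False),
    Response("clicker01", "case1_q2", "A", True),
    Response("clicker02", "case1_q2", "A", True),
]

results_by_question = defaultdict(list)
for r in responses:
    results_by_question[r.question_id].append(r.correct)

for question_id, results in results_by_question.items():
    pct_correct = 100 * sum(results) / len(results)
    print(f"{question_id}: {pct_correct:.0f}% correct ({len(results)} responses)")
```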

At the end of the orientation day, all students participated in small group discussions with pharmacy practice residents, who assisted them in writing goals and objectives for learning during their APPEs based on how they performed during the assessment. The students were then instructed to make the objectives part of their experiential portfolios, which were to be shared and discussed with each of their APPE preceptors. In addition, the residents served as peer mentors during this activity, helping to answer questions about experiential expectations and sharing tips and pointers for success. The residents who participated in these small group sessions had attended a 1-hour training session conducted by 2 members of the steering team during the annual teaching workshop earlier the same month. Residents were asked to complete a post-assessment evaluation form to help the steering team make improvements to future assessment sessions.

Data collected during the assessment session via the TurningPoint software were transferred to SPSS 14.0 (SPSS Inc, Chicago, Ill). Descriptive analyses were conducted, and an analysis of variance (ANOVA) was used to compare mean assessment scores by pharmacy school grade point average (GPA).
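For readers without SPSS, the sketch below reproduces the same style of analysis in Python: descriptive statistics on assessment scores within GPA groups followed by a one-way ANOVA across groups. The scores and group boundaries are invented for illustration and do not reflect the study's data.

```python
# Minimal sketch (hypothetical data) of the analysis described above:
# descriptive statistics by GPA group and a one-way ANOVA across groups.
import statistics
from scipy import stats

scores_by_gpa = {                    # invented percent-correct scores per GPA group
    "2.00-2.99": [48, 52, 55, 50, 61],
    "3.00-3.49": [56, 60, 58, 62, 57],
    "3.50-4.00": [63, 65, 61, 68, 59],
}

for group, scores in scores_by_gpa.items():
    print(f"{group}: mean = {statistics.mean(scores):.1f}, "
          f"sd = {statistics.stdev(scores):.1f}, n = {len(scores)}")

f_statistic, p_value = stats.f_oneway(*scores_by_gpa.values())
print(f"One-way ANOVA: F = {f_statistic:.2f}, p = {p_value:.3f}")
```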

ASSESSMENT

Table 2 presents the demographic characteristics of the group of students who participated in the assessment. One hundred nine of the 111 enrolled fourth-year PharmD students participated in the assessment. Students rated their confidence in their ability to perform in 4 skill areas prior to the assessment (Table 3). They were also asked to rate their confidence in their knowledge about each of the assessed therapeutic areas before and after the assessment (Table 3). In 6 of the 10 areas assessed, a higher percentage of students reported confidence in that content area after the assessment compared with their scores before the assessment. The total number (percentage) of students responding correctly to each of the 38 analyzable case questions is reported in Table 1 along with the average percentage of correct responses for the entire case. The assessment started with 43 questions. Five questions were omitted from the analysis; the last case (4 questions) was eliminated due to time constraints, and 1 question was eliminated due to technology failure during the assessment.

Table 2.

Student Demographics (N=109)

[Table 2 appears as an image in the original publication and is not reproduced here.]

aValues may not sum to 100% due to rounding

bPercentages based on the number of students responding to the items

Table 3.

Preassessment and Postassessment Scores of Self-Reported Confidence Levels Among Fourth-Year Pharmacy Students Prior to Beginning Advanced Pharmacy Practice Experiences

[Table 3 appears as an image in the original publication and is not reproduced here.]

aPercent of students selecting 5, 6, or 7 on a 7-point semantic differential scale where 1 = not at all confident and 7 = very confident

bPre- and post-assessment scores were rank-ordered from highest to lowest confidence

cCase created but not delivered due to time constraints

Table 1 also presents the time interval between when the material was covered in class and the assessment, which varied from 1 to 7 quarters. A relationship was observed between time interval and average score, with longer intervals associated with lower scores. Although this temporal relationship is intuitive, the interval was clearly not the only factor influencing the scores. Student motivation and preparation may also be related to performance. The assessment was low stakes, and the students were neither advised to study or prepare in advance nor informed about the content of the cases.
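One simple way to examine this kind of interval-versus-score relationship is a rank correlation across cases, as in the sketch below. The interval and score values are invented placeholders, not the values from Table 1.

```python
# Illustrative only: rank correlation between quarters elapsed since a topic was
# taught and the average case score. The paired values below are hypothetical.
from scipy import stats

quarters_since_taught = [1, 2, 3, 4, 5, 6, 7]      # time interval per case
average_case_score = [72, 68, 65, 60, 58, 55, 50]  # hypothetical percent correct

rho, p_value = stats.spearmanr(quarters_since_taught, average_case_score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```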

Following the small group meetings with residents, students were asked to rate the usefulness of the assessment procedure on a 10-point semantic differential scale, on which 0 = not at all useful and 9 = very useful. Seventy-nine students responded, and 89% of them rated the assessment from 5 to 9, a range the steering team deemed “useful” (Figure 1). Residents were also asked to provide feedback about the assessment. Residents reported that their involvement in the assessment was a positive training tool for future practice. In fact, many residents expressed that they wished their school had provided a similar activity prior to their APPEs.

Figure 1.

Figure 1

Percent of students responding to each level of overall satisfaction. Students were asked to rate on a 10-point semantic differential scale the usefulness of the assessment procedure where 0 = not at all useful and 9 = very useful.
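The distribution plotted in Figure 1 can be tallied in a few lines, as in the hypothetical sketch below; the ratings used are invented, and only the 0-9 scale and the 5-9 “useful” cutoff come from the article.

```python
# Illustrative sketch: distribution of 0-9 usefulness ratings and the share rated
# 5-9 ("useful"). The ratings list is invented sample data, not the study's data.
from collections import Counter

ratings = [7, 8, 5, 9, 6, 4, 7, 8, 3, 9, 6, 7]   # hypothetical usefulness ratings

counts = Counter(ratings)
for level in range(10):
    pct = 100 * counts.get(level, 0) / len(ratings)
    print(f"rating {level}: {pct:.0f}%")

useful = 100 * sum(1 for r in ratings if 5 <= r <= 9) / len(ratings)
print(f"Rated useful (5-9): {useful:.0f}%")
```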

A one-way analysis of variance was conducted to determine whether there was a significant difference (p < 0.05) in students' mean assessment scores based on their self-reported pharmacy school GPA. In order to achieve adequate sample sizes per group, the 3 GPA ranges from 2.00 to 2.99 were combined into 1 category for analysis. The ANOVA results were not statistically significant; however, higher assessment scores appeared to be associated with higher GPA (Table 4).

Table 4.

Mean Score by GPA Range

[Table 4 appears as an image in the original publication and is not reproduced here.]

DISCUSSION

Students rated their confidence in 4 skill/knowledge areas (communication, drug information, problem solving, and professionalism) immediately prior to the administration of the assessment. This information was collected so that the assessment committee could obtain feedback from students about their level of confidence or anxiety regarding skills that are taught across the curriculum; these skills were not directly evaluated on this progress assessment. Nine cases covering the 13 pharmacotherapeutic areas were administered during the assessment. In 6 of the 9 subject areas, students' self-reported level of confidence increased following the assessment, indicating the potentially positive effects of this assessment on overall student confidence. In general, 89% of the students reported that the assessment was useful (Figure 1). Students had the opportunity to self-reflect on their performance and write learning outcomes for their APPEs. Research on progress examinations in medicine reported similar results related to helping students learn self-assessment techniques and reduce their anxiety about (ie, increase their confidence in) their own knowledge or learning.1 For the 3 areas in which students' confidence scores decreased, 2 possible explanations are the amount of time that had passed since the content was covered and the complexity of the cases. For example, even though the nonprescription drug case was labeled “simple,” it was based on the recently changed guidelines for cough and cold medications. An important point is that in all 3 cases, the percent decrease was relatively small (1%-3%).

Open-ended student comments revealed that the interaction with the residents was valuable. Students also reported that they would like to engage with this type of learning environment throughout the year and would have liked to cover more than the selected disease states. The use of audience response devices was well received by students. This format provided the opportunity for live interaction with peers and a faculty moderator, which is an advantage over assessments delivered as individual computer-based tests. The most common negative comment from students was that the 120-minute assessment was too long.

Pharmacy practice residents served as facilitators of the self-reflective component of the assessment. This facilitation proved beneficial to the students, the College, and the individual resident. Students likely felt more comfortable discussing concerns over advanced practice experiences since most residents had completed their APPEs within the previous 2 years. The residents also provided positive feedback about the assessment process. They valued the opportunity to interact with students in small groups and practice their newly learned teaching/precepting skills. Residents reported that this interaction provided them with a chance to observe and guide small groups and practice managing small group dynamics. They perceived that the students benefited from the opportunity to ask questions of their “peers” prior to commencing their APPEs. Residents were also able to provide feedback to the steering team on general trends of students' strengths and weaknesses with respect to the therapeutic areas assessed.

This assessment process enabled faculty members to obtain a summative assessment of student learning outcomes delivered in a case-based, interactive, and technology-enhanced format. The overall mean score for the class was 58% ± 14% (median 59%, mode 67%). These scores are similar to those reported by Sansgiry and Nadkarni (2004), who conducted an annual milemarker examination of pharmacy students.11 Student motivation to perform well on low-stakes assessments is problematic and does have an impact on the validity of the results.12 In order to increase motivation, Wise and DeMars suggest making assessments more intrinsically motivating, using strategies such as providing a moderate level of challenge and making the content interesting to students to stem boredom.12 They also recommend providing feedback and choosing questions that are not too difficult or mentally taxing. In this project, students were provided immediate feedback, the test itself was moderately challenging and delivered in an innovative format (to address boredom), and the cases were based on what students will likely encounter in their APPEs (to address intrinsic motivation). Another strategy for dealing with these validity issues is to interpret the data generated in conjunction with the results of other assessments of PharmD curricular outcomes. The progress assessment data are currently under analysis by the assessment committee as part of an overall curricular revision planning process. Other data being used for this process include curricular mapping data and student survey (satisfaction) data.

Each case connects to commonly encountered disease states because the cases were written based on the top 200 most commonly prescribed drugs. The cases were also linked (Appendix 1) to the College's ABOs (competencies). These 2 linkages will allow the assessment committee to use the results to evaluate curricular effectiveness, and any need for curricular change, from both the content and outcomes standpoints. These data can be used in conjunction with curricular mapping data to investigate specific areas of the curriculum. By examining student confidence levels coupled with their performance on the items, the assessment committee can make decisions about student knowledge and confidence within the context of the ABOs for the PharmD program. For example, students reported a low degree of confidence in their knowledge about atrial fibrillation both before and after the assessment. In addition, the percentage of students selecting the correct answer on the questions related to this therapeutic area ranged from 32% to 72%, with an overall case average score of 56% (Table 1). Further investigation by the assessment committee into this content area may be necessary.

In Table 1, there is no consistency between the case score and the case level (ie, simple cases did not consistently produce the highest case scores). The steering team discovered (after the assessment administration) that there was some confusion among the case writers about the definition of case level for this first administration. Most of the case writers were able to incorporate the simple or complex designation into their cases, but the multiple-choice questions based on the cases often varied substantially in their levels of difficulty.

Overall, students rated themselves very confident in their communication, problem-solving, and professionalism skills; however, only 24% of students reported confidence in their drug information skills. As a result, this skill area will be referred to the assessment committee for follow up.

Student achievement of ABOs can also be assessed. For example, cases 6 and 8 both address the ABO “monitor a patient's response to therapy.” By evaluating the overall percentage or item average score for these 2 cases (56%), faculty members can begin to make determinations of student competence with respect to that particular ABO. Data gathered via this assessment are useful for overall curricular reform when used in conjunction with other measures or tools such as curricular maps.
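The roll-up from case scores to an ABO-level indicator is straightforward to automate; the sketch below assumes a hypothetical case-to-ABO mapping and invented case averages, since the full Appendix 1 mapping is not reproduced in this text.

```python
# Minimal sketch (hypothetical links and scores) of rolling case-level scores up
# to ability-based outcomes (ABOs). Each case can map to one or more ABOs.
case_to_abos = {
    "case6": ["monitor a patient's response to therapy"],
    "case8": ["monitor a patient's response to therapy", "recommend drug therapy"],
}
case_average_score = {"case6": 54, "case8": 58}   # invented percent-correct averages

abo_scores = {}
for case, abos in case_to_abos.items():
    for abo in abos:
        abo_scores.setdefault(abo, []).append(case_average_score[case])

for abo, scores in abo_scores.items():
    mean = sum(scores) / len(scores)
    print(f"{abo}: {mean:.0f}% average across {len(scores)} case(s)")
```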

There are several limitations to this assessment process. First, the assessment occurred at a single point in time and is therefore subject to the same biases as other cross-sectional assessments, in which a student's performance is judged from a single measurement. Second, the assessment was largely focused on content or knowledge components, and students were not asked to demonstrate their ability to perform any skills. Additionally, this was a “low-stakes” assessment with no grades or penalties for nonparticipation. Previous publications have discussed the limitations of using low-stakes examinations to assess student learning.12,13 The assessment process as designed was highly dependent on the TurningPoint software and “clicker” technology, and data capture was affected during this administration by system failures. Finally, the development of cases for this initial assessment, as well as any future additions and editing of cases, will require faculty time and willingness to participate, adding to the already busy schedules of faculty members.

Based on the first administration of the assessment, the following changes are planned. The steering team will work to incorporate other pharmacy content areas (eg, pharmacokinetics, pharmaceutics, and medicinal chemistry) into the assessment and revise the cases used in 2007. Additionally, stations may be added to assess skills such as communication, drug information, and patient counseling. These stations will likely incorporate an objective structured clinical examination (OSCE)-type format. Though our original intent was to combine an objective assessment with an OSCE, we decided to focus on the assessment alone for this first offering. Other programs, including medicine, have reported their experiences in combining progress testing and OSCEs.1 After multiple administrations of the assessment and a complete psychometric evaluation of items, consideration will be given to making the activity a high-stakes assessment that students would be required to complete successfully prior to beginning their APPEs. In order to improve the concordance between the level of the cases and the difficulty of the questions, the steering team plans to create definitive guidelines, possibly based on the Canadian OSCE blueprinting technique.14 The steering team also plans to develop a method to track the students' progress on their objectives during their APPEs. This may involve creating a formal system of reviewing and documenting progress on these objectives in student portfolios throughout the APPEs.

CONCLUSION

A single technology-based interactive assessment tool was shown to serve both summative and formative assessment purposes at the student and program levels. This assessment also provided students with a means of self-assessing their confidence as well as testing their content knowledge. Overall, students felt the assessment process was useful and reported higher scores on most of their self-assessments of confidence in their knowledge following the activity. Pharmacy practice residents benefited from the opportunity to practice teaching skills and the College received program-level data to aid in curricular evaluation and assessment for accreditation purposes.

ACKNOWLEDGEMENTS

The authors gratefully acknowledge: Anand Khurma for his technical expertise; Jerry Cable, as fellow steering team member and Experiential Director, for his contributions to the design of the assessment; the pharmacy practice faculty case writers/reviewers; and the pharmacy practice residents for their invaluable feedback, time, and expertise.

Appendix 1. Map of assessment cases to program-level ability-based outcomes.

[Appendix 1 appears as a two-part image in the original publication and is not reproduced here.]

REFERENCES

