American Journal of Pharmaceutical Education. 2019 Apr;83(3):6582. doi: 10.5688/ajpe6582

Quality Assurance and Improvement Practices of Experiential Education Programs in Schools of Pharmacy

Mitra Assemi a, Margarita V DiVall b,c, Kelly Lee d, Erin Sy e, Teresa O’Sullivan e
PMCID: PMC6498196  PMID: 31065159

Abstract

Objective. To identify common practices for measuring quality of experiential education (EE) programs at US schools and colleges of pharmacy.

Methods. In-depth, semi-structured phone interviews were conducted with directors of experiential education or their equivalent to identify elements of quality assurance (QA) processes for EE. To ensure representativeness of all fully accredited programs, purposeful sampling was used for participant solicitation and enrollment until both code and meaning saturation were reached. Participants were asked questions in six domain areas (preceptor performance, student performance, site quality, role of site visits, coursework, and achievement of learning outcomes). An iterative data coding and analysis process identified themes and notable practices within each domain area.

Results. Interviews were conducted with representatives of 29 programs. All participants reported evaluating preceptor performance. Fewer participants identified a deliberate site assessment process, with most equating preceptor and site evaluation. Participants conducted site visits primarily to assess site quality and maintain relationships with preceptors. Few participants were able to provide details of a process used for evaluating experiential education coursework and student outcomes. All participants used student performance assessments to measure the quality of student performance. Overall, participants almost universally reported collecting data, less frequently described processes for data evaluation, and rarely shared outcomes arising from data collection and analysis.

Conclusion. Themes and notable practices identified in this study provide initial benchmarks for QA programs for EE and will inform content and metrics of subsequent follow-up studies. A six-step process for QA for EE is proposed.

Keywords: experiential education, quality assurance, quality improvement, assessment, qualitative research

INTRODUCTION

In 1994, a task force of American Association of Colleges of Pharmacy (AACP) members of the then-named Practice Experience Program Special Interest Group published standards and guidelines for pharmacy practice experience programs.1 At that time, most experiential programs at US schools and colleges of pharmacy were run by a single faculty member. The experiential curriculum provided students with operations-focused externships in hospital and community settings and a patient care-focused clerkship experience during which students typically shadowed a “clinical” pharmacist in a hospital setting. This paper was the first to officially acknowledge that experiential education (EE) had a curriculum of its own and to identify quality components of that curriculum. The report highlighted the importance of and considerations related to assessment of student performance in the practice setting. It also defined the role of both preceptor and student in the assessment process. The authors predicted that experiential curricula would evolve, requiring a team-based approach to oversee, manage, and monitor preceptor orientation and development, site quality, and student performance.

In 1999, the American College of Clinical Pharmacy (ACCP) and the European Society of Clinical Pharmacy held a global conference on documenting the value of clinical pharmacy services.2 One meeting report emphasized the importance of developing a systematic assessment plan for experiential education that comprises identification of student learning outcome hypotheses to test, practical study design, efficient data collection, data analysis that includes the learning environment as a component, and timely development and implementation of any needed action plan.3 The continuous quality improvement (CQI) process identified in this report is similar to the five-step process outlined in the American Physical Therapy Association’s (APTA) primer on curricular outcomes assessment: set program goals, develop an assessment plan, implement the assessment plan, analyze the results, and close the assessment loop.4 Closing the assessment loop requires either determining that the curricular approach is successful or making needed changes. Either decision ends with the process beginning over again to ensure continued success or test the effect of changes.

The 2003-2004 AACP Professional Affairs Committee Report further emphasized the role of academia in experiential education quality assurance (QA).5 Included in this report were recommendations and suggestions related to programmatic quality, including identification of exemplary practice sites and preceptors; orientation, development, and quality assessment of preceptors; a process for tracking and evaluating student performance relative to achievement of student learning outcomes; and a “biannual audit,” which appeared to be a comparison of desired student learning outcomes to what students were actually learning at individual practice sites. An ACCP White Paper described curricula, preceptor, program oversight, and assessment-related elements of quality experiential education.6 Although both reports suggest program goals that could be set and data that could be collected, neither illustrated nor informed the full process of CQI in experiential education.

The most robust report to date on a CQI process in experiential education was published by Stevenson and colleagues in 2011.7 This report described a quality improvement process for advanced pharmacy practice experiences (APPEs) conducted through the curriculum committee structure for course review. The formal review included an analysis of course materials (eg, site-specific syllabi, the student performance assessment instrument), student performance as measured by assigned grades, and student performance assessments for their sites and preceptors. While this report did not identify a set of specifically tested hypotheses, it nonetheless used the CQI process to identify areas for improvement, several of which were specific enough to close the assessment loop.

The Accreditation Council for Pharmacy Education (ACPE) Standards 20168 for doctor of pharmacy (PharmD) programs emphasize the importance of programmatic quality improvement, including the need to establish QA procedures for all pharmacy practice experiences, with the goal of ensuring achievement of stated course expectations, standardizing key components of experiences across sites offering the same experiential course, and promoting consistent assessment of student performance (Standard 10.15). Additionally, colleges or schools are required to apply quality criteria for preceptor performance evaluation and engage preceptors as part of the experiential quality improvement process (Standard 20). Furthermore, programs must develop quality criteria for practice facility recruitment and regular evaluation of practice sites (Standard 22) and initiate quality improvement if deficiencies in student learning outcomes are noted.

While reports detailing the structure and staffing of experiential education-related administration provide insight into the human resources dedicated to EE,9,10 little data in the literature exist outlining how those EE teams are assessing program quality. Our report describes a qualitative approach to exploring QA structure and processes for EE. Our first objective was to identify common practices in measuring quality of preceptor performance and experiential sites and the role of site visits in these evaluations. Our second objective was to determine how programs assess experiential education courses and curriculum, student performance, and achievement of learning outcomes. The final objective was to identify any notable practices that could serve as elements in the construction of an ideal QA model for EE programs.

METHODS

This qualitative study obtained data from semi-structured phone-based interviews of EE directors or their equivalent at US PharmD degree programs with full accreditation status. Drawing upon investigations into both code and meaning saturation,11 we aimed to conduct 25-30 interviews. An interview guide was developed to verify the participant’s role in QA for EE and obtain answers to six questions. Three of the questions asked what participants felt their program did well or could improve upon in measuring the quality of preceptor performance (ACPE Standard 20.1), experiential sites (ACPE Standard 22.3), and student performance (ACPE Standard 24.1). One question asked about the assessment process for EE courses or curricula (ACPE Standard 25.3), and another question asked how the participant’s program measured achievement of student learning outcomes (ACPE Standard 24.4). A sixth question asked about the role and purpose of site visits to probe how or if data from site visits were used in the QA process for EE (ACPE Standard 10.15). Each interview was semi-scripted;12,13 the six questions were identically phrased and presented in the same order to every participant, but the interviewer was allowed to ask follow-up and probing questions at their discretion to either clarify or expand a participant’s response. The study was reviewed and certified as exempt by the Institutional Review Board at the University of California, San Francisco.

Program characteristics considered to create an initial stratified representative pool of US PharmD programs for potential inclusion into the study were geographic location; funding status (private/public); accreditation before or after 1995; length of program; class size; existence of multiple (branch) campuses; and whether faculty from the program had publications and/or national presentations within the prior 10 years on the topic of QA in EE. From this list, purposeful sampling14 was used to solicit and enroll participants, who were EE directors identified from college and school websites. A standardized email was sent to prospective participants informing them of the study and containing a link to a web-based informed consent form administered using a commercially available survey platform (Qualtrics, Inc, Provo, Utah). Subjects who did not respond to the initial solicitation, declined consent, or did not participate in an interview were excluded from the study.

All interviews were recorded, transcribed into a word-processed document, and de-identified by the interviewer. The transcriber compared audio recordings of interviews to the transcription for accuracy, and then destroyed the audio version. Participant recruitment, data acquisition through participant interview, and data analysis occurred simultaneously, with data analysis occurring iteratively as new data were acquired and as codes and themes began to emerge.15,16 Interview transcript data were initially ordered into spreadsheets, with each spreadsheet representing all participant responses to one of the structured questions. Initial spreadsheets were reviewed by two of the primary investigators to test questions for clarity. Each study investigator then independently reviewed the dataset to identify preliminary keywords, phrases, and themes. Next, two investigators imported the data into ATLAS.ti, version 1.0.41 (ATLAS.ti GmbH, Berlin, Germany), a qualitative data analysis software program, to perform a formal thematic analysis and develop initial coding rules.17 Two of the investigators grouped the data by question into spreadsheets and applied coding rules for themes independently per question to code all data. The independent coders then reviewed the coding and resolved discrepancies by amending the coding rules. A third investigator completed the verification coding. All investigators reached consensus on the final themes. In addition to themes, investigators also identified “notable” practices, which were activities or procedures reported by one or two participants that other EE program directors might consider adopting.

Aggregate demographic variables from the responding schools (eg, number of participants from public vs private schools of pharmacy) were summarized using descriptive statistics. Agreement between coders and verifier was calculated as percent agreement and tested for congruence using Cohen’s kappa; a result of 0.6 or higher was considered satisfactory agreement.18 When some values of Cohen’s kappa turned out to be 0 despite a high percentage of agreement, coder-verifier agreement was retested using Gwet’s first agreement coefficient (AC1).19,20 All coder-verifier congruency testing was done using R, version 3.2.3 (The R Project, Vienna, Austria).21
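The two agreement statistics used here can be computed directly from their definitions. The following is an illustrative sketch only (the study’s actual analysis used the R functions cited in reference 20); the coder and verifier codes below are hypothetical binary "theme present/absent" judgments across ten transcripts.

```python
# Sketch of the coder-verifier agreement checks described above.
# Data are hypothetical; 1 = theme present, 0 = theme absent.

def percent_agreement(a, b):
    """Fraction of items on which two coders assign the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    p_o = percent_agreement(a, b)
    # Expected chance agreement from each coder's marginal category rates.
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in set(a) | set(b))
    if p_e == 1.0:           # both coders used a single category throughout
        return float("nan")  # kappa is undefined (0/0)
    return (p_o - p_e) / (1 - p_e)

coder    = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
verifier = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
print(round(percent_agreement(coder, verifier), 2))  # 0.9
print(round(cohens_kappa(coder, verifier), 2))       # 0.78, above the 0.6 threshold
```

With 90% raw agreement and moderately balanced code prevalence, kappa lands at 0.78, above the 0.6 threshold for satisfactory agreement used in this study.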

RESULTS

The results of school solicitation and participation are summarized in Figure 1. Of the 38 EE directors contacted about participating, nine did not respond, declined to participate in the study, or agreed to participate but did not respond to an interview schedule request. Of these nine EE directors, eight were from private institutions, six were from schools accredited in 1995 or after, three were from the southeastern region of the US, and none was affiliated with an academic medical center. Their nonparticipation resulted in underrepresentation of schools with these characteristics. Representatives of 29 schools were interviewed, and all participants completed the full interview. Characteristics of the participating schools were compared to those of all US pharmacy schools (Table 1). Code saturation occurred by the 12th interview, with the last theme agreed upon by both coders and verifier emerging in interview 11. Although meaning saturation was evident by interview 25, the four remaining interviews had already been scheduled, so they were conducted and their data added to the dataset.

Figure 1.

Selection of Study Participants

Table 1.

Representative Characteristics of Participant Institutions Compared to National Institutions


There were three overarching themes apparent in measuring the performance quality of preceptors, sites, and students: what data were collected, how collected data were analyzed, and how analyzed data were used for programmatic decision-making. Identified themes and subthemes, illustrative quotes, and notable practices are outlined in Table 2. Also outlined for each theme is the percent of participants who confirmed the presence of the theme in their EE program, the percent agreement between coders and verifier, and kappa and AC1 measures of agreement.

Table 2.

Measuring Quality of Preceptor, Student, and Site Performance


All participants described using data from student assessments in measuring the quality of preceptor performance. Fewer participants described use of data from other sources. Almost all participants indicated that they review or analyze preceptor performance data, but most participants described an informal process where incoming performance assessments with low scores are flagged and reviewed. Fewer participants described a scheduled process for examining data either individually or in aggregate. Although most participants indicated they were providing personalized feedback to preceptors in the form of student ratings and comments, fewer reported discussing those results with the preceptors. Also, some participants stated they did not know how the preceptors were using the feedback. Desired areas for improvement included using data from the review process to develop continuing professional development (CPD) programs for preceptors and determining whether the feedback provided resulted in actual performance changes.

Most participants also used student assessment data to measure site quality, although many considered the preceptor and site review to be the same process. Three participants overtly described a formal annual review process for sites, and another five participants implied that a formal review was completed. Fourteen participants did not describe a site review process. A few participants noted that student assessment comments about the quality of the site were frequently not helpful because their comments focused only on nonprogrammatic issues (eg, distance student had to travel to site) rather than actionable items that could improve site quality.

All participants described using data from preceptor assessments to measure the quality of student performance. Most participants described an informal process, where low scores on incoming performance assessments generated an alert and were reviewed at that time. Eight participants analyzed student performance assessment data in aggregate, but only three participants specifically explained how that analysis informed the curriculum. Some of the specific needs that participants noted were to establish consistency in activities across similar APPEs, to implement a method for tracking students longitudinally across the APPE year or EE curriculum, to establish equal rigor in student performance assessments of IPPEs and APPEs, and to create a method for assessing consistency in student skill acquisition at the end of the APPE year.

Almost all participants identified quality assurance as an important purpose of routine site visits, either to collect metrics related to site quality, orient a new preceptor, or address a specific student situation at a site. Nearly half of the participants identified “outreach” or maintenance of relationships between school and preceptors as another important purpose of site visits. Only a few participants stated that preceptor development was a purpose for site visits. Data collected during site visits usually centered on space and resources for students rather than on student performance. However, several participants mentioned observing student interprofessional interactions at the site during their visit. Some participants noted that their ability to assess site quality during visits was limited. Few participants identified how the data collected during site visits were used in the school’s QA program for EE.

At schools where the director of EE was the primary site visitor, sites were generally visited every 2 to 3 years. At schools where a dedicated site visitor or regional coordinators were employed, site visits were completed more frequently. Many participants expressed a desire to conduct site visits more frequently and some participants desired greater consistency in the purpose and scope of the visit. Other areas for improvement included visiting out-of-area sites more frequently, extending site visits to all EE sites (IPPE as well as APPE sites), and making visits to sites of school-based (non-volunteer) faculty members. Notable practices included soliciting feedback from preceptors about what they wanted to occur during site visits, using teleconferencing for remote site visits, using webinars to communicate important information to preceptors, and dedicating protected time for site visits (eg, once weekly or during summer term).

Metrics identified by participants for assessing the quality of experiential education courses are outlined in Table 3. Several participants provided data for EE course review, usually to the curriculum or assessment committees or an EE-affiliated committee. Some participants described a structure used for course review, but few spoke of the decision-making process used in that review. Even fewer participants described how the outcomes of the assessment process were used in modifying, enhancing, or confirming elements of the EE courses or curriculum. Several participants noted the importance of EE personnel representation in the course review process.

Table 3.

Measuring Quality of Experiential Courses/Curriculum and Achievement of Student Outcomes


Participants shared how achievement of student learning outcomes is assessed (Table 3), with most confirming that experiential performance instruments are mapped to the learning outcomes. Few participants articulated a process for systematically evaluating achievement of student learning outcomes, and only three participants described how the student learning outcome data were used in the curricular change process. Areas for improvement identified by participants included implementation of a validated assessment instrument, establishment of a method for ensuring consistency in student performance across sites offering the same type of experience, and development of a structure that would enable accurate measurement of and improvement in key skills across the EE curriculum.

DISCUSSION

The importance of quality assurance is universally recognized and acknowledged, yet remains a significant concern among experiential education faculty and staff members.22 Challenges to optimizing the experiential curriculum include inadequate resources (personnel and technology required to effectively manage the daily workload), miscommunication with stakeholders, challenges to student progression, and tensions arising from the need to balance site capacity with quality, especially in rural or highly used geographic areas.

Upon completing data analysis, we realized our identified themes tied in well with published assessment processes such as Deming’s Plan-Do-Study-Act (PDSA) cycle23 and the APTA Outcomes Assessment Process.4 Although most participants identified program goals when asked about student learning outcomes, and all participants indicated they were collecting data, fewer participants described analyzing data in aggregate. Even fewer participants described using the results of data analysis to inform curricular success or the need for curricular change, a step often referred to as “closing the assessment loop.” A missing step in most participants’ described assessment processes was development of a clear assessment plan. A reason for this could be the need to first identify a specific curricular approach to examine, a step not described by participants in our study. Adding this step to the APTA process generates a six-step cycle for continuous EE curricular assessment as illustrated in Figure 2. This assessment cycle aligns with the model for overall curricular assessment outlined by Ried.24

Figure 2.

The Experiential Education Assessment Cycle

When applying the results of our study to this six-step cycle, it appears that missing steps from some QA programs for EE are identification of the designed curricular approach to study, development of an assessment plan that contains measurable benchmarks, analysis of data in aggregate, and application of data analysis to close the assessment loop. Absence of these steps may explain why some participants in our study who identified that they were collecting large volumes of data could not or did not describe how they used the data. Similarly, absence of these steps may explain the paucity of evidence that systematic assessment of collected data is being used to inform preceptor continuing professional development.

One reason that these steps only appear to be missing from some QA programs for EE is that we did not ask questions of participants in a way that elicited an answer that clearly showed how each step was being addressed. Failure to ask the right question may have been a contributing factor to the low agreement scores between coders and verifier seen in the question about measuring the quality of student performance. Alternatively, we may not have been asking the right people how their program’s QA process for EE worked. For example, assessment directors may have provided more detail about their school’s QA assessment process, but may have been less likely to have known how preceptor and site quality were measured.

There are limitations to using student-derived data to inform QA of experiential education curricula. For example, a preceptor performance assessment submitted by a student reflects only that student’s point of view and only one preceptor, even though other preceptors also may have contributed significantly to the student’s learning. Other metrics of preceptor performance, such as peer assessments,25,26 self-assessments, intra-rater reliability calculations for student performance assessment, and participation in continuing professional development activities, are time consuming to measure or of uncertain value in determining overall quality of preceptor performance. The same concerns apply to gathering, summarizing, and understanding measures of student performance, site performance, and other assessment-related outcomes.

Qualitative research methods do not require achievement of a representative sample. Nevertheless, we purposefully tried to identify and include programs that would represent all demographic facets of the academy. While we tried to interview representatives of programs with published innovations in experiential education assessment, additional notable practices likely are occurring in experiential education programs not captured in our study. Because multiple investigators conducted the interviews, there were variations in interviewing style and the specific questions asked by individual interviewers. Regular and frequently scheduled meetings among all investigators allowed debriefing on interviewing experiences, which facilitated making dynamic adjustments to the interview script and minimized the impact that the variability in probing questions had on acquired data. Answers to some questions may have been affected by where the participant was in the accreditation cycle. For example, a participant in the midst of preparing for accreditation under the new standards might have had a more well-developed assessment plan than a participant who had gone through the process several years previously. Not all interviewees may have been able to accurately answer all questions. Some of the EE directors in our study appeared to be far removed from the curricular assessment process and thus not able to answer questions about curricular assessment or achievement of student learning outcomes. These limitations were balanced by the general similarity of demographics at the interviewed schools to those of all schools, as well as by achieving theme and meaning saturation prior to the end of data collection, and reaching high levels of agreement between coders and verifier on most identified themes.

One unexpected issue was the failure of Cohen’s kappa to provide a reasonable measure of congruency in specific situations. Cohen’s kappa starts from percent agreement but assumes that some agreement occurs by chance alone, and thus subtracts the expected chance agreement from the observed percent agreement. Cohen’s kappa has been criticized for being too conservative when coding tasks are difficult, as was the case in our study, because chance agreement in such cases will be low (and disagreement correspondingly higher compared to that in studies with easy coding tasks).27 In our study, Cohen’s kappa could not produce a reliable value when there was perfect or nearly perfect agreement between coders and verifier: when nearly all ratings fall into a single category, expected chance agreement approaches observed agreement, driving the numerator (and, in the extreme, the denominator) of the kappa equation to zero. Once we realized that Cohen’s kappa had this limitation, a search for a more robust measure of association led us to Gwet’s first agreement coefficient.19,20 This coefficient proved robust when agreement was nearly perfect, less conservative than Cohen’s kappa when percent agreement was high, and reliably low when percent agreement was not high.
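This "kappa paradox" is easy to demonstrate numerically. In the hypothetical sketch below, two coders agree on 9 of 10 items, but because nearly every item carries the same code, chance-corrected kappa collapses to 0 while Gwet's AC1, whose chance term is based on average category prevalence rather than coder marginals, stays near the raw agreement level.

```python
# Illustration of the kappa limitation described above. Data are
# hypothetical; AC1 follows Gwet's published first-order formula.

def cohens_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e) if p_e != 1.0 else float("nan")

def gwet_ac1(a, b):
    """Gwet's first agreement coefficient (AC1) for two coders."""
    n = len(a)
    cats = sorted(set(a) | set(b))
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from average category prevalence, not marginals.
    pi = [(a.count(c) + b.count(c)) / (2 * n) for c in cats]
    p_e = sum(p * (1 - p) for p in pi) / (len(cats) - 1)
    return (p_o - p_e) / (1 - p_e)

# Coders agree on 9 of 10 items, but almost every item is coded "1".
coder    = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
verifier = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
print(cohens_kappa(coder, verifier))        # 0.0 despite 90% agreement
print(round(gwet_ac1(coder, verifier), 2))  # 0.89
```

Here one coder's marginal prevalence for code "1" is 100%, so kappa's expected chance agreement equals the observed 90% agreement and the numerator vanishes, exactly the zero-kappa-with-high-agreement situation encountered in this study.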

Future efforts towards elucidating successful QA processes for EE and constructing a model CQI framework should determine additional details related to data collection (eg, instrumentation and process development). Future studies also should focus on investigating how data are shared, reviewed, and acted upon across the EE unit and other facets of the program by individuals and/or groups to inform quality assurance, improvement of the curriculum, and student achievement of learning outcomes.

CONCLUSION

Our report describes a qualitative approach to exploring programmatic approaches towards QA of EE. Common metrics collected by EE programs relating to preceptor, site, and student performance, and the differing role of site visits at different schools were identified, as was a description of EE course review and achievement of student learning outcomes. We identified some notable practices that could provide EE directors with ideas for how to expand their QA data collection and assessment practices. Most importantly, our results inform the framework of a six-step cycle that all EE directors can follow to produce a clear and robust QA process.

ACKNOWLEDGMENTS

The assistance of Donal O’Sullivan, PhD, with the statistical analysis and Jennifer Danielson, PharmD, MBA, with manuscript review and revision is gratefully acknowledged.

REFERENCES

1. Campagna KD, Boh LE, Beck DE, et al. Standards and guidelines for pharmacy practice experience programs. Am J Pharm Educ. 1994;58(Winter Supp):35S–47S.
2. American College of Clinical Pharmacy. Introduction. Pharmacotherapy. 2000;20(10 Pt 2):233–234S.
3. Beck DE. Outcomes and experiential education. Pharmacotherapy. 2000;20(10 Pt 2):297S–306S. doi: 10.1592/phco.20.16.297s.35020.
4. American Physical Therapy Association. Outcomes assessment in physical therapy education. Last updated May 24, 2016. http://www.apta.org/OutcomesAssessment/. Accessed June 8, 2017.
5. Littlefield LC, Haines ST, Harralson AF, et al. Academic pharmacy’s role in advancing practice and assuring quality in experiential education: report of the 2003-2004 Professional Affairs Committee. Am J Pharm Educ. 2004;68(3):Article S8.
6. Haase KK, Smythe MA, Orlando PL, Resman-Targoff BH, Smith LS. Quality experiential education. Pharmacotherapy. 2008;28(10):219–227e. doi: 10.1592/phco.28.12.1548.
7. Stevenson TL, Hornsby LB, Phillipe HM, Kelley K, McDonough S. A quality improvement course review of advanced pharmacy practice experiences. Am J Pharm Educ. 2011;75(6):Article 116. doi: 10.5688/ajpe756116.
8. Accreditation Council for Pharmacy Education. Accreditation Standards and Guidelines for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree (“Standards 2016”). 2015. https://www.acpe-accredit.org/pdf/Standards2016FINAL.pdf. Accessed June 8, 2017.
9. Danielson J, Craddick K, Eccles D, Kwasnik A, O’Sullivan TA. Status of pharmacy practice experience education programs. Am J Pharm Educ. 2014;78(4):Article 72. doi: 10.5688/ajpe78472.
10. McCullough E, Staton A, Bonner CL, Fetterman JW, Wickman J, Parham L. Team development of the experiential program office and beyond. American Association of Colleges of Pharmacy Experiential Education Section Newsletter. 2015;2(2):1–3.
11. Hennink MM, Kaiser BN, Marconi VC. Code saturation versus meaning saturation: how many interviews are enough? Qual Health Res. 2016;27(4):591–608. doi: 10.1177/1049732316665344.
12. Rubin HJ, Rubin IS. Qualitative Interviewing: The Art of Hearing Data. 3rd ed. Thousand Oaks, CA: SAGE Publications; 2012.
13. Guest GS, Namey EE, Mitchell ML. Collecting Qualitative Data: A Field Manual for Applied Researchers. 1st ed. Thousand Oaks, CA: SAGE Publications; 2013.
14. Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm Policy Ment Health. 2015;42(5):533–544. doi: 10.1007/s10488-013-0528-y.
15. Merriam SB. Qualitative Research. 3rd ed. San Francisco, CA: Jossey-Bass; 2009.
16. Guest GS, MacQueen KM, Namey EE. Applied Thematic Analysis. 1st ed. Thousand Oaks, CA: SAGE Publications; 2012.
17. Saldana J. The Coding Manual for Qualitative Researchers. 2nd ed. Thousand Oaks, CA: SAGE Publications; 2013.
18. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37(5):360–363.
19. Wongpakaran N, Wongpakaran T, Wedding D, Gwet KL. A comparison of Cohen’s kappa and Gwet’s AC1 when calculating inter-rater reliability coefficients: a study conducted with personality disorder samples. BMC Med Res Methodol. 2013;13:61. doi: 10.1186/1471-2288-13-61.
20. Gwet KL. R functions for calculating agreement coefficients. Gaithersburg, MD: Advanced Analytics, LLC; 2010. http://www.agreestat.com/r_functions.html. Accessed June 8, 2017.
21. R Core Team. R: a language and environment for statistical computing (v. 3.3.2, 2016). Vienna, Austria: R Foundation for Statistical Computing. https://www.r-project.org/. Accessed January 17, 2017.
22. Danielson J, Craddick K, Eccles D, Kwasnik A, O’Sullivan TA. A qualitative analysis of common concerns about challenges facing pharmacy experiential education programs. Am J Pharm Educ. 2015;79(1):Article 6. doi: 10.5688/ajpe79106.
23. Plan-Do-Study-Act Cycle. Ketchum, ID: The Deming Institute; 2017. https://deming.org/management-system/pdsacycle. Accessed June 8, 2017.
24. Ried LD. A model for curricular quality assessment and improvement. Am J Pharm Educ. 2011;75(10):Article 196. doi: 10.5688/ajpe7510196.
25. Barnett CW, Matthews HW. Teaching evaluation practices in colleges and schools of pharmacy. Am J Pharm Educ. 2009;73(6):Article 103. doi: 10.5688/aj7306103.
26. Cox CD, Peeters MJ, Stanford BL, Seifert CF. Pilot of peer assessment within experiential teaching and learning. Curr Pharm Teach Learn. 2013;5(4):311–320.
27. Feng GC. Mistakes and how to avoid mistakes in using intercoder reliability indices. Methodology. 2014;11(1):13–22.
28. Dawson K, Hammer DP, Manolakis PG, O’Sullivan TA, Skelton JB, Weber SS. Academic practice partnership initiative summit to advance experiential education in pharmacy: final report and proceedings. Alexandria, VA: American Association of Colleges of Pharmacy; 2005. http://www.aacp.org/resources/education/APPI/Documents/SummitFinalReport.pdf. Accessed August 10, 2017.
29. McLeod R, Mires G, Ker J. Direct observed procedural skills assessment in the undergraduate setting. Clin Teach. 9(4):228–232.

Articles from American Journal of Pharmaceutical Education are provided here courtesy of American Association of Colleges of Pharmacy
