American Journal of Pharmaceutical Education. 2019 Oct;83(8):7246. doi: 10.5688/ajpe7246

Educational Outcomes Resulting From Restructuring a Scholarship Course for Doctor of Pharmacy Students

MaryPeace McRae a, Teresa M Salgado a, Julie A Patterson a, Benjamin W Van Tassell a, Jeremy S Stultz b, Amy L Pakyz a, Katherine Henderson c, Leticia R Moczygemba d, Kai I Cheang a
PMCID: PMC6900817  PMID: 31831905

Abstract

Objective. To compare educational outcomes between two iterations of a scholarship and research course for Doctor of Pharmacy (PharmD) students at Virginia Commonwealth University’s School of Pharmacy.

Methods. The first iteration of a course intended to teach pharmacy students the knowledge and skills necessary to design and conduct research involved lectures and application exercises, including limited guided questions about different aspects of the research process. In the fall of 2015, multiple structured activities and accompanying grading rubrics, each designed around the structure and content of a section of a research proposal, were introduced to the course to supplement lectures. Both iterations of the course culminated with students submitting a research proposal. After establishing interrater reliability, faculty members graded a random sample of 20 research proposals, 10 from each version of the course, and section-specific and overall proposal scores were compared.

Results. In the proposals submitted after the course revisions, significant improvements were identified in the overall score and in the section-specific scores for research hypothesis/specific aims and institutional review board (IRB) discussion/informed consent. Nominal, though not statistically significant, improvements were observed in the other sections.

Conclusion. Overall, our findings support the hypothesis that a more formalized, guided approach to teaching research methods improves learning outcomes for PharmD students. Additional research is needed on the best instructional strategies for reinforcing data analysis and statistical testing knowledge and skills in PharmD students.

Keywords: scholarship, research course, student, curriculum, pharmacy research

INTRODUCTION

According to Boyer’s theory of teaching and learning, scholarship is the creation, discovery, advancement, or transformation of knowledge.1 Scholarship is evidenced when that knowledge is assessed for quality by peer review and communicated to others.2 Scholarship in health professions has also been described as a lifelong commitment to thinking, questioning, and pursuing answers.3

The idea of a “culture of scholarship,” which refers to a creative and productive environment extending from active scholarly activity,2 has been endorsed by several pharmacy professional organizations. The 2008-2009 American College of Clinical Pharmacy (ACCP) Task Force on Research in the Professional Curriculum encouraged the integration of research and research skills as part of the core pharmacy curriculum.4 In 2012, the American Association of Colleges of Pharmacy (AACP)’s Argus Commission suggested required student research experiences in all schools of pharmacy to promote the development of inquisitiveness and scholarly thinking.5 Additionally, the Accreditation Council for Pharmacy Education (ACPE) encourages the inclusion of knowledge and skills related to research design and interpretation, having identified eight curricular competencies: identifying relevant problems, generating a hypothesis, designing a study, analyzing data using appropriate statistical tests, interpreting and applying the findings to practice, effectively communicating research and clinical findings to professionals, effectively communicating research and clinical findings to patients, and applying regulatory and ethical principles to the conduct and use of research findings.6

Despite pharmacy faculty’s recognition that a working understanding of statistics and research methodology is among the most important research-related competencies for pharmacy students, ranking behind only drug information and drug literature evaluation,7 the optimal topics for inclusion in and pedagogical methods for teaching research development courses for doctor of pharmacy (PharmD) students are not well established. Although AACP suggests required student research experiences, the necessity of requiring PharmD students to conduct and complete full research projects has been debated.7 Students who are exposed to the complexity of developing a research question and designing a study to address that question may be better equipped as practitioners to assimilate new knowledge and adapt to changes in practice.8 Scientifically minded health professionals will be better able to access and appropriately apply current scientific knowledge as a regular part of their work, contribute to the body of knowledge, critically evaluate interventions and their outcomes, and routinely subject their work to the scrutiny of colleagues, stakeholders, and the public.9

Several descriptive articles have reported the impact of a research course on student perceptions of research,10-12 student confidence,10 and research output (poster presentations and manuscript submissions).13,14 However, the literature has paid much less attention to the impact of different methods of teaching a research design course.

The Virginia Commonwealth University (VCU) School of Pharmacy curriculum includes a required course designated Pharmacy Practice Research. The course sequence was originally implemented in 2008, and active-learning strategies were used throughout. The students (typically a class of about 140) worked in groups of six to eight to review background literature and formulate their own pharmacy practice-related research question to address a gap in the existing literature. The student groups then developed hypotheses and specific aims and designed a research study, including the data analysis plan to test those hypotheses. This process also required the student groups to address the role of human subjects’ protection and informed consent in their proposed studies. The main output for assessment was a full, written research proposal. At inception, the course comprised lectures and application exercises, including limited guided questions about different aspects of the research process. Beginning in 2015, a more formal, structured approach to the course was adopted to better facilitate student learning and improve educational outcomes. However, no formal evaluation of these curricular changes had been conducted. Therefore, the purpose of this study was to compare the quality of the research output (ie, the written research proposals) between two academic years (2013-2014 and 2015-2016) in which different teaching methodologies were used. Both iterations of the course spanned the entire academic year (two semesters), met for two hours every two weeks, and enrolled third-year pharmacy students. Our hypothesis was that providing structured, guided activities throughout the research design process would enhance student learning and improve the quality of their research proposals.

METHODS

This was an observational study comparing the quality of student-developed research proposals between the 2013-2014 academic year, in which students were given a limited number of ungraded, guided questions about the research process, and the 2015-2016 academic year, in which a more structured and guided proposal-writing process with continuous feedback was implemented, as described below. This study was approved by Virginia Commonwealth University’s Institutional Review Board (IRB).

Although the overall mission and expectations of the PharmD research course remained the same, the course structure underwent a considerable redesign in 2015, including a change in course title from Scholarship to Pharmacy Practice Research to more specifically align with the course content. In both iterations of the course, content delivery consisted mainly of lectures. However, the application of that content differed in style and format. Additional structure was implemented during the course redesign with the aim of more effectively guiding students in a stepwise manner through the research development process (Table 3).

Table 3.

Comparison of Coursework and Grading Between the 2013-2014 and 2015-2016 Academic Years


In its early years, the course introduced students to topics related to research, such as forming research questions, study design, grants and funding, and responsible conduct in research, and provided examples of faculty-led research. Specifically, in the 2013-2014 iteration of the course, students were given only a few application exercises in the form of guided questions to assist with the research development process. Furthermore, instructor feedback on these assigned activities was limited, no formal feedback was given for the research journal entries, and draft proposals were not assigned or evaluated.

The 2015-2016 course was designed to further enhance students’ experience of the research project development process by using additional strategies of scaffolding, pedagogical “chunking,” and frequent feedback. In this application of scaffolding, the students were provided the instructions and requirements for the final research proposal at the beginning of the course, including predefined rubrics for each section of the proposal. They had access to, and early knowledge of, the expectations for all the required sections, as well as the appropriate order in which they should be organized within the written proposal. Requiring students to submit structured written assignments throughout the course broke the written proposal down into manageable component parts (“chunking”). The assignments also complemented and reflected the content and organization of the scaffold. Assignment topics included: forming research questions and hypothesis/specific aims; choosing the appropriate study design, measurement tools, and recruitment strategies; developing a statistical analysis plan; and addressing responsible conduct in research. Students completed these assignments for their chosen research questions.

Because of the iterative and cumulative nature of the research process, faculty members believed that frequent and timely feedback on assignments would be necessary to allow students to correct or refine their proposals along the way as needed. Therefore, each assigned activity in the new course design was graded in real time, and written feedback was provided to the students before the next class session. Assignment grading was split between three course instructors, and each instructor graded assignments from the same research groups throughout the course. Course instructors also actively engaged with each student group individually during class sessions to provide verbal feedback. Examples of student activities and the associated grading rubric for the redesigned 2015-2016 course can be accessed at: https://docs.google.com/document/d/1VgyWGPHs9LJuonvrbTk-y81GP78sKChhUofyr22YwcI/edit?usp=sharing

Proposals for the 2013-2014 (n=20) and 2015-2016 (n=22) academic years were de-identified and assigned successive numbers. Ten proposals from each academic year were selected by a random number generator for inclusion in this study. Each of the 20 de-identified proposals was assigned to two reviewers, for a total of 40 separate grading events. Although the reviewers evaluated each of their assigned proposals independently, the reviewers were paired so that each pair of reviewers evaluated the same six or seven proposals. Furthermore, approximately half of the proposals assigned to each reviewer were from the 2013-2014 academic year and half were from the 2015-2016 academic year. All reviewers were pharmacists with experience and expertise in pharmacy practice research. To minimize bias in the grading, none of the reviewers was a current coordinator for the course. The course coordinator at the time of the study was responsible solely for de-identifying, randomizing, and assigning proposals to reviewers.
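To make this sampling and assignment scheme concrete, here is a minimal sketch in Python; the proposal labels, reviewer-pair names, and seed are hypothetical, and the study does not specify which random number generator was used.

```python
import random

random.seed(7)  # hypothetical seed, for a reproducible illustration

# De-identified proposals, labeled by academic year (labels are hypothetical).
proposals_2013 = [f"2013-{i:02d}" for i in range(1, 21)]  # n=20
proposals_2015 = [f"2015-{i:02d}" for i in range(1, 23)]  # n=22

# Randomly select 10 proposals from each academic year.
sample = random.sample(proposals_2013, 10) + random.sample(proposals_2015, 10)
random.shuffle(sample)  # mix years so each reviewer sees roughly half from each

# Deal the 20 sampled proposals to three reviewer pairs (7/7/6), so that each
# pair independently grades the same six or seven proposals (40 grading events).
pairs = ["pair_A", "pair_B", "pair_C"]
assignments = {pair: sample[i::3] for i, pair in enumerate(pairs)}

for pair, assigned in assignments.items():
    print(pair, len(assigned), sorted(assigned))
```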

To facilitate consistent evaluation of study proposals by reviewers, a grading rubric was created. The first version of the rubric included 11 sections: background/significance, hypothesis/specific aims, study design, sample selection, endpoints/measurable outcomes, statistical analysis, IRB/informed consent, limitations, references, format (according to instructions), and grammar. Each of these sections was further divided into specific evaluation criteria, with point values assigned to each criterion proportionate to its importance to the assignment, totaling 28 points. The initial rubric was created by modifying and combining relevant content from several existing grant proposal evaluation rubrics.

To pilot test the rubric and grading process, each pair of reviewers graded one proposal using the initial rubric. The pilot test proposals were chosen from among the proposals not selected for inclusion in the study. Each reviewer independently reviewed and assigned scores to the proposal. After all reviews were compiled, the reviewers met as one large group and, to improve inter-reviewer grading consistency, discussed their experiences, including their interpretations of the evaluation criteria and point assignments. Based on the feedback from this meeting, the rubric was modified. Specifically, three sections (data collection, style, and overall quality) were added, bringing the total number of sections in the modified rubric to 14. A final overall quality ranking on a six-point Likert scale (excellent, very good, good, fair, poor, and very poor) was added to the rubric as a qualitative measure. For all other sections, the point breakdowns within each section were further clarified and expanded (Table 1). Reviewers were allowed to assign partial points in one-fourth point increments. The total number of points (overall score) in this revised version of the rubric was 50. The revised rubric was then pilot tested: the reviewers were assigned another proposal to grade, which likewise was not included in the study. Finally, once preliminary inter-rater reliability was established for the pilot test, as described below, the randomly selected study proposals were assigned to the three sets of independent paired reviewers.
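As an illustration of how such a rubric might be encoded for consistent scoring, the sketch below lists the point-bearing sections with hypothetical maximum values (only the 50-point total is reported in the text), records the six-point Likert scale for overall quality, and enforces the quarter-point increments reviewers were allowed:

```python
from fractions import Fraction

# Hypothetical per-section maxima; the text reports only the 50-point total.
RUBRIC_MAX = {
    "background/significance": 6.0,
    "hypothesis/specific aims": 6.0,
    "study design": 6.0,
    "sample selection": 4.0,
    "data collection": 4.0,
    "endpoints/measurable outcomes": 4.0,
    "statistical analysis": 6.0,
    "IRB/informed consent": 3.0,
    "limitations": 3.0,
    "references": 2.0,
    "format": 2.0,
    "grammar": 2.0,
    "style": 2.0,
}
assert sum(RUBRIC_MAX.values()) == 50.0  # the revised rubric totaled 50 points

# The 14th section: a qualitative overall-quality ranking on a Likert scale.
LIKERT = ["excellent", "very good", "good", "fair", "poor", "very poor"]

def validate_score(section: str, points: float) -> float:
    """Ensure a score is within the section's range and lands on a
    one-fourth-point increment, as the rubric allowed."""
    maximum = RUBRIC_MAX[section]
    if not 0 <= points <= maximum:
        raise ValueError(f"{section}: {points} outside 0..{maximum}")
    if Fraction(points).denominator not in (1, 2, 4):
        raise ValueError(f"{section}: {points} is not a quarter-point increment")
    return points

validate_score("statistical analysis", 4.75)  # passes; 4.80 would raise
```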

Table 1.

Grading Rubric Used in Assessment of Research Proposals Submitted by Doctor of Pharmacy Students Enrolled in a Scholarship Course


The two reviewers’ scores for each proposal were averaged to obtain final section-specific and overall proposal scores. Inter-rater reliability for section-specific and overall proposal scores (continuous variables) and for the overall quality ranking (ordinal variable) was assessed using a one-way, consistency, average-measures intraclass correlation coefficient (ICC).15,16 A one-way model for the ICC was used because the reviewers were randomly selected from a larger population of reviewers for each proposal being assessed. Consistency in ratings rather than absolute agreement was selected to confirm that rank orderings of the ratings were similar between the two reviewers, thereby accounting for some reviewers being more stringent than others. Because all proposals were graded by two reviewers and the average of their ratings was used for hypothesis testing, average-measures ICCs were used. Finally, because the purpose of this analysis was to assess the level of agreement between the reviewers’ ratings within the current study and not to generalize ratings to a larger population of reviewers, a mixed effects model was selected in which reviewers were entered as fixed effects and proposals as random effects. As a qualitative interpretation of agreement, ICC values less than 0.5 indicated poor reliability; values of 0.5-0.75, moderate reliability; values of 0.75-0.90, good reliability; and values greater than 0.90, excellent reliability.17 We established 0.75 as the minimum acceptable cutoff for inter-rater reliability in this study.
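A minimal sketch of this reliability analysis follows, using Python’s pingouin package (our substitution; the study used SPSS) on hypothetical long-format ratings. The ICC1k row is a one-way, average-measures coefficient, and ICC3k is the two-way mixed, consistency, average-measures variant:

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per (proposal, reviewer) grading event.
df = pd.DataFrame({
    "proposal": [1, 1, 2, 2, 3, 3, 4, 4],
    "reviewer": ["A", "B", "A", "B", "A", "B", "A", "B"],
    "score":    [37.5, 39.0, 44.25, 43.0, 40.5, 41.75, 35.0, 36.25],
})

# pingouin returns all six ICC forms; select the average-measures rows.
icc = pg.intraclass_corr(data=df, targets="proposal", raters="reviewer",
                         ratings="score")
print(icc.set_index("Type").loc[["ICC1k", "ICC3k"], ["ICC", "CI95%"]])
```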

Normality of the section-specific and overall proposal score distributions for the two academic years was assessed by direct inspection of quantile-quantile (QQ) plots. Because the data were not normally distributed, descriptive statistics are presented as medians and interquartile ranges (IQRs). Differences in median section-specific and overall proposal scores between the 2013-2014 and 2015-2016 academic years were tested using a two-tailed Wilcoxon rank sum test, with the significance level set at p<.05. Statistical analyses were performed using IBM SPSS Statistics for Windows, version 24.0 (IBM Corp, Armonk, NY) for the ICC analyses, and JMP Pro 13 (SAS Institute Inc, Cary, NC) for descriptive statistics and hypothesis testing.
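A sketch of the same analysis pipeline, assuming Python with SciPy and Matplotlib in place of the SPSS/JMP software actually used, and with made-up averaged scores:

```python
import matplotlib.pyplot as plt
from scipy import stats

# Hypothetical averaged overall proposal scores (10 per academic year).
scores_2013 = [37.5, 35.0, 40.25, 33.5, 38.0, 36.75, 41.0, 34.25, 39.5, 37.0]
scores_2015 = [43.9, 45.25, 41.0, 46.5, 42.75, 44.0, 40.5, 47.25, 43.0, 45.0]

# Visual normality check via a QQ plot against the normal distribution.
stats.probplot(scores_2013, dist="norm", plot=plt)
plt.title("QQ plot: 2013-2014 overall proposal scores")
plt.show()

# Two-tailed Wilcoxon rank sum test comparing the two academic years.
statistic, p_value = stats.ranksums(scores_2013, scores_2015)
print(f"z = {statistic:.2f}, p = {p_value:.4f}")  # compare against alpha = .05
```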

RESULTS

Inter-rater reliability for section-specific and overall proposal scores was 0.92 (95% CI=0.78-0.97), suggesting excellent agreement within reviewer pairs. Inter-rater reliability for the ordinal variable “overall quality” was 0.77 (95% CI=0.43-0.91), suggesting good reliability. The median (IQR) scores for overall and section-specific data for the graded research proposals are summarized in Table 2. Across both academic years, median section-specific scores were generally high for data collection (82% and 100% for the 2013-2014 and 2015-2016 years, respectively) and background/significance (83% and 98%) (Table 2). The median overall proposal score was significantly higher among student groups in the more structured, redesigned 2015-2016 course compared to the 2013-2014 course (43.9 vs 37.5, p=.03). Significant section-specific improvements between 2013-2014 and 2015-2016 were identified for the hypothesis/specific aims (3.8 vs 5.6, p=.005) and IRB discussion/informed consent sections (1.0 vs 2.6, p=.004). Improvements in median scores on the other sections were not significant. The statistical analysis (69% and 82% for the 2013-2014 and 2015-2016 years, respectively) and IRB discussion/informed consent (33% and 88%) sections were consistently scored lower relative to other sections within the same academic year.
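The section percentages reported here appear to express the median points earned as a fraction of each section’s maximum point value. For the overall score, whose 50-point maximum is stated in the Methods, the arithmetic would be:

\[ \text{percent score} = \frac{\text{median points}}{\text{maximum points}} \times 100, \qquad \frac{37.5}{50} \times 100 = 75\%, \qquad \frac{43.9}{50} \times 100 \approx 88\%. \]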

Table 2.

Reviewers’ Scores for Pharmacy Students’ Research Proposals Submitted Before and After Revision of a Required Year-Long Scholarship Course


DISCUSSION

This study describes a retrospective analysis of the effectiveness of a research design course using a more structured, guided format with frequent feedback (2015-2016) compared to a previous iteration of the course in which there were fewer application-based student assignments and less instructor feedback (2013-2014). The more structured format in the 2015-2016 academic year resulted in significantly higher overall proposal scores and significantly higher scores in the hypothesis/specific aims and IRB/informed consent sections.

One of the major goals of the course is to have students identify pharmacy practice-related research questions of interest to them, conduct a thorough literature review, and design a research project to answer their questions. Students who are new to research often design ambitious and highly complex research projects. Their research questions are commonly too broad, and the methodologies they propose are not feasible or realistic. For example, students often do not appreciate common challenges such as poor survey response rates or loss to follow-up; accordingly, they are unable to account for these challenges in their study design. The improvements observed within the hypothesis/specific aims and study design sections of the revised course suggest an increased ability of students to identify a research question, develop a hypothesis, and focus their research question with appropriate and feasible specific aims. Similar improvements in students’ confidence in their ability to identify a research question have been reported following a required PharmD research course.12 Pharmacy residency directors have previously ranked identifying and writing a research question as the most important research skill for successfully completing a residency research project,18 suggesting that improvements in student learning outcomes in this area are meaningful.

The redesigned research course was also associated with a significant improvement (163%) in the IRB/informed consent section, which previously had been by far the lowest-scoring section. Both the 2013-2014 and 2015-2016 classes received a lecture on human subjects’ protection with similar content objectives. However, the new class format, which introduced a short activity in which each group identified and justified a level of IRB review for its study, was far more effective at teaching students about IRB review than a general lecture on the various types of IRB approvals. Despite the improvement noted following the course structure change, the scores on the IRB section were still low relative to scores in other sections. Student knowledge of the IRB process has previously been reported to be an area of weakness among fourth-year PharmD students completing capstone research projects.19

The median background/significance section score was already relatively high in 2013-2014 (83%) and improved further, though not significantly, in the 2015-2016 academic year (98%). In constructing the background/significance section, students were required to read, interpret, and summarize the literature, an important skill that they carry into their clinical rotations, residencies, and future careers as pharmacists. Students may be more comfortable with this section of the proposal because of their introductory experiences with reading, interpreting, and analyzing literature in the evidence-based pharmacy courses of their first and second professional years. Moreover, the background/significance section is primarily retrospective in nature and does not require the same degree of creative synthesis as other parts of the proposal, such as determining the study designs and analyses needed to answer specific research questions.

There was an 18% increase in the median statistical analysis section score between the two academic years. This difference did not reach statistical significance, suggesting that the more structured, guided format of the course may not have been as effective at improving student learning outcomes for this topic as it was for others. Notably, in both academic years, students were asked only to select the appropriate data analysis plan to receive full credit in this section; they were not asked to perform any statistical tests. Because statistics is covered in depth during a prior course in the curriculum (in the fall of the second year), the Pharmacy Practice Research course dedicates only two hours to reviewing statistical methods. The poor performance on the statistical analysis section within the research proposals may reflect a loss of knowledge following the prior course. This finding may also reflect the difficulty of mastering data analysis and statistics without repeated application of the material. Pharmacy students’ confidence in their ability to evaluate statistical methods in the medical literature is low at baseline, even among those who took a statistics course prior to pharmacy school.20 Furthermore, past studies of PharmD research and literature evaluation courses,19,21 as well as of biostatistics knowledge among pharmacy students,22 residents,23 and practitioners,24,25 have reported lower degrees of competency with biostatistics than with other aspects of research design and interpretation. Enhancing students’ understanding and application of statistical testing is an obvious area for improvement in this and other PharmD research design courses.

Our findings are limited by the retrospective design of the study, which made randomization impossible. However, efforts were made to mitigate this limitation by conducting the study with relatively high stringency. All student research proposals were graded by independent reviewers who are pharmacists, researchers, and educators. Additionally, to decrease bias, all research proposals were de-identified with respect to student and academic year, and a formal rubric that had been pilot tested by the reviewers was used for grading. Another limitation of the study was the small sample size: only 10 of the 20 proposals from 2013-2014 and 10 of the 22 proposals from 2015-2016 were assessed. Despite the small sample size, we were able to find significant differences in reviewer-assigned points for the overall proposal score and for two of the subsections. Although a larger sample might have revealed significant differences in additional subsections, our findings support the hypothesis that the more formalized, guided teaching approach used during the later academic year resulted in improved learning outcomes for the students. Despite the improved scores, some sections in the later academic year still showed relative weaknesses, such as those relating to data analysis and statistical testing, that should be addressed in future iterations of the course.

CONCLUSION

This study demonstrated that a more formal teaching approach with structured, guided activities improved the quality of research proposals submitted by pharmacy students. Adding more structure, more assignments, and additional rubric-guided feedback on those assignments, and focusing each assignment on a specific aspect of the research proposal process, enhanced student performance on their research proposals. Because not all aspects of the research proposal improved with this more structured format, future research is needed to better understand teaching strategies that will support continued student development and mastery of topics such as statistics/data analysis and human subjects’ protection.

REFERENCES

1. Boyer E. Scholarship Reconsidered: Priorities of the Professoriate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching; 1990.
2. Kennedy RH, Gubbins PO, Luer M, Reddy IK, Light KE. Developing and sustaining a culture of scholarship. Am J Pharm Educ. 2003;67(3):1-18. doi: 10.5688/aj670392.
3. Sevean PA, Poole K, Strickland DS. Actualizing scholarship in senior baccalaureate nursing students. J Nurs Educ. 2005;44(10):473-476.
4. Lee MW, Clay PG, Kennedy WK, et al. The essential research curriculum for doctor of pharmacy degree programs. Pharmacotherapy. 2010;30(9):966. doi: 10.1592/phco.30.9.966.
5. Speedie MK, Baldwin JN, Carter RA, Raehl CL, Yanchick VA, Maine LL. Cultivating “habits of mind” in the scholarly pharmacy clinician: report of the 2011-12 Argus Commission. Am J Pharm Educ. 2012;76(6):Article S3. doi: 10.5688/ajpe766S3.
6. Slack MK, Martin J, Worede L, Islam S. A systematic review of extramural presentations and publications from pharmacy student research programs. Am J Pharm Educ. 2016;80(6):Article 100. doi: 10.5688/ajpe806100.
7. Fuji KT, Galt KA. Research skills training for the doctor of pharmacy in U.S. schools of pharmacy: a descriptive study. Int J Pharm Pract. 2009;17(2):115-121. doi: 10.1211/ijpp/17.02.0007.
8. Bertolami CN. The role and importance of research and scholarship in dental education and practice. J Dent Educ. 2002;66(8):918-924; discussion 925-926.
9. Bieschke KJ, Fouad NA, Collins FL, Halonen JS. The scientifically-minded psychologist: science as a core competency. J Clin Psychol. 2004;60(7):713-723. doi: 10.1002/jclp.20012.
10. Perez A, Rabionet S, Bleidt B. Teaching research skills to student pharmacists in one semester: an applied research elective. Am J Pharm Educ. 2017;81(1):Article 16. doi: 10.5688/ajpe81116.
11. Nykamp D, Murphy JE, Marshall LL, Bell A. Pharmacy students’ participation in a research experience culminating in journal publication. Am J Pharm Educ. 2010;74(3):1-7.
12. Cailor SM, Chen AMH, Kiersma ME, Keib CN. The impact of a research course on pharmacy students’ perceptions of research and evidence-based practice. Curr Pharm Teach Learn. 2017;9(1):28-36. doi: 10.1016/j.cptl.2016.08.031.
13. Brandl K, Adler D, Kelly C, Taylor P. Effect of a dedicated pharmacy student summer research program on publication rate. Am J Pharm Educ. 2017;81(3):Article 48.
14. Cooley J, Nelson M, Slack M, Warholak T. Outcomes of a multi-faceted educational intervention to increase student scholarship. Am J Pharm Educ. 2015;79(6):Article 80. doi: 10.5688/ajpe79680.
15. McGraw KO, Wong SP. Forming inferences about some intraclass correlation coefficients. Psychol Methods. 1996;1(1):30-46. doi: 10.1037/1082-989X.1.1.30.
16. Hallgren KA. Computing inter-rater reliability for observational data: an overview and tutorial. Tutor Quant Methods Psychol. 2012;8(1):23-34. doi: 10.20982/tqmp.08.1.p023.
17. Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15(2):155-163. doi: 10.1016/j.jcm.2016.02.012.
18. Anderson HD, Saseen JJ. The importance of clinical research skills according to PharmD students, first-year residents, and residency directors. Curr Pharm Teach Learn. 2017;9(2):224-229. doi: 10.1016/j.cptl.2016.11.011.
19. Wuller CA. A capstone advanced pharmacy practice experience in research. Am J Pharm Educ. 2010;74(10):Article 180.
20. Feild C, Belgado B, Dougherty J, Doering P, Gong Y. The use of application-based learning techniques in a college of pharmacy-based biostatistics course. Curr Pharm Teach Learn. 2015;7(5):599-605. doi: 10.1016/j.cptl.2015.06.021.
21. Reidt SL, Morgan J, Janke KK. A series of literature evaluation skill development interventions progressing in complexity. Curr Pharm Teach Learn. 2016;8(6):846-854. doi: 10.1016/j.cptl.2016.08.005.
22. Sheehan AH, Wagner LE, Sowinski KM. Student pharmacists’ knowledge of biostatistical and literature evaluation concepts. Curr Pharm Teach Learn. 2016;8(5):622-628. doi: 10.1016/j.cptl.2016.06.013.
23. Bookstaver PB, Miller AD, Felder TM, Tice DL, Norris LB, Sutton SS. Assessing pharmacy residents’ knowledge of biostatistics and research study design. Ann Pharmacother. 2012;46(7-8):991-999. doi: 10.1345/aph.1Q772.
24. Ferrill MJ, Blalock SJ, Norton LL. A survey of the literature evaluation skills of health-system pharmacists. Hosp Pharm. 2000;35(7):721-727.
25. Ferrill MJ, Norton LL, Blalock SJ. Determining the statistical knowledge of pharmacy practitioners: a survey and review of the literature. Am J Pharm Educ. 1999;63(4):371-376.
26. Overholser BR, Sowinski KM. Development and student evaluation of an introductory biostatistics course as a required course in the doctor of pharmacy curriculum. Curr Pharm Teach Learn. 2010;2(3):171-179. doi: 10.1016/j.cptl.2010.04.006.
27. Kim SE, Whittington JI, Nguyen LM, Ambrose PJ, Corelli RL. Pharmacy students’ perceptions of a required senior research project. Am J Pharm Educ. 2010;74(10):Article 190.
28. Vaidean GD, Vansal SS, Moore RJ, Feldman S. Student scientific inquiry in the core curriculum. Am J Pharm Educ. 2013;77(8):Article 176. doi: 10.5688/ajpe778176.
29. Kidd RS, Latif DA. Student evaluations: are they valid measures of course effectiveness? Am J Pharm Educ. 2004;68(3):1-5. doi: 10.5688/aj680361.
30. Surratt CK, Desselle SP. Pharmacy students’ perceptions of a teaching evaluation process. Am J Pharm Educ. 2007;71(1):Article 6.
