American Journal of Pharmaceutical Education. 2019 Dec;83(10):7074. doi: 10.5688/ajpe7074

Evaluation of a Situational Judgement Test to Develop Non-Academic Skills in Pharmacy Students

Fiona Patterson a, Kirsten Galbraith b, Charlotte Flaxman a, Carl MJ Kirkpatrick b
PMCID: PMC6983889  PMID: 32001871

Abstract

Objective. To design, implement, and psychometrically evaluate a situational judgement test (SJT) to use as a formative assessment of pharmacy students’ non-academic skills in an Australian-based university.

Methods. An SJT was developed using a previously validated design process that included subject matter experts. The first phase comprised the design of a blueprint through stakeholder consultation and the development of bespoke attribute definitions and a tool specification. SJT items were then developed through subject matter expert interviews and an in-depth review process.

Results. Students (N=702) from four cohorts (first through fourth years) of a Bachelor of Pharmacy degree program completed the situational judgement test. Data from 648 students were eligible for inclusion in the analysis. The SJT demonstrated good reliability, appropriateness for use (difficulty and quality), fairness, and face validity. The variability in students’ scores suggested that the SJT may be a useful metric for identifying the students most in need of additional support.

Conclusion. Evaluation of the SJT demonstrated that the tool was valid, reliable, fair and appropriate to use as a formative assessment. Through implementing an SJT such as this, pharmacy students are provided the opportunity to receive feedback on their non-academic skills and consider how to approach challenging or unfamiliar situations before entering the profession.

Keywords: situational judgement test, development, non-academic, pharmacy education

INTRODUCTION

Students’ prior academic achievement is a crucial factor in their acceptance into higher education institutions.1 In the United States, many pharmacy schools or colleges require the Pharmacy College Admission Test (PCAT)2 and prerequisite courses, whereas pharmacy schools in the United Kingdom and Australia admit students at the undergraduate level (ie, Bachelor of Pharmacy [BPharm] degree). Although entrance to pharmacy education has traditionally focused on academic ability,3 there is increasing recognition that prior academic achievement is not the only marker for longer term success in pharmacy and other health care profession roles where there is now increasing emphasis on important non-academic attributes.1,4-7

Globally there is a growing need to educate students in non-academic attributes important for clinical practice, including empathy, teamwork, and resilience. Recent research suggests that these non-academic attributes are likely to be important factors for success in pharmacy and other health care professions over time.1,7-10 However, there has been much debate over who the “best” students to admit are. Some authors argue that for certain students, strength in foundational sciences (eg, chemistry and mathematics) may actually be associated with less-developed interpersonal and non-academic skills. If academic and non-academic skills are inversely related, then this creates a tension within pharmacy education.11 Other authors have highlighted the need to select students with strong communication and problem-solving skills for admission into pharmacy school.1 In medical education, Lievens found that students’ performance during medical school and clinical practice was predicted more by an interpersonal skills assessment at the time of selection than by scores on cognitively oriented tests.12 Taken together with other research demonstrating the predictive validity of measures of non-academic attributes for in-role performance,13-16 evidence suggests that as trainees progress through the educational pathway into practice, non-academic attributes become increasingly important for effective performance.

The situational judgement test (SJT) methodology has been identified as an effective way to assess important non-academic attributes at the point of selection for many different health care roles at various levels in the education pathway.17-19 SJTs are used to measure the combination of a person’s experience, ability, and personality,20,21 and have been used in health care education to identify non-academic attributes at every step of the education pathway, from selection into undergraduate and postgraduate degree programs through to clinical practice.14,22-26 SJTs present individuals with scenarios they are likely to encounter in the workplace and require judgments to be made regarding the appropriateness of a variety of responses.27 The types of SJT questions used can vary. For example, an individual may respond to a situational judgement scenario by picking either the most or least appropriate response or by independently rating the importance of several different items. Several recent studies show SJTs used as part of selection to have good reliability and predictive validity for a range of performance outcomes in several health care professions.10,28-31 In addition, Pangallo and colleagues used an SJT methodology to conduct formative assessments in a multi-professional group in palliative care. In their study, they provided evidence that the SJT methodology is effective for enhancing resilience in experienced care workers as part of an educational intervention.32

To the authors’ knowledge, no research has used the SJT methodology as a formative assessment as part of an educational intervention for student populations. In this study, we designed and evaluated the effectiveness of an SJT for formative assessment purposes as part of a broader educational intervention to enhance pharmacy students’ professional skills (eg, empathy). The aim of this study was to design, implement, and psychometrically evaluate an institution-specific SJT as a formative assessment for pharmacy students in an Australian university. The secondary aim was to explore whether the SJT could differentiate the level of non-academic skills in a cohort of pharmacy students and whether these results were related to student demographics.

METHODS

This study is presented in two parts. Part one describes the development of the SJT and associated items, and part two outlines the analyses conducted to explore the tool’s psychometric properties. Following a previously validated SJT design process outlined in Figure 1,30 we developed the SJT in the following three key phases: stakeholder consultation, item writing and development, and item review and answer key confirmation.

Figure 1. Situational Judgement Test Development Process

During the first phase of the SJT design process, we consulted stakeholders to develop a blueprint for the tool, including defining the non-academic attributes to be targeted. We conducted one-on-one and group consultations with faculty members, clinical educators, and pharmacy practice leaders to ascertain which graduate attributes they most valued. These individuals were asked to draw upon examples of when they have seen effective and ineffective pharmacy student behavior.

After completing the consultations, we analyzed the data by employing a thematic analysis. Our analysis revealed that the stakeholders viewed the following four attributes as most important for students planning a career in pharmacy: integrity, empathy, team involvement, and critical thinking and problem solving.33 Definitions for each attribute were written using information provided by the key stakeholders, which ensured that the defined attributes would be specific to pharmacy students (Table 1).

Table 1.

Target Attributes Measured Within a Situational Judgement Test Developed to Assess Non-Academic Skills in Pharmacy Students


After the consultations, we also decided on specifications for the tool. We decided that the scenarios needed to be understandable by students with limited preexisting knowledge of practicing as a pharmacist. We determined the types of health care or education settings in which the scenarios would take place. Lastly, we created an overall item and response format whereby students were asked to rate the appropriateness of a number of response items for each scenario using a four-point Likert scale (Appendix 1).

After concluding the stakeholder consultation, we entered the second phase of the SJT design process: item writing and development. Experienced organizational psychology researchers conducted telephone interviews with nine subject matter experts (SMEs) to gather material for the SJT scenarios. The SMEs included lecturers and tutors who had contact with pharmacy students on a regular basis, representatives from the Australian Pharmacy Council, and pharmacists from a range of workplace sites that hosted university students for pharmacy placements. All SMEs were registered pharmacists: six were currently practicing in patient care roles and five were academics, including two in academic-practice roles. The researchers interviewed the SMEs to gather material that would be relevant to all pharmacy students, including students who had recently commenced their first year of study. Interviews lasted approximately 45 minutes and were conducted using the critical incident technique.34 In contrast to approaches that simply ask participants to discuss their opinions or views about a particular subject, the critical incident technique explores interviewees’ direct experience with significant events. Through this technique, SMEs provided examples of situations in which they had witnessed a pharmacy student perform the selected attributes particularly effectively or ineffectively. Then by probing the interviewee for more context, the researchers were able to shape these real-life situations into scenarios for the SJT.

During the interviews, the researchers recorded the resulting scenarios and possible ways that students could respond to each scenario. For each scenario, researchers developed five to eight corresponding items of varying degrees of appropriateness, along with a suggested answer key indicating the appropriateness of each item for a pharmacy student. In total, 50 scenarios (11 critical-thinking and problem-solving, 16 empathy, 15 integrity, and eight teamwork scenarios) containing 380 nested items were developed. Lastly, the researchers reviewed the content and allocated a target domain (eg, empathy) to each scenario and item.

During the third phase of the SJT design process, items were reviewed and an answer key solidified. First, an experienced researcher trained in the development of SJTs facilitated a one-day workshop to review the scenarios and items with a new group of 10 SMEs. Of the 10 SMEs, aged from 26 to 66 years, nine were registered pharmacists, and four were female. The initial answer key for each item was reviewed alongside the scenarios. This workshop aimed to ensure that each of the SJT items was fair, appropriate, and relevant to the role of a pharmacy student. For example, we asked SMEs to consider whether any of the items would unfairly disadvantage any identified subgroup of students.

Following the review workshop, once updates had been made to the scenario content, the SJT paper was developed. The test comprised 40 scenarios that spanned the four domains and a range of relevant pharmacy themes. The answer key was finalized based on consensus from a third SME group made up of 10 people (all female, aged 26 to 55 years, and 70% born in Australia). This final group of SMEs completed the SJT under examination-like conditions and provided comments on the item content. Psychometric experts analyzed the SMEs’ responses to each item by using consensus analysis to determine the level of consensus and to finalize the answer keys.30 In total, 29 SMEs (three groups) took part in the development of the SJT across the various stages of development.

After completing the design phases of the SJT, we moved on to part two of the study, evaluating the psychometric properties of the SJT. A pilot study was conducted using the SJT created in part one of the study. The test consisted of 290 items nested within 40 scenarios (an example scenario and associated items are shown in Appendix 1).

The SJT was piloted with four cohorts of pharmacy students in different years of study. The SJT was uploaded onto a university survey and data management tool. We recruited students (N=824) during a class briefing in which we presented students with details of the purpose of the tool and what their data would be used for, in addition to explaining that they could withdraw at any point without penalty. The students provided consent for their data being used for research purposes before beginning the pilot SJT. All testing took place under examination conditions and was completed in groups over 10 days. We assigned students (irrespective of their year in the program) to a time and computer laboratory based on timetabling, availability of students, and availability of a suitable space. All students (N=824) were invited to complete the tool under examination conditions. Students were given 90 minutes to complete the test and provide demographic information and feedback. Timing was monitored through a timer on the online system; the SJT ended automatically when the allocated time had elapsed whether or not the student had responded to all of the scenarios. Approval for this study was granted by the Human Research Ethics Committee at the host university.

RESULTS

The majority (702 of 824, 85.2%) of students in years one through four of the pharmacy degree program completed the SJT. However, students’ data were removed from analyses if they did not consent to their details being used in research or if they had a high number of incomplete test items (31 or more missing responses). The data were then checked for extreme outliers using a threshold determined by sample size (z score beyond ±3.18), but none of the remaining participants met this criterion for removal. Therefore, 648 participants were included in further analyses. Table 2 shows the breakdown of response rates within each cohort. Additionally, as this was a pilot, we expected high redundancy among items, as is usual during the preliminary stage of establishing the content for an SJT. Low-quality SJT items were therefore identified using the item partial correlation (the correlation between the item and the overall mean SJT score, excluding the item itself). Of the 290 items, 179 yielded a satisfactory item partial correlation. The remaining 111 items (38.3%) were of low quality and were therefore withdrawn from further analyses.
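The item partial correlation described above is commonly computed as a corrected item-total correlation. A minimal numpy sketch is shown below; it assumes item scores are stored as a students × items array and is an illustration of the general technique, not the authors' actual analysis code.

```python
import numpy as np

def item_partial_correlations(responses):
    """Corrected item-total ("item partial") correlations.

    responses: 2-D array of shape (n_students, n_items) holding each
    student's score on each SJT item. For each item, the item scores are
    correlated with the mean score of all *other* items, so the item
    does not inflate its own correlation.
    """
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    partials = []
    for j in range(n_items):
        # Total score excluding item j, to avoid part-whole overlap
        rest = np.delete(responses, j, axis=1).mean(axis=1)
        r = np.corrcoef(responses[:, j], rest)[0, 1]
        partials.append(r)
    return np.array(partials)
```

Items whose partial falls below the chosen cut-off would then be flagged for withdrawal, as was done for the 111 low-quality items here.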

Table 2.

Sampling Results for Undergraduate Pharmacy Students Who Completed a Situational Judgement Test Developed to Assess Their Non-Academic Skills


Of the 648 participants represented in the final sample, 27.9% were male and 71.6% were female (missing data from 0.5%). The mean age of students in the final sample was 20.7 years (SD=2.8 years, with a range of 17-53 years). Out of these participants, 28.7% were in their first year of study for their pharmacy degree; 26.5% in their second year; 25.0% in their third; and 19.8% in their fourth year. For almost half (49.8%) of the students, English was their first language (missing data from 1.7%); 40.9% of the students were born in Australia (missing data from 6.8%).

We conducted psychometric analyses to evaluate the internal consistency, appropriateness for use (difficulty and quality), fairness, and face validity (ie, student perceptions) of the SJT. Each of these points is discussed below. The SJT demonstrated good reliability with regard to internal consistency (α=.91, where the minimum desired level for such a tool is α=.70).35,36 The distribution of the SJT total scores was close to normal, showing that there was a spread of scores and that the SJT differentiated between students (Figure 2). The mean score of 544.8 is 81.4% of the total possible score (669).
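The internal consistency statistic reported here, Cronbach's alpha, can be computed directly from the item-level data. The sketch below shows the standard formula in numpy, assuming the same students × items array layout as above; it is illustrative, not the authors' code.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for internal consistency.

    responses: (n_students, n_items) array of item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
    where k is the number of items.
    """
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)      # per-item variance
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

A value of .91, as observed here, is well above the .70 minimum typically desired for a formative tool.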

Figure 2. Histogram to Demonstrate Distribution of Total SJT Scores

The item quality was assessed by examining the item partial correlation. Items were classified in terms of their quality as follows: good items, partial of 0.25 or above; satisfactory items, between 0.24 and 0.17; moderate items, between 0.16 and 0.13; and limited items, below 0.13. These cut-offs were based on a previously validated methodology outlined by Patterson and Driver.30 Of the 179 items included in the analysis, 83 were good, 58 were satisfactory, 16 were moderate, and 22 were of limited quality. The 22 items with limited quality did not affect the overall reliability of the tool; therefore, we included them in the final SJT. However, we recommend monitoring the performance of these items in the future to inform their ongoing use in the SJT.
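The quality bands above reduce to a simple threshold rule. A small sketch, using the cut-offs as stated in the text (from Patterson and Driver's methodology):

```python
def classify_item_quality(partial):
    """Map an item partial correlation to the quality bands used in the text:
    good >= 0.25, satisfactory 0.17-0.24, moderate 0.13-0.16, limited < 0.13."""
    if partial >= 0.25:
        return "good"
    if partial >= 0.17:
        return "satisfactory"
    if partial >= 0.13:
        return "moderate"
    return "limited"
```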

To examine the SJT’s fairness, group performance differences were analyzed based on gender, country of birth, and first language. For all group differences, t tests were used. Cohen’s d was used to determine the size of the effect between the groups (0.20 is considered a small effect, 0.50 a medium effect, and 0.80 a large effect).37 Full results for this aspect of the SJT are available in Table 3.

Table 3.

Assessment of Demographic Differences on Undergraduate Pharmacy Students’ Scores on a Situational Judgement Test to Assess Non-Academic Skills


With regards to gender, an independent samples t test showed a significant difference in total SJT score between male and female students. Female students scored higher than male students, although the effect size was small.

In addition, the difference in scores between students born in Australia and students born outside Australia was explored using an independent samples t test. Those born in Australia scored significantly higher; however, the effect size was small. Furthermore, an independent samples t test showed that students whose first language was English scored significantly higher on the SJT than those whose first language was not English, although the effect size was small.
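Each of these comparisons pairs an independent-samples t statistic with Cohen's d using the pooled standard deviation. A minimal numpy sketch of that calculation (an illustration of the standard formulas, not the authors' analysis code; the p-value lookup against the t distribution is omitted to keep it self-contained):

```python
import numpy as np

def group_difference(scores_a, scores_b):
    """Independent-samples t statistic and Cohen's d (pooled SD) for two
    groups, mirroring the gender / country-of-birth / first-language
    comparisons described in the text."""
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    na, nb = len(a), len(b)
    # Pooled variance across the two groups
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    t = (a.mean() - b.mean()) / np.sqrt(pooled_var * (1 / na + 1 / nb))
    d = (a.mean() - b.mean()) / np.sqrt(pooled_var)
    return t, d
```

By the benchmarks cited above, a |d| near 0.20 (as found for each comparison here) is a small effect.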

We anticipated that students’ country of birth and native language could be confounding variables, as those born outside of Australia were also more likely to have English as a second language. Therefore, a multiple regression analysis was conducted to further understand the impact of country of birth and native language on SJT scores. Native language was entered in step one and explained 2.5% of the variance, F(1,602)=16.33, p<.001. Adding country of birth in step two explained an additional 2.6% of the variance, F(2,602)=17.36, p<.001, indicating that country of birth significantly predicted SJT score when native language was adjusted for. This regression analysis showed that both native language and country of birth explained significant amounts of the variation in students’ SJT scores.
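The hierarchical (stepwise-entry) regression described above amounts to fitting two nested OLS models and comparing their R² values. The sketch below illustrates this with synthetic data (all variable names and effect sizes are hypothetical, chosen only to mimic the structure of the analysis):

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit, with an intercept column added to X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Hypothetical illustration: native language entered in step 1, country
# of birth added in step 2; the R^2 increase is the additional variance
# explained by country of birth after adjusting for native language.
rng = np.random.default_rng(1)
n = 600
language = rng.integers(0, 2, n).astype(float)  # 1 = English first language
birth = np.clip(language + rng.integers(0, 2, n) - rng.integers(0, 2, n), 0, 1)
score = 540 + 5 * language + 4 * birth + rng.normal(0, 20, n)

r2_step1 = r_squared(language.reshape(-1, 1), score)
r2_step2 = r_squared(np.column_stack([language, birth]), score)
delta_r2 = r2_step2 - r2_step1  # analogous to the 2.6% increment reported
```

Because the step-one model is nested in the step-two model, R² cannot decrease; the question is whether the increment is statistically significant, which the study assessed with an F test.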

Finally, we explored student perceptions of the SJT to provide some initial evidence of the face validity of the test. For the purposes of this pilot, face validity included students’ perceptions of how appropriate and relevant the content of the SJT was to them. All students who completed the SJT were asked to complete a previously validated evaluation questionnaire based on justice theory.38 Of the 668 students who completed the SJT and consented to their data being used, 642 (96.1%) completed the questionnaire. Students were asked to indicate their level of agreement with several statements regarding relevance, difficulty, fairness, and whether they felt that the SJT could effectively differentiate between levels of performance of students. Findings showed that, in general, students reported positive perceptions about the SJT: 92.5% agreed that the content of the SJT was relevant to the pharmacy degree; 86.7% of students agreed that the SJT was appropriate for their level of training, and 88.4% agreed that the SJT was fair.

Students were also asked for their views on the level of difficulty of the SJT. Approximately 85% agreed that the SJT had an appropriate level of difficulty, and 74.9% of students agreed that the SJT gave them the opportunity to demonstrate their ability.

Students were invited to provide qualitative comments on the SJT, and 281 students (42.1%) did so. The comments were analyzed to identify themes using thematic analysis; common themes are summarized below.33 The findings were generally positive and typical of reactions to an initial SJT pilot. Most comments highlighted that the students found the tool fair and relevant, and believed it was a good method for learning more about how to apply their skills in the workplace. Students also highlighted that completing the SJT provided them with a useful opportunity to reflect on their own strengths and weaknesses, and to prepare for potential situations that they might experience in clinical placements. While many students in their first or second year highlighted this as a positive aspect of the SJT, they also noted that their unfamiliarity with some of the situations presented in the scenarios (eg, completing placements) made it more difficult to respond to the questions.

In addition, students stated that the SJT was a novel and useful way to increase their exposure to workplace situations, and an interesting and engaging tool to complete. Students also believed the SJT allowed them to learn more about the non-academic attributes and interpersonal skills required to be an effective pharmacist. However, many students commented that the tool was overly long and that reviewing the SJT scenarios and associated items was a fairly complex task that was tiring to complete in one sitting.

DISCUSSION

To the authors’ knowledge, this is the first study to report on the development, implementation, and psychometric evaluation of an SJT used as a formative assessment for undergraduate pharmacy students. Our psychometric evaluation indicated that the SJT demonstrated appropriate levels of reliability. Similar to findings within other health care professions,14 the SJT’s ability to discriminate between students and its level of difficulty both indicate that it is likely to help colleges and schools identify students who may require additional support to develop core non-academic attributes throughout their pharmacy degree, as these students will sit at the lower end of the distribution of SJT scores. Further, the difficulty of the SJT was appropriate for a formative assessment: because many items are considered “easy,” the majority of students will perform at a satisfactory level. Thus, the SJT would be a useful metric to support the pharmacy school in identifying poorly performing students who may require additional support. The item quality metrics were within acceptable levels for this type of pilot study. We plan to monitor the performance of the poorer-quality items in the future to inform their ongoing use in the SJT and the continuous improvement of the tool.

The distribution of scores shows a pattern similar to previous studies using an SJT methodology: a relatively normal distribution with a slight negative skew.18 Other studies have demonstrated that students falling towards the bottom end of the distribution (low scorers) tend to be poorer performers both in education and in clinical practice.14,39,40 Moreover, research in medical education has shown the SJT methodology to be especially effective at supporting schools in identifying students who may be poorer performers in relation to important non-academic and professional attributes.14

With regard to the fairness of the test, the SJT showed small differential effects based on gender and country of birth. We believed these effects were small enough not to warrant test revision. There was also a question of whether country of birth or language impacted a student’s total SJT score, given the confounding between these two variables. However, the findings suggest that the group differences observed in SJT scores were not simply due to students’ differing proficiency in English. Finally, feedback from the students regarding face validity was encouraging, as most students were positive about the SJT and believed that it was relevant, appropriate, and fair. These findings were viewed as particularly positive, given that the SJT as a formative assessment is new in this context.

In the future, we aim to provide students with feedback on their SJT performance. After completing the tool, students will be provided with the percentage of their responses that were exactly or closely aligned with the expert answer key. Additionally, the provision of questions and/or statements will allow further reflection on the four domains and consideration of potential areas of development for the upcoming academic year. Providing feedback to students will allow them to consider the extent to which their responses agreed with the expert answer key, and therefore to consider their specific strengths or where further development or support may be required. The opportunity to reflect on and discuss their answers after completing the SJT may also be important to ensure that students understand why certain responses are more appropriate than others, thus facilitating opportunities for continued learning.

Completion of the SJT provides students with the opportunity for self-assessment and reflection. This is important, as the ability to identify specific behavioral objectives and compare their own perceptions with their SJT results will enable students to understand the “best” way to do things and to adapt their behavior based on this self-assessment.8 These self-assessment skills are an important part of personal and professional development in the health professions, as flawed perceptions could be dangerous, especially if inflated self-perceptions result in a lack of understanding of what the students do not yet know. Self-assessment is an important skill to develop, as it is foundational to continuous professional development (CPD) and necessary for reflective practice.41 Our research findings are consistent with previous research that used a similar methodology to identify levels of resilience within palliative care workers.32 This suggests that the SJT methodology is appropriate for identifying students towards the lower end of the distribution, and thus will hopefully support the pharmacy school in identifying students who may require further tailored support to build their skills in these areas. Our findings uniquely add to the research literature in this area, as this is the first study of its kind to use an SJT methodology to develop a formative assessment that provides feedback on non-academic attributes to a student population.

Our study has several limitations. Students reported that they found the SJT to be a useful opportunity to reflect on their own strengths and weaknesses and to prepare for potential situations that they might experience in placements. However, for some scenarios, the limited information provided may have meant that students could not fully immerse themselves in the situation. To account for this, future research could explore the possibility of creating a multimedia SJT. A multimedia format could improve the fidelity of the test and provide students with greater context for the scenarios.

Within the design process of the SJT, we aspired to develop scenarios that did not force students to select actions that did not align with their own values and behaviors. We achieved this aim by adopting a format whereby each action was independently rated in terms of its appropriateness. However, a limitation inherent in a tool that uses an SJT methodology is that the responses are closed, thereby not allowing students the opportunity to provide an explanation of the rationale for their ratings of each response option. Although the authors did not deem it appropriate to offer respondents the option of providing free-text responses on the SJT, the pharmacy school is building educational activities to foster open-ended student discussion around the development of these non-academic skills.

While past research has focused on the use of SJTs within selection processes,15,17,42 the current research provides support for the methodology to be used to develop an SJT for formative use as part of a curriculum. Using this approach will support the assessment of non-academic skills to identify development areas for students in the upcoming academic year. This unique approach allows students to reflect and put themselves into the scenario in a safe manner and consider how they would react to various dilemmas that they may be faced with in their role. Further, the opportunity in the future to receive feedback regarding their alignment to the experts in these situations can provide further opportunity for reflection in each of these non-academic attributes and drive motivation to develop skills in these areas.

One approach to supporting students with the development of these non-academic skills on an ongoing basis would be having them complete an SJT annually. This would provide the faculty with a metric to use in monitoring the progression of pharmacy students throughout their pharmacy degree program. This information could also help the faculty to tailor the curriculum to students’ needs throughout the program, thereby emphasizing the importance of these skills to students and providing continued support to them.

Asking students to complete the SJT and providing them with formative feedback may also create opportunities for informal learning. Peer-to-peer learning may occur through informal student discussions of various approaches. The faculty could seek to formalize some of this informal peer-to-peer learning by having students share ideas on how to approach some of the more challenging situations and receive “expert” feedback as part of a group discussion.

Finally, involving subject matter experts in the initial and continued design process of the SJT provided a formalized way for senior professionals and educators to share their knowledge and expertise with students. Without completing an SJT, some students may enter the profession without receiving any feedback on their non-academic skills and how they should approach challenging situations. By bringing this learning forward through the SJT, alongside other educational interventions, it allows students to receive evidence and feedback on their non-academic skills much sooner in the career pathway.

CONCLUSION

A situational judgement test was developed using best practice methodology incorporating subject matter experts from the profession and implemented in a pharmacy program at an Australian university. The results of the evaluation demonstrated evidence of the tool’s validity, reliability, fairness, and appropriateness to use as a formative assessment. Further research is required to demonstrate the impact on non-academic skills in pharmacy settings.

ACKNOWLEDGMENTS

The authors wish to thank Kayley Lyons from Monash University, who assisted with the final revisions to this manuscript.

Appendix 1.

Example of a Situational Judgement Scenario with Response Instructions and Associated Items

[Appendix figure: ajpe7074-app1.jpg]

REFERENCES

  • 1. Jones J, Krass I, Holder G. Selecting pharmacy students with appropriate communication skills. Am J Pharm Educ. 2000;64(1):68-73.
  • 2. PCAT: Pharmacy College Admission Test. http://pcatweb.info/. Accessed November 1, 2017.
  • 3. Shaw J, Kennedy J, Jensen M, Sheridan J. An international perspective on pharmacy student selection policies and processes. Am J Pharm Educ. 2015;79(8):Article 115.
  • 4. Douglas CA. Towards an operational definition of pharmacy clinical competency. Diss Abstr Int. 2011;73-06:Sect. B:3490.
  • 5. Urteaga E, Attridge R, Tovar J. Evaluation of clinical and communication skills of pharmacy students and pharmacists with an objective structured clinical examination. Am J Pharm Educ. 2015;79(8):Article 112.
  • 6. Zhao L, Sun T, Sun B. Identifying the competencies of doctors in China. BMC Med. 2015;15:1.
  • 7. Patterson F, Tavabie A, Denney M, et al. A new competency model for general practice: implications for selection, training, and careers. Br J Gen Pract. 2013;63(610):331-338.
  • 8. Austin Z, Gregory P. Evaluating the accuracy of pharmacy students’ self-assessment skills. Am J Pharm Educ. 2007;71(5):Article 89.
  • 9. Murawski M, Miederhoff P. Pharmaceutical caring. Am J Pharm Educ. 1994;58:310-315.
  • 10. Patterson F, Lievens F, Kerrin M, Zibarras L, Carette B. Designing selection systems for medicine: the importance of balancing predictive and political validity in high-stakes selection contexts. Int J Sel Assess. 2012;20(4):486-496.
  • 11. Talley C. Who becomes a pharmacist? Am J Hosp Pharm. 1994;51:317.
  • 12. Lievens F. Adjusting medical school admission: assessing interpersonal skills using situational judgement tests. Med Educ. 2013;47(2):182-189.
  • 13. Ferguson E, James D, Madeley L. Factors associated with success in medical school: systematic review of the literature. BMJ. 2002;324(7343):952-957.
  • 14. Cousans F, Patterson F, Edwards H, McLaughlan J, Good D. Evaluating the complementary roles of a situational judgement test and academic assessment for entry into clinical practice. Adv Health Sci Educ. 2017;22(2):401-413.
  • 15. Zibarras L, Patterson F, Driver R. A future research agenda for selection into healthcare. Eur J Dent Educ. 2017;1-3.
  • 16. Patterson F, Cousans F, Edwards H, Rosselli A, Nicholson S, Wright B. The predictive validity of a text-based situational judgment test in undergraduate medical and dental school admissions. Acad Med. 2017;92(9):1250-1253.
  • 17. Patterson F, Zibarras L, Ashworth V. Situational judgement tests in medical education and training: research, theory and practice: AMEE Guide No. 100. Med Teach. 2016;38(1):3-17.
  • 18. Rowett E, Patterson F, Cousans F, Elley K. Using a situational judgement test for selection into dental core training: a preliminary analysis. Br Dent J. 2017;222(9):715-719.
  • 19. Petty-Saphon K, Walker KA, Patterson F, Ashworth V, Edwards H. Situational judgment tests reliably measure professional attributes important for clinical practice. Adv Med Educ Pract. 2017;8:21-23.
  • 20. Lievens F, Peeters H, Schollaert E. Situational judgment tests: a review of recent research. Pers Rev. 2008;37(4):426-441.
  • 21. Motowidlo SJ, Beier ME. Differentiating specific job knowledge from implicit trait policies in procedural knowledge measured by a situational judgment test. J Appl Psychol. 2010;95(2):321-333.
  • 22. Lievens F, Patterson F, Corstjens J, Martin S, Nicholson S. Widening access in selection using situational judgement tests: evidence from the UKCAT. Med Educ. 2016;50(6):624-636.
  • 23. Pashayan N, Gray S, Duff C, et al. Evaluation of recruitment and selection for specialty training in public health: interim results of a prospective cohort study to measure the predictive validity of the selection process. J Public Health (Oxf). 2015;38(2):194-200.
  • 24. Patterson F, Ashworth V, Zibarras L, Coan P, Kerrin M, O’Neill P. Evaluations of situational judgement tests to assess non-academic attributes in selection. Med Educ. 2012;46(9):850-868.
  • 25. Patterson F, Lievens F, Kerrin M, Zibarras L, Carette B. Designing selection systems for medicine: the importance of balancing predictive and political validity in high-stakes selection contexts. Int J Sel Assess. 2012;20(4):486-496.
  • 26. Patterson F, Knight A, Dowell J, Nicholson S, Cousans F, Cleland J. How effective are selection methods in medical education and training? Evidence from a systematic review. Med Educ. 2016;50(1):36-60.
  • 27. Sackett P, Lievens F. Personnel selection. Annu Rev Psychol. 2008;59:419-450.
  • 28. Koczwara A, Patterson F, Zibarras LD, Kerrin M, Irish B, Wilkinson M. Evaluating cognitive ability, knowledge tests and situational judgement tests for postgraduate selection. Med Educ. 2012;46(4):399-408.
  • 29. Patterson F, Ashworth V, Zibarras L, Coan P, Kerrin M, O’Neill P. Evaluations of situational judgement tests to assess non-academic attributes in selection. Med Educ. 2012;46(9):850-868.
  • 30. Patterson F, Driver R. Situational judgement tests. In: Patterson F, Zibarras L, eds. Selection & Recruitment in the Healthcare Professions. London: Palgrave Macmillan; 2018:79-112.
  • 31. Patterson F, Knight A, Dowell J, Nicholson S, Cousans F, Cleland J. How effective are selection methods in medical education and training? Evidence from a systematic review. Med Educ. 2016;50(1):36-60.
  • 32. Pangallo A, Zibarras L, Patterson F. Measuring resilience in palliative care workers using the situational judgement test methodology. Med Educ. 2016;50(11):1131-1142.
  • 33. King N. Template analysis. In: Symon G, Cassell C, eds. Qualitative Methods and Analysis in Organisational Research: A Practical Guide. London: Sage; 1998:118-134.
  • 34. Flanagan JC. The critical incident technique. Psychol Bull. 1954;51:327-358.
  • 35. Kline P. The Handbook of Psychological Testing. Routledge; 2000.
  • 36. Field A. Discovering Statistics Using SPSS. SAGE Publications Ltd; 2009.
  • 37. Cohen J. Statistical Power Analysis for the Behavioral Sciences. Erlbaum Associates; 1988.
  • 38. Patterson F, Zibarras L, Carr V, Irish B, Gregory S. Evaluating candidate reactions to selection practices using organisational justice theory. Med Educ. 2011;45(3):289-297.
  • 39. Patterson F, Lopes S, Harding S, Vaux E, Berkin L, Black D. The predictive validity of a situational judgement test, a clinical problem solving test and the core medical training selection methods for performance in specialty training. Clin Med. 2017;17(1):13-17.
  • 40. Patterson F, Baron H, Carr V, Plint S, Lane P. Evaluation of three short-listing methodologies for selection into postgraduate training in general practice. Med Educ. 2009;43(1):50-57.
  • 41. Fjortoft N. Self-assessment in pharmacy education. Am J Pharm Educ. 2006;70(3):1-2.
  • 42. Patterson F, Cleland J, Cousans F. Selection methods in healthcare professions: where are we now and where next? Adv Health Sci Educ. 2017;22:229-242.
