SAGE Open Nursing
2023 Oct 9;9:23779608231207217. doi: 10.1177/23779608231207217

Development, Validity and Reliability of Objective Structured Clinical Examination in Nursing Students

Carolina Chabrera, Eva Diago, Laura Curell
PMCID: PMC10563491  PMID: 37822363

Abstract

Introduction

The adoption of measurement instruments such as the Objective Structured Clinical Examination (OSCE) is essential to assess clinical competencies in nursing students.

Objective

The purpose of this study is to develop an OSCE, analyze its validity and reliability in the nursing curriculum and measure self-assessment, stress and satisfaction.

Methods

This observational study to validate a measurement instrument was carried out in two phases: the design and development of the OSCE, followed by validity and reliability analysis.

Results

A total of 118 students participated in the study. Ten scenarios were designed that incorporated six competency components extracted from the curriculum. Good results were obtained for face validity, content validity (CVI .82-.95), criterion validity (r = .71, p < .001), and reliability (Cronbach's α = .84). Satisfaction and stress scores were high, and self-assessment scores were lower than the scores obtained on the OSCE.

Conclusion

A rigorously designed OSCE provides a reliable and valid method for assessing the clinical competence of nursing students.

Keywords: objective structured clinical examination, competency-based education, nursing education, validity, reliability, psychometrics

Introduction/Background

The evaluation of clinical competencies is an essential condition of the education of health professionals (Mitchell et al., 2009). Nursing competency is considered an integrative ability of clinical knowledge, judgment, skills, attitude, and beliefs, necessary to effectively perform the nursing role in specific practice settings (Brown & Crookes, 2016).

The assessment of these competencies is a complex process if the aim is to ensure that nurses are suitable for professional practice, regardless of the environment in which it is carried out (simulation or clinical practice) (Hodges et al., 2019). Recent studies have shown that nurses with a higher level of competency deliver high-quality, safe and cost-effective health care (Furuki et al., 2023; Kim et al., 2017; Koota et al., 2021; Melnyk et al., 2018, 2020). However, conventional methods of clinical competency assessment remain a cause for concern because examiners may lack complete objectivity and standardization throughout the assessment process (Harden, 2016). In nursing, there are three main approaches to assessing clinical competencies: observation, self-assessment and a combination of both; the most common is structured observation using rubrics (Reljić et al., 2017). Wu et al. (2015) indicated the need to develop a complete clinical evaluation method with adequate validity and reliability to assess the clinical competence of nursing students.

The Objective Structured Clinical Examination (OSCE) is an instrument that uses simulation to assess clinical skills in planned scenarios with maximum objectivity (Harden & Gleeson, 1979). It has been reported that it is a feasible method to assess the competence in health education of undergraduate and postgraduate students (Harden, 2016; Najjar et al., 2016) without putting the student or the patient at risk and allowing the evaluation of competencies in a clinical context (Bates & Singh, 2018).

Review of Literature

In the literature, good practice guidelines for the use of the OSCE have been established: guidelines that provide an evidence-based approach to help academics maximize the benefits of this educational strategy (Nulty et al., 2011) and improve student learning (Kelly et al., 2016), guidelines to help teachers implement this instrument within their programs (Henderson et al., 2013), and guidelines on the implications of this form of assessment for the training of nursing students (Johnston et al., 2017; Navas-Ferrer et al., 2017). These good practice guidelines cover all aspects of this complex assessment, including content, a comprehensive scoring guide, competency components, a sequential approach to the entire process, a supportive technical environment, feedback, and transfer to practice.

In recent years, this assessment method has gained attention in nursing education because of the advantages it provides: it evaluates competencies with objective criteria, is useful for assessing communication skills, helps students develop confidence, and makes them feel better prepared for clinical practice. In addition, by ensuring safe practices, it contributes to the quality of nursing education (Smrekar et al., 2017).

Nursing students perceive the OSCE as a valuable and positive experience (Johnston et al., 2017). However, the OSCE also has limitations, such as the high levels of stress and anxiety it produces in participants (Johnston et al., 2017; Majumder et al., 2019; Vincent et al., 2022), as well as the complexity and high economic cost of implementing this evaluation in the nursing curriculum (Goh et al., 2019; Palese et al., 2012).

Currently, there is a great diversity of OSCEs described. However, scientific evidence supporting the effects of the OSCE on nursing education compared to other educational tools is limited (Montgomery et al., 2021). Although several studies show the content of the OSCE and the nature of its evaluation, there are few that determine the validity and reliability of the OSCE for the evaluation of clinical competencies in nursing students (Goh et al., 2019; Lee et al., 2020; Montgomery et al., 2021).

For this reason, the objective of this study was to develop an OSCE, analyze its validity and reliability as an instrument for measuring clinical competencies in the nursing curriculum, and measure self-assessment, stress, and satisfaction in third-year nursing students.

Methods

Design

This research involved an observational study to validate a measurement instrument and consisted of two phases: Phase I. Design and development of the OSCE, and Phase II. Validity and reliability analysis. The study period was September 2017 to July 2019.

Sample and Inclusion/Exclusion Criteria

A convenience sampling was carried out, and third-year students in the nursing program at TecnoCampus (a center attached to the Pompeu Fabra University) were included in the study. By implementing an OSCE at the end of the third year, students and teachers have the opportunity to establish individual learning goals for the fourth year, during which they engage in extended periods of clinical practice aligned with the curriculum.

Ethical Consideration

All the procedures in this study were carried out in accordance with the Declaration of Helsinki and were approved by the TecnoCampus Institutional Research Ethics Committee (N° CE-001-20140313).

All participants were informed and provided with written information, and informed consent was obtained from students before they participated in the OSCE. There was no penalty for not participating in the study. Data collection occurred following the completion of voluntarily signed consent.

Phase I. Design and Development of the OSCE

To develop the OSCE model, a tiered approach was followed through the design steps, definition of competency components, identification of examination tasks, station writing and review, and standard setting (Khan et al., 2013; Wass et al., 2001).

For this, an “OSCE Committee” of 10 experts was created, consisting of nursing teachers, clinical nurses and simulation experts. One expert held a doctorate, three were PhD candidates, three held master's degrees, and two were specialists. Of the seven nursing teachers, four led or were members of the clinical simulation team, all were responsible for different subjects in the nursing program, and four continued to work as clinical nurses. The three external nurses were also experts in clinical simulation.

The OSCE assessment was developed and implemented in terms of feasibility, objectivity, validity, reliability, educational impact and acceptability.

The evaluation was based on the students' scores (measured with the detailed evaluation rubrics for each station, developed and validated during the first phase of this study), the students' perceived level of competency acquisition (measured on a self-developed scale from 0 to 10 for each competency component), and the level of stress before the test (assessed on a self-developed scale from 0 to 10), all collected through self-administered questionnaires.

In addition, student and evaluator satisfaction was also assessed using self-administered questionnaires including items related to the previous information received, the time allocated to each station, the duration of the OSCE, the level of scenario preparation, the clarity of documentation, the support during testing, and the relationship with the evaluator and student. Each item was evaluated on a scale of 0 to 10.

Phase II. Validity and Reliability Analysis. Statistical Analysis

To ensure face validity, published guidelines were followed regarding completion, number of simulated patients, maximum number of evaluation items per station, number of students per round, and the use of different evaluation systems (Kelly et al., 2016).

To examine content validity, eight nursing experts representing different healthcare settings were invited to perform content validation of the checklist. The checklist included the evaluative items for each station as well as the scoring criteria. The criteria established to be part of the expert panel were a minimum of 10 years of experience as a nurse and more than 3 years in clinical simulation. The nursing experts were requested to rate each item according to its relevance on a scale from 1 to 4 (1 = not relevant, 2 = somewhat relevant, 3 = quite relevant, and 4 = very relevant). A comment section was also provided for each item so that the reviewers could provide qualitative responses. A content validity index (CVI) of 0.8 or higher was considered valid and signaled the representativeness and clarity of the checklist items (McGartland-Rubio, 2005; Wynd et al., 2003).
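As an illustration of the index used here, the item-level CVI is simply the proportion of panel experts who rate an item as relevant. A minimal sketch with hypothetical ratings (not the study's data):

```python
# Illustrative sketch (not the authors' code): the item-level content
# validity index (I-CVI) is the proportion of experts who rate an item
# 3 ("quite relevant") or 4 ("very relevant") on the 1-4 scale.

def item_cvi(ratings):
    """Proportion of panel experts rating the item 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

# Hypothetical ratings from a panel of eight experts for one checklist item.
ratings = [4, 3, 4, 4, 2, 3, 4, 4]
print(round(item_cvi(ratings), 2))  # 7 of 8 experts rated >= 3 -> 0.88
```

An item with an I-CVI below the 0.8 threshold would be flagged for revision or removal.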

Criterion validity indicates the relationship between each participant's score and a gold standard that guarantees measurement of the intended construct. Currently, no recognized instrument measures these competencies, and in similar studies no correlation has been found between the grades students obtained on the OSCE and grades obtained with other assessment methods (Navas-Ferrer et al., 2017). For this reason, and to verify that the scenarios represented real clinical practice and that the rubrics measured the expected competencies and could detect professional competence, nurses with 5 years of experience participated in the OSCE as references so that their results could be compared with those of the students. Pearson's correlation coefficient was used to examine the strength of the relationship between the students' total scores and the nurses' global measure, giving a measure of criterion validity.
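The correlation can be computed directly from its definition. The sketch below uses hypothetical scores, not the study's data (the study itself reported r = .71):

```python
# Sketch of the criterion-validity check: Pearson's r between students'
# total OSCE scores and the reference nurses' global measure.
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

students = [6.1, 7.4, 5.8, 8.0, 6.9]   # hypothetical student totals (0-10)
reference = [6.8, 7.9, 6.2, 8.5, 7.1]  # hypothetical nurse global measures
print(round(pearson_r(students, reference), 2))
```

In practice this would be run once over all 118 student totals against the nurses' reference scores.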

To examine the reliability of the OSCE in terms of internal consistency of scale scores, Cronbach's alpha coefficients were calculated and interpreted as follows (DeVellis & Thorpe, 2021): ≥ .90, excellent; .80-.89, good; .70-.79, acceptable; .60-.69, questionable; .50-.59, poor; < .50, unacceptable.
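Cronbach's alpha follows from the item variances and the variance of the total scores. A minimal sketch with toy data (the study's analysis was performed in Jamovi and reported an overall alpha of .84):

```python
# Minimal sketch of Cronbach's alpha:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
import statistics

def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item (same students, same order)."""
    k = len(item_scores)
    item_vars = sum(statistics.variance(item) for item in item_scores)
    totals = [sum(student) for student in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_vars / statistics.variance(totals))

# Hypothetical 0-10 scores of five students on three rubric items.
items = [
    [7, 5, 8, 6, 9],
    [6, 5, 7, 6, 8],
    [8, 4, 7, 5, 9],
]
print(round(cronbach_alpha(items), 2))
```

Higher values indicate that the items move together across students, i.e., they measure a common underlying construct.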

All statistical analyses were performed using Jamovi Software to generate descriptive and inferential statistics (The Jamovi Project, 2020).

Procedures

The researcher explained the purpose and development of the study to all the students enrolled in the subject “Clinical Practicum III” of the 3rd quarter of the 3rd year, the information sheet was delivered, and all the students who wanted to participate in the study signed the informed consent form.

Before the OSCE, the students were received in a classroom where the test dynamics were explained again, they listened to the audio signals that marked the station times, and any remaining doubts were resolved. In addition, they were given the questionnaire on perceived acquired competencies and stress levels. Each student completed the round of 10 stations according to the established schedule; each station lasted 12 min (2 min to read the case and 10 min to resolve the scenario).

At the end of the OSCE, students and evaluators reported their level of satisfaction.

Results

An OSCE model was developed, implemented and evaluated.

Phase I. Design and Development of the OSCE

The model developed was characterized by a) integrating the knowledge and skills worked on in all subjects from previous years through the end of the third year, b) focusing on the assessment of skills in a variety of areas (pediatrics, pharmacology, mental health, specialty hospitalization and primary care), c) adjusting to the level of third-year students by focusing on the evaluation of skills acquired through early clinical exposure from the first year and the simulation-based education developed in the curriculum, and d) resource efficiency.

The “OSCE Committee” grouped all the competencies extracted from the curriculum up to the third year into six competency components: critical thinking, technical skills, knowledge, communication skills, ethical aspects and interpersonal skills.

Ten OSCE scenarios were designed that incorporated the six competency components with different weights. All stations were workstations; to ensure OSCE feasibility in our context, a rest station was not included. The stations were designed to cover general clinical nursing content and skills. The OSCE consisted of five stations with a Human Patient Simulator (stations 1, 2, 5, 8 and 9), three standardized patient stations (stations 3, 4 and 10) and two silent written stations (stations 6 and 7). The scenarios were created based on real clinical cases and written in detail. The general design is summarized in the specification table (Table 1).

Table 1.

OSCE Specification Table.

Competencies
Scenario Clinical case Evaluative instrument C IS CT TS K EA Total
1 Pediatric care HPS 5% 20% 20% 45% 10% 100%
2 Bedridden patient care: diaper change and postural change HPS 10% 10% 20% 50% 10% 100%
3 Tracheostomy patient care SP 20% 10% 30% 40% 100%
4 Thoracic drainage and venous return bandage SP 5% 20% 50% 20% 5% 100%
5 Preparation and administration of medication HPS 10% 30% 30% 30% 100%
6 Identification of alterations (Clinical History) OQ/CR 100% 100%
7 Nursing Care Plan OQ/CR 10% 70% 20% 100%
8 Blood analysis extraction HPS 20% 20% 50% 10% 100%
9 Peripheral venous access cannulation HPS 20% 20% 50% 10% 100%
10 Health education for overweight patients SP 20% 20% 20% 40% 100%

HPS: Human Patient Simulator / SP: Standardized Patient / OQ: Open question / CR: Clinical Report

C: Communication / IS: Interpersonal Skills / CT: Critical Thinking / TS: Technical Skill / K: Knowledge / EA: Ethical Aspects

The OSCE was conducted over three days. For this reason, a total of 30 clinical cases were developed as test content, adapting the cases to the competencies being assessed and their percentage weights. The “OSCE Committee” ensured that each case was independent of the others within the same test day to ensure variability. Each scenario consisted of a description of the case, the competencies being assessed, the activity to be carried out, the role of the actor or evaluator, a checklist of the scenario (material and human resources for logistics and test preparation staff), an evaluation rubric and a station number identification poster.

A detailed evaluation rubric was designed for each station, defining the items that assessed the competencies as well as the weights of that station's competency components. The weight assigned to each competency varied according to its importance within the scenario. Each item was graded 0 (not done or done poorly) or 1 (done correctly). Finally, a personalized assessment book was created for each participant with the 10 assessment rubrics and the calculated scores. This book was uploaded to the university's Google Drive domain and shared only with reviewers and evaluators. The evaluators also had the evaluation rubrics in paper format in case any technical problems arose in the system during the evaluation. The pass criterion was based on the overall rating score (score ≥ 50).
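The scoring logic described above, binary rubric items aggregated by competency component and combined with station-specific weights, can be sketched as follows. The item groupings and weights are illustrative, loosely modeled on station 5 in Table 1, not the study's actual rubrics:

```python
# Hypothetical sketch of a station score: binary rubric items (0/1) are
# grouped by competency component and combined with the station's weights.

def station_score(items_by_competency, weights):
    """items_by_competency: {competency: [0/1 item grades]};
    weights: {competency: fraction of the station total, summing to 1}."""
    score = 0.0
    for comp, grades in items_by_competency.items():
        score += weights[comp] * (sum(grades) / len(grades))
    return 10 * score  # scale to the 0-10 grading used in the study

# Illustrative items and weights resembling station 5 (medication
# preparation and administration).
items = {
    "communication": [1, 1],
    "critical_thinking": [1, 0, 1, 1],
    "technical_skill": [1, 1, 1, 0],
    "knowledge": [0, 1],
}
weights = {"communication": 0.10, "critical_thinking": 0.30,
           "technical_skill": 0.30, "knowledge": 0.30}
print(round(station_score(items, weights), 2))  # -> 7.0
```

A participant's overall OSCE score would then aggregate the ten station scores in the personalized assessment book.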

The OSCE examiners were nursing teachers with > 5 years of clinical experience and > 5 years of academic experience. A total of 10 evaluators and 3 logistics assistants were recruited and trained on their intended exam tasks. In this study, evaluators received an 8-h OSCE scenario training session from the OSCE coordinator, who had extensive experience in clinical simulation and OSCE assessments.

All examiners met with the “OSCE Committee” to analyze and discuss possible inconsistencies in the evaluation items, used instant feedback systems, and were able to consult the scenario checklist sheet before initiating the OSCE.

Phase II. Validity and Reliability Analysis

Sample Characteristics

A total of 118 students participated in the study. The average age of the participants was 25.82 years, 82.2% (n = 97) were women, 60.16% (n = 71) had combined studies with a job, and only 39.83% (n = 47) were dedicated solely to studying. Of the 14 nursing professionals who participated, 71.42% were women with a mean age of 36.15 years.

Validity

Face validity was ensured during the planning phase to guarantee the applicability and acceptability of the OSCE. The CVI was calculated for the items of all rubrics and ranged between .82 and .95, which represents good content validity. No items were withdrawn during content validation because all met the recommended agreement levels (Polit et al., 2007).

The relationship between the total scores of the students and the global measure of the nurses had a statistically significant positive correlation (r = .71, p < .001).

Reliability

The reliability of the general OSCE scale was examined from the responses of the 118 participants, and the results are listed in Table 2. The scores obtained by the professionals were higher but showed a similar pattern across all competency components: 9.02 (SD: .7, [7.9–10]) in Communication, 9.45 (SD: 1.29, [6.1–10]) in Interpersonal Skills, 6.66 (SD: .92, [4.8–8.75]) in Critical Thinking, 8.16 (SD: .84, [6.3–8.93]) in Technical Skill, 6.55 (SD: .82, [5.55–8.0]) in Knowledge and 7.5 (SD: 2.07, [5–10]) in Ethical Aspects.

Table 2.

Score Obtained from the OSCE and Perception of Acquired Competencies of Participants (N=118).

Score Perception of acquired competencies
Competencies Mean SD Min. Max. Mean SD p value
Communication 8.37 0.85 5.40 10 6.72 1.03 < .001
Interpersonal Skills 8.10 1.25 5.50 10 6.36 1.07 < .001
Critical Thinking 5.83 0.96 3.70 8.64 6.43 1.07 < .001
Technical Skill 6.51 1.21 3.60 9.09 6.21 1.02 .043
Knowledge 5.79 1.44 2.75 8.60 6.59 0.84 < .001
Ethical Aspects 7.40 2.26 2.50 10 7.70 1.09 .10

SD: standard deviation.

OSCE score: 0 to 10 points. The higher the score, the higher the level of competence.

Scoring of the perception of the acquired competencies: 0 to 10 points. The higher the score, the higher the perception level of acquired competencies.

Regarding internal consistency, Cronbach's alpha for the total OSCE was .84, while the coefficients for the competency components were Communication (.86), Interpersonal Skills (.83), Critical Thinking (.92), Technical Skill (.91), Knowledge (.89) and Ethical Aspects (.79).

Participants' general perception of the competencies they had acquired up to the time of the OSCE was lower than the score obtained in each competency (Table 2).

The pre-OSCE stress level was 8.4/10 (SD: 1.41) for students and 7.0/10 (SD: 1.68) for professionals.

The results of the degree of satisfaction are shown in Table 3.

Table 3.

Students’ and Evaluators’ Satisfaction.

Variables Students (%) Evaluators (%)
Previous information received 93 100
Time allocated to each station 83.2 95.0
Time allocated to complete the evaluation - 98
Duration of the OSCE 96.3 100
Scenario preparation 88.6 100
Clarity of clinical cases and documentation 83.7 96
Support during testing 99.0 100
Relationship with the evaluator 98.2 -
Relationship with the student - 100
Overall satisfaction 89.3 96.4

Discussion

This was the first OSCE implemented in the nursing program at this university to assess clinical competence, and it showed acceptable reliability and validity. In this study, six competency components were included in the OSCE as a result of the analysis and consensus carried out by the OSCE Committee according to the nursing study plan.

Numerous studies have investigated the validity of the OSCE, examining whether its results relate to other examinations and discriminate between candidates. These studies indicate that OSCEs can have face, content, construct and concurrent validity. As in previous similar studies (Goh et al., 2016; Lee et al., 2020), the present study showed that all OSCE stations had good content validity.

The 10 OSCE stations were proposed by the experts for their relevance in assessing the clinical competence of undergraduate nurses according to the defined competency components, with a CVI of .82 to .95.

Some interesting observations may lead to additional discussion.

Previous research has suggested a positive correlation between the number of stations and reliability (Yamada et al., 2017). The overall reliability in the present study was .84, which is a good level for certification tests (LaRochelle et al., 2022; Lee et al., 2016). The number of stations in this OSCE was therefore appropriate to assess clinical competence.

Some studies suggest minimum test lengths of 3–4 h and a minimum of 10 stations for good reliability (Wass et al., 2001). However, in this study, we adopted a length of 2 h with 10 stations, allotting 12 min to each station. Organizing an OSCE of more than two hours for all enrolled students is costly and time-consuming and requires considerable human and infrastructure resources; moreover, such a long test duration can lead to low OSCE reliability due to tester and student burnout. In this sense, an effective and feasible OSCE was designed to assess the clinical competence of nursing students. Although preparing an OSCE is time-consuming and requires careful planning, including this type of assessment in the teaching program offers a valuable aid in assessing clinical performance.

It should be noted that the skills with the lowest scores were critical thinking and knowledge, which are directly related. These results can be attributed to the fact that students study to pass the exams and, during clinical practice, focus mainly on more technical skills. Incorporating an OSCE into the curriculum allows students to determine their level of competence and establish individual objectives during training. In addition, it allows teachers to gain better knowledge about the group and to incorporate cases into teaching that facilitate the integration of knowledge, skills and behavior to improve performance (Chen et al., 2021).

Given that OSCEs seek to provide objectivity and transparency in the evaluation process, this explains to some extent why they are acceptable to students, as well as the high degree of satisfaction reported in this study by students and evaluators. The format itself can have a positive impact on learning. In addition, the OSCE facilitated the evaluators' task through standardized tools, without the need to interrupt by asking questions during the exam or to write comments justifying the weighting.

Self-assessment is essential to advance learning, and it is important to identify deficits to create improvement plans (Pisklakov, 2014). Students have little experience in reflecting on practice, so it is necessary to move toward self-assessment in a gradual and accompanied manner (Siles-González & Solano-Ruiz, 2016). Sometimes the differences between their perception and that of the evaluator lead to erroneous conclusions, so informed self-assessment is necessary (Wolff et al., 2017).

Strengths and Limitations

Student stress in the OSCE is widely described in the scientific literature (Brighton et al., 2017; Cazzell & Rodriguez, 2011; Johnston et al., 2017; Majumder et al., 2019; Mojarrab et al., 2020) and it was considered one of the main limitations.

For this reason, training sessions were carried out to reduce stress as recommended by different authors (Massey et al., 2017; Stunden et al., 2015). Further, examiners were given a set of rules to follow during the evaluation to avoid increasing the stress on the students (Zamanzadeh et al., 2021). Despite the high level of stress in the results obtained, no student abandoned the assessment.

Another limitation of this study is that the OSCE was not compared with conventional evaluation methods; instead, learning enhancement was measured by evaluating participants' self-efficacy and quantitative improvements in OSCE scores. Furthermore, study participants were not selected through random sampling. Therefore, generalizations based on our results should be made with care.

Implications for Practice

The results of this study offer an objective and standard tool to assess the clinical ability of nursing students in a situation close to the clinical context. Furthermore, it is an appropriate method to objectively measure and effectively address weaknesses in clinical evaluation. Along this line, this OSCE allows further research into whether the knowledge acquired in OSCEs is transferred to clinical practice and whether simulation-based education is more effective in achieving improved knowledge compared to traditional education.

Conclusions

A rigorously designed OSCE provides a reliable and valid method for assessing the clinical competence of nursing students. The implementation of OSCE in the nursing degree at TecnoCampus (a center attached to the Pompeu Fabra University) provides evidence of the reliability and validity of this tool in assessing the students’ competency skills. This study details the methodological process and the analysis of the validity and reliability of an OSCE that can be used as a reference for future studies.

Despite being a costly method (in terms of time and resources) and one that generates a high level of stress, the OSCE is presented as an evaluative method with high satisfaction from students and evaluators.

Therefore, our OSCE could make a significant contribution to maintaining high standards of future nurses in our country and be used as a valid and reliable tool in future research studies on the effectiveness of clinical simulation.

Footnotes

Data Availability Statement: Data are not available due to participant consent. The participants of this study did not give written consent for their data to be shared publicly, so, given the sensitive nature of the research, supporting data are not available.

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The authors wish to acknowledge the support received from the Tecnocampus Chronic Care and Health Innovation Research Group (GRACIS - GRC 01604) to finance the publication of this study.

Author Contributions: Carolina Chabrera designed the research project, guided the data collection and analysis, and participated in drafting and significantly revising the paper. Data were collected and analyzed by Laura Curell and Eva Diago. Laura Curell made important contributions to the content of the paper and its revision. All authors have reviewed and agreed with the content of the paper.

ORCID iD: Carolina Chabrera https://orcid.org/0000-0002-1661-7916

References

  1. Bates D. W., Singh H. (2018). Two decades since to err is human: An assessment of progress and emerging priorities in patient safety. Health Affairs, 37(11), 1736–1743. 10.1377/hlthaff.2018.0738 [DOI] [PubMed] [Google Scholar]
  2. Brighton R., Mackay M., Brown R. A., Jans C., Antoniou C. (2017). Introduction of undergraduate nursing students to an objective structured clinical examination. Journal of Nursing Education, 56(4), 231–234. 10.3928/01484834-20170323-08 [DOI] [PubMed] [Google Scholar]
  3. Brown R. A., Crookes P. A. (2016). What are the ‘necessary’ skills for a newly graduating RN? Results of an Australian survey. BMC Nursing, 15(1), 1–8. 10.1186/s12912-016-0144-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Cazzell M., Rodriguez A. (2011). Qualitative analysis of student beliefs and attitudes after an objective structured clinical evaluation: Implications for affective domain learning in undergraduate nursing education. Journal of Nursing Education, 50(12), 711–714. 10.3928/01484834-20111017-04 [DOI] [PubMed] [Google Scholar]
  5. Chen S. H., Chen S. C., Lai Y. P., Chen P. H., Yeh K. Y. (2021). The objective structured clinical examination as an assessment strategy for clinical competence in novice nursing practitioners in Taiwan. BMC Nursing, 20(1), 1–9. 10.1186/s12912-021-00608-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. DeVellis R. F., Thorpe C. T. (2021). Scale development: Theory and applications (5th ed.). Sage publications. [Google Scholar]
  7. Furuki H., Sonoda N., Morimoto A. (2023). Factors related to the knowledge and skills of evidence-based practice among nurses worldwide: A scoping review. Worldviews on Evidence-Based Nursing, 20(1), 16–26. 10.1111/wvn.12623
  8. Goh H. S., Tang M. L., Devi M. K., Ng K. C. E., Lim L. M. (2016). Testing the psychometric properties of objective structured clinical examination (OSCE) in nursing education in Singapore. Singapore Nursing Journal, 43(1), 12–18.
  9. Goh H. S., Zhang H., Lee C. N., Wu X. V., Wang W. (2019). Value of nursing objective structured clinical examinations: A scoping review. Nurse Educator, 44(5), E1–E6. 10.1097/NNE.0000000000000620
  10. Harden R. M. (2016). Revisiting ‘assessment of clinical competence using an objective structured clinical examination (OSCE)’. Medical Education, 50(4), 376–379. 10.1111/medu.12801
  11. Harden R. M., Gleeson F. A. (1979). Assessment of clinical competence using an objective structured clinical examination (OSCE). Medical Education, 13(1), 39–54. 10.1111/j.1365-2923.1979.tb00918.x
  12. Henderson A., Nulty D. D., Mitchell M. L., Jeffrey C. A., Kelly M. A., Groves M., Glover P., Knight S. (2013). An implementation framework for using OSCEs in nursing curricula. Nurse Education Today, 33(12), 1459–1461. 10.1016/j.nedt.2013.04.008
  13. Hodges A. L., Konicki A. J., Talley M. H., Bordelon C. J., Holland A. C., Galin F. S. (2019). Competency-based education in transitioning nurse practitioner students from education into practice. Journal of the American Association of Nurse Practitioners, 31(11), 675–682. 10.1097/JXX.0000000000000327
  14. Johnston A. N. B., Weeks B., Shuker M. A., Coyne E., Niall H., Mitchell M., Massey D. (2017). Nursing students’ perceptions of the objective structured clinical examination: An integrative review. Clinical Simulation in Nursing, 13(3), 127–142. 10.1016/j.ecns.2016.11.002
  15. Kelly M. A., Mitchell M. L., Henderson A., Jeffrey C. A., Groves M., Nulty D. D., Glover P., Knight S. (2016). OSCE best practice guidelines: Applicability for nursing simulations. Advances in Simulation, 1(1), 1–10. 10.1186/s41077-016-0014-1
  16. Khan K. Z., Gaunt K., Ramachandran S., Pushkar P. (2013). The objective structured clinical examination (OSCE): AMEE guide no. 81. Part II: Organisation & administration. Medical Teacher, 35(9), e1447–e1463. 10.3109/0142159X.2013.818635
  17. Kim S. C., Ecoff L., Brown C. E., Gallo A. M., Stichler J. F., Davidson J. E. (2017). Benefits of a regional evidence-based practice fellowship program: A test of the ARCC model. Worldviews on Evidence-Based Nursing, 14(2), 90–98. 10.1111/wvn.12199
  18. Koota E., Kääriäinen M., Kyngäs H., Lääperi M., Melender H. L. (2021). Effectiveness of evidence-based practice (EBP) education on emergency nurses’ EBP attitudes, knowledge, self-efficacy, skills, and behavior: A randomized controlled trial. Worldviews on Evidence-Based Nursing, 18(1), 23–32. 10.1111/wvn.12485
  19. LaRochelle J., Durning S. J., Boulet J. R., van der Vleuten C., van Merrienboer J., Donkers J. (2016). Beyond standard checklist assessment: Question sequence may impact student performance. Perspectives on Medical Education, 5(2), 95–102. 10.1007/s40037-016-0265-5
  20. Lee J., Lee Y., Lee S., Bae J. (2016). Effects of high-fidelity patient simulation led clinical reasoning course: Focused on nursing core competencies, problem solving, and academic self-efficacy. Japan Journal of Nursing Science, 13(1), 20–28. 10.1111/jjns.12080
  21. Lee K. C., Ho C. H., Yu C. C., Chao Y. F. (2020). The development of a six-station OSCE for evaluating the clinical competency of the student nurses before graduation: A validity and reliability analysis. Nurse Education Today, 84, 104247. 10.1016/j.nedt.2019.104247
  22. Majumder M. A. A., Kumar A., Krishnamurthy K., Ojeh N., Adams O. P., Sa B. (2019). An evaluative study of objective structured clinical examination (OSCE): Students and examiners perspectives. Advances in Medical Education and Practice, 10, 387–397. 10.2147/AMEP.S197275
  23. Massey D., Byrne J., Higgins N., Weeks B., Shuker M. A., Coyne E., Mitchell M., Johnston A. N. B. (2017). Enhancing OSCE preparedness with video exemplars in undergraduate nursing students: A mixed method study. Nurse Education Today, 54, 56–61. 10.1016/j.nedt.2017.02.024
  24. McGartland-Rubio D. (2005). Content validity. In Kempf-Leonard K. (Ed.), Encyclopedia of social measurement (pp. 495–498). Elsevier.
  25. Melnyk B. M., Gallagher-Ford L., Zellefrow C., Tucker S., Thomas B., Sinnott L. T., Tan A. (2018). The first U.S. study on nurses’ evidence-based practice competencies indicates major deficits that threaten healthcare quality, safety, and patient outcomes. Worldviews on Evidence-Based Nursing, 15(1), 16–25. 10.1111/wvn.12269
  26. Melnyk B. M., Zellefrow C., Tan A., Hsieh A. P. (2020). Differences between magnet and non-magnet-designated hospitals in nurses’ evidence-based practice knowledge, competencies, mentoring, and culture. Worldviews on Evidence-Based Nursing, 17(5), 337–347. 10.1111/wvn.12467
  27. Mitchell M. L., Henderson A., Dalton M., Nulty D., Groves M. (2009). The objective structured clinical examination (OSCE): Optimising its value in the undergraduate nursing curriculum. Nurse Education Today, 29(4), 398–404. 10.1016/j.nedt.2008.10.007
  28. Mojarrab S., Bazrafkan L., Jaberi A. (2020). The effect of a stress and anxiety coping program on objective structured clinical examination performance among nursing students in Shiraz, Iran. BMC Medical Education, 20(1), 1–7. 10.1186/s12909-020-02228-9
  29. Montgomery A., Chang H. R., Ho M. H., Smerdely P., Traynor V. (2021). The use and effect of OSCEs in post-registration nurses: An integrative review. Nurse Education Today, 100, 104845. 10.1016/j.nedt.2021.104845
  30. Najjar R. H., Docherty A., Miehl N. (2016). Psychometric properties of an objective structured clinical assessment tool. Clinical Simulation in Nursing, 12(3), 88–95. 10.1016/j.ecns.2016.01.003
  31. Navas-Ferrer C., Urcola-Pardo F., Subirón-Valera A. B., Germán-Bes C. (2017). Validity and reliability of objective structured clinical evaluation in nursing. Clinical Simulation in Nursing, 13(11), 531–543. 10.1016/j.ecns.2017.07.003
  32. Nulty D. D., Mitchell M. L., Jeffrey C. A., Henderson A., Groves M. (2011). Best practice guidelines for use of OSCEs: Maximising value for student learning. Nurse Education Today, 31(2), 145–151. 10.1016/j.nedt.2010.05.006
  33. Palese A., Bulfone G., Venturato E., Urli N., Bulfone T., Zanini A., Fabris S., Tomietto M., Comisso I., Tosolini C., Zuliani S., Dante A. (2012). The cost of the objective structured clinical examination on an Italian nursing bachelor’s degree course. Nurse Education Today, 32(4), 422–426. 10.1016/j.nedt.2011.03.003
  34. Pisklakov S. (2014). Role of self-evaluation and self-assessment in medical student and resident education. British Journal of Education, Society & Behavioural Science, 4(1), 1–9. 10.9734/BJESBS/2014/5066
  35. Polit D. F., Beck C. T., Owen S. V. (2007). Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Research in Nursing & Health, 30(4), 459–467. 10.1002/nur.20199
  36. Reljić N. M., Lorber M., Vrbnjak D., Sharvin B., Strauss M. (2017). Assessment of clinical nursing competencies: Literature review. In Stiglic C., Pajnkihar M., Vrbnjak D. (Eds.), Teaching and learning in nursing (pp. 35–39). InTech. 10.5772/67362
  37. Siles-González J., Solano-Ruiz C. (2016). Self-assessment, reflection on practice and critical thinking in nursing students. Nurse Education Today, 45, 132–137. 10.1016/j.nedt.2016.07.005
  38. Smrekar M., Fičko S. L., Hošnjak A. M., Ilić B. (2017). Use of the objective structured clinical examination in undergraduate nursing education. Croatian Nursing Journal, 1(1), 91–102. 10.24141/2/1/1/8
  39. Stunden A., Halcomb E. J., Jefferies D. (2015). Tools to reduce first year nursing students’ anxiety levels prior to undergoing objective structured clinical assessment (OSCA) and how this impacts on the student’s experience of their first clinical placement. Nurse Education Today, 35(9), 987–991. 10.1016/j.nedt.2015.04.014
  40. The jamovi project (2020). Jamovi (Version 1.2) [Computer Software]. https://www.jamovi.org
  41. Vincent S. C., Arulappan J., Amirtharaj A., Matua G. A., Al Hashmi I. (2022). Objective structured clinical examination vs traditional clinical examination to evaluate students’ clinical competence: A systematic review of nursing faculty and students’ perceptions and experiences. Nurse Education Today, 108, 105170. 10.1016/j.nedt.2021.105170
  42. Wass V., Van der Vleuten C., Shatzer J., Jones R. (2001). Assessment of clinical competence. The Lancet, 357(9260), 945–949. 10.1016/S0140-6736(00)04221-5
  43. Wolff M., Santen S. A., Hopson L. R., Hemphill R. R., Farrell S. E. (2017). What’s the evidence: Self-assessment implications for life-long learning in emergency medicine. Journal of Emergency Medicine, 53(1), 116–120. 10.1016/j.jemermed.2017.02.004
  44. Wu X. V., Enskär K., Lee C. C. S., Wang W. (2015). A systematic review of clinical assessment for undergraduate nursing students. Nurse Education Today, 35(2), 347–359. 10.1016/j.nedt.2014.11.016
  45. Wynd C. A., Schmidt B., Schaefer M. A. (2003). Two quantitative approaches for estimating content validity. Western Journal of Nursing Research, 25(5), 508–518. 10.1177/0193945903252998
  46. Yamada T., Sato J., Yoshimura H., Okubo T., Hiraoka E., Shiga T., Kubota T., Fujitana S., Machi J., Ban N. (2017). Reliability and acceptability of six station multiple mini-interviews: Past-behavioural versus situational questions in postgraduate medical admission. BMC Medical Education, 17(1), 1–7. 10.1186/s12909-017-0898-z
  47. Zamanzadeh V., Ghaffari R., Valizadeh L., Karimi-Moonaghi H., Johnston A. N. B., Alizadeh S. (2021). Challenges of objective structured clinical examination in undergraduate nursing curriculum: Experiences of faculties and students. Nurse Education Today, 103, 104960. 10.1016/j.nedt.2021.104960
