Physiotherapy Canada. 2010 Apr 23;62(2):147–154. doi: 10.3138/physio.62.2.147

Scoring of the Physical Therapist Clinical Performance Instrument (PT-CPI): Analysis of 7 Years of Use

Peggy L Proctor, Vanina P Dal Bello-Haas, Arlis M McQuarrie, M Suzanne Sheppard, Rhonda J Scudds
PMCID: PMC2871024  PMID: 21359047

ABSTRACT

Purpose: The aims of this study were to (1) describe the completion rates of the 24 performance criteria (PCs) from the Physical Therapist Clinical Performance Instrument (PT-CPI) by clinical instructors; (2) evaluate change in PC visual analogue scores (VAS) with students' clinical experience; and (3) evaluate scoring patterns over time.

Methods: Final VAS scores for 208 physiotherapy (PT) students (seven cohorts) from 1,039 clinical placements between 2001 and 2008 were analyzed. Completion rates were calculated for each PC. Kruskal-Wallis tests evaluated differences in VAS scores between cohorts. Friedman's tests were used to compare VAS scores for each PC over time.

Results: Completion rates were above 90% for 18 PCs. Data from the seven cohorts were combined. All PC scores showed significant change from 10 to 15 weeks and from 15 to 20 weeks of clinical experience (p≤0.001). Although differences in scores decreased over time, 19 PCs showed significant differences between 20 and 25 weeks, and 11 PCs showed significant differences between 25 and 31 weeks of clinical experience (p<0.01).

Conclusions: Certain PCs had lower completion rates. The PT-CPI was used consistently by clinical instructors to evaluate student performance over time. A continual progression in acquisition of clinical competencies was illustrated by PT-CPI scoring patterns as students advanced through their PT education programme.

Key Words: clinical education, clinical performance, physical therapy education, students

INTRODUCTION

There is general agreement regarding the significance of skills, attitudes, and behaviours gained in the clinical education of physical therapy (PT) students. However, the valid and reliable measurement of student learning outcomes in clinical settings has been a challenge. For many years, individual PT education programmes (or clinical facilities) developed and used their own tools to evaluate the construct of clinical performance.1 Before 1985, few clinical evaluative instruments were reported in the PT literature. No information was provided about the reliability and validity of these evaluations, even though the results from clinical evaluation tools were being used to decide whether or not PT students graduated as physical therapists.2,3

In the United States, concern over the variety of evaluation tools eventually resulted in the development of the Physical Therapist Clinical Performance Instrument (PT-CPI) by the American Physical Therapy Association (APTA).4 In November 1993, a task force was appointed by the APTA to develop a consistent clinical education evaluation instrument to measure student performance outcomes in PT and PT assistant education.5 Roach et al., members of the Task Force, described the process of developing the PT-CPI and published information regarding its psychometric properties.6 The PT-CPI was developed using a multifaceted and sequential process, with feedback on the Pilot Study and Field Study versions gathered through testing in both American and Canadian environments.6

The PT-CPI (December 1997 version) is a 24-item assessment tool that is believed to describe all the essential aspects of professional practice of a PT clinician performing at entry level.7 Each item is assessed using a 100 mm horizontal visual analogue scale (VAS) to represent the continuum of points between the lowest and highest levels of student performance that can be observed by a clinical instructor (CI) for each performance criterion (PC).4,6 The line is anchored on the far left with the words “Novice Clinical Performance” and on the far right with the words “Entry-Level Performance.”4,6 Psychometric properties of the first three drafts of the PT-CPI indicate reliable and valid measurement of PT student clinical performance.6 Internal consistency has been reported to be 0.97 (Cronbach's alpha), while interrater reliability of individual PT-CPI items ranged from 0.21 to 0.76 (ICCs; total-score interrater reliability was 0.87).6 The fourth version of the PT-CPI was made available by the APTA in December 1997.6 The construct and predictive validity and the internal consistency of this version were recently examined by Adams et al.8 Through exploratory factor analysis, three sub-scales emerged (“integrated patient management,” “professional practice,” and “career responsibilities”) from their analysis of VAS scores from 147 PT students in seven cohorts who had completed their first clinical experience.8 The authors also showed that each of the 24 items of the PT-CPI had high interrater reliability and adequate internal consistency (Cronbach's α=0.75–0.96).8
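The internal-consistency statistic cited above (Cronbach's alpha) can be computed directly from a students × items score matrix. The sketch below uses fabricated VAS scores for illustration only; the data are not from the study or from the PT-CPI literature.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (students x items) matrix of scores."""
    k = scores.shape[1]                          # number of items (24 PCs in the PT-CPI)
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of students' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical VAS scores (0-10 scale) for 5 students on 4 items; items that
# rank the students the same way yield an alpha close to 1.
scores = np.array([
    [6.0, 6.5, 5.8, 6.2],
    [8.1, 7.9, 8.3, 8.0],
    [4.2, 4.5, 4.0, 4.4],
    [9.0, 8.8, 9.2, 9.1],
    [7.0, 7.2, 6.9, 7.1],
])
alpha = cronbach_alpha(scores)  # high here, reflecting strongly correlated items
```

With strongly correlated items such as these, alpha approaches 1, which is the pattern reported for the PT-CPI (Cronbach's alpha = 0.97).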

According to Sliwinski et al., the use of the PT-CPI has facilitated consistency among clinical education objectives, learning experiences, and performance standards for successive clinical education experiences.9(p.51) Despite its widespread use, however, few studies7,8 have been published reporting the analysis of longitudinal data from the PT-CPI. This information is important in evaluating different levels of PT student performance at different stages in the professional programme and, ultimately, in establishing normative values. Consistent scores for successive levels of clinical placements would support the construct validity of the PT-CPI, reflecting an incremental acquisition of clinical competencies.

The purpose of this study was a longitudinal analysis of the VAS data from each PC of the PT-CPI for seven cohorts of students progressing through five separate full-time clinical placements in a PT academic programme. The specific objectives were (1) to describe the completion rates of the 24 performance criteria; (2) to determine scoring patterns over time for the same clinical placement levels; and (3) to determine the degree of change in final VAS scores from one clinical placement level to another.

METHODS

Data Collection

The clinical education programme at the University of Saskatchewan developed a comprehensive database for the purpose of tracking information about clinical placements and student performance in these clinical placements. Students in the programme completed six clinical placements: an initial 5-week “Orientation to Clinical Practice” following 8 months of study; two junior internships (Jr 1 and Jr 2, each of 5 weeks' duration) at the end of the second year of the programme; and, finally, three consecutive senior internships (Sr 1, 5 weeks; Sr 2, 5 weeks; Sr 3, 6 weeks) during the final semester of the three-year programme. The decision was made in 2001 not to use the PT-CPI to evaluate student performance in the initial clinical placement, primarily because of the limited expectations of these students in terms of caseload responsibilities but also because the instrument was new to our environment and to CIs.

The data collected included clinical placement setting, geographic location, and the nature and type of clinical placement. Starting in 2001 with the implementation of the December 1997 version of the PT-CPI in the programme, final VAS scores for each student from five different clinical placements (Jr 1, Jr 2, Sr 1, Sr 2, and Sr 3) were entered into the database. After completed PT-CPIs were submitted following each clinical placement, the VAS for each of the 24 PCs (or each completed VAS of the possible 24) was measured in centimetres to one decimal point, and these values were entered into the database. The database used in this study comprised seven complete sets of data from five different clinical placements for each student in the BSc(PT) programme from May 2001 to May 2008. Approval for this study was obtained from the Behavioural Research Ethics Board at the University of Saskatchewan.

Data Analysis

All data were analyzed using SPSS version 15 (SPSS Inc., Chicago, IL). For the purposes of this study, the authors clustered the 24 individual PCs into smaller, more manageable themed sub-groups for interpretation of the data: Red Flag (Items 1–5 and Item 10, which is designated as a red-flag item in our programme); Communication (Items 6–8); Assessment (Items 1, 9–12, 15, 24); Treatment (Items 1, 13–17, 24); Organizational Skills (Items 18–21); and Professional Development (Items 22–23) (see Table 1). Items in the Red Flag category are considered foundational elements in PT practice4 and therefore sometimes appear in more than one sub-group. Completion rates for each PC, combining all five clinical placements for each student cohort, were calculated as percentages. The overall completion rate for all cohorts combined and all clinical placements combined was also calculated.
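The completion-rate calculation described above can be sketched as follows, assuming a hypothetical data layout in which each row is one submitted PT-CPI form, columns PC1–PC24 hold the measured VAS values, and a blank (unscored) item is stored as missing. The column names and data here are illustrative, not the study's actual database schema.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_forms = 100  # illustrative number of submitted PT-CPI forms

# Fabricated VAS values (0-10 cm) for 24 performance criteria.
data = {f"PC{i}": rng.uniform(0, 10, n_forms) for i in range(1, 25)}
df = pd.DataFrame(data)

# Simulate CIs leaving an item blank: blank 30% of forms for PC17
# (consultation), the item with the lowest completion rate in this study.
df.loc[df.sample(frac=0.3, random_state=0).index, "PC17"] = np.nan

# Completion rate per PC: % of forms on which the item was scored.
completion = df.notna().mean() * 100
```

The same Series can then be summarized per cohort or per placement level by grouping before taking the mean.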

Table 1.

Clinical Performance Criteria: Physical Therapy Student

PC* Description Category % Completion
1 Practices in a safe manner that minimizes risk to patients, self, and others Red Flag / Assessment / Treatment 97.8
2 Presents self in a professional manner Red Flag 98.0
3 Demonstrates professional behaviour during interactions with others Red Flag 97.9
4 Adheres to ethical practice standards Red Flag 96.0
5 Adheres to legal practice standards Red Flag 90.2
6 Communicates in ways that are congruent with situational needs Communication 97.5
7 Produces documentation to support delivery of physical therapy services Communication 97.9
8 Adapts delivery of physical therapy care to reflect respect for and sensitivity to individual differences Communication 97.2
9 Applies the principles of logic and the scientific method to the practice of physical therapy Assessment 97.2
10 Screens patients using procedures to determine the effectiveness of and need for physical therapy service Assessment / Red Flag 88.7
11 Performs a physical therapy patient examination Assessment 97.6
12 Evaluates clinical findings to determine physical therapy diagnoses and outcomes Assessment 96.9
13 Designs a physical therapy plan of care that integrates goals, treatments, outcomes, and discharge plan Treatment 97.4
14 Performs physical therapy interventions in a competent manner Treatment 97.6
15 Educates others (patients, family, caregivers, staff, students, other health care providers) using relevant and effective teaching methods Assessment / Treatment 95.3
16 Participates in activities addressing quality of service delivery Treatment 83.6
17 Provides consultation to individuals, businesses, schools, government agencies, or other organizations Treatment 32.6
18 Addresses patient needs for services other than physical therapy as needed Organization 80.9
19 Manages resources (e.g., time, space, equipment) to achieve goals of the practice setting Organization 97.6
20 Incorporates an understanding of economic factors in the delivery of physical therapy services Organization 64.0
21 Uses support personnel according to legal standards and ethical guidelines Organization 74.5
22 Demonstrates that a physical therapist has professional/social responsibilities beyond those defined by work expectations and job description Professional Development 90.4
23 Implements a self-directed plan for professional development and lifelong learning Professional Development 97.3
24 Addresses primary and secondary prevention, wellness, and health-promotion needs of individuals, groups, and communities Assessment / Treatment 74.6
* PT-CPI performance criterion / item

In order to determine whether the data from all seven cohorts could be combined at each level of clinical placement, Kruskal-Wallis tests and multiple Mann-Whitney U tests, correcting for the number of comparisons, were conducted. If no difference between the cohorts was found, their data were combined and the mean, standard deviation (SD), and range of VAS scores for each of the five levels of clinical placements were calculated for each PC. If differences between the cohorts were found, the same descriptive statistics were calculated for each cohort separately. To determine whether there was a significant difference in VAS scores for each PC between the different levels of clinical placement, a Friedman test was conducted, followed by five comparisons of interest to the researchers: mean VAS scores for each PC between 10 and 15 weeks of clinical experience (end of Jr 1 to end of Jr 2); between 15 and 20 weeks (end of Jr 2 to end of Sr 1); between 20 and 25 weeks (end of Sr 1 to end of Sr 2); and between 25 and 31 weeks (end of Sr 2 to end of Sr 3). Because of ceiling effects reported in previous studies7,8 that used the PT-CPI with senior clinical placements, a fifth comparison, between 20 and 31 weeks (end of Sr 1 to end of Sr 3), was also analyzed to assess for these effects. Friedman tests that yielded statistically significant results were followed by multiple Wilcoxon signed rank tests (with corrections for multiple comparisons) to determine which levels of clinical experience were significantly different from others.
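This omnibus-then-post-hoc pipeline can be sketched with scipy.stats, here on fabricated data for one performance criterion in one hypothetical cohort (the placement names and score distributions are assumptions for illustration, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30  # students in one illustrative cohort

# Fabricated final VAS scores (0-10 cm) for one PC across the five
# repeated placements, with means rising toward the entry-level anchor.
vas = {
    "Jr1": rng.normal(7.0, 2.0, n).clip(0, 10),
    "Jr2": rng.normal(8.0, 1.8, n).clip(0, 10),
    "Sr1": rng.normal(9.5, 0.8, n).clip(0, 10),
    "Sr2": rng.normal(9.8, 0.4, n).clip(0, 10),
    "Sr3": rng.normal(9.9, 0.3, n).clip(0, 10),
}

# Omnibus test across the five repeated placements (Friedman test).
stat, p = stats.friedmanchisquare(*vas.values())

# Post hoc pairwise Wilcoxon signed rank tests with a
# Bonferroni-corrected alpha for the five planned comparisons.
comparisons = [("Jr1", "Jr2"), ("Jr2", "Sr1"), ("Sr1", "Sr2"),
               ("Sr2", "Sr3"), ("Sr1", "Sr3")]
alpha_corrected = 0.05 / len(comparisons)  # 0.01
for a, b in comparisons:
    w, p_pair = stats.wilcoxon(vas[a], vas[b])
    print(f"{a} vs {b}: p={p_pair:.4f}, significant={p_pair < alpha_corrected}")
```

With clearly ordered means like these, the Friedman test is significant and the early pairwise comparisons tend to be as well, mirroring the pattern reported in the Results.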

RESULTS

In total, final VAS scores for each PC from 1,039 completed clinical placements of 208 students (approximately 30 students/cohort×5 different placements/student×7 cohorts) were analyzed. The total number of VAS scores analyzed was 22,378. The majority (84.6%) of clinical placements between May 2001 and May 2008 took place in the province of Saskatchewan, Canada; the types of clinical placements are summarized in Table 2.

Table 2.

Types of Clinical Placement Settings

Type of Clinical Placement % of All Placements
Cardiorespiratory 21.0
Community therapy / home care 3.5
Geriatrics / long-term care 4.8
Mixed 4.2
Musculoskeletal/orthopaedics 36.4
Neurology 26.6
Paediatrics 3.1
Other 0.4

Completion Rates for Various Performance Criteria

None of the 24 PCs from the PT-CPI had a 100% completion rate. Overall, when all cohorts and levels of clinical placements were combined, 18 performance criteria had a completion rate of 90% or higher. Completion rates for each individual PC are reported in Table 1.

Comparison of Scores among Seven Different Cohorts

Mean VAS scores for each PC were compared across the seven cohorts. Following the Kruskal-Wallis tests, PCs 11, 12, and 15 (see Table 1) showed a statistically significant difference among the seven cohorts at 15 weeks of clinical experience, and PC 20 showed a statistically significant difference among the seven cohorts (p<0.05) at 31 weeks of clinical experience. Following the post hoc analyses, however, none of these PCs showed a significant difference between cohorts (p>0.002, corrected α). As a result, the data from all seven cohorts were combined to examine the remainder of the research questions.

Scoring of Performance Criteria for Five Different Clinical Placements

The descriptive statistics for each PC are summarized in Table 3. Friedman tests were conducted on 23 of the 24 PCs. Item 17 could not be included in the analysis because there were insufficient data for this test. Significant differences among the five clinical placements were found for all other PCs (p<0.001). Post hoc tests revealed that for all PCs, there were significant differences in VAS scores between 10 and 15 weeks of clinical experience (all p<0.001) and between 15 and 20 weeks of clinical experience (all p≤0.001).

Table 3.

Descriptive Statistics and Results of Post Hoc Tests* for the VAS Scores for Each Performance Criterion Over Five Clinical Placements

Item 10 weeks 15 weeks 20 weeks 25 weeks 31 weeks
mean SD range mean SD range mean SD range mean SD range mean SD range
1 7.25 2.24 0.5–10 8.21** 1.88 1.5–10 9.73*** 0.63 6.8–10 9.82 0.57 4.3–10 9.80 0.90 3.0–10
2 7.87 2.18 2.5–10 8.61** 1.83 2.2–10 9.75*** 0.61 5.8–10 9.85 0.42 7.1–10 9.93§ 0.33 6.3–10
3 7.81 2.12 1.4–10 8.46** 1.95 0.0–10 9.78*** 0.54 6.8–10 9.89 0.40 5.9–10 9.94 0.27 7.7–10
4 7.38 2.35 0.8–10 8.34** 1.95 2.1–10 9.77*** 0.55 6.5–10 9.91 0.25 8.5–10 9.96 0.17 8.4–10
5 7.30 2.42 0.9–10 8.17** 2.10 2.0–10 9.74*** 0.68 5.2–10 9.91 0.28 7.6–10 9.96 0.17 8.0–10
6 6.96 2.32 0.9–10 7.83** 1.97 0.0–10 9.53*** 0.80 6.3–10 9.80 0.47 7.6–10 9.87 0.39 7.4–10
7 6.74 2.29 1.1–10 7.79** 1.82 1.7–10 9.51*** 0.81 5.0–10 9.77 0.51 6.5–10 9.88§ 0.37 7.6–10
8 7.26 2.28 1.2–10 8.35** 1.78 1.9–10 9.78*** 0.56 6.8–10 9.90 0.31 7.5–10 9.94 0.21 8.5–10
9 6.35 2.38 0.8–10 7.38** 2.02 1.8–10 9.42*** 0.91 5.0–10 9.70 0.64 4.9–10 9.80 0.54 6.3–10
10 6.26 2.39 0.4–10 7.54** 1.95 2.0–10 9.51*** 0.75 5.7–10 9.74 0.50 6.9–10 9.87§ 0.36 7.3–10
11 6.24 2.27 1.3–10 7.67** 1.85 2.3–10 9.43*** 0.84 5.7–10 9.70 0.70 4.2–10 9.86§ 0.40 7.0–10
12 6.09 2.39 0.4–10 7.32** 1.90 1.8–10 9.38*** 0.88 5.0–10 9.71 0.55 6.5–10 9.82§ 0.44 7.2–10
13 6.27 2.34 0.9–10 9.66** 0.70 5.9–10 9.42*** 0.91 5.4–10 9.66 0.70 5.9–10 9.83§ 0.45 7.2–10
14 6.64 2.31 1.0–10 7.72** 1.88 2.0–10 9.52*** 0.81 6.1–10 9.77 0.62 4.9–10 9.87 0.36 7.6–10
15 6.61 2.32 1.3–10 7.72** 1.88 2.0–10 9.53*** 0.90 4.6–10 9.81 0.44 7.2–10 9.90§ 0.35 7.6–10
16 6.03 2.47 0.7–10 7.36** 2.03 2.3–10 9.45*** 0.93 4.8–10 9.77 0.50 6.4–10 9.90§ 0.29 7.5–10
17 5.76 2.63 0.2–10 7.21 2.06 2.5–10 9.50 0.87 4.7–10 9.70 0.64 6.5–10 9.84 0.39 7.5–10
18 5.82 2.43 0.9–10 6.91** 2.13 1.2–10 9.32*** 1.10 3.1–10 9.78 0.43 7.4–10 9.86 0.37 7.7–10
19 6.83 2.35 0.9–10 7.92** 1.98 1.3–10 9.58*** 0.80 4.4–10 9.79 0.48 6.9–10 9.84 0.47 6.7–10
20 5.82 2.59 0.8–10 7.26** 2.11 1.9–10 9.40*** 1.21 2.2–10 9.77 0.45 7.6–10 9.88 0.32 8.0–10
21 6.37 2.48 0.8–10 7.36** 2.10 0.7–10 9.40*** 1.01 3.8–10 9.73 0.60 4.9–10 9.88§ 0.35 8.0–10
22 6.80 2.31 1.2–10 7.86** 2.07 1.4–10 9.67*** 0.77 4.7–10 9.87 0.33 7.5–10 9.93 0.24 8.3–10
23 7.04 2.36 0.5–10 7.95** 2.08 1.5–10 9.59*** 0.87 4.8–10 9.82 0.44 7.0–10 9.90 0.33 7.5–10
24 6.20 2.44 1.0–10 7.40** 1.97 2.0–10 9.53*** 0.73 5.4–10 9.69 0.65 6.7–10 9.87 0.42 7.0–10
* Wilcoxon signed rank tests with Bonferroni correction

** p<0.001 vs. 10 weeks

*** p<0.001 vs. 15 weeks

† p≤0.006 vs. 20 weeks

‡ p≤0.001 vs. 20 weeks

§ p≤0.008 vs. 25 weeks

All PCs in the Communication, Organizational Skills, and Professional Development categories showed significant differences between 20 and 25 weeks (p≤0.01). Three of the six red-flag PCs (Items 4, 5, 10) showed significant differences between 20 and 25 weeks (p<0.004); the other three (Items 1–3) did not show significant differences between these two levels of clinical experience (p≥0.01). The change in VAS scores over the five levels of clinical experience for the six red-flag PCs (Items 1–5, 10) can be seen in Figure 1. Apart from Item 1 (p=0.02) and Item 24 (p=0.04), all assessment items showed a significant difference between 20 and 25 weeks of clinical experience (p<0.001). All PCs in the Treatment category showed a significant difference between 20 and 25 weeks (p≤0.001), except Item 24 (p=0.04).

Figure 1.

Figure 1

Mean scores over time for the red-flag performance criteria

More than half of all PCs (Items 1, 3–6, 9, 14, 18–20, 22–24) did not show a significant difference between 25 and 31 weeks of clinical experience (p>0.01). The items that did show a statistically significant difference between 25 and 31 weeks came from the Red Flag (Items 2, 10), Communication (Item 7), Assessment (Items 10–12, 15), Treatment (Items 13–16), and Organizational Skills (Item 21) categories (all p≤0.007). All PCs showed significant differences between 20 and 31 weeks (p≤0.001), the interval between the end of the first senior internship and the end of the third (and final) senior internship.

DISCUSSION

We found that the completion rates for various PCs did not change dramatically from early to final clinical placements. We anticipated that certain PCs might be more difficult to assess with very junior PT students but would be more commonly marked with senior PT students, but this was not the case. Perhaps all PCs are applied across all levels of students because the PT curriculum introduces the student to the multiple roles of PT early in the programme and builds on this introduction throughout the curriculum; students should therefore be well aware of and able to be assessed on these multiple roles and competencies at all stages of their training.

In field testing with 319 PT students, Roach et al. found that 0% of supervising CIs marked “not observed” for Item 1 (safety), while 63% marked “not observed” for Item 17 (consultation), despite the fact that “the Task Force believes that all items […] of the CPI can be rated in almost any setting.”6(p.348) Adams et al. also recently reported that Item 17 was marked less than 50% of the time.8 Similarly, in our experience, PC 17 (consultation) was by far the least frequently evaluated for all levels of students (32.6% overall). Reasons for this might include CIs' not recognizing their own role as “consultants” in some situations and CIs' not allowing or facilitating PT student interns to act as consultants because of their lack of experience. We might argue that “physical therapist as consultant” is a very important professional role that should continue to be included in the PT-CPI in order to raise awareness of this function in the profession. PC 21 (use of support personnel) typically has low completion rates because many clinical settings do not employ support personnel. The relatively low usage rate of PC 20 (understanding of economic factors) in the present study may be partly due to the fact that many PT services in Canada are delivered within a public health care system, such that CIs and students alike are less likely to think in terms of explicit economic factors, even though economic factors are fundamental to the funding, budgets, and resources available for PT in both private and public settings.

No significant differences were found in the scoring of the 24 PCs over the 7 years during which data from the PT-CPI were collected. Although each student had at least five different CIs marking the PT-CPI in five different clinical settings with various patient populations, each of the 24 PCs was scored consistently across seven cohorts of students moving through the programme from 2001 through 2008. Because there was no minimum training requirement for this version of the PT-CPI, the CIs had received varying amounts of training with the instrument, ranging from no formal training to special PT-CPI training workshops. CIs also had a range of experience with numbers of students (i.e., varying levels of experience with the tool) and worked in various clinical placement settings, not only in Saskatchewan but in other parts of Canada. The lack of differences in scoring implies that CIs over a period of 7 years were grading each distinct level of PT student consistently. Our experience with the consistent marking of the tool over time supports the reliability of the instrument, regardless of amount of CI training, experience, or type of practice setting.

The variability in VAS scores was greatest in the early clinical placements and decreased steadily as the clinical placements progressed. The variability of the scores in early placements might be due to the diversity of the students' theoretical background and/or to students' varying levels of maturity at programme entry. Input from CIs over time also indicates that they have more difficulty in understanding clinical performance expectations for junior students than in understanding expectations for more senior students. In their final clinical placement, students are expected to be at the far right-hand end of the VAS; therefore, less variability in scores is to be expected. The benchmark for expected “entry-level” performance seems generally easier for CIs to define. Despite a wide variety of placements scored by many different CIs, the high mean VAS scores and small standard deviations for each PC following the final clinical placement indicate that the PT programme produced “entry-level” physical therapists from the 30 diverse students who entered the programme each year.

There were significant differences in final VAS scores at each level of clinical placement compared to the previous level, indicating a continual progression in students' acquisition of clinical competency over time. After 10 weeks of clinical experience (end of Jr 1), mean VAS scores were >5 cm. Given the structure of our programme, this finding was expected, because the first 5 weeks of clinical experience (where the PT-CPI was not used) followed year 1, while the second 5-week placement followed the conclusion of year 2 in a 3-year programme. Therefore, after 10 weeks of clinical experience the students had completed two-thirds of the entire academic programme and had a great deal of theory to apply during the clinical experience. Students seem to reach a high level of performance that they consistently demonstrate from 20 weeks onward. Straube and Campbell have quantitatively described how clinical instructors, as a group, use the VAS on the CPI when rating student performance.7 They analyzed 256 CPI forms completed for 182 PT students on clinical placements in three different PT programmes in the Chicago metropolitan area, using evaluative data collected from placements ranging from first to fourth (or final) clinical experience, and similarly found that half of all scores (1,505 of 2,993 rater responses) fell within the 91–100 mm range.7 We also found a ceiling effect present in the final three clinical placements, with most median scores at or very close to 10 cm and small standard deviations, but this ceiling effect was not as absolute in nature as we previously believed. It is interesting to note that although the VAS scores tend to cluster more tightly around the mean at advancing levels of clinical experience, statistically significant differences continue to be exhibited in the majority of PCs until the last clinical placement, and even then 11 of 23 criteria continue to demonstrate significant change.
It is important to note that in our programme students are placed in three different settings and work with distinctly different patient populations in each of the three senior clinical placements (16–20 weeks, 21–25 weeks, 26–31 weeks); one might therefore expect them to acquire similarly increasing skill levels over the 5 or 6 weeks of each senior placement.

In the 2007/2008 academic year, the University of Saskatchewan transitioned to a 26-month Master of Physical Therapy (MPT) programme. We are now collecting similar data from every clinical placement in the MPT programme, and we have used our experience in patterns of scoring with the PT-CPI to date for a variety of purposes: (1) to determine the pass standard for each of the clinical courses in our new MPT programme; (2) to develop orientation content related to clinical practice courses for MPT students; and (3) to develop content for training CIs. For example, we review patterns of completion of the various PCs in the PT-CPI to emphasize accurate interpretation of the instrument with users. Future research will focus on defining norms for each level of the MPT clinical education component, in order to clarify expectations of clinical competency for PT students at different levels of clinical training.

CONCLUSIONS

The results of this study indicate that the PT-CPI has been used consistently by CIs over time in various levels of clinical placements. In addition, the increase in VAS scores across placements suggests a continual progression in acquisition of clinical competencies as students advance through the programme. Systematic collection of PT-CPI scores from clinical placements is critical for monitoring usage and scoring patterns by CIs. PT education programmes should collect and analyze similar types of data longitudinally in order to establish valid benchmarks for student performance and to monitor the training needs of users.

KEY MESSAGES

What Is Already Known on This Subject

The PT-CPI was launched in 1997, following an extensive testing and development process, and has been shown to be the most reliable and valid clinical performance instrument currently available. The PT-CPI has been used by all Canadian PT education programmes since 2001, but data have not been collected and analyzed over time in order to understand scoring patterns. Although decisions about clinical competency and ability of students to advance to subsequent clinical placements are based on this instrument, we lack adequate evidence on which to base these important decisions. No systematic measurement, analysis, and reporting of pooled PT-CPI data has been published to demonstrate how the instrument is being used and scored by clinical instructors in Canada.

What This Study Adds

To our knowledge, this is the first study of its kind to report data collected by the PT-CPI over five consecutive clinical placements for seven cohorts of physical therapy students in a Canadian PT education programme. The completion rates for some items were lower than in previous published reports. Over time, the visual analogue scale (VAS) scores of PT-CPI items remained consistent for each level of clinical placement. We found large improvements in all VAS scores as students advanced from junior to senior placements. Although ceiling effects have been reported in previous studies, our study demonstrated that significant improvement in scores continues to be seen up to and including the final senior clinical placement.

Proctor PL, Dal Bello-Haas VP, McQuarrie AM, Sheppard MS, Scudds RJ. Scoring of the Physical Therapist Clinical Performance Instrument (PT-CPI): analysis of 7 years of use. Physiother Can. 2010;62:147–154.

References

  • 1.English ML, Wurth RO, Ponsler M, Milam A. Use of the physical therapist clinical performance instrument as a grading tool as reported by academic coordinators of clinical education. J Phys Ther Educ. 2004;18(1):87–91. [Google Scholar]
  • 2.Loomis J. Evaluating clinical competence of physical therapy students. Part 1: the development of an instrument. Physiother Can. 1985;37:83–9. [PubMed] [Google Scholar]
  • 3.Loomis J. Evaluating clinical competence of physical therapy students. Part 2: assessing the reliability, validity and usability of a new instrument. Physiother Can. 1985;37:91–8. [PubMed] [Google Scholar]
  • 4.American Physical Therapy Association. Physical Therapist Clinical Performance Instrument. Alexandria, VA: The Association; 1997. [Google Scholar]
  • 5.Final Report: Task Force on Clinical Education (1992–1994) American Physical Therapy Association Board of Directors; 1994. Program 60, Education Division, Exhibit 45. [Google Scholar]
  • 6.Task Force for the Development of Student Clinical Performance Instruments. The development and testing of APTA clinical performance instruments. Phys Ther. 2002;82:329–53. [PubMed] [Google Scholar]
  • 7.Straube D, Campbell SK. Rater discrimination using the visual analog scale of the physical therapist clinical performance instrument. J Phys Ther Educ. 2003;17(1):33–8. [Google Scholar]
  • 8.Adams CL, Glavin K, Hutchins K, Lee T, Zimmerman C. An evaluation of the internal reliability, construct validity, and predictive validity of the physical therapist clinical performance instrument (PT CPI). J Phys Ther Educ. 2008;22(2):42–50. [Google Scholar]
  • 9.Sliwinski MM, Schultze K, Hansen RL, Malta S, Babyar SR. Clinical performance expectations: a preliminary study comparing physical therapist students, clinical instructors and academic faculty. J Phys Ther Educ. 2004;18(1):50–7. [Google Scholar]

Articles from Physiotherapy Canada are provided here courtesy of University of Toronto Press and the Canadian Physiotherapy Association
