J Hosp Med. 2022 Feb 14;17(3):176–180. doi: 10.1002/jhm.12790

Development of a novel hospitalist advanced practice provider assessment instrument: A pilot study

Amteshwar Singh 1, David Klimpl 2, Flora Kisuule 1, Tracy Cardin 3, Sean Tackett 4,5, Ishaan Gupta 1, Kimberly Blum 1, Kinsey Wimmer 2, Scott Wright 4, Jorie Colbert‐Getz 6
PMCID: PMC9305217  PMID: 35504586

Abstract

Advanced practice providers (APPs) graduate from school with variable hospitalist experience. While hospitalist‐specific onboarding is recommended for hospitalist APPs, no standard method currently exists to assess their readiness for practice. We created a 17‐item instrument called the Cardin Hospitalist Advanced Practice Provider‐Readiness Assessment (CHAPP‐RA) to assess APPs' readiness for practice using a milestones‐based scale. We piloted CHAPP‐RA at a single site where 11 APPs with varied experience were rated by 30 supervising physicians. Supervisors also provided global ratings for overall performance. We investigated the feasibility of CHAPP‐RA and collected validity evidence for the interpretation of scores. The mean time to complete one CHAPP‐RA was 10.5 min. Supervisors rated novice APPs lower than more experienced APPs, p ≤ .001. CHAPP‐RA ratings also correlated strongly with global ratings. CHAPP‐RA is feasible to implement and has initial validity evidence.

INTRODUCTION

Advanced practice providers (APPs), 1 defined as nurse practitioners (NPs) and physician assistants (PAs), form an integral part of the hospitalist workforce. The percentage of hospital medicine groups (HMGs) that employ APPs increased from 66% in 2014 to 83% in 2020. 2 Experienced hospitalist APPs have achieved patient outcomes similar to those of hospitalist physicians. 3 APPs undergo variable training in hospital medicine prior to entering the workforce. NP and PA schools are not required to include education in hospital medicine, though some do provide this option. 4 Postgraduate training in hospital medicine is likewise optional, available through programs such as hospital medicine fellowships. 5 Once hired, and after variable onboarding, most hospitalist APPs work with considerable autonomy. 6 This transition from student to clinical provider can be jarring for newly graduated APPs, 7 and an instrument that enables HMGs to efficiently assess an APP's readiness for practice could support their development.

To date, no standardized method exists to assess an APP's readiness for hospitalist practice. To address this gap, we developed a milestone‐based assessment instrument, the Cardin Hospitalist Advanced Practice Provider‐Readiness Assessment (CHAPP‐RA). In this report, we describe the CHAPP‐RA, results from a test of its feasibility in practice settings, and validity evidence 8 for the interpretation of its scores.

METHODS

Participants and setting

A total of 11 APPs and 30 physicians participated in this study from August to September 2020 at Johns Hopkins Bayview Medical Center, Baltimore, a 420‐bed Level 2 trauma center.

Instrument development: Content validity evidence

One author (T. C.) drafted a list of 44 clinical practice items pertinent to hospitalist APPs based on existing instruments for assessing nurses 7 and NPs. 9 These original items were refined by a drafting team utilizing NP/PA core competencies, 10 Society of Hospital Medicine's (SHM's) core competencies, 11 ACGME internal medicine milestones, 12 and APP assessment instruments used in our HMGs. The drafting team included APPs, HMG leaders, and hospitalist educators who practice with APPs. To avoid bias, the drafting team was excluded from piloting the tool. During the iterative process, nonessential items like “APP is comfortable with their professional identity” and “function as a resource to other healthcare professionals” were eliminated. Common concepts such as “know the limits of their knowledge and when to seek consultation” and “able to identify the appropriate need for specialty consultation” were consolidated. These efforts contributed to content validity evidence.

Iterative revisions refined the initial 44 items to a 17‐item instrument, which was then aligned with the ACGME milestones format to create five levels of ability. 13 These levels, (1) novice, (2) advanced beginner, (3) competent, (4) proficient, and (5) expert/coach, were expanded to a nine‐point scale to permit rating of intermediate performance. The five level labels themselves were not shown on the CHAPP‐RA, to avoid biasing raters.

Before administering the CHAPP‐RA, we conducted think‐aloud sessions with five physicians who routinely supervise APPs. Small edits to wording were made to improve clarity.

Data collection

Physician raters received an emailed Qualtrics link to complete the CHAPP‐RA after working three consecutive shifts directly supervising an APP. Because some raters worked with APPs for multiple stretches of consecutive shifts (range: 3–5), only the longest stretch was evaluated.

Data analysis

Response process validity evidence

Response process validity evidence was collected by investigating the time to complete each instrument (using the Qualtrics timer), the range of rating options selected for each item, and how ratings compared based on APPs' level of experience. The frequency and percentage of ratings for each item were computed to ensure raters used the full milestones scale. To determine whether ratings varied with clinical experience, we grouped APPs by years of hospitalist practice (novice APPs were postgraduate fellows with no hospitalist experience, mid‐career APPs had 1–5 years of experience, and senior APPs had >5 years of experience). Group means for each item were compared using t‐tests for pairwise comparisons.
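
As a rough illustration of these pairwise comparisons (the study's analysis was run in Stata, noted under Data analysis; this Python/SciPy sketch is only a hypothetical reconstruction, and the file and column names are assumptions, not the authors' code):

```python
# Hypothetical sketch of per-item pairwise t-tests by experience group (illustrative only).
# Assumed input: one row per completed CHAPP-RA item rating with columns
# app_id, experience_group (novice/mid_career/senior), item, rating (1-9).
import pandas as pd
from scipy import stats

ratings = pd.read_csv("chapp_ra_ratings.csv")

def compare_groups(item_df, group_a, group_b):
    """Independent-samples t-test on item ratings for two experience groups."""
    a = item_df.loc[item_df["experience_group"] == group_a, "rating"]
    b = item_df.loc[item_df["experience_group"] == group_b, "rating"]
    return stats.ttest_ind(a, b)

for item, item_df in ratings.groupby("item"):
    t_stat, p_value = compare_groups(item_df, "novice", "mid_career")
    print(f"{item}: novice vs. mid-career t = {t_stat:.2f}, p = {p_value:.3f}")
```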

Relationship to other variables validity evidence

To gather relationship to other variables validity evidence, we asked raters their level of agreement with two global statements, “APP is ready to practice independently” and “I would feel comfortable having this APP care for my loved ones,” on a 5‐point Likert scale (1 = strongly disagree, 5 = strongly agree). CHAPP‐RA ratings were correlated with the two global ratings using Spearman's ρ, with ρ ≥ 0.50 interpreted as a strong correlation between global ratings and average CHAPP‐RA ratings.
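
A minimal sketch of this correlation step, again assuming a hypothetical per‐assessment table with a mean CHAPP‐RA rating and a 1–5 global rating (the actual analysis used Stata, as noted below):

```python
# Hypothetical sketch of the Spearman correlation with a global rating (illustrative only).
# Assumed input: one row per completed assessment, with the mean of the 17 item ratings
# and the rater's 1-5 global rating; column names are assumptions.
import pandas as pd
from scipy.stats import spearmanr

assessments = pd.read_csv("chapp_ra_assessments.csv")
rho, p_value = spearmanr(assessments["chapp_ra_mean"], assessments["global_ready_to_practice"])
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f}); rho >= 0.50 read as a strong correlation")
```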

We analyzed data using Stata version 15. This study was exempted by a Johns Hopkins Medicine IRB (IRB00276810).

RESULTS

The study included 11 APPs (9 PAs and 2 NPs; 8 females and 3 males). Four were novice, four were mid‐career, and three were senior. The 30 raters had an average of 7.1 years of clinical experience (SD: 5.7). Raters completed 42 of the 52 assigned CHAPP‐RAs. The number of CHAPP‐RAs completed per rater ranged from 1 to 7 (mean: 2.3, SD: 1.3).

Response process validity evidence

The average time to complete a CHAPP‐RA was 10.5 min (range: 2.0–28.0 min). Table 1 shows, for each CHAPP‐RA item, the percentage of raters able to observe it along with descriptive statistics. Raters used a broad range of the 9‐point milestones scale. The four items observed by all raters were “assessment/plan of care,” “documentation/written communication,” “time management/reliability,” and “collaboration with a multidisciplinary team.” Thirteen items were “not observed” by at least one rater. Of these, the two items observed least often were “identification and management of the acutely ill” (37% not observed) and “history taking” (27%). Table 2 shows average ratings for novice, mid‐career, and senior APPs. Novice APPs were rated statistically significantly lower than all other APPs, p < .001 for all comparisons. No statistically significant differences in item scores were found between mid‐career and senior APPs.

Table 1.

Descriptive statistics for each APP's item rating on the CHAPP‐RA by 30 physicians

| CHAPP‐RA item | Percentage of raters able to observe the item | Mean rating (SD) | Range of ratings |
| --- | --- | --- | --- |
| History‐taking | 80% | 6.3 (2.0) | 2–9 |
| Physical exam | 87% | 6.3 (1.7) | 2–9 |
| Medication reconciliation | 93% | 6.5 (1.8) | 1–9 |
| Clinical reasoning | 97% | 6.0 (1.8) | 3–9 |
| Assessment/plan of care | 100% | 6.0 (1.9) | 2–9 |
| Documentation/written communication | 100% | 6.7 (1.6) | 3–9 |
| Presentation/oral communication | 97% | 6.5 (1.7) | 2–9 |
| Identification and management of the acutely ill | 73% | 6.2 (2.1) | 3–9 |
| Subspecialty and multidisciplinary consultation | 90% | 6.6 (1.8) | 3–9 |
| Knowledge of labs, images, and procedures | 97% | 6.3 (1.9) | 3–9 |
| Time management/reliability | 100% | 6.5 (1.9) | 1–9 |
| Socioeconomic barriers to care | 87% | 6.9 (1.8) | 3–9 |
| Patient interview | 90% | 6.8 (1.6) | 3–9 |
| Patient and family discussions | 87% | 6.7 (1.7) | 3–9 |
| Unique patient characteristics | 87% | 6.9 (1.6) | 2–9 |
| Collaborating with a multidisciplinary team | 100% | 7.0 (1.6) | 3–9 |
| Self‐improvement | 93% | 6.7 (1.7) | 3–9 |

Abbreviations: APP, advanced practice provider; CHAPP‐RA, Cardin Hospitalist Advanced Practice Provider‐Readiness Assessment.

Table 2.

Average CHAPP‐RA item ratings for novice, mid‐career, and senior APPs with standard deviations in parentheses

| CHAPP‐RA item a,b | Novice APP (n = 4) | Mid‐career APP (n = 4) | Senior APP (n = 3) |
| --- | --- | --- | --- |
| History‐taking | | | |
| Physical exam | 5.3 (1.4) | 7.6 (1.0) | 7.9 (1.1) |
| Medication reconciliation | 5.4 (1.4) | 7.9 (0.9) | 8.3 (0.8) |
| Clinical reasoning | 4.8 (1.3) | 7.7 (1.2) | 8.0 (1.1) |
| Assessment/plan of care | 4.9 (1.4) | 7.6 (1.2) | 7.9 (1.4) |
| Documentation/written communication | 5.8 (1.5) | 7.7 (1.2) | 8.1 (1.1) |
| Presentation/oral communication | 5.6 (1.4) | 7.7 (1.2) | 8.0 (1.3) |
| Identification and management of the acutely ill | 4.7 (1.4) | 7.8 (1.1) | 8.1 (1.3) |
| Subspecialty and multidisciplinary consultation | 5.5 (1.6) | 7.9 (1.2) | 8.0 (1.1) |
| Knowledge of labs, images, and procedures | 5.1 (1.5) | 7.7 (1.2) | 8.0 (1.2) |
| Time management/reliability | 5.6 (1.7) | 8.2 (0.6) | 8.2 (0.9) |
| Socioeconomic barriers to care | 5.7 (1.6) | 8.1 (0.9) | 8.3 (0.8) |
| Patient interview | 5.9 (1.4) | 8.0 (0.8) | 8.0 (1.1) |
| Patient and family discussions | 5.7 (1.3) | 8.0 (0.7) | 8.2 (1.0) |
| Unique patient characteristics | 5.8 (1.5) | 8.1 (0.8) | 8.1 (0.9) |
| Collaborating with a multidisciplinary team | 6.1 (1.5) | 8.4 (0.7) | 8.3 (0.8) |
| Self‐improvement | 6.0 (1.6) | 8.0 (0.7) | 8.1 (0.9) |

Abbreviations: APP, advanced practice provider; CHAPP‐RA, Cardin Hospitalist Advanced Practice Provider‐Readiness Assessment.

a p < .001 for all row‐wise comparisons of novice versus mid‐career and novice versus senior.

b p > .05 for all row‐wise comparisons of mid‐career versus senior.

Relationship to other variables validity evidence

Supplement A provides correlation values between the two global ratings and the CHAPP‐RA item ratings. All correlations exceeded 0.50 (range: 0.82–0.96 across both sets of correlations), suggesting a strong relationship between CHAPP‐RA item ratings and global assessments of an APP.

DISCUSSION

This study provides content, response process, and relationship to other variables validity evidence for the interpretation of scores from a new instrument, the CHAPP‐RA, which assesses hospitalist APPs' readiness for practice. The CHAPP‐RA was used to assess trainees and more established providers. The SHM statement paper, “Hospital Medicine NPPA Practice Integration and Optimization,” recommends onboarding hospitalist APPs using a standardized assessment instrument. 14 We designed the CHAPP‐RA with this recommendation in mind. We intended for it to be easy to implement, to require minimal instruction prior to use, and to use a milestone format already familiar to medical providers. Checklists for APP onboarding processes have been created in the past; however, they have not reported substantial validity evidence. 15 In this pilot study, we report initial validity evidence for an instrument that could facilitate standardized clinical assessment during onboarding and subsequently throughout APPs' practice.

APPs with less than 1 year of hospitalist experience scored lower than more experienced colleagues. Hence, we believe the CHAPP‐RA could be most useful in assessing less experienced APPs. Potential applications for such an instrument include, but are not limited to, individualizing onboarding to the strengths and learning edges of new hires, assessing the level of supervision needed, and creating specific competency goals for future assessments.

We expected that all CHAPP‐RA items could be assessed across 36 h of supervision. However, two items, “identification and management of the acutely ill” and “history taking,” were listed as “unable to rate” in over a quarter of occurrences. We realized that observing an initial history would occur only if the dyad was admitting new patients, which was not guaranteed in our study. Similarly, observing the management of an acutely ill patient would require direct observation of an APP managing a decompensating patient (e.g., shock, unstable arrhythmias, acute respiratory failure), which was not guaranteed during floor rounding shifts. Admitting shifts, rounding in higher‐acuity units, and participation in rapid response teams could ensure such observation.

Some limitations of this study merit discussion. The pilot was performed on a small sample at a single hospital with only two NPs. Because the number of CHAPP‐RAs completed by each rater varied, we aggregated ratings by APP, assumed that APPs' performances were independent of one another, and used t‐tests and Spearman's ρ accordingly. The large number of pairwise t‐tests inflates the familywise type I error rate, which should be considered when interpreting the p values for pairwise comparisons. Large‐scale studies are needed to collect evidence for internal consistency, reliability, consequences, and responsiveness. While we believe the instrument would be useful for tailoring education and onboarding, this needs further investigation.
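
As a purely illustrative note on this multiple‐comparison limitation (no such correction was applied in the study), a Bonferroni adjustment divides the significance threshold by the number of comparisons; the test count below is an assumption based on 17 items and two group contrasts:

```python
# Hypothetical illustration of a familywise error (Bonferroni) correction; not part of the study's analysis.
# Roughly 17 items x 2 contrasts (novice vs. mid-career, novice vs. senior) = 34 tests sharing one alpha.
n_tests = 17 * 2
alpha = 0.05
per_test_threshold = alpha / n_tests  # about 0.0015
print(f"Each pairwise p value would need to fall below {per_test_threshold:.4f} to remain significant.")
```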

CONCLUSION

Assessing readiness for practice is a crucial step in the onboarding and continued education of hospitalist APPs. This study presents a readiness for practice assessment instrument that was feasible to implement and has established initial validity evidence for the interpretation of scores. Further study is needed to validate the CHAPP‐RA on a larger scale.

CONFLICT OF INTEREST

The authors report no relevant conflicts of interest.

Supporting information

Supporting information.

ACKNOWLEDGMENTS

The authors would like to thank the (1) Master of Education in the Health Professions program at Johns Hopkins University School of Education, (2) Division of Hospital Medicine at Johns Hopkins Bayview Medical Center, and (3) Advanced Practice Fellowship in Hospital Medicine at the University of Colorado School of Medicine for their suggestions and guidance on this scholarly project. Dr. Wright receives support as the Anne Gaines and G. Thomas Miller Professor of Medicine through the Johns Hopkins Center for Innovative Medicine.

Singh A, Klimpl D, Kisuule F, et al. Development of a novel hospitalist advanced practice provider assessment instrument: A pilot study. J Hosp Med. 2022;17:176‐180. 10.1002/jhm.12790

Amteshwar Singh and David Klimpl contributed equally to this study.

REFERENCES
