Abstract
The objective of this study was to compare unannounced standardized patient (USP) and patient reports of care. Patient satisfaction surveys and USP checklist results collected at an urban, public hospital were reviewed to identify items included in both instruments, and responses to these shared items were compared. Qualitative commentary was reviewed to better understand USP and patient satisfaction survey data. Analyses included χ2 tests and the Mann-Whitney U test. Patients provided significantly higher ratings on 10 of the 11 items when compared to USPs. USPs may provide a more objective perspective on a clinical encounter than a real patient, reinforcing the notion that real patients' ratings skew overly positive.
Keywords: outpatient satisfaction data, patient expectations, patient feedback, patient satisfaction, quality improvement
Introduction
Improving the patient experience of health care delivery is a core element of the Triple Aim, the Institute for Healthcare Improvement's three-dimensional framework for optimizing health system performance (1). To accomplish this aim, healthcare systems pursue patient-centered care by assessing patient satisfaction and patient experience (2). Although satisfaction and experience are often used interchangeably in the context of quality assurance, these terms represent distinct yet intertwined concepts (3).
Patient satisfaction is inextricably linked to the patient's expectations and preconceptions of the visit, and though satisfaction scores may be used to assess care quality, they are not methodologically robust measures (4). Questionnaires asking patients to rate how satisfied they are with their care tend to elicit overly positive ratings and may not reflect the specific processes that affect the quality of care delivery (5). Patient experience is a more explicit measure of healthcare quality and processes; the Agency for Healthcare Research and Quality's standard assessment tool, the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey, defines it as “including several aspects of health care delivery that patients value highly when they seek and receive care, such as getting timely appointments, easy access to information, and good communication with health care providers” (6). Assessments of patient satisfaction and patient experience often capture both clinical and nonclinical factors and are influenced by the specific measures used as well as by patient, setting, and provider characteristics (7).
Since its development in 1995, CAHPS has become a widely used method of measuring patient experience for quality improvement efforts. In some hospital systems, insurance reimbursement is adjusted based upon the quality scores received from these surveys (8). However, CAHPS may not be entirely reliable because patients tend to provide uniformly high scores; one study documented that 79% of patients rated their providers as 9 or 10 on a 10-point scale (9). Other issues with traditional patient experience surveys include the impact of survey administration on results (i.e., timing relative to the visit and collection method), as well as patient age, gender, and other demographic and personal characteristics (10). Furthermore, there are significant costs to fielding surveys in order to obtain sample sizes that ensure reliable results.
Unannounced standardized patients (USPs) provide an innovative method for assessing patient experience and satisfaction, bypassing many of the methodological challenges associated with patient surveys. USPs are “secret shoppers,” actors extensively trained to portray actual patients incognito and document their experiences of routine healthcare visits. USPs can be used in a variety of clinical settings to assess a range of measures including communication skills, clinical assessment and care management, physical examination skills, diagnostic measures, and laboratory testing (11,12). They also assess the entire “door to door” experience of a clinical visit from the vantage point of the patient, including visit flow, interaction with office staff, and patient safety indicators such as handwashing (12). Using a behaviorally anchored checklist, USPs can be trained, tested, and calibrated to make reliable and valid assessments of the full range of patient experiences before, during, and after a clinical encounter (13).
Although USPs provide reliable ratings of providers and clinic experience, differences between the perspectives of USPs and real patients are not well understood. USP feedback on clinical practice may be more objective, or more negative, than that captured by current patient surveys. The few studies that have explored this question focus solely on the assessment of communication skills and suggest that USPs are more discerning and provide more reliable ratings than real patients (14–16).
To date, there have been no studies examining the congruence between USP and real patient ratings of clinic experience. To address this gap in the literature, we used cross-sectional survey data to compare experiences during primary care clinic visits as assessed by both real patients and USPs. We hypothesized that USP-reported experience would be broadly consistent with real patient ratings while providing more explicit and actionable feedback on the encounter.
Methods
Patient experience feedback surveys were collected from 2 populations: (1) real patients (RPs) who spoke either English or Spanish and (2) USPs who completed visits to English-speaking clinicians in the NYC Health + Hospitals/Bellevue Adult Primary Care Clinic (APCC) between 2017 and 2018. The APCC is a busy clinic within an urban, safety net hospital system in New York City that provides clinical care to approximately 22,000 adult primary care patients (17). Patient experience and satisfaction surveys were completed as part of an ongoing quality improvement initiative. A convenience sample of patients, including both new patients and revisits, was approached upon completion of their primary care visit by research assistants (RAs) and asked to answer questions focused on their experience at the clinic. The RAs excluded patients who declined to participate or who indicated that they did not speak either English or Spanish. All surveys were anonymous.
USPs participated in a minimum of 6 hours of character and checklist training prior to completing a clinic visit. The USP visits were part of a larger, ongoing educational evaluation of internal medicine residents in the NYU Langone Health system, who receive feedback on their performance throughout their training program (18,19). The APCC uses USPs routinely to evaluate the patient experience and reviews aggregate USP performance data periodically with clinic staff and leadership. USPs were registered as new patients at the clinic to minimize their detection by clinic staff, who were informed that USPs would be visiting the clinic at random for quality improvement purposes. Upon visit completion, USPs returned to a trained research assistant's office and completed a post-visit checklist and debrief. During orientation, residents were asked to consent to have their routinely collected educational data entered into a research registry (NYU IRB #i06-683), and only USP visits with consenting residents were included in the current study.
Measures and Data Collection
The real patient survey consisted of 37 items describing the visit, including registration, screening by the medical assistant, the encounter with the physician, checkout, and general clinic functioning. Response options included yes/no items and 3- and 5-point Likert-style category rating scales, with behavioral anchors for each point on the scale. Patients self-administered the survey on paper or an iPad in the waiting room. Patients were also given the option to have the RA administer the survey verbally.
The USP post-visit checklist focused on the encounter with the physician (assessment of physician communication, history gathering, counseling, and treatment plan/management) (20) and the patient experience from the time the USP entered the clinic until they left (and, in some cases, follow-up telephone calls to address patient questions). For most visits at the APCC, clerical associates and medical assistants are the primary team members who interact with a patient prior to the physician, so nurses were excluded from the assessment. Experiences captured in the checklist included interactions with staff, patient safety items, ease of navigating the system, and impressions of clinic microsystem functioning. In our work, these data are traditionally gathered to provide a “clinical microsystem” report that is shared with clinic staff and clinic administration as a quality improvement measure. The checklist questions included dichotomous yes/no response options, 3- or 5-point Likert-style behavioral anchors, and open-ended text responses for USPs to further describe their experiences.
For the purposes of the current study, we included in our analyses only the 11 questions that appeared on both assessments (Table 1). These items targeted 4 areas: ratings of clinic staff (5 items), physician (2 items), team functioning (3 items), and clinic atmosphere (1 item) (21). Response categories across both RP and USP surveys included yes/no questions and 3- and 5-point category response scales. We also examined USP open-ended text responses across 4 items (Table 2).
Table 1.
Questions Appearing on Both Patient Survey and Unannounced Standardized Patient Checklist.
| Domain | Response Options |
|---|---|
| Clerical associate | Yes/No (2-point) |
| Medical assistant | Yes/No (2-point) |
| Physician | 3-point scale |
| Team/clinic | 3-point scale |
| Clinic atmosphere | 5-point scale |
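As a hypothetical illustration of how these 11 shared items might be organized for analysis, the sketch below groups placeholder item keys by the 4 domains described above. The keys and wording are invented for illustration only; they are not the actual survey items.

```python
# Hypothetical organization of the 11 shared items by domain and scale type.
# All item keys below are placeholders, not the study's actual item wording.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class SharedItem:
    key: str     # placeholder column name assumed present in both exports
    domain: str  # one of the 4 domains described in the Measures section
    scale: str   # "yes/no", "3-point", or "5-point"

SHARED_ITEMS = [
    SharedItem("clerk_helpful", "clinic staff", "yes/no"),             # hypothetical
    SharedItem("clerk_explained", "clinic staff", "yes/no"),           # hypothetical
    SharedItem("ma_helpful", "clinic staff", "yes/no"),                # hypothetical
    SharedItem("ma_explained", "clinic staff", "yes/no"),              # hypothetical
    SharedItem("staff_courteous", "clinic staff", "3-point"),          # hypothetical
    SharedItem("md_answered_questions", "physician", "3-point"),       # hypothetical
    SharedItem("md_enough_time", "physician", "3-point"),              # hypothetical
    SharedItem("team_worked_well", "team functioning", "3-point"),     # hypothetical
    SharedItem("followup_info", "team functioning", "3-point"),        # hypothetical
    SharedItem("recommend_clinic", "team functioning", "3-point"),     # hypothetical
    SharedItem("clinic_calm_chaotic", "clinic atmosphere", "5-point"), # hypothetical
]

# Sanity check: domain counts should match the text (5, 2, 3, and 1 items).
print(Counter(item.domain for item in SHARED_ITEMS))
```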
Data Management and Analysis
All survey data were entered and managed in the REDCap application hosted by NYU Langone Health System (22). USP and real patient survey responses were compared using χ2 analyses and Mann-Whitney U (Wilcoxon rank sum) tests. Open-ended text responses were coded by 2 coders (LA and SC) using a deductive thematic approach, following a typical timeline of activities during medical visits (e.g., registration with clerical associates, assessment by medical assistants and physicians, and overall team and clinic atmosphere).
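As a minimal sketch of the statistical approach described above (not the study's actual analysis script), the example below runs a χ2 test on a dichotomous item and a Mann-Whitney U test on an ordinal item using scipy; all counts and responses are hypothetical placeholders.

```python
# Minimal sketch of the analytic approach; all data below are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Dichotomous (yes/no) item: 2x2 table of [yes, no] counts,
# rows = real patients (RP) and unannounced standardized patients (USP).
table = np.array([
    [160, 30],  # hypothetical RP counts
    [95, 82],   # hypothetical USP counts
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")

# Ordinal (3- or 5-point) item: Mann-Whitney U (Wilcoxon rank sum) test.
rp = np.array([1, 2, 2, 3, 2, 1, 4, 2])   # hypothetical RP responses
usp = np.array([3, 4, 2, 5, 3, 3, 4, 2])  # hypothetical USP responses
u, p = mannwhitneyu(rp, usp, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")
```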
Results
A total of 190 real patients completed the survey; 55% of respondents were female and 45% were male. Ages ranged from 21 to over 65 years (2% under 25 years, 27% 25-44 years, 55% 45-64 years, and 16% over 65 years). Only 14% were new visits; most were established patients. Forty-one USPs completed checklists after 177 distinct visits. USPs portrayed patients in their mid-20s to 50s (exact ages of the USPs are not available). Detection rates remained low throughout (less than 10% of visits were detected).
Quantitative Results
USP and real patient ratings of the clinic differed significantly on 10 of the 11 items, with real patients consistently providing higher (more positive) ratings than the USPs. Real patients’ satisfaction ratings of the medical assistant were twice as high as the USPs’ (Figure 1). Further, real patients were significantly more likely to report that their clinician gave them sufficient information regarding follow-up (85% RP vs 53% USP, P = .001), that the clinician had answered all of their questions (89% vs 55%, P = .001), and that the clinician had spent enough time with them (86% vs 74%, P = .020). Nearly 75% of the real patients noted that the clinic staff “functioned well as a team” while less than half of the USPs reported the same. Real patients were also more likely to recommend the clinic to a friend when compared to the USPs (Figure 2).
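As a rough, illustrative check of one of these comparisons, the sketch below reconstructs approximate cell counts for the follow-up-information item from the reported percentages and sample sizes (85% of 190 real patients vs 53% of 177 USP visits) and runs a χ2 test; because the counts are approximations, the exact statistic is illustrative only.

```python
# Illustrative re-check of the follow-up-information item using counts
# approximated from the reported percentages; not an exact reproduction.
import numpy as np
from scipy.stats import chi2_contingency

rp_yes = round(0.85 * 190)   # ~162, approximated from the reported 85%
usp_yes = round(0.53 * 177)  # ~94, approximated from the reported 53%
table = np.array([
    [rp_yes, 190 - rp_yes],
    [usp_yes, 177 - usp_yes],
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p = {p:.2g}")  # p falls far below .001,
# consistent with the reported significance of this comparison.
```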
Figure 1.
Percent reporting that the clerical associate and medical assistant (MA) were helpful and explained instructions clearly: USPs versus real patients.
Figure 2.
Percent endorsement of USPs versus real patients on items with 3-point response options.
Results of the Mann-Whitney U test indicated that USPs reported significantly higher mean ranks for chaos in the clinic atmosphere than real patients (P = .001). Real patients were more likely to rate the clinic as calm (1 or 2 on the category scale) than USPs (54% RP vs 32% USP), while USPs were more likely to rate the clinic as chaotic (4 or 5 on the category scale) than real patients (26% USP vs 7% RP). The median response was 2 for RPs and 3 for USPs, indicating that USPs tended to report higher levels of clinic busyness.
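To illustrate this ordinal comparison, the sketch below simulates 5-point responses whose category proportions are hypothetical fills consistent with the reported summaries (RP: 54% rating 1-2, 7% rating 4-5, median 2; USP: 32% rating 1-2, 26% rating 4-5, median 3; the full distributions were not reported) and runs the Mann-Whitney U test.

```python
# Illustrative reconstruction of the 5-point calm (1) to chaotic (5) item.
# Per-category proportions are hypothetical fills matching the reported
# summary statistics; the actual response distributions were not reported.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
rp = rng.choice([1, 2, 3, 4, 5], size=190, p=[0.20, 0.34, 0.39, 0.05, 0.02])
usp = rng.choice([1, 2, 3, 4, 5], size=177, p=[0.10, 0.22, 0.42, 0.16, 0.10])

print("RP median:", np.median(rp), "USP median:", np.median(usp))
u, p = mannwhitneyu(rp, usp, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.3g}")
```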
Qualitative Commentary
Open-ended text responses from USPs provided detailed descriptions of their experience of being a patient in the clinic (Table 2). A total of 537 comments (N = 104 on the clerical associate, 129 on the medical assistant, 129 on the provider, and 175 on clinic function) were provided by USPs across the 4 open-ended items included in the checklist. Although real patients were provided with an open-ended section, few provided actionable information. USP responses noted whether processes and procedures were completed during visits, described interpersonal interactions with staff and clinicians, and covered overall impressions of individual, team, and clinic functioning. Comments were often quite detailed, for example, describing which types of patient identifiers were used and which screening procedures were completed or omitted. USPs also noted how clinic staff responded to unique challenges, such as a computer crashing. They reported exemplary behavior exhibited by clinic personnel, including interactions they witnessed between staff and other patients (“offering newspaper to patient who had no reading material”). The specificity of comments also revealed opportunities for improvement (e.g., improved signage, changes to checkout procedures, and gaps in patient education about follow-up plans) and supported the notion that USP checklists are a comprehensive measure of an entire clinical encounter.
Table 2.
Examples of USP Commentary by Assessment Domain and Actionable Recommendations.
| Domain Assessed | Item | Sample Commentary from USPs | Actionable Recommendations to Clinic |
|---|---|---|---|
| Clerical associate | Describe your encounter with the clerical associate | “The first [person] I spoke with was not that friendly, and she was not super helpful, shuffling me between desks to get paperwork. However, her shift ended and the second [person] I saw was very kind and gave me all the required information, and explained how my visit today would work.” “…the [person] I spoke with really wanted to make sure I got on the pay-scale or something so the cost of my visit wouldn't be so high, and [they] gave me a lot of recommendations for how to do so. [They were] super nice and concerned and I really felt cared for and looked after.” “The clerk is very respectful and professional with patients coming in for their appointments. One patient came asking for reading material and [they] offered a newspaper for this gentleman to read while waiting for his name to be called. Visitors were checked in properly in a timely fashion and there was no long lines in the waiting area.” “I kept having to come back and forth to the desk to ask what I was supposed to do and where I was supposed to go because they didn't really lay it out for me.” “Working with the clerk team as a whole, they were generally professional, and helpful when asked information. They might have been more proactive about educating people about their whole system, with some signs perhaps. One clerk was good about going down the line and instructing people as to what to do.” | |
| Medical assistant | Describe your encounter with the medical assistant | “This assistant was great. [They were] very thorough in [their] explanations and made me laugh which made me feel comfortable. [They] had a great rapport with the other employees. [The assistant] looked like a core part of the team.” “I loved my assistant today. [They were] super friendly and when I asked [them] about a doctor's note, [they] explained to me exactly what would happen. The computer systems were down today, and everyone was really nice about it and made sure to let me know what was going on. [The assistant] also told me what [they were] going to do before [they] did it, and announced my numbers after they came up, which was very helpful and something not everyone does.” “My MA was not particularly warm or friendly towards me, and [they] also did not comment on my condition or ask me what brought me into the office today. [They] worked pretty quickly (my time with her totaled 8 min). [They] confused me a little bit by not inquiring about what was wrong with me, and I did not observe [them] washing [their] hands, though perhaps [they] did and I did not notice. [They] also asked me for my height though [they] did not measure it, and [they] did not apologize for my wait time. However, it was a quick in and out, and post-visit, [they] remembered my name and who I was to bring me the referral paperwork.” “[They were] very nice, but [they] didn't wash [their] hands.” “No greeting and [they] didn't wear a name tag. [They] verified my phone number but didn't check my date of birth. [They] didn't explain anything. [Their] attitude was friendly but the fact that [they were] chewing gum the whole time is really annoying and came across as unprofessional. I won't call [them] disrespectful, but I do think [their] attitude is way too casual. The loud sound of gum chewing is really irritating.” “Didn't explain payment station well; by the time it was resolved I had missed the appointment time.” “[They were] cold and hardly explained anything [they were] doing. As if [they] expected me to know what to do next. And then asked me to fill out PHQ without explaining why.” “[They were] warm and courteous. The room we were in was not well-configured, and it was difficult for me to get behind the chair to the scale to be weighed and have my height measured (hard to navigate—quite awkward for someone in pain). [They] did not tell me [their] name, but I heard someone else [say it]. I would like for [them] to have introduced [their self] to me.” | |
| Physician | Describe the physician's communication | “Maybe because I waited so long to finally see the doctor that I felt a bit rushed when I would answer questions. I don't think [they] cut me off, but I think [they] quickly moved on to [their] next question without further delving deeper. I remember feeling weird for telling [them] about having some type of anxiety to have to use my inhaler. [They] didn't acknowledge it and I felt bad for sharing that info.” “[They] listened to me very carefully and took the time to assess the problem. Along the way, [they] would check in to make sure we were on the same page and that I understood everything [they were] saying.” “It was clear that the doctor had more experience. At the same time, it did seem that [they were] aware of time in that the visit was moving at a quick pace, but [they] covered the important points, and was thorough with the physical exam. But there was something about the pace which then subconsciously made me feel like I shouldn't ask more questions, or I needed to be behaving at [their] same pace. So we did get through most things, but it felt quick…but complete.” “In fairness, today clinic was more busy and understaffed. I had to wait over an hour to see the resident. When [they] entered room he seemed busy and preoccupied: I thought [they were] rushing from last patient and still thinking about that person. It took doctor a few minutes to focus on me. But then the visit was OK.” “I felt respected and listened to. I would have preferred [them] to ask questions one at a time. I was asked lots of multiple questions which was frustrating. I found myself saying things like: 'I don't have that, but I do have this…' I also felt a little rushed, this may have been due to the fact that I told [them] I had been waiting for a long time. I also feel there were questions he did not ask for example: Family Medical History, which would have allowed me to tell more of my story. The encounter seemed to go by quickly.” “Excellent communicator. Human-to-human. Never felt like [they] wanted to rush out of the room to get onto the next patient. Wanted to get the whole story/scenario from me, wanted to know about my life so [they] could better assist me, was always clear, nice, respectful, helpful. Intentions were great.” “I experienced [them] as rushed and only interested in gathering information. I did not experience any empathy from [them] for my situation. There was little eye contact, as [they] spent most of [their] time looking at [their] computer screen while typing, and this made me feel disconnected from [them]. I did not hear any explanation of what might be causing my pain. I was told what to do, rather than having my feelings/thoughts elicited on the subject.” | |
| Team/clinic | Describe your experience at the clinic | “In my experience, I've noticed the longest wait time is when you're waiting for them to give you a follow up slip that just has a phone number for me to call to make additional appointments. I always sit there for about an hour waiting to get a phone number and my prescription. This always makes the appointments feel like a drag. Today would have been concise and pleasant and easy but that wait time for the final instruction is excessive.” “My experience was positive, but I arrived for a 2:00 appointment, saw the nurse at 2:20, and then didn't see the doctor until 3, and I left at 4. I felt like it took a little while, but other than that, my experience was positive. Also I wasn't sure if the signs that said 'Wait to be called' meant even when I first arrived, like I shouldn't check in until they called my name.” “The [person] at the front desk was extremely kind as well as the physician and assistant. It's the environment you want to be in when you need to feel taken care of.” “My experience with this visit is speedy and pleasant. Everyone was professional and friendly towards the visiting patients. Names were called out clearly for everyone to hear. No confusion as the assistant came out to the waiting area to make sure that names called out were heard.” “The clinic can be hectic at times (like today), you can easily be lost in the system if you don't know where to go. Heck, I know the system here for a while now and I almost lost my place in today's appointment.” “Its busy, and there are people here who need a lot of attention. The staff is doing their best, but it's clear they are sometimes being spread thin” “Always the people who staff your clinic are amazing. They are dealing with tons of people with tons of concerns in tons of languages. I saw evidence of a sign-language interpreter, I saw evidence of bilingual desk attendants. These ladies are unsung heroines. They could definitely use an appreciation day.” “The person with whom I spoke at the front desk was pleasant and told me that I should see financial aid after my visit, or I would get the full bill. The MA was warm and probably the most helpful person with whom I interacted. [They were] able to give me some guidance about the follow-up with a PT, however, there was no appointment available with the doctor for a follow-up. I was supposed to call about that, but I was not sure what the time frame was. The doctor did not engage with me on a personal level, which I would have appreciated. [They] did tell me what [they] thought needed to happen, but my input was not really sought. Some investigation into my financial difficulties and stress would have been appreciated. I did not get all of the help that I could have today.” | |
Discussion
Although USPs cannot replace the direct report of patient experience as captured through patient surveys, this study demonstrates that they can accurately reflect real patient perspectives and provide additional valuable information about clinical teams and systems. USP ratings of clinical encounters were similar in pattern to real patient ratings, but they were both lower and more discerning. Trained to observe specific behaviors and given feedback during training to ensure consistency of ratings, USPs may be more objective than real patients. They also typically have exposure to a diverse range of health-related scenarios as standardized patients in other contexts and can provide an informed, standardized perspective on care quality, minimizing differing expectations as well as the “halo effect.” The larger range of USP scores compared to RP scores suggests that USPs may be more useful for quality improvement assessment because they do not exhibit the “ceiling effect” seen in real patient surveys. North et al. note that CAHPS scores lack specificity and that items are highly intercorrelated, making it difficult to delineate concrete and actionable behaviors for providers to target in quality improvement efforts (23). They posit that this relationship is due to the time lag between the visit and completion of the survey, but patient expectations and the potential for cognitive dissonance may also limit patients' ability to be critical of their clinicians. Patients at safety net sites and those with low socioeconomic status may be resigned to less optimal service or have difficulty voicing critiques due to power differentials or fear of mistreatment by the health system. It is also possible that the limited qualitative data provided by real patients compared to USPs reflect lower health literacy and language barriers in the survey population. USPs' training and standardized health literacy can illuminate gaps in care that patients might not be able to articulate or acknowledge.
The breadth of USP ratings, as opposed to the clustering of real patients' ratings, is consistent with previous work. Fiscella and colleagues (16) compared USP and real patient ratings of patient-centered communication and found that mean USP scores were lower, and their standard deviation greater, than those of real patients. Spearman rank-order correlations between USP and real patient ratings were positive though small. Similarly, work by Rezaei et al. (14) found that only 12% of real patients rated the provider as “poor” in communication skills, while 47% of USPs did, further reinforcing the notion that USPs are more discerning.
Although not reported in this paper, USPs can report on the whole visit experience. In our USP program, telephone calls to the clinic after a visit afforded information about the accessibility of the care team for follow-up questions, as well as behaviorally specific, actionable improvement targets for the site and staff. In prior work, we describe how systematic USP data about the implementation of patient-centered care and patient safety measures can improve outcomes (12,24–27).
Despite the advantages of the USP methodology, surveys of real patient satisfaction and experience are irreplaceable and provide essential information that is sensitive to local and cultural norms and perspectives. USPs can offer a more comprehensive assessment suitable for targeted quality improvement efforts, follow up on issues detected by traditional patient surveys, or identify new issues those surveys do not typically address. For example, we have used USP visits to gather information about the clinical system's capacity to address social determinants of health and to implement workflows for patient safety (25,28). With respect to feasibility, SP use has a modest cost (i.e., actor time), and many hospitals have simulation centers that can support the training and preparation of actors. Using USP and real patient surveys in tandem can often be the best method of obtaining quality information on care teams and systems. Future quality improvement and research on patient experience and satisfaction should therefore include both USP and real patient surveys to gain a multifaceted understanding of the patient journey in all care settings. Further, both types of surveys should include the same quantitative and qualitative sections and questions in order to elucidate differences between responses.
Limitations
This is a study at a single institution; thus, results may not be representative of other health systems. Future work will expand on our cross-institutional survey research (29) to include patients and USP visits in other clinics and types of hospital systems (private and federal). Additionally, USPs and real patients were not matched for age, education, or type of visit (new vs established patient) at this ambulatory care clinic. Although USPs provided rich details on experience, real patients provided minimal usable qualitative data, making comparisons challenging. Although we were not able to match specific demographics, the overall findings of both USP and real patient responses aligned with other site-specific surveys completed during the same period. Real patients surveyed were largely returning for visits, whereas all USP visits were first visits. This may contribute to higher overall scores or satisfaction in real patient surveys versus USP checklists. Further, though real patient surveys were anonymized, patients may still feel pressure to overreport satisfaction. Subsequent research will design USP cases to better reflect patient demographic information and create more comprehensive patient surveys to describe differences in demographics and visit contexts.
Conclusions
Results of this study indicate that real patients tended to provide higher ratings on patient satisfaction and experience measures than USPs. USPs provide detailed, critical, and objective evaluations of the entire clinical encounter, enhancing efforts to further understand and improve the experience of real patients in clinical settings. USPs are a useful tool for answering quality improvement-oriented questions and may provide more nuanced information about clinicians and clinical systems than real patient surveys, though they are not a substitute for them.
Footnotes
Authors’ Note: This project was designed as a quality improvement study through the NYU Grossman School of Medicine IRB process. That is, it aims “to assess a process/program/system” and “to prompt adoption of results to local site.” Our QI goal was to compare feedback from USPs and patients to better understand the value of these different evaluation methodologies. We have attached the NYU Grossman School of Medicine IRB attestation form here. We made every effort to maintain anonymity and participant confidentiality. Written informed consent was obtained from the USPs and patients for their anonymized information to be published in this article.
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Health Resources and Services Administration Primary Care Training and Enhancement, Agency for Healthcare Research and Quality (grant number T0BHP28577, 5R18HS021176, 5R18HS024669).
ORCID iD: Zoe Phillips https://orcid.org/0000-0002-0057-6136
References
- 1. Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff. 2008;27:759-69. doi: 10.1377/hlthaff.27.3.759
- 2. Gleeson H, Calderon A, Swami V, Deighton J, Wolpert M, Edbrooke-Childs J. Systematic review of approaches to using patient experience data for quality improvement in healthcare settings. BMJ Open. 2016;6:e011907. doi: 10.1136/bmjopen-2016-011907
- 3. Wolf JA, Niederhauser V, Marshburn D, LaVela SL. Defining patient experience. Patient Exp J. 2014;1:7-19. doi: 10.35680/2372-0247.1004
- 4. Blozik E, Iseli AM, Kunz R. Instruments to assess patient satisfaction after teleconsultation and triage: a systematic review. Patient Prefer Adherence. 2014;8:893-907. doi: 10.2147/PPA.S56160
- 5. Jenkinson C. The picker patient experience questionnaire: development and validation using data from in-patient surveys in five countries. Int J Qual Health Care. 2002;14:353-8. doi: 10.1093/intqhc/14.5.353
- 6. Agency for Healthcare Research and Quality. Consumer Assessment of Healthcare Providers and Systems (CAHPS®) program. Accessed November 17, 2021. https://www.ahrq.gov/cahps/about-cahps/cahps-program/index.html
- 7. Berkowitz B. The patient experience and patient satisfaction: measurement of a complex dynamic. Online J Issues Nurs. 2016;21. doi: 10.3912/OJIN.Vol21No01Man01
- 8. Elliott MN, Beckett MK, Lehrman WG, et al. Understanding the role played by Medicare’s patient experience points system in hospital reimbursement. Health Aff. 2016;35:1673-80. doi: 10.1377/hlthaff.2015.0691
- 9. Detailed Report—Patient Experiences: Providers with a “Most Positive” Rating. Minnesota HealthScores; 2014. Accessed December 13, 2021. https://www.mnhealthscores.org/index.php
- 10. Tyser AR, Abtahi AM, McFadden M, Presson AP. Evidence of non-response bias in the Press-Ganey patient satisfaction survey. BMC Health Serv Res. 2016;16:350. doi: 10.1186/s12913-016-1595-z
- 11. Borrell-Carrió F, Poveda BF, Seco EM, Castillejo JAP, González MP, Rodríguez EP. Family physicians’ ability to detect a physical sign (hepatomegaly) from an unannounced standardized patient (incognito SP). Eur J Gen Pract. 2011;17:95-102. doi: 10.3109/13814788.2010.549223
- 12. Zabar S, Hanley K, Stevens D, et al. Unannounced standardized patients: a promising method of assessing patient-centered care in your health care system. BMC Health Serv Res. 2014;14:157. doi: 10.1186/1472-6963-14-157
- 13. Weiner SJ, Schwartz A. Directly observed care: can unannounced standardized patients address a gap in performance measurement? J Gen Intern Med. 2014;29:1183-7. doi: 10.1007/s11606-014-2860-7
- 14. Rezaei R, Mehrabani G. A comparison of the scorings of real and standardized patients on physician communication skills. Pak J Med Sci. 2014;30(3). doi: 10.12669/pjms.303.3255
- 15. Fiscella K, Meldrum S, Franks P, et al. Patient trust: is it related to patient-centered behavior of primary care physicians? Med Care. 2004;42:1049-55. doi: 10.1097/00005650-200411000-00003
- 16. Fiscella K, Franks P, Srinivasan M, Kravitz RL, Epstein R. Ratings of physician communication by real and standardized patients. Ann Fam Med. 2007;5:151-8. doi: 10.1370/afm.643
- 17. NYC Health + Hospitals. Expanded adult primary care clinic opens, improving access to care. Published October 4, 2018. Accessed December 13, 2021. https://www.nychealthandhospitals.org/bellevue/pressrelease/expanded-adult-primary-care-clinic-opens-improving-access-to-care/
- 18. Taormina DP, Zuckerman JD, Karia R, Zabar S, Egol KA, Phillips DP. Clinical skills and professionalism: assessing orthopaedic residents with unannounced standardized patients. J Surg Educ. 2018;75:427-33. doi: 10.1016/j.jsurg.2017.08.001
- 19. Zabar S, Ark T, Gillespie C, et al. Can unannounced standardized patients assess professionalism and communication skills in the emergency department? Acad Emerg Med. 2009;16:915-8. doi: 10.1111/j.1553-2712.2009.00510.x
- 20. Zabar S. Objective Structured Clinical Examinations: 10 Steps to Planning and Implementing OSCEs and Other Standardized Patient Exercises. Springer; 2013.
- 21. Perez HR, Beyrouty M, Bennett K, et al. Chaos in the clinic: characteristics and consequences of practices perceived as chaotic. J Healthc Qual. 2017;39:43-53. doi: 10.1097/JHQ.0000000000000016
- 22. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—A metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377-81. doi: 10.1016/j.jbi.2008.08.010
- 23. North F, Tulledge-Scheitel SM. Patient satisfaction with providers: do patient surveys give enough information to help providers improve specific behaviors? Health Serv Res Manag Epidemiol. 2019;6. doi: 10.1177/2333392819885284
- 24. Zabar S, Hanley K, Watsula-Morley A, et al. Using unannounced standardized patients to explore variation in care for patients with depression. J Grad Med Educ. 2018;10:285-91. doi: 10.4300/JGME-D-17-00736.1
- 25. Wilhite JA, Hardowar K, Fisher H, et al. Clinical problem solving and social determinants of health: a descriptive study using unannounced standardized patients to directly observe how resident physicians respond to social determinants of health. Diagnosis. 2020;7:313-24. doi: 10.1515/dx-2020-0002
- 26. Wilhite JA, Velcani F, Watsula-Morley A, et al. Igniting activation: using unannounced standardized patients to measure patient activation in smoking cessation. Addict Behav Rep. 2019;9:1-6. doi: 10.1016/j.abrep.2019.100179
- 27. Fisher H, Re C, Wilhite JA, et al. A novel method of assessing clinical preparedness for COVID-19 and other disasters. Int J Qual Health Care. 2021;33:1-4. doi: 10.1093/intqhc/mzaa116
- 28. Wilhite JA. Do providers document social determinants? Our EMRs say…! Presented at: Society of General Internal Medicine Annual Meeting (SGIM-on-Demand); 2020. https://sgim-ccast.echo360.org/media-player.aspx/4/6/132/578
- 29. Wilhite JA, Phillips Z, Altshuler L, et al. Does it get better? An ongoing exploration of physician experiences with and acceptance of telehealth utilization. J Telemed Telecare. 2022. doi: 10.1177/1357633X221131220


