Abstract
Introduction
Measuring the experiences of patients regarding delivery and receipt of person-oriented primary care is of increasing policy and research interest and is a core component of the Institute for Healthcare Improvement’s Quadruple Aim.
Objective
To describe the Problem-Oriented Patient Experience—Primary Care (POPE-PC) survey, a novel instrument designed to measure patients’ experiences of primary care, and to assess the instrument’s psychometric properties.
Methods
Psychometric testing was performed using data from a Canadian urgent primary care center, derived from March 2019 to September 2019. Patients automatically received the 9-question survey by email after leaving the clinic. Exploratory factor analysis (EFA) on all questions and the entire dataset was performed using parallel analysis and scree plot for factor extraction. Internal consistency was assessed by calculating Cronbach α. A split-half cross-validation of the ensuing factor structure was conducted. A correlation analysis helped explore associations between the survey’s questions.
Results
Results from the initial EFA indicate that the POPE-PC has a conceptually sound 2-factor structure, with good internal consistency. A split-half validation yielded the same findings, reaffirming that the 2-factor model has good psychometric properties. The correlation analysis indicated that the concept of respect is strongly associated with clinical functions related to problem recognition.
Discussion
Problem recognition, despite being the cornerstone of person-oriented primary care, remains largely overlooked in health services research. The POPE-PC’s validity and problem orientation render it potentially useful in rigorously assessing patient experiences of problem-oriented primary care.
Conclusion
The survey’s conceptual underpinning and psychometric properties, coupled with its simple and parsimonious design, enable its application in primary care settings that strive to provide person-oriented care.
Keywords: patient experience, primary care, problem orientation, psychometric properties, quadruple aim, quality improvement, validation
INTRODUCTION
Measuring the experiences of patients in relation to the delivery and receipt of person-oriented primary care is of increasing policy and research interest and is a core component of the Institute for Healthcare Improvement’s (IHI’s) Quadruple Aim.1–3 Patient experiences pertaining to the delivery and receipt of clinical primary care can be measured and assessed using systematic and validated survey instruments.4–6 There has therefore been increasing health services research attention and resources dedicated to the design and testing of survey instruments to measure primary care patients’ experiences.7,8 Existing instruments vary in design, content, and function and are underpinned by different conceptual frameworks because they are often adapted and validated for specific organizational settings, patient population profiles, and purposes.8
This study describes a novel instrument designed to measure patient experiences relating to the care delivery and receipt functions of person-oriented primary care, and assesses the instrument’s psychometric properties. The survey, named the Problem-Oriented Patient Experience-Primary Care (POPE-PC), was designed by a team of medical directors and senior administrators at Vancouver Coastal Health Authority in Vancouver, British Columbia, Canada.
Before development of the survey, a scoping literature review was performed to identify potentially suitable English-language patient experience and satisfaction surveys for consideration. The following tools were identified and reviewed because most had undergone at least some processes of validation: The Johns Hopkins Primary Care Assessment Tool,4,6 the Canadian Institute for Health Information Measuring Patient Experiences in Primary Health Care Survey,8 the General Practice Assessment Questionnaire,9 the Relational Communication Scale,10 the CollaboRATE survey,11 the Primary Care Assessment Survey,12 the European Task Force on Patient Evaluation of General Practice Care (EUROPEP),13 the Components of Primary Care Index,14 the Interpersonal Processes of Care Survey,15 the Saanich Peninsula Patient Experience Survey,16 the Veterans Affairs National Outpatient Customer Satisfaction Survey,17 the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Patient-Centered Medical Home survey,18 the Care Coordination Quality Measure for Primary Care survey,19 and the RAND Patient Satisfaction Questionnaire.20
Although these surveys have interesting and useful content for primary care in various contexts, none were deemed to properly fit the specific needs or context in this case. The main reasons pertained to factors such as the length of surveys, the perceived complexity of wording, a lack of specific focus on experiences of clinical interactions at the interface of care delivery and receipt, and general perceptions of survey design and content not fitting the needs of the particular organizational or regional context.
Therefore, the survey development team decided to conceptualize and develop a new survey (described in the Methods section) with a fit-for-purpose design using the following key criteria: conceptual rigor, system orientation, problem orientation, parsimony, simplicity, consistency, relevance, and practicality. The team decided to conceptually underpin the survey using Dr. Starfield’s21,22 model for health services research, because its content directly pertains to system-oriented functions of problem-oriented primary care delivery and receipt (specifically the performance domains of Provision of Care and Receipt of Care; Figure 1). The model’s key functions are defined and summarized in Starfield’s4 seminal book, Primary Care: Balancing Health Needs, Services, and Technology. The function most poorly recognized and understood by health services researchers is “problem recognition,” which Starfield4(p28) defines as follows:
Figure 1.
Starfield’s21 model for health services research.
The providers first must recognize the needs existing both in the community and in individual patients. This feature is known as problem (or needs) recognition and is a particularly important consideration for primary care. The problem may be a symptom, a sign, an abnormal laboratory test, a previous but relevant item in the history of the patient or of the community, or a need for an indicated preventive procedure. Problem recognition implies being aware of the existence of situations requiring attention in a health context. After recognizing the problem, the health professional generally formulates a diagnosis or an understanding of the problem when no diagnosis is possible.
METHODS
Survey Development and Implementation
The team worked collaboratively to operationalize the conceptual model into survey questions. The content of the POPE-PC survey was refined by the team using an iterative content validation approach, whereby the list of drafted questions was reviewed several times for relevance, completeness, and essentiality until consensus was reached. The team members paid special attention to ensure that the survey content and design were simple and concise, considering that their public community health centers’ patients present with differing levels of distress, limited literacy, high burdens of illness and disease, complex psychosocial needs, limited resources and abilities, and weak motivational profiles.3
Team members operationalized the model’s domains into a series of 6 questions: the first 2 concern provision-of-care functions related to problem recognition, and the other 4 concern receipt-of-care functions related to acceptance and satisfaction, understanding, and concordance.4,21 Two additional cross-cutting questions were added: 1 relating to the temporal nature of the experience at the interface of care delivery and receipt, and 1 relating to the theme of respect. A final question was added to measure patient satisfaction, using Reichheld’s Net Promoter Score (NPS) question.23 The ensuing 9-question POPE-PC survey is found in Table 1.
Table 1.
Problem-Oriented Patient Experience-Primary Care (POPE-PC) survey
Question number | Question |
---|---|
1 | Were you given a chance to describe your problems or concerns? |
2 | Did staff listen to what you had to say? |
3 | Did you get useful help for your problems or concerns? |
4 | Did you get a chance to ask questions? |
5 | Did you get a chance to talk about decisions and plans regarding your care? |
6 | Did you understand the advice you received? |
7 | Were you given enough time to discuss your problems or concerns? |
8 | Were you treated with respect? |
9 | Would you recommend this clinic to your friends, family, or colleagues? |
Survey questions 1–8 (Q1–Q8) are answered on a 5-point Likert scale (not at all; very little; somewhat; yes, for the most part; yes, definitely). The NPS question (Q9) is answered on a simple 3-point scale (not at all; maybe; yes, definitely) rather than the traditional 11-point (0–10) NPS scale, which was perceived to be potentially confusing for patients. Furthermore, the 3-point scale conceptually aligns with the 3 assessment categories used by the NPS instrument (ie, “detractors,” “passives,” and “promoters”). The POPE-PC survey is free to use by anyone and requires no licensing or special permission for use.
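The NPS logic described above can be sketched as follows. This is a minimal illustration, not part of the published instrument: the mapping of the 3-point responses onto the detractor/passive/promoter categories, and the helper names, are assumptions made for the example.

```python
# Illustrative sketch: map POPE-PC 3-point Q9 responses to NPS categories
# and compute a Net Promoter Score. The response-to-category mapping is an
# assumption based on the article's description, not a published rule.

CATEGORY = {
    "not at all": "detractor",
    "maybe": "passive",
    "yes, definitely": "promoter",
}

def net_promoter_score(responses):
    """NPS = percentage of promoters minus percentage of detractors,
    yielding a score on a -100 to 100 scale."""
    cats = [CATEGORY[r] for r in responses]
    n = len(cats)
    promoters = cats.count("promoter") / n
    detractors = cats.count("detractor") / n
    return round(100 * (promoters - detractors), 1)
```

For example, a clinic with 8 "yes, definitely", 1 "maybe", and 1 "not at all" response out of 10 would score 80% promoters minus 10% detractors, for an NPS of 70.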
The finalized survey questions specifically pertain to experiences at the interface of care delivery and receipt, thereby providing insights on the performance of functions that are directly in health care practitioners’ actual sphere of influence and that are amenable to change and improvement. This approach renders the survey tool fit for purpose and useful for quality improvement efforts. The survey’s design explicitly focuses on the practitioner-patient relationship and on patient engagement in the context of problem-oriented care settings, both of which are critical aspects of person-oriented primary care.21,24,25 The POPE-PC survey’s content and problem orientation therefore support assessment of the IHI’s and the National Patient Safety Foundation’s “Ask Me 3” guidance activities, which focus on enabling patients to ask the following 3 questions during care encounters: 1) “What is my main problem?,” 2) “What do I need to do?,” and 3) “Why is it important for me to do this?”26
The POPE-PC survey was implemented at a public community-based urgent primary care setting, and patients automatically receive the survey by email on leaving the clinic. The survey has enabled performance assessment, evaluation, multidisciplinary team-based quality improvement initiatives, and accountability to the regional Health Authority and provincial Ministry of Health. There is also substantial organizational and policy interest in potentially leveraging the POPE-PC as a regional and provincial standard. Validation of the instrument’s psychometric properties is therefore essential and is the key purpose of this study.
Data Processing and Preliminary Analysis
Psychometric testing was performed using POPE-PC survey data from City Centre Urgent Primary Care Centre, Vancouver Canada, derived from the 6-month period of March 2019 to September 2019. The original dataset is not linked to any patient identifiers and is completely anonymous. Research was conducted according to the ethical principles of the Declaration of Helsinki.
Data were randomized using a spreadsheet’s random function (Microsoft Excel RAND function, Microsoft Corp, Redmond, WA). Outliers were identified by calculating z-scores (standard scores) and were removed if they fell 2.99 or more standard deviations from the mean. Assumptions (ie, additivity, normality, homogeneity, homoscedasticity) were tested in Excel by running the correlation table, histogram, normal probability plot, and residual plot.
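The outlier-screening step above can be sketched as follows. This is a minimal illustration of z-score screening, not the authors’ actual Excel workflow; the use of the population standard deviation (rather than the sample standard deviation) is an assumption for the example.

```python
import statistics

def remove_outliers(values, threshold=2.99):
    """Drop observations whose absolute z-score meets or exceeds the
    threshold (2.99 follows the article). Illustrative helper only;
    population SD is used here, which is an assumption."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return list(values)  # no spread, nothing to screen
    return [v for v in values if abs((v - mean) / sd) < threshold]
```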
Excel data were saved as a CSV (comma separated values) file. The file was then imported into an open-source statistics program (JASP v0.11, University of Amsterdam, Amsterdam, Netherlands) to conduct descriptive statistics (ie, means, standard deviations, minimum/maximum ranges, frequency tables, distribution plots, and box plots).
Exploratory Factor Analysis and Internal Consistency
JASP software was used to test psychometric properties.27 An exploratory factor analysis (EFA) was performed on the entire dataset and all 9 questions to assess the factor structure of the study data, using parallel analysis and scree plot with Promax oblique rotation (JASP Team [2019], JASP Version 0.11.1, Amsterdam, The Netherlands) for factor extraction. Factor loadings of 0.4 or greater were considered significant, and factor cross-loadings below 0.4 were considered acceptable.28 Using these criteria, problematic items were gradually eliminated until the EFA yielded a satisfactory factor structure. Goodness of fit was tested using the Non-Normed Fit Index (NNFI, also called the Tucker-Lewis Index), with values above 0.90 considered acceptable. Residual statistics were tested using the root mean square error of approximation (RMSEA), with values less than 0.08 considered acceptable. Internal consistency was assessed by calculating Cronbach α, with a score of 0.7 or higher considered satisfactory.28
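The study used JASP’s built-in routines. As a rough illustration of two of the underlying techniques, the sketch below implements Horn’s parallel analysis (retain factors whose observed correlation-matrix eigenvalues exceed the mean eigenvalues obtained from random data of the same shape) and Cronbach α. This is a generic approximation for orientation only, not JASP’s exact implementation.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis on an (n_respondents, n_items) array:
    count observed eigenvalues that exceed the averaged eigenvalues of
    random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_iter):
        r = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    rand /= n_iter
    return int(np.sum(obs > rand))

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```

On simulated data with two latent factors, `parallel_analysis` suggests retaining 2 factors, mirroring the extraction criterion described above.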
Split-Half Cross-validation
A split-half cross-validation of the ensuing factor structure (from the aforementioned EFA of the entire dataset) was then conducted, by randomly splitting the dataset in halves and running EFA on each half.28 Parallel analysis and scree plot (Promax oblique rotation, JASP Version 0.11.1) were used for factor extraction, with factor loadings of 0.4 or greater considered significant. Goodness of fit was tested using the NNFI, and values above 0.90 were considered acceptable. Residual statistics were tested using the RMSEA, with values below 0.08 considered acceptable. Internal consistency was assessed by calculating a Cronbach α, with a score of 0.7 or higher being considered satisfactory.28 Cronbach α values were calculated for all factor models emerging from the respective EFA.
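The splitting step of the cross-validation can be sketched as follows; each resulting half would then be submitted to its own EFA. This is an illustrative helper under the assumption of a simple seeded shuffle, not the authors’ actual randomization procedure.

```python
import random

def split_half(rows, seed=42):
    """Randomly shuffle survey responses and split them into two
    near-equal halves for split-half cross-validation. The seed makes
    the illustrative split reproducible."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    mid = len(rows) // 2
    return rows[:mid], rows[mid:]
```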
Exploratory Correlation Analysis
The correlation table (used to test assumptions relating to additivity) was used to explore associations between questions that did not fit the factor structure and those that did.
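The pairwise associations in the correlation table are standard Pearson correlations, which can be computed as follows. This is a textbook formula shown for orientation; the study itself derived the table from its Excel assumptions testing.

```python
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length
    sequences, as reported in the exploratory correlation table."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den
```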
RESULTS
Assumptions Testing and Descriptive Statistics
The dataset was composed of 1152 complete survey responses collected between March 2019 and September 2019. Of the total dataset, 2.95% (34 responses) were deemed to be outliers (ie, |z-score| ≥ 2.99) and were therefore excluded from further analysis. The ensuing dataset was tested and found to meet the assumptions relating to additivity, normality, homogeneity, and homoscedasticity. Basic descriptive statistics were run on the ensuing dataset (N = 1118), as shown in Table 2. The descriptive statistics indicated a high level of clinic performance in relation to patient experiences across all 9 survey questions.
Table 2.
Descriptive statistics (N = 1118)
Statistic | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 |
---|---|---|---|---|---|---|---|---|---|
Mean | 4.872 | 4.846 | 4.600 | 4.739 | 4.553 | 4.795 | 4.768 | 4.908 | 2.925 |
Standard deviation | 0.395 | 0.421 | 0.749 | 0.606 | 0.793 | 0.501 | 0.556 | 0.338 | 0.287 |
Minimum | 2.000 | 2.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 2.000 | 1.000 |
Maximum | 5.000 | 5.000 | 5.000 | 5.000 | 5.000 | 5.000 | 5.000 | 5.000 | 3.000 |
Q = question.
Exploratory Factor Analysis
EFA on the entire dataset (N = 1118) and all 9 questions was conducted using parallel analysis and scree plot (Promax oblique rotation, JASP Version 0.11.1) for factor extraction. Factor loadings equal to or greater than 0.4 were considered significant; no factor cross-loadings were found. After removal of single-item factors and items with loadings below 0.4, questions 7 to 9 were excluded, resulting in a 2-factor model comprising 6 questions (Table 3). Factor 2 (questions 1 and 2) conceptually aligns with the “Provision of Care” performance domain’s problem recognition function, whereas factor 1 (questions 3–6) conceptually aligns with the “Receipt of Care” performance domain.
Table 3.
Factor loadingsa
Question | Factor 1 | Factor 2 | Uniquenessb |
---|---|---|---|
1 | | 0.826 | 0.365 |
2 | | 0.747 | 0.403 |
3 | 0.662 | | 0.502 |
4 | 0.634 | | 0.429 |
5 | 0.886 | | 0.275 |
6 | 0.659 | | 0.619 |
Applied rotation method is Promax oblique (JASP Version 0.11.1).
Uniqueness is the variance that is unique to the variable and not shared with other variables.
Goodness of fit was found to be satisfactory with an NNFI value of 0.993 and residual statistics with an RMSEA value of 0.033. Reliability analysis yielded a Cronbach α of 0.810 for factor 1 and a Cronbach α of 0.760 for factor 2, indicating satisfactory reliability.
Split-Half Cross-validation of Two-Factor Structure
A split-half cross-validation of the 6-question, 2-factor structure was conducted by randomly splitting the dataset into halves (set 1 and set 2, each consisting of 576 survey responses) and running EFA on each half. Parallel analysis and scree plot (Promax oblique rotation, JASP Version 0.11.1) were used for factor extraction. For both sets, single-item factors and items with loadings less than 0.4 were removed, resulting in 2-factor models composed of 6 questions (Tables 4 and 5).
Table 4.
Set 1 factor loadingsa
Question | Factor 1 | Factor 2 | Uniqueness |
---|---|---|---|
1 | | 0.885 | 0.299 |
2 | | 0.729 | 0.386 |
3 | 0.635 | | 0.515 |
4 | 0.680 | | 0.394 |
5 | 0.897 | | 0.243 |
6 | 0.705 | | 0.574 |
Applied rotation method is Promax oblique (JASP Version 0.11.1).
Table 5.
Set 2 factor loadingsa
Question | Factor 1 | Factor 2 | Uniqueness |
---|---|---|---|
1 | | 0.696 | 0.489 |
2 | | 0.822 | 0.372 |
3 | 0.693 | | 0.486 |
4 | 0.584 | | 0.460 |
5 | 0.876 | | 0.308 |
6 | 0.605 | | 0.664 |
Applied rotation method is Promax oblique (JASP Version 0.11.1).
For set 1, goodness of fit was found to be satisfactory with an NNFI value of 0.965 and residual statistics with an RMSEA value of 0.075. Reliability analysis yielded a Cronbach α of 0.823 for factor 1 and a Cronbach α of 0.784 for factor 2, indicating satisfactory reliability.
For set 2, goodness of fit was found to be satisfactory with an NNFI value of 0.967 and residual statistics with an RMSEA value of 0.065. Reliability analysis yielded a Cronbach α of 0.795 for factor 1 and a Cronbach α of 0.723 for factor 2, indicating satisfactory reliability.
Exploratory Correlation Analysis
The correlation matrix (Table 6) indicates that Q7 was most strongly associated with Q4 (r = 0.737), Q5 (r = 0.672), Q2 (r = 0.665), and Q1 (r = 0.663). Q8 was most strongly associated with Q2 (r = 0.694), Q7 (r = 0.630), Q4 (r = 0.613), and Q1 (r = 0.613). Q9 was most strongly associated with Q8 (r = 0.665) and Q3 (r = 0.609).
Table 6.
Correlation matrix
Q | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 |
---|---|---|---|---|---|---|---|---|---|
1 | |||||||||
2 | 0.749 | ||||||||
3 | 0.514 | 0.573 | |||||||
4 | 0.612 | 0.631 | 0.660 | ||||||
5 | 0.524 | 0.562 | 0.685 | 0.711 | |||||
6 | 0.398 | 0.469 | 0.518 | 0.556 | 0.584 | ||||
7 | 0.663 | 0.665 | 0.602 | 0.737 | 0.672 | 0.592 | |||
8 | 0.613 | 0.694 | 0.523 | 0.613 | 0.508 | 0.484 | 0.630 | ||
9 | 0.549 | 0.578 | 0.609 | 0.585 | 0.478 | 0.448 | 0.582 | 0.665 |
Q = question.
DISCUSSION
This study aimed to assess the psychometric properties of the POPE-PC survey. Results from the initial EFA indicate that the POPE-PC survey has a 2-factor structure, with good internal consistency. A split-half validation yielded the same findings, which reaffirms that the 2-factor model has good psychometric properties.
The 2-factor structure aligns with the survey tool’s underpinning conceptual framework, which reaffirms the value of the expert input and their perceptions pertaining to the tool’s face validity.21,22 Factor 2 (Q1 and Q2) operationalizes the “Provision of Care” performance domain’s problem recognition function, whereas factor 1 (Q3–Q6) operationalizes the “Receipt of Care” performance domain’s functions related to acceptance and satisfaction, understanding, and concordance.21 The study’s findings therefore suggest that the survey is conceptually sound, which has important implications regarding the instrument’s validity for the measurement and assessment of patient experiences related to delivery and receipt of person-oriented primary care.
As expected from the conceptual framework, questions 7 to 9 did not fit the factor structure. The correlation matrix (Table 6) highlighted hypothetically plausible and interesting associations between the 3 questions and the questions that fit the 2-factor model, providing insights for potential future assessments of concurrent validity. Performance on Q7 (a temporal question relating to patients being given enough time to discuss problems or concerns) is strongly associated with patients being given the opportunity to ask questions, patients being given the opportunity to discuss decisions regarding their care, whether staff listened to what patients had to say, and whether patients were given the opportunity to describe their problems or concerns. Performance on Q8 (respect) is most strongly associated with whether staff listened to what patients had to say, whether patients were given enough time to discuss their problems or concerns, whether patients were given the opportunity to ask questions, and whether patients were given a chance to describe their problems or concerns. Performance on Q9 (NPS question) is most strongly associated with whether patients were treated with respect and whether patients perceived that they received useful help for their problems or concerns.
The dynamics of the aforementioned associations warrant further exploration by health services researchers to better understand the complex mechanisms by which different primary care functions affect patient perceptions relating to respect and satisfaction. Specifically, the strength of the relationships between respect (Q8) and factor 2’s questions (Q1 and Q2) relating to problem recognition is particularly interesting in light of research findings indicating that respect is strongly a function of recognition of problems and attention to needs.29 Such findings, along with this study’s correlation analysis results, indicate that Q8 (respect) can potentially be used as a proxy to assess concurrent validity for factor 2’s questions.
It is also important to highlight the survey’s parsimonious design philosophy, which likely enabled a high response rate that peaked at 42% in one of the study’s months. The survey was created by a team of Health Authority Medical Directors, senior administrators, and researchers working with public community health centers and was therefore designed to enable responses by patients presenting for care who exhibit high levels of distress, relatively low literacy rates, weak motivational profiles, high burdens of illness and disease, and complex biopsychosocial profiles.
The survey may be well suited for application in various primary care settings that strive to provide problem-oriented care, regardless of the patient population profile. This is supported by the context and setting of the study’s clinical site, an urgent primary care center with a multidisciplinary team providing care for patients from diverse demographic and socioeconomic backgrounds, exhibiting various levels of urgency (of needs) and biopsychosocial complexity. It would be of value to test and to continue to validate the POPE-PC survey in different health care settings (ie, different organizational contexts serving various patient population profiles) that strive to provide problem-oriented primary care, including virtual care or telehealth interactions.
Accurate measurement of the core functions of primary care enables meaningful performance assessment and the development of evaluation and quality improvement initiatives.3,30 At the study’s clinical site, monthly clinic and practitioner-level patient experience reports are operationalized, the results of which are actively used by the clinic’s Medical Director and management team to monitor performance and enable multidisciplinary team-led quality improvement activities. These performance assessments and quality improvement activities are executed with a spirit of enabling learning, continuous improvement, and professional development, rather than being punitive. The clinic’s leadership, clinical, and administrative teams have reported the patient survey to be of high value, particularly in relation to quality improvement and enabling staff motivation. The leadership team plans to design trending studies that enable assessment of the impact of quality improvement activities on patient experience over time. The clinic also sends monthly patient experience performance reports to the Vancouver Coastal Health Authority and the British Columbia Ministry of Health, for accountability purposes.
By enabling the measurement of patient experiences of problem-oriented primary care, the POPE-PC survey aims to make a major contribution to health services research, with a focus on the field of primary care. Problem recognition, despite being a critical function and the cornerstone of person-oriented primary care, remains largely overlooked by health services research.4,21,24,25 Survey instruments that incorporate elements of measuring performance relating to the function of problem recognition include the Johns Hopkins Primary Care Assessment Tool, the Relational Communication Scale, and the CollaboRATE survey.6,11,31–33 The POPE-PC survey, by explicitly focusing on problem orientation in primary care, can potentially help assess performance of organizations participating in the IHI’s and National Patient Safety Foundation’s Ask Me 3 educational program.26
Basic standards for problem recognition and problem orientation of care, originally developed in the 1960s by Lawrence L Weed, MD,34 father of the problem-oriented medical record, remain largely absent in contemporary primary care systems. Problem lists in contemporary electronic medical records often contain diagnostic hypotheses rather than verifiable statements of the presenting problem or chief concern.35 Ensuing care plans are therefore potentially formulated for the wrong diagnoses.36 The subsequent inappropriate care and outcomes are not accurately captured, since the frame of reference (ie, the patient’s actual problem) is effectively distorted. This compromises the meaning of quality of care evaluations by health services researchers. Without problem-oriented standards, primary care activities are often untethered from the realities of patients and are therefore of uncertain value.37,38 Starfield’s21 research therefore strongly affirmed that the performance of primary care is largely contingent on systems enabling problem recognition.
Starfield21 described this neglect in her own words:
Trained both in clinical medicine (Pediatrics) and Public Health, I have devoted my entire professional career to improving the effectiveness and equity of health services. Early in my career, I developed a conceptual scheme—published in the New England Journal of Medicine—that captured all the health systems [sic] characteristics related to providing health services. One key feature was identified as “health needs and problem recognition” by health professionals, a feature of care neglected by all approaches to measuring and assuring quality of care. My subsequent work expanded on the notion that recognition of needs is a salient contributor to improvements in individual and population health.
It is therefore hypothetically plausible that the relatively high level of performance of City Centre Urgent Primary Care Centre on the POPE-PC survey can be attributed to the fact that its multidisciplinary team systematically elicits and documents patients’ presenting problems and concerns, which are coded in the electronic medical record using Presenting Complaint codes derived from the Canadian Emergency Department Information System Presenting Complaint List.39 Most community-based primary care clinics in Canada, which solely use International Classification of Diseases, Ninth Revision classification systems in their electronic medical records and do not systematically elicit and code presenting problems or chief concerns, would likely manifest lower levels of performance on the POPE-PC survey.35,40,41 Use of the POPE-PC tool across primary care settings could reaffirm the importance of problem recognition and promote positive changes to classification and care standards, which ultimately contribute toward achievement of the Quadruple Aim.1,4
It is important to highlight that a formal assessment of concurrent validity—although essential as part of the survey’s ongoing validation process—was not performed within the scope of this study. Concurrent validity will be the subject of future studies that will involve linkage of the POPE-PC survey dataset to other datasets that are deemed to contain suitable variables that enable analyses of concurrent validity. In the absence of a formal assessment of concurrent validity, the study’s exploratory correlation analysis (Table 6) provided interesting insights requiring further research, particularly in relation to potentially leveraging questions outside the POPE-PC’s 2-factor structure (ie, Q7–Q9) for assessment of concurrent validity.
CONCLUSION
The POPE-PC survey was designed to enable the conceptually sound measurement of patient experiences of problem-oriented primary care. This study indicates that the instrument has satisfactory psychometric properties and is unique in that it rigorously enables the assessment of initiatives promoting problem orientation in primary care, such as the IHI’s and National Patient Safety Foundation’s Ask Me 3 activities.26 The POPE-PC survey’s problem orientation reaffirms Starfield’s21 assertion that “recognition of needs is a salient contributor to improvements in individual and population health” and thereby aims to make a positive contribution to operationalization of the IHI’s Quadruple Aim.
Acknowledgments
The author would like to acknowledge that the Problem-Oriented Patient Experience-Primary Care (POPE-PC) Survey was developed in collaboration with the following team: Dean Brown, MD; Andrew Day, MSc; Rachael McKendry, MA; Michael Norbury, MD; and Nardia Strydom, MD, ChB. The author notes that the content of the published article does not necessarily reflect their respective individual views or perspectives.
Kathleen Louden, ELS, of Louden Health Communications performed a primary copy edit.
Footnotes
Disclosure Statement
The author(s) have no conflicts of interest to disclose.
References
- 1.Bodenheimer T, Sinsky C.From triple to quadruple aim: Care of the patient requires care of the provider Ann Fam Med 2014. November–December126573–6. 10.1370/afm.1713 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Gelmon S, Sandberg B, Merrthew N, Bally R. Refining reporting mechanisms in Oregon’s patient-centered primary care home program to improve performance. Perm J. 2016 Sum;20(3):15–115. doi: 10.7812/TPP/15-115. [DOI] [PubMed] [Google Scholar]
- 3.Shukor AR, Edelman S, Brown D, Rivard C. Developing community-based primary health care for complex and vulnerable populations in the Vancouver Coastal Health Region: HealthConnection Clinic. Perm J. 2018;22:18–010. doi: 10.7812/TPP/18-010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Starfield B. Primary care: Balancing health needs, services, and technology. New York, NY: Oxford University Press; 1998. [Google Scholar]
- 5.Malouin RA, Starfield B, Sepulveda MJ. Evaluating the tools used to assess the medical home. Manag Care. 2009 Jun;18(6):44–8. [PubMed] [Google Scholar]
- 6.Cassady CE, Starfield B, Hurtado MP, Berk RA, Nanda JP, Friedenberg LA. Measuring consumer experiences with primary care. Pediatrics. 2000 Apr;105(4 Pt 2):998–1003. [PubMed] [Google Scholar]
- 7.Haggerty JL, Pineault R, Beaulieu M-D, et al. Practice features associated with patient-reported accessibility, continuity, and coordination of primary health care. Ann Fam Med. 2008 Mar–Apr;6(2):116–23. doi: 10.1370/afm.802. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Wong ST, Haggerty JL. Measuring patient experiences in primary health care: a review and classification of items and scales used in publicly-available questionnaires [Internet] Vancouver, BC: University of British Columbia Centre for Health Services and Policy Research; 2013. May, [cited 2019 Jan 7]. Available from: https://open.library.ubc.ca/cIRcle/collections/facultyresearchandpublications/52383/items/1.0048528. [Google Scholar]
- 9.Mead N, Bower P, Roland M. The General Practice Assessment Questionnaire (GPAQ) – development and psychometric characteristics. BMC Fam Pract. 2008 Feb 20;9(1):13. doi: 10.1186/1471-2296-9-13. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Gallagher TJ, Hartung PJ, Gregory SW. Assessment of a measure of relational communication for doctor-patient interactions. Patient Educ Couns. 2001 Dec;45(3):211–8. doi: 10.1016/s0738-3991(01)00126-4. [DOI] [PubMed] [Google Scholar]
- 11.Forcino RC, Barr PJ, O’Malley AJ, et al. Using CollaboRATE, a brief patient-reported measure of shared decision making: Results from three clinical settings in the United States. Health Expect. 2018 Feb;21(1):82–9. doi: 10.1111/hex.12588. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Safran DG, Kosinski M, Tarlov AR, et al. The Primary Care Assessment Survey: Tests of data quality and measurement performance. Med Care. 1998 May;36(5):728–39. doi: 10.1097/00005650-199805000-00012. [DOI] [PubMed] [Google Scholar]
- 13.Wensing M, Mainz J, Grol R. A standardised instrument for patient evaluations of general practice care in Europe. Eur J Gen Pract. 2000 Jan 1;6(3):82–7. doi: 10.3109/13814780009069953. [DOI] [Google Scholar]
- 14.Flocke SA. Measuring attributes of primary care: Development of a new instrument. J Fam Pract. 1997 Jul;45(1):64–74. [PubMed] [Google Scholar]
- 15.Stewart AL, Nápoles-Springer AM, Gregorich SE, Santoyo-Olsson J. Interpersonal processes of care survey: Patient-reported measures for diverse groups. Health Serv Res. 2007 Jun;42(3 Pt 1):1235–56. doi: 10.1111/j.1475-6773.2006.00637.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Saanich Peninsula Patient Experience Survey. Community Health and Care Evaluation Program. Victoria, British Columbia, Canada: Vancouver Island Health Authority; [Google Scholar]
- 17.Borowsky SJ, Nelson DB, Fortney JC, Hedeen AN, Bradley JL, Chapko MK. VA community-based outpatient clinics: Performance measures based on patient perceptions of care. Med Care. 2002 Jul;40(7):578–86. doi: 10.1097/00005650-200207000-00004. [DOI] [PubMed] [Google Scholar]
- 18.Hays RD, Berman LJ, Kanter MH, et al. Evaluating the psychometric properties of the CAHPS Patient-centered Medical Home survey. Clin Ther. 2014 May;36(5):689–696.e1. doi: 10.1016/j.clinthera.2014.04.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Care Coordination Quality Measure for Primary Care (CCQM-PC) [Internet] Rockville, MD: Agency for Healthcare Research and Quality; 2016. Jul, [cited 2019 Dec 11]. Available from: www.ahrq.gov/ncepcr/care/coordination/quality/index.html. [Google Scholar]
- 20.Thayaparan AJ, Mahdi E. The Patient Satisfaction Questionnaire Short Form (PSQ-18) as an adaptable, reliable, and validated tool for use in various settings. Med Educ Online. 2013 Jul 23;18(1):21747. doi: 10.3402/meo.v18i0.21747. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Starfield B. Primary care and equity in health: The importance to effectiveness and equity of responsiveness to peoples’ needs. Humanity Soc. 2009 Feb 1;33(1–2):56–73. doi: 10.1177/016059760903300105. [DOI] [Google Scholar]
- 22.Starfield B. Health services research: A working model. N Engl J Med. 1973 Jul 19;289(3):132–6. doi: 10.1056/NEJM197307192890305. [DOI] [PubMed] [Google Scholar]
- 23.Krol MW, de Boer D, Delnoij DM, Rademakers JJ. The Net Promoter Score—An asset to patient experience surveys? Health Expect. 2015 Dec;18(6):3099–109. doi: 10.1111/hex.12297. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Starfield B. Is patient-centered care the same as person-focused care? Perm J. 2011 Spring;15(2):63–9. doi: 10.7812/TPP/10-148. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Shukor AR. An alternative paradigm for evidence-based medicine: Revisiting Lawrence Weed, MD’s system approach. Perm J. 2017;21:16–147. doi: 10.7812/TPP/16-147. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Ask Me 3: Good questions for your good health [Internet] Boston, MA: Institute for Healthcare Improvement; [cited 2019 Oct 21]. Available from: www.ihi.org:80/resources/Pages/Tools/Ask-Me-3-Good-Questions-for-Your-Good-Health.aspx. [Google Scholar]
- 27.Quintana DS, Williams DR. Bayesian alternatives for common null-hypothesis significance tests in psychiatry: A non-technical guide using JASP. BMC Psychiatry. 2018 Jun 7;18(1):178. doi: 10.1186/s12888-018-1761-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Gambashidze N, Hammer A, Brösterhaus M, Manser T; WorkSafeMed Consortium. Evaluation of psychometric properties of the German Hospital Survey on Patient Safety Culture and its potential for cross-cultural comparisons: A cross-sectional study. BMJ Open. 2017 Nov 9;7(11):e018366. doi: 10.1136/bmjopen-2017-018366. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Dickert NW, Kass NE. Understanding respect: Learning from patients. J Med Ethics. 2009 Jul;35(7):419–23. doi: 10.1136/jme.2008.027235. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Bodenheimer T, Ghorob A, Willard-Grace R, Grumbach K. The 10 building blocks of high-performing primary care. Ann Fam Med. 2014 Mar–Apr;12(2):166–71. doi: 10.1370/afm.1616. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Tai-Seale M, Foo PK, Stults CD. Patients with mental health needs are engaged in asking questions, but physicians’ responses vary. Health Aff (Millwood). 2013 Feb;32(2):259–67. doi: 10.1377/hlthaff.2012.0962. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Tai-Seale M, Elwyn G, Wilson CJ, et al. Enhancing shared decision making through carefully designed interventions that target patient and provider behavior. Health Aff (Millwood). 2016 Apr;35(4):605–12. doi: 10.1377/hlthaff.2015.1398. [DOI] [PubMed] [Google Scholar]
- 33.Barr PJ, Forcino RC, Thompson R, et al. Evaluating CollaboRATE in a clinical setting: Analysis of mode effects on scores, response rates and costs of data collection. BMJ Open. 2017 Mar 24;7(3):e014681. doi: 10.1136/bmjopen-2016-014681. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Weed LL. Medical records that guide and teach. N Engl J Med. 1968 Mar 21;278(12):652–7. doi: 10.1056/NEJM196803212781204. [DOI] [PubMed] [Google Scholar]
- 35.Hofmans-Okkes IM, Lamberts H. The International Classification of Primary Care (ICPC): New applications in research and computer-based patient records in family practice. Fam Pract. 1996 Jun;13(3):294–302. doi: 10.1093/fampra/13.3.294. [DOI] [PubMed] [Google Scholar]
- 36.Weed LL, Weed L. Diagnosing diagnostic failure. Diagnosis (Berl) 2014 Jan 1;1(1):13–7. doi: 10.1515/dx-2013-0020. [DOI] [PubMed] [Google Scholar]
- 37.Weed LL, Weed L. Medicine in denial. Scotts Valley, CA: CreateSpace Independent Publishing Platform; 2011. [Google Scholar]
- 38.Weed LL, Weed L. Opening the black box of clinical judgment—an overview. Interview by Abi Berger. BMJ. 1999 Nov 13;319(7220):1279. doi: 10.1136/bmj.319.7220.1279. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Grafstein E, Bullard MJ, Warren D, Unger B; CTAS National Working Group. Revision of the Canadian Emergency Department Information System (CEDIS) Presenting Complaint List version 1.1. CJEM. 2008 Mar;10(2):151–73. doi: 10.1017/S1481803500009878. [DOI] [PubMed] [Google Scholar]
- 40.Verbeke M, Schrans D, Deroose S, De Maeseneer J. The International Classification of Primary Care (ICPC-2): An essential tool in the EPR of the GP. Stud Health Technol Inform. 2006;124:809–14. [PubMed] [Google Scholar]
- 41.Soler J-K, Okkes I, Wood M, Lamberts H. The coming of age of ICPC: Celebrating the 21st birthday of the International Classification of Primary Care. Fam Pract. 2008 Aug;25(4):312–7. doi: 10.1093/fampra/cmn028. [DOI] [PubMed] [Google Scholar]