Abstract
Background
Quality of care remains a priority issue and is correlated with patient experience. Measuring multidimensional patient primary care experiences in multiprofessional clinics requires a robust instrument. Although many exist, little is known about their quality.
Objective
To identify patient perception instruments in multiprofessional primary care and evaluate their quality.
Methods
Systematic review using Medline, Pascal, PsycINFO, Google Scholar, Cochrane, Scopus, and CAIRN. Eligible articles developed, evaluated, or validated 1 or more self-assessment instruments. The instruments had to measure primary care delivery, patient primary care experiences and assess at least 3 quality-of-care dimensions. The COnsensus-based Standards for the selection of health status Measurement Instruments (COSMIN) checklist was used to assess methodological quality of included studies. Instrument measurement properties were appraised using 3 possible quality scores. Data were combined to provide best-evidence synthesis based on the number of studies, their methodological quality, measurement property appraisal, and result consistency. Subscales used to capture patient primary care experiences were extracted and grouped into the 9 Institute of Medicine dimensions.
Results
Twenty-nine articles were found. The included instruments captured many subscales illustrating the diverse conceptualization of patient primary care experiences. No included instrument demonstrated adequate validity and the lack of scientific methodology for assessing reliability made interpreting validity questionable. No study evaluated instrument responsiveness.
Conclusion
Numerous patient self-assessment instruments were identified capturing a wide range of patient experiences, but their measurement properties were weak. Research is required to develop and validate a generic instrument for assessing quality of multiprofessional primary care.
Trial registration
Not applicable.
Keywords: multiprofessional clinics, patient experience, patient self-assessment instrument, quality of primary care, systematic review
Key Messages
Review found 29 patient perception instruments in multiprofessional primary care.
A wide range of patient experiences were captured.
No instrument had adequate validity and measurement properties were weak.
No study evaluated instrument responsiveness.
An instrument to assess quality of multiprofessional primary care is needed.
Background
In an ageing population, the prevalence of multimorbidity is growing.1 Health care systems are becoming more comprehensive to meet the growing scope and scale of care required for complex, multimorbid patients and disease-centred care is giving way to patient-centred care.2,3 To meet these demands, single professional practices are increasingly becoming multiprofessional clinics4 and new primary health care models, such as the Chronic care model5 or the Patient-centred medical home,6,7 are being applied.
In France, these multiprofessional structures are financed by the French National Authority for Health (Haute Autorité de Santé, HAS), so their productivity and economic value are monitored closely. The HAS recently concluded that quality of care and patient satisfaction in these structures also need assessing, yet they have not been studied to date.8 This is therefore a new and rapidly changing area of interest in France requiring further assessment and research.
Quality of care can be defined in terms of structure, process, and outcome.9,10 Specifically, the WHO defines quality of care as “the extent to which health care services provided to individuals and patient populations improve desired health outcomes. To achieve this, health care must be safe, effective, timely, efficient, equitable and people-centred.”11 In fact, patient-centred care is an essential requirement of modern medicine. Therefore, when assessing quality of care, the Institute of Medicine (IOM) health care quality and patient-centredness dimensions should be considered.12,13
Patient satisfaction is positively correlated with quality of care received14,15 so a satisfaction survey measuring patient appreciation of the care received, could be 1 research option.10 However, these surveys only provide a limited view of care as an experience16 and are unable to assess potential improvements.17 Measuring patient experience is 1 solution to this limitation and reflects the paradigm shift towards patient-centred care.18 These patient care experience measures could facilitate efforts targeting patient-centred care such as improved accountability and quality. Furthermore, better patient experiences are linked to improved adherence to preventive and treatment processes, improved clinical outcomes and patient safety, and reduced health care use.19
Patient Reported Experience Measures (PREMs) are a more complete measure of patient experiences whilst receiving care20 and directly evaluate how patient-centred the care is.19 These instruments evaluate the impact of care processes on patient experience and differ from satisfaction surveys in that they objectively measure specific aspects of patient experience such as communication and timeliness.
Measuring multidimensional patient perception of their primary care experiences requires a robust instrument, with proven reliable, valid, and responsive measurement properties according to current standards such as COSMIN (COnsensus-based Standards for the selection of health status Measurement Instruments).21,22 Evaluating measurement properties determines instrument quality. COSMIN defines 9 measurement properties within the 3 domains of reliability, validity, and responsiveness. The quality criteria for measurement properties in health status instruments include content validity, internal consistency, criterion validity, construct validity, reproducibility, responsiveness, floor and ceiling effects, and interpretability, with content validity arguably being the most important.23
Although numerous patient perception instruments are available, their quality has not been systematically reviewed, despite these measurement standards. This systematic review aims to identify existing patient perception instruments in multiprofessional primary care and evaluate their quality.
Methods
To minimize potential sources of bias, this systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
Definitions
For this study, the following definitions were used.
Instrument: a questionnaire built with objective and subjective questions used to evaluate patient satisfaction and experience of primary care (authors’ definition).
Primary care (IOM definition): “the provision of integrated, accessible health care services by clinicians who are accountable for addressing a large majority of personal health needs, developing a sustained partnership with patients, and practicing in the context of family and community.”24
Quality of care: see WHO definition in Background.
Search strategy
A systematic literature review was undertaken in Medline, Pascal, Cochrane, Scopus, Cairn, PsycINFO, and Google Scholar, with publication dates from 1990 to November 2019. The year 1990 was chosen as the start date because multiprofessional practice only emerged as the main practice model around that time. The search strategy was constructed with help from an expert at the University of Western Brittany (Université de Bretagne Occidentale). It was developed using MeSH and non-MeSH terms including tool, instrument, family practice, and scale. Because the terms patient “experience” and “satisfaction” are often used interchangeably in the literature, both were included in the search strategy. After trying different search strategies, it was found that using multiple terms for questionnaire and primary care did not alter the search results. For this reason, MeSH terms and keywords in 4 domains were used: “questionnaire,” “patient satisfaction,” “patient experience,” and “primary health care.” The search strategy carried out in PubMed was: ((“patient satisfaction”[MeSH Terms] OR “patient satisfaction”[All Fields]) OR “patient experience”[All Fields]) AND (“general practice”[MeSH Terms] OR “primary care”[All Fields]) AND (“surveys and questionnaires”[MeSH Terms] OR “questionnaires”[All Fields] OR “surveys”[All Fields]). Filters activated: publication date from 1990/01/01 to 2019/11/22.
Selection of eligible articles
Peer-reviewed articles in English or French were included if they described a primary study that developed, evaluated, or validated 1 or more self-reported instruments. These instruments had to assess at least 3 quality-of-care dimensions and be developed to measure the health care process delivered to a patient (of any age) by at least 2 primary health care providers or be developed to measure patient primary care experiences.
To guarantee that the investigated instrument indeed measured patient experiences of primary care, articles were excluded if they evaluated instruments in health care establishments other than general practitioner centred settings, evaluated instruments in a restricted population (including ageing, specific condition, specific gender), or investigated instruments measuring other health outcomes such as quality of life, health status, burden of disease, or disability.
Titles and abstracts were independently screened by 2 researchers (JD and TP) who then reviewed the full text. Where necessary, a third reviewer (JYLR) was consulted for a final decision. The bibliography of each included article was then checked following the same inclusion process.
Data extraction
For each included study, data were extracted manually by 1 team member (TP) and checked by a second (JD); differences of opinion were discussed until a consensus was reached. Where there was any doubt, a third researcher was consulted (JYLR).
Patient experience captured through subscales
Subscales reported in each study were analysed to measure the breadth of patient experience captured by the instrument.
The following data were extracted for each instrument identified: instrument name, number of subscales and items, response scale, and score range.
Details of the subscales used to capture patient primary care experiences were extracted and grouped into the 6 IOM patient-centredness dimensions: respect for patient values, preferences, and expressed needs; coordination and integration of care; information, communication, and education; physical comfort; emotional support (relieving fear and anxiety); and involvement of family and friends;13 and the 3 IOM health care quality dimensions: timeliness, efficiency, and equity/accessibility.12
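This grouping step amounts to a lookup from extracted subscale names to the 9 IOM dimensions, followed by a tally. The sketch below illustrates the idea; the subscale names and the mapping itself are illustrative assumptions, not the authors' actual coding scheme:

```python
from collections import Counter

# The 9 IOM dimensions used as grouping categories
# (6 patient-centredness + 3 health care quality).
IOM_DIMENSIONS = [
    "respect for patient values, preferences, and expressed needs",
    "coordination and integration of care",
    "information, communication, and education",
    "physical comfort",
    "emotional support",
    "involvement of family and friends",
    "timeliness",
    "efficiency",
    "equity/accessibility",
]

# Hypothetical mapping of a few extracted subscale names to IOM dimensions;
# in the review this assignment was made manually by the researchers.
subscale_to_dimension = {
    "communication": "information, communication, and education",
    "coordination of care": "coordination and integration of care",
    "access": "equity/accessibility",
    "interpersonal treatment": "respect for patient values, preferences, and expressed needs",
}

def tally(subscales):
    """Count how many extracted subscales fall into each IOM dimension."""
    return Counter(subscale_to_dimension[s]
                   for s in subscales if s in subscale_to_dimension)
```

A tally like this, computed per instrument, is what underlies the percentages reported for each IOM dimension in Fig. 2.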
Where articles evaluated multiple instruments, data for each instrument were extracted separately.
Descriptive statistics are presented in a bar graph and in tables.
Quality appraisal
For each instrument, the measurement properties and interpretability (see Appendix 1) were appraised in 2 ways. Firstly, the methodological quality of each included study was assessed (methodological quality appraisal). Secondly, the measurement properties themselves were appraised according to the study results. Four members of the research team (TP, BP, JD, and JYLR) rated the methodological quality and measurement property of each article. Discrepancies were discussed until a consensus was reached. Data from these 2 appraisals were combined to provide best-evidence synthesis.
Methodological quality appraisal
The COSMIN checklist22,23 was used to assess the methodological quality of each included study. The methodology of each study was examined according to the 9 COSMIN measurement properties. These were categorized into 3 quality domains, according to the COSMIN taxonomy: (i) reliability (including internal consistency, reliability, and measurement error), (ii) validity (including content validity, criterion validity, structural validity, cross-cultural validity, and hypothesis testing (construct validity)), and (iii) responsiveness (see Appendix 1).
For each measurement property evaluated within each study, the methodological quality was rated as: “excellent,” “good,” “fair,” or “poor.” An additional box was used to assess requirements for studies using item response theory.
For interpretability, floor and ceiling effects, minimally important change (MIC), and minimally important difference (MID) values were evaluated. Results are presented in tables.
Measurement property appraisal
Criteria developed by Terwee et al.21 and Schellingerhout et al.25,26 (see Appendix 2) were used to rate the instrument measurement properties within each particular study with 3 possible quality scores: a positive rating (labelled +), an inconclusive rating (labelled ?), and a negative rating (labelled −).
Best-evidence synthesis
When the same measurement properties of a specific instrument were evaluated in more than 1 study, the quality of each measurement property was determined using the method recommended by Schellingerhout et al.25,26 The results from the different studies were then synthesized, as suggested by Terwee et al., considering study methodological quality, measurement property appraisals, the number of studies assessing the property, and the consistency of results across studies. This overall result was rated as “strong,” “moderate,” “limited,” “conflicting,” or “unknown” (see the footer of Table 3 for more information about result ratings). One researcher (JD) then performed the best-evidence synthesis, which was checked by a second researcher (JYLR).
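As a rough illustration, the rating rules described above (and spelled out in the footer of Table 3) can be expressed as a small decision procedure. This sketch is our paraphrase of the Terwee/Schellingerhout criteria, not code used in the review:

```python
def best_evidence(ratings):
    """Combine per-study appraisals of one measurement property into an
    overall evidence level.

    ratings: list of (quality, result) tuples, where quality is the COSMIN
    methodological rating ("excellent", "good", "fair", "poor") and result
    is the Terwee/Schellingerhout appraisal ("+" or "-").
    Returns "strong", "moderate", "limited", "conflicting", or "unknown".
    """
    # Studies of poor methodological quality contribute no evidence.
    usable = [(q, r) for q, r in ratings if q != "poor"]
    if not usable:
        return "unknown"
    # Inconsistent findings across usable studies -> conflicting evidence.
    if len({r for _, r in usable}) > 1:
        return "conflicting"
    qualities = [q for q, _ in usable]
    # Strong: 1 excellent study, or consistent findings in multiple
    # studies of (at least) good quality.
    if "excellent" in qualities or (
        len(usable) > 1 and all(q in ("excellent", "good") for q in qualities)
    ):
        return "strong"
    # Moderate: 1 good study, or consistent findings in multiple fair studies.
    if "good" in qualities or len(usable) > 1:
        return "moderate"
    # Limited: a single fair-quality study.
    return "limited"
```

The sign of the consistent result ("+" or "-") then determines whether the cell in Table 3 reads, for example, "++" (moderate positive) or "--" (moderate negative).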
Table 3.
Best-evidence synthesis for each of the 9 measurement properties of the 29 quality of primary care instruments (1990–2019).
Instrument | Instrument authors (year) | Article(s) | Internal consistency | Reliability | Measurement error/agreement | Content validity | Structural validity/item response theory | Hypothesis testing | Cross-cultural validity | Criterion validity | Responsiveness |
---|---|---|---|---|---|---|---|---|---|---|---|
ACES | Safran et al. (2006) | 27 | ? | ||||||||
CPCI | Flocke et al. (1997) | 28,29 | -- | - | |||||||
CSQ | Baker et al. (1990) | 30–32 | +/- | ? | ? | ||||||
CSS-VF | Gasquet et al. (2003) | 33 | - | ? | - | ? | |||||
EUROPEP | Comité EUROPEP (1998) | 29,34 | ? | ||||||||
G-MISS-16-VF | Maurice-Szamburski et al. (2017) | 35 | ++ | IRT: ++ | ? | ||||||
GPAQ | Mead et al. (2008) | 36 | -- | ||||||||
GPAQ-R | Roland et al. (2013) | 37 | - | - | |||||||
GPAS | Ramsay et al. (2000) | 38 | -- | ? | ? | ||||||
Grogan-PSQ-40 | Grogan et al. (1995) | 39 | ? | ? | ? | ||||||
Haddad-PSQ-22 | Haddad et al. (2000) | 40 | ++ | + | ? | ||||||
IPQ | Greco et al. (2003) | 41 | ? | ? | |||||||
Marshall-PSQ-18 | Marshall et al. (1994) | 42 | ++ | ||||||||
MISS-21 | Meakin et al. (2002) | 43 | ? | + | |||||||
MISS-26 | Wolf et al. (1978) | 44 | ? | ||||||||
MISS-29 | Wolf et al. (1981) | 32 | ? | ||||||||
NMPSSA | Eccles et al. (1992) | 45 | ? | + | |||||||
NMPSSIAC | Eccles et al. (1992) | 45 | ? | + | |||||||
PCAS | Safran et al. (1998) | 29,46 | +/- | +/- | +/- | ||||||
PCAT | Shi et al. (2001) | 47 | - | - | |||||||
PDIS | Bowman et al. (1992) | 48 | ? | - | ? | ||||||
PDRQ-9 | Van der Feltz-Cornelis et al. (2004) | 49 | ? | - | ? | ||||||
PEI | Howie et al. (1998) | 50 | ? | ||||||||
PEQ | Steine et al. (2001) | 51 | ? | + | |||||||
QVFP | Marcinowicz et al. (2010) | 52 | + | ? | - | ||||||
SSQ | Baker et al. (1991) | 31,53 | ? | ++ | ? | ? | |||||
VSQ-VF | Gasquet et al. (2003) | 33 | + | ? | - | ? | |||||
Vukovic-PSQ-20 | Vukovic et al. (2012) | 54 | ? | ? | -- | ||||||
Ware-PSQ-55 | Ware et al. (1983) | 55 | + | ? | ? | -- |
A plus sign (+) indicates positive results for a measurement property evaluation and a minus sign (-) indicates negative results for a measurement property evaluation, e.g. + stands for limited evidence of positive results and --- stands for strong evidence of negative results for a measurement property.
Rating: +++ or --- strong level of evidence for positive/negative results (consistent findings [Terwee] in multiple studies of good methodological quality [COSMIN] OR 1 study of excellent methodological quality), ++ or -- moderate level of evidence for positive/negative results (consistent findings in multiple studies of fair methodological quality OR 1 study of good methodological quality), + or - limited evidence for positive/negative results (1 study of fair methodological quality), +/- conflicting evidence, ? = unknown, due to poor methodological quality, empty cell = no synthesis possible due to a lack of studies for this measurement property.
Results
Included studies
Electronic searches identified 2,775 articles. Title and abstract screening excluded 2,627 records, leaving 148 full-text articles. A hand search of these articles identified an additional 236 records. Of these, 170 were excluded, leaving 66 full-text articles. A total of 214 full-text articles were therefore retrieved and assessed for eligibility. In total, 37 articles met the inclusion criteria, of which 21 were derived from the primary search and 16 from the hand search. After removing 8 duplicates, 29 articles were included in the analysis. The main reason for exclusion was non-assessment of instrument measurement properties (97 articles). Figure 1 provides the PRISMA flow chart of the inclusion process with the complete list of reasons for exclusion at each step.
Fig. 1.
Flow diagram of article selection process for the systematic review (1990–2019) showing article inclusion and exclusion with the complete list of reasons for exclusion at each step.
Overview of studies
Table 1 gives an overview of included instruments (see Appendix 3 for details on the included studies). Overall, of the 29 articles included, 15 reported on initial instrument development and validation and 14 reported on further development and validation of an existing instrument (e.g. with a different sample or assessing a different psychometric property). Some studies compared several instruments, and some instruments were included in several studies. Most studies [66% (19/29)] were conducted in the United Kingdom (N = 12) and United States (N = 7). All studies reported on instruments validated in English.
Table 1.
Overview of the 29 eligible instruments found from the systematic review (1990/01/01–2019/11/22) showing instrument name, study authors, number of items, subscales, response scale, and language.
Instrument name (article) | Instrument authors (year) | Items | Subscales | Response scale | Language |
---|---|---|---|---|---|
Ambulatory Care Experience Survey (ACES)27 | Safran et al. (2006) | 39 | Eleven subscales: organizational access, visit-based continuity, integration, clinical team, office staff, communication, whole-person orientation, health promotion, interpersonal treatment, patient trust, and relationship duration. | 6-Point Likert scale | English |
Components of Primary Care Index (CPCI)28,29 | Flocke et al. (1997) | 19 | Seven subscales: comprehensiveness of care, accumulated knowledge, interpersonal communication, coordination of care, first contact, continuity of care, and longitudinality. | 5-Point Likert scale | English |
Consultation Satisfaction Questionnaire (CSQ)30–32 | Baker et al. (1990) | 18 | Four subscales: general satisfaction, professional care, depth of relationship, and perceived time. | 5-Point Likert scale | English |
Consumer Satisfaction Survey VF (CSS-VF)33 | Gasquet et al. (2003) | 39 | Nine subscales: access to primary care, access to secondary care, communication and competence of general practitioner, communication of specialist, competence of specialist, choice and continuity, interpersonal care, general satisfaction, and finances. | 5-Point Likert scale | French |
EUROPEP29,34 | Comité EUROPEP (1998) | 23 | Five subscales: relationship, technical aspects of care/competence, information and support, organization of care, and access. | Gradual scale from 1 to 5 points | French |
Generic Medical Interview Satisfaction Scale 16 items VF (G-MISS-16-VF)35 | Maurice-Szamburski et al. (2017) | 16 | Three subscales: pain, communication, and compliance. | 5-Point Likert scale | French |
General Practice Assessment Questionnaire (GPAQ)36 | Mead et al. (2008) | 46 | Five subscales: access, office staff, continuity of care, communication, and medical care. | Gradual scale varying from 2 to 6 points | English |
General Practice Assessment Questionnaire for Revalidation (GPAQ-R)37 | Roland et al. (2013) | 46 | Five subscales: access, office staff, continuity of care, communication, and medical care. | Gradual scale ranging from 2 to 6 points | English |
General Practice Assessment Survey (GPAS)38 | Ramsay et al. (2000) | 53 | Nine subscales: access, technical aspects of care, communication, humanity, trust, accumulated knowledge, medical care, appointments, and premises. | 5-Point Likert scale | English |
Grogan Patient Satisfaction Questionnaire 40 Items (Grogan-PSQ-40)39 | Grogan et al. (1995) | 40 | Five subscales: general practitioner, access, nurses, appointment, and facilities. | 5-Point Likert scale | English |
Haddad Patient Satisfaction Questionnaire 22 Items (Haddad-PSQ-22)40 | Haddad et al. (2000) | 22 | Three subscales: relationship, technical aspects of care, and outcomes. | 5-Point Likert scale | English |
Improving Practice Questionnaire (IPQ)41 | Greco et al. (2003) | 27 | No subscales reported (expected subscales were: facilities, office staff, and general practitioner). | 5-Point Likert scale | English |
Marshall Patient Satisfaction Questionnaire 18 Items (Marshall-PSQ-18)42 | Marshall et al. (1994) | 18 | No subscales reported (expected subscales were: general satisfaction, technical aspect of care, communication, relation, finances, time, and access). | 5-Point Likert scale | English |
Medical Interview Satisfaction Scale 21 items (MISS-21)43 | Meakin et al. (2002) | 21 | Four subscales: communication comfort, distress relief, compliance intent, and rapport. | 5-Point Likert scale | English |
Medical Interview Satisfaction Scale 26 items (MISS-26)44 | Wolf et al. (1978) | 26 | Three subscales: cognitive satisfaction, affective satisfaction, and behavioural satisfaction. | 5-Point Likert scale | English |
Medical Interview Satisfaction Scale 29 items (MISS-29)32 | Wolf et al. (1981) | 29 | Four subscales: communication comfort, distress relief, rapport, and compliance intent. | 7-Point scale | English |
Newcastle MAAG Patient Satisfaction Survey Accessibility (NMPSSA)45 | Eccles et al. (1992) | 12 | No subscales reported (expected subscales were: access and patient reception). | 5-Point Likert scale | English |
Newcastle MAAG Patient Satisfaction Survey Interpersonal Aspects of Care (NMPSSIAC)45 | Eccles et al. (1992) | 11 | Three subscales: listening, information, and global satisfaction. | 5-Point Likert scale | English |
Primary Care Assessment Survey (PCAS)29,46 | Safran et al. (1998) | 51 | Eleven subscales: finances, access, longitudinal continuity, visit-based continuity, knowledge of the patient, preventive counselling, integration, communication, physical examination, interpersonal treatment, and trust. | 5-Point Likert scale and score range from 1 to 100 | English |
Primary Care Assessment Tool (PCAT)47 | Shi et al. (2001) | 84 | Nine subscales: first contact/accessibility, first contact/using, care in progress, coordination of services, services available, services received, family-centred care, community orientation, and cultural competencies. | 4-Point Likert scale | English |
Patient–Doctor Interaction scale (PDIS)48 | Bowman et al. (1992) | 19 | No subscales reported (aim was to explore patient–doctor interactions). | 5-Point Likert scale | English |
Patient Doctor relationship Questionnaire (PDRQ-9)49 | Van der Feltz-Cornelis et al. (2004) | 9 | No subscales reported (aim was to assess patient–doctor relationship). | 5-Point scale | English |
Patient Enablement Instrument (PEI)50 | Howie et al. (1998) | 6 | No subscales reported (aim was to understand the feelings of patients after a consultation). | 3-Point scale | English |
Patient Experience Questionnaire (PEQ)51 | Steine et al. (2001) | 18 | Five subscales: communication, emotions, outcomes, barriers, and auxiliary staff. | 5-Point Likert scale | English |
Quality of Visit to Family Physician (QVFP)52 | Marcinowicz et al. (2010) | 30 | Three subscales: doctor–patient relationship and consultation outcome, barriers and difficulties, accessibility to care. | 5-Point Likert scale | English |
Surgery Satisfaction Questionnaire (SSQ)31,53 | Baker et al. (1991) | 17 | Six subscales: general satisfaction, continuity, access, medical care, premises, and availability. | 5-Point Likert scale | English |
Visit-specific Satisfaction Questionnaire VF (VSQ-VF)33 | Gasquet et al. (2003) | 9 | No subscales reported (aim was to assess the global satisfaction after a visit). | 5-Point Likert scale | English |
Vukovic Patient Satisfaction Questionnaire 20 Items (Vukovic-PSQ-20)54 | Vukovic et al. (2012) | 20 | No subscales reported (aim was to assess the global satisfaction). | 5-Point Likert scale and dichotomous score | English |
Ware Patient Satisfaction questionnaire 55 items (Ware-PSQ-55)55 | Ware et al. (1983) | 55 | Seven subscales: access to care, financial aspects, availability of resources, continuity of care, technical quality, interpersonal manner, and overall satisfaction. | 5-Point Likert scale | English |
These 29 articles evaluated, developed, or validated 29 individual instruments. The number of items in the included instruments ranged from 6 to 84. Of the 29 instruments, 19 used a 5-point Likert scale for response categories. Study sample size varied from 21 to 190,038 patients.
Subscales captured by the included measurement instruments
Table 1 illustrates the 58 subscales used to capture patient primary care experiences in multiprofessional clinics. Twenty-one studies reported subscales, but no instrument captured all 9 IOM dimensions. However, the most frequently assessed IOM dimension was “respect for patient values, preferences, and expressed needs” (59%). The least frequently assessed dimensions were “physical comfort” (3.45%) and “involvement of family and friends” (3.45%) (Fig. 2).
Fig. 2.
Percentage of instruments assessing each of the 9 IOM dimensions (1990–2019).
Quality of design, methods, and reporting
Table 2 provides a methodological quality appraisal overview of the studies using the 9 COSMIN criteria and checklist with 4-point scale ratings. Whilst most studies used classical test theory, 1 study used item response theory. On average, 3 of the 9 COSMIN measurement properties were assessed, and no study assessed all 9.
Table 2.
Methodological quality appraisal overview of the 29 included studies (1990–2019) using COSMIN ratings.
Instrument | Instrument authors (year) | Article(s) | IRT or CTT | IRT score | Internal consistency | Reliability | Measurement error | Content validity | Structural validity | Hypothesis testing | Cross-cultural validity | Criterion validity | Responsiveness |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ACES | Safran et al. (2006) | 27 | CTT | 0 | |||||||||
CPCI | Flocke et al. (1997) | 28,29 | CTT | +++[31] | +[31] | ||||||||
CSQ | Baker et al. (1990) | 30–32 | CTT | ++[34] | ++[33] | 0[33] | |||||||
CSS-VF | Gasquet et al. (2003) | 33 | CTT | + | 0 | + | 0 | ||||||
EUROPEP | Comité EUROPEP (1998) | 29,34 | CTT | 0 | |||||||||
G-MISS-16-VF | Maurice-Szamburski et al. (2017) | 35 | IRT | ++ | +++ | +++ | 0 | ||||||
GPAQ | Mead et al. (2008) | 36 | CTT | +++ | |||||||||
GPAQ-R | Roland et al. (2013) | 37 | CTT | ++ | +++ | ||||||||
GPAS | Ramsay et al. (2000) | 38 | CTT | +++ | ++ | 0 | |||||||
Grogan-PSQ-40 | Grogan et al. (1995) | 39 | CTT | 0 | 0 | 0 |
Haddad-PSQ-22 | Haddad et al. (2000) | 40 | CTT | +++ | ++ | 0 | |||||||
IPQ | Greco et al. (2003) | 41 | CTT | 0 | 0 | ||||||||
Marshall-PSQ-18 | Marshall et al. (1994) | 42 | CTT | +++ | |||||||||
MISS-21 | Meakin et al. (2002) | 43 | CTT | 0 | + | ||||||||
MISS-26 | Wolf et al. (1978) | 44 | CTT | 0 | |||||||||
MISS-29 | Wolf et al. (1981) | 32 | CTT | 0 | |||||||||
NMPSSA | Eccles et al. (1992) | 45 | CTT | 0 | + | ||||||||
NMPSSIAC | Eccles et al. (1992) | 45 | CTT | 0 | + | ||||||||
PCAS | Safran et al. (1998) | 29,46 | CTT | ++[49] | ++[49] | ++[49] | |||||||
PCAT | Shi et al. (2001) | 47 | CTT | +++ | +++ |
PDIS | Bowman et al. (1992) | 48 | CTT | 0 | + | 0 | |||||||
PDRQ-9 | Van der Feltz-Cornelis et al. (2004) | 49 | CTT | 0 | ++ | 0 | |||||||
PEI | Howie et al. (1998) | 50 | CTT | 0 | |||||||||
PEQ | Steine et al. (2001) | 51 | CTT | 0 | +++ | ||||||||
QVFP | Marcinowicz et al. (2010) | 52 | CTT | + | 0 | +++ | |||||||
SSQ | Baker et al. (1991) | 31,53 | CTT | 0[56] | ++[34] | 0[34] | 0[56] | ||||||
VSQ-VF | Gasquet et al. (2003) | 33 | CTT | + | 0 | + | 0 | ||||||
Vukovic-PSQ-20 | Vukovic et al. (2012) | 54 | CTT | 0 | 0 | +++ | |||||||
Ware-PSQ-55 | Ware et al. (1983) | 55 | CTT | +++ | 0 | 0 | +++ |
4-point scale rating: +++ = excellent, ++ = good, + = fair, 0 = poor, empty space = COSMIN rating not applicable. CTT, classical test theory; IRT, item response theory.
For interpretability, all the studies reported the way in which missing items had been handled. Eleven studies reported the percentage of respondents with the highest possible score and the lowest possible score. Neither MIC nor MID was assessed in any study.
For generalizability, most studies reported the sampling method and sample description, with convenience sampling being the most common. Most studies included patients with a wide age range, and a balanced gender distribution was achieved in all the studies. All the studies had been conducted in Western countries.
Overall results on the best-evidence synthesis of the included instruments
We were unable to draw any clear conclusions from the best-evidence synthesis (Table 3). The synthesis was unknown for more than 50% of the instruments across all their measurement properties. Responsiveness had not been evaluated for any instrument. Cross-cultural validation and criterion validity were analysed for 6 instruments, but the conclusions were unknown owing to poor methodology. Three studies showed moderate evidence of positive results for internal consistency. One study showed moderate evidence of positive results for measurement error/agreement. However, for reliability, content validity, structural validity/item response theory, and hypothesis testing, results ranged from strong evidence of negative results to limited evidence of positive results. The PCAS instrument had conflicting results for internal consistency, structural validity/item response theory, and hypothesis testing. Only the G-MISS-16-VF instrument used item response theory.
Discussion
Statement of principal findings
To our knowledge this is the first systematic review of patient self-assessment instruments to measure the quality of primary care in a multiprofessional setting. We identified a diverse range of concepts to describe patient primary care experience. However, no instrument assessed all 9 IOM quality-of-care dimensions. Also, of those instruments identified, the scientific methodology to validate them was limited.
Quality-of-care dimension heterogeneity
Across all studies in our sample, there was a diverse conceptualization of patient primary care experiences, as demonstrated by the 58 subscales, but no single instrument covered all IOM dimensions (see Appendix 4).
This is in agreement with Haggerty et al.,56 who conducted a Delphi consultation of experts to define the primary care attributes that should be evaluated: community orientation (equity, community participation), patient-centred care (global care, family-centred care, cultural sensitivity, patient–doctor relationships, respect, communication), clinical care attributes (technical quality, accessibility, continuity, care management, comprehensiveness), and structural dimensions (information management, multidisciplinarity). These attributes were all identified during this review, but no instrument addressed all of them. This is consistent with another study, which found that validated instruments evaluating patient primary care experiences do not cover many important attributes.57
By classifying instruments according to the dimensions and attributes they cover, quality of primary care investigators can choose the questionnaire best suited to fulfil their research objectives. Since no instrument covers all attributes or dimensions, a combination of instrument subscales may be required to give optimal measurement.57
Lack of validity
Validity, the most fundamental measurement property, refers to an instrument's ability to measure what it was designed to measure, in this case the quality of primary care. However, no included instrument demonstrated adequate validity. This is consistent with the findings of another review of the Improving Practice Questionnaire (IPQ) and the General Practice Assessment Questionnaire (GPAQ), which revealed that both had suboptimal validity.58 Some studies use the question "Are you satisfied with the quality of care you have just received?" as a "gold standard" in their psychometric assessment. This is understandable because, by nature, no gold standard can be designed for an instrument assessing quality of care. However, this question can only be used for statistical analysis, since it is not sufficiently descriptive to constitute an evaluative tool.59
Some properties were not applicable to every instrument, such as cross-cultural validity. This property analyses the validity of an instrument’s translation. Therefore, only the 2 questionnaires which had been translated into French33,35 could have their cross-cultural validity evaluated (Table 2). The other 27 instruments were all written and validated in English.
Lack of reliability
Instrument reliability is evaluated using internal consistency and measurement error.60 Many authors of the included studies used internal consistency as the sole reliability indicator, which is inadequate. The lack of scientific methodology for assessing reliability in most of the included studies makes interpreting their validity questionable, because an instrument can only be valid if it is reliable.61
Lack of insight into the ability to measure and interpret change
No study evaluated the responsiveness of its instrument, which is unfortunate as it is a particularly important property. The study authors did not provide any reasons for not evaluating responsiveness, although budget or time constraints may have played a role. The minimal important change (MIC) or minimal important difference (MID) should be known for any given instrument before studying the effects of a quality-of-care intervention over time. This is important for measuring changes in lived experiences over time.
New publications since this review
Since our original search, only 2 articles have been published which would have been eligible for inclusion in this study.62,63 However, on examination, neither study demonstrated stronger best-evidence synthesis or methodology. Furthermore, neither instrument described in these 2 studies (the Japanese Primary Care Assessment Tool—Short Form and the Norwegian Patient Experiences with GP Questionnaire) demonstrated adequate validity; reliability was lacking, and responsiveness was not evaluated. Therefore, these 2 instruments do not change the results of this study.
Strengths and limitations
This review was based on a published methodology and followed all the standards required for a systematic literature review.64 It appears to be the first to evaluate measurement property analysis, using the COSMIN method, applied to patient self-assessment instruments on the quality of primary care in multiprofessional clinics.
Two researchers (or 4 when necessary) evaluated article eligibility, extracted the data, and performed the quality appraisal for each measurement property, making the results robust. All study results and methodological quality were considered to ensure unbiased appraisal of instrument measurement quality. Furthermore, methodological quality was rated using the widely accepted COSMIN standards. The high number of included instruments provides insight into overall trends in measurement property evaluations, their quality, and overall instrument quality. This makes it possible to provide general recommendations on how to improve instruments, and the studies evaluating them, when assessing the quality of patient experiences in primary care.
The study had some limitations. To be eligible for inclusion, articles had to describe a study developing or validating a primary care patient experience evaluation instrument. Consequently, even with a highly sensitive search strategy, relevant articles could have been missed if instrument development or validation was not explicitly mentioned in either the title or abstract. Furthermore, since this review focussed on instruments tested in primary care, any instruments evaluated in other settings, such as hospitals or emergency departments, were excluded. This may have resulted in instruments being missed. In addition, as psychometric analysis was one of the objectives, all questionnaires lacking a psychometric analysis were excluded from this review; selection bias is therefore possible for such instruments. The methodological analysis was performed using the COSMIN criteria, thus minimizing information bias. Nevertheless, the possibility of this type of bias remains. Finally, due to limited resources, we were only able to search for articles in English and French, which means that studies published in other languages on this subject may not have been included in this review.
Interest and future implications
Interpretation within the context of the wider literature
This is the first systematic literature review of patient self-assessment instruments assessing the quality of multiprofessional primary care that includes measurement property analysis. It enables designers of primary care quality studies to understand the strengths and limitations of existing instruments in terms of captured dimensions and measurement properties. It also reveals, for a given instrument, its measurement property weaknesses, and may create the opportunity to design studies reinforcing these properties. Despite growing interest in quality-of-care evaluation and the abundance of instruments validated within the hospital framework, particularly in the context of accreditation, few psychometric evaluations of instruments developed in primary care have been performed.
Implications for policy, practice, and research
Researchers should choose an instrument which is reliable, valid, responsive, and interpretable so it can be used within a health care system and allow comparability, both in space (comparing the quality of care in 2 health care settings to define optimal organization) and over time (comparing results before and after an intervention to measure its impact). Future research will need to be of high quality and involve creating, developing, or validating a generic instrument for assessing quality of primary care. It would be beneficial to follow COSMIN and consider the IOM dimensions. This review could be the starting point for such work and provides a solid foundation enabling researchers to identify the most suitable questionnaire for their work and to supplement or reinforce its psychometric analysis. Later work could involve translating the instrument, adapting it to different cultures, and testing it in each setting.
Conclusions
This systematic review identified numerous patient self-assessment instruments concerning the quality of primary care in multiprofessional clinics. Although a wide variety of patient experiences were captured, few instruments have strong measurement properties. High quality research is required to develop and validate a generic instrument for assessing quality of primary care.
Acknowledgements
We would like to thank Georges Buzet (Library, University of Western Brittany, Brest, France) for his assistance in performing the searches and for his contribution during the selection of eligible studies and Dr C.B. Terwee (Department of Epidemiology and Biostatistics and the EMGO+ Institute for Health and Care Research, VU University Medical Centre, Amsterdam, The Netherlands) for her advice on questions concerning the interpretation and application of the COSMIN guidelines. This study is part of the French network of University Hospitals HUGO (Hôpitaux Universitaires du Grand Ouest). Editorial assistance in the preparation of this article was provided by Charlotte Wright BVM&S(hons) MRCVS DipTrans of Speak the Speech Consulting.
Contributor Information
Jérémy Derriennic, Department of General Practice, University of Western Brittany, 22, av. Camille Desmoulins, Brest, FR, France; ER 7479 SPURBO, University of Western Brittany, 22, av. Camille Desmoulins, Brest, FR, France.
Patrice Nabbe, Department of General Practice, University of Western Brittany, 22, av. Camille Desmoulins, Brest, FR, France; ER 7479 SPURBO, University of Western Brittany, 22, av. Camille Desmoulins, Brest, FR, France.
Marie Barais, Department of General Practice, University of Western Brittany, 22, av. Camille Desmoulins, Brest, FR, France; ER 7479 SPURBO, University of Western Brittany, 22, av. Camille Desmoulins, Brest, FR, France.
Delphine Le Goff, Department of General Practice, University of Western Brittany, 22, av. Camille Desmoulins, Brest, FR, France; ER 7479 SPURBO, University of Western Brittany, 22, av. Camille Desmoulins, Brest, FR, France.
Thomas Pourtau, Department of General Practice, University of Western Brittany, 22, av. Camille Desmoulins, Brest, FR, France.
Benjamin Penpennic, Department of General Practice, University of Western Brittany, 22, av. Camille Desmoulins, Brest, FR, France.
Jean-Yves Le Reste, Department of General Practice, University of Western Brittany, 22, av. Camille Desmoulins, Brest, FR, France; ER 7479 SPURBO, University of Western Brittany, 22, av. Camille Desmoulins, Brest, FR, France.
Funding
The authors declare that there was no source of funding for the research.
Authors’ contribution
JD and J-YLR were responsible for study conception and developed the systematic review protocol. TP, BP, PN, and MB contributed to the protocol development. JD conducted the literature search. JD, TP, BP, and J-YLR scanned selected titles and abstracts, assessed full-text versions independently, and performed the narrative synthesis. JD wrote the first draft of the manuscript. PN, MB, and J-YLR revised the manuscript critically. All authors read and approved the final manuscript.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Conflict of interest
None declared.
Data availability
All data generated or analysed during this study are included in this article and in its online supplementary material.
References
- 1. Le Reste JY, Nabbe P, Manceau B, Lygidakis C, Doerr C, Lingner H, Czachowski S, Munoz M, Argyriadou S, Claveria A, et al. The European General Practice Research Network presents a comprehensive definition of multimorbidity in family medicine and long term care, following a systematic review of relevant literature. J Am Med Dir Assoc. 2013;14(5):319–325. [DOI] [PubMed] [Google Scholar]
- 2. Moffat K, Mercer SW.. Challenges of managing people with multimorbidity in today’s healthcare systems. BMC Fam Pract. 2015;16(1):129–132. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3. Stumm J, Thierbach C, Peter L, Schnitzer S, Dini L, Heintze C, Döpfmer S.. Coordination of care for multimorbid patients from the perspective of general practitioners—a qualitative study. BMC Fam Pract. 2019;20(1):160–171. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4. Schuttner L, Parchman M.. Team-based primary care for the multimorbid patient: matching complexity with complexity. Am J Med. 2019;132(4):404–406. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5. Wagner EH. Chronic disease management: what will it take to improve care for chronic illness? Eff Clin Pract. 1998;1(1):2–4. [PubMed] [Google Scholar]
- 6. Rittenhouse DR, Shortell SM.. The patient-centered medical home: will it stand the test of health reform? JAMA. 2009;301(19):2038–2040. [DOI] [PubMed] [Google Scholar]
- 7. Rittenhouse DR, Shortell SM, Gillies RR, Casalino LP, Robinson JC, Mccurdy RK, Siddique J.. Improving chronic illness care: findings from a national study of care management processes in large physician practices. Med Care Res Rev. 2010;67(3):301–320. [DOI] [PubMed] [Google Scholar]
- 8. Kruk ME, Kelley E, Syed SB, Tarp F, Addison T, Akachi Y.. Measuring quality of health-care services: what is known and where are the gaps? Bull World Health Organ. 2017;95(6):389–389A. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260(12):1743–1748. [DOI] [PubMed] [Google Scholar]
- 10. Donabedian A. The Lichfield Lecture. Quality assurance in health care: consumers’ role. Qual Health Care. 1992;1(4):247–251. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11. Hanefeld J, Powell-Jackson T, Balabanova D.. Understanding and measuring quality of care: dealing with complexity. Bull World Health Organ. 2017;95(5):368–374. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12. Institute of Medicine Committee on Quality of Health Care in America. Crossing the quality chasm: a new health system for the 21st century. Washington (DC): National Academies Press (US); 2001. [Google Scholar]
- 13. Tzelepis F, Sanson-Fisher RW, Zucca AC, Fradgley EA.. Measuring the quality of patient-centered care: why patient-reported measures are critical to reliable assessment. Patient Prefer Adherence. 2015;9(1):831–835. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14. Dong Q, Huang J, Liu S, Yang L, Li J, Li B, Zhao X, Li Z, Wu L.. A survey on glycemic control rate of type 2 diabetes mellitus with different therapies and patients’ satisfaction in China. Patient Prefer Adherence. 2019;13(1):1303–1310. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15. Narayan KM, Gregg EW, Fagot-Campagna A, Gary TL, Saaddine JB, Parker C, Imperatore G, Valdez R, Beckles G, Engelgau MM.. Relationship between quality of diabetes care and patient satisfaction. J Natl Med Assoc. 2003;95(1):64–70. [PMC free article] [PubMed] [Google Scholar]
- 16. Fan VS, Burman M, McDonell MB, Fihn SD.. Continuity of care and other determinants of patient satisfaction with primary care. J Gen Intern Med. 2005;20(3):226–233. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17. Jenkinson C, Coulter A, Bruster S, Richards N, Chandola T.. Patients’ experiences and satisfaction with health care: results of a questionnaire study of specific aspects of care. Qual Saf Health Care. 2002;11(4):335–339. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18. Salisbury C, Wallace M, Montgomery AA.. Patients’ experience and satisfaction in primary care: secondary analysis using multilevel modelling. BMJ. 2010;341:c5004. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19. Anhang Price R, Elliott MN, Zaslavsky AM, Hays RD, Lerhmann WG, Ribowsky L, Edgman-Levitan S, Cleary PD.. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev. 2014;71(5):522–554. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20. Kingsley C, Patel S.. Patient-reported outcome measures and patient-reported experience measures. BJA Educ. 2017;17(4):137–144. [Google Scholar]
- 21. Terwee CB, Bot SD, de Boer MR, Van Der Windt DAWM, Knol DL, Dekker J, Bouter LM, De Vet HCW.. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007;60(1):34–42. [DOI] [PubMed] [Google Scholar]
- 22. Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, Bouter LM, De Vet HCW.. The COSMIN study reached international consensus on taxonomy, terminology, and definitions of measurement properties for health-related patient-reported outcomes. J Clin Epidemiol. 2010;63(7):737–745. [DOI] [PubMed] [Google Scholar]
- 23. Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, Bouter LM, De Vet HCW.. The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: an international Delphi study. Qual Life Res. 2010;19(4):539–549. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Institute of Medicine (US) Committee on the Future of Primary Care, Donaldson M, Yordy K, Vanselow N, eds. Defining primary care: an interim report. Washington (DC): National Academies Press (US); 1994. [PubMed] [Google Scholar]
- 25. Schellingerhout JM, Heymans MW, Verhagen AP, de Vet HC, Koes BW, Terwee CB.. Measurement properties of translated versions of neck-specific questionnaires: a systematic review. BMC Med Res Methodol. 2011;11:87–101. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26. Schellingerhout JM, Verhagen AP, Heymans MW, Koes BW, de Vet HC, Terwee CB.. Measurement properties of disease-specific questionnaires in patients with neck pain: a systematic review. Qual Life Res. 2012;21(4):659–670. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27. Safran DG, Karp M, Coltin K, Chang H, Li A, Ogren J, Rogers WH.. Measuring patients’ experiences with individual primary care physicians. Results of a statewide demonstration project. J Gen Intern Med. 2006;21(1):13–21. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28. Flocke SA. Measuring attributes of primary care: development of a new instrument. J Fam Pract. 1997;45(1):64–74. [PubMed] [Google Scholar]
- 29. Haggerty JL, Burge F, Beaulieu MD, Pineault R, Beaulieu C, Lévesque JF, Santor DA, Gass D, Lawson B.. Validation of instruments to evaluate primary healthcare from the patient perspective: overview of the method. Healthc Policy. 2011;7(Spec Issue):31–46. [PMC free article] [PubMed] [Google Scholar]
- 30. Baker R. Development of a questionnaire to assess patients’ satisfaction with consultations in general practice. Br J Gen Pract. 1990;40(341):487–490. [PMC free article] [PubMed] [Google Scholar]
- 31. Baker R, Whitfield M.. Measuring patient satisfaction: a test of construct validity. Qual Health Care. 1992;1(2):104–109. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32. Kinnersley P, Stott N, Peters T, Harvey I, Hackett P.. A comparison of methods for measuring patient satisfaction with consultations in primary care. Fam Pract. 1996;13(1):41–51. [DOI] [PubMed] [Google Scholar]
- 33. Gasquet I, Villeminot S, Dos Santos C, Vallet O, Verdier A, Kovess-Masféty V, Hardy-Baylé MC, Falissard B.. [Cultural adaptation and validation of questionnaires measuring satisfaction with the French health system]. Sante Publique. 2003;15(4):383–402. [PubMed] [Google Scholar]
- 34. Wensing M, Mainz J, Grol R.. A standardised instrument for patient evaluations of general practice care in Europe. Eur J Gen Pract. 2000;6(3):82–87. [Google Scholar]
- 35. Maurice-Szamburski A, Michel P, Loundou A, Auquier P.. Validation of the generic medical interview satisfaction scale: the G-MISS questionnaire. Health Qual Life Outcomes. 2017;15(1):36–49. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36. Mead N, Bower P, Roland M.. The General Practice Assessment Questionnaire (GPAQ)—development and psychometric characteristics. BMC Fam Pract. 2008;9(1):13–24. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37. Roland M, Roberts M, Rhenius V, Campbell J.. GPAQ-R: development and psychometric properties of a version of the general practice assessment questionnaire for use for revalidation by general practitioners in the UK. BMC Fam Pract. 2013;14(1):160–167. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38. Ramsay J, Campbell JL, Schroter S, Green J, Roland M.. The General Practice Assessment Survey (GPAS): tests of data quality and measurement properties. Fam Pract. 2000;17(5):372–379. [DOI] [PubMed] [Google Scholar]
- 39. Grogan S, Conner M, Willits D, Norman P.. Development of a questionnaire to measure patients’ satisfaction with general practitioners’ services. Br J Gen Pract. 1995;45(399):525–529. [PMC free article] [PubMed] [Google Scholar]
- 40. Haddad S, Potvin L, Roberge D, Pineault R, Remondin M.. Patient perception of quality following a visit to a doctor in a primary care unit. Fam Pract. 2000;17(1):21–29. [DOI] [PubMed] [Google Scholar]
- 41. Greco M, Powell ROY, Sweeney K.. The Improving Practice Questionnaire (IPQ): a practical tool for general practices seeking patient views. Educ Prim Care. 2003;14(4):440–448. [Google Scholar]
- 42. Thayaparan AJ, Mahdi E.. The Patient Satisfaction Questionnaire Short Form (PSQ-18) as an adaptable, reliable, and validated tool for use in various settings. Med Educ Online. 2013;18(1):21747–21750. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43. Meakin R, Weinman J.. The ‘Medical Interview Satisfaction Scale’ (MISS-21) adapted for British general practice. Fam Pract. 2002;19(3):257–263. [DOI] [PubMed] [Google Scholar]
- 44. Wolf MH, Putnam SM, James SA, Stiles WB.. The Medical Interview Satisfaction Scale: development of a scale to measure patient perceptions of physician behavior. J Behav Med. 1978;1(4):391–401. [DOI] [PubMed] [Google Scholar]
- 45. Bamford C, Jacoby A.. Development of patient satisfaction questionnaires: I. Methodological issues. Qual Health Care. 1992;1(3):153–157. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46. Safran DG, Kosinski M, Tarlov AR, Rogers WH, Taira DH, Lieberman N, Ware JE.. The Primary Care Assessment Survey: tests of data quality and measurement performance. Med Care. 1998;36(5):728–739. [DOI] [PubMed] [Google Scholar]
- 47. Shi L, Starfield BH, Xu J.. Validating the Adult Primary Care Assessment Tool. J Fam Pract. 2001;50(1):161–176. [Google Scholar]
- 48. Bowman MA, Herndon A, Sharp PC, Dignan MB.. Assessment of the patient-doctor interaction scale for measuring patient satisfaction. Patient Educ Couns. 1992;19(1):75–80. [DOI] [PubMed] [Google Scholar]
- 49. Van der Feltz-Cornelis CM, Van Oppen P, Van Marwijk HW, De Beurs E, Van Dyck R.. A Patient-Doctor Relationship Questionnaire (PDRQ-9) in primary care: development and psychometric evaluation. Gen Hosp Psychiatry. 2004;26(2):115–120. [DOI] [PubMed] [Google Scholar]
- 50. Howie JG, Heaney DJ, Maxwell M, Walker JJ.. A comparison of a Patient Enablement Instrument (PEI) against two established satisfaction scales as an outcome measure of primary care consultations. Fam Pract. 1998;15(2):165–171. [DOI] [PubMed] [Google Scholar]
- 51. Steine S, Finset A, Laerum E.. A new, brief questionnaire (PEQ) developed in primary health care for measuring patients’ experience of interaction, emotion and consultation outcome. Fam Pract. 2001;18(4):410–418. [DOI] [PubMed] [Google Scholar]
- 52. Marcinowicz L, Rybaczuk M, Grebowski R, Chlabicz S.. A short questionnaire for measuring the quality of patient visits to family practices. Int J Qual Health Care. 2010;22(4):294–301. [DOI] [PubMed] [Google Scholar]
- 53. Baker R. The reliability and criterion validity of a measure of patients’ satisfaction with their general practice. Fam Pract. 1991;8(2):171–177. [DOI] [PubMed] [Google Scholar]
- 54. Vuković M, Gvozdenović BS, Gajić T, Stamatović Gajić B, Jakovljević M, McCormick BP.. Validation of a patient satisfaction questionnaire in primary health care. Public Health. 2012;126(8):710–718. [DOI] [PubMed] [Google Scholar]
- 55. Ware JE Jr, Snyder MK, Wright WR, Davies AR.. Defining and measuring patient satisfaction with medical care. Eval Program Plann. 1983;6(3–4):247–263. [DOI] [PubMed] [Google Scholar]
- 56. Haggerty J, Burge F, Lévesque JF, Gass D, Pineault R, Beaulieu MD, Santor D.. Operational definitions of attributes of primary health care: consensus among Canadian experts. Ann Fam Med. 2007;5(4):336–344. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57. Lévesque JF, Haggerty J, Beninguissé G, Burge F, Gass D, Beaulieu MD, Pineault R, Santor D, Beaulieu C.. Mapping the coverage of attributes in validated instruments that evaluate primary healthcare from the patient perspective. BMC Fam Pract. 2012;13(1):20–28. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58. Hankins M, Fraser A, Hodson A, Hooley C, Smith H.. Measuring patient satisfaction for the quality and outcomes framework. Br J Gen Pract. 2007;57(542):737–740. [PMC free article] [PubMed] [Google Scholar]
- 59. Starfield B. New paradigms for quality in primary care. Br J Gen Pract. 2001;51(465):303–309. [PMC free article] [PubMed] [Google Scholar]
- 60. McCrae RR, Kurtz JE, Yamagata S, Terracciano A.. Internal consistency, retest reliability, and their implications for personality scale validity. Pers Soc Psychol Rev. 2011;15(1):28–50. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 61. John OP, Soto CJ.. The importance of being valid: reliability and the process of construct validation. London: The Guilford Press; 2007. [Google Scholar]
- 62. Aoki T, Fukuhara S, Yamamoto Y.. Development and validation of a concise scale for assessing patient experience of primary care for adults in Japan. Fam Pract. 2020;37(1):137–142. [DOI] [PubMed] [Google Scholar]
- 63. Bjertnæs ØA, Iversen HH, Valderas JM.. Patient experiences with general practitioners: psychometric performance of the generic PEQ-GP instrument among patients with chronic conditions. Fam Pract. 2021:cmab133. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 64. Moher D, Liberati A, Tetzlaff J, Altman DG.. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097. [DOI] [PMC free article] [PubMed] [Google Scholar]