Integrated Healthcare Journal. 2020 Nov 1;2(1):e000033. doi: 10.1136/ihj-2019-000033

How to implement patient experience surveys and use their findings for service improvement: a qualitative expert consultation study in Australian general practice

Hyun Jung Song 1, Sarah Dennis 1,2,3, Jean-Frédéric Levesque 1,4, Mark Harris 1
PMCID: PMC10240725  PMID: 37441312

Abstract

Objective

To identify barriers (patient, provider, practice and system levels) to consider when implementing patient experience surveys in Australian general practice and enablers of their systematic use to inform service improvement in clinical practice as well as the broader health system.

Methods and analysis

An expert consultation and qualitative content analysis of cross-sectional, open-text survey data. Data were collected from key international and Australian experts in the areas of measurement and quality improvement in general practice.

Results

Responses from 20 participants from six countries were included in the study. Participants discussed the importance of ensuring value and relevance of surveys to stakeholders. Lack of resources, IT infrastructure, capacity building and sustained funding were identified as barriers to implementing surveys. Participants discussed the importance of clearly defining and communicating the purpose of surveys and agreed on the value of using patient experience to inform reflective, team-based learning at the practice level. Opinions differed on the use of patient experience data at the system level, with some questioning its utility or fairness for external performance reporting. Others recommended the aggregation and reporting of these data under certain conditions, including for the purpose of triangulation with other quality and outcome data. The study identified an evidence gap in the assessment and interpretation of patient experience data at the practice and system levels, including the analysis and contextualisation of survey findings at the system level.

Conclusion

Patient experience surveys have potential for guiding practice level quality improvement, but many barriers to their implementation remain. There is need for greater research and policy efforts to understand how this information can be used at the system level for improving Australian general practice.

Keywords: patient satisfaction, performance measures, surveys, healthcare quality improvement, general practice


Key messages.

What is already known about this subject?

  • Patient-reported experience measures (PREMs) are widely recognised to be key indicators of healthcare quality.

  • They are typically collected through surveys across healthcare settings, and evidence suggests their usefulness in informing service improvement.

  • In Australia, we currently lack knowledge on how PREMs can be administered, implemented, interpreted and used for quality improvement in primary care, including general practice.

What does this study add?

  • This qualitative study consulted with international and Australian experts to develop an understanding of how PREMs can be implemented and used at the general practice and system levels to inform service improvement.

  • Participants agreed on the value of using PREMs data for reflective learning and improvement of clinical practice but disagreed on how these data could be aggregated and reported at the system level.

  • Some suggestions on how this could be achieved at the system level were provided.

How might this impact on clinical practice or future developments?

  • Previous literature has identified the importance of interpreting and assessing PREMs data prior to use. This study highlights the need for a more structured approach to this step at the practice level and identifies a significant gap in this area at the system level. Greater research and policy efforts are needed to address these practice and knowledge gaps.

Introduction

Patient feedback on their experience of care is one of the core quality dimensions for health system performance and is a widely recognised promoter of patient-centred care.1 2 Collecting this information is an important step in ensuring that services are responsive to patients’ needs and preferences. Positive patient experience has also been shown to be associated with higher levels of safety and clinical effectiveness of health services.3

Patient-reported experience measures (PREMs) are most commonly collected through surveys, and their potential for improving the quality of health services is increasingly recognised.4–6 In clinical practice, assessing patient experience can provide useful insights into how patients observe, interact with and are impacted by the care environment and highlight specific areas for improvement.7–9 Examples of information commonly collected through patient experience surveys include: access to care, interpersonal communication and trust, continuity and coordination, comprehensiveness of services and patient outcomes.8 10 Routinely measuring and publicly reporting such PREMs data can strengthen the healthcare system by fostering accountability and transparency and by providing an important stimulus for service improvement.11 12 Despite this potential, operationalising patient surveys can be a difficult task in healthcare. One study identified several barriers to implementing surveys at the organisational and professional levels of healthcare, including those relating to organisational culture and to having adequate staff time and knowledge to administer surveys and interpret the data for use.13 Extensive qualitative research in the UK has documented other barriers, including the limited perceived validity and credibility of the surveys among practice staff.14–17 Beyond the challenges of implementing surveys, researchers have also noted a lack of evidence on how patient experience information, once collected, can be meaningfully used to improve service delivery, which itself becomes a barrier to applying this information for change.4 6 18

There is a particular evidence gap in Australia, especially in primary care. Policy and practice efforts to systematically collect and report on patient survey data in Australia have been primarily focused on acute care settings.10 In primary care, there is a lack of nationwide policies mandating the routine, systematic and cohesive collection and reporting of PREMs data. Furthermore, there is very little published evidence to indicate whether patient experience is being measured in a standardised, robust way at this level of care. In Australia, patient feedback on primary care experience is mainly collected using commercially provided general practice accreditation surveys—such as the Patient Accreditation Improvement Survey, which has been validated for use—and these surveys are administered infrequently.19 20 As part of accreditation objectives to improve the quality of care delivery, findings from these surveys can be reported back to the individual practices; however, very little is known about how this information is used or reported, including in formal quality improvement processes.10 At a broader level, the Australian Bureau of Statistics (ABS) conducts a national survey that collects information about access to general practice-based services from a random sample of individuals each year.21 However, there is no evidence to suggest that this is used for quality improvement activities in primary care.

The aim of this study was to identify key barriers at multiple levels (patient, provider, practice and system) to consider when implementing patient experience surveys in Australian general practice and how PREMs data can be used to inform service improvement in clinical practice as well as the broader health system.

Materials and methods

This expert consultation study used qualitative content analysis of cross-sectional, open-text survey data. Expert consultations have been used previously in primary care literature to capture key stakeholder views and recommendations on a range of topics relating to the development of patient-centred interventions and models of care.22–24 Given limited examples of PREMs-driven quality improvement in Australian primary care, we also sought the expertise of international participants with experiential knowledge in this area. The Standards for Reporting Qualitative Research checklist guided the writing of this paper.25

Sample and recruitment

Participants were recruited through a mix of purposive and snowball sampling, initially using contacts known to the researchers, then inviting other experts recommended by the participants. Stakeholders were selected for their expertise locally and/or internationally (Organisation for Economic Co-operation and Development (OECD)-based countries) in topics relating to quality improvement and/or patient surveys, including the design, implementation and coordination of survey programmes, with a focus on primary care. Participants who were actively engaged in patient care and service improvement were invited from academia, clinical practice, consumer representative organisations and primary health governance or administration (eg, employees of primary health networks (PHNs)). Individuals were informed that participation was voluntary and that they would not be remunerated. Letters of invitation and study background documents were emailed to a total of 37 experts.

Data collection

Qualitative data were collected between June 2018 and January 2019 using a questionnaire with open-ended questions. The survey was administered in three ways based on the participant’s preference. Seventeen participants completed the questionnaire online via a secure, individualised link (Qualtrics). Two participants completed and returned an electronic copy of the questionnaire (Microsoft Word) (online supplemental appendix 1). One participant wished to be surveyed by telephone, and their responses were recorded and transcribed.

Supplementary data

ihj-2019-000033supp001.pdf (1.2MB, pdf)

The survey questions were designed to elicit information on the following: key considerations in administering and operationalising patient experience surveys in general practice (eg, methodological issues); stakeholder-specific challenges (at patient, provider, practice and system levels); and recommendations for using and reporting survey findings at various levels. Participants were asked to draw from specific experiences and examples from their local contexts wherever relevant.

Data analysis

Data were analysed and managed in Microsoft Word. Deidentified responses from all sources were extracted and compiled into a single document for analysis. Data from Australian and international respondents were analysed together in order to extract issues in patient experience surveys that are common and relevant across primary care settings.

First, all responses were carefully read and assessed for pertinence to the questions being asked and reorganised into appropriate response categories prior to analysis (HJS). Using a process described by Graneheim and Lundman,26 the text was decontextualised into smaller ‘meaning units’, which were then assigned more context-descriptive codes. Similar codes were clustered into subcategories and then into broader categories, which were based on the stages of implementing patient experience surveys. These were issues relating to: (1) survey administration in clinical practice and (2) interpretation and use of findings at the practice level and system level. Finally, themes were developed from underlying meanings and interactions between the categories. An iterative process of reflection and discussion between the wider research team guided data analysis, including multiple revisions and the final refinement of the themes.

Ethics

All participants provided their written informed consent to participate.

Results

Participants

Of the 37 experts invited to participate, 15 did not reply or declined participation. Two invitees agreed to participate but withdrew from the study prior to participation. In total, 20 participants were included in the study (response rate=54%) from Australia, New Zealand, USA, Canada, UK and Switzerland. Most participants (n=16, 80%) reported being currently active in primary care research, with a median of 15–19 years of experience. Participant characteristics are described in table 1.

Table 1.

Characteristics of participants (n=20)

Participant characteristics Number (% total or range, as indicated)
Sex
 Male 13 (65)
 Female 7 (35)
Current profession
 GP academic 9 (45)
 Academic/professor/researcher 7 (35)
 Practicing GP 3 (15)
 Survey programme director 1 (5)
Country where they are based
 Australia 10 (50)
 New Zealand 3 (15)
 UK 3 (15)
 USA 2 (10)
 Canada 1 (5)
 Switzerland 1 (5)
Years of experience:
Primary care research
 0–9 3 (15)
 10–14 4 (20)
 15–19 3 (15)
 20–24 3 (15)
 25+ 7 (35)
Primary care practice
 0–9 6 (30)
 10–14 1 (5)
 15–19 0 (0)
 20–24 3 (15)
 25+ 10 (50)
Health administration, governance and management
 0–9 13 (65)
 10–14 3 (15)
 15–19 0 (0)
 20–24 3 (15)
 25+ 1 (5)
Patient advocacy and consumer representation
 0–9 19 (95)
 10–14 0 (0)
 15–19 1 (5)
 20–24 0 (0)
 25+ 0 (0)
Self-reported level of expertise in:
Patient-centred care
 Did not answer or none 1 (5)
 Beginner 1 (5)
 Intermediate 5 (25)
 Proficient 7 (35)
 Expert 6 (30)
Survey administration
 Did not answer or none 2 (10)
 Beginner 1 (5)
 Intermediate 6 (30)
 Proficient 3 (15)
 Expert 8 (40)

Findings from content analysis

Administering patient experience surveys

Participants discussed their views on key areas for consideration at various stakeholder levels when implementing patient surveys in clinical practice (table 2). They emphasised the importance of ensuring that patient experience surveys have value and relevance to stakeholders. Participants also discussed the lack of resources, IT infrastructure, capacity building and sustained funding as barriers to implementing surveys at all levels. Furthermore, the importance of establishing a robust sampling strategy to ensure representativeness was discussed. At the practice level, participants highlighted the importance of an organisational culture of quality improvement that places patient surveys as a core business practice. They felt this was integral to alleviating the challenge of ensuring fit of patient surveys with practice workflow. At the system level, they raised the importance of putting in place an accountability or governance framework to oversee surveys across practices, which they felt was currently lacking. It was agreed that relevant stakeholders, including consumers, need to be engaged throughout the design and implementation processes. A recommended approach was supporting continued engagement through the participatory process of codesign. Participants felt this would give patient experience survey respondents a sense of ownership over the survey process and would make the purpose and outcome of the surveys both meaningful and actionable.

Table 2.

Areas to address in administering patient experience surveys in clinical practice

Patients Survey needs to capture what is meaningful and relevant to patient experience.
Purpose of survey needs to be clearly communicated to patients.
Survey needs to be specific to particular patient contexts (eg, culture and language) or incorporate diverse perspectives.
Sampling strategy needs to ensure representativeness of patients to fit survey purpose and generate meaningful findings.
Interpretation of questions may vary by patients (eg, health literacy and background).
Survey administration needs to be done in a setting and format that is conducive to patient participation.
Ensure confidentiality and privacy of data and assure patients that their data will be anonymised and safely stored.
Providers Providers need to see the value of patient surveys to their work:
  • What matters to clinicians?

  • What information will be relevant to their clinical practice?

  • What is feasible to achieve?


Purpose of survey needs to be made clear to providers.
Practices Practice requires a culture of quality improvement that integrates patient surveys as part of core business.
Practices need to have a sense of ownership of the survey process – codesign must take practice needs into account and benefit practice improvement.
Practice needs to have clear purpose for doing the survey, or if being coordinated by an external entity, then have that purpose be made clear to them.
Practices are resistant to surveys if their purpose is solely for performance reporting (eg, connected to punitive sanctions). Quality improvement is a better lever for change.
If surveys are done too often or concurrently with other research or quality improvement activities, staff will be at risk of survey fatigue.
Adequate resources need to be in place for survey and implementation:
  • Skilled workforce to carry out surveys.

  • IT systems in place for management and use of data.

  • Dedicated time and space.


The survey needs to fit with provider schedule and workflow.
Providers need to be upskilled in all aspects of patient survey and implementation of findings (eg, recruitment, administration, data management, analysis and interpretation).
Providers need to be able to easily access and extract data from patient surveys.
Dissemination of survey findings needs to reach a wide and diverse audience, in a timely fashion.
System Nationally, there needs to be a stronger culture of quality in the Australian health system.
The system needs to commit to building an evidence base on how to use patient surveys for QI in general practice.
  • For example: partnership with academia.


There is a need for strong governance and accountability framework for overseeing patient surveys at national and regional levels.
Implementing surveys will require a communication strategy so that all stakeholder groups are continuously engaged and understand the purpose and functioning of this work.
There needs to be a unified system of IT for data sharing or aggregation at regional levels.
Surveys will require committed and long-term funding and resources to enable practices to continue this work.
There needs to be partnership and alignment with PHNs and other entities to help support and operationalise this work.

PHNs, primary health networks.

Practice-level interpretation and use of PREMs data

Usefulness of patient experience information for improving services at the practice level

Participants felt strongly that applying patient feedback on their experience of care can effect positive change for practices. Box 1 presents a summary of these views. There was a consensus that patient experience surveys are useful for reflective learning and improvement, especially at the individual practice level, as ‘most practices operate more as an island than part of a system’ in Australia. They felt that through reflective learning, patient experience surveys had the potential to provide a valuable opportunity for practices to identify gaps in service and areas for improvement that otherwise may go unnoticed. By doing so, surveys were viewed as critically valuable to the work of individual clinicians.

Box 1. A summary of expert views on the value of using patient experience surveys for quality improvement.
  • Highlights gaps in service and areas for improvement for the practice, even when things seem to be running smoothly.

  • Can be powerful tools for change, especially if results are consistent about a specific issue (eg, poor comments).

  • Reveals to clinicians what patients are experiencing and how this is related to their satisfaction with care.

  • Empowers patients to have direct input on service provision.

  • Gives clinicians a chance to reflect on their performance and how to help patients make the most informed decisions. Valuable to their clinical work.

  • Results of surveys can be used to create patient-centred metrics (not externally generated ones) that reflect patients’ perceptions of good quality care.

  • If the metrics can be easily entered into the patient’s electronic health record, this would be helpful.

Analysing and interpreting findings through reflective, team-based learning

Participants noted that many practices may not be sufficiently familiar with how to interpret and apply findings of patient experience surveys, given that they are not currently standard practice in Australia. Thus, they recommended a guided approach to supporting practice staff and providers in analysing the data and interpreting findings to inform clinical practice. Some suggested that this ‘sense-making role’ could be performed by an external, system-level organisation, such as PHNs, which could provide analytical support to practices as part of guiding continuous quality improvement and service planning. Others recommended that interpretation of data should be performed as an internally driven process within the practice. To do this, participants recommended the use of a reflective, team-based learning approach among practice staff.

It was strongly emphasised that all practice staff should be engaged in the process of using the findings for practice improvement. The most frequently recommended method for this reflective exercise was to have regular whole-of-practice meetings, during which staff would review findings together, unite around a shared purpose and agree on actions to be taken. Some participants felt that this team-based learning approach would be a ‘more palatable option’ for practice staff compared with an externally directed initiative, as it would provide a safe environment for staff to discuss and reflect on their performance in relation to peers and identify areas to improve (table 3, Q1).

Table 3.

Examples of responses highlighting themes

Theme Examples of responses
Interpreting and analysing findings through reflective, team-based learning Q1: ‘Patient surveys, if done well, can be powerfully helpful in directing practice improvement and very informative in helping individual clinicians improve their care when shared in a safe, reflective, learning environment’. (GP Academic, USA, 3M11)
Embedding surveys into continuous quality improvement in practices Q2: ‘One issue is timeliness – survey results are often quite old, yet we haven’t worked out good ways of getting real time feedback – easier in hospital where patients are more of a captive audience’. (GP Academic, UK, 4M17)
Use of PREMs for system-level performance reporting Q3: ‘This would be disastrous in my opinion – PRM (patient reported measures) are not a performance tool - the culture is not ready for it and will not be for a number of years’. (GP, Australia, 1M02)
Q4: ‘[Data that is used for] a lot of quality improvement type of cycles are very context specific and so what may be relevant for a particular context may not be at all relevant for another context(…)If you aggregate too much you may lose the nuances of a particular setting’. (GP Academic, New Zealand, 2F18)
Q5: ‘National [aggregation and reporting] is useless. The only point of doing surveys for quality improvement is if they can meaningfully be reported for the relevant operational unit (eg, practice) to be able to take action’. (GP Academic, UK, 4M17)
Q6: ‘Patient experience is an internationally recognised measure that can be used with other output and process measures to inform service improvement efforts and monitor national progress on certain issues’. (Academic, Australia, 1F03)
Q7: ‘Access to care is a recognised national indicator of quality that has traction at the national, meso (eg, PHN) and service levels’. (Academic, Australia, 1F03)
Q8: ‘I would hope performance reporting should use patient experience as one of a number of qualitative and quantitative measures of quality’. (Academic, Australia, 1F09)
Q9: ‘Particularly with patient experience you may get qualitative data that actually helps you inform [practice] in a way that you might not have otherwise been able to extract just with your quant data’. (GP Academic, New Zealand, 2F18)
Use of PREMs data for service planning and care commissioning Q10: ‘[Data] Should be aggregated at the level where planning of resources takes place. They can be used to support practices [to] respond to the needs of their patients within specific geographical contexts (regions)’. (Academic, Canada, 5F19)

PREMs, patient-reported experience measures.

Participants also discussed the benefit of collaborative learning between clinicians outside of individual practices, citing the potential for such initiatives to drive improvement ‘above and beyond that which could be achieved by internal reporting only’. One recommendation was to establish peer-learning groups (eg, communities of practice) among clinicians in the region or PHN, or among colleagues from other practices that serve similar patient populations.

Analysing and interpreting findings with patients

Participants emphasised the importance of partnering with patient stakeholders as an integral part of interpreting and applying patient survey findings in a patient-centred way. Several respondents discussed that this was not being done sufficiently in Australia. International examples were provided to highlight this potential, including the engagement of Patient Participation Groups in the UK. Participants discussed a general need to better establish the evidence on how patient groups and practices can work together to support practice improvement.

Embedding surveys into continuous quality improvement in practices

It was discussed that after reflecting on the survey findings, these results needed to be applied systematically to improve practice on a continuous basis, for instance, within a formal framework of improvement such as Plan–Do–Study–Act. There were varying opinions as to how frequently surveys should be implemented for continuous quality improvement, in order to address patient concerns in a timely way and to monitor improvements over time. One participant recommended that practices collect and audit patient experience information every ‘six months or one year’. Others suggested the possibility of establishing real-time or immediate feedback collection in practices, although this was not discussed in greater detail aside from comments on the difficulty of operationalising this activity in general practice (table 3, Q2).

System-level interpretation and use of PREMs data

Opinions varied widely on the value of using or reporting these data at levels beyond individual practices. Many agreed that analysing and using data at the system level could pose significant challenges relating to data aggregation and interpretation.

Use of PREMs for system-level performance reporting

Using information collected from individual practices for external performance reporting was a controversial topic, and there was some doubt as to whether it was an appropriate use of patient experience surveys. Participants were especially opposed to linking performance with punitive sanctions for individual practices and clinicians. They cautioned that in such cases, survey efforts would be met with strong resistance from practices (table 3, Q3). Furthermore, some participants felt that since data collected from individual practices likely have a more localised focus, it would be ‘generally not helpful and even harmful’ to directly use such context-specific findings for benchmarking and making cross-practice comparisons. Without proper contextualisation and analysis of PREMs data, many argued that system-level assessment of quality could be misleading and unfair. They thus advised against transferring and reporting this information outside of practices (table 3, Q4 and Q5).

Other participants saw the value of patient experience surveys for system-level reporting, for instance, in offering a broader view of healthcare performance and ensuring greater transparency and accountability in how services are being delivered (table 3, Q6). At the same time, they recognised the risks involved and stressed the importance of aggregating and reporting on these data under specific conditions.

First, it was suggested that surveys incorporate measures that have relevance beyond individual practices. One recommendation was to use measures of patient experience that are broadly relevant and applicable to multiple levels. The proposed benefit of using such measures was that the data can be easily extracted, aggregated and compared across practices without need for significant contextualisation; at the same time, they provide actionable information at the practice level. One example was the use of patient-reported access to care (eg, waiting times for appointments and frequency of visits with a preferred general practitioner), similar to those currently being measured through the national ABS patient experience survey21 (table 3, Q7).

A suggested means of contextualising PREMs data at the system level was to view and understand this information in triangulation with other data sources. Several participants pointed out that interpreting PREMs within the context of other measures of quality would build a more complete and multifaceted understanding of the quality of services delivered to patients (table 3, Q8). They also recommended using PREMs with various measures of outcome to observe possible interactions between patient experience (ie, process of care) and outcome (ie, impact of care), such as those relating to health and well-being, or service utilisation patterns (table 3, Q9). For instance, outcome measures were suggested to be useful for contextualising PREMs data, including offering an explanation as to why patients may be experiencing care in a certain way. A suggested example was to look at actual waiting times for appointments together with patients’ experience or satisfaction with waiting times.

Use of PREMs data for service planning and care commissioning

Some participants discussed the value of PREMs-based reporting as a roadmap to drive patient-centred service planning. By providing policy makers with a broad overview of how well practices and the healthcare system are performing in these areas, targeted changes could be made to enhance service provision and delivery to improve patient experience. For this purpose, participants recommended reporting PREMs specifically at levels where planning of resources and care commissioning take place (table 3, Q10). Suggested measures for service planning purposes included patient experience of continuity, coordination and comprehensiveness of care. These measures were viewed as being important factors to consider in driving improved models of care, particularly in the context of chronic and complex care that require a multidisciplinary and team-based approach. Findings from these measures were viewed to have potential to influence resource allocation and planning to close service gaps and to support targeted workforce training and development to guide improvement in these areas.

Discussion

We aimed to ascertain stakeholder views on how to implement patient experience surveys in Australian general practice and use PREMs data to inform quality improvement at multiple levels of primary care. Participants considered that in order to successfully develop and implement surveys in clinical practice, they should contain information that is relevant to patients, providers and practices. Surveys should also be administered in a user-friendly way that captures a representative sample of patients. Findings should be managed and disseminated appropriately. Finally, the need to ensure sufficient infrastructure, such as IT systems, as well as resources such as staff time and continued funding, was discussed. These key considerations were similar to those identified in administering patient surveys in other care settings in the literature.13 18 27

Perceived relevance of the survey was considered to be an integral factor in the implementation of surveys in general practice. This suggests that rigorous research is needed to inform the development of the survey to ensure it measures what matters to patients, providers and practices. Currently, the science behind the development of validated survey tools—including research to identify relevant indicators of patient experience—is limited in Australian primary care and requires further development.28 Furthermore, in order to ensure that all stakeholders find relevance and value in the survey activity, the purpose of the survey needs to be clearly defined and communicated to all involved, supporting previous research that has identified the clarity of survey objective as a factor in the success of patient surveys.17

Defining the purpose of surveys also has significance for implementing findings. The aim of the survey directly influences what measures need to be included, as well as the levels to which the resulting data can be meaningfully aggregated and reported. Furthermore, it allows the governing entity to develop a clear sense of planning, including rollout, analysis and use of the resulting information. Participants also highlighted the importance of strong leadership and an accountability framework to plan and regularly monitor these activities.

Studies have shown that providing survey feedback alone is insufficient for practice staff to identify and action changes, since they are often unsure what to do with PREMs data6 17 29—a sentiment shared by our own participants. Once PREMs data have been collected, the critical next step is to take time and effort to make sense of this information before planning for change.30 However, literature suggests that healthcare staff are typically not given sufficient guidance on how to meaningfully interpret survey information and have difficulty finding the time to provide thoughtful feedback.6 13 Participants in this study also felt that the assessment and interpretation phase is a challenge for staff in general practice.

Some suggestions were given to address this challenge. Respondents recommended holding regular practice meetings in which staff reflect together on what the data mean and what findings are relevant for practice change. Research has found evidence to support the effectiveness and durability of reflective, team-based learning activities to engage clinicians with the results, facilitate ownership and offer a valuable opportunity to challenge any scepticism about the findings.29 31 Several participants felt that patient experience surveys are in fact best suited for this purpose of practice improvement through reflective learning, as a way to ensure that staff are supported to critically and openly discuss potentially sensitive feedback information.

Participants also discussed the possibility of collaborative learning between staff of different practices, including through communities of practice, a peer-learning method based on sustained interactions and knowledge enhancement between practitioners around a shared domain of interest. A recent feasibility study in Australia found that communities of practice are an acceptable and potentially sustainable method of peer learning among regional GP colleagues.32 Similar comparative peer-learning methods between clinicians have been trialled internationally and found to support improvements in patient-centred care and enhance professional confidence.33–35

Finally, the need to engage patients in the interpretation and planning phases was discussed, and Patient Participation Groups (PPGs) in the UK were cited as an example. PPGs have been implemented in a large number of general practices in the UK; however, due to a lack of nationally agreed roles, their tasks and engagement level have been thought to vary greatly by practice.36 This points to a need for greater research efforts to understand collaborative learning between patients and practices, and how patient input in survey processes can be established in a more concrete way.

While there was near-universal agreement on the usefulness of PREMs data to inform service improvement at the practice level, the use of survey findings at the system level was controversial. For some participants, there was concern about any use of practice-level PREMs data at this level, especially if attached to punitive sanctions on providers or practices. Aggregating or reporting this information beyond practices was thought to be potentially unfair and stigmatising, especially if contextual differences in patient experience of care were ignored. A similar perspective was identified as a challenge to the perceived credibility of patient experience survey results among general practice staff in the UK, in a study that explored staff attitudes towards patient feedback attached to the Quality and Outcomes Framework pay-for-performance scheme.16 However, if done under certain conditions, reporting this information at various levels was thought by our participants to be useful for documenting system performance, monitoring and targeting areas for improvement, and service planning. This is consistent with current views that public reporting promotes accountability by ensuring transparency to patients and acts on key levers of change in healthcare, especially when it aligns with other levers.37 38 For system-level reporting, participants suggested the use of measures that are broadly relevant to multiple levels for aggregation. However, no detailed explanations were provided as to how the appropriate analysis or interpretation of practice-level PREMs data could be achieved at the system level. The literature also appears to be limited in this area, which requires further attention.

Finally, among the suggested use of PREMs for quality improvement at the system level, participants discussed using PREMs in triangulation with other measures of quality and outcome to gain a comprehensive picture of system performance. One idea could be to use PREMs together with patient-reported outcome measures (PROMs), which can elicit patients’ perspectives on the process and impact of care they receive.39 An example of a validated PROM for this purpose is the patient activation measure, which captures patients’ health literacy and capacity for self-management.40 This measure has been used in large-scale studies to assess and monitor activation levels in patient populations,41 and combined with PREMs, could be used to monitor the effectiveness of health services in improving patient experience and capacity at a broad level or in certain communities. Similarly, the Patient Enablement Instrument,42 a validated instrument that assesses patient-reported changes in self-management skills and understanding of the health problem, has been adapted and used broadly in primary care research.43–45 Used together, PREMs and PROMs could be harnessed to enhance system responsiveness to the needs of various patient populations based on their direct input. This includes, for instance, underserved groups in Australia, such as Aboriginal and Torres Strait Islander peoples, especially when findings from patient reported measures are embedded into quality improvement processes and systems.46

Some factors may limit the transferability of our study findings to Australian general practice. These include a relatively small sample size of participants and the differences between health systems and settings represented through the inclusion of international participants. However, the collective expertise and knowledge of the participants comprise a major strength of this study, as they are leading experts in the areas of patient experience and general practice improvement. Furthermore, the international participants invited to this study also had some familiarity with Australian primary care, either through their own work or previous or current collaboration with the research team; thus, they were able to provide information that was also relevant and applicable to the Australian context. Another potential limitation is the lack of inclusion of patient perspectives, which was partly the result of unsuccessful recruitment of consumer representatives. The recruitment strategy for this study was also targeted to individuals with a specific set and depth of knowledge, namely, around high-level policies and organisational factors relating to the use of PREMs data for general practice improvement. However, as key stakeholders in primary care and the intended respondents of patient experience surveys, patients can provide important insights into the barriers and enablers of survey implementation in clinical practice. Thus, it would be prudent to include them in future research to explore how surveys can be more effectively administered and used to improve their experience.

Overall, the systematic collection and use of patient experience was viewed as having strong potential for change in clinical practice, and subsequently in the transformation of care delivery to patients. In Australia, there is very little evidence guiding the interpretation and use of PREMs data once they have been collected. This study is the first of its kind to document recommendations on how to collect, interpret and use PREMs for service improvement in Australian general practice. It has highlighted the need for greater research and policy efforts to strengthen the assessment and interpretation of PREMs data at all levels and to understand how this information can be aggregated and reported to inform meaningful changes to the primary care system.

Acknowledgments

The authors would like to acknowledge the participants of this study for volunteering their time to respond to the questionnaire.

Footnotes

Contributors: HJS led the planning, participant recruitment, development of the data collection tool, data collection and analysis and reporting of the work described in the article. SD, JFL and MH directly contributed to the planning of the study, active recruitment of participants and the synthesis and reporting of the findings. MH also supported the development of the data collection tool and data analysis. All authors have read and approved the manuscript.

Funding: The research reported has been conducted as part of the author’s (HJS) doctoral study at the University of New South Wales. Funding was provided as part of her scholarship.

Competing interests: Dr Levesque is a member of the Strategic Advisory Board of the Integrated Healthcare Journal. The authors declare no other conflicts of interest.

Patient and public involvement statement: This research focused on the views of key stakeholders in primary care practice and research that did not include patient participants. Thus, patients were not invited to comment on the study design and were not consulted to develop patient relevant outcomes or interpret the results. Furthermore, patients were not invited to contribute to the writing or editing of this document for readability or accuracy.

Patient consent for publication: Not required.

Ethics approval: This study was approved by the Human Research Ethics Committee at the University of New South Wales (HC16529).

Provenance and peer review: Not commissioned; externally peer reviewed.

Data availability statement: Data are available on reasonable request. Deidentified data that support the findings of this study may be available on reasonable request from the corresponding author (hyun.song@unsw.edu.au). The data are not publicly available due to privacy or ethical restrictions.

References

1. Murray CJ, Frenk J. A framework for assessing the performance of health systems. Bull World Health Organ 2000;78:717–31.
2. Picker Institute. Using patient feedback: a practical guide to improving patient experience. Oxford, 2009.
3. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open 2013;3:e001570. doi:10.1136/bmjopen-2012-001570
4. Carter M, Davey A, Wright C, et al. Capturing patient experience: a qualitative study of implementing real-time feedback in primary care. Br J Gen Pract 2016;66:e786. doi:10.3399/bjgp16X687085
5. Coulter A, Locock L, Ziebland S, et al. Collecting data on patient experience is not enough: they must be used to improve care. BMJ 2014;348:g2225. doi:10.1136/bmj.g2225
6. Gleeson H, Calderon A, Swami V, et al. Systematic review of approaches to using patient experience data for quality improvement in healthcare settings. BMJ Open 2016;6:e011907. doi:10.1136/bmjopen-2016-011907
7. NSW Agency for Clinical Innovation. Patient experience and consumer engagement: a framework for action. Chatswood, NSW, 2015. Available: https://www.aci.health.nsw.gov.au/__data/assets/pdf_file/0005/256703/peace-framework.pdf
8. Wong ST, Haggerty J. Measuring patient experiences in primary health care: a review and classification of items and scales used in publicly-available questionnaires. Vancouver, Canada: Centre for Health Services and Policy Research (CHSPR), 2013.
9. Coulter A. Patient feedback for quality improvement in general practice. BMJ 2016;352:i913. doi:10.1136/bmj.i913
10. Gardner K, Parkinson A, Banfield M, et al. Usability of patient experience surveys in Australian primary health care: a scoping review. Aust J Prim Health 2016;22:93–9. doi:10.1071/PY14179
11. Rechel B, McKee M, Haas M, et al. Public reporting on quality, waiting times and patient experience in 11 high-income countries. Health Policy 2016;120:377–83. doi:10.1016/j.healthpol.2016.02.008
12. Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Med Care 2003;41:I30–8. doi:10.1097/00005650-200301001-00004
13. Davies EA, Meterko MM, Charns MP, et al. Factors affecting the use of patient survey data for quality improvement in the Veterans Health Administration. BMC Health Serv Res 2011;11:334. doi:10.1186/1472-6963-11-334
14. Reeves R, Seccombe I. Do patient surveys work? The influence of a national survey programme on local quality-improvement initiatives. Qual Saf Health Care 2008;17:437. doi:10.1136/qshc.2007.022749
15. Barry HE, Campbell JL, Asprey A, et al. The use of patient experience survey data by out-of-hours primary care services: a qualitative interview study. BMJ Qual Saf 2016;25:851–9. doi:10.1136/bmjqs-2015-003963
16. Asprey A, Campbell JL, Newbould J, et al. Challenges to the credibility of patient feedback in primary healthcare settings: a qualitative study. Br J Gen Pract 2013;63:e200–8. doi:10.3399/bjgp13X664252
17. Boiko O, Campbell JL, Elmore N, et al. The role of patient experience surveys in quality assurance and improvement: a focus group study in English general practice. Health Expect 2015;18:1982–94.
18. Scott J, Heavey E, Waring J, et al. Implementing a survey for patients to provide safety experience feedback following a care transition: a feasibility study. BMC Health Serv Res 2019;19:613. doi:10.1186/s12913-019-4447-9
19. Royal Australian College of General Practitioners (RACGP). Patient feedback requirements, 2019. Available: https://www.racgp.org.au/running-a-practice/practice-standards/patient-feedback-requirements
20. Kalucy L, Katterl R, Jackson-Bowers E. Patient experience of health care performance. Adelaide: Primary Health Care Research & Information Service (PHCRIS), 2009.
21. Australian Bureau of Statistics. 4839.0 - Patient Experiences in Australia: Summary of Findings, 2018-19. Canberra, ACT, 2019. Available: https://www.abs.gov.au/ausstats/abs@.nsf/mf/4839.0 [Accessed 12 Nov 2019].
22. Lévesque J-F, Haggerty JL, Burge F, et al. Canadian experts' views on the importance of attributes within professional and community-oriented primary healthcare models. Healthc Policy 2011;7:21–30. doi:10.12927/hcpol.2011.22690
23. Haggerty J, Burge F, Levesque JF, et al. Operational definitions of attributes of primary health care: consensus among Canadian experts. Ann Fam Med 2007;5:336–44.
24. Suter E, Mallinson S, Misfeldt R, et al. Advancing team-based primary health care: a comparative analysis of policies in Western Canada. BMC Health Serv Res 2017;17:493. doi:10.1186/s12913-017-2439-1
25. O'Brien BC, Harris IB, Beckman TJ, et al. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med 2014;89:1245–51.
26. Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today 2004;24:105–12. doi:10.1016/j.nedt.2003.10.001
27. Davies E, Cleary PD. Hearing the patient's voice? Factors affecting the use of patient survey data in quality improvement. Qual Saf Health Care 2005;14:428–32. doi:10.1136/qshc.2004.012955
28. Song HJ, Dennis S, Levesque J-F, et al. What matters to people with chronic conditions when accessing care in Australian general practice? A qualitative study of patient, carer, and provider perspectives. BMC Fam Pract 2019;20:79. doi:10.1186/s12875-019-0973-0
29. Reeves R, West E, Barron D. Facilitated patient experience feedback can improve nursing care: a pilot study for a phase III cluster randomised controlled trial. BMC Health Serv Res 2013;13:259.
30. Kumah E, Osei-Kesse F, Anaba C. Understanding and using patient experience feedback to improve health care quality: systematic review and framework development. J Patient Cent Res Rev 2017;4:24–31.
31. Brookes O, Brown C, Tarrant C, et al. Patient experience and reflective learning (PEARL): a mixed methods protocol for staff insight development in acute and intensive care medicine in the UK. BMJ Open 2019;9:e030679.
32. Song HJ, Harris M, Li F-Y, et al. Connecting general practitioners through a peer-facilitated community of practice for chronic disease care. Ann Fam Med 2020;18:179. doi:10.1370/afm.2490
33. Davies E, Shaller D, Edgman-Levitan S, et al. Evaluating the use of a modified CAHPS survey to support improvements in patient-centred care: lessons from a quality improvement collaborative. Health Expect 2008;11:160–76. doi:10.1111/j.1369-7625.2007.00483.x
34. Watt G, Deep End Steering Group. GPs at the deep end. Br J Gen Pract 2011;61:66–7. doi:10.3399/bjgp11X549090
35. Rohrbasser A, Harris J, Mickan S, et al. Quality circles for quality improvement in primary health care: their origins, spread, effectiveness and lacunae – a scoping review. PLoS One 2018;13:e0202616.
36. Gillam S, Newbould J. Patient participation groups in general practice: what are they for, where are they going? BMJ 2016;352:i673. doi:10.1136/bmj.i673
37. Levesque J-F, Sutherland K. What role does performance information play in securing improvement in healthcare? A conceptual framework for levers of change. BMJ Open 2017;7:e014825. doi:10.1136/bmjopen-2016-014825
38. Baldie DJ, Guthrie B, Entwistle V, et al. Exploring the impact and use of patients' feedback about their care experiences in general practice settings: a realist synthesis. Fam Pract 2018;35:13–21. doi:10.1093/fampra/cmx067
39. Kingsley C, Patel S. Patient-reported outcome measures and patient-reported experience measures. BJA Educ 2017;17:137–44. doi:10.1093/bjaed/mkw060
40. Hibbard JH, Stockard J, Mahoney ER, et al. Development of the patient activation measure (PAM): conceptualizing and measuring activation in patients and consumers. Health Serv Res 2004;39:1005–26. doi:10.1111/j.1475-6773.2004.00269.x
41. Blakemore A, Hann M, Howells K, et al. Patient activation in older people with long-term conditions and multimorbidity: correlates and change in a cohort study in the United Kingdom. BMC Health Serv Res 2016;16:582. doi:10.1186/s12913-016-1843-2
42. Howie JG, Heaney DJ, Maxwell M, et al. A comparison of a patient enablement instrument (PEI) against two established satisfaction scales as an outcome measure of primary care consultations. Fam Pract 1998;15:165–71. doi:10.1093/fampra/15.2.165
43. Rööst M, Zielinski A, Petersson C, et al. Reliability and applicability of the patient enablement instrument (PEI) in a Swedish general practice setting. BMC Fam Pract 2015;16:31. doi:10.1186/s12875-015-0242-9
44. Hudon C, Fortin M, Rossignol F, et al. The Patient Enablement Instrument (French version) in a family practice setting: a reliability study. BMC Fam Pract 2011;12:71.
45. Mead N, Bower P, Roland M. Factors associated with enablement in general practice: cross-sectional study using routinely-collected data. Br J Gen Pract 2008;58:346–52. doi:10.3399/bjgp08X280218
46. Bailie J, Laycock A, Matthews V, et al. System-level action required for wide-scale improvement in quality of primary health care: synthesis of feedback from an interactive process to promote dissemination and use of aggregated quality of care data. Front Public Health 2016;4:86. doi:10.3389/fpubh.2016.00086
