Abstract
Objective
Patient-reported outcomes (PROs) capture patients' own assessments of their quality of life, daily functioning and symptom severity after experiencing an illness and having contact with the health system. Feeding back summarised PROs data, aggregated at the health-service level, to healthcare professionals may inform clinical practice and quality improvement efforts. However, little is known about the best methods for presenting these summarised data in a way that is meaningful for this audience. Therefore, the aim of this scoping review was to summarise emerging approaches to 'service-level' feedback of PROs data to healthcare professionals.
Setting
Healthcare professionals receiving PROs data feedback at the health-service level.
Data sources
Databases selected for the search were Embase, Ovid Medline, Scopus, Web of Science and targeted web searching. The main search terms included: ‘patient-reported outcome measures’, ‘patient-reported outcomes’, ‘patient-centred care’, ‘value-based care’, ‘quality improvement’ and ‘feedback’. Studies included were those that were published in English between January 2009 and June 2019.
Primary and secondary outcome measures
Data were extracted on the feedback methods of PROs to patients or healthcare providers. A standardised template was used to extract information from included documents and academic publications. Risk of bias was assessed using Joanna Briggs Institute Levels of Evidence for Effectiveness.
Results
Overall, 3480 articles were identified after de-duplication. Of these, 19 academic publications and 22 documents from the grey literature were included in the final review. Guiding principles for data display methods and graphical formats were identified. Seven major factors that may influence PRO data interpretation and use by healthcare professionals were also identified.
Conclusion
While a single best format or approach to feeding back PROs data to healthcare professionals was not identified, numerous guiding principles emerged to inform the field.
Keywords: Clinical audit, public health, audit, health services administration & management
Strengths and limitations of this study
This scoping review provides a novel summary of the published and grey literature on guiding principles for effectively feeding back patient-reported outcome data to healthcare providers.
The search strategy was broad, including individual patient-level, health service-level and system-level reporting of patient-reported outcome data to ensure no relevant articles were missed.
The grey literature search was restricted to seven countries due to the limited timeframe for completing the study.
Two reviewers conducted the literature syntheses, with one completing the academic literature synthesis and the other completing the grey literature synthesis.
Using a standardised data extraction process for both types of literature, the findings from this review inform the rapidly growing fields of improvement science and implementation research related to health service-level reporting of aggregate patient-reported outcome data to healthcare professionals.
Introduction
There is growing interest in the use of patient-reported outcomes (PROs) for all aspects of healthcare. This is because information available from administrative and routinely collected clinical data does not provide a comprehensive picture related to health outcomes once patients leave hospital.1 PROs are outcome data collected directly from patients about their health and the potential impacts of treatments or management within the health system.2 PROs are differentiated from patient-reported outcome measures (PROMs), which are the instruments or survey tools used to obtain PROs.3 Reporting of PROs data can occur at the individual patient level and be used to inform decisions about patient-centred care, or at the aggregated service and system levels, and may be used to assess and compare organisational performance or for population surveillance.4 5
PROs were originally developed for use in research, such as comparative effectiveness studies and clinical trials.6 7 However, the value of using PROs to inform clinical practice has since been realised.8 9 PROs have evolved in a somewhat disparate manner between different countries, with each country aligning the use of PRO collections with a slightly different emphasis.5 For example, in England the focus of PRO collections is on hospital performance in selected elective surgeries; whereas in the Netherlands and Sweden, collection of PROs predominately occurs through disease-specific Clinical Quality Registries (CQRs).5
Healthcare professionals have reported challenges in interpreting the meaning and implications of PROs data.6 10 These challenges can arise from variation in how PRO data are used, scored and reported.6 Methods for optimising the feedback of PRO data to healthcare professionals are an emerging field of research.2 11 12 To the best of our knowledge, little empirical evidence is currently available to support best practice in feedback methods for summarised PROs data that are meaningful for healthcare providers, particularly at the health service level.
The aim of this review was to investigate the emerging approaches to the feedback and reporting of PROs data to healthcare professionals, in order to understand how to increase engagement and uptake of these data. Three questions were used to explore this aim: (1) What is the existing evidence on best practice in the readability and feedback of PROs data to healthcare professionals? (2) What PROs data presentation formats have the most utility for healthcare professionals? (3) Are there factors that influence PROs data interpretation or use in clinical practice?
Methods
The rapid scoping review was undertaken by a research team with clinical expertise (nursing, allied health, psychology) from the Australian Stroke Clinical Registry (AuSCR), with over 10 years' experience collecting and reporting generic and disease-specific PROs in consultation with end-users who work in hospitals or government.13 Consultation was undertaken with government representatives from the Victorian Agency for Health Information (VAHI), including author PK, who are collecting PROs data on an ongoing basis from health services, including hospitals. Weekly team meetings were held to ensure a standardised screening and data extraction process, whereby papers under consideration were discussed based on the information gathered by author SLH (Honours, Psychology) or OFR (Honours, Health Information Management) using the relevant data extraction tool.
The methods used for the review (including inclusion criteria, search strategy, extraction and synthesis) were specified in advance in an unpublished protocol based on the Joanna Briggs Institute Guidelines for conducting a scoping review.14 Two search strategies were used: the first covered the academic, peer-reviewed literature and the second covered grey literature (such as government reports and policy documents). Rapid review methods recommended by the Cochrane Collaboration15 were drawn on for this scoping review. The Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) was used to report results.16
Academic literature search
For the academic literature, four databases were searched: Embase, Ovid Medline, Scopus and Web of Science. These databases were chosen to maximise the scope of articles retrieved. The search included phrases related to the following terms and concepts: patient-reported outcome measures, patient-reported outcomes, patient-centred care, value-based care, quality improvement, feedback, audit and dashboard. A full list of search terms and the combinations used is available in online supplemental table 1.
Studies included were those published in English between January 2009 and June 2019 in which the feedback methods of PROs to patients or healthcare providers were described. Studies prior to 2009 were excluded to provide a contemporary, timely and comprehensive summary. Abstract booklets, conference abstracts and newsletters were excluded. Publications for studies that were pilot/development/protocol projects, focused on testing a PROs measurement tool, or in which PROs were used as the endpoint outcome for an observational or comparative-effectiveness study were excluded. Further, studies related to primary care, emergency care or non-acute conditions (eg, surgical interventions or interventional devices) were also excluded. The initial search was broad and included studies related to individual patient-level feedback of PROs data to ensure no relevant articles were missed; however, the synthesis of the literature focused primarily on health service-level reporting of aggregate PROs data to healthcare professionals.
All references identified from these searches were downloaded and imported into Covidence software.17 Following removal of duplicates, one reviewer (SLH, Honours, Psychology) read the titles and abstracts of each article to determine relevance using the inclusion and exclusion criteria outlined above. The full text of the relevant articles was then assessed by one reviewer (SLH), with a second reviewer (CW, Masters, Health Information Management) conducting an independent assessment on a subset of the articles to ensure standardisation. Any disagreements about study eligibility were resolved through discussion and consensus between the two reviewers; if a disagreement could not be resolved this way, the article was to be reviewed by a third reviewer to determine eligibility. This latter process was not required. SK, a past Cochrane reviewer, provided training for the team in conducting a review. Additional support was provided by SK and DAC, who have extensive experience conducting literature reviews.18–20
Academic literature data extraction and charting
Data from the included academic literature were systematically extracted using a predetermined data extraction template by one reviewer (SLH). The extraction template was developed by the review team in consultation with VAHI representatives. The template was then piloted and adapted as necessary. The final extraction template included: characteristics of study participants (including age, profession, area of practice and number of participants), type of article, which PROs were used, the purpose of the feedback and the findings of the study. Findings were extracted from all included academic literature by selecting those text passages and outcomes that related to each research question. The academic data extraction tool is available in online supplemental appendix 1.
Level of evidence and critical appraisal of the academic literature
The methodological design of all included articles was assessed by SLH according to the Joanna Briggs Institute Levels of Evidence for Effectiveness21 in order to assess the quality and rigour of the evidence. Studies were assigned level 1 (experimental), level 2 (quasi-experimental), level 3 (analytical), level 4 (descriptive) or level 5 (expert opinion). Further, the included research articles were appraised for strength of evidence by one reviewer (SLH) using the critical appraisal tools from the Joanna Briggs Institute.22 Each article was assigned a quality rating based on how many of the criteria it fulfilled (eg, 'Were the criteria for inclusion in the sample clearly defined?'): very high (all criteria met), high (80% or more of criteria met), moderate (60% or more), low (40% or more) and very low (less than 40%). While critical appraisal assessments are not mandatory for conducting a scoping review,14 given the breadth of studies and designs we were anticipating, we felt that an assessment of article quality was relevant to considering the evidence we were extracting.
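The banded rating rule above can be expressed as a short decision function. The sketch below (in Python; the function name is our own, purely to illustrate the thresholds described) maps the number of appraisal criteria met to the quality rating used in this review.

```python
def jbi_quality_rating(criteria_met: int, criteria_total: int) -> str:
    """Map the number of JBI critical-appraisal criteria met to the
    quality rating bands described in the text (illustrative only)."""
    if criteria_met == criteria_total:
        return "very high"          # all criteria met
    proportion = criteria_met / criteria_total
    if proportion >= 0.8:
        return "high"               # 80% or more of criteria met
    if proportion >= 0.6:
        return "moderate"           # 60% or more
    if proportion >= 0.4:
        return "low"                # 40% or more
    return "very low"               # less than 40%
```

For example, under this scheme an article meeting 7 of 10 criteria would be rated moderate, while one meeting all 10 would be rated very high.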
Grey literature search
We elected to use similar methods to those of a previous PROs literature search conducted by Williams et al.5 The grey literature component of our search included Google searches, targeted website searches and snowballing of reference lists, where appropriate. The first 10 pages of results retrieved from each Google search were reviewed.5 The following search terms were used:
‘Use of Patient-Reported Health Outcome Measures in (country)’
‘Feedback of Patient-Reported Health Outcome Measures in (country)’
‘patient-reported outcome measure + feedback + use in (country)’
‘Benchmarking of Patient-Reported Health Outcome Measures in (country)’
Due to the limited timeframe for completing the study, the grey literature search was restricted to seven countries. The countries included in the Google searches were Sweden, the Netherlands, Finland, Canada, the UK, the USA and Australia. The selection of these countries was based on the prior research of Williams et al 5 which found substantial examples of the use of PROs within these regions. The websites of relevant leading organisations (ie, health agencies, government organisations, professional organisations, special interest groups, research institutes and universities) were also searched. For example, the websites of organisations such as: the Institute for Healthcare Improvement, the International Consortium for Health Outcomes Measurement (ie, the USA), the Dutch Institute for Clinical Auditing (ie, The Netherlands) and the Organisation for Economic Co-operation and Development were searched. Further, the websites and annual reports of national CQRs that were known to collect and report PRO data were also searched.
Similar to the inclusion criteria applied to the academic literature, materials published in English between 2009 and 2019 were included. Internet page entries without PROs data, or focused on single-centre studies or on testing PRO instruments, were excluded. We also excluded literature related to primary care, emergency care or surgical interventions/devices; literature that did not relate to the target country; and duplicate entries.
Grey literature data extraction and charting
A second data extraction template was used for the grey literature. Data from the included grey literature were systematically extracted using a predetermined data extraction template by two independent reviewers (OFR, Honours, Health Information Management; and VM, Nursing). Similar to the methods used for the academic literature template, the grey literature data extraction template was developed by the review team in consultation with VAHI representatives. The final template included information on: the type of document, title, name of the organisation that produced the document, background PROM information, PROs data display features, PROs data feedback mechanism(s), the identified barriers and enablers to PROs uptake among clinicians, and PROs data issues (eg, statistical/analytical methods). The grey literature data extraction tool is available in online supplemental appendix 2.
Collating and synthesising results
The data within the extraction forms used for the academic and grey literature templates were sorted according to the research question they contributed to answering. The findings were then grouped into themes (eg, missing data, healthcare professional education and training). Once established, each theme was presented and discussed between SLH, OFR and VM. The preferences for PROs data formats among healthcare professionals were summarised from all articles that described PRO data format preferences. An inductive approach was used to analyse the qualitative findings addressing the research question related to factors that influence PROs data interpretation or use, whereby themes were developed by studying the findings and considering how each finding fit within the developing themes.
Patient and public involvement
No patients were involved in setting the review questions or in the design of the initial protocol and overall study. No patients were asked to advise on the interpretation and write-up of the results. This study forms the first component of a broader programme of work initiated by VAHI, and stakeholder engagement methods were used in the subsequent stages of the project.
Results
The initial search resulted in the identification of 4445 academic articles. Following the removal of duplicates, 3480 unique articles remained, 19 of which were included in the final review. Figure 1 summarises the academic literature search using a PRISMA flowchart.23 The publication characteristics, level of evidence and quality appraisal of the included academic literature are available in table 1. Research methods included 2 reviews,24 25 3 case studies,12 26 27 2 consensus panels,11 28 1 opinion article29 and 11 observational studies.2 6 30–38 According to the Joanna Briggs Institute Levels of Evidence for Effectiveness,21 the 19 included studies were classified according to the following levels: 1 (n=0), 2 (n=1), 3 (n=3), 4 (n=12), 5 (n=3). The studies were primarily conducted in the USA, Australia, Canada and European countries. From the grey literature search, 103 materials were determined to be topically relevant and were scanned for further information. Of these, a total of 22 were included in the final review, including 16 reports,1 3–5 39–49(Aspex Consulting, Evaluation Framework and Initial Appraisal of PROMS: Final Report, personal communications, 2018), 2 book chapters,50 51 1 dissertation,52 1 forum proceeding document,53 1 user’s guide54 and 1 research report.55 The summary of the included grey literature is available in table 2.
Figure 1.
The figure shows the study identification and selection process applied to the academic literature. The original database search identified 4445 records, with an additional four records identified from other sources. After duplicates were removed, 3480 unique records remained. Title and abstract screening excluded 3191 records as unrelated to the topic. The remaining 289 records underwent full-text screening, where 270 records were excluded for the following reasons: 31 were not about patient-reported outcomes, 159 did not feed back the patient-reported outcomes, 21 were the wrong article type, 11 were the wrong article setting and 2 records were not in English. Nineteen unique records were included in the final synthesis.
Table 1.
Characteristics of the included academic literature
| Author, year, country | Study method | Clinical area | Number of participants or included studies | Study aim/design | Relevant findings | Review question related to | JBI level of evidence | Strength of evidence |
| Aiyegbusi et al, 2019, UK30 | Semistructured interviews and focus groups | Chronic kidney disease | 12 patients with chronic kidney disease and 22 healthcare professionals (nurses, psychologist, nephrologist, registrars and surgeons) | Thematic analysis of participants' views on the use of a PROM system | Healthcare professionals suggested graphical representations of PROM feedback (rather than numeric), and to include 'traffic light' colour-coding for quick and easy review. Healthcare professionals believed that 'alert fatigue' from PROM feedback could be a barrier to use of PROM data, with the numerous alerts being provided to healthcare professionals encouraging them to ignore the PROM results. | Q1 Q3 | Level 4 | Moderate |
| Allwood et al, 2013, UK31 | Structured focus groups | All healthcare areas | 107 healthcare professionals (including consultants, junior doctors, nurses and allied health professionals) | Thematic analysis of participants’ comprehension and format preference for PROM data | Healthcare professionals were generally positive about the use of bar charts and caterpillar plots for the display of PROM results. Opinions were mixed for the use of tables, funnel plots and spider plots. Healthcare professionals found that tables with icons were insufficient. | Q1 Q2 | Level 4 | High |
| Arcia et al, 2018, USA12 | Case study | Unspecified | 2 case studies of PRO feedback projects | Explore methods affecting the design decisions of PRO feedback projects | Summarises considerations that must be understood for the visualisation of PROs data, including the range and direction of scoring. | Q1 | Level 4 | Very low |
| Bantun et al, 201624 | Integrated literature review, dates: 1999–2014 | Oncology | 9 included studies | Exploring the interpretation of graphical presentations of PRO data in clinical practice | HRQOL PROs can be accurately interpreted by healthcare professionals and patients; line graphs and bar charts were the most preferred format for PROs; patients prefer simple graphs, while healthcare professionals prefer simple graphs with CIs. | Q1 Q2 | Level 4 | High |
| Boyce et al, 201425 | Systematic review, dates: up to 2012 | All healthcare areas | 16 included studies | Summarise qualitative studies that explore the experience of healthcare professionals using PROMs | Healthcare professionals value PROMs if they can be used to aid decision making. They appreciate graphical presentations that clearly depict clinically important changes. However, they can question whether the PROM data produced are an accurate reflection of care. Attitude towards the use of PROMs may be improved by engaging the healthcare professionals in the planning stage of PROMs introduction. | Q1 Q3 | Level 4 | High |
| Brehaut et al, 2016, Canada29 | Opinion | All healthcare areas | 68 included studies | Identify suggestions for designing and delivering effective feedback interventions | Barriers: the use of unnecessary three-dimensional graphical elements which can clutter the display and bias the interpretation of the underlying information. Enablers: closely linking visual displays with summary messages, minimisation of extraneous cognitive load for target audiences, the provision of short, actionable messages combined with optional detail and addressing the credibility of the data source used to produce the feedback. | Q1 Q3 | Level 5 | Low |
| Brundage et al, 2015, USA6 | Survey followed by a semistructured interview | Cancer | 50 patients with cancer and 20 oncology healthcare professionals (doctors and nurses) | Explore interpretation accuracy, ratings of ease-of-understanding and usefulness of graphical formats. The interview explored helpful and confusing format attributes. | Both patients and healthcare professionals prefer line graphs across group-level data and individual-level data formats (compared with bar charts and cumulative distributions), but healthcare professionals prefer greater detail (ie, statistical details) for group-level data. | Q1 Q2 | Level 3 | Moderate |
| Brundage et al, 2018, USA2 | Survey followed by an interview with healthcare professionals | Cancer | 233 healthcare professionals and 248 PRO researchers | Explore interpretation, accuracy and clarity ratings of graphical formats and difference score representations | Participants were accurate in their interpretation of PRO line graphs when the directionality of the score was indicated with a label 'better'. Participants were more accurate in their interpretation of pie charts compared with bar graphs, for the display of proportions. | Q1 Q2 | Level 3 | Very high |
| Edbrooke-Childs et al, 2016, UK32 | Pre–post observational study | Child mental health | 48 healthcare professionals attended the 1-day training course, 17 healthcare professionals attended the 3-day training course | Evaluate the effect of the training courses on attitudes and self-efficacy towards PROMs and feedback | Increased time and duration of PROMs training showed greater improvement in attitudes towards PROMs, feedback attitudes and PROM self-efficacy. | Q1 Q3 | Level 3 | Moderate |
| Forsberg et al, 2015, USA and Sweden26 | Case study | Pain and spine conditions, rheumatology, and private healthcare | 3 case studies of PRO feedback used in routine practice | Describe the principles and lessons learnt from using PROs in the 3 case studies | Healthcare professionals need to be able to place the PRO results within the context of the patient's current clinical state, prognosis and attitudes (eg, a patient's health status may be declining despite receiving best care). Healthcare professionals need to know what to do with the results, such as when the results suggest a significant health problem. | Q3 | Level 4 | High |
| Hartzler et al, 2016, USA33 | Preliminary focus groups and interviews, followed by a pre–post study | Prostate cancer | The focus group included 60 prostate cancer survivors. 50 patients and 50 providers completed the interviews. 12 patients completed the pre–post observation | The focus groups assessed the needs of patients in relation to PROM feedback. The interviews evaluated preferred feedback methods. The pre–post study evaluated self-efficacy, satisfaction, communication and compliance with the PRO dashboard. | Patients prioritised needs for dashboards to compare longitudinal trends and provide comparative groups. Patients and providers preferred bar charts and line graphs compared with tables and pictographs. | Q1 Q2 | Level 2 | Low |
| Hildon et al, 2012, UK34 | Focus groups | Knee surgery | 45 patients who were planning or had undergone knee surgery | Thematic analysis of patients' preferred PROM format | Patients were generally positive about the use of bar charts and caterpillar plots. Opinions were mixed for tables and tables with icons. Patients did not like funnel plots. Patients liked the use of a 'traffic-light' colour scheme and did not like the use of CIs. | Q1 Q2 | Level 4 | High |
| Jensen et al, 2016, USA28 | Workshop proceedings | All healthcare areas | 519 participants (including patients, healthcare professionals, researchers, healthcare system leaders and policymakers) attended the workshop, either in person or online | Summary of workshop outcomes | Healthcare professionals should be provided with guidance in interpreting PRO scores, as they may not know the meaning of raw scores alone. Translate PROs into specific actions for healthcare professionals by establishing clear recommendations on how to respond to PRO scores in clinical settings. | Q3 | Level 5 | High |
| Kuijpers et al, 2016, UK, the Netherlands, Austria and Poland35 | Questionnaire | Cancer | 548 patients with cancer and 227 healthcare professionals (doctors and nurses) | Understanding of PROM scores and preferences for different formats | Patients had no preference between non-coloured bar charts and non-coloured line graphs. Patients preferred coloured bar charts over coloured line graphs. Healthcare professionals showed a preference for line graphs with 'traffic light'-coloured thresholds. Understanding did not differ between graphical formats for patients or healthcare professionals. | Q1 Q2 | Level 4 | High |
| Oliver et al, 2019, Australia, USA and Sweden27 | Case study | Multiple sclerosis, spinal care and rheumatology | 3 case studies of PRO feedback used in routine practice | Features that aid in the interpretation of PROs in the 3 case studies | The use of colour coding, threshold indicators and linked decision-support functions (such as predictive calculators) can aid interpretation of PRO scores. | Q1 | Level 4 | Very low |
| Snyder et al, 2019, USA11 | Consensus panel | Cancer | Participants included healthcare professionals, PRO researchers, patients and caregivers. 28 participants in meeting 1, and 27 participants in meeting 2 (participants were not mutually exclusive) | A modified Delphi process to develop recommendations for PRO data display | Recommendations for the display of PRO data include using labelling and thresholds, not mixing score direction in a single display, accommodating both normed and non-normed scoring, displaying CIs, indicating possibly concerning results. | Q1 Q2 | Level 5 | High |
| Talib et al, 2018, USA36 | Interview | Primary care | 23 patients in primary care | Thematic analysis of patients' perceptions of the utility of PROs in primary care | Patients found the colour coding of severe symptoms useful but recommended the addition of a 'traffic-light' colour scheme. | Q1 | Level 4 | High |
| Van Overveld et al, 2017, the Netherlands37 | Semistructured interview | Head and neck | 37 patients, healthcare professionals (doctors, nurses, speech pathologist, dietician, allied health) and health insurers | Content analysis of participants' preferred PRO feedback method | Patients want PROs feedback to include an explanation of how to read the PRO graph and a comparison, with feedback delivered around once a year. Healthcare professionals want PROs feedback to be simple and to include comparison groups (such as the national average, best and worst performers). Healthcare professionals want PROs feedback between 1 and 4 times a year, delivered via email. | Q1 Q2 Q3 | Level 4 | High |
| Wu et al, 2016, USA38 | Semistructured interview | Cancer | 42 patients with cancer and 12 healthcare professionals (doctors and nurses) | Evaluate participants' views of a webtool that was designed to allow PRO use in clinical practice | Patients and healthcare professionals recommended that PRO score directionality be consistent and that more explanation of score meaning be provided. Healthcare professionals also recommended indicating whether a score reflects better or worse health. | Q1 | Level 4 | High |
AuSCR, Australian Stroke Clinical Registry; HRQOL, Health-related quality of life; JBI, Joanna Briggs Institute; PRO, patient-reported outcome; PROM, patient-reported outcome measure.
Table 2.
Characteristics of the included grey literature
| Author/organisation, title, year | Web reference | Type of material | Date accessed | Brief summary/relevant findings |
| Aaronson et al. User’s Guide to Implementing Patient-Reported Outcomes Assessment in Clinical Practice. Version 2: January 2015.54 | https://www.isoqol.org/UserFiles/2015UsersGuide-Version2.pdf | User’s guide | 09 July 2019 | A User’s Guide developed by a team from the International Society for Quality of Life Research to provide practical guidance for clinicians with an interest in using PROs data in clinical practice. A combination of different tools to facilitate PROs data interpretation was recommended, and their advantages and disadvantages were described (eg, tools to aid PROs data interpretation vary depending on whether the patient’s current score or a change in score is fed back). |
| Batalden et al. Enabling uptake of a registry-supported care and learning system in the United States: A report to the Robert Wood Johnson Foundation from Karolinska Institutet and The Dartmouth Institute, 2014.44 | https://srq.nu/en/welcome/ | Technical report | 10 July 2019 | The authors outlined a synergistic, learning health system model based on a case study from the Swedish Rheumatology Quality Registry whereby several data feedback systems were involved. PRO data were fed forward in a shared information environment and combined with clinical data displayed on a dashboard for outcome evaluation and clinical decision-making. |
| Canadian Institute for Health Information (CIHI). Health outcomes of care: An idea whose time has come, 2012.1 | https://secure.cihi.ca/free_products/HealthOutcomes2012_EN.pdf | Technical report | 23 July 2019 | A report produced by authors from Statistics Canada and the Canadian Institute for Health Information which presented PRO data development options (using several case studies) to address gaps related to health outcomes. The authors included information related to challenges involved with the use of PROs among healthcare professionals. |
| CIHI. PROMs Background Document, 2015.47 | https://www.cihi.ca/sites/default/files/document/proms_background_may21_en-web.pdf | Report | 23 July 2019 | The authors provided an overview of the coordinated approach to PROMs collection and reporting established in Canada, including the initial implementation steps and a review of the international PROMs landscape.
|
| CIHI. CIHI PROMs Forum Proceedings, 2015.53 | https://www.cihi.ca/sites/default/files/document/proms_forum_proceedings_-_may_26_enweb.pdf | Forum proceedings | 23 July 2019 | An outline of the proceedings from a PROMs Forum hosted by the Canadian Institute for Health Information. In brief, the value of targeting PROs data initiatives towards clinicians was outlined, including three clinical areas (eg, renal care) in which well-established PROs reporting mechanisms were determined to be most desirable.
|
| CIHI. Patient-centred measurement and reporting in Canada launching the discussion toward a future state, 2017.45 | https://www.cihi.ca/sites/default/files/document/visioning-day-paper-en-web.pdf | Technical report | 26 July 2019 | The authors presented a summary report based on presentations delivered at an invitational visioning day hosted by the Canadian Institute for Health Information. In brief, a common set of priorities for the measurement and reporting of PROs data was highlighted among 33 participants.
|
| Cappelleri et al. Patient-Reported Outcomes: Measurement, Implementation and Interpretation, 2014.50 | https://www.crcpress.com/Patient-Reported-Outcomes-Measurement-Implementation-and-Interpretation/Cappelleri-Zou-Bushmakin-Alvir-Alemayehu-Symonds/p/book/9781138199590 | Book/book chapter | 17 July 2019 | The authors provided a comprehensive overview of various PRO data elements (eg, measurement validity/reliability, missing data and statistical techniques) that can be used to advance the validation and use of these data.
|
| Chen. Integrated Care: Patient reported outcome measures and patient reported experience measures - A rapid scoping review, 2015.42 | https://www.aci.health.nsw.gov.au/__data/assets/pdf_file/0009/281979/ACI_Proms_Prems_Report.pdf | Technical report | 08 July 2019 | A report based on the outcomes of a scoping review that was undertaken to examine the issues of implementing a large-scale PROMs initiative, with a particular focus on patient-centred care in New South Wales, Australia.
|
| Clinical Oncology Society of Australia (COSA). Implementing monitoring of patient-reported outcomes into cancer care in Australia - A COSA Think Tank Report, 2018.41 | https://www.cosa.org.au/media/332504/cosa_pros_think_tank_report_final.pdf | Technical report | 12 July 2019 | A report based on the findings from a Think Tank that involved 32 participants and was focused on approaches to embed PRO assessment as part of routine cancer care in Australia. The authors highlighted effective methods for implementing PRO monitoring and discussed the benefits of using PRO data in clinical practice.
|
| Desomer et al. Use of patient-reported outcome and experience measures in patient care and policy. Belgian Health Care Knowledge Centre, 2018.4 | https://kce.fgov.be/en/use-of-patient-reported-outcome-and-experience-measures-in-patient-care-and-policy | Technical report | 26 July 2019 | A report based on an evaluation of the uses, benefits, barriers and facilitators of PRO and experience measures in clinical practice undertaken by a research team from the Belgian Health Care Knowledge Centre. The authors included an analysis of international initiatives and a review of the peer-reviewed literature along with a set of recommendations to facilitate the introduction of PROs.
|
| Duckett et al. Targeting zero: Supporting the Victorian hospital system to eliminate avoidable harm and strengthen quality of care - report of the Review of Hospital Safety and Quality Assurance in Victoria, 2016.40 | https://www.dhhs.vic.gov.au/sites/default/files/documents/201610/Hospital_Safety_and_Quality_Assurance_in_Victoria.pdf | Technical report | 26 July 2019 | A report based on a review of the governance of quality and safety monitoring and data reporting throughout hospitals located in Victoria, Australia. The review process included stakeholder and expert consultation methods and the authors presented several recommendations, including the establishment of systematic collection of PROMs at a state-level.
|
| Duckett et al. Strengthening Safety Statistics: How to make hospital safety data more useful: The Grattan Institute, 2017.49 | https://grattan.edu.au/wp-content/uploads/2017/11/893-strengthening-safety-statistics.pdf | Technical report | 26 July 2019 | A technical report focused on methods to enhance the presentation of hospital safety data (in general), which also included information related to PROs data. The author suggested that aggregated data must be presented in meaningful and simple ways and directed towards appropriate audiences who can take action.
|
| Franklin et al. Framework to guide the collection and use of Patient-Reported Outcome Measures in the learning healthcare system, 2017.43 | https://egems.academyhealth.org/articles/10.5334/egems.227/ | Technical report | 09 July 2019 | A report outlining the findings based on key informant interviews (conducted with 46 individuals who were actively engaged in the use of PROMs in diverse clinical settings), two interactive web-based discussions and an in-person workshop. The authors presented an implementation framework and included a toolkit of strategies to accelerate collection and use of PROMs.
|
| Nelson et al. Using Patient-Reported Information to Improve Health Outcomes and Health Care Value: Case studies from Dartmouth, KarolInska and Group Health. Lebanon, New Hampshire: The Dartmouth Institute for Health Policy and Clinical Practice, 2012.39 | https://www.researchgate.net/publication/232607583_Using_Patient-Reported_Information_to_Improve_Health_Outcomes_and_Health_Care_Value_Case_studIes_fomm_Dartmouth_KarolInska_and_Group_Health | Technical report | 11 July 2019 | A peer-reviewed, technical report outlining the feasibility, utility and lessons related to PROs data collection systems. The authors presented three case studies from PRO initiatives based at the Dartmouth-Hitchcock Spine (Lebanon), the Swedish Rheumatoid Arthritis Registry and Group Health Cooperative (Seattle, Washington).
|
| NSW Agency for Clinical Innovation. Patient Reported Measures – Program overview, 2018.46 | https://www.aci.health.nsw.gov.au/__data/assets/pdf_file/0004/415219/ACI18050_PRM_ProgOverview_Guide_v1.pdf | Programme overview and guide | 05 July 2019 | A guide and overview of the Agency for Clinical Innovation Patient Reported Outcome Measures program established in New South Wales, Australia. The document outlined implementation considerations related to PROs.
|
| Paxton Partners, Patient-Reported Outcome Measures: Literature scan, personal communication, 2018. | N/A | Report | 14 June 2019 | A report based on the implementation considerations required for the establishment of a PROMs collection system in Victoria, Australia. The authors included a review of the literature and evidence from the experiences of early PRO data adopters located in other countries and jurisdictions.
|
| Peterson. Learning and understanding for quality improvement under different conditions - An analysis of quality registry-based collaboratives in acute and chronic care, 2015.52 | http://hj.diva-portal.org/smash/get/diva2:871675/FULLTEXT01.pdf | Dissertation | 08 July 2019 | A dissertation based on the use of Quality Improvement Collaboratives (QICs) in three national registries (which are also used for follow-up purposes) in Sweden. The author used an interactive approach to examine if, and how, QICs contributed to quality improvement in the provision of healthcare.
|
| Raine et al. Patient-reported outcome measures and the evaluation of services. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health: National Institute for Health Research, 2016.51 | https://www.ncbi.nlm.nih.gov/books/NBK361255/ | Book/book chapter | 16 July 2019 | The authors provided an overview of the progress made in relation to PROs and outlined the main challenges that need to be addressed to further the field. Using the experiences and lessons learnt from several large-scale PROMs programs in different countries, the authors describe the role of PRO data and the need to engage clinicians to ensure uptake.
|
| Snyder et al. Testing Ways to Display Patient-Reported Outcomes Data for Patients and Clinicians, 2018.55 | https://www.pcori.org/sites/default/files/Snyder054-Final-Research-Report.pdf | Research report | 26 July 2019 | A final research report produced by a research team from the Patient-Centred Outcomes Research Institute in the USA. Using a three-part mixed methods study, the authors identified and tested a range of approaches for presenting PRO data (individual and group level) to promote understanding among clinicians and patients from cancer treatment settings.
|
| Thompson et al. Patient-reported Outcome Measures: An environmental scan of the Australian healthcare sector, 2016.3 | https://www.safetyandquality.gov.au/sites/default/files/migrated/PROMs-Environmental-Scan-December-2016.pdf | Final report (environmental scan) | 14 June 2019 | A report based on an environmental scan of the literature undertaken by authors from the Australian Health Services Research Institute. The authors described the status of the collection and use of PROMs initiatives in the Australian healthcare system.
|
| Williams et al. Patient-reported outcome measures: Literature review, 2016.5 | https://www.safetyandquality.gov.au/sites/default/files/migrated/PROMs-Literature-Review-December-2016.pdf | Final report (literature review) | 14 June 2019 | A report based on the findings from a literature review conducted by researchers from the Australian Health Services Research Institute. The authors describe the international evidence to support the rationale for PROs data collections and different mechanisms used to facilitate collection, data uses and the impact of these data.
|
| World Economic Forum. Value in healthcare laying the foundation for health system transformation. Cologny/Geneva, Switzerland: World Economic Forum, 2017.48 | http://www3.weforum.org/docs/WEF_Insight_Report_Value_Healthcare_Laying_Foundation.pdf | Report | 05 July 2019 | A report based on a collaborative project undertaken by authors from the World Economic Forum and The Boston Consulting Group whereby the foundational principles of value-based healthcare, including information related to PROs data were described.
|
N/A, not available; PROM, patient-reported outcome measure; PROs, patient-reported outcomes.
The following results are presented by research question.
What is best practice in the readability and feedback of PROs data to healthcare professionals?
Overall, the current evidence base provides some general guidance but inadequately describes specific optimal data display methods for the feedback of PROs data to healthcare professionals. From this review, several issues related to the reporting of PROs data to health professionals were explored and summarised, and recommendations identified to address these issues are provided below.
Authors from two publications suggested that in order to engage health professionals in reviewing PROs data, PROs reports need to be simple and easy to read.24 25 Suggested modifications to improve the readability of feedback interventions included: reducing the number of metrics (ie, outcomes) presented within a report, minimising page counts, avoiding three-dimensional graphical elements, uncluttering reports to increase readability and including instructions where they will be needed.29
Six publications addressed the issue of directionality of PRO scores in graphical displays.2 11 12 35 38 55 A consensus panel found that there was no intuitive interpretation of symptom scores, with some people expecting higher scores to mean ‘better’ and other people expecting higher scores to mean ‘more’ of the symptom (and therefore worse).11 Healthcare professionals’ interpretation accuracy has been demonstrated to be greater for line graphs when higher scores indicated ‘better’ rather than indicating ‘more’.55 Despite these results, caution should be taken when modifying the directionality of PROs in order for all symptom scores to have the same directionality, due to potential confusion associated with inconsistencies across instruments.11 One suggestion to avoid potential confusion is to provide a label to denote ‘better’ alongside the chart to indicate the directionality of the PROs,2 38 or use coloured arrows: green for better scores and red arrows for worse scores.35
Further, the provision of a written explanation of the PRO score alongside the graph has also been recommended to assist with interpretation.29 Written explanations are particularly valuable for complex graphical displays.31 37 Another suggestion is to include descriptive labels (eg, mild/moderate/severe) alongside the chart, assuming data to support the use of these thresholds are available.11 27 The use of ‘traffic-light’ colours to colour code the thresholds has also been recommended to allow a quick and easy review.30 34–36
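As an illustrative sketch only (not a method prescribed by the reviewed literature), the combination of descriptive labels and ‘traffic-light’ colour coding described above can be expressed as a simple mapping from aggregated scores to severity bands. The cut-points, score scale and service names below are hypothetical placeholders, since usable thresholds must be instrument specific.

```python
def severity_band(score, mild_max=30, moderate_max=60):
    """Map a 0-100 symptom score (higher = more of the symptom) to a
    descriptive label and a 'traffic-light' colour.
    The cut-points here are illustrative placeholders only."""
    if score <= mild_max:
        return ("mild", "green")
    if score <= moderate_max:
        return ("moderate", "amber")
    return ("severe", "red")

# Hypothetical aggregated service-level scores, annotated for display
scores = {"Service A": 22, "Service B": 48, "Service C": 75}
annotated = {name: severity_band(s) for name, s in scores.items()}
```

Such labels would sit alongside the chart, together with an indication of score directionality, so that reviewers need not remember which end of the scale is ‘better’.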
Displaying a reference population to use as a comparison was addressed in four publications.3 11 37 48 Reference populations, such as national averages or relevant norm information for peer groups, can help provide context for the interpretation of the PRO scores,37 provided these data are available.11 However, there is a need to balance the complexity of presenting additional data and the healthcare professionals’ ability to understand the data.11 Furthermore, in an exploratory study, participants warned that providing comparison data can have unintended consequences, such as negative comparisons leading to reputational damage when the health service or healthcare professional is reported to be performing worse on their PROs results.37
A cross-sectional mixed methods study in oncology reported that healthcare professionals indicated a preference towards the inclusion of statistical details for PROs data.6 There is a move away from reporting the p value alone to illustrate statistical significance, and instead the use of CIs is encouraged.6 11 The clinically important difference should also be included within the graphical representation of the PROs results, where appropriate.11 25 However, an asterisk is not recommended for indicating clinically important differences, as that symbol is commonly used to indicate statistical significance.11 Patients can find the inclusion of clinically important differences confusing,6 but it is valuable for them to know if the difference matters.11
What PRO data presentation formats have the most utility for healthcare professionals?
There are many different formatting approaches that have been used to display PROs results. Table 3 provides a summary of different formats that have been used to display PROs data, as well as an indication of the preference among healthcare professionals. Line graphs and bar graphs were identified as the most familiar and preferred format among healthcare professionals for comparing and reviewing their service.
Table 3.
Summary of different PRO data presentation formats
| Graphical format | Summary | Healthcare professional preference |
| Tables with numerical data | Presentation of data in tables is considered more neutral and needing less explanation for interpreting the meaning of the data than when presented in graphs. Tables with large amounts of data may be perceived as cluttered and lacking visual clarity, making them difficult to read.31 34 | ± Mixed |
| Use of icons/pictographs | Most healthcare professionals find tables with icons to be insufficient and lacking transparency.31 33 This is the inverse to patients, who prefer such displays due to their simplicity.34 | – Negative |
| Line graphs | Line graphs are the preferred approach for presenting individual patient PRO scores over time.6 11 24 33 35 55 However, if there are too many outcome variables, the line graph may become difficult to interpret.37 The recommended maximum number of lines that should be displayed within a single graph is four.24 | + Positive |
| Bar graph | Bar graphs are widely liked as they are clear and facilitate comparison.24 33 They can also easily include additional information (eg, CIs and descriptive labels). The use of CIs should be accompanied by a written explanation to facilitate interpretation of the data.31 To reduce confusion, the recommended maximum number of bars within a single graph should be six.24 | + Positive |
| Funnel plots | Funnel plots can provide a good overview, but also contain a lot of information. Those unfamiliar with funnel plots may find them confusing.31 34 37 As such, the use of funnel plots should be accompanied by a detailed explanation of how they should be interpreted. | ± Mixed |
| Caterpillar plots | Caterpillar plots are less familiar to healthcare professionals and patients than bar graphs.31 34 However, caterpillar plots are clearer than bar graphs containing CIs, and can facilitate rapid comparisons between larger numbers of groups.31 37 | + Positive |
| Spider plots or radar chart | Healthcare professionals who are unfamiliar with spider plots may find them confusing and lacking clarity.31 Spider plots also make displaying additional information such as CIs or statistical significance difficult.31 | – Negative |
| Pie charts and stacked bar graphs | Pie charts and stacked bar graphs are both reasonable formats for presenting proportions visually, especially when the differences between proportions are large.11 37 Healthcare professionals are more accurate at interpreting stacked bar graphs compared with pie charts,37 while patients can interpret pie charts more accurately.2 | + Positive |
PRO, patient-reported outcome.
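Table 3 notes that funnel plots should be accompanied by an explanation of how to read them. One common construction (an assumption here, not one prescribed by the reviewed literature) plots each service’s mean PRO score against its caseload, with control limits around the overall mean of approximately mean ± z·SD/√n, so that limits narrow as sample size grows. The means, SD and sample sizes below are hypothetical.

```python
import math

def funnel_limits(overall_mean, sd, n, z=1.96):
    """Approximate funnel plot control limits for a mean PRO score:
    overall_mean +/- z * sd / sqrt(n). z=1.96 gives ~95% limits;
    z=3 is often used for ~99.8% limits. Values are illustrative."""
    half_width = z * sd / math.sqrt(n)
    return overall_mean - half_width, overall_mean + half_width

# Services with larger caseloads get tighter limits around the mean
lo_small, hi_small = funnel_limits(50.0, 10.0, n=25)    # wide funnel mouth
lo_large, hi_large = funnel_limits(50.0, 10.0, n=400)   # narrow funnel tail
```

A service plotted outside these limits would be flagged as a potential outlier rather than judged by simple rank, which is one reason funnel plots are recommended over league tables when comparisons are unavoidable.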
Are there factors that influence PRO data interpretation or use in clinical practice?
Within the current body of literature, several barriers and enablers associated with the use and uptake of PROs among healthcare professionals have been identified. However, the evidence base addressing these proposed challenges, or explicit recommendations to enable successful adoption of PROs among healthcare professionals, is limited.1 53 We identified seven factors that influence the interpretation of PROs: missing data, government and local leadership, healthcare professional education and training, engaging healthcare professionals to overcome resistance to change in clinical practice, case-mix adjustment, interoperability of information and communication technology (ICT) systems, and frequency/timeliness of feedback.
Missing data
Missing data pose a challenge for the analysis and reporting of PRO results. Missing PROs data may be unavoidable for a multitude of reasons. There may be specific population groups with missing PROs responses, or sensitive and difficult questions may be omitted.50 Consequently, these instances may result in scepticism among healthcare professionals about the completeness of the data.52
Achievement of high participation and completion rates at follow-up, both individually and at the aggregate level, influences the overall usefulness of PROs data.39 However, given the complex nature of PROs and their inevitable incompleteness in certain cases, a prespecified statistical analysis plan may help ensure that the resulting analyses and reports are not unduly affected by missing data.50
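As a minimal sketch of how completeness might be surfaced alongside aggregated PRO reports (an illustration, not a method from the reviewed literature), follow-up completion rates can be computed per service and services below a chosen threshold flagged for cautious interpretation. The services, responses and the 60% threshold below are hypothetical.

```python
def completion_rate(responses):
    """Fraction of non-missing PRO follow-up responses.
    `responses` is a list in which None marks a missing response."""
    if not responses:
        return 0.0
    completed = sum(1 for r in responses if r is not None)
    return completed / len(responses)

# Hypothetical follow-up data; flag services whose completeness
# may warrant scepticism about the aggregated results
services = {
    "Service A": [70, 65, None, 80, 72],
    "Service B": [None, None, 55, None, 60],
}
flagged = {s: completion_rate(r) for s, r in services.items()
           if completion_rate(r) < 0.6}
```

Reporting the completion rate next to each aggregated score makes the risk of non-response bias visible to the clinicians reviewing the data.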
The role of government and local leadership
It has been reported that ‘top-down’ approaches to PRO implementation, whereby government or management drives the implementation process, performs the assessment and takes actions based on the results, may be met with resistance from healthcare professionals.4 These endeavours can be complemented with ‘bottom-up’ approaches where PRO implementation is clinically driven and is more focused on quality improvement.4 The use of the data from these collections can be reported back at the micro-level (to inform decisions for individual patient care), as well as the meso-level (to assess performance of services and quality improvement) or the macro-level (to assess healthcare systems).4 5 Importantly, the strongest evidence for effectiveness of PROs feedback exists at the meso-level.5
Further, clinical/local champions and stakeholder initiatives are crucial to enhance healthcare professionals’ engagement with the collection and use of PROs data.53 Specifically, clinical champions may contribute to broader dissemination and use of PROs data among clinical units or within health services (Aspex Consulting, Evaluation Framework and Initial Appraisal of PROMS: Final Report, personal communications, 2018).
Healthcare professional education and training
Healthcare professionals’ education and training was addressed in nine publications.32 39 41 42 45 49 51 54 55 Healthcare professionals may not understand PRO data or know what to do with the results.39 49 There is a need to increase PRO-specific training and education to aid healthcare professionals’ ability to: interpret PRO data, integrate the use of PROs into clinical practice and respond to concerning PRO results.41 51 There is currently no recommendation for how to direct healthcare professionals to use and interpret PRO data or for how to respond to concerning results in a standardised, clinically appropriate manner.39 54 For example, our review found the need for disease-management pathways to be developed as a resource to respond to issues identified through PRO results.54 Implementation of a PRO training course has been demonstrated to improve healthcare professionals’ attitudes towards, and self-efficacy with, PRO data within child mental health services.32
Engaging healthcare professionals to overcome resistance to change in clinical practice
There may be a lack of buy-in among the clinical community when healthcare professionals are uncertain or lack confidence in understanding how PROs results could be used to improve their clinical practice.42 As such, PROs should be implemented in a way that can be directly translated into specific actions for healthcare professionals, with clear recommendations on how to respond to PROs scores in clinical settings.26 28 Additional recommendations to improve healthcare professional buy-in include: co-designing data display formats and information content with healthcare professionals’ input to ensure the formats meet their needs,25 39 49 and showcasing benefits to help health professionals see the merits of using PROs data.30 47
Analyses that include adjustment for differences in patient characteristics (case-mix adjustment)
Due to the differing characteristics of patients admitted to different health services, comparing outcomes between hospitals without case-mix adjustment may be misleading.53 Case-mix adjustments are particularly important to healthcare professionals.53 Case-mix adjustment uses statistical models to account for known variables that affect health (such as age, gender, ethnicity, symptom severity and socioeconomic background) to predict what each hospital outcome would be for a standard patient or population.1 The development of case-mix adjustment methods for PROs data is a widely recognised challenge in the field.1 48 53 For example, patients may be influenced by cultural, developmental or personality differences, contextual factors or life circumstances, and different health experiences or events when interpreting and responding to questions related to their health.53 Importantly, case-mix adjustment for PROs needs to be disease/condition specific, since demographic factors that may influence patients’ responses to PROs are likely to vary across patient cohorts and clinical settings.42 Published evidence related to the development of case-mix adjustment methods for PRO data is limited. Further development and refinement of robust case-mix adjustment methods is required to guide meaningful interpretation and use of PROs data.1 43 53
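The idea of predicting what each hospital’s outcome would be for a standard patient can be illustrated with indirect standardisation (observed mean relative to case-mix-predicted mean, rescaled to the overall mean). This is a sketch under stated assumptions: the linear model, its coefficients and the patient data below are entirely hypothetical, and real case-mix models would be fitted to, and validated on, condition-specific data.

```python
def expected_score(age, severity, coef_age=-0.2, coef_severity=-5.0,
                   intercept=80.0):
    """Hypothetical linear case-mix model predicting a PRO score
    (higher = better) from age and symptom severity.
    The coefficients are illustrative placeholders only."""
    return intercept + coef_age * age + coef_severity * severity

def casemix_adjusted(patients, overall_mean=60.0):
    """Indirectly standardised score for one hospital:
    overall mean * (observed mean / case-mix-expected mean)."""
    observed = sum(p["score"] for p in patients) / len(patients)
    expected = sum(expected_score(p["age"], p["severity"])
                   for p in patients) / len(patients)
    return overall_mean * observed / expected

# A hypothetical hospital with an older, sicker case-mix
patients = [
    {"age": 70, "severity": 2, "score": 58},
    {"age": 50, "severity": 1, "score": 68},
]
adjusted = casemix_adjusted(patients)
```

Under this construction, a hospital that outperforms its case-mix-based expectation receives an adjusted score above the overall mean, even if its raw mean is below it.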
Interoperability of ICT systems
A lack of efficient, interoperable health information systems and robust data governance frameworks is a significant barrier to the integration and reporting of PROs.44 48 ICT system interoperability issues prohibit patient-level linkage between datasets, limiting the ability to conduct risk adjustment and draw meaningful conclusions from some PRO collections.48
Frequency/timeliness of feedback
The frequency or timeliness of PROs feedback was addressed in 10 publications.3 5 29 30 37 45–47 52 53 Perceived time lags associated with PROs data feedback, such as reports fed back annually, may lead to information being discounted as irrelevant.29 45 53 One solution is to routinely report PRO results to healthcare professionals or provide the capacity for clinical teams to continuously retrieve and review their own data.46 52 Conversely, too much feedback could result in ‘alert fatigue’, which may lead healthcare professionals to ignore the PROs results.30 Despite reporting delays being a known barrier to healthcare professionals’ uptake of PROs, optimal intervals for feedback have seldom been investigated in this area.29 One suggested timing for audit and feedback to professional practice is one to four times a year for process and outcome indicators, but more frequently where there is greater possibility for improvement.37
A summary of the overall prevailing consensus-based guiding principles is outlined in box 1.
Box 1. Summary of basic guiding principles.
Recommendations to guide best practice in patient-reported outcome (PRO) data feedback to clinicians:
Reporting PROs data back to clinicians should be done in a simple format that is easy to read to reduce the chance of misinterpretation.25
Features that may be used to facilitate simple reporting include: reducing the number of metrics presented within a report and minimising page counts.29
PROs reporting should avoid mixing the directions of scores that are displayed. Exceptionally clear labelling, titling and annotations should also be used to increase interpretability.2 11 35
The use of coloured arrows (eg, green for better scores and red for worse scores) may enhance clinicians’ interpretation of PROs scores presented across different domains.35
Clinically important differences and CIs should be included where possible. There is a move away from reporting just the p value.6 11
Recommendations for optimal data presentation formats:
The choice of which graphical format to use to display the PROs data will depend on the type of data (ie, single outcome/multiple outcomes, single time point/multiple time points, amount of data to display and so on) and the intended purpose of the data.24
Line graphs and bar graphs are preferred and reduce the chance of misinterpreting the data.24 33
The maximum number of bars presented within a bar graph should be six, while the maximum number of lines within a line graph should be four.24
More complex displays such as funnel plots or caterpillar plots should be accompanied by a description of how to interpret the graph.31
Recommendations to address barriers and enablers associated with feedback and reporting of PROs:
The inclusion of clinical/local champions is critical to generate buy-in from the clinical community (Aspex Consulting, Evaluation Framework and Initial Appraisal of PROMS: Final Report, personal communications, 2018).
PROs should be reported in a way that can be directly translated into specifications to guide clinicians to respond to concerning results.26
Training and education are needed to improve the clinician’s ability to interpret PRO data, to integrate the use of PROs into their routine practice and to respond to concerning results.39 51
The optimal time intervals for PROs feedback need to be determined. One suggested timeframe for audit and feedback to clinicians is one to four times a year.37
Discussion
PROs data may be used to improve the safety and quality of healthcare, but in order to achieve this, it is critical that feedback methods are optimised. This scoping review provides a novel summation of the published and grey literature of the guiding principles for effectively feeding back PROs data to healthcare providers. The overall synthesis of the literature revealed various issues that provide opportunities to advance this field.
What constitutes ‘best practice’ feedback for PROs is not yet firmly established. Despite this gap in the evidence, we were able to highlight multiple prevailing consensus-based approaches.
Studies on the feedback of PROs data are limited; however, there is a large body of literature that informs the graphical presentation of clinical data in general. This extensive research can inform understanding of the graphical representation of PROs. For example, similar graphical display features have been demonstrated in other forms of feedback to clinicians. In a review of quality dashboards used in clinical settings, Dowding et al 56 found that most dashboards used the ‘traffic-light’ colour coding in their displays to indicate what type of action is required. In contrast to the suggestions made in the current review, Dowding et al 56 found that most dashboards used a table format to represent the data. Providing peer group data or benchmarking to enable comparison of current practice using clinical audits with feedback is also a common technique to improve engagement.57 58
To facilitate the successful uptake of PROs data in clinical practice, it is also recommended that a knowledge translation strategy is developed.59 Identification of local barriers and enablers, and the development of a theory-based integrative knowledge translation plan may support greater uptake and use of PROs data. Further, recommendations to improve knowledge translation have been identified in other types of clinical audit and feedback. The authors from multiple clinical audit and feedback studies have indicated that feedback is more effective when there is a local champion.60 61 The timeliness and actionability of the feedback are other factors that are consistently mentioned for effective clinical feedback.58 60 62 63 These findings are in line with the current study. Additional factors to improve the effectiveness of feedback include: providing feedback both verbally and in written format, and using feedback to decrease rather than increase certain behaviours.60
There have also been several initiatives to develop guidance on communicating data in general, which can further inform the development of PROs data feedback. In a guide published by authors from the National Cancer Institute,64 several suggestions are given for presenting data effectively, many of which are in line with the current review, including the use of labels and of colour. Additional suggestions include the use of verbal qualifiers or metaphors to help explain the meaning of the numbers, and rounding most decimals to the nearest whole number for ease of understanding. Simpson provides guidance on how to choose an appropriate graph type.65 Nominal and ordinal data can be displayed using a pie graph or bar chart, but interval and ratio data may have too many categories to be displayed in a pie chart. Further, box plots are best used to display variables that are not normally distributed.
Strengths of our review include the use of a predefined protocol by each reviewer and a standardised template to summarise information from the included literature, ensuring consistency. Despite our rigorous search strategy, several limitations deserve comment. Due to the available timeframe, both the academic and grey literature searches and the screening process were largely conducted by a single reviewer. This may have resulted in selection and interpretation bias, as some relevant literature may have been overlooked. Further, the grey literature search was limited to only seven countries. Despite this limitation, it is reasonable to assume that, much like the standards available for the presentation of data in other healthcare settings, the general guiding principles for PROs data feedback would be consistent across jurisdictions and between countries. Overall, we found limited high-quality published evidence related to optimal feedback methods and formats for PROs data. Our findings suggest that there is a need for more rigorous testing of PROs feedback methods in the future.
Future directions
PROs represent a key building block required to move towards a health system that can assess the value of healthcare from a consumer’s perspective (Paxton Partners, Patient-Reported Outcome Measures: Literature scan, personal communication, 2018). Little is known about how best to feed back PROs data to healthcare providers so that they can consider the performance of their health services relative to peer services. We sought to summarise the current evidence base and use this information to facilitate a process for determining the best methods for future implementation of PROs reporting. As part of planned future work associated with the AuSCR,13 66 we seek to test various formats based on our findings and extend the work conducted to date. The AuSCR is one of the few national stroke clinical registries worldwide to collect PROs.18 The outcomes of this work will also inform the field and may be adopted by other CQRs.
Conclusion
While ‘best practice’ feedback methods and presentation formats for PROs data provided to healthcare professionals are emerging, many questions remain unanswered. The basic guiding principles and recommendations presented in this review draw on the findings of the prevailing, consensus-based literature. Further research is required to determine what healthcare professionals perceive to be simple, easy-to-read and interpretable PROs reports for aggregated data. Healthcare professionals require support to interpret these data and should be part of the process of co-designing the formats that will be most meaningful to them. Our work provides some guidance towards these efforts.
Supplementary Material
Acknowledgments
The authors thank Claire Weickhardt (CW) for her assistance with the screening of the literature.
Footnotes
Contributors: All authors were involved in the planning of the project. SLH, OFR and VM were involved in the search strategy, extraction and synthesis of data, and wrote the manuscript in consultation with SK, SB and DAC. All authors contributed to the final version of the manuscript.
Funding: This work was funded by the Victorian Agency for Health Information as a consultancy.
Competing interests: None declared.
Patient consent for publication: Not required.
Provenance and peer review: Not commissioned; externally peer reviewed.
Data availability statement: All data relevant to the study are included in the article or uploaded as supplemental information.
Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
References
- 1. Canadian Institute for Health Information (CIHI). Health outcomes of care: an idea whose time has come. Ottawa, Ontario, 2012: 1–52. [Google Scholar]
- 2. Brundage M, Blackford A, Tolbert E, et al. Presenting comparative study PRO results to clinicians and researchers: beyond the eye of the beholder. Qual Life Res 2018;27:75–90. 10.1007/s11136-017-1710-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3. Thompson C, Sansoni J, Morris D, et al. Patient-reported outcome measures: an environmental scan of the Australian healthcare sector. Sydney, NSW: Australian Commission on Safety and Quality in Health Care, 2016: 1–89. [Google Scholar]
- 4. Desomer A, Van Den Heede K, Triemstra M, et al. Use of patient-reported outcome and experience measures in patient care and policy. Belgian Health Care Knowledge Centre, 2018: 1–151. [Google Scholar]
- 5. Williams K, Sansoni J, Morris D, et al. Patient-reported outcome measures: literature review. Sydney, NSW: Australian Commission on Safety and Quality in Health Care, 2016: 1–91. [Google Scholar]
- 6. Brundage MD, Smith KC, Little EA, et al. Communicating patient-reported outcome scores using graphic formats: results from a mixed-methods evaluation. Qual Life Res 2015;24:2457–72. 10.1007/s11136-015-0974-y [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7. Santana MJ, Haverman L, Absolom K, et al. Training clinicians in how to use patient-reported outcome measures in routine clinical practice. Qual Life Res 2015;24:1707–18. 10.1007/s11136-014-0903-5 [DOI] [PubMed] [Google Scholar]
- 8. Ahmed S, Berzon RA, Revicki DA, et al. The use of patient-reported outcomes (PRO) within comparative effectiveness research: implications for clinical practice and health care policy. Med Care 2012;50:1060–70. 10.1097/MLR.0b013e318268aaff [DOI] [PubMed] [Google Scholar]
- 9. Prodinger B, Taylor P. Improving quality of care through patient-reported outcome measures (PROMs): expert interviews using the NHS PROMs programme and the Swedish quality registers for knee and hip arthroplasty as examples. BMC Health Serv Res 2018;18:1–13. 10.1186/s12913-018-2898-z [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10. Brundage M, Bass B, Jolie R, et al. A knowledge translation challenge: clinical use of quality of life data from cancer clinical trials. Qual Life Res 2011;20:979–85. 10.1007/s11136-011-9848-0 [DOI] [PubMed] [Google Scholar]
- 11. Snyder C, Smith K, Holzner B, et al. Making a picture worth a thousand numbers: recommendations for graphically displaying patient-reported outcomes data. Qual Life Res 2019;28:345–56. 10.1007/s11136-018-2020-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12. Arcia A, Woollen J, Bakken S. A systematic method for exploring data attributes in preparation for designing tailored infographics of patient reported outcomes. eGEMs 2018;6:1–9. 10.5334/egems.190 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13. Cadilhac DA, Lannin NA, Anderson CS, et al. Protocol and pilot data for establishing the Australian stroke clinical Registry. Int J Stroke 2010;5:217–26. 10.1111/j.1747-4949.2010.00430.x [DOI] [PubMed] [Google Scholar]
- 14. Peters M, Godfrey C, McInerney P, et al. Chapter 11: Scoping reviews (2020 version). In: Aromataris E, Munn Z, eds. JBI manual for evidence synthesis. Adelaide: JBI, 2020. [Google Scholar]
- 15. Cochrane Training. Online learning, 2019. Available: https://training.cochrane.org/online-learning [Accessed June 2019].
- 16. Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018;169:467–73. 10.7326/M18-0850 [DOI] [PubMed] [Google Scholar]
- 17. Covidence systematic review software [program]. Melbourne, Australia: Veritas Health Innovation. [Google Scholar]
- 18. Cadilhac DA, Kim J, Lannin NA, et al. National stroke registries for monitoring and improving the quality of hospital care: a systematic review. Int J Stroke 2016;11:28–40. 10.1177/1747493015607523 [DOI] [PubMed] [Google Scholar]
- 19. Campbell BCV, Ma H, Ringleb PA, et al. Extending thrombolysis to 4·5-9 H and wake-up stroke using perfusion imaging: a systematic review and meta-analysis of individual patient data. Lancet 2019;394:139–47. 10.1016/S0140-6736(19)31053-0 [DOI] [PubMed] [Google Scholar]
- 20. Lynch E, Hillier S, Cadilhac D. When should physical rehabilitation commence after stroke: a systematic review. Int J Stroke 2014;9:468–78. 10.1111/ijs.12262 [DOI] [PubMed] [Google Scholar]
- 21. Joanna Briggs Institute JBI levels of evidence 2014. Available: https://joannabriggs.org/sites/default/files/2019-05/JBI-Levels-of-evidence_2014_0.pdf [Accessed June 2019].
- 22. Joanna Briggs Institute. Reviewer’s manual. The Joanna Briggs Institute, 2017. [Google Scholar]
- 23. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009;6:e1000097. 10.1371/journal.pmed.1000097 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24. Bantug ET, Coles T, Smith KC, et al. Graphical displays of patient-reported outcomes (PRO) for use in clinical practice: what makes a PRO picture worth a thousand words? Patient Educ Couns 2016;99:483–90. 10.1016/j.pec.2015.10.027 [DOI] [PubMed] [Google Scholar]
- 25. Boyce MB, Browne JP, Greenhalgh J. The experiences of professionals with using information from patient-reported outcome measures to improve the quality of healthcare: a systematic review of qualitative research. BMJ Qual Saf 2014;23:508–18. 10.1136/bmjqs-2013-002524 [DOI] [PubMed] [Google Scholar]
- 26. Forsberg HH, Nelson EC, Reid R, et al. Using patient-reported outcomes in routine practice: three novel use cases and implications. J Ambul Care Manage 2015;38:188–95. 10.1097/JAC.0000000000000052 [DOI] [PubMed] [Google Scholar]
- 27. Oliver BJ, Nelson EC, Kerrigan CL. Turning feed-forward and feedback processes on patient-reported data into intelligent action and informed decision-making. Med Care 2019;57:S31–7. 10.1097/MLR.0000000000001088 [DOI] [PubMed] [Google Scholar]
- 28. Jensen RE, Snyder CF, Basch E, et al. All together now: findings from a PCORI workshop to align patient-reported outcomes in the electronic health record. J Comp Eff Res 2016;5:561–7. 10.2217/cer-2016-0026 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29. Brehaut JC, Colquhoun HL, Eva KW, et al. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med 2016;164:435–41. 10.7326/M15-2248 [DOI] [PubMed] [Google Scholar]
- 30. Aiyegbusi OL, Kyte D, Cockwell P, et al. Patient and clinician perspectives on electronic patient-reported outcome measures in the management of advanced CKD: a qualitative study. Am J Kidney Dis 2019;74:167–78. 10.1053/j.ajkd.2019.02.011 [DOI] [PubMed] [Google Scholar]
- 31. Allwood D, Hildon Z, Black N. Clinicians' views of formats of performance comparisons. J Eval Clin Pract 2013;19:86–93. 10.1111/j.1365-2753.2011.01777.x [DOI] [PubMed] [Google Scholar]
- 32. Edbrooke-Childs J, Wolpert M, Deighton J. Using patient reported outcome measures to improve service effectiveness (UPROMISE): training clinicians to use outcome measures in child mental health. Adm Policy Ment Health 2016;43:302–8. 10.1007/s10488-014-0600-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33. Hartzler AL, Izard JP, Dalkin BL, et al. Design and feasibility of integrating personalized pro dashboards into prostate cancer care. J Am Med Inform Assoc 2016;23:38–47. 10.1093/jamia/ocv101 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34. Hildon Z, Allwood D, Black N. Making data more meaningful: patients' views of the format and content of quality indicators comparing health care providers. Patient Educ Couns 2012;88:298–304. 10.1016/j.pec.2012.02.006 [DOI] [PubMed] [Google Scholar]
- 35. Kuijpers W, Giesinger JM, Zabernigg A, et al. Patients' and health professionals' understanding of and preferences for graphical presentation styles for individual-level EORTC QLQ-C30 scores. Qual Life Res 2016;25:595–604. 10.1007/s11136-015-1107-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36. Talib TL, DeChant P, Kean J, et al. A qualitative study of patients' perceptions of the utility of patient-reported outcome measures of symptoms in primary care clinics. Qual Life Res 2018;27:3157–66. 10.1007/s11136-018-1968-3 [DOI] [PubMed] [Google Scholar]
- 37. van Overveld LFJ, Takes RP, Vijn TW, et al. Feedback preferences of patients, professionals and health insurers in integrated head and neck cancer care. Health Expect 2017;20:1275–88. 10.1111/hex.12567 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38. Wu AW, White SM, Blackford AL, et al. Improving an electronic system for measuring PROs in routine oncology practice. J Cancer Surviv 2016;10:573–82. 10.1007/s11764-015-0503-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39. Nelson E, Hvitfeldt H, Reid R, et al. Using patient-reported information to improve health outcomes and health care value: case studies from Dartmouth, Karolinska and Group Health. Lebanon, New Hampshire: The Dartmouth Institute for Health Policy and Clinical Practice, 2012: 1–55. [Google Scholar]
- 40. Duckett S, Cuddihy M, Newnham H. Targeting zero: Supporting the Victorian hospital system to eliminate avoidable harm and strengthen quality of care - report of the Review of Hospital Safety and Quality Assurance in Victoria. Melbourne: State Government of Victoria, 2016. [Google Scholar]
- 41. Clinical Oncology Society of Australia (COSA) Implementing monitoring of patient-reported outcomes into cancer care in Australia - A COSA Think Tank Report. Sydney, Australia: Clinical Oncology Society of Australia, 2018. [Google Scholar]
- 42. Chen J. Integrated Care: Patient reported outcome measures and patient reported experience measures - A rapid scoping review. Sydney: NSW Agency for Clinical Innovation, 2015: 1–116. [Google Scholar]
- 43. Franklin P, Chenok K, Lavalee D, et al. Framework to guide the collection and use of patient-reported outcome measures in the learning healthcare system. EGEMS 2017;5:17. 10.5334/egems.227 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44. Batalden P, Corrigan J, Harrison W, et al. Enabling uptake of a registry-supported care and learning system in the United States: a report to the Robert wood Johnson Foundation from Karolinska Institutet and the Dartmouth Institute, 2014. [Google Scholar]
- 45. Canadian Institute for Health Information (CIHI). Patient-centred measurement and reporting in Canada: launching the discussion toward a future state. Ottawa, Ontario, 2017: 1–46. [Google Scholar]
- 46. NSW Agency for Clinical Innovation Patient reported measures – program overview. Chatswood: ACI, NSW 2018;18. [Google Scholar]
- 47. Canadian Institute for Health Information (CIHI). PROMs background document. Ottawa, Ontario, 2015: 1–38. [Google Scholar]
- 48. World Economic Forum. Value in healthcare: laying the foundation for health system transformation. Cologny/Geneva, Switzerland: World Economic Forum, 2017: 1–40. [Google Scholar]
- 49. Duckett S, Jorm C, Danks L. Strengthening safety statistics: how to make Hospital safety data more useful. The Grattan Institute, 2017. [Google Scholar]
- 50. Cappelleri J, Zou K, Bushmakin A, et al. Patient-reported outcomes: measurement, implementation and interpretation. Boca Raton, FL: CRC Press, 2014. [Google Scholar]
- 51. Raine R, Fitzpatrick R, Barratt H, et al. Patient-Reported outcome measures and the evaluation of services. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health: National Institute for Health Research, 2016. [Google Scholar]
- 52. Peterson A. Learning and understanding for quality improvement under different conditions - An analysis of quality registry-based collaboratives in acute and chronic care. Jönköping University, 2015. [Google Scholar]
- 53. Canadian Institute for Health Information (CIHI). CIHI PROMs forum proceedings. Ottawa, Ontario, 2015: 1–41. [Google Scholar]
- 54. Aaronson N, Elliott T, Greenhalgh J, et al. User’s Guide to Implementing Patient-Reported Outcomes Assessment in Clinical Practice. International Society for Quality of Life Research 2015;2:1–47. [Google Scholar]
- 55. Snyder C, Brundage M, Smith KC, et al. Testing ways to display patient-reported outcomes data for patients and clinicians. Washington, DC: Patient-Centered Outcomes Research Institute (PCORI), 2018: 1–163. [PubMed] [Google Scholar]
- 56. Dowding D, Randell R, Gardner P, et al. Dashboards for improving patient care: review of the literature. Int J Med Inform 2015;84:87–100. 10.1016/j.ijmedinf.2014.10.001 [DOI] [PubMed] [Google Scholar]
- 57. Ray-Barruel G, Ullman AJ, Rickard CM, et al. Clinical audits to improve critical care: Part 2: analyse, benchmark and feedback. Aust Crit Care 2018;31:106–9. 10.1016/j.aucc.2017.04.002 [DOI] [PubMed] [Google Scholar]
- 58. Colquhoun H, Michie S, Sales A, et al. Reporting and design elements of audit and feedback interventions: a secondary review. BMJ Qual Saf 2017;26:54–60. 10.1136/bmjqs-2015-005004 [DOI] [PubMed] [Google Scholar]
- 59. Eilayyan O, Visca R, Zidarov D, et al. Developing theory-informed knowledge translation strategies to facilitate the use of patient-reported outcome measures in interdisciplinary low back pain clinical practices in Quebec: mixed methods study. BMC Health Serv Res 2020;20:789. 10.1186/s12913-020-05616-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 60. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev 2012;6:CD000259. 10.1002/14651858.CD000259.pub3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 61. Christina V, Baldwin K, Biron A, et al. Factors influencing the effectiveness of audit and feedback: nurses' perceptions. J Nurs Manag 2016;24:1080–7. 10.1111/jonm.12409 [DOI] [PubMed] [Google Scholar]
- 62. Payne VL, Hysong SJ. Model depicting aspects of audit and feedback that impact physicians' acceptance of clinical performance feedback. BMC Health Serv Res 2016;16:260–72. 10.1186/s12913-016-1486-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 63. Hysong SJ, Best RG, Pugh JA. Audit and feedback and clinical practice guideline adherence: making feedback actionable. Implement Sci 2006;1:9. 10.1186/1748-5908-1-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 64. National Cancer Institute. Making data talk: a workbook. U.S. Department of Health and Human Services, National Institutes of Health, 2011. [Google Scholar]
- 65. Simpson SH. Creating a data analysis plan: what to consider when choosing statistics for a study. Can J Hosp Pharm 2015;68:311–7. 10.4212/cjhp.v68i4.1471 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 66. Cadilhac DA, Andrew NE, Lannin NA, et al. Quality of acute care and long-term quality of life and survival. Stroke 2017;48:1026–32. 10.1161/STROKEAHA.116.015714 [DOI] [PubMed] [Google Scholar]
Associated Data
Supplementary Materials
bmjopen-2020-038190supp003.pdf (62.4KB, pdf)
bmjopen-2020-038190supp001.pdf (13KB, pdf)
bmjopen-2020-038190supp002.pdf (23.6KB, pdf)

