CMAJ : Canadian Medical Association Journal. 2018 Nov 7;190(Suppl):S42–S43. doi: 10.1503/cmaj.180334

Engaging patients to select measures for a primary care audit and feedback initiative

Noah M Ivers, Alies Maybee; for the Ontario Healthcare Implementation Laboratory team
PMCID: PMC6472449  PMID: 30404852

KEY POINTS

  • Little is known about how best to involve patients in the development of audit and feedback initiatives that are used to improve health care quality.

  • We recruited panels of patients to help us rank quality indicators that could be selected for use in primary care to ensure that audit and feedback initiatives are aligned with patient priorities, but the exercise did not go as planned.

  • We learned that it may be best to engage patients earlier in the process of the research to inform planning of proposals; to clarify desired roles of participants, advisors and partners; and to ensure that recruited patients’ abilities fit the tasks demanded of them.

  • If patient engagement does not proceed as planned, it can signal opportunities to take a new approach to the entire research question and study design.

To help health providers identify and close the gaps between ideal evidence-based practices and the care routinely received by patients, health systems often use audit and feedback, which involves reporting to health providers on their performance to highlight where improvement efforts may be needed.1 Little is known about how best to involve patients in the development of such initiatives. We undertook a project at Health Quality Ontario (HQO) with the aim of ensuring that audit and feedback initiatives were aligned with patient priorities. We learned that engaging patients can redirect research activities and reveal opportunities to take a new approach to the entire research question.

Health Quality Ontario is the province of Ontario’s advisor on health care quality, with a mandate to monitor and report on health-system performance. It launched an audit and feedback initiative for primary care in 2015 with a focus on data pertaining to diabetes, cancer screening and health service use. A partnership was established between implementation scientists and HQO to test ways to maximize the impact of these initiatives.2 A key step was to reconsider which quality indicators to include in the audit and feedback initiative and to choose priority measures from the 291 quality indicators included in HQO’s Primary Care Performance Measurement Framework.3 To meet our project aim of collating patient perspectives to inform the selection of quality indicators, our multidisciplinary team, including a patient partner (A.M.), planned several “patient panels” of 6–10 patients each to discuss and prioritize indicators. Because the audit and feedback from HQO relies on administrative databases to measure performance at the level of the physician, we steered the panel discussions toward quality indicators that could be measured using administrative data, to ensure the selected indicators would be measurable and directly actionable by HQO.

Our panels were recruited through patient and family advisory committees and partner organizations. Interested individuals were surveyed as to their age, sex, household income and visible minority status, to ensure demographic diversity across the panels. We arranged three panels in southern Ontario and two in northern Ontario; four panel discussions were conducted in English and one in French. Before their participation in the panels, patients received a video (www.youtube.com/watch?v=fehuZdufItE), a panel member fact sheet and a sample of the audit and feedback reports distributed by HQO.4 They were asked to review quality indicators and rate their importance on a Likert scale of 1–9. With the help of a patient partner, we reworded and grouped indicators to simplify this task. For instance, “% with new congestive heart failure who have a left ventricular function test,” “% with coronary artery disease who received the following tests within the last 12 months: HbA1C or fasting blood sugar, lipid profile, blood pressure measurement, obesity screening” and other similar indicators were consolidated to “The percentage of patients with a heart condition who received the appropriate tests.”

Each panel discussion lasted three to four hours, during which a facilitator helped clarify the indicators and guided discussion to inform a reranking. Panellists were asked to reflect on their own experiences, but also to recognize that quality indicators needed to represent best practices for the population. Research team members took notes. If direct questions arose that required clinical information (e.g., How likely is a poor outcome if a certain action is not taken?), the clinician scientist lead (N.M.I.) provided evidence in a neutral manner to aid discussions. To capture suggestions not measurable using administrative databases (e.g., patient-centredness and access), the team used a “parking lot,” an idea generated by the patient partner. The facilitator, assisted by the patient partner, helped patients explore the parking lot ideas at the end of each panel discussion.

Panellists recognized the importance of measuring indicators. They did not, however, believe the quality indicators they were asked to rank were the best indicators to be the focus for improvement. For many, it came as a surprise that technical quality of care (i.e., indicators that would fall in the Institute of Medicine’s effectiveness domain) was not reliably delivered. Panellists valued communication skills over the task-oriented items that were readily measurable, and the limitations in measurement capacity for communication indicators were a source of frustration.

Our initial goal was to provide HQO with a patient-informed ranking of specific quality indicators for their audit and feedback, but this was not achieved. Our best efforts at providing multimedia background information were insufficient for panellists to understand the relevance of clinical and evidence-based medicine principles. By the end of the second panel discussion, these difficulties had become obvious and we altered our approach. Rather than trying to achieve a numerically ordered ranking of indicators, the facilitator focused on the criteria panellists used when reflecting on the relative importance of indicators. In addition to a focus on patient-centredness and access, panellists emphasized that quality indicators should be weighted more favourably if they address preventive services and actions that might be cost-effective in the long run, and if they address management of common conditions in which severe outcomes may occur if management is substandard or where outcomes may be inequitable among population groups. Thus, quickly adapting the research process gave us the chance to capture unexpected and valuable insights on criteria for prioritization, which HQO can use in selecting future measures for their audit and feedback. Other inputs for the design of HQO’s initiative will include interviews with physician recipients and evidence from the literature.

On reflection, the panel discussions made clear that panellists had difficulty in moving beyond their own experience to think in terms of population health and systems, which raised questions regarding the fit between the patients involved and our initial project goals. A clear role description at the outset, including required abilities and background, may have helped us to engage a different set of patients as advisors. Additionally, a more intensive approach to onboarding patients, to ensure they could fill the desired role, may have been beneficial. We devised our approach based on a desire to inform a specific initiative — the audit and feedback that HQO sends to family physicians. This intervention involves numerous constraints that predictably affected the discussion in challenging ways. There are many systematic approaches to involving patients in setting priorities in health care, including the James Lind Alliance approach,5 citizens’ councils and more.6,7 We chose to follow a more streamlined process, but we might have chosen a different approach — or even a different question — if more patients had been engaged at the time of writing the proposal. More research is needed to inform best practices to enable patient priorities to drive the selection of indicators in audit and feedback in different contexts.

We were able to engage patients in a process that will help inform the selection of quality indicators for primary care, but not as we anticipated. Yet patient input into this process will ultimately ensure that primary care providers focus their quality-improvement efforts in ways that are aligned with patient priorities.

Acknowledgements

The authors acknowledge the staff at Health Quality Ontario for their help in making this, and related work, possible. Outside of research projects like this one, Health Quality Ontario’s patient engagement related to measurement does not start with a predefined list of measures for patients to rank, but starts with asking patients what matters most to them in a given health care situation. This input is then linked to measures that have also been prioritized by the clinical community. The authors also thank the patient panellists for their invaluable insights and time.

Footnotes

More information on this project is available at www.ossu.ca/IMPACTAwards.

Competing interests: Noah Ivers received grant funding for the submitted work from OSSU (the Ontario SPOR [Strategy for Patient-Oriented Research] SUPPORT [Support for People and Patient-Oriented Research and Trials] Unit), which is co-funded by the Canadian Institutes of Health Research (CIHR) and the province of Ontario. Alies Maybee has no conflicts to disclose.

This article was solicited and has been peer reviewed.

Contributors: Both authors, and the other contributing members of the Ontario Healthcare Implementation Laboratory team listed below, contributed to the conception and design of the work, drafted the manuscript, revised it critically for important intellectual content, gave final approval of the version to be published and agreed to be accountable for all aspects of the work.

Funding: This project was funded by a grant from OSSU, which is co-funded by CIHR and the province of Ontario. The opinions, results and conclusions reported in this paper are those of the authors and are independent from the funding sources.

References

