Abstract
Patient engagement practices are increasingly incorporated in health research, governance, and care. More recently, a large number of evaluation tools and metrics have been developed to support engagement evaluation. This growing interest in evaluation reflects a maturation of the patient engagement field, moving from a "craft" to a reflective "art and science," with more explicit expected benefits and risks, better understood conditions for success and failure, and increasingly rigorous evaluation instruments to improve engagement theories and interventions. It also supports a more critical view of engagement science, moving beyond reductionist views of engagement as a "black box technology" to a more subtle view of this broad category of complex interventions. Structured evaluation can advance patient engagement by supporting more reflective partnerships between patients, clinicians, health system leaders and citizens. This can help clarify mutual (and potentially contradictory) expectations toward engagement, provide a reality check toward claims of benefits and harms, and increase health systems’ capacity to implement effective engagement practices over time. To do so, closer collaborations are required between engagement scientists and practitioners to align the theories, practice and evaluation of patient and community engagement.
Keywords: Patient and Citizen Engagement, Evaluation, Health Research, Policy
Introduction
Patient engagement practices and programs are increasingly being incorporated in different areas of healthcare, including health policy,1 quality improvement,2 health technology assessment,3 clinical practice guidelines,4 research,5 priority-setting,6 and clinical care.7 More recently, a number of evaluation tools and metrics have been developed to support engagement evaluation. Searching the literature from 1973 to 2015, Dukhanin and colleagues identified 23 evaluation tools for patient engagement in healthcare organizations and system-level decision-making, 87% of which were published after 1996.8 A similar growth in evaluation instruments for patient engagement in clinical care, research and community health programs was found in other complementary systematic reviews.9-11
Mapping Evaluation Metrics: The Need to Align Theories and Practice
An original contribution of Dukhanin’s review is its proposed taxonomy of evaluation metrics. The authors built this taxonomy inductively, through reviewers’ descriptive understanding of what each metric was intended to measure. This mapping helps illustrate which dimensions of engagement have received more or less measure-development effort (eg, the dominance of process over outcome measures). The taxonomy could have been strengthened by distinguishing engagement structures (eg, resources and training provided to participants and staff) from engagement processes (eg, respect, trust and transparency).12 Careful alignment between engagement theories and evaluation metrics is also necessary to properly interpret and use such a taxonomy, given the complexities of the engagement field. For example, process evaluation criteria based on participants’ representativeness may be appropriate for engagement methods that consult large numbers of patients and community members (eg, focus groups and surveys) but could be inappropriate for partnership methods in which a patient selected on the basis of their experiential knowledge and competencies co-leads a project with a clinician.13
The Maturation of the Field
Overall, the growing interest in patient engagement evaluation documented in this systematic review reflects the maturation of the field, moving from a “craft” (with rarely evaluated engagement initiatives) to an “art and science” (with more explicit expected benefits and risks, better understood conditions for success and failure, and increasingly rigorous evaluation instruments to improve engagement theories and interventions). This evolution of patient engagement practice and evaluation has run through different phases over time.
An Ethical and Political Imperative
Decades ago, a number of authors and organizations called for patient and citizen engagement, based on the ethical, legal and political principles of autonomy, participative democracy and self-determination. Back in 1969, Sherry Arnstein argued that citizen participation was fundamentally about the redistribution of power. At the international level, the Alma-Ata Declaration (1978) and the Ottawa Charter (1986) affirmed that people have the right to participate individually and collectively in the planning and implementation of healthcare.14,15 These ethical and political principles have spurred the growth of formal engagement structures and mechanisms within governments and healthcare organizations (eg, advisory committees, citizen juries, public hearings and consultations).
However, structured evaluation of engagement implementation and its effects on health policies, services and care has often lagged behind, fueling criticisms of “tokenistic engagement.” The focus on process evaluation metrics in Dukhanin’s review may reflect a persistently dominant view that “good” patient engagement is primarily about following fair and transparent processes. Evaluating effectiveness also raises specific difficulties. In 1981, Rosener listed four challenges for evaluating the effectiveness of public participation: (1) the complex and value-laden nature of the concept; (2) the absence of consensus on criteria to judge success and failure; (3) the lack of agreed-upon evaluation methods; and (4) the scarcity of reliable measurement tools.16
Testing the “Black Box Technology”
A number of research studies and systematic reviews in the 1990s and early 2000s attempted to answer questions about the effectiveness of patient and citizen engagement, using evidence-based medicine paradigms and methods. This approach assumed that patient engagement is a technology whose effectiveness, benefits and harms can be tested with the same evaluation methods as other health technologies, like drugs or dialysis (Health Affairs metaphorically dubbed patient engagement the “blockbuster drug of the century”).17 Early systematic reviews concluded that there was an absence of evidence for the effectiveness of patient engagement in collective decision-making (eg, policy-making, research, health technology assessment, clinical guidelines or priority-setting) because of the paucity of randomized controlled trials.18,19 More importantly, classic effectiveness studies have left engagement scientists and practitioners living in separate worlds: scientists testing “black box technologies” (does it work?) with little insight into the designs and underlying mechanisms of patient engagement interventions (how does it work?), leaving practitioners with few evidence-based recommendations on how to improve practices.20 This divide between engagement scientists and practitioners has also pitted proponents of engagement grounded in ethical principles against more instrumental views of engagement, whose value would depend on its proven impacts.21
Moving Toward a Reflective Art and Science of Engagement
Recent growth in innovative evaluation designs and engagement-specific evaluation metrics has supported more comprehensive approaches to evaluation. For example, process evaluations of cluster randomized trials22 and realist evaluations of engagement projects23,24 have conceptualized patient engagement as a set of complex interventions whose participants pursue different (and potentially contradictory) goals and whose effectiveness is highly dependent upon context.25 These evaluations have helped to better understand interactions between engagement context, processes and outcomes. This, in turn, supports the emergence of a reflective approach to the art and science of engagement.
Schon argues that competent practitioners “usually know more than they can say” and exhibit a knowledge-in-practice that is mostly tacit and intuitive.26 From a “reflective art” perspective, evaluation can help engagement practitioners reflect on their own practice and better understand the key elements of engagement interventions that explain variations in effectiveness. Looking beyond the technological assumptions of engagement as a set of methods and techniques (eg, deliberative polling vs. nominal group technique), in-depth evaluations of engagement interventions can help in understanding more subtle aspects of the process (eg, how group facilitators shape patients’ expression of opinions and influence).22,27 In doing so, evaluation moves from causal analysis to a contribution analysis perspective, seeking to understand the contribution of specific elements (eg, patient engagement) within larger partnerships involving multiple people and organizations.28
From a “reflective science” perspective, more critical and nuanced approaches to evaluation can also help raise awareness of the epistemic tensions that patient engagement raises for the activity of science itself (eg, questioning whose knowledge is recognized as valid).29 As patient engagement increasingly influences core health science activities (eg, research priority setting, questions, funding and publishing), scientists’ personal attitudes toward engagement can influence their interpretation of evidence in the area.30
Building a Coalition of Engagement Practitioners and Scientists
Closer collaboration between engagement practitioners and scientists, brought together around “evaluation coalitions” pursuing different and complementary evaluation goals (eg, formative evaluation to improve local practices vs. strengthening scientific evidence on engagement), can further support this reflective approach toward the art and science of engagement. Aligned with principles of learning health systems,31 the Center of Excellence for Patient and Public Partnership (https://ceppp.ca) offers an example of a hub bringing together engagement practitioners and scientists. Co-led by a team of patients, clinicians and researchers, the Center aims at improving the practice and science of patient and public partnership in health education, research, and care. From a practice standpoint, the Center’s Partnership School supports engagement interventions (eg, training healthcare leaders and patients to co-lead improvement projects together). Building on these real-world experiments, the Center’s Partnership Lab supports the evaluation of partnership interventions to facilitate the improvement of practices over time and contribute to engagement research.32 This partnership science experience highlights conditions for successful collaboration between practitioners and researchers (trust building, power-sharing, co-governance) that echo those identified in the community-based participatory research literature.23
Future Steps
In conclusion, the field of patient engagement evaluation has evolved considerably in the past decades. From an ethical and political ideal, the patient engagement field has matured, both from a practice standpoint (with more refined methods and professionalization of activities such as group facilitation) and an evaluation standpoint (with an increasing number of rigorous evaluation tools). In order to fully contribute to the “art and science” of engagement, greater collaboration is required between engagement practitioners and scientists, while keeping in mind the ethical, epistemological and political tensions that are inherent to patient engagement. More informed dialogue between engagement practitioners and scientists could help clarify mutual (and potentially contradictory) expectations toward engagement, provide a reality check toward claims of benefits and harms, and increase health systems’ capacity to implement effective engagement practices over time.
Acknowledgements
The author acknowledges the contribution of Audrey l’Esperance, Agustina Gancia, and Alexandre Berkesse from the Center of Excellence on Patient and Public Partnership, Montreal, QC, Canada for comments on previous versions of the manuscript, as well as input from anonymous reviewers.
Ethical issues
Not applicable.
Competing interests
The author declares that he has no competing interests.
Author’s contribution
AB is the single author of the paper.
Funding
AB receives financial support from the Canada Research Chair program (http://www.chairs-chaires.gc.ca).
Citation: Boivin A. From craft to reflective art and science: Comment on "Metrics and Evaluation tools for patient engagement in healthcare organization- and system-level decision-making: a systematic review." Int J Health Policy Manag. 2019;8(2):124–127. doi:10.15171/ijhpm.2018.108
References
- 1. Conklin A, Morris ZS, Nolte E. Involving the Public in Healthcare Policy. RAND Europe; 2010:1–83.
- 2. Bombard Y, Baker GR, Orlando E, et al. Engaging patients to improve quality of care: a systematic review. Implement Sci. 2018;13(1):98. doi:10.1186/s13012-018-0784-z
- 3. Facey K, Boivin A, Gracia J, et al. Patients’ perspectives in health technology assessment: a route to robust evidence and fair deliberation. Int J Technol Assess Health Care. 2010;26(3):334–340. doi:10.1017/s0266462310000395
- 4. Boivin A, Currie K, Fervers B, et al. Patient and public involvement in clinical guidelines: international experiences and future perspectives. Qual Saf Health Care. 2010;19(5):e22. doi:10.1136/qshc.2009.034835
- 5. Domecq JP, Prutsky G, Elraiyah T, et al. Patient engagement in research: a systematic review. BMC Health Serv Res. 2014;14:89. doi:10.1186/1472-6963-14-89
- 6. Mitton C, Smith N, Peacock S, Evoy B, Abelson J. Public participation in health care priority setting: a scoping review. Health Policy. 2009;91(3):219–228. doi:10.1016/j.healthpol.2009.01.005
- 7. Olsson LE, Jakobsson Ung E, Swedberg K, Ekman I. Efficacy of person-centred care as an intervention in controlled trials - a systematic review. J Clin Nurs. 2013;22(3-4):456–465. doi:10.1111/jocn.12039
- 8. Dukhanin V, Topazian R, DeCamp M. Metrics and evaluation tools for patient engagement in healthcare organization- and system-level decision-making: a systematic review. Int J Health Policy Manag. 2018;7(10):889–903. doi:10.15171/ijhpm.2018.43
- 9. Boivin A, L’Esperance A, Gauvin FP, et al. Patient and public engagement in research and health system decision making: a systematic review of evaluation tools. Health Expect. 2018. doi:10.1111/hex.12804
- 10. Bowen DJ, Hyams T, Goodman M, West KM, Harris-Wai J, Yu JH. Systematic review of quantitative measures of stakeholder engagement. Clin Transl Sci. 2017;10(5):314–336. doi:10.1111/cts.12474
- 11. Phillips NM, Street M, Haesler E. A systematic review of reliable and valid tools for the measurement of patient participation in healthcare. BMJ Qual Saf. 2016;25(2):110–117. doi:10.1136/bmjqs-2015-004357
- 12. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44(3 Suppl):166–206.
- 13. Martin GP. Representativeness, legitimacy and power in public involvement in health-service management. Soc Sci Med. 2008;67(11):1757–1765. doi:10.1016/j.socscimed.2008.09.024
- 14. Declaration of Alma-Ata. International Conference on Primary Health Care, Alma-Ata, USSR; September 6-12, 1978:1-3.
- 15. World Health Organization. The Ottawa Charter for Health Promotion. http://www.who.int/healthpromotion/conferences/previous/ottawa/en/. Published 1986.
- 16. Rosener JB. User-oriented evaluation: a new way to view citizen participation. J Appl Behav Sci. 1981;17(4):583–596. doi:10.1177/002188638101700412
- 17. Dentzer S. Rx for the ‘blockbuster drug’ of patient engagement. Health Aff (Millwood). 2013;32(2):202. doi:10.1377/hlthaff.2013.0037
- 18. Crawford MJ, Rutter D, Manley C, et al. Systematic review of involving patients in the planning and development of health care. BMJ. 2002;325(7375):1263. doi:10.1136/bmj.325.7375.1263
- 19. Nilsen ES, Myrhaug HT, Johansen M, Oliver S, Oxman AD. Methods of consumer involvement in developing healthcare policy and research, clinical practice guidelines and patient information material. Cochrane Database Syst Rev. 2006;(3):CD004563. doi:10.1002/14651858.CD004563.pub2
- 20. Abelson J, Gauvin F. Assessing the Impacts of Public Participation: Concepts, Evidence and Policy Implications. Research Report P06. Canadian Policy Research Networks. http://www.cprn.org/documents/42669_fr.pdf. Published March 2006. Accessed March 12, 2014.
- 21. Edelman N, Barron D. Evaluation of public involvement in research: time for a major re-think? J Health Serv Res Policy. 2016;21(3):209–211. doi:10.1177/1355819615612510
- 22. Boivin A, Lehoux P, Burgers J, Grol R. What are the key ingredients for effective public involvement in health care improvement and policy decisions? A randomized trial process evaluation. Milbank Q. 2014;92(2):319–350. doi:10.1111/1468-0009.12060
- 23. Jagosh J, Bush PL, Salsberg J, et al. A realist evaluation of community-based participatory research: partnership synergy, trust building and related ripple effects. BMC Public Health. 2015;15:725. doi:10.1186/s12889-015-1949-1
- 24. Evans D, Coad J, Cottrell K, et al. Public involvement in research: assessing impact through a realist evaluation. Health Services and Delivery Research. 2014;2(36):1–128. doi:10.3310/hsdr02360
- 25. Rowe G, Frewer LJ. Evaluating public-participation exercises: a research agenda. Sci Technol Hum Values. 2004;29(4):512–556. doi:10.1177/0162243903259197
- 26. Schon DA. The Reflective Practitioner: How Professionals Think in Action. Routledge; 2017.
- 27. Abelson J, Forest PG, Eyles J, Casebeer A, Martin E, Mackean G. Examining the role of context in the implementation of a deliberative public participation experiment: results from a Canadian comparative study. Soc Sci Med. 2007;64(10):2115–2128. doi:10.1016/j.socscimed.2007.01.013
- 28. Wimbush E, Montague S, Mulherin T. Applications of contribution analysis to outcome planning and impact evaluation. Evaluation. 2012;18(3):310–329. doi:10.1177/1356389012452052
- 29. Carel H, Kidd IJ. Epistemic injustice in healthcare: a philosophical analysis. Med Health Care Philos. 2014;17(4):529–540. doi:10.1007/s11019-014-9560-2
- 30. Becker S, Sempik J, Bryman A. Advocates, agnostics and adversaries: researchers’ perceptions of service user involvement in social policy research. Soc Policy Soc. 2010;9(3):355–366. doi:10.1017/S1474746410000072
- 31. Foley T, Fairmichael F. The Potential of Learning Healthcare Systems. The Learning Healthcare Project; 2015.
- 32. Gancia A, David G, Gregoire A, Wong C, Boivin A. Evaluating patient partnerships formed within Quebec’s SPOR Support Unit at the research project, network and governance levels. KT Canada Annual Scientific Meeting; 2018; Vancouver, BC, Canada.