Abstract
The aim of this special issue of the Canadian Journal of Program Evaluation was to present an overview of current practices in the field of evaluation of complex interventions. Seasoned evaluators described their approaches to these types of evaluation in the healthcare context. Building upon their contributions, this synthesis offers a cross-sectional reading of their experiences, highlighting the common and divergent features of their approaches as well as their most pressing concerns and interests.
Although this synthesis focuses on the articles in this special issue, it is enriched by the presentations and debates at a one-day knowledge transfer conference organized by the AnÉIS (Analyse et évaluation des interventions en santé), a CIHR-funded strategic training program. At that conference, attended by more than 40 members of Quebec’s healthcare research community, the experts who have authored this special issue presented the ideas that inform their articles. In the debate portion of the conference, presenters and attendees raised questions, responded to challenges, and expressed their views on the evaluation of complex interventions. From these exchanges, which explored the merits of current leading ideas in evaluation, new ideas emerged. In the following synthesis we highlight the most salient features of the articles and debates and draw lessons from the practices of these researchers working on the evaluation of complex healthcare interventions.
QUESTIONS AND LESSONS ON EVALUATING COMPLEX INTERVENTIONS
Is Every Intervention Complex?
At the conference, the presenters and participants—all specialists in the evaluation of interventions in healthcare contexts—expressed relatively similar beliefs about the complexity of healthcare interventions. From the presentations, it was clear that complexity results not only from the nature of these interventions, but also from the involvement of multiple actors, the social context and, more broadly, environments that are subject to change. Detailed analysis of problems, potential solutions, environments, and actors will inevitably lead evaluators to conclude that the factors influencing healthcare are diverse and complex. The experts describe this complexity in different ways. Some emphasize that interventions are organized systems of action that evolve over time and undergo a variety of unforeseeable environmental and situational disturbances. Others consider that an intervention’s complexity stems from the actors’ diverse and often paradoxical understandings, interests, and assessments of the intervention and its environment. Interventions are thus complex because they are understood and defined differently by a multitude of interdependent actors, including the evaluator, who at times becomes a significant actor in interpretation.
How Do We Understand Complexity?
What was striking about the presentations on the evaluation of complex interventions was the researchers’ and professionals’ confidence and comfort with the evaluation process. While much of the discussion focused on the considerable challenges of this kind of research, the evaluation experts were undaunted by them. In considering how to frame this synthesis, we noted an aspect of the day we had previously overlooked: the presenters’ surprising emphasis on the conceptual rather than the methodological aspects of their evaluations. After some reflection, we concluded that it was their conceptual approaches that allowed them to function effectively in the face of complexity. Rather than prioritizing any specific methodology, they used their conceptual models to position themselves in relation to the intervention and to guide their evaluation. In this way, methodology was secondary and followed naturally from their initial conceptual position.
What Is the Role of Knowledge in the Evaluation of Complex Interventions?
The evaluation experts concur on the need to integrate several forms of knowledge when studying interventions. Most agree that the key condition for successfully evaluating a complex intervention is to base the evaluation on theoretically oriented approaches that draw on diverse disciplines such as political science, economics, the social sciences, management, and epidemiology. These different forms of knowledge can be combined to create explanatory theories of intervention that are malleable and receptive to knowledge acquired in the field.
What Role Should Context Play in the Evaluation of Complex Interventions?
The experts agree on the importance of taking contextual variables into account in the evaluative process: the evaluation of interventions should be contingent, contextualized, and embedded within temporal, administrative, social, economic, and political realities. Conference participants concurred that, for interventions to be properly evaluated, they must be considered together with the specific contexts within which they unfold. At the same time, an intervention’s deployment and outcomes will modulate its context. This reciprocal relationship between intervention and context makes planning and carrying out an evaluation all the more complex. However, understanding this relationship helps those involved in the evaluative process develop a more nuanced assessment of the intervention’s functioning and performance.
In the debate, the experts stressed that the evaluation approach itself must also be taken into consideration in the evaluation process. By its very nature and functioning, the approach will influence the intervention, much as contextual factors do. More specifically, one expert pointed out that knowledge produced and shared over the course of the evaluation process will inevitably influence the intervention and its context even before the evaluation report is produced and disseminated.
What Are the Preferred Evaluation Methods?
The articles in this special issue offer no consensus on this question. Although the experts agree on the importance of using a theoretical approach in the evaluation process, no two of them adopt the same theory or approach in their practice. Contrary to what is often taught, choosing which theory or approach to use is not based on an understanding of the issue and the particular intervention, but rather on evaluators’ personal and disciplinary preferences, epistemological values, methodological beliefs, and understanding of the situation. Often, this results in an evaluator using the same theory or approach regardless of the case being examined.
Even if they diverge in their specific theoretical commitments, all the authors suggest using theories or approaches that allow for a comprehensive evaluation of the intervention, including its context and its environment. Thus, the selected theory or approach should make it possible to recognize, understand, and assess the relationships among all these elements. In addition, all the authors speak of the need to adapt the selected theory and approach to the evaluation process, the context, and the intervention under study. This calls for a great deal of creativity to produce valid knowledge that makes sense for all the actors, whether or not they are involved in the evaluation process.
The choice of theory or approach also appears to influence the plans developed to evaluate a complex intervention. For all the authors, a plan helps to control the evaluation process, particularly when evaluating complex interventions, because it anchors the methodology in time and space. Even though the classic formula “gross effect = net effect + external effect + bias” does not directly apply to the evaluation of complex interventions, this should not lead to methodological nihilism. The key to a good evaluation is to recognize the almost unlimited number of continually evolving factors (i.e., the context) that affect the intervention’s functioning and performance, and to integrate them into a valid and consistent approach. This stance is contrary to that of experimental research, which seeks to control contextual influences in order to bring out the intervention’s “pure” effects. In contrast, the authors highlight the importance, in evaluation, of preserving the context and the explanatory richness of its influences.
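Written out, the classic decomposition quoted above can be expressed as follows; the subscripted symbols are our own shorthand for its four terms, not notation used by the authors:

\[
E_{\text{gross}} = E_{\text{net}} + E_{\text{external}} + B
\]

where \(E_{\text{gross}}\) is the total observed change, \(E_{\text{net}}\) the change attributable to the intervention itself, \(E_{\text{external}}\) the change produced by extraneous contextual processes, and \(B\) the distortion introduced by the evaluation design and measurement. The authors’ point is that, for complex interventions, \(E_{\text{external}}\) cannot be cleanly isolated and subtracted away; it must instead be understood and integrated into the analysis.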
What Role Should the Evaluator Play?
The authors agree that the evaluator’s first role is that of expert. For some, this role of evaluation expert is expressed as the capacity to develop a valid and rigorous evaluation process for judging the intervention. Others see this role as the evaluator’s ability to translate theoretical and empirical knowledge into a format that will promote the learning of all the actors in a given intervention. They believe that, to present an accurate portrait of the intervention’s complexity, the evaluator needs to retain, consider, and consolidate all possible interpretations. Evaluators also need to be sensitive to power dynamics and must ensure that proper weight is given to the positions taken by all the actors.
There is broad consensus that evaluators construct contextualized knowledge over the course of the evaluation process. This empirically based knowledge must then be incorporated into the theories that will be used to further situate and analyze the intervention. In folding empirical information into a theoretical or conceptual framework, the evaluator walks a fine line: the knowledge constructed must adequately represent the intervention’s specific issues without being too complex for immediate use by decision-makers, yet it must not be oversimplified to the point that the intervention’s environment appears to play no role in its functioning and effects. Achieving this balance allows knowledge to be produced that is contextualized, valid, and accessible. Such knowledge can be assimilated and shared by the actors, who will translate it into concrete actions to improve the intervention.
Many presenters and participants in the debate suggested that the evaluator should be an artful communicator, sharing complex and sometimes incomplete knowledge in a simple and user-friendly form. They clearly concurred that the evaluator is a major change agent. Ideally, the evaluator will transform knowledge produced by the evaluation into concrete action that can bring tangible improvements.
The vast majority of the articles specify that the evaluator’s role is to ensure that the product of the evaluation is ultimately useful. The evaluation should help to improve the intervention, with some authors even suggesting that it should help to correct the problematic situations that motivated the intervention’s implementation in the first place.
The articles raise questions about the evaluator’s ability to be an independent actor in the evaluation process and to offer a truly objective judgement about a complex intervention’s performance. How can we ensure that the evaluator will not be more influenced by the interpretations of the evaluation’s sponsor than by those of other actors more or less involved in the intervention’s functioning? This question inspired a general consensus: the evaluator’s role is to formulate a generalizable and integrative understanding of the different perspectives, whether these are openly shared or left unexpressed.
At the conference, participants in the debate appeared uncomfortable with accepting the evaluator’s subjectivity. Evaluation is often seen as a controlled, objective process. However, the more complex the context, the more obvious it becomes that the evaluator must make subjective decisions about the criteria by which the intervention should be judged. Acknowledging that evaluation is a subjective process is at odds with the desire to see it as a scientific, externally valid, reproducible process. It is perhaps in acknowledging the “art” of the evaluation process that complexity will become easier to manage. This may liberate evaluators from trying to present subjective but valuable insights as objective and scientific, much as a physician’s clinical “art” is valued while being different from the apparent “science” of laboratory studies or randomized controlled trials.
PRACTICE-BASED EVALUATION, A RESPONSE FOR EVALUATING COMPLEX INTERVENTIONS?
Although they differ in several aspects, the articles in this issue encourage the evaluator not to fear complexity. The authors suggest different and valid ways of confidently grasping the complexity of the intervention being evaluated.
In contrast to the experimental approach, the authors have no gold standard to propose. In presenting the diversity of approaches that can be used to evaluate a complex intervention, they open the door to many possibilities. They highlight the value of a pragmatic construction based on the evaluator’s training and expertise. A cross-sectional reading of the articles in this issue brings out the authors’ instrumental views of how to conceptualize the intervention; they all consider such conceptualization to be an essential step in evaluation. Furthermore, unlike experimental approaches that try to control context in order to measure effects due uniquely to the intervention, the authors of this special issue all suggest incorporating the influence of context into the interpretation of the intervention’s logic and of the mechanisms producing its effects.
In summary, the evaluation of complex interventions may appear to be a flawed process: in contrast to the controlled evaluation settings of randomized controlled trials, there will never be an ideal setting in which to carry out this type of research. In evaluating complex interventions, rather than aiming at the controlled and highly reproducible, we do what we can. That said, we do not believe this limitation, or so-called “flaw,” undermines the validity of the evaluation of complex interventions. Acknowledging and accepting this complexity produces an understanding of interventions that is more relevant to the lived realities of those engaged in them. Rather than opting for control over experimental conditions, as in experimental research, evaluators find that recognizing the complexity of the causal, or correlational, relationships underlying an intervention’s effects leads them toward approaches that actually support the study of these relationships in real implementation contexts. This pragmatic way of conducting evaluations, born of the authors’ experience and expertise, is clearly practice-based evaluation of complex interventions.
Biographies
Nathalie Dubois holds a doctorate in public policy analysis and management and has also completed a postdoctoral training program in health administration. Currently, she is a researcher at Direction de santé publique de l’Agence de la santé et des services sociaux de Montréal, where she is responsible for planning and implementing research and knowledge transfer activities. A research affiliate at École nationale d’administration publique and with the Canada Research Chair in Governance and Transformation of Health Organizations and Systems, Dubois oversees various substantial funded research projects. Her research program falls within the area of public health policy and program evaluation, and targets four specific research themes: complexity of interventions, conceptualization and evaluative methodologies, validation and use of research results, and knowledge transfer.
Stephanie Lloyd is a medical anthropologist and Assistant Professor of Psychiatry at McGill University, based out of the Douglas Mental Health University Institute, Verdun, Quebec. Her interests include the ways that the “psy” disciplines inform how we imagine ourselves as individuals and societies as well as how we understand our actual and possible states. She is currently writing a book, From Subjects to Selves: Social Anxiety Disorder and the Colonization of French Psyches, on the increasing adoption of globalized diagnostic and treatment practices in France and the relationship of these practices with French citizens’ perceptions of society and the “social bond.”
Janie Houle is a community psychologist and professor in the Department of Psychology at the Université du Québec à Montréal. Her research program focuses on the self-management of chronic diseases, especially mental disorders such as depression and anxiety. She has conducted program evaluations in the mental health and suicide prevention fields.
Céline Mercier, Ph.D. in psychology, is a professor in the Department of Social and Preventive Medicine at the University of Montreal and an associate professor in the Department of Psychiatry at McGill University. Her areas of expertise include the evaluation of policies and programs in domains related to health and social services, such as substance abuse, homelessness, mental health, intellectual disabilities, and autism spectrum disorders. Her current research grants pertain to the evaluation of public early intervention autism programs. As a senior consultant at the Montreal WHO Collaborating Centre, she has worked on issues related to the implementation of reforms in mental health services. She was closely involved in the adoption of the “Montreal Declaration of Intellectual Disabilities” (www.declarationmontreal.com) and was part of the team that produced the WHO Atlas: Global Resources for Persons with Intellectual Disabilities (www.who.int/mental_health/evidence/atlas_id_2007.pdf).
Astrid Brousselle is an associate professor in the Department of Community Health Sciences at the Université de Sherbrooke and a researcher at the Charles LeMoyne Hospital Research Center. She holds a Canada Research Chair in Evaluation and Health System Improvement–EASY (CIHR-FRQS). Dr. Brousselle contributes to the development of innovative evaluation approaches, with the objective of using evaluation as a lever for health system improvement. She applies innovative methods to various areas of research in public health and health system organizations.
Lynda Rey is a Ph.D. student in Public Health at the University of Montreal, and holds a degree in political science, with an international relations specialization, from the Institut d’Études Politiques of Aix-en-Provence, and a Master’s in cooperation and international development policies from the Sorbonne University in Paris. In recent years she has worked in international cooperation as a project officer for health (HIV/AIDS, maternal mortality) and human rights. Her research interests are participatory approaches in evaluation, health systems organizations, and health promotion. Her doctoral dissertation focuses on evaluating the implementation of the WHO’s Health Promoting Hospitals concept in a birthing centre.
Contributor Information
Nathalie Dubois, École nationale d’administration publique and Public Health Department of the Agence de la santé et des services sociaux de Montréal, Montréal, Québec.
Stephanie Lloyd, McGill University, Montréal, Québec.
Janie Houle, Université du Québec à Montréal, Montréal, Québec.
Céline Mercier, Université de Montréal, Montréal, Québec.
Astrid Brousselle, Université de Sherbrooke, Longueuil, Québec.
Lynda Rey, Université de Montréal, Montréal, Québec.