Published in final edited form as: Can J Program Eval. 2011 Jan 1;26(3):v–xx.

HOW DOES COMPLEXITY IMPACT EVALUATION? AN INTRODUCTION TO THE SPECIAL ISSUE

Brenda J Zimmerman 1, Nathalie Dubois 2, Janie Houle 3, Stephanie Lloyd 4, Céline Mercier 5, Astrid Brousselle 6, Lynda Rey 7

In recent years, the evaluation of complex interventions has been of great interest to members of the evaluation community. In this special issue of the Canadian Journal of Program Evaluation, we explore how evaluators, in their practice, approach the evaluation of complex interventions. The aim of this issue is not to provide exhaustive coverage of all the different ways of evaluating complex interventions, but rather to inform practitioners on the experiences of seasoned evaluators.

This special issue is an account of a training and knowledge transfer activity carried out within the framework of the AnÉIS (Analyse et évaluation des interventions en santé; Analysis and Evaluation of Health Interventions) program funded by the Canadian Institutes of Health Research. At a one-day knowledge exchange conference in 2009, attended by more than 40 members of the health research community, experienced evaluators shared their approaches to evaluating complex interventions. All of these researchers work in the healthcare field, but the lessons derived from their practice can readily be applied to other fields as well.

Complex interventions are characterized by structural, cognitive, and social complexity. The healthcare context has become structurally more complex, with more players involved in delivery (treatment, diagnosis, and prevention) and an increasing variety of relationships among those players. There are more players to pay attention to and a higher degree of interdependence in their decisions. This structural complexity also creates cognitive complexity, in that it is increasingly difficult to make valid or accurate predictions about the system; indeed, there are multiple definitions of what even constitutes the “healthcare system.” Social complexity arises from the high level of conflict or disagreement among the many players in the system. These three types of complexity—structural, cognitive, and social—lead to evaluation challenges. Evaluators in healthcare need to acknowledge the complexity inherent in many of their evaluation projects, describe its nature, and prescribe approaches consistent with the complexity so defined.

As complex systems, complex interventions are characterized by uncertainty or unpredictability, interdependence among a large number of actors who themselves adapt and co-evolve, emergent outcomes created by the connections or relationships in the system, and nonlinearity, such that outputs and inputs are not directly correlated (Shiell, Hawe, & Gold, 2008). This creates a very messy system to understand.

If complexity exists, what strategies should be employed to address it? Two opposite approaches are complexity reduction and complexity absorption. Traditionally, complexity reduction has been seen as a prudent and appropriate evaluation approach. By reducing complexity, the evaluator may achieve clarity, create focus, and develop practical ideas for users of the evaluations. This special issue does not argue with the innate desire or rationale for reducing complexity. However, complexity reduction has its limits. Many attempts to reduce complexity in healthcare systems have ended up having little positive, or even a negative, impact on clarity, focus, and utility, because the complexity itself is often precisely what needs to be understood to produce meaningful results in an evaluation.

The strategic choice between complexity reduction and complexity absorption is a contingency choice. If the nature of the challenge is such that complexity (structural, cognitive, or social) cannot be reduced without reducing the resonance and utility for the users of an evaluation, then absorbing complexity will be more effective in the long run. In addition, absorbing complexity creates new, more effective ways to produce reports that are relevant to the realities of those engaged in the interventions. At times there are inherent opportunities to create more effective and efficient evaluations by embracing or absorbing complexity. By acknowledging and absorbing complexity, evaluators in healthcare systems can increase understanding, creativity, adaptability, and learning, leading to innovative solutions.

Successful evaluation of complex interventions requires recognizing, acknowledging, surfacing, and addressing the paradoxes inherent in complex systems. It is important to note the difference between a paradox and a contradiction. Both involve opposites. In a contradiction, both sides cannot be true at the same time. Not so with a paradox. This is both a factual and a conceptual challenge for leaders and evaluators. Opposites may in fact be contradictions; however, breakthrough thinking often requires reframing a contradiction as a paradox. In healthcare, even today, we far too often fall into the conceptual trap of treating the tensions in the industry as “facts of contradiction” rather than asking seemingly naive paradoxical questions. For example, evaluators might be asked to look at the effectiveness of a complex intervention that has the seemingly opposite goals of dramatically improving the patient experience and quality of outcomes while significantly reducing costs.

Relationships are key to understanding and engaging with the dynamics of a complex intervention. As systems shift, everyone (funders, policy makers, healthcare leaders, clinicians, and patients and their families) is affected. They shift their relationships to the problem and to each other. It is what happens “in between” that matters: between people, organizations, communities, parts of systems—“in between” relationships. Evaluating complex interventions requires paying more attention to relationships as a unit of analysis rather than to the parts or individual agents. Paying attention to parts is conceptually and practically easier. One can point to a part, count the number of agents, and identify where they reside. Relationships are more difficult to describe and explain, yet this conceptual challenge must be met if we are to make improvements in leading complex organizations, networks, and systems.

Finally, evaluating complex interventions requires a certain mindset. It requires a willingness to be uncertain at times and to know that being uncertain is crucial to the process. The mindset is also framed by embracing multiple perspectives and being aware that evaluation is about understanding the networks within and between organizations. The “system,” even when referring to a single organization, is at best a legal or conceptual construct; the behaviours and actions within it are co-created in a networked, interconnected world.

An old adage says that if one’s only tool is a hammer, every problem looks like a nail. The same risk exists when one has only a single conceptual mode of understanding. A contingency framework addresses this risk by clarifying when a given approach is appropriate, thereby revealing our hidden assumptions about what “needs” to happen.

The contingency framework in this introduction looks at the difference between simple, complicated, and complex issues. In essence, it highlights the difference between the known, the knowable, and the unknowable. These are qualitatively different phenomena that require different skills and attributes for success (Glouberman & Zimmerman, 2002; Patton, 2011; Pawson & Tilley, 1997, 2005; Rogers, 2008; Westley, Zimmerman, & Patton, 2006; Zimmerman, Lindberg, & Plsek, 1998).

With a recipe, preparing a meal is simple. The task has clear cause-and-effect relationships and can be mastered through repetition and the development of basic skills. The process can be standardized and written in sufficient detail that even someone who has never cooked has a high probability of success. Likewise, best practices can be delineated with clear directions, as steps that have worked in the past are highly likely to work again in the future. Checklists and the ordered sequencing of steps can be very effective ways of dealing with simple interventions.

Sending a rocket to the moon is complicated. Expertise is needed. Specialists are required, and coordinating the experts is an area of expertise in itself. Formulae and the latest scientific evidence are used to predict the rocket’s trajectory. Calculations are required to ensure sufficient fuel. If all specifications are met and all tests are done, and if the coordination and communication systems are sophisticated and functioning, there is a high degree of certainty that the outcome can be controlled. Moreover, success in sending one rocket to the moon increases confidence that the next launch will also succeed. Goals are clear at the outset. Some aspects of flying a rocket are recipe-like, but the overall endeavour is more complicated than preparing a meal.

But raising a child is complex. Unlike the recipe and rocket examples, there are no guidebooks or rules to guarantee success. Every parent knows that raising one child provides experience but is no assurance of success with future children. And although parents often resort to reading manuals written by experts, these all seem strangely inadequate. This is because every child is unique and must be understood as an individual. Moreover, children evolve and change in response to forces outside parents’ control. The ingredients for a recipe do not suddenly change their minds, and the laws of gravity can be counted on; children, however, have minds of their own. Outcomes are almost always co-created through the interaction of parents and children.

While leadership of organizations and healthcare systems involves combinations of all three types of challenges—simple, complicated, and complex—the least understood are the complex. Yet this category is the most fundamental for understanding how to provide healthcare services most effectively. Healthcare is the product of complex interactions. Single individuals, single actions, and single organizations all play a part, but it is the subtle rules of engagement among the elements that are the animating force—the force that seems to give systems a life of their own. In other words, complex systems are made up of relationships. Relationships exist between things. We can point at things, but we cannot point at relationships. They are literally hard to see. Hence, the challenge for healthcare evaluators is how to understand and translate complex interventions in ways that provide meaning and utility to the users of evaluations.

Developing and implementing complex interventions requires a great deal of knowledge, skill, and flexibility. Evaluating them requires the same qualities. Evaluation presents a variety of conceptual, methodological, and operational challenges that must be met to fully satisfy the needs and expectations of providers, professionals, and public administrators. When faced with the complexity of interventions, the evaluator may begin to question his or her ability to carry out such an exercise. How can I make sense of everything emerging out of this complexity? How can I document and interpret the dynamics, interactions, and interdependencies arising from the intervention? How can I make sure the different rationales, perspectives, and experiences of all the actors, whether involved in the intervention or not, have been anticipated, considered, and reconciled? How do I take into account the adaptive nature of the intervention, that is, its ability to continually adjust to its environment, whether in response to an evolving situation or to the demands of the actors? How do I take into account the broader, often unlikely and unexpected impacts of the complex intervention? Ultimately, all these questions boil down to a single one: How can I adequately evaluate complex interventions?

This special issue is divided into two major sections. The first section describes the practices of researchers experienced in the evaluation of complex interventions in the healthcare field. André-Pierre Contandriopoulos, Lynda Rey, Astrid Brousselle, and François Champagne describe the integrative model they developed specifically for evaluating complex interventions. Nassera Touati and Jose-Carlos Suárez-Herrera describe how configurational approaches contribute to the evaluation of complex interventions. Valéry Ridde and his colleagues discuss the challenges and usefulness of the realist approach for grasping the complexity of social interventions. Lynda Rey, Astrid Brousselle, and Nicole Dedobbeleer describe how logic analysis, an approach that tests the validity of the intervention’s theoretical model, can improve the evaluation of complex interventions. Louise Potvin, Angèle Bilodeau, and Sylvie Gendron discuss the conceptual principles underlying the realist, idealist, and critical realist conceptions of public health programs.

The second section presents a synthesis of transferred knowledge and offers a cross-cutting reading of the different approaches described in the first section of this special issue. The synthesis, by Nathalie Dubois, Stephanie Lloyd, Janie Houle, Céline Mercier, Astrid Brousselle, and Lynda Rey, provides a better understanding of the challenges to evaluation posed by intervention complexity and is helpful in discerning the common elements in the different approaches proposed by the evaluators. It summarizes the shared realizations, experiences, and lessons learned and highlights a series of new ideas. In conclusion, Zulmira Hartz offers a critical analysis of this issue in evaluation and shares insights drawn from her experience.

As researchers and practitioners, we thank the Canadian Journal of Program Evaluation for giving us the opportunity to present these innovative reports on experiences in the evaluation of complex interventions in healthcare. These writings propose a variety of conceptual frameworks and methodologies for addressing complexity in carrying out evaluations, while also encouraging effective and fruitful practice.

Happy reading!

Acknowledgments

There are many people who contributed significantly to the success of the one-day knowledge transfer conference and to the publication of this special issue of the Canadian Journal of Program Evaluation and who deserve our thanks and acknowledgement—the list is long and we regret that we cannot cite them all individually here.

First, we would like to thank the Canadian Institutes of Health Research (CIHR) for funding the AnÉIS program, a strategic program for training and knowledge transfer, without which this project would never have seen the light of day.

We also wish to thank the directors of the AnÉIS Program between 2008 and 2011, Nicole Leduc, Jean-Marc Brodeur, and Paul Lamarche, for embarking immediately with us on this project and for their invaluable support and commitment at every step of the way. We also thank Anne McManus, program coordinator, for her support and availability.

We especially wish to acknowledge the essential contributions of the professors, researchers, experts, and students who generously participated in planning both the scientific conference and this special issue: Angèle Bilodeau, Pierre Blaise, Astrid Brousselle, François Champagne, André-Pierre Contandriopoulos, Nathalie Dubois, Sylvie Gendron, Robert Guichard, Zulmira Hartz, Janie Houle, Richard Huish, Stephanie Lloyd, Céline Mercier, Josefien Van Olmen, Léo-Roch Poirier, Louise Potvin, Lynda Rey, Valéry Ridde, Jose-Carlos Suárez-Herrera, Nassera Touati, and Brenda Zimmerman. With them, we enjoyed the enthusiastic experience of a “knowledge community” in a never-failing spirit of mutual trust.

We also thank all those who attended the scientific conference on the evaluation of complex interventions and contributed to its success and the quality of the discussions there. Without the active participation of everyone in attendance, this special issue would not be what it is.

Finally, we wish to give special thanks to the editorial staff of the Canadian Journal of Program Evaluation and to the peer reviewers who agreed to read, comment on, and judge the articles submitted for this special issue. As always, your suggestions and ideas contributed greatly to the quality of the product now available to the Journal’s readers.

Contributor Information

Brenda J. Zimmerman, York University, Toronto, Ontario

Nathalie Dubois, École nationale d’administration publique and Public Health Department of the Agence de la santé et des services sociaux de Montréal, Montréal, Québec.

Janie Houle, Université du Québec à Montréal, Montréal, Québec.

Stephanie Lloyd, McGill University, Montréal, Québec.

Céline Mercier, Université de Montréal, Montréal, Québec.

Astrid Brousselle, Université de Sherbrooke, Longueuil, Québec.

Lynda Rey, Université de Montréal, Montréal, Québec.

References

Glouberman, S., & Zimmerman, B. (2002). Complicated and complex systems: What would successful reform of Medicare look like? (Discussion Paper No. 8). Ottawa, ON: Commission on the Future of Health Care in Canada.

Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: Guilford Press.

Pawson, R., & Tilley, N. (1997). Realistic evaluation. London, UK: Sage.

Pawson, R., & Tilley, N. (2005). Realistic evaluation. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. 362–367). Thousand Oaks, CA: Sage.

Rogers, P. J. (2008). Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation, 14(1), 29–48. doi:10.1177/1356389007084674

Shiell, A., Hawe, P., & Gold, L. (2008). Complex interventions or complex systems? Implications for health economic evaluation. BMJ, 336(7656), 1281–1283. doi:10.1136/bmj.39569.510521.AD

Westley, F., Zimmerman, B., & Patton, M. Q. (2006). Getting to maybe: How the world is changed. Toronto, ON: Random House Canada.

Zimmerman, B., Lindberg, C., & Plsek, P. (1998). Edgeware: Lessons from complexity science for health care leaders. Irving, TX: VHA.
