Abstract
Objectives
Building evaluation capacity for chronic disease prevention (CDP) is a critical step in ensuring the effectiveness of CDP programming over time. In this article, we highlight the findings of the qualitative arm of a mixed-methods needs assessment designed to assess the gaps and areas of strength within Ontario’s public health system with respect to CDP evaluation.
Methods
We conducted 29 interviews and focus groups with representatives from 25 public health units (PHUs) and analyzed the data using thematic analysis. We sought to understand what gaps and challenges exist in the Ontario public health system around CDP evaluation.
Results
Challenges facing Ontario’s PHUs in CDP evaluation include variation and centralization of capacity to evaluate, as well as competing priorities limiting the development of evaluative thinking. Participating PHUs identified the need for evaluation capacity building (ECB) strategies grounded in an understanding of the unique contexts in which they work, as well as a desire for guidance in conducting complex and thoughtful evaluations. Moving forward, PHUs noted a desire for a strong system of knowledge sharing and consultation across the public health system, including through strengthening existing partnerships with community collaborators.
Conclusion
These results support the case for ECB strategies that are adaptive and context-sensitive and equip PHUs with the skills required to evaluate complex CDP programming.
Keywords: Chronic disease prevention, Evaluation, Evaluation capacity building, Public health
Résumé
Objectives
Building evaluation capacity in the area of chronic disease prevention (CDP) is crucial to ensuring the effectiveness of chronic disease prevention programs over time. In this article, we report the findings of the qualitative arm of a mixed-methods needs assessment designed to capture the gaps and strengths of Ontario’s public health system with respect to the evaluation of CDP programs.
Methods
We first conducted 29 interviews and focus groups with 25 public health units (PHUs) and then analyzed the data collected using thematic content analysis. We sought to identify the gaps, strengths, and challenges that exist in Ontario’s public health system with respect to the evaluation of CDP programs.
Results
The challenges facing Ontario’s PHUs in evaluating CDP programs include the centralization of, and variation in, capacity to conduct evaluation, as well as competing priorities that limit the development of evaluative thinking. Participating organizations want evaluation capacity building strategies that account for the different contexts in which they work, as well as support in conducting complex evaluations. Moving forward, PHUs also wish to establish an effective system of information sharing and consultation across the public health system, notably by strengthening existing partnerships in the community sector.
Conclusion
These results call for the development of evaluation capacity building strategies that are flexible and context-sensitive and that equip PHUs with the skills required to evaluate CDP programs.
Keywords: Chronic disease prevention, Evaluation, Evaluation capacity building, Public health
Introduction
Public health in Canada has a strong and enduring commitment to effective practice. Public health standards are one way to define what is meant by effective public health practice and to provide guidance and accountability for the implementation of these practices. In Ontario, new public health standards were introduced in 2018. The new Ontario Public Health Standards (OPHS) emphasize evaluation as a central component of effective practice, including through sharing plans for and results of evaluation. They also place a major emphasis on health equity and on chronic disease prevention (CDP) programming, widening the scope of CDP. Chronic diseases (including cancer, diabetes, and respiratory diseases) have a strongly detrimental impact on individual and population health (Haydon et al. 2006); effective CDP is key to reducing this impact. The new standards, introduced in an environment that lacks an overall CDP strategy (Office of the Auditor General of Ontario 2017), prompted the provincial government to invest in supporting how public health units (PHUs) might enhance the effectiveness of their CDP programming, including through strengthening evaluation.
Rolling out and sustaining effective CDP programming requires organizational and systemic support (Hanusaik et al. 2014). It is important that PHUs are able to understand the impact that their CDP programming is having so that they can adapt and alter programming to be more effective. It is here that evaluation enters the equation; organizational and systemic support for and individual skills in evaluation can help to guide decision-making and enhance learning (Canadian Evaluation Society 2015), allowing PHUs to both make appropriate decisions and accurately report on those decisions within the new public health framework. Evaluation capacity (EC) enables organizations to “produce high-quality evaluations and to use their findings at all levels to improve its programs and reach its social betterment objectives” (Bourgeois et al. 2018, p. 90).
The question of how to build EC has been of interest across public health, where researchers have explored evaluation capacity building (ECB) for both conducting and making use of evaluation data (Bourgeois et al. 2018). At the individual level, ECB efforts have been focused on knowledge, skills, and attitudes. Organizationally and systemically, ECB can engage leaders and promote the creation of procedures to guide evaluation planning and execution (Bourgeois et al. 2018). ECB strategies include courses, trainings, and workshops; the use of technical assistance; and communities of practice (DeCorby-Watson et al. 2018).
Identifying resource and training needs within a specific sector can help to ensure that ECB strategies match the contexts in which they are implemented (Joly et al. 2018; Stockdill et al. 2002). Evaluation-related needs assessments in public health have focused on public health practitioners’ training and guideline needs (Crawford et al. 2008; Denford et al. 2018; Grimm et al. 2015), CDP competencies (Kreitner et al. 2003), and evaluation skills (Joly et al. 2018). This literature has highlighted how contextual barriers can impede the development of evaluation skills among practitioners (Denford et al. 2018); these barriers include distance to training opportunities—often a barrier in rural areas in particular (Crawford et al. 2008)—and siloes in the public health system (Kaufman et al. 2014). ECB can be bolstered by an overarching framework wherein evaluative thinking is prioritized and appropriate resources exist (e.g., Nu’Man et al. 2007).
These findings offer promising answers to the question of how to enhance EC in public health in a way that would allow PHUs to respond to both the needs of their constituents and the reporting standards in the OPHS. However, a number of factors limit our ability to extrapolate general ECB strategies for CDP (LaMarre et al. in press). First, CDP may be used as a “catchall” category for a diverse array of upstream (e.g., policies, advocacy) and downstream (e.g., counselling, health education) interventions, involving a variety of partners in health and non-health settings. CDP programming also tends to be long-term; particularly in contexts where there is high staff turnover, this can complicate evaluation (Dryden et al. 2010). Given the way that chronic disease outcomes entwine with the social determinants of health, CDP outcomes must be considered in relation to health equity and differential access in varied populations (WHO 2005). Finally, ECB strategies must be grounded in the lived realities of participants to be successful (Stockdill et al. 2002); with this understanding, we explored the CDP evaluation landscape among Ontario PHUs. In this article, we present the results of the qualitative arm of this exploration.
Research context
While the 2018 OPHS are new, the desire for ECB among Ontario PHUs is not. Despite this desire, support for ECB at organizational and systemic levels has been limited (Bourgeois et al. 2018; Bourgeois et al. 2016; Fournier et al. 2017). Prior to our research, general evaluation capacity among the majority of Ontario PHUs had been characterized as “developing” (Bourgeois et al. 2016). All PHU staff are expected to have at least some level of competency in evaluation, and some staff specialize in evaluation (called a “hybrid” model; Bourgeois et al. 2016). PHUs vary tremendously in size and structure, ranging from those with one or two evaluation staff to those with large evaluation teams of epidemiologists, program evaluators, and others in dedicated evaluation roles. Many PHUs conduct primarily process evaluations, and some wish to implement more comprehensive evaluation approaches.
Our collaborative project built on previous work in the Ontario public health landscape (e.g., Bourgeois et al. 2016, 2018; Hotte et al. 2015). We targeted ECB in CDP specifically, aiming to (1) understand the evaluation landscape within Ontario PHUs and (2) develop ECB strategies to fill identified gaps in this landscape. The first aim was addressed using multiple sources of data: a literature review on ECB for CDP (LaMarre et al. in press), a quantitative EC survey, a document review, and interviews with PHUs. The interviews, the richest source of information and insights informing the study’s second aim, constitute the qualitative arm reported in this article.
Notably, our research took place during a time of upheaval in Ontario’s public health system. While we were conducting the study, a change in government brought major changes to how public health services are delivered across the province. The results reflect the public health system as described by participants at the time of the interviews.
Methods
We received approval from the University of Waterloo ethics committee to conduct the research in June 2018. The Medical Officers of Health of all Ontario PHUs (n = 36) were invited to participate in the project’s needs assessment. Follow-up telephone calls were made to all contacts to ensure they had received the invitation to participate. No incentives were offered for participation. We conducted 29 individual and group interviews (based on individual and PHU preference), either via telephone (n = 27) or in person (n = 2), with key informants who conduct or manage evaluation and/or CDP programming at Ontario PHUs. Semi-structured interviews were led by two research team members and averaged 48 min (range: 23–79 min).
Conversation guides were informed by Bourgeois and Cousins’ (2013) EC framework, which identifies domains for assessing organizational EC and distinguishes between doing and using evaluation. Questions explored organizational and leadership supports for evaluation, plans for evaluation, and procedures for guiding evaluation decision-making and processes (aspects of “capacity to do”), as well as staff awareness of and skills in evaluation, how results of evaluation were used, and the meaning and trustworthiness of evaluation results as perceived by community partners (aspects of “capacity to use”). The Canadian Evaluation Society’s core evaluation competencies, spanning reflective, technical, situational, management, and interpersonal practice (CES 2010), also sensitized researchers to the various components of EC and informed our conversations with PHUs.
Interviews were professionally transcribed verbatim. PHU and respondent names and identifying information were removed; PHUs were assigned numbers at random. We analyzed the data using thematic analysis, a flexible approach that, when conducted thoughtfully, allows the researcher to take an active and transparent role in exploring data (Braun and Clarke 2006). We took a semantic approach to analysis, looking at responses as they were presented by speakers (Braun and Clarke 2006). We align with a constructivist epistemology, recognizing that experiences are mediated by people’s interactions with social structures and that meaning-making is central to human experience (Raskin 2002). Accordingly, we explore the relationship between behaviours and attitudes toward ECB and contextual factors (e.g., OPHS, PHUs’ organizational and geographic contexts). This epistemology also fits with our perspective on our role as researchers in interpreting the data in a way that reflects one possible collective “read” of the data, not “the” single best way to understand it (Braun and Clarke 2019).
In keeping with this orientation, we explore our relationships with the data and relevant systems. Authors varied in their familiarity with the public health and evaluation systems in Ontario, ranging from being new to both systems to having over 25 years of experience in these fields. All authors were trained in social sciences, with variable specificity to public health and evaluation, and with variable experiences in academic and practice settings. Our different backgrounds brought diverse perspectives to data interpretation. They also attuned us to diverse approaches to analysis that might enable and/or constrain our findings.
Specifically, to build on existing knowledge and frameworks and remain open to new knowledge, we coded abductively, identifying groupings of similar information within transcripts (ground-up or inductive coding) and also identifying several possible codes based on an understanding of existing ECB frameworks and conceptualizations of EC (top-down, deductive coding). The first author used MAXQDA software to code each interview transcript. She then assembled codes into themes, identifying patterns and looking at how well these patterns reflected the observed meanings in the data as a whole, attending not only to how often a particular word/phrase appeared but also to the strength of prospective themes in relation to the broader dataset (e.g., how well the theme speaks to our research questions).
Once initial themes had been generated, these were shared with the team, who discussed and edited themes in relation to the broader dataset. Summaries of individual PHU transcripts were shared back with PHUs; only one PHU had edits to add. Themes were used to guide the development of priorities and ECB approaches; these were presented to all participants and to members of the broader public health system for feedback and were well received as being representative of the needs present in the Ontario public health system for CDP evaluation.
Results
We organize our findings into five themes: the first two themes identify challenges and gaps in CDP evaluation, the third and fourth focus on the development of ECB strategies, and the last identifies an opportunity for CDP evaluation.
Centralization and variation in capacity
Participating PHUs identified variation in evaluation skills among staff. While all PHUs had at least some staff skilled and experienced in evaluation, there was some division between those with and without evaluation skills. Those not designated as “evaluators” tended not to focus on evaluation or lacked the confidence to conduct it:
We’ll generally do pretty surface level evaluations whether it’s process or whether it’s evaluating the experience somebody had through a specific intervention. When in-depth evaluation isn’t repeated over time […] those skills wane a little bit so there’s less of a comfort level amongst staff in conducting those types of evaluation. (PHU 231)
Variations in capacity sometimes led to bottlenecks, wherein those with evaluation skills were unable to meet the PHU’s evaluation needs. Increasing centralization of capacity and widely varying skills had material impacts on both the types of evaluation conducted (e.g., process evaluation, as described above, rather than the desired formative evaluation, noted below) and the volume of evaluations PHUs could conduct:
I think it impacts the volume that we’re able to do… we can’t thoroughly evaluate every single program across the Health Unit just because of the capacity of having just one epidemiologist who’s the program evaluator. So, it does impact the number of formative evaluations that we do. (PHU 14)
This scenario was common, particularly for those who were working in smaller PHUs; as a participant from PHU 24 noted: “We’re kind of a middle-sized health unit and we have one person who’s solely dedicated to doing evaluation.” A PHU 3 participant pointed out:
there’s only one [Name 1]. And everybody wants [Name 1]’s time. [Name 1] has other projects as well too, so you almost have to get in the queue.
Furthermore, not all units had staff with both CDP and evaluation capacity; thus, CDP evaluation represented one among many evaluation requests for those in the evaluator role.
Underscoring the need to at least partly level the evaluation playing field, some PHUs had begun to engage in their own ECB, including through exploring mentorship opportunities, joining or creating communities of practice, designating “evaluation champions,” offering training, and creating guidelines. For example, PHU 7 had participated in an “evaluation champions” group:
We have that evaluation champions group and the role is to build capacity among staff across the organization and to look at evaluation across the organization similar to what your project is aiming to do […] we have some staff with capacity but we still see some limited skills.
PHU responses crystallize the need for cross-organizational evaluation skills, not necessarily to have all staff equally engaged in evaluation, but rather to promote proficiency in the aspects of evaluation suited to each role, streamlining processes unit-wide.
Competing priorities
Many PHUs acknowledged the importance of evaluation and noted that they would ideally evaluate more of their programming; however, this was challenging due to limited resources (time, financial, and human). They often noted that competing priorities intervened between the desire to evaluate and the capacity to evaluate.
I love evaluation. You don’t have to convince me about a learning lab. I’m going to be signing up tomorrow. How do I get that health promoter who is super busy doing active communities and mobilizing community partners and working with her partners to create safe and active communities? How do we get her to this lab? Someone who maybe is not always doing evaluation or thinking about evaluation… (PHU 9)
As this participant remarks, health promoters and other frontline staff may not see evaluation as the most important aspect of their work; this impacts the extent to which evaluation can occur throughout the organization, despite strong desires for evaluation among some staff.
To help them reconcile competing priorities, PHU participants often desired evaluation strategies and plans that would help with decisions about how, what, and when to evaluate. Some PHUs were actively working on developing such plans, for instance by highlighting the utility of “scoping down” and considering what kind of evaluation is required for different programs rather than assuming that all require the same type and depth of evaluation:
I think my biggest advice […] is we moved away from very large-scale evaluations that are very time consuming to looking at how we can provide evaluation support but do small evaluation activities to answer the question. Sometimes you don’t need a bazooka gun to answer the question. You could just use a little hammer and that can get you the answer as well too. So instead of spending a year on a program evaluation that we’re only supporting maybe one intervention or one program, we’re really trying to figure out how we can do things smaller that takes less time but that can still answer our question […] try to scope things down. (PHU 2)
Most PHUs that desired more information around CDP evaluation guidelines and decision-making noted an interest in ECB opportunities that could assist them in making these kinds of strategic decisions.
Context-sensitive approaches
Participants noted major differences in context that would need to be considered in developing any ECB strategies. These included, but were not limited to, socio-demographic catchment areas, geographic location, size, organizational structure, resource allocations, shifting structures, new OPHS standards, and different prior experiences of ECB. For instance, a participant from PHU 3 noted:
[In] creat[ing] guidelines and standards, you just want to make sure that they align with what the organization is doing internally. And make sure that you’re not imposing a certain way of doing evaluation for public health units to do evaluation in a particular way, especially if they’re well into their own journey around evaluation capacity building.
In addition to being attuned to individual differences between PHUs, there was also a strongly articulated need for ECB that was aligned with the new provincial context (OPHS). In describing their work for 2018, many PHUs noted the relationship between the standards and evaluation prioritization:
With the standards saying that evidence-informed decision-making and evaluation are everybody’s job, that’s great. That’s an attitude shift from the province but we haven’t created the time for people to learn how to do it or to actually do it. (PHU 5)
As this participant describes, the provincial prioritization of evidence-informed decision-making and evaluation is difficult to act on without concordant resource allocation.
The focus on health equity and the social determinants of health specified in the 2018 OPHS is another important aspect of context for the design of ECB efforts—for instance, shaping the types of evaluation prioritized in ECB. Identifying a need for health equity-aligned evaluation, PHUs were eager to explore ways of evaluating with health equity in mind.
Accordingly, efforts at ECB for CDP would need to account for the socio-environmental contexts in which PHUs are working—and train PHU staff in how to more efficiently and effectively evaluate within different contexts. PHUs similarly noted that it was important for them to be able to access and use relevant, local data to demonstrate the impact of their work. As a participant from PHU 23 noted:
I think timely local level data that could reflect behaviour change year by year […] would be the dream. To be able to say between 2016 & 2017 we’ve seen a decreased self-report of sugar sweetened beverage consumption among the adult population. And we would have that information you know, mid-way through 2018. That would be the dream […] timely local data that we can get year after year to show behaviour change within a reasonable amount of time.
Such statements reflect a need for ECB that includes strategies to enhance PHUs’ ability to generate and use data that is relevant within their specific contexts.
Complex and thoughtful CDP evaluation
While all evaluation should be thoughtful and many public health contexts are complex, there was a particular focus within the interviews on the complexity of CDP programming, its positioning within broader organizational structures, and the challenges these factors present for evaluation. Participants’ responses illustrated a need to look at evaluation differently, perhaps more collaboratively, as was the case for PHU 6:
There are links everywhere between the standards. We’ve had a history of planning for topic and evaluation for topic and now it’s kind of forcing us to renegotiate those boundaries and how we look at chronic disease prevention and wellbeing, as a whole, and what are those buckets underneath that, how are they interlinked and then how can we assess our impact in the community for that entire bucket of chronic disease prevention and wellbeing. So that’s going to be challenging. (PHU 6)
PHU responses also exemplified how ECB should equip PHU staff with tools, strategies, and frameworks that help them address the challenges of evaluating CDP programming. Particularly in a time of shifting to upstream, social determinants-based work, PHUs were concerned about how to marry a focus on delivering long-term programming with a desire to demonstrate the impact of their work more immediately.
There are a number of challenges with those kind of upstream approaches […] those approaches take a long time and then even when they’re implemented, to see the effect of those things sometimes takes a long time. It becomes very hard to do a 15 or 20 year evaluation to see whether or not that upstream approach is very successful. (PHU 1)
Along the same lines, a participant from PHU 7 specifically noted a desire for tools and strategies to help staff conduct evaluation under these conditions:
Some staff felt that we need more long-term follow-up evaluation to measure behaviour change, recognizing that it’s really hard because it’s multifactorial and complex for chronic disease prevention […] some staff are still finding that evaluation and reporting is really cumbersome probably because we’re not using easy-to-use tools or just a tool for people to build that familiarity. (PHU 7)
Accordingly, ECB efforts might be oriented toward the creation and/or use of nimble and dynamic evaluation frameworks that capture interim impacts and outcomes along the road to long-term outcomes.
Knowledge sharing and consultation
Many PHUs remarked upon a siloing effect within the Ontario public health system. They wished to talk with other PHUs about what they were doing, to reduce redundancies and increase efficiency. This communication could take a variety of different shapes: some PHUs advocated for formalized networks, whereas others desired consultative models or resource-sharing mechanisms. A knowledge-sharing approach to exploring the challenges and benefits of CDP evaluation was presented as something that would be beneficial to many Ontario PHUs.
We’re all duplicating evaluations or evaluative processes where some kind of repository [could be helpful] I think there could be a lot of benefit to that to give consistency, to share tools, to have more similar expectations of health units across the board. (PHU 15)
As this participant explores, sharing and networking might help PHUs to consistently meet the OPHS and to create and fulfill similar evaluation expectations. Actively breaking down siloes requires strong communication. Some PHUs pointed to Locally Driven Collaborative Projects (LDCPs; action research-oriented collaborative projects wherein PHUs identify an area of interest and work together to generate research and share resources around it), such as a recent project that assessed overall evaluation capacity in Ontario PHUs (Fournier et al. 2017; Hotte et al. 2015), as an effective model of knowledge and resource sharing and collaboration that might be adapted for CDP ECB.
The LDCPs, they have also I think contributed to both our confidence in evaluation and our skill […] one example would be the health equity indicators. Actually having some indicators that we can incorporate into our goals and objectives when we are doing our planning. (PHU 25)
These kinds of approaches would help PHUs to develop and/or advance innovative ways of measuring the impact of their work in a complex and rapidly changing social environment.
In addition to cross-PHU communication, PHUs remarked on the importance of community engagement. Most PHUs were working with community partners and were seen by those partners as trustworthy sources of expertise. As a participant from PHU 14 remarked:
We would have lots of community partners, it would vary from program to program, who we disseminate that information out to and then it either stays with them or depending on the information we’re sending, they then can disseminate it to the public that they are working with […] I would say community partners are key for so many of the Chronic Disease programs that we have. (PHU 14)
According to PHUs, it is important not just for evaluation to take place but for it to be relevant and delivered in a way that facilitates action. PHUs were also eager to explore ways of deepening and strengthening existing or emerging partnerships, to enable work with stakeholders in diverse contexts, including on politically contentious topics.
Right now […] community engagement and evaluations would be so far down the road. […] with the new standards, partnership and collaboration are a large part of that and the foundational standards and I think in equity as well. There would probably be maybe some processes to have a stronger engagement of stakeholders and partners in evaluation. (PHU 3)
This response—particularly in relation to other PHU responses indicating existing or building relationships with community—exemplifies the heterogeneity of existing work with community and stakeholders and the need for more guidance around how to practically bring an ethic of knowledge sharing and engagement into CDP evaluation work.
Discussion
PHUs’ needs and hopes for ECB for CDP were as varied as PHUs themselves, depending on broader contexts and organizational structures. Despite these differences, common threads ran through PHUs’ discussions of their current and anticipated CDP evaluation needs. PHUs faced challenges around centralization of and variation in capacity, as well as competing priorities. They noted that ECB would require context-sensitive approaches and training in complex and thoughtful evaluation. PHU participants held strengths in knowledge sharing and consultation, both within PHU networks and with community partners and stakeholders, which also presented opportunities on which to build.
PHUs’ cognizance of the varied capacity and competing needs within their units has implications for ECB. Previous literature has indicated the benefits of engaging across organizational levels (i.e., frontline staff and managers) to help ECB take root (Bourgeois et al. 2018). This was not a particular focus of this study, possibly because our focus on CDP limited broader organizational emphasis. Nonetheless, participants pointed to some systems and structures that constrained or facilitated evaluation. Engaging in ECB activities may itself help to raise interest in evaluation (Fournier et al. 2017), positioning ECB as a catalyst for increased engagement in evaluation for CDP. Thus, implementing ECB may reduce some of the gaps identified in this study, including people feeling unable to engage in or disconnected from evaluation.
Relating our findings to Bourgeois and Cousins’ (2013) EC framework, CDP teams within PHUs had a desire to evaluate but often lacked the resources to do so. This echoes broader explorations of EC indicating “developing” EC among Ontario PHUs, due in part to time and resource limitations around ECB (Bourgeois et al. 2016). Our findings suggest that this broader observation holds for CDP programs specifically—and may even be more pronounced given the time-intensive and diverse nature of CDP programming and the splitting of staff time between various projects within and outside of CDP. Staff delivering CDP programs may have the “capacity to do” evaluation but may not have adequate resources to enact evaluation. Given the highlighted variance in evaluation skills, knowledge, and practice within the PHUs involved in this study, there may be room for improvement in the “capacity to use” evaluation (Bourgeois and Cousins 2013). With respect to strengths, participants were making connections with others outside of their health unit, building organizational linkages and external support (Bourgeois and Cousins 2013). This is perhaps to be expected in CDP work, which often includes many players and “moving parts.”
The desire for ECB that reflects sensitivity to programmatic, local, and provincial contexts was clear within the data. This presents an opportunity to explore how nimble ECB strategies might be in relation to a field of practice that often ends up as a “catchall” category. Given the unique and variable contexts in which PHUs operate, a unidimensional approach to ECB for CDP is unlikely to be effective (see also Bourgeois et al. 2016).
Public health is prone to siloing and competition (Kaufman et al. 2014); to sustain a thriving public health system, coordination among PHUs working on similar priorities must be supported. Evaluation needs to become a core aspect of programming, rather than being tacked on at the end of projects (Lobo et al. 2014). Participants’ needs align with core CDP evaluation competencies identified by Kreitner et al. (2003), including an emphasis on setting priorities and managing limited resources. Within resource-strapped situations, filling gaps must be done with a strong awareness of the barriers and facilitators to evaluation training and practice (Crawford et al. 2008)—for our participants, these include centralized and varied capacity, and competing priorities that create disconnects between ideals of evaluative thinking and poorly resourced PHU realities. While all PHU staff might participate in ECB activities, our results indicate that strategies might be targeted to the different experiences and roles of different staff. More experienced staff might desire and require training and resources in advanced evaluation methods, whereas staff less involved in evaluation may prefer building basic evaluative thinking that allows them to design their programming in a way that facilitates evaluation.
Together, the varied skills within PHUs and the increasingly centralized evaluation capacity evident therein suggest a need for ECB strategies that (a) understand that participants will be entering into the activities with varied interests and skills and (b) take into account the various roles people hold within the organization in relation to evaluation. This need does not imply that all staff must become evaluation experts, but rather speaks to the need for tailored approaches to ECB that meet staff where they are and support an overall focus on evaluative thinking throughout organizations.
Our study had a limited sample size and moderate response rate (25 out of 36 PHUs). Another limitation is the inevitable researcher influence within any qualitative study; while we involved multiple team members in the interpretation of the data, we acknowledge researchers’ impact on both the generation and interpretation of the data. We make no claims to neutrality in our interpretation, which represents our identification of patterns within a rich dataset representing many nuanced perspectives. The researchers involved in primary analysis are experienced in qualitative analyses, but newer to the public health and evaluation fields, which could be framed as a limitation or a benefit in the analysis and meaning-making process.
Overall, the results reflect Ontario’s public health system’s needs for ECB in CDP. These results helped to inform the development of ECB activities to address relevant needs. Throughout this project, we have valued the participation of public health stakeholders in developing the types of strategies that work for them, an approach that has been identified as key to engaging Ontario PHUs and ensuring the relevance of findings (e.g., Bourgeois et al. 2018; Fournier et al. 2017). These results represent the first step toward a deep engagement with varied public health stakeholders in determining how to meet ECB needs specific to CDP in Ontario. They underscore the need to weave evaluation into the fabric of public health, perhaps even more so in times of structural and philosophical change.
Acknowledgements
We wish to thank participants for their time and insights. We also wish to acknowledge the CDP-EvaLL team who helped to design and carry out the broader project from which this work was drawn.
Compliance with ethical standards
We received approval from the University of Waterloo ethics committee to conduct the research in June 2018.
Conflict of interest
This research was funded by the Ontario Ministry of Health and Long-Term Care through a Health and Wellbeing Grant awarded to PI Barbara Riley, an author on this paper.
Footnotes
All PHU numbers were assigned at random and do not identify specific health units.
Please note: the authors’ affiliations differ from the affiliation where the research was conducted. The research was conducted at the Propel Centre for Population Health Impact, University of Waterloo.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Bourgeois I, Cousins JB. Understanding dimensions of organizational evaluation capacity. American Journal of Evaluation. 2013;34(3):299–319. doi: 10.1177/1098214013477235.
- Bourgeois I, Hotte N, Simmons L, Osseni R. Measuring evaluation capacity in Ontario public health units. The Canadian Journal of Program Evaluation. 2016;31(2):165–183. doi: 10.3138/cjpe.306.
- Bourgeois I, Simmons L, Buetti D. Building evaluation capacity in Ontario’s public health units: promising practices and strategies. Public Health. 2018;159:89–94. doi: 10.1016/j.puhe.2018.01.031.
- Braun V, Clarke V. Using thematic analysis in psychology. Qualitative Research in Psychology. 2006;3:77–101. doi: 10.1191/1478088706qp063oa.
- Braun V, Clarke V. Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise, and Health. 2019;11(4):589–597. doi: 10.1080/2159676X.2019.1628806.
- Canadian Evaluation Society. (2010). The Canadian Evaluation Society competencies for Canadian evaluation practice. Retrieved February 28, 2019, from https://evaluationcanada.ca/txt/2_competencies_cdn_evaluation_practice.pdf
- Canadian Evaluation Society. (2015). What is evaluation? Retrieved from https://evaluationcanada.ca/sites/default/files/ces_def_of_evaluation_201510_grande.jpg
- Crawford JM, Vilvens H, Pearsol J, Gavit K. An assessment of training needs in a rural public health agency: barriers to local public health training. Public Health Reports. 2008;123(3):399–404. doi: 10.1177/003335490812300323.
- DeCorby-Watson K, Mensah G, Bergeron K, Abdi S, Rempel B, Manson H. Effectiveness of capacity building interventions relevant to public health practice: a systematic review. BMC Public Health. 2018;18(1):684. doi: 10.1186/s12889-018-5591-6.
- Denford S, Lakshman R, Callaghan M, Abraham C. Improving public health evaluation: a qualitative investigation of practitioners’ needs. BMC Public Health. 2018;18(1):190. doi: 10.1186/s12889-018-5075-8.
- Dryden E, Hyde J, Livny A, Tula M. Phoenix rising: use of a participatory approach to evaluate a federally funded HIV, hepatitis and substance abuse prevention program. Evaluation and Program Planning. 2010;33(4):386–393. doi: 10.1016/j.evalprogplan.2010.02.004.
- Fournier M, Bourgeois I, Buetti D, Simmons L, the Building Evaluation Capacity in Ontario’s Public Health Units LDCP Workgroup. Building evaluation capacity in Ontario’s public health units: results from ten action research projects. Cornwall, Ontario, Canada; 2017.
- Grimm BL, Johansson P, Nayar P, Apenteng BA, Opoku S, Nguyen A. Assessing the education and training needs of Nebraska’s public health workforce. Frontiers in Public Health. 2015;3:161. doi: 10.3389/fpubh.2015.00161.
- Hanusaik N, Contandriopoulos D, Kishchuk N, Maximova K, Paradis G, O’Loughlin JL, Decision-makers Advisory Committee PHORCAST. Chronicling changes to the chronic disease prevention landscape in Canada’s public health system 2004–2010. Public Health. 2014;128(8):716–724. doi: 10.1016/j.puhe.2014.05.016.
- Haydon E, Roerecke M, Giesbrecht N, Rehm J, Kobus-Matthews M. Chronic disease in Ontario and Canada: risk factors, determinants and prevention priorities. Report prepared for the Ontario Chronic Disease Prevention Alliance and the Ontario Public Health Association. Toronto: Ontario Public Health Association; 2006.
- Hotte N, Simmons L, Beaton K, the LDCP Workgroup. Scoping review of evaluation capacity building strategies. Cornwall, Ontario, Canada; 2015.
- Joly BM, Coronado F, Bickford BC, Leider JP, Alford A, McKeever J, Harper E. A review of public health training needs assessment approaches: opportunities to move forward. Journal of Public Health Management and Practice. 2018;24(6):571–577. doi: 10.1097/PHH.0000000000000774.
- Kaufman NJ, Castrucci BC, Pearsol J, Leider JP, Sellers K, Kaufman IR, Fehrenbach LM, Liss-Levinson R, Lewis M, Jarris PE, Sprague JB. Thinking beyond the silos: emerging priorities in workforce development for state and local government public health agencies. Journal of Public Health Management and Practice. 2014;20(6):557–565. doi: 10.1097/PHH.0000000000000076.
- Kreitner S, Leet TL, Baker EA, Maylahn C, Brownson RC. Assessing the competencies and training needs for public health professionals managing chronic disease prevention programs. Journal of Public Health Management and Practice. 2003;9(4):284–290. doi: 10.1097/00124784-200307000-00006.
- LaMarre A, D’Avernas E, Raffoul A, Riley B, Jain R. A rapid review of evaluation capacity building strategies for chronic disease prevention. Canadian Journal of Program Evaluation. In press. doi: 10.3138/cjpe.61270.
- Lobo R, Petrich M, Burns SK. Supporting health promotion practitioners to undertake evaluation for program development. BMC Public Health. 2014;14:1315. doi: 10.1186/1471-2458-14-1315.
- Nu’Man J, King W, Bhalakia A, Criss S. A framework for building organizational capacity integrating planning, monitoring, and evaluation. Journal of Public Health Management and Practice. 2007;13:S24–S32. doi: 10.1097/00124784-200701001-00006.
- Office of the Auditor General of Ontario. (2017). Summary: public health: chronic disease prevention. Value-for-money audit. Retrieved from http://www.auditor.on.ca/en/content/news/17_summaries/2017AR%20summary%203.10.pdf
- Raskin JD. Constructivism in psychology: personal construct psychology, radical constructivism, and social constructivism. American Communication Journal. 2002;5(3):1–25.
- Stockdill S, Baizerman M, Compton D. Toward a definition of the ECB process: a conversation with the ECB literature. New Directions for Evaluation. 2002;(93):7–26. doi: 10.1002/ev.39.
- World Health Organization (WHO). (2005). Policy brief: preventing chronic diseases: designing and implementing effective policy. Retrieved from https://www.who.int/chp/advocacy/policy.brief_EN_web.pdf
