Summary
Health promotion (HP) capacity of staff and institutions is critical for health-promoting programmes to address social determinants of health and effectively contribute to disease prevention. HP capacity mapping initiatives are the first step to identify gaps, guide capacity strengthening and inform resource allocation. In low- and middle-income countries, there is limited evidence on HP capacity. We assessed collective and institutional capacity to prioritize, plan, deliver, monitor and evaluate HP within the South African Department of Health (DoH). We conducted a concurrent mixed methods study that drew on data collected using a participatory HP capacity assessment tool. We held five 1-day workshops (one national, two provincial and two district) with DoH staff (n = 28). Participants completed self-assessments of collective capacity across three areas: technical, coordinating and systems capacity, using a four-point Likert scale. HP capacity scores were analysed and presented as means with standard deviations (SDs). Thematic analysis of verbatim transcripts of the audio-recorded group discussions, which provided the rationale and evidence for the scores, was conducted using deductive and inductive codes. At all levels, groups revealed that capacity to develop long-term, sustainable HP interventions was limited. We found limited collaboration between national and provincial HP levels. There was limited monitoring of HP indicators in the health information system. Coordination of HP efforts across different sectors was largely absent. Lack of capacity in budgeting emerged as a major challenge, with few resources available to conduct HP activities at any level. Overall, the capacity mean score was 2.08/4.00 (SD = 0.83). There is a need to overcome institutional barriers and strengthen capacity for HP implementation, support and evaluation within the South African DoH.
Keywords: health promotion capacity, organizational capacity assessment, health department, South Africa
INTRODUCTION
The Ottawa Charter indicates that one of the key health promotion (HP) pillars is reorienting health services from being curative-focused to emphasizing HP and disease prevention (World Health Organization, 1986). Effective health-promoting services help to address social and behavioural determinants of health, which contributes to achieving population health goals (Ziglio et al., 2011) and reaching the 2030 sustainable development goals (World Health Organization, 2017). The capacity of HP staff, as well as of the institutions responsible for HP, is critical if this potential is to be realized. The purpose of this paper is to assess collective and institutional capacity for HP and, in doing so, its potential contribution to sustainable development.
The Bangkok Charter for HP (2005) steered countries to build national capacity for HP (Catford, 2005). In the early 2000s, the World Health Organization (WHO) introduced a global initiative to map country-level capacity for HP. The aim was to investigate what infrastructure exists in different countries to plan, implement, coordinate and evaluate HP efforts (World Health Organization, 2010). Capacity mapping assesses pre-existing capacities and how well they link together as a system (Battel-Kirk et al., 2009; Aluttis et al., 2014). Such assessments need to remain context- and content-specific and be able to capture change arising from capacity-building initiatives over time (van Herwerden et al., 2019).
HP capacity can be defined as a system’s collective ability to deliver and support HP programmes (Lin et al., 2009). It includes knowledge, skills, commitment, structures, systems and leadership (Smith et al., 2006), which are affected by the availability of supportive environments, workforce, resources and funds. Understanding where capacity gaps and limitations exist could inform efforts to address them (Dejoy and Wilson, 2003; Cosme Chavez et al., 2017).
Many countries globally have systematically mapped HP capacity (Ebbesen et al., 2004; Lin and Fawkes, 2005; Mittelmark et al., 2005; Nam and Engelhardt, 2007; World Health Organization, 2010; Mahmood, 2015). Yet, in African settings, there is much less evidence available on HP capacity, despite significant investment in strengthening HP capacity by donors, e.g. the United States Agency for International Development (USAID) (Jana et al., 2018). Malawi’s HP capacity assessment, an example of a donor-funded initiative, showed a relatively low capacity among district-level HP staff to plan, implement and evaluate HP interventions, and a fairly strong institutional capacity to lead and co-ordinate HP activities at both national and local levels (Jana et al., 2018).
In South Africa, a middle-income country, the Department of Health (DoH) has a mandate to deliver HP. The national HP directorate offers technical support, while provincial and local (district) levels implement activities (Department of Health, 2014). Almost a decade ago, training workshops to enhance HP capacity targeted senior HP officials from Mpumalanga and Free State provinces and cadres of health promoters working at district level in Gauteng province (Van Den Broucke et al., 2010; Wills and Rudolph, 2010). However, it is not clear whether these initiatives resulted in HP staff acquiring the necessary skills (Jana et al., 2018), or whether the staff who participated are still in their positions. Furthermore, there is a lack of data to show whether HP capacity strengthening was based on any systematic assessment prior to the training.
Recently, South Africa finalized its first national HP Policy and Strategy (2015-2019) based on a number of international and regional declarations on HP (Department of Health, 2014). The policy states that successful implementation depends on a variety of components, including the establishment of HP norms and standards of operating procedure, adequate financing, and a clear plan to build HP practitioners’ (HPPs) capacity (Department of Health, 2014). This paper addresses an important issue in the HP field: capacity mapping at government levels, which should guide planning, implementation and evaluation of HP programme activities. Therefore, the aim of this study was to assess collective and institutional systems capacity to prioritize, plan, deliver, monitor and evaluate HP within the DoH in South Africa.
Conceptual framework
We used an adapted HP capacity assessment framework, which informed the development of the data collection tool (Jana et al., 2018). The framework assessed current collective capacity in three broad areas (Table 1): (i) HP technical capacity: specific capacity linked to planning, designing, implementing, budgeting, monitoring and evaluating HP programmes; (ii) coordinating capacity: the capacity of the organization to coordinate and lead multi-sectoral collaboration; and (iii) systems capacity: the capacity of the wider organization to support HP programmes, such as communication mechanisms, policies, priority setting and human resources. This last domain also influences the others. The domains we assessed overlapped substantively with capacity mapping conducted elsewhere (Barry et al., 2009; World Health Organization, 2010).
Table 1:
Description of the three broad capacities, domains and their definitions
Capacities | Domains | Definition
---|---|---
HP technical capacity | Plan and design | Includes HP competencies needed to effectively plan and design programmes. For example, conducting a situation analysis to guide HP programmes based on evidence; setting priorities; designing an appropriate HP approach to address the identified health or other social barriers to change
HP technical capacity | Implement and monitor | Best practices for implementing and monitoring HP programmes: developing and using programme implementation and monitoring plans; supervision and mentoring; having HP staff development plans
HP technical capacity | Evaluate, scale and sustain | Competencies needed to evaluate HP programmes and to scale and sustain HP programmes. For example, documenting and disseminating results; adapting and adjusting programming based on data for sustainability and scale-up
Coordinating capacity | Coordinate | Identifying and building multi-sectoral collaborations; coordinating HP within the DoH; collaboration with other stakeholders outside the DoH, such as other government and non-governmental institutions; engaging with external evaluators
Systems capacity | Institutional systems | Institutional systems within the DoH that are essential to lead, coordinate and harmonize HP programmes. Includes systems that directly influence HP intervention planning: human resources (recruiting, supervising and supporting personnel), management information and reporting systems, budgeting, and the policy and strategic plan
Adapted from Jana and colleagues (Jana et al., 2018).
METHODS
Study design
We used a concurrent mixed methods design to assess organizational HP capacity at three levels of the South African DoH (national, provincial and district). This approach allowed for a self-assessment of collective and institutional capacity, in addition to in-depth exploration of the reasons behind the scores. Data were collected at the same time to enhance interpretation of findings (Zhang and Creswell, 2013). The study was primarily qualitative in nature, with the quantitative self-assessment occurring during the same session as the discussions. Participants had to reach consensus on the capacity scores. The questionnaire used for the quantitative self-assessment contained the open-ended, non-directive questions that guided the group discussions.
Study sample
We collected data from five study sites: the national HP directorate, two provinces and one district from each of the two selected provinces. Study sites were purposively selected based on the availability of stable HP structures within the province and district. We selected provinces based on perceptions that they had ‘stronger HP models’ and on higher numbers of designated HP staff. Districts were selected based on employing a greater number of HPPs. Twenty-eight DoH staff participated: 6 national, 6 provincial (3 HP managers and 3 non-HP managers) and 16 district and sub-district HP coordinators. Names of provinces and districts have been anonymized (Provinces A and B, plus Districts A and B, respectively).
Data collection
Data were collected through five one-day workshops (December 2017 to February 2018). Adapted participatory HP capacity assessment tools (HP CATs) were used to collect both qualitative and quantitative data (Jana et al., 2018). The HP CAT was developed by USAID under the Health Communication for Life Project, as part of work to strengthen social and behaviour change communication (SBCC) capacity among Malawi’s HP staff in 2016 (Jana et al., 2018). Three versions of the tool were adapted, depending on the level at which it was administered (national, provincial or district). The same questions had to reflect HP activities appropriate to either the strategic or the coalface level of implementation. Changes made to the tool included use of the terms HP, DoH or HP directorate and sub-directorate in place of SBCC, Ministry of Health or organization, respectively. The tool was administered through extended focus-group workshops involving extensive discussion with teams of DoH staff. The purpose of the workshops was for DoH HP staff to self-assess their collective and institutional capacity in the three main areas outlined in Table 1. Ethical approval was obtained from the University of the Witwatersrand and the DoH. All participants provided informed consent.
Participants discussed each question posed by the researchers and had to agree on a collective capacity score using a four-point Likert scale (1-4). Scores of 1.00-1.49 were Stage 1, indicating capacity was not present; 1.50-2.49 were Stage 2, indicating capacity was present but with no or poor application; 2.50-3.49 were Stage 3, indicating some application and adherence; and 3.50-4.00 were Stage 4, indicating complete application and adherence. Group discussions were audio-recorded with consent, allowing further rich qualitative data to be collected. Workshops lasted an average of eight hours, ranging from six to ten.
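To make the staging bands concrete, the minimal sketch below maps a consensus score on the four-point scale to its stage label. It is purely illustrative: the function name and the bounds check are assumptions of this example, not part of the study's analysis.

```python
# Illustrative only: maps a consensus score (1.00-4.00) to the capacity
# stage bands described above. Names are hypothetical, not from the study.
def capacity_stage(score: float) -> str:
    """Return the capacity stage label for a score on the 1-4 Likert scale."""
    if not 1.0 <= score <= 4.0:
        raise ValueError("score must lie between 1.00 and 4.00")
    if score < 1.5:
        return "Stage 1: capacity not present"
    if score < 2.5:
        return "Stage 2: present but no/poor application"
    if score < 3.5:
        return "Stage 3: some application and adherence"
    return "Stage 4: complete application and adherence"


print(capacity_stage(2.08))  # the overall mean reported in this study falls in Stage 2
```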
Data analysis
Capacity scores recorded on the HP CATs were entered into an MS Excel spreadsheet and then imported into STATA 13 software for analysis. Thematic content analysis of the discussions was supported by MAXQDA 2018 software.
Quantitative data
The data were cleaned and checked for completeness and accuracy. We created composite scores for each sub-domain by adding the scores for its constituent questions. We used descriptive statistics, calculated and presented as mean scores with standard deviations (SDs), to identify lower and higher capacity in each domain and across the five study sites. A higher score indicated higher self-assessed collective and institutional HP capacity.
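As a rough illustration of this step, the sketch below summarizes one sub-domain's capacity across the five sites as a mean with SD. The numbers, names and the averaging used to keep composites on the 1-4 scale are assumptions of this example; the study's actual analysis was carried out in MS Excel and STATA 13.

```python
# Illustrative sketch with made-up numbers; not the study's data or code.
from statistics import mean, stdev

# Hypothetical consensus scores (1-4) per question within one sub-domain,
# for each of the five study sites.
sub_domain_scores = {
    "National":   [1.0, 2.0, 2.0],
    "Province A": [2.0, 2.0, 1.0],
    "Province B": [2.0, 3.0, 2.0],
    "District A": [1.0, 2.0, 2.0],
    "District B": [2.0, 2.0, 3.0],
}

# Combine the question scores per site (averaged here so the composite
# stays on the 1-4 scale; the paper describes adding question scores).
site_composites = {site: mean(qs) for site, qs in sub_domain_scores.items()}

# Overall sub-domain capacity across sites, reported as mean (SD).
overall = mean(site_composites.values())
sd = stdev(site_composites.values())
print(f"Sub-domain mean score: {overall:.2f} (SD = {sd:.2f})")
```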
Qualitative data
Transcribed data were checked against the original recordings to ensure accuracy. Based on the standardized HP CAT questionnaire, deductive or topic codes were identified using the five domains of the tool: plan and design; implement and monitor; evaluate, scale and sustain; coordinate; and institutional systems, while the sub-domains were used to develop the sub-codes (Table 1). These codes were supplemented with inductive codes that emerged during analysis of the workshop discussions. Trustworthiness of the data was supported by having one of the co-authors check parts of the coded transcripts and through verification across the multiple methods used. Participants also provided some documents to verify their self-assessments.
FINDINGS
The majority of study participants had worked in the DoH for over 10 years (range: 6 months to 30 years). Participants were aged 35 years or older. Qualifications varied with the position of the participant. Most of the managers had a tertiary-level qualification, including a post-graduate degree, whereas other staff had a high school certificate.
Capacity to effectively plan and design HP activities
Conducting situational analyses
The understanding of what constituted a situation analysis varied across the different levels of the health system. At the national level, it was expected that the HP directorate would have the capacity to review and interpret the national burden of disease and risk factors and to conduct desk reviews of motivational and other drivers underlying behaviours, environments and policies. At the district level, there was an expectation of collective capacity to conduct community assessments. Groups at all levels revealed that capacity to conduct such situation analyses was very limited, and in most cases they were not done: ‘It’s prescribed what we need to focus on. So we do not have to conduct any assessments’ (District B). In the very limited circumstances where elements of a situation analysis were done, they were conducted informally. Provincial and district participants commented that there were HP activities developed by national programmes (e.g. non-communicable diseases), which sometimes contradicted HP priorities at sub-district or facility level: ‘HP activities should also be based on what is happening in the area’ (Province A). These discrepancies were commonly discussed during all workshops.
Needs assessments were sometimes conducted in the form of ‘community disease profiling’, which ‘will influence what programmes you initiate’ (District B). However, few participants reported the collective capacity to interpret statistics about the prevalence of various diseases: ‘Data literacy, we don’t know it. Sometimes even during a review, they’ll tell you about 50%, and we don’t know what 50% is. It doesn’t make sense until they can explain’ (District A). HP activities were also determined by firefighting disease outbreaks that required immediate attention, and there was a perception that there was no time to conduct a situation analysis during such instances. Emergencies may also be used as an excuse not to do situation analyses.
Capacity to use behaviour change theories was limited, and they were not used for planning activities. Staff competencies in this area were identified as a major barrier: ‘people are planning activities for HP without any HP degree or academic training and knowledge of actual behaviour change’ (Province B). Some participants were critical of the use of theory, stating that theories were abstract and impractical: ‘Theories work in an ideal situation. Unfortunately our situation is not ideal’ (District B).
Using data and evidence in priority-setting
Participants described how national programmes generally set priorities. As one of the district staff members stated: ‘Somebody somewhere decides for us’ (District A). These priorities were thought to be influenced by politics: ‘It depends on whether that disease is high on the political agenda, like HIV and TB’ (National HP). This top-down approach was often perceived to ignore local needs. An explanation for this was the absence of a formal system to collect HP indicators that could feed into decision-making: ‘We don’t have reporting structures, tools, and data collection instruments for information to move up all the levels’ (National HP). This may also be caused by decades of focus on implementation and little action on monitoring and evaluation. Top-down pressure was exacerbated at the district level by the needs of the clinics or the district. For example, if immunization coverage from clinic statistics was low, then HP staff were expected to carry out activities, like mobilizations or awareness campaigns, to increase coverage.
Due to the pressures of what happens in facilities and sub-districts, health promoters end-up not having the opportunity to plan according to needs or to implement what they have planned. (Province B)
In some places, clinic statistics provided an empirical basis for some targeted activities. This influenced whether health promoters chose to conduct a health or radio talk: ‘statistics coming from the clinic are the ones that inform us there is something wrong, e.g. STIs. We then develop an action plan to address that’ (Province A). Clinic statistics were found to be quite useful in highlighting community needs for HP activity planning at the district level, but were usually interpreted by others, e.g. facility managers.
Developing HP programmes
There was limited capacity to develop long-term, sustainable HP interventions at multiple levels. As a result, the approaches most commonly relied on interpersonal communication, often in the form of a health talk to patients waiting in clinic queues.
Since there was limited capacity in priority setting for HP and in conducting needs assessments, a health calendar was often used to plan HP activities. Produced by the DoH, it highlights particular health issues on particular days of the year, in alignment with some of the world health days. For example, one day may focus on diabetes awareness and another on mental health: ‘we plan our things according to the health calendar’ (District A). The health calendar results in short-term activities rather than sustained, evidence-based programming. HP programming was restricted further by a limited or non-existent budget, and the only available channels were free media or patient education: ‘it’s expensive, we go with freebies’ (National HP).
The reason that HP staff provided for weak collective capacity to develop systematic, evidence-based programmes was limited access to and control over resources. It was not possible to drive, coordinate and control HP within the DoH: ‘we piggy-back on other programmes…it’s often that approaches are determined by the resources we have’ (National HP). This highlights a weakness in the system.
Designing HP campaigns and materials
Participants indicated that there was limited collective capacity to develop HP activities or materials. Provinces and districts received ready-made information, education and communication (IEC) material from various programmes: ‘we just get whatever comes. Last time it was Khomanani [an HIV prevention programme]. Now we are getting stuff from PHILA [a national HP campaign addressing a wide range of health issues]’ (District A). ‘We don’t have enough resources to produce our own and it won’t be approved if we do it independently. The whole idea is for PHILA to do it’ (National HP). At the time of the study, PHILA was responsible for a national HP campaign that included billboards and other material with key messages.
HP budget and resource allocation
Participants described limited capacity for HP-specific budgeting to support planning. Each year, HP was expected to submit costed plans, but a budget was rarely made available: ‘We meet just for the sake of meeting and costing an operational [plan]…submit it, but we don’t get the budget’ (Province B). When there was a budget, there were no systems to track or monitor it: ‘Two years ago, they said we overspent. How do we overspend when we don’t have money?’ (Province B).
Some vertical programmes within the DoH were better resourced for HP activities and, instead of working with existing HP staff and structures, created a parallel system, e.g. the advocacy, communication and social mobilization staff funded through conditional grants in the HIV/AIDS and TB Cluster. The HP directorate seemed to have limited capacity to take on these responsibilities, claim them for itself and motivate for joint programming.
In summary, the HP programme scored its collective planning and designing capacity as absent-to-limited (Table 2), with an overall mean score of 1.85/4.00 (SD = 0.34). The sub-domain of collective capacity to use data and evidence in priority setting for HP had the highest overall mean score of 1.98/4.00 (SD = 0.33), which was supported by the qualitative findings.
Table 2:
DoH HP capacity self-assessment scores
Domains and sub-domains | National | Province A | Province B | District A | District B | Mean score (SD)
---|---|---|---|---|---|---
Plan and design | ||||||
Situational analysis | 1.00 | 1.33 | 2.33 | 2.00 | 2.00 | 1.83 (0.55) |
Using data and evidence in priority setting | 2.40 | 2.00 | 2.10 | 1.40 | 2.00 | 1.98 (0.33) |
Budgeting for HP activities | 1.00 | 1.00 | 2.00 | 2.00 | 4.00 | 1.83 (1.17) |
Developing an HP communication strategy | 1.70 | 1.80 | 2.00 | 2.20 | 1.90 | 1.95 (0.19) |
Designing campaigns and material development | 1.67 | 1.00 | 2.25 | 1.67 | 1.67 | 1.63 (0.40) |
Mean score (SD) | 1.55 (0.58) | 1.43 (0.46) | 2.14 (0.15) | 1.85 (0.32) | 2.31 (0.95) | 1.85 (0.34) |
Implement and monitor | ||||||
Coordination of implementation | 1.90 | 1.40 | 3.20 | 1.40 | 3.60 | 2.15 (1.00) |
Monitoring of implementation | 1.43 | 2.43 | 3.00 | 1.57 | 2.57 | 2.24 (0.61) |
Mean score (SD) | 1.67 (0.33) | 1.91 (0.73) | 3.10 (0.14) | 1.49 (0.12) | 3.09 (0.73) | 2.19 (0.71) |
Evaluate, scale and sustain | ||||||
Commissioning and conducting outcome evaluations | 1.00 | 1.00 | 1.33 | 1.00 | 1.37 | 1.17 (0.28) |
Re-planning based on data | 1.33 | 1.67 | 2.00 | 2.00 | 3.33 | 2.00 (0.70) |
Quality assurance | 1.50 | 1.00 | 1.00 | 1.00 | 1.00 | 1.17 (0.26) |
Mean score (SD) | 1.28 (0.25) | 1.22 (0.39) | 1.44 (0.51) | 1.33 (0.58) | 2.00 (1.25) | 1.44 (0.28) |
Institutional systems | ||||||
Institutional priorities | 3.67 | 4.00 | 3.67 | 3.00 | 3.67 | 3.50 (0.41) |
Institutional mandate and operations | 2.75 | 3.75 | 3.00 | 3.25 | 2.75 | 3.20 (0.46) |
Staffing structure | 2.00 | 2.75 | 3.38 | 1.50 | 3.75 | 2.65 (0.84) |
Staffing retention and management | 3.00 | 3.67 | 2.33 | 1.00 | 3.00 | 2.78 (1.00) |
Resource allocation | 1.00 | 4.00 | 1.00 | 1.00 | 3.50 | 2.42 (1.56) |
HP coordination | 1.50 | 1.83 | 3.33 | 3.00 | 2.67 | 2.41 (0.71) |
Mean score (SD) | 2.32 (1.00) | 3.33 (0.83) | 2.78 (0.99) | 2.22 (1.07) | 3.22 (0.48) | 2.82 (0.51) |
Key for the presence or absence of function and/or system: Stage 1 = 1.00-1.49, absent/not present; Stage 2 = 1.50-2.49, present, limited capacity; Stage 3 = 2.50-3.49, present, regular capacity; Stage 4 = 3.50-4.00, present, full capacity.
Capacity to implement, monitor and evaluate HP activities
Inadequate systematic monitoring of HP activities
We found limited organizational capacity to capture and submit HP monitoring information from provinces to the national HP directorate. Although some indicators existed in the HP strategic plan, there were no tools for data collection. There was consensus that there are no routinely collected HP indicators in the District Health Information System. This meant that the contribution of HP to any health outcomes could not be quantified: ‘even when general indicators within the Department are being achieved, we cannot prove HP assisted with it’ (National HP). Some stated that they collected monitoring data using a non-HP-specific template: ‘The template we are using is for healthy lifestyles [a major activity under HP]. It does not have all the activities for HP’ (Province A). This piecemeal approach was attributed to inadequate structures and systems to collate HP information. Some lower-level participants mistook routine report writing for monitoring and evaluation (M&E). National-level staff described bigger challenges with monitoring, as ‘proxy indicators’ are used, such as ‘reducing risk factors of non-communicable diseases’.
Again, participants emphasized fragmentation of HP implementation within the DoH, where vertical programmes such as HIV/AIDS have their own HP structures: ‘big programmes like HIV, TB and child health have their own HP-type of indicators, which we are not part of, as they have funding’ (National HP). Where other HP-related programmes had measurable outcomes, the HP directorate did not have the capacity to adopt them. A spirit of ‘no, can’t do’ dominated.
Commissioning and conducting HP programme evaluation
There was consensus across all levels that there was no internal capacity within the DoH to conduct or commission evaluations: ‘HP is not being evaluated. We implement only’ (District B). National HP staff members were concerned about the effect of programmes: ‘We don’t see the impact, because HP does not have direct results. For example, in creating awareness for antenatal care [ANC]. After mobilization as HP, many pregnant mothers may present for early booking. You ask yourself did HP achieve this. It is not easy to say, if their intervention was helpful’ (National HP). Since HP programmes were not being evaluated, it was not possible for HP staff to link awareness activities to ANC visits. Yet reaching targets on measured programmes is itself a form of evaluation, although the contribution of HP to those targets could not be measured. The tendency not to take any credit, even for reach, could be explained by the low morale among HP staff.
Re-planning based on evaluation data and formative research
Views on capacity to re-plan based on data varied across levels. The kinds of data that informed HP interventions were on disease prevalence or service uptake (e.g. immunization coverage), rather than the results of evaluations of HP programmes, which would show whether there were desired changes in individuals or environments. District levels showed better capacity to use clinic data for planning activities like health talks or awareness campaigns, as mentioned earlier.
Quality assurance and coordination of implementation
When asked about the capacity to carry out quality assurance checks to determine whether predefined standards were met, there was consensus that ‘there is no system to monitor quality of activities’ (Province B). Quality assurance was an area that participants had not thought about much prior to participation in the capacity assessment. It is an example of an area where the DoH could assess not only materials and activities within the DoH, but also those developed by non-governmental organizations, both for-profit and non-profit. A participant stated, ‘We don’t have quality assurance. We need a standard for our work we can go and check ourselves against whatever we are doing’ (District A). There was very limited institutional capacity to coordinate activities within the DoH, particularly for the national HP directorate, which had not considered that its role could extend beyond the DoH to other stakeholders.
In summary, all levels had limited capacity to monitor and evaluate HP activities and programmes, with a mean score of 2.19/4.00 (SD = 0.71) (Table 2). Provinces and their districts had similar results for the implementation and monitoring domain. Capacity to evaluate, scale and sustain HP activities had the lowest scores. All sites had mean scores ≤2.00/4.00 (range 1.22 − 2.00), indicating HP evaluation capacity was mostly perceived as absent.
The state of capacity in institutional systems to support HP work
Institutional HP mandate and operations
In order to understand institutional HP capacity, participants believed that it was important to understand the history of HP within the DoH. HP was introduced in the 1990s, when existing cadres of family planning advisors were incorporated into the programme by simply changing their job titles, without any retraining or a guiding policy. The HP policy was first introduced in 2015, after being in draft for more than two decades, indicating that institutional constraints affected HP work.
The official national HP policy guides programming across all levels of the DoH. Although HP staff were grateful for the HP policy document and strategic plan, they did not fully support the contents, stating, ‘When you look at the policy, it is more theoretical than practical’ (Province B), implying the policy is perceived as not relevant to local needs. Some district participants were unaware of the HP policy: ‘there are no policies… we are not given’ (District A). From an institutional capacity perspective, this shows that while guiding documents exist, they have not been well communicated to all staff and are not perceived to be very practical.
Institutional constraints to HP capacity
Participants viewed institutional systems as constraining the implementation of HP activities. In terms of HP reporting, two lines of authority were in place. Frontline health promoters based in clinics reported to both a facility manager and an HP sub-district manager. As some participants described: ‘so there is always dual reporting except at district. It is very awkward, because there is no centre of power’ (District B). This meant HP activities became disjointed due to local power dynamics, as the sub-district HP manager generally deferred to the facility manager.
Occupational classification
A structural barrier to institutional capacity was the lack of uniform job descriptions and the absence of an occupational category for HP within the DoH human resources system. National-level participants stated, ‘There is no standardization of ranks, structures, salary levels. We are looking at creating an occupational class for HP; and having them registered with a professional body’ (National HP). Only one province (Province A) had professional recognition of health promoters.
‘The role and function of HPPs differs in terms of what they do. There is no uniformity throughout the country … their occupational classes vary from community liaison officers, communication officers, auxiliary service officers or assistants’ (National HP).
HP workforce, recruitment and retention
In general, retention of HP staff was very high. Many staff had been in their positions for many years. Various reasons were provided for this, ‘there’s no formal education to do HP. I cannot leave. I must stay here, because I don’t have qualifications to apply for another [job]… those who have left for greener pastures, were qualified in other disciplines’ (District A).
However, where posts became vacant due to staff retiring, they were not filled because posts were frozen. It should be noted that in one province all posts were frozen (this was not specific to HP). A participant elaborated: ‘if people are in a post they stay forever until they retire’ (Province B). The increase in the number of unfilled HP posts in Province A was verified with supporting documents from district reports.
Challenges that limited institutional capacity included budgetary constraints and a lack of clarity regarding minimum qualification levels, ‘there is two parts, qualifications and filling posts because of money. Our problem is both’ (National HP). Most HPPs entered the field by chance. Some staff had a high school qualification, while others had done in-service training or diplomas. Few staff had HP-specific qualifications.
Both provinces under study had developed an HP orientation manual to address the inadequate qualifications: ‘we developed an orientation manual. We realized our health promoters are appointed without any HP qualifications. Even the managers, most of them don’t have any HP qualification’ (Province B). Tension exists between the need for qualified staff and the expectations of graduates for higher positions. Some participants articulated, ‘If we are talking about implementation, you want foot soldiers, like a mid-level worker. These HP graduates all want to be managers. …We are not saying we don’t need graduates, we do; at the same time you can’t just go into a job and be a manager’ (National HP).
Stakeholder coordination
Capacity to engage with stakeholders was variable. Province B reported engaging a wide range of internal and external stakeholders on priority setting or planning. HP staff collaborated with other departments within the DoH and with community stakeholders. However, engagements with other stakeholders did not extend to coordinating activities, as mentioned earlier. This is problematic, as one of the main aspects of successful HP is multi-sectoral collaboration to address determinants of health. National-level HP staff were least likely to report engaging with external stakeholders. This may be explained by the fact that provinces and districts are implementers, while the national level is meant to provide strategic direction and technical assistance to provinces. However, our findings suggest that national-level HP has a very limited role in guiding what happens in provinces. National participants articulated that another structural barrier was ‘provincial autonomy’, described as provinces being able to run HP independently of the national HP structures. This resulted in limited collaboration between the national and provincial HP levels within the DoH.
In summary, the capacity of institutional systems had the highest scores in the assessment (Table 2), with a mean score of 2.82/4.00 (SD = 0.51). Institutional systems directly influenced planning: communication mechanisms, human resource systems, and management, information and reporting systems.
DISCUSSION
HP capacity gaps existed across all three levels of the South African DoH. Capacity gaps occurred in all domains assessed and were compounded by serious structural divides between the national and provincial HP levels. Lack of regular contact between national HP and the provincial-level directorates resulted in limited monitoring of activities and centralized strategic planning. This was further impeded by the lack of HP-specific indicators being monitored and reported in the health information system, and by a failure to integrate and use what could be borrowed from other programmes. HP staff, particularly at the district level, were aware of some local health needs, based on clinic-facility performance and statistics. These sometimes contradicted the national strategic plan. Lack of external and internal HP coordination among national HP staff was evident. The qualitative findings largely aligned with the collective capacity scores in each domain. Institutional capacity was an exception, where scores suggested greater capacity to support HP, but the qualitative data revealed substantive barriers. An example was the re-direction of the HP budget, which emerged as a major challenge to HP planning, with participants reporting few resources to conduct HP activities at any level. Such institutional constraints further reduced HP capacity within the DoH. The HP directorate and provinces engaged in the same practices that had been implemented for years, without consideration of whether they were achieving results or whether there were gaps.
Scholars argue that there is limited infrastructure and capacity to support HP delivery in low- and middle-income countries (LMICs), because available resources are usually allocated to medical and preventive approaches (Mahmood, 2015). Results from our South African study confirm this argument. High-income countries (HICs) seem to have better HP capacity to achieve public health goals (Lin and Fawkes, 2005), because HP capacity development occurs as part of ongoing health system strengthening efforts (Mahmood, 2015). In addition, there are clear entry requirements for HP professionals in HICs such as Australia, which specify qualification and experience requirements, and opportunities for Masters programmes and specialized in-service training are available and in some cases funded (Shilton et al., 2008; Mahmood, 2015). In South Africa, HP competencies have not yet been clearly defined. Attention is therefore needed on how to strengthen HP infrastructure, capacity and organizational performance in LMICs (Battel-Kirk et al., 2009).
Inadequate HP qualifications are a challenge within the DoH. Those appointed have mostly learned HP skills on the job and are limited in their capacity to conceptualize or plan programmes beyond what has been implemented for years. An example of this is the reliance on the health calendar to plan activities. Training inadequacies among DoH HP staff have long been established (Onya, 2007; Wills and Rudolph, 2010). In-service training was provided in some instances; however, it mostly focused on health education. Some HP staff have individually enrolled in HP courses or programmes, even at the post-graduate level, but are often constrained in implementing new ways of doing things because of institutional barriers and a non-supportive environment, which limits HP practice. Our findings are similar to those from the Western Pacific, South Korea and parts of Peru, which found gaps in professional skills development for HP staff (Lin and Fawkes, 2005; Nam and Engelhardt, 2007; Cosme Chavez et al., 2017). This might be one of the reasons HP is attributed a low status. LMICs need to develop HP capacities and competencies to strengthen implementation among the HP workforce.
Self-limiting aspects were also found, such as siloed attitudes among front-line HP workers, low morale and feelings that ‘a lot is impossible’. The reason for this could be rooted in the history of HP in South Africa and decades of limited institutional support for HP. A Canadian and Australian study revealed institutional constraints to be pertinent to moral distress among the HP workforce (Sunderland et al., 2010). In addition, at the national HP directorate there was a lack of capacity to lead multi-sectoral coordination as a central role. However, tobacco control work emerged as an exception, where the national HP directorate took ownership and provided leadership across sectors (Rwafa-Ponela et al., nd). These findings suggest a need to look at the whole system, rather than focusing on particular levels in the health system or on particular capacities, if we are to effectively close the capacity gaps needed to promote health and address determinants (Lin et al., 2009; Mahmood, 2015). In addition, multi-sectoral collaboration is required to build Health in All Policies and create sustainable health-promoting systems (Agarwal et al., 2009; Mahmood, 2015).
Our study highlighted several institutional barriers to HP capacity within the South African DoH, including re-directed and/or non-existent HP budgets and a lack of resources. These findings resonate globally (Lin and Fawkes, 2005; World Health Organization, 2010; Cosme Chavez et al., 2017). Limited financial resources persistently hinder HP strengthening efforts at every level and inhibit HP’s ability to drive, coordinate and control HP within the DoH or outside government. Although systems capacity was rated higher than other domains in our assessment, institutional system constraints were evident. In particular, we noted a lack of HP-specific indicators, limited data use, and complex chains of command and responsibility. Our assessment tool may not have been sensitive enough for scores to reflect all the institutional barriers, as it focused more on the presence of structures than on their functionality. Capacity-strengthening initiatives need to address health information systems, with indicators that are focused specifically on HP, as staff capacity alone is not sufficient to address the gaps shown. HP implementation has to be sensitive to local needs, with the national level providing vision and strategy and lower levels paying attention to their specific local contexts. Therefore, there is a need to strengthen skills targeted at the different HP levels, rather than a ‘one-size-fits-all’ approach.
Study limitations
Results of this capacity assessment should be interpreted in light of some limitations. Social desirability bias may have occurred, with participants possibly overstating their HP capacity. Evidence was requested from participants to verify their self-assessments. Sometimes the evidence was not provided, and it was not possible to determine whether the participants failed to follow through or did not have the evidence. In addition, reaching consensus for a particular score was not always easy and, in a few instances, an average score was given because participants felt that consensus was not possible. Capacity domains sometimes overlapped, e.g. monitoring was captured as a technical skill but was also a gap in the institutional system. Overall, participants commented that the workshop was the first opportunity to reflect systematically and collectively on their jobs and roles. Robust discussions occurred, allowing for the recognition of gaps and blind spots in HP implementation and support. Discussions were open despite the different ranks of participants present; in most cases, senior staff allowed more junior staff to engage and respond before adding their thoughts. One exception was the workshop in District B, which was dominated by the district HP manager, who felt the need to speak on behalf of the group. The findings of the study are not generalizable to the country. However, we believe that the findings could be considered transferable to other districts in the two selected provinces. Other provinces that have similar structures and cadres of HP staff with similar characteristics and experience may have similar levels of capacity.
CONCLUSION
This assessment adds to existing international efforts to map HP capacity. It provides evidence from an African middle-income country, which can be used to inform capacity-strengthening efforts. There is a need to overcome institutional barriers and strengthen capacity for HP implementation, support and evaluation within the South African DoH. Monitoring systems and assessment tools need to be developed and implemented.
ETHICS APPROVAL AND CONSENT TO PARTICIPATE
Ethics approval was received from the Human Research Ethics Committee (HREC) at the University of the Witwatersrand and from the Department of Health (national level and the two provinces). Districts approved data collection. Participation in the study was voluntary, and all participants gave written informed consent to participate in the workshops. Before the workshops began, participants went through an informed consent process, which included separate audio-recording consent forms. Names of provinces and districts have been anonymized.
SUPPLEMENTARY MATERIAL
Supplementary material is available at Health Promotion International online.
AUTHORS’ CONTRIBUTIONS
T.R. conducted the data collection, analysis and write-up of the first drafts of the paper. N.C. collected the data, contributed conceptually to the analysis and writing of the paper. J.E. commented on paper drafts. J.G. reviewed, commented on drafts and writing of the paper. All authors read and approved the final version of the paper.
ACKNOWLEDGMENTS
The authors would like to thank all study participants for their valuable discussions and contributions.
FUNDING
This work was supported by the South African National Research Foundation (NRF), and J.E.—the South African Research Chair (SARChi) in Health Systems and Policy, at the Centre for Health Policy, School of Public Health, Faculty of Health Science, University of the Witwatersrand.
CONFLICT OF INTEREST STATEMENT
None declared.
REFERENCES
- Agarwal S., David A., Brink E. J., Pettersson B., Lin V. (2009) A Primer for Mainstreaming Health Promotion. Prepared for the 7th Global Conference on Health Promotion: “Promoting Health and Development: Closing the Implementation Gap”, 26–30 October 2009, Nairobi, Kenya.
- Aluttis C., Van den Broucke S., Chiotan C., Costongs C., Michelsen K., Brand H. (2014) Public health and health promotion capacity at national and regional level: a review of conceptual frameworks. Journal of Public Health Research, 3, 37–42.
- Barry M. M., Allegrante J. P., Lamarre M.-C., Auld M. E., Taub A. (2009) The Galway Consensus Conference: international collaboration on the development of core competencies for health promotion and health education. Global Health Promotion, 16, 5–11.
- Battel-Kirk B., Barry M. M., Taub A., Lysoby L. (2009) A review of the international literature on health promotion competencies: identifying frameworks and core competencies. Global Health Promotion, 16, 12–20.
- Catford J. (2005) The Bangkok Conference: Steering Countries to Build National Capacity for Health Promotion. Oxford University Press, Oxford.
- Cosme Chavez R., Yoon Y. M., Nam E. W. (2017) A local-level evaluation of capacity for health promotion in Lima, Peru: Comas and Callao health center areas utilizing the WHO health promotion capacity profile. International Journal of Health Promotion and Education, 55, 318–332.
- Dejoy D. M., Wilson M. G. (2003) Organizational health promotion: broadening the horizon of workplace health promotion. American Journal of Health Promotion, 17, 337–341.
- Department of Health. (2014) The National Health Promotion Policy and Strategy 2015-2019. Government Printer, Pretoria, South Africa.
- Ebbesen L. S., Heath S., Naylor P.-J., Anderson D. (2004) Issues in measuring health promotion capacity in Canada: a multi-province perspective. Health Promotion International, 19, 85–94.
- Jana M., Nieuwoudt S., Kumwenda W., Chitsime A., Weiner R., Christofides N. (2018) Measuring social and behaviour change communication capacity in Malawi. Strengthening Health Systems, 2, 69–73.
- Lin V., Engelhardt K., Mercado S., Fawkes S. A., Lee T. (2009) Building Capacity Through “Reflective Learning-Action Systems” (Release) Towards High Performing Health Promotion Systems: Building Capacity for Health Promotion. World Health Organization, Geneva.
- Lin V., Fawkes S. (2005) National Health Promotion Capacity Mapping in the Western Pacific Region: Final Report. Prepared for the WHO Regional Office for the Western Pacific. La Trobe University, Melbourne.
- Mahmood S. (2015) Health promotion capacity mapping in low and middle income countries. PhD Monograph, National University of Ireland.
- Mittelmark M. B., Fosse E., Jones C., Davies M., Davies J. K. (2005) Mapping European capacity to engage in health promotion at the national level: HP-Source.net. Promotion & Education, 12, 33–39.
- Nam E. W., Engelhardt K. (2007) Health promotion capacity mapping: the Korean situation. Health Promotion International, 22, 155–162.
- Onya H. (2007) Health promotion in South Africa. Promotion & Education, 14, 233–237.
- Rwafa-Ponela T., Goudge J., Christofides N. (nd) Institutionalization of Health Promotion in the South African Health System: A Qualitative Case Study—‘The One that Pays You Has No Name for You’.
- Shilton T., Howat P., James R., Hutchins C., Burke L. (2008) Potential uses of health promotion competencies. Health Promotion Journal of Australia, 19, 184–188.
- Smith B. J., Tang K. C., Nutbeam D. (2006) WHO health promotion glossary: new terms. Health Promotion International, 21, 340–345.
- Sunderland N., Catalano T., Kendall E., McAuliffe D., Chenoweth L. (2010) Exploring the concept of moral distress with community-based researchers: an Australian study. Journal of Social Service Research, 37, 73–85.
- Van Den Broucke S., Jooste H., Tlali M., Moodley V., Van Zyl G., Nyamwaya D. et al. (2010) Strengthening the capacity for health promotion in South Africa through international collaboration. Global Health Promotion, 17, 6–16.
- van Herwerden L. A., Palermo C., Reidlinger D. P. (2019) Capacity assessment in public health community interventions: a systematic review. Health Promotion International, 34, e84–e93.
- Wills J., Rudolph M. (2010) Health promotion capacity building in South Africa. Global Health Promotion, 17, 29–34.
- World Health Organization. (1986) The Ottawa Charter for Health Promotion. World Health Organization, Geneva, Switzerland. http://www.who.int/healthpromotion/conferences/previous/ottawa/en/index4.html (last accessed 20 June 2016).
- World Health Organization. (2010) Capacity Mapping for Health Promotion. Regional Office for the Eastern Mediterranean, Cairo.
- World Health Organization. (2017) Shanghai Declaration on promoting health in the 2030 Agenda for Sustainable Development. Health Promotion International, 32, 7.
- Zhang W., Creswell J. (2013) The use of “mixing” procedure of mixed methods in health services research. Medical Care, 51, e51–e57.
- Ziglio E., Simpson S., Tsouros A. (2011) Health promotion and health systems: some unfinished business. Health Promotion International, 26, ii216–ii225.