This review was written after reflecting on my original article, “Effective social justice advocacy: a theory of change framework for assessing progress”, published by RHM in 2011,1 and identifies developments in the SRHR field which would add nuance to the original paper. At its core, the original paper offers a theory-of-change framework for social justice advocacy. It describes broad outcome categories against which activists, donors and evaluators can assess progress (or lack thereof) in an ongoing manner – changes in: organisational capacity, base of support, alliances, data and analysis from a social justice perspective, problem definition and potential policy options, policy and policy implementation, visibility, public norms, as well as population-level impacts. It suggests that using these categories for evaluation enables activists and donors to learn from and rethink their strategies as the political context and/or actors change over time. The paper presents a case study comparing the factors that facilitated reproductive rights* policy wins during the transition from apartheid to democracy in South Africa with the factors that undermined their implementation in the post-apartheid period. It argues that after legal and policy victories had been won, failure to maintain strong organisations and continually rethink strategies contributed to the loss of government accountability and focus on implementation of the new policies. By implication, evaluating effectiveness solely by whether a policy change was achieved does not allow for the ongoing learning needed to keep strategies appropriate. It also fails to recognise that a policy win can be overturned and needs vigilant monitoring and advocacy for implementation. This means that funding and organising advocacy should seldom be undertaken as a short-term proposition. It also suggests that building and maintaining organisational and leadership capacity is as important to enabling success as any of the other outcome categories.
The report from which I drew this article launched me into a career shift in which I focused partly on supporting social justice funders, networks and NGOs in strengthening their strategy, evaluation and learning processes, and partly on doing external evaluations with them. On rereading this article, I asked myself “what has changed or what have I learnt in the twelve years since its publication?” In this note I focus on five areas of personal or field development, each with implications for the funding of SRHR advocacy.
Systems thinking
The paper was written with a systems frame, recognising the challenge of understanding the role and contribution of any single group's advocacy when multiple players all contribute, in different ways and at different moments in the process, to ultimate outcomes.
In the intervening years, frameworks and tools for defining the system of focus have entered mainstream evaluation discourse and practice. Most helpful is guidance on defining the system both when shaping strategy and when evaluating, by asking: what are the key parts or actors, what are each of their perspectives, what are the relationships and interactions between these, and what are the structures that shape those interactions – meaning the social norms, relations of power, policies and standards.†,3,4 This is helpful in defining the boundaries of the system that will be the focus of advocacy, and in helping advocates clarify the forces at play in a particular context and where they can best intervene. Identifying the boundaries of the system from the perspectives of funders, advocacy groups and community groups can also be key in clarifying perspectives and building consensus for action.5 Seen this way, what the paper's case study shows is how the advocacy for policy change cast the boundaries of the system very widely, engaging individuals and institutions far beyond the health system that would have to deliver sexual and reproductive health services. For implementation, however, advocacy groups tended to focus more narrowly on the public health system. Yet it was social and religious forces and norms that would stigmatise abortion providers and make health system managers reluctant to implement the law. It was those forces, playing out in the family, the health system and the education system, that would fail to support young people in building their capacity to navigate their sexual relationships in ways that protected their own and their partners' dignity, rights and health. Those forces would fail to challenge the norms of transactional sex that rendered young women in particular vulnerable to HIV and unwanted pregnancy, and of sexual violence that had the same result, as well as, conversely, the damaging impacts of the criminalisation of sex work.
These norms, including the valorisation of motherhood even for very young women, plus a failing economy and few job prospects, contributed to young women not using contraception or seeking abortions despite these being safe and free, albeit, over time, offered at fewer public health clinics.
A deeper analysis is needed here, but for my purposes the issue is that civil society organisations would have needed to sustain very wide-ranging alliances and do deep community-based work on social norms and values in order to ensure that the progressive laws were actually implemented, that society at large took a stand against gender-based violence and sexual exploitation, and that young women in particular saw value in using contraception and seeking abortions when needed. Moreover, they would have had to sustain attention to, and develop strategies to address, the well-resourced and well-planned anti-choice interventions from US-based initiatives that have increased their scope and influence in South Africa over time. Hence the value of mapping the systems at play as the basis for shaping advocacy strategies.
The implications for funding are tremendous. Funders who set the system boundary at health services failed to recognise the need to build and sustain community-level movements to address social norms, values and behaviours – that is, to define system boundaries much more widely. Indeed, as a general trend, funds for grassroots and local NGOs have declined, with a handful of international NGOs receiving funds for SRHR work in South Africa without incorporating any grounded movement-building dimensions, even while funds are pouring into the country to initiate and sustain civil society groups with an anti-SRHR agenda.6
#ShiftThePower
Another change in this period, which relates to the need for grounded organising and movement-building, has been the #ShiftThePower movement. This was initiated in the first instance by the Global Fund for Community Foundations‡ in 2016 to bring together multiple actors at global and local levels who were already questioning and taking action to “centre people and communities as actors, decision-makers and investors in their own development processes and societies”,7 as laid out in the #ShiftThePower Manifesto for Change. In particular, it has further coalesced the debate and action among international NGOs and funders on where they “fit” into the development and social justice system and how their roles should support rather than shape the agendas of those on the ground. At the same time, it has created a platform and opportunities for shared learning among a wide range of community-based organisations – from local and national foundations to youth groups to women’s rights organisations and artists’ collectives. Language to frame this approach is developing, focusing not only on local resourcing, but also on the building of agency and trust as core to the ability of communities to shape and deliver on their own agendas,8 and to negotiate for resources – whether with governments or funders or corporations – on their own terms. This period has also seen the proliferation of participatory grantmaking,§ in the SRHR field modelled as early as 2009 by UHAI, the East Africa Sexual Health and Rights Initiative, East Africa's first indigenous activist fund for sex workers and sexual and gender minorities, and now burgeoning among both global and local funds that focus on or include SRHR.
One of many challenges here is that this kind of local approach can be reified as the only ethical way of supporting movements, although it too suffers from many of its funders not providing long-term general support which the participatory grantmaking groups could then pass on to grantees. In addition, movements require a wide diversity of capacities and relationships beyond well-organised constituents – for example, research and litigation capacity. To influence change effectively, movements are what Batliwala describes as “meshworks”,9 and this means multiple forms of resourcing are needed.
Methodology
Participatory funds, as well as other funders resourcing small local initiatives, are leading the way in focusing on storytelling as a means of sharing and learning and, in some cases, as an alternative to requiring groups with few or no staff – often working in the funder's language as their second or third language – to write reports to funders.10 Some are highly purposeful in shaping their approaches to explicitly change the donor–community evaluation power dynamic. Even beyond participatory funds, there is growing interest in “trust-based philanthropy” and multiple examples of interview and conversation tools that replace pre-set reporting templates, with their set indicators and assumptions.11 An example of an entire upending of top-down funder assessment approaches is the “pemakna” approach of the Indonesia for Humanity (IKa) fund, which drew on Measuring What Matters,12 one of the key resources of #ShiftThePower. Kamala Chandrakirana of IKa explains, “We want our process to serve as collective spaces for shared learning that engage in a relational approach based on trust and mindfulness.” Identifying evaluators who are embedded in the community, IKa invites them by asking “Do you want to join us in building knowledge from the ground up on the work of social transformation?”13
Outcomes-focused evaluation
One of the many challenges for funders of inviting grantees to share stories of change is how to use them to understand if, and in what ways, multi-grantee or movement funding portfolios as a whole are making progress towards their objectives. Indeed, this challenge holds even when grantees are reporting against templates that ask groups to specify their outcomes, rather than through interviews or storytelling. It is therefore not surprising that there has been a huge growth in use of Outcome Harvesting (OH) as an evaluation approach – illustrated, for example, in How has work funded by Comic Relief's Power Up programme contributed to shifts in women and girls' power?,14 which uses OH to identify in what ways women had built and influenced power across a cohort of more than 30 women's rights groups from different countries. Conceptualised within a systems framework, OH argues that precisely because of the multiple actors, perspectives and relationships operating in shifting contexts, advocacy initiatives cannot predict their outcomes; hence it is more appropriate to ask what actually happened (an outcome), and then trace backwards to see if an initiative's activities plausibly influenced that outcome, whether directly or indirectly, and without any expectation that that initiative alone would have influenced it.15 Indeed, this and other outcomes approaches such as Process Tracing16 enable one to analyse the contributions of different players to the same outcome.17 Outcomes can be identified in multiple ways, ranging from participatory storytelling processes to reviews of annual reports. Over the last five years, increasing numbers of social and environmental justice-oriented funders, including in SRHR, have advertised evaluation terms of reference that ask for Outcome Harvesting, contribution analysis, process tracing or similar.
These methods have also received some critique because of how they quantify qualitative data, and there is a vibrant community of practitioners working to bring conceptual clarity to some of the tensions identified. However, their focus on articulating, with a very high level of specificity, both outcomes and how an intervention contributed to influencing each outcome provides data about both what happened (the outcome) and the contribution of one or a number of actors. These can be verified, whether through Outcome Harvesting's “substantiation” with external actors, or by subjecting contribution claims to tests of the level of confidence the evidence offers and/or their relative importance in comparison to alternative explanations. In the words of Lynn, Stachowiak and Coffman,18
“When done well, causal analysis can lift up and leverage the power of stories, lived experiences, and multiple ways of knowing. … When we do not use causal analysis, … we lose the ability to test our assumptions, create knowledge about effectiveness that can drive future work, and break through our cognitive and implicit biases.”
Many SRHR groups are starting to use Outcome Harvesting themselves, at times initially in order to strengthen their reporting to funders, but then finding that it helps them to unpack in what ways their strategies are or aren’t working well, whether their assumptions hold, and what strategic shifts they may need to make. For example, the African Women’s Development Fund strengthened the capacity of grantees, including those working on SRHR, to distinguish their outputs from their outcomes and in doing so to tell their stories with greater specificity, building a stronger body of evidence to support their learning.
Principles-focused evaluation
In addition to identifying if or to what extent advocacy is influencing outcomes that do or do not show progress in relation to the overall objective of an initiative, the paper emphasises values as core to social justice advocacy – both its objectives and its processes. In particular, it stresses the importance of those most affected by the issues – in current discourse, those with “lived experience” – being central to both advocacy and its evaluation. This has significant implications for evaluation as it requires, in addition to looking for outcomes, assessing to what extent the advocacy process upholds social justice principles. In this period, literature on “principles-focused evaluation”19 has provided some guidance to support advocacy groups in articulating their principles in evaluable ways, and then to evaluate if and how well they are walking their talk. Patton’s20 GUIDE on “whether principles provide meaningful guidance (G) and are useful (U), inspiring (I), developmentally adaptable (D), and evaluable (E)” gives specificity to this endeavour. Related to this has been growing attention to the need for continued reflexivity of evaluators,21 to hold themselves accountable in their practice, with evaluation associations investing considerable energy in collectively developing and promoting adherence to guiding principles.22 Values-driven evaluation approaches are increasingly making explicit their driving principles, for example in relation to feminist evaluation.23,24
Emergent learning – integrating strategy, evaluation and learning
A continuing challenge, however, is the separation of “M&E” from strategy, with the very words leaving activists feeling disempowered. In the minds of many advocates, and in the tone of many funding report templates, “M&E” continues to be something that someone else does to community or advocacy groups, or that such groups do only for upwards accountability to funders. One of the main arguments I put to groups that contract me for evaluation support is that every strategic activist is continually evaluating the effectiveness of their actions, learning from that, and adapting their strategies accordingly – that strategy, evaluation and learning go together. When they run an event or work on a campaign, they are continually asking “how did that go? What worked, what didn't and why? What should we do differently next time? What should we continue to do?” This is the piece of “evaluation” that frequently gets lost in the focus on indicators and fitting into predictive templates – the process of using the evidence of what happened to think and debate and learn, and from this, to continually restrategise. What has grown in this period is the articulation of this “emergent learning” process and method.25 There are two key implications here. The first is that the evaluation-for-learning process cannot belong to outsiders – those who are doing the work are best positioned to look at the evidence of what happened and work out what it means for the advocacy going forward. Even when an advocacy group or funder hires an external evaluator, the expectation that they will do the evaluation, write a report and go away misses the entire value of evaluation, which is to provide evidence to support learning – and that learning can only be done by those involved in the work. While an external eye may see things and have insights that the group itself had not, the learning has to be done by the advocates themselves.
This is well articulated by Wenger-Trayner,26 who put the experience of agency and meaningfulness at the core of social learning processes. The second, and related, dimension is that stopping to reflect and learn and review strategies ought to be built into the fabric of day-to-day advocacy processes and, in the case of organisations, into job descriptions. Time needs to go not only into gathering and documenting the evidence, but into reflecting – the “L” in the MEL acronym – monitoring, evaluating and learning.
This is seldom budgeted for, especially for advocacy groups funded on a project-by-project basis. There is often no money to look back and really unpack what happened, let alone to stop and reflect on its implications. In the SRHR field, many groups are expressing growing interest in feminist evaluation, which emphasises that evaluation should be by and for participants in an initiative; yet few of these groups can actually resource the learning process, and few funders ensure that the grants they offer are realistic about the resources needed to trace changes over time – often well after a training or advocacy process is completed – and for the group(s) concerned to reflect on findings in relation to their work going forward. For an evaluator committed to fostering social justice in their processes, this often creates insurmountable conundrums, as groups propose feminist principles but cannot find the time to practise them.
Final thoughts
Looking specifically at SRHR, many of the funders focused on specific populations – such as the women's funds and the sex worker and LGBTQI funds – are exploring and innovating in their approaches to evaluation and learning. They are still seeking a workable approach to building solid evidence as the basis for learning in a way that is empowering and meaningful to their constituents. But there remain key funders, of reproductive rights in particular, who ask advocacy groups and networks to commit in advance to set actions in set periods with set indicators, which is not only burdensome but inappropriate for advocacy initiatives. Ironically, this disempowers the very groups they are funding to build power, and thereby stifles grantees' ability to respond to opportunities and needs as they arise. Given the paucity of funding for grounded organising and networking at local, national, regional and international levels, such groups still tend to take the funds and grapple with their evaluation requirements, rather than refusing to be pushed in inappropriate directions – a trend noted by Schlangen and Coe.27
One last reflection is that publishing on advocacy evaluation, and its implications for funders, continues to be dominated by Western journals and authors. Much of the innovation underway, particularly in the global South but also by practitioners in the global North, both in SRHR advocacy and in evaluation and learning approaches, is not written up at all, let alone in the academic literature. Another role funders could play is to encourage and fund groups to document their innovations and share them with the field, both online and in journals. This would be a contribution towards the decolonisation agenda, which has gained more traction in the last decade – in particular, the need to recognise multiple ways of knowing and making meaning.
Footnotes
* The original paper notes that the term “social justice” is used broadly to incorporate social, economic, cultural, civil and political rights. In the current context, the language of “reproductive justice” is replacing that of “reproductive rights” in order to recognise that people's experiences of both sexual and reproductive health and rights differ because of structural inequities, particularly those based on class and race, but also because of diverse forms of stigma and discrimination, including on the basis of sexual orientation and gender identity. However, in South Africa, claims for human rights, and therefore sexual and reproductive rights, have always had this orientation towards justice rather than a narrower legal rights framing.2 The Sexual and Reproductive Justice Coalition elaborated the orientation to “justice” to capture the South African context.
† This particular framing I have taken from Alnoor Ebrahim and Bob Williams.
‡ As a board member of the Global Fund for Community Foundations I have to acknowledge a particular affinity for their approach and the #ShiftThePower platform.
§ By “participatory grantmaking” I am referring to the contemporary approach of funders, rather than the myriad indigenous approaches across the world to collective raising and disbursement of funds or other resources within communities.
Disclosure statement
No potential conflict of interest was reported by the author(s).
References
- 1. Klugman B. Effective social justice advocacy: a theory of change framework for assessing progress. Reproductive Health Matters. 2011;19(38):146–162. doi: 10.1016/S0968-8080(11)38582-5
- 2. Sexual and Reproductive Justice Coalition. Statement of intent; 2016. https://srjc.org.za/statement-of-intent/.
- 3. Ebrahim A. Measuring social change: performance and accountability in a complex world. Stanford: Stanford University Press; 2019.
- 4. Williams B. Using systems concepts in evaluation design – a workbook. Aotearoa: Bob Williams; 2016.
- 5. Coffman J, Stachowiak S, Raynor J. A learning agenda for the advocacy evaluation field's future. New Direct Eval. 2021;171:133–144. doi: 10.1002/ev.20477
- 6. Stevens M. Sexual and reproductive health and rights: where is the progress since Beijing? Agenda. 2021;35(2):48–60. doi: 10.1080/10130950.2021.1918008
- 7. Global Fund for Community Foundations. The Global Fund for Community Foundations; n.d.
- 8. Hodgson J. The case for community philanthropy: how the practice builds local assets, capacity and trust – and why it matters. Aga Khan Foundation, Charles Stewart Mott Foundation, Global Fund for Community Foundations, Rockefeller Brothers Fund; 2013. https://www.mott.org/wp-content/uploads/2013/12/CaseForCommunityPhilanthropy.pdf.
- 9. Batliwala S. All about movements. CREA; 2020.
- 10. Miller K. Piloting trust-based reporting at Malala Fund. Impact Mapper; 2023.
- 11. Trust Based Philanthropy Project. Template collection: grant reporting. Trust Based Philanthropy Project; n.d.
- 12. Knight B, Doan D. Measuring what matters; 2020.
- 13. Chandrakirana K. Measuring what matters, one pemakna at a time. Global Fund for Community Foundations; 2022. https://globalfundcommunityfoundations.org/blog/measuring-what-matters-one-pemakna-at-a-time/.
- 14. Klugman B. How has work funded by Comic Relief's Power Up programme contributed to shifts in women and girls' power? 2021. https://assets.ctfassets.net/zsfivwzfgl3t/37b92Qfj8fniplVyHRjfyd/26b378fdc558c6fea0076dbdd7b213c5/Power_Up_Outcome_Harvesting_Report_2021.pdf.
- 15. Wilson-Grau R. Outcome harvesting: principles, steps and evaluation approaches. Charlotte: Information Age; 2019.
- 16. Wadeson A, Monzani B, Aston T. Process tracing as a practical evaluation method: comparative learning from six evaluations. Monitoring and Evaluation News; 2020. https://mande.co.uk/wp-content/uploads/2020/03/Process-Tracing-as-a-Practical-Evaluation-Method_23March-Final-1.pdf.
- 17. Coe J, Schlangen R. No royal road: finding and following the natural pathways in advocacy evaluation. Center for Evaluation Innovation; 2019.
- 18. Lynn J, Stachowiak S, Coffman J. Lost causal: debunking myths about causal analysis in philanthropy. The Foundation Review. 2021;13(3):20. doi: 10.9707/1944-5660.1576
- 19. Wolfe S, Long P, Brown K. Using a principles-focused evaluation approach to evaluate coalitions and collaboratives working toward equity and social justice. In: Price AW, Brown KK, Wolfe SM, editors. Evaluating community coalitions and collaboratives. New Directions for Evaluation. 165; 2020. p. 45–65.
- 20. Patton MQ. Principles-focused evaluation: the guide. New York: The Guilford Press; 2018.
- 21. Taylor-Schiro-Biidabinikwe E, Cram A. Who puts the value in evaluation? The need for self-reflection and transparency in advocacy and policy change evaluation. New Direct Eval. 2021;171:83–94. doi: 10.1002/ev.20475
- 22. African Evaluation Association. The African evaluation principles; 2021. https://afrea.org/AEP/new/The-African-Evaluation-Principles.pdf.
- 23. Dossa S, Desalvo C, Modungwa B. Decolonizing knowledge in philanthropy: what does it mean? Alliance Magazine; 2023.
- 24. Podems D. Making feminist evaluation practical. Eval Matt. 2018:44–55. https://idev.afdb.org/sites/default/files/Evaluations/2020-03/Making Feminist Evaluation practical.pdf.
- 25. Darling M, Guber H, Smith J, et al. The Foundation Review. 2016;8(1):59–73. doi: 10.9707/1944-5660.1284
- 26. Wenger-Trayner E, Wenger-Trayner B. Learning to make a difference: value creation in social learning spaces. Cambridge: Cambridge University Press; 2020.
- 27. Schlangen R, Coe J. Radical rerouting: new roads for advocacy evaluation. New Direct Eval. 2021;171:71–81. doi: 10.1002/ev.20472
