Wellcome Open Res. 2023 Dec 8;8:52. Originally published 2023 Feb 3. [Version 2] doi: 10.12688/wellcomeopenres.18018.2

EIDM training as a key intervention among researchers to enhance research uptake and policy engagement: an evaluation study

Leila Abdullahi 1,a, Hleziwe Hara 2, Elizabeth Kahurani 1, Victory Kamthunzi 2, Lomuthando Nthakomwa 2, Rose Oronje 1, Nyovani Madise 2
PMCID: PMC11074692  PMID: 38716047

Version Changes

Revised. Amendments from Version 1

We revised the paper based on reviewer feedback as follows:

  1. Corrected grammatical errors to make the paper easier to read.

  2. Clarified the methods section by explaining why and how the different evaluations were used (e.g. needs assessment, pre-post, end-line).

  3. Revised the discussion section to align with the Results section.

  4. Recast the conclusion so that it highlights lessons from this evaluation.

  5. Made all other revisions suggested by the two reviewers.

Abstract

The Evidence-Informed Decision-Making (EIDM) field has evolved rapidly over the past decade. This progress highlights a need for capacity enhancement in EIDM among both evidence producers and evidence users. Through the Enhance DELTAS programme, the African Institute for Development Policy (AFIDEP) provided research uptake and policy engagement training, mentorship and webinars to awardees of the Developing Excellence in Leadership, Training and Science (DELTAS) Africa initiative, led by the African Academy of Sciences (AAS).

Two workshops were offered: one for individual early career DELTAS researchers on policy engagement and evidence uptake (ENHD101), and one for research leaders to enhance institutional capacity for policy engagement and evidence uptake (ENHD102).

Overall, the programme attracted 31 early career researchers and 20 research leaders over eight months of training, mentorship and webinars. Following the programme, the early career researchers understood the importance of EIDM for better health policies and programmes. In addition, they came to appreciate the complexities of policymaking processes as they developed policy engagement strategies for their research. Application of EIDM knowledge was demonstrated during the mentorship of research fellows, with policy briefs as the end products.

Notably, research leaders appreciated their role in strengthening the capacity for EIDM in decision-making spaces. Although none of the research leaders engaged in strengthening EIDM capacity during the programme, they anticipated improving in the long run. In addition, the research leaders developed and implemented institutional strategies for policy engagement and research uptake, including the use of social media to influence policymakers.

In conclusion, the project supported the capacity building of African researchers in EIDM. An integrated approach combining training, mentorship and webinars demonstrably enhanced knowledge, skills and capacity for policy engagement and evidence uptake.

Keywords: EIDM training, policymakers, training and mentorship, evidence

Background

Evidence has an important role to play in improving policy, programme, and practice decisions that ultimately improve development effectiveness 1, 2 . Evidence-Informed Decision-Making (EIDM) is an evolving discipline that helps translate the best available evidence into context-appropriate recommendations aligned with the priorities of decision-makers. EIDM is defined as a process where high-quality evidence from research, local data, and professional experience is synthesised, disseminated, and applied to decision-making in policy and practice 3– 5 .

The EIDM process is complex, as evidence has to compete with many other factors including the interests of policymakers, politics, value systems, individual and institutional capacities, and financial constraints 6– 8 . Weak individual and institutional capacity for evidence use in policy and programme decisions has attracted much focus in the last decade as one of the many barriers to evidence use 2, 8, 9 . A substantial body of research has investigated the need for, and efforts to strengthen, institutional capacity to enhance the use of evidence in decision-making 10, 11 . These studies showed a need to better understand capacity-strengthening efforts and to draw context-specific lessons and insights on building institutional capacity for evidence use.

This paper draws on the Enhance DELTAS programme, led by the African Institute for Development Policy (AFIDEP), which used training, mentorship, and webinars as key interventions to strengthen individual capacity for evidence use among researchers and policymakers. These interventions to enhance research uptake and policy engagement were provided to awardees of the Developing Excellence in Leadership, Training and Science (DELTAS) Africa initiative, led by the African Academy of Sciences (AAS).

The technical support was designed to address gaps in knowledge and skills for knowledge translation and policy engagement among many DELTAS fellows, identified through the Learning Research Programme (LRP) of the DELTAS initiative 12 . The LRP report highlighted institutional weaknesses in promoting knowledge translation and policy engagement practices within DELTAS partner institutions. These institutional weaknesses in evidence use are also mirrored in the unpublished doctoral research of a DELTAS-funded PhD researcher and AFIDEP staff member (research in progress).

The Enhance DELTAS team worked with the first DELTAS Africa programmes to enhance the capacity of individuals, support DELTAS institutions in creating enabling environments for policy engagement and research uptake, and facilitate interaction between researchers and policymakers. The DELTAS Africa programme is an initiative implemented by the AAS Alliance for Accelerating Excellence in Science in Africa with the support of the Wellcome Trust and the UK’s Foreign, Commonwealth and Development Office (FCDO). Phase I of the programme, which ended in May 2021, was designed to train world-class researchers and research leaders in health sciences in Africa and to strengthen the environments in which they operate. The first DELTAS Africa programme supported 11 collaborative teams, spanning 54 lead and partner institutions. DELTAS Africa Phase II started in early 2021 and introduced a suite of new strategies designed to address gaps identified during Phase I, including balancing equity and excellence within the constitution of the various consortia 13 . The integrated learning programme included five three-hour virtual workshop sessions, online self-learning materials including videos, a six-month mentorship phase, and interactive EIDM sessions at individual and institutional levels. These are described in more detail under the Methods section.

Research question

1. Can a multi-faceted intervention that combines training and mentoring improve researchers’ knowledge and practice of EIDM?

Methods

The programme implemented a holistic approach at the individual level, intended to strengthen individual capacity and existing institutional systems, structures and processes to enable sustained EIDM. The multi-faceted intervention, comprising training, mentorship and webinars, adopted a virtual format. The following integrated approach was used:

  • a) Consultation with AAS to identify potential trainees: In August 2020, we held virtual consultations with AAS to introduce the Enhance DELTAS programme and assess their interest in co-facilitating. The AAS team helped publicise the programme across the DELTAS family. Through email circulation from AAS, interested DELTAS programme researchers expressed interest in the two training modules, ENHD101 and ENHD102, described fully in part (b) below.

  • b) The training intervention components: Tailor-made virtual training targeting early career researchers and research leaders was developed following an assessment of participants’ needs.

    • Training for early career researchers (ENHD101) on policy engagement and evidence uptake targeted DELTAS and other African early career researchers (ECRs) undertaking PhD and post-doctoral programmes. The key components of ENHD101 were: introduction to the principles of EIDM; mapping the health policymaking landscape; developing a policy engagement strategy for the research project; and knowledge translation and packaging.

    • Training for senior researchers and consortium leaders (ENHD102) to enhance institutional capacity for policy engagement and evidence uptake was designed for research leaders or senior researchers responsible for leading policy engagement and research uptake. The components of ENHD102 were: introduction to EIDM; developing institutional strategies for policy engagement and research uptake; generating demand for evidence uptake; and creating an enabling institutional culture for research uptake.

  • c) Mentorship programme: As part of the learning process, participants were invited to a virtual mentorship programme to help consolidate the learning, build depth, and, most importantly, help them complete their policy products. The mentorship was provided over eight months to monitor participants’ progress in implementing their policy engagement tasks. This approach has been used in several of our programmes with good success rates 9 . After the training, fellows were assigned a task to complete; for example, developing a policy brief based on their research, carrying out a stakeholder mapping and power-interest matrix (a brief illustrative sketch of the power-interest classification follows this list), or developing a policy engagement strategy.

    Of the participants trained in ENHD101 and ENHD102, six from the ENHD101 training for early career researchers (four females and two males) expressed interest in being mentored to develop evidence products. Participants who chose not to take part were unsure whether they needed mentorship yet, as they were just starting their research projects. Among those who accepted mentorship, research topics ranged across antimicrobial resistance, sexual and reproductive health, health research strengthening, and maternal healthcare service utilisation, among others. The fellows were assigned mentors who supported them up to the end of the programme. During their first meeting, each mentor-mentee pair was asked to complete an agreement outlining goals, expectations, and a plan for completing at least one evidence product to share with relevant policymakers. Of the six mentees, four developed policy briefs as their evidence product of choice, while two were unable to complete the mentorship.

  • d) Webinars: As follow-on support, two-hour virtual webinar sessions were scheduled in April and May in two key areas of interest expressed by the participants; the topics were derived from the needs assessment. The first webinar, conducted in April 2021 on “How to attain effective context-specific policy engagement strategies”, aimed to familiarise participants with, and provide access to, policy engagement and evidence uptake toolkits. The second webinar, titled “How to effectively use Social Media”, sought to explain why social media is valuable in communicating about research and policy to the public.
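To illustrate the power-interest matrix task named under the mentorship component above, the following is a minimal sketch of the standard power-interest classification used in stakeholder mapping. It is our illustration, not programme material; the stakeholder names and scores are hypothetical.

```python
# Minimal illustrative sketch (not programme material): classifying stakeholders
# into the four quadrants of a power-interest matrix. Names/scores are hypothetical.

def quadrant(power: float, interest: float, threshold: float = 0.5) -> str:
    """Map power/interest scores in [0, 1] to a standard power-interest quadrant."""
    if power >= threshold and interest >= threshold:
        return "manage closely"   # high power, high interest
    if power >= threshold:
        return "keep satisfied"   # high power, low interest
    if interest >= threshold:
        return "keep informed"    # low power, high interest
    return "monitor"              # low power, low interest

stakeholders = {
    "Ministry of Health": (0.9, 0.8),
    "District health officer": (0.4, 0.9),
    "Local media outlet": (0.3, 0.2),
}

for name, (power, interest) in stakeholders.items():
    print(f"{name}: {quadrant(power, interest)}")
```

Stakeholders in the "manage closely" quadrant would be engaged throughout the research, while those in the other quadrants receive proportionally lighter engagement.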

Implementing the EIDM Intervention

Before the training, we conducted a needs assessment and baseline study to understand participants’ capacity development needs. The feedback from this assessment helped us customise the agenda to suit the participants’ needs. From the needs assessment, we identified the following areas of interest: evidence-informed decision-making; the context and principles of policymaking; health policies and instruments used by governments; policy engagement and research uptake strategy; and evidence synthesis and packaging evidence for non-academic audiences. The first questionnaire gathered baseline information on knowledge of EIDM and policy engagement experience to inform our overall programme evaluation.

Training sessions

The training on Policy Engagement and Evidence Uptake for Early Career Researchers (ENHD 101), which targeted PhD and post-doctoral early career researchers, had 31 participants from eight African countries and various DELTAS institutions. The training on enhancing institutional capacity for Policy Engagement and Evidence Uptake (ENHD 102) had 21 participants from seven African countries representing various DELTAS programmes. The ENHD 102 training targeted research leaders or senior researchers who were responsible for leading policy engagement and research uptake ( Table 1).

Table 1. Trainee participants and their DELTAS institutions.

ENHD 101
Countries: Kenya, South Africa, Nigeria, Namibia, Ethiopia, Tanzania, Ghana, Uganda
DELTAS programme affiliations:
◦   KEMRI Wellcome Trust
◦   University of Witwatersrand/CARTA
◦   Obafemi Awolowo University
◦   Kilimanjaro Christian Medical University College
◦   University of Ibadan
◦   University of Namibia
◦   Addis Ababa University
◦   West Africa Centre for Cell Biology of Infectious Pathogens (WACCBIP), University of Ghana
◦   University of Nairobi
◦   Makerere University
◦   University of Western Cape

ENHD 102
Countries: Cameroon, Ghana, Kenya, Mali, Senegal, South Africa, Uganda
DELTAS programme affiliations:
1)   Developing Excellence in Leadership and Genetic Training for Malaria Elimination in Sub-Saharan Africa (DELGEME)
2)   Consortium for Advanced Research Training in Africa+ (CARTA+)
3)   Makerere University and UVRI Infection and Immunity (MUII-plus)
4)   Sub-Saharan African Consortium for Advanced Biostatistical Training (SSACABT)
5)   Initiative to Develop African Research Leaders (IDeAL)
6)   Malaria Research Capacity Development in West and Central Africa (MACARD)
7)   West African Centre for Cell Biology of Infectious Pathogens (WACCBIP)

1.  ENHD101

This training targeted the 37 ECRs who had expressed interest in attending and were divided into two cohorts. The first ENHD101 training workshop was conducted over five days, comprising three one-hour virtual slots per day, between 26 and 30 October 2020, while the second cohort’s training was conducted from 25 to 29 January 2021. The first cohort consisted of 19 participants attending the virtual training and the second cohort of 12 participants.

2.  ENHD102

The DELTAS programme training workshop for consortium leaders and senior researchers took place between 8 and 11 February 2021. In total, 22 fellows registered and 20 participated virtually in the full training, held for three hours daily over four days. The last day facilitated a joint workshop between the senior research leaders and policymakers from one of our programmes and the West African Health Organisation.

The joint researcher-policymaker interactive workshop was an opportunity to bring together these two groups to discuss ways of enhancing research uptake for decision-making. Four policymakers from West, East, Central and Southern Africa took part in the three-hour workshop.

ENHD 101 and 102 training content

Both courses began with a background on the EIDM process, as summarised in the training content for ENHD101 and ENHD102 ( Table 2). The workshop format comprised presentations, discussions and group work, supported by practical sessions and questions and answers. Additionally, the coordinators invited a Ministry of Health official to share first-hand experience of using evidence within policymaking spaces for the ENHD101 session. Further to this, pre-recorded videos and other self-learning materials were prepared to support facilitator-led online learning.

Table 2. ENHD 101 and 102 Training Content.

ENHD 101: Policy Engagement and Evidence Uptake for Early Career Researchers
Number of days: four days of three-hour sessions
Topics:
1.   Introduction to Principles of Evidence-Informed Decision-Making.
2.   Mapping the Health Policymaking Landscape. Understanding policymaking landscapes; understanding the political, social, and economic contexts which influence policymaking; case studies of national, global and regional health policymaking processes.
3.   Developing a Policy Engagement Strategy for a Research Project. Identifying key stakeholders (stakeholder mapping); stakeholder power-interest matrix; effective engagement of policymakers; policy engagement toolkits; “So What?” tools – embedding monitoring, evaluation and learning in policy engagement and research uptake strategies; practical sessions including role play.
4.   Knowledge Translation and Packaging. Rapid synthesis of evidence; translating and packaging evidence in suitable formats for policymakers and non-academic audiences; practical sessions on creating research summaries, policy briefs, blogs, and briefing notes.

ENHD 102: Enhancing Institutional Capacity for Policy Engagement and Evidence Uptake
Number of days: three days of three-hour sessions
Topics:
1.   Introduction to Evidence-Informed Decision-Making.
2.   Developing Institutional Strategies for Policy Engagement and Research Uptake. Engagement objectives, mapping key stakeholders, spheres of influence; communications plans – strategic communication tools, collaborating with knowledge translators and the media; involving policymakers in research advisory committees and participating in policy advisory committees; developing a monitoring and evaluation framework – theory of change, outputs and outcomes to measure.
3.   Generating Demand for Evidence Uptake. Lobbying for research and knowledge translation funding.
4.   Creating an Enabling Institutional Culture for Research Uptake. EIDM champions, incentives and motivations for research uptake.

Pre- and post-course evaluation: To assess the usefulness of the training, the programme team administered pre- and post-training evaluation questionnaires. The evaluations collected qualitative and quantitative data and took an average of 15 minutes to complete. The pre-test was delivered shortly before the training began and the post-test immediately after it ended. The pre- and post-test questionnaires were completed online using the Microsoft questionnaire tools, and the questionnaire platform was closed 30 minutes after each was dispensed.

The pre-course questionnaire consisted of technical questions to gauge knowledge of EIDM and to inform our training evaluation indicators, with key domain areas such as: EIDM individual capacity through training, mentorship and practice; creation of an enabling institutional culture for strategic stakeholder engagement and research uptake; creation of formal and informal interaction between researchers, policymakers and other decision-makers; and how the team contributed to the use of evidence for decision-making. The data was collected once, on the first day of training, to assess participants’ level of understanding of the technical components, and was administered using the Survey Monkey online platform.

Immediately after the training, participants were encouraged to complete a post-course questionnaire that assessed the change in knowledge and also sought information on the quality of the training, future topics, and potential areas of improvement for the training programme. As with the pre-training evaluation, this was administered to all participants at once using the Survey Monkey online platform. The questions were based on training materials developed by AFIDEP. For ENHD101, out of 31 participants who joined the training, 27 (87%) completed the pre- and post-course questionnaires. For ENHD102, out of 20 participants who attended the training, 8 (40%) completed the pre- and post-survey.

End-line evaluation: Eight months after the project, an end-line evaluation was conducted among the trained participants to assess the effectiveness of individual components, and of the whole programme, in achieving the intended outcomes. The self-administered online questionnaire contained both qualitative and quantitative questions and took an average of 15 minutes to complete. The questionnaire was sent by email for participants to complete at their convenience; it was completed online using the Microsoft Teams questionnaire tools, and respondents had one month to return their responses before the survey was closed. The intended outcomes included developing a policy engagement strategy for their research (ENHD101) and developing and implementing institutional strategies for policy engagement and research uptake (ENHD102). For the end-line evaluation, ENHD101 had 15 (48%) respondents while ENHD102 had three (15%) respondents.

A copy of the questions can be found in the Underlying/Extended data 14, 15 .

Results

The findings are presented based on the various evaluations conducted, as highlighted in the methodology section.

1. Pre- and post-evaluation

The pre- and post-training test was administered and analysed using the Survey Monkey software.

a. Technical skills developed during the training

Early career researchers. The ENHD101 pre- and post-course survey results showed that the average level of knowledge of EIDM across the domains listed in Table 3 was 66% before training, compared with 83% at the end of the training. In addition to the pre- and post-survey assessment, we also evaluated the overall quality of the training. All participants rated the quality of the training as very good (30%) or excellent (70%) on a scale of 1 to 5 (1 = poor, 2 = fair, 3 = good, 4 = very good, 5 = excellent). Overall, the fellows’ understanding of technical aspects improved by the end of the training. For example, knowledge of a well-defined policy question improved by 2.5%, understanding of the streams necessary for the window of opportunity for policy influence increased from 14.7% to 58%, and knowledge of the steps in applying evidence synthesis concepts increased from 51% to 87%.

Table 3. Pre- and post-knowledge assessment (%).

ENHD101 (N=31) – Pre / Post
1.   Understand evidence-informed policy: 58 / 84
2.   Understand the process of evidence-informed policy: 51 / 88
3.   Understand how well a policy question can be defined: 97 / 100
4.   Understand audiences to communicate research effectively: 88 / 74
5.   Understand strategy for bridging the research-to-policy gap: 72 / 75
6.   Understand the window of opportunity for policy influence: 15 / 58
7.   Understand the elements of a communications strategy: 59 / 90
8.   Understand the research audiences: 56 / 74
9.   Audiences to communicate your research to more effectively: 70 / 83
10.  Understand steps to applying evidence synthesis: 51 / 87
11.  Understand systematic literature review: 73 / 94
12.  Understand narrative literature review: 63 / 74
13.  Understand how to write for policy influence: 76 / 94
14.  Understand the differences between conversational and academic writing: 93 / 99
15.  Understand things to consider during research presentations for policymakers: 73 / 74
Average: 66 / 83

ENHD102 (N=20) – Pre / Post
1.   Understand evidence-informed policy: 57 / 63
2.   Understand the process of evidence-informed policy: 57 / 75
3.   Understand how well a policy question can be defined: 75 / 100
4.   Understand audiences to communicate research effectively: 57 / 87
5.   Understand strategy for bridging the research-to-policy gap: 57 / 75
6.   Understand the window of opportunity for policy influence: 42 / 62
7.   Understand the elements of a communications strategy: 57 / 88
8.   Understand the research audiences: 57 / 75
9.   Audiences to communicate your research to more effectively: 71 / 86
10.  Understand steps to applying evidence synthesis: 43 / 71
11.  Understand how to write for policy influence: 43 / 88
12.  Understand the differences between conversational and academic writing: 57 / 87
Average: 56 / 80

Senior researchers. Similarly, the ENHD102 pre- and post-course survey results showed that the average level of knowledge of EIDM across the domains listed in Table 3 improved from 56% before the training to 80% at the end. For example, knowledge of the definition of EIDM and of the stages of the policymaking process improved from 57% to 63% and from 57% to 75%, respectively. Understanding of Kingdon’s three streams necessary for the window of opportunity for policy influence more than doubled, from about 30% to 62%. Largely, the level of knowledge increased, and participants were generally interested in follow-up engagement to support their targeted study areas of interest. The surveys also sought to gauge the participants’ satisfaction with the training workshop’s overall design. There was general consensus that the training was effective and met expectations: all respondents rated the quality of the training as “very good” or “excellent”. More results on the training evaluation are included in Table 3.
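As a transparency aid for the numbers above, the sketch below shows one way the Table 3 averages can be reproduced from the per-domain percentages. This is a minimal illustration of the arithmetic only, not the authors’ analysis code (the surveys were administered and analysed in Survey Monkey); the domain labels are shortened for brevity.

```python
# Minimal sketch (not the authors' analysis code): reproducing the Table 3
# averages from the per-domain pre/post percentages.

def summarise(domains):
    """Return the mean pre- and post-training percentage across domains."""
    pres, posts = zip(*domains.values())
    return sum(pres) / len(pres), sum(posts) / len(posts)

# ENHD101 (pre %, post %), transcribed from Table 3 with shortened labels
enhd101 = {
    "evidence-informed policy": (58, 84),   "EIP process": (51, 88),
    "policy question": (97, 100),           "audiences": (88, 74),
    "bridging strategy": (72, 75),          "window of opportunity": (15, 58),
    "communications strategy": (59, 90),    "research audiences": (56, 74),
    "communicating effectively": (70, 83),  "evidence synthesis": (51, 87),
    "systematic review": (73, 94),          "narrative review": (63, 74),
    "writing for policy": (76, 94),         "conversational vs academic": (93, 99),
    "presentations": (73, 74),
}

pre, post = summarise(enhd101)
print(f"ENHD101: pre {pre:.0f}%, post {post:.0f}%, gain {post - pre:.0f} points")
# -> ENHD101: pre 66%, post 83% - matching the averages reported in Table 3
```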

b. The training quality

In addition to the technical knowledge gained from the training, participants were asked what they liked most about it. The following were some of the responses:

  • “The ease with which the facilitators delivered the training, they are knowledge-packed and interactive which allowed participants to express themselves freely.”

  • “The content of the presentations and the interactive session were all impactful and engaging. I also like your flexibility in order to achieve the aim of the training.”

  • “Learning about what policy is, stakeholder mapping, synthesis of data, writing policy briefs. It has been an amazing course.”

  • “I like the teaching (presentations). All the presenters are experts in the field and have a lot of knowledge in policymaking.”

  • “The discussion on evidence, how strong is the evidence? Reviews and the practical on writing the policy brief among others. The whole programme was wonderful.”

c. General reflections from the participants

The team conducted a qualitative survey immediately after the training programme to understand the participants’ views. The ENHD101 participants (early career researchers) had the following to say:

  • “I am feeling more comfortable to develop a policy brief; I will pay more attention to stakeholder mapping. I will go back to do a paper on systematic reviews which I had initially dropped” (Ph.D student, ENHD101)

  • “I have seen things from another angle, in pushing my work further regardless of low government interest. I am now more aware of the stakeholders I need to target” (Ph.D. student, ENHD101).

  • “I have learned that it is not always about focusing on publishing but remembering the policy implication. So, for every research, and every protocol that I have developed I will be thinking about what is the policy implication for this. How do you want this to end? How many audiences do I want this research to serve? Because if you don’t get it right from the onset then you may find other questions coming up in the end which you didn’t have that data to answer those questions” (Post-doctoral student, ENHD101)

On the other hand, the ENHD102 participants (research and consortium leaders) had the following to say:

  • “Although exercises are probably a good teaching tool, I think just learning and discussion was good for us now, and having the tools, especially as you have expressed willingness to coach us when we actually need to do these things for real...” (Program manager, ENHD102).

  • “In my experience, access to policymakers and politicians has not been difficult especially where malaria is concerned. Whenever we invite Ministers or we want to discuss an issue with policymakers it is usually easy. However, following these 3 days of the workshop, what I am realising is that we have been doing this engagement out of what we have seen other people do or from our gut feeling, but we didn’t have a structured or professional way of approaching it. So, there is still a lot of learning on our part to do” (Research leader, ENHD102).

  • “We have worked with researchers for about 5 years. But with COVID it disrupted a lot of things here. I want to say it is possible to improve MNCH with interventions that are evidence-based. We’ve used evidence to move a lot of processes forward for example communication regarding maternal neonatal and child mortality. The communication of evidence has helped to pull a lot of people to try to see how they can use males in improving access to family planning and child spacing” (Policy makers, ENHD101).

2. Medium-term impact following the project end-line survey

Eight months after the training, an end-line survey was conducted to check on participants’ utilisation of the knowledge and skills obtained during the training. Respondents provided positive feedback on how they had used the skills.

ENHD101: The participants responded positively with examples of how the skills were used.

  • Participant 1: I applied the knowledge to write a blog on the potential benefits of my research work (Ph.D student, ENHD101).

  • Participant 2: I'm developing my research protocol with a view to influencing policy using tips from the training (Ph.D student, ENHD101).

  • Participant 3: There was a session during the training that covered how to write for different audiences. I used the skill to write a blog article that would be easily understood by a wide audience, both lay and expert (Postdoctoral student, ENHD101).

  • Participant 4: Yes, the training enabled me to write a better literature review chapter for my PhD proposal. It enabled me to think more critically about my literature search. The training also further re-echoed to me that for any grant proposal I am to write, I have to think about the public health impact of the proposed work and how this will be achieved. The training showed me that for any work, it is important to do stakeholder analysis and take the highlighted stakeholders throughout the research project journey, right from the conception of the idea, to implementation and this will make writing policy briefs easier. Thank you so much for the training. I am grateful (Ph.D student, ENHD101).

Among the early career researchers, three beneficiaries had written blogs, as illustrated in Figure 1.

Figure 1. Policy engagement and research uptake following training among early career researchers.


ENHD102: Similar to the ENHD101 group, the participants responded positively with examples of how they have used the skills.

  • Participant 1: I'm better able to translate and compile important information into smaller snippets to share on our social media pages (Program manager, ENHD102).

  • Participant 2: Yes. I was able to identify the stakeholders in relation to their possible influence on the objective of my policy engagement (advocacy for improved mental health service in Oyo state) (Program manager, ENHD102).

  • Participant 3: I have participated in writing a press release to share the result of Sars-Cov 2 ARN sequenced in our Lab in Mali. This brought in the Malian Prime Minister and the Minister of Health to make an official visit to our facilities. Also, our Center was contacted before any communication from the government on the evolution of COVID-19 in Mali (Research leader, ENHD102).

In addition, among the senior researchers, one beneficiary each reported having written a blog, a media brief, and a policy engagement plan following the training, as illustrated in Figure 2. Two senior researchers mentioned that they used social media to influence policy.

Figure 2. Policy engagement and research uptake following training among senior researchers.


3. Priority area for future training

Further, when participants were asked to make a recommendation for future training, the early career and senior researchers in unison recommended training on accessing, appraising and synthesising research, developing a media brief, and giving an elevator pitch, as shown in Figure 3 and Figure 4 respectively.

Figure 3. Priority area for future training among early career researchers.


Figure 4. Priority area for future training among senior researchers.


Discussion

Interest in EIDM has grown in the past decade, and so has the need for capacity enhancement at both individual and institutional levels amongst evidence producers and evidence users. EIDM is a deliberative process that guides decision-making using the best research evidence 16 . Research evidence has played an integral part in decision-making since the 1990s 17 . Despite knowledge of EIDM, healthcare organisations worldwide have considerable difficulty in translating research evidence into practice 18, 19 . Barriers to undertaking EIDM include weak capacity for evidence use in policy and programme decisions, encompassing a lack of understanding of the value of research evidence; a lack of knowledge of, and engagement in, the process of EIDM; a lack of skill in EIDM; a lack of access to research literature; and a lack of time 2, 8, 20 .

Evidence from the DELTAS programme has demonstrated the need for capacity development efforts, and their implementation, to enhance the use of evidence in decision-making. A systematic review conducted in 2011 by Clar et al. identified training for policymakers/influencers as one of the interventions that can harness the use of evidence to inform decisions. Interventions that improved the use of research to inform decision-making in that study included workshops, tailored messaging for decision-makers, evidence dialogues, and capacity building for decision-makers to access and demand research evidence 21 . Hawkes et al. (2016) 8 documented capacity-building interventions in four countries (Bangladesh, Gambia, India and Nigeria) aimed at strengthening capacity for evidence use in decision-making. The results showed that the interventions were successful in building the capacity of individuals to access, understand, and use evidence/data 8 . However, there are no frameworks to measure the effect of capacity building across the various levels of the policymaking cycle.

During the intervention phase, the facilitators and researchers acknowledged the need for leadership skills in engaging stakeholders, to enable teams to work better in multi-cultural and multi-sectoral contexts. This is in line with emerging evidence showing that evidence use in decision-making is enabled by strong leadership and ‘soft’ persuasion skills 22– 25 . Leaders can use their power to promote and support the EIDM implementation process. Leadership support is considered an important facilitator that can act as the champion, initiator, and role model of change interventions to embed an EIDM culture at both individual and institutional levels 26 . Stakeholder engagement was identified by the target audience as difficult, yet it was also indicated as the most important aspect of EIDM. The decision-making process is complex, and it is difficult to bring researchers and policymakers who have never worked together into dialogue 22 . However, it is crucial to involve stakeholders from the beginning, and throughout the entire process, to align priorities, foster a common vision toward decision-making, and facilitate the uptake of synthesised evidence 25, 27, 28 .

Following the intervention, the mentorship phase revealed a demand for skills in writing policy briefs and writing for non-academic audiences, and the need to embed these skills in research training. In addition, the early career researchers and research leaders agreed that briefs need to be written in clear, jargon-free language, because many policymakers are generalists and do not necessarily come from specific research areas. There is therefore a need to instil these writing skills at the start of research, not at the end.

The flexibility of the Enhance DELTAS programme, in being able to adapt to its target audience’s requirements, was a key strength of the project. Among both the trained early career researchers and research leaders, the project identified a critical gap in evidence synthesis and knowledge management capacities, which affected the ability to respond to project objectives. Knowledge gaps recommended for future training include accessing, appraising and synthesising research, developing a media brief, and giving an elevator pitch. These gaps are anticipated to be addressed in the next phase of this project, if it is successfully renewed.

The programme experienced challenges that hindered the training, mentorship, and webinar interventions. One challenge was that, as a result of the ongoing COVID-19 pandemic, the ENHD101 training adopted a virtual format instead of the originally planned in-person training. This posed challenges in maintaining interactive participation and keeping the fellows engaged throughout the training. Another challenge was poor internet access and connectivity, which caused inconsistency in the number of participants who remained online during the sessions. Some participants also complained that high internet costs in their home countries hampered them from being fully involved in the training. Time constraints were also reported: feedback from participants indicated that the time to complete exercises was not enough, and more time needed to be allocated, especially for the practical sessions. Similarly, some participants were unfamiliar with Microsoft Teams as the training platform, especially in accessing the breakout rooms and training materials. This prompted the project team to transfer the training resources to Google Drive for ease of access. Along the way, the training shifted to the Zoom platform, with which most participants were more familiar.

The low participation rate was one of the limitations of this evaluation. Participant dropout was partly attributable to the virtual modality that became the norm during the global COVID-19 pandemic.

Conclusion

Generally, following the intervention, the level of knowledge increased, and some participants were interested in follow-up mentorship to support their study areas of interest concerning research uptake and policy engagement. The participants developed tools such as blogs and policy briefs and demonstrated various skills for engaging and communicating with policymakers. They also suggested areas they wished had been covered in more detail for future training, including social media engagement, systematic review and meta-analysis, and monitoring and evaluation of policies. Additionally, it was recommended that such courses be integrated within university curricula to train fellows at an earlier stage.

Ethics approval

The Malawi National Health Sciences Research Committee assessed the study questionnaires as ‘low-risk’ for ethical infringements and waived them from scientific and ethical review in August 2020 (Ref. No. Med/4/36c). Data collection took place between September 2020 and June 2021.

Consent

The Malawi National Health Sciences Research Committee approved the obtainment of verbal consent (instead of written consent) and use of the data, including reporting the study findings anonymously without mentioning the participants’ names.

Funding Statement

This study was supported by funding from Wellcome Trust [221383/Z/20/Z], for a research programme entitled “Enhancing Research Uptake and Policy Engagement in the DELTAS Programme”.

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

[version 2; peer review: 1 approved, 4 approved with reservations, 1 not approved]

Data availability

Underlying data

Figshare: Combined Pre-post assessment cohort 1& 2 (3).xlsx.

https://doi.org/10.6084/m9.figshare.21532461.v2 14

This project contains the following underlying data:

  • ENHD 101 Post-training evaluation Cohort 2-HH.xlsx

  • ENHD 101 Pre-training survey Cohort 2-HH.xlsx (13.35 kB)

  • ENHD 102 Post-training evaluation Knowledge on research uptake -HH.xlsx (13.85 kB)

  • ENHD 102 Pre-training survey -HH.xlsx

Figshare: ENHD 102 End-line Assessment.xlsx.

https://doi.org/10.6084/m9.figshare.21618252.v3 15

This project contains the following underlying data:

  • ENHD 101 End-line Assessment HH.xlsx (13.66 kB)

  • ENHD 102 End-line Assessment 2-HH.xlsx

Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).

References

  • 1. Stewart R, Langer L, Erasmus Y: An integrated model for increasing the use of evidence by decision-makers for improved development. Dev South Afr. 2019;36(5):616–31. 10.1080/0376835X.2018.1543579 [DOI] [Google Scholar]
  • 2. Newman K, Fisher C, Shaxson L: Stimulating Demand for Research Evidence: What Role for Capacity‐building? IDS Bulletin. 2012;43(5):17–24. 10.1111/j.1759-5436.2012.00358.x [DOI] [Google Scholar]
  • 3. Belita E, Yost J, Squires JE, et al. : Measures assessing attributes of evidence-informed decision-making (EIDM) competence among nurses: a systematic review protocol. Syst Rev. 2018;7(1): 181. 10.1186/s13643-018-0849-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Motani P, Van de Walle A, Aryeetey R, et al. : Lessons learned from Evidence-Informed Decision-Making in Nutrition & Health (EVIDENT) in Africa: A project evaluation. Health Res Policy Syst. 2019;17(1): 12. 10.1186/s12961-019-0413-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Yost J, Dobbins M, Traynor R, et al. : Tools to support evidence-informed public health decision making. BMC Public Health. 2014;14(1): 728. 10.1186/1471-2458-14-728 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6. Vogel I, Punton M: Building Capacity to Use Research Evidence (BCURE) realist evaluation: stage 2 synthesis report. Hove, United Kingdom: ITAD, 2017. Reference Source
  • 7. Buse K, Hawkes S: Health post-2015: evidence and power. Lancet. 2014;383(9918):678–9. 10.1016/S0140-6736(13)61945-5 [DOI] [PubMed] [Google Scholar]
  • 8. Hawkes S, Aulakh BK, Jadeja N, et al. : Strengthening capacity to apply health research evidence in policy making: experience from four countries. Health Policy Plan. 2016;31(2):161–70. 10.1093/heapol/czv032 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Oronje RN, Murunga VI, Zulu EM: Strengthening capacity to use research evidence in health sector policy-making: experience from Kenya and Malawi. Health Res Policy Syst. 2019;17(1): 101. 10.1186/s12961-019-0511-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. Mallidou AA, Atherton P, Chan L, et al. : Core knowledge translation competencies: a scoping review. BMC Health Serv Res. 2018;18(1): 502. 10.1186/s12913-018-3314-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Tait H, Williamson A: A literature review of knowledge translation and partnership research training programs for health researchers. Health Res Policy Syst. 2019;17(1): 58. 10.1186/s12961-019-0497-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Pulford J, Aiyenigba A, Liani M, et al. : DELTAS Africa Learning Research Programme: Learning Report No. 4 (April 2019–March 2020).2020. Reference Source
  • 13. AAS Open Research: DELTAS Africa – entering a new phase of health research funding.Blog,2019.
  • 14. Hara H, Kahurani E, Abdullahi L, et al. : Combined Pre-post assessment cohort 1& 2 (3).xlsx. figshare, [Dataset].2022. 10.6084/m9.figshare.21532461.v1 [DOI]
  • 15. Hara H, Abdullahi L, Madise N, et al. : ENHD 102 End-line Assessment.xlsx. figshare, [Dataset].2022. 10.6084/m9.figshare.21618252.v3 [DOI]
  • 16. Culyer AJ, Lomas J: Deliberative processes and evidence-informed decision making in healthcare: do they work and how might we know. Evid Policy. 2006;2(3):357–71. 10.1332/174426406778023658 [DOI] [Google Scholar]
  • 17. Sackett DL, Rosenberg WM, Gray JA, et al. : Evidence based medicine: what it is and what it isn't. BMJ. 1996;312(7023):71–2. 10.1136/bmj.312.7023.71 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. LaRocca R, Yost J, Dobbins M, et al. : The effectiveness of knowledge translation strategies used in public health: a systematic review. BMC Public Health. 2012;12(1): 751. 10.1186/1471-2458-12-751 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Schreiber J, Stern P: A review of the literature on evidence-based practice in physical therapy. Internet J Allied Health Sci Pract. 2005;3(4): 9. 10.46743/1540-580X/2005.1089 [DOI] [Google Scholar]
  • 20. Ward M, Dobbins M, Peirson L, et al. : Lessons learnt from implementing an organizational strategy for evidence-informed decision-making. Public Health Panor. 2016;2(03):327–32. Reference Source [Google Scholar]
  • 21. Clar C, Campbell S, Davidson L, et al. : What are the effects of interventions to improve the uptake of evidence from health research into policy in low and middle-income countries? Final report to DFID, 2011. Reference Source
  • 22. Shroff Z, Aulakh B, Gilson L, et al. : Incorporating research evidence into decision-making processes: researcher and decision-maker perceptions from five low- and middle-income countries. Health Res Policy Syst. 2015;13(1): 70. 10.1186/s12961-015-0059-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Harries U, Elliott H, Higgins A: Evidence-based policy-making in the NHS: exploring the interface between research and the commissioning process. J Public Health Med. 1999;21(1):29–36. 10.1093/pubmed/21.1.29 [DOI] [PubMed] [Google Scholar]
  • 24. Teng F, Mitton C, MacKenzie J: Priority setting in the provincial health services authority: survey of key decision makers. BMC Health Serv Res. 2007;7(1): 84. 10.1186/1472-6963-7-84 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25. Turner S, D'Lima D, Hudson E, et al. : Evidence use in decision-making on introducing innovations: a systematic scoping review with stakeholder feedback. Implement Sci. 2017;12(1): 145. 10.1186/s13012-017-0669-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Peirson L, Ciliska D, Dobbins M, et al. : Building capacity for evidence informed decision making in public health: a case study of organizational change. BMC Public Health. 2012;12(1): 137. 10.1186/1471-2458-12-137 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27. Rycroft-Malone J, Seers K, Chandler J, et al. : The role of evidence, context, and facilitation in an implementation trial: implications for the development of the PARIHS framework. Implement Sci. 2013;8(1): 28. 10.1186/1748-5908-8-28 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28. Ahmad R, Kyratsis Y, Holmes A: When the user is not the chooser: learning from stakeholder involvement in technology adoption decisions in infection control. J Hosp Infect. 2012;81(3):163–8. 10.1016/j.jhin.2012.04.014 [DOI] [PubMed] [Google Scholar]
Wellcome Open Res. 2024 Dec 30. doi: 10.21956/wellcomeopenres.22431.r113976

Reviewer response for version 2

German Andres Alarcon Garavito 1,2

Thank you for the opportunity to review your work.

This manuscript provides a detailed description of the Enhance DELTAS programme. It depicts all the steps and methodologies used throughout the experience, such as consultations, training, a mentorship programme, and webinars. Finally, it reports the results following a pre- and post-evaluation structure and includes categories such as new skills, training quality, and, most importantly, reflections from the participants and possible priorities for future programmes.

However, there are a few points for improvement.

Results:

-Could you please add more information regarding the mentorship programme? I perceived an imbalance between this and the other parts of the programme. 

Discussion:

-I suggest expanding your reflections by adding more references from similar experiences. I noticed that you referenced a capacity-building intervention in Bangladesh, Gambia, India, and Nigeria.

It would be good to expand on this type of experience so you can contrast your work with theirs. How could your experience avoid the limitations of their work, and vice versa?

-Your mention of the COVID-19 pandemic as a limitation is accurate. I would advise expanding a bit there: what would happen with this programme in a scenario like today’s, a non-COVID moment, yet with other global threats, e.g., other pandemics, climate change, wars, etc.?

Overall, this paper is a valuable contribution to the evidence ecosystem. I recommend accepting this paper with minor revisions to address the areas for improvement mentioned above.

Is the work clearly and accurately presented and does it cite the current literature?

Yes

If applicable, is the statistical analysis and its interpretation appropriate?

Yes

Are all the source data underlying the results available to ensure full reproducibility?

Yes

Is the study design appropriate and is the work technically sound?

Yes

Are the conclusions drawn adequately supported by the results?

Yes

Are sufficient details of methods and analysis provided to allow replication by others?

Yes

Reviewer Expertise:

Evidence ecosystems, health systems research, mental health, global health.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Wellcome Open Res. 2024 Dec 17. doi: 10.21956/wellcomeopenres.22431.r113971

Reviewer response for version 2

Maureen Dobbins 1

This is an important and timely paper and study. Thank you for the opportunity to review it. The authors describe a training and mentoring initiative to improve knowledge and practice among early career researchers and research leaders of African institutions, with the intent of improving EIDM. The overall expectation is that researchers who understand the concepts of EIDM will conduct research that will be used by policymakers because the research will be informed by policymakers.  

Overall, this is an interesting paper and with some improvements, will make a contribution to the literature.

General

There continue to be grammatical errors throughout the text, including the abstract, which detract from the readability of the paper.

Methods

Please include the needs assessment and baseline questionnaire as an appendix. Was reliability and validity testing of these measures conducted? Were the tools developed from the literature, or created new? If based on pre-existing tools, these should be cited.

Please add a sentence or two providing more specifics about how recruitment for both 101 and 102 was conducted.

For 102, in the paragraph above Table 1 you say 21 leaders participated, but in the paragraph below the table you say 20 participated. Please clarify.

In the description of 102, I think it could all be one paragraph. How were policymakers identified to participate in the 3-hour workshop on the last day with research leaders?

Pre-Post questionnaire

Did 101 and 102 participants complete the same pre-post questionnaire? It seems it would have been two separate questionnaires, given that the content of the two trainings differed, but this is unclear.

It would strengthen the paper to include the full questionnaires in the appendix, and examples of specific questions in the text of the methods section. How was the questionnaire developed? Was it informed by previously existing tools? If yes, how they were used should be described and they should be cited. If previously existing tools were not used, the rationale for why not should be articulated. Was the questionnaire tested for reliability and validity ahead of time? If not, why not?

All the same questions as above apply for the end-line evaluation. Did you send reminders to participants to complete the questionnaire over the month?

The statement that the questions can be found in the Underlying/Extended data should be moved to near the beginning of this section on the evaluation measures. That way the reader learns up front that they can look at the full questions if they want more specifics. However, while it is wonderful to have the data included, the actual questions are also needed.

Results

For the pre-post evaluation, how was this assessment done, specifically? For example, was the change for each individual per question noted, and the change scores for each participant assessed to determine overall change from pre to post? Or was an average (from 1-5) for each question calculated and compared pre-post? These details on the analysis should be included in the Methods section.

From Table 3, it appears to me that the questions may assess participants' perceptions of the concepts rather than their actual knowledge of the concepts. It is difficult to determine what was actually assessed without including the questions as they actually appeared.

For the qualitative data collected post training (immediate and 8 months post), please include in the Methods what qualitative analysis methods were used to analyze this data.

For the priority areas for future training, please include in the Methods a bit more about how this data was collected and analyzed. It looks like you used a 5-point scale. Did participants write out these statements, or did you identify them and have participants rate their importance for future training?

Discussion

The discussion could be further enhanced and compared and contrasted with the literature. Please expand on implications for the future of this training, and how to improve participation and completion rates. Also, please say a bit more about how future studies can objectively measure whether attainment of the knowledge and skills through this program by early career researchers and research leaders results in uptake of research by policymakers.

Is the work clearly and accurately presented and does it cite the current literature?

Partly

If applicable, is the statistical analysis and its interpretation appropriate?

Partly

Are all the source data underlying the results available to ensure full reproducibility?

Yes

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Yes

Are sufficient details of methods and analysis provided to allow replication by others?

Partly

Reviewer Expertise:

EIDM, knowledge translation, capacity development, public health

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Wellcome Open Res. 2024 May 6. doi: 10.21956/wellcomeopenres.22431.r78522

Reviewer response for version 2

Bey-Marrie Schmidt 1

This article describes the results of a multi-faceted intervention that combines training and mentoring to improve researchers’ knowledge and practice of EIDM. This study consists of consultations with trainees, the implementation and evaluation of training and mentorship aspects, and follow-up webinars. The various aspects are well described, a pre- and post-questionnaire was administered right before and after the training, and an end-line evaluation was conducted. The results are detailed and provide evidence of participants' experiences. The discussion is appropriate and the authors detail gaps in research and practice; however, it would have been more beneficial if they had suggested potential ways of addressing these gaps. For example, the authors state "However, it is crucial to involve stakeholders from the beginning, and throughout the entire process, to align priorities and foster a common vision toward decision-making and facilitate the uptake of synthesized evidence". They could perhaps have noted the role of integrated knowledge translation in this type of study. Overall, it is an interesting article and offers valuable findings.

Is the work clearly and accurately presented and does it cite the current literature?

Yes

If applicable, is the statistical analysis and its interpretation appropriate?

Yes

Are all the source data underlying the results available to ensure full reproducibility?

Yes

Is the study design appropriate and is the work technically sound?

Yes

Are the conclusions drawn adequately supported by the results?

Yes

Are sufficient details of methods and analysis provided to allow replication by others?

Yes

Reviewer Expertise:

Evidence-informed decision-making, evidence synthesis, qualitative research, knowledge translation, community engagement.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Wellcome Open Res. 2024 Apr 8. doi: 10.21956/wellcomeopenres.22431.r74318

Reviewer response for version 2

Jana Groß Ophoff 1

Dear authors, dear editorial board,

Thank you very much for the opportunity to review the article “EIDM training as a key intervention among researchers to enhance research uptake and policy engagement: an evaluation study”, which describes the effects of a training program for young and senior researchers in Africa to improve their Evidence Informed Decision Making. I recommend a major revision.

The major part of the article consists of the description of the organization and implementation of the training. It contains a lot of redundancies and needs to be streamlined. The mass of abbreviations complicates comprehensibility – why is it important for readers to know (and remember) all of them? This is not at all necessary, and I strongly recommend the authors check to what extent these are needed. For example, why is the abbreviation ENHD101 used to identify the first intervention? For readers to understand that there were two different trainings, the terms “intervention 1” and “intervention 2” or the like would be sufficient.

Overall, the article requires a clearer structure. In particular, the description of the trainings and the context in which they took place belongs in the methods section of the article, but makes up a large part of the theoretical section of the paper. This section in turn touches only briefly, but accurately, on EIDM; as this is at the heart of this work, it should be given more space. This is a well-established topic of research in different disciplines. An example of an interdisciplinary network dealing with this topic can be found here: https://transforming-evidence.org/projects/transforming-evidence-network. What would be far more interesting for international readers (and is missing in the introduction/theory section) is an explanation of the African context and the health sector, and of why EIDM is important at all. This can also provide the framework for a much more elaborate discussion of the results (see below).

A lot of the decisions made are not transparent and must be better substantiated. For example, I wasn't sure until the end of the paper which stakeholders took part in the training (and why): researchers, policymakers, or who? One reason for that was the repeatedly mentioned objective of supporting the “research uptake and policy engagement” of “evidence producers and evidence users” – but isn’t the former part of being a researcher anyway, and why are the participants evidence users? Other aspects that are not sufficiently substantiated are why young and senior researchers were invited to participate in two separate trainings, and why the two different trainings are investigated in the same paper but not compared at all. Accordingly, Fig. 1/2 and Fig. 3/4 should be considered for merging (and statistical analysis carried out). Even though the research design is described in the methods section, it needs to be straightened out (redundancies). The difference between the pre- and post-course evaluation and the end-line evaluation is not easy to follow. One reason for this is that the actual survey is not described in detail: What did the survey actually assess? Were parallel versions available? And why are there differences in the platforms used to carry out the surveys? And what do the authors mean by technical skills or knowledge? Do they mean research capabilities or competencies (which are necessary for well-reflected decision-making), or really technical skills? From what is reported in Table 3, one comes to the conclusion that self-ratings of the ability to carry out certain EIDM steps were assessed, which are not at all an operationalisation of skills or knowledge (for which it can be identified what the correct solution is). This means that a more profound description of the research instruments used is necessary, and if the authors did in fact assess self-ratings, they need to be more precise in their terminology. For example (if I am right), they investigated the self-rated ability to make evidence-informed decisions as an outcome of a training… and here another theoretical aspect is missing from the theoretical section of this article: Which forms of learning and learning gains are to be expected of such a training? For example, self-ratings of abilities are closely related to concepts of self-efficacy beliefs (Bandura, 1997), which are (as an indicator of motivation) reportedly of relevance to EIDM. Connected to that, the research question should be revised, too.
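(As an illustration of the kind of statistical analysis the merging of the figures suggests – testing pre-post change within each training and comparing the size of the gains between the two trainings – here is a minimal sketch using nonparametric tests suitable for ordinal Likert-type ratings; all values and group labels are hypothetical, not the study's data.)

    from scipy.stats import wilcoxon, mannwhitneyu

    # Hypothetical 1-5 self-ratings before and after each training.
    pre_t1, post_t1 = [2, 3, 2, 4, 3], [4, 4, 3, 5, 4]   # intervention 1
    pre_t2, post_t2 = [2, 2, 3, 3, 2], [3, 4, 4, 4, 3]   # intervention 2

    # Within each training: paired test of pre vs post (ordinal data).
    print(wilcoxon(pre_t1, post_t1))
    print(wilcoxon(pre_t2, post_t2))

    # Between trainings: compare the size of the pre-post gains.
    gains_t1 = [b - a for a, b in zip(pre_t1, post_t1)]
    gains_t2 = [b - a for a, b in zip(pre_t2, post_t2)]
    print(mannwhitneyu(gains_t1, gains_t2))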

The results section contains a lot of interesting results, particularly the quotes on page 8. However, these need to be framed accordingly, both in the introductory part and in the discussion of the paper (see above). In its current form, the article remains at the level of an evaluation report and, as mentioned, does not fully develop its potential to provide insights into how training for EIDM can motivate and impart strategies. And for a researcher in the field of EIDM (even if in another discipline), this is the truly interesting part of this paper, as the research field in general struggles to develop trainings that are able to convince and empower stakeholders to carry out EIDM. If possible, it would be interesting, too, to learn not only about the content of the training that was liked most (which is associated not with knowledge gains, but with the motivation to get involved), but about what was missing. In principle, this is echoed in “3. Priority area for future training”, but that is not enough, especially as contradictions arise here that need to be addressed. Why, for example, do emerging and senior researchers recommend “training on accessing, appraising and synthesising research”? Haven’t they already been educated in that regard – or how else did they become researchers? Why, for example, did one participant report “Yes, the training enabled me to write a better literature review chapter for my PhD proposal”, and how was this connected to the objective of the training? At this point at the latest, the suspicion arises that the training was more like a tutoring session for researchers – or was the aspiration more than that? Another contradiction not addressed is the result that senior researchers rated their EIDM knowledge (pre-test) lower than the emerging researchers did (Table 3). Is this the typical effect that experts are better at appraising their abilities than novices (for which a lot of evidence is available, too), or is it an effect of the education system (for example, reforms of study programmes)?

Is the work clearly and accurately presented and does it cite the current literature?

No

If applicable, is the statistical analysis and its interpretation appropriate?

No

Are all the source data underlying the results available to ensure full reproducibility?

No source data required

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Partly

Are sufficient details of methods and analysis provided to allow replication by others?

Partly

Reviewer Expertise:

Data-based decision-making (Education), Evidence-Informed decision-making in higher and continuous education.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to state that I do not consider it to be of an acceptable scientific standard, for reasons outlined above.

References

  • 1. Bandura A: Self-efficacy. 2000; 212–213. 10.1037/10522-094 [DOI] [Google Scholar]
Wellcome Open Res. 2023 Jun 8. doi: 10.21956/wellcomeopenres.19978.r58252

Reviewer response for version 1

Kristen Lwin 1

I’m not sure what “There are a number of evidence that have investigated the need and…” means.

Page 4 - Details about the intervention are required – how many hours of training, how many hours and meetings of mentorship, and number of webinars, etc.

Page 5 - Details about ENHD 101 – It’s not clear what “37” in the first sentence means. If this is the sample size, include it in parentheses.

Methods section:

  • It is not clear what the pre- and post-test scores mean or what they are scored out of – details for all measures must be provided. There is currently almost no detail on the methods.

Results section:

  • Quality of training – “generally, all the participants…” must be explained in greater detail. Provide frequencies for results.

  • Were there quantitative results for the 8-month follow-up? I believe the same survey (as the pre- and first post-tests) was administered.

Discussion section:

  • Results should be integrated into the wider body of literature.

Overall - Edit for grammar, use of consistent verb tense, flow, and concision throughout the manuscript.

Is the work clearly and accurately presented and does it cite the current literature?

Partly

If applicable, is the statistical analysis and its interpretation appropriate?

Partly

Are all the source data underlying the results available to ensure full reproducibility?

No

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

No

Are sufficient details of methods and analysis provided to allow replication by others?

No

Reviewer Expertise:

Quantitative Methodology, Statistical Analysis, Program Evaluation

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Wellcome Open Res. 2023 Mar 10. doi: 10.21956/wellcomeopenres.19978.r54701

Reviewer response for version 1

Kirchuffs Atengble 1

Firstly, the entire article needs to be revised to improve its ease of reading, as grammatical constructs (and obscure connections between sentences) make the current reading a bit difficult. Consistency and coherence in communicating ideas should be looked at critically; e.g. "Through the email circulation from AAS, the interested DELTAS programme expressed interest in our two training modules, ENHD 101 and ENHD 102 which are described fully below". Do the authors mean the programme itself, or that interested programme partners showed interest?

Quite unconventionally also, the article was written in the active voice, using first-person plural pronouns. This is, however, left to the discretion of the journal editors.

The methods described suggest that different kinds of evaluations (needs assessment, pre-post, endline) were being reported. These should be clearly differentiated in the Findings/Results section and better articulated.

The quality of the (learning/intervention) evaluations could have been improved if demographic data on learners had been used to evaluate their respective degrees of learning, producing insights to support learning among particular demographics.

It is unclear whether the discussion portion of this article was truly dedicated to the findings of the study, or to a random review of the literature. These should have a common focus, linked to the different findings/results reported.

The conclusion will need to be adequately revised to coalesce the lessons from the different findings of the evaluation. Potentially, the authors could explore implications for programme management, intervention design, among others. This is an evaluation.

Is the work clearly and accurately presented and does it cite the current literature?

Yes

If applicable, is the statistical analysis and its interpretation appropriate?

Not applicable

Are all the source data underlying the results available to ensure full reproducibility?

Partly

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Partly

Are sufficient details of methods and analysis provided to allow replication by others?

Partly

Reviewer Expertise:

Research methods, EIDM, knowledge management, public policy, decision making, international development, information systems, organisation development

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Data Citations

    1. Hara H, Kahurani E, Abdullahi L, et al. : Combined Pre-post assessment cohort 1& 2 (3).xlsx. figshare, [Dataset].2022. 10.6084/m9.figshare.21532461.v1 [DOI]
    2. Hara H, Abdullahi L, Madise N, et al. : ENHD 102 End-line Assessment.xlsx. figshare, [Dataset].2022. 10.6084/m9.figshare.21618252.v3 [DOI]

    Data Availability Statement

    Underlying data

    Figshare: Combined Pre-post assessment cohort 1& 2 (3).xlsx.

    https://doi.org/10.6084/m9.figshare.21532461.v2 14

    This project contains the following underlying data:

    • ENHD 101 Post-training evaluation Cohort 2-HH.xlsx

    • ENHD 101 Pre-training survey Cohort 2-HH.xlsx (13.35 kB)

    • ENHD 102 Post-training evaluation Knowledge on research uptake -HH.xlsx (13.85 kB)

    • ENHD 102 Pre-training survey -HH.xlsx

    Figshare: ENHD 102 End-line Assessment.xlsx.

    https://doi.org/10.6084/m9.figshare.21618252.v3 15

    This project contains the following underlying data:

    • ENHD 101 End-line Assessment HH.xlsx (13.66 kB)

    • ENHD 102 End-line Assessment 2-HH.xlsx

    Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).

