AAS Open Res. 2020 Nov 19;3:31. Originally published 2020 Jul 20. [Version 2] doi: 10.12688/aasopenres.13100.2

Strengthening research management and support services in sub-Saharan African universities and research institutions

Justin Pulford 1,a, Susie Crossman 1, Sara Begg 1, Jessica Amegee Quach 1, Pierre Abomo 1, Taghreed El Hajj 1, Imelda Bates 1
PMCID: PMC7780342  PMID: 33437927

Version Changes

Revised. Amendments from Version 1

We have revised our manuscript taking into consideration the comments from the reviewers. We have expanded our methods section and have included a table providing the location, institution type, focal science and number of participants interviewed in each capacity assessment. In addition, we have provided an exemplar interview guide listing the six research management and support service domains. Finally, we have developed our discussion and conclusion sections.

Abstract

Background: International development partners and research councils are increasingly funding research management and support (RMS) capacity strengthening initiatives in sub-Saharan Africa (SSA) as part of a broader investment in strengthening national and regional research systems.  However, the evidence-base to inform RMS capacity strengthening initiatives is limited at present. This research note presents a synthesis of 28 RMS capacity assessments completed in 25 universities/research institutions from across 15 SSA countries between 2014 and 2018. 

Methods: All 28 capacity assessments were completed following a standardised methodology consisting of semi-structured interviews conducted with research and research support staff at the respective institution as well as document reviews and observation of onsite facilities. Data were extracted from the 28 reports detailing the findings of each assessment according to a framework synthesis approach.

Results: In total, 13 distinct capacity gap categories emerged from across the 28 RMS capacity assessment reports.  Almost all the institutions assessed faced multiple gaps in RMS capacity within and across each of these 13 categories. The 13 categories were not independent of each other and were often closely inter-connected. Commonalities were also evident across multiple categories, the two most obvious of which were severe fiscal constraints and the often-complex bureaucracy of the institutional operating environment.

Conclusions: The synthesis findings reveal multiple, commonly shared RMS capacity gaps in universities and research institutions across SSA. No single intervention type, or focus, would be sufficient to strengthen capacity across all 13 areas; rather, what is needed to facilitate a significant shift in RMS capacity within such SSA universities and research institutions is a combination of interventions, consisting of differing levels of cost and complexity, variously led (or supported) by both internal and external actors.

Keywords: Research Management, Research Capacity Strengthening, Capacity Assessment, Sub-Saharan Africa, University, Research Institution

Introduction

Well-developed research management and support (RMS) services ensure a conducive research environment within a university or dedicated research institution. In many countries of sub-Saharan Africa (SSA), RMS capacity is poorly developed 1, 2, contributing to low research output from SSA universities/research institutions relative to their counterparts elsewhere 3. International development partners and research councils are increasingly funding RMS capacity strengthening initiatives in SSA settings as part of a broader investment in strengthening national and regional research systems 4. However, the evidence-base to inform RMS capacity strengthening initiatives is limited at present 5. Large-scale assessments of specific capacity gaps across and between SSA research institutions are scarce, and we do not yet have sufficient evidence to reliably inform which types of intervention, in which combinations, with which focus and in what proportion, are required to effectively and sustainably build RMS capacity in SSA settings. Thus, we currently lack a clear understanding both of what the RMS capacity gaps are and of how best to address them.

In this research note, we present a synthesis of 28 RMS capacity assessments completed in 25 universities/research institutions from across 15 SSA countries between 2014 and 2018. Drawing on the findings from this synthesis, we then consider their implications with respect to the design, implementation and evaluation of interventions designed to strengthen RMS capacity in low- and middle-income country settings.

Methods

The findings presented in this research note have been drawn from a review of 28 project reports. Each report presented the outcome of an RMS capacity assessment completed by the Centre for Capacity Research, Liverpool School of Tropical Medicine (LSTM), in collaboration with the SSA institution being assessed and following a standardised methodology described elsewhere 6. The SSA institutions were collectively participating in eight distinct research capacity strengthening projects and the assessments were conducted in support of their respective programme objectives. Each assessment focused fully or in part on RMS and consisted of semi-structured interviews conducted on-site at the respective institution as well as document reviews (e.g. strategic plans, institutional RMS policies/guidelines, annual reports) and observation of facilities. Interviews and on-site observations were completed by LSTM research staff. The number of interviews per institutional assessment varied (see supplementary file 1, Table 1), ranging from 8 to 35. A purposive sampling strategy was employed for each assessment, with the aim of obtaining input from staff in positions of strategic interest (e.g. ICT manager) whilst also ensuring participation from a mix of research and research support staff at junior through to senior positions. Pre-visit briefings were conducted remotely with the lead investigator at each institution to explain the purpose and process of the visits and to schedule interviews. Lead investigators were provided with the data collection tools in advance of the visits so they were aware of the range and type of information that would be sought. Interview notes were typed up within a few hours of each interview, checked against audio-recordings of the interviews (where interviewees had given permission to record) and final versions verified among the site visit team. Whilst assessments conducted at dedicated research centres tended to span the entire institution, assessments completed at universities typically focused on core RMS services and a focal college or department (e.g. College of Health Sciences or the Department of Public Health). Table 1 (supplementary file 1) lists the country, institution type and focal scientific disciplines (as an indicator of which departments/colleges were assessed) of all 28 institutional assessments.

The assessments were designed to gauge the presence and capacity of existing RMS services against an international benchmark. The benchmark was developed from a review of the RMS literature and in consultation with various stakeholders, and covered six core domains: institutional research strategy; institutional support services; research facilities; human resource management for research; training activities for research; and external promotion of research findings. An exemplar interview guide listing the six RMS domains and the areas examined under each is presented in supplementary file 2. A detailed description of how the benchmark was developed can be found in Wallis et al. 2017 6. All assessments were qualitative, with no attempt made to rank or score existing capacities. The assessment was not designed to identify every possible capacity gap as measured against the international benchmark, but rather those capacity gaps that, from the interviewees’ perspective, meaningfully impacted on their ability to conduct or support research. A detailed report (~20–30 pages) describing the identified capacity gaps, strengths and recommended capacity strengthening actions was completed at the conclusion of each on-site assessment. Reported capacity gaps were drawn from interviewee comments, document review or observation and were typically supported by at least two independent sources to enhance validity (e.g. two interviewees reporting the same challenge, or an interviewee reporting a challenge subsequently verified by observation or document review). Draft reports were shared with representatives from the assessed institution for review prior to finalisation.

Data were extracted from the 28 reports according to a framework synthesis approach 7. The framework, constructed in Microsoft Excel, consisted of eight columns (the institution name, the six core RMS domains listed above and an ‘other’ column) and 28 rows, one for each report (see underlying data 8). Two independent reviewers, experienced in the institutional capacity assessment process, read the full text of each report and recorded any listed or implied capacity challenges relating to RMS within the corresponding column in the spreadsheet (e.g. ‘unreliable power supply’ would be listed under the ‘research facilities/infrastructure’ column against the respective report). A third reviewer subsequently compared the two sets of extracted entries in the spreadsheet. When the same or a similar capacity gap was reported by both initial reviewers, a single representative label was applied to describe it. When a capacity gap was identified by only one of the first two reviewers, the third reviewer consulted the full text of the corresponding report and made a final decision as to its inclusion. Once completed, the recorded entries in the framework were thematically organised into distinct capacity gap categories. This was an iterative process led by the first author of this research note in collaboration with all co-authors.
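To make the reconciliation step concrete, the sketch below expresses the same logic in code. It is illustrative only: the report identifiers, domain names and gap labels are hypothetical, the authors worked in a spreadsheet rather than in code, and matching of ‘same or similar’ gaps was done by judgement, whereas the sketch uses exact string matching for simplicity.

```python
from collections import defaultdict

# Hypothetical reviewer extracts: {report_id: {domain: set of gap labels}}.
reviewer_a = {
    "report_01": {"research facilities/infrastructure": {"unreliable power supply"}},
}
reviewer_b = {
    "report_01": {"research facilities/infrastructure": {"unreliable power supply",
                                                          "insufficient office space"}},
}

def reconcile(a, b):
    """Keep gaps recorded by both reviewers; flag gaps recorded by only one
    reviewer for a third reviewer to check against the full report text."""
    agreed, disputed = defaultdict(dict), defaultdict(dict)
    for report in set(a) | set(b):
        for domain in set(a.get(report, {})) | set(b.get(report, {})):
            gaps_a = a.get(report, {}).get(domain, set())
            gaps_b = b.get(report, {}).get(domain, set())
            if gaps_a & gaps_b:
                agreed[report][domain] = gaps_a & gaps_b
            if gaps_a ^ gaps_b:  # symmetric difference: one reviewer only
                disputed[report][domain] = gaps_a ^ gaps_b
    return agreed, disputed

agreed, disputed = reconcile(reviewer_a, reviewer_b)
print(agreed)    # retained under a single representative label
print(disputed)  # sent to the third reviewer for adjudication
```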

Results

In total, 13 distinct capacity gap categories emerged from across the 28 RMS capacity assessment reports. Each of the 13 categories, along with specific examples of capacity gaps common to each category, is presented in Box 1. Almost all the institutions assessed faced multiple gaps in RMS capacity within and across each of these 13 categories.

Box 1. Common RMS capacity gaps.

  • 1. Physical Infrastructure: Unreliable power supply; insufficient laboratory, office, study, meeting or physical storage space.

  • 2. Information and Communication Technologies (ICT) Infrastructure: Insufficient ICT hardware; nil/limited access to specialist software; limited internet access or bandwidth capacity.

  • 3. Operating Equipment: Absence or critical shortage of essential laboratory, field and office equipment; vehicle shortage.

  • 4. Laboratory Services and Support: Poorly maintained laboratory equipment; limited funding to support laboratory maintenance; limited/nil laboratory quality control systems or accreditation; insufficient biosecurity/laboratory safety protocols and resources; nil/sub-optimal revenue generation from provision of laboratory services.

  • 5. Research Funding: Limited/nil availability of national and/or institutional research funding; limited funding to support post-graduate research required for attainment of award.

  • 6. Workforce: Excessive workloads for research and research support staff; prolonged staffing vacancies due to hiring freezes and/or absence of suitably qualified candidates; aging workforce; under-qualified and/or inexperienced workforce; insufficient laboratory technicians and/or research support staff.

  • 7. Remuneration: Uncompetitive and/or insufficient salary relative to living costs; inequitable salary ‘top-up’ system applied to externally funded research grants (e.g. academics costed in, but support staff not).

  • 8. Professional Development: Limited/nil access to training/professional development activities for research and research support staff (technicians and support staff having lowest levels of access); limited/nil institutional structures/services to support professional development; limited/nil staff mentorship schemes; limited/nil staff appraisal and performance mechanisms.

  • 9. Career Progression: Limited promotion opportunities (especially for technicians and research support staff); job insecurity; poor staff retention (primarily support staff); limited opportunities for junior academics to enter faculty positions (exacerbated by aging workforce remaining in post).

  • 10. Institutional Support Services: Inefficient/inadequate financial management, procurement, data management and human resource support services; limited access to research literature/e-resources; limited/nil functionality of institutional review boards.

  • 11. Research Support and Project Management: Limited/nil pre- and post-award support services, quality assurance and monitoring; limited research cost recovery policies/expertise; limited/nil institutional research strategy.

  • 12. Internal Communication and Collaboration: Limited internal (inter-departmental) communication and collaboration mechanisms; limited access to and/or awareness of institutional policies and/or available support services.

  • 13. External Communication and Networking: Limited/nil institutional communications strategy; limited/nil institutional funds and/or staff incentives to support knowledge translation activities; limited/nil research output repository; limited support or oversight of institutional website (content and maintenance).

The 13 categories were not independent of each other, but often closely inter-connected. For example, financial management (i.e. institutional support services) was often constrained by a lack of computing hardware and specialised software (ICT infrastructure), limited training opportunities (professional development), few promotion opportunities (career progression) and perceived low pay (remuneration). Commonalities were also evident across multiple categories, the two most obvious of which were severe fiscal constraints and the often-complex bureaucracy of the institutional operating environment. Many capacity gaps were directly attributable to, or exacerbated by, these two constraints.

Discussion

The synthesis revealed 13 distinct capacity gap categories, suggesting a diverse array of interventions is needed to ‘shift’ current RMS capacity to a substantially stronger position in universities and research institutions across SSA. Resolving some of the identified capacity gaps would necessitate financial support, for example to purchase required resources (e.g. laboratory equipment or ICT hardware), to invest in high-cost infrastructure developments (e.g. laboratory, study or office space), and to support research funding. In other cases, provision of training or technical assistance (e.g. supporting professional development, laboratory maintenance, development of publication/data repositories) would be more appropriate, and in others, support to strengthen institutional policies, practices and systems (e.g. streamlining and strengthening financial management practices, staff induction and accountability processes, establishing institutional review boards) would be the most relevant action. The extent to which external input, whether from national, regional (within SSA) or international (outside of SSA) sources, is required would vary according to the intervention, ranging from full, to partial, to nil support. For example, external assistance may be required to support the provision of specialised training or the procurement of otherwise unaffordable equipment, but other interventions, such as the development of remuneration policies or more effective internal communication and collaboration mechanisms, could be driven by the respective institutions themselves at low cost.

No single intervention type, or focus, would be sufficient to strengthen capacity across all 13 areas; rather, what is needed to facilitate a meaningful shift in RMS capacity within such SSA universities and research institutions is a combination of interventions, of differing levels of cost and complexity, variously led (or supported) by both internal and external actors. However, interventions that address (even in part) fiscal constraints and complex bureaucracies may be especially impactful given the centrality of these issues across many of the 13 categories reported here. Determining which combination of interventions may be most appropriate for any one institution should be a collaborative process, engaging both research and research support staff (from senior to junior levels) from the focal institution and ideally delivered as part of a longer-term, overarching research capacity strengthening strategic plan. Incorporating robust monitoring and evaluation processes within the capacity strengthening plan would help identify optimal intervention strategies, critical given the paucity of understanding in this area 5, and would provide opportunities for shared learning between SSA institutions. Arguably, RMS and broader research capacity within SSA will develop faster the greater the role of regional institutions, governments and partners in leading the capacity strengthening effort, irrespective of the underlying funding source.

The finding that common capacity gaps existed in many different institutions across multiple countries suggests that time-consuming, external assessments of RMS capacity may not always be required to identify capacity strengthening priorities. Rather, institutional representatives could confirm which of the capacity gaps reported here apply in their context, prioritise them, report any additional gaps specific to their institution and recommend the most appropriate interventions to mitigate them. The commonalities in RMS constraints across institutions further suggest that intervention combinations proven effective could be implemented at scale where resources and commitment allow. Finally, not all RMS capacity gaps are equal; thus, assessment processes that allow the relative impact of identified gaps in RMS capacity on subsequent research performance to be better understood would usefully highlight areas for priority intervention in a way that our qualitative approach was not designed to do.

Data availability

Underlying data

All requests to the corresponding author for copies of institutional reports will be duly considered. The reports have not been made available as a dataset because the reports cannot be de-identified without compromising anonymity. The reports were produced under ethical approval conditions for the individual projects which stated that only the research team would have access to the data.

De-identified intermediary data are available from Harvard Dataverse.

Harvard Dataverse: Pulford Justin, Crossman Susie, Begg Sara, Amegee Quach Jessica, Abomo Pierre, El Hajj Taghreed and Bates Imelda, 2020, "Strengthening research management and support services in sub-Saharan African universities and research institutions - anonymous data extraction". https://doi.org/10.7910/DVN/IP3O06 8

This project contains the following underlying data:

  • Research Management Systems Challenges Data Extraction - Anonymous.xlsx (intermediary data extracted from 28 research management system capacity assessment reports)

Harvard Dataverse: Centre for Capacity Research, 2020, "Strengthening research management support services in sub-Saharan African universities and research institutions - table 1: location, institution type, focal science, and number of participants interviewed in each capacity assessment", https://doi.org/10.7910/DVN/HJ6SMJ, Harvard Dataverse, V1 9

This project contains the following underlying data:

  • Table 1: Location, institution type, focal science and number of participants interviewed in each capacity assessment

Extended data

Harvard Dataverse: Centre for Capacity Research, 2020, "Strengthening research management and support services in sub-Saharan African universities and research institutions - example interview guide", https://doi.org/10.7910/DVN/HU5E6Q, Harvard Dataverse, V1 10

This project contains the following extended data:

  • An ‘exemplar’ of the interview guides used across the eight projects from which data were drawn to inform the associated research note

Data are available under the terms of the Creative Commons Zero "No rights reserved" data waiver (CC0 1.0 Public domain dedication).
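For readers who wish to work with the intermediary dataset programmatically, the sketch below shows one way to retrieve it using the publicly documented Dataverse native API. It is a minimal example under stated assumptions: that the dataset remains publicly accessible under CC0 and that the API response layout is current; it is not part of the authors’ workflow.

```python
import requests

BASE = "https://dataverse.harvard.edu/api"
DOI = "doi:10.7910/DVN/IP3O06"  # anonymous data extraction dataset cited above

# Fetch dataset metadata, including the files in the latest published version.
resp = requests.get(f"{BASE}/datasets/:persistentId", params={"persistentId": DOI})
resp.raise_for_status()
files = resp.json()["data"]["latestVersion"]["files"]

# Download each file by its numeric Dataverse id.
# Note: tabular files ingested by Dataverse may be served as .tab unless the
# original format is requested via the "format=original" query parameter.
for entry in files:
    data_file = entry["dataFile"]
    content = requests.get(f"{BASE}/access/datafile/{data_file['id']}").content
    with open(data_file["filename"], "wb") as out:
        out.write(content)
```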

Funding Statement

This work was supported by the African Academy of Sciences (AAS) through the Developing Excellence in Leadership, Training and Science in Africa (DELTAS Africa) Community and Public Engagement fund [AAS/021/2019]. The DELTAS Africa Initiative is an independent funding scheme of the AAS’s Alliance for Accelerating Excellence in Science in Africa (AESA) and supported by the New Partnership for Africa’s Development Planning and Coordinating Agency (NEPAD Agency) with funding from the Wellcome Trust [200918/Z/16/Z] and the UK government. The views expressed in this publication are those of the authors and not necessarily those of AAS, NEPAD Agency, Wellcome Trust or the UK government. This work reports data obtained from multiple projects, including: Royal Society [GB-1-203041], sub-awardee IB; GCRF/Natural Environment Research Council [NE/P02095X/1], sub-awardee JP; Department for International Development [PO 6407], awardee MT; London School of Hygiene & Tropical Medicine [ITDCZH26 and ITDCVT6810], sub-awardee IB; GlaxoSmithKline [3000029095], awardee IB; GCRF/Biotechnology and Biological Sciences Research Council [BB/P027954/1], sub-awardee JP; Grand Challenges Research Fund [MR/P027873/1], sub-awardee JP; and European & Developing Countries Clinical Trials Partnership [EDCTP-CSA-Ebola-355], awardee JR.

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

[version 2; peer review: 2 approved, 1 approved with reservations]

References

  • 1. Kebede D, Zielinski C, Mbondji PE, et al.: Institutional facilities in national health research systems in sub-Saharan African countries: results of a questionnaire-based survey. J R Soc Med. 2014;107(1 suppl):96–104. 10.1177/0141076813517680
  • 2. Consort: Scoping work on research management in LMICs, sub-Saharan Africa. London, United Kingdom: Consort; 2017.
  • 3. UNESCO: UNESCO Science Report: Towards 2030. Paris, France: United Nations Educational Scientific and Cultural Organisation (UNESCO); 2015.
  • 4. UKCDS: Health Research Capacity Strengthening: A UKCDS Mapping. London: United Kingdom Collaborative on Development Sciences (UKCDS); 2015.
  • 5. Dean L, Gregorius S, Bates I, et al.: Advancing the science of health research capacity strengthening in low-income and middle-income countries: a scoping review of the published literature, 2000-2016. BMJ Open. 2017;7(12):e018718. 10.1136/bmjopen-2017-018718
  • 6. Wallis S, Cole DC, Gaye O, et al.: Qualitative study to develop processes and tools for the assessment and tracking of African institutions’ capacity for operational health research. BMJ Open. 2017;7(9):e016660. 10.1136/bmjopen-2017-016660
  • 7. Dixon-Woods M: Using framework-based synthesis for conducting reviews of qualitative studies. BMC Med. 2011;9(1):39. 10.1186/1741-7015-9-39
  • 8. Centre for Capacity Research; Pulford J, Crossman S: Strengthening research management and support services in sub-Saharan African universities and research institutions - anonymous data extraction. Harvard Dataverse, V1. 2020. 10.7910/DVN/IP3O06
  • 9. Centre for Capacity Research: Strengthening research management support services in sub-Saharan African universities and research institutions - table 1: location, institution type, focal science, and number of participants interviewed in each capacity assessment. Harvard Dataverse, V1. 2020. 10.7910/DVN/HJ6SMJ
  • 10. Centre for Capacity Research: Strengthening research management and support services in sub-Saharan African universities and research institutions - example interview guide. Harvard Dataverse, V1. 2020. 10.7910/DVN/HU5E6Q
AAS Open Res. 2021 Mar 1. doi: 10.21956/aasopenres.14280.r28222

Reviewer response for version 2

Aaron N Yarmoshuk 1

I have now read the revised article in full and reviewed the authors’ responses. I now approve the article without reservation.

Is the work clearly and accurately presented and does it cite the current literature?

Yes

If applicable, is the statistical analysis and its interpretation appropriate?

Not applicable

Are all the source data underlying the results available to ensure full reproducibility?

Partly

Is the study design appropriate and is the work technically sound?

Yes

Are the conclusions drawn adequately supported by the results?

Yes

Are sufficient details of methods and analysis provided to allow replication by others?

Yes

Reviewer Expertise:

Institution Building, Research Management, Capacity Strengthening, Global Health, Globalisation, Global Tertiary Education

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

AAS Open Res. 2020 Dec 17. doi: 10.21956/aasopenres.14280.r28221

Reviewer response for version 2

Victoria O Kasprowicz 1,2

I have no further comments to make. I feel the authors have adequately addressed my concerns.

Is the work clearly and accurately presented and does it cite the current literature?

Yes

If applicable, is the statistical analysis and its interpretation appropriate?

Not applicable

Are all the source data underlying the results available to ensure full reproducibility?

Partly

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Yes

Are sufficient details of methods and analysis provided to allow replication by others?

Partly

Reviewer Expertise:

Immunology, HIV/TB, Research Capacity Strengthening

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

AAS Open Res. 2020 Aug 12. doi: 10.21956/aasopenres.14199.r27576

Reviewer response for version 1

Aaron N Yarmoshuk 1

This is a useful research note. It would be good to get a better sense of the 25 institutions where the 28 assessments were conducted, including the 15 SSA countries in which the institutions are based. Fifteen countries represent approximately 1/3 of SSA. Are all sub-regions (Central, Eastern, Southern and Western) represented?  What is the representation between anglophone, francophone and lusophone countries? I can understand why the authors don’t wish to mention specific countries but it would be good to get some idea of the geographic distribution of the 25 institutions.

“Significant” is used five times in the note yet in the Methods it is stated, “All assessments were qualitative, with no attempt made to rank or score existing capacities.” Without some form of measurement it is suggested that this adjective not be used.

Are the authors willing to provide the data collection tools used in an appendix? This would be useful for further research. It is noted that the authors provide de-identified intermediary data through the Harvard Dataverse.

How many representatives were interviewed per institution assessed? Can the range be provided; for example, between x and y representatives were interviewed per institution?

It is stated that one of the two greatest challenges to strengthening RMS capacity was found to be severe fiscal constraints - was information collected on overhead rates charged by the institutions? If so, can it be presented?

The other of the two greatest challenges was found to be complex bureaucracy of the institutional operating environment. Is it known if this challenge can be addressed at the institutional level itself or is this partially a creation of having to follow national regulations by which the institutions are governed?

Is the work clearly and accurately presented and does it cite the current literature?

Yes

If applicable, is the statistical analysis and its interpretation appropriate?

Not applicable

Are all the source data underlying the results available to ensure full reproducibility?

Partly

Is the study design appropriate and is the work technically sound?

Yes

Are the conclusions drawn adequately supported by the results?

Yes

Are sufficient details of methods and analysis provided to allow replication by others?

Yes

Reviewer Expertise:

Institution Building, Research Management, Capacity Strengthening, Global Health, Globalisation, Global Tertiary Education

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

AAS Open Res. 2020 Nov 10.
Susie Crossman 1

It would be good to get a better sense of the 25 institutions where the 28 assessments were conducted, including the 15 SSA countries in which the institutions are based. Fifteen countries represent approximately 1/3 of SSA. Are all sub-regions (Central, Eastern, Southern and Western) represented?  What is the representation between anglophone, francophone and lusophone countries? I can understand why the authors don’t wish to mention specific countries but it would be good to get some idea of the geographic distribution of the 25 institutions.

Response:  We have included additional information re the country of location for all 28 institutions assessed in a new supplementary file.

“Significant” is used five times in the note yet in the Methods it is stated, “All assessments were qualitative, with no attempt made to rank or score existing capacities.” Without some form of measurement it is suggested that this adjective not be used.

Response: We have replaced the word ‘significant’ with suitable alternatives, less associated with statistical analysis, throughout the manuscript.

Are the authors willing to provide the data collection tools used in an appendix? This would be useful for further research. It is noted that the authors provide de-identified intermediary data through the Harvard Dataverse.

Response: The data collection tools varied somewhat across the eight distinct projects in which the 28 assessments were conducted.  Rather than present the entire suite of data collection tools, we have developed an ‘exemplar’ interview guide listing the six RMS domains and the areas explored under each.  This exemplar interview guide has been uploaded as supplementary file 2. The six RMS domains and their respective question areas were common across all 28 assessments.

How many representatives were interviewed per institution assessed? Can the range be provided; for example, between x and y representatives were interviewed per institution?

Response:  We have included the number of participants interviewed for each assessment in Table 1, supplementary file 1.

It is stated that one of the two greatest challenges to strengthening RMS capacity was found to be severe fiscal constraints - was information collected on overhead rates charged by the institutions? If so, can it be presented?

Response:  Participants were asked whether overhead rates were routinely applied to external grant applications and, if yes, how these rates were calculated and how their application (within grant submissions) was overseen (i.e. who was responsible, how was this monitored). However, we did not record specific overhead rates for each institution assessed, in cases where a standard overhead rate was reported. In general, effective grant costing (including the application of overhead rates) was a common challenge with few participants reporting reliable pre-award support services or processes. This stands out as a critical RMS capacity gap in many institutions given the potential to lose/gain income through poor/robust grant costing practices.

The other of the two greatest challenges was found to be complex bureaucracy of the institutional operating environment. Is it known if this challenge can be addressed at the institutional level itself or is this partially a creation of having to follow national regulations by which the institutions are governed?

Response: Certainly, institutional bureaucracies were often complex due to national regulations over which the respective institutions had limited/nil control.  However, we also observed many examples of bureaucratic procedures that were unnecessarily complex or inefficient and in which the respective institutions could potentially simplify processes.  Common examples were in the areas of financial reporting and procurement where institutional staff were often expending considerable time and effort to complete simple tasks due to a reliance on outdated paper-based systems with onerous and inflexible ‘authorisation/sign-off’ protocols.

AAS Open Res. 2020 Aug 3. doi: 10.21956/aasopenres.14199.r27578

Reviewer response for version 1

Victoria O Kasprowicz 1,2

Pulford et al. present results of 28 research management and support (RMS) capacity assessments completed in 25 universities/research institutions from 15 SSA countries performed between 2014 and 2018. Thirteen distinct capacity gap categories were identified and the authors report that almost all the institutions faced significant gaps both within and across these categories. Pulford et al. state that these multiple commonly shared gaps could not be addressed by the introduction of one intervention type, but by a combination of interventions. I enjoyed reading this research note and found it to be a valuable contribution to a relatively limited literature pool. The emerging importance of and focus on RMS capacity building highlights the need for further publications in this key area.

I acknowledge that this is a short research note but I feel that this publication could be improved with additional information:

  • Considering the size of SSA I feel that it would be interesting to state the participating countries and/or more detail with regards to breakdown of institute vs university department. I understand there may be hesitation with listing the specific institutes/universities - but I do feel a little more information could be helpful in interpreting the findings.

  • I found the methods/approach a little difficult to fully comprehend with the information provided. For example, the methods stated that an international benchmark was used as a comparison and that this benchmark was determined based on a literature review and consultation with various stakeholders. I think it might be nice to add details on the approach/details of the literature review (e.g. anglophone vs francophone journals, numbers, countries included in the reports etc). Also, it would be of interest to provide details of the participating stakeholders. It would help the interpretation of the paper if more detail on this international benchmark was provided and how it was used. Linked to the benchmark point, and as the authors state ‘significant’ gaps at almost all the sites, I think it might be nice to provide a definition of ‘significant’ and what performance level was identified for the sites who didn’t have significant gaps. 

  • I feel it would be beneficial for more detail on the approach at the SSA sites. For example, details on the tools used to help the reader gain a better feel of how capacity gaps were identified e.g. questions for the semi-structured interviews and perhaps a detailed breakdown of the number of participating team members (scientists vs support staff) at each site. Additional questions I have include: What documents were reviewed as part of the assessment? Who carried out the ‘observation of facilities’? How were the documents and observation of facilities used in the process of identifying capacity gaps? How was the research focus/specialty of each site taken into account when identifying specific capacity gaps? How did the capacity gaps actually link with research activity and outputs at each site?

 

Despite the suggestions noted above, I feel the conclusion that common RMS capacity gaps do exist at many SSA research sites to be useful information that can hopefully guide future intervention efforts. Sites should be encouraged to lead the development of RMS capacity strengthening plans with embedded monitoring and evaluation as part of their strategic plan. Further sharing of gaps, challenges and progress in this area could help identify optimal intervention strategies and opportunities for intra-Africa collaboration.

Is the work clearly and accurately presented and does it cite the current literature?

Yes

If applicable, is the statistical analysis and its interpretation appropriate?

Not applicable

Are all the source data underlying the results available to ensure full reproducibility?

Partly

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Yes

Are sufficient details of methods and analysis provided to allow replication by others?

Partly

Reviewer Expertise:

Immunology, HIV/TB, Research Capacity Strengthening

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

AAS Open Res. 2020 Nov 10.
Susie Crossman 1

Considering the size of SSA I feel that it would be interesting to state the participating countries and/or more detail with regards to breakdown of institute vs university department. I understand there may be hesitation with listing the specific institutes/universities - but I do feel a little more information could be helpful in interpreting the findings.

Response: We have now included additional information re the country of location for all 28 institutions assessed in a supplementary file (supplementary file 1).

I found the methods/approach a little difficult to fully comprehend with the information provided. For example, the methods stated that an international benchmark was used as a comparison and that this benchmark was determined based on a literature review and consultation with various stakeholders. I think it might be nice to add details on the approach/details of the literature review (e.g. anglophone vs francophone journals, numbers, countries included in the reports etc). Also, it would be of interest to provide details of the participating stakeholders. It would help the interpretation of the paper if more detail on this international benchmark was provided and how it was used. Linked to the benchmark point, and as the authors state ‘significant’ gaps at almost all the sites, I think it might be nice to provide a definition of ‘significant’ and what performance level was identified for the sites who didn’t have significant gaps.

Response: We sympathise with the request for additional information regarding the development of the international benchmark and other details regarding the capacity assessment methodology (both this comment and below). We have attempted to strike a balance between providing essential detail about the capacity assessment process underlying each of the 28 reports synthesised in this research note, providing essential detail about the methodology employed to synthesise the report content and the need to remain within the limited word count required of the research note format. We have prioritised a description of the synthesis methodology as that, rather than the capacity assessment process, is the primary focus of this research note. However, we acknowledge that the synthesis data reported here cannot be properly understood without a sound understanding of the capacity assessment process underlying the reports.  Fortunately, the latter has been well described in the published literature previously, including a detailed description of the benchmarking process, and we have now clearly signalled in the text where this can be accessed (via open access publication). We recognise this is less than ideal but is perhaps the best solution given the need to balance the various requirements described above.  In addition, we have replaced the word ‘significant’ throughout the research note in order to reduce the perception that we have measured performance by a quantitative means (which we did not) and have made further clarifications around how (lack of) capacity was assessed (see also our responses to reviewer one above).  We have now listed the number of participants interviewed during each assessment in supplementary file 1.  We are not able to provide a reliable breakdown on the percentage of participants in either research or RMS roles, although we have now noted in text that there was generally a relatively even split between the two and have given more detail about the types of participants interviewed.

I feel it would be beneficial for more detail on the approach at the SSA sites. For example, details on the tools used to help the reader gain a better feel of how capacity gaps were identified e.g. questions for the semi-structured interviews and perhaps a detailed breakdown of the number of participating team members (scientists vs support staff) at each site. Additional questions I have include: What documents were reviewed as part of the assessment? Who carried out the ‘observation of facilities’? How were the documents and observation of facilities used in the process of identifying capacity gaps? How was the research focus/specialty of each site taken into account when identifying specific capacity gaps? How did the capacity gaps actually link with research activity and outputs at each site?

Response: We have now included an exemplar interview guide as a supplementary file (supplementary file 2) and have noted participant number in supplementary file one. We have inserted in the text some examples of the types of documents reviewed during the capacity assessments, as well as a description of how the document review/observations were incorporated into the respective institutional capacity assessment reports. We have also more clearly stated who completed the interviews/observations. Assessments were tailored to some degree for each institution, accounting for their respective research focus (or more accurately the focal scientific discipline as reported in Table 1, supplementary file 1), although – in practice – RMS components of the assessment remained largely consistent across all assessments given the relatively generic nature of RMS functions (nb. the assessments usually covered more than just RMS, but only the RMS components are reported in this manuscript). No attempt was made to link documented capacity gaps with research activity/output at the assessed institutions (as also noted in response to reviewer one above). We have more clearly noted this in the text and have included as a recommendation in the discussion.

Despite the suggestions noted above, I feel the conclusions that common RMS capacity gaps do exist at many SSA research sites to be useful information that can hopefully guide future interventions efforts. Sites should be encouraged to lead the development of RMS capacity strengthening plans with embedded monitoring and evaluation as part of their strategic plan. Further sharing of gaps, challenges and progress in this area could help identify optimal intervention strategies and opportunities for intra-Africa collaboration.

Response: Reviewer two was right to point out the key roles of monitoring, evaluation, learning, strategic planning and intra-Africa collaboration in the RMS and broader research capacity strengthening process.  We have amended the discussion accordingly.

AAS Open Res. 2020 Jul 28. doi: 10.21956/aasopenres.14199.r27577

Reviewer response for version 1

Alex C Ezeh 1,2

  • This research note presents and discusses results of 28 research management and support (RMS) capacity assessments of 25 universities and research institutions across 15 countries in SSA. The assessments were implemented between 2014 and 2018. The authors observed that “almost all the institutions assessed faced significant gaps in RMS capacity” across the 13 distinct but inter-related domains they identified. Severe financial constraints and institutional bureaucracy were seen as the two most obvious capacity gaps. The authors concluded that no single intervention would be sufficient to strengthen capacity across all the 13 domains. 

  • While this is an important study in an area where very little exists, there are a number of limitations in the current version of the research note. It is not clear how the questions in the semi-structured interview guides were framed. If these sought to identify gaps the respondents saw in their respective institutions, it will not be surprising to come to the conclusion the authors did regarding significant gaps across all the institutions assessed. Respondents will always find answers to any question they are asked in an interview, but such answers may not necessarily provide valid basis for the conclusion on the RMS capacity of these institutions. Even the staff of an institution with the best RMS capacity in the world would find something they could improve on. Stating that there are areas of improvement is not the same as having a capacity gap.

  • RMS capacity is not an end in itself; it is supposed to support research. We know nothing about the state of research enterprise at these institutions and the extent to which the identified capacity gaps inhibit research productivity at the institutions. This is particularly important because the nature of what constituted a capacity gap varied enormously in how impactful they could be to research undertaking. For each of the 13 domains, you can almost create a scale that ranges from most severe impact on research to inconsequential impact on research and the inability to distinguish between these types of gaps within a domain in this research note is a major limitation. For example, “unreliable power supply” and “insufficient meeting space” as examples of physical infrastructure gaps are likely to have different impacts on research. There are also variations on how each of the 13 domains could affect research.

Related to the above point is the issue of the size of these institutions, which could be a university department or college or a whole research institute. To what extent do the identified gaps relate to factors that constrain research at the institutions or aspirational notions of what would be great to have at these institutions?

The authors noted that a third reviewer read each of the cases in which a specific gap was identified by only one of the initial two reviewers and then “made a final decision as to its inclusion”. We do not know in how many instances the decision was made to include the gap versus exclude it. Is it possible that the fact that the third reviewer had already read one of the initial reviewers’ assessments of a response as constituting a capacity gap would influence their identification of the gap? This would be the case if, in the majority of cases, the third reviewer agreed with the identification of a capacity gap, and this would generally exaggerate the number of gaps identified.

  • The authors identified a number of the “easy to do” interventions (purchase of laboratory equipment or ICT hardware; investment in laboratory, study or office spaces; training and technical assistance; and strengthening institutional policies, practices and systems). While these are all essential, in the absence of a coherent strategy to strengthen research institutions in SSA, these investments, even if they strengthen RMS at SSA institutions, are unlikely to transform the landscape of research systems in the region. RMS strengthening has to be part of an overall strategy to strengthen knowledge-based institutions in the region. Two critical ingredients to strengthening institutional research capacity in SSA are changing the current funding models and using local African capacity to drive the implementation of any capacity strengthening initiative in the region.

  • There are repeated references to the role of external actors in addressing capacity gaps in SSA. This needs further clarification. Current funding models that support SSA institutions through sub-awards and technical assistance from intermediary organizations based outside SSA will continue to undermine the capacity of African institutions. This funding model robs African institutions of access to the levels of funding needed to transform organizational systems and processes, compete for top African talents, and develop closer partnerships with primary funders of research. If it takes capacity to build capacity, the choice of African institutions as primary agencies for capacity building efforts in the region could be transformative. It affirms and further strengthens existing capacity in the region, ensures capacity solutions are appropriate and contextually relevant, and it could guarantee sustained partnerships beyond any specific project or grant.

  • I agree with the authors’ final conclusion on the critical role of institutional leaders in defining and prioritizing the capacity gaps/needs of their institutions; and I would hasten to add, and in finding the most appropriate and suitable interventions to mitigate the identified gaps.

Is the work clearly and accurately presented and does it cite the current literature?

Yes

If applicable, is the statistical analysis and its interpretation appropriate?

Not applicable

Are all the source data underlying the results available to ensure full reproducibility?

Partly

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Yes

Are sufficient details of methods and analysis provided to allow replication by others?

Partly

Reviewer Expertise:

Global Health, Population Studies/Demography, Research Capacity Strengthening, Urban Health

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

AAS Open Res. 2020 Nov 10.
Susie Crossman 1

While this is an important study in an area where very little exists, there are a number of limitations in the current version of the research note. It is not clear how the questions in the semi-structured interview guides were framed. If these sought to identify gaps the respondents saw in their respective institutions, it will not be surprising to come to the conclusion the authors did regarding significant gaps across all the institutions assessed. Respondents will always find answers to any question they are asked in an interview, but such answers may not necessarily provide valid basis for the conclusion on the RMS capacity of these institutions. Even the staff of an institution with the best RMS capacity in the world would find something they could improve on. Stating that there are areas of improvement is not the same as having a capacity gap.

Response: We agree with reviewer one’s point that there is a distinction between an ‘area for improvement’ and a capacity gap that fundamentally frustrates or undermines the attainment of a sufficient or acceptable level of research capacity. Being qualitative, our assessments were not designed to objectively determine the relative severity of any reported capacity issue on a measurable level of research performance. However, the interviews were designed to identify issues that – from the interviewees’ perspective – meaningfully impacted on their ability to conduct or support research. Where such issues were raised, we also sought to verify these in at least one other interview at the same institute. Thus, whilst the listed capacity gaps vary from major to relatively minor, they were all included in the respective reports from which they were drawn on the basis that they were a genuine cause of frustration for at least one interviewee, but most commonly two or more. To address reviewer one’s concerns, we have now included an exemplar interview guide as a supplementary file (supplementary file 2) and have amended the following sentence in the ‘methodology’ to read (amendment in italics): ‘All assessments were qualitative, with no attempt made to rank or score existing capacities. The assessment was not designed to identify every possible capacity gap as measured against the international benchmark, but rather those capacity gaps that - from the interviewees’ perspective – meaningfully impacted on their ability to conduct or support research.’  

RMS capacity is not an end in itself; it is supposed to support research. We know nothing about the state of research enterprise at these institutions and the extent to which the identified capacity gaps inhibit research productivity at the institutions. This is particularly important because the nature of what constituted a capacity gap varied enormously in how impactful they could be to research undertaking. For each of the 13 domains, you can almost create a scale that ranges from most severe impact on research to inconsequential impact on research and the inability to distinguish between these types of gaps within a domain in this research note is a major limitation. For example, “unreliable power supply” and “insufficient meeting space” as examples of physical infrastructure gaps are likely to have different impacts on research. There are also variations on how each of the 13 domains could affect research.

Response: Reviewer one raises an important point and one we fundamentally agree with although, as noted above, our qualitative assessments were not designed to do this.  However, it would be incredibly helpful to the broader research capacity strengthening effort to better understand the relative impact of different RMS capacities on subsequent research performance in the manner described by reviewer 1 and we have now stated this in the final sentence of the discussion.

Related to the above point is the issue of the size of these institutions, which could be a university department or college or a whole research institute. To what extent do the identified gaps relate to factors that constraint research at the institutions or aspirational notions of what would be great to have at these institutions? 

Response: As previously noted, the listed capacity gaps were all included in the respective reports from which they were drawn on the basis that – from the interviewees’ perspective – they meaningfully impacted on their ability to conduct or support research. This has been made clearer in the text.

The authors noted that a third reviewer read each of the cases in which a specific gap was identified by only one of the initial two reviewers and then “made a final decision as to its inclusion”. We do not know in how many instances the decision was made to include the gap versus exclude it. Is it possible that the fact that the third reviewer already read about one of the initial reviewer’s assessment of a response as constituting a capacity gap would influence their identification of the gap? This would be the case if in majority of the cases, the third reviewer agreed with the identification of a capacity gap and this would generally exaggerate the number of gaps identified.

Response: The use of independent coders was designed to reduce over- or under-reporting of identified capacity gaps that may result when coding is completed by a single individual.  This method reduces the potential for bias although, as reviewer one notes, some potential for bias remains. We did not keep a record of how often a third opinion was required to make an ‘inclusion/exclusion’ decision; however, a third opinion was only required in a minority of cases and did not always result in a decision to ‘include’ a disputed capacity gap. We further contend that, whilst imperfect, this process is more likely to result in a balanced representation of the report content as compared to a coding process that relied on a single individual.

The authors identified a number of the “easy to do” interventions (purchase of laboratory equipment or ICT hardware; investment in laboratory, study or office spaces; training and technical assistance; and strengthening institutional policies, practices and systems). While these are all essential, in the absence of a coherent strategy to strengthen research institutions in SSA, these investments, even if they strengthen RMS at SSA institutions, are unlikely to transform the landscape of research systems in the region. RMS strengthening has to be part of an overall strategy to strengthen knowledge-based institutions in the region. Two critical ingredients to strengthening institutional research capacity in SSA are changing the current funding models and using local African capacity to drive the implementation of any capacity strengthening initiative in the region.

Response: We are in complete agreement with the views expressed by reviewer one here.  The discussion has been amended to better highlight the important roles of both longer-term strategic planning and regional leadership in research capacity strengthening.

There are repeated references to the role of external actors in addressing capacity gaps in SSA. This needs further clarification. Current funding models that support SSA institutions through sub-awards and technical assistance from intermediary organizations based outside SSA will continue to undermine the capacity of African institutions. This funding model robs African institutions of access to the levels of funding needed to transform organizational systems and processes, compete for top African talents, and develop closer partnerships with primary funders of research. If it takes capacity to build capacity, the choice of African institutions as primary agencies for capacity building efforts in the region could be transformative. It affirms and further strengthens existing capacity in the region, ensures capacity solutions are appropriate and contextually relevant, and it could guarantee sustained partnerships beyond any specific project or grant.

Response: Our use of the term ‘external’ actors refers to persons/organisations external to the focal University or research institution. Thus, an external actor could be national, regional (within SSA) or international (outside of SSA). We have clarified our use of the term in the research note.

I agree with the authors’ final conclusion on the critical role of institutional leaders in defining and prioritizing the capacity gaps/needs of their institutions; and I would hasten to add, and in finding the most appropriate and suitable interventions to mitigate the identified gaps.

Response: We agree and have amended the discussion in line with reviewer one’s comments.
