PLoS One. 2020 Nov 5;15(11):e0239757. doi: 10.1371/journal.pone.0239757

Decision-making approaches used by UK and international health funding organisations for allocating research funds: A survey of current practice

Katie Meadmore*, Kathryn Fackrell, Alejandra Recio-Saucedo, Abby Bull, Simon D S Fraser, Amanda Blatch-Jones
Editor: Shelina Visram
PMCID: PMC7644005  PMID: 33151954

Abstract

Innovations in decision-making practice for the allocation of funds in health research are emerging; however, it is not clear to what extent these are used. This study aims to better understand current decision-making practices for the allocation of research funding from the perspective of UK and international health funders. An online survey (active March-April 2019) was distributed by email to UK and international health and health-related funding organisations (e.g., biomedical and social), and was publicised on social media. The survey collected information about decision-making approaches for research funding allocation, covering assessment criteria, current and past practices, and considerations for improvements or future practice. A mixed methods analysis provided descriptive statistics (frequencies and percentages of responses) and an inductive thematic framework of key experiences. Thirty-one responses were analysed, representing government-funded organisations and charities in the health sector from the UK, Europe and Australia. Four themes were extracted and provided a narrative framework. 1. The most reported decision-making approaches were external peer review, triage, and face-to-face committee meetings; 2. Key values underpinned decision-making processes, including transparency and gaining perspectives from reviewers with different expertise (e.g., scientific, patient and public); 3. Cross-cutting challenges of the decision-making processes faced by funders included bias, burden and external limitations; 4. There was evidence of variations on, and innovations beyond, the most reported decision-making approaches, including proportionate peer review, different numbers of decision-points, virtual committee meetings and sandpits (interactive workshops). Broadly similar decision-making processes were used by all funders in this survey. Findings indicated a preference for funders to adapt current decision-making processes rather than adopt more innovative approaches; however, there is a need for more flexibility in decision-making and more support for applicants. Funders indicated the need for information and empirical evidence on innovations, which would help to inform decision-making in research fund allocation.

Introduction

Health research funding organisations have to make difficult decisions about which research applications to fund: for example, deciding which health areas or research questions have priority, and whether the research may lead to changes in practice and better health outcomes (be that patient, economic or social benefit). In theory, funding decisions should be objective, with all applications assessed against criteria in a fair, consistent and transparent way, and those applications deemed to meet the assessment criteria funded. However, in reality, funding decision-making is much more complex, and often requires balancing evidence needs, assessment criteria weightings, potential impact, the workload capacity of funding organisation staff, applicants and reviewers, and a finite financial resource [1]. The challenges to funding decision-making are well established and numerous, and although innovations in decision-making are emerging, it is not clear to what extent these are utilised by funders.

The process of decision-making is often facilitated by peer review [2–4]. Peer review refers to a process by which an application is assessed by an expert in the research area, a person with related expertise (e.g., academic, clinician, health economist, methodologist, patient) or a member of the public. From the perspective of research funders, this can include external peer review, in which the reviewer is independent from the funding organisation and provides a written review, and/or a funding committee that reviews and discusses the application and is often built into the funding organisational process. Peer review, and specifically external peer review, is often considered critical for the decision-making process [3,5], with a recent report finding 78% of survey respondents agreeing that peer review was the best method by which to allocate research funds [4].

There are many benefits to using peer review including receiving expert opinion on particular ideas, methodologies, practicalities and potential future implementation of the research. There are also benefits to the reviewers themselves including keeping up to date with progress in their fields which helps to enhance personal development and can feed into teaching and research practices [5]. However, it is also well established that peer review has flaws including being susceptible to bias. For example, it has been shown that peer reviewers and funding decisions have favoured applicants of specific ages, career stages, genders and institutions, amongst others [6–11]. Furthermore, peer review is inherently subjective. Peer reviewer scores and recommendations often vary widely [1,3,7,11–15] and do not always predict that the research will be successful or will have impact [8,16,17]. Peer review is also considered burdensome, and there is a high cost in terms of time and financial expense for applicants, reviewers and funding organisations [3,8,18–21].

Despite the known challenges of peer review, the same processes and associated issues are still largely reported [22]. Variations and innovations in decision-making are emerging to tackle the issues of bias, burden and cost [5,11,23,24]. For example, variations to application forms and numbers of reviewers, teleconferencing, and innovative approaches such as sandpits (an interactive workshop over a number of days whereby stakeholders with an interest in research on a particular topic are encouraged to collaborate on innovative solutions to a research question) and modified funding lotteries [11,13,25–29]. However, empirical research activity in these areas is limited, and challenges in conducting rigorous testing of innovative approaches have resulted in a limited evidence base for decision-making approaches [30]. In addition, it is not clear to what extent these are used by health funding organisations. For example, in 2011, the European Science Foundation conducted a survey [31] to explore peer review practices, with a focus on quality assurance of reviews, identifying, incentivising and managing data on reviewers, and how proposals are managed. The survey did not report detailed information on the types of processes used in peer review practices, nor whether any innovative approaches were being considered by health research funders. Identifying which approaches are used in current decision-making practice, and whether they include innovative approaches, may provide a better understanding of decision-making and of why funders are engaged (or not) in exploring mechanisms to change or enhance funding processes to address known challenges.

The UK National Institute for Health Research (NIHR) Research on Research (RoR) team are addressing the lack of an empirical evidence-base in a programme of work exploring peer review and its role in the decision-making process for the allocation of research funding. This paper reports the results of the first study to be completed in this programme of work. The aim of the study was to identify and explore decision-making practices used by UK and international funders to better understand the current decision-making landscape for the allocation of health research funding.

Materials and method

Design

This study used a survey design to gather quantitative and qualitative information about decision-making practices used by organisations that fund health and health-related research. The survey used closed, tick-box questions and open (free-text) questions and was designed to be delivered online in order to have national and global reach. Open questions were underpinned by a qualitative phenomenological approach that aimed to explore and build understanding of experiences in decision-making processes from the perspective of the funding organisations. The study was approved by the University of Southampton, Faculty of Medicine Ethics Committee (ERGO ID 46851, February 2019).

Survey development

The survey was delivered online using iSurvey software maintained by the University of Southampton (https://isurvey.soton.ac.uk/). The questions were developed using an iterative process involving NIHR staff and members of the NIHR Research on Research team. Initially, the authors compiled a list of potential questions for inclusion in the survey, which were grouped into sections. Questions were developed based on discussions with the team and from previous NIHR projects [23,28,32] and existing literature about different types of decision-making processes (e.g. peer review, triage, sandpits; [31,33]) and were distilled down to generate a 27-question survey covering three sections.

The survey was piloted with four members of NIHR staff known to the research team to determine the relevance of the proposed questions, the face validity of questions, in particular language, comprehension and completion time, and the construct validity of the questions and response options. Two members of staff provided written feedback and two members of staff provided verbal feedback as they tested the survey. Feedback was used to refine and re-order some of the original questions and consensus was reached by agreement from the research team.

The final survey consisted of 27 main questions (see S1 File). Respondents were asked to identify one research programme or funding call within their organisation that they would focus on to complete the survey. They were asked to choose a funding programme or theme (for research projects or programmes, not fellowships or infrastructure) that they were most familiar with. It was made clear that respondents could complete the survey multiple times (for different funding programmes). Each section is described in Table 1.

Table 1. Description of contents of each of the three survey sections.

Section 1 (13 questions): Characteristics of the funding organisation.
Section 2 (10 questions): Current decision-making practices and if/how these could be improved; practices used in the past; benefits and drawbacks of these systems.
Section 3 (4 questions): Decision-making practices that funders might be interested in exploring in the future, and why.

Distribution of survey

Purposive and snowball sampling was used to recruit respondents. In order to obtain a broad funder perspective, 76 health research funding organisations (109 targeted emails) were contacted across 10 different countries. Where possible, emails were sent to named administrative staff for the research programmes, but general enquiry email addresses and online forms were also used. Other than some colleagues at the NIHR, there were no prior relationships with potential respondents.

The targeted list of organisations was collated through an online search for health funders and included charities, research councils and other government-funded organisations in the UK and internationally. The online search was complemented by cross-referencing with lists of funding organisations on websites and in reports (e.g., [34,35]), as well as through known contacts of NIHR staff. Organisations were considered eligible if the remit of the research funded was health or health-related research projects or programmes (not fellowships or awards funding an individual person or infrastructure).

The survey was launched on 6th March 2019 and was open for six weeks, closing on 17th April 2019. The survey was also promoted via e-promotion routes, including blogs on the Association of Medical Research Charities [36] and NIHR [37] websites, via social media channels such as Twitter (e.g., the NIHR Twitter account), and through other organisational distribution lists and newsletters (e.g., The International Network of Agencies for Health Technology Assessment and the Health Research Authority). The survey was also promoted at national and international conferences (e.g., the Ensuring Value in Research (EViR) Funders' Collaboration and Development Forum, March 2019). Potential respondents either received an invitation letter and link to the survey via email or could access the survey link via social media posts. The survey link took respondents to an information sheet and required them to provide online consent before the survey questions were displayed. Two reminder emails and tweets were sent, one two weeks before and a second two days before the survey was due to close.

Data analysis

A mixed methods approach to the analysis was taken in order to cross-validate findings and provide a fuller picture of the peer review landscape. Data were analysed separately but concurrently, and findings from each strand of data were triangulated to inform and explain patterns and interpretation. For example, qualitative data were used to expand on and interpret quantitative findings, and frequency data were used to confirm patterns in the qualitative data.

Prior to analysis, data were screened as part of data cleaning by at least two members of the team. If multiple responses were received for one research programme, the responses were merged so that there was only one entry for that research programme. All data were stored in a dedicated research folder on the University of Southampton’s internal secure server.
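As a minimal sketch of this merging rule (illustrative only; the paper does not describe a scripted procedure, the field names below are invented, and in the study open responses were combined rather than simply filled, which this sketch does not capture), duplicate entries for a research programme can be collapsed by keeping the most complete entry and filling its blanks from the other:

```python
# Hypothetical sketch of the merging rule described above: for duplicate
# survey entries on the same research programme, keep the most complete
# entry and fill any blank answers from the other entry.

def merge_duplicate_entries(entries):
    """Collapse multiple survey entries for one research programme."""
    # Rank entries by completeness (number of non-blank answers).
    ranked = sorted(
        entries,
        key=lambda e: sum(v not in (None, "") for v in e.values()),
        reverse=True,
    )
    merged = dict(ranked[0])  # start from the most complete entry
    for other in ranked[1:]:
        for question, answer in other.items():
            if merged.get(question) in (None, ""):
                merged[question] = answer  # fill blanks from the other entry
    return merged

entry_a = {"triage": "yes", "committee": "", "notes": "Uses external review."}
entry_b = {"triage": "yes", "committee": "face-to-face", "notes": ""}
print(merge_duplicate_entries([entry_a, entry_b]))
# {'triage': 'yes', 'committee': 'face-to-face', 'notes': 'Uses external review.'}
```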

For the quantitative data analysis, survey data were downloaded to Microsoft Excel 2016. Descriptive statistics were used to identify frequencies and percentages of responses to closed questions in relation to characteristics of funders, assessment criteria and the types of peer review processes engaged in. Due to the small sample, it was not considered valid to conduct further analyses, such as regression techniques or comparative analyses, to explore factors associated with the adoption of particular peer review methods used by funders (e.g., inter-country differences).
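For illustration only, the tabulation described above can be sketched in a few lines of pandas (the authors used Excel; the column names and data here are invented):

```python
# Illustrative sketch, not the authors' workbook: computing the
# frequency-and-percentage summaries reported for closed survey questions.
import pandas as pd

responses = pd.DataFrame({
    "sector": ["charity", "charity", "government", "charity", "government"],
    "external_peer_review": ["yes", "yes", "yes", "no", "yes"],
})

for column in responses.columns:
    counts = responses[column].value_counts()             # frequency of each answer
    percentages = (100 * counts / counts.sum()).round(1)  # percentage of responses
    summary = pd.DataFrame({"n": counts, "%": percentages})
    print(f"\n{column}\n{summary}")
```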

For the qualitative analysis, redacted PDFs of survey responses were uploaded to NVivo 12. Free-text responses were subjected to inductive thematic analysis in NVivo 12 using the six-step framework [38]. In the initial stages, KM read all the survey responses from all respondents to allow familiarisation with the responses (step 1). Where there were limited responses, a discussion was held with members of the research team to reach consensus about whether to exclude or include the response. Open text was then coded into simple words or phrases that described the topic of the sentence or word (e.g., “bias”, “transparency”; step 2). These initial codes were refined and grouped together to form themes and subthemes (step 3). These initial themes were then reviewed and discussed with the team, and regrouped, refined and defined through an iterative process (steps 4 and 5). The coding process was inductive, as no prior framework was applied. However, the authors were mindful that the data came from pre-defined survey questions; they therefore sought a deeper interpretation of the data and did not simply group the codes under the question headings. The COREQ guidelines were adhered to in reporting the qualitative data as a quality check [39].
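The coding itself was a human, interpretive process carried out in NVivo, not an algorithm. Purely to illustrate the shape of its output, the toy structure below shows free-text excerpts tagged with initial codes (step 2) grouped under higher-level themes (step 3); the excerpts are drawn from quotes in this paper, but the data structure is an assumption:

```python
# Illustrative only: the shape of the output of inductive thematic coding,
# not the analysis itself.
codes_to_excerpts = {
    "transparency": ["The use of a set process allows for more transparency..."],
    "bias": ["Scoring, even when supported by criteria, can be very personal."],
    "burden": ["It is a lot of work for the applicants..."],
}

themes = {
    "Values underpinning decision-making processes": ["transparency"],
    "Challenges of the decision-making process": ["bias", "burden"],
}

for theme, codes in themes.items():
    print(theme)
    for code in codes:
        for excerpt in codes_to_excerpts[code]:
            print(f"  [{code}] {excerpt}")
```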

Results

Respondents

A total of 35 responses were received from respondents in 24 different funding organisations (see Fig 1). For six funding organisations, more than one response was submitted; these pertained to different research programmes. Multiple entries for two research programmes were merged: the most complete entry was kept, any blank responses were filled using the other entry, and open responses were combined. No conflicts in open responses were observed; conflicts in closed questions were resolved through discussion and by checking the funder website. An additional two responses were excluded because the respondents indicated that they were not the appropriate person to complete the survey and so may not have provided a true reflection of the organisation’s processes. For the quantitative data, 31 responses (from 23 funding organisations) were therefore analysed. For the qualitative data, three additional responses were excluded from analysis as no open questions had been completed. Not all respondents provided answers to all questions. Respondents completed the survey in an average of 33 minutes (SD = 18 minutes).

Fig 1. Diagram to show response rates.


Quality = the respondent identified as not being the correct person to complete the survey and did not provide complete responses. Duplicates = there were multiple entries for a research programme; these were merged so that there was only one entry per research programme. Funding organisations could provide more than one response (for different research programmes).

The respondents represented funding organisations in a broad range of health areas, including ageing, neurodegenerative diseases, cancer, diabetes, meningitis, health technology, HIV and AIDS, heart disease and stroke. They covered basic science through to applied clinical research as well as health service delivery, and included disease-specific programmes, public health and global health.

The majority of responses (18/31, 58%) were for research programmes of funding organisations in the charity sector, with research councils and other government-funded organisations also represented (13/31, 42%; see also S1 Dataset). The majority of responses were from funding organisations based in the UK (23/31, 74%), with smaller numbers from Europe (6/31, 19%) and Australia (2/31, 6%). The size of the funding organisations ranged from 0–9 staff for some charities, through 25–99 staff for some government-funded organisations, to over 250 staff. The average amount of funding given per award ranged from £25,000 to £4.5 million, the average number of applications received per round ranged from 10 to 2,500 (median = 42 applications/round), and success rates of the funding programmes ranged from 3–100% (median = 20%).

Peer review data

Four key themes were extracted from the qualitative data, and for narrative purposes the findings from the quantitative and qualitative data are integrated and described under these themes. Quotes are reproduced verbatim, although some words have been redacted to maintain confidentiality. Participant number and funding source are provided to give context for the quotes.

Theme 1. “Current landscape”: Typical decision-making processes

This theme referred to the decision-making processes that were typically used (see S1 Dataset). The data show that nearly all research programmes used some form of triage (24/29, 83%), external peer review (28/29, 97%) and face-to-face committee meetings (25/29, 86%; see Fig 2). However, these were not the only processes used (and in many cases they were used alongside variations on these processes; see Theme 4). The use of these processes in the typical review pathway was seen as essential to support decision-making by reducing the volume of applications, encouraging discussion and engaging experts.

Fig 2. A typical decision-making pathway and percentage of respondents that use the process.


Total number of survey responses = 29 (two entries were blank so not included in analysis).

The face to face funding committee meetings enable the opportunities for discussion where a decision is not clear cut. P33, Government.

The process involves external experts in the field which should ensure the funding organisation takes into accounts the latest development. P5, Charity.

As we cover all areas and types of [disease-type] research, we need external peer review to ensure expert review of all applications. P9, Charity.

Within these processes, respondents also discussed the importance of feedback to applicants and the right to reply. Respondents indicated that scores as well as written comments by peer reviewers were provided to applicants in order to help strengthen the application for future rounds or submissions. However, the challenges of providing feedback were also acknowledged.

Applicants have feedback at several points about how to strengthen their proposal, which assists with deliverability and start up of projects. P22, Government.

Reasoning for not-funding is challenging to communicate and is not always recognised by applicants. P24, Charity.

The most common scoring system used to aid decision-making was numeric (27/29, 93%), although four respondents qualified that the numeric scores were also accompanied by written qualitative descriptions for clarity. Scoring scales ranged from 4-point to 20-point. Scores were given by external peer reviewers and/or committee members (28/29, 97%), and scoring systems were not always the same for these two groups; for example, one respondent reported that external peer reviewers used a 1–6 scale while the committee used a 1–10 scale. Organisations that used committee meetings used mean scores, median scores, consensus or voting to make a final decision, and often used a combination of these methods.
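As a hedged sketch of the aggregation methods respondents reported (mean, median and voting; consensus discussion has no simple computational analogue), the scores, scale and majority threshold below are invented for illustration:

```python
# Illustrative only: combining committee members' numeric scores by mean
# or median, and a simple fund/do-not-fund majority vote.
from statistics import mean, median

committee_scores = [6, 7, 4, 8, 7]  # e.g., five members scoring on a 1-10 scale

print(mean(committee_scores))    # 6.4 - mean score
print(median(committee_scores))  # 7   - median score

votes = ["fund", "fund", "reject", "fund", "reject"]
decision = "fund" if votes.count("fund") > len(votes) / 2 else "reject"
print(decision)                  # fund - simple majority vote
```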

Theme 2. Values underpinning decision-making processes

This theme relates to the values that funders see either as integral to current practice or as important to engage with in future practice. All funders, and research programmes within funders, had different strategic objectives and underlying aims, and these were reflected in the assessment criteria (see Fig 3 and S1 Dataset).

Fig 3. Assessment criteria based on mean scores for importance.


1 = not important and 4 = very important; * denotes criteria with a median score of 4; total number of survey responses = 31.

In this program, pilot projects are funded. Therefore, there is a high risk and possible high gain connected to the funded research. Other programs not involving pilot projects have moderate or low risk and [are] more focussed on timeliness and the highest societal impact. P29, Charity.

Additional criteria that had not been pre-listed in the survey were also reported. These tended to fall under two broad areas: the team (focusing on track record and career development, as well as multidisciplinary and international collaborations) and the project (ambition of the project, intellectual property and commercial strategy, and methodological development).

Respondents indicated that their organisations used these decision-making processes because they demonstrated positive values of decision-making, including transparency, fairness, robustness and rigour.

The use of a set process allows for more transparency and fairness, all applications being treated according to the same criteria. P5, Charity.

Respondents also commented on the need to have different stakeholder perspectives (academic, clinician, patient) reviewing the applications. For some organisations this was standard practice, whereas for others it was something they wanted to adopt in the future. Regardless of whether it was already integral to current practice, ensuring different perspectives was seen as an important feature of review practice.

External peer review and panel assessment by academics, research users and patients/public provides a range of informed perspectives. P10, Government.

We also have review by people affected by [disease population] to ensure the research is relevant to our beneficiaries. P8, Charity.

Public review is currently still outside the decision-making, but will be brought in over the next years. P20, Government.

Theme 3. Challenges of the decision-making process

This theme relates to the cross-cutting challenges in decision-making approaches that funders face. The main challenges reported were bias and burden, with 89% of respondents (25/28) commenting on these issues. These comments described bias and burden as disadvantages of the decision-making process, as well as ways in which they could be, or are being, managed. Challenges were mainly drawn from the open survey questions asking about the benefits and drawbacks of decision-making approaches.

Bias was described in relation to peer review generally, as part of external peer review, committee review, and in scoring. Respondents acknowledged that processes had potential for bias towards particular people, groups or projects.

There is a propensity for low risk projects to be funded as the project eliciting wide opinions and scores can be scored lower when averaged. P23, Charity.

It was also recognised that there could be inconsistency between peer reviewers’ feedback.

Peer reviewers can sometimes vary in opinion about the same application—that's why we prefer 3 reviews per application. P2, Charity.

Scoring, even when supported by criteria, can be very personal. Some scorers are more generous than others (a 3 may mean a 4.5 from somebody else and vice versa). P5, Charity.

When this happened, comments suggested that this could result in reviews being ignored, “too much variability in quality of reviews meaning they were generally ignored” (P11, Charity), or that “Opinions [of other reviewers] can be swayed by strong opinions” (P23, Charity). Solutions to reduce this bias included stopping certain processes, engaging in better training or education for reviewers, recruiting strong Chairs, undertaking double-blind or open peer review, and using both scores and written comments as “the scores do not always reflect the comments” (P15, Government).

Burden was discussed in relation to monetary cost, time and workload for all stakeholders.

It is a lot of work for the applicants, the external peer reviewers, the secretariat staff, and the committee members. P22, Government.

In terms of workload, some respondents commented on the difficulties of obtaining and/or securing potential peer reviewers, especially when trying to find people with the relevant expertise. The challenge of finding peer reviewers is further complicated by the need to obtain an appropriate number of reviewers per application while managing conflicts of interest. Face-to-face committee meetings were also seen as resource intensive, with an additional cost burden to the funder. The fact that face-to-face meetings took a number of days, had high paperwork loads, and involved multiple committee members meant that they were difficult to arrange and prepare for. These time burdens were undesirable, prolonging final decisions and detracting from the research programme.

Face to face meetings with multiple experts are hard to arrange. Finding a panel with the right expertise and no conflicts of interest is extremely difficult. P26, Charity.

Peer review process takes too long and the associated timelines make the funding programme less attractive to small-to-medium enterprises. P28, Government.

There was also discussion of the number of stages and the types of application used, in relation to burden. Some regarded one-stage processes as more efficient in terms of time compared to two-stage processes. The two-stage process was more debated, with some respondents indicating that it increases burden in terms of time and effort for all stakeholders, whilst other respondents felt that it may reduce burden.

Only applicants who pass shortlisting stage have to do all the work involved in a stage 2 application. P22, Government.

Seeks to optimise time taken for teams to develop applications and in reviewer time by using two stages. P19, Government.

These differences of opinion indicate why the number of stages and decision-making points varies across funders. This variability was further complicated by external limitations and the regulations that organisations have to comply with.

As part of [an organisation] we are obligated to ensure we follow certain principles and practices and we are audited accordingly. P3, Charity.

In addition, one participant reflected on the fact that funders are often reliant on voluntary contributions from external peer reviewers and committee members.

We are very reliant on other people's time—to provide reviews and to participate on the panel. When they delay sending the promised review, or if people cancel attendance at the Panel, you can't moan as they are all volunteers. P2, Charity.

Theme 4. “Emerging Landscape”: Not one size fits all

The final theme describes the emerging landscape in decision-making processes and demonstrates that there is no ‘one size fits all’ approach to funding allocation decision-making. The key concept was the need for funders to be flexible in their funding approaches and to be more supportive of applicants and reviewers. Funders acknowledged that not all approaches work in all research areas, and that there is a need to foster more collaboration and flexible thinking to uphold core values, fund the right research and maximise reviewer contributions.

We are always open to different ways of working and recognise that there is not one-size fits all approach to decision-making. For example, sandpits work for some research areas but not all. P15, Government.

Respondents indicated that funders taking a more flexible and supportive role would help to ensure that applications were tailored to funders’ strategic aims and requirements, and may enhance opportunities for success.

Potential for new ways to iterate with teams where there is a good question but significant issues to resolve. P19, Government.

As an organisation, we want to encourage greater collaboration—this could be relevant to us in the future when we plan to commission research rather than use a response-led approach. P7, Charity.

Respondents also highlighted that more support could be offered to reviewers, both in terms of training and incentives, and that doing so may make reviewing grant applications a more attractive prospect.

Comments reflected that many funders already engage in continuous improvement of decision-making, whilst others were aware of work done elsewhere. It is important to note, however, that not all respondents agreed that their current decision-making approach needed to be improved: 19 of 28 (68%) did.

In support of ‘not one size fits all’, the data showed that funders are using variations on typical processes (see Fig 4). These encompassed variations to external peer review, committee meetings, applicant input or the decision-stages within the decision-making pathway. However, respondents also commented that they would like to know more about variations and innovative approaches. As described in Theme 1, those who used variations on typical processes usually did so in addition to one or more of the typical processes. Those who did not use face-to-face committee meetings used either virtual committee meetings or a sandpit approach (see Fig 4). However, these approaches also have pros and cons.

Fig 4. Types of different processes used.


Variations to typical processes and approaches that had some uptake from funding organisations, and a Venn diagram showing the percentage of research programmes that use typical processes, variations to typical processes and innovative approaches (from a total survey sample of 29). * This process attracted interest but no uptake (within the survey sample).

No external peer review, only assessment by panel within the Workshop, so novelty (overlap with existing work) may be harder to judge within short-frame of the idea generation and 'pitches'. P25, Charity.

Discussion

The aim of this study was to survey UK and international funding organisations to better understand current decision-making practices employed for the allocation of health research funding. In line with previous work [3,8], the typical pathway for the allocation of funding still comprises triage, external peer review and face-to-face committee meetings. However, there are many variations on this typical pathway, with nuances for different research programmes and/or funders (e.g., different numbers of decision-points, proportionate review). It was clear that funders engage in these processes because they believe them to demonstrate positive values that are important to stakeholders, such as transparency, quality and patient benefit, as well as providing a framework to objectively assess applications against criteria that match the funder’s strategic goals. Indeed, respondents were keen to emphasise these values as the main benefits of the decision-making processes used. This may highlight the pressure on funders, particularly public funders, to be accountable and to demonstrate the added value and benefit to society of the funding decisions they make. However, although our findings indicate that funders believe their processes are transparent and consider transparency important, a recent report surveying grant applicants showed that transparency was the aspect of the funding pathway with which applicants were least satisfied [4].

In line with previous work, the findings showed that bias and burden are considered the biggest challenges of the typical decision-making pathway (e.g., [3,4,8,10,11,19]). However, the survey highlighted that funders are aware of these challenges and that some are taking additional steps to try to overcome them. Methods for doing so include increasing the diversity of reviewers by including patients and members of the public as external peer reviewers and on committee panels, and using teleconferencing rather than face-to-face meetings to increase the potential reviewer pool and cut down on meeting time and costs. However, these solutions come with their own challenges. For example, health research covers a broad range of topics, so what counts as relevant expertise will differ across applications and funders. In practice, funding organisations cannot feasibly cover all areas for each application (especially when trying to reduce burden) and therefore need to balance the number of reviewers against which areas of expertise take priority.

The survey also highlighted that funding organisations have very limited control over many aspects of the peer review system, and are reliant on the contributions of reviewers and committee members. Such limitations can affect the amount of time required for a decision to be made, and it is perhaps important for all stakeholders to keep this in mind when considering the decision-making pathway.

Whilst it was clear that there is no ‘one size fits all’ for peer review practice across different funding organisations and research programmes, the nuances also demonstrate differing opinions on what is considered best practice. For example, there was debate over whether one- or two-stage processes increase or reduce burden, and there was large variability in scoring scales. These differences also reflect the variability in assessment criteria and strategic goals across the funding organisations. In addition, there are practices that funders acknowledge as valuable, such as providing written feedback to applicants and allowing rebuttal, but the issues of burden and limited resources constrain some funders from doing this. Indeed, Langfeldt [40] found that funding decisions by committees were most influenced by budget restrictions and the type of scoring method used, and that it is the variability in these processes that has most influence on funding decisions [40].

There was acknowledgement that funders need to be more flexible in their approaches to decision-making, providing more support to applicants and encouraging greater collaboration both among applicants and between applicants and funders. Previous work has suggested that funders provide poor support for applicants [41], and the burden associated with grant funding falls heavily on applicants as well as on reviewers and funding organisation staff [3,42,43]. Although it is appropriate to challenge applicants in order to ensure rigorous and reliable research, funders taking a more flexible and supportive role, fostering collaboration and working more closely with applicants may enhance quality, innovation and other values important to funders. This may increase stakeholder (e.g., funder, applicant) burden in the short term but may, over the longer term, result in better quality applications, project outcomes and impact. This has also recently been recommended by the AMRC [44]. This is an area for consideration and evaluation, and evidence is needed to determine whether this approach will work for funders and researchers and where additional burden may lie.

Many funders seem to be actively engaged in continuous improvement of their decision-making practices. However, the many variations on the typical pathway indicate a preference for funders to make small adaptations to current systems rather than employ innovative approaches such as sandpits and lottery systems. Research has shown conflicting results across different approaches to decision-making [30], which may make funders wary of trying new processes. In addition, there are still very few good quality empirical studies that have evaluated specific decision-making approaches, and funders need this evidence base to inform their practice. Moreover, the findings from this survey indicated not only a need for evidence but also a need for more information about innovative approaches. Funders may be more interested in innovative practices once more information (what the process is and how to implement it) and evidence are available. This may also vary across funding organisations, as implementing a new peer review approach could place significant strain on a funder that is also trying to maintain a high standard of practice.

This survey has several limitations. The results reported are based on a relatively small number of responses (N = 31), which are open to bias as respondents were self-selected, so caution must be applied in interpretation and generalisation [45]. However, our target sample size was 30 responses, and those received represented a range of different organisations and countries. Note also that five respondents were recruited via e-promotion and promotion at conferences, demonstrating the benefit of multi-modal recruitment routes [46]. Multi-modal recruitment methods are recommended for researchers looking to recruit international or multidisciplinary samples [46]. We also sent reminders and stated the average time it would take to complete the survey, strategies that are suggested to improve online survey response rates [45]. Personalising the email and appealing to a person’s egotistic motivation have also been shown to improve online response rates [45,47]. For the current survey, recruitment may have been increased through direct emails to chairs or programme directors of research programmes rather than to generic email addresses or online forms. This may also have helped to ensure that the most appropriate people completed the survey, potentially increasing the quality of responses and reducing the need for exclusions.

The survey response rate is comparable to that of the ESF survey [31], which also received 30 responses, from organisations in Europe and one in the USA. Our focus differed from the ESF survey [31]: we were mainly interested in the different types of peer review process that funders used, in order to better understand the current landscape of peer review, rather than in questions on the quality and management of peer review and peer reviewers more generally. Thus, our data contribute to a better understanding of the decision-making processes, and the nuances of these processes, employed in current practice within health research funding organisations.

This study also provides limited quantitative data, and there was potential for researcher bias during data extraction. The researchers were mindful of their preconceptions about peer review practice (e.g., the known challenges of bias and burden) and of which peer review practices had been specifically asked about in the survey. Due to the nature of the survey questions, there was some grouping of codes under similar headings; however, this was not a formal predefined framework: the four main themes did not map onto individual questions, were discussed by the team, and were triangulated with the quantitative data. As such, we are confident that the results have been interpreted fairly.

In conclusion, given the emergence of innovative decision-making approaches, the aim of this study was to better understand current decision-making practices for the allocation of health research funding from the perspective of UK and international funders, in order to determine which approaches were being utilised and why. The key findings from this survey show that similar decision-making processes tend to be used by all funders, and that there are many nuances and challenges to these processes. Funders engage in these processes because they are considered the optimum way to make funding allocation decisions and to demonstrate good practice. Funders continually strive for improvements in decision-making practice, and recognise the need to develop more flexible and supportive approaches that will facilitate decision-making (by reducing bias and burden) whilst maintaining key positive values such as transparency, fairness and quality. However, the findings indicate a preference to adapt current systems rather than use innovative processes. This may be due to the lack of evidence available and/or the difficulties that trialling and testing new practices may cause. Thus, it is clear that more empirical studies are needed to evaluate the effectiveness of different peer review approaches, in order to provide funders with a sound evidence base about which practices can be implemented, and how, to help inform decision-making in research fund allocation.

Supporting information

S1 File. The survey sent to funding organisations.

Some questions would only be shown depending on prior answers.

(DOCX)

S1 Dataset. A subset of anonymised data received from the survey.

To maintain confidentiality the dataset is split into separate sections (organisation demographics, assessment criteria and current peer review practice). All sections have been anonymised and randomised (so row 1 is not necessarily the same funder across all tabs).

(XLSX)

Acknowledgments

We would like to thank Helen Payne for her advice on the project, members of the NIHR staff who piloted the survey, all of those who helped to disseminate the survey, all of the respondents of the survey and to those who commented on the final manuscript.

Data Availability

All relevant quantitative data are within the manuscript and its Supporting Information files.

Funding Statement

This study is supported by the National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC) through its Research on Research programme in the form of salaries for all authors. The views and opinions expressed are those of the authors and do not necessarily reflect those of the NIHR or Department of Health and Social Care.

References

Decision Letter 0

Shelina Visram

27 May 2020

PONE-D-20-03358

Decision-making approaches used by UK and international health funding organisations for allocating research funds: A survey of current practice

PLOS ONE

Dear Dr. Meadmore,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 11 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Shelina Visram, PhD, MPH, BA

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please include your tables as part of your main manuscript and remove the individual files. Please note that supplementary tables should remain as separate "supporting information" files


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Abstract – needs to refer more clearly to which types of funders were participants; for example, ‘other international countries’ is a little vague, can you be more explicit? The way in which the funders are referred to in the abstract suggests that they are relatively homogenous in terms of the types of health research that are funded – can you be more clear if the focus was, for example, biomedical health research, research on wider determinants of health? or both and more? I think that the abstract could also more clearly demonstrate the value and need for this research – which is this important to know? One of the conclusions is about a solid evidence base but could have been clearer how this came from the results. Also, I wasn’t clear what “reviewer diversity that were integral to current practice or important for future practice” meant – can you clarify?

Need for /value of this research needs to be more clearly demonstrated at the start of the piece. It is mentioned that difficult decisions need to be made – can you be more clear what these are? decisions between different types of health research? prevention vs treatment? biomedical vs wider determinants research? There are fundamental differences in ontology/epistemology within the field of health research that could for example at least be alluded to? Can you be more clear about some of the fundamental difficulties in the health field in particular? There is an assertion that there have been wider developments that have happened, mentioning “public contribution and new data legislations” – can you explain more clearly why these are significant, give some examples? And you indicate that peer review has undergone limited change but then go on to give some examples of change – this seems a little inconsistent. Can you be clearer what the need and value of this research is?

Methods – can you be more clear what you mean by wider reach – wider reach than what? (do you just mean a wide reach and where? in the UK, globally? Also, can you explain what you mean by phenomenological approach? You refer to the survey questions being developed by a variety of stakeholders but these seem only to be NIHR / NIHR team members, which does not seem that various; I would recommend being more clear about this and how the survey was developed – how was previous research/literature used to develop the survey questions? What is a ‘think aloud pilot’? It would be useful earlier in the methods to indicate who the intended respondents of the survey were. Can you be more clear what you mean by data being screened for quality? What did this involve? Are you suggesting that some data was removed on this basis? How did wider literature on the topic inform the data analysis?

Results / discussion – some interesting findings are highlighted and points discussed. At times, there is use of terminology such as “right questions, teams and research proposals are funded” which suggests that there is a ‘right’ answer to what should be funded. It is not immediately clear that it is always the case that there is a ‘right’ answer – as decision-making involves making value-based choices between alternative, particularly in the health field where, for example, decisions might be made between biomedical research and health-related social science research – what would be the ‘right’ decision here? I think that some of the discussion needs to be more nuanced and reflective of this type of issue.

Earlier on the piece there is mention of a “broad range of health areas, including ageing, neurodegenerative diseases, cancer, diabetes, meningitis, health technology, HIV and AIDS, heart disease and stroke. They covered basic science through to applied clinical research as well as health service delivery, and included disease-specific programmes, public health and global health.” This breadth and complexity of the health field needs to be recognised more in the discussion I think; with some recognition that this relates also to your points about ‘expertise’ - if health funding covers all the above, it is of course challenging to cover all these areas? What other aspects of ‘expertise’ are also important? What about contextual knowledge – which can be particularly important, for example, in evaluative research/implementation research where context can shape how interventions are received/adopted/rolled out/scaled up? Recognition of some of these issues would show more reflection/consideration of how the data in the study is important?

Reviewer #2: Thank you for the opportunity to review this manuscript, which is well-written and has the potential to contribute to the literature on decision-making by research funders. I have a number of suggested revisions that I believe will improve the manuscript:

ABSTRACT: I find the use of the terms ‘current decision-making processes’ and ‘alternatives’ confusing here. If some are using these alternatives then surely they are also part of the current processes? The authors seem to be making some assumptions about the ‘normal’ way of doing things and more ‘unusual’ approaches. If peer review is seen as the ‘norm’ and other approaches are somehow more ‘innovative’ then this needs to be made clear in the abstract.

INTRODUCTION: The final sentence of the first paragraph (lines 57-60) needs to be supported by evidence/relevant citations.

METHODS: The survey development and piloting process appears to have been rigorous and is well-described. However, I would have liked to see the ‘think aloud’ pilot explained further, ideally with reference to the literature.

A minor point but the sentence at lines 144-146 (and the following sentence) could be re-worded to avoid using the words ‘organisations’ and ‘targeted’ multiple times.

76 health research organisations were identified – across how many countries? It is not entirely clear who potential respondents were, e.g. where emails were person-specific, were these targeted towards chairs of funding panels, administrative support staff, etc?

Lines 174-175: how exactly were multiple responses from the same organisation ‘merged’? Were any in conflict?

The description of the analytical process is confusing, particularly the suggestion that inconsistencies or limited responses were excluded. What exactly does this mean? It would be helpful if the authors could say more about this because it leaves the reader wondering if they excluded responses that did not fit with their a priori assumptions. How does this fit with the phenomenological approach (which is mentioned at lines 113-115) and thematic analysis?

Figure 1 – providing a textbook illustration of the thematic analysis process – is not necessary.

RESULTS: Related to the point above about excluding certain responses, I am not clear why only 31 of 35 responses were included (resulting in the exclusion of 1 of 24 organisations). Figure 2 is helpful but does not make clear what exclusion on the basis of quality means. It is however very interesting that the non-targeted recruitment resulted in few additional responses; this is an important learning point for other researchers that I think the authors should pick up on in their discussion.

Much of the information in the sub-section on respondent characteristics might be better presented in the form of a table.

Using n for number of responses is confusing where this actually relates to the number of organisations, e.g. lines 221-223. This also applies when reporting qualitative findings; I am really not clear what n=28 means at line 230. Presumably this denotes the number of usable survey responses that included qualitative data, but this is not made clear in the text.

I was surprised not to see quotes attributed by country or any discussion of inter-country differences in the survey findings.

Redaction of the quote at lines 372-373 has obscured the meaning. Please add some descriptive text to indicate what [name] relates to, i.e. is it an organisation, a process, etc?

DISCUSSION: The discussion is good, with clear recommendations. As stated above, I would like to see some discussion on any inter-country differences and also lessons learned for researchers conducting similar studies.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No


PLoS One. 2020 Nov 5;15(11):e0239757. doi: 10.1371/journal.pone.0239757.r002

Author response to Decision Letter 0


3 Jul 2020

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

Response: We have addressed reviewer 1’s comments and hope that the revisions have provided further clarity to the conclusions drawn.

________________________________________

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: N/A

Response: We have addressed the reviewers’ comments and hope that the revisions have provided further clarity to the methods used throughout the study.

________________________________________

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: No

Response: We have included all quantitative data to supplement the results reported (see S2 datasets). This has been split into different sections in order to ensure that we maintain confidentiality and adhere to our ethical protocol.

________________________________________

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Response: We are very pleased that the reviewers found the manuscript to be written in an intelligible way.

________________________________________

5. Review Comments to the Author

Reviewer #1:

1.1. Abstract – needs to refer more clearly to which types of funders were participants; for example, ‘other international countries’ is a little vague, can you be more explicit?

Response: This was originally written so as not to specifically name individual countries, in order to maintain confidentiality. However, on reflection, we agree with the reviewer that the current wording is vague and potentially misleading. We feel that we can be more explicit here without breaking confidentiality, as this information is not linked to any other identifying information. This has now been changed.

1.2. The way in which the funders are referred to in the abstract suggests that they are relatively homogenous in terms of the types of health research that are funded – can you be more clear if the focus was, for example, biomedical health research, research on wider determinants of health? or both and more?

Response: The focus was left broad as we wanted any funder related to the health sector to be able to respond and so the remit encompassed biomedical and wider determinants of health too. We have made this clearer in the abstract: “An online survey (active March-April 2019) was distributed by email to UK and international health and health-related funding organisations (e.g., biomedical and social)”

1.3. I think that the abstract could also more clearly demonstrate the value and need for this research – why is this important to know?

Response: Thank you for this comment, a sentence has been added to the objectives: “Innovations in decision-making practice for allocation of funds in health research are emerging; however, it is not clear to what extent these are used.”

1.4. One of the conclusions is about a solid evidence base but could have been clearer how this came from the results.

Response: The conclusion was drawn from the findings that funders are not averse to innovative or different approaches to decision making but that they wanted more information on these. We inferred from this that an evidence base was required. We have now revised this sentence and removed this phrase: “Funders indicated that they wanted more information and empirical evidence on innovative approaches which would help to inform decision-making in research fund allocation”

1.5. Also, I wasn’t clear what “reviewer diversity that were integral to current practice or important for future practice” meant – can you clarify?

Response: This sentence referred to the values that underlie why funders engage in particular practices. We agree that reviewer diversity in the current sentence was confusing and so this has been re-worded: “Key values underpinned decision-making processes. These included transparency and gaining perspectives from reviewers with different expertise (e.g., scientific, patient and public)”

1.6. Need for /value of this research needs to be more clearly demonstrated at the start of the piece.

Response: We have added some additional sentences to demonstrate need of this research to the first paragraph of the introduction: “The challenges to funding decision-making are well established and numerous, and although innovations in decision-making are emerging, it is not clear to what extent these are utilised by funders.”

1.7. It is mentioned that difficult decisions need to be made – can you be more clear what these are? decisions between different types of health research? prevention vs treatment? biomedical vs wider determinants research? There are fundamental differences in ontology/epistemology within the field of health research that could for example at least be alluded to? Can you be more clear about some of the fundamental difficulties in the health field in particular?

Response: We have added some examples specific to health research decisions to the first paragraph of the introduction: “For example, deciding which health areas have priority, and whether research may lead to changes in practice and better health outcomes (be that patient, economic or social benefit).”

1.8. There is an assertion that there have been wider developments that have happened, mentioning “public contribution and new data legislations” – can you explain more clearly why these are significant, give some examples? And you indicate that peer review has undergone limited change but then go on to give some examples of change – this seems a little inconsistent. Can you be clearer what the need and value of this research is?

Response: We have revised the final two paragraphs of the introduction to provide a clearer description of the value of the research. We know that there are challenges to decision-making, including bias towards innovation or early career researchers, and although innovations in decision-making are emerging, these are limited in terms of empirical evidence. In addition, it is not clear to what extent these are utilised by funders. Identifying what approaches are utilised in current practice in decision-making and whether they include innovative approaches, may provide better understanding about decision-making and why funders are engaged (or not) in exploring mechanisms to change/enhance funding processes to address known challenges.

1.9. Methods – can you be more clear what you mean by wider reach – wider reach than what? (Do you just mean a wide reach, and where? In the UK, globally?)

Response: By wider reach, we meant that an online survey compared to a paper survey might be more likely to be picked up globally and by organisations that we did not individually target. We have amended this to read national and global reach.

1.10. Also, can you explain what you mean by a phenomenological approach?

Response: A phenomenological approach is an approach used in qualitative research that focuses on describing something by exploring it from the perspective of those that have experienced it – in this case, describing decision processes from the perspectives of the research programmes in organisations funding health and health related research.

1.11. You refer to the survey questions being developed by a variety of stakeholders but these seem only to be NIHR / NIHR team members, which does not seem that varied; I would recommend being more clear about this and how the survey was developed – how was previous research/literature used to develop the survey questions?

Response: We have removed “variety of stakeholders” so that there is no ambiguity about developing the survey with anybody external to NIHR. We have also revised the text so that it is clear how we used previous research to develop the survey questions.

1.12. What is a ‘think aloud pilot’?

Response: By think aloud pilot, we were referring to a form of verbal feedback. A researcher sat with the staff member while they verbally described their thoughts about the questionnaire (the questions, answer choices, layout, wording etc) as they went through it. The researcher noted thoughts, comments and questions and engaged in dialogue with the member of staff to determine validity and ease of use of the questionnaire. To reduce confusion, we have removed this term and changed it to verbal feedback.

1.13. It would be useful earlier in the methods to indicate who the intended respondents of the survey were.

Response: The intended respondents have been included in the first paragraph of the methods: “This study used a survey design to gather quantitative and qualitative information about decision-making practices used by organisations that fund health and health-related research”

1.14. Can you be more clear what you mean by data being screened for quality? What did this involve? Are you suggesting that some data was removed on this basis? How did wider literature on the topic inform the data analysis?

Response: For two of the responses, many answers were “don’t know” or “not sure”, or were left unanswered. In addition, for these two responses, the respondents also left comments to say that they did not think that they were the best person to complete the survey. As such, it was decided that the survey responses for these two respondents would not be analysed, as they may not provide a true reflection of the organisations’ processes. This has been made clearer in the results.

1.15. Results / discussion – some interesting findings are highlighted and points discussed.

Response: Thank you for this comment.

1.16. At times, there is use of terminology such as “right questions, teams and research proposals are funded” which suggests that there is a ‘right’ answer to what should be funded. It is not immediately clear that it is always the case that there is a ‘right’ answer – as decision-making involves making value-based choices between alternatives, particularly in the health field where, for example, decisions might be made between biomedical research and health-related social science research – what would be the ‘right’ decision here? I think that some of the discussion needs to be more nuanced and reflective of this type of issue.

Response: Thank you for bringing this to our attention. We completely agree that there is no right decision and had not intended to convey this. We have amended this phrase in the results and discussion, and have included some additional text in the discussion to reflect this.

1.17. Earlier in the piece there is mention of a “broad range of health areas, including ageing, neurodegenerative diseases, cancer, diabetes, meningitis, health technology, HIV and AIDS, heart disease and stroke. They covered basic science through to applied clinical research as well as health service delivery, and included disease-specific programmes, public health and global health.” This breadth and complexity of the health field needs to be recognised more in the discussion, I think; with some recognition that this relates also to your points about ‘expertise’ – if health funding covers all the above, it is of course challenging to cover all these areas? What other aspects of ‘expertise’ are also important? What about contextual knowledge – which can be particularly important, for example, in evaluative research/implementation research where context can shape how interventions are received/adopted/rolled out/scaled up? Recognition of some of these issues would show more reflection/consideration of how the data in the study is important?

Response: Thank you for this comment. We have included some additional text in the discussion to show how funding organisations have to balance getting the appropriate expert reviewers that align with their strategic aims and goals and the scientific field of the application whilst also trying to contain burden and bias. As you suggest, different funding organisations have different priorities and resource which influences how this is done: “However, these solutions also come with their own challenges. For example, health research covers a broad range of topics and so what is considered expertise will also differ across applications and funders. In practice, funding organisations cannot feasibly cover all areas for each application (especially when trying to reduce burden) and therefore need to balance number of reviewers with which areas of expertise are priority.”

2.1 Reviewer #2: Thank you for the opportunity to review this manuscript, which is well-written and has the potential to contribute to the literature on decision-making by research funders. I have a number of suggested revisions that I believe will improve the manuscript:

Response: Thank you for your comments, they have been very helpful.

2.2 ABSTRACT: I find the use of the terms ‘current decision-making processes’ and ‘alternatives’ confusing here. If some are using these alternatives then surely they are also part of the current processes? The authors seem to be making some assumptions about the ‘normal’ way of doing things and more ‘unusual’ approaches. If peer review is seen as the ‘norm’ and other approaches are somehow more ‘innovative’ then this needs to be made clear in the abstract.

Response: Thank you for this comment. This was not our intention, but we can see how the terminology used here may be interpreted in this way. We have revised the abstract and the manuscript more generally to change ‘alternatives’ to ‘innovative approaches’ where appropriate, and have made it clear that triage, external peer review and committee meetings were the typical decision-making practice.

2.3. INTRODUCTION: The final sentence of the first paragraph (lines 57-60) needs to be supported by evidence/relevant citations.

Response: A reference has been added.

2.4 METHODS: The survey development and piloting process appears to have been rigorous and is well-described. However, I would have liked to see the ‘think aloud’ pilot explained further, ideally with reference to the literature.

Response: In line with comment 1.12, we have now removed this phrase from the manuscript.

2.5. A minor point but the sentence at lines 144-146 (and the following sentence) could be re-worded to avoid using the words ‘organisations’ and ‘targeted’ multiple times.

Response: We had not spotted this before and agree that it should be re-worded. This has been done.

2.6. 76 health research organisations were identified – across how many countries?

Response: We have included the number of countries in the text: “Purposive and snowball sampling was used to recruit respondents. In order to obtain a broad funder perspective, 76 health research organisations (109 emails) were contacted across 10 different countries”.

2.7. It is not entirely clear who potential respondents were, e.g. where emails were person-specific, were these targeted towards chairs of funding panels, administrative support staff, etc?

Response: More detail has been added about the targeted emails: “Where possible, emails were sent to named administrative staff for specific research programmes, but general enquiry email addresses and online forms were also used.”

2.8. Lines 174-175: how exactly were multiple responses from the same organisation ‘merged’? Were any in conflict?

Response: Multiple responses were merged so that there was only one entry for each research programme. The most complete entry was kept, and any blank responses were filled using the other entry. Open responses were combined, and closed-question conflicts (which were minimal; e.g., number of staff) were resolved through discussion and by checking the funder website. This has been added to the text.
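
Purely as an illustrative sketch of this merging rule – not code from the study – the logic might look as follows, assuming responses sit in a pandas DataFrame keyed by a hypothetical programme_id column, with open_cols naming the free-text columns (all identifiers here are assumptions):

    import pandas as pd

    def merge_programme_responses(df: pd.DataFrame, open_cols: list) -> pd.DataFrame:
        """Collapse multiple responses per research programme into one entry:
        keep the most complete response, fill its blanks from the others,
        and combine free-text (open) answers rather than overwriting them."""
        merged = []
        for _, group in df.groupby("programme_id"):
            # Order responses so the most complete one (fewest blanks) comes first.
            group = group.iloc[group.isna().sum(axis=1).argsort()]
            base = group.iloc[0].copy()
            for _, other in group.iloc[1:].iterrows():
                base = base.fillna(other)  # fill blank closed-question answers
                for col in open_cols:      # combine differing open responses
                    if pd.notna(other[col]) and other[col] != base[col]:
                        base[col] = f"{base[col]} | {other[col]}"
            merged.append(base)
        # Closed-question conflicts (e.g., number of staff) were resolved manually
        # in the study, through discussion and by checking the funder website.
        return pd.DataFrame(merged)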

2.9. The description of the analytical process is confusing, particularly the suggestion that inconsistencies or limited responses were excluded. What exactly does this mean? It would be helpful if the authors could say more about this because it leaves the reader wondering if they excluded responses that did not fit with their a priori assumptions.

Response: Thank you for bringing this to our attention. Consistency was the wrong word to use here. Data were merged only for multiple responses from the same research programme, so that there was only one data entry per research programme. For two of the responses, many answers were “don’t know” or “not sure”, or were left unanswered. In addition, the respondents for these two responses left comments to say that they did not think that they were the best person to complete the survey. As such, it was decided that the survey responses for these two respondents would not be analysed, as they may not provide a true reflection of the organisations’ processes. Three further responses were not used for the qualitative analysis, as no open questions had been answered. Qualitative data analysis was inductive and we had no a priori assumptions. We have provided more context around data exclusion.
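
Again purely as a hedged illustration – in the study this judgement was made by inspection and also drew on the respondents’ own comments – a screening rule of this kind could be expressed as below, where answer_cols and the 50% threshold are assumptions:

    import pandas as pd

    def flag_low_quality(df: pd.DataFrame, answer_cols: list, max_share: float = 0.5) -> pd.Series:
        """Flag responses where most answers are blank, 'don't know' or 'not sure'."""
        uninformative = df[answer_cols].isin(["don't know", "not sure"]) | df[answer_cols].isna()
        share = uninformative.mean(axis=1)  # proportion of uninformative answers per response
        return share > max_share            # True = candidate for exclusion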

2.10. How does this fit with the phenomenological approach (which is mentioned at lines 113-115) and thematic analysis?

Response: Data preparation occurred before the analysis was undertaken and only those respondents who did not answer any open questions were excluded from the qualitative analysis as there was no data to code.

2.11. Figure 1 – providing a textbook illustration of the thematic analysis process – is not necessary.

Response: We have removed this figure from the manuscript.

2.12. RESULTS: Related to the point above about excluding certain responses, I am not clear why only 31 of 35 responses were included (resulting in the exclusion of 1 of 24 organisations).

Response: More detail about the exclusion process has been added to the manuscript and figure caption.

2.13. Figure 2 is helpful but does not make clear what exclusion on the basis of quality means.

Response: More detail about the exclusion process has been added to the manuscript and figure caption.

2.14. It is however very interesting that the non-targeted recruitment resulted in few additional responses; this is an important learning point for other researchers that I think the authors should pick up on in their discussion.

Response: Thank you for this comment. We have included this as a learning point in the discussion: “Note also that five respondents were recruited via the use of e-promotion and promotion at conferences, demonstrating the benefit of multi-modal recruitment routes (46). Multi-modal recruitment methods are recommended to researchers looking to recruit international or multidisciplinary samples (46). We also sent reminders and stated the average time it would take to complete the survey. Both strategies are suggested to improve online survey response rates (45). Personalising the email and appealing to a person’s egotistic motivation are also shown to be strategies that improve online response rate (45,47). For the current survey, recruitment may have been increased through direct emails to chairs or programme directors of research programmes rather than to generic email addresses or online forms. This may have also helped to ensure that the most appropriate people completed the survey, potentially increasing quality of responses and reducing need for exclusion.”

2.15. Much of the information in the sub-section on respondent characteristics might be better presented in the form of a table.

Response: We did initially have this information in a table but found that written text worked better; as such, we have not reverted to a table format.

2.16. Using n for number of responses is confusing where this actually relates to the number of organisations, e.g. lines 221-223. This also applies when reporting qualitative findings; I am really not clear what n=28 means at line 230. Presumably this denotes the number of usable survey responses that included qualitative data, but this is not made clear in the text.

Response: We have removed n from this section and replaced it with the actual numbers. We have also removed n=28 and explained earlier in the text that qualitative analysis was conducted on 28 responses.

2.17. I was surprised not to see quotes attributed by country or any discussion of inter-country differences in the survey findings.

Response: We agree that inter-country differences would be really interesting to explore. However, the intention of the study was never to explore inter-country differences, but instead to describe what decision-making practices were being used more generally across countries. In addition, given the small number of responses, we did not think that comparisons were appropriate. In order to remain true to our original research questions, we have not conducted any further analysis on inter-country differences.

2.18. Redaction of the quote at lines 372-373 has obscured the meaning. Please add some descriptive text to indicate what [name] relates to, i.e. is it an organisation, a process, etc?

Response: Thank you for pointing this out. [name] referred to an organisation and so we have added this context to the quote.

2.19. DISCUSSION: The discussion is good, with clear recommendations. As stated above, I would like to see some discussion on any inter-country differences and also lessons learned for researchers conducting similar studies.

Response: As described in 2.17, we did not set out to make comparisons across countries and, due to the small number of responses, have decided not to conduct any further inter-country analysis. However, in line with comment 2.14, we have included some discussion of lessons learned for recruitment.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Shelina Visram

19 Aug 2020

PONE-D-20-03358R1

Decision-making approaches used by UK and international health funding organisations for allocating research funds: A survey of current practice

PLOS ONE

Dear Dr. Meadmore,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

ACADEMIC EDITOR: Thank you for making the revisions to this paper. R1 is happy that all previous comments have been addressed. Unfortunately the original R2 was not available and so a new reviewer was approached to look at the revised manuscript. They have suggested an additional minor change/clarification that should be very quick and easy to address, and should help to make the description of the study design more rigorous.

==============================

Please submit your revised manuscript by Oct 03 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Shelina Visram, PhD, MPH, BA

Academic Editor

PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: (No Response)

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: (No Response)

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: (No Response)

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: (No Response)

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: (No Response)

Reviewer #3: Thank you for the opportunity to review this paper. It is well written, methodologically sound and discusses an engaging issue well.

The comments to the author have been clearly addressed and result in a more rigorous paper.

Apologies for adding additional feedback, but one minor issue could be discussed in order to ensure full rigour. Your paper states that you took a mixed methods approach to data analysis, but then give only detail of the analysis for qualitative and quantitative strands as separate entities. There is no discussion of synthesis of the strands. If this was carried out, can it be detailed (e.g. was the analysis sequential or prioritised in any way)? If it was not, can this absence be justified in relation to your research question and design?

Thanks again - and apologies for the minor revisions.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Reviewer #3: No

PLoS One. 2020 Nov 5;15(11):e0239757. doi: 10.1371/journal.pone.0239757.r004

Author response to Decision Letter 1


7 Sep 2020

6. Review Comments to the Author

Reviewer #3: Thank you for the opportunity to review this paper. It is well written, methodologically sound and discusses an engaging issue well.

The comments to the author have been clearly addressed and result in a more rigorous paper.

Apologies for adding additional feedback, but one minor issue could be discussed in order to ensure full rigour. Your paper states that you took a mixed methods approach to data analysis, but then give only detail of the analysis for qualitative and quantitative strands as separate entities. There is no discussion of synthesis of the strands. If this was carried out, can it be detailed (e.g. was the analysis sequential or prioritised in any way)? If it was not, can this absence be justified in relation to your research question and design?

Response: Thank you for taking the time to review our manuscript and for your comment. We agree that this detail was missing and have added in some text to clarify how we approached the mixed methods analysis in the data analysis section of the methods: “A mixed methods approach to the analysis was taken in order to cross-validate findings and provide a fuller picture of the peer review landscape. Data were analysed separately but concurrently, and findings from each strand of data were triangulated to inform and explain patterns and interpretation. For example, qualitative data was used to expand and interpret quantitative findings and frequency data was used to confirm patterns in qualitative data.”

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 2

Shelina Visram

14 Sep 2020

Decision-making approaches used by UK and international health funding organisations for allocating research funds: A survey of current practice

PONE-D-20-03358R2

Dear Dr. Meadmore,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Shelina Visram, PhD, MPH, BA

Academic Editor

PLOS ONE


Acceptance letter

Shelina Visram

26 Oct 2020

PONE-D-20-03358R2

Decision-making approaches used by UK and international health funding organisations for allocating research funds: A survey of current practice

Dear Dr. Meadmore:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Shelina Visram

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. The survey sent to funding organisations.

    Some questions would only be shown depending on prior answers.

    (DOCX)

    S1 Dataset. A subset of anonymised data received from the survey.

    To maintain confidentiality the dataset is split into four sections (organisation demographics, assessment criteria and current peer review practice). All sections have been anonymised and randomised (so row 1 is not necessarily the same funder across all tabs).

    (XLSX)


    Data Availability Statement

    All relevant quantitative data are within the manuscript and its Supporting Information files.

