International Journal of General Medicine. 2010 Aug 30;3:225–230. doi: 10.2147/ijgm.s11117

The Delphi process: a solution for reviewing novel grant applications

Cath Holliday1, Monica Robotin1,2
PMCID: PMC2934605  PMID: 20830198

Abstract

Introduction:

Traditional scientific review processes are not well suited to evaluating the merits of research in situations where the available scientific evidence is limited or where review panels hold widely divergent opinions. This study tested whether a Delphi process is useful in grant selection.

Materials and method:

A Delphi process prioritized novel research proposals in pancreatic cancer. Five reviewers holding similar grants overseas ranked research applications by scientific merit, innovativeness, and level of risk.

Result:

The 10 best applications received were evaluated over three rounds of voting. In the first round of the Delphi process, scores ranged from 5.0 to 8.3. After the second round, the cumulative scores of the eight remaining applications ranged from 10.0 to 12.6. At the end of the third round, the final cumulative scores of the remaining six applications ranged from 13.6 to 18.2. The four highest ranking applications were recommended for funding, with agreement from reviewers.

Conclusion:

A modified Delphi process proved to be an efficient, transparent, and equitable method of reviewing novel grant applications in a specialized field of research, where no local expertise was available. This process may also be useful for other peer review processes, particularly where there is limited access to local experts.

Keywords: Delphi process, research grant selection, consensus

Introduction

The scientific review process traditionally involves a group decision-making method, where evaluations are mediated and advocated by spokespersons, and a judgment of quality and merit is made by the group through several prespecified steps. A group decision-making process has many advantages: it provides access to a large pool of expertise; member interaction can be a catalyst for debate, yielding new insights into a problem; group interactions can filter out individual idiosyncrasies; and a group decision may carry more weight than an individual one.1

Nevertheless, committee decision-making can also have disadvantages. There is often a tendency toward conformity, because group members may feel pressure (real or imagined) to agree with other panel members,1 and committee processes may be controlled by more dominant personalities. Members may be unwilling to take a position before all the facts are known, or they may be reluctant to change their view once they have stated it publicly. In addition, members may avoid publicly contradicting senior members of the panel, and concerns about losing face may preclude members from taking a public stance in a matter where results are uncertain.2

Given the variety of opinions that may exist when a diverse group considers a highly technical and polarizing topic, approaches for reaching consensus (the Delphi method, the nominal group process, and the consensus development conference) are increasingly used to make complex decisions in medicine and health.3 Consensus methods perform well in situations where the evidence is limited or unclear, or where results diverge widely, and they can provide a link between clinical reasoning and clinical research.4

The Delphi method is a way of collating and organizing feedback provided by a group of experts. Modifications of the Delphi technique have been used for developing clinical guidelines and quality indicators,5–10 developing clinical decision aids,11 identifying research priorities,12–17 defining priorities in cancer care,18 and identifying health practitioners’ educational priorities.19

This study aimed to ascertain whether the Delphi process can be an efficient and transparent grant assessment method and whether it can make a significant contribution to the peer review process. If the process is acceptable to stakeholders, who view it as being both fair and reproducible, it may be considered as an adjunct or an alternative to a traditional grant selection process.

Materials and methods

A research procurement method was developed by the Cancer Council New South Wales (CCNSW) to address an ambitious set of research priorities identified through the New South Wales Pancreatic Cancer Network Strategic Research Partnership grant. Details of the research prioritization process, which involved experts in the field as well as consumers, have recently been published in the peer-reviewed literature.20 Innovator grants in pancreatic cancer aim to support innovative research of high quality and potential that is unlikely to be considered by traditional funding bodies (because of unusual research questions or designs, or because the investigators have a limited research track record or lack experience in pancreatic cancer research).

Given the specific aims of these grants, it was accepted that they may involve a higher than usual risk of failure, but the CCNSW was willing to consider research proposals that demonstrated high merit and feasibility upon peer review as potential “high risk-high return” propositions. These risks were mitigated by restricting the term of funding to one year, at AUD 100,000 per grant.

A round of funding for innovator grants in pancreatic cancer was announced by the CCNSW in early 2008. Based on the average number of pancreatic cancer research applications submitted through the traditional National Health and Medical Research Council (NHMRC) funding scheme and taking into account the level of local activity in pancreatic cancer research, the CCNSW expected fewer than five innovator grant applications to be submitted. However, 19 applications were received from research groups throughout Australia.

This presented a number of challenges for the CCNSW. The original aim was to process all applications within four weeks of receipt, through a peer review process involving at least two independent experts. With a relatively large volume of applications to review from institutions nationwide, and because most experts in the field were listed as investigators on these applications (or had conflicts of interest to declare), it was agreed that the traditional review process, ie, inviting local experts as grant application reviewers, was no longer applicable. Therefore, an alternative review process was developed.

Firstly, an independent scientific panel was convened to review all applications against the specified eligibility criteria and to agree on the proposals that met the stated objectives of the funding scheme. These applications were recommended for peer review. It was proposed that the peer review be conducted online, using a modified Delphi process to reach consensus among an expert group.

The innovator grant Delphi process

The modified Delphi process was held over three rounds between 15 and 31 March 2009, and involved five experts holding grants with the Pancreatic Cancer Action Network in the US. The three rounds examined the scientific merit, innovativeness, and level of risk of each application (see Figure 1). Participants were required to declare any conflicts of interest before the review. At the end of each round, the two lowest ranking applications were eliminated, leaving four applications to be recommended for funding at the end of the process.
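To make the round mechanics concrete, the sketch below (ours, not part of the original process; written in Python, with illustrative names) implements the elimination logic just described: each application's mean score for the round is added to its running cumulative total, applications are ranked, and the two lowest are dropped before the next round.

```python
from statistics import mean

def run_round(cumulative, round_scores, n_eliminate=2):
    """One Delphi voting round, as a sketch.

    cumulative: {application_id: running total of mean scores so far}
    round_scores: {application_id: [one score per reviewer]} for the
    applications still in play this round.
    """
    for app_id, reviewer_scores in round_scores.items():
        cumulative[app_id] = cumulative.get(app_id, 0.0) + mean(reviewer_scores)
    # Rank from highest to lowest cumulative score, then drop the bottom two
    ranked = sorted(cumulative, key=cumulative.get, reverse=True)
    for app_id in ranked[-n_eliminate:]:
        del cumulative[app_id]  # eliminated from all subsequent rounds
    return cumulative

# Three successive rounds (scientific merit, innovativeness, risk) reduce
# 10 applications to 8, then 6, then the 4 recommended for funding.
```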

Figure 1. Diagrammatic representation of the Delphi grant process.

In Round 1, reviewers were provided with the 10 applications and a scoring sheet. To reduce the administrative burden on applicants and reviewers, all applications were limited to six pages. Reviewers were invited to rate the scientific merit of each research proposal against five criteria: the clarity and measurability of the research endpoints, the scientific quality of the grant proposal, its originality, the adequacy of the study design for achieving the research goals, and whether the potential impact of the study would warrant its funding.

The scoring sheets were returned to the CCNSW for collation and analysis. The scores used for each category of answers were “yes” = 2 points, “no” = 0 points, and “unsure” = 1 point. Delphi participants were provided with a de-identified summary table, documenting the scores assigned to each application by each reviewer, as well as the overall mean score for each application. The scores were listed in decreasing order of magnitude and it was proposed that the two lowest scoring proposals be eliminated from the subsequent round.
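As an illustration of the Round 1 collation (the scoring-sheet data layout below is our assumption, not the CCNSW's actual format), the stated mapping of yes = 2 points, unsure = 1 point, and no = 0 points across the five criteria gives each application a mean score out of a possible 10:

```python
from statistics import mean

# Point values stated for each category of answer
POINTS = {"yes": 2, "unsure": 1, "no": 0}

def round1_mean_score(sheets):
    """Mean scientific merit score for one application.

    sheets: one list per reviewer, holding that reviewer's five
    criterion answers, e.g. ["yes", "yes", "unsure", "yes", "no"].
    """
    return mean(sum(POINTS[answer] for answer in sheet) for sheet in sheets)

# Example: three hypothetical reviewers (per-reviewer totals 7, 9, and 8)
print(round1_mean_score([
    ["yes", "yes", "unsure", "yes", "no"],
    ["yes", "yes", "yes", "yes", "unsure"],
    ["yes", "unsure", "unsure", "yes", "yes"],
]))  # -> 8
```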

At this point, the panel was invited to review the overall ranking, provide feedback on the process, and advise if they wished to proceed to the next round or, if they had objections, to advise the CCNSW of their nature and whether a recount was required.

In Round 2, participants were asked to rank the remaining applications for their innovative potential. Participants were invited to assess the degree of innovativeness of each of the eight research applications on a Likert scale from 1 (“not at all original”) to 6 (“very innovative”).

The results were sent to the CCNSW and collated. A table documenting the individual innovativeness scores, the mean innovativeness score for each application, and the mean scientific merit score from Round 1 was circulated to the group. The sum of both mean scores was calculated and the applications were again ranked from the highest to the lowest, based upon the total score.
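The Round 2 arithmetic could be sketched as follows (data structures and names are our illustrative assumptions): each application's mean innovativeness rating is added to its mean scientific merit score carried over from Round 1, giving the cumulative score on which the new ranking is based.

```python
from statistics import mean

def round2_cumulative(merit_means, innovativeness_ratings):
    """merit_means: {application_id: mean Round 1 score}
    innovativeness_ratings: {application_id: [per-reviewer Likert
    ratings on the 1-6 scale]}
    """
    return {
        app_id: merit_means[app_id] + mean(ratings)
        for app_id, ratings in innovativeness_ratings.items()
    }

# Applicant 2 in Table 1: 8.0 (merit) + 4.6 (mean innovativeness) = 12.6
```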

The group was then asked to comment on the results, and the lowest two ranking applications were eliminated from the final round. Participants were again asked if they had any objections before proceeding to the next round.

In Round 3, participants were asked to rank the remaining six proposals according to their degree of risk, vis-à-vis their potential contribution to pancreatic cancer research, where the highest score (6) was awarded for low risk-high return applications and the lowest score (1) was awarded for high risk-low return and low risk-low return applications. High risk-high return propositions were awarded a score of 4.

As in previous rounds, a table was circulated, listing the risk score assigned to each application by each reviewer and the mean risk score. The cumulative total of the mean scientific merit, innovativeness, and risk scores was calculated and forwarded to participants, with applications listed from the highest to the lowest ranking based on this cumulative score, so that the expert assessors could see the rank of each proposal.
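A sketch of the Round 3 scoring, under the simplifying assumption that each reviewer assigns one of the four risk/return categories, mapped to the anchor scores stated above (reviewers may in fact have used intermediate values on the 1–6 scale):

```python
from statistics import mean

# Anchor scores stated for the four risk/return categories
RISK_RETURN_SCORE = {
    "low risk-high return": 6,
    "high risk-high return": 4,
    "high risk-low return": 1,
    "low risk-low return": 1,
}

def round3_final(cumulative_after_round2, risk_categories):
    """risk_categories: {application_id: [one category label per reviewer]}
    Returns the final cumulative scores behind the funding priority list."""
    return {
        app_id: cumulative_after_round2[app_id]
        + mean(RISK_RETURN_SCORE[c] for c in categories)
        for app_id, categories in risk_categories.items()
    }
```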

At the completion of the Delphi process, feedback was sought from reviewers on the process, its usefulness, and possible alternatives or modifications that would increase its validity and relevance as a tool for discriminating between grant applications.

Results

In Round 1 (n = 5 participants, 100% response rate), applications were scored against the five agreed criteria related to scientific merit. Mean application scores across all criteria ranged from 5.0 to 8.3. The two lowest ranking applications (scoring a mean of 5.8 and 5.0, respectively) were eliminated from the subsequent round, leaving eight applications to progress to Round 2. None of the reviewers recorded any objections at the end of Round 1.

Cumulative scores after Round 2 (which assessed innovativeness) ranged from 10.0 to 12.6, with the two lowest applications scoring 10.4 and 10.0, respectively. No objections were recorded at the end of Round 2, and the two lowest scoring applications were eliminated, leaving six applications in Round 3.

In Round 3, which scored the degree of risk associated with funding the research, cumulative mean scores ranged from 13.6 to 18.2. The two lowest applications, scoring a mean of 14.6 and 13.6, were eliminated. The remaining four applications represented the priority list recommended by the expert group for funding.

All rounds were scored by all five experts, for a 100% response rate throughout the process.

The widest range of scores was observed in Round 1, which ranked scientific merit. There was some movement in where applications ranked before and after scores were cumulated in Rounds 2 and 3; however, this did not affect the overall ranking in the final round assessing risk level (see Table 1).

Table 1.

Mean scores (range) and ranks for applications after each round of the Delphi process

| Application unique identifier | Mean scientific merit (post-Round 1) | Rank | Mean innovativeness (post-Round 2) | Cumulative score | Rank | Mean level of risk (post-Round 3) | Cumulative score | Rank |
|---|---|---|---|---|---|---|---|---|
| Applicant 1 | 8.3 | 1 | 3.5 | 11.8 | 2 | 3.8 | 15.6 | 3 |
| Applicant 2 | 8.0 | 2 | 4.6 | 12.6 | 1 | 5.6 | 18.2 | 1 |
| Applicant 3 | 7.8 | 3 | 3.8 | 11.6 | 4 | 3.0 | 14.6 | 5 |
| Applicant 4 | 7.4 | 4 | 4.0 | 11.4 | 5 | 3.8 | 15.2 | 4 |
| Applicant 5 | 6.8 | 5 | 4.0 | 10.8 | 6 | 2.8 | 13.6 | 6 |
| Applicant 6 | 6.8 | 5 | 3.2 | 10.0 | 8 | – | – | – |
| Applicant 7 | 6.8 | 5 | 4.8 | 11.6 | 3 | 4.4 | 16.0 | 2 |
| Applicant 8 | 6.2 | 8 | 4.2 | 10.4 | 7 | – | – | – |
| Applicant 9 | 5.8 | 9 | – | – | – | – | – | – |
| Applicant 10 | 5.0 | 10 | – | – | – | – | – | – |

Following the process, participants (n = 4, 80% response rate) were asked a series of questions to evaluate their experience with the Delphi process (see Table 2). Responses were positive overall; the feedback suggested that the Delphi process was a fair way of assessing the merit of the applications, was well suited to small application pools, and offered unique solutions compared with other review processes. The reviewers suggested that incorporating some free discussion among the panel between rounds would be useful.

Table 2.

Reviewer feedback

Would more discussion between rounds be beneficial and, if so, would it have altered your final decision?
- Discussion would have been beneficial, especially on grants where there was a clear difference in opinion, ie, some reviewers scored it highly and others poorly.
- Discussions or written comments would be beneficial. Whether it would alter the outcome is hard to say.
- Ideally, one conference call would be ideal – however given the time differences I think the current system works fine.
- I thought the review process went well, however, I do feel that having a conference call at some point would help. Many applications are similar in nature/merit and discussing which one to place higher than the other at some point of the process would have been desirable.

Did cumulative scoring result in the optimum outcome?
- The multiple rounds were kind of a waste of time since the leaders did not move very much between each round.
- It really depends on what is weighted more in each round. The round weighing scientific merit was omitted. That resulted in grants emphasizing novelty over scientific merit.
- Yes.

How did this process compare with more traditional scientific peer reviews and grant assessment processes?
- It was actually more time-consuming as I had to go back and read the grants in between rounds to remember and rank them again. I think we could have answered all of the questions at the first round, discussed it quickly, and then made the decision.
- This is very similar to the mechanism where novelty is emphasized. This is very common for foundation grants or non-renewable grants. The cumulative scoring is novel to me. I think it works fairly well. Each round does its intended purpose of weeding out certain proposals.
- Probably easier.

Would you recommend this process in the future to other funding schemes?
- Maybe. It really helps to be able to review the ranking of the grant in each round. This feature is very different from study sections that I have served on, where each grant is evaluated independent from other applications and no ranking is given. I would recommend ranking for all small pools of applications.
- Yes, I think it is fair.
- I think overall the experience was very positive. I think whether to alter the process really depends on what kind of results is expected. I think the current process definitely weighs novelty over scientific merit. If that’s what the Foundation wants, the result is pretty close to the aim.

Discussion

Here we have described a process that can assist in the assessment and ranking of research grant applications, using a modified Delphi technique. To our knowledge, this represents the first use of a Delphi process to appraise and rank research applications. Its ease of administration, reproducibility, and accessibility make it a useful adjunct to traditional processes of grant selection, or a stand-alone process for reviewing very specialized types of research applications, where innovativeness and risk-taking are rewarded in conjunction with scientific merit.

This method offered the advantages of expediency and speed, as the Delphi process can be carried out electronically and outcomes collated promptly, in preparation for the next step. We believe that its greatest contribution would be in evaluating grant applications in highly specialized research areas, where the availability of local reviewers is very limited, or where the contribution of a diverse, interstate, or international panel of experts would be a great asset to the proceedings.

Many research organizations tailor review processes to the nature or aim of the grant or the procurement strategy, eg, attracting researchers who have a sound or very novel idea but may lack a well-established track record in particular areas of research. The process described here adequately addressed the funder’s specific aims and the specific requirements of reviewing novel grant applications, which carry a higher level of risk than is acceptable under traditional grant schemes.

In 2008, the National Institutes of Health (NIH) acknowledged that a specialized review process was required to assess grant applications from first-time applicants equitably.21 Modeled on similar programs in Europe and Japan, it was proposed that applications from researchers applying to the NIH for the first time would be reviewed against each other, rather than against all applications, including those from experienced researchers. In the same review, the NIH acknowledged issues around the length of applications, identifying appropriate reviewers, and allowing some flexibility for reviewers to reduce the burden on their time.

While the NIH program may represent a different grant process, many of the same issues are germane to Australian pancreatic cancer research and were addressed here through the innovator grant review process. For example, innovator grant applications could not exceed six pages, in order to reduce the administrative and time burden on both applicants and reviewers. While this limit was initially introduced to shorten the time in which applications were prepared and reviewed, it was equally important to the success of this Delphi process, because lengthy applications would have made the review very onerous for the expert panel. Further investigation is needed to measure the time and effort required of assessors, but these should be proportionate to the level of risk and funding of the grants being offered.

The process successfully addressed the issue of finding appropriate reviewers, because in this case it was necessary to search for international reviewers in order to avoid conflicts of interest. Our online review process was less costly, quicker, and more flexible with regard to reviewer time commitment, because the process could accommodate their individual schedules.

Nevertheless, the comments received from reviewers highlight a clear limitation of the Delphi technique in reviewing grant applications, particularly in relation to the need for some discussion between rounds. For example, where there is a large difference in the range of scores, a discussion between voting rounds could be beneficial. This could, however, introduce the shortcomings of a committee decision-making process, particularly panel members’ tendency toward conformity, or the risk that the process would be controlled by more dominant personalities.1 This may ultimately be counterproductive compared with the Delphi process, where individual responses are de-identified and collated in order to facilitate the reaching of consensus. However, circulating comments without discussion may be beneficial in the future; it would reduce the risk of conformity while maintaining the flexibility of the Delphi.

The indicators used in our Delphi process differ from those used in traditional grant schemes, such as those administered in Australia by the NHMRC, as the latter emphasize an established track record in research and budgetary considerations when ranking applications. The indicators used in the Delphi process were scientific merit, innovativeness, and level of risk.

Traditional funding schemes require in-depth scrutiny of proposed budgets, because of the larger funding amounts made available to research teams. This was not an issue in our situation, because the relatively small amount awarded (AUD 100,000 per grant) was simply “start-up grant money”, intended to encourage innovativeness and lateral thinking. Larger budgets, however, require closer review than is possible through a Delphi process alone. Further research into the merit of the Delphi process as an adjunct to the traditional grant selection process would therefore be advantageous, as would a trial of the Delphi process with other novel grant schemes.

Conclusion

The value of the Delphi process for grant reviews therefore appears to lie primarily with novel grant schemes, particularly where the aim is to increase researcher involvement in understudied or relatively new research fields. The process is well suited where the stated goal is to attract new researchers into the field, regardless of their track record: researchers who may seek different, nonconventional solutions to research questions and/or may wish to move into a new research field. In our experience, a modified Delphi process proved an efficient and equitable method of grant review in a research area with a high potential for conflicts of interest, and we contend that it has wider applicability in other research grant evaluations.

This process may also be useful in other peer review processes, including ethics committee deliberations, progress evaluations of research grant applications, and guideline development, particularly where there is a limited availability of local expert reviewers.

Acknowledgments

This study was funded by the CCNSW. We would like to thank members of the CCNSW Standing Committee on Scientific Assessment, Dr Andrew Penman, Professor Murray Norris, and Professor Andrew Grulich, for their contribution to this review process. Special thanks are extended to Michelle Duff, Director of Research and Scientific Affairs, Pancreatic Cancer Action Network, USA.

Footnotes

Disclosure

The authors report no conflict of interest in this work.

References

1. Murphy MK, Black NA, Lamping DL, et al. Consensus development methods, and their use in clinical guideline development. Health Technol Assess. 1998;2(3):1–88.
2. Turoff M. The policy Delphi. In: Linstone HA, Turoff M, editors. The Delphi Method: Techniques and Applications. Boston, MA: Addison-Wesley Publishing Co; 1975.
3. Fink A, Kosecoff J, Chassin M, Brook RH. Consensus methods: Characteristics and guidelines for use. Am J Public Health. 1984;74(9):979–983. doi: 10.2105/ajph.74.9.979.
4. Cross H. Consensus methods: A bridge between clinical reasoning and clinical research? Int J Lepr Other Mycobact Dis. 2005;73(1):28–32. doi: 10.1489/1544-581X(2005)73[28:CMABBC]2.0.CO;2.
5. Gagliardi AR, Lemieux-Charles L, Brown AD, Sullivan T, Goel V. Barriers to patient involvement in health service planning and evaluation: An exploratory study. Patient Educ Couns. 2008;70(2):234–241. doi: 10.1016/j.pec.2007.09.009.
6. Hermens RP, Ouwens MM, Vonk-Okhuijsen SY, et al. Development of quality indicators for diagnosis and treatment of patients with non-small cell lung cancer: A first step toward implementing a multidisciplinary, evidence-based guideline. Lung Cancer. 2006;54(1):117–124. doi: 10.1016/j.lungcan.2006.07.001.
7. McGory ML, Shekelle PG, Ko CY. Development of quality indicators for patients undergoing colorectal cancer surgery. J Natl Cancer Inst. 2006;98(22):1623–1633. doi: 10.1093/jnci/djj438.
8. Roddy E, Zhang W, Doherty M, et al. Evidence-based recommendations for the role of exercise in the management of osteoarthritis of the hip or knee – the MOVE consensus. Rheumatology (Oxford). 2005;44(1):67–73. doi: 10.1093/rheumatology/keh399.
9. Biondo PD, Nekolaichuk CL, Stiles C, Fainsinger R, Hagen NA. Applying the Delphi process to palliative care tool development: Lessons learned. Support Care Cancer. 2008;16(8):935–942. doi: 10.1007/s00520-007-0348-2.
10. Greenberg A, Angus H, Sullivan T, Brown AD. Development of a set of strategy-based system-level cancer care performance indicators in Ontario, Canada. Int J Qual Health Care. 2005;17(2):107–114. doi: 10.1093/intqhc/mzi007.
11. Elwyn G, O’Connor A, Stacey D, et al. Developing a quality criteria framework for patient decision aids: Online international Delphi consensus process. BMJ. 2006;333(7565):417. doi: 10.1136/bmj.38926.629329.AE.
12. Bond S, Bond J. A Delphi survey of clinical nursing research priorities. J Adv Nurs. 1982;7(6):565–575. doi: 10.1111/j.1365-2648.1982.tb00277.x.
13. Chang E, Daly J. Priority areas for clinical research in palliative care nursing. Int J Nurs Pract. 1998;4(4):247–253. doi: 10.1046/j.1440-172x.1998.00089.x.
14. Cohen MZ, Harle M, Woll AM, Despa S, Munsell MF. Delphi survey of nursing research priorities. Oncol Nurs Forum. 2004;31(5):1011–1018. doi: 10.1188/04.ONF.1011-1018.
15. Fochtman D, Hinds PS. Identifying nursing research priorities in a pediatric clinical trials cooperative group: The Pediatric Oncology Group experience. J Pediatr Oncol Nurs. 2000;17(2):83–87. doi: 10.1177/104345420001700209.
16. Hinds P, Norville R, Anthony L, et al. Pediatric cancer nursing research priorities: A Delphi study. J Pediatr Oncol Nurs. 1990;7(2):51–52. doi: 10.1177/104345429000700205.
17. Monterosso L, Dadd G, Ranson K, Toye C. Priorities for paediatric cancer nursing research in Western Australia: A Delphi study. Contemp Nurse. 2001;11(2–3):142–152. doi: 10.5172/conu.11.2-3.142.
18. Efstathiou N, Ameen J, Coll AM. Healthcare providers’ priorities for cancer care: A Delphi study in Greece. Eur J Oncol Nurs. 2007;11(2):141–150. doi: 10.1016/j.ejon.2006.06.005.
19. Broomfield D, Humphris GM. Using the Delphi technique to identify the cancer education requirements of general practitioners. Med Educ. 2001;35(10):928–937.
20. Robotin MC, Jones SC, Biankin AV, et al. Defining research priorities for pancreatic cancer in Australia: Results of a consensus development process. Cancer Causes Control. 2010. doi: 10.1007/s10552-010-9501-1.
21. Bonetta L. Enhancing NIH grant peer review: A broader perspective. Cell. 2008;135(2):201–204. doi: 10.1016/j.cell.2008.09.051.
