The Cochrane Database of Systematic Reviews
. 2007 Apr 18;2007(2):MR000003. doi: 10.1002/14651858.MR000003.pub2

Peer review for improving the quality of grant applications

Vittorio Demicheli 1,, Carlo Di Pietrantonj 2
Editor: Cochrane Methodology Review Group
PMCID: PMC8973940  PMID: 17443627

Abstract

Background

Grant giving relies heavily on peer review for the assessment of the quality of proposals, but evidence of the effects of these procedures is scarce.

Objectives

To estimate the effect of grant giving peer review processes on importance, relevance, usefulness, soundness of methods, soundness of ethics, completeness and accuracy of funded research.

Search methods

We conducted electronic database searches and citation searches, and contacted researchers in the field.

Selection criteria

Prospective or retrospective comparative studies with two or more comparison groups, assessing different interventions or one intervention against doing nothing. Interventions could include different ways of screening, assigning or masking submissions, different ways of eliciting opinions, or different decision-making procedures. Only original research proposals and quality outcome measures were considered.

Data collection and analysis

Studies were read, classified and described according to their design and study question. No quantitative analysis was performed.

Main results

Ten studies were included. Two studies assessed the effect of different ways of screening submissions, one study compared open with blinded peer review, and three studies assessed the effect of different decision-making procedures. Four studies considered agreement of the results of peer review processes as the outcome measure. Screening procedures appear to have little effect on the result of the peer review process. Open peer reviewers behave differently from blinded ones. Studies on decision-making procedures gave conflicting results. Agreement among reviewers and between different ways of assigning proposals or eliciting opinions was usually high.

Authors' conclusions

There is little empirical evidence on the effects of grant giving peer review. No studies assessing the impact of peer review on the quality of funded research are presently available. Experimental studies assessing the effects of grant giving peer review on importance, relevance, usefulness, soundness of methods, soundness of ethics, completeness and accuracy of funded research are urgently needed. In the meantime, practices aimed at controlling and evaluating the potentially negative effects of peer review should be implemented.

Plain language summary

Grant giving relies heavily on peer review for the assessment of the quality of proposals, but evidence of the effects of these procedures is scarce

This review was carried out to assess the effect of the various processes of peer review on the quality of funded research. Only ten studies were included and described in the review. We were unable to find comparative studies assessing the actual effect of peer review procedures on the quality of the funded research. There is little empirical evidence on the effects of grant giving peer review. Experimental studies assessing the effects of grant giving peer review on importance, relevance, usefulness, soundness of methods, soundness of ethics, completeness and accuracy of funded research are urgently needed. In the meantime, practices aimed at controlling and evaluating the potentially negative effects of peer review should be implemented.

Background

Both researchers and grant giving bodies have expressed concern about the amount of time spent writing and reviewing grants (Smith 1988; Roy 1985; Kostoff 1994). Grant giving relies heavily on peer review for the assessment of the quality of proposals, but evidence of the effects of these procedures appears scarce (Wessely 1999).

A number of criticisms about peer review of grant applications have focused on the reliability of the process and the existence of a number of biases (Wenneras 1999). Descriptive evidence of gender bias was provided by a study at the Swedish Medical Research Council (Wenneras 1997) but a number of other studies carried out in similar contexts found no evidence of it (Cole 1992; Grant 1997).

Similarly contrasting findings are available on other investigated biases of peer review: age, institution, 'cronyism', discipline, gender, etc. An extensive, although non‐systematic, review of existing studies on grant giving peer review has been published (Wessely 1999).

In spite of these concerns and limitations very little has been done to address aspects such as the equity, effectiveness and efficiency of the process. The availability of a growing amount of original research on the effect of peer review allows a systematic review of studies comparing the effectiveness of peer review processes of research grant applications in terms of identifying high quality proposals for potential funding.

Objectives

To estimate the effect of processes in grant giving peer review on the nature of output.

These processes are grouped as:

  • different ways of screening, assigning or masking submissions

  • different ways of eliciting internal or external opinions

  • different decision making procedures (group or single person)

  • different types of feedback to author(s) and subsequent revision of submissions

Methods

Criteria for considering studies for this review

Types of studies

Prospective or retrospective comparative studies with two or more comparison groups; these groups may be generated by random or other methods and may include historical comparisons. All studies included in our review reported original data.

Types of data

Original research proposals submitted for funding to a grant giving body.

Types of methods

The studies should compare two or more interventions or an intervention against 'do nothing' from within one of the following categories:

  • different ways of screening submissions (i.e. carrying out a preliminary assessment of the submission)

  • different ways of assigning submissions (i.e. choosing and assigning assessors to the submission)

  • different ways of masking submissions (i.e. concealing the identity and background of the authors and/or the assessors)

  • different ways of eliciting internal opinions (i.e. opinions on the scientific quality of the submission from those within the grant giving organisation)

  • different ways of eliciting external opinions (i.e. opinions on the scientific quality of the submissions from those outside the grant giving organisation)

  • different (group or single person) decision making procedures (i.e. deciding whether to fund the submission)

  • different types of feedback to author(s) and subsequent revision of submissions

Types of outcome measures

Quality of the funded research, however measured.

Search methods for identification of studies

The following electronic databases were searched: the Cochrane Library, including the Cochrane Database of Systematic Reviews (CDSR), the Database of Abstracts of Reviews of Effectiveness (DARE), the Cochrane Methodology Register and the Cochrane Controlled Trials Register (CENTRAL/CCTR); MEDLINE; EMBASE; Healthstar; CINAHL (Cumulative Index to Nursing and Allied Health Literature); PsycLIT; Evidence Based Medicine (EBM); Australasian Medical Index (AMI); Current Contents; Dissertation Abstracts; Sociofile; Biological Abstracts; SciSearch; and PubScience.

Search terms included (in combination): peer review, grant, application, research, proposal, funding.

The electronic searches covered the whole available time period of each database up to June 2002. Our search strategy was intentionally of low specificity to enable the maximum number of relevant studies to be identified. A citation search on the retrieved papers was conducted, researchers in the field were contacted, and relevant reviews, books, texts and journals were handsearched.

Data collection and analysis

Two reviewers examined each citation (title and abstract, when available) for inclusion. Studies considered for possible inclusion were retrieved in full. The same two reviewers examined the studies independently, applying the inclusion criteria and resolving disagreements by discussion.

Two reviewers extracted data on study design, methodology, interventions and outcomes used in the included studies. The description of each study (including population size, duration, timing and setting) was prepared by one reviewer and checked by the other. The methodological quality of the included studies was assessed by the two reviewers independently.

The quality of randomized studies was assessed using criteria adapted from the Cochrane Reviewers' Handbook (randomisation, generation of the allocation sequence, allocation concealment, blinding) and classified as adequate, possibly adequate, inadequate or not used (Clarke 2002). The quality of cohort studies was assessed using the appropriate Newcastle-Ottawa quality assessment scale (Wells 2000). The quality of other study designs was assessed using the methodological assessment grid developed by the NHS Centre for Reviews and Dissemination at the University of York (Khalid 2000).

Two reviewers examined the outcome data from the studies and decided that conducting a quantitative analysis would not have been appropriate.

Results

Description of studies

The searches produced an initial list of 178 titles, from which 37 reports of studies possibly fulfilling our inclusion criteria were identified and retrieved in full. Only ten studies fulfilled our criteria; we excluded the remaining 27 from the review. The studies were read and classified according to their design and the study question addressed.

No comparative studies assessing the effect of peer review on the quality of funded research were found, nor were any studies found comparing the effects of peer review against doing nothing. The included studies were grouped according to the study question and are presented descriptively.

Studies assessing different ways of screening submissions

Russel 1983 reports the results of a retrospective survey conducted to see whether a simplified assessment procedure would produce the same outcome as a standard peer review assessment. One hundred and thirteen grant applications to the Canadian Arthritis Society were initially reviewed by internal reviewers on the basis of a brief outline of the application (name and university of the applicant, a single-page summary of the proposal, state of knowledge, recent relevant publications, and budget information). The same applications were then reviewed in more detail (the complete application with appendices and reprints) with the help of additional external experts. The details and the external referees' reports had little impact on the final rating of the applications.

Vener 1993 reports a non-randomized experiment designed to test a triage model for the peer review of grant applications to the National Institutes of Health (NIH). Seventy-three submissions were reviewed using the five-member triage team model, with the reviewers blinded to the assignments. Nineteen non-competitive applications (defined as those receiving four non-competitive votes out of five) were triaged out. The remaining 54 applications were reviewed according to the usual NIH procedure by the ordinary 12- to 20-member full committees; of these, four applications received three non-competitive votes and 13 received two. The authors concluded that the likelihood of the five-member triage model eliminating highly competitive applications appears to be very small.

Studies assessing different ways of masking submissions

Lee 2000 reports a retrospective comparison between blind and open peer review of research proposals submitted to the Korea Science and Engineering Foundation (Kosef) in 1996 in four research areas (mathematics, physics, biology and electronics). The Kosef review process involved five reviewers for each proposal: three were sighted and two (regarded as connected to the applicants) were blinded. A total of 1978 proposals were sent to 917 reviewers; 562 responses were received, 331 of which were from sighted reviewers and 231 from blinded ones. Both sighted and blinded reviewers assessed four criteria (clarity of goal, originality, methodology and desirability of outcomes) on a nine-point scale, and sighted reviewers considered five additional criteria (qualification of applicants, research team, budget, duration, training). The study compared the final evaluation scores given by the two sets of reviewers and their correlation with nine characteristics of the proposals (applicant's organisation, experience, publications, academic recognition, stage of research, innovativeness, mainstream/non-mainstream, research interest, and personal relationship between applicant and reviewer). The study concluded that the behaviour of the sighted and blinded evaluation groups differed: the sighted assessments appeared to be affected by the rank of the applicant's department, and by the applicant's professional age and academic recognition.

Studies assessing the effect of different decision making procedures

Das 1985 reports on the results of a retrospective comparison between 78 grant proposals reviewed by the National Institute of Allergy and Infectious Diseases Review Committee and 1021 projects reviewed by three Division of Research Grants study sections. Approval rates, mean scores and the distribution of applications in the various priority score ranges obtained through the two review groups were very similar.

Hodgson 1995 reports the results of a retrospective analysis of 779 research proposals submitted to the Heart and Stroke Foundation of Ontario, undertaken to identify factors playing a role in the assignment of scores for scientific merit, and particularly to investigate the agreement between internal and external reviewer scores. The scores of internal reviewers were more closely correlated with the final decision. Nevertheless, the final committee scores were significantly different from both internal and external scores.

Hodgson 1997 reports on the results of a retrospective study carried out to evaluate the level of agreement and correlation between two similar but separate peer review systems. Two hundred and forty eight proposals simultaneously submitted to the Heart and Stroke Foundation and the Medical Research Council of Canada were identified and their scores compared. The level of agreement on the fundability of the projects was 73%.
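
Both Hodgson studies used Cohen's kappa alongside raw agreement as the outcome measure (see the Characteristics of included studies tables). As a minimal illustrative sketch, with entirely hypothetical fund/decline decisions rather than data from the studies, the following Python code shows how observed agreement and Cohen's kappa between two review systems are computed:

    # Minimal sketch: observed agreement and Cohen's kappa between two
    # peer review systems. All decisions below are hypothetical.
    # 1 = fund, 0 = decline; one entry per proposal.
    system_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
    system_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0]

    n = len(system_a)
    observed = sum(a == b for a, b in zip(system_a, system_b)) / n

    # Agreement expected by chance, from each system's marginal funding rate
    p_a = sum(system_a) / n
    p_b = sum(system_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)

    # Kappa rescales observed agreement so that 0 = chance, 1 = perfect
    kappa = (observed - expected) / (1 - expected)
    print(f"observed = {observed:.2f}, kappa = {kappa:.2f}")  # 0.70, 0.40

A raw agreement figure such as the 73% reported above is therefore best read alongside a chance-corrected statistic, since two systems funding similar proportions of proposals will often agree by chance alone.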

Cole 1981 reports the results of an experiment in which 150 proposals submitted to the National Science Foundation were re-evaluated independently by a new set of reviewers. Half of the 150 proposals had originally been funded and half had been declined. The new reviewers were selected by a panel of members of the National Academy of Sciences, which produced a list of approximately 12 reviewers for each research proposal, two of whom were enrolled in the experiment. The degree of disagreement on the final evaluation of the proposals within the population of eligible reviewers was high, indicating that the possibility of getting a research grant depended to a significant extent on chance.

Studies using agreement of the results of peer review processes as outcome measure

Hartmann 1990 reports on the analysis of 242 applications for grants to the Deutsche Forschungsgemeinschaft. The 639 related review reports were identified and their content analysed. The content of the comments was classified according to 11 different criteria categories, and a seven-point scale was used to describe the final assessment. The data revealed that a wide range of criteria were used for assessing the quality of the proposals. High inter-reviewer agreement was found in judging the theoretical and methodological quality of the proposals, in evaluating the appropriateness of the budget, and in the final recommendations.
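
Hartmann 1990 summarised inter-reviewer agreement with the coefficient of variation (a choice questioned under Risk of bias below). As a minimal sketch with hypothetical ratings, the coefficient of variation of one proposal's scores is simply the standard deviation divided by the mean, so lower values indicate closer agreement:

    # Minimal sketch: coefficient of variation (CV) across the reviewer
    # scores of a single proposal; the ratings are hypothetical.
    from statistics import mean, stdev

    scores = [5.0, 6.0, 5.5]           # hypothetical ratings, seven-point scale
    cv = stdev(scores) / mean(scores)  # CV = standard deviation / mean
    print(f"CV = {cv:.2f}")            # 0.09: the scores cluster tightly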

Wiener 1977 reports the results of a prospective study set up to investigate inter-reviewer agreement during the evaluation of 101 grant and 17 fellowship applications to the New York State Affiliate of the American Heart Association. Each research proposal was assessed by two members of the committee, who graded 10 different parameters (knowledge of the subject, experience, methodology, objectives, research plan, data processing, innovation, adequacy, institution and age) on a ten-point scale. A final priority score was calculated by weighting the various parameters differently. Agreement between the two reviewers was judged in terms of the standard error of the mean for each score, for the weighted priority scores and for the global final score. A significant level of reviewer agreement was found, and the degree of inter-reviewer agreement was greater for applications that obtained high priority scores.
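
To make the scoring procedure concrete, the following is a minimal sketch of a Wiener 1977-style calculation in which two reviewers grade ten parameters, a weighted priority score is formed, and agreement is summarised via the standard error of the mean (SEM). The weights and grades are hypothetical illustrations; the study's actual weighting scheme is not reproduced here:

    # Minimal sketch of a weighted priority score for two reviewers.
    # Weights and grades are hypothetical.
    from math import sqrt
    from statistics import stdev

    weights    = [2, 2, 1, 1, 1, 1, 1, 1, 0.5, 0.5]  # one weight per parameter
    reviewer_1 = [8, 7, 9, 6, 7, 8, 6, 7, 8, 9]      # grades on a ten-point scale
    reviewer_2 = [7, 7, 8, 6, 6, 8, 7, 7, 9, 8]

    def priority(grades):
        # Weighted mean of the ten parameter grades
        return sum(w * g for w, g in zip(weights, grades)) / sum(weights)

    scores = [priority(reviewer_1), priority(reviewer_2)]
    sem = stdev(scores) / sqrt(len(scores))  # agreement summary used by the study
    print(f"scores = {scores[0]:.2f}, {scores[1]:.2f}; SEM = {sem:.2f}")
    # scores = 8.15, 7.85; SEM = 0.15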

Green 1989 reports the results of a randomized experiment comparing the effect of two scales with different rating intervals (0.5 and 0.1) used in evaluating a total of 653 grant applications assessed by 24 study sections of the National Institutes of Health. The one-half and one-tenth point scales were introduced to encourage the use of a wider range of scores. The 24 well-established study sections were selected on the basis of their voting behaviour and the absence of unusual turnover in members (each section had between 15 and 20 members). The study sections were paired according to the type of applications assessed, the number of applications typically reviewed and the experience of the executive secretary; using a random number table, one of each pair was assigned to the one-half scale and the other to the one-tenth scale. The usual voting behaviour of the sections was compared with the priority scores calculated with the two scales, and the distributions of scores were analysed and compared descriptively. The two scales appeared to have little influence on the final assessment.
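
The practical difference between the two rating intervals can be seen by rounding the same underlying judgement to each scale. This is a minimal sketch with a hypothetical raw rating, assuming a priority score range running from 1.0 (best) to 5.0:

    # Minimal sketch: one hypothetical underlying judgement expressed on
    # the two scales compared by Green 1989.
    raw = 2.36                           # hypothetical rating, 1.0 to 5.0
    half_point  = round(raw * 2) / 2     # 0.5-point intervals -> 2.5
    tenth_point = round(raw * 10) / 10   # 0.1-point intervals -> 2.4
    print(half_point, tenth_point)

Over that range the finer scale admits 41 distinct values instead of 9, which was the mechanism intended to spread the scores.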

Risk of bias in included studies

Russel 1983 ‐ The methods described in the paper appear consistent with the aim of the study. The study design, the outcome measure and the study setting make the generalisability of the findings uncertain.

Vener 1993 ‐ Contrary to the authors' description, the study design is not experimental but a retrospective comparison. The comparability of the two screening methods is debatable because only a subset of the applications underwent both assessment procedures. The statistical methods appear sound and are reported in detail. The generalisability of the study findings is difficult to judge.

Lee 2000 ‐ The study design is a retrospective comparison between the activities of two sets of reviewers. The low response rate may have introduced an important selection bias. The correlation between final scores and proposal characteristics was determined separately for blinded and sighted reviews; there was no direct comparison of the assessment of the same proposal by different reviewers. The external validity of the findings is limited.

Das 1985 ‐ The study design is a retrospective observation of the activity of two review groups. A number of important selection biases could have occurred, and no attempt was made to ensure the comparability of the two groups. The validity of the study is seriously limited and no extrapolation is possible.

Hodgson 1995 ‐ The study design and methods are internally coherent, but serious biases in the selection process and in the evaluation of outcomes may have occurred. The comparability of the two groups is limited and no extrapolation of results is possible.

Hodgson 1997 ‐ The study design and methods used are consistent with the aim of the study. The nature of the experiment and the characteristics of the two peer review systems seriously limit the generalisability of the findings.

Cole 1981 ‐ The study is a non-randomized experiment. The design of the study and the methods used to analyse the data are adequate for its scope. Most selection factors were controlled, and the comparability of the evaluations done on the two sets of proposals is fair.

Hartmann 1990 ‐ The methods used are consistent with the descriptive scope of the study. Important bias could have arisen in the selection of proposals and reviewers, and the comparability of the various reviewers' comments is therefore limited. Inter-reviewer agreement could have been judged more appropriately than by the coefficient of variation.

Wiener 1977 ‐ The study design and methods used are sound and consistent with the study aim. Important bias may have occurred in the selection of proposals and reviewers. Inter-reviewer agreement could have been judged more appropriately than by the standard error of the mean. Very little can be extrapolated from the findings.

Green 1989 ‐ The study is an open randomized experiment whose design is internally coherent with its aim. Randomization methods were correct, but important selection bias could have occurred in the choice of the study sections and in the assignment of the studies to the sections, and the allocation was not concealed. The statistical methods used for comparison are too limited to support the conclusions presented, and the narrow focus on scoring scales makes the study not very informative.

Effect of methods

No studies assessing the direct effect of peer review on the quality of the funded research were found, nor were any studies comparing the effects of peer review against a 'do nothing' alternative.

The screening procedures assessed by comparative studies (triage or simplified assessment compared with standard and detailed procedures) seem to have little effect on the results of the process. Moreover the quality of the studies seriously limits the generalisability of their findings.

The only available study comparing open with blinded peer review shows that the two reviewer groups behave differently, indicating the possibility of bias affecting open peer review.

The available studies comparing different decision-making procedures (the results of assessments done by different institutions or groups of reviewers) gave conflicting results. This is not surprising given the specificity of the settings and the likelihood of selection bias.

Studies using agreement as an outcome measure usually show high levels of agreement between reviewers and between different ways of assigning proposals or eliciting opinions.

Discussion

Peer review plays a central role in selecting research proposals for funding in many countries and is intended to improve the quality of research. Given the time and resources dedicated to it, there appears to be little evidence from properly conducted studies on the effects of the process.

Many of the limitations and biases attributed to the peer review process appear to have been described on the basis of personal experience or subjective judgement. Research in this area has focused only on single, limited aspects of the peer review process, and little attention has been devoted to the direct impact of peer review on the quality and results of funded research.

The retrospective design used in most of the available studies, the specificity of their settings and the questionable quality of the methods used seriously limit the possibility of deriving general information even on the marginal aspects that were investigated.

At present, little can be inferred from the available empirical evidence on the effects of grant giving peer review as a mechanism to ensure the quality of biomedical research. The lack of any reliable assessment of the effects of peer review of grant applications does not necessarily mean that the procedure is ineffective; it might be very effective but simply not have been tested reliably. The same applies to its potentially negative effects.

Conclusions from this review indicate two main streams of possible actions:

  • the organisation of properly designed studies assessing the effects of grant giving peer review on importance, relevance, usefulness, soundness of methods, soundness of ethics, completeness and accuracy of funded research.

  • the adoption of reviewing practices aimed at evaluating and controlling the potentially negative effects of the process. Attempts to improve the efficiency and transparency of the process, and actions encouraging innovative ideas, should be implemented and evaluated.

Authors' conclusions

Implications for methodological research

Experimental studies assessing the effects of grant giving peer review are urgently needed.

What's new

27 December 2007: Amended. Converted to new review format.

History

Protocol first published: Issue 2, 2001
 Review first published: Issue 1, 2003

20 February 2007: New citation required and conclusions have changed. Substantive amendment.

Acknowledgements

We would like to thank the many people who contributed to this review. Tom Jefferson commented on an early draft. Simon Wessely provided us with the results of his extensive literature searches on the subject. Glaxo Wellcome Research & Development retrieved most of the references. Philippa Middleton and Gabriella Morandi ran the electronic searches. Mike Clarke, Lisa Bero, Julie Glanville and Fiona Godlee provided many useful comments and suggestions that led to a substantial revision and update of the review.

Characteristics of studies

Characteristics of included studies [ordered by study ID]

Cole 1981.

Methods Non-randomized experiment
Data 150 proposals
Comparisons The evaluation of the same set of proposals done by two different groups of reviewers was compared
Outcomes Agreement between reviewers; 
 Correlation between scores
Notes  

Das 1985.

Methods Retrospective comparison
Data 78 proposals
Comparisons Two different groups of reviewers were compared
Outcomes Approval rates, mean scores and priority score ranges
Notes  

Green 1989.

Methods Randomized experiment
Data 24 study sections containing 653 research proposals
Comparisons Two different scales (0.5 and 0.1 intervals) for the formulation of priority scores were compared
Outcomes Priority scores calculated from the two scales; 
 Descriptive statistics
Notes  

Hartmann 1990.

Methods Retrospective comparison
Data 639 review reports on 242 applications
Comparisons Different reviewers assessing the same proposal; 
 The contents of the review reports available for the same proposal were compared
Outcomes Frequency of use of 11 criteria categories; Agreement between reviewers measured by the coefficient of variation
Notes  

Hodgson 1995.

Methods Retrospective comparison
Data 779 proposals
Comparisons Assessment of applications by external reviewers was compared with the assessment done by internal reviewers
Outcomes Evaluation scores; 
 Cohen's kappa
Notes  

Hodgson 1997.

Methods Retrospective comparison
Data 248 proposals
Comparisons Assessment of the same applications by two different funding agencies with similar peer review systems
Outcomes Evaluation scores; 
 Cohen's kappa
Notes  

Lee 2000.

Methods Retrospective comparison
Data 562 reviewers' reports
Comparisons Proposals assessed by sighted reviewers were compared with proposals assessed by blinded ones
Outcomes Correlation between evaluation scores and nine different proposal characteristics
Notes  

Russel 1983.

Methods Retrospective survey
Data 113 applications
Comparisons An internal simplified screening procedure was compared with a more detailed one
Outcomes Final rating of the applications
Notes  

Vener 1993.

Methods Retrospective comparison
Data 73 submissions
Comparisons A five-member triage panel was compared with the full committee assessment procedure
Outcomes Applications with negative votes in the two groups
Notes  

Wiener 1977.

Methods Prospective comparison
Data 101 grants and 17 fellowship applications
Comparisons Two reviewers assessing the same research proposal
Outcomes Each proposal judged on 10 different parameters with a 1 to 10 scale; 
 Agreement between the two reviewers on the various scores and on the global one
Notes  

Characteristics of excluded studies [ordered by study ID]

Study Reason for exclusion
Abrams 1991 Descriptive study, no comparisons presented
Anonymous 1994 The paper presents and discusses the experience of a funding institution; there is no evaluation of the effects of peer review
Anonymous 1995 The study does not evaluate the effects of grant giving peer review
Anonymous 1997 The study does not contain data
Bailar 1991 The study presents and discusses existing studies
Birkett 1994 The study does not contain data evaluating the effects of grant giving peer review
Chubin 1990 Review and discussion of existing studies
Chubin 1994 Review and discussion of existing studies
Cicchetti 1991 The paper presents and discusses data presented elsewhere and already considered for this review
Claveria 2000 Descriptive study, no comparisons presented
Cole 1992 The paper presents and discusses data presented elsewhere and already considered for this review
Cunningham 1993 Descriptive study, no comparisons presented
Fliesler 1997 The paper presents the author's opinions
Friesen 1998 The study presents approval rates for women and men in MRC Canada fellowships but does not evaluate the effects of grant giving peer review
Fuhrer 1985 Opinion survey, no evaluation of the effect of peer review
Glantz 1994 The study investigates professional interests of peers but does not evaluate the effects of grant giving peer review
Grant 1997 The study describes gender differences in funded research from a funding institution but does not evaluate the effects of grant giving peer review
Horrobin 1996 The study presents the author's opinion but does not evaluate the effects of grant giving peer review
Horton 1996 The paper presents and discusses data presented elsewhere and already considered for this review
Kruytbosch 1989 The paper presents the experience of a funding institution and the results of an opinion survey; there is no evaluation of the effects of peer review
Marsh 1999 The study examines the structure of reports from independent assessors but does not evaluate the effects of grant giving peer review
McCullough 1989 Opinion survey, no evaluation of the effects of peer review
McCullough 1994 The study does not evaluate the effects of grant giving peer review
Moxham 1992 The study discusses the peer review process but does not evaluate the effects of grant giving peer review
Narin 1989 The study does not evaluate the effects of grant giving peer review
VandenBeemt 1997 The paper presents the experience of a funding institution; there is no evaluation of the effect of peer review
Wenneras 1997 The study analyses the association of rating scores with measures of scientific productivity but no formal comparison is presented

Contributions of authors

The two reviewers judged the inclusion criteria and the quality of the included studies, and both double-checked the extracted information. Vittorio Demicheli drafted the text of the review; Carlo Di Pietrantonj commented on and contributed to the final version.

Sources of support

Internal sources

  • ASL 20 Servizio Sovrazonale di Epidemiologia, Italy.

External sources

  • No sources of support supplied

Declarations of interest

None known


References

References to studies included in this review

Cole 1981 {published data only}

  1. Cole S, Cole J, Simon G. Chance and consensus in peer review. Science 1981;214:881‐886. [DOI] [PubMed] [Google Scholar]

Das 1985 {published data only}

  1. Das NK, Froehlich LA. Quantitative evaluation of peer review of program project and center applications in allergy and immunology. J Clin Immunol 1985;5:220‐227. [DOI] [PubMed] [Google Scholar]

Green 1989 {published data only}

  1. Green JG, Calhoun F, Nierzwicki L, Brackett J, Meier P. Rating intervals: an experiment in peer review. FASEB J 1989;3:1987‐1992. [DOI] [PubMed] [Google Scholar]

Hartmann 1990 {published data only}

  1. Hartmann I, Neidhardt F. Peer review at the Deutsche Forschungsgemeinschaft. Scientometrics 1990;19:419‐425. [Google Scholar]

Hodgson 1995 {published data only}

  1. Hodgson C. Evaluation of cardiovascular grant‐in‐aid applications by peer review: influence of internal and external reviewers and committees. Can J Cardiol 1995;11:864‐868. [PubMed] [Google Scholar]

Hodgson 1997 {published data only}

  1. Hodgson C. How reliable is peer review? A comparison of operating grant proposals simultaneously submitted to two similar peer review systems. J Clin Epidemiol 1997;50:1189‐1195. [DOI] [PubMed] [Google Scholar]

Lee 2000 {published data only}

  1. Lee M, Om K, Koh J. The bias of sighted reviewers in research proposal evaluation: A comparative analysis of blind and open review in Korea. Scientometrics 2000;48(1):99‐116. [Google Scholar]

Russel 1983 {published data only}

  1. Russell AS, Thorn BD, Grace M. Peer review: a simplified approach. J Rheumatol 1983;10:479‐481. [PubMed] [Google Scholar]

Vener 1993 {published data only}

  1. Vener KJ, Feuer EJ, Gorelic L. A statistical model validating triage for the peer review process: keeping the competitive applications in the review pipeline. FASEB Journal 1993;7:1312‐1319. [DOI] [PubMed] [Google Scholar]

Wiener 1977 {published data only}

  1. Weiner S, Urivetsky M, Bregman D, et al. Peer review: inter‐reviewer agreement during evaluation of research grant evaluations. Clin Res 1977;25:306‐311. [PubMed] [Google Scholar]

References to studies excluded from this review

Abrams 1991 {published data only}

  1. Abrams P. The predictive ability of peer review of grant proposals: the case of ecology and the United States National Science Foundation. Soc Stud Sci 1991;21:111‐132. [Google Scholar]

Anonymous 1994 {published data only}

  1. Anonymous. Peer Review: Reforms Needed to Ensure Fairness in Federal Agency Grant Selection. Washington, DC: United States General Accounting Office, 1994.

Anonymous 1995 {published data only}

  1. Anonymous. Peer Review: An Assessment of Recent Developments. London: Royal Society, 1995.

Anonymous 1997 {published data only}

  1. Anonymous. Give him a grant, he's one of us. Research Fortnight 1997:13‐15. [Google Scholar]

Bailar 1991 {published data only}

  1. Bailar J. Reliability, fairness, objectivity and other inappropriate goals in peer review. Behav Brain Sci 1991;14:137. [Google Scholar]

Birkett 1994 {published data only}

  1. Birkett N. The review process for applied research grant proposals: suggestions for revision. Canadian Medical Association Journal 1994;150:1227‐1229. [PMC free article] [PubMed] [Google Scholar]

Chubin 1990 {published data only}

  1. Chubin D, Hackett E. Peerless Science: Peer Review and U.S. Science Policy. Albany: SUNY Press, 1990.

Chubin 1994 {published data only}

  1. Chubin D. Grants peer‐review in theory and practice. Evaluation Review 1994;18:20‐30. [Google Scholar]

Cicchetti 1991 {published data only}

  1. Cicchetti D. The reliability of peer review for manuscript and grant submissions: a cross‐disciplinary investigation. Behav Brain Sci 1991;14:119‐186. [Google Scholar]

Claveria 2000 {published data only}

  1. Claveria LE, Guallar E, Camì J, Conde J, Pastor R, et al. Does peer review predict the performance of research projects in health sciences? Scientometrics 2000;47(1):11‐23. [Google Scholar]

Cole 1992 {published data only}

  1. Cole S. Making Science: Between Nature and Society. Cambridge: Harvard University Press, 1992. [Google Scholar]

Cunningham 1993 {published data only}

  1. Cunningham BL, Landis GH. A study of the outcome of the American Society for Aesthetic Plastic Surgery research grant program. Plastic and Reconstructive Surgery 1993;92(7):1397‐401. [PubMed] [Google Scholar]

Fliesler 1997 {published data only}

  1. Fliesler SJ. Rethinking grant peer review. Science 1997;275:1399. [DOI] [PubMed] [Google Scholar]

Friesen 1998 {published data only}

  1. Friesen H. Equal opportunities in Canada. Nature 1998;391:326. [DOI] [PubMed] [Google Scholar]

Fuhrer 1985 {published data only}

  1. Fuhrer MJ, Grabois M. Grant application and review procedures of the National Institute of Handicapped Research: survey of applicant and peer reviewer opinions. Arch Phys Med Rehabil 1985;66:318‐321. [PubMed] [Google Scholar]

Glantz 1994 {published data only}

  1. Glantz SA, Bero LA. Inappropriate and appropriate selection of 'peers' in grant review. JAMA 1994;272:114‐116. [PubMed] [Google Scholar]

Grant 1997 {published data only}

  1. Grant J, Burden S, Breen G. No evidence of sexism in peer review. Nature 1997;390:438. [DOI] [PubMed] [Google Scholar]

Horrobin 1996 {published data only}

  1. Horrobin DF. Peer review of grant applications: a harbinger for mediocrity in clinical research? Lancet 1996;348:1293‐1295. [DOI] [PubMed] [Google Scholar]

Horton 1996 {published data only}

  1. Horton R. Luck, lotteries and loopholes of grant review. Lancet 1996;348:1255‐1256. [DOI] [PubMed] [Google Scholar]

Kruytbosch 1989 {published data only}

  1. Kruytbosch C. The role and effectiveness of peer review. In: Evered D, Harnett S editor(s). The Evaluation of Scientific Research. Chichester: John Wiley, 1989:69‐85. [Google Scholar]

Marsh 1999 {published data only}

  1. Marsh HW, Bazeley P. Multiple evaluation of grant proposals by independent assessors: confirmatory factor analysis evaluations of reliability, validity and structure. Multivariate Behavioural Research 1999;34(1):1‐30. [DOI] [PubMed] [Google Scholar]

McCullough 1989 {published data only}

  1. McCullough J. First comprehensive survey of NSF applicants focuses on their concerns about proposal review. Sci Technol Human Values 1989;14:78‐88. [Google Scholar]

McCullough 1994 {published data only}

  1. McCullough J. The role and influence of the US National Science Foundation's program officers in reviewing and awarding grants. Higher Education 1994;28:85‐94. [Google Scholar]

Moxham 1992 {published data only}

  1. Moxham H, Anderson J. Peer review: a view from the inside. Science and Technology Policy 1992:7‐15. [Google Scholar]

Narin 1989 {published data only}

  1. Narin F. The impact of different modes of research funding. In: Evered D, Harnett S editor(s). The Evaluation of Scientific Research. Chichester: John Wiley, 1989. [Google Scholar]

VandenBeemt 1997 {published data only}

  1. Van den Beemt F. The right mix: review by peers as well as by highly qualified persons (non-peers). In: Wood F editor(s). Peer Review Process: Australian Research Council Commissioned Report No 54. 1997:153‐164. [Google Scholar]

Wenneras 1997 {published data only}

  1. Wenneras C, Wold A. Nepotism and sexism in peer‐review. Nature 1997;387:341‐343. [DOI] [PubMed] [Google Scholar]

Additional references

Clarke 2002

  1. Clarke M, Oxman AD, editors. Cochrane Reviewers' Handbook 4.1.5. The Cochrane Library Issue 2. Oxford: Update Software, 2002. [Google Scholar]

Khalid 2000

  1. Khalid SK, Ter Riet G, Popay J, et al. Stage II Conducting the review: Phase 5 Study quality assessment. In: Khan KS, Ter Riet G, Glanville J, et al. editor(s). Undertaking Systematic reviews of research on effectiveness. CRD's guidance for carrying out or commissioning reviews. 2nd Edition. York: NHS Centre for Reviews and Dissemination University of York, 2000. [Google Scholar]

Kostoff 1994

  1. Kostoff RN. Research impact assessment. Principles and application to proposed, ongoing and completed projects. Invest Radiol 1994;29:864‐869. [PubMed] [Google Scholar]

Roy 1985

  1. Roy R. Funding science: the real defects of peer review and an alternative to it. Sci Technol Human Values 1985;10:73‐81. [Google Scholar]

Smith 1988

  1. Smith R. Problems with peer review and alternatives. British Medical Journal 1988;296:774‐777. [DOI] [PMC free article] [PubMed] [Google Scholar]

Wells 2000

  1. Wells GA, Shea B, O'Connell D, Peterson J, Welch V, Losos M, Tugwell P. The Newcastle‐Ottawa Scale (NOS) for assessing the quality of non-randomized studies in meta-analyses. www.lri.ca/programs/ceu/oxford.htm 2000.

Wenneras 1999

  1. Wenneras C, Wold A. Bias in peer review of research proposals. In: Godlee F, Jefferson T editor(s). Peer Review in Health Sciences. London: BMJ Books, 1999:79‐89. [Google Scholar]

Wessely 1999

  1. Wessely S. Peer review of grant applications: a systematic review. In: Godlee F, Jefferson T editor(s). Peer Review in Health Sciences. London: BMJ Books, 1999:14‐31. [Google Scholar]
