The Journal of General Physiology

Editorial. 2013 Sep;142(3):177–178. doi: 10.1085/jgp.201311076

Evaluation of scientific productivity and excellence in the NHLBI Division of Intramural Research

Robert S Balaban 1
PMCID: PMC3753598  PMID: 23980190

I have been asked to describe the scientific review processes for investigator programs in the National Heart, Lung, and Blood Institute (NHLBI) Division of Intramural Research (DIR), especially with regard to evaluating scientific productivity and impact. The DIR comprises a broad group of investigators whose work ranges from basic structural and cell biology to clinical research within the Clinical Center on the Bethesda campus and at regional hospitals. Investigators are reviewed every four years by an external Board of Scientific Counselors (BSC), as specified for all National Institutes of Health (NIH) intramural research programs. Each institute accomplishes this task in a slightly different manner. At NHLBI, the review is a Bethesda campus site visit in which BSC members chair a panel of ad hoc experts convened to evaluate the scientific accomplishments of the last four years and to hear a description of the future direction of the work. Before the visit, the investigator provides an extensive written report on recent work and available resources. At the meeting, the investigator makes a brief presentation, followed by a question and answer session in which the strengths and weaknesses of the program are explored. Any program weakness cited in the BSC report must have been raised in this open question period. After the site visit, the BSC members, working with the information provided by the ad hoc members, create a draft report that is provided to the investigator. The investigator can then write a rebuttal or request a formal meeting with the entire BSC to review the report and finalize the recommendations to the NHLBI director and council. Through this process, an investigator has his or her “day in court” as well as an appeal process.

One of the important aspects of this process is its primarily retrospective nature, focusing on the previous four years. This makes the review rather straightforward: it simply assesses scientific accomplishments rather than taking on the more difficult task of speculating about what might happen in the future. The accomplishments of the investigator are judged on the sometimes interrelated elements of productivity, significance, innovation/scientific approach, use of DIR resources, and mentoring of fellows and students over the last four years. In this task, we ask the ad hoc reviewers to provide their judgment based on the investigator’s publications, write-up, presentation, and responses to questions during the review. However, over the last decade or so I have noted a disturbing trend of equating impact in the field with the journal in which a work is published rather than with the substance of the work. This has sometimes resulted in a more careful evaluation of the journals in which work was published than of the science that was performed. Publication in a handful of “high impact journals” (HIJ), usually with a broad scope of interest, was becoming the benchmark for a successful research program rather than the science itself. The old phrase “publish or perish” is changing to “publish in HIJ or perish.” This practice has sometimes minimized the BSCs’ consideration of the investigator’s development of novel insights, technologies, or new hypotheses. Indeed, reducing the review to impact factors or similar metrics could make the whole peer-review process unnecessary: simply calculate your impact and rely on the journal-review process. In response to this trend, our current charge to the BSCs and ad hoc members asks the reviewers to bring their judgment of the scientific contribution to the table, not the editorial practices of a few journals. I am very pleased to report that our ad hoc reviewers and BSCs have taken this charge very seriously, and excellent discussions of the science produced in the DIR have evolved. In addition to our review boards, numerous other investigators share the opinion that the scientific content is more important than the journal in which it is published. A group of those investigators generated a document entitled “San Francisco Declaration on Research Assessment: Putting Science into the Assessment of Research” at the recent American Society for Cell Biology annual meeting; it makes many of the points I present here concerning the emphasis on where one publishes rather than on what is published.

What is the nature of this biomedical publication “funnel” we have begun to create? As the pay line of the NIH and other grant-awarding agencies dips to the ∼10% region, it is clear to our community that any scientific review process attempting to cull the best 10% is difficult, and many outstanding proposals are going unfunded. I believe that most investigators would argue that even a 20% pay line is difficult to defend as capturing the best work presented. In a recent article in Nature on the issue of low grant acceptance rates, entitled “Research funding: Making the cut” (Powell, 2010), Dr. Dick McIntosh, emeritus at the University of Colorado, stated, “That’s in a range (∼20%) where you have lost discrimination.” The chairman of the American Cancer Society grant review panel agreed, stating, “Deciding between the top grants, I don’t want to say it’s arbitrary, but it’s not really based on strong criteria.” This low acceptance rate has generally been described as a major impediment to furthering biomedical research. However, when I look at the published acceptance rates of the HIJ (when available online), I find that most of these journals are operating well below the 10% acceptance rate that we find so troubling in evaluating research grants. Thus, one could argue that outstanding work is not being published in the HIJ simply because of the inherent limitations of any scientific review in which only a small fraction of the work is accepted, independent of the rigor of the review. Using a system that accepts less than 10% of submitted manuscripts as an absolute gateway to a successful review is, at best, problematic, and just as distressing as the current pay lines for grants. It is also unclear why we should rely on the review processes of journals, which in most cases involve only a couple of reviewers, when a full open grant review panel operating at even higher acceptance rates gives us concern about missing outstanding science. It is important to note that the “journal funnel” is a creation of our biomedical research community, and we are capable of opening this reduced aperture, whereas the research grant pay line is not within our control, depending simply on the economics of the grant-awarding agencies.

On a personal level, a fellow from my laboratory recently took an academic position after his postdoc, and we were reflecting on what it would take for him to be successful there. Surprisingly, the fellow identified the acceptance rates of the HIJ, rather than NIH grants, as the biggest, or first, barrier. Why? First, the start-up funds from the academic program were adequate to get started, and early career awards were available from the NIH and other sources. Second, the fellow’s impression was that without a track record of publications in HIJ, he would get neither the larger R01 grant nor a promotion at his institution. This is an opinion shared by many of the junior faculty with whom I interact. Again, the importance of HIJ publications is a self-inflicted wound created by the many review processes generated by the biomedical research community, not by a government bureaucracy or a group of deans. I fear that the most negative impact of this virtual funnel will be on the recruitment of new junior investigators and the success of existing ones. Again, we created this implied requirement of HIJ publications; we can remove it as well.

Publication in an HIJ is laudable, and such a paper is likely to get more attention than one in a more focused journal; however, we must realize that using the HIJ review system as our gateway to judging the scientific performance of a program is flawed. Again, my issue is not with the specific process, editors, or funding mechanism of these journals; it is simply that a successful trek through a review process that accepts only work judged to be in the top ∼10% should not be the gateway to continued scientific support and recognition. Thus, I have asked our intramural review process to broaden the “funnel”: to give credit for papers published in specialty journals that provide a rigorous review of the science presented, and for work that has had a major positive impact on the development of a given field. When presented to our review panels, this charge has generally been accepted as an excellent and appropriate goal of the research program, and vigorous scientific discussions have emerged in the reviews.

Regarding the overall evolution of this process, I would like to thank all the members of the NHLBI BSCs over the years and the dedicated ad hoc members of our review panels. I would especially like to thank the past chairs of the BSCs, who have helped evolve this process over the last decade. Without the effort and judgment of scientists willing to take valuable time to review our programs, it would be impossible to make appropriate decisions on how to distribute our valuable resources most effectively.

References

Powell, K. 2010. Research funding: Making the cut. Nature. 467:383–385. doi:10.1038/467383a
