Mol Biol Cell. 2013 Apr 1;24(7):887–889. doi: 10.1091/mbc.E12-12-0899

Magazine or journal—what is the difference? The role of the monitoring editor

Anthony Bretscher
Editor: Doug Kellogg
PMCID: PMC3608498  PMID: 23533210

Abstract

Scientific communication, career advancement, and funding decisions all depend on research publications. The way manuscripts are handled by high-visibility, professionally edited magazines differs from the way academic journals, which use active scientists as monitoring editors, evaluate them. In this essay, I discuss the benefits that come with the involvement of active scientists. I enumerate the decisions a monitoring editor has to make and how he or she goes about making them. Finally, I indicate ways in which authors can help make the process a smoother and more positive experience.

INTRODUCTION

This journal has already offered guidance for reviewers (Drubin, 2011) and voiced concerns about how manuscripts are evaluated (Vale, 2012). Why another essay? Publishing papers is the bread and butter of the research scientist, yet manuscripts are evaluated in quite different ways. Here I contrast how certain venues, epitomized by Cell/Nature/Science (CNS) and their associated magazine families (I refer to these as magazines, as they aim for a high Science Citation Index rating and wide readership), handle scientific manuscripts, compared with academic journals that aim for a solid record of progress. This is an important distinction, because magazines are run by professional editors, whereas, for academic journals such as Molecular Biology of the Cell, the monitoring editor is an active scientist. This difference has become more acute as science has become ever more competitive, complex, and specialized.

PUBLISHING IN EARLIER TIMES

Let us go back in history a little bit and see what publishing a paper used to be like. When I started in this field more than 30 years ago, submitting a manuscript was more tedious but in many ways much simpler. There were far fewer journals, and it was much more obvious where a good place to publish would be. Yes, the NS (and later C) magazines were there, and they tried to publish the more flashy results. Irrespective of where you planned to submit your manuscript, you would assemble the text and figures (and actually go to the library to find the correct references!), but there was no supplemental material, because everything worth documenting was shown in the one and only print version. Titles in those days tended to accurately reflect the content of the study. It was common for the cover letter to simply read: “Dear Editor, Please find enclosed a manuscript that we would like to submit for consideration in … ” That was it. No summary of the importance of your study, as the whole manuscript would be read carefully. Then you assembled your letter with four copies of the manuscript (typed, of course, on a typewriter) and figures (printed laboriously on photographic paper in a darkroom) in an envelope, attached a stamp, and sent it off in the mail (only limited FedEx coverage in those days!). A month or so later, you received a polite letter from the editor and three (usually) very reasonable reviews. Notice: being editorially rejected was almost unheard of. If you were asked to revise the manuscript, a decision was generally made after the first revision. If accepted, your paper took 3–6 months to appear in print. Interestingly, there were very few retractions, and those that surfaced were big news.

PUBLISHING TODAY

Let us fast-forward to today. We assemble a study with vastly more data, driven to this situation mostly as a result of our collective success in developing more sophisticated and sensitive technologies. As a result, it is no longer a simple document, but often a gargantuan study, in which the online supplemental material can be larger than the print version.

As we prepare to submit a manuscript, considerable effort goes into getting it into the hands of the reviewer. The cover letter must concisely explain the high points of the study and wax enthusiastic about how important it is. Often, the title exaggerates the reasonable conclusions of the study—this is done to catch the eye of the editor (and of the cursory reader, if the paper is published). Although all the information in the cover letter is present in the manuscript, it is assumed that reiterating it in the letter will increase the likelihood that the study will be reviewed. If the study is sent to a CNS magazine, there is a tendency to try to raise it into the “exciting” or “surprising” categories. This is done in an attempt to avert an editorial rejection without satisfactory explanation (e.g., “does not represent a significant advance”). If the paper is rejected, the authors will often reiterate the importance of the study and plead to have it reviewed. This is sometimes successful, although it can lead to “negotiations” over what should and should not be in the manuscript and may add a considerable amount of time and effort to a process that has been depicted as “painful publishing” (Raff et al., 2008).

Why do we continue to strive for publications in the CNS families? In his elegant essay in this journal, Ron Vale (Vale, 2012) described how the allure of the CNS magazines is driven by promotion decisions, ability to get funded, and so on, and how we, the active scientists, are partly to blame, as we want these “golden eggs” in our curricula vitae.

A consequence of the allure of CNS magazines for reporting “surprising” new findings is that a greater percentage are found to be inaccurate, perhaps due to sloppy science and review, or are simply fraudulent. Indeed, there is a good correlation between the impact factor of a journal and the frequency of retractions (Fang and Casadevall, 2011). I attribute this to the ever-increasing demands on the professional editor. Do not misunderstand me—professional editors are very talented and hardworking individuals, but they handle such a broad range of manuscripts that they may not have the intimate knowledge necessary to consistently make good decisions on the importance of every study, and especially on selecting the most appropriate reviewers. Thus both excellent and “surprising,” but sometimes poor, studies end up being published. In a recent editorial (July 26, 2012) about errors in large data sets, Nature addressed the general problem of sloppy science. In a remarkably frank assessment, the editorial states:

Much of this sloppy science comes from the pressure to generate “surprising” results and to publish them quickly, even though they are more likely to be driven by errors than are findings that more or less follow from previous work. A researcher who reveals something exciting is more likely to get a high-profile paper (and a permanent position) than is someone who spends years providing solid evidence for something that everyone in the field expected to be true.

This pressure extends throughout the careers of scientists, and is compounded by the preference of journals (like Nature) to publish significant findings—and of media to report them.

As noted above, wrong or fraudulent studies appear in these magazines at a higher frequency and are later retracted. It does not have to be this way.

So now let us consider what happens if we send our manuscript to a journal like Molecular Biology of the Cell. A great strength of a journal like this one is the composition of the editorial board: the editor-in-chief carefully selects board members as specialists to cover the scientific areas represented by the journal. As I elaborate below, the process is different from CNS magazines in terms of initial evaluation, review, and seeking a revised version, all due to the involvement of an active scientist who knows the field intimately as monitoring editor.

THE IMPORTANCE AND ROLE OF A MONITORING EDITOR

The involvement of an active scientist as monitoring editor is even more important today than it was 20 years ago. Much more is being published, and manuscripts are more specialized and require an intimate knowledge of a large swath of techniques. It is therefore critical that the evaluation of a submitted manuscript be guided by a monitoring editor who knows the field and the available technologies, can distinguish the hard facts from the soft suggestions, and is aware of limitations in the literature. The steps in evaluating a manuscript described below represent the ways I handled manuscripts as an associate editor of MBoC (1998–2005), and how I operate for other journals. Clearly, other monitoring editors may have a slightly different style, but our approaches are essentially similar.

Step 1: Should the study be editorially rejected?

The idea of editorial rejection by academically run journals was unheard of 25 years ago, but the difficulty in securing good reviews when scientists are increasingly busy has encouraged this practice to various degrees. Such a decision is not made lightly by the editor-in-chief or a monitoring editor and is usually made for submissions that are in the wrong area for the journal or clearly would not be reviewed favorably. It helps both the journal and researcher, who now can go elsewhere without delay. There is, of course, a danger that too many manuscripts might get editorially rejected, so the editor-in-chief (also a practicing scientist!) has to monitor the situation carefully. In contrast with common practice at the CNS magazines, the whole manuscript is evaluated, and the author is provided with a clear and reasoned statement for the editorial rejection. How can an author reduce the chance a study will be editorially rejected? Consider the reasons why studies are rejected, then take a hard and detached look at what has been found, and ask yourself: does it really advance the field in the area of the journal, and if it does, do the data provide solid proof? If you are honest about this evaluation, it is unlikely a study will be editorially rejected.

Step 2: Selecting the reviewers and the first round of reviews

When a manuscript arrives at the journal's office, the editor-in-chief assigns it to a monitoring editor for evaluation or may handle it himself or herself as the monitoring editor. Perhaps the most critical function of the monitoring editor, after deciding that a study should be reviewed, is the selection of reviewers—the monitoring editor knows who does high-quality work in the area of the study and seeks out those individuals to review the manuscript. It is also important to have a balanced set of reviewers—many studies straddle different areas, so ideally at least one reviewer from each area should be selected.

Once the reviews come in, it is critical that the monitoring editor be familiar with the study (I personally read the whole manuscript), weighing the reasonable comments and dismissing unreasonable aspects of the reviews. The first decision to make is whether the study is potentially publishable: should a revision be solicited? By inviting a revision, the monitoring editor is tacitly agreeing that, if the various issues are addressed, the manuscript will be published. Presumably due to competition, reviewers have become increasingly negative over the years. As David Drubin so eloquently wrote (Drubin, 2011), this does not help the reviewer, the monitoring editor, or the authors of the study. Given this climate, it is all the more important that the monitoring editor be both clear and compassionate, identifying in the editorial cover letter the critical issues raised by the reviewers that need to be addressed.

It is common, and wrong, for reviewers to demand experiments with specific outcomes—this is experimental science, so one should be asked to undertake certain experiments and then evaluate the outcomes. For example, a review might state, “The authors must show that protein A coimmunoprecipitates with protein B.” This presupposes the outcome—what if the coimmunoprecipitation does not give the anticipated result? Is the study doomed? We had exactly such a situation, in which we showed that two proteins interact with very high affinity in vitro and colocalize perfectly in vivo, yet we could not show a convincing coimmunoprecipitation. Due to the reviewer's demands, we put in a weak coimmunoprecipitation (Reczek et al., 1997). Fifteen years later, using technologies not available to us in 1997, we discovered the fascinating basis for the weak coimmunoprecipitation: one of the proteins is amazingly dynamic in vivo, and a coimmunoprecipitation can only be convincingly captured under standard conditions if this dynamic behavior is attenuated (Garbett and Bretscher, 2012). The point here is that biology is so complex that it is often not possible to have a complete explanation. It is therefore critical that the monitoring editor decide whether there are sufficient hard data to support the conclusions, without demanding that everything fit perfectly. When I review or edit a manuscript, I try to step back from all the data and decide whether I believe the conclusions are sufficiently well supported by the available results. To estimate the study's significance, I try to evaluate whether it is likely to still be regarded as significant in five years. Obviously, these are subjective criteria, but they help me assess the potential strengths and importance of a study, and whether it is potentially publishable.

How can authors ensure a good review of their manuscript? When assembling a manuscript, consider the following points, as the monitoring editor and potential reviewers will be strongly influenced by them:

  1. Are the title and abstract informative and do they accurately reflect the outcome of the study? Remember, these are the first things the editor and reviewers will see.

  2. Has the manuscript been assembled carefully? A carelessly assembled manuscript may lead the evaluators to assume that the science is also careless.

  3. Does the introduction lay adequate groundwork for the study (and not more than it needs) and place the contributions of others in appropriate context? Omitting relevant citations is a serious misstep, and the missing citations may well be those of the reviewer!

  4. Does the results section flow logically and include the right amount of information for the data to be interpreted? I tell my students that the results should read like a clearly written novel, so you cannot wait to read the next section.

  5. Are the results presented in the figures clear and well organized? It is amazing how many people assemble panels of figures so that some of the micrographs are too small. Visually appealing and clear figures greatly enhance scientific communication.

  6. Does the discussion place the conclusions in the appropriate context, rather than just reiterating the results? Most reviewers have made up their mind about a manuscript before reading a word of the discussion. If a short discussion touches on all the salient points, that is fine.

Step 3: Revising a study

If the monitoring editor seeks a revision, the authors have the opportunity to develop a revised version and to explain in detail how the issues raised by the reviewers have been addressed. It is essential that the authors respond to every point raised by the monitoring editor and reviewers: this ensures that all points have been considered, and, since several months have usually passed since the first submission, a detailed response refreshes the memory of the monitoring editor and reviewers on the various issues. This is a critical dialogue with the monitoring editor: some holes can be filled easily, but, as our example above shows, others may not be as simple to address as the reviewers anticipated. If the revision is done correctly, there should be no need for more than a second round of review. The job of the monitoring editor is again crucial: is the study sufficiently solid to proceed with publication? Because the monitoring editor is a working scientist, he or she knows whether the authors are being reasonable or not.

WHY WE SHOULD CARE

Overall, the monitoring editor system that journals use has many advantages over the professional editor approach taken by the magazines. A consequence of having a dialogue with a monitoring editor is that processing a study for publication in a journal is more civilized, and evaluation of manuscripts is based on scholarship rather than more subjective factors, such as significance and impact. Because of the involvement of an active scientist editor in the area of the study, the editorial process is generally less haphazard. This is good for science and its practitioners. Are errors made? Of course they are, but in working with academic journals, you have the option to appeal to the editor-in-chief to investigate whether a mistake has been made. In general, the CNS journals offer no such avenue. Finally, the monitoring editor generally performs this function for free, because of his or her fascination with and love of science, whereas the professional editor is paid to seek out the manuscripts that provide the magazine with the highest profile and subscription revenue.

REFERENCES

  1. Drubin DG. Any jackass can trash a manuscript, but it takes good scholarship to create one (how MBoC promotes civil and constructive peer review). Mol Biol Cell. 2011;22:525–527. doi: 10.1091/mbc.E11-01-0002.
  2. Fang FC, Casadevall A. Retracted science and the retraction index. Infect Immun. 2011;79:3855–3859. doi: 10.1128/IAI.05661-11.
  3. Garbett D, Bretscher A. PDZ interactions regulate rapid turnover of the scaffolding protein EBP50 in microvilli. J Cell Biol. 2012;198:195–203. doi: 10.1083/jcb.201204008.
  4. Raff M, Johnson A, Walter P. Painful publishing. Science. 2008;321:36. doi: 10.1126/science.321.5885.36a.
  5. Reczek D, Berryman M, Bretscher A. Identification of EBP50: a PDZ domain containing phosphoprotein that associates with members of the ERM family. J Cell Biol. 1997;139:169–179. doi: 10.1083/jcb.139.1.169.
  6. Vale RD. Evaluating how we evaluate. Mol Biol Cell. 2012;23:3285–3289. doi: 10.1091/mbc.E12-06-0490.
