In order to advance the peer-review-based evaluation system, Zimmermann et al. (2012) suggested using network-based statistics (NBS) combined with modern internet search tools. In my opinion, the new system they envisage includes many useful and positive aspects, such as openness, visibility of pre-publication manuscripts, researchers’ initiatives to put forward colleagues’ papers for review and publication, increased transparency of the pre-publication procedures, etc. The main underpinning of the proposal by Zimmermann et al. (2012) is an attempt to increase the quality of published works and of the to-be-shared scientific ideas made accessible to a wide community of scientists.
However, the system as envisaged may also bring some questionable, if not counterproductive, results unless these are counteracted and taken care of ab ovo. Let me share a few worries:
The new NBS-based and fully integrated system carries a danger of expansion and overpublication: in terms of the mass of published and accessible materials, and of the burden on scientists as authors, reviewers, commentators, database monitors, etc. This also means that if authors justly feel an injustice has been done to them in evaluations and commentaries, they cannot, physically and in terms of time resources, “wrestle” with all of this. No sufficiently strong, visible (and numerous) explanations, counterarguments, or specifications can be provided. People become buried under an overload of materials and duties they cannot cope with, and frustration will not decrease compared to the present habitual system, but may even increase.
The NBS system is basically a system capable of offering quantitative data on publishing affairs and on the activities of accessing and using the papers included in the network’s database. But how does quality emerge from quantity? Traditionally, top experts (read: editors and reviewers), dispersed among very many journals with their own traditions and idiosyncrasies, have made their evaluation-based decisions. This is an elitist system combined with diversity and chances for “many tastes.” Citation statistics are essentially based on (i) whether someone has read the paper and/or knows its scientific merits and therefore uses it in his/her own work for substantive reasons, and (ii) scientific-political motivations such as flattery or service to friends. So, roughly half of the indices of impact reflect the quality and relevance of the publication. In the new system suggested by Zimmermann et al. (2012), where download statistics become the main measure of impact, the relative role of the substantive factors may be disproportionately diminished. Compared to a citation (which carries some responsibility on the part of the citing author with regard to the relevance and quality of the cited work), a download click is a much less responsible action. There are very many reasons for downloads besides the paper or the author being of high quality: curiosity, intrigue, habit, routine database checks, and the size of the scientific “brotherhood” or “alumni circle” (the larger it is, the more clicks and downloads, irrespective of the quality of the work). Quality is essentially not reflected in download statistics. Unless some other means of evaluation focused on substance and quality are used, the download statistics in NBS may backfire. Quality cannot be automatically derived from quantity. (For my own views on related matters see Bachmann, 2011.)
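To make this worry concrete, consider the following purely illustrative toy simulation; it is my own sketch, not part of the proposal by Zimmermann et al. (2012), and the weights, distributions, and variable names are arbitrary assumptions. If downloads are driven mainly by the size of one’s “circle” and by routine browsing, with only a weak contribution from quality, then download counts end up tracking community size almost perfectly and quality hardly at all.

```python
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation, to avoid external dependencies."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)

papers = []
for _ in range(10_000):
    quality = random.random()                 # latent scientific merit, 0..1 (assumed)
    community = random.lognormvariate(0, 1)   # size of the author's "alumni circle" (assumed)
    # Assumed model: downloads depend strongly on community size and routine
    # browsing noise, and only weakly on quality.
    downloads = 5 * community + 1 * quality + random.expovariate(1)
    papers.append((quality, community, downloads))

qualities, communities, download_counts = zip(*papers)
print("corr(downloads, quality):  ", round(pearson(download_counts, qualities), 2))
print("corr(downloads, community):", round(pearson(download_counts, communities), 2))
```

Under these assumed weights the correlation of downloads with community size comes out near 1, while the correlation with quality is close to 0; the point is only that a click-count measure inherits whatever actually drives the clicks, not quality per se.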
The wide-open, community-controlled system has another danger: that of scientific populism leading to an increasing dominance of mediocre (or scientific-power-structure-based) ideas, together with marginalization of truly innovative, ahead-of-their-time ideas and approaches. In the traditional system, because of the lesser integration of journals, schools, and authorities, and because of a wider variety (and perhaps partly because of the narrow circle of reviewers/editors), novel thought and approaches have always had their chance, sooner or later. In the envisaged community-based, totally integrated, and fully open system, the social-psychological rules of power-structure formation and the increased influence of factors depending on persuasive and marketing skills may gradually gain the upper hand over talent-based, ahead-of-time, or otherwise original thinking. By analogy, the selection factors in the evolution of scientific ideas, instead of being varied and dispersed, become unitary, and therefore potentially counterproductive (if not fatal) for the evolution of scientific thought. Furthermore, if you write as expected by the majority and stay within the prevailing paradigm (or within the views defended by the leaders), you get praise and acceptance; but if you present views ahead of your time or not fitting the views of the leaders in the field, you are not popular and your evaluative ranking is low. Consequently, the new NBS system should build in some procedures or principles whereby diversity, originality, and departure from bandwagon approaches will not be jeopardized. The advancement of science has always been driven by minds representing a minority of thought at their time. This is despite the fact that peer review has been present for a long time already, whether as reviewing for journals or as collegial pre-publication criticism. Now the question is whether the envisaged NBS-based system (Zimmermann et al., 2012) would be a more favorable context for innovative (ahead-of-time or simply original) scientific development than the traditional system of evaluation. “Community-based” brings an association with “communism,” and we all know where that ultimately ends up. Some private elitism is inevitable in such realms as science and discovery. If a system is somewhat populism-prone and disproportionately communal, advancement based on intellectual freedom may be in danger. In my view, scientific impact means whether important new facts, theoretical developments, innovations in methods, or syntheses of earlier views have been achieved, i.e., whether something has really been advanced in terms of influencing the ways we think about and understand our subject. Network statistics may be somewhat irrelevant for knowing about substantive impact and may even motivate more conformism.
References
- Bachmann, T. (2011). Fair and open evaluation may call for temporarily hidden authorship, caution when counting the votes, and transparency of the full pre-publication procedure. Front. Comput. Neurosci. 5:61. doi: 10.3389/fncom.2011.00061
- Zimmermann, J., Roebroeck, A., Uludag, K., Sack, A. T., Formisano, E., Jansma, B., De Weerd, P., and Goebel, R. (2012). Network-based statistics for a community driven transparent publication process. Front. Comput. Neurosci. 6:11. doi: 10.3389/fncom.2012.00011