F1000Res. 2018 Oct 15;7:1185. Originally published 2018 Aug 3. [Version 2] doi: 10.12688/f1000research.15429.2

The Congress Impact Factor: A proposal from board members of the World Society of Emergency Surgery (WSES) and the Academy of Emergency Medicine and Care (AcEMC)

Belinda De Simone 1,a, Luca Ansaloni 2, Micheal Denis Kelly 3, Federico Coccolini 2, Massimo Sartelli 4, Salomone Di Saverio 5, Michele Pisano 2, Gianfranco Cervellin 6, Gianluca Baiocchi 7, Fausto Catena 1
PMCID: PMC6208567  PMID: 30467521

Version Changes

Revised. Amendments from Version 1

In the updated version of the article, we followed the reviewers' suggestions to clarify some issues and to discuss the limitations of the IFc. These limitations concern the real value of the H-index in evaluating an author's scientific activity. We agree with the criticism reported in the literature, but at present the H-index remains the most widely used indicator of an author's activity, and in the development of the IFc it contributes to estimating the scientific impact of an invited lecturer on a congress.

Abstract

Many scientific congresses and conferences are held every year around the world. The aim of the World Society of Emergency Surgery (WSES) and the Academy of Emergency Medicine and Care (AcEMC) was to develop a simple mathematical parameter as an indicator of the academic quality and scientific validity of a congress. In this opinion article, a new metric, the Congress Impact Factor (IFc), is proposed, drawing on the widely used Impact Factor as an indicator of journal prestige and on H-index analysis.

The IFc is derived from the mathematical ratio between the mean H-index of invited lecturers, normalized for lecture topic, and the number of lectures at the conference. In the case of multiple sessions, the mean of all session IFcs is calculated along with its standard deviation. We conclude that the IFc can be a useful measure for evaluating and comparing congress prestige, and may also represent a useful parameter for improving an academic curriculum and for helping participants choose the most prestigious meetings for their education.

Keywords: Congress Impact Factor, H-index, Educational Program, Scientific Quality, Academic Curriculum

Introduction

Many scientific congresses, meetings and conferences are organized each year around the world. Each congress may be promoted by a scientific society, which supports and organizes the scientific sessions, choosing topics and inviting national and international scientists as discussants, speakers or chairs. The choice to attend a specific congress is largely based on personal preferences, the scientific area of interest and/or research, or simply a desire to investigate, update and discuss topics of scientific relevance within the scientific community. Identifying the most useful and prestigious congresses and conferences organized by scientific societies is challenging, especially for young doctors who have not yet gained sufficient expertise. A congress can only have scientific value when it is supported by a good scientific program; the lectures delivered by experts in the field are essential for analyzing and discussing different medical and surgical topics 1.

The journal Impact Factor (IF), originally conceived by Irving H. Sher and Eugene Garfield in the early 1960s, is a bibliometric parameter aimed at evaluating journal prestige. It is usually calculated by dividing the number of citations in the previous two years by the number of citable items published in the same period 2. A journal's IF is therefore based on two elements: the numerator, which is the number of citations in the current year to items published by the journal in the previous two years, and the denominator, which is the number of citable items published in the previous two years 3, 4. Citation data are obtained from a database now maintained by Clarivate Analytics (formerly by the Institute for Scientific Information). The list of journal IFs is then published in the InCites Journal Citation Reports, which is hence a useful means of establishing the absolute and relative (i.e., within a specific scientific field) prestige of a journal. Notably, albeit originally conceived for evaluating journal prestige, the IF is occasionally also used to evaluate scientists according to the number of articles they publish in high-IF journals 5–7.

Unlike the IF, the H-index is a metric used to evaluate a scientist's prestige according to the number of citations received (https://scholar.googleblog.com/2012/04/google-scholar-metrics-for-publications.html) 8. The H-index was proposed in 2005 by Jorge E. Hirsch as a tool for assessing the relative quality of theoretical physicists 9 and is sometimes called the Hirsch index or Hirsch number. By definition, a scholar with an index of x has published x papers, each of which has been cited in other papers at least x times 10. It therefore combines the number of publications with the number of citations per publication to evaluate a researcher's scientific activity, rather than relying on the total number of citations or publications alone. Its limit is that the H-index can only be used meaningfully to compare scientists working in the same field.
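To make the definition concrete, here is a minimal sketch in Python (ours, not part of the original article) that computes an H-index from a list of per-paper citation counts.

def h_index(citations):
    """Return the H-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the rank-th most cited paper still has at least rank citations
            h = rank
        else:
            break
    return h

# A scholar whose papers are cited [10, 8, 5, 4, 3, 0] times has an H-index of 4:
# four papers have at least four citations each.
print(h_index([10, 8, 5, 4, 3, 0]))   # -> 4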

The congress impact factor

The aim of this opinion article is to present a mathematical coefficient to assess the quality and academic validity of a scientific congress, using the IF formula and the H-index to create a useful tool: the Congress Impact Factor (IFc).

Calculation

We propose that the IFc is calculated using the following formula:

\mathrm{IFc} = \dfrac{\text{mean H-index of invited lecturers, normalized for lecture topic}}{\text{number of lectures on the topic at the congress}}

The mean H-index of lecturers normalized for lecture topic is calculated from Google Scholar using the Publish or Perish software (Harzing.com). Obtaining a topic-normalized H-index with Publish or Perish is straightforward: the query is sent to Google Scholar using the author's name and surname, which returns that author's overall H-index; the search is then narrowed to the lecture topic to obtain the H-index normalized for that topic for that author. All results should be checked to confirm they belong to the right scientist, excluding non-relevant hits.
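The query and screening steps above are manual; the sketch below (our illustration, not part of the article) only shows how one might tabulate the screened Publish or Perish results and derive each lecturer's topic-normalized H-index. The file name and the column names "lecturer" and "cites" are hypothetical.

import csv
from collections import defaultdict

# Hypothetical export: one row per screened paper retrieved with a
# '"Name Surname" "topic"' query, with that paper's citation count.
per_lecturer = defaultdict(list)
with open("open_abdomen_citations.csv", newline="") as f:   # hypothetical file
    for row in csv.DictReader(f):
        per_lecturer[row["lecturer"]].append(int(row["cites"]))

def h_index(citations):
    # Compact equivalent of the H-index helper sketched in the Introduction.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

# Topic-normalized H-index per lecturer.
normalized_h = {name: h_index(cites) for name, cites in per_lecturer.items()}
print(normalized_h)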

Subsequently, the mean of all lecturers' H-indices at the congress, normalized for lecture topic, is calculated together with its standard deviation. This value is divided by the number of lectures given at the congress, yielding the IFc.

Then the mean of all standard deviations must be calculated.
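As a worked illustration of the steps just described, here is a minimal sketch in Python (ours, with invented numbers) that computes the IFc for a single session from hypothetical topic-normalized H-indices and the number of lectures.

from statistics import mean, stdev

def congress_if(normalized_h_indices, n_lectures):
    """IFc = mean topic-normalized H-index of invited lecturers / number of lectures."""
    m = mean(normalized_h_indices)     # mean normalized H-index
    sd = stdev(normalized_h_indices)   # its standard deviation, reported alongside the IFc
    return m / n_lectures, m, sd

# Hypothetical session with five invited lectures:
ifc, m, sd = congress_if([12, 9, 15, 7, 20], n_lectures=5)
print(f"mean H-index = {m:.2f} (SD {sd:.2f}), IFc = {ifc:.2f}")   # mean 12.60 (SD 5.13), IFc 2.52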

Considerations:

  • The Chair's H-index is always excluded, because chairs do not give lectures.

  • Only invited lectures should be considered.

  • Free-paper presenters are excluded, because their academic value is too unpredictable and variable: we do not know how much they will influence the literature in the future. Will they be published? In which journal? Will they be cited, and how many times?

  • In the case of a multi-session congress, the mean of all session IFcs plus its standard deviation should be calculated.

Validation

Methods. As an example, we calculated the IFc for the first day of the Open Abdomen International Consensus Conference held in Dublin in July 2016. This was a consensus conference on the critical surgical abdomen that produced guidelines on the indications and benefits of the open abdomen in non-trauma patients, published in the World Journal of Emergency Surgery 11. There were no other published proceedings of this conference. For comparison, we calculated the H-index for the same lecturers normalized for a different topic, "acute leukemia", in which none of the lecturers had specific expertise. The following search terms were used in Publish or Perish to calculate each lecturer's H-index and the mean H-index for the two topics ( Table S1): "Name Surname" and "open abdomen" for one evaluation, and "Name Surname" and "acute leukemia" for the other. The comparison was made with Student's t-test. Statistical analysis was performed using IBM SPSS Statistics 22, and p<0.05 was considered significant.

Results. The invited speakers in the two sessions of the first day were 14 international emergency and trauma surgeons with specific expertise in the open abdomen field. Table S1 shows the results of the IFc calculation based on the topic-normalized H-index. The mean normalized H-index for open abdomen was 13.57 (SD 8.033), giving an IFc of 0.96. The mean normalized H-index for the same speakers on a topic outside their expertise (acute leukemia) was 1.85 (SD 1.80; Table S1), giving an IFc of 0.13 for this hypothetical congress. The difference between the normalized H-indices calculated for these two topics was statistically significant (p=0.0001).
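As a cross-check, the reported comparison can be approximately reproduced from the summary statistics alone. The sketch below (ours, not from the article) uses SciPy instead of SPSS and assumes an unpaired Student's t-test with equal variances; the article states only that Student's t-test was used, so the exact variant is our assumption.

from scipy.stats import ttest_ind_from_stats

# Summary statistics reported above: 14 speakers per topic evaluation.
t, p = ttest_ind_from_stats(mean1=13.57, std1=8.033, nobs1=14,
                            mean2=1.85, std2=1.80, nobs2=14,
                            equal_var=True)
print(f"t = {t:.2f}, p = {p:.1e}")   # p is far below 0.05, consistent with the reported result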

Discussion

In evaluating the quality and quantity of publications, two major categories of bibliometric indicators are available: quantitative indicators, which measure a researcher's productivity, and performance indicators, which evaluate the quality of publications 12. The H-index is one of many available bibliometric indicators and is the most popular one for evaluating the academic and scientific activity of a researcher 6. In 2005, physicist Jorge E. Hirsch developed this index as a way of quantifying the output of an individual researcher. Hirsch stated: "I propose the index h, defined as the number of papers with citation number ≥ h, as a useful index to characterize the scientific output of a researcher" 9.

The H-index is very useful in conceiving the IFc as a parameter to assess the scientific quality of the countless congresses and conferences proposed every year by scientific societies. The scientific impact of a congress rests on a scientific program worthy of attention, and we propose this simple indicator to measure the quality of a congress program based on the quality of its invited lecturers. The IFc combines the H-index with the principle underlying the IF calculation, in which the citation count is "diluted" by the number of published articles; for the IFc, the dilution is performed with the number of lectures planned at the congress. We use the H-index of the invited lecturers, normalized for the specific topic, to capture their scientific potential and to avoid the possibility that a highly cited scientist gives a lecture on a field outside their expertise, which would decrease its educational effect. By dividing the normalized H-index by the number of lectures, we obtain a real-time picture of the quality of the educational meeting, with clear evidence of the congress's scientific impact. Only a limited number of good-quality lectures yields a high IFc and effective education of congress participants.

The IFc is based on the H-index, which is currently considered an indicator of scientific quality, and on the philosophy of the IF; these are currently used to evaluate the strength of a scientist and of a scientific journal, respectively.

In conceiving the IFc, we reviewed the literature about the H-index and are aware of the criticism of the H-index as a realistic indicator of the quality of an author's work.

Hirsch's initial idea was to discriminate investigators who are persistently productive from those who experienced an isolated auspicious moment in their scientific life. Over time, it has become clear that the H-index assumes that researcher A, who published a study that was extensively cited, deserves less recognition than researcher B, who publishes often and regularly 13–15.

With the H-index it is impossible to compare investigators at different stages of their careers (even assuming comparisons among those in the same field, which is itself an ambiguous factor). There is a certain correlation between an investigator's age and their H-index: some articles continue to accumulate citations, and this number increases with the time elapsed since first publication 13–15.

Another issue contributing to the H-index's limitations is that research groups follow different rules regarding authorship. It is assumed that a researcher's name is added to the author list only after a considerable contribution to the published work. However, fairly often being a "middle" author does not reflect a significant contribution and, it is worth emphasizing, the H-index does not differentiate between authors who hold the most valuable first and last authorship positions and those whose name appears as one among many listed authors 13–15.

Furthermore, the H-index does not discriminate self-citations and friendly cross-citations: it is not difficult to predict that even investigators who are poorly cited by others, but who publish prodigiously while citing mostly themselves or being cited by friends and colleagues, will easily increase their H-index 13–15.

Being aware of all these unresolved issues, and having considered the indicators recently proposed to meet the need for more realistic and precise measures of an author's scientific activity, we note that the H-index remains the most widely used bibliometric indicator, and we therefore decided to use it to calculate the IFc.

The IFc describes the scientific expertise of the lecturers on a specific topic and provides a quantitative evaluation of the quality of the meeting. For validation, we calculated the IFc for the WSES Consensus Conference on the Open Abdomen: this was a high-level meeting on a specific topic (the open abdomen) to which international experts were invited. The results of this validation suggest that the IFc can be an effective qualitative/quantitative metric for assessing congresses.

One limitation of the IFc is that it would be difficult to calculate for very large and heterogeneous congresses (e.g. the congress of the American College of Surgeons), because many different symposia would have to be evaluated; however, the final IFc could be the mean of all these different IFcs, with the standard deviation used to analyze their dispersion.
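To make the aggregation explicit, here is a minimal sketch (ours, with invented per-session values) of how the overall IFc of a large multi-session congress could be summarized as the mean of the per-session IFcs, with their standard deviation describing the dispersion.

from statistics import mean, stdev

# Hypothetical per-session IFc values from a multi-track congress:
session_ifcs = [0.96, 0.41, 1.20, 0.75]
print(f"congress IFc = {mean(session_ifcs):.2f} (SD {stdev(session_ifcs):.2f})")   # 0.83 (SD 0.33)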

To the best of our knowledge, there is nothing like a formal IF for conferences. In the past, conference proceedings publications were used to rate congresses as "lower quality" or "higher quality", especially when articles were published in peer-reviewed international journals included in the Thomson Reuters Journal Citation Reports (http://wokinfo.com/products_tools/multidisciplinary/webofscience/cpci/). However, this system provides retrospective and quite delayed information, which is not very useful for choosing a congress prospectively. In other cases, conference proceedings were ranked in Thomson Reuters using the Conference Proceedings Citation Index, but this is not comparable with an IF, and again the information is retrospective and inaccurate (the congress is evaluated a posteriori and without taking the lecturers into consideration). There is also the CORE Conference/Journal Ranking (http://www.scimagojr.com/journalsearch.php?q=conference&tip=jou; http://arnetminer.org/page/conference-rank/html/All-in-one.htm), but again this is not based on strong indicators. Other sources could prove useful as an estimate of conference quality: Google Scholar lists top venues, mixing journals and conferences, and reports the H-index of the venue instead of an IF, but this too can be misleading (a venue with a high H-index can organize a congress with low H-index lecturers).

Choosing the best congress to attend can be difficult, especially for young attendees. Residents, scientific researchers and students need a metric they can use as an indicator of the scientific quality of a congress, so that they can attend congresses with high scientific impact and build a competitive academic curriculum.

We believe that the IFc is an effective evaluation tool for a scientific meeting and can become a valid instrument for selecting the most appropriate congresses with which to update one's knowledge in a specific field of research. This can contribute to developing a competitive academic curriculum vitae, i.e. reporting in the curriculum vitae the different conferences attended together with their respective IFc.

Conclusions

Bibliometric indicators are essential to evaluate the scientific activity of a researcher, an institution, or a journal.

Many congresses are organized and held every year, and analysis of their programs shows that not all have high scientific quality, despite being sponsored by international scientific societies and biomedical companies. In addition, registration fees are charged, so it is very important to attend the best meetings, those that can improve one's knowledge of a specific topic, and to be able to measure the quality of any given conference. We propose the IFc as the mathematical ratio between the mean H-index of invited lecturers normalized for lecture topic and the number of lectures at the conference. We believe that the IFc can be a useful metric to assess the scientific validity of a congress, helping attendees to choose the best-quality meetings to attend.

Data availability

All data underlying the results are available as part of the article and no additional source data are required.

Funding Statement

The author(s) declared that no grants were involved in supporting this work.

[version 2; referees: 2 approved]

Supplementary material

Table S1: Example of the IFc calculation for the Open Abdomen Congress 2016, in comparison with the IFc for a hypothetical Acute Leukemia congress with the same authors.

References

  • 1. Catena F, Moore F, Ansaloni L, et al.: Emergency surgeon: "last of the mohicans" 2014-2016 editorial policy WSES-WJES: position papers, guidelines, courses, books and original research; from WJES impact factor to WSES congress impact factor. World J Emerg Surg. 2014;9(1):14. 10.1186/1749-7922-9-14
  • 2. Garfield E: The history and meaning of the journal impact factor. JAMA. 2006;295(1):90–93. 10.1001/jama.295.1.90
  • 3. Seglen PO: Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314(7079):498–502. 10.1136/bmj.314.7079.497
  • 4. Saha S, Saint S, Christakis DA: Impact factor: a valid measure of journal quality? J Med Libr Assoc. 2003;91(1):42–46.
  • 5. Garfield E: The meaning of the impact factor. Int J Clin Health Psychol. 2003;3(2):363–369.
  • 6. Lippi G, Borghi L: A short story on how the H-index may change the fate of scientists and scientific publishing. Clin Chem Lab Med. 2014;52(2):e1–3. 10.1515/cclm-2013-0715
  • 7. Meral UM, Alakus U, Urkan M, et al.: Publication Rate of Abstracts Presented at the Annual Congress of the European Society for Surgical Research during 2008-2011. Eur Surg Res. 2016;56(3–4):132–40. 10.1159/000443608
  • 8. Jones T, Huggett S, Kamalski J: Finding a way through the scientific literature: indexes and measures. World Neurosurg. 2011;76(1–2):36–8. 10.1016/j.wneu.2011.01.015
  • 9. Hirsch JE: An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–72. 10.1073/pnas.0507655102
  • 10. de Meijer VE, Knops SP, van Dongen JA, et al.: The fate of research abstracts submitted to a national surgical conference: a cross-sectional study to assess scientific impact. Am J Surg. 2016;211(1):166–71. 10.1016/j.amjsurg.2015.06.017
  • 11. Coccolini F, Montori G, Ceresoli M, et al.: The role of open abdomen in non-trauma patient: WSES Consensus Paper. World J Emerg Surg. 2017;12:39. 10.1186/s13017-017-0146-1
  • 12. Joshi MA: Bibliometric indicators for evaluating the quality of scientific publications. J Contemp Dent Pract. 2014;15(2):258–62. 10.5005/jp-journals-10024-1525
  • 13. Masic I, Begic E: Scientometric Dilemma: Is H-index Adequate for Scientific Validity of Academic's Work? Acta Inform Med. 2016;24(4):228–232. 10.5455/aim.2016.24.228-232
  • 14. Aznar JI, Guerrero E: [Analysis of the h-index and proposal of a new bibliometric index: the global index]. Rev Clin Esp. 2011;211(5):251–6. 10.1016/j.rce.2010.11.013
  • 15. Kreiner G: The Slavery of the h-index—Measuring the Unmeasurable. Front Hum Neurosci. 2016;10:556. 10.3389/fnhum.2016.00556
F1000Res. 2018 Oct 30. doi: 10.5256/f1000research.18107.r39464

Referee response for version 2

Francesco Azzaroli 1

I have read the revised version of the manuscript and I find it now ready for approval.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

F1000Res. 2018 Sep 17. doi: 10.5256/f1000research.16814.r37590

Referee response for version 1

Luca Luzzi 1

The paper from Ansaloni and co-authors touches on some crucial aspects of scientific dissemination. The first is the quality of teaching: especially for young surgeons and other specialists, meetings are probably the first and most important instrument for keeping up to date. However, while some international meetings represent the masterpiece in their field, many others have less impact or appear redundant, with lower quality. With a standardized method to rank meetings, young doctors can be directed towards the best-quality information, avoiding a waste of money and time. On the other hand, a scientific meeting ranking of this kind could help companies direct their investments to the best offers of scientific training. One final consideration to take into account: the meeting ranking should recognize that local update meetings still have value for improving medical and scientific culture in peripheral centres, because they are more accessible than international ones. It could be reasonable to address this aspect with a different standardized classification of events, for example: international meetings, national meetings, inter-study groups, or updates.

In my opinion the paper is certainly worthy of publication and of open discussion in the scientific arena.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

F1000Res. 2018 Oct 6.
belinda de simone 1

Dear Colleague,

Thank you for your opinion and suggestions. You have captured our aim in proposing the IFc as a tool to evaluate the scientific impact both of an international congress and of a meeting organized to update knowledge in a specific field of research. With the IFc, we can select the best congress or meeting with which to strengthen our academic curriculum.

F1000Res. 2018 Aug 30. doi: 10.5256/f1000research.16814.r36778

Referee response for version 1

Francesco Azzaroli 1

I reviewed with interest the paper entitled "The Congress Impact Factor: a proposal…" and, as far as my knowledge goes, this is the first time that a metric evaluation of a medical congress has been proposed.

The authors propose to measure an impact factor based on the mean H-index of invited lecturers normalized for lecture topic (i.e. the H-index of an author limited to the topic of the invited lecture) related to the number of invited lectures.

Obviously, this metric has several limitations that come both from the intrinsic original defects of the H-index and from the complexity of evaluating the quality of a conference and of the speakers.

In fact, the H-index reflects only the number of papers that have received a certain number of citations and does not include any information about the real contribution of that author to the manuscript, nor the number of self-citations. Furthermore, it tends to increase with time as citations accumulate, even when the author is no longer productive.

Because of these limitations, several attempts have been made to improve the H-index by taking into account the author's contribution to the paper or the researcher's period of activity, adjusting for the number of years since the first publication. Nonetheless, there is still no perfect index to measure the quality and quantity of research, which may be affected by many factors 1 - 4. In fact, some researchers who have deeply impacted the world of science do not have an impressive H-index 2, 3.

In the world of medicine there is another point to consider, namely practical expertise. The professionalism of a physician is not represented by the H-index. We all know that being scientifically very productive does not always correspond to being a "hands on" expert, and measuring practical expertise is an even more challenging task. The implementation of such an index could significantly affect the choice of speakers and may leave out less productive "hands on" experts.

Finally, the metric may be affected by the number of speakers; i.e. a small conference may see its H-index rise if just a few authors with high H-index are invited. In such a case, the median with the range may better reflect the overall composition of invited speakers.

Despite all these observations, I believe this paper deserves publication in order to start a serious discussion about scientific conferences. However, I believe the road to develop an acceptable measure of the quality of a conference is still long and rough.

Coming specifically to the paper I have the following comments:

  • Page 4, last paragraph before the discussion section: it should not be “between these two congresses…” but “…topics…”

  • The discussion section should be partially rewritten taking into account the comments I made above and the fact that the H-index is not so robust. The authors should acknowledge the limitations of the metric and the possible drawbacks.

In the last paragraph of the discussion the authors state that the conference impact factor can become a valid instrument of education to develop a competitive academic curriculum vitae. I disagree with this concept, since participating in a conference does not necessarily correspond to an improvement of professional knowledge. In this respect the CME program is closer to this concept than the IF of a conference, which does not measure learning. I would erase this sentence, limiting the conclusion to the fact that the IFc may represent the first step in developing a simple tool to evaluate scientific conferences.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

References

  • 1. Bucur O, Almasan A, Zubarev R, et al.: An updated h-index measures both the primary and total scientific output of a researcher. Discoveries (Craiova). 2015;3(3). 10.15190/d.2015.42
  • 2. Masic I, Begic E: Scientometric Dilemma: Is H-index Adequate for Scientific Validity of Academic's Work? Acta Inform Med. 2016;24(4):228–232. 10.5455/aim.2016.24.228-232
  • 3. Kreiner G: The Slavery of the h-index—Measuring the Unmeasurable. Front Hum Neurosci. 2016;10:556. 10.3389/fnhum.2016.00556
  • 4. Ahangar H, Siamian H, Yaminfirooz M: Evaluation of the Scientific Outputs of Researchers with Similar H Index: a Critical Approach. Acta Inform Med. 2014;22(4):255–258. 10.5455/aim.2014.22.255-258
F1000Res. 2018 Oct 6.
belinda de simone 1

Dear Professor Azzaroli,

Thank you for having reviewed our opinion paper.

We agree with you in highlighting the limitations of the H-index. We followed your suggestions and modified the manuscript, considering all the issues you raised and updating the references, as you can see in the updated version of the paper.

We are aware of all the criticism in the literature about the real value of the H-index, but at present there is no other indicator that can substitute for it.
