Abstract
This exploratory study aims at answering the following research question: Are the h-index and some of its derivatives discriminatory when applied to rank social scientists with different epistemological beliefs and methodological preferences? This study reports the results of five Tobit and two negative binomial regression models taking as dependent variables the h-index and six of its derivatives, using a dataset combining bibliometric data collected with the PoP software with cross-sectional data on 321 Quebec social scientists in Anthropology, Sociology, Social Work, Political Science, Economics and Psychology. The results reveal an epistemological/methodological effect that makes positivists and quantitativists, overall, more productive than constructivists and qualitativists.
Keywords: Research performance, Epistemology, Individual researchers, Social sciences, h-index, Cross-sectional survey, Google Scholar, Publish or Perish
Introduction
Since the publication of Hirsch’s paper in 2005 that proposes what is now called the ‘h-index’ as a way to quantify an individual’s research performance, many other metrics have been developed and promoted as alternative ways to assess the research performance of researchers (Egghe 2006; Sidiropoulos et al. 2006; Batista et al. 2006; Jin 2007; Schreiber 2008; Zhang 2009). Based more or less on a combination of measurements of the number of publications and the number of citations, these alternative metrics were all developed to overcome weaknesses in previous metrics, mainly in the h-index. Improvements implemented by these new metrics include a better differentiation between scientists with a similar h-index but different citation patterns (e-index), granting more weight to highly-cited publications (g-index), ensuring a better assessment of current research performance by giving much more weight to publications published during the current year (contemporary h-index), reducing the effects of co-authorship (individual h-index), and adjusting the number of citations by the age of each publication (age-weighted citation rate). Some scholars posit that these metrics are complementary and thus should be used conjointly as they offer different types of information (Bornmann et al. 2008; Bornmann and Daniel 2009).
It has also been recognized that the use of these metrics in the human and social sciences (HSS) is challenging, mainly because the mainstream bibliometric data source, ISI Web of Science, covers HSS journals poorly (Kosmopoulos and Pumain 2007; Jacso 2008). ISI Web of Science has also been criticized for doing a poor job of indexing books, chapters and reports, which are routinely produced by academics, especially in the HSS (Kosmopoulos and Pumain 2007). Thanks to Anne-Wil Harzing, researchers interested in measuring the research performance of individual researchers in the HSS can now use the Publish or Perish software (PoP), which greatly improves our ability to use the Google Scholar database, a more inclusive source of data than ISI Web of Science for HSS scientists in non-English-speaking regions. Indeed, because it relies on Google Scholar, PoP produces the h-index and its derivatives by taking into account articles, books, reports and conference proceedings written in many languages. For example, the book written in French by the first author of this paper has been cited 19 times, and this information would not have been captured if ISI Web of Science had been used.
Empirical studies using the PoP software to examine the research performance of individual researchers in the social sciences are scanty. We found one empirical study that calculated the h-index and some of its derivatives among social psychologists (Salgado and Páez 2007), but this study uses ISI Web of Science as the main data source. Furthermore, almost all empirical studies of the h-index and its derivatives conducted at the individual level are descriptive, and where correlational analyses are found, generally they explore correlations or commonalities between different bibliometric indices (e.g. Bornmann et al. 2008; Costas and Bordons 2007). To the best of our knowledge, no empirical study has yet compared the h-index and its derivatives based on their propensity to vary according to the attributes, attitudes, beliefs and behaviours of social scientists.
Theoretical and empirical works showing that scientific productivity varies between academic disciplines led researchers like Schreiber (2008) to develop an indicator aimed at adjusting for these variations between disciplines (by lowering the effect of co-authorship). Theoretical and empirical works have also demonstrated that scientific productivity varies according to researchers’ experience, and this pushed Hirsch (2005) to propose the m-quotient that adjusts the h-index to the scientific age of scientists. However, to the best of our knowledge, no researcher has yet theorized possible variations regarding the epistemological beliefs and methodological preferences of researchers. One possible explanation for this situation is that the social sciences have not been the central target of bibliometricians, compared to the natural sciences or the health sciences, where the epistemological and methodological divide between positivism and constructivism is perhaps less prominent. Indeed, the meaning of “science” is an object of contention in the social sciences, where researchers are more or less inclined towards positivism.
For researchers more inclined towards positivism, scientific activity aims primarily to explain or predict phenomena by formulating and testing explicit research hypotheses (the naturalist, nomothetical approach). Positivists also tend to consider scientific research a value-free activity, or at least one in which researchers must try to influence the research process as little as possible. On the other hand, researchers who are less inclined towards positivism (some would call them constructivists) tend to hold a different view of science. Science is instead considered an activity that aims at producing interpretations of the meaning of specific phenomena (or cases), allowing researchers to understand (rather than explain or predict) them. In this context, the formulation of causal hypotheses is no longer seen as essential, and the scientific enterprise is less often seen as value-free.
This study aims at answering the following research question: Are the h-index and some of its derivatives discriminatory when applied to rank social scientists with different epistemological beliefs and methodological preferences? The lack of relevant literature on this specific topic makes it hard to formulate a theoretically grounded research hypothesis. Nonetheless, we can reasonably assume that researchers who are more inclined towards positivism, and whose research is mainly empirical and quantitative, will tend to outperform those more inclined towards constructivism, whose works are mainly qualitative or reflexive. The paucity of theoretical studies addressing these issues leaves us no choice but to speculate, tentatively, about plausible explanatory mechanisms. First, quantitative datasets, once collected, allow researchers to produce papers more rapidly and in greater quantity than qualitative datasets, which take longer to analyze and to turn into interpretations. Second, many quantitative empirical studies are nomothetical (i.e. they produce general inferences), while many qualitative studies are more idiographic (i.e. they focus on a few specific cases and are thus less prone to generalization). Such generalizations may apply to various contexts and may therefore interest a wider audience of scholars than context-bound qualitative studies. Of course, such a judgement would have to be weighed against the substantive issues tackled by those studies, notably their societal relevance; methods alone can hardly determine the importance of a piece of work beforehand. Third, and notwithstanding this, quantitative studies, especially those working with large datasets, are generally able to increase confidence in the inferred results, which might be appealing to scholars and therefore prompt citation. Other explanations could be proposed, but this paper works under the assumption that methodological preferences are somehow linked to academic productivity and citation patterns, for reasons that have yet to be explored thoroughly and tested properly.
This empirical study aims to examine the association between epistemological (and methodological) preferences of social scientists and their h-index (and some of its alternatives). To date, no empirical study has performed this task. This study reports the results of seven regression models taking as dependent variables seven performance indices, using a dataset combining bibliometric data collected with the PoP software with cross-sectional data of 321 Quebec social scientists in Anthropology, Sociology, Social Work, Political Science, Economics and Psychology.
Data and methods
Participants and survey instrument
The study population consists of full, associate and assistant professors working in departments or schools of Anthropology, Sociology, Social Work, Political Science, Economics and Psychology located in eight academic institutions in the province of Quebec, the second most populous Canadian province after Ontario. Faculty members working for a department that does not offer graduate programs were excluded from the study population. Names and email addresses were collected 1 week before launching the survey to ensure information accuracy. A database including the name, email address, institution name and department name of 890 faculty members was created and sent to the independent survey firm, Infras International Inc. It was decided to send the questionnaire to all 890 individuals because of the small size of the study population.
The questionnaire included closed-ended questions. Types of information collected with the questionnaire include individual attributes (e.g. gender and academic rank), involvement in knowledge transfer activities (the details of which are not reported in this study), research funding, types of methods mainly used (i.e. quantitative, qualitative, mixed-methods, reflexive work) and epistemological beliefs (i.e. their position towards core notions of positivism and neopositivism). Five faculty members reviewed the survey instrument to ensure its comprehensibility and to increase its face validity. The survey was administered online from March 2010 to April 2010 (4 weeks). Faculty members first received an invitation letter by email including a URL link to the Web questionnaire and a unique access code. Faculty members from McGill University and Concordia University were sent an English invitation letter, while other faculty members received a letter written in French. The questionnaire was accessible in both languages. Three recalls were sent by email to those who did not complete the questionnaire.
From the 890 faculty members to whom an invitation letter had been sent, 356 completed the questionnaire, for a response rate of 40%. However, 35 of the 356 respondents were deemed ineligible to participate (the eligibility criteria were: holding a tenure-track position as an assistant, associate or full professor, and having held that position at least since the beginning of September 2008, as many questions asked participants to recall activities undertaken since the beginning of that period). The database thus includes information on 321 faculty members.
At the end of the data collection phase, a PhD student was given an Excel database including solely the names, department name and institution name of the 321 respondents. Harzing's Publish or Perish software (Harzing 2010) was then used to calculate all available bibliometric indices for each survey respondent. This software allows deselecting publications that do not belong to a target scientist, which often occurs because of homonyms. Further verifications, when needed and possible, were made by cross-checking against the presumed author's academic curriculum vitae. Using an ID variable, we then merged the content of this new bibliometric database (with scientists' names dropped) with our main database containing the information collected through the cross-sectional survey. The matching procedure, as well as all data analyses performed for this study, was conducted using Stata v11.0 for Mac.
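As an illustration of the matching step, here is a minimal sketch in Python with pandas; the actual procedure was run in Stata, and all file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical file and column names; the merge mirrors the procedure described
# above: bibliometric indices (names already dropped) joined to survey responses
# on a common respondent ID.
biblio = pd.read_csv("pop_indices.csv")        # columns: respondent_id, h_index, g_index, ...
survey = pd.read_csv("survey_responses.csv")   # columns: respondent_id, REFLX, QUALI, ...

merged = survey.merge(biblio, on="respondent_id", how="inner", validate="one_to_one")
merged.to_csv("survey_with_bibliometrics.csv", index=False)
```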
Data coding and analytical plan
Seven dependent variables were considered in this study, namely the: (1) h-index, (2) m-quotient, (3) g-index, (4) Schreiber's individual h-index, (5) age-weighted citation rate, (6) e-index, and (7) contemporary h-index. Table 1 briefly defines each index used in this study. All indices had a positively skewed distribution, confirming the well-documented finding that scientific productivity is not distributed normally among scientists, not just in Physics and Chemistry (Lotka 1926) but also in the Humanities (Murphy 1973). As shown in the results section, all alternative indices are strongly correlated with the h-index, except the contemporary h-index, which is significantly but weakly correlated with it.
Table 1.
Index | Definition |
---|---|
h-index | “A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np − h) papers have ≤ h citations each” (Hirsch 2005) |
m-quotient | h/y, where h is the h-index and y is the number of years since the first publication |
g-index | “The g-index g is the largest rank (where papers are arranged in decreasing order of the number of citations they received) such that the first g papers have (together) at least g² citations” (Egghe 2006) |
Individual h-index | Standard h-index divided by the average number of authors in the publications that contribute to the h-index; it aims at reducing the effects of co-authorship. Schreiber’s method was used: it applies fractional paper counts to account for shared authorship and determines the multi-authored hm index, which is based on the resulting effective rank of the publications using undiluted citation counts |
Age-weighted citation rate | Number of citations to an entire body of work, adjusted for the age of each individual paper: the number of citations to a given publication is divided by the age of that publication (Publish or Perish implementation) |
e-index | The square root of the surplus of citations in the h-set beyond h², i.e., beyond the theoretical minimum required to obtain an h-index of h. This index aims to differentiate between scientists with similar h-indices but different citation patterns |
Contemporary h-index (ac) | h-index with an age-related weighting (gamma = 4; delta = 1) applied to each cited publication, giving less weight to older publications: citations of a publication published during the current year count four times, citations of a publication published 4 years ago count once, citations of a publication published 6 years ago count 4/6 times, etc |
Notes: All indices, with their corresponding reference, are described on the Publish or Perish Web site: http://www.harzing.com/pophelp/metrics.htm
Sidiropoulos et al. (2006, p. 4) describe the contemporary h-index as follows:
“…for an article published during the current year, its citations account four times. For an article published 4 years ago, its citations account only one time. For an article published 6 years ago, its citations account 4/6 times, and so on. This way, an old article gradually loses its “value”, even if it still gets citations. In other words, in the calculations we mainly take into account the newer articles. Therefore, we define a novel citation index for scientist rankings…”.
By giving much more weight to citations of recent publications, this index partly captures the speed of the impact of recent publications, as to score highly on this indicator, one has to have recent publications that have already been cited.
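To make the definitions in Table 1 concrete, the sketch below computes several of the indices from a purely illustrative publication record. It is not the Publish or Perish implementation: Schreiber's individual h-index is omitted (it requires fractionalized ranks), and the age conventions flagged in the comments are assumptions.

```python
from math import sqrt

# A purely illustrative publication record: (publication_year, citations).
pubs = [(2002, 40), (2005, 18), (2007, 12), (2008, 9), (2009, 3), (2010, 1)]
current_year = 2010  # fixed so the example is reproducible

cites = sorted((c for _, c in pubs), reverse=True)

# h-index: largest h such that the h-th most cited paper has at least h citations.
h = sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

# g-index: largest g such that the top g papers together have at least g^2 citations.
cum = g = 0
for rank, c in enumerate(cites, start=1):
    cum += c
    if cum >= rank * rank:
        g = rank

# e-index: square root of the citations in the h-core in excess of the minimum h^2.
e = sqrt(sum(cites[:h]) - h * h)

# m-quotient: h divided by the number of years since the first publication.
m = h / max(1, current_year - min(y for y, _ in pubs))

# Age-weighted citation rate: citations divided by paper age; current-year papers
# are counted as one year old here (an assumption made to avoid division by zero;
# the PoP help describes its exact convention).
awcr = sum(c / (current_year - y + 1) for y, c in pubs)

# Contemporary h-index: apply an h-type rule to the weighted citation scores
# S^c = gamma * (current_year - year + 1)^(-delta) * citations, with gamma = 4
# and delta = 1 (the Sidiropoulos et al. 2006 parametrization).
weighted = sorted((4.0 * c / (current_year - y + 1) for y, c in pubs), reverse=True)
hc = sum(1 for rank, s in enumerate(weighted, start=1) if s >= rank)

print(h, g, round(e, 2), round(m, 2), round(awcr, 2), hc)
```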
We ran five Tobit regression models (for the indices taking non-integer values) and two negative binomial regressions (for the h-index and the g-index, which take only integer values). We entered the following variables as correlates:
REFLX: Analytical method that best represents scientists’ methodological approach (1: Reflexive analysis (e.g. essay, theoretical and/or reflexive contributions); 0: otherwise; Reference: Quantitative empirical analysis (e.g. statistical analysis))
QUALI: Analytical method that best represents scientists’ methodological approach (1: Qualitative empirical analysis (e.g. content analysis, in-depth semi-structured interviews); 0: otherwise; Reference: Quantitative empirical analysis (e.g. statistical analysis))
MIXME: Analytical method that best represents scientists’ methodological approach (1: Mixed empirical analysis (Systematic combination of quantitative and qualitative methods within a single study); 0: otherwise; Reference: Quantitative empirical analysis (e.g. statistical analysis))
POSIT: Index of positivism generated by using the mean scores of three items, all measured on a four-point agreement scale ranging from 1 (completely disagree) to 4 (completely agree): (1) Scientific research primarily aims to explain or predict phenomena; (2) The validity and reliability of scientific knowledge rest on the verification of explicit research assumptions; and (3) The personal values of the researcher must influence the entire scientific approach as little as possible (min: 1; max: 4; integer and non-integer values; Cronbach’s Alpha: 0.64; Principal component analysis: all items loaded on a single factor; a small construction sketch is given after this list)
SSHRC: At least one research project funded by the SSHRC (i.e. the Canadian research funding agency for the human and social sciences) as principal investigator since the beginning of the 2008 Fall Semester, i.e. during an 18-month period (1: at least one SSHRC-funded project; 0: no SSHRC-funded project)
ANTHR: 1: working in the Department of Anthropology; 0: otherwise; Reference: working in the Department of Psychology
SOCIO: 1: working in the Department of Sociology; 0: otherwise; Reference: working in the Department of Psychology
SOCWO: 1: working in the Department of Social Work; 0: otherwise; Reference: working in the Department of Psychology
POLSC: 1: working in the Department of Political Science; 0: otherwise; Reference: working in the Department of Psychology
ECON: 1: working in the Department of Economics; 0: otherwise; Reference: working in the Department of Psychology
ASSO: 1: associate professor; 0: otherwise; Reference: full professor
ASSI: 1: assistant professor; 0: otherwise; Reference: full professor
PERIU: Working in a peripheral university (1: working in a university located in a more peripheral area; 0: working in a university located in Montreal or Quebec City)
MEN: 1: men; 0: women
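As announced above, here is a minimal sketch of how the POSIT index and its Cronbach's alpha can be computed; the response matrix is purely hypothetical and is not the study data.

```python
import numpy as np

# Hypothetical responses of six faculty members to the three POSIT items,
# each on the 1-4 agreement scale (illustration only).
items = np.array([
    [4, 3, 4],
    [3, 3, 2],
    [2, 2, 1],
    [4, 4, 4],
    [3, 2, 3],
    [1, 2, 2],
], dtype=float)

# POSIT score for each respondent: mean of the three item scores.
posit = items.mean(axis=1)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the item sum).
k = items.shape[1]
item_var = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)

print(posit, round(alpha, 2))
```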
The non-parametric Spearman correlation between each pair of explanatory variables was computed to check for possible multicollinearity. Most correlations were below 0.20 and the highest correlation was −0.41 (i.e. between QUALI and POSIT). We also calculated the simulated h-index with the predict post-estimation Stata command, which allows simulating the h-index under different scenarios. We report the simulated number of events, which is the default for simulations based on negative binomial regressions. We also calculated the simulated contemporary h-index, as it is the index that differs most from the h-index. In this case, we used the adjust post-estimation Stata command, which calculates the simulated linear prediction (the default for simulations based on Tobit regressions).
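For readers who want to reproduce this kind of analytical plan, the sketch below shows the multicollinearity check, one negative binomial model and a scenario prediction in Python with pandas, SciPy and statsmodels. The original analyses were run in Stata; the Tobit models are not shown (statsmodels has no built-in Tobit estimator), the dispersion parameter is left at the statsmodels default rather than estimated as in Stata's nbreg, and all file and column names are assumptions.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import spearmanr

# One row per respondent; column names are assumptions (h_index for the
# PoP h-index plus the correlates defined in the list above).
df = pd.read_csv("survey_with_bibliometrics.csv")

predictors = ["REFLX", "QUALI", "MIXME", "POSIT", "SSHRC", "ANTHR", "SOCIO",
              "SOCWO", "POLSC", "ECON", "ASSO", "ASSI", "PERIU", "MEN"]

# Pairwise Spearman correlations between the explanatory variables,
# as an informal multicollinearity check.
rho, _ = spearmanr(df[predictors])
print(pd.DataFrame(rho, index=predictors, columns=predictors).round(2))

# Negative binomial regression for the integer-valued h-index (the g-index
# model is specified the same way).
formula = "h_index ~ " + " + ".join(predictors)
nb_model = smf.glm(formula, data=df,
                   family=sm.families.NegativeBinomial()).fit()
print(nb_model.summary())

# Predicted (simulated) h-index for a chosen profile, analogous in spirit to
# the Stata predict/adjust post-estimation commands used in the paper.
profile = pd.DataFrame([dict.fromkeys(predictors, 0)])
profile.loc[0, ["SSHRC", "POLSC", "MEN"]] = 1  # SSHRC-funded male political scientist
profile.loc[0, "POSIT"] = 3                    # more prone towards positivism
print(nb_model.predict(profile))
```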
Results
Sample characteristics
Looking at the frequency distribution of respondents across universities and academic disciplines, it was found that faculty members from Université Laval (the first author’s institution) and those in Political Science (the first author’s discipline) are slightly over-represented when compared to the characteristics of the estimated eligible population. Therefore, the data were weighted to correct for this bias. The univariate, bivariate and multivariate data analyses reported in this study were conducted using the weighted dataset to give a better estimate of the characteristics of the true population and to correct for non-response bias.
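A minimal sketch of this kind of post-stratification weighting by discipline (the same logic extends to university-by-discipline cells), assuming the merged data frame `df` from the earlier sketch and using illustrative, not actual, population shares:

```python
import pandas as pd

# Illustrative population shares by discipline (NOT the study's actual figures).
pop_share = pd.Series({
    "Psychology": 0.30, "Political Science": 0.17, "Economics": 0.17,
    "Social Work": 0.13, "Sociology": 0.12, "Anthropology": 0.11,
})

# Observed sample shares, then weight = population share / sample share.
sample_share = df["discipline"].value_counts(normalize=True)
df["weight"] = df["discipline"].map(pop_share / sample_share)
# Weighted analyses then use df["weight"] as the sampling weight.
```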
The descriptive statistics are reported in Table 2. As one can see, more than half of the faculty members are male (58.43%). The percentage distribution of faculty members by university is as follows (in decreasing order): Université de Montréal (20.22%), Université du Québec à Montréal (18.88%), McGill University (17.42%), Université Laval (16.81%), Concordia University (14.13%), Université de Sherbrooke (6.09%), Université du Québec à Trois-Rivières (4.02%) and Université du Québec en Outaouais (2.44%). Thus, 87.45% of faculty members work for an institution located in Montreal (the province’s economic metropolis) or in Quebec City (the provincial capital). As for academic disciplines, 31.91% of the faculty members are in Psychology, 19% in Political Science, 16.69% in Economics, 12.55% in Social Work, 11.57% in Sociology, and 8.28% in Anthropology.
Table 2.
Variable | Variable type | Min | Max | SD | Mean | % |
---|---|---|---|---|---|---|
Dependent variables | | | | | | |
h-index | Count | 0 | 42 | 7.52 | 7.60 | |
m-quotient | Continuous | 0 | 1.92 | 0.28 | 0.35 | |
g-index | Count | 0 | 94 | 15.52 | 13.99 | |
Individual h-index | Continuous | 0 | 27.03 | 4.65 | 4.91 | |
Age-weighted citation rate | Continuous | 0 | 673.85 | 96.08 | 46.67 | |
e-index | Continuous | 0 | 86.57 | 13.05 | 11.28 | |
Contemporary h-index | Continuous | 0 | 26 | 4.56 | 4.99 | |
Independent variables | | | | | | |
REFLX | Dummy | 0 | 1 | | | 18.26 |
QUALI | Dummy | 0 | 1 | | | 21.87 |
MIXME | Dummy | 0 | 1 | | | 18.06 |
POSIT | 12-point scale | 1 | 4 | 0.63 | 3.14 | |
SSHRC | Dummy | 0 | 1 | | | 49.22 |
ANTHR | Dummy | 0 | 1 | | | 8.28 |
SOCIO | Dummy | 0 | 1 | | | 11.57 |
SOCWO | Dummy | 0 | 1 | | | 12.54 |
POLSC | Dummy | 0 | 1 | | | 19.00 |
ECON | Dummy | 0 | 1 | | | 16.69 |
ASSO | Dummy | 0 | 1 | | | 32.88 |
ASSI | Dummy | 0 | 1 | | | 20.25 |
PERIU | Dummy | 0 | 1 | | | 12.54 |
MEN | Dummy | 0 | 1 | | | 58.43 |
Based on the weighted dataset, 46.87% of the faculty members were full professors, 32.88% were associate professors and the remaining 20.25% were assistant professors. The majority of faculty members were principal investigators of at least one funded research project. More precisely, 25.14% of the faculty members had no funded project, 31.16% had one, 23.74% had two, and 19.95% had three or more projects as principal investigator. As can be seen in Table 2, a little less than half of the respondents (49.22%) held at least one research grant as principal investigator funded by Canada’s leading funding agency for the human and social sciences (SSHRC) during the past 18 months. As for the type of analytical approach they generally use, 41.81% of the faculty members mainly conduct empirical quantitative studies, 21.87% empirical qualitative studies, 18.07% empirical mixed-methods studies, and 18.26% produce reflexive works. Finally, faculty members tend to have a positivist view of scientific activity, or at least agree to some extent with some of its core epistemological claims, as the index of positivism has a mean of 3.14 on a scale ranging from 1 to 4.
Correlations among bibliometric indices
Correlations between the bibliometric indices are reported in Table 3. Four indices, namely the g-index, the individual h-index, the age-weighted citation rate and the e-index, are strongly correlated with the h-index, of which they are supposedly derivatives and alternatives. In fact, many empirical studies that have examined different bibliometric indices for scientists have found strong correlations (Bornmann and Daniel 2009, p. 5). It can also be seen that the contemporary h-index clearly measures something different from the other indices. The weak correlations found between the contemporary h-index and the other indices are due to the fact that this index gives very little weight to citations of old publications. As noted previously, by giving higher weight to citations of recently published articles, this index captures current scientific impact, which is logically distinct from overall scientific impact. As a consequence, the correlates of the h-index may be quite different from the correlates of the contemporary h-index. In other words, the factors that help researchers increase their lifelong productivity might differ from those that boost the speed with which their recent publications are cited.
Table 3.
Index | (A) | (B) | (C) | (D) | (E) | (F) | (G) |
---|---|---|---|---|---|---|---|
(A) h-index | 1.00 | ||||||
(B) m-quotient | 0.79 | 1.00 | |||||
(C) g-index | 0.97 | 0.76 | 1.00 | ||||
(D) Individual h-index | 0.93 | 0.69 | 0.89 | 1.00 | |||
(E) Age-weighted citation rate | 0.95 | 0.84 | 0.96 | 0.86 | 1.00 | ||
(F) e-index | 0.91 | 0.74 | 0.97 | 0.81 | 0.95 | 1.00 | |
(G) Contemporary h-index | 0.37 | 0.16 | 0.40 | 0.38 | 0.40 | 0.40 | 1.00 |
Note: All correlations are statistically significant at the 5% level
Regression results
The regression results are reported in Table 4. They show that, adjusting for multiple confounders, the h-index and all of its derivatives considered in this study are somewhat discriminatory of the epistemological beliefs or methodological preferences of social scientists. More specifically, the results show that, on average, social scientists who mainly produce non-empirical, reflexive works such as essays score lower than faculty members who mainly produce quantitative studies on all seven research performance indices considered in the study. On average, social scientists who mainly publish qualitative empirical studies score lower than quantitativists on 5 of the 7 indices (including the h-index and the m-quotient), while social scientists who mainly produce mixed-methods studies are outperformed by quantitativists on all indices except one (the contemporary h-index). As for the index of positivism, it is positively and significantly associated with 4 of the 7 indices considered, including the h-index and its age-adjusted version, the m-quotient.
Table 4.
Variable | h-index (Negative binomial) | m-quotient (Tobit) | g-index (Negative binomial) | Individual h-index (Tobit) | Age-weighted citation rate (Tobit) | e-index (Tobit) | Contemporary h-index (Tobit) |
---|---|---|---|---|---|---|---|
REFLX | −0.54*** | −0.15*** | −0.57*** | −1.94*** | −46.68*** | −6.53*** | −1.54*** |
QUALI | −0.31** | −0.11** | −0.29** | −1.02 | −36.74** | −4.99** | −0.72 |
MIXME | −0.32*** | −0.17*** | −0.34*** | −1.36** | −38.24*** | −4.49** | −0.70 |
POSIT | 0.15** | 0.07*** | 0.18** | 0.69* | 6.54 | 1.44 | −0.17 |
SSHRC | 0.19*** | 0.09*** | 0.28*** | 1.08*** | 28.02*** | 3.51*** | 0.35 |
PERIU | −0.84*** | −0.21*** | −0.85*** | −4.00*** | −65.71*** | −10.50*** | 0.44 |
ANTHR | −0.68*** | −0.22*** | −0.80*** | −2.02** | −61.12*** | −9.57*** | 0.89 |
SOCIO | −0.64*** | −0.22*** | −0.76*** | −2.91*** | −76.56*** | −10.44*** | 0.87 |
SOCWO | −0.70*** | −0.17*** | −0.85*** | −2.08** | −40.31** | −7.79*** | 0.34 |
POLSC | −0.41*** | −0.15*** | −0.46*** | −1.30* | −48.92*** | −7.87*** | 0.33 |
ECON | −0.28*** | −0.07* | −0.18 | −0.57 | −30.04** | −2.98 | 1.32*** |
ASSO | −0.57*** | 0.00 | −0.57*** | −3.59*** | −48.58*** | −6.94*** | −0.63* |
ASSI | −0.91*** | −0.02 | −0.94*** | −4.33*** | −41.79*** | −7.88*** | −0.26 |
GENDER | 0.03 | −0.06* | 0.03 | 0.37 | 12.50 | 1.51 | 0.46 |
Constant | 2.21*** | 0.39*** | 2.72*** | 5.99*** | 79.19** | 15.42*** | 3.81*** |
Note: Entries are regression coefficients; *** significant at the 1% level, ** at the 5% level, * at the 10% level (two-tailed tests)
Research funding, as measured by having been principal investigator of at least one SSHRC peer-reviewed research project in the past 18 months, is positively and significantly associated with all outcome variables except the contemporary h-index, which measures the impact of recent publications. As shown in Table 4, the location of academic institutions matters as well. Indeed, on average, faculty members from universities located in Montreal (i.e. Université de Montréal, Concordia, UQAM and McGill) or in Quebec City (i.e. Université Laval) perform better than their colleagues from more peripheral universities on every productivity index but the contemporary h-index.
As for academic disciplines, the results presented in Table 4 show that faculty members in Psychology (the reference category) tend to outperform social scientists in Anthropology, Sociology, Social Work and Political Science on all indices except the contemporary h-index. On average, faculty members in Psychology also perform better than those in Economics with regard to the h-index, the m-quotient and the age-weighted citation rate. Interestingly, however, faculty members in Psychology were found to be outperformed by those in Economics regarding the impact of their recent publications (as measured by the contemporary h-index). The non-significance of the association between ECON and the individual h-index suggests that, once the effect of co-authorship is reduced (which is what the individual h-index does), there is no significant difference between faculty members in Psychology and those in Economics. Likewise, the non-significance of the association between ECON and the g-index suggests that, once highly-cited publications are given more weight (which is what the g-index does), there is no significant difference between faculty members in Psychology and those in Economics.
The results presented in Table 4 also suggest that the m-quotient (i.e. the h-index divided by scientific age) does a fairly good job of reducing the productivity differences that are due to years of experience. Indeed, academic rank was found to be significantly associated with all productivity indices except the m-quotient. Finally, the only gender effect found appears in the regression model with the m-quotient as the outcome variable: on average, female faculty members score lower than male ones on a measure of research performance that adjusts for scientific age. This result might be linked to the fact that female social scientists are more likely than their male counterparts to slow down their productivity at one or more points in their career for family reasons.
Statistical simulation results
Overall, both the correlation matrix presented in Table 3 and the regression results presented in Table 4 show that only one index differs substantially from the h-index, namely the contemporary h-index. For example, the regression results show that only three independent variables are significantly associated with this index, while the other productivity indices considered are significantly associated with 11–13 correlates. The key feature of the contemporary h-index is its capacity to measure current scientific impact (citations of a publication published during the current year count four times, citations of a publication published 4 years ago count once, etc.). The simulated h-index and simulated contemporary h-index for faculty members with different profiles are reported in Tables 5 and 6, respectively.
Table 5.
Scenario | Profile | Econ | PolSci | Psycho | Socio | SoWork | Anthro |
---|---|---|---|---|---|---|---|
A man working for a university located in Montreal or Quebec City… | |||||||
…who is assistant professor… | |||||||
1a | SSHRC funding—quantitativist—positivist | 6 | 5 | 7 | 4 | 4 | 4 |
2a | SSHRC funding—qualitativist—more constructivist | 4 | 3 | 5 | 3 | 3 | 3 |
3a | No SSHRC funding—qualitativist—more constructivist | 3 | 3 | 4 | 2 | 2 | 2 |
…who is associate professor… | |||||||
4a | SSHRC funding—quantitativist—positivist | 8 | 7 | 10 | 5 | 5 | 5 |
5a | SSHRC funding—qualitativist—more constructivist | 5 | 5 | 7 | 4 | 4 | 4 |
6a | No SSHRC funding—qualitativist—more constructivist | 4 | 4 | 6 | 3 | 3 | 3 |
…who is full professor… | |||||||
7a | SSHRC funding—quantitativist—positivist | 14 | 12 | 19 | 10 | 9 | 9 |
8a | SSHRC funding—qualitativist—more constructivist | 10 | 9 | 13 | 7 | 6 | 6 |
9a | No SSHRC funding—qualitativist—more constructivist | 8 | 7 | 11 | 6 | 5 | 5 |
A man working for a university located in a more peripheral area… | |||||||
…who is assistant professor… | |||||||
1b | SSHRC funding—quantitativist—positivist | 2 | 2 | 3 | 2 | 2 | 2 |
2b | SSHRC funding—qualitativist—more constructivist | 2 | 1 | 2 | 1 | 1 | 1 |
3b | No SSHRC funding—qualitativist—more constructivist | 1 | 1 | 2 | 1 | 1 | 1 |
…who is associate professor… | |||||||
4b | SSHRC funding—quantitativist—positivist | 3 | 3 | 4 | 2 | 2 | 2 |
5b | SSHRC funding—qualitativist—more constructivist | 2 | 2 | 3 | 2 | 1 | 1 |
6b | No SSHRC funding—qualitativist—more constructivist | 2 | 2 | 3 | 1 | 1 | 1 |
…who is full professor… | |||||||
7b | SSHRC funding—quantitativist—positivist | 6 | 5 | 8 | 4 | 4 | 4 |
8b | SSHRC funding—qualitativist—more constructivist | 4 | 4 | 5 | 3 | 3 | 3 |
9b | No SSHRC funding—qualitativist—more constructivist | 3 | 3 | 5 | 2 | 2 | 2 |
Table 6.
Scenario | Profile | Econ | PolSci | Psycho | Socio | SoWork | Anthro |
---|---|---|---|---|---|---|---|
A man working for a university located in Montreal or Quebec City, who has SSHRC funding, who is more inclined towards positivism… | |||||||
…who is assistant professor… | |||||||
1a | Reflexive approach | 4.10 | 3.11 | 2.78 | 3.65 | 3.12 | 3.67 |
2a | Empirical-qualitative approach | 4.91 | 3.93 | 3.59 | 4.50 | 3.94 | 4.48 |
3a | Empirical mixed approach | 4.93 | 3.95 | 3.62 | 4.49 | 3.96 | 4.50 |
4a | Empirical-quantitative approach | 5.63 | 4.65 | 4.32 | 5.19 | 4.66 | 5.20 |
…who is associate professor… | |||||||
1b | Reflexive approach | 3.73 | 2.74 | 2.40 | 3.28 | 2.75 | 3.30 |
2b | Empirical-qualitative approach | 4.54 | 3.56 | 3.22 | 4.10 | 3.57 | 4.11 |
3b | Empirical mixed approach | 4.56 | 3.58 | 3.24 | 4.12 | 3.59 | 4.13 |
4b | Empirical-quantitative approach | 5.26 | 4.28 | 3.94 | 4.82 | 4.29 | 4.83 |
…who is full professor… | |||||||
1c | Reflexive approach | 4.36 | 3.37 | 3.04 | 3.91 | 3.38 | 3.93 |
2c | Empirical-qualitative approach | 5.17 | 4.19 | 3.86 | 4.73 | 4.20 | 4.74 |
3c | Empirical mixed approach | 5.19 | 4.21 | 3.88 | 4.75 | 4.23 | 4.76 |
4c | Empirical-quantitative approach | 5.89 | 4.91 | 4.58 | 5.45 | 4.92 | 5.47 |
We simulated the h-index for two broad categories of scenarios, namely: (1) a man working for a university located in Montreal (the economic metropolis) or Quebec City (the provincial capital), scenarios 1a–9a; and (2) a man working for a university located in a more peripheral area, scenarios 1b–9b. We subdivided each of these two categories into three subcategories, namely: (1) assistant professor (1a–3a and 1b–3b), (2) associate professor (4a–6a and 4b–6b), and (3) full professor (7a–9a and 7b–9b). Each of these was then broken down into three further categories, namely: (1) SSHRC funding, quantitativist and positivist (1a, 4a, 7a and 1b, 4b, 7b), (2) SSHRC funding, qualitativist and more prone towards constructivism (2a, 5a, 8a and 2b, 5b, 8b), and (3) no SSHRC funding, qualitativist and more prone towards constructivism (3a, 6a, 9a and 3b, 6b, 9b). Finally, these 18 scenarios were simulated for each of the six academic disciplines, for a total of 108 simulations (18 scenarios × 6 disciplines).
For the sake of these simulations, we defined a positivist as someone who had a score of 3 on the 1–4 index of positivism. This is thus not a radical positivist, but rather someone who is more prone towards positivism. A more constructivist faculty member was defined as someone who scored just below the median of the same index (i.e. 2.666667). This social scientist is thus not a radical anti-positivist, but is nonetheless more prone to constructivism. A quantitativist is someone whose main analytical approach is empirical quantitative data analysis, while a qualitativist is someone who mainly conducts empirical qualitative data analysis.
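Here is a sketch of how such a scenario grid could be enumerated and fed to a fitted model, reusing the hypothetical `predictors` list and `nb_model` from the earlier sketch; "PSYCH" is only a placeholder label for the reference category, and 2.67 stands in for the just-below-median positivism score.

```python
from itertools import product

import pandas as pd

# 2 locations x 3 ranks x 3 funding/method/epistemology profiles x 6 disciplines
# = 108 simulation rows, mirroring the scenario grid described above.
disciplines = ["ECON", "POLSC", "PSYCH", "SOCIO", "SOCWO", "ANTHR"]
ranks = [("assistant", {"ASSI": 1}), ("associate", {"ASSO": 1}), ("full", {})]
profiles = [
    ("SSHRC, quantitativist, positivist", {"SSHRC": 1, "POSIT": 3.0}),
    ("SSHRC, qualitativist, more constructivist", {"SSHRC": 1, "QUALI": 1, "POSIT": 2.67}),
    ("no SSHRC, qualitativist, more constructivist", {"QUALI": 1, "POSIT": 2.67}),
]

rows = []
for periu, (rank, rank_dummies), (label, profile), disc in product(
        (0, 1), ranks, profiles, disciplines):
    row = dict.fromkeys(predictors, 0)   # all dummies off; POSIT overwritten below
    row.update(MEN=1, PERIU=periu, **rank_dummies, **profile)
    if disc != "PSYCH":                  # Psychology is the reference (all discipline dummies 0)
        row[disc] = 1
    rows.append({**row, "rank": rank, "profile": label, "discipline": disc})

scenarios = pd.DataFrame(rows)
scenarios["simulated_h"] = nb_model.predict(scenarios[predictors])
print(scenarios[["discipline", "rank", "profile", "simulated_h"]].head())
```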
As can be seen in Table 5, on average, a quantitativist who is more prone towards positivism will have a larger h-index than a qualitativist who is more prone towards constructivism. For example, the difference between scenarios 1a and 2a, when simulating for Political Science, is 2 (i.e. an assistant professor who is quantitativist and positivist would have an h-index of 5, while an assistant professor who is qualitativist and less prone towards positivism will have an h-index of 3).
The results presented in Table 5 also show important differences between Psychology and the other academic disciplines. For scenarios 1a–9a, where faculty members work for a university located in Montreal or Quebec City (i.e. a central university), simulated h-indices are always larger when fixing the academic discipline at the Psychology value. However, the difference between Psychology and other disciplines (mainly Economics and Political Science) decreases when considering that faculty members are assistant or associate professors working for a more peripheral university (see scenarios 1b–6b).
In fact, the central versus peripheral university correlate is the one that has the largest effect on the h-index. To appreciate this effect, one has to compare a specific “a” scenario with its corresponding “b” scenario. For example, let us compare scenario 4a with scenario 4b. Both scenarios describe the same profile of faculty member (a male associate professor, with SSHRC funding, quantitativist and more prone towards positivism), except that scenario 4a posits that the individual works in a central university, while scenario 4b considers that he works for a peripheral academic institution. The difference between the h-indices simulated for scenario 4a and those simulated for scenario 4b is large. For example, an economist who works for a central institution would have an h-index of 8, while his colleague from a peripheral university would have an h-index of only 3. In the same vein, a sociologist who works for a central university would have an h-index of 5, while his colleague from a peripheral institution would have an h-index of only 2. These results might be due to the fact that the recruitment of faculty members is more competitive in central universities than in more peripheral ones. Indeed, the statistical simulations reported in Table 5 suggest that an assistant professor who works for a central university has an h-index of about the same level as a full professor from a peripheral university (compare scenarios 1a–3a with scenarios 7b–9b). An alternative explanation might be that central universities command greater research resources and operate in distinct research cultures where, for instance, international collaborations are more frequent, which might generate greater output and exposure of scholarly work.
Let us now have a look at the simulated contemporary h-indices which, we recall, partly capture the scientific impact of recent publications. We have already noted from Table 4 that only three correlates were significantly associated with this index, namely producing reflexive works (essays, theoretical or reflexive contributions) rather than quantitative ones, being in Economics rather than in Psychology, and being a full rather than an associate professor.
Four observations can be made from Table 6. First, the simulated contemporary h-indices are always higher for Economics than for all other academic disciplines. This suggests that recent publications by economists are, on average, more rapidly cited than those by social scientists from other disciplines. Second, the simulated contemporary h-indices for Psychology are systematically lower than those simulated for the other academic disciplines. This result suggests that, on average, recent publications by faculty members in Psychology are less rapidly cited than those in the other disciplines. Therefore, psychologists are perhaps the most productive in terms of the h-index, but the impact of their recent publications is slower than what is found in other disciplines, especially in Economics. Third, the impact of recent publications from associate professors is, on average, less rapid than the impact of publications from both assistant and full professors.
Finally, the contemporary h-index is systematically lower when simulating for a faculty member who produces reflexive, non-empirical works. The largest difference is between the reflexive approach and the quantitative one. For example, the difference between scenario 1a and scenario 4a (i.e. simulating for an assistant professor in Economics) is 1.53. In other words, an assistant professor who works for a central university, has SSHRC funding, is more prone towards positivism and mainly produces reflexive works would have a contemporary h-index 27% lower than an assistant professor with the same profile who mainly publishes quantitative studies. These results suggest that recent non-empirical, reflexive works are cited much less rapidly than recent quantitative works. This might partly indicate that, on average, empirical quantitative works, in contrast with reflexive ones, are more likely to be cumulative, so that researchers are more likely to cite them as soon as they are published.
Discussion and conclusions
The aim of this study was to examine the association between the epistemological (and methodological) preferences of social scientists and their h-index (and some of its alternatives). It was assumed that social scientists who are more inclined towards positivism, and whose works are mainly empirical and quantitative, would tend to outperform those who are more inclined towards constructivism (whose works are mainly qualitative or reflexive). The study findings tend to confirm this hypothesis. Indeed, it was found that quantitative researchers are, on average and after controlling for potential confounders, more productive than researchers who mainly use other types of analytical approaches. Interestingly, it was also found that the recent publications of researchers who produce quantitative empirical studies are cited faster than those of researchers who publish reflexive, non-empirical studies. To the best of our knowledge, this is the first study to document the effect of research epistemology and methodology on scientific productivity, a phenomenon that we propose to call “the epistemological/methodological effect”, or EM effect.
While our study was able to pinpoint the effects of epistemological dispositions and methodological commitments on research productivity and citation scores, the reasons why this is so remain unspecified. In the introduction, we suggested that the type of data and analytical tools employed by quantitativists might, at least partially, account for greater scientific output, since it is plausible that many articles can be produced from the same database in a less time-consuming manner than from qualitative material. This need not always be the case, as data collection and (re)coding phases can be lengthy, analytical techniques can take a long time to master, and so on, whereas a reflexive paper might be relatively quick to put out. Second, it was also suggested that the potential generalizability of statistical inferences, their nomothetical feature, might be an asset of quantitativists when compared with more locally applicable qualitative research. As indicated earlier, statistical analyses based on a large number of observations have the potential to increase confidence in inferred conclusions (a potential that should not be equated with real-world significance). All of this suggests an intricate relation between one’s views on science, one’s preferred research methods, and one’s research productivity and visibility in the scholarly world. The specific causal pathways between these would, of course, need to be uncovered and clarified in further studies.
This study contains some implications for the use of scientific productivity indices by science managers and policy-makers in academic institutions and research funding agencies. First, as already known, one cannot use these indices to compare researchers from different academic disciplines. Even the individual h-index, which corrects for the effect of co-authorship, varies across disciplines. Second, the m-quotient (i.e. the h-index divided by the scientific age) could be used to compare researchers from different academic ranks, as the regression results showed no significant differences between assistant and associate professors on the one hand, and full professors on the other hand. Third, the contemporary h-index is an interesting complement to the h-index, as it measures the scientific impact of recent publications. We notably found that while psychologists are much more productive than economists and others when using the h-index, faculty members in Economics currently produce publications that tend to be cited more rapidly than their colleagues in other disciplines, including Psychology. This last finding might be due to the cumulative nature of Economics, which is the oldest nomothetic social science discipline. Finally, the EM effect makes the indices considered in the study more or less discriminatory of the epistemological beliefs and the methodological preferences of faculty members.
The main strength of this study resides in its originality, as it is the first study to systematically test the hypothesis that the h-index and some of its alternatives favour positivists and quantitativists. To the best of our knowledge, it is the first empirical study to show that fairly recent mainstream bibliometric indices such as the h-index discriminate according to the analytical approach employed by faculty members (reflexive, quantitative, qualitative, mixed), thus calling into question the relevance of using such indices to compare faculty members with different methodological preferences. The originality of this study also lies in its reliance on a dataset of faculty members spread across six social science disciplines, a dataset that is unique in merging bibliometric and survey variables.
This study also has several limitations that should be made explicit. First, we used the Publish or Perish software program, which retrieves raw citations from Google Scholar and then analyzes them. The tool has some limitations, such as not allowing the automatic merging of publications that appear several times in Google Scholar, and an author search box that retrieves more publications than those actually written by the target author (Baneyx 2008). We countered these problems by manually inspecting each retrieved publication. Despite the multiple limitations of Google Scholar (reviewed in Bar-Ilan 2008), we believe that using Google Scholar was a justifiable choice, as Web of Science and Scopus do a less satisfactory job of indexing both non-English and non-peer-reviewed publications. Indeed, using Web of Science or Scopus would have led to a dramatic underestimation of the true scientific productivity of the social scientists in our database, as many of them publish in French and produce books or book chapters that are rarely indexed in these databases. Peer-reviewed articles are not the only outputs of scientific research, and books, conference proceedings and research reports should also be taken into account when measuring productivity. Peer review is a good thing in science, but it is also imperfect. This is why medical researchers conduct systematic reviews, which consist of reviewing and assessing the quality of all primary studies (peer-reviewed or not) on a specific research topic.
Second, the cross-sectional and self-reported nature of the survey data means that it was not possible to track changes over time, and that social desirability bias (often present in self-reported data) and recall bias (some participants may have had difficulty recalling activities over an 18-month period) cannot be ruled out. Finally, a key limitation of the study is that its findings are observational rather than experimental; the study therefore lacks the step of demonstrating experimentally that changes in modifiable independent variables, such as the research methods used, have the expected effects on research performance and are not simply manifestations of some deeper causes. Although the simulations presented above illustrate the factors at play in the research performance of social scientists in Quebec, the findings do not necessarily imply that encouraging faculty members to conduct quantitative studies will increase their research performance, as there may be a selection bias making those who are most interested in quantitative research also those most interested in publishing papers. This also suggests that some omitted factors (such as researchers’ administrative duties, teaching responsibilities, organizational incentives, etc.) could help refine our explanation of individual research performance.
Finally, there is an obvious knowledge gap regarding the question of whether these indices could be used as measures of research quality. Conceptually, the number of publications captures productivity, while the number of citations measures popularity. Moreover, the number of citations itself should be interpreted with reservations, as citation behaviour is not always thoughtful and citations can be made quite casually. The point is that there is no a priori logical relationship between these measures and the concept of research quality. In the future, bibliometric researchers might want to test the association between bibliometric indices and research quality. One important question will thus be how to measure research quality. One promising way to measure the quality of empirical studies (it would be hard to measure the quality of editorials or essays) is to read them and rate them according to their risk of bias using validated tools such as the one developed by the Cochrane Collaboration to assess intervention studies in clinical research. If one could do this for a large sample of papers published in the same year and find a strong negative correlation between the level of risk of bias found in these papers and the number of citations they received, then, and only then, could we conclude that citations can be used as a proxy of research quality.
Acknowledgement
Mathieu Ouimet receives financial support from the Fonds de la recherche en santé du Québec (FRSQ) as a research scholar. Pierre-Olivier Bédard receives financial support from the Social Sciences and Humanities Research Council of Canada (SSHRC) as a doctoral student. The authors want to thank all faculty members who dedicated some of their professional time to contribute to the study. The authors also want to thank the reviewers for their highly valuable inputs. This study was funded by an operating grant from the Fonds québécois de la recherche sur la société et la culture (FQRSC).
Open Access
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
References
- Baneyx A. “Publish or Perish” as citation metrics used to analyze scientific output in the humanities: International case studies in economics, geography, social sciences, philosophy, and history. Archivum Immunologiae et Therapiae Experimentalis. 2008;56:363–371. doi: 10.1007/s00005-008-0043-0.
- Bar-Ilan J. Which h-index? A comparison of WoS, Scopus and Google Scholar. Scientometrics. 2008;74(2):257–271.
- Batista PD, Campiteli MG, Kinouchi O, Martinez AS. Is it possible to compare researchers with different scientific interests? Scientometrics. 2006;68(1):179–189. doi: 10.1007/s11192-006-0090-4.
- Bornmann L, Daniel H-D. The state of h index research. Is the h index the ideal way to measure research performance? EMBO Reports. 2009;10(1):2–6. doi: 10.1038/embor.2008.233.
- Bornmann L, Mutz R, Daniel H-D. Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. Journal of the American Society for Information Science and Technology. 2008;59(5):1–8. doi: 10.1002/asi.20806.
- Costas R, Bordons M. The h-index: advantages, limitations and its relation with other bibliometric indicators at the micro level. Journal of Informetrics. 2007;1:193–203. doi: 10.1016/j.joi.2007.02.001.
- Egghe L. Theory and practice of the g-index. Scientometrics. 2006;69(1):131–152. doi: 10.1007/s11192-006-0144-7.
- Gerring J. Causal mechanisms: yes, but…. Comparative Political Studies. 2010;43(11):1499–1526. doi: 10.1177/0010414010376911.
- Harzing, A. W. (2010). Publish or Perish. Version 3.0.3813, www.harzing.com/pop.htm.
- Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102, 16569–16572 (original arXiv paper, arXiv:physics/0508025 v3, later corrected in arXiv:physics/0508025v5).
- Jacso P. Testing the calculation of a realistic h-index in Google Scholar, Scopus, and Web of Science for F. W. Lancaster. Library Trends. 2008;56(4):785–815.
- Jin B. The AR-index: complementing the h-index. ISSI Newsletter. 2007;3(1):6.
- Kosmopoulos, C., & Pumain, D. (2007). Citation, citation, citation: Bibliometrics, the web and the social sciences and humanities. Cybergeo: European Journal of Geography, 411. http://www.cybergeo.eu/index15463.html.
- Leuridan B. Can mechanisms really replace laws of nature? Philosophy of Science. 2010;77:317–340. doi: 10.1086/652959.
- Lotka AJ. The frequency distribution of scientific productivity. Journal of the Washington Academy of Sciences. 1926;16(12):317–324.
- Murphy, L. J. (1973). Lotka’s law in the humanities? Journal of the American Society for Information Science, 24, 461–462.
- Salgado JF, Páez D. La productividad científica y el índice h de Hirchs de la psicología social española: convergencia entre indicadores de productividad y comparación con otras áreas. Psicothema. 2007;19(2):179–189.
- Schreiber M. To share the fame in a fair way, hm modifies h for multi-authored manuscripts. New Journal of Physics. 2008;10:1–8. doi: 10.1088/1367-2630/10/4/040201.
- Sidiropoulos, A., Katsaros, D., & Manolopoulos, Y. (2006). Generalized h-index for disclosing latent facts in citation networks. Scientometrics, 72(2), 253–280.
- Zhang C-T. The e-index, complementing the h-index for excess citations. PLoS ONE. 2009;5(5):1–4. doi: 10.1371/journal.pone.0005429.