. 2021 Mar 8;47(2):295–318. doi: 10.1057/s41302-021-00188-6

Critiques, Ethics, Prestige and Status: A Survey of Editors in Economics

Ann Mari May 1,, Mary G McGarvey 1, Yana van der Meulen Rodgers 2, Mark Killingsworth 3
PMCID: PMC7938261  PMID: 33716351

Abstract

This study examines survey data on the views of editors of economics journals on common critiques of the discipline, ethics and editorial practices, and the role of prestige and status in publishing. We utilize an ordered probit model to investigate whether editors or journal characteristics are systematically related to editors’ views, controlling for gender and editorial position. Regression results show that editors from top-ranked journals are less likely to agree with common disciplinary critiques, more likely to support market solutions and less likely to agree with concerns about editorial practices.

Keywords: Higher education, Research institutions, Sociology of economics

Introduction

Publishing in most scholarly journals today represents a major activity influencing the reputation of universities as well as the success or failure of faculty. Tenure and promotion outcomes often depend heavily on the decisions of editors and reviewers.1 Advancement in the profession also depends on citations, and evidence in Card and DellaVigna (2020) indicates that reviewer recommendations are strong determinants of citations and that editorial decisions are closely linked to reviewer recommendations. Moreover, these decisions play a critical role in the evolution of disciplinary communities, signaling recent developments, establishing boundaries of disciplinary conversation, and shaping which views are perceptually salient, seemingly relevant and forceful (Szenberg and Ramrattan 2014).

While publishing in economics is no different than in other disciplines in its impact on faculty careers, there is growing recognition that an important element of publishing in economics lies in the impact of economic theorizing on policy and practice. This nexus between academic output and real-world outcomes was made particularly salient as the profession began to re-examine its role in the 2007–2008 financial crisis. The profession was criticized not only for its failure to predict the crisis, but also for its blindness to the potential for such catastrophic market failures. Leading economists argued that the discipline was hampered by its penchant for high-level mathematics and devotion to free markets. As Paul Krugman put it, “the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth” (Krugman 2009). More recently, the news that Paul R. Milgrom and Robert B. Wilson won the 2020 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel led to mixed reviews. Critics decried the Nobel committee for sticking to convention in recognizing auction theory instead of research on a topic like inequality or diversity at a time when the lowest-income households and socially marginalized groups around the globe have borne the brunt of the economic crisis accompanying the COVID-19 pandemic.2

As the profession re-examines its role in shaping the contours of economics, we look to the “gatekeepers” to examine their views on some contested issues. It is clear that the journal editor plays a pivotal role as gatekeeper in the review process and that there are a relatively small number of individuals who serve in this function. As of 2019–2020, the AAUP reports a total of 847 degree-granting institutions in the US. Of those institutions, 227 are doctoral awarding. In economics, there are a total of 2,882 academic economists in PhD granting economics departments as of 2019. The Journal Citation Reports (JCR) identifies roughly 373 academic journals in economics with Article Influence Scores.3 Yet, the reality of publishing in economics is more exclusive than even these figures suggest.

The tyranny of the top-ranked journals has been the subject of much research, for a variety of reasons. Tommaso Colussi (2018) points out that about 43 percent of articles in the leading general-interest journals published from 2000 to 2006 were written by authors connected to editors: they had worked at the same institution, received their PhD from the same institution, coauthored together, or (in the case of editors) served as an advisor to an author. Colussi found that 25 percent of authors held appointments in six US universities at the time of publication, the same institutions that also employed 56 percent of the editors. Hence, a relatively small number of editors from a small number of institutions play an influential role in editing journals in the discipline and influencing the careers of hopeful economists. In this analysis, we aim to better understand the views of editors on a number of key issues in the editorial process in economics, and we examine whether there are differences in these views based upon the status of the journal and the country affiliation of the editor (US vs. non-US).

Because of this pivotal role, the beliefs and practices of editors deserve considerable attention. Yet, there have been few systematic studies of the views of editors in economics since the beginning of the twentieth century and all have been quite limited in scope.4 Our objective is thus to conduct a novel survey to examine how editors view some of the common critiques leveled at the discipline, whether they have a shared compass around the fairness of some editorial practices and how they view the role of prestige and status in their work as editors. In this study, we examine the views of economists serving as editors, co-editors, and associate editors of economics journals on these seemingly thorny but important issues.

To examine these views, we begin by reporting the responses of editors to questions in three areas: common critiques of the discipline, ethics and editorial practices, and the role of prestige and status in the profession. We examine many of the common critiques that have emerged in the past several decades, such as those raised by Paul Krugman in the midst of the 2007–2008 global financial crisis, including methodological criticisms of the profession and its penchant for formalism and mathematical analysis at the expense of valuable analysis of real-world outcomes. We also examine editors’ views on the growing number of specialized field journals and the perceived lack of general-interest journals, along with periodic calls for interdisciplinary research teams to move research beyond disciplinary silos.

As Robert M. Solow has suggested, there are questions in the practice of editing on which natural differences of opinion occur.5 These questions revolve around the need to deal with bias in reviews and the responsibility to publish articles that challenge existing paradigms. As Solow points out, while one might hope that editors would be neutral, “accepting or rejecting papers purely on the basis of merit,” this is easier said than done. At the same time, Solow acerbically writes that “A thousand flowers can bloom without all blooming in the same journal.”6

Finally, with the hegemony of the top journals comes concern about issues of prestige and status and its impact on the discipline.7 We seek to better understand the views of editors on various questions where status considerations may appear, such as the importance of having editorial board members, authors, and reviewers from prestigious universities. While these are not the only areas where status and prestige may inject themselves in the editorial process, they are areas where the views of editors may be especially important.

In the analysis that follows, measures of standardized relative entropy are provided to determine the degree to which editors share a view on these questions or show less consensus in their responses. Additionally, we estimate ordered probit models to investigate whether any particular editor or journal characteristic, such as journal ranking or US versus non-US institutional affiliation, is systematically related to the editors’ views on the survey questions. Our sample includes editors, co-editors, and associate editors from a wide range of economics journals in the Scientific Journal Rankings (SJR). The survey responses allow us to develop a comprehensive and contemporary understanding of some of the common controversies in the discipline. These insights allow us to better understand the ways in which editors’ views shape the body of knowledge that is published in economics.

Empirical Strategy

To obtain the opinions of editors on publishing in economics, we administered an online survey using the software Qualtrics to a sample of editors selected from the population of all editors, associate editors, and editorial board members of all economics journals listed in the Journal Citation Reports (JCR) having Article Influence Scores (AIS) in 2013.8 The JCR is a yearly publication by Thomson Reuters that provides information about scholarly journals in the sciences and social sciences, including citations, impact factors, and rankings. The AIS uses citations to assess and track the influence of a journal in relation to other journals, and it measures the average influence of a journal’s articles over the first five years after publication.9 In total, there were 314 journals on the JCR list of economics journals that had an AIS reported for them. One of the key variables in the analysis is whether or not a journal is ranked in the top quartile of all AIS-ranked journals; these top 25 percent journals are shown in Table 1.

Table 1.

Journals ranked in the top quartile of all 2013 AIS-ranked economics journals

1 Q_J_ECON 21 J_HUM_RESOUR 41 REV_FINANC 61 ECONOMICA
2 J_ECON_LIT 22 AM_ECON_J-MICROECON 42 ECONOMET_THEOR 62 J_FINANC_ECONOMET
3 J_POLIT_ECON 23 J_MONETARY_ECON 43 J_URBAN_ECON 63 J_ECON_MANAGE_STRAT
4 ECONOMETRICA 24 QUANT_ECON 44 WORLD_BANK_ECON_REV 64 J_MONEY_CREDIT_BANK
5 BROOKINGS_PAP_ECO_AC 25 RAND_J_ECON 45 REV_ENV_ECON_POLICY 65 OXFORD_B_ECON_STAT
6 J_FINANC 26 ECON_J 46 QME-QUANT_MARK_ECON 66 J_LAW_ECON_ORGAN
7 REV_ECON_STUD 27 ECON_POLICY 47 J_ECON_GEOGR 67 ECON_SOC
8 AM_ECON_J-MACROECON 28 J_ECONOMETRICS 48 ANNU_REV_FINANC_ECON 68 J_ECON_HIST
9 REV_FINANC_STUD 29 REV_ECON_DYNAM 49 J_ENVIRON_ECON_MANAG 69 OXFORD_REV_ECON_POL
10 AM_ECON_J-APPL_ECON 30 J_INT_ECON 50 WORLD_BANK_RES_OBSER 70 HEALTH_ECON
11 AM_ECON_REV 31 J_FINANC_QUANT_ANAL 51 J_HEALTH_ECON 71 J_AGRAR_CHANGE
12 J_ECON_PERSPECT 32 J_ACCOUNT_ECON 52 J_LAW_ECON 72 J_REGIONAL_SCI
13 J_FINANC_ECON 33 J_ECON_THEORY 53 GAME_ECON_BEHAV 73 REAL_ESTATE_ECON
14 AM_ECON_J-ECON_POLIC 34 J_PUBLIC_ECON 54 J_RISK_UNCERTAINTY 74 J_ECON_SURV
15 ANNU_REV_ECON 35 INT_ECON_REV 55 J_POLICY_ANAL_MANAG 75 ECOL_ECON
16 REV_ECON_STAT 36 IMF_ECON_REV 56 ECONOMET_J 76 AM_LAW_ECON_REV
17 J_ECON_GROWTH 37 EXP_ECON 57 TRANSPORT_RES_B-METH 77 ENERG_ECON
18 J_LABOR_ECON 38 J_APPL_ECONOMET 58 J_IND_ECON 78 SCAND_J_ECON
19 J_EUR_ECON_ASSOC 39 J_DEV_ECON 59 EUR_ECON_REV 79 ENERG_J
20 J_BUS_ECON_STAT 40 ECON_GEOGR 60 MATH_FINANC

Public information on journal web pages served as a source for collecting names of editors, co-editors and associate editors of economics journals. To focus on the editors with the most decision-making power, we excluded additional categories of editors such as book review editors, managing editors, regional editors, and other editorial board members.10 We also used home pages for the individuals themselves to collect email addresses and information on key variables of interest that include current institutional affiliation and country, PhD degree-granting institution and country, year of the PhD, and gender.

We identified a total of 4906 editors, co-editors, and associate editors in this complete set of journals, with some individuals appearing more than once since it is not uncommon for people to hold multiple editorial positions. In this full population of economics journal editors, there were approximately 3,000 uniquely identified men and 450 uniquely identified women whose essential information was not missing. Thus, of all the individual economics editors we were able to identify, about 14 percent were women. We then selected a sample of 414 women and a random sample of 421 men for a total sample size of 835. For each person who served in more than one editorial position, we randomly selected one journal from those they served as lead editor or co-editor; if all their positions were associate editor positions, we randomly selected one journal from those for which they served as associate editor. Finally, we sent a unique link via email to each of the 835 individuals in the sample that invited the individual in their role as editor of their assigned economics journal to participate in the survey. In total, 247 individuals completed the survey (124 men and 123 women), for an overall response rate of about 30 percent.11 We over-sampled women and lead editors and co-editors to provide a more uniform distribution of conditioning characteristics in the sample of responding editors and to increase the likelihood that all types of editors would be represented. This strategy also ensures that our sample has sufficient numbers of responses from editors of each type for the methodology to have adequate statistical power, so that we can distinguish an actual effect from one that occurs by chance. Over-sampling lead editors and co-editors also makes sense given that they typically have greater decision power in the editorial process than associate editors (Card et al. 2020).

Our data contain 247 sample observations on the responding editors’ coded responses to 17 survey questions and binary indicators of the responding editors’ conditioning characteristics: US affiliation, PhD in 2000 or later, journal AIS in upper quartile, associate editor, and female. We use the marginal distribution of responses to describe the overall views of editors and we use the conditional distribution of responses to describe the relationships between editors’ views and editors’ characteristics. Sample means are reported in Table 2, where the first column weights the observations to correct for over-sampling women and lead/co-editors. The weights ensure that the (joint) distribution of men and women editors and associate editors (female, associate editor) in the population-weighted sample is the same as in the population of editors. The remaining columns describe the characteristics of the responding editors (all editors, lead/co-editors, and associate editors).
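The weighting described above can be sketched as simple post-stratification: each responding editor is weighted by the ratio of their stratum's population share to its sample share. The stratum shares below are hypothetical illustrations, not the paper's actual figures.

```python
# Post-stratification weights that undo the over-sampling of women and
# lead/co-editors. Stratum shares here are hypothetical, for illustration.

# Share of each (gender, position) stratum in the population of editors
population_share = {
    ("male", "associate"): 0.55,
    ("male", "lead_or_co"): 0.31,
    ("female", "associate"): 0.09,
    ("female", "lead_or_co"): 0.05,
}

# Share of each stratum among the survey respondents (over-sampled)
sample_share = {
    ("male", "associate"): 0.33,
    ("male", "lead_or_co"): 0.17,
    ("female", "associate"): 0.35,
    ("female", "lead_or_co"): 0.15,
}

# weight = population share / sample share, so a weighted average of
# responses reproduces the population mix of editor types
weights = {k: population_share[k] / sample_share[k] for k in population_share}

# Under-sampled strata get weights above 1, over-sampled strata below 1
assert weights[("male", "associate")] > 1 > weights[("female", "associate")]
```

A weighted sample mean computed with these weights then matches the joint distribution of gender and editorial position in the population, as the paper requires.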

Table 2.

Editors’ demographic characteristics, editorial positions, and journal rankings

Variable (X)  All editors (population-weighted)  All editors (n=247)  Lead editors & co-editors (n=81)  Associate editors (n=166)
US affiliation 44.8 49.0 55.6 45.8
Rank top 25% 32.4 30.0 33.3 28.3
PhD in 2000s 26.6 30.4 23.5 33.7
Associate editor 76.8 67.2 0.0 100.0
Female 13.7 49.8 45.7 51.8

Reported figures represent population-weighted and unweighted percentages of responding editors’ characteristics

In the population-weighted sample, almost 45 percent of the responses are from editors with US affiliations, about 32 percent are from editors of top-ranked journals, and approximately 27 percent are from editors who earned their PhDs in the year 2000 or later. The corresponding unweighted percentages of all responding editors do not differ substantially from the weighted percentages. About half of all editors are affiliated with US institutions, but US representation is about 10 percentage points higher among lead editors and co-editors than among associate editors. About 30 percent of all editors earned their PhDs in 2000 or after. This percentage is lower for lead editors and co-editors than for associate editors, as would be expected given that lead and co-editors tend to have more seniority and experience. Moreover, about one third of respondents serve journals ranked in the top quartile of the AIS ranking. Strikingly, only 14 percent of all editors in the population-weighted sample are women, and among those women, a slightly higher percentage serve as associate editors than as lead editors or co-editors. This low representation of women among editors reflects the persistently low representation of women in economics and the chilly climate that women economists continue to face (Lundberg and Stearns 2019).

The weighted sample responses are used to describe the extent to which editors display consensus in their views on some controversial issues in economics.12 As descriptive statistics, we use the percentage distribution, sample mean, and standardized relative entropy of the population-weighted responses to each survey question. Using the weighted responses corrects these estimates for any non-response bias that might arise if men and women or lead/co-editors have different views on the survey topics and also have different rates of response to the survey.13 The entropy parameter (ρ) combines the five possible responses to each survey question (from 1 = strongly disagree to 5 = strongly agree) into one number ranging from zero (complete consensus) to one (complete lack of consensus).14 The maximum possible entropy for each question would occur if responses were evenly divided over the possible categories, representing a complete lack of consensus. Consistent with the meaning of the word entropy, an increase in ρ represents a move from agreement toward disagreement. This measure of relative entropy is widely used in the social sciences to measure the evenness of a distribution (Proops 1987).
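The standardized relative entropy just described can be computed as Shannon entropy divided by its maximum value ln(K) for K response categories. A minimal sketch (the category counts in the usage lines are hypothetical):

```python
import numpy as np

def standardized_relative_entropy(counts):
    """Standardized relative entropy (rho) of a response distribution.

    counts: tallies over the K ordered response categories
    (e.g. 1 = strongly disagree ... 5 = strongly agree).
    Returns rho in [0, 1]: 0 means complete consensus (all responses
    in one category); 1 means responses spread evenly over all K.
    """
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                      # treat 0 * log(0) as 0
    entropy = -np.sum(p * np.log(p))  # Shannon entropy in nats
    return entropy / np.log(len(counts))  # divide by maximum entropy ln(K)

rho_consensus = standardized_relative_entropy([247, 0, 0, 0, 0])  # rho = 0
rho_even = standardized_relative_entropy([50, 50, 50, 50, 50])    # rho = 1
```

Because the division is by ln(K) rather than by the observed maximum, ρ is comparable across questions with the same number of response categories.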

The final part of the analysis estimates the relationship between editors’ views and the characteristics of the editors and their journals. Using the unweighted sample observations, we estimate the conditional probability of an editor’s survey response to each question using an ordered probit model. The conditioning variables include country affiliation (US vs. non-US), degree vintage (PhD 2000s vs. pre-2000), and the AIS ranking of the editor’s journal (top 25% vs. lower 75%). We also control for the editor’s gender (female vs. male) and editorial position (associate editor vs. lead editor or co-editor) to allow the conditional response probabilities of the first three groups of editors to depend on the editor’s gender and editorial position. The probit estimates will be free from non-response bias even if editors’ views and response rates are correlated, as long as the correlation is due solely to their common dependence on the conditioning variables.15

The estimated marginal effect of a particular editor characteristic (for example, US versus non-US affiliation) on the probability of response to the survey question provides a measure of the difference in views between the two types of editors. We calculate the predicted difference in the probability of agreement (survey responses 4 or 5) between the two types and the predicted difference in the probability of disagreement (survey responses 1 or 2).16 In our discussion of the results, we report the variables or group characteristics that have the largest impact on opinions for a particular question.

Views of Editors

Common Critiques of the Discipline

Table 3 reports the survey questions according to three categories often considered controversial in economics: common critiques of the discipline, ethics and editorial practices, and the role of prestige and status in publishing. We asked nine questions in the first section, eight of which represent common critiques of the discipline. We begin by assessing editors’ acknowledgment that they play an important role in defining the boundaries of the discipline. This question provides a baseline of agreement and shows the degree to which editors believe that their actions shape the boundaries of the discipline. Here we see a great deal of consensus: a large proportion of editors (81 percent) agree or strongly agree with the statement “Editors play an important role in defining the boundaries of a discipline.” On a scale of 1 (strongly disagree) to 5 (strongly agree), the average response for this question is 4.0, and the relative entropy score of .66 is the lowest among all questions across the three categories examined, indicating relatively more consensus with this statement than with any other.

Table 3.

Mean response, percentage distribution of responses, and relative entropy of responses to each question on editors surveya (1=strongly disagree, 2=disagree, 3=neutral, 4=agree, 5=strongly agree, unless otherwise indicated)

Mean Percentage distribution Relative entropy
1 2 3 4 5 ρ
Common critiques of the discipline
Editors play an important role in defining the boundaries of a discipline 4.0 0.9 3.9 13.7 61.4 20.1 0.66
Markets are the most efficient way to allocate resources under most circumstances 3.6 3.0 8.1 23.4 55.5 10.1 0.75
The pressure to publish has crowded out time that faculty have for service to the profession, including editing 3.5 2.3 19.5 19.7 44.4 14.2 0.85
There are not enough general-interest journals in economics today 2.7 8.2 38.3 31.8 15.6 6.1 0.87
*How important is it to include articles that challenge existing paradigms in the journal for which I am an editor? 2.4 17.8 43.0 27.0 8.2 4.1 0.84
Some authors are critical of “mathiness” (or the reliance on elaborate mathematics to make an argument) in economic writing. Mathiness is a growing problem in economics 2.6 10.9 45.4 17.6 21.1 5.0 0.86
The economics profession currently relies too much on theoretical modeling and not enough on empirical analysis using real-world data 2.7 10.3 43.8 22.1 17.4 6.5 0.88
As a discipline, economics should focus more on experimental approaches 2.7 5.7 33.1 46.1 12.3 2.9 0.77
Interdisciplinary research teams would improve economic knowledge 3.3 4.6 17.6 28.9 36.2 12.7 0.89
Ethics and editorial practices
The use of only one reviewer is not appropriate for most journals 3.7 1.5 19.7 6.0 48.6 24.2 0.77
Editors should avoid sending manuscripts to reviewers who are likely to have a bias for or against a paper a priori 3.8 0.1 12.7 15.9 51.4 20.0 0.76
As a general rule, editors should be allowed to publish in their own journals 2.8n 11.0 29.5 35.5 17.1 6.9 0.91
*How important is it that a submitted article cites articles from your journal in their references? 3.5 0.2 15.3 30.9 40.5 13.2 0.81
Role of prestige and status
*How important is it to have editorial board members from prestigious universities? 3.5 1.5 11.5 30.7 43.6 12.7 0.81
Articles with a prestigious author/co-author are more likely to be accepted for publication 3.6 0.7 15.6 20.0 51.6 12.2 0.77
More articles would be published by scholars from less prestigious institutions if journals used a double-blind rather than a single-blind review process. 2.8n 13.0 30.0 23.2 27.8 6.0 0.93
*Rate the importance of the following as criteria for selecting a reviewer: Status of the reviewer in profession 2.8 8.2 39.6 26.1 21.1 5.0 0.87

a The estimates are calculated from the population-weighted survey responses of 247 economics journal editors. The full survey is available upon request

*Responses for this question are coded as follows: 1 = not at all important, 2 = somewhat important, 3 = important, 4 = very important, 5 = extremely important

+All of the estimated mean responses are statistically significantly different from 3 (the midpoint of the opinion scale) at the 1% significance level except for those means with a superscript n

Although contested in heterodox journals, there is a good deal of consensus with the view that markets are the most efficient way to allocate resources under most circumstances. This question is often asked of economists, but it might be especially interesting following a significant economic downturn. The results here may not be surprising, in that faith in markets is a defining characteristic of economic theory. The average response for this question is 3.6, and the relative entropy of 0.75 is the lowest among the remaining questions in the category, signaling relatively more consensus with this statement than with the remaining statements.

One area of growing concern centers on time stress and the pressure to publish. We wondered whether editors agree that the pressure to publish crowds out time for service to the profession. Although the mean response (3.5) suggests that editors agree with the statement, there is little consensus in their views (relative entropy = .85). Time stress is often mentioned by faculty as a growing challenge, yet editors’ responses show relatively little consensus about the extent to which the pressure to publish crowds out service-related activities.

In contrast, editors agree less with the remaining questions that offer common critiques leveled against the discipline, with the final question in this section drawing a somewhat more neutral response. Concerning the statement that there are not enough general-interest journals in economics today, results show that editors are more likely to disagree, with an average score of 2.7; the relative entropy measure of .87, however, indicates little consensus.

Even less popular with editors is the notion that it is important for journals to include articles that challenge existing paradigms. Editors found this criterion of only some importance (the average score was 2.4 on the importance scale, in which 2 is somewhat important and 3 is important), although there was very little consensus in their responses (relative entropy = .84). In part, the responses may reflect a lack of consensus on just what is meant by “challenge existing paradigms.” In the past several decades, the profession has indeed accepted new approaches to the study of economic phenomena, including behavioral and experimental economics along with survey data analysis.

Editors also tend to disagree with the view that “mathiness” is a growing problem in economics, but again there is not much consensus in the disagreement. Here, the average score is 2.6 with a relative entropy of .86. Similarly, editors tend to disagree with the view that economics relies too much on theoretical modeling and not enough on empirical analysis, but again there is little consensus (average score = 2.7 and relative entropy = .88). Here too, editors may simply believe that, while still a problem, “mathiness” is not a growing problem and that, indeed, the profession is moving away from its penchant for excessive mathiness.

The final two questions addressing common critiques of the discipline concern experimental and interdisciplinary approaches to the study of economics. The view that, as a discipline, economics should focus more on experimental approaches again brings little agreement among economists. The average score of 2.7 indicates low average agreement but slightly more consensus than most other statements in the category (relative entropy = .77). As for the notion that interdisciplinary research teams would improve economic knowledge, editors are, on balance, neutral, with an average score of 3.3. At the same time, the relative entropy score of .89 is the highest of any question in this group, indicating very little consensus. Taken together, the results suggest that editors do not agree with common critiques often leveled against the discipline.

In Table 4, the ordered probit results shed additional light on editors’ responses to common critiques by identifying five questions where group differences are found to be statistically significant.17 The first question reported in this section asks editors if they agree with the statement that markets are the most efficient way to allocate resources under most circumstances. We see that the estimated coefficients on US affiliation and on rank top 25% are both positive and statistically significant. Relative to editors who are not in the US, editors who are in the US tend to agree more with the statement. Editors of the top journals also tend to agree more with the statement than do editors of lower ranked journals. According to the negative coefficient on female, relative to male editors, women editors tend to disagree more with the statement that markets are the most efficient way to allocate resources under most circumstances.18

Table 4.

Ordered probit estimation: common critiques of the discipline

Variable (X)  (1) Markets are the most efficient way to allocate resources under most circumstances; (2) The pressure to publish has crowded out the time that faculty have for service to the profession, including editing; (3) There are not enough general-interest journals in economics today; (4) Some authors are critical of mathiness (or the reliance on elaborate mathematics to make an argument) in economic writing. Is mathiness a growing problem in economics?; (5) Interdisciplinary research teams would improve economic knowledge
Each question is reported in two columns of coefficients, corresponding to the two estimated specifications (with and without the associate editor and female controls)
US affiliation 0.491*** 0.444*** − 0.073 − 0.074 0.047 0.074 0.082 0.122 − 0.119 − 0.096
(0.146) (0.144) (0.140) (0.138) (0.138) (0.137) (0.139) (0.138) (0.1377) (0.1364)
Rank top 25% 0.602*** 0.634*** − 0.326** − 0.352** − 0.411*** − 0.446*** − 0.468*** − 0.499*** − 0.3120** − 0.3316**
(0.162) (0.161) (0.152) (0.151) (0.152) (0.151) (0.153) (0.152) (0.1497) (0.1489)
PhD in 2000s − 0.114 − 0.149 0.130 0.178 0.059 0.104 − 0.100 − 0.054 − 0.172 − 0.151
(0.155) (0.153) (0.150) (0.148) (0.149) (0.147) (0.149) (0.147) (0.1476) (0.1456)
Associate editor 0.115 0.212 0.039 − 0.092 − 0.043
(0.151) (0.146) (0.145) (0.146) (0.1444)
Female − 0.325** 0.196 0.335** 0.378*** 0.199
(0.144) (0.138) (0.138) (0.139) (0.1363)
Parameter Estimate Estimate Estimate Estimate Estimate Estimate Estimate Estimate Estimate Estimate
C1 − 1.766 − 1.679 − 2.043 − 2.260 − 1.497 − 1.648 − 1.433 − 1.511 − 2.008 − 2.058
(0.223) (0.184) (0.246) (0.221) (0.190) (0.155) (0.185) (0.146) (0.2142) (0.1793)
C2 − 0.788 − 0.709 − 0.665 − 0.889 − 0.103 − 0.279 − 0.084 − 0.187 − 1.001 − 1.057
(0.177) (0.127) (0.168) (0.125) (0.164) (0.117) (0.162) (0.116) (0.1695) (0.127)
C3 − 0.009 0.056 − 0.101 − 0.330 0.821 0.631 0.513 0.401 − 0.092 − 0.150
(0.169) (0.118) (0.163) (0.117) (0.169) (0.120) (0.165) (0.118) (0.1618) (0.1167)
C4 1.821 1.866 1.076 0.833 1.709 1.515 1.508 1.380 0.894 0.831
(0.204) (0.165) (0.174) (0.124) (0.193) (0.150) (0.189) (0.145) (0.1697) (0.1238)
LR Statistic 5.53* 4.31 6.10** 7.65** 2.18

Responses are coded in the following manner: 1=strongly disagree, 2=disagree, 3=neutral, 4=agree, 5=strongly agree. The LR statistic tests the null hypothesis that coefficients on associate editor and female are both equal to zero

The symbols *, **, *** denote rejection of the null hypothesis at the 10%, 5%, or 1% significance levels, respectively. Standard errors are in parentheses

The group characteristic associated with the largest difference in opinions for this question about markets is journal rank of the editor. Of non-US editors, those who edit top-ranked journals are about 23 percentage points more likely to strongly agree or agree with the statement than those who edit lower ranked journals.19
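The 23-point figure can be reproduced from the Table 4 estimates for this question. A sketch using the first column's coefficient on rank top 25% (0.602) and third cutpoint (−0.009), under the assumption that the reference editor is a non-US, pre-2000s-PhD male lead/co-editor (all other characteristics set to zero):

```python
# In an ordered probit, P(agree or strongly agree) = 1 - Phi(C3 - x'b),
# where Phi is the standard normal CDF and C3 is the third cutpoint.
# Estimates taken from Table 4 (markets question, first column).
from scipy.stats import norm

beta_rank_top25 = 0.602  # coefficient on rank top 25%
C3 = -0.009              # third cutpoint (C3)

p_agree_top = 1 - norm.cdf(C3 - beta_rank_top25)  # top-quartile journal
p_agree_low = 1 - norm.cdf(C3)                    # lower-ranked journal
diff = p_agree_top - p_agree_low                  # about 0.23 (23 points)
```

The calculation gives a difference of roughly 0.23, consistent with the "about 23 percentage points" reported in the text.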

The estimation results identify two groups of editors—editors of higher ranked journals and editors of lower ranked journals—who differ in their views on time stress. Editors of higher ranked journals tend to disagree more with the statement, “The pressure to publish has crowded out the time faculty have for service to the profession, including editing,” than editors of lower ranked journals. The predicted difference in agreement indicates that editors of the top journals are about 13.5 percentage points less likely to strongly agree or agree with this statement than are editors of lower ranked journals.

The third question in this section showing significant differences in group views asks whether editors agree or disagree with the statement, “There are not enough general-interest journals in economics today.” The estimated ordered probit coefficient on rank top 25% is negative and statistically significant, indicating that editors of top journals are more likely to strongly disagree or disagree with the statement than are editors of lower ranked journals. The statistically significant positive coefficient estimate on female indicates that women editors are more likely to strongly agree or agree with this statement than are male editors. According to the predicted difference in disagreement, editors of top-ranked journals are approximately 16 percentage points more likely to strongly disagree or disagree with the statement than editors of lower ranked journals. For this question as well, the journal rank of the editor produces the largest difference in views.

The next question in this section with significant group differences in responses asks for editors’ views on “mathiness,” a common critique of the discipline. The question states, “Some authors are critical of ‘mathiness’ in economic writing or the reliance on elaborate mathematics to make an argument. Is ‘mathiness’ a growing problem in economics?” The estimated coefficient on rank top 25% is negative and statistically significant, indicating that editors of top journals tend to strongly disagree or disagree more with the statement than do editors of lower ranked journals. The estimated coefficient on female is positive and statistically significant, indicating that women editors tend to agree more with the statement than do male editors. The largest difference in views is again associated with journal rank: editors of top-ranked journals are approximately 18 percentage points more likely to strongly disagree or disagree with the statement than are editors of lower ranked journals.

The only statistically significant difference in editors’ responses to the question on interdisciplinary research teams is associated with the rank of the editor’s journal. The reported value of −0.31 indicates that editors of journals ranked in the top 25 percent are less likely to agree (and more likely to disagree) that interdisciplinary research teams would improve economic knowledge than editors of lower ranked journals. The magnitudes of the predicted differences in agreement and in disagreement between the two groups are similar: editors of higher ranked journals are 10 percentage points more likely to disagree and 13 percentage points less likely to agree than editors of lower ranked journals. The results of this section show that editors do not share these common critiques of the discipline, and this is particularly true of editors of top-ranked journals.

Ethics and Editorial Practices

We provide four questions that address various potential concerns about the editorial practices of journals. Table 3 shows that editors agree with the concerns raised in three of the four questions. Editors tend to be neutral to agreeing that the use of only one reviewer is inappropriate for most journals (average score of 3.7 and entropy of .77), and they tend to agree that they should avoid sending manuscripts to reviewers with a known bias (average score of 3.8 and entropy of .76).20 On average, editors tend to be neutral on the statement that editors should be allowed to publish in their own journals.21 Here, the average score is 3.1. However, there is little consensus in their views (a high relative entropy score of .91): while about 28 percent of editors strongly disagree or disagree with this view, another 43 percent agree or strongly agree. Finally, editors’ sensitivities to potential concerns about editorial practices do not appear to get in the way of realism. Editors tend to believe that it is important/very important that a submitted article cites articles from the editor’s journal in its references (an average score of 3.5 on the scale in which 3 is important and 4 is very important, and an entropy measure of .81).

The ordered probit results in Table 5 show three questions concerning ethics and editorial practices where differences in views are associated with group characteristics. The first asks about the inappropriateness of using only one reviewer. The estimated coefficients on editorial position (associate editor) and the editor’s sex (female) are both positive and statistically significant, indicating that associate editors, relative to lead editors/co-editors, and women editors, relative to male editors, are more likely to strongly agree or agree with this statement. The only other group characteristic showing a statistically significant relationship with views on this question is rank top 25%. Here, the estimated coefficient is negative, indicating that editors of the top journals are more likely to strongly disagree or disagree with this statement. The largest differences in views are associated with journal rank: among male lead editors, those who edit top-ranked journals are about 13 percentage points less likely to strongly agree or agree with this statement than are editors of lower ranked journals.

Table 5.

Ordered probit estimation: ethics and editorial practices

Variable (X) The use of only one reviewer is not appropriate for most journals Editors should avoid sending manuscripts to reviewers who are likely to have a bias for or against a paper a priori How important is it that a submitted article cites articles from your journal in their references?
Coefficient Coefficient Coefficient Coefficient Coefficient Coefficient
US affiliation − 0.157 − 0.140 − 0.296** − 0.300** 0.133 0.117
(0.145) (0.142) (0.143) (0.142) (0.141) (0.139)
Rank top 25% − 0.331** − 0.387** − 0.146 − 0.163 0.822*** 0.829***
(0.156) (0.154) (0.154) (0.153) (0.159) (0.157)
PhD in 2000s − 0.146 − 0.010 − 0.209 − 0.178 0.056 0.055
(0.155) (0.151) (0.153) (0.151) (0.151) (0.149)
Associate editor 0.563*** 0.149 0.090
(0.151) (0.149) (0.147)
Female 0.559*** 0.092 − 0.083
(0.145) (0.141) (0.139)
Parameter Estimate Estimate Estimate Estimate Estimate Estimate
C1 − 1.869 − 2.358 − 2.821 − 2.965 − 2.196 − 2.219
(0.241) (0.220) (0.382) (0.366) (0.302) (0.275)
C2 − 0.657 − 1.199 − 1.318 − 1.457 − 0.722 − 0.745
(0.172) (0.135) (0.182) (0.142) (0.168) (0.124)
C3 − 0.347 − 0.915 − 0.683 − 0.824 0.321 0.296
(0.169) (0.128) (0.169) (0.124) (0.165) (0.117)
C4 1.048 0.383 0.699 0.552 1.570 1.541
(0.176) (0.119) (0.170) (0.120) (0.187) (0.146)
LR statistic 29.53*** 1.50 0.70

Responses are coded in the following manner: 1=strongly disagree, 2=disagree, 3=neutral, 4=agree, 5=strongly agree. The LR statistic tests the null hypothesis that coefficients on associate editor and female are both equal to zero

The symbols *, **, *** denote rejection of the null hypothesis at the 10%, 5%, or 1% significance levels, respectively. Standard errors are in parentheses

The second question in this section on editorial practices asks whether editors should avoid sending manuscripts to reviewers likely to have a bias for or against a particular paper. The coefficient estimate for US affiliation is negative and statistically significant, indicating that editors with a US affiliation tend to disagree more with the statement than editors outside the US. Editors with a US affiliation are about 11 percentage points less likely to strongly agree or agree than are editors outside the US.

The final question in this group showing significant differences in responses by group characteristics asks how important it is that a submitted article cites articles from the editor’s journal in its references. Only journal rank plays a statistically significant role, and here the differences in views are quite large. Editors of the top journals are about 31 percentage points more likely to view this as “extremely important” or “very important” than are editors of lower ranked journals. Taken in total, the differences in views that emerge on ethics and editorial practices show that while editors may have some concerns about the potential fairness of these practices, their concerns vary depending on whether they edit a top-ranked journal and whether they have a US affiliation. Editors of top-ranked journals and editors based in the US are less likely to take issue with some editorial practices raised in our questions that others may view as problematic.

The Role of Prestige and Status in Publishing

The third category, the role of prestige and status in publishing, yields a considerable range of responses. These questions address the status of editorial board members and of potential authors/co-authors, the relationship between double-blind reviews and scholars from less prestigious institutions, and the status of the reviewer as a criterion for selecting reviewers.

The responses in Table 3 show that editors tend to agree that it is important/very important to have an editorial board member from a prestigious institution (average score of 3.5 in our other scale in which 3 is important and 4 is very important) and they also tend to agree (average score of 3.6) that articles with a prestigious author or co-author are more likely to be accepted for publication.22 The entropy scores on these two questions are fairly moderate compared to the other statements but still do not signal very strong consensus. Likewise, editors tend to believe that status of the reviewer in the profession is somewhat important/important as a criterion for selecting a reviewer (average score of 2.8 in our other scale in which 2 is somewhat important and 3 is important). Yet, there is a lack of consensus on this as shown by the entropy score of .87 which is higher than most other questions in the group. In contrast, results from the remaining question seem to suggest that editors tend to disagree (average score of 2.8) with the notion that more articles would be published by less prestigious institutions if journals used double-blind (versus single-blind) review processes. However, Table 3 shows that the estimated mean responses are not statistically significantly different from 3 (the midpoint of the opinion scale). The responses show very little consensus (a high relative entropy score of .93). Here, about 43 percent of editors disagree with this notion while 34 percent agree.23

The ordered probit results in Table 6 identify three questions related to the role of prestige and status in publishing where group characteristics are significantly associated with responses. When asked whether articles with a more prestigious author or co-author are more likely to be accepted for publication, the results show that only the editor’s PhD vintage is statistically significantly associated with the probability of response. Editors receiving their PhD in 2000 or after tend to agree more with this statement than do those who received their PhD before 2000. Editors who received their PhD in 2000 or after are about 14 percentage points more likely to strongly agree or agree than editors whose PhD was received before 2000.

Table 6.

Ordered probit estimation: role of prestige and status

Variable (X) Articles with a prestigious name as an author or co-author are more likely to be accepted for publication More articles would be published from scholars from less prestigious institutions if journals used a double-blind versus single-blind review process Please rate “status of the reviewer in the profession” as a criterion for selecting a reviewer
Coefficient Coefficient Coefficient Coefficient Coefficient Coefficient
US affiliation − 0.143 − 0.157 − 0.106 − 0.114 − 0.369*** − 0.359***
(0.142) (0.141) (0.139) (0.137) (0.139) (0.138)
Rank top 25% 0.253 0.237 − 0.645*** − 0.671*** 0.135 0.148
(0.156) (0.155) (0.154) (0.153) (0.150) (0.149)
PhD in 2000s 0.408*** 0.437*** − 0.214 − 0.151 0.182 0.157
(0.155) (0.153) (0.149) (0.147) (0.148) (0.146)
Associate editor 0.221 0.314** − 0.154
(0.148) (0.145) (0.145)
Female 0.042 0.183 − 0.057
(0.140) (0.137) (0.137)
Parameter Estimate Estimate Estimate Estimate Estimate Estimate
C1 − 2.020 − 2.188 − 1.243 − 1.520 − 1.538 − 1.402
(0.267) (0.240) (0.179) (0.144) (0.183) (0.139)
C2 − 0.771 − 0.939 − 0.234 − 0.521 − 0.292 − 0.159
(0.172) (0.128) (0.162) (0.118) (0.165) (0.117)
C3 − 0.103 − 0.270 0.406 0.104 0.479 0.611
(0.164) (0.118) (0.163) (0.114) (0.166) (0.121)
C4 1.463 1.282 1.570 1.248 1.439 1.565
(0.186) (0.137) (0.192) (0.144) (0.189) (0.154)
LR statistic 2.36 6.73** 1.35

Responses are coded in the following manner: 1=strongly disagree, 2=disagree, 3=neutral, 4=agree, 5=strongly agree. The LR statistic tests the null hypothesis that coefficients on associate editor and female are both equal to zero

The symbols *, **, *** denote rejection of the null hypothesis at the 10%, 5%, or 1% significance levels, respectively. Standard errors are in parentheses

Our next question asks editors to respond to the statement, “More articles would be published by scholars from less prestigious institutions if journals used double-blind versus single-blind review processes.” Ordered probit results show that associate editors’ opinions seem to differ from those of lead editors/co-editors on this question. The coefficient estimate on associate editor is positive, indicating that associate editors tend to agree more with this statement than do lead editors/co-editors. In addition, according to the statistically significant negative coefficient on rank top 25%, editors of top-ranked journals tend to disagree more with this statement than editors of lower ranked journals. Examination of the largest difference shows that editors of top-ranked journals are about 25 percentage points more likely to strongly disagree or disagree with this statement than are editors of lower ranked journals.

We conclude by asking respondents to rate how important the “status of the reviewer in the profession” is as a criterion for selecting a reviewer, choosing among “not at all important,” “somewhat important,” “important,” “very important” or “extremely important.” Only the editor’s country affiliation (US vs. non-US) is statistically significant. The estimated coefficient on US affiliation is negative indicating that US editors tend to place less importance on this criterion than non-US editors. Specifically, editors with a US affiliation are approximately 14 percentage points more likely to rank the reviewer’s professional status as not at all important or somewhat important than are editors outside the US.

Discussion and Conclusion

This study explores the views of editors of economics journals—both in the US and throughout the world—on a variety of issues important to the advancement of knowledge in the field. We examine how the views of editors differ according to the rank of the journal, years since getting a Ph.D., and country affiliation in order to better understand how the editorial process may influence the emergence of new approaches to economic knowledge as well as respond to various criticisms of the discipline.

The results of our survey reveal a discipline that is sensitive to the potential for bias in the editorial review process. Editors are in general concerned about issues related to the editorial process (reliance on a single reviewer, or a biased referee), but these concerns seem to be less important to editors of top-tier journals. In particular, there is general agreement that a single reviewer is not appropriate and that editors should avoid sending manuscripts to reviewers with a known bias either for or against a paper. Editors of top-ranked journals are less likely to see a problem with having only one reviewer, which could be because the top journals set a higher bar in the referee process and may need only one negative referee to reject a paper. Moreover, editors are not persuaded of the value of double-blind reviews as a mechanism for reducing bias. This skepticism most likely reflects the view that there is no such thing as a truly blind review given the prevalence of working paper drafts available online (Pontille and Torny 2014). Overall, editors of top-ranked journals are less likely to view as problematic some of the editorial practices that our survey addressed. Much of this is probably explained by growing pressure at the top journals from publishers and professional associations for citations and faster turnaround times, which may in turn push editors to prioritize expediency over fairness. There is more pressure than ever for economics journals to shorten their time to decision, especially in light of the relatively fast turnaround times at many science journals, where increasing numbers of economists are publishing. This pressure is likely to be greater at the top journals than at other journals since more money is at stake and there is more to lose from top-ranked authors choosing to submit elsewhere.

One of the linchpins of the review process is the notion that scientific achievements are to be judged without reference to the scientists’ personal or social characteristics, including a scientist’s position in the social structure of science itself (Crane 1967, p. 195). While editors (and especially younger editors) generally agree that the status and prestige of an author are important for the journal and the article, the status of the reviewer is not. Editors with a US affiliation are even less likely than editors outside the US to place a high weight on the reviewer’s status. With the proliferation of journals and growing demands on economists’ time, editors are likely to place relatively less weight on the status of reviewers because it has become increasingly difficult to find highly published scholars who are willing and able to serve as reviewers. To explore this assertion, we examined journal statistics on time to decision and found that journals with US-based editors take slightly longer, on average, to reach a decision than journals with editors outside the US.24 Assuming that finding willing reviewers is one determinant of longer decision times, this would explain why US-based journals place relatively less emphasis on reviewer status.

Also of note are the results examining responses to common critiques of the discipline. Economics has long been criticized for exhibiting a narrow methodological approach that favors theoretical modeling over empirical testing using real-world data. As Klamer and Colander reported in their landmark study The Making of an Economist, only 3 percent of graduate students in top US programs saw “having a thorough knowledge of the economy” as important for their academic success in economics while 57 percent believed that “excellence in mathematics” was very important for success in their career (Klamer and Colander 1990, p. 18; see also Blaug 1997; Snowdon and Vane 1999). This critique of economics and its penchant for producing “idiot savants” prompted the AEA in 1988 to establish a Commission on Graduate Education in Economics, but its findings did little to quell concerns, which continue to be expressed by Nobel Laureates on the left, on the right, and in the middle of the ideological spectrum.25

Results further indicate that editors in general have little use for such critiques. They do not share the view that “mathiness” is a problem in the discipline, nor the notion that economics relies too much on theoretical modeling. Editors of the top-ranked journals tend to disagree more than the rest that mathiness is a problem. This finding is consistent with results in Bigo and Negru (2014) showing that economists approve of the emphasis on mathematics in the profession and, if anything, favor increasing the rigor of mathematical models rather than reducing the emphasis on math.

The top journals in economics have seen substantial changes over the past six decades in the share of theoretical versus empirical articles that they publish. As Daniel Hamermesh shows, the share of purely theoretical articles has plummeted since the early 1970s. Instead, journals have increasingly published empirical articles using readily available data or original data generated by the authors (Hamermesh 2013). Hence, the lack of agreement on theoretical versus empirical models may reflect an awareness of the new reality in methodological approaches to economics.

As for new methodological approaches, editors are not particularly enthusiastic about them either, at least at this point. They do not agree that economics should focus more on experimental approaches, nor are they enthusiastic proponents of interdisciplinary research. The growth of journals devoted to experimental research at least attests to the increased acceptance of that approach (Colander 2015). While editors are more neutral about the notion that interdisciplinary research teams might improve economic knowledge, no comparable growth in interdisciplinary journals has emerged.

Our results show that little has shaken the resolve of economists—or at least economics editors—to lean into the virtues of a free market. Editors in our study show considerable agreement that the market is an efficient way to organize production. In fact, no question in this set of critiques garnered more agreement from the editors. Moreover, editors of top-ranked journals were even more likely to agree that markets are the most efficient way to organize production. Relatedly, perhaps, editors appeared to attach lower priority to articles that challenge existing paradigms.

Overall, the results suggest that the profession has not taken many of the critiques of the economics profession too much to heart. Manuscript submissions providing a negative assessment of the discipline may receive a less than enthusiastic nod from editors who profess to be looking for novel and innovative papers. However, our results also suggest that the increasing internationalization of the editorial gatekeepers may help to nudge the academic system along.

Footnotes

1

For a discussion of the increasing importance of publishing in tenure and promotion decisions as well as the pressure to publish for more senior faculty in economics see van Dalen (2020). Heckman and Moktan (2020) analyze the importance of publications in the top five economics journals on tenure decisions. Their evidence shows that publication in these top journals has a powerful influence on tenure decisions.

2

See, for example, Catriona Watson of Rethinking Economics, who critiqued the 2020 prize selection with the statement “The discipline of economics is depressingly out of touch” (Walker 2020), and Barkley Rosser (2020), who wrote “This is a non-controversial, almost boring, and certainly apolitical award, the committee playing it safe in this tumultuous year.” These statements are from media sources; at the time of writing (Oct. 2020), no critiques had yet been published in scholarly journals.

3

See the AAUP, 2019-2020 Faculty Compensation Survey Results, https://www.aaup.org/2019-20-faculty-compensation-survey-results. Also see the Web of Science Group, Journal Citation Reports for 2019 on economics journals, https://clarivate.com/webofsciencegroup/wp-content/uploads/sites/2/dlm_uploads/2019/08/JCR_Full_Journal_list140619.pdf. The estimate of the number of economics faculty at PhD-granting institutions includes only tenure/tenure-track faculty.

4

Few surveys of editors or reviewers in economics have been done, and most have concentrated on editorial policies rather than the views of editors on a variety of topics. Mackie (1998) surveyed referees of seven journals in economics using five open-ended questions. See also Marshall (1959) and Coe and Weinstock (1967). For studies examining the relationship between editors and authors and their interconnections, see Laband and Piette (1994), Medoff (2003), Brogaard et al. (2014), and Colussi (2018). For discussion of the concentration of editors in a small number of institutions, see Yotopoulos (1961), Hodgson and Rothman (1999), Kocher and Sutter (2001), Goyal et al. (2006), Fölster (1995), and Colussi (2018).

5

Robert M. Solow, in his foreword to Secrets of Economics Editors, suggests that these questions emerge in a number of the chapters written by preeminent editors of economics journals. See Michael Szenberg and Lall Ramrattan (2014), xii.

6

See Michael Szenberg and Lall Ramrattan (2014), xii.

7

Much has been written about the influence of top journals on institutions and the faculty from those institutions. See, for example, Colussi (2018), Hamermesh (2013), Goyal et al. (2006), and Hodgson and Rothman (1999).

8

The survey was administered in November 2015.

9

The AIS is the ratio of a journal’s citation influence to the size of the journal’s article contribution, and it is calculated by dividing a journal’s Eigenfactor Score (a rating of the total importance of a journal) by the number of articles in the journal over five years, normalized as a fraction of all articles in all publications. The mean AIS is one; a score greater than one means that the articles in the journal have above-average influence and a score less than one represents below-average influence.
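As a back-of-the-envelope sketch of the ratio described in this footnote (the function name and the journal's numbers are hypothetical, and the scaling constant that normalizes the mean AIS to one is omitted):

```python
def article_influence(eigenfactor, journal_articles, all_articles):
    # AIS is proportional to the journal's Eigenfactor Score divided by
    # the journal's share of all articles over the five-year window.
    article_share = journal_articles / all_articles
    return eigenfactor / article_share

# Hypothetical journal: Eigenfactor 0.02, publishing 500 of 1,000,000 articles
print(round(article_influence(0.02, 500, 1_000_000), 6))  # 40.0
```

A journal whose citation influence is large relative to its article output thus scores above the mean, and vice versa.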

10

The role of editors who make up an editorial team varies among journals. While editors are typically responsible for “desk-reject” decisions, depending on how the journal is managed, the editor may make all decisions regarding which articles to accept or reject or may assign an article to an associate or assistant editor. Because of the growing use of associate editors and their involvement in the review process, we survey editors, co-editors, and associate editors. In our ordered probit estimations, we include associate editors as a separate control group to identify how their views differ from those of editors and co-editors.

11

Response rates in economic surveys include 35.4% in May et al. (2014); 34.4% in Alston et al. (1992); 30.8% in Fuller and Geide-Stevenson (2003), and 26.6% in Klein and Stern (2005).

12

Note that a few questions were answered on a slightly different scale in which 1=not at all important and 5=extremely important, as shown in our tables. Discussion of results is based on (disagree/agree) unless otherwise noted in the text.

13

If editors’ opinions and their decisions to respond are independent after controlling for their editorial position and their sex, then the percentage distributions of the weighted responses in Table 3 are unbiased estimates of the population distributions of all editors.

14

Following the procedure described in Frey et al. (1984) and Block and Walker (1988), the standardized relative entropy (ρ) is defined as the actual entropy for each question divided by the maximum possible entropy over the possible responses for each question. Here, entropy is computed as the sum over the response categories (i=1 to 5) of the probability pi of response category i multiplied by the natural logarithm of pi; that is, entropy = Σ pi × ln(pi). This sum is largest in absolute value at the uniform distribution: with 5 response categories, pi = 0.20 for all i and Σ pi × ln(pi) = ln(0.20) ≈ −1.61. Thus, the standardized relative entropy (ρ) for each question in the survey is calculated as entropy/(−1.61), so ρ ranges from 0 (complete consensus) to 1 (responses spread uniformly across categories).
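For concreteness, the calculation in this footnote can be sketched as follows (a minimal illustration; the function name and the example response distributions are ours, not drawn from the survey data):

```python
import math

def relative_entropy(p):
    # Standardized relative entropy: Σ pi·ln(pi) divided by its value at the
    # uniform distribution, ln(1/len(p)) (≈ −1.61 for five categories).
    actual = sum(pi * math.log(pi) for pi in p if pi > 0)
    max_entropy = math.log(1 / len(p))
    return actual / max_entropy

# Uniform responses (no consensus at all): rho = 1
print(round(relative_entropy([0.20] * 5), 2))  # 1.0

# Responses concentrated in one category (strong consensus): rho near 0
print(round(relative_entropy([0.90, 0.04, 0.03, 0.02, 0.01]), 2))  # 0.28
```

Higher values of ρ thus correspond to less consensus among editors, as used throughout the discussion of Table 3.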

15

If editors’ opinions and their decisions to respond to the survey are independent, after controlling for US affiliation, journal rank, PhD vintage, editorial position and sex, then maximum likelihood estimation of the ordered probit model provides consistent estimates of the conditional response probabilities of all editors in the population (given that the response choice of the editor, conditional on the editor’s characteristics, follows an ordered probit specification.).

16

The predicted differences in probabilities of agreement and disagreement calculated from the ordered probit coefficients in Tables 4 through 6 are available upon request.
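To illustrate how such predicted differences follow from the reported estimates, the sketch below reproduces the roughly 13-point gap for the single-reviewer question using the cut point C3 and the rank top 25% coefficient from the first column of Table 5. This is an illustration only, holding all other covariates at zero (i.e., a non-US, pre-2000-PhD, male lead editor); the published predicted differences may be computed differently.

```python
from math import erf, sqrt

def Phi(z):
    # Standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_agree(xb, c3):
    # In an ordered probit, P(agree or strongly agree) is the probability
    # that the latent index x'beta + e exceeds the third cut point C3.
    return 1.0 - Phi(c3 - xb)

C3 = -0.347          # third cut point, Table 5, column 1
beta_rank = -0.331   # coefficient on rank top 25%, Table 5, column 1

gap = p_agree(0.0, C3) - p_agree(beta_rank, C3)
print(round(gap, 2))  # 0.13, matching the ~13-point difference in the text
```

The same construction, with the appropriate cut points and coefficients, yields the predicted differences reported for the other questions.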

17

Tables 3 through 5 provide the ordered probit estimation results for those questions that showed a statistically significant association between at least one editor characteristic (US affiliation, journal rank, or Ph.D. vintage) and editors’ response to the questions on common critiques, ethics and editorial practices, and the role of prestige and status in publishing.

18

These results are consistent with the results of May et al. (2014) where they found that women economists were less likely to agree that markets are the most efficient way to allocate resources.

19

The complete set of ordered probit estimates of the associations between country affiliation, journal rank and degree vintage of the editor and editors’ responses to the questions on common critiques of the discipline, ethics and editorial practices, and the role of prestige and status in the profession is available from the authors.

20

The probit results in Table 6 also show that associate editors are more likely to agree with this statement than editors/co-editors—one of only two statements where editors and associate editors have a statistically significant difference in views.

21

The estimated mean responses in Table 3 show that this statement is one of only two where the responses are not statistically significantly different from 3 (the midpoint of the opinion scale) at the 1% significance level.

22

These results are consistent with the overall finding of Colussi (2018) who, examining editors of the AER, JEP, ECA, and AJE, finds that 43% of all papers analyzed are authored by at least one scholar who is connected, either through co-authorship, the same PhD institution, a PhD advisor, or current affiliation, to at least one editor at the time of publication.

23

The probit results in Table 6 also show that associate editors are more likely to agree with this statement than editors/co-editors—one of only two statements where editors and associate editors have a statistically significant difference in views.

24

The calculations of US-based and non-US-based economics journal averages are based on data found in several blogs on journal acceptances, rejections, and time to decision. On average, US-based journals take 3.4 months until the first response, while non-US-based journals take 2.7 months. These calculations and the underlying data on time to decision are found at https://figshare.com/articles/dataset/Econ_Journals_xlsx/13150748.

25

See, for example, Paul Krugman (2009) “… predictive failure was the least of the field’s problems. More important was the profession’s blindness to the very possibility of catastrophic failures in a market economy… the economics profession went astray because economists . . . mistook beauty, clad in impressive-looking mathematics, for truth”; Milton Friedman who stated “economics has become increasingly an arcane branch of mathematics rather than dealing with real economic problems” (Snowdon and Vane 1999, p. 137); and Mark Blaug who stated “Modern economics is sick . . . Economists have converted the subject into a sort of social mathematics in which analytical rigor is everything and practical relevance is nothing” (Blaug 1997, p. 3).

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Alston Richard M., Kearl J. R., Vaughan Michael B. Is there a Global Consensus Among Economists in the 1990's? American Economic Review: Papers and Proceedings. 1992;82(2):203–209. [Google Scholar]
  2. American Association of University Professors (AAUP), 2019-2020 Faculty Compensation Survey Results. https://www.aaup.org/2019-20-faculty-compensation-survey-results.
  3. Bigo Vinca, Negru Ioana. Mathematical modeling in the wake of the crisis: A blessing or a curse? What does the economics profession say? Cambridge Journal of Economics. 2014;38(2):329–347. doi: 10.1093/cje/bet063. [DOI] [Google Scholar]
  4. Blaug, Mark. 1997. Ugly currents in modern economics. Options Politiques 18(17): 3–8. Reprinted in Mäki, U. (ed.). 2002. Fact and fiction in economics: Models, realism, and social construction. Cambridge University Press.
  5. Block Walter, Walker Michael. Entropy in the Canadian economics profession: Sampling consensus on the major issues. Canadian Public Policy. 1988;14(2):137–150. doi: 10.2307/3550573. [DOI] [Google Scholar]
  6. Brogaard Jonathan, Engelberg Joseph, Parsons Christopher A. Networks and productivity: Causal evidence from editor rotations. Journal of Financial Economics. 2014;111(1):251–270. doi: 10.1016/j.jfineco.2013.10.006. [DOI] [Google Scholar]
  7. Card David, DellaVigna Stefano. What Do Editors Maximize? Evidence from Four Economics Journals. Review of Economics and Statistics. 2020;102(1):195–217. doi: 10.1162/rest_a_00839. [DOI] [Google Scholar]
  8. Card David, DellaVigna Stefano, Funk Patricia, Iriberri Nagore. Are Referees and Editors in Economics Gender Neutral? Quarterly Journal of Economics. 2020;135(1):269–327. doi: 10.1093/qje/qjz035. [DOI] [Google Scholar]
  9. Coe Robert K, Weinstock Irwin. Editorial Policies of Major Economic Journals. The Quarterly Review of Economics and Business. 1967;7(4):37–43. [Google Scholar]
  10. Colander David. Intellectual incest on the Charles: Why economists are a little bit off. Eastern Economic Journal. 2015;41:155–159. doi: 10.1057/eej.2014.78. [DOI] [Google Scholar]
  11. Colussi Tommaso. Social ties in academia: A friend is a treasure. The Review of Economics and Statistics. 2018;100(1):45–50. doi: 10.1162/REST_a_00666. [DOI] [Google Scholar]
  12. Crane Diana. The gatekeepers of science: Some factors affecting the selection of articles for scientific journals. The American Sociologist. 1967;2(4):195–201. [Google Scholar]
  13. Fölster Stefan. The perils of peer review in economics and other sciences. Journal of Evolutionary Economics. 1995;5(1):43–57. doi: 10.1007/BF01199669. [DOI] [Google Scholar]
  14. Frey Bruno, Pommerehne Werner W, Schneider Friedrich, Gilbert Guy. Consensus and dissension among economists: An empirical inquiry. American Economic Review. 1984;74(5):986–994. [Google Scholar]
  15. Fuller Dan, Geide-Stevenson Doris. Consensus Among Economists: Revisited. The Journal of Economic Education. 2003;34:369–387. doi: 10.1080/00220480309595230. [DOI] [Google Scholar]
  16. Goyal Sanjeev, van der Leij Marco J, Moraga-González Jose Luis. Economics: An emerging small world. Journal of Political Economy. 2006;114(2):403–412. doi: 10.1086/500990. [DOI] [Google Scholar]
  17. Hamermesh Daniel S. Six decades of top economics publishing: Who and how? Journal of Economic Literature. 2013;51(1):162–172. doi: 10.1257/jel.51.1.162. [DOI] [Google Scholar]
  18. Heckman James J, Moktan Sidharth. Publishing and promotion in economics: The tyranny of the top five. Journal of Economic Literature. 2020;58(2):419–470. doi: 10.1257/jel.20191574. [DOI] [Google Scholar]
  19. Hodgson Geoffrey M, Rothman Harry. The editors and authors of economics journals: A case of institutional oligopoly? The Economic Journal. 1999;109(453):F165–F186. doi: 10.1111/1468-0297.00407. [DOI] [Google Scholar]
  20. Web of Science Group. 2019. 2019 Journal Citation Reports Full journal list. https://clarivate.com/webofsciencegroup/wp-content/uploads/sites/2/dlm_uploads/2019/08/JCR_Full_Journal_list140619.pdf, accessed February 1, 2021.
  21. Klamer, Arjo, and David Colander. 1990. The making of an economist. Westview Press.
  22. Klein Daniel B., Stern Charlotta. Professors and their politics: The policy views of social scientists. Critical Review. 2005;17:257–303. doi: 10.1080/08913810508443640. [DOI] [Google Scholar]
  23. Kocher Martin G, Sutter Matthias. The institutional concentration of authors in top journals of economics during the last two decades. The Economic Journal. 2001;111(472):F405–F421. doi: 10.1111/1468-0297.00637. [DOI] [Google Scholar]
  24. Krugman, Paul. 2009. How Did Economists Get It So Wrong? New York Times Magazine, September 2.
  25. Laband David N, Piette Michael J. Favoritism versus search for good papers: Empirical evidence regarding the behavior of journal editors. Journal of Political Economy. 1994;102(1):194–203. doi: 10.1086/261927. [DOI] [Google Scholar]
  26. Lundberg Shelly, Stearns Jenna. Women in economics: Stalled progress. Journal of Economic Perspectives. 2019;33(1):3–22. doi: 10.1257/jep.33.1.3. [DOI] [Google Scholar]
  27. Mackie, Christopher. 1998. Canonizing economic theory: How theories and ideas are selected in economics. M.E. Sharpe.
  28. Marshall Howard D. Publication policies of the economic journals. The American Economic Review. 1959;49(1):133–138. [Google Scholar]
  29. May Ann Mari, McGarvey Mary G, Whaples Robert. Are disagreements among male and female economists marginal at best? A survey of AEA members and their views on economics and economic policy. Contemporary Economic Policy. 2014;32(1):111–132. doi: 10.1111/coep.12004. [DOI] [Google Scholar]
  30. Medoff Marshall H. Editorial favoritism in economics? Southern Economic Journal. 2003;70(2):425–434. doi: 10.2307/3648979. [DOI] [Google Scholar]
  31. Pontille, David, and Didier Torny. 2014. The Blind Shall See! The question of anonymity in journal peer review. Ada: A Journal of Gender, New Media, and Technology. Available at https://adanewmedia.org/2014/04/issue4-pontilletorny/.
  32. Proops John LR. Entropy, information and confusion in the social sciences. Journal of Interdisciplinary Economics. 1987;1(4):225–242. doi: 10.1177/02601079X8700100403. [DOI] [Google Scholar]
  33. Rosser, Barkley. 2020. “In the face of total turbulence, go totally conventional for the nobel prize,” blog. https://heterodox.economicblogs.org/angry-bear/2020/rosser-turbulence-nobel-prize.
  34. Snowdon, Brian, and Howard R. Vane (eds.). 1999. Conversations with leading economists: interpreting modern macroeconomists. Edward Elgar.
  35. Solow, Robert M. 2014. Foreword. In Szenberg, Michael, and Lall Ramrattan (eds.), Secrets of Economics Editors. Cambridge, MA: MIT Press.
  36. Szenberg, Michael and Lall Ramrattan, eds. 2014. Secrets of Economics Editors. Cambridge, MA: MIT Press.
  37. van Dalen Hendrik P. How the publish-or-perish principle divides a science: The case of economists. Scientometrics. 2020 doi: 10.1007/s11192-020-03786-x. [DOI] [Google Scholar]
  38. Walker, Andrew. 2020. “Nobel: US auction theorists win Economics Prize,” BBC News. https://www.bbc.com/news/business-54509051.
  39. Yotopoulos Pan A. Institutional affiliation of the contributors to three professional journals. The American Economic Review. 1961;51(4):665–670. [Google Scholar]

Articles from Eastern Economic Journal are provided here courtesy of Nature Publishing Group
