2025 Aug 15;63(1):e70012. doi: 10.1111/cars.70012

Prestige at Play: University Hierarchies and the Reproduction of Funding Inequalities

Julien Larregue, Alice Pavie
PMCID: PMC12801375  PMID: 40815578

ABSTRACT

This article examines the relationship between university prestige, disciplinary cultures, and the (re)production of funding inequalities in the humanities and social sciences. We combine qualitative and quantitative methods by analyzing: (1) data on 56,680 successful and unsuccessful grant applications submitted to the Canadian Social Sciences and Humanities Research Council; (2) 43 interviews with past members of review committees, notably in economics, history, sociology, and political science. Our findings show that university affiliations significantly influence funding allocation: even after controlling for other factors, scholars at more prestigious and larger institutions are more likely to secure grants, and for larger amounts. For the Insight grants, applicants affiliated with U3 universities receive, on average, nearly $20,000 more than their colleagues from institutions outside the U15. This effect is strongest in disciplines where scientific quality is clearly defined and tightly linked to institutional status. In contrast, in disciplines where the definition of merit is more ambiguous and debated, evaluators rely less on university affiliation, and prestige plays a diminished role. These divergences highlight the need to distinguish between the formal, general norms adopted by funding agencies and the unwritten, situated norms that review committees rely on to evaluate and rank applications within their respective fields.

Keywords: competition, funding, grant, social sciences, university hierarchies

1. Introduction

The stratification of the academic field has been a mainstay of research in the sociology of science. A minority of universities and researchers concentrate the bulk of the resources within the scientific field, whether these are defined in terms of symbolic, economic, or social capital (Brint and Carr 2017; Clauset et al. 2015; Nielsen and Andersen 2021; Velloso et al. 2004). In addition to this unequal distribution, a well‐known Matthew effect is also at play: as empirically demonstrated by Harriet Zuckerman (1977) in her classic study of Nobel laureates and then further elaborated upon by Robert K. Merton (1968, 1988), researchers and institutions that have accumulated more scientific capital tend to be disproportionately rewarded for their work, while less advantaged ones get less credit than their actual contributions might deserve. In other words, early success increases the likelihood of future success, further reinforcing existing hierarchies.

While there is a growing body of evidence showing that funding is also subject to concentration phenomena and Matthew effects (on the Netherlands, see Bol et al. 2018; on China, see Zhi and Meng 2016), the precise extent of institutional inequalities in the distribution of scientific grants remains somewhat elusive. One important limitation in the literature lies in the fact that most research only considers successful grant applications (Bellotti et al. 2022; see, for instance, Lauer and Roychowdhury 2021; Nakhaie et al. 2023), with the consequence that measures and explanations are imprecise at best. To date, only a handful of studies have been able to access complete data from funding agencies (see for instance Bol et al. 2018; Burns et al. 2019; Larregue and Nielsen 2024; Steinþórsdóttir et al. 2020; Witteman et al. 2019). Yet such data are essential: without clear knowledge of the initial pool of successful and unsuccessful applications, it is simply impossible to identify the origins of possible discrepancies in the distribution of grants across scholars and institutions. In particular, we cannot assess whether observed inequalities are due to the fact that some groups apply at lower rates relative to their size, whether they result from the practices of the evaluation committees (Lamont 2009), or both. Complete data are also needed to distinguish analytically between success rates and the size of the awarded grants: potential inequalities may stem not only from the number of grants awarded to differently situated scholars, but also from the amounts awarded.

Another important limitation lies in the analytic disconnection between quantitative and qualitative approaches to scientific inequalities. Quantitative analyses of funding distribution tend to approach science as a homogeneous space from which we can derive generalizations (see for instance Bol et al. 2018), even though we know that it is fragmented and the object of struggles (Bourdieu 1975). While this macro‐level work is important for documenting broad dynamics, it is equally crucial to account for how these patterns may vary across fields. Although this latter aspect has been thoroughly studied from a qualitative angle (Lamont 2009), more research is needed to understand how the evaluation culture of disciplines relates statistically to the (re)production of funding inequalities (Larregue and Nielsen 2024).

In this article, we combine an original dataset of 56,680 grant applications submitted to the Social Sciences and Humanities Research Council (hereafter SSHRC) between 2000 and 2021 with 43 interviews with former committee members to analyze funding inequalities among Canadian universities. We address three main questions: (1) How does funding success differ between institutions? (2) Do the awarded amounts vary accordingly? (3) Are these patterns consistent across disciplines? We provide evidence that university affiliations play a predominant role in funding allocation: even when accounting for other factors, scholars employed at more prestigious and larger institutions are more likely to obtain grants, and for larger amounts. Importantly, our findings show that this effect varies across scientific fields: while disciplines like management or economics exhibit clear elitist patterns, university hierarchies play a somewhat less important role in humanities‐oriented disciplines like history, anthropology, or fine arts.

Drawing on insights from the sociology of science and the sociology of quantification (Bourdieu 1975; Espeland and Stevens 2008), we argue that grant reviewers use university affiliation as a judgment device to stabilize the uncertainty inherent in the evaluation of grant applications (Roumbanis 2017). This uncertainty manifests along two dimensions: at the applicant level, where proposed research involves promises that are inherently difficult to verify; and at the committee level, where evaluators may be uncertain about the limits of their own expertise to assess proposals. In such contexts, university affiliation conveys not only symbolic capital, but also a set of readily interpretable and institutionally trusted signals (Kharchenkova and Velthuis 2018). As a result, applicants affiliated with more prestigious institutions receive higher scores for their CVs, thus improving their overall chances of securing funding. This dynamic is especially pronounced in disciplines with strict and consensual definitions of scientific quality, like economics, where university affiliation aligns closely with other recognized signals of worth. Importantly, focusing on applications’ generic characteristics—university affiliation, number and type of publications, prior funding, and so forth—is also a way for reviewers to ensure their commensurability (Espeland and Stevens 1998). Faced with the challenge of ranking singular, seemingly incommensurable applications, committees lean on transversal signals of past achievement as proxies for future performance. As with credit scores in finance, these serve as seemingly “dispassionate, impartial and objective” (Fourcade and Healy 2017, 25) measures of the applicants’ value.

2. The Work of Evaluating Grant Applications

Extensive research has been devoted to understanding what factors influence the peer‐review process in the context of scientific funding. A famous experiment conducted at the US National Science Foundation concluded that it largely depended on chance and, in particular, on who was appointed as reviewer (Cole et al. 1981). While chance clearly plays a role in funding outcomes, structural mechanisms such as cumulative advantage often reinforce and amplify initial disparities over time. As DiPrete and Eirich (2006) explain, a strict cumulative advantage process implies that current resources—such as past publications or prior grants—increase one's chances of securing future ones. In contrast, a cumulative disadvantage may result from prolonged exposure to a lower‐status position, leading to both direct and interaction effects on outcomes across the career. For instance, individual characteristics and institutional affiliations (e.g., gender or university prestige) may influence both initial opportunities and the long‐term returns on acquired resources. A researcher affiliated with a less prestigious institution may be penalized not only due to perceptions about institutional quality but also due to material constraints such as reduced research time or administrative support. An article on the US National Institutes of Health research awards thus found that “working at an institution with the most NIH funding (ranked 1 to 30 in total grant funding) increased the R01 award probability by 9.7 percentage points” (Ginther et al. 2011, 1018). Though each application might be perceived as a singular evaluation, over time, the symbolic credit, procedural knowledge, and institutional support accumulated by some individuals and organizations create a system that structurally favors them in grant competitions—a dynamic that clearly reflects the operation of cumulative advantage mechanisms.

The literature also makes it clear that disciplines and research domains influence and mediate evaluation processes. This is visible in the literature on gender inequalities in funding. For instance, Larregue and Nielsen (2024) have studied the workings of an interdisciplinary social science committee to show how gender inequalities in scientific funding are partly reproduced and mediated by knowledge hierarchies. Compared to other projects, research topics and methods coded as feminine were more likely to be discredited by reviewers. Similarly, while women are significantly less likely to be funded by the Canadian Institutes of Health Research, the extent of this disadvantage varies across research domains (Burns et al. 2019; Tamblyn et al. 2018). A natural experiment indicated that this gender gap might be most consequential when evaluators focus on applicants’ CVs instead of their projects (Witteman et al. 2019).

The capacity of some disciplines to develop their own, autonomous definitions of scientific quality plays an important role in how peer review separates the wheat from the chaff (Lamont 2009). For instance, the so‐called superiority of economists (Fourcade et al. 2015), which translates into a strict ranking of journals and institutions as well as in widely shared views on the unequal worth of different methods and research areas, leads to very clear outcomes: projects and scholars that deviate from this orthodoxy are much less likely to be funded. Hence, funding distribution is not only a material mechanism but a symbolic process through which scholars and fields of research are ranked and given a certain value.

Because such orderings are context‐dependent and may vary across disciplines, it is unclear how generic factors like university affiliations (and their associated prestige) factor into these processes. While some studies report clear correlations (Liao 2021), others do not find institutional prestige to be a decisive factor in grant outcomes (Bornmann and Daniel 2006). In Canada, it has been found that the prestige of universities influences the size of grants among successful SSHRC beneficiaries (Nakhaie et al. 2023), but it is unclear whether this pattern is stable across disciplines or whether it matters during the evaluation process. In her ethnographic study of how US panels distribute fellowships and grants, Lamont (2009, 5) observed that “Evaluators are most concerned with disciplinary and institutional diversity, that is, ensuring that funding not be restricted to scholars in only a few fields or at top universities.” However, committee members’ explicit commitment to diversity might not necessarily translate into increased funding opportunities for applicants from less prestigious universities; there could be a gap between individual reviewers’ meritocratic beliefs and the aggregate result of their work.

Moreover, we can expect institutional prestige to play a different role, and have a different meaning, across disciplines. If excellence and diversity are indeed “additive considerations” (Lamont 2009, 5) for grant reviewers, diverging conceptions of how scientific excellence relates to institutional placement might influence review processes. In this article, we suggest that the role of university prestige partly depends upon the weight that reviewers give to the applicants’ CV versus their project. While SSHRC rules explicitly state what the respective weight of the CV and project should be, in practice committees implement discipline‐specific practices that can give precedence to one or another.

3. Methods and Data

3.1. Funding Data

Our investigations are grounded in a database of 56,680 grant applications submitted to the Social Sciences and Humanities Research Council between 2000 and 2021. This dataset, obtained through a data sharing agreement, includes information on the submission year, the language of the application, the primary discipline of the project, the outcome of each proposal (acceptance or rejection), the scores given to applications for each of the main evaluation criteria, and the amount awarded when successful. The data include applications to the following three funding programs:

  • Standard Research Grants (2000–2011) were valued at up to $250,000 over three years. The success rate over the studied period is 38%.

  • Insight Development Grants (2012–2021) are valued between $7,000 and $75,000 over one to two years. The success rate over the studied period is 38%.

  • Insight Grants (2012–2021) can be valued at up to $400,000 over two to five years. The success rate over the studied period is 34%.

Additionally, the dataset provides information on the institution, gender, and age of all applicants at the time of submission. Only the main applicant was considered for analysis. The main applicant is the one formally responsible for the submission and administrative coordination of the project, and is typically the main contributor to the design, writing, and overall direction of the proposal. Moreover, we only have information about co‐applicants when they themselves appear as the main applicant on another application. This focus introduces certain limitations: it does not capture the potential influence of team composition, interdisciplinarity, or collaborative dynamics that may also affect funding outcomes.

Using the initial dataset, we constructed several additional independent variables. At the applicant level, we calculated the total number of applications that a given professor had submitted, the number of SSHRC grants obtained in the past, and the size of the team for each project. At the university level, we further coded and included the geographical location (province), the primary working language (English, French, bilingual), the level of prestige (categorized as U3, U12, or non‐U15), and the size (number of students). For this last variable, the following classification was applied (a brief coding sketch follows the list):

  • Large Universities (20,000+ students)—Examples: University of Toronto, University of British Columbia, McGill University.

  • Medium‐Sized Universities (10,000–20,000 students)—Examples: Queen's University, University of New Brunswick, Wilfrid Laurier University.


  • Small Universities (fewer than 10,000 students)—Examples: Acadia University, Bishop's University, Mount Allison University.

  • Very Small Universities (fewer than 1,000 students)—Examples: University of Trinity College, Tyndale University, Pontifical Institute of Mediaeval Studies.
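As a minimal illustration of this coding step, the size bands above can be reproduced with base R's cut() function; the data frame and column names below are hypothetical.

    # Sketch of the university-size coding, assuming a data frame `universities`
    # with a numeric `enrolment` column (both names are hypothetical).
    universities$size <- cut(
      universities$enrolment,
      breaks = c(-Inf, 1000, 10000, 20000, Inf),
      labels = c("Very small", "Small", "Medium", "Large"),
      right  = FALSE  # boundary choice: exactly 20,000 students counts as "Large"
    )
    table(universities$size)  # check the distribution across the four bands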

Following these various steps, we conducted a binomial logistic regression to estimate the effects of the following 11 independent variables on the funding success of SSHRC applicants: gender of the main applicant, age of the main applicant, total number of SSHRC applications of the main applicant, team size of the project, funding program, year of application, university prestige, university province, university language, university size, and language of the project. Our dependent variable is a binary measure of whether a given application received funding. The regression was conducted in R using the glm function.
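A minimal sketch of this specification is shown below. The data frame and variable names are hypothetical, and the predictors mirror those reported in Table 1 (with prior funding standing in for the applicant's grant history); this is an illustration of the modeling step, not the authors' exact code.

    # Sketch of the success model: one row per application in a hypothetical
    # data frame `applications`; `funded` is a 0/1 outcome.
    model <- glm(
      funded ~ gender + age + prior_funding + team_size + program + year +
        university_rank + province + university_language + university_size +
        project_language,
      data   = applications,
      family = binomial(link = "logit")
    )
    summary(model)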

We then focused our analysis on 30 research‐active universities—those with the highest share of SSHRC applications during the period. We compared their overall success rates, followed by the individual applicants' chances of success, controlling for the previously mentioned variables. We also examined the average grant amounts awarded to applicants at each university.

Finally, we analyzed success probabilities by university ranking within each discipline, using average marginal effects derived from a logistic regression model. Similarly, we assessed differences in grant amounts by disciplinary ranking, this time relying on descriptive statistics.
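The article reports average marginal effects and average marginal predictions but does not name the post‐estimation tooling; one common way to obtain both from a glm fit in R is the marginaleffects package, sketched here under that assumption.

    # Post-estimation sketch, assuming the `marginaleffects` package
    # (the article does not specify which tool was actually used).
    library(marginaleffects)
    avg_slopes(model)  # average marginal effects (cf. Table 1)
    # predicted probabilities by university ranking (cf. Table 2)
    avg_predictions(model, variables = "university_rank")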

The main limitation of this quantitative analysis is that it does not take into account applicants’ publication records. Such data could, in principle, be retrieved from bibliometric databases such as Web of Science. However, the expected analytical benefit would be limited relative to the significant effort involved in collecting and matching this information at scale. Moreover, our objective is not to assess individual academic productivity per se, but rather to highlight broader institutional hierarchies—hierarchies that already reflect, among other factors, differences in the publication patterns and research visibility of their members.

3.2. Interview Material

To understand the mechanisms underlying the unequal allocation of funding, these statistical analyses were complemented by semi‐structured interviews with professors who served on SSHRC evaluation committees between 2014 and 2024 for the Insight and Insight Development programs. The composition of the review committees for each type of funding is publicly accessible on the SSHRC website. Members were contacted through their institutional email addresses. We aimed to achieve a balance across disciplines, career stages (assistant, associate, and full professors), university prestige, geographical location, and socio‐demographic characteristics (gender, age, ethnicity, and language), ensuring a diversity of perspectives.

Interviews were held in English or French depending on the interviewees’ preference. The interviews primarily explored the practical organization of the evaluation process, the criteria employed by committee members to assess funding applications, the relative weight given to applicants’ CVs versus the content of their research projects, and the nature of the discussions that take place during committee meetings. They lasted between 42 and 88 min. All interviews were recorded, fully transcribed, and anonymized. We focused on the interviewees’ experiences within the SSHRC committees and asked them to provide concrete examples whenever possible. As underlined by Orupabo and Mangset (2022, 321) in their study of academic hiring practices, focusing on practical information serves as a methodological tool to address social desirability bias. Interviewees are generally less preoccupied with presenting themselves in a favorable light when recounting processes and events than when they are directly asked about their opinions, meanings, or values.

The qualitative material was thematically coded using NVivo, with particular attention to passages concerning perceptions of the applicant's university and the ways in which institutional affiliation—both directly and indirectly—influenced the evaluation process. These themes were then analyzed in relation to the disciplinary background of each interviewee to identify potential variations across fields. A total of 43 researchers were interviewed, spanning six disciplines: political science (15), sociology (13), economics (7), history (6), geography (1), and management (1).

The four main disciplines were selected due to their contrasting positions within the scientific field (Renisio 2015), the diversity of their evaluative cultures (Lamont 2009), and their divergent attitudes toward symbolic and status hierarchies. Economics holds a dominant position in the social sciences, characterized by its quantitative orientation and formalism; its evaluation methods emphasize a strict hierarchy of publications and institutional affiliations, alongside highly centralized and internationalized recruitment processes (Fourcade et al. 2015). Conversely, history—a “literary” discipline primarily employing qualitative methods—features predominantly national patterns of scientific production and recruitment (Heilbron and Bokobza 2015; Mangset 2009). As we shall see, historians often resist hierarchies based on institutional positions. Political science and sociology occupy intermediate positions between these extremes, although political science aligns more closely with economics than sociology does (Fourcade et al. 2015). Both disciplines exhibit internal polarization in Canada (Larregue and Warren 2024), translating into divisions between qualitative and quantitative approaches (Platt 2005) and between French‐ and English‐language literatures (Cornut and Roussel 2011), as well as tensions regarding national and international dynamics of publication (van Bellen and Larivière 2024) and recruitment (Cornut et al. 2012; Khelfaoui and Gingras 2024).

Despite efforts to ensure a diverse and balanced sample, this qualitative approach presents certain limitations. The study relies on retrospective accounts, which may be influenced by memory biases or selective recollection. Moreover, while the interviews aimed to elicit concrete examples, participants may still underreport practices perceived as problematic or controversial. Finally, although the sample includes disciplinary and institutional variety, it remains limited in size and cannot fully capture the breadth of experiences and perspectives present across all SSHRC committees.

4. The Adjudication Process

Before presenting the findings, it is important to briefly describe the process and organization of the SSHRC evaluations. For each of the three programs, applications are peer‐reviewed by committees constituted according to disciplinary expertise. While some committees for Insight Development grants can be interdisciplinary, Insight committees are typically focused on one or two proximate disciplines (for instance, political science and public administration, or sociology and demography). Before meeting collectively to decide on the final ranking, each committee member conducts a preliminary evaluation of a subset of applications, with two or three evaluators assigned per file. For the Insight grants, SSHRC also seeks external reviews to support the committee's deliberations.

The distribution of applications among committee members is an administrative task handled by an SSHRC officer. Reviewers may, and in practice often must, assess applications unrelated to their own research interests or fields. They assign a score for each of the three main criteria—Challenge, Feasibility, and Capability—for every application they are assigned (we only had access to the scores for a subset of our dataset). The Challenge criterion refers to the purpose and importance of the project; Feasibility refers to the methods and material means used to carry it out; Capability refers to the applicant's expertise, as demonstrated by their CV. The scoring is weighted, with the “Challenge” and “Capability” criteria each accounting for a larger portion of the score than the “Feasibility” criterion.
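As a purely illustrative worked example, the composite score for an application can be thought of as a weighted sum of the three criterion scores. The weights below are hypothetical (the exact SSHRC weights are not reported in this article); the criterion scores are the U3 averages from Table 4.

    # Illustrative only: hypothetical weights consistent with the text
    # (Challenge and Capability each weighted more heavily than Feasibility).
    weights <- c(challenge = 0.40, feasibility = 0.20, capability = 0.40)
    scores  <- c(challenge = 4.28, feasibility = 4.27, capability = 4.88)  # U3 averages (Table 4)
    sum(weights * scores)  # weighted composite score on the 6-point scale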

Preliminary scores are used to establish a provisional ranking. When they meet collectively, members do not typically review and discuss all the applications. Unless important discrepancies in members’ assessments are noticed, top‐scored and bottom‐scored applications are rarely examined. The discussions focus on the intermediate applications that are around the funding line. In the event of a persistent disagreement between reviewers regarding the evaluation of a particular application, a collective vote may be held. After discussing each of these applications, the committee reviews and finalizes the ranking. This final list divides the adjudicated applications into those recommended for funding and those that are not.

5. Results

5.1. The Cumulative Advantages of Prestigious Universities

Descriptive statistics show that success in funding applications at the SSHRC is indeed correlated with institutional affiliation: professors employed at larger and more prestigious universities have a higher likelihood of obtaining grants. Applications from candidates affiliated with a U3 university (McGill University, University of British Columbia, University of Toronto) represent 19.2% of all applications, but 24.6% of those funded, while those from candidates outside the U15 represent 46.4% of all applications, but only 39.3% of those funded. These gaps are even wider when considering the broader structure of academic employment in Canada. Between 2016 and 2020, U3 university teaching staff represented 15.2% of all university staff and 19.2% of grant applications, while non‐U15 university staff represented 54.7% of all staff and 46.4% of grant applications.1 In brief, U3 professors tend to apply more and, when they do, they are more successful.

The success rate is also correlated with university size. The largest universities have the highest success rate (40% for the whole period), followed by medium‐sized universities (34%), with very small and small universities exhibiting the lowest success rates (27% and 25%, respectively). Interestingly, the drop‐out rate (no reapplication after one unfunded grant proposal) is structured similarly, with professors affiliated with less prestigious universities giving up more often (28.2% for U3, 32.0% for U12, 35.7% for extra U15).

The effect of university prestige and size persists even when accounting for other factors, as demonstrated in Tables 1 and 2. A binomial logistic regression was performed to predict the success of funding applications between 2000 and 2021 (n = 56,680), and results are presented using average marginal effects (AMEs) and predicted probabilities. All the variables related to the characteristics of the main applicant's university are significantly correlated with funding outcomes.

TABLE 1.

Binomial logistic regression predicting funding success in SSHRC applications (2000–2021, n = 56,680)—average marginal effects.

Characteristic Average marginal effects 95% CI p value
Age 0 0.00, 0.00 <0.001
Team size
 1 applicant
 2 applicants 0.01 0.00, 0.02 0.1
 3 or more applicants 0.03 0.02, 0.04 <0.001
Gender
 Man
 Non‐binary 0 −0.06, 0.06 >0.9
 Woman −0.01 −0.02, 0.00 0.029
Prior funding
 No
 Yes 0.15 0.14, 0.16 <0.001
Program
 Insight
 Insight development 0.1 0.09, 0.12 <0.001
 Standard research grants 0.11 0.09, 0.12 <0.001
Project language
 English
 French −0.04 −0.06, −0.03 <0.001
University language
 English
 Bilingual −0.03 −0.05, −0.01 0.002
 French 0.05 0.02, 0.07 <0.001
Province
 Ontario
 Alberta −0.03 −0.05, −0.01 <0.001
 British Columbia 0.02 0.01, 0.04 0.001
 Québec −0.01 −0.03, 0.01 0.2
 Rest of Canada −0.06 −0.08, −0.05 <0.001
University ranking
 U3
 U12 −0.06 −0.08, −0.05 <0.001
 Extra U15 −0.11 −0.13, −0.10 <0.001
University size
 Large
 Medium −0.02 −0.03, −0.01 0.004
 Small −0.08 −0.10, −0.06 <0.001
 Very small −0.07 −0.10, −0.03 <0.001
Year 0 0.00, 0.00 0.12

Abbreviation: CI, confidence interval.

TABLE 2.

Binomial logistic regression predicting funding success in SSHRC applications (2000–2021, n = 56,680)—average marginal predictions.

Characteristic Average marginal predictions 95% CI
Gender
 Man 37.10% 36.5%, 37.7%
 Woman 36.20% 35.5%, 36.8%
 Non‐binary 37.10% 30.7%, 43.4%
Age
 26 39.70% 38.6%, 40.8%
 40 37.70% 37.2%, 38.2%
 46 36.90% 36.4%, 37.3%
 54 35.70% 35.2%, 36.2%
 92 30.60% 28.7%, 32.5%
Team size
 1 applicant 36.00% 35.5%, 36.5%
 2 applicants 36.90% 36.0%, 37.9%
 3 or more applicants 38.90% 37.8%, 39.9%
Prior funding
 No 31.50% 31.0%, 32.0%
 Yes 46.90% 46.0%, 47.7%
Program
 Insight 29.70% 28.8%, 30.6%
 Insight development 39.80% 38.3%, 41.2%
 Standard research grants 40.70% 39.8%, 41.5%
Year
 2000 35.60% 34.1%, 37.0%
 2006 36.20% 35.5%, 36.9%
 2010 36.60% 36.2%, 37.1%
 2015 37.20% 36.4%, 38.0%
 2021 37.80% 36.3%, 39.4%
University ranking
 U3 43.80% 42.6%, 44.9%
 U12 37.70% 36.8%, 38.6%
 Extra U15 32.60% 31.9%, 33.3%
Province
 Ontario 37.60% 36.9%, 38.3%
 Alberta 34.40% 32.7%, 36.0%
 British Columbia 40.00% 38.7%, 41.3%
 Québec 36.60% 35.4%, 37.8%
 Rest of Canada 31.50% 30.1%, 32.8%
University language
 English 36.00% 35.5%, 36.6%
 Bilingual 32.80% 30.8%, 34.7%
 French 40.70% 38.8%, 42.6%
University size
 Large 37.80% 37.3%, 38.4%
 Medium 35.80% 34.6%, 37.0%
 Small 29.90% 28.4%, 31.4%
 Very small 31.30% 28.1%, 34.4%
Project language
 English 37.40% 36.9%, 37.9%
 French 33.20% 31.8%, 34.7%

Abbreviation: CI, confidence interval.

First, applicants from the U3 universities (McGill University, University of British Columbia, University of Toronto) exhibit the highest predicted probability of success (43.8%). In contrast, those from U12 universities show a lower likelihood of success (37.7%), and those from institutions outside the U15 have the lowest predicted probability (32.6%). These differences are statistically significant, with AMEs of –0.06 for U12 and –0.11 for non‐U15 universities compared to U3 (p < 0.001). A similar pattern is observed for university size. Applicants from large universities have the highest predicted success (37.8%), whereas those from small (29.9%) and very small (31.3%) universities are significantly less likely to obtain funding. The corresponding AMEs are –0.08 and –0.07, respectively (p < 0.001), reinforcing the persistent influence of institutional capacity on funding outcomes.

Geographic location also plays a role. Compared to Ontario (37.6%), applications from Alberta (34.4%) and from the rest of Canada (31.5%) show significantly lower predicted probabilities of success. The AMEs are –0.03 and –0.06, respectively (p < 0.001), while applicants from British Columbia (40.0%) have slightly higher chances (AME = 0.02, p = 0.001). Québec shows no significant difference from Ontario.

Interestingly, the role of language is not straightforward and individual level factors must be distinguished from institutional dynamics. Applications written in French are associated with a lower predicted probability of success (33.2% vs. 37.4% for English), with a negative AME of –0.04 (p < 0.001). However, affiliation with a Francophone university corresponds to a higher predicted success rate (40.7%) compared to English‐language universities (36.0%), with a positive AME of 0.05 (p < 0.001). Although it is beyond the scope of this paper to provide a thorough analysis of this apparent paradox, these findings demonstrate the need to differentiate between individual and institutional levels when accounting for the effect of language in science.

Gender and age are also associated with differences in predicted funding success. Predicted probabilities show that women (36.2%) have slightly lower chances of success compared to men (37.1%), while non‐binary applicants (37.1%) exhibit a rate comparable to men's, though estimated with wide confidence intervals. These differences are modest in magnitude: the average marginal effect for women is negative and statistically significant (–0.01, p = 0.029), while the effect for non‐binary applicants is not statistically significant. Predicted probabilities decrease with age: younger applicants (e.g., at age 26) have a predicted success rate of 39.7%, which declines steadily to 35.7% at age 54. This relationship is captured by a small but statistically significant negative marginal effect of age on funding success (AME ≈ 0.00, p < 0.001), suggesting a slight but consistent age‐related disadvantage over time.

While being affiliated with a prestigious university is generally a key factor in securing funding at the SSHRC, U15 universities do not all exhibit the same levels of success in grant applications (Table 3). We can obtain a more fine‐grained picture by looking at the performance of 30 research‐active universities (representing 83.7% of the full sample), comprising all U15 institutions and the 15 non‐U15 institutions most represented in SSHRC applications. Eight universities, seven of which belong to the U15, appear to be overfunded: they receive more grants than their proportion of applications would suggest. The vast majority of universities, however, proportionally receive less funding than one would expect from their share in the full sample of applications. Again, there are important differences across institutions: while some are close to equilibrium, others are starkly disadvantaged: –16.6% for the University of Manitoba, –26.4% for the University of Saskatchewan, and –33.4% for Brock University.

TABLE 3.

Comparison of the share of 30 research‐active universities in grant applications versus funded applications (2000–2021, n = 47,459).

University Total applications Funded applications Percent difference
University of Toronto U3 10.1% 12.4% 22.9%
McGill University U3 5.4% 6.6% 21.1%
University of British Columbia U3 7.4% 8.7% 17.2%
Simon Fraser University 2.9% 3.4% 16.6%
Western University U12 3.2% 3.6% 11.3%
Université de Montréal U12 6.4% 7.1% 10.9%
Queen's University U12 3.0% 3.2% 9.6%
McMaster University U12 2.6% 2.7% 4.9%
University of Alberta U12 4.4% 4.4% −1.1%
University of Waterloo U12 2.8% 2.7% −2.9%
Université Laval U12 4.2% 4.1% −2.9%
University of Victoria 2.6% 2.5% −3.6%
Université du Québec à Montréal 4.8% 4.6% −4.4%
Carleton University 2.8% 2.6% −5.2%
York University 5.1% 4.8% −5.6%
University of Ottawa U12 5.0% 4.7% −6.2%
Wilfrid Laurier University 1.9% 1.8% −6.4%
University of Calgary U12 3.6% 3.2% −9.7%
Dalhousie University U12 1.7% 1.5% −12.1%
University of Guelph 1.9% 1.7% −12.5%
Concordia University 4.1% 3.5% −14.1%
Université de Sherbrooke 1.5% 1.2% −15.9%
University of Manitoba U12 2.2% 1.9% −16.6%
Memorial University of Newfoundland 1.5% 1.1% −23.6%
University of Saskatchewan U12 1.8% 1.4% −26.4%
Toronto Metropolitan University 1.4% 1.0% −30.1%
Brock University 2.1% 1.4% −33.4%
University of New Brunswick 1.1% 0.7% −33.8%
University of Regina 1.1% 0.7% −38.6%
University of Windsor 1.3% 0.8% −41.4%

These trends are confirmed if we analyze performance across these 30 institutions while controlling for a few independent factors.2 As Figure 1 clearly shows, there are wide disparities in funding success across universities, including within the U15 group. In line with our hypothesis, three universities lead the way: University of Toronto (predicted probability of success = 49.1%), McGill University (48.5%), and the University of British Columbia (47.0%).

FIGURE 1. Predicted probabilities of SSHRC funding success among a sample of 30 research‐active universities (2000–2021, n = 47,459).

While most U15 universities are indeed located in the top half of the distribution, others are less likely to secure funding. For instance, the University of Calgary has a predicted probability of success of 32.4% (95% CI: 29.2%–35.6%), ranking 19th within this top 30. Other U15 institutions are even less advantaged: Dalhousie University (31.2%, 95% CI: 27.4%–34.9%), University of Manitoba (30.4%, 95% CI: 26.9%–33.9%), and University of Saskatchewan (25.1%, 95% CI: 21.9%–28.3%) show notably lower chances of success at the SSHRC. This highlights that the self‐formed U15 group, which has expanded since the early 1990s, only partially reflects actual scientific hierarchies. Notably, among the five universities that joined the group in 2006 and 2011 (University of Calgary, Dalhousie University, University of Ottawa, University of Manitoba, and University of Saskatchewan), only the University of Ottawa falls in the top‐performing half, with a predicted probability of 38.8% (95% CI: 36.3%–41.3%).

In contrast, some institutions that are not part of the U15—possibly because they do not have medical schools—appear among the best performers. Applications from Simon Fraser University have a predicted probability of 43.4% (95% CI: 39.5%–47.2%), ranking 5th overall. Université du Québec à Montréal follows at 10th place with 37.2% (95% CI: 33.9%–40.4%), and University of Victoria ranks 14th with 35.0% (95% CI: 31.7%–38.3%).

Researchers from the largest and most prestigious universities not only have higher chances of securing funding, but they also tend to receive larger grants when successful: U3 universities obtained 25.8% of the total funding over the 2000–2021 period while representing 24.6% of the successful applications; U12 universities obtained 36.2% for 36.0% of the successful applications; the rest obtained 37.9% while constituting 39.3% of the successful applications. As we saw previously, disparities are also visible within the subsample of 30 research‐active universities (Figure 2). The biggest discrepancies appear in the Insight program: the average amount allocated to U3 projects ($67,286) is 20.5% higher than that of other U15 universities ($55,817) and 38.8% higher than that of projects conducted at institutions outside the U15 ($48,488). Hence, while the average amount obtained by McGill University researchers in the Insight program is $76,104, it is $54,354 for the University of Ottawa, $50,189 for Toronto Metropolitan University, $38,413 for the University of Calgary, and $23,003 for the University of Windsor. The discrepancies are even larger when we divide the average amounts received by the number of applicants per project: for the Insight program, the average individual amount allocated to U3 projects ($48,524) is 44% higher than that of other U15 universities ($33,783) and 69% higher than that of projects conducted at institutions outside the U15 ($28,781).

FIGURE 2. Average amounts in Canadian dollars received across a sample of 30 research‐active universities and SSHRC funding programs (2000–2021, n = 47,459).

This ranking closely matches the success rates analyzed previously: the correlation coefficient between universities’ odds ratios and average amount received is 0.94. Put differently, the more successful a university is, the larger its grants. While our interview material confirms that committees perform budget cuts when they review and select projects for funding, these are usually limited to about 10% or 20% of the requested budget. It is thus unlikely that the observed disparities result primarily from the work of the committees. To some extent, they may instead reflect applicants from more prestigious universities requesting, and consequently being awarded, larger sums.
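The university‐level correlation reported in this paragraph is a straightforward computation; the data frame and column names below are hypothetical.

    # Sketch of the correlation between funding success and grant size,
    # assuming one row per university (hypothetical names).
    cor(by_university$odds_ratio, by_university$mean_amount)  # reported as 0.94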

5.2. University Hierarchies Across Fields: CV Disciplines and Project Disciplines

Up to now, we have approached SSHRC applications as a homogeneous set. Yet review committees from different disciplines do not necessarily give the same weight to university hierarchies as they evaluate and rank projects. It is thus important to analyze the effect of institutional affiliations across fields. Figure 3 displays the predicted probabilities of funding success from the binomial logistic regression (Tables 1 and 2) for different groups of universities, broken down by discipline. While some disciplines clearly advantage applicants from prestigious universities, hierarchies seem to play a somewhat less important role in others. For example, applications from universities outside the U15 in economics have a predicted success probability of 29.6%, compared to 41.6% for U3 institutions. Similarly, in management, business and administrative studies, the probability of success for extra‐U15 applications is 29.5%, versus 44.1% for those from U3 institutions. Conversely, the gap is much narrower in disciplines such as anthropology, where extra‐U15 applications have a predicted probability of 36.1%, quite close to the 37.3% predicted for U3 applications. Still, the best that applicants from non‐U15 institutions can hope for is not to be disadvantaged: in no discipline do they surpass their U15 peers in predicted success rates.

FIGURE 3. Predicted probabilities of funding success of U3, U12, and non‐U15 universities across disciplines (2000–2021, n = 56,680).

Again, the discrepancies pertain not only to success rates but also to the amounts that are awarded. Disciplines that exhibit starker differences between U15 universities and non‐U15 universities also tend to grant higher amounts to the former (Figure 4). Across all programs, the average amounts awarded to projects from non‐U15 universities are 62.6% lower than those from U3 universities in management, 54.2% in economics, 50% in criminology, and 49.9% in urban studies. In fine arts, literature and anthropology, on the other hand, the differences are smaller: 11%, 24.5%, and 21%, respectively. Similarly, compared to U3 scholars, professors affiliated with the rest of the U15 consistently receive smaller amounts on average (the only exception being law). Hence, as we have previously observed, there is a clear statistical correlation between success rates and the amounts distributed across disciplines: the coefficient for the correlation between non‐U15 odds ratio across disciplines and the gap in the amounts distributed to U3 versus non‐U15 institutions is 0.67, and 0.50 when we include U12 universities.

FIGURE 4. Percentage difference in amounts (Canadian dollars) received by non‐U15 universities across disciplines (2000–2021, n = 47,459).

The data on Insight grants indicate that such disparities stem mostly from the evaluation of applicants’ CVs, which the SSHRC documentation refers to as “Capability” (Table 4). While U3 professors receive an average grade of 4.88 (out of 6), applicants from institutions outside the U15 receive 4.62, a difference of 0.26. Among the three evaluation criteria, this is the one where university effects are largest. For the “Challenge” criterion, based on the project evaluation, the average difference between U3 and non‐U15 applicants is 0.17; for the “Feasibility” criterion, it is 0.12. The widest gaps in the “Capability” criterion are found in the disciplines that tend to advantage applicants from the most prestigious universities: management (0.60), economics (0.49), philosophy (0.47), and political science (0.34) are more favorable to the CVs of U3 professors than anthropology (0.18), geography (0.15), or fine arts (0.12).

TABLE 4.

Average scores received across university groups for the three main evaluation criteria of Insight applications (n = 16,927).

Universities Challenge Feasibility Capability
Non‐U15 4.12 4.14 4.62
U12 4.17 4.20 4.71
U3 4.28 4.27 4.88

The gradient of funding concentration that we can observe in Figures 3 and 4 is mirrored in how members of review committees perceive the importance of university affiliations across disciplines. Interviews conducted with historians, sociologists, political scientists, and economists bring to light the different understandings of, and roles played by, university hierarchies in the evaluation process. While some disciplines mainly base their evaluations on applicants’ CVs, which ends up advantaging scholars affiliated with prestigious institutions, others prioritize the project, leaving more room for applicants outside the U3 or U15 to demonstrate the worth of their application.

This continuum is visible in how committee members from different disciplines discuss the role of institutional inequalities in the reviewing process. Hence, in line with their discipline's position in Figures 3 and 4, historians tend to reject the idea that university prestige is an indicator of quality and that grants primarily go to professors affiliated with larger institutions:

In the case of history, talent is so evenly distributed across the country that the quality of historians working at any institution is inspiring, right? And so that's why sitting on these committees is great because you see people at schools, very small schools, very under‐resourced, and they're still proposing amazing projects. I'd be surprised if the success rate was significantly different between large institutions. You know, in humanities, there might be some sort of small effects, but I'd be surprised if there was a significant difference between, say Mount Allison and University of Waterloo. (Historian, U15 university, Ontario)

To be sure, this historian's observation is contradicted by our statistical findings. Overall, applicants from bigger institutions receive more grants, and for larger amounts, including in the humanities. Yet such views provide a sense of the relative importance accorded to university prestige and size across fields when compared with the views of scholars in more hierarchical disciplines. Historians do not believe that there is a correlation between an applicant's university affiliation and their project's value. Their judgment is mainly based on the content of the project, including its methodological rigor and feasibility (Lamont 2009, 82).

At the other end of the disciplinary continuum, economists place much greater emphasis on the CV. Because of the hierarchical nature of economics, which heavily relies on explicit rankings (of journals, departments, subfields, methods, etc.) and widely shared quality indicators, committee members are prone to resort to these standardized scales of worth in their assessment of the applications. In particular, the number and kind of articles that applicants have published serve as a generic scale of scientific worth. In some cases, a good CV can even compensate for a project's shortcomings:

I think sometimes it would enter as a countervailing force against a somewhat weak or somewhat questionable, maybe not a weak proposal, but as a countervailing force against concerns you might have about the substance of the proposal. Maybe they didn't word this clearly, and so I'm not 100% sure I'm on board with what they're planning to do, but they've published in Econometrica so I'm willing to give them a bit of a benefit of the doubt. (Economist, U15 university, Alberta)

The case of economics also makes it clear that funding is part of a larger academic market where prestige and resources are already unequally distributed (Musselin 2009). The role conferred to the CV in university hires, coupled with the fact that universities and departments are also ranked according to their “excellence,” means that committees in economics mostly record and reinforce previously constituted hierarchies:

There's just more projects coming from them [prestigious universities] and I don't think there was an advantage to them, but the faculty members from the bigger universities tend to be somewhat higher powered. So I mean like UBC is competing for the best new PhDs in the labor market every year so when they hire one of them that's a very excellent person to start off with, so the projects coming from there and the type of caliber of candidate is going to be pretty high. (Economist, U15 university, Québec)

Between history and economics, disciplines like sociology and political science attribute a more balanced role to both CVs and projects in their assessments. The nature of their evaluation also differs. When they evaluate CVs, a number of sociologists and political scientists adopt a more diversified conception of publications, stressing that journal articles or scholarly books are not the only valuable outputs. Public outreach publications, like reports directed at the government or community groups, are recognized as meaningful. They also emphasize elements that are considered less important by historians and economists, including student supervision:

For established colleagues, I pay a lot of attention to their efforts in mentoring and supervising students. If someone has had a 20‐year career and has only supervised two people, you could say they lack sufficient experience—that it's inadequate. Especially because I'm aware that, in departments, workload and supervision distribution can be an issue. So, I also think there's a challenge in that area. It's possible to be extremely, extremely, extremely productive if you're not doing any supervision. (Sociologist, non‐U15 university, Québec)

This can constitute a decisive element for applicants from smaller institutions with no master's or doctoral programs. Although sociologists and political scientists generally reject the hierarchical views espoused by economists when it comes to scientific excellence, they do pay attention to how one's university affiliation might impact their capacity to hire and train students and, thus, the feasibility of their project. While interviewees from these two disciplines generally express a tolerance for “alternative” training plans—for instance, integrating undergraduate instead of graduate students—it is seen as the applicant's responsibility to explain how their working conditions will not stand in the way of their project. In the absence of a clear, convincing justification, applicants from smaller institutions might end up getting lower scores:

I would even say that it can be detrimental to applicants [from smaller universities] because it's not always straightforward if, for example, they say they're going to hire a research assistant. If they're at a university that doesn't have graduate students at the master's or doctoral level, it can hurt their application. So, I think it's simply the reality of being at a smaller university in general. (Political scientist, non‐U15 university, Ontario)

University affiliations are also important for the kind of support that projects might get before even arriving in front of the SSHRC committees. Hence, even in disciplines like sociology where there seems to be a widely shared agreement that institutional prestige should not play a role in the outcomes, reviewers sometimes end up giving more credit to applicants who have secured resources within their university, as this demonstrates the feasibility of their project:

It's pretty clear that you cannot use the ranking and reputation of the university as a way of thinking about the value of the application. And I can't say for sure whether there's an implicit bias around those things or not. But I will say that universities that put cash on the table in supporting applications, that does really help an application. When you get an application coming in and all the support is in‐kind, and you know, we would see applications coming in where it was “we have rooms that we can use,” “we can use printing services and the photocopier,” definitely I think that hurts an application when seen alongside an application where a university says “we are providing four graduate student fellowships at fifteen thousand dollars each to support this project.” (Sociologist, non‐U15 university, British Columbia)

We see in this example that the impact of university inequalities on funding success is channeled in various ways within the work of the SSHRC committees. Sometimes, affiliation to a prestigious institution serves as a mere quality and status indicator, especially in disciplines like economics, where clear rankings are upheld and reproduced. At other times, it plays a role in the evaluation of projects’ feasibility: professors’ ability to secure financial support from their institution can be the deciding factor between a successful and an unsuccessful application. Again, awareness and acceptance of the role played by university affiliations differ across fields, as such dynamics are more or less aligned with disciplinary cultural frames and categories. This aligns with our quantitative observations that university prestige and size matter more in some disciplines than others. Still, despite these variations, many of the committee members that we interviewed mentioned instances where being affiliated with a prominent institution constitutes a clear advantage in funding competitions.

6. Discussion and Conclusion

Our study demonstrates a strong link between university hierarchies and the success of grant applications in the social sciences and humanities. SSHRC data indicate that applications from scholars affiliated with a U3 university (McGill University, University of British Columbia, and University of Toronto) account for 19.2% of all submissions, but 24.6% of the funded ones. In contrast, applications from candidates outside the U15 make up 46.4% of all submissions, but only 39.3% of the funded ones. The logistic regression that we performed shows that the effect of university hierarchies persists even when accounting for other factors. In fact, university prestige is, after prior funding, the most influential factor. Compared to professors affiliated with a U3 university (predicted probability = 43.8%), applicants from U12 universities have a lower probability of success (predicted probability = 37.7%), while those from universities outside the U15 face an even greater disadvantage (predicted probability = 32.6%). And when they are successful, applicants from less prestigious institutions also receive smaller amounts than their colleagues from prominent universities. For the Insight grants, applicants affiliated with U3 universities receive, on average, nearly $20,000 more than their colleagues from institutions outside the U15 (extra U15 = $48,488; U12 = $55,817; U3 = $67,286). These patterns contribute to processes of cumulative (dis)advantage in science.

We have argued that grant reviewers use university affiliation as a judgment device to stabilize the uncertainty inherent in the evaluation of grant applications (Roumbanis 2017). Yet, the correlation between university prestige and grant outcomes is not as strong in every discipline. While applicants from outside the U15 are about half as likely to succeed in economics and roughly a third as likely in management, no statistically significant disadvantage is observed in anthropology or geography. The interviews we conducted with members of review committees helped us to make sense of these intertwined mechanisms. As we showed, the different weight given to university affiliations across fields is indicative of a tension in evaluation practices: while disciplines like economics put a strong emphasis on the CV, others like history prioritize the project. Other disciplines, including sociology and political science, are situated halfway between these two poles. When evaluating CVs, sociologists and political scientists also adopt a broader perspective on scholarly contributions, valuing not only journal articles and books but also public outreach publications and activities like government reports and community engagement. They also prioritize aspects like student supervision, which are less emphasized by historians and economists.

Hence, while the SSHRC issues formal, standardized evaluation criteria, rules established at the agency level are subsequently interpreted and applied differently across committees, which operate as decentralized, semi‐autonomous entities (Moore 1973). It is thus crucial to distinguish the general norms adopted by funding agencies from the situated norms that reviewers draw upon to appraise and rank applications. This underscores the relevance of a field approach to science (Bourdieu 1975), especially one that accounts for the normative dimension of evaluation. By shedding light on the fact that review practices are intrinsically localized and bounded, our findings emphasize the need to account for disciplinary cultures both in studies and in attempted reforms of scientific institutions.

Our results also highlight the importance of the evaluation context in the (re)production of social inequalities in science. Given the competitive nature of funding applications, reviewers must find a way to evaluate proposals fairly and render them comparable in order to rank them by relative merit. Leaning on generic criteria like university prestige or past publications in reputable venues enables committees to rank applicants while reducing the uncertainty inherent in evaluating research proposals. Importantly, these signals of worth are largely qualitative in nature and do not need to rely on contested bibliometrics (Gingras 2016). As such, it is doubtful that a shift toward narrative-style CVs would significantly mitigate this phenomenon.

Given the potentially negative consequences and decreasing marginal returns of funding concentration (Aagaard et al. 2020; Mongeon et al. 2016), understanding the processes at play is important. While symbolic capital alone may influence funding outcomes, the concentration of funding among a small group of universities is likely not the sole result of evaluation practices at the committee level. University affiliations confer access to various forms of capital (Bourdieu 1986) that shape applicants' chances of securing funding: symbolic capital, as institutional prestige; economic capital, through internal funding and material support; and social capital, via privileged access to expert networks and disciplinary knowledge. Because the disciplinary rules of evaluation are partly informal and unwritten, access to insider knowledge is key to crafting an application that resonates with a committee's expectations. Hence, several committee members from less prominent institutions stated that they had passed on the insights gained through their experience at the SSHRC to colleagues within their departments and universities, with the aim of enhancing the latter's chances of obtaining grants.

The patterns we observe might also directly result from the structure of the Canadian academic market. Some interviewees suggested that researchers with the most highly valued characteristics are concentrated within the largest and most prestigious universities, so that unequal chances of success are already built into the system by the time applications are reviewed at the SSHRC level. Disparate working conditions across universities, especially in terms of teaching and administrative loads, research support, and internal funding, might further widen the gap.

This explanation, however, is not entirely convincing. The concentration of successful researchers in a select group of universities cannot be detached from the fact that prestigious affiliations can increase scholars' scientific productivity and rewards throughout their careers (Allison and Long 1990; Merton 1988). Assuming that scholars in the U3 have more merit, as the representatives of some disciplines do, is in effect a self-fulfilling prophecy. This is particularly clear in disciplines where university hierarchies are closely linked to valued publication patterns, which in turn serve as a key criterion for allocating funding. For instance, if review committees in economics placed less emphasis on CVs, and specifically on past publications in a restricted list of English-language journals (the "tyranny of the top five"; Heckman and Moktan 2020), applicants from less prestigious institutions would likely exhibit higher success rates. This is exactly what we observe in history, where the importance given to research projects during SSHRC evaluations mitigates the correlation between university prestige and success rates. This is not to say that historians do not weigh applicants' CVs as well: scholars who have published one or several monographs, depending on their career stage, are clearly valued by the committees. Yet, because historians do not abide by a clear ranking of publications and publishing houses, diverse research profiles can be valued and regarded as equally respectable. These disciplinary contrasts show that the greater success of researchers from the most prestigious universities does not reflect inherently greater merit, but rather the way merit is defined and assessed within the scientific field: a social construct that varies across disciplines and depends on the alignment between institutional positions, publication patterns, and the prevailing scientific hierarchies within each discipline.

Larregue, J., and Pavie, A. 2026. "Prestige at Play: University Hierarchies and the Reproduction of Funding Inequalities." Canadian Review of Sociology/Revue canadienne de sociologie 63, no. 1: e70012. 10.1111/cars.70012.

Funding: This research has been supported by the SSHRC (grant number 435‐2023‐0882) and FRQSC (grant number 328913).

Endnotes

1. Statistics Canada, Table 37-10-0108-01, "Number and salaries of full-time teaching staff at Canadian universities" (excluding medical and dental). Accessible online at: https://www150.statcan.gc.ca/t1/tbl1/en/tv.action?pid=3710010801&request_locale=en

2. Gender of the applicant, size of the research team, prior funding, as well as program, year, and language of application.

References

1. Aagaard, K., Kladakis, A., and Nielsen, M. W. 2020. "Concentration or Dispersal of Research Funding?" Quantitative Science Studies 1, no. 1: 117–149. 10.1162/qss_a_00002.
2. Allison, P. D., and Long, J. S. 1990. "Departmental Effects on Scientific Productivity." American Sociological Review 55, no. 4: 469–478. 10.2307/2095801.
3. Bellotti, E., Czerniawska, D., Everett, M. G., and Guadalupi, L. 2022. "Gender Inequalities in Research Funding: Unequal Network Configurations, or Unequal Network Returns?" Social Networks 70: 138–151. 10.1016/j.socnet.2021.12.007.
4. Bol, T., de Vaan, M., and van de Rijt, A. 2018. "The Matthew Effect in Science Funding." Proceedings of the National Academy of Sciences 115, no. 19: 4887–4890. 10.1073/pnas.1719557115.
5. Bornmann, L., and Daniel, H.-D. 2006. "Potential Sources of Bias in Research Fellowship Assessments: Effects of University Prestige and Field of Study." Research Evaluation 15, no. 3: 209–219. 10.3152/147154406781775850.
6. Bourdieu, P. 1975. "The Specificity of the Scientific Field and the Social Conditions of the Progress of Reason." Social Science Information 14, no. 6: 19–47.
7. Bourdieu, P. 1986. "The Forms of Capital." In Handbook of Theory and Research for the Sociology of Education, edited by J. G. Richardson, 241–258. Greenwood Press.
8. Brint, S., and Carr, C. E. 2017. "The Scientific Research Output of U.S. Research Universities, 1980–2010: Continuing Dispersion, Increasing Concentration, or Stable Inequality?" Minerva 55, no. 4: 435–457. 10.1007/s11024-017-9330-4.
9. Burns, K. E. A., Straus, S. E., Liu, K., Rizvi, L., and Guyatt, G. 2019. "Gender Differences in Grant and Personnel Award Funding Rates at the Canadian Institutes of Health Research Based on Research Content Area: A Retrospective Analysis." PLOS Medicine 16, no. 10: e1002935. 10.1371/journal.pmed.1002935.
10. Clauset, A., Arbesman, S., and Larremore, D. B. 2015. "Systematic Inequality and Hierarchy in Faculty Hiring Networks." Science Advances 1, no. 1: e1400005. 10.1126/sciadv.1400005.
11. Cole, S., Cole, J. R., and Simon, G. A. 1981. "Chance and Consensus in Peer Review." Science 214, no. 4523: 881–886. 10.1126/science.7302566.
12. Cornut, J., and Roussel, S. 2011. "Canadian Foreign Policy: A Linguistically Divided Field." Canadian Journal of Political Science/Revue canadienne de science politique 44, no. 3: 685–709. 10.1017/S0008423911000540.
13. Cornut, J., Simard, C., Jegen, M., and Cardinal, L. 2012. "L'embauche dans les départements de science politique francophone au Québec et au Canada. Un bilan des années 2000–2010." Politique et Sociétés 31, no. 3: 87–108. 10.7202/1014961ar.
14. DiPrete, T. A., and Eirich, G. M. 2006. "Cumulative Advantage as a Mechanism for Inequality: A Review of Theoretical and Empirical Developments." Annual Review of Sociology 32: 271–297. 10.1146/annurev.soc.32.061604.123127.
15. Espeland, W. N., and Stevens, M. L. 1998. "Commensuration as a Social Process." Annual Review of Sociology 24: 313–343. 10.1146/annurev.soc.24.1.313.
16. Espeland, W. N., and Stevens, M. L. 2008. "A Sociology of Quantification." European Journal of Sociology/Archives européennes de sociologie 49, no. 3: 401–436.
17. Fourcade, M., and Healy, K. 2017. "Seeing Like a Market." Socio-Economic Review 15, no. 1: 9–29.
18. Fourcade, M., Ollion, E., and Algan, Y. 2015. "The Superiority of Economists." Journal of Economic Perspectives 29, no. 1: 89–114.
19. Gingras, Y. 2016. Bibliometrics and Research Evaluation: Uses and Abuses. The MIT Press.
20. Ginther, D. K., Schaffer, W. T., Schnell, J., et al. 2011. "Race, Ethnicity, and NIH Research Awards." Science 333, no. 6045: 1015–1019. 10.1126/science.1196783.
21. Heckman, J. J., and Moktan, S. 2020. "Publishing and Promotion in Economics: The Tyranny of the Top Five." Journal of Economic Literature 58, no. 2: 419–470. 10.1257/jel.20191574.
22. Heilbron, J., and Bokobza, A. 2015. "Transgresser les frontières en sciences humaines et sociales en France." Actes de la recherche en sciences sociales 210: 108–121.
23. Kharchenkova, S., and Velthuis, O. 2018. "How to Become a Judgment Device: Valuation Practices and the Role of Auctions in the Emerging Chinese Art Market." Socio-Economic Review 16, no. 3: 459–477. 10.1093/ser/mwx057.
24. Khelfaoui, M., and Gingras, Y. 2024. "Dynamique et structure du marché de l'emploi universitaire québécois dans les disciplines des sciences sociales, 1900–2020." Recherches sociographiques 65, no. 1: 37–66. 10.7202/1113757ar.
25. Lamont, M. 2009. How Professors Think: Inside the Curious World of Academic Judgment. Harvard University Press.
26. Larregue, J., and Nielsen, M. W. 2024. "Knowledge Hierarchies and Gender Disparities in Social Science Funding." Sociology 58, no. 1: 45–65. 10.1177/00380385231163071.
27. Larregue, J., and Warren, J.-P. 2024. "Une discipline clivée: la sociologie au Québec." Recherches sociographiques 65, no. 1: 7–14. 10.7202/1113755ar.
28. Lauer, M. S., and Roychowdhury, D. 2021. "Inequalities in the Distribution of National Institutes of Health Research Project Grant Funding." eLife 10: e71712. 10.7554/eLife.71712.
29. Liao, C. H. 2021. "The Matthew Effect and the Halo Effect in Research Funding." Journal of Informetrics 15, no. 1: 101108. 10.1016/j.joi.2020.101108.
30. Mangset, M. 2009. The Discipline of Historians: A Comparative Study of Historians' Constructions of the Discipline of History in English, French and Norwegian Universities. Doctoral dissertation, Institut d'études politiques.
31. Merton, R. K. 1968. "The Matthew Effect in Science." Science 159, no. 3810: 56–63.
32. Merton, R. K. 1988. "The Matthew Effect in Science, II: Cumulative Advantage and the Symbolism of Intellectual Property." Isis 79, no. 4: 606–623.
33. Mongeon, P., Brodeur, C., Beaudry, C., and Larivière, V. 2016. "Concentration of Research Funding Leads to Decreasing Marginal Returns." Research Evaluation 25, no. 4: 396–404. 10.1093/reseval/rvw007.
34. Moore, S. F. 1973. "Law and Social Change: The Semi-Autonomous Social Field as an Appropriate Subject of Study." Law & Society Review 7, no. 4: 719–746.
35. Musselin, C. 2009. The Market for Academics. Routledge. 10.4324/9780203863060.
36. Nakhaie, R., Lippert, R. K., and Cukarski, D. 2023. "Granting Inequities: Racialization and Gender Differences in Social Science and Humanities Research Council of Canada's Grant Amounts for Research Elites." Canadian Ethnic Studies 55, no. 2: 25–49.
37. Nielsen, M. W., and Andersen, J. P. 2021. "Global Citation Inequality Is on the Rise." Proceedings of the National Academy of Sciences 118, no. 7: e2012208118. 10.1073/pnas.2012208118.
38. Orupabo, J., and Mangset, M. 2022. "Promoting Diversity but Striving for Excellence: Opening the 'Black Box' of Academic Hiring." Sociology 56, no. 2: 316–332. 10.1177/00380385211028064.
39. Platt, J. 2005. "La spécificité du Québec et du Canada dans les méthodologies en sociologie." Sociologie et sociétés 37, no. 2: 91–118. 10.7202/012914ar.
40. Renisio, Y. 2015. "L'origine sociale des disciplines." Actes de la recherche en sciences sociales 5, no. 210: 10–27.
41. Roumbanis, L. 2017. "Academic Judgments Under Uncertainty: A Study of Collective Anchoring Effects in Swedish Research Council Panel Groups." Social Studies of Science 47, no. 1: 95–116. 10.1177/0306312716659789.
42. Steinþórsdóttir, F. S., Einarsdóttir, Þ., Pétursdóttir, G. M., and Himmelweit, S. 2020. "Gendered Inequalities in Competitive Grant Funding: An Overlooked Dimension of Gendered Power Relations in Academia." Higher Education Research & Development 39, no. 2: 362–375. 10.1080/07294360.2019.1666257.
43. Tamblyn, R., Girard, N., Qian, C. J., and Hanley, J. 2018. "Assessment of Potential Bias in Research Grant Peer Review in Canada." Canadian Medical Association Journal 190, no. 16: E489–E499. 10.1503/cmaj.170901.
44. van Bellen, S., and Larivière, V. 2024. "Les revues canadiennes en sciences sociales et humaines: entre diffusion nationale et internationalisation." Recherches sociographiques 65, no. 1: 15–35. 10.7202/1113756ar.
45. Velloso, A., Lannes, D., and de Meis, L. 2004. "Concentration of Science in Brazilian Governmental Universities." Scientometrics 61, no. 2: 207–220. 10.1023/b:scie.0000041649.24713.ca.
46. Witteman, H. O., Hendricks, M., Straus, S., and Tannenbaum, C. 2019. "Are Gender Gaps Due to Evaluations of the Applicant or the Science? A Natural Experiment at a National Funding Agency." Lancet 393, no. 10171: 531–540. 10.1016/S0140-6736(18)32611-4.
47. Zhi, Q., and Meng, T. 2016. "Funding Allocation, Inequality, and Scientific Research Output: An Empirical Study Based on the Life Science Sector of Natural Science Foundation of China." Scientometrics 106, no. 2: 603–628. 10.1007/s11192-015-1773-5.
48. Zuckerman, H. 1977. Scientific Elite: Nobel Laureates in the United States. The Free Press.
