PLOS One. 2023 Apr 26;18(4):e0280576. doi: 10.1371/journal.pone.0280576

Science, interrupted: Funding delays reduce research activity but having more grants helps

Wei Yang Tham
Editor: Joshua L Rosenbloom
PMCID: PMC10132550  PMID: 37099515

Abstract

I study how scientists respond to interruptions in the flow of their research funding, focusing on research grants at the National Institutes of Health (NIH), which awards multi-year, renewable grants. However, there can be delays during the renewal process. Over a period beginning three months before and ending one year after these delays, I find that interrupted labs reduce overall spending by 50%, with a decrease of over 90% in the month with the largest drop. This change in spending is mostly driven by a decrease in payments to employees and is partially mitigated when scientists have other grants to draw on.

Introduction

In many fields of science, research is a resource-intensive endeavour. It requires people, capital, and the management of these resources. In addition, scientists must obtain and manage the funding necessary to acquire these inputs. This includes dealing with the possibility that funding may not arrive in the amount or at the time they want it to. How scientists respond to uncertainty and liquidity constraints in funding is therefore an important part of the research production function.

In general, however, this aspect of a scientist’s job is difficult to observe on a large scale. The UMETRICS dataset [1], which consists of administrative data from universities on transactions from sponsored projects, helps to bridge this gap. I use UMETRICS to study how Principal Investigators (abbreviated as PIs; for the remainder of the paper, I use the terms “lab” and “PI” interchangeably) funded by the National Institutes of Health (NIH) respond to funding delays or “interruptions”.

I first document that when funding is guaranteed and available, scientists tend to maintain spending at a steady level (Fig 1). On average, after a “ramping up” period in the first year of a project period (i.e. the NIH term for a multi-year grant), spending is relatively flat until the final year of the project period, when it steadily decreases. This pattern suggests that in the absence of uncertainty about funding or liquidity constraints (and conditional on how the NIH disburses funds), scientists have a preference for a stable rate of spending.

Fig 1. This figure shows spending per month for R01 project periods that last three, four, five, or six years, relative to spending in the first month of the project period.


Estimates are from a regression of total expenditure (arcsinh-transformed) on a set of dummies for each month in a project period with project period fixed effects, with the first month of the project period as the excluded category. Separate regressions are run by project period length. Standard errors are clustered by expiring R01 project period.
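For concreteness, the following Python sketch estimates a specification of this kind. It is not the paper's actual code: the file and column names (r01_project_period_months.csv, pp_id, month_in_pp, pp_length_years, spend) are hypothetical stand-ins for the UMETRICS-derived panel.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly panel: one row per R01 project period x month.
df = pd.read_csv("r01_project_period_months.csv")
df["asinh_spend"] = np.arcsinh(df["spend"])

# One regression per project-period length: month-in-period dummies
# (first month omitted) plus project period fixed effects, with
# standard errors clustered by project period.
for length, sub in df.groupby("pp_length_years"):
    fit = smf.ols(
        "asinh_spend ~ C(month_in_pp, Treatment(reference=1)) + C(pp_id)",
        data=sub,
    ).fit(cov_type="cluster", cov_kwds={"groups": sub["pp_id"]})
    print(length, fit.params.filter(like="month_in_pp").round(3).head())
```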

Next, I study how scientists respond to funding delays or “interruptions”, focusing on a particular type of NIH grant, the “R01”. R01 grants are generally regarded as being necessary to establish an independent research lab in the biomedical sciences. They can be renewed periodically (typically every four to five years) at the end of each project period, at which point the following scenarios may occur:

  1. The project’s new funding stream begins as soon as its previous one ends.

  2. The project is interrupted: its new funding stream only begins some time after its previous one ends.

Interruptions can arise for different reasons. A funding agency that is uncertain about its budget may engage in “precautionary saving” and delay spending to the end of the fiscal year [2]. There may also be disruptions to the funding allocation process, such as the government shutdown of Fiscal Year 1996 [3], which slowed down the processing of paperwork and led to peer review meetings being postponed (see also the Appendix).

An interruption can be thought of as a combination of (a) a liquidity shock from the scientist’s inability to access funding for some period of time and (b) an uncertainty shock about when or even whether they will get the funding in the first place. Although I do not distinguish between these two mechanisms, their combined effect in the form of interruptions is an important policy question. Delays in NIH funding are a real concern among researchers [4]. Understanding the role of interruptions as a potential impediment to science helps policymakers to determine how much attention should be paid to this issue.

I estimate the effect of interruptions with a difference-in-differences design that compares outcomes for interrupted and uninterrupted labs, defining “interrupted” projects as those where funding was renewed after more than 30 days. To allow for the possibility that principal investigators (PIs) can dampen the effects of interruptions with other grants, I run the analysis separately on PIs with one R01 and PIs with multiple R01s.

I find that interrupted labs with one R01 reduce spending significantly. Over a two-year period centered on the expiry of an R01 project period, interrupted labs spend 33% less than uninterrupted labs in the average month. The change in spending is not uniform over time but V-shaped. At its lowest point, spending is 96% lower for interrupted labs. This decrease starts about three months before the official grant expiry date and may be driven by uninterrupted labs spending from their renewed budget in advance, if they are informed of their successful renewal early enough. After R01 expiry, spending drops sharply before starting to recover, taking about nine months after expiry to catch up with uninterrupted labs. Over a 16-month period (from three months before expiry—when spending begins to decrease—to 12 months after grant expiry), the decrease in total spending is 52%.

When a PI has multiple R01s, spending remains stable throughout for interrupted PIs, indicating that there is fungibility across research grants. This is supported by the employee-level analysis: across occupations, employees who are linked to multiple R01s experience either a smaller decline or no decline at all in the probability of being paid by a grant, whether by their PI or by any grant.

I also look at whether PIs adjust different components of spending differently in response to interruptions. For PIs with one R01, both vendor and labor spending decrease substantially (by over 90% at their lowest points), but vendor spending does not recover as quickly. For PIs with multiple R01s, there is some decrease in the number of employees but vendor payments are relatively stable. The decrease in employees for both PI types may be because labor expenditure constitutes a larger share of spending and entails longer-term commitments that PIs cannot make until they know their funding status. However, this does not necessarily mean that employees are being dismissed by their institution or even removed from the research team, as there may be alternative sources of funding for some employees (e.g. teaching positions for graduate students), although some occupations (e.g. postdoctoral researchers) may be more vulnerable than others.

In my final set of results, I estimate the impact of a funding interruption on research output as measured by publications and citation-weighted publications. However, these estimates are not precise enough to determine whether interruptions affect output and, if they do, to what extent. This illustrates how traditional measures such as publications and patents may not provide the complete picture because they occur at a lower frequency and only capture one aspect of the research production process.

One limitation of this paper is that interruptions are not randomly assigned. While interruptions are driven in part by external events such as government shutdowns, the NIH may prioritize projects or PIs that are perceived to be of higher quality. This is less likely to be a problem for the results where inputs are an outcome, given that the use of inputs is more likely to be driven by budget constraints and the specific needs of the project. This is more likely to bias upward (in magnitude) the results involving publications, although the publication pre-trends do not indicate that interrupted PIs were on a less productive trajectory leading up to the year they were interrupted.

Another limitation is that I do not observe the scientists’ full set of funds. UMETRICS does not record the spending of internal funds (i.e. those directly provided to a scientist by their institution). I interviewed a university administrator who works on grant management about the role of internal funding in such situations. While they could not put a number on the extent to which internal funding makes up for the funding gap, they said that it was generally harder to get internal funding for personnel than non-personnel expenditures. Thus even when internal funds are an option, interruptions may still result in PI-employee separations.

In addition, my measure of PI spending is limited to spending through NIH grants. While this ensures a high degree of accuracy in linking NIH PIs to transactions, it also naturally raises the question of whether the amount of funding from non-NIH grants has a substantive effect on the results discussed thus far. This is unlikely for two reasons.

First, the results on whether interrupted employees continue to be paid on any grant are consistent with the overall set of results and thus do not suggest that there is a substantial pool of non-NIH grants being used to offset the effects of interruptions. Second, other research and government statistics show that if a research group is federally funded, it is also mostly federally funded and the NIH is the largest federal funder of life sciences research (see the Appendix for more details). In short, focusing on NIH funding provides substantial coverage of researcher funding.

This paper builds on work using granular data to unpack the role of the “lab” in science, dating back to the anthropological work of [5] in 1979. On a larger scale, [6] use a complete personnel roster of principal investigators in the MIT Department of Biology from 1966 to 2000 to study the role of different types of personnel in research production. [7] use the UMETRICS dataset as well to estimate the marginal product of scientific funding and show how employee composition changes when funding increases. This paper adds to the body of work by examining how uncertainty and liquidity constraints affect the use of research inputs and by highlighting the value of high frequency data in studying the knowledge production process.

This paper is also part of a literature in innovation economics studying how uncertainty affects innovators’ productivity and choices. [8] study the Howard Hughes Medical Institute (HHMI) Investigator Program, which gives grantees more freedom over research direction and effectively gives them longer grant cycles compared to R01s, thus insulating them from the type of disruptions that can arise in the R01 renewal process. They find that HHMI scientists are more likely to produce high-impact papers and explore new research directions. While the insights from [8] are important, there are practical difficulties to expanding a resource-intensive program like the HHMI’s. Thus, understanding where improvements can be made within the current system is important as well.

The results of this study highlight that a funding agency’s decision to delay the renewal of a project may not be costless. Even when the project is eventually funded, there can be disruptions to the use of inputs, team capital [9], and the employment or training of personnel. This has two major implications for how we fund projects. Firstly, it suggests that there is value to having the budgets of science funding agencies planned over a longer-term horizon to reduce uncertainty [10]. Secondly, funding agencies may delay projects if they expect that higher quality projects will become available later in the fiscal year. Agencies should consider that the cost of disrupting a project could be larger than the improvement in quality from delaying its decision, especially if their measures of project quality are imperfect.

Background and conceptual framework

NIH funding

The NIH is responsible for an annual budget of about US$40 billion, much of which is disbursed through research grants. A core part of the NIH’s mission is funding basic science to generate fundamental knowledge that tends to have long-term, rather than immediate, impact.

The NIH is funded every fiscal year by congressional appropriation. This is part of a broader process whereby the US Congress passes regular appropriations bills to fund a wide range of government operations. If appropriations have not been made by the beginning of the fiscal year, Congress can enact a “continuing resolution” to provide temporary funding. If a continuing resolution is not enacted and a “funding gap” occurs, then federal agencies have to begin a “shutdown” of projects and activities that rely on federal funds.

It is typically taken as given that regular appropriations will not have been made by the beginning of the fiscal year on 1 October, and that federal agencies will have to operate under a continuing resolution for at least some portion of the year. Under a continuing resolution, the NIH continues to fund existing projects, albeit at an initially reduced rate. However, it might also choose to delay funding for new or renewed projects in response to uncertainty about the size of the NIH’s budget for the fiscal year.

To illustrate, suppose that at the beginning of the fiscal year, the NIH knows (1) its budget and (2) its own ranking of projects available to be funded (rank could be based on project quality but also other factors such as NIH priorities). In this scenario, the NIH knows which projects it wishes to fund and whether it can fund them before the projects are set to run out of funding. Thus, there are no funding interruptions.

The scenario above illustrates that funding interruptions arise from uncertainty about (1) the NIH’s budget, (2) the quantity and quality of projects that need funding that fiscal year, or both. Some uncertainty over projects is built in, as there are three review cycles throughout the fiscal year.

Scientist perspective

The R01 is designed to provide enough funding to establish an independent research career. An R01 project period lasts for 4–5 years, after which it must be renewed in order to receive additional funding. The same project can last for multiple project periods.

Ideally, a researcher wants to maintain R01 funding for as long as possible. Toward the end of each project period, the principal investigator (PI) has to apply to renew their project for another project period of 4–5 years. PIs typically apply for renewal 1–2 years before a project period ends in order to receive funding continuously. In addition to the time taken to prepare the application itself, PIs have to take into account other factors such as potentially having to resubmit an application that is rejected the first time.

Data and variable construction

UMETRICS

I use the 2019 release of the UMETRICS data, which is housed at the Institute for Research on Innovation and Science (IRIS). UMETRICS core files are administrative records of transactions from sponsored projects from 31 member universities. The time span covered by each university’s records varies, spanning 2001 to 2018 overall. Payments from a project can go to one of three categories: vendors, subawards, or personnel.

Lab/PI total direct expenditure. A key outcome variable is direct expenditure from grants, which excludes the overhead costs that are paid to universities as a percentage of a grant award. Although I define the timing and length of funding delays around the R01 grant, I sum up outcomes to the level of the PI/lab. Specifically, for each R01, I find its associated PIs at the point of renewal. I then sum up spending for each PI across all NIH grants that they are associated with at a given point in time.

Lab/PI vendor and labor expenditure. I repeat the above procedure for payments to vendors and payments to labor. Payments to vendors include purchases of equipment or services. UMETRICS does not include salaries, so payments to labor are backed out as the remainder after subtracting vendor and subaward payments from total expenditure (i.e. Labor = Total − Vendor − Subaward).
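A minimal pandas sketch of this aggregation, assuming hypothetical transaction files that each carry pi_id, ym (year-month), and amount columns:

```python
import pandas as pd

# Hypothetical UMETRICS-style transaction files.
awards = pd.read_csv("award.csv")        # total direct expenditure
vendors = pd.read_csv("vendor.csv")      # payments to vendors
subawards = pd.read_csv("subaward.csv")  # payments to subawardees

total = awards.groupby(["pi_id", "ym"])["amount"].sum().rename("total")
vendor = vendors.groupby(["pi_id", "ym"])["amount"].sum().rename("vendor")
subaward = subawards.groupby(["pi_id", "ym"])["amount"].sum().rename("subaward")

# PI-month panel; labor is backed out as the remainder.
pi_month = pd.concat([total, vendor, subaward], axis=1).fillna(0.0)
pi_month["labor"] = pi_month["total"] - pi_month["vendor"] - pi_month["subaward"]
```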

Lab/PI employee counts. Count of the number of employees paid by a PI through the PI’s NIH grants.

Employee-level outcomes. The next part of my analysis is at the individual employee level. The UMETRICS data contains unique employee identifiers so that I can follow an employee’s employment status over time. For each PI-R01-renewal combination, I identify employees paid by the PI every month over the 10–12 months before R01 expiry (i.e. the first three months of the panel). This is a heuristic to identify personnel who are more likely to be long-term members of the PI’s lab or who were not already scheduled to end their tenure with the focal PI. I then create a monthly panel following their employment status from 9 months before expiry to 12 months after expiry.

I construct two outcome variables. The first is whether the employee is paid by the focal PI through any of the PI’s grants in a given month. This can be thought of as a proxy for whether employee-PI matches are disrupted. The unit of interest is an employee-PI combination, and the data structure is an employee-PI-R01-renewal monthly panel.

The second outcome measure is whether the employee is paid by any grants from any PI in any given month. Even if an employee is separated from their usual PI they may be shifted to a different project, so this captures the overall “employment status” of the employee. The unit of interest here is an employee, so the data structure is an employee-R01-renewal monthly panel.

Both of these outcomes are non-absorbing: the indicator can be on-then-off or off-then-on in consecutive months and does not necessarily indicate that an employee has “exited” employment, which we do not observe.
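The two indicators could be constructed along the following lines; the payment and panel-skeleton files and their column names are hypothetical:

```python
import pandas as pd

# Hypothetical inputs: pay has one row per payment (employee_id, pi_id,
# ym); panel is the stacked employee-PI-R01-renewal x month skeleton.
pay = pd.read_csv("employee_payments.csv")
panel = pd.read_csv("employee_panel_skeleton.csv")

# Outcome 1: paid by the focal PI in that month (non-absorbing).
by_pi = pay[["employee_id", "pi_id", "ym"]].drop_duplicates()
by_pi["paid_by_pi"] = 1
panel = panel.merge(by_pi, on=["employee_id", "pi_id", "ym"], how="left")
panel["paid_by_pi"] = panel["paid_by_pi"].fillna(0).astype(int)

# Outcome 2: paid on any grant, by any PI, in that month.
any_pi = pay[["employee_id", "ym"]].drop_duplicates()
any_pi["paid_any_grant"] = 1
panel = panel.merge(any_pi, on=["employee_id", "ym"], how="left")
panel["paid_any_grant"] = panel["paid_any_grant"].fillna(0).astype(int)
```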

UMETRICS also provides occupation categories for employees at the time they were paid. These classifications enable us to see how outcomes may evolve differently for different occupations. For example, we may expect the results for faculty to be different than the results for postdocs. For sample size reasons, any analysis that involves splitting the sample by occupation focuses on the largest five occupation categories: Faculty, Graduate Student, Post-graduate Researcher, Research, and Research Facilitation. A full list of occupational categories and descriptions is available in the Appendix.

ExPorter

ExPorter is publicly available data provided by the NIH. It contains data on NIH-funded projects from 1985 to the present, including identifiers that IRIS has used to link projects to their transactions in UMETRICS. It also provides links to publications, patents, and clinical studies that cite support from the NIH. I describe the key variables constructed from ExPORTER below:

Length of funding gaps and interruptions. Number of calendar days between the end of a project period and the beginning of the next project period.

Grant portfolio (“Number of R01s”). An interruption’s effect on a PI may vary by the size of their grant portfolio. I count the PI’s number of NIH grants from one year before to one year after the focal R01 expired. I define the size of the PI’s grant portfolio based on the number of “R01-equivalent” or “P01” grants that the PI had, including the focal R01. I follow the NIH definition of R01-equivalent grants. P01 grants provide funding for multiple research projects with a common theme. For brevity, I will refer to this variable as the “Number of R01s” without explicitly defining the other types of grants included.

Since employees are not necessarily in charge of their own grants, I need to define funding support for employees. When the unit of interest is an employee-PI combination, this is the grant portfolio of the focal PI, as defined above. When the unit of interest is an employee, I identify all PIs that paid the employee at any time during the 10–12 months before R01 expiry. I then count the number of R01s (including R01-equivalents and P01s) that those PIs were in charge of during the 24-month period used in the analysis.

Lab/PI publications. I use publication counts as a proxy for research output. ExPorter provides a crosswalk between NIH projects and publications in PubMed, a database for publications in biomedical research and the life sciences. These can then be aggregated up to the PI or lab-level. I also weight publications by 3-year forward citations (i.e. citations from years X, X + 1, and X + 2 if a paper was published in year X).

Author-ity & web of science

Author-ity [11] is a dataset of disambiguated author names based on a snapshot of MEDLINE, a bibliographic database with a focus on “biomedicine and health,” in 2009. Each observation in Author-ity is a cluster of names and articles that are predicted to belong to the same author. Clarivate Analytics Web of Science (WoS) is a citation indexing database that is not publicly available. I use a version of WoS that indexes articles and citations up to and including 2013.

The final sample for estimating the effect of interruptions on research output combines ExPORTER and WoS to create a panel that spans 1985 (earliest ExPORTER year) to 2011 (latest WoS year allowing for 3-year forward citations). Additional covariates such as career age are added from Author-ity. The Appendix describes how this panel is constructed and how it relates to estimation.

Empirical strategy

The goal of this study is to estimate how outcomes for labs/PIs change when they experience an R01 interruption. In doing so, it is important to take into account the decrease in spending observed at the end of a grant (Fig 1). Thus, a natural comparison group for labs with interrupted R01s is labs with R01s at similar stages in the grant cycle (i.e. labs with R01s that were also expiring but were successfully renewed).

I begin by identifying instances where an R01 was successfully renewed within the fiscal year it expires. I then stack all combinations of renewed R01s and PIs of those R01s to create a balanced PI-R01-renewal monthly panel spanning 24 months—one year before and one year after the expiry. I define an R01 as (a) “interrupted” if it took more than 30 calendar days to be renewed or (b) “uninterrupted” or “continuous” if it was renewed within 30 calendar days. I then estimate event study specifications that allow us to see how labs respond to interruptions month-to-month.
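A sketch of the interruption definition and the stacking step, assuming a hypothetical table of renewed R01 project periods with their end dates and the start dates of the renewing project periods:

```python
import pandas as pd

# Hypothetical project-period table built from ExPORTER: one row per
# renewed R01 project period.
pp = pd.read_csv("renewed_r01s.csv", parse_dates=["pp_end", "next_pp_start"])

gap_days = (pp["next_pp_start"] - pp["pp_end"]).dt.days
pp["interrupted"] = (gap_days > 30).astype(int)  # within 30 days = continuous

# Stack a balanced 24-month panel around each expiry: e = -11, ..., 12.
rel = pd.DataFrame({"e": range(-11, 13)})
panel = pp.merge(rel, how="cross")
panel["ym"] = panel["pp_end"].dt.to_period("M") + panel["e"]
```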

Event-study: Research inputs

The main specification I estimate is an event study centered around the expiry month of a project period.

y_{LRt} = \sum_{e=-10}^{12} \beta_e \, \mathbf{1}(e = t - t_{\mathrm{expiry}}) \, \mathbf{1}(\mathrm{Interrupted}) + \delta_{LR} + \gamma_e + \epsilon_{LRt}

I index PIs as L, R01s as R, and the year-month as t. t_expiry is the year-month in which R01 grant R expires. e is time in months relative to expiry (i.e. e = 0 when R expires and e < 0 before the grant expires). I restrict the sample to the one year before and after R01 R expires, i.e. e starts at month −11 and ends at 12. e = −11 is excluded from the specification.

y_LRt is an outcome at the PI level, such as total spending across all of the PI’s grants; δ_LR are PI-R01-renewal fixed effects, and γ_e are fixed effects for months relative to expiry. The coefficients of interest are β_e, where e = −10, −9, …, 11, 12.
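In code, this specification might look as follows. This is a sketch with hypothetical column names (expiring_pp_id is the clustering variable); the matching weights used in the main results are omitted for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stacked panel: one row per PI-R01-renewal x month, with
# e = months relative to expiry (-11..12), interrupted in {0, 1},
# spend, and pi_r01_id / expiring_pp_id identifiers.
panel = pd.read_csv("pi_r01_renewal_panel.csv")
panel["asinh_spend"] = np.arcsinh(panel["spend"])

# Build 1(e = k) x 1(Interrupted) dummies, omitting e = -11.
def dname(k):
    return f"int_m{-k}" if k < 0 else f"int_p{k}"

for k in range(-10, 13):
    panel[dname(k)] = ((panel["e"] == k) & (panel["interrupted"] == 1)).astype(int)

rhs = " + ".join(dname(k) for k in range(-10, 13))
fit = smf.ols(
    f"asinh_spend ~ {rhs} + C(e) + C(pi_r01_id)", data=panel
).fit(cov_type="cluster", cov_kwds={"groups": panel["expiring_pp_id"]})
```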

An important literature on misspecification of the two-way fixed effects model for the staggered difference-in-differences design has emerged in recent years [12–15]. Compared to the typical setting considered in this literature, one advantage of the setting in this paper is being able to define a “post-treatment” period for control units, i.e., the expiry date of the R01 grant. This means that we can define time relative to treatment for both treatment and control units. Thus, the above specification estimates an event study for a difference-in-differences with only one treatment period in relative time, thereby avoiding the key issue raised in the literature.

Employee-level outcomes

Continuing with the same notation, I index employees as i. To assess whether the employee-PI relationship was affected, I estimate the following regression specification with employee-PI-R01-renewal fixed effects and relative time fixed effects:

\mathbf{1}(\text{PI paid employee})_{iLRt} = \sum_{e=-10}^{12} \beta_e \, \mathbf{1}(e = t - t_{\mathrm{expiry}}) \, \mathbf{1}(\mathrm{Interrupted}) + \delta_{iLR} + \gamma_e + \epsilon_{iLRt}

To look at effects on the employee overall, I estimate:

\mathbf{1}(\text{Any grant paid employee})_{iRt} = \sum_{e=-10}^{12} \beta_e \, \mathbf{1}(e = t - t_{\mathrm{expiry}}) \, \mathbf{1}(\mathrm{Interrupted}) + \delta_{iR} + \gamma_e + \epsilon_{iRt}

Event-study: Research outputs

Through ExPorter, I draw on the universe of NIH-sponsored scientists to estimate the effect of funding interruptions on research output. I use a “stacked regression” approach [16, 17]. For each calendar year, I find all R01-PI combinations where the R01 was eventually renewed. The set of all such R01-PI combinations for each year forms a “treatment cohort”. Each cohort is then stacked to form a PI-R01-renewal by year panel that starts 4 years before and ends 5 years after an interruption, restricted to all interruptions that took place from 1989 to 2006.

I then estimate a variant of the two-way fixed effects specification but with cohort-specific time and unit fixed effects to address issues arising from staggered treatment timing in difference-in-differences [13, 15, 18, 19].

I index PIs with i, renewed R01s with R, and calendar year with t. c indexes treatment cohorts, i.e. the calendar year in which R expired. Treatment cohorts are defined as the set of PIs that were renewed in the same calendar year, some of whom were interrupted and some not. y_iRt is a measure of research production (e.g. publications), δ_iR are PI-R01-renewal fixed effects, and γ_tc are treatment-cohort-by-calendar-year fixed effects. t − t_expiry is years since interruption, with 0 being the interruption year.

I then apply the Coarsened Exact Matching (CEM) procedure ([20]; see the Appendix for details) and estimate the following event-study specification and its “static” counterpart with the matching weights:

y_{iRt} = \sum_{k=-4}^{5} \beta_k \, \mathbf{1}(k = t - t_{\mathrm{expiry}}) \, \mathbf{1}(\mathrm{Interrupted}) + \delta_{iR} + \gamma_{tc} + \epsilon_{iRt}

y_{iRt} = \beta_{\mathrm{static}} \, \mathbf{1}(t - t_{\mathrm{expiry}} \geq 1) \, \mathbf{1}(\mathrm{Interrupted}) + \delta_{iR} + \gamma_{tc} + \epsilon_{iRt}, \qquad t - t_{\mathrm{expiry}} = -4, -3, \ldots, 0, \ldots, 5
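A sketch of the static specification in Python, with hypothetical column names and a CEM weight column assumed to have been computed beforehand; the post indicator follows the reconstruction above (years 1 and later after expiry):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical yearly stacked panel: pubs, cohort (expiry year), year,
# rel_year = year - expiry year, interrupted in {0, 1}, identifiers,
# and a precomputed cem_weight column.
yp = pd.read_csv("pi_r01_yearly_panel.csv")
yp["asinh_pubs"] = np.arcsinh(yp["pubs"])
yp["interrupted_post"] = ((yp["rel_year"] >= 1) & (yp["interrupted"] == 1)).astype(int)

# "Static" DiD with PI-R01-renewal and cohort-by-year fixed effects,
# weighted by the CEM matching weights.
fit = smf.wls(
    "asinh_pubs ~ interrupted_post + C(pi_r01_id) + C(cohort):C(year)",
    data=yp, weights=yp["cem_weight"],
).fit(cov_type="cluster", cov_kwds={"groups": yp["expiring_pp_id"]})
```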

Inverse hyperbolic sine

Unless otherwise stated or the outcome is a binary variable, I apply an inverse hyperbolic sine (also asinh or arcsinh) transformation to the outcome variable for all regressions, which approximates a natural logarithm and is defined at zero.
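A quick numeric check of these two properties (asinh is defined at zero and tracks log(2x) for large x):

```python
import numpy as np

x = np.array([0.0, 1.0, 100.0, 17100.0])
print(np.arcsinh(x))       # [ 0.     0.881  5.298 10.44 ], defined at zero
print(np.log(2 * x[2:]))   # [ 5.298 10.44 ]: asinh(x) ~ log(2x) for large x
```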

Descriptive statistics

The analysis within UMETRICS uses a sample of 356 PI-R01-renewals with one R01 (284 uninterrupted, 72 interrupted) and 693 PI-R01-renewals with multiple R01s (528 uninterrupted, 165 interrupted). The timing of R01 renewals ranges from 2003 to 2017, which results in a panel ranging from 2002 to 2018. Fig 11 in the Appendix shows the years in which R01 expiries in this sample occurred.

Fig 2 compares characteristics of the expiring (and eventually renewed) R01 project periods from the UMETRICS sample used in the upcoming analysis. Fig 2A and 2B show the distribution of funding gaps. Overall, a little over 20% of R01s were “interrupted” or renewed more than 30 days after their expiry date. The funding gap experienced by interrupted R01s has a wide range with a maximum of over 300 calendar days.

Fig 2. This figure compares the characteristics of expiring (and eventually renewed) R01 project periods that are used in the UMETRICS analysis.


Each unit of observation is an R01 project period. Fig 2A shows the number of R01s that were renewed within 30 calendar days of their expiry. Fig 2B is a histogram (30-day bins) of the number of days until renewal for R01s not renewed within 30 days. Fig 2C shows the smoothed density of total funding in the expiring project period for interrupted and uninterrupted R01s. Fig 2D shows the same for funding per year (total funding / length of expiring R01 project period). Fig 2E shows the proportion of projects by the expiring project period’s length.

Fig 2C and 2D show that interrupted and continuous R01s are similarly funded, whether in terms of total funding over the entire project period or funding per year, with interrupted projects being slightly larger. Interrupted projects are more likely to have had a six-year project period (Fig 2E).

R01s are awarded for a maximum of five years, so six-year project periods are likely to have originally been five-year awards that exercised a one-year no-cost extension; they are more likely to be interrupted because they no longer have the option to extend. This might affect the results if spending trends differ by project length. I address this by exact matching on the project period length of the focal R01 for outcomes related to spending.

In the first month of the panel, the median team for PIs with one R01 had 4 employees in total, 1 faculty, 1 research employee, and 0 for the remaining occupations. The median team for PIs with multiple R01s had 8 employees, 2 faculty, 1 research staff, 1 postgraduate researcher, and 0 for the remaining occupations. The most common occupations on the average team are faculty, research staff, graduate students, postgraduate researchers, and research facilitators, after which other occupations are much less likely to be on a team. Median expenditure at the beginning of the panel for PIs with one R01 was $17,100 for total direct expenditure, $13,900 for labor payments, $900 for vendor payments, and $0 for subaward payments. For PIs with multiple R01s, the same statistics were $40,200 for total direct expenditure, $30,900 for labor payments, $4,100 for vendor payments, and $0 for subaward payments.

The analysis for research output (i.e. based on the ExPORTER database) uses a sample of 10890 PI-R01-renewals with one R01 (8934 uninterrupted, 1956 interrupted) and 11570 PI-R01-renewals with multiple R01s (9390 uninterrupted, 2180 interrupted). The R01 renewals in this sample range from 1989 to 2006, which results in a panel ranging from 1985 to 2011. Fig 11 in the Appendix shows the years in which R01 expiries in this sample occurred.

Results

Spending

Fig 3A shows the event study estimates for total spending by PIs, with separate estimates by whether the PI had one R01 (green) or multiple R01s (brown) around the time of expiry. The “1 R01” graph (green) shows that for PIs with one R01, total spending starts decreasing about three months before the official expiry date. At the lowest point, spending is 96% lower for PIs with interrupted R01s. The decrease in total spending from month -3 to month 12 is 52%.

Fig 3. This figure shows event-study estimates (with 95% confidence intervals) of the difference in spending between PIs of interrupted and uninterrupted R01s.


Treated and control groups are matched on length of the expiring R01 project period and the regression is weighted using the matching weights. Each panel shows the estimates for a different outcome variable: total expenditure by PI (A), total vendor expenditure by PI (B), total labor expenditure by PI (C), and total number of employees paid by PI (D). Month 0 is the month that the focal R01 expires. Month -11 is the excluded category for the regression. Regressions are run separately on subsamples of PIs that have one R01 grant (green) or multiple R01s (brown), including R01-equivalents and P01 grants. Standard errors are clustered by expiring R01 project period.

A priori, we might not expect interruptions to affect how PIs spend their funds if restrictions on when or how they can spend those funds make it difficult to deviate (e.g. if they can only spend those funds within the original budget period). Since we observe a divergence in spending even before the official expiry date, this indicates that PIs have some ability to shift spending outside of the official budget periods. This can happen in two ways.

First, PIs can engage in “pre-award spending”, where they make a request to incur costs up to 90 days before the official start date of the grant. Thus, if PIs of uninterrupted grants know early enough that they will be funded, some of the divergence in spending could be due to them making use of pre-award spending.

Second, PIs of interrupted grants may be able to delay spending beyond the official end date of the grant. For example, they might want to hold off on hiring a postdoctoral researcher until there is more certainty that they have the funding to support them. While these actions are not mutually exclusive, a university administrator said that in their experience, pre-award spending was used “quite often”, whereas saving funds for later was not common. Based on these remarks and the timing of the divergence in spending, the first explanation (pre-award spending by uninterrupted grants) seems to be the more likely reason for what we observe.

Fig 3B and 3C show the event study estimates for vendor payments and labor payments respectively. For labor spending, the change in spending patterns looks broadly similar to that for overall spending. Vendor payments decrease less than labor payments in asinh points but still substantially in percentage terms, with a 93% decrease at their lowest point. Vendor payments also do not recover as quickly as labor payments.

Finally, Fig 3D shows the event study estimates using employee counts as the outcome variable. These results are consistent with what we see for labor payments. In the month after the expiry of the project period, interrupted PIs with one R01 pay about 87% fewer personnel. In the same month, interrupted PIs with multiple R01s pay about 22% fewer employees.

Employees

In addition to their effects on research production, interruptions may have disruptive effects on employees. One concern is that it may force employee turnover in a lab. For instance, a staff scientist may have to switch between PIs in order to maintain their salary or employment, or a postdoctoral researcher may be forced to leave if renewal funding does not become available quickly enough to fund their position. In addition to the personal disruption to employees, there may be a loss of team-specific capital [9].

To get at these issues, I focus on the following outcome variables: whether an employee was (a) paid by the same PI on an NIH grant or (b) paid on any grants at all. I subset the data by employee occupation and number of NIH grants an employee is associated with (as detailed in the Employee-level outcomes section).

Fig 4 shows the event-study estimates by occupation subsample. Across all occupations, interrupted employees associated with one R01 are less likely to be paid by the same PI or by any grant; on the other hand, for those associated with multiple R01s, the probability of being paid decreases less or not at all. Over time, interrupted and uninterrupted employees converge in their probability of being paid. However, for the Postgraduate, Graduate Student, and Research occupations, employees with one R01 remain 13, 6, and 5 percentage points less likely (respectively) to be paid on any grant a year after R01 expiry.

Fig 4. This figure shows event study coefficients with 95% confidence intervals (clustered by expiring R01 project period) of the difference in probability of being paid for employees on an interrupted project relative to those on an uninterrupted project.


The same event study is estimated on subsamples by occupation and number of R01s. Panel A is for the outcome variable of whether an employee is paid by the PI of the renewed R01. Panel B is for the outcome variable of whether an employee is paid on any grants.

These results raise the question of whether employees in those occupations are paid less or even leave their institution, because it may be harder to find internal sources of funding for them.

Research output

In my final set of results, I estimate the effect of interruptions on publications at the yearly level. Table 1 shows the estimates from the “static” specification. None of the estimates are statistically different from zero. This result is consistent with the event study figures (see the Appendix), which do not show obvious differential trends or levels in publications before and after an interruption.

Table 1. Static difference-in-differences estimates.

                        No. of Pubs                 Cite-weighted Pubs
                        1 R01 (1a)   2+ R01 (1b)    1 R01 (2a)   2+ R01 (2b)
Interrupted-by-Post     0.002        -0.016         0.013        -0.024
                        (0.016)      (0.015)        (0.029)      (0.025)
Num. Obs.               108900       115700         108900       115700
R2 Adj.                 0.764        0.802          0.668        0.708

This table shows the “static” difference-in-differences estimates of the difference in publication output if a PI had an interrupted R01, with standard errors in parentheses. The regression includes treatment-cohort-by-year and PI-R01-renewal fixed effects and uses weights from coarsened exact matching on age, gender, and pre-interruption publications (raw counts and citation-weighted). Dependent variables are raw publication counts and citation-weighted (3-year forward citations) publications, both arcsinh-transformed. Standard errors are clustered by expiring R01 project period. Event study plots are available in the Appendix.

One reason for the imprecise estimates may be that interruptions are simply not particularly disruptive in practice. For instance, labs may be able to mitigate the temporary halt in spending by devoting more time to aspects of research that do not immediately need money (e.g. thinking of new ideas, writing).

Another reason for these results may be that universities can sufficiently mitigate the effects of a funding interruption through mechanisms such as bridge funding. In this case, resources have to be diverted to counteract the “true” negative effect of interruptions on research output.

Finally, standard measures of productivity such as publications may be too coarse relative to the true effects of funding interruptions. In addition, variable publication lags may result in the effects of an interruption being “smeared” across several years and therefore hard to detect.

Conclusion

I study how NIH-funded researchers respond to funding interruptions. Using transaction-level data, I am able to examine these effects at a level of granularity that was previously unavailable. I find evidence that interruptions are disruptive to research. PIs spend less either because of the uncertainty about whether or when they will be funded again, or because they are not able to draw on funds from their next budget, or both. These changes may, in turn, be disruptive to the work and training of employees, who become less likely to be paid on grants. In ongoing work, we investigate whether this affects a wider range of employment outcomes such as earnings or even having to leave their institution.

These results point to two important policy implications. First, policies to reduce uncertainty can help us to avoid the costs of disruptive events such as funding interruptions. An example would be a multi-year appropriation for funding agencies’ budgets. Second, given that some amount of uncertainty is unavoidable, how organizations choose to react to uncertainty is an important policy lever that can be more realistically adjusted. This paper underscores that organizations’ risk aversion comes with costs that should factor into decision-making.

Appendix

This appendix provides additional details on the background and data of the paper, as well as supplementary results not in the main text.

Additional background

Scientists’ concern about uncertainty

Uncertainty over funding is a real concern among scientists. DrugMonkey, an anonymous blog run by an NIH-funded researcher, has a post titled “Never Ever Trust a Dec 1 NIH Grant Start Date”. The post warns that projects that are due to be funded on December 1—that is, on the first funding cycle of the fiscal year—are rarely funded on time due to delays in Congress passing the budget.

Even well-established researchers report that uncertainty over funding limits their ability to do research. An article in the San Diego Union Tribune about the impact of NIH budget uncertainty features a prominent cancer researcher, Dr. David Cheresh, expressing that “(t)he uncertainty that the NIH feels reflects itself in my willingness to hire.” Dr. Cheresh is an NIH MERIT awardee with over 70,000 citations, suggesting that even scientists with strong track records are affected by the lack of long-term budget planning.

Time series of interruptions

Fig 5 shows the time series of interrupted R01s. For each fiscal year, the graph shows the percentage of R01 projects renewed within the same fiscal year that experienced a greater than 30-day gap between expiry and renewal. There is an increase in the rate of interruptions over time, though with substantial fluctuations around the overall trend. I also highlight the spike in interruptions in Fiscal Year 1996—during which there were two federal government shutdowns (including one that lasted three weeks) that were reported to be highly disruptive to the grant-making process [3]—as suggestive evidence that the NIH does respond to delays in the federal budgeting process.

Fig 5. Proportion of renewed R01s that experienced an interruption by fiscal year.


An interruption is defined as a gap in funding of more than 30 days. Source: NIH ExPorter.

Variation in interruptions across NIH Institutes and Centers

The NIH comprises 27 Institutes and Centers, commonly known as “ICs”. Each IC is focused on a particular disease (e.g. National Cancer Institute) or body system (e.g. National Heart, Lung, and Blood Institute). ICs administer their own budgets and thus may choose to respond to budget uncertainty differently. The National Institute of Allergy and Infectious Diseases (NIAID), for example, describes itself as being “assiduous about issuing awards using funds from the CR (continuing resolution)”.

Fig 6 repeats Fig 5, showing the percentage of R01 projects that experienced a greater than 30-day gap, but for two ICs (NIAID and NCI) rather than for the NIH as a whole. In recent years, NIAID has had a consistently lower proportion of projects experiencing interruptions than NCI. Even during the acute shock to the budgetary process from the 1996 government shutdowns, the two ICs responded differently, with NCI having more than 40% of its projects interrupted compared to just over 10% for NIAID.

Fig 6. Variation in interruptions across NIH Institutes and Centers (ICs).


The figure shows the proportion of interrupted projects by fiscal year for the National Cancer Institute (NCI) and National Institute of Allergy and Infectious Diseases (NIAID). Source: NIH ExPorter.

Internal funding

Although internal funding is not observed in UMETRICS, I interviewed a university administrator who works with faculty on grant management to gain a better understanding of how Principal Investigators (PIs) and their institutions react to funding delays. This interview is referenced in the main text and here I provide a fuller description of its content.

The administrator expressed that the university’s willingness to provide internal funding varies by individual circumstances such as whether the faculty member is making an effort to find other funding sources or whether they could enter into a collaboration with another faculty member. While they could not explicitly state the extent to which internal funding could make up for the funding gap, they said that it was generally easier to get internal funding for non-personnel and “essential” expenditures such as live animals. Conversely, they said it was harder to get internal funding for personnel and a PI facing funding difficulties might therefore be advised to downsize their lab. In this situation, graduate students may find a new mentor and/or teach instead, while postdocs would have to find a new job.

Data

The paper uses three main data sources.

  1. NIH ExPorter

  2. UMETRICS (2019 release)

  3. Author-ity

NIH ExPorter

NIH ExPorter is publicly available data from the NIH that can be found at https://exporter.nih.gov/. ExPorter provides the following types of data that can be linked to each other: Projects, Project Abstracts, Publications citing support from projects, Patents citing support from projects, Clinical Studies citing support from projects.

Defining project periods

NIH projects are assigned a core project number that is used over multiple project periods. The funds for a project period are allocated from the NIH to the project over multiple budget periods. Each budget period is recorded as a row in the ExPorter Projects data. However, ExPorter does not provide identifiers for project periods. The rest of this section explains how I construct these identifiers.

At the end of each project period, the PI can apply to renew funding for that project for a new project period. Thus, a project can last for multiple project periods.

Although project periods last 4–5 years, the funds for a project are technically released over multiple budget periods, each typically a year in length. ExPorter reflects this by having a new row each time funds are allocated to a project. For example, project number R01GM049850, led by PI Jeffrey A. Simon, was funded from FY 1996 to FY 2017, except for FY 2013. Table 2 below shows the first two project periods over which it was funded.

Table 2. Example of NIH ExPorter data before aggregation into project periods.

PI Name            Core Project Num   Fiscal Year   Application Type   Comment
Simon, Jeffrey A   R01GM049850        1996          1                  New
Simon, Jeffrey A   R01GM049850        1997          5                  Continuation
Simon, Jeffrey A   R01GM049850        1998          5                  Continuation
Simon, Jeffrey A   R01GM049850        1999          5                  Continuation
Simon, Jeffrey A   R01GM049850        2000          2                  Renewed
Simon, Jeffrey A   R01GM049850        2001          5                  Continuation
Simon, Jeffrey A   R01GM049850        2002          5                  Continuation
Simon, Jeffrey A   R01GM049850        2003          5                  Continuation

The NIH makes data on awarded grants publicly available through its ExPorter database. While projects can be identified through their R01 core project numbers, there is no explicit identifier for project periods. I describe below how I define project periods using ExPorter variables and data structure.

The key to defining project periods is the Application Type variable. This is a one-digit code that describes the type of “application” funded (see Table 3 for a full list of application types). For our purposes, the application type allows us to distinguish between what the NIH calls “competing” and “noncompeting” awards. “Competing” funds are provided as a result of having gone through a competitive process against other grant applications. “Noncompeting” funds are provided as part of an already awarded project period. For the typical project, funds disbursed in the first year (i.e. just after the application process) are competing and funds awarded in subsequent years are noncompeting.

Table 3. Full list of application types for NIH grants.

Type   Stage
1      New
2      Renewal
3      Competing Revision
4      Extension
5      Noncompeting Continuation
6      Change of Organization Status (Successor-in-Interest)
7      Change of Grantee or Training Institution
8      Change of Institute or Center
9      Change of Institute or Center

I identify R01 project periods as follows (a code sketch follows the list):

  1. Identify all budget periods with an application type of 1, 2, or 9. These are taken to mark the beginning of a project period.

  2. Assign a set of budget periods to the same project period if they begin between the start dates of two consecutive project periods that belong to the same project.

  3. Take the beginning of the project period to be the start of its first budget period.

  4. Take the end of the project period to be the end of the budget period that ends the latest. If that date falls after the beginning of the next project period, assign the end of the project period to be one day before the next project period starts.
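A pandas sketch of these four steps, with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical ExPORTER extract: one row per budget period, with
# core_project_num, budget_start, budget_end, application_type.
bp = pd.read_csv("exporter_budget_periods.csv",
                 parse_dates=["budget_start", "budget_end"])
bp = bp.sort_values(["core_project_num", "budget_start"])

# Step 1: application types 1, 2, and 9 open a new project period.
bp["opens_pp"] = bp["application_type"].isin([1, 2, 9]).astype(int)

# Step 2: each budget period belongs to the most recent opening row.
bp["pp_num"] = bp.groupby("core_project_num")["opens_pp"].cumsum()

# Step 3: project period start = start of its first budget period.
# Step 4: project period end = latest budget-period end, capped one
# day before the next project period in the same project begins.
pp = (bp.groupby(["core_project_num", "pp_num"])
        .agg(pp_start=("budget_start", "min"), pp_end=("budget_end", "max"))
        .reset_index())
next_start = pp.groupby("core_project_num")["pp_start"].shift(-1)
cap = next_start - pd.Timedelta(days=1)
pp["pp_end"] = pp["pp_end"].where(next_start.isna() | (pp["pp_end"] < next_start), cap)
```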

Monthly panel in UMETRICS

  1. Identify R01s renewed within the same fiscal year

  2. Identify all PIs associated with each renewed R01

  3. For the period of interest (for example, 12 months before and after R01 expiry), create a monthly panel for each PI-R01 renewal

  4. Restrict to panel months that (a) are covered by NIH ExPorter and (b) are covered in all three UMETRICS datasets (award, vendor, subaward)

  5. Restrict to expiring R01 project periods that lasted for 6 years or less

  6. Restrict to PI IDs that appear in the expiring project period and the renewed project period

Yearly panel for publication outcomes

I use a similar “stacking” procedure as described above to construct a PI-R01-renewal by year panel that starts 4 years before an interruption and ends 5 years after, restricted to all interruptions that took place from 1989 to 2006. The earliest possible year in the panel is 1985, the earliest year in ExPorter. The latest possible year in the panel is 2011, so the latest possible citation for a 3-year forward citation window is from 2013, which is the final year indexed in the version of Web of Science that I use. For each treatment cohort (indexed by interruption year), I exclude units that were interrupted less than 5 years before the beginning of the cohort to reduce the possibility that previous interruptions might affect the estimates. Finally, if a PI has multiple R01s renewed within the same year, I assign the PI’s interruption status based on the R01 with the longest gap between expiry and renewal.

NIH coverage of overall grant portfolio

My measure of PI spending is limited to spending through NIH grants. This restriction ensures a high degree of accuracy in linking NIH PIs to transactions. As discussed in the main text, the results on whether interrupted employees continue to be paid on any grant are consistent with the overall set of results and thus do not give us a reason to think that there is a substantial pool of non-NIH grants being used to offset the effects of interruptions.

Other research also suggests that focusing on NIH funding provides substantial coverage of researcher funding. [21] estimate that about 70% of research groups (as defined by a community detection algorithm) in the UMETRICS data rely on federal funding for 90% of their funding. [22] estimate that in 2002, the NIH was by far the largest funder of biomedical research, funding over $20 billion in research compared to $1.2 billion by the Department of Defense. More recent data from the Survey of Federal Funds for Research and Development show that in the life sciences, the NIH has provided about 80% of federal funding for research (basic and applied research combined) in colleges and universities since 2003.

Negative transaction amounts in UMETRICS

Some transactions in UMETRICS are negative amounts. These can appear for a number of reasons including returns, discounts, reversing a purchase that was wrongly assigned, or money that was unused and refunded. In general, it is not possible to separately identify these reasons. If the negative amount is related to a purchase it is also not possible to identify that purchase (e.g. in the case of discounts or returns). Thus, I treat negative amounts as occurring at the transaction date when summing up transaction amounts to the PI-month level. If expenditure in a PI-month remains negative after summing up, I assign a value of zero. In the final sample, I also exclude PIs that have an unusually high amount of negative expenditure relative to the rest of the sample. Specifically, over the 24-month period covered by the panel, I sum up across months where total expenditure was negative and then across all months where total expenditure was positive. I exclude a PI if the absolute value of total negative expenditure was greater than or equal to the absolute value of total positive expenditure. Fig 7 shows the distribution of the ratio of total negative to total positive expenditure.
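A sketch of this cleaning rule in pandas, with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical transaction-level input: pi_id, ym (year-month), amount
# (possibly negative for returns, discounts, or corrections).
tx = pd.read_csv("transactions.csv")

# Negative amounts stay at their transaction date; a PI-month that
# nets negative is set to zero.
monthly = tx.groupby(["pi_id", "ym"])["amount"].sum()
spend = monthly.clip(lower=0).rename("spend")

# Exclude PIs whose total negative expenditure is at least as large
# (in absolute value) as their total positive expenditure.
neg = monthly[monthly < 0].groupby("pi_id").sum().abs()
pos = monthly[monthly > 0].groupby("pi_id").sum()
ratio = neg.reindex(pos.index, fill_value=0.0) / pos
keep = ratio[ratio < 1].index
spend = spend[spend.index.get_level_values("pi_id").isin(keep)]
```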

Fig 7. This is a histogram of the ratio of total negative to total positive expenditure amounts for a PI, as described in the section on negative transaction amounts in UMETRICS.


The ratio is given a value of zero if total positive expenditure was zero and total negative expenditure is also zero.

UMETRICS

I use the 2019 release of the UMETRICS data set, which is housed at IRIS (Institute for Research on Innovation & Science). In this appendix I describe the components of the dataset most relevant to this paper. Summary documentation of the data is publicly available from IRIS. The UMETRICS Core Collection consists of administrative data from universities “drawn directly from sponsored projects, procurement, and human resources data systems”. The Core Collection consists of four datasets: award, vendor, subaward, and employee. “Award” data record the total expenditure from an award in a given transaction period, while the “vendor” and “subaward” data record payments to a vendor and subaward in a given transaction period respectively. “Employee” data record when an employee is paid by an award, but do not contain information on wages. In the analysis, payments to labor are backed out as the remainder after subtracting vendor and subaward payments from total expenditure (i.e. Labor = Total − Vendor − Subaward).

In addition to the Core Collection, there is also an Auxiliary Collection and Linkage Collection that consist of data linking the Core Collection to information such as institution characteristics or external grant data such as NIH ExPorter.

The 2019 UMETRICS release consists of data from 31 universities. I restrict the sample to projects at institutions where transaction periods are at the monthly level. For employee data, pay periods that last longer than a month are converted to monthly, assuming the employee was employed in each month contained in the pay period (e.g. each month of a quarter).
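A sketch of this conversion, assuming hypothetical pay-period records with start and end dates:

```python
import pandas as pd

# Hypothetical employee pay records whose pay periods may span more
# than one month (e.g. a quarter).
pay = pd.read_csv("employee_pay_periods.csv",
                  parse_dates=["period_start", "period_end"])

# Treat the employee as paid in every month the pay period touches.
pay["ym"] = pay.apply(
    lambda r: list(pd.period_range(r["period_start"], r["period_end"], freq="M")),
    axis=1,
)
monthly = (pay.explode("ym")[["employee_id", "award_id", "ym"]]
              .drop_duplicates())
```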

Employee occupational classifications

Figs 8 and 9 (from the UMETRICS 2019 data manual) describe in detail the occupational classification categories used in UMETRICS 2019, along with examples of job titles that fall under each category.

Fig 8. This is one of two screenshots of tables from the UMETRICS 2019 manual describing the employee occupation categories.


Fig 9. This is one of two screenshots of tables from the UMETRICS 2019 manual describing the employee occupation categories.


Inverse hyperbolic sine

Unless otherwise stated or the outcome is a binary variable, I apply an inverse hyperbolic sine (also asinh or arcsinh) transformation to the outcome variable for all regressions, which approximates a natural logarithm and is defined at zero. The approximation is worse at smaller values [23]. For “large” outcomes (i.e. spending amounts), I convert estimates to percentage changes using the standard exp(β̂) − 1 for log transformations. When the outcome variable is “small” (e.g. for counts of employees in a lab), I use the mean of the arcsinh-transformed outcome variable for interrupted PIs (asinh(y_0)) to back out the percentage change as follows:

\frac{y_1}{y_0} - 1 = \frac{\sinh(\operatorname{asinh}(y_0) + \hat{\beta})}{\sinh(\operatorname{asinh}(y_0))} - 1
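Both conversions are straightforward to implement; a sketch:

```python
import numpy as np

def pct_change_large(beta_hat):
    """Log-style conversion for large outcomes such as spending."""
    return np.exp(beta_hat) - 1

def pct_change_small(beta_hat, mean_asinh_y0):
    """Back out y1/y0 - 1 for small outcomes (e.g. employee counts),
    where mean_asinh_y0 is the mean arcsinh-transformed outcome for
    interrupted PIs."""
    y0 = np.sinh(mean_asinh_y0)
    y1 = np.sinh(mean_asinh_y0 + beta_hat)
    return y1 / y0 - 1
```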

Additional descriptive statistics and results

Distribution of renewal gap if not interrupted

Fig 10 is a histogram of the time between expiry and renewal for R01s from the UMETRICS analysis sample that were not interrupted, i.e., renewed within 30 days. The distribution is concentrated at 1 day because funding tends to start on the first day of the month. There is another small spike at 15 days, which similarly is because the fifteenth of the month is the next most common day to start funding.

Fig 10. This figure shows the distribution of the time between expiry and renewal for R01 project periods that were not interrupted, i.e., renewed within 30 days.


Each unit of observation is an R01 project period. The figure is a histogram with 1-day bins.

Timing of R01 expiry

Fig 11 shows the timing of R01 expiry in each of the samples used in the paper. Fig 11A shows the proportion of R01 expiries occurring in each year for the UMETRICS sample, while Fig 11B shows the same for the ExPORTER sample (where the outcome variable was publications).

Fig 11. This figure shows the timing of R01 expiry for each PI-R01 combination in the analysis samples.


Figure A shows the proportion of PI-R01s from the UMETRICS sample where expiry occurred in a given year, while Figure B shows the same for the ExPORTER sample.

Spending

Event study without matching

Fig 12 repeats the main event study estimation without matching; the results are similar.

Fig 12. This figure shows event-study estimates (with 95% confidence intervals) of the difference in spending between PIs of interrupted and uninterrupted R01s.

Each panel shows the estimates for a different outcome variable: total expenditure by PI, total vendor expenditure by PI, total labor expenditure by PI, and total number of employees paid by PI. Month 0 is the month that the expiring R01 expires. Month -11 is the excluded category for the regression. Regressions are run separately on subsamples of PIs that have one R01 grant (green) or multiple R01s (brown), including R01-equivalents and P01 grants. Standard errors are clustered by expiring R01 project period.

Matching on NIH IC and university

I repeat the event study estimation, this time additionally matching on NIH Institute and Center (IC) and university. This procedure substantially reduces the sample size, but the results remain similar (Fig 13).

Fig 13. This figure shows event-study estimates (with 95% confidence intervals) of the difference in spending between PIs of interrupted and uninterrupted R01s.

Treated and control groups are matched on length of the expiring R01 project period, NIH IC, and university, and the regression is weighted using the matching weights. Each panel shows the estimates for a different outcome variable: total expenditure by PI, total vendor expenditure by PI, total labor expenditure by PI, and total number of employees paid by PI. Month 0 is the month that the focal R01 expires. Month -11 is the excluded category for the regression. Regressions are run separately on subsamples of PIs that have one R01 grant (green) or multiple R01s (brown), including R01-equivalents and P01 grants. Standard errors are clustered at the expiring R01 level.

Spending distribution

Fig 14 shows the average arcsinh-transformed spending per month for interrupted and uninterrupted projects. For labs with one R01, uninterrupted labs decrease spending in the months before grant expiry and then gradually increase spending with the beginning of the new grant period. Interrupted labs also decrease spending before expiry, but the decrease is much more pronounced. In addition, the drop in spending continues into the first month after expiry before recovering.

Fig 14. Average total direct expenditures (arcsinh transformed) per month for interrupted and uninterrupted projects, separately calculated for Principal Investigators with one R01 and those with at least two R01s.

Distribution of spending

Fig 15 shows how the entire spending distribution changes over time. For clarity, I show only selected months. The decrease in spending is driven by a "spreading" of the distribution rather than a shift. This results in a mass of PIs at zero, but a substantial portion of interrupted PIs continues to spend amounts similar to those of uninterrupted PIs.

Fig 15. Histogram of total direct expenditures for each month relative to R01 expiry.

Unit of observation is a PI-R01 period.

Length of interruption

I repeat the event-study analysis, allowing the effect to vary with the length of the interruption by estimating separate coefficients for interruptions that lasted 31 to 90 days and interruptions that lasted more than 90 days.

I index PIs by $L$, R01s by $R$, and year-months by $t$. $t_{\text{expiry}}$ is the year-month in which R01 grant $R$ expires. $e$ is the number of months relative to expiry, i.e., $e = 0$ in the month $R$ expires and $e < 0$ before the grant expires. I restrict the sample to the year before and the year after R01 $R$ expires, i.e., $e$ runs from $-11$ to $12$. $e = -11$ is excluded from the specification.

The specification is:

$$y_{LRt} = \sum_{e=-10}^{12} \beta^{1}_{e}\,\mathbf{1}(e = t - t_{\text{expiry}})\,\mathbf{1}\left(\text{Interrupted} \in (30, 90]\right) + \sum_{e=-10}^{12} \beta^{2}_{e}\,\mathbf{1}(e = t - t_{\text{expiry}})\,\mathbf{1}\left(\text{Interrupted} \in (90, \infty)\right) + \delta_{LR} + \gamma_{e} + \epsilon_{LRt}$$
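A minimal sketch of how this specification could be estimated with statsmodels; the data frame `df` and all column names are hypothetical, and with many labs one would absorb the fixed effects with a dedicated routine rather than explicit dummies:

```python
import statsmodels.formula.api as smf

# df: one row per PI-R01 x calendar month, with
#   asinh_y      - arcsinh-transformed spending
#   e            - months relative to expiry (-11 to 12)
#   int_bin      - "none", "31-90", or ">90" interruption length
#   pi_r01       - PI-R01 identifier (the delta_LR fixed effect)
#   expiring_r01 - cluster variable
# Interacting event time with int_bin ("none" as the reference) yields
# separate beta_e paths for the two interruption bins; C(e) gives gamma_e.
model = smf.ols(
    "asinh_y ~ C(e, Treatment(-11)):C(int_bin, Treatment('none'))"
    " + C(e) + C(pi_r01)",
    data=df,
)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["expiring_r01"]})
```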

Fig 16 displays the coefficients. For labs with only one R01, longer interruptions lead to a greater drop in spending and a longer recovery. This accords with the intuition that a longer interruption means a longer time without access to funding. However, even in Month 3, spending has not recovered completely for interruptions lasting 31 to 90 days, indicating that even when funding becomes available, labs may need time to scale their work back up. This is confirmed in the next subsection with an analysis centered on the date of R01 renewal rather than R01 expiry.

Fig 16. This graph shows event-study estimates from a balanced panel of R01-PIs 12 months before and after the focal R01’s expiry month, covering a period of 24 months.

Separate event study coefficients are estimated for interruptions that were 31 to 90 days and interruptions that were more than 90 days. The regressions include R01-PI fixed effects and relative-to-expiry month fixed effects. Month 0 is the month that the project’s budget expires. These regressions are run separately on subsamples of PIs that have one R01 grant (left) or multiple R01s (right), including R01-equivalents and P01 grants. Month -11 is the excluded category for the regression. 95% confidence intervals are clustered by expiring R01 project period.

In addition, allowing the length of interruptions to vary reveals that spending is affected by longer interruptions even for labs with multiple R01s. While the difference is smaller than for labs with one R01, it is still substantial: for interruptions of more than 90 days, spending decreases by 73% in the lowest month.

Callaway-Sant'Anna estimator

I repeat the main event study estimation (i.e., with research inputs as outcome variables) using the doubly robust estimator of [13], including the length of the expiring R01 project period as a control. The results are similar (Fig 17).

Fig 17. This figure shows event study estimates (with 95% confidence intervals) of the difference in spending between PIs of interrupted and uninterrupted R01s.

Estimation is done using the Callaway and Sant'Anna [13] doubly robust estimator, including the length of the expiring R01 project period as a covariate. Each panel shows the estimates for a different outcome variable: total expenditure by PI, total vendor expenditure by PI, total labor expenditure by PI, and total number of employees paid by PI. Month 0 is the month that the focal R01 expires. Month -10 is the excluded category for the regression. For comparison with the main results, the point estimates and confidence intervals have been adjusted so that the point estimate for Month -11 is 0. Regressions are run separately on subsamples of PIs that have one R01 grant (green) or multiple R01s (brown), including R01-equivalents and P01 grants. Standard errors are clustered by expiring R01 project period.

Note that the main event study estimates in the paper use the earliest possible month (11 months before R01 expiry) as the omitted time category. However, the did package used to implement the [13] estimator requires there to be a pre-treatment period, i.e., month -11 cannot be the omitted month. To make these results comparable with the main results, I let month -10 be the omitted time category and then subtract $\hat{\beta}_{-11}$ from all the point estimates and confidence bounds, so that month -11 is effectively the reference month.
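The re-referencing step itself is a simple shift. A sketch, assuming the estimates have been collected into a data frame `est` with hypothetical columns `e`, `beta`, `ci_low`, and `ci_high`:

```python
# Shift the whole estimated path so that month -11 becomes the
# reference period: its point estimate is exactly 0 after the shift,
# and the confidence bounds move by the same constant.
beta_ref = est.loc[est["e"] == -11, "beta"].iloc[0]
est[["beta", "ci_low", "ci_high"]] -= beta_ref
```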

Spending recovery after renewal

To see how quickly spending recovers after R01 renewal, I repeat the main analysis on spending. Whereas the original analysis is centered on the date of R01 expiry, this one is centered on the date of R01 renewal and spans one year before to one year after renewal. The event study coefficients in Appendix Fig 18A show that once funds are available, there is a noticeable jump in spending. However, the recovery in spending is not immediate and takes about three months. The pattern is similar for labor payments (Appendix Fig 18B), while vendor payments recover more slowly (Appendix Fig 18C), as is also the case in the analysis in the main text.

Fig 18. This figure shows event-study estimates (with 95% confidence intervals) of the difference in spending between PIs of interrupted and uninterrupted R01s.

Each panel shows the estimates for a different outcome variable: total expenditure by PI (A), total vendor expenditure by PI (B), total labor expenditure by PI (C), and total number of employees paid by PI (D). Month 0 is the month that the focal R01 is renewed. Month 11 is the excluded category for the regression. Regressions are run separately on subsamples of PIs that have one R01 grant (green) or multiple R01s (brown), including R01-equivalents and P01 grants. Standard errors are clustered by expiring R01 project period.

Figs 19A to 19C repeat the same exercise while allowing the event study coefficients to vary by interruption length. Total spending and labor payments do not recover differently for PIs that experienced longer interruptions; vendor payments, however, recover faster after longer interruptions.

Fig 19. This graph shows event-study estimates from a balanced panel of R01-PIs 12 months before and after the focal R01’s renewal month, covering a period of 24 months.

Separate event study coefficients are estimated for interruptions that were 31 to 90 days and interruptions that were more than 90 days. The regressions include R01-PI fixed effects and relative-to-renewal month fixed effects. Month 0 is the month that the project's budget was renewed. These regressions are run separately on subsamples of PIs that have one R01 grant (left) or multiple R01s (right), including R01-equivalents and P01 grants. Month 11 is the excluded category for the regression. 95% confidence intervals are clustered by expiring R01 project period.

Employee counts at PI/Lab-level

Table 4 displays summary statistics on the employees paid by a PI one year before R01 expiry, for the sample of PI-R01-renewals used in the main analysis of the paper.

Table 4. Count of employees paid by PI at one year before expiry.

| Occupation | 1 R01 median | 1 R01 mean | 1 R01 sd | 2+ R01 median | 2+ R01 mean | 2+ R01 sd |
|---|---|---|---|---|---|---|
| All | 4.00 | 5.52 | 6.97 | 8.00 | 11.02 | 14.85 |
| Faculty | 1.00 | 1.47 | 2.13 | 2.00 | 3.15 | 6.16 |
| Postgraduate | 0.00 | 0.65 | 1.50 | 1.00 | 1.30 | 1.91 |
| Research | 1.00 | 1.16 | 2.09 | 1.00 | 2.51 | 4.62 |
| Clinical | 0.00 | 0.07 | 0.38 | 0.00 | 0.15 | 1.26 |
| Graduate Student | 0.00 | 0.87 | 1.72 | 0.00 | 1.38 | 2.25 |
| Instructional | 0.00 | 0.02 | 0.15 | 0.00 | 0.06 | 0.35 |
| Other | 0.00 | 0.10 | 0.43 | 0.00 | 0.15 | 0.55 |
| Other Staff | 0.00 | 0.02 | 0.32 | 0.00 | 0.02 | 0.27 |
| Research Facilitation | 0.00 | 0.63 | 2.22 | 0.00 | 1.57 | 4.10 |
| Technical Support | 0.00 | 0.15 | 0.86 | 0.00 | 0.28 | 1.09 |
| Undergraduate | 0.00 | 0.37 | 1.17 | 0.00 | 0.46 | 1.83 |

Event studies of employee counts by occupation

Fig 20 repeats the same analysis using employee counts within each occupation. I show results for the five most common occupations: faculty, postgraduate researchers, graduate students, research, and research facilitation. Except for Research Facilitation, each category shows a pattern similar to that of the total employee count.

Fig 20. This graph shows event-study estimates from a balanced panel of R01-PIs 12 months before and after the focal R01’s expiry month, covering a period of 24 months.

The same specification is estimated for each occupation separately, where the outcome is the total number of employees of that occupation paid by the focal lab/PI. The regressions include R01-PI fixed effects and relative-to-expiry month fixed effects. Month 0 is the month that the project’s budget expires. These regressions are run separately on subsamples of PIs that have one R01 grant (top) or multiple R01s (bottom), including R01-equivalents and P01 grants. Month -11 is the excluded category for the regression. 95% confidence intervals are clustered by expiring R01 project period. Percentage changes (plotted as text) are calculated using the median number of employees for interrupted labs at month -11 as baseline.

Employee-level results

Fig 21 shows the average probability each month of being paid by the same PI and of being paid by any grant at all, for employees associated with interrupted and uninterrupted R01s. In both cases, the probability of being paid diverges between employees on interrupted and uninterrupted projects, with the divergence beginning earlier for the "any grant" outcome. Fig 22 repeats this analysis by employee occupation.
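These monthly probabilities are simple cell means over an employee-month panel. A sketch with hypothetical column names (`interrupted`, `e` for months relative to expiry, and binary indicators `paid_same_pi` and `paid_any`):

```python
# Average the paid indicators within each interruption-status x
# event-month cell to trace out the probability paths in the figures.
probs = (panel
         .groupby(["interrupted", "e"])[["paid_same_pi", "paid_any"]]
         .mean()
         .reset_index())
```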

Fig 21. The left column of this figure plots the average probability every month that an employee is paid by the focal PI.

The right column plots the average probability that an employee is paid by any grant at all. Employees linked to one R01 are represented in the top row. Employees linked to 2 or more R01s are represented in the bottom row.

Fig 22. This figure plots the average probability of being paid by the same PI or any grants at all in a given month for employees on interrupted (green) and uninterrupted projects (red).

The data are further split by whether an employee is associated with only one R01-equivalent or with two or more R01-equivalents.

Publications

Coarsened exact matching

To estimate the effect of interruptions on publications, I find all instances in which an R01 was successfully renewed within the fiscal year in which it expired. I then stack all combinations of renewed R01s and PIs of those R01s to create an R01-PI panel.

For PI characteristics, I use Author-ity [11], a dataset of disambiguated author names based on a snapshot of MEDLINE in 2009, which has been probabilistically linked to PI IDs in ExPORTER through the AuthorLink dataset.

I apply Coarsened Exact Matching [20], matching on gender, career age at the time of R01 expiry, and publications (raw counts and counts weighted by 3-year forward citations) in the pre-treatment period (before R01 expiry). Career age is coarsened into 10-year intervals; pre-treatment publications are coarsened at the 0th, 25th, 50th, 75th, 90th, and 95th percentiles. I then estimate event study specifications, with the estimates plotted in Fig 23. The matching step is sketched below.
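A pandas sketch of the coarsening-and-matching logic; the column names are illustrative, a single publication measure is used for brevity, and the paper itself uses the CEM procedure of [20]:

```python
import pandas as pd

df = df.copy()                                    # one row per PI-R01; treated is 0/1
df["age_bin"] = df["career_age"] // 10            # 10-year career-age bins
cuts = df["pre_pubs"].quantile([0, .25, .5, .75, .9, .95, 1]).unique()
df["pub_bin"] = pd.cut(df["pre_pubs"], bins=cuts, include_lowest=True)

strata = ["gender", "age_bin", "pub_bin"]
g = df.groupby(strata)["treated"]
# Keep only strata that contain both treated and control units.
matched = df[g.transform("min").eq(0) & g.transform("max").eq(1)].copy()

# Standard CEM weights: treated units get 1; controls are reweighted so
# each stratum's control mass mirrors its treated mass.
n_t = matched["treated"].sum()
n_c = len(matched) - n_t
s_t = matched.groupby(strata)["treated"].transform("sum")
s_c = matched.groupby(strata)["treated"].transform("size") - s_t
matched["w"] = matched["treated"] + (1 - matched["treated"]) * (s_t / s_c) * (n_c / n_t)
```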

Fig 23. This figure plots the event study coefficients estimating the difference in publication counts (arcsinh-transformed) between PIs that had an interrupted R01 and PIs that had a continuously funded R01, relative to publications in the year of R01 renewal.

R01-PI and treatment cohort-calendar year fixed effects are included. 95% confidence intervals are clustered by expiring R01 project period. The left/red plot is for PIs that had only one R01; the right/blue plot is for PIs that had other R01-equivalent grants.

Event study

Heterogeneity. I repeat the static difference-in-differences estimates on subsamples by NIH IC (Fig 24) and by whether a PI was above or below the median career age (Fig 25). The estimates are statistically insignificant in all cases, and overall there is no indication of a detectable effect of interruptions on research output.

Fig 24. This figure shows "static" difference-in-differences estimates and 95% confidence intervals of the difference in publication output if a PI had an interrupted R01, estimated separately on NIH IC subsamples.

The regression includes treatment-cohort-by-year and PI-R01-renewal fixed effects. Dependent variables are raw publication counts, arcsinh-transformed. Standard errors clustered by expiring R01 project period.

Fig 25. This figure shows "static" difference-in-differences estimates and 95% confidence intervals of the difference in publication output if a PI had an interrupted R01, estimated separately on subsamples divided by whether the PI was above or below the median career age in the sample.

The regression includes treatment-cohort-by-year and PI-R01-renewal fixed effects. Dependent variables are raw publication counts, arcsinh-transformed. Standard errors clustered by expiring R01 project period.

Supporting information

S1 File

(PDF)

Acknowledgments

I am grateful to Bruce Weinberg, David Blau, and Kurt Lavetti, for their invaluable guidance when I started this project as a graduate student. I am thankful to Kyle Myers for his feedback and support in the later stages of this project. This work would not have been possible without the support of IRIS at the University of Michigan, with special thanks to Natsuko Nicholls and Beth Uberseder. I am grateful to BriAnne Crowley for sharing her time and insights on grant management. The paper benefitted from helpful comments by discussants and participants at: the Aug 2018 UMETRICS meeting, Census ARiS Brown Bag, NBER-IFS International Network on the Value of Medical Research meeting, OSU Micro Lunch, SUNY Albany Department of Economics Seminar, MTEI Seminar at EPFL, RISE2 Workshop at the Max Planck Institute for Innovation and Competition (MPI), the Economics of Science & Engineering Seminar at Harvard Business School, and the 14th Workshop on the Organisation, Economics, and Policy of Scientific Research at MPI.

Data Availability

The key data in this paper on grant transactions are confidential and housed at the Institute for Research on Innovation and Science (IRIS) at the University of Michigan. Readers can apply for access to the data and code at https://iris.isr.umich.edu/research-data/access/ or contact IRISdatarequests@umich.edu for guidance. Data that can be shared are hosted on the Open Science Framework at https://osf.io/ekq47/ (DOI: 10.17605/OSF.IO/EKQ47). Citation data were obtained under license from Clarivate Analytics (https://clarivate.com). Readers can contact Jeffrey Clovis (IP&Science) jeff.clovis@Clarivate.com or Ann Beynon (IP&Science) ann.kushmerick@Clarivate.com for information on obtaining the same data.

Funding Statement

WYT received support from the National Institute on Aging of the National Institutes of Health under Award Number R24AG048059 to the National Bureau of Economic Research (https://www.nia.nih.gov/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Lane JI, Owen-Smith J, Rosen RF, Weinberg BA. New linked data on research investments: Scientific workforce, productivity, and public value. Research Policy. 2015;44: 1659–1671. doi: 10.1016/j.respol.2014.12.013
  • 2. Liebman JB, Mahoney N. Do expiring budgets lead to wasteful year-end spending? Evidence from federal procurement. American Economic Review. 2017;107: 3510–49. doi: 10.1257/aer.20131296
  • 3. Mervis J, Marshall E. Science budget: When federal science stopped. Science. 1996;271: 136. doi: 10.1126/science.271.5246.136.a
  • 4. DrugMonkey. Never, ever, ever, nuh-uh, no way, ever trust a Dec 1 start date! 2009. Available: http://drugmonkey.scientopia.org/2009/12/15/never-ever-ever-nuh-uh-no-way-ever-trust-a-dec-1-start-date/
  • 5. Latour B, Woolgar S. Laboratory life: The construction of scientific facts. Princeton University Press; 2013.
  • 6. Conti A, Liu CC. Bringing the lab back in: Personnel composition and scientific output at the MIT Department of Biology. Research Policy. 2015;44: 1633–1644. doi: 10.1016/j.respol.2015.01.001
  • 7. Bae J, Sattari R, Weinberg BA. The marginal scientific product of investments in science. The Ohio State University; 2020.
  • 8. Azoulay P, Graff-Zivin JS, Manso G. Incentives and creativity: Evidence from the academic life sciences. The RAND Journal of Economics. 2011;42: 527–554. doi: 10.1111/j.1756-2171.2011.00140.x
  • 9. Jaravel X, Petkova N, Bell A. Team-specific capital and innovation. American Economic Review. 2018;108: 1034–73. doi: 10.1257/aer.20151184
  • 10. Freeman R, Van Reenen J. What if Congress doubled R&D spending on the physical sciences? Innovation Policy and the Economy. 2009;9: 1–38. doi: 10.1086/592419
  • 11. Torvik VI, Smalheiser NR. Author name disambiguation in MEDLINE. ACM Transactions on Knowledge Discovery from Data (TKDD). 2009;3: 11. doi: 10.1145/1552303.1552304
  • 12. Goodman-Bacon A. Difference-in-differences with variation in treatment timing. Journal of Econometrics. 2021;225: 254–277. doi: 10.1016/j.jeconom.2021.03.014
  • 13. Callaway B, Sant'Anna PH. Difference-in-differences with multiple time periods. Journal of Econometrics. 2021;225: 200–230. doi: 10.1016/j.jeconom.2020.12.001
  • 14. Sun L, Abraham S. Estimating dynamic treatment effects in event studies with heterogeneous treatment effects. Journal of Econometrics. 2021;225: 175–199. doi: 10.1016/j.jeconom.2020.09.006
  • 15. Borusyak K, Jaravel X. Revisiting event study designs. Available at SSRN 2826228. 2017.
  • 16. Cengiz D, Dube A, Lindner A, Zipperer B. The effect of minimum wages on low-wage jobs. The Quarterly Journal of Economics. 2019;134: 1405–1454. doi: 10.1093/qje/qjz014
  • 17. Baker A, Larcker DF, Wang CC. How much should we trust staggered difference-in-differences estimates? Available at SSRN 3794018. 2021.
  • 18. Goodman-Bacon A. Difference-in-differences with variation in treatment timing. National Bureau of Economic Research; 2018.
  • 19. Abraham S, Sun L. Estimating dynamic treatment effects in event studies with heterogeneous treatment effects. Available at SSRN 3158747. 2018.
  • 20. Iacus SM, King G, Porro G. Causal inference without balance checking: Coarsened exact matching. Political Analysis. 2012; 1–24. doi: 10.1093/pan/mpr013
  • 21. Funk R, Glennon B, Lane J, Murciano-Goroff R, Ross M. Money for something: Braided funding and the structure and output of research groups. 2019.
  • 22. Moses H, Dorsey ER, Matheson DH, Thier SO. Financial anatomy of biomedical research. JAMA. 2005;294: 1333–1342. doi: 10.1001/jama.294.11.1333
  • 23. Bellemare MF, Wichman CJ. Elasticities and the inverse hyperbolic sine transformation. Oxford Bulletin of Economics and Statistics. 2020;82: 50–61. doi: 10.1111/obes.12325

Decision Letter 0

Joshua L Rosenbloom

14 Jun 2022

PONE-D-22-12290
Science, Interrupted: Funding Delays Reduce Research Activity but Having More Grants Helps
PLOS ONE

Dear Dr. Tham,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Both reviewers are quite positive about the contribution your analysis makes and think the paper has the potential to be even better. In their comments they note a number of questions that are raised but not fully addressed in the current version. In addition, they make a number of suggestions concerning the empirical implementation, in particular around the choice of control group, the potential for selection into treatment, and the use of event study methodology. Finally, they offer some useful suggestions for improving the clarity of presentation.

I encourage you to consider adopting the changes recommended by the reviewers, or to clarify the reasons for the choices you have made where you diverge from them.

Please submit your revised manuscript by Jul 29 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Joshua L Rosenbloom

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

If you are reporting a retrospective study of medical records or archived samples, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent. If patients provided informed written consent to have data from their medical records used in research, please include this information.

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

4. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well. 

5. Please ensure that you refer to Figure 13 in your text as, if accepted, production will need this reference to link the reader to the figure.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Report for: “Science, Interrupted: Funding Delays Reduce Research Activity but Having More Grants Helps”

The paper shows that when NIH grant funds are interrupted (delay in renewal), there is a reduction in expenditures by labs inputs and employees. This reduction is limited to PIs who have no other R01 grants that may be used to mitigate the impacts.

The paper is well written, interesting and important. I commend the author for their analysis. I hope my suggestions below help improve any lingering issues:

1. Temporary nature, and aggregate effect: The effects seem temporary; but when the expenditures rebound, they do not rise high enough to compensate. Can we quantify the overall change in expenditures (i.e., the stock, not flow) over the two years, starting 3 months before the interruption? This would help us get a take-home number on the long-run reduction in spending by the lab because of the renewal delay.

2. Event-studies literature: There is a long new literature on event-study specifications. The author has cited the literature (Goodman-Bacon 2018; Callaway and Sant'Anna 2018; Abraham and Sun 2018; Borusyak and Jaravel 2017), but it would be great to see the methods applied. The code is readily available on their websites, so implementation isn't too difficult. The author also says "The Online Appendix discusses these in detail", but I had trouble finding it there. It would be good to see the event studies using the methods cited (Callaway and Sant'Anna 2018; Abraham and Sun 2018; Borusyak and Jaravel 2017), or in fact using the stacked-regression method in the appendix of the Cengiz, Dube, Lindner, Zipperer (2019) QJE paper.

3. Three months pre-trends and selection into delay: The fall in expenditures begins 3 months before the interruption. Can we have a short discussion about why 3 months before? The reader may be worried that this is a sign of pre-trends, and so “selection into treatment” – that is, the projects that were doing poorly, and so cutting expenditures, were the ones that were not renewed. So we may have reverse causality: the delay doesn’t lead to a reduction in expenditures, but rather the project doing poorly causes the delay. Any way to help understand why this concern is not relevant would help.

4. Minor: The first regression equation has $\delta_{LR}$ fixed effects, but the text following it discusses it as $\delta_{iR}$ (employee level instead of PI level). I believe it's just a minor typo.

5. Minor: Online Appendix: Many times the text refers the reader to the Appendix, but it is hard to find where in the appendix we should look. I suggest saying "refer to Online Appendix Section A.3" or "Figure A.4 in the Online Appendix".

Reviewer #2: The paper submitted analyzes the impact of delays in R01 grant recipients receiving their funds on both the employment of individuals in their labs and the scientific output of the labs. The paper finds that delays in funding have a sharp but temporary adverse effect on employment for PIs with a single R01. In contrast, PIs who have multiple R01 grants show little change in their employment of graduate students, undergraduates, and research facilitators. Additionally, while these funding delays have large, temporary effects on employment, they do not significantly decrease the rate of production of publications.

Overall, I enjoyed reading the paper, find it methodologically sound, and consider the finding of a sizeable impact of funding delays to be compelling. Below I list a few questions and suggestions for revising the manuscript.

1. The analysis is well done and compelling. The missing component, however, is what the practical impact is of the large decline in both working employees and vendor spending. Take the employees, for example. If graduate students are released from labs for some number of months, are they truly not working? If so, how does that impact the career success of those graduate students? Are they more likely to not graduate if they had that disruption in their work? Or, do graduate students continue to work in the labs but without compensation—creating the dip you find in your analysis, but without an impact on the training of the graduate students themselves? Figure 4 begins to address this, but teasing out which of these stories is correct is important since in some the impact is fairly limited while in others the impact is substantial and would be a call for additional policy response.

2. The empirics are generally clear and appropriate. The question that I would like the paper to address is why you used the control group that you chose for the analysis. One might imagine running similar analysis using the same labs but during an earlier time period as a control. You could also imagine restricting the control group to be within the same university or field. I would suggest that the motivation for the choice of control group and the reason for dismissing other potentially valid control group specifications be made explicit in the paper.

3. It is not clear what the aggregate impact of funding delays is on science. If scientists use less of their funds when the funding is delayed, they will presumably be able to spend more later to make up for that dip and expend the entirety of their grant. In the event study plots, however, it is not clear where the labs are spending more than their steady-state level. Could you please explain why we don’t see higher than average spending after the dip is done? Could you also try to estimate what is the total impact over the course of the grant if labs spend less in the period when funding is delayed and more in later periods?

4. The most notable heterogeneity is the difference between labs with one versus multiple R01 grants. Could you provide additional information about why labs are able to compensate for delays using other R01s but are not able to compensate with other non-R01/departmental funds? Relatedly, could you please provide a figure that shows that PIs with multiple R01s increase their spending on their other R01 to compensate for the decreased spending on the delayed R01 (if true). If you do not find a commensurate increase in expenditures on the unaffected second R01 grant, then why are PIs with multiple grants able to maintain their level of spending?

5. Additional exploration of heterogeneity by fields, size of lab, PI age, and other attributes could be very interesting to readers. For example, Appendix Section 1.3 clearly shows heterogeneity in the rates of interruptions across fields. Similarly, heterogeneity in the impact on employees by employee attributes could be of interest to policymakers.

6. Appendix Section 1.2 says that the time series of interruptions provide evidence that the NIH is responsive to delays in the federal budgeting process. Could the author please expand on how the figure demonstrates this?

7. The author sometimes refers to labs. Can the author please clarify if a single PI is equivalent to a “lab” or if a lab can have multiple PIs?

8. Why is the preferred specification in the main text the one not matched on project period length, with the matched version in Appendix 3.1.1, rather than the other way around? I find the matched project length version more compelling; however, if there is a reason to prefer the unmatched version, I would ask the author to note that in the empirical framework section.

9. What is the difference between the employees with the title “Research” and the title “Research Facilitation”?

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Gaurav Khanna

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 1

Joshua L Rosenbloom

14 Nov 2022

PONE-D-22-12290R1
Science, Interrupted: Funding Delays Reduce Research Activity but Having More Grants Helps
PLOS ONE

Dear Dr. Tham,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Both reviewers state that you have addressed the bulk of the concerns raised regarding the initial submission. This version is very close to ready for publication. Reviewer #2 raises a number of questions, however, that should not require any further data analysis or investigation, but that, if answered, will enhance the usefulness and impact of your article. I will not need to seek external review at this point. However, I want to give you the opportunity to strengthen your contribution by considering the queries from Reviewer #2, which mostly ask you to clarify your choices or provide further explanation. Simply summarize the changes you have made, or your reasons for not making changes, in the response to reviewers. Once I am satisfied you have given these suggestions careful thought I will be happy to accept this article for publication.

Please submit your revised manuscript by Dec 29 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Joshua L Rosenbloom

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I congratulate the authors for a well executed manuscript. I believe it would make an important contribution to the literature. I encourage the authors to share all data and code if possible.

I look forward to other papers that examine what happens to grad students.

Reviewer #2: Thank you for the opportunity to read this revised manuscript. I am delighted with the changes that the authors have made. I am particularly pleased with the way in which the authors addressed the staggered difference-in-differences literature.

I do believe that there are three areas that the authors could improve before publication. First, the section describing the data could be clearer; in particular, I would like more details about the sample selection brought into the main text. Second, the most interesting aspect of this paper is the heterogeneity in the effect of interruptions across researchers. The authors have done some work on this front, but I think that this could greatly enhance the impact of the paper. Lastly, while the authors have made progress on addressing why the total spending of researchers is less, I still feel that this could be more pointedly addressed.

1. The main text data section could be clearer about the sample timeframe. Specifically, am I correct that the data on employment go from 2001 to 2018, but the research output is measured for 2001 to 2013? The main data section mentions the raw databases that are linked together to form the analytical dataset, but it would be helpful if the authors also described the main analytical dataset, including what time period is actually covered for the outcome variable and covariates in the main text.

2. Some of the most interesting results from this paper could be the heterogeneity across researchers and subfields of research. I would recommend bringing Appendix D.4.3 into the main text and expanding on what can be learned from it. The heterogeneous effects on young researchers are particularly interesting; is this evidence of a Matthew Effect? I am less clear on what the conclusion is regarding the heterogeneity across NIH centers, but would appreciate it if the authors could expand on it.

3. I am still a little confused about the aggregate spending impact. If the total spending change is a 52% decline, where did the remaining money go? I understand that a researcher might forgo hiring a graduate student, and thus the researcher will have lower labor costs. But does that mean that the researchers are simply not spending down their full grants now? If data are not available to address this, I would look for qualitative evidence or comments from PIs regarding how they handle the excess funds at the end of their grant.

4. Figure 2(b) – This graph is very useful for understanding if there is a sharp distinction between those with less than a 30 day and those with more than 30 day interruption. Could you please bin the days of interruption, such that we can see with more granularity the distribution under ~60 days?

5. I appreciate the inclusion of Figure 6 showing the rate of interruptions over time. My one concern is that the timeline ends at 2014, but you are using data through 2018. Even if you filtered to PIs with a year’s worth of monthly data following the disruption, I would expect this graph to go through 2017. Could you please either update that figure or explain why it is cut at 2014?

6. On page 13, you say that the coefficients of interest are beta-t, however, I think this might be a typo. In the equation, I only see a beta-m. I think that to be consistent, it might be useful to stick to beta-e.

7. For the equations on page 14, I think that there are missing summation terms on the interaction of the beta coefficients and the relative-time indicators.

8. Given the lack of impact on research outcomes and yet the massive impact on employee counts (particularly for single R01s), is there some way to tell if the labs' employees are simply switching to uncompensated work? For example, can you see whether the names of the researchers on the publications from the labs change when the PIs reduce employment during the interruption? If you could show that there was uncompensated work done, this would be a contribution to policy debates about grad student and postdoc compensation.

9. The authors use the UMETRICS data through 2018. I imagine that the reason for not expanding into more recent years is because the authors do not want to conflate funding disruptions with the COVID-19 disruptions. I would encourage the authors to mention the motivation for the sample time frame chosen in the main text.

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Gaurav Khanna

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 2

Joshua L Rosenbloom

4 Jan 2023

Science, Interrupted: Funding Delays Reduce Research Activity but Having More Grants Helps

PONE-D-22-12290R2

Dear Dr. Tham,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Joshua L Rosenbloom

Academic Editor

PLOS ONE


Acceptance letter

Joshua L Rosenbloom

20 Feb 2023

PONE-D-22-12290R2

Science, Interrupted: Funding Delays Reduce Research Activity but Having More Grants Helps

Dear Dr. Tham:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Joshua L Rosenbloom

Academic Editor

PLOS ONE

Associated Data


    Attachment

    Submitted filename: interruptions_response-to-reviewers.pdf

    Attachment

    Submitted filename: interruptions_response_round2.pdf
