Author manuscript; available in PMC: 2011 Apr 6.
Published in final edited form as: Acad Med. 2009 Nov;84(11):1491–1499. doi: 10.1097/ACM.0b013e3181bb2ca6

Institutions’ Expectations for Researchers’ Self-Funding, Federal Grant Holding, and Private Industry Involvement: Manifold Drivers of Self-Interest and Researcher Behavior

Brian C Martinson 1, A Lauren Crain 2, Melissa S Anderson 3, Raymond De Vries 4
PMCID: PMC3071700  NIHMSID: NIHMS281110  PMID: 19858802

Abstract

Background

Private industry involvement is viewed as tainting research with self-interest, whereas public funding is generally well-regarded. Yet, dependence on “soft money” also triggers researcher and university self-interest. No empirical research has compared these factors’ effects on academic researchers’ behaviors.

Methods

In 2006–2007, a survey was mailed to 5,000 randomly selected biomedical and social science faculty at 50 top-tier research universities in the United States. Measures included a university’s expectations or nonexpectations that researchers obtain external grant funding, the receipt or nonreceipt of public research funding, any relationships with private industry, and research-related behaviors ranging from the ideal, to the questionable, to misconduct.

Results

Being expected to obtain external funding and receiving federal research funding were both associated with significantly higher reports of 1 or more of 10 serious misbehaviors (P < .05) and neglectful or careless behaviors (P < .001). Researchers with federal funding were more likely than were those without to report having carelessly or inappropriately reviewed papers or proposals (9.6% vs. 3.9%; P < .001). Those with private industry involvement were more likely than were those without to report 1 or more of 10 serious misbehaviors (28.5% vs. 21.5%; P = .005) and to have engaged in misconduct (12.2% vs. 7.1%; P = .004); they also were less likely to have always reported financial conflicts (96.0% vs. 98.6%, P < .001).

Conclusions

The free play of university and individual self-interests, combined with and contributing to the intense competition for research funding, may be undermining scientific integrity.


“So I have just one wish for you: the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom.”

—Richard Feynman, Surely You’re Joking, Mr. Feynman!

The principle of disinterestedness has long been an ideal in the realm of science. There are increasing concerns, however, that the systems in place for raising the money required to conduct science can affect science in a way that violates the norm of disinterestedness. Whether public support (primarily from the federal government) for research is preferable to private (industry) support is a contentious issue, but, in the current setting, it may be increasingly moot, because the two types of funding are increasingly intertwined at multiple levels. This development can be seen in the recently documented high prevalence of academy-industry relationships at the institutional level,1 in the increasing variety of “experimental” institutional models of corporate funding of academic research,2 and in the increasingly large proportion of individual academic researchers who receive funds from both public and private sources.

There is a tendency to view private industry involvement as potentially tainting research with self-interest, whereas public funding is regarded as posing little threat to the validity of research findings. Recent evidence shows that the imperatives of private interests can indeed lead to unwanted behaviors on the part of researchers who are the recipients of corporate largess,3,4 but how accurate is the perception that public funding of research, per se, safeguards the integrity of science? And what are the behavioral ramifications of a research climate that prompts academic researchers to increasingly rely on external funding—so-called “soft money”—for their salaries, be that funding public or private?

Over the past 30 years, universities—and particularly their medical schools, where the majority of biomedical and behavioral research is conducted—have made changes in their hiring, promotion, and tenure policies, nearly all of which have led to expectations that increasing proportions of faculty salaries will be derived from external research funding,5 obtained by the faculty members themselves; such expectations have resulted in an intensification of the competitive environment for federal research funding. These changes in the “contract” between universities and academic researchers, creating a winner-take-most “tournament” environment,6 have increased opportunities for the play of self-interest in biomedical science at both the university and individual levels. Academic health centers have become dependent on these external research funds,7, 8 and, at the same time, individual researchers within them face increasing uncertainty and risks as they find their career prospects ever more closely tied to their success in garnering external research dollars. Yet, the impact of these changes on the integrity of academic institutions and the researchers they employ remains largely unexamined. Data we have collected from academic researchers at 50 top research universities provide one of the first opportunities to examine these issues empirically.

The role of funding in the work and integrity of science is one aspect of a larger concern about the way science is shaped by the environment in which it is conducted. In previous research,9–13 we examined several aspects of the research environment that are associated with appropriate and inappropriate behavior on the part of scientists. Our data, along with the work of others,14 pointed to the need for more research into which aspects of the research environment foster, and which undermine, integrity in science. In this report, we use new data to examine the relationship between the ways that scientists fund their work and their self-reporting of both commendable and problematic behaviors.

Method

Data collection

We obtained prior approval for this research from the Regions Hospital Institutional Review Board, the oversight body with responsibility for all research conducted at HealthPartners Research Foundation, and from the University of Minnesota Institutional Review Board. In late 2006 and early 2007, we conducted a mailed survey of 5,000 faculty members, selected at random from within 500 departments across 50 randomly selected, top-tier research universities in the United States. We asked respondents to report their own behaviors, ranging from “ideal” behavior (e.g., disclosing conflicts of interest, following regulations), to “questionable” behavior (e.g., inadequate record-keeping, cutting corners in order to complete a project), to outright misconduct (e.g., falsification or fabrication of data).

We selected universities to be representative of the premier research institutions in the United States that are receiving substantial government funding for academic medical research. We accomplished this by constructing a listing of 96 institutions from the membership of the Association of American Universities and from among the institutions that fell into the “comprehensive doctoral with medical/veterinary schools” category of the Carnegie classification as of the fall of 2005.15 Among these universities, we considered eligible those that did not restrict online directory access, that had a medical school, and that had sufficient website functionality to support creation of the sampling frame. We randomly selected 50 universities that met these criteria.

We used university websites and directory listings to identify the departments at each institution within each of five disciplinary areas that are heavily funded by NIH: biology, chemistry, medicine, social sciences, and allied health sciences. For each institution, we generated a list of all departments in these areas that had at least 11 faculty members and then randomly selected two departments from each of the five areas at each university for inclusion in the sampling frame, for a total of 500 departments.

Individuals in the selected departments who were listed in Web-based directories in the summer of 2006 were considered study-eligible if they appeared to be regular research faculty (i.e., assistant, associate, or full professor, including those with clinical or research designations). We excluded individuals who clearly were not faculty, as well as doctorate-level employees who were temporarily employed, who were postdoctoral fellows, who were retired, and who held adjunct, teaching, or clinical appointments with no evidence of research responsibilities. Finally, we randomly selected 10 eligible faculty members from each sampled department, for a total of 5,000 faculty members.
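For illustration, the two-stage selection described above (two departments per disciplinary area per university, then 10 eligible faculty per department) can be sketched roughly as follows. This is a minimal sketch assuming a hypothetical in-memory frame; the names (universities, AREAS, sample_frame) are placeholders, not artifacts of the actual study.

```python
import random

# Fixed seed only so the illustrative draw is reproducible.
random.seed(2006)

AREAS = ["biology", "chemistry", "medicine", "social sciences", "allied health sciences"]

def sample_frame(universities):
    """Two-stage sample: 2 departments per area per university, then 10 faculty each.

    `universities` maps a university ID to {area: {department: [faculty, ...]}}.
    Departments with fewer than 11 listed faculty are ineligible, mirroring the
    eligibility rule described above; each area is assumed to have at least two
    eligible departments.
    """
    selected = []
    for univ, areas in universities.items():
        for area in AREAS:
            eligible = {dept: members for dept, members in areas.get(area, {}).items()
                        if len(members) >= 11}
            for dept in random.sample(sorted(eligible), k=2):
                for person in random.sample(eligible[dept], k=10):
                    selected.append({"university": univ, "area": area,
                                     "department": dept, "faculty": person})
    return selected  # 50 universities x 5 areas x 2 departments x 10 faculty = 5,000
```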

To ensure anonymity, survey responses were never linked to the identities of universities, departments, or individual respondents. However, we coded each survey with random numbers to denote the surveys that had been mailed to the same de-identified institutions and departments. To further ensure the anonymity of both the individuals and institutions represented in our data, we destroyed the table linking these random numbers to universities once data collection was completed. Moreover, we destroyed all individually identifying data used for contacting participants in the survey fielding process. Of the 5,000 surveys mailed, 4,915 were delivered to study-eligible individuals; 1,703 surveys yielded usable data, for a response rate of 35%.

Measures

The key outcomes were faculty members’ self-reports of behaviors they had engaged in that could be either detrimental to or supportive of the integrity of research. The misbehavior items were refinements of items from previous research.9 For example, we included three items to measure misconduct as formally defined by the U.S. Office of Science and Technology Policy.16 We also asked about 10 exemplary behaviors reflecting ideals of ethical behavior in science as addressed in Steneck’s ORI Introduction to the Responsible Conduct of Research.17 (The specific wording for all items we analyze here can be found in Appendix 1.) Questions about ideal behaviors preceded questions about misbehavior and misconduct on the survey.

We asked survey respondents to indicate whether they had engaged in any of 30 specified misbehaviors during the previous three years. We report some misbehavior results as composite measures and others as single items. To compare the current results with those in our earlier study, we created a composite roughly comparable to the one we referred to in earlier publications9,11,13 as the 10 most serious misbehaviors, as judged by a panel of compliance officers,9 subject to a variety of item refinements. The “misconduct” composite encompassed fabrication of data, falsification of data, and plagiarism (FFP). Finally, the “neglect/carelessness” composite represents four neglectful or careless behaviors (see Appendix 1). Two single-item indicators assessed “circumventing human subjects requirements” and “careless peer review” (Appendix 1). For outcomes based on single question items, we considered respondents to have engaged in an instance of misbehavior if they reported having engaged in that specific behavior. For the composite outcomes, which were based on multiple question items, we considered respondents to have engaged in that misbehavior if they reported having engaged at least once in any single behavior among those included in a composite.
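As a concrete illustration of this scoring rule, the sketch below flags a composite as endorsed if any constituent item was endorsed; the mb_* column names are hypothetical placeholders rather than the survey’s actual variable names.

```python
import pandas as pd

# Hypothetical 0/1 item columns: 1 = engaged in the behavior at least once in the
# previous three years, 0 = did not.
FFP_ITEMS = ["mb_fabrication", "mb_falsification", "mb_plagiarism"]
NEGLECT_ITEMS = ["mb_materials_handling", "mb_poor_recordkeeping",
                 "mb_inadequate_monitoring", "mb_cutting_corners"]

def score_outcomes(responses: pd.DataFrame) -> pd.DataFrame:
    """Composite = 1 if any constituent item was reported; single items pass through."""
    out = responses.copy()
    out["misconduct_ffp"] = out[FFP_ITEMS].max(axis=1)
    out["neglect_carelessness"] = out[NEGLECT_ITEMS].max(axis=1)
    out["careless_peer_review"] = out["mb_careless_peer_review"]
    out["circumvent_human_subjects"] = out["mb_circumvent_human_subjects"]
    return out
```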

We also asked respondents to report how often they personally engage in each of 10 ideal behaviors, with response options of “never,” “seldom,” “about half the time,” “usually,” “always,” and “not applicable.” Responses to these items were all highly skewed toward the favorable end of the scale, so we contrasted those who reported “always” engaging in an ideal behavior with all others, exclusive of “not applicable” responses.

We also asked respondents to report the percentage of their salaries that they were expected to cover through external research funding, whether they currently held federal research grants, and the types of involvements they had with private, for-profit firms outside the university. We created a binary indicator of external funding expectation to denote whether an individual was expected to cover 20% or more of his or her salary through external grants. We also created an indicator, federal funding, of the respondent’s being either a principal investigator or co-investigator on one or more federally funded research grants. We asked whether, in the previous three years, they had received honoraria, consulted or advised, received research funding, or served on a board of directors or as a founder or officer or had otherwise been involved in a fiduciary relationship with any private, for-profit entities. The indicator “private interests” denotes involvement of any of these kinds with a private, for-profit company.
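Continuing the sketch above, the three binary predictors could be derived as follows; again, the column names are hypothetical stand-ins for the survey variables.

```python
import pandas as pd

def add_predictors(responses: pd.DataFrame) -> pd.DataFrame:
    """Derive the three binary predictors of primary interest (hypothetical columns)."""
    out = responses.copy()
    # Expected to cover 20% or more of salary through external grants.
    out["expect_external"] = (out["pct_salary_expected_external"] >= 20).astype(int)
    # Principal investigator or co-investigator on one or more federal grants.
    out["federal_funding"] = out["federal_pi_or_coi"].astype(int)
    # Any reported involvement with a private, for-profit firm in the prior three years.
    involvement_items = ["honoraria", "consulting_advising",
                         "industry_research_funding", "board_founder_officer"]
    out["private_interests"] = out[involvement_items].max(axis=1)
    return out
```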

Statistical analysis

The modest response rate of 35% raised concern about the potential for nonresponse bias. To appropriately assess the likelihood that this relatively low response rate would adversely affect the accuracy of statistical estimates in these data, we had to consider the extent to which the likelihood that an individual would respond to the survey (i.e., response propensity) was correlated with each key outcome.18,19 We specifically constructed the sampling frame so that it enabled a systematic assessment, using propensity score methods, of the extent to which nonresponse bias was present and also enabled diagnostic analyses to assess whether weighting to correct for nonresponse was appropriate.20,21 Response propensity scores were highly predictive of response likelihood and virtually unrelated to the misbehavior outcomes. Under these circumstances, the application of response propensity weights would not reduce response bias but would increase the standard errors of the misbehavior outcomes. Response propensity was, however, related to the likelihood of endorsing the ideal behaviors. As such, we report unweighted analyses for the misbehavior outcomes, and we report analyses weighted to adjust for response propensity for the ideal behaviors.20 Nonetheless, there likely is unmeasured covariance between response likelihood and reported behaviors, which could have resulted in misestimation of behaviors; if, as seems plausible, those who misbehave are less likely to have responded, then misbehaviors are underestimated in our data.
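The nonresponse diagnostic can be summarized, in simplified form, as follows; the frame covariates and column names are placeholders, and the sketch is meant only to show the general logic (model response propensity, check its correlation with outcomes, and form inverse-propensity weights), not to reproduce the procedures of references 20 and 21.

```python
import pandas as pd
import statsmodels.formula.api as smf

def nonresponse_diagnostics(frame: pd.DataFrame, outcome_cols):
    """Sketch of the response-propensity check (hypothetical column names).

    `frame` has one row per sampled faculty member, with `responded` (0/1),
    frame covariates observed for everyone, and outcome columns observed only
    for respondents.
    """
    # 1. Model response propensity from sampling-frame covariates (placeholders).
    propensity_model = smf.logit(
        "responded ~ C(field) + C(rank) + public_university", data=frame
    ).fit(disp=0)
    frame = frame.assign(propensity=propensity_model.predict(frame))

    # 2. Among respondents, weak propensity-outcome correlations suggest that
    #    weighting would inflate variance without reducing bias.
    resp = frame[frame["responded"] == 1]
    correlations = {y: resp["propensity"].corr(resp[y]) for y in outcome_cols}

    # 3. Inverse-propensity weights, applied here only to the ideal-behavior analyses.
    weights = 1.0 / resp["propensity"]
    return correlations, weights
```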

Respondents’ reports of whether they had engaged in each of the ideal behaviors or the misbehaviors were predicted by using multilevel logistic regression so that the nested data structure does not result in inflation of the type I error rate. For each outcome, we estimated a saturated model that included the three main effects—expectation of obtaining external funding, federal funding, and private interests—as well as the two-way and the three-way interactions. We removed nonsignificant interaction terms until a main effects model, or a model with main effects and significant interactions, remained. Modeled covariates were the number of years elapsed since the award of the doctoral-level degree, sex, race-ethnicity, tenure status, and field of study (Table 1). The multilevel models specified a binomial error distribution with a logit link function, nested faculty within academic departments, and used residual pseudo-likelihood estimation and subject-specific linearization. In the few instances in which there was insufficient department-level variance for calculation of a non-zero variance estimate, we used standard logistic regression.
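A compact stand-in for this modeling sequence is sketched below. The paper’s models are multilevel (random intercepts for departments, residual pseudo-likelihood); as a simplified approximation, this sketch fits an ordinary logistic regression with department-clustered standard errors and backward-eliminates nonsignificant interaction terms. Column names are hypothetical, and the pruning is simplified (a full implementation would remove the three-way term before any two-way terms).

```python
import statsmodels.formula.api as smf

MAIN_EFFECTS = ["expect_external", "federal_funding", "private_interests"]
INTERACTIONS = ["expect_external:federal_funding",
                "expect_external:private_interests",
                "federal_funding:private_interests",
                "expect_external:federal_funding:private_interests"]
COVARIATES = ["years_since_phd", "female", "minority", "C(tenure_status)", "C(field)"]

def fit_pruned_model(df, outcome):
    """Fit the saturated model, then drop nonsignificant interactions one at a time."""
    interactions = list(INTERACTIONS)
    while True:
        formula = f"{outcome} ~ " + " + ".join(MAIN_EFFECTS + interactions + COVARIATES)
        result = smf.logit(formula, data=df).fit(
            cov_type="cluster", cov_kwds={"groups": df["department_id"]}, disp=0
        )
        removable = [t for t in interactions if result.pvalues.get(t, 0.0) >= 0.05]
        if not removable:
            return result  # main effects only, or all remaining interactions significant
        # Drop the least significant interaction and refit.
        interactions.remove(max(removable, key=lambda t: result.pvalues[t]))
```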

Table 1.

Raw Distributions of Key Analytic Variables

Variables Values
Years since award of doctorate, mean (SD) 21.2 (12.1)
Female, % 35.2
Racial or ethnic minority, % 14.4
Tenure status, %
 Tenured 62.5
 Tenure track 20.3
 Not tenure track 17.2
Field of study, %
 Biology 23.7
 Chemistry 18.1
 Allied health sciences 22.3
 Medicine 15.7
 Social Sciences 20.3
Predictors of primary interest, %
 Expectation by administration that researcher will obtain ≥20% of salary from external funding 53.7
 Involvement in for-profit company 34.0
 Principal Investigator or Co-Investigator on federal grant 62.9
External funding not expected, %
 Industry not involved 30.2
 Industry involved 13.0
External funding expected, %
 Industry not involved 35.3
 Industry involved 21.6
External funding not expected, %
 No federal funding 24.9
 Federal funding 18.3
External funding expected, %
 No federal funding 11.5
 Federal funding 45.4
Industry not involved, %
 No federal funding 27.3
 Federal funding 38.7
Industry involved, %
 No federal funding 9.8
 Federal funding 24.2
Misbehavior at least once in previous three years, %
 Neglect or carelessness 60.4
 Ten most serious misbehaviors 23.4
 Misconduct* 8.0
 Careless or inappropriate peer review 8.2
 Circumventing human-subjects requirements 2.7
Ideal behaviors “always” practiced, %
 Disclose financial conflict of interest 96.4
 Comply with human-subjects regulations and laws 88.2
 Preserve anonymity and intellectual rights 84.9
 Maintain data integrity and confidentiality 82.8
 Provide rationale for peer review judgments 81.6
 Recuse self from reviewing colleagues’ work 76.8
 Coauthors are able to justify authorship 63.3
 Monitor trainees’ work and development 57.5
 Set clear rules with trainees 48.5
 Clear agreements with collaborators 34.0
* Misconduct includes fabrication of data, falsification of data, and plagiarism.

Results

Self-reported misbehavior, misconduct, and ideal behavior

More than half (60%) of respondents reported having engaged in one or more careless or neglectful behaviors in the previous three years. Nearly one-quarter reported having engaged in one or more of the 10 most serious misbehaviors; in contrast, one-third of respondents in our previous study reported these behaviors. In aggregate, fully 8% of respondents reported having engaged in one or more forms of misconduct, whereas less than 2% of respondents in our previous study reported formal misconduct. Of the three FFP behaviors in the misconduct composite, plagiarism is by far the most commonly reported (7.1%; falsification, 0.8%; fabrication, 0.2%). In addition, 8.2% of respondents reported having engaged in careless or inappropriate peer review of papers or proposals, and 2.7% indicated that they had circumvented or ignored aspects of human-subjects research requirements (Table 1).

Generally speaking, a very high proportion of respondents indicated that they “always” engaged in ideal behavior. Although 96% of respondents reported that they always disclose financial conflicts of interest, there is more room for improvement in other behaviors, such as always complying with human-subjects regulations (88%) or managing data to maintain integrity and confidentiality (83%).

We asked about three ideal behaviors related to peer review. Whereas 85% of respondents reported always preserving the anonymity and intellectual rights of those whose work they review, and 82% reported always providing rationale for their peer review decisions, only 77% said they always recuse themselves from reviewing the work of their close colleagues. Two ideal-behavior items had to do with interactions with trainees. Only 57% of respondents reported that they always monitored their trainees’ work to ensure they were developing into responsible researchers, and only 48% reported that they always set clear rules with trainees about performance expectations and intellectual credit. Two ideal-behavior items addressed relationships with colleagues. Whereas 63% of respondents reported that, on all of the manuscripts they have co-authored, each coauthor can explain his or her contributions to the manuscript and can justify his or her authorship, only 34% indicated that, at the outset of research projects, they always establish clear agreements about intellectual property with their colleagues on the project.

Tables 2 and 3 present the proportion of respondents predicted to endorse each of the ideal behaviors and misbehaviors, showing where these means differ significantly by the expectations for funding, the receipt of federal funding, involvement of private interests, or some interaction of these predictors. Cells containing a dash denote nonsignificance, whereas “NA” indicates main effects for which interpretation is not appropriate because of the presence of a significant interaction.
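For readers who want to reproduce figures of this kind, one common way to obtain model-based marginal proportions from a fitted logistic model is to average predicted probabilities over the analysis sample with the predictor of interest set to each of its levels. The sketch below illustrates that general technique, using the hypothetical fitted model from the Method section; it is not necessarily the exact marginalization used to produce Tables 2 and 3.

```python
def marginal_proportion(result, df, predictor, level):
    """Average predicted probability with `predictor` fixed at `level` for all rows.

    `result` is a fitted statsmodels logit result and `df` the analysis data
    (hypothetical names); averaging over the observed covariate distribution
    yields a model-based marginal proportion for that predictor level.
    """
    counterfactual = df.copy()
    counterfactual[predictor] = level
    return result.predict(counterfactual).mean()

# Example: predicted percentages reporting careless peer review by federal funding.
# pct_without = 100 * marginal_proportion(result, df, "federal_funding", 0)
# pct_with    = 100 * marginal_proportion(result, df, "federal_funding", 1)
```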

Table 2.

Model-based Marginal Proportions for Statistically Significant Effects of Primary Predictors on Misbehavior Outcomes*

| Effects and interactions | Neglect or carelessness, % | The 10 most serious misbehaviors, % | Misconduct, % | Careless, inappropriate peer review, % | Circumvention of human-subjects requirements, % |
| --- | --- | --- | --- | --- | --- |
| Main effects | | | | | |
| External funding of ≥20% expected by administration | | | | | |
|  No | NA | NA | -- | -- | 5.6 |
|  Yes | NA | NA | -- | -- | 2.0 |
| Federal funding held | | | | | |
|  No | NA | NA | -- | 3.9 | -- |
|  Yes | NA | NA | -- | 9.6 | -- |
| Private industry involvement | | | | | |
|  No | -- | 21.5 | 7.1 | -- | -- |
|  Yes | -- | 28.5 | 12.2 | -- | -- |
| Interactions | | | | | |
| External funding not expected by administration | | | | | |
|  Federal funding: No | 48.3 | 21.5 | -- | -- | -- |
|  Federal funding: Yes | 70.4 | 33.2 | -- | -- | -- |
| External funding expected | | | | | |
|  Federal funding: No | 68.7 | 29.9 | -- | -- | -- |
|  Federal funding: Yes | 70.2 | 29.9 | -- | -- | -- |
* NA, interpretation was not appropriate because of the presence of a significant interaction. All models were adjusted for sex, number of years since receipt of highest degree, tenure status, field of study, and minority status.

Misconduct consists of fabrication of data, falsification of data, and plagiarism.

Table 3.

Model-based Marginal Proportions for Statistically Significant Effects of Primary Predictors on Ideal Behavior Outcomes*

| Effects and interactions | Disclosure of conflict of interest, % | Compliance with human-subjects requirements, % | Setting of clear rules with trainees, % | Monitoring of trainees’ development, % | Maintenance of data integrity, % |
| --- | --- | --- | --- | --- | --- |
| Main effects | | | | | |
| External funding expected by administration | | | | | |
|  No | -- | -- | 51.7 | -- | -- |
|  Yes | -- | -- | 45.1 | -- | -- |
| Federal funding held | | | | | |
|  No | -- | NA | 51.7 | 58.2 | NA |
|  Yes | -- | NA | 44.8 | 50.9 | NA |
| Private industry involvement | | | | | |
|  No | 98.7 | NA | -- | -- | NA |
|  Yes | 96.1 | NA | -- | -- | NA |
| Interactions | | | | | |
| No private industry involvement | | | | | |
|  Federal funding: No | -- | 88.1 | -- | -- | 81.3 |
|  Federal funding: Yes | -- | 88.5 | -- | -- | 82.3 |
| Private industry involvement | | | | | |
|  Federal funding: No | -- | 93.5 | -- | -- | 87.2 |
|  Federal funding: Yes | -- | 83.3 | -- | -- | 77.1 |
* NA, interpretation was not appropriate because of the presence of a significant interaction. All models were adjusted for sex, number of years since receipt of highest degree, tenure status, field of study, and minority status.

Relating external funding expectations to behavior

The primary analyses quantified the extent to which funding expectations, holding federal grant funding, fiduciary relationships with private entities, or interactions among these three variables were predictive of ideal behaviors and misbehaviors. The first question we sought to answer by this approach was whether external funding expectations were associated with behavior. The expectation that a researcher would obtain external funding was, on its own, related to a lower likelihood that an individual would report having engaged in one type of misbehavior and in one ideal behavior. Respondents who were expected to obtain external funding were less likely than those without such expectations to have reported circumventing human-subjects requirements (P = .02; Table 2) or to have reported always setting clear rules with trainees about performance expectations and intellectual credit (P = .04; Table 3).

Federal funding and behavior

Current status as a Principal Investigator or Co-Investigator on a federally funded grant was related to a greater likelihood of one type of misbehavior and a lower likelihood of two ideal-type behaviors. Researchers with federal funding were more likely than those without such funding to report that they had carelessly or inappropriately reviewed papers or proposals (P < .001; Table 2). They were also less likely than unfunded respondents to set clear rules with trainees (P = .03) or to have always monitored their trainees’ work to ensure their development into responsible researchers (P = .02; Table 3).

Involvement with private industry and behavior

Involvement with private for-profit companies was related to a greater likelihood of engaging in two misbehaviors and a lower likelihood of one ideal-type behavior. Those with private interests were more likely than those without private interests to report that they had engaged in one of the 10 most serious misbehaviors (P = .005) and to have engaged in misconduct (i.e., FFP; P = .004; Table 2), and they were less likely to have always reported financial conflicts (P < .001; Table 3).

Interaction effects between primary predictors and behavior

There were two significant interactions between the expectation of obtaining soft money and holding federal funding, one with respect to engaging in one of the 10 most serious misbehaviors (P = .04) and one with respect to carelessness or neglect (P < .001; Table 2). Both interactions suggest that either being expected to fund at least 20% of one’s salary through external funding or holding federal funding increased the risk of engaging in these misbehaviors, but that their combination did not further increase that risk. Respondents with neither a funding expectation nor federal funding were less likely to have engaged in one of the 10 most serious misbehaviors (21.5%) or neglect (48.3%) than were those with only federal funding (10 most serious misbehaviors, 33.2%; neglect, 70.4%), those with a funding expectation but no federal funding (10 most serious misbehaviors, 29.9%; neglect, 68.7%), or those with both (10 most serious misbehaviors, 29.9%; neglect, 70.2%). These relationships suggest that, whereas an expectation to obtain external funding may have some beneficial effects (e.g., less circumvention of human-subjects requirements), it more frequently put respondents at risk of engaging in less-than-ideal behavior.

Involvement with private interests moderated the relationships between holding federal funding and complying with laws that govern research on human subjects (P < .05) and between federal funding and managing data to maintain integrity and confidentiality (P = .03). Among those without private interests, holding a federal grant was unrelated to compliance (88.5% compared with 88.1%), whereas respondents with such involvements who also held federal funding were less likely to comply with these laws (83.3%) than were those with such involvements but without federal funding (93.5%). Similarly, holding a federal grant was unrelated to maintaining data integrity among those without private interests (82.3% compared with 81.3%), whereas, among those with private involvement, federal funding was related to a lower likelihood of reporting always maintaining data integrity (77.1% compared with 87.2%). Researchers engaged with private interests and, perhaps to a less severe extent, public funding for research were more likely to report falling short of behavioral ideals.

Conclusions

Our descriptive findings reveal troublingly high levels of reported neglect, carelessness, inappropriate peer review, and misconduct (at least in terms of plagiarism). They also reveal clearly suboptimal levels of behavioral ideals, particularly with respect to the interpersonal and social elements of doing science—i.e., recusing oneself from reviewing the work of close colleagues, setting clear rules with and monitoring the work and development of trainees, and setting clear boundaries and expectations with collaborators.

With respect to our inferential findings, it is interesting to note that we found significant associations between all three of the primary variables of interest and one or more behavioral outcome. Moreover, when we found significant associations, they were largely in an unfavorable direction. The one exception to this pattern was the finding of less circumvention of human-subjects requirements among those expected to obtain external research funding than among those without such expectations. Many will not be surprised to see that the involvement of private interests is associated with more reported misconduct, with the 10 most serious misbehaviors, and with a somewhat lower likelihood of disclosing conflicts of interest. It may be more surprising to some that we also found the expectation of external funding, the holding of federal grants, or the combination of these factors to be positively associated with neglectful behaviors, with careless or inappropriate peer review, and with less-than-ideal treatment and supervision of trainees.

Several broad observations arise from our inferential results. First, the situational imperatives of the soft-money world may lead to compromised behavior on the part of those who must participate in it. The data suggest that self-interest, of whatever form, may pose challenges to the integrity of behavior in science, but that the relationships between self-interest and behavior are not well captured by simplistic, black-and-white views that cast some forms of self-interest as bad and others as benign. Second, it is clear that, when it comes to federal funding, both the expectation to obtain such funding and the holding of federal grants are associated with lower levels of some ideal behaviors and higher levels of misbehavior. Taken together, these observations suggest that simplistic explanations of the behavior of scientists, particularly those explanations that fail to take into account the interactions of individuals with the imperatives of their environments, will not serve us well in our attempts to eliminate bad behavior and foster integrity in science.

External funding expectations

Being dependent on soft money for research support means having a certain amount of “skin in the game.” Those with much to lose may be more willing to bend or break the rules if they perceive such behavior as necessary for their career survival. Being expected to obtain external funding for at least 20% of one’s salary is associated on the one hand with less circumvention of human-subjects requirements and on the other hand with a lower likelihood of setting clear rules with trainees, and is also implicated, along with the holding of federal funding, in the self-report of neglect and of one or more of the 10 most serious misbehaviors.

Source of funding

Involvement with private firms is associated with a greater likelihood of reporting misconduct (primarily plagiarism) and one or more of the 10 most serious misbehaviors, and with a lower likelihood of disclosing financial relationships, whereas holding federal research funding is associated with a significantly greater likelihood of reporting inappropriate or careless peer review and with lower levels of some ideal behaviors, particularly the guidance and monitoring of trainees. Moreover, those holding federal research funding who also have involvement with private firms are the least likely to report maintaining data integrity. Clearly, bad behavior is not restricted to those with a financial stake in either funding source.

Limitations

Caution is warranted when interpreting differences between the current findings and those of our first study9—particularly with respect to the aggregate level of reporting engagement in one or more of the 10 most serious misbehaviors. This is because of inherent differences in the targeted samples as well as refinements to our behavioral items between the studies. As in our first study, our dependence on self-reporting leads us to believe that there may be some underreporting of misconduct and misbehavior, despite our assurances of respondents’ anonymity.

Another limitation is that the academic researchers in this sample are, on average, 21 years out from having obtained their doctoral degrees. It is primarily the more recent, junior hires who have been most heavily exposed to the competitive environment for funding as academic medical centers have increasingly moved to tenure without guaranteed salary and to non-tenure-track and wholly grant-funded positions, which are more easily terminated if grant funding dries up.22 Thus, it is reasonable to suspect that our results underestimate the impact of the competitive environment for funding on more recent generations of researchers.

One unexpected observation is the association of external funding expectations with a significantly lower level of reported violation of human-subjects requirements. This association appears to be largely confined to the subset of researchers with external funding expectations ranging between 20% and 30%. These individuals are also significantly more likely to hold a PhD than an MD, to be teaching part-time, to be tenured or on the tenure track, and to be in one of the allied health fields, and they are significantly less likely to be in medical or social science fields. In other words, these are individuals who fit a prototypically career-academic mold and who are working outside the fields most directly affected by human-subjects regulations and mechanisms. On the other hand, we observe the lowest levels of always complying with human-subjects requirements among those with both federal funding and involvement with private companies. It is quite likely that such individuals are farther along and better established in their careers and, perhaps, less fearful of the consequences of such indiscretions.

Most of those who have concerned themselves with the issue have conceived of misbehavior in science as explainable primarily at the level of individual scientists, but it is increasingly clear that scientists’ behavior is strongly influenced by the situational imperatives of their participation in the intensely competitive environment of soft-money support for research. We believe that a focus on environmental influences on behavior may help us to better understand the behavior not only of the individual scientists but also of the institutions in which they are employed. Whether one is concerned about universities entering into contracts with private industry that violate their own rules and standards in the interest of competing for private dollars23 or about the behavior of individual researchers violating norms of conduct, at the root of both of these behaviors is the current intensity of competition for both public and private resources. Our findings indicate that this situation threatens the integrity of academic research, and, to date, there is no clear evidence that such intense competition maximizes innovation and discovery. One may argue that these are the costs of university and academic health center dependence on soft money, but is this a price worth paying?

The airline industry in this country provides an interesting and useful analogy to the current zero-sum competition between universities and between scientists competing for research funding. In an April 21, 2008, op-ed column in The New York Times, Robert Crandall wrote:

Right now, airlines schedule more flights than the runways, terminals, and air-traffic control system can accommodate. Airlines cannot unilaterally reduce flights because doing so would grant other airlines a competitive advantage. In the short term, the only solution is a government mandate that limits flights to the number the system can handle.24

Federal funding for biomedical research, like the air-traffic-control system, is a public good, subject to overconsumption by entities whose business models are predicated on the good’s continued and expanded availability. Many have argued that the solution to this intense competition is further increases in the federal research budget,25 but others have argued that recent evidence does not support this position8,26,27 and that more plausible solutions lie in the creation and adoption of new business models on the part of universities.7,8 If the ultimate goal of competition for funding in science is selecting for a handful of ultimate winners, we might find a continuation and expansion of the current “tournament” model6 of competition in science to be fitting. If, however, our goals are the creation and maintenance of a healthy science workforce that produces high-quality science while at the same time maximizing the likelihood of fundamental breakthroughs in knowledge, we might ask whether the current, zero-sum competition for funding is the best we can do. It has long been an article of faith that competition in science is good and necessary. It’s time we consider what that faith is costing us.

Acknowledgments

The authors wish to acknowledge the excellent work of Emily Ronning and Dana McGree in several key aspects of this project, including project coordination, sample frame development, and survey fielding.

Disclaimers

This research was supported by award no. R01-NS052885 from the National Institute of Neurological Disorders and Stroke and the Department of Health and Human Services Office of Research Integrity through the collaborative Research on Research Integrity Program. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Neurological Disorders and Stroke, the National Institutes of Health, or the Office of Research Integrity. The work of RDV on this project was also supported by NIH grants nos. K01-AT000054 and G13-LM008781. None of the authors had a conflict of interest.

Appendix 1 Survey items regarding researchers’ behaviors

Ideal behaviors

  1. “I properly disclose financial conflicts of interest in my research.”

  2. “I comply with regulations and laws that govern research on human subjects.”

  3. “I manage data in ways that maintain data integrity and confidentiality.”

  4. “I recuse myself from reviewing grants and publications submitted by close colleagues.”

  5. “I preserve the anonymity and intellectual rights of the persons whose work I review.”

  6. “When writing peer reviews, I provide a rationale for my judgments.”

  7. “When working with trainees, I set clear rules for factors such as performance expectations and intellectual credit.”

  8. “I monitor trainees’ work to ensure that they are developing into responsible researchers.”

  9. “At the outset of collaborative projects, I encourage the establishment of clear agreements regarding intellectual ownership of the research results or products.”

  10. “On my publications, all coauthors can explain the contributions that justify their authorship.”

Misconduct and questionable research practices

Single-item measures

  1. Circumventing or ignoring aspects of human-subjects research requirements such as informed consent or confidentiality

  2. Inappropriate or careless peer review of papers or proposals

Items making up the 10-most-serious-misbehaviors composite*

  1. Overlooking others’ use of flawed data or methods

  2. Compromising the rigor of a study’s design or methods in response to pressure from a commercial funding source

  3. Unauthorized use of confidential information about research subjects

  4. Making up research data, other than in situations such as simulation studies

  5. Inappropriately altering or “cooking” research data

  6. Compromising the rigor of a study’s design or methods in response to pressure from a not-for-profit funding source (such as government or a private foundation)

  7. Not properly disclosing involvement in firms whose products are based on one’s own research

  8. Using another’s words or ideas without giving proper credit

  9. Inappropriately altering or suppressing research results in response to pressure from a commercial funding source

  10. Relationships with students, research subjects, or supervisees that may be interpreted as questionable

  11. Inappropriately altering or suppressing research results in response to pressure from a not-for-profit funding source such as government or a private foundation

  12. Circumventing or ignoring aspects of human-subjects research requirements such as informed consent or confidentiality

Items making up the FFP composite

  1. Inappropriately altering or “cooking” research data

  2. Making up research data, other than in situations such as simulation studies

  3. Using another’s words or ideas without giving proper credit

Items making up the neglect/carelessness composite

  1. Circumventing or ignoring aspects of materials-handling research requirements such as biosafety or radioactive materials

  2. Inadequate record-keeping related to research projects

  3. Inadequate monitoring of research projects because of work overload

  4. Cutting corners in the hurry to complete a project

Footnotes

* Twelve items make up the list of the 10 most serious misbehaviors because of the following changes from our first study: we broke down item 1 from our initial publication into items 4 and 5; we combined items 2 and 8 from our initial publication into the current item 12; we omitted item 7 from our initial study; and we broke down item 10 from our initial study into four items, given here as items 2, 6, 9, and 11.

FFP, three forms of misbehavior: fabrication of data, falsification of data, and plagiarism.

Contributor Information

Dr. Brian C. Martinson, Senior research investigator, HealthPartners Research Foundation, Minneapolis, Minnesota.

Dr. A. Lauren Crain, Research investigator, HealthPartners Research Foundation, Minneapolis, Minnesota.

Dr. Melissa S. Anderson, Professor of higher education, Educational Policy and Administration, University of Minnesota, Minneapolis, Minnesota.

Dr. Raymond De Vries, Associate professor, Bioethics Program, University of Michigan Medical School, Ann Arbor, Michigan.

References

1. Campbell EG, Weissman JS, Ehringhaus S, et al. Institutional Academic Industry Relationships. JAMA. 2007;298(15):1779–1786. doi: 10.1001/jama.298.15.1779.
2. Bero L. “Experimental” institutional models for corporate funding of academic research: Unknown effects on the research enterprise. Journal of Clinical Epidemiology. 2008;61(7):629–633. doi: 10.1016/j.jclinepi.2008.01.002.
3. Blumenthal D, Campbell EG, Gokhale M, et al. Data Withholding in Genetics and the Other Life Sciences: Prevalences and Predictors. Acad Med. 2006;81(2):137–145. doi: 10.1097/00001888-200602000-00006.
4. Kaiser J. Ethics: Senate Inquiry on Research Conflicts Shifts to Grantees. Science. 2008;320(5884):1708. doi: 10.1126/science.320.5884.1708.
5. Bunton SA, Mallon WT. The Continued Evolution of Faculty Appointment and Tenure Policies at U.S. Medical Schools. Academic Medicine. 2007;82(3):281–289. doi: 10.1097/ACM.0b013e3180307e87.
6. Freeman R, Weinstein E, Marincola E, Rosenbaum J, Solomon F. Competition and Careers in Biosciences. Science. 2001;294(5550):2293–2294. doi: 10.1126/science.1067477.
7. Korn D. Academic Medicine in an Era of Resource Constraints. The Academic Health Center in the 21st Century—1995 Duke Private Sector Conference; Durham, NC: Duke University; 1995.
8. Martinson BC. Universities and the money fix. Nature. 2007;449(7159):141–142 (commentary). doi: 10.1038/449141a.
9. Martinson BC, Anderson MS, De Vries R. Scientists behaving badly. Nature. 2005;435(7043):737–738 (editorial). doi: 10.1038/435737a.
10. De Vries R, Anderson MS, Martinson BC. Normal Misbehavior: Scientists Talk about the Ethics of Research. Journal of Empirical Research on Human Research Ethics. 2006;1(1):43–50. doi: 10.1525/jer.2006.1.1.43.
11. Martinson BC, Anderson MS, Crain AL, De Vries R. Scientists’ Perceptions of Organizational Justice and Self-Reported Misbehaviors. Journal of Empirical Research on Human Research Ethics. 2006;1(1):51–66. doi: 10.1525/jer.2006.1.1.51.
12. Anderson MS, Ronning EA, De Vries R, Martinson BC. The Perverse Effects of Competition on Scientists’ Work and Relationships. Science and Engineering Ethics. 2007;13(4):437–461. doi: 10.1007/s11948-007-9042-5.
13. Anderson MS, Horn AS, Risbey KR, Ronning EA, De Vries R, Martinson BC. What Do Mentoring and Training in the Responsible Conduct of Research Have To Do with Scientists’ Misbehavior? Findings from a National Survey of NIH-Funded Scientists. Acad Med. 2007;82(9):853–860. doi: 10.1097/ACM.0b013e31812f764c.
14. Institute of Medicine and National Research Council Committee on Assessing Integrity in Research Environments. Integrity in Scientific Research: Creating an Environment That Promotes Responsible Conduct. Washington, DC: The National Academies Press; 2002.
15. The Carnegie Classification of Institutions of Higher Education. Available at: http://www.carnegiefoundation.org/classifications/index.asp. Accessed July 16, 2009.
16. Office of Science and Technology Policy. Federal Policy on Research Misconduct. Available at: http://www.ostp.gov/cs/federal_policy_on_research_misconduct. Accessed July 16, 2009.
17. Steneck NH. ORI Introduction to the Responsible Conduct of Research. Washington, DC: U.S. Government Printing Office; 2004.
18. Groves RM. Nonresponse Rates and Nonresponse Bias in Household Surveys. Public Opin Q. 2006;70(5):646–675.
19. Groves RM, Brick JM, Couper M, et al. Issues Facing the Field: Alternative Practical Measures of Representativeness of Survey Respondent Pools. Survey Practice. 2008 October:14–18. Available at: http://www.surveypractice.org/2008/10. Accessed July 31, 2009.
20. Little RJ, Vartivarian S. On weighting the rates in non-response weights. Stat Med. 2003;22(9):1589–1599. doi: 10.1002/sim.1513.
21. Crain AL, Martinson BC, Ronning EA, McGree D, Anderson MS, De Vries R. Supplemental Sampling Frame Data as a Means of Assessing Response Bias in a Hierarchical Sample of University Faculty. Presented at the Annual Meeting of the American Association for Public Opinion Research; New Orleans, LA; May 17, 2008.
22. Liu M, Mallon WT. Tenure in Transition: Trends in Basic Science Faculty Appointment Policies at U.S. Medical Schools. Academic Medicine. 2004;79(3):205–213. doi: 10.1097/00001888-200403000-00003.
23. Finder A. At One University, Tobacco Money Is a Secret. New York Times. May 22, 2008:A1.
24. Crandall R. Charge More, Merge Less, Fly Better. New York Times Online. April 21, 2008. Available at: http://www.nytimes.com/2008/2004/2021/opinion/2021crandall.html?_r=2001&scp=2001&sq=%2022Charge%2020More,%2020Merge%2020Less,%2020Fly%2020Better%2022&st=cse. Accessed July 17, 2009.
25. Brugge JS, Clardy J, Davidson R, et al. Within Our Grasp—Or Slipping Away? Assuring a New Era of Scientific and Medical Progress. Cambridge, Mass: Harvard University; 2007.
26. Freeman RA, Van Reenen J. Be Careful What You Wish For: A Cautionary Tale about Budget Doubling. Issues in Science and Technology. 2008 Fall:27–31.
27. Teitelbaum MS. Research Funding: Structural Disequilibria in Biomedical Research. Science. 2008;321(5889):644–645. doi: 10.1126/science.1160272.
