Translational Behavioral Medicine. 2012 Nov 21;2(4):531–534. doi: 10.1007/s13142-012-0179-7

Confessions of a team science funder

Robert T Croyle
PMCID: PMC3717939  PMID: 24073153

When I arrived at the National Cancer Institute in July of 1998, my colleagues and I were afforded a remarkable opportunity: to create and build a new division of the largest institute of the National Institutes of Health (NIH). Visionary leadership, a rapidly growing budget, a wealth of new positions to fill, and an intriguing array of scientific opportunities presented a unique context within which to revisit the most recalcitrant and complex challenges in cancer control. There was only one problem (one of many, as we were later to learn). Many of the career staff who had been reassigned into our newly formed Behavioral Research Program shared neither our excitement nor our enthusiasm. As a newly arrived team leader, I was soon awash in a tidal wave of skepticism. With no government experience, I was viewed as naïve and unrealistic. Many of the most experienced staff were also the most cynical and pessimistic. Having felt unsupported and disconnected from their previous leadership, they were dispirited, demoralized, and reluctant to commit to ambitious new objectives. Collaboration within and between agencies was minimal; trust was extinct. Furthermore, there were substantive disagreements about mission and priorities. To a large degree, these reflected the training and disciplines of the staff. Advocates of public health programs, trained primarily in public health or preventive medicine, argued that funds should mainly support contracts to state health departments to implement what was already known. Skeptical about the population-level impact of behavior change interventions, this group supported close coordination with advocates and community leaders to effect policy change. In contrast, members of the academic research community, including a few social and behavioral scientists who had recently moved into government, felt that new and better evidence was critically needed to understand the mechanisms underlying health behavior and improve the efficacy of clinical interventions.

I soon learned that the National Institutes of Health was emerging from its own internal battles concerning the relative importance of traditional clinical research versus basic biomedical science. The Genome Project and its related spin-offs were generating a contagious confidence among basic scientists in their ability to reinvent and revitalize what they viewed to be a moribund clinical enterprise. Clinical programs within the agency were shuttered, a new generation of basic scientists was promoted or recruited, and physiology gave way to molecular biology. The expanding budget of the NIH enabled new investments in technology, the development of model systems, the expansion of university facilities, and an increasingly reductionist model of medicine and public health. Gene discovery ruled the day, catalyzed by competition both within and between the public and private sectors. Research on health care, policy, economics, and interpersonal processes was viewed as beyond the appropriate domain of the NIH.

Many behavioral scientists at NIH shared the sense of isolation and lack of support reported by many of my National Cancer Institute (NCI) colleagues. But within NCI, change was underway. President Clinton appointed Barbara Rimer as Chair of the National Cancer Advisory Board, the first woman and first behavioral scientist to serve in this role. The NCI Director, Richard Klausner, initiated a major reorganization of the institute, creating the Division of Cancer Control and Population Sciences in late 1997 and appointing Rimer as its first director. Robert Hiatt, a physician and epidemiologist, joined shortly thereafter as Deputy Division Director. Together, the three of us, with input from many re-energized colleagues, immediately set out to formulate strategic scientific goals that focused on expanding support for interdisciplinary research. Within the domain of behavioral research, our primary strategy was both explicit and expensive: to support and advance transdisciplinary team science, implemented through centers of excellence with stringent requirements for conceptual and methodological integration in the complex problem domains that we believed were critical to address in order to accelerate progress in cancer control research and practice: tobacco use, communication, population health disparities, and obesity. These signature initiatives (totaling nearly $285 million in funding from the National Cancer Institute over the last decade) were carefully designed to bring together diverse disciplines with limited histories of prior collaboration to support novel collaborative science. We made a special effort to entice and support investigators in disciplines that had not been central to the NCI constituency: geographers, sociologists, anthropologists, social workers, economists, psychometricians, statistical modelers, kinesiologists, urban designers, journalists, cognitive scientists, and environmental health experts, just to name a few. But we also conceived the transdisciplinary centers as a grand experiment in team science, incorporating evaluation efforts designed not only to assess the impact of the work, but also to advance the science of team science itself through the development and application of new methods.

What follows are some personal reflections on the lessons learned from the 13 years since the first of these initiatives, the Transdisciplinary Tobacco Use Research Centers (TTURC), was launched (in collaboration with the National Institute on Drug Abuse, the Robert Wood Johnson Foundation, and the National Institute on Alcohol Abuse and Alcoholism). Because most of the articles in this issue discuss team science issues from the perspective of investigators as members or leaders of project teams, my focus will be a more distant and global one, spanning many years, many initiatives, and the team-related issues that arise among federal and non-federal funders as well as investigators.

TEAM SCIENCE: VALUE OR VALIDATED STRATEGY?

As noted by several of the authors, the perceived value of teams in health care and research is still a work in progress. Within the NIH biomedical research culture (including many of those funded by NIH), the traditional R01 grant model reigns supreme, especially among basic scientists. On the other hand, social, behavioral, and public health scientists seem to be more receptive to the concept of team science, in part because some of these fields (especially multisite clinical and population science research) often require a team to conduct the project. It is important to admit that many of the strongly held views on both sides are not based on scientific evidence, but on experience-based tacit knowledge or, in the case of many social scientists, an intrinsic interest in teams and, more broadly, interpersonal processes. Advocating for the team science center RFA (request for applications) concepts that our group has launched over the past several years has repeatedly reinforced my view that the worth of team science is more often a value held by adherents (often with passionate zealotry) who, when it comes to evaluating funding mechanisms and their relative productivity, are entirely comfortable setting aside the scientific method and relying on anecdotes. If one has a firmly held disparaging view of social science, even the most systematic and comprehensive evaluation is unlikely to be persuasive, because, of course, it largely relies on social science methods. While I fully endorse the importance of developing more rigorous methods for evaluating team science process and products, it is only realistic to recognize that some scientists will continue to rely only on their own expert opinion, reporting that “I know good science when I see it.” The epistemology of expert opinion, after all, is the foundation of most forms of peer review.

TEAMS THAT GO WRONG

The wealth of practical suggestions from Gadlin and Bennett [1] includes a special point concerning the importance of soliciting assessments of team functioning from members as the group is formed and progresses into task work. Within government, my own experience is that failing to conduct this explicit, ongoing assessment is one of the most common mistakes made by team leaders (myself included). Because team leaders often occupy their role by default (because of rank or position) or by appointment, team members often have no way to change leadership, provide honest criticism, or modify the team process to increase productivity or collegiality. The only options left to team members are passive avoidance, grumbling compliance, or complaining to fellow team members. Team members who can vote with their feet and miss team meetings will do so. Government is especially susceptible to dysfunctional teams because it is often more formal and hierarchical than academia. Rank, title, and position often trump content expertise. In addition, frequent turnover of politically appointed leaders leads many staff to limit their level of commitment or effort, knowing that the next election will usher in a whole new set of leaders whose priorities may be the opposite of those they replaced. Finally, it often seems that some of the worst leaders are the self-appointed, eager to take on any leadership opportunity that arises, regardless of their level of expertise or skills as a team manager.

Hall et al. [2] provide many examples of collaborative team processes within the Transdisciplinary Research on Energetics and Cancer (TREC) and TTURC centers. Although it has been harder to document in a compelling manner, my own observation of these and other initiatives reinforces the evidence that a center's likelihood of success depends to a tremendous degree on a leader who is open, supportive, accessible, and organized. Many aspects of these qualities are now being captured in the recently developed measures they describe (e.g., collaborative readiness). But perhaps the hardest construct to measure directly is intellectual scientific leadership, the ability to identify substantive scientific connections between investigators and their specific scientific questions and methods. The level and breadth of scientific expertise necessary to be maximally successful is not something easily taught. And certainly a self-report measure of “How brilliant a scientist are you?” is of limited value. But it is the compelling magnetic force of scientific credibility and intelligence that discovers a linkage among the factors that disrupt a molecular pathway studied within a cell line, a mouse model, and a young cancer patient. As indicated by multiple authors, trainees are often in the best position to evaluate this essential skill set, with publication citation impact serving as a longer-term outcome measure.

THE UNDERGROUND TEAM: HELPFUL IN ACADEMIA, IMPORTANT IN HEALTHCARE, ESSENTIAL IN GOVERNMENT

This issue has covered a wide range of challenges and potential solutions concerning scientific teams, including negotiations, power, co-authorships, and participation in team discussions. The health care setting creates even stronger interdependencies, and the rapidly growing appreciation for effective teams in medicine is encouraging. A recent NCI initiative has focused on the science of health care teams, incorporating lessons learned from the business sector. Almost every aspect of the Affordable Care Act, quality metrics, payment bundling, and the rapid consolidation of health care reinforces the importance of better understanding teams in health care and how to improve them. Unfortunately, the development of validated consensus measures of care coordination and the medical home has lagged behind the need for their implementation. Transitions in health care create risks to appropriate follow-up; this is a substantial problem in cancer care, and although some research is underway, more is urgently needed and more funding is now available. A recent funding announcement from NCI, for example, focuses on follow-up care plans for cancer patients. Currently, many providers rely on informal networks of specialist colleagues, basing patient hand-offs more on their personal relationships with colleagues than on system supports, whether electronic or not. The burgeoning growth of patient navigators is as much a symptom of a broken, disconnected system as it is a solution to a problem that, ideally, should not exist at all.

The underground team has long been an essential aspect of a functioning government. Informal staff networks of colleagues across institutes and agencies ensure that the ongoing business of government gets done, a fact largely unappreciated by scholars in public administration. The persistent focus on senior leadership in government belies the fact that as one moves up the chain of command in, for example, the executive branch, leaders have fewer staff, fewer resources at their immediate disposal, less flexibility in time management, and a more limited ability to make decisions that are not cleared by the next level of leadership. Horizontal teams, often without a designated leader, can often share information more rapidly (by avoiding going up and over the chain of command), enlist help from more colleagues on short notice, and respond directly at the ground level to address a problem. Sharing staff across agencies through temporary assignments can be especially effective, essentially “gluing” teams together. Network analyses can be a helpful diagnostic to describe and improve these relationships, as sketched below. In addition, these lateral networks provide an important safety net when designated team leaders are incompetent or lack the requisite specialized expertise to solve a complex issue. Succession planning is another weakness. Although widely discussed within the business sector, succession planning in government occurs mostly at the level of political appointees during administration transitions; mid-level management changes, meanwhile, can be frequent and highly disruptive. More research on horizontal, cross-organizational teams and how teams can play a role in effective succession planning would be welcome in both the academic research and government sectors.
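As an illustration of what such a network diagnostic might look like in practice, the sketch below builds a small collaboration graph and flags the staff who bridge otherwise separate clusters. The names, the edge list, and the choice of Python's networkx library are assumptions for illustration only; they are not data or tools from the initiatives described here.

```python
# A minimal sketch of network analysis as a diagnostic for informal staff
# collaboration networks. All names and relationships below are hypothetical.
import networkx as nx

# Each edge represents an observed working relationship in a given period,
# e.g., co-staffing an initiative or attending a recurring cross-agency meeting.
collaborations = [
    ("Alice", "Bob"), ("Alice", "Carol"), ("Bob", "Carol"),   # one cluster
    ("Dana", "Evan"), ("Evan", "Frank"),                      # another cluster
    ("Carol", "Dana"),                                        # the only bridge
]

G = nx.Graph(collaborations)

# Betweenness centrality highlights the "bridging" staff whose departure
# would fragment the informal network.
bridges = nx.betweenness_centrality(G)
for person, score in sorted(bridges.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")

# Density and the number of connected components give a rough picture of how
# siloed the organization is: many small components suggest isolated pockets.
print("density:", nx.density(G))
print("components:", nx.number_connected_components(G))
```

Run on real staffing or meeting records, the same kind of summary can show where a lateral network depends on one or two individuals and where temporary cross-agency assignments might strengthen the weakest ties.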

A FINAL WORD

When we launched the first in a series of team science center initiatives, I found it remarkable how few validated tools had been developed within or outside the NIH to evaluate their performance. NIH was funding more center grants, but making almost no investment in how to evaluate them. This dearth of tools was the catalyst for launching an initiative, first known as the Evaluation of Large Initiatives project, led by Bill Trochim, and later renamed the Science of Team Science Initiative, supported by Dan Stokols and currently led by Kara Hall. Many other fellows, research assistants, contractors, NCI staff, and consultants have participated in the many aspects of this initiative and are acknowledged in the previous articles.

Within our own organization (NCI), one of the most valuable consequences of our team science center initiatives was the increase in teamwork and collaboration related to the management of these large and complex initiatives. We have had to learn to practice what we were attempting to preach to the investigators. Obviously, if the funder's staff do not communicate and collaborate with each other effectively (yes, it happens), then there is little chance that the investigators will do so. Therefore, being able to function effectively in a team is a criterion for both hiring and firing. But the largest benefit has been on the morale, energy, engagement, and excitement among the staff involved. Because the centers involve many disciplines, we have brought together program staff who otherwise would not have worked together. The scale and ambition of big science aimed at a focused problem, one with consensus support as a priority for both science and practice, provide a common goal that strengthens our group's identity and the sharing of knowledge. The involvement of many other funders, including the foundation sector, has complemented the program teams and created numerous additional opportunities, including specialized spin-off teams. During my first few years at NIH, many staff at other institutes thought it strange that my staff and I spent so much time meeting and collaborating with other agencies, such as the Centers for Disease Control and Prevention. Fortunately, interagency collaboration is now more widely accepted, and more extensive, than ever. The inclusion of an Implementation Science unit led by Russ Glasgow has strengthened the relevance and usability of the science we support and built teams that lead and train others in the appropriate use of evidence within local contexts.

Our hope for this entire endeavor is both local, from a funder’s perspective, and much broader, from an academic perspective. At the local level, we hope we have strengthened the ability of NIH and other research and program funders within and outside of government to provide more rigorous assessments of their investments. Responsible funders must evaluate how their dollars are spent, whether they come from the taxpayer or from donors. Better evaluation not only enables more effective science and public health policy, but also contributes to the science of measurement itself, which in turn provides investigators in all fields with a better toolkit for expanding our knowledge of goal-oriented human interaction. By further integrating the evidence from the numerous disciplines that have studied teams in a variety of contexts, we can increase the efficiency and return on investment for all medical research.

Footnotes

Implications

None

References

1. Gadlin H, Bennett M. Dear Doc: advice for collaborators. Transl Behav Med. 2012;2(4).
2. Hall KL, Vogel AL, Stokols D, Morgan G, Gehlert S. A four-phase model of transdisciplinary research: goals, team processes, and strategies. Transl Behav Med. 2012;2(4).
