Transactions of the American Clinical and Climatological Association. 2025;135:333–344.

LEADING A TRANSDISCIPLINARY TEAM MODEL ACROSS A COMPREHENSIVE CAMPUS

Robert S. DiPaola
PMCID: PMC12323462  PMID: 40771621

ABSTRACT

At the University of Kentucky (UK), the state’s land-grant research institution, we implemented a transdisciplinary research strategy in the College of Medicine (COM), forming multiple research teams selected using criteria for success and progress. To identify key factors, we reviewed the literature and data from studies conducted at UK, including a mixed-methods study assessing team dynamics, collaboration, and research outcomes, as well as a qualitative study (1,2). The quality of team interactions was positively associated with scholarly products. We also saw COM National Institutes of Health (NIH) funding approximately double over four years. Based on these data and experiences, we developed a process for future team building.

In summary, we describe a team-based model with evidence-based criteria for structure, monitoring, and success metrics, and we developed a process that could be used by leadership to develop transdisciplinary teams across the university for research, education, or service.

INTRODUCTION

The COM was charged with growing its extramural research funding. In asking how to create the most effective team-based approach and process for growing research, we first considered opportunities to leverage expertise from multiple colleges, research centers, and a large academic health system across a comprehensive campus. We further asked whether we could develop a methodologically rigorous model that could be tested at other institutions.

To start, the University of Kentucky (UK) has 19 deans overseeing 17 colleges, the graduate school, and libraries. Seven of the 17 colleges are health care related (medicine, pharmacy, dentistry, nursing, health sciences, public health, and social work). With its missions of research, education, and service, UK has over 36,000 students, has invested over $6 billion in infrastructure over 10 years, and boasts a comprehensive health system. The health system, known as UK HealthCare, has a budget of approximately $5 billion, 1,700 licensed beds, 2.95 million ambulatory visits annually, and 71,000 discharges per year. It has been ranked as U.S. News & World Report’s No. 1 hospital in Kentucky nine years in a row. Given multiple research priority areas, most addressing health disparities in Kentucky, we directed our initial team-based transdisciplinary approach toward research in the College of Medicine with potential impact on these health care disparities.

With regard to team-based approaches, we distinguished transdisciplinary from multidisciplinary and interdisciplinary work. Although little definitive data on this topic exist, we considered the transdisciplinary approach the most synergistic: it brings various experts to the table early in the process, all focused on the problem or decision at hand, rather than arriving with preconceived plans (3).

In other words, a transdisciplinary approach brings multiple members with various disciplines to the table so that the insights transcend their distinct fields. Building a team with such a transdisciplinary model requires great attention to membership, leadership, and team dynamics. Membership must be considered very carefully, and the team needs to launch with an effective leader or effective co-leaders.

MATERIALS AND METHODS

In setting up a team-based model, we were guided by methods developed by National Cancer Institute (NCI)-designated cancer centers, specifically based on cancer center program review requirements [NIH Department of Health and Human Services Funding Opportunity for Cancer Center Support Grants (CCSG), PAR-20-043].

More specifically, in building and conducting team science, we modeled our approach on the CCSG guidelines for cancer center programs. CCSG review treats scientific programs as the basic structure of team science and is rigorous in assessing cohesiveness: ensuring that all members contribute effectively and communicate, and including measures of interactive research such as inter-programmatic and intra-programmatic publications and grants.

There is a clear focus on the importance of selecting program members with specific justification of membership based on their peer-reviewed funding and/or contribution to the thematic program focus.

Evidence of program member interaction is another important area, and it requires several funded members who have demonstrated success. For example, to be considered a CCSG program at all, there must be at least seven R01-equivalent grants spread across at least five members. Taken together, we used these CCSG requirements and guidelines to create criteria for developing a transdisciplinary team approach.

Beyond the CCSG framework, a larger body of work in the science of teams and team science includes various qualitative and quantitative studies of team structure and function, some of which examine the correlation of various parameters with outcome measures.

Two books have extensively reviewed this topic: one driven by the NCI and the other by the National Academies (4,5). Additional studies appear in the health care and research literature (5). For example, a study by McGuier et al. published in Implementation Science (6) reviewed data from 55 studies spanning 22 years and assessed strength of evidence with the GRADE and GRADE-CERQual methodologies, the latter published by Lewin et al. in Implementation Science (7). They describe several areas with high or moderate confidence related to implementation outcomes, including the following:

  • Stable team members who are engaged

  • Nonhierarchical approaches to team communication

  • Best expertise included in the team

  • Effective communication

  • Problem recognition

  • Adaptive team functioning

Input, mediator, output, input (IMOI) and input, process, output (IPO) models have been described (4–6). An adaptation of these models is shown in Table 1, which describes key features of an initial structure with member selection, leader selection, and a clear focus on a problem.

Table 1.

The Science of Team Science*

Category Component
Inputs
  • Member selection/structure (use analytics)

  • Thematic problem focus (center vs. department)

  • Leadership (credible, experienced)

  • Resources (top-down, bottom-up)

Process (Mediators)
  • Meeting timing/process

  • Cognitive: Mental models (clarity on tasks/roles)

  • Cognitive: Transactive memory (interdependence)

  • Behavioral processes (communication, conflict management)

  • Affective/motivational (trust/psychological safety/cohesion)

Output
  • Key metrics (e.g., NIH vs. total funding)

  • Progress reports

  • Training/member satisfaction

*Adapted from multiple sources (References 3–7).

Notice that we included resource identification in the input phase: not only the individuals with the greatest expertise and a focus on the problem, but also sufficient administrative effort to support the team, perhaps through a pilot award. A top-down approach helps shape the team by adding members and by requiring a process to round out expertise. Additional efforts to reward team science could be accomplished through changes to the appointment and promotion process, where possible.

The required process should include an outlined meeting cadence and rules of conduct so that the tasks and roles of members are clear and team dynamics are optimized. This middle, or mediator, part of the process is vital but rarely measured well.

In the results section, we review two studies done at our institution that examined standardized assessments of team interactions that could be utilized and tested. This provides an opportunity to test whether the model’s setup is effective in fostering the team interactions, communication, and member trust needed to maximize the translation of information and decision making among key experts.

Finally, the team’s output metrics should be clearly defined. We have seen the benefit of required frequent progress reports (even every six months) to help keep the team focused on key outcomes and progress. In the next section on results, we integrate these key concepts from the literature as well as our own studies to create a transdisciplinary model that can be used and tested in future studies of team dynamics.

RESULTS

A Transdisciplinary Model

Based on the NCI-designated cancer center program structure and literature review of the science of team science, we created a methodologically rigorous model and an initiative to grow research in the UK College of Medicine.

We initially selected and supported 12 teams after a peer-review process of applications with specific criteria. We funded these programs with structure and metrics while developing guidelines for optimal structure and function as shown in Table 2. We now call it our 3M (members structure, meeting process, metrics) transdisciplinary strategy.

Table 2.

3M Transdisciplinary Strategy

Category Component
Members Structure
  • Member selection/structure (e.g., use of analytics such as Scholars@UK)

  • Thematic problem focus (e.g., selected cohesive areas within university research priority areas)

  • Leadership [e.g., assured accomplished lead(s)]

  • Institutional support (e.g., faculty-driven peer review, refined by interview/administrative process with both college and UK HealthCare support)

Meeting Process
  • Meeting timing/process (e.g., strict requirements/slide template)

  • Behavioral processes/team dynamics (e.g., attention to meeting interactivity, chair mentoring/training)

  • Affective/motivational/psychological safety (e.g., requirement for early career faculty mentoring)

Metrics
  • Key metrics (e.g., interactivity, NIH, early career faculty grants)

  • Progress reports (e.g., every six months and funding distribution dependent on report metrics)

  • Training

The 12 programs were supported with funds distributed over two years in six-month installments based on progress reports. The funded programs were selected after a solicitation and proposal submission; leadership then chose the most promising team proposals through a peer group assessing the criteria in Table 2.

Regarding members structure, the importance of the leader was emphasized, with the understanding that a leader who has not demonstrated skill at writing and receiving grants and publications would be less capable of mentoring members. We encouraged teams to consider a co-leadership model with a clinical and basic science co-leader. For overall member selection, analogous to NCI-designated cancer center guidance, members should have related expertise and be able to demonstrate a cohesive group, perhaps with multiple disciplines (e.g., a neurologist and neuroscientist).

Additionally, because institutional support is provided, there is an opportunity to discuss adding members or refining the planned structure. At UK, we took an analytical approach, using tools such as Scholars@UK, which provides data on investigators with a particular research focus and their levels of success, so that we could vet proposed members as well as identify others who could strengthen the team.

Regarding the meeting process, we again defined the requirements needed to receive institutional support, including monthly meetings with a review of specific metrics (overall research funding and measures of interactivity), a review of all early career members’ success and mentorship plans, and so on. We also recommend meeting with team co-chairs regularly to discuss best practices for fostering interactivity in their team meetings. Slade et al. studied and defined survey instruments that can be used to assess the quality of team interactions (1).

Finally, metrics are critically important for targeting and monitoring success. Simple but impactful metrics should be chosen. We used NIH funding because it is more clearly linked to rigorous peer review and easier to access, even for national comparisons. We also used measures of interactivity, such as the percentage of publications with more than one or two members as authors, as well as team grants. Finally, we required brief but frequent progress reports, at least every six months, and tied funding distributions to those assessments.
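To make the interactivity measure concrete, the sketch below computes the share of a team’s publications with more than one team member among the authors. The team roster, publication author lists, and function name are all hypothetical; this illustrates the metric, not the institution’s actual analytics.

```python
# Sketch of the interactivity metric described above: the fraction of a
# team's publications that list two or more team members as authors.
# All data here are hypothetical.

def interactivity_rate(publications, members):
    """Fraction of publications whose author set contains 2+ team members."""
    members = set(members)
    if not publications:
        return 0.0
    interactive = sum(
        1 for authors in publications if len(members & set(authors)) >= 2
    )
    return interactive / len(publications)

team = {"Smith", "Jones", "Lee", "Patel"}   # hypothetical team roster
pubs = [
    {"Smith", "Jones", "Garcia"},           # two members -> interactive
    {"Lee"},                                # one member  -> not interactive
    {"Patel", "Smith"},                     # two members -> interactive
    {"Jones", "Nguyen"},                    # one member  -> not interactive
]

print(f"{interactivity_rate(pubs, team):.0%} of publications were interactive")
# prints "50% of publications were interactive"
```

Tracked every six months alongside grant metrics, a rate like this gives a simple, auditable signal of whether members are actually working together rather than in parallel.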

The 12 teams focused on areas such as stroke, e-cigarette use, pediatric injury, ACL surgery, platelet use in sepsis, colon cancer, Amyotrophic Lateral Sclerosis (ALS), opiate use, tobacco and pregnancy, diabetes, precision medicine, and hearing loss. Each team had co-leaders and members refined as described earlier.

Although correlative in nature, our data showed that the UK College of Medicine increased NIH funding during this period by approximately twofold, which was the largest increase in multiple years.

We then wanted to learn from this experience, define a standard process, and develop methods to test the process for continual quality improvement. We felt that a methodologically rigorous and standardized approach to creating and conducting team activity is important: we build teams in health care service, research, and education, but rarely stop to test our approaches and measure the outcomes of various models.

Review of Studies to Assess Meeting Process or Team Dynamics

In an effort to assess team dynamics and collaboration of team members, Dr. Emily Slade, from the UK College of Public Health; one of our behavioral scientists, Dr. Hilary Surratt; and other investigators studied the initial 12 teams we launched in the UK College of Medicine using a mixed-methods approach (1).

They tested eight different survey instruments built in Qualtrics, including an 18-question survey validated by Thompson et al. (8). They found that the quality of team interactions was positively associated with the achievement of scholarly products such as publications, grant proposal submissions, and grants awarded (p=0.02). In the qualitative portion of the study, they also identified additional successes in fostering career development and acceleration for early career researchers.

The survey by Thompson et al. that was used to assess the quality of interactions was originally studied and validated in the context of medical education. Thompson et al. concluded that the survey had evidence of reliability and validity, with the capacity to distinguish the quality of interactions between teams, but noted that more work was needed to understand generalizability (8). The study by Slade et al. then applied the questionnaire in the research setting, as noted earlier. To give some context, examples of the 18 survey items were as follows:

  • Team members encouraged one another to express their opinions and thoughts.

  • Different points of view were respected by team members.

  • Team members worked to come up with solutions that satisfied all members.

  • Team members resolved differences of opinion by openly speaking their minds.

Clearly, the role of leadership in setting the stage, and of co-chairs in managing interactions, was important to a successful team. Surratt et al. also published a qualitative assessment of these teams through an interview process (2). There were multiple findings related to the importance of institutional support, mentorship, and home departments essentially having “skin in the game.” Table 3 summarizes some of the qualitative findings.

Table 3.

Qualitative Findings

Examples of Findings
  • Mentorship of clinical faculty

  • Career development for early career faculty

  • Protected time/coverage for physician-scientists

  • Attention to policies on promotion and tenure

  • Chair/institutional support

DISCUSSION

Developing and leading teams is critical for success in health care, research, educational, and corporate settings. Studies on the science of teams or team science give valuable insight into team creation and conduct. However, few studies attempt to synthesize valuable clues to create a template for leaders that could be used for team management and for standardizing future testing and optimization.

This review utilized multiple sources to identify key factors in creating such a template or “checklist,” based on institutional experience and a review of quantitative and qualitative studies of those experiences. These findings began with insight from NCI-designated cancer center guidance, metrics, and success and were then refined through a review of the existing literature as well as through our own institutional experience. Figure 1 was created from important criteria (some supported by prior studies) to provide a helpful template or checklist that could be used to standardize processes. As shown in Figure 1, the template is divided into three phases.

Fig. 1.

An overall leadership team process.

Phase / Checklist Items / Scoring

Phase I: Team Investment and Award Process
  • Awards request with a peer review, defined with clear requirements in the request for award as follows:
  • Aims/goals (feasible/aligned with institutional priorities)
  • Co-chairs (accomplished leaders)
  • Members, with identification of members who will be mentored
  • Feasible methods
  • Outcome metrics (relevant and feasible)/budget

Phase II: Refinement
  • Presentation of selected team submissions for interview-like discussion and negotiation with co-chairs
  • Negotiate and refine membership
  • Refine outcome measures, faculty progress/mentoring, etc.
  • Discuss meeting frequency and meeting conduct
  • Clarify frequency of progress reports (recommend every six months)

Phase III: Progress and Monitoring
  • Review progress and six-month progress reports
  • Leadership attendance/guidance of meetings and periodic meetings/mentoring of team chairs
  • Distribution of budget at six-month intervals based on progress
  • Work on institutional processes to promote team science (promotion criteria, protected time, etc.)
  • Align institutional strategy/decisions with selected teams (recruitment, infrastructure support)
  • Use validated surveys to assess team dynamics

Phase I describes the initiation of the process through a request for awards and peer review of those applications. We suggested that the instructions for applications include key priorities such as clear aims, accomplished co-chairs, membership expertise that fits with the theme of the proposal, feasible methods, and defined outcome metrics. We also suggested that the application requirements be simplified (typically approximately four pages). Figure 1 has a column for scoring so that this template could be used by the peer-review team selecting teams that would be funded by the institution.

Phase II is important: the institution holds interviews of selected applicants so that co-chairs can present to the leadership team and members of the peer-review committee. During the interview, there should be negotiation to refine membership. With an understanding of institutional expertise, leadership can often identify other members who should be considered to strengthen the team. In this regard, we recently launched research team programs in a partnership among UK HealthCare, the vice president for research, and colleges including the College of Medicine as well as others. As one example, in Phase II we refined a team that included experts from the College of Medicine Department of Radiology and Division of Cardiology to partner on AI and cardiac imaging with engineering experts who have an internationally known research program using CT imaging and AI to read ancient scrolls (9). This effort essentially partnered strong AI engineering technology with clinical imaging for future discovery, bringing together researchers from disciplines that would not normally be at the table together.

It is also important to refine outcome metrics to be simple and institutionally important. For example, in the UK College of Medicine pilot, we prioritized NIH funding over overall funding as a more competitive metric.

Finally, in Phase II, it is important to define the required meeting conduct and frequency. We have used the interview process to create the actual award letter based on negotiated and agreed-upon points. The scoring for Phase II is then an assessment of how well institutional leadership organizes and manages the interview process.

Phase III includes follow-through and monitoring. Bringing all team chairs together periodically to discuss team dynamics and progress was helpful in our experience. Distributing the budget at six-month intervals helped “hard-wire” the assessments. We have often funded teams for two to three years, with an assessment and budget distribution every six months. Alignment with the overall research strategy has also been important: having spent much effort identifying and funding teams in focused, cohesive areas of research, it made sense to align institutional direction to recruit and support these prioritized areas. Finally, we have been particularly appreciative of the quantitative and qualitative studies done at our institution, which have provided direction on validated assessments of the quality of team dynamics (1,2).

CONCLUSIONS

Taken together, our review of studies of teams and team science, shaped by our own experiences with both quantitative and qualitative assessments, resulted in a templated process to create and manage teams in large institutions.

We defined an approach from initial CCSG guidelines and the literature on team science, dividing team approaches into three areas of focus: membership, meetings, and metrics, or what we have called the 3M approach. Using this approach, we developed a process (Figure 1) for leaders to use. We believe this approach is relevant to research, education, and clinical service and brings together disciplines for greater impact.

We also emphasized the importance of standardization so that future studies could test and refine this process and establish generalizability. We hope that attention to the leadership of teams will result in greater impact in research, health care, and education, as well as in corporate settings.

Taken together, we described a team-based model with evidence-based criteria for structure, monitoring, and success metrics, and we developed a process that could be used by leadership to develop transdisciplinary programs across a university for research, education, and/or service.

ACKNOWLEDGMENTS

Thank you to the following individuals: Roxie Allison, CPA, associate chief financial officer, UK College of Medicine; Jordan Gieselman, MBA candidate 2026, creative services manager, UK College of Medicine; Lauren Greathouse, MBA, assistant dean for communications and strategy, UK College of Medicine; Emily Slade, PhD, assistant professor of biostatistics, UK College of Public Health; Hilary Surratt, PhD, associate professor of behavioral science, UK College of Medicine.

DISCUSSION

Schnapp, Madison: The concept of transdisciplinary science is awesome, and I’m wondering if some of the barriers that often come up are credit during the promotion process, how the funds flow, whether it’s indirect, etc. Did you make any changes to any of those processes to minimize barriers to it?

DiPaola, Lexington: That’s such a great question, and I’ve had the opportunity to do that. When we think about this from an institutional perspective across the United States for example, it’s important to have leadership sit down and understand that. We have changed, in most of the clinical departments in the College of Medicine, the statements of evidence that go through the appointments and promotions process. The great thing is that those processes ultimately roll up to the provost’s office, so it’s been an opportunity to change that. We haven’t changed all of it yet, but we’re moving in that direction.

Rosenthal, Winston-Salem: Bob, that was a wonderful presentation with some really great take-home points. We’ve been putting a lot of effort into trying to operationalize a learning health system at Wake Forest and capitalize on the size of our health system as a laboratory for innovation. There are lots of challenges in terms of bringing investigators and health systems together and doing it in a way where you’re circumventing the different timeframes with which the health system and investigators operate. As you’ve tried to do this in Kentucky, what lessons have you learned about engaging investigators and motivating them to address some of your biggest health system challenges?

DiPaola, Lexington: We actually set up a structure where we got agreement in terms of the health care leadership/operational leadership that’s dealing with the day-to-day challenges. We have IV fluid shortages going on right now. Can you imagine them sitting down with investigators worried about wanting support for their research when they’re dealing with that day to day? We set up a structure where we met every single week; it’s called excel research, and we put it in that meeting. It’s a selective group that includes the leadership of the university, the vice president for research, the appropriate deans, especially the dean of the college of medicine, and the leadership in the health system. Every single week, the topic is programmatic research that impacts health care. The investigators actually come there to sell an understanding of why it’s valuable to the health system and the support that they need operationally. Sometimes it’s space—they literally need space to conduct clinical research in the system. Some progress is made just by the fact that everybody is sitting down on a regular basis. When support is necessary, a discussion over matching funds is usually required. The health system will put in something if the college or the provost office puts in something. We are trying to hardwire that into the system. It’s actually run by health care even though everybody is coming to the table. I don’t know if that helps.

Rosenthal, Winston-Salem: That’s great. I would love to compare notes with you at some point.

DiPaola, Lexington: I would be happy to do that. Thank you.

REFERENCES

  • 1. Slade E, Kern PA, Kegebein RL, Liu C, Thompson JC, Kelly TH, King VL, DiPaola RS, Surratt HL. Collaborative team dynamics and scholarly outcomes of multidisciplinary research teams: a mixed-methods approach. J Clin Transl Sci. 2023;7(1):e59. doi:10.1017/cts.2023.9
  • 2. Surratt HL, Otachi JK, Slade E, Kern PA, King V, Kelly TH, DiPaola RS. Optimizing team science in an academic medical center: a qualitative examination of investigator perspectives. J Clin Transl Sci. 2023;7(1):e57. doi:10.1017/cts.2023.3
  • 3. Rosenfield, et al. Soc Sci Med. 1992;35(11):1343–57. doi:10.1016/0277-9536(92)90038-r; and Choi, et al. Clin Invest Med. 2007;29:351–64.
  • 4. National Research Council. Enhancing the Effectiveness of Team Science. Washington, DC: The National Academies Press; 2015.
  • 5. Hall KL, Vogel AL, Croyle RT. Strategies for Team Science Success. Cham: Springer; 2019.
  • 6. McGuier EA, Kolko DJ, Aarons GA, Schachter A, Klem ML, Diabes MA, Weingart LR, Salas E, Wolk CB. Teamwork and implementation of innovations in healthcare and human service settings: a systematic review. Implement Sci. 2024;19(1):49. doi:10.1186/s13012-024-01381-9
  • 7. Lewin S, Bohren M, Rashidian A, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 2: how to make an overall CERQual assessment of confidence and create a summary of qualitative findings table. Implement Sci. 2018;13(Suppl 1):10. doi:10.1186/s13012-017-0689-2
  • 8. Thompson BM, Levine RE, Kennedy F, Naik AD, Foldes CA, Coverdale JH, Kelly PA, Parmelee D, Richards BF, Haidet P. Evaluating the quality of learning-team processes in medical education: development and validation of a new measure. Acad Med. 2009;84(10 Suppl):S124–7. doi:10.1097/ACM.0b013e3181b38b7a
  • 9. Parker CS, Parsons S, Bandy J, Chapman C, Coppens F, Seales WB. From invisibility to readability: recovering the ink of Herculaneum. PLoS ONE. 2019;14(5):e0215775. doi:10.1371/journal.pone.0215775
