Abstract
An increase in cross-disciplinary, collaborative team science initiatives over the last few decades has spurred interest by multiple stakeholder groups in empirical research on scientific teams, giving rise to an emergent field referred to as the science of team science (SciTS). This study employed a collaborative team science concept-mapping evaluation methodology to develop a comprehensive research agenda for the SciTS field. Its integrative mixed-methods approach combined group process with statistical analysis to derive a conceptual framework that identifies research areas of team science and their relative importance to the emerging SciTS field. The findings from this concept-mapping project constitute a lever for moving SciTS forward at theoretical, empirical, and translational levels.
Over the past few decades, expanding investments in team science have generated greater interest in research that spans scientific disciplines and knowledge domains to address complex environmental, social, and health problems. These developments have been propelled by researchers' increasing commitment to, and growing scientific capacity for, addressing such societal problems (Disis and Slattery, 2010; Wuchty et al, 2007). Science teams are formed to address:
the inherent complexity of contemporary public health, environmental, political, and policy challenges … and the realization that an integration of multiple disciplinary perspectives is required to better understand and ameliorate these problems. (Stokols et al, 2008b).
Working in teams increases the likelihood that scientists can integrate multiple and divergent perspectives and, as a result, develop new insights and solutions (Hackman, 1990, 2011). The problems they address require not just a mingling of disciplines, but a cross-disciplinary team able to collaborate in such a way that its efforts are coordinated and integrated (Fiore, 2008; NAS, 2004). Although it is possible for team science to be unidisciplinary, team science most often connotes cross-disciplinarity (multi-, inter-, and trans-disciplinarity), a composite term for team science programs and projects that differ in the degree to which they interact and integrate across disciplinary, professional, and institutional boundaries (Crowley et al, 2010; Fiore, 2008; Klein, 2010; Rosenfield, 1992; Stokols et al, 2008a; Wagner et al, 2011).
Despite this growth in collaborative research, the scientific community continually struggles to overcome the challenges arising from this complex form of teamwork (Cummings and Kiesler, 2005, 2007, 2008; Olson and Olson, 2000). As such, science policy must be developed to help address the theoretical and practical challenges emerging from this form of collaborative endeavor. Further, scientific, social scientific, philosophical, and humanistic research is needed to help understand the team processes that drive knowledge production in such teams; that is, to help examine how new knowledge is generated in collaborating teams of scientists. This need has given rise to an empirical area of inquiry referred to as the science of team science — SciTS, pronounced ‘sights’ (Annual International Science of Team Science Conference Homepage, 2010; Börner et al, 2010; Falk-Krzesinski et al, 2010a,b; Stokols et al, 2008a). This field promotes understanding of cross-disciplinary research conducted by scientific teams by examining the processes through which teams organize, communicate, and conduct research. SciTS also helps to explain how teams collaborate to achieve scientific breakthroughs that would not be attainable through either individual efforts or a sequence of additive contributions.
SciTS is a topic of growing interest, as evidenced by a tremendous increase in publications about the topic since 2001 (Börner et al, 2010; Falk-Krzesinski, 2010). The growing SciTS literature provides numerous case studies of team science programs and projects (Adler and Stewart, 2010; Department of Energy, 2009; Huerta et al, 2005; Kahn, 1992; Miller, 2008; NIH, 2010; NSF, 2008). This literature also includes lessons about social and cognitive influences, as well as strategies and guidelines for achieving effective collaboration (Bennett et al, 2010; Fuqua et al, 2004; Hall et al, 2008a,b; Keyton et al, 2008; Stipelman et al, 2010; Stokols, 2006; Stokols et al, 2008b). In addition, researchers have begun to examine the dynamics of knowledge integration in collaborative research and problem-solving by teams (Derry et al, 2005; Hirsch Hadorn, 2008; Jordan, 2006; Paletz and Schunn, 2010).
However, despite forward momentum, definitions of core terminology and typologies of practice and theory related to SciTS too often remain impressionistic or parochial; areas of inquiry remain somewhat disconnected; and methodological approaches have been limited. So that the scientific community can more strategically understand and improve collaborative science, more research is needed to validate claims for team science, and a systematic understanding of what SciTS research entails is crucial at this early point in the field's emergence (Börner et al, 2010; Falk-Krzesinski et al, 2010a; Fiore, 2008).
We describe here an effort that was initiated to address this need. The goal was to produce a comprehensive, discipline-neutral taxonomy of team science issues. In so doing, we aimed to develop a SciTS research agenda and encourage more systematic, rigorous investigation of team science. We developed this taxonomy by applying concept-mapping methodology, which elicits diverse, open contributions from multiple stakeholders, followed by quantitative and qualitative categorization of the results. Here we describe the methodology and results of the mapping of the SciTS field, and discuss implications for advancing research and practice in team science.
Concept-mapping: the methodology
Concept-mapping has long been used as a method for knowledge elicitation and evaluation. The concept-mapping process enables a group to describe its ideas on any topic of interest (Trochim, 1989) and represent those ideas visually in the form of a map. For example, concept-mapping has been effectively applied in medicine and public health (Trochim et al, 2006a; Trochim and Kane, 2005) in order to develop logic models for the evaluation of large research projects (Anderson et al, 2006).
The process typically requires participants to brainstorm a large set of statements relevant to the topic of interest. Individual participants then sort these statements into groups of similar ones, rate each statement on one or more scales and, subsequently, interpret the maps that result from data analyses. The analyses typically include a two-dimensional multidimensional scaling (MDS) of the unstructured sorted data, a hierarchical cluster analysis of the MDS coordinates, and computation of average importance ratings for each statement and cluster of statements. The resulting maps display the individual statements in two-dimensional (x,y) space with more similar statements located nearer each other. The visualizations also depict how the statements are grouped into clusters that partition the space on the map. Finally, participants are led through a structured interpretation session designed to help them understand the maps and label them in a substantively meaningful way.
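For concreteness, here is a minimal Python sketch, ours rather than part of any study cited here, of one common way to aggregate individual pile sorts into the similarity matrix that MDS takes as input (the function name and toy data are illustrative assumptions):

```python
import numpy as np

def cooccurrence_matrix(sorts, n_statements):
    """Aggregate individual pile sorts into a statement x statement
    similarity matrix: cell (i, j) counts the sorters who placed
    statements i and j in the same pile (the diagonal counts sorters)."""
    sim = np.zeros((n_statements, n_statements), dtype=int)
    for piles in sorts:            # one entry per sorter
        for pile in piles:         # one entry per pile in that sort
            for i in pile:
                for j in pile:
                    sim[i, j] += 1
    return sim

# Toy input: three sorters grouping five statements (0-based indices).
sorts = [
    [[0, 1], [2, 3, 4]],
    [[0, 1, 2], [3, 4]],
    [[0, 2], [1, 3, 4]],
]
sim = cooccurrence_matrix(sorts, n_statements=5)
print(sim[3, 4])   # 3: all three sorters put statements 3 and 4 together
```

The MDS and clustering steps then operate on this matrix, as sketched later in the Methods section.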
Methods
Concept-mapping the science of team science
A critical feature of this project was that a diverse group of stakeholders utilized a collaborative methodology — concept-mapping — to elicit and map a research agenda for the SciTS, in effect using a team science approach to map research issues germane to team science. Concept-mapping has been used in a variety of biomedical (Leischow et al, 2008; Robinson and Trochim, 2007; Stillman et al, 2008; Trochim and Kane, 2005; Trochim et al, 2006b) and science management contexts (Kagan et al, 2010; Quinlan et al, 2008; Trochim et al, 2008). It is especially appropriate for involving multiple participants in conceptualizing a complex topic in order to develop a theoretical framework or as a basis for subsequent planning and evaluation. Concept-mapping thus offered significant benefits for collaborative knowledge elicitation about SciTS, and the methodology was especially appropriate for the need to create a taxonomy of issues.
Specifically, concept-mapping:
Uses web technology that enables low-cost, convenient, asynchronous input from multiple, dispersed stakeholders;
Allows for the input of multiple stakeholders from numerous domains so as to fully map a problem space;
Implements a structured group process that minimizes participant burden and encourages participation;
Applies advanced multivariate statistical analyses to help ensure rigorous, credible results;
Provides graphic output that is interpretable by a wide variety of audiences, in addition to standard statistical output; and
Facilitates development of definitions of closely related concepts (Trochim, 1989).
We set out to create a taxonomy of issues related to SciTS. This task entailed eliciting and integrating the perspectives of multiple team science stakeholders regarding topics important for a comprehensive research agenda for SciTS. We also sought to assess the relative importance of addressing these issues. Finally, building on these two steps, we mapped an initial conceptual framework for SciTS, which serves to guide the field's research agenda: a SciTS research roadmap.
Stakeholders
Project leads
This project was facilitated and managed by the Cornell Office for Research on Evaluation (CORE) located in Ithaca, New York, and the Evaluation Key Function Committee of the Weill Cornell Clinical and Translational Research Center. Staff of these organizations served as the facilitator group.[1] The Institutional Review Board-approved process took place over a period of approximately 18 weeks in the winter and spring of 2010. The faculty and staff in Research Team Support and Development at the Northwestern University Clinical and Translational Sciences (NUCATS) Institute were responsible for developing and maintaining the SciTS stakeholder distribution list (described below in ‘Participants’) and managing all project communications to participants.
The co-authors, a team of nine stakeholders, served as the steering committee for this project. Steering committee members were drawn from the 2010 Annual International Science of Team Science (SciTS) Conference Program Committee[2] to reflect disciplinary diversity with a focus on the empirical study of team science. The steering committee worked with the facilitator group to finalize the proposed plans for the project, including defining the project focus, determining participants and how they would be contacted, setting the project schedule, developing the brainstorming and rating focus prompts, and deciding how the results would be disseminated and utilized. The committee maintained responsibility for project oversight, identified contact information for participants, and reviewed and approved the brainstormed outcomes and the initial results. Members then participated in a teleconference to interpret the results, gain an understanding of the conceptual framework, and discuss implications for team science. The total time commitment from steering committee members was approximately 6 hours/member, spread over the 18-week project duration.
Participants
Participants were selected to represent the diverse stakeholder groups relevant to the emerging field of the SciTS, including:
Team science practitioners (predominantly principal investigators leading cross-disciplinary research centers);
Investigators studying scientific teams;
Team science funders/policy-makers;
Research development professionals; and
Data providers and analytics developers.
Participants were a subset of invited stakeholder attendees at the 2010 Annual International Science of Team Science (SciTS) Conference, non-attending SciTS conference invitees, as well as stakeholders recruited through additional email lists, web discussion groups, and professional groups and organizations.[3] Purposeful sampling (Mason, 1996) ensured that the full heterogeneity of relevant stakeholders was reflected in the mapping of issues. Participants were divided into two groups, distinguished by their level of involvement in the project:
- An invited group of over 800 stakeholders was asked to be involved in each phase of the project as key informants. The invited group included: team science practitioners, team science researchers, team science funders/policy-makers, team science analytic tool developers, and research development professionals. The list of stakeholders and their email addresses was compiled by the Research Team Support and Development faculty and staff at NUCATS over a period of six months. Stakeholders were identified via authorship of publications related to SciTS and team research more broadly; attendee lists from SciTS conferences/meetings/workshops; grant award databases and websites providing information about principal investigators on research center and related grant awards from the NIH, NSF, NASA, Departments of Energy and Defense, and numerous foundations; program officers for the research center and related grant programs; websites of analytic tool products; and referrals solicited from members of the SciTS conference program committee. The invited group members were able to participate by:
- Contributing their ideas and opinions about SciTS factors through web-based brainstorming (10–15 minutes); and
- Conducting a web-based rating of the relative importance of addressing each factor (15–20 minutes).
- Respondents, a subset of the invited group, actively participated in the brainstorming process (described below) used to generate topic statements. Respondent self-reported demographics demonstrate diversity within the group:
- 54.8% male, 45.2% female;
- 45.2% practitioners of team science, 54.8% researchers studying team science;
- Expertise in biomedicine and social sciences (≈ 50%); business administration, engineering/math/computer science, humanities, physical sciences, and other (≈ 50%);
- Employment sector: academic (≈ 65%) and business/private, consulting, government, and nonprofit (≈ 35%); and
- 82.2% doctorate or equivalent educational background.
- A core group of 24 people, a subset of the invited group, participated fully in issue generation and in the organizing and rating activities. Core group members included all members of the steering committee, the remaining SciTS program committee members, and many conference presenters. The core group members participated by:
- Contributing their ideas and opinions about issues through web-based brainstorming (10–15 minutes);
- Conducting a web-based unstructured sort of the synthesized set of factors (30 minutes–1 hour); and
- Conducting a web-based rating of the relative importance of addressing each factor (15–20 minutes).
Generation of SciTS topics
During the generation step, respondents of the invited group created SciTS topic statements using a web-based, structured brainstorming process (Coxon, 1999; Osborn, 1948). The focus prompt for brainstorming was the following completion statement:
One topic that should be part of a comprehensive research agenda for the science of team science is …
The focus prompt, developed by the steering committee, was designed to ensure that the brainstormed statements were focused, concise, and grammatically and syntactically similar.
Respondents entered their statements directly into the website and could immediately see their ideas along with those of the other participants. Web-based brainstorming[5] was anonymous to ensure that participants felt free to enter any issues they considered relevant, and respondents were able to return to the website as often as desired during the brainstorming period. Respondents generated a total of 240 brainstormed SciTS topic statements from 87 unique IP addresses. While it is not possible to ascertain directly how many statements each respondent entered, the brainstorming site was monitored by the primary facilitator and one member of the steering committee[6] to assess trends in statement entry. That monitoring revealed that statements were entered one at a time or in small groups of three to five.
The facilitator group used content analysis procedures to edit and synthesize the brainstormed issues such that they represented, as well as possible, the details in the original brainstormed set. This analysis involved coding each brainstormed statement with one or more keywords, sorting them alphabetically by keywords, identifying identical or highly similar statements, and creating a single statement that could represent multiple similar ones. The steering committee reviewed, revised, and finalized this synthesis to arrive at the final set of 95 synthesized statements, representing SciTS issues (statements listed in Table 1). Of the 95 final statements, 39 (41.1%) were original brainstormed statements taken verbatim, 17 (17.9%) were original statements with only slight modifications in wording either for clarity or for grammatical consistency with the focus statement, and 39 (41.1%) were syntheses of two or more similar statements from the original 240 that were brainstormed.
Table 1.
| Cluster | No. | Synthesized statement | ARIR |
|---|---|---|---|
| Measurement and evaluation of team science | 8 | Measurement of key constructs (e.g. collaboration, disciplinarity, team effectiveness, personal/behavioral characteristics, team processes, readiness, synergy, productivity, shared knowledge) | 4.44 |
| | 13 | Evaluation of team science and its impacts | 4.22 |
| | 65 | Measuring effectiveness of team science on multiple levels: individual team, impact of research, effectiveness of team science funding programs, etc. | 4.16 |
| | 2 | How to evaluate success of team science-based research centers | 4.14 |
| | 3 | Comparing the effects of team science versus traditional science in advancing scientific knowledge | 4.08 |
| | 58 | Evaluating and learning from successful teams | 4.00 |
| | 17 | Research on methodology and measurement of team science | 3.97 |
| | 77 | Strengthening the research methods for studying scientific teams (e.g. using quasi-experimental methods) | 3.87 |
| | 18 | Social network analysis of scientific teams | 3.79 |
| | 79 | Infrastructures to capture relevant data to better assess team science outcomes | 3.77 |
| | 69 | Importance of developing multi-method strategies to assess processes and outcomes of team science | 3.73 |
| | 22 | How network information can provide insight into performance and evaluation of teams | 3.68 |
| | 63 | Using publication and bibliometric data (e.g. citation rates, impact factors) to assess team science | 3.63 |
| | 89 | Best approach(es) to assessing scientific teams within an institution | 3.63 |
| | 46 | Economic value created by team science | 3.61 |
| | 29 | How to measure an increase in team science activity and collaboration at an institution, in comparison with other institutions | 3.54 |
| | 50 | Key performance indicators to encourage team science evaluation into individual development and professional growth | 3.52 |
| | 88 | To assess whether the findings produced by team science are more broadly disseminated, as compared to traditional science | 3.48 |
| | 52 | How to evaluate existing and new tools | 3.44 |
| | 83 | How to demonstrate an effective team in a grant proposal | 3.40 |
| | 4 | How to use team science approaches and methods in the investigation of team science | 3.40 |
| | 32 | The availability of organizational structure data as a data source | 3.35 |
| | 64 | How network information can provide insight into performance and evaluation of teams | 3.11 |
| | | Cluster average: | 3.74 |
| Definitions and models of team science | 45 | Best practices of team science | 4.16 |
| | 73 | Developing testable hypotheses about team science | 3.98 |
| | 72 | Theories and models of team science | 3.85 |
| | 62 | The definitions of team, scientific team, and team science | 3.21 |
| | 15 | Definition of different types of disciplinarity (interdisciplinarity, multi-disciplinarity, transdisciplinarity) | 3.03 |
| | | Cluster average: | 3.65 |
| Institutional support and professional development for teams | 74 | Resources and infrastructure needed within and across institutions to promote collaboration and team science | 4.03 |
| | 31 | Incentives and incentive systems for team science | 3.89 |
| | 25 | How the university tenure and promotion system can be restructured to encourage team science | 3.78 |
| | 80 | Processes and methods that encourage and support teams (e.g. group activities, scientific conferences, grant opportunity distribution, systems-based approaches) | 3.73 |
| | 92 | Ethical issues in conducting team science (e.g. IP ownership, defining collaborative relationships, attributing credit for work) | 3.70 |
| | 5 | Training and education issues in team science | 3.67 |
| | 90 | Use of collaborative computerized tools to support and enhance team science | 3.63 |
| | 12 | The effects of team science on the scientist's work and career | 3.61 |
| | 75 | Funding to support the science of team science, research on team science | 3.61 |
| | 16 | Individual benefit/risk analysis to engaging in team science | 3.29 |
| | 81 | Co-authorship and multi-principal-investigator authorship in team science | 3.27 |
| | 55 | Timing, with regards to investigator career stage, in team science | 3.13 |
| | 85 | Relationships between team science in the academy and industry | 3.10 |
| | | Cluster average: | 3.57 |
| Disciplinary dynamics and team science | 59 | Using team science and interdisciplinary research to support emerging areas of science | 3.94 |
| | 38 | How to overcome disciplinary traditions to move toward interdisciplinary traditions | 3.78 |
| | 36 | Applying what is known about teams in different disciplines (e.g. management) and contexts (e.g. international) | 3.60 |
| | 19 | How best to disseminate findings and best practices from the science of team science | 3.59 |
| | 82 | Variations in team science related to disciplinarity | 3.44 |
| | 10 | Understanding differences between intra- vs. inter-institutional scientific teams | 3.37 |
| | 33 | Relationships and connections between multi-, inter- and transdisciplinary research efforts and team science | 3.16 |
| | | Cluster average: | 3.55 |
| Structure and context for teams | 30 | Keys for success in team science | 4.23 |
| | 9 | The relationship between productivity and the composition of teams | 4.11 |
| | 37 | The network characteristics of productive science team members and subgroups | 3.83 |
| | 53 | Contextual/situational factors that influence the effectiveness of team collaboration | 3.81 |
| | 67 | The effect of research centers in promoting a team science approach | 3.69 |
| | 40 | How research networking tools can enhance team science | 3.68 |
| | 20 | The relationships among creativity, innovation and the composition of teams | 3.67 |
| | 70 | What types of team organizations are best at facilitating team science | 3.60 |
| | 41 | Whether collaborative spaces for team science encourage collaboration | 3.56 |
| | 42 | The impact of team size on process and outcomes in team science | 3.55 |
| | 57 | How team dynamics can impact science | 3.53 |
| | 14 | How the changing ecology and structure of teams influence future scientific collaborations | 3.51 |
| | 11 | The effects of the type and complexity of research question on team science | 3.50 |
| | 28 | A study of team science outcomes with junior versus senior principal investigators | 3.05 |
| | 54 | Use and impact of community-based organizations and community clinical practices in teams | 2.87 |
| | 7 | Status of the team as it appears to external individuals and groups | 2.84 |
| | 56 | Effects of sustained, hard team work | 2.58 |
| | | Cluster average: | 3.51 |
| Management and organization for teams | 35 | Organizational policies that foster team science | 4.13 |
| | 1 | Types of organizational structures of team science | 3.76 |
| | 93 | The management of scientific teams | 3.71 |
| | 71 | Disciplinary language barriers in team science | 3.60 |
| | 78 | How to sustain scientific teams | 3.56 |
| | 66 | Virtual organizations and team science | 3.39 |
| | 76 | Value of rotating team leadership | 3.15 |
| | 61 | Membership in multiple, potentially overlapping, potentially conflicting teams | 3.05 |
| | 86 | Formal vs. informal organizational structures of institutions | 3.05 |
| | | Cluster average: | 3.49 |
| Characteristics and dynamics of teams | 84 | Leadership characteristics that drive effective team science | 3.79 |
| | 48 | Issues to consider when initiating or building a new team | 3.76 |
| | 34 | What factors contribute to the development of trust in different collaborations | 3.71 |
| | 68 | Optimal team composition (e.g. specialists, generalists, boundary spanners) to enable use of diverse expertise | 3.70 |
| | 24 | Ideal composition of scientific teams | 3.65 |
| | 23 | Communication styles in teams | 3.65 |
| | 87 | Social skills and competencies required for successful team science | 3.58 |
| | 43 | Collaborative readiness factors | 3.58 |
| | 21 | How roles in teams are defined and communicated, and by whom | 3.57 |
| | 60 | Personal and behavioral factors in team science collaborations | 3.55 |
| | 49 | Finding potential/likely research collaborators | 3.53 |
| | 91 | Status differences and power dynamics within the team | 3.35 |
| | 94 | Different types of conflicts that occur in scientific teams and how to address these effectively | 3.35 |
| | 6 | Heterogeneity of team membership | 3.30 |
| | 27 | The psychological and personality factors associated with being an effective team scientist | 3.30 |
| | 39 | How teams grow, shrink, expire over time | 3.23 |
| | 26 | The influence of research team morale | 3.14 |
| | 47 | Team member physical proximity (co-location) | 3.13 |
| | 44 | Team member interchangeability | 3.06 |
| | 51 | Gender differences in team contributions | 3.05 |
| | 95 | Why people join teams | 2.97 |
| | | Cluster average: | 3.43 |

Note: The 95 synthesized SciTS topic statements represented in Figure 1 are organized into seven clusters. Each statement is listed by cluster in descending order of its individual average relative importance rating (ARIR); the cluster average is taken over all of the statements within a cluster.
Structuring SciTS topics
In the structuring step, participants provided information about how the synthesized statements might be grouped and rated for relative importance. As with brainstorming, this information was collected over the web. The structuring step involved two distinct activities: rating and sorting the synthesized statements.
- Rating (invited group): For the rating activity, participants rated each of the 95 synthesized statements on a 5-point Likert-type response scale. Because participants were unlikely to have submitted statements that are totally unimportant with respect to the focus, instructions stressed that the rating be considered a relative judgment of the importance of each item, compared to all the other synthesized statements. The specific rating was done in response to the prompt:

Please rate each statement for its relative importance to a comprehensive research agenda in the study and/or practice of team science

where 1 = relatively unimportant, compared with the rest of the statements, and 5 = extremely important, compared with the rest of the statements. Relative importance ratings were completed by invited group members by way of 62 unique IP addresses.

- Sorting (core group only): Unstructured sorting, or the pile sort method, was used because it can accommodate a large number of items (Weller and Romney, 1988). For the sorting activity, core group members were asked to group the 95 synthesized statements ‘in a way that makes sense to you’ and to name each group (Coxon, 1999; Rosenberg and Kim, 1975; Weller and Romney, 1988). The only restrictions in this sorting task were that there could not be: (a) groups having only one item; (b) one group consisting of all items; or (c) a ‘miscellaneous’ group. The web software allowed the participants to create, delete, and name new groups and to move statements from one group to another. Fifteen core group members completed the sorting activity.
Concept-mapping data analysis
The concept-mapping analysis[7] involved a sequence of steps (sketched in code after the list):
Construction of a square similarity matrix based on the sorting co-occurrences (Weller and Romney, 1988);
A two-dimensional multidimensional scaling (MDS) (Kruskal and Wish, 1978) of the similarity matrix; and
Hierarchical cluster analysis (Anderberg, 1973; Everitt, 1980) of the MDS coordinates using Ward’s algorithm.
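The original analyses were run in the Concept System software (see footnote 7). Purely as an illustrative sketch, the same sequence can be approximated with scikit-learn and SciPy, reusing the `sim` co-occurrence matrix built in the earlier sketch; the function name, parameters, and toy data are our assumptions, not the study's:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.manifold import MDS

def concept_map(sim, n_clusters, seed=0):
    """Two-dimensional MDS of the sort-based similarity matrix, followed by
    Ward's hierarchical clustering of the resulting (x, y) coordinates."""
    sim = np.asarray(sim, dtype=float)
    n_sorters = sim.diagonal().max()   # every sorter pairs a statement with itself
    dissim = n_sorters - sim           # frequent co-occurrence -> small distance
    np.fill_diagonal(dissim, 0.0)
    mds = MDS(n_components=2, dissimilarity='precomputed', random_state=seed)
    coords = mds.fit_transform(dissim)             # the 'point map' coordinates
    # Ward's algorithm is applied to the MDS coordinates, as described in the
    # text, not to the raw similarity matrix.
    tree = linkage(coords, method='ward')
    labels = fcluster(tree, t=n_clusters, criterion='maxclust')
    return coords, labels

# The study partitioned 95 statements into 7 clusters; 2 suffices for the toy data.
coords, labels = concept_map(sim, n_clusters=2)
```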
The MDS configuration is graphed in a two-dimensional ‘point map’ that displays the location of all the synthesized statements with statements closer to each other generally expected to be more similar in meaning. A ‘cluster map’ is also generated that displays the statement points enclosed by polygon-shaped boundaries for the clusters.
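Continuing the illustrative sketch, a basic version of the point map could be drawn from the MDS coordinates and cluster labels, with cluster membership shown by color rather than the polygon boundaries described above (matplotlib rendering is our choice; the study's maps were produced by the Concept System software):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter(coords[:, 0], coords[:, 1], c=labels, cmap='tab10')
for i, (x, y) in enumerate(coords):
    ax.annotate(str(i + 1), (x, y))   # statement numbers next to their points
ax.set_title('Point map with cluster membership (toy data)')
plt.show()
```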
The 1-to-5 importance rating data are averaged across responses for each synthesized statement and each cluster. This rating information is depicted graphically in a ‘point-rating map’ showing the original point map with the average rating per item displayed as vertical columns in the third dimension, and in a ‘cluster-rating map’ that shows the cluster average rating using the third dimension.
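As a hypothetical illustration of the averaging behind these rating maps (statement numbers echo Table 1, but the raters and rating values below are invented), the per-statement and per-cluster averages could be computed as:

```python
import pandas as pd

# Hypothetical long-format ratings: one row per (rater, statement), with each
# statement's cluster attached.
ratings = pd.DataFrame({
    'statement': [8, 8, 13, 13, 30, 30],
    'cluster':   ['Measurement and evaluation', 'Measurement and evaluation',
                  'Measurement and evaluation', 'Measurement and evaluation',
                  'Structure and context', 'Structure and context'],
    'rating':    [5, 4, 4, 5, 4, 5],
})

# Average relative importance rating (ARIR) per statement, as in Table 1 ...
arir = ratings.groupby('statement')['rating'].mean()

# ... and the cluster average: the mean over the statements in each cluster.
stmt_cluster = ratings.drop_duplicates('statement').set_index('statement')['cluster']
cluster_arir = arir.groupby(stmt_cluster).mean()
print(arir, cluster_arir, sep='\n\n')
```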
Results
The following materials were available for use in the concept-map analysis session:
List of the original 240 brainstormed statements;
List of the 95 synthesized statements, representing SciTS topics, grouped by cluster;
Point-rating map showing the MDS placement of the synthesized statements and their identifying numbers, with average statement ratings overlaid; and
Cluster map showing the cluster solution, with average cluster ratings overlaid (see Figure 1).
Concept-map analysis
For the MDS analysis, the stress value was 0.29252 after 12 iterations. Stress is considered the primary indicator of ‘goodness of fit’ to the sort data that serves as the input for MDS. The value obtained here is comparable to the median value of 0.29702 reported in the only known meta-analysis, which synthesized 33 separate concept-mapping studies (Trochim, 1993); the map therefore appears to be a reasonable fit to the input data, given comparable norms. A two-dimensional point map generated through MDS analysis was developed, and the statements were arranged into seven clusters, as determined by the hierarchical cluster analysis of the MDS coordinates. The analysis placed statements frequently sorted together closer to one another on the map than statements sorted together less frequently. The resultant cluster map appears in Figure 1.
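For readers unfamiliar with the statistic, Kruskal's stress formula 1 compares the distances on the map with the input dissimilarities. A simplified (metric) sketch, reusing `coords` and `sim` from the earlier sketches, is shown below; the exact formula used by the Concept System software is not specified here, so this is illustrative only:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def kruskal_stress(coords, dissim):
    """Kruskal's stress-1: sqrt( sum (d_ij - delta_ij)^2 / sum d_ij^2 ) over
    pairs i < j, where d are map distances and delta the input dissimilarities.
    (A nonmetric MDS would compare against monotonically transformed
    disparities rather than the raw dissimilarities.)"""
    d = pdist(coords)                                        # map distances
    delta = squareform(np.asarray(dissim, dtype=float), checks=False)
    return float(np.sqrt(((d - delta) ** 2).sum() / (d ** 2).sum()))

dissim = sim.diagonal().max() - sim      # as in the clustering sketch
np.fill_diagonal(dissim, 0)
print(kruskal_stress(coords, dissim))    # cf. the 0.29 reported above
```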
A preliminary analysis of the results was conducted by the steering committee, which convened by email and teleconference to review and interpret the various analytic products. These analysis activities follow a previously described structured process (Trochim, 1989).
Prior to the teleconference, each steering committee member was asked to read through the set of synthesized statements in each cluster and generate a short phrase or word to describe or label the set of statements as a cluster. The facilitator then led the steering committee in discussion that worked cluster-by-cluster to achieve group consensus on an acceptable label for each cluster. Clusters were ultimately labeled (refer to Figure 2):
Measurement and evaluation of team science;
Structure and context for teams;
Characteristics and dynamics of teams;
Management and organization for teams;
Institutional support and professional development for teams;
Disciplinary dynamics and team science; and
Definitions and models of team science.
The facilitator reminded the group that, in general, clusters closer together on the map should be conceptually more similar than clusters farther apart, and asked them to assess whether this seemed to be true or not. Group members were then asked to identify interpretable clusters of clusters or ‘regions’. Just as in labeling the clusters, the group arrived at a consensus label for each of the identified regions. Regions were labeled: nuts and bolts; the team; meta-issues; and support (refer to Figure 2).
Concept-map rating analysis
The facilitator noted that all of the material presented until this point used only the sorting data and multivariate map analyses. The results of the rating task were then presented through point- and cluster-rating reports (Table 1).
The numerical rating of a point or cluster represents the average relative importance rating for that statement or cluster of statements. It is important to note that the importance ratings represent relative, not absolute, importance. Specifically, even though a cluster may have the lowest relative rating, this does not mean the cluster is unimportant to SciTS; in fact, all of the clusters were rated above the midpoint of the importance scale.
The average cluster ratings are overlaid onto a combined cluster/region map in Figure 2 to create the SciTS concept map. Again, steering committee members examined these data to determine whether they made intuitive sense and to discuss implications given the focus of this project.
The top 10 synthesized statements by average relative importance rating are listed in Table 2. These statements represent the SciTS issues rated most important in this concept-mapping study. The clusters rated most important were ‘Measurement and evaluation of team science’ and ‘Definitions and models of team science’, and five of the 10 top-rated statements reside in the ‘Measurement and evaluation of team science’ cluster.
Table 2.
| No. | Synthesized statement | ARIR |
|---|---|---|
| 8 | Measurement of key constructs (e.g. collaboration, disciplinarity, team effectiveness, personal/behavioral characteristics, team processes, readiness, synergy, productivity, shared knowledge) | 4.44 |
| 30 | Keys for success in team science | 4.23 |
| 13 | Evaluation of team science and its impacts | 4.22 |
| 45 | Best practices of team science | 4.16 |
| 65 | Measuring effectiveness of team science on multiple levels: individual team, impact of research, effectiveness of team science funding programs, etc. | 4.16 |
| 2 | How to evaluate success of team science-based research centers | 4.14 |
| 35 | Organizational policies that foster team science | 4.13 |
| 9 | The relationship between productivity and the composition of teams | 4.11 |
| 3 | Comparing the effects of team science versus traditional science in advancing scientific knowledge | 4.08 |
| 74 | Resources and infrastructure needed within and across institutions to promote collaboration and team science | 4.03 |

Note: The top-rated SciTS topic statements are ordered by average relative importance rating (ARIR). Five of the 10 statements reside in the ‘Measurement and evaluation of team science’ cluster.
Given that these data were culled both from researchers who study science teams and from researchers engaged in scientific teamwork, the ratings of relative importance are particularly significant. The high importance assigned to terminology and metrics indicates that this is a young field whose scientific foundation still needs to be established. These findings may also serve as a compass, pointing to those areas of SciTS that will have the most immediate impact on team science once they are more fully, and more systematically, understood.
Discussion
A multi-level framework emerges
We have interpreted the resulting concept map of the SciTS field as representing a complex problem space of interrelated component clusters. The final interpreted map (Figure 2) suggests a comprehensive, multi-level framework with broad applicability for shaping future directions of SciTS research and practice. This integrated view allows researchers from different disciplines to train their varied lenses within and across clusters, addressing the clusters’ tightly coupled nature through the perspectives of the psychological, organizational, and network sciences. The SciTS concept map also provides a mechanism for evaluating the development and maturation of the team science literature, and for identifying where gaps in team science knowledge exist.
The multi-dimensional relationships and multi-level issues associated with the clusters and regions that emerged from the concept-mapping process can be interpreted within a systems framework. Systems-level thinking is most appropriate for areas of inquiry spanning multiple problems and multiple disciplines (Börner et al, 2010; Stokols et al, 2008b; Trochim et al, 2006a). Specifically, a systems approach is a general conceptual orientation concerned with the interrelationships between parts and their relationships to a functioning whole, often understood within the context of an even greater whole (Churchman, 1984; Emery, 1969; Von Bertalanffy, 1950).
The micro, meso, and macro levels of analysis, which encompass the issues identified as salient for advancing the SciTS field (Börner et al, 2010), warrant a mixed-methods approach for exploring complex and dynamic multi-level interdependencies consistent with a systems approach. In particular, a systems approach does not value one method of analysis over another. Rather, it provides an opportunity for an integration of theory and methods, each allowing for a differing level of granularity. Macro-level research involves analyses at population levels and seeks to identify patterns of collaborations that are broad in scope. This approach could include network types of analyses examining coordination across disciplinary boundaries (Aboelela et al, 2007; Haines et al, 2010) or it might include sociological analyses of fields of knowledge and their integration (Klein, 1996). Meso-level research examines group dynamics and the social processes driving collaboration within science teams (Fiore, 2008). Finally, micro-level research focuses on individual researchers within science teams and, for example, how education and training are related to particular career paths (Borrego and Newswander, 2010; Klein, 2008; Misra et al, 2010; Mitrany and Stokols, 2005; Nash, 2008).
Within the SciTS concept map (Figure 2), we can see sets of tightly coupled clusters that are likely to be largely iterative in their interactions. A systems approach allows consideration of how these clusters can be studied in isolation and/or collectively, and how they interrelate across levels.
For example, clusters such as ‘Disciplinary dynamics’ and ‘Characteristics and dynamics of teams’ should be studied in parallel, and via a micro–meso approach. This approach would help explain how, for example, cross-disciplinary team research affects — and is affected by — social-cognitive factors or the particular communication patterns and group processes emerging during collaboration. Similarly, a research plan can be developed in support of the varied methodological approaches necessary, or most appropriate, for a given problem area within a cluster. For example, the ‘Management and organization’ cluster, while appropriate for macro- and meso-level approaches, would require a blend of organizational and administrative science methods to examine the input/process/output factors (Hackman, 1987; McGrath, 1964; Stokols et al, 2003, 2005; Trochim et al, 2008) associated with, for example, varied incentives for cross-disciplinary science teams.
The ‘Meta-issues’ clusters cross macro and meso levels in that they are associated with both broader disciplinary issues as well as group- and team-level outcomes. Similarly, the ‘Support’ clusters cross macro and meso levels in that the administrative and technical factors for team science must act as the collaborative infrastructure both within teams, and across research units within universities. Finally, the ‘Team’ clusters require micro–meso–macro levels of analysis.
What is key to recognize is that linear or sequential process models cannot adequately capture the complexity inherent in SciTS and may even be misleading. Thus, following views espoused in the health sciences, we favor a systems view, where an interdependent and iterative set of clusters can be:
viewed as a coherent whole, while the relationships among the components are also recognized and seen as critical to the system. (Mabry et al, 2008)
In addition to highlighting the multi-level interdependencies among the various facets of team science depicted in the concept map, systems theory also suggests the value of maximizing the level of congruence or fit (Adamopoulos, 1982; Wicker, 1972) that exists among the phenomena encompassed by each cluster of the map. For instance, certain team structures and contexts (e.g. teams involving geographically dispersed vs. co-located team members) may require different kinds of management strategies (e.g. solo-vs. shared-leadership arrangements, with the latter being more essential for remote collaborations) and high versus low levels of technological support (e.g. cyber-infrastructures to support remote collaboration). To the extent that these different facets of team science are congruent or well-matched, team members should be better able to collaborate effectively. High levels of incongruity or imbalance among the various facets of team science, on the other hand, are likely to create disequilibrium and jeopardize collaborative success (Falk-Krzesinski et al, 2010a; Stokols et al, 2008b).
From a congruence perspective, efforts to promote strategic team science would entail maximizing the fit between alternative arrangements for conducting team science (e.g. place-based centers vs. spatially dispersed virtual ‘collaboratories’) and particular leadership and team structures, managerial approaches, and technological resources (Contractor, 2009; Falk-Krzesinski et al, 2010a; Gray, 2008a; Guimerà et al, 2005; Olson and Olson, 2000; Olson et al, 2008; Whitfield, 2008). Greater congruence among these dimensions of team science would be expected to enhance collaborative processes and to promote higher levels of productivity.
A roadmap for SciTS research
Despite the burgeoning interest in team science, it is not yet empirically clear exactly how and when collaborative efforts actually enhance the scientific enterprise. We know that there are significant expenditures in time and money that are often associated with increased collaboration and that working collectively poses important challenges that more solitary science avoids. It is precisely because the empirical foundation for collaboration is still at such an early stage that there is so great a need to establish a scientific endeavor designed to address such issues.
This concept-mapping study constitutes a lever for moving SciTS forward at theoretical, empirical, and translational levels. In this section we illustrate how the concept-mapping conceptual framework can be used to develop a roadmap for SciTS research, and we provide representative ways in which the SciTS research roadmap can be used to guide the development of team science research directions, funding, and policy. But it leaves for future research the challenge of implementing a SciTS research agenda and enhancing our understanding of what really works in team science and under what conditions.
The SciTS concept map encompasses a broad set of clusters, which represent important areas of inquiry for studying scientific teamwork. Accordingly, it offers a guide for funders of SciTS research by identifying areas of investigation to promote. For instance, the various clusters or regions identified in the map can serve as a basis for creating more specific research areas that funders of team science can use to enlarge the portfolio of funding necessary to understand and improve team science. Along these lines, the multiple clusters and regions identified in the SciTS concept map can also provide a basis for developing a priori theoretical propositions about how various facets of team science relate to each other in a dynamic and possibly predictive fashion.
Further, this framework illustrates how different stakeholder groups could have primary interests in different clusters or across clusters and, more importantly, how different stakeholder groups would need to collaborate to examine issues. For example, team science researchers involved with the generation of models could work with program staff at funding agencies to identify particular gaps in understanding that require additional research, as well as how to facilitate research to test such models in situ. Similarly, researchers in evaluation could work with team science practitioners to understand how the management of collaborations can be more effectively assessed and how the practice of team science and its associated team dynamics lead to varied performance outcomes. SciTS researchers working to understand effective organization and management would need to partner with multiple stakeholders so as to examine the leadership and contextual factors that influence science collaborations.
With regard to the practical aspects of science performed in teams, the clusters can also be used to develop detailed conceptual taxonomies that foster the creation of methods for supporting scientific collaboration. These taxonomies could then be used, for example, to develop alternative infrastructures for doing team science, to articulate multiple team science goals, and to devise management and support strategies. Further, a SciTS research roadmap could be used as a springboard for directing further research about the diverse factors that influence the effectiveness of team science projects and initiatives.
This project was undertaken in part to provide an empirical foundation for informed discussion at the First Annual International Science of Team Science (SciTS) Conference, held on 22–24 April 2010 in Chicago (Falk-Krzesinski et al, 2010a). The study was conducted as part of the planning activities and preliminary results of the study were shared with conference attendees. Taking the project full circle, the conference program committee used the SciTS concept map to help identify and select the session topics and organize the overall program for the second annual SciTS conference that was held in Chicago in April 2011 <http://scienceofteamscience.northwestern.edu/>. Four of the 13 conference sessions focused on issues related to the ‘Measurement and evaluation of team science’, the SciTS issue that received the highest relative importance rating in the concept-mapping study. The remaining nine conference sessions presented a balanced approach to the additional topics highlighted by the other six clusters in the SciTS concept map.
This SciTS concept-mapping project extends earlier conceptualizations and scientific forums on team science (Cummings and Kiesler, 2007; Fiore, 2008; Gray, 2008b; Kessel et al, 2008; Mâsse et al, 2008; NAS, 2004; Olson et al, 2008; Rhoten, 2003, 2004; Rhoten and Parker, 2004; Stokols et al, 2008a, 2010; Trochim et al, 2008). It bridges prior findings on multi-, inter-, and trans-disciplinary research collaboration and the new dynamics of problem-focused team research by applying collaborative concept-mapping techniques used by experts in the field to more systematically identify the particular areas of research important for the developing field of SciTS.
Whereas prior approaches to team science have involved systematic analyses of both primary and secondary data sources, we added new empirical methods to research on SciTS. In this way we used concept-mapping strategies to develop a framework for future theory development, empirical research, and translational strategies for the field. In short, the SciTS concept-mapping study conveys actionable science that can be translated to guide successful team science.
Acknowledgements
We thank Latonia Trimuel, Research Team Support and Development at the Northwestern University Clinical and Translational Science (NUCATS) Institute, for her role managing the electronic distribution list of prospective participants used throughout this study.
This article and the SciTS conference were made possible by grant awards UL1RR025741 and U24RR029822 from the National Institutes of Health (NCRR CTSA and ARRA) to Northwestern University; UL1RR024996 from the National Institutes of Health (NCRR CTSA) to Weill Cornell Medical College; 0814364 from the National Science Foundation (DRL REESE) to Cornell University; 0915602 from the National Science Foundation (SES SciSIP and IOS) to the University of Central Florida; 0838564 from the National Science Foundation (IIS VOSS) to Northwestern University; and by conference support from the NIH National Cancer Institute, Division of Cancer Control and Population Sciences and the Northwestern University Clinical and Translational Sciences (NUCATS) Institute; and by a philanthropic donation from Bill and Sheila Lambert, sponsors of the Lambert Family Communication Conference of the School of Communication at Northwestern University.
Footnotes
1. Co-author William Trochim served as the primary facilitator throughout the process.
2. Co-author Cathleen Kane is part of CORE and not a member of the SciTS Conference Program Committee.
3. Interdisciplinary Network for Group Research (INGroup) distribution list; National Organization of Research Development Professionals (NORDP) listserv; and the Network for Transdisciplinary Research (td-net) distribution list.
4. Additive for members of the steering committee.
5. Web-based services for this project were accomplished using the Concept System Global© software provided exclusively through Concept Systems Inc., Ithaca, NY.
6. Co-author Holly Falk-Krzesinski.
7. All statistical and graphic analyses were accomplished using the Concept System© Core Program available exclusively through Concept Systems Inc., Ithaca, NY.
Contributor Information
Holly J Falk-Krzesinski, Email: h-falk@northwestern.edu, Research Team Support and Development, Northwestern University Clinical and Translational Sciences (NUCATS) Institute, Northwestern University, Chicago, IL 60611, USA.
Noshir Contractor, Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL 60208, USA.
Stephen M Fiore, Department of Philosophy and Institute for Simulation, University of Central Florida, Orlando, FL 32826, USA.
Kara L Hall, Division of Cancer Control and Population Sciences, National Cancer Institute, Bethesda, MD 20850, USA.
Cathleen Kane, Clinical and Translational Science Center, Weill Cornell Medical College, Ithaca, NY 14853, USA.
Joann Keyton, Department of Communication, North Carolina State University, Raleigh, NC 27695, USA.
Julie Thompson Klein, Department of English and Office for Teaching and Learning, Wayne State University, Detroit, MI 48202, USA.
Bonnie Spring, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL 60611, USA.
Daniel Stokols, Department of Planning, Policy and Design and Department of Psychology and Social Behavior, University of California Irvine, Irvine, CA 92697, USA.
William Trochim, Department of Policy Analysis and Management, Cornell University, Ithaca, NY 14853, USA.
References
- Aboelela SW, Merrill JA, Carley KM, Larson E. Social network analysis to evaluate an interdisciplinary research center. Journal of Research Administration. 2007;38:97–108. [Google Scholar]
- Adamopoulos J. The perception of interpersonal-behavior: dimensionality and importance of the social-environment. Environment and Behavior. 1982;14:29–44. [Google Scholar]
- Adler NE, Stewart J. Using team science to address health disparities: MacArthur network as case example. Annals of the New York Academy of Sciences. 2010;1186:252–260. doi: 10.1111/j.1749-6632.2009.05335.x. [DOI] [PubMed] [Google Scholar]
- Anderberg MR. Cluster Analysis for Applications. New York, NY: Academic Press; 1973. [Google Scholar]
- Anderson LA, Gwaltney MK, Sundra DL, et al. Using concept-mapping to develop a logic model for the prevention research centers program. Preventing Chronic Disease: Public Health Research, Practice and Policy. 2006;3:1–9. [PMC free article] [PubMed] [Google Scholar]
- Annual International Science of Team Science Conference Homepage. Northwestern University Clinical and Translational Sciences Institute; 2010. [Google Scholar]
- Bennett LM, Gadlin H, Levine-Finley S. Collaboration and Team Science: a Field Guide. Bethesda, MD: National Institutes of Health; 2010. [Google Scholar]
- Börner K, Contractor N, Falk-Krzesinski HJ, et al. A multilevel systems perspective for the science of team science. Science Translational Medicine. 2010;2:cm24. doi: 10.1126/scitranslmed.3001399. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Borrego M, Newswander LK. Definitions of interdisciplinary research: toward graduate-level interdisciplinary learning outcomes. Review of Higher Education. 2010;34:61+. [Google Scholar]
- Churchman CW. The Systems Approach. Dell: 1984. [Google Scholar]
- Contractor N. The emergence of multidimensional networks. Journal of Computer-Mediated Communication. 2009;14:743–747. [Google Scholar]
- Coxon APM. Sorting Data: Collection and Analysis. Thousand Oaks, CA: Sage; 1999. [Google Scholar]
- Crowley S, Eigenbrode SD, O’Rourke M, Wulfhorst JD. Cross-disciplinary localization:a philosophical approach. Multi-Lingual. 2010 Sep 1–4; [Google Scholar]
- Cummings JN, Kiesler S. Collaborative research across disciplinary and organizational boundaries. Social Studies of Science. 2005;35:703–722. [Google Scholar]
- Cummings JN, Kiesler S. Coordination costs and project outcomes in multi-university collaborations. Research Policy. 2007;36:1620–1634. [Google Scholar]
- Cummings JN, Kiesler S. Who collaborates successfully? Prior experience reduces collaboration barriers in distributed interdisciplinary research. Paper presented at: Proceedings of the ACM 2008 Conference on Computer Supported Cooperative Work; San Diego, CA. 2008. [Google Scholar]
- Department of Energy. Energy Innovation Hubs. US Department of Energy; 2009. [Google Scholar]
- Derry SJ, Gernsbacher MA, Schunn CD. Interdisciplinary Collaboration: an Emerging Cognitive Science. Mahwah, NJ: Lawrence Erlbaum; 2005. [Google Scholar]
- Disis M, Slattery J. The road we must take: multidisciplinary team science. Science Translational Medicine. 2010;2:22cm29. doi: 10.1126/scitranslmed.3000421. [DOI] [PubMed] [Google Scholar]
- Emery FE. Systems Thinking: Selected Readings. Harmondsworth: Penguin; 1969. [Google Scholar]
- Everitt B. Cluster Analysis. 2nd edn. New York, NY: Halsted Press, a division of John Wiley & Sons; 1980. [Google Scholar]
- Falk-Krzesinski H. Science of Team Science. Northwestern University; 2010. EndNote Reference Library. [Google Scholar]
- Falk-Krzesinski HJ, Börner K, Contractor N, et al. Advancing the science of team science. Clinical and Translational Sciences. 2010a;3:263–266. doi: 10.1111/j.1752-8062.2010.00223.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Falk-Krzesinski HJ, Hall K, Stokols D, Vogel A. Wikipedia: the Free Encyclopedia. Wikimedia Foundation; 2010b. Science of team science. [Google Scholar]
- Fiore SM. Interdisciplinarity as teamwork: how the science of teams can inform team science. Small Group Research. 2008;39:251–277.
- Fuqua J, Stokols D, Gress J, et al. Critical issues in the study of transdisciplinary scientific collaboration. Substance Use and Misuse. 2004;39:2073–2074.
- Gray B. Enhancing transdisciplinary research through collaborative leadership. American Journal of Preventive Medicine. 2008a;35:S124–S132. doi: 10.1016/j.amepre.2008.03.037.
- Gray DO. Making team science better: applying improvement-oriented evaluation principles to evaluation of cooperative research centers. In: Coryn CLS, Scriven M, editors. New Directions for Evaluation. Wiley Periodicals, Inc; 2008b. pp. 73–87.
- Guimerà R, Uzzi B, Spiro J, Amaral LAN. Team assembly mechanisms determine collaboration network structure and team performance. Science. 2005;308:697–702. doi: 10.1126/science.1106340.
- Hackman JR. The design of work teams. In: Lorsch JW, editor. Handbook of Organizational Behavior. Englewood Cliffs, NJ: Prentice-Hall; 1987. pp. 315–342.
- Hackman JR. Groups that Work (and Those that Don’t): Creating Conditions for Effective Teamwork. 1st edn. San Francisco, CA: Jossey-Bass; 1990.
- Hackman JR. Collaborative Intelligence: Using Teams to Solve Hard Problems. San Francisco, CA: Berrett-Koehler; 2011.
- Haines V, Godley J, Hawe P. Understanding interdisciplinary collaborations as social networks. American Journal of Community Psychology. 2010:1–11. doi: 10.1007/s10464-010-9374-1.
- Hall KL, Feng AX, Moser RS, et al. Moving the science of team science forward: collaboration and creativity. American Journal of Preventive Medicine. 2008a;35:S243–S249. doi: 10.1016/j.amepre.2008.05.007.
- Hall KL, Stokols D, Moser RP, et al. The collaboration readiness of transdisciplinary research teams and centers: findings from the National Cancer Institute’s TREC Year-One evaluation study. American Journal of Preventive Medicine. 2008b;35:S161–S172. doi: 10.1016/j.amepre.2008.03.035.
- Hirsch Hadorn G. Handbook of Transdisciplinary Research. Dordrecht/London: Springer; 2008.
- Huerta MF, Farber GK, Wilder EL, et al. NIH roadmap interdisciplinary research initiatives. PLoS Computational Biology. 2005;1:e59. doi: 10.1371/journal.pcbi.0010059.
- Jordan GB. Factors influencing advances in basic and applied research: variation due to diversity in research profiles. In: Hage J, Meeus MTH, editors. Innovation, Science, and Institutional Change. Oxford, UK: Oxford University Press; 2006. pp. 173–195.
- Kagan JM, Rosas SR, Trochim W. Integrating utilization-focused evaluation with business process modeling for clinical research improvement. Research Evaluation. 2010 Oct;19(4):239–250. doi: 10.3152/095820210X12827366906607.
- Kahn RL. The MacArthur Foundation Program in Mental Health and Human Development: an experiment in scientific organization. A MacArthur Foundation Occasional Paper. 1992.
- Kessel FS, Rosenfield PL, Anderson NB. Interdisciplinary Research: Case Studies from Health and Social Science. 2nd edn. Oxford/New York: Oxford University Press; 2008.
- Keyton J, Ford DJ, Smith FL. A mesolevel communicative model of collaboration. Communication Theory. 2008;18:376–406.
- Klein JT. Crossing Boundaries: Knowledge, Disciplinarities, and Interdisciplinarities. Charlottesville, VA: University Press of Virginia; 1996.
- Klein JT. Education. In: Hirsch Hadorn G, Hoffmann-Riem H, Biber-Klemm S, editors. Handbook of Transdisciplinary Research. Dordrecht/London: Springer Science; 2008. pp. 399–410.
- Klein JT. A taxonomy of interdisciplinarity. In: Frodeman R, Klein JT, Mitcham C, editors. The Oxford Handbook of Interdisciplinarity. Oxford, UK: Oxford University Press; 2010. pp. 15–30.
- Kruskal JB, Wish M. Multidimensional Scaling. Sage University Paper Series on Quantitative Applications in the Social Sciences, number 07-011. Newbury Park, CA: Sage; 1978.
- Leischow SJ, Best A, Trochim WM, et al. Systems thinking to improve the public’s health. American Journal of Preventive Medicine. 2008;35:S196–S203. doi: 10.1016/j.amepre.2008.05.014.
- Mabry PL, Olster DH, Morgan GD, Abrams DB. Interdisciplinarity and systems science to improve population health: a view from the NIH Office of Behavioral and Social Sciences Research. American Journal of Preventive Medicine. 2008;35:S211–S224. doi: 10.1016/j.amepre.2008.05.018.
- Mason J. Qualitative Researching. London: Sage; 1996.
- Mâsse LC, Moser RP, Stokols D, et al. Measuring collaboration and transdisciplinary integration in team science. American Journal of Preventive Medicine. 2008;35:S151–S160. doi: 10.1016/j.amepre.2008.05.020.
- McGrath JE. Social Psychology: a Brief Introduction. New York: Holt, Rinehart & Winston; 1964.
- Miller K. Successful collaborations: social scientists who study science have noticed a trend. Biomedical Computation Review. Simbios at Stanford University, National NIH Center for Biomedical Computing; 2008. pp. 7–15.
- Misra S, Stokols D, Hall KL, Feng A. Transdisciplinary training in health research: distinctive features and future directions. In: Kirst M, Schaefer-McDaniel N, Huang S, O’Campo P, editors. Converging Disciplines: a Transdisciplinary Research Approach to Urban Health Problems. New York: Springer; 2010.
- Mitrany M, Stokols D. Gauging the transdisciplinary qualities and outcomes of doctoral training programs. Journal of Planning Education and Research. 2005;24:437–449.
- Nash JM. Transdisciplinary training: key components and prerequisites for success. American Journal of Preventive Medicine. 2008;35:S133–S140. doi: 10.1016/j.amepre.2008.05.004.
- NAS, National Academy of Sciences. Facilitating Interdisciplinary Research. Washington, DC: NAS; 2004.
- NIH, National Institutes of Health. The NIH Common Fund Interdisciplinary Research. NIH; 2010.
- NSF, National Science Foundation. NSF-Wide Investments. NSF; 2008.
- Olson GM, Olson JS. Distance matters. Human-Computer Interaction. 2000;15:139–178.
- Olson JS, Hofer EC, Bos N, et al. A theory of remote scientific collaboration (TORSC). In: Olson GM, Zimmerman A, Bos N, editors. Scientific Collaboration on the Internet. Cambridge, MA: MIT Press; 2008.
- Osborn AF. Your Creative Power. New York: Scribner; 1948.
- Paletz SBF, Schunn CD. A social-cognitive framework of multidisciplinary team innovation. Topics in Cognitive Science. 2010;2:73–95. doi: 10.1111/j.1756-8765.2009.01029.x.
- Quinlan KM, Kane M, Trochim W. Evaluation of large research initiatives: outcomes, challenges, and methodological considerations. New Directions for Evaluation. 2008;118:61–72.
- Rhoten D. A Multi-Method Analysis of the Social and Technical Conditions for Interdisciplinary Collaboration. Hybrid Vigor Institute; 2003.
- Rhoten D. Interdisciplinary research: trend or transition. Items and Issues: Social Science Research Council. 2004;5:6–11.
- Rhoten D, Parker A. Risks and rewards of an interdisciplinary research path. Science. 2004;306:2046. doi: 10.1126/science.1103628.
- Robinson JM, Trochim WMK. An examination of community members’, researchers’ and health professionals’ perceptions of barriers to minority participation in medical research: an application of concept-mapping. Ethnicity and Health. 2007;12:521–539. doi: 10.1080/13557850701616987.
- Rosenberg S, Kim MP. The method of sorting as a data gathering procedure in multivariate research. Multivariate Behavioral Research. 1975;10:489–502. doi: 10.1207/s15327906mbr1004_7.
- Rosenfield PL. The potential of transdisciplinary research for sustaining and extending linkages between the health and social-sciences. Social Science and Medicine. 1992;35:1343–1357. doi: 10.1016/0277-9536(92)90038-r.
- Stillman F, Hoang M, Linton R, et al. Mapping tobacco industry strategies in South East Asia for action planning and surveillance. Tobacco Control. 2008;17. doi: 10.1136/tc.2006.017988.
- Stipelman B, Feng A, Hall K, et al. The relationship between collaborative readiness and scientific productivity in the Transdisciplinary Research on Energetics and Cancer (TREC) Centers. Annals of Behavioral Medicine. 2010;39:s143.
- Stokols D. Toward a science of transdisciplinary action research. American Journal of Community Psychology. 2006;38:63–77. doi: 10.1007/s10464-006-9060-5.
- Stokols D, Fuqua J, Gress J, et al. Evaluating transdisciplinary science. Nicotine & Tobacco Research. 2003;5(Suppl 1):S21–S39. doi: 10.1080/14622200310001625555.
- Stokols D, Harvey R, Gress J, et al. In vivo studies of transdisciplinary scientific collaboration: lessons learned and implications for active living research. American Journal of Preventive Medicine. 2005;28:202–213. doi: 10.1016/j.amepre.2004.10.016.
- Stokols D, Hall KL, Taylor BK, Moser RP. The science of team science: overview of the field and introduction to the supplement. American Journal of Preventive Medicine. 2008a;35:S77–S89. doi: 10.1016/j.amepre.2008.05.002.
- Stokols D, Misra S, Moser RP, et al. The ecology of team science: understanding contextual influences on transdisciplinary collaboration. American Journal of Preventive Medicine. 2008b;35:S96–S115. doi: 10.1016/j.amepre.2008.05.003.
- Stokols D, Hall KL, Moser RP, et al. Evaluating cross-disciplinary team science initiatives: conceptual, methodological, and translational perspectives. In: Frodeman R, Klein JT, Mitcham C, editors. The Oxford Handbook on Interdisciplinarity. New York: Oxford University Press; 2010. pp. 471–493.
- Trochim WMK. An introduction to concept-mapping for planning and evaluation. Evaluation and Program Planning. 1989;12:1–16.
- Trochim WMK. The reliability of concept-mapping. Annual Conference of the American Evaluation Association; Dallas, TX. 1993.
- Trochim W, Kane M. Concept-mapping: an introduction to structured conceptualization in health care. International Journal for Quality in Health Care. 2005;17:187–191. doi: 10.1093/intqhc/mzi038.
- Trochim WM, Cabrera DA, Milstein B, et al. Practical challenges of systems thinking and modeling in public health. American Journal of Public Health. 2006;96:538–546. doi: 10.2105/AJPH.2005.066001.
- Trochim WM, Marcus SE, Masse LC, et al. The evaluation of large research initiatives: a participatory integrative mixed-methods approach. American Journal of Evaluation. 2008;29:8–28.
- Von Bertalanffy L. An outline of general system theory. British Journal for the Philosophy of Science. 1950;1:134–165.
- Wagner CS, Roessner JD, Bobb K, et al. Approaches to understanding and measuring interdisciplinary scientific research (IDR): a review of the literature. Journal of Informetrics. 2011:1–13. In press.
- Weller SC, Romney AK. Systematic Data Collection. Newbury Park, CA: Sage; 1988.
- Whitfield J. Group theory. Nature. 2008;455:720–723. doi: 10.1038/455720a.
- Wicker AW. Processes which mediate behavior-environment congruence. Behavioral Science. 1972;17:265–277.
- Wuchty S, Jones BF, Uzzi B. The increasing dominance of teams in production of knowledge. Science. 2007;316:1036–1038. doi: 10.1126/science.1136099.