Table 2. Measures of research collaboration outcomes

| Author | Outcome | Sample/population | Detail about outcome measurement | If scale: number of items | If scale: response options | Reliability | Validity |
|---|---|---|---|---|---|---|---|
| Counts or numerical representations of products | | | | | | | |
| Ameredes et al. [45] | Number of extramural grants submitted and received | 110 trainees involved in multidisciplinary translational teams at one NIH CTSA institution | Not specified how data for counts obtained | NA | NA | NR | NR |
| Number of publications over 3-year period | Not specified how data for counts obtained | ||||||
| Cummings et al. [13] | Number of publications in final NSF reports over 4- to 9-year period | 549 research groups funded by NSF from 2000 to 2004 | Count of publications for each research group listed in final report to NSF or, if no final report, last annual report, between 2000 and 2009. Included archival conference proceedings, journal articles, book chapters, public reports on project. Each group’s publications counted only once, regardless of number of co-authors | ||||
| Number of cumulative publications for research group pre/post NSF funding listed in Google Scholar | Count of publications for each research group extracted through Google Scholar search engine divided into pre-NSF funding and post-NSF funding (through 2009). Each group’s publications counted only once, regardless of number of co-authors | Reliability evaluated for 10% of sample using raters recruited via Amazon’s Mechanical Turk. 5 raters compared each extracted publication with corresponding author’s Web page or resume publication listings. Publication counted if 4 of 5 raters agreed. Rater agreement was 94% | |||||
| Number of cumulative publications for research group pre/post NSF funding listed in Thomson Reuters (formerly ISI) Web of Science and Social Science database | Count of publications for each research group extracted through Thomson Reuters (formerly ISI) Web of Science and Social Science database divided into pre-NSF funding and post-NSF funding (through 2009). Each group’s publications counted only once, regardless of number of co-authors ||||||
| Number of cumulative citations for research group publications pre/post NSF funding listed in Thomson Reuters (formerly ISI) Web of Science and Social Science database | Count of citations of each unique publication for each research group extracted through Thomson Reuters (formerly ISI) Web of Science and Social Science database divided into pre-NSF funding and post-NSF funding (through 2009). Each group’s publications counted only once, regardless of number of co-authors ||||||
| Hughes et al. [33] | Number of co-authored publications over 4-year period | Active investigators at one NIH CTSA site | Count of co-authored publications between 2006 and 2009 involving two or more investigators. Used Ruby scripts to automatically harvest publication information from the US National Center for Biotechnology Information’s PubMed database | NR | |||
| Number of co-authored grant proposals over 4-year period | Count of co-authored grant proposals submitted between 2006 and 2009 involving two or more investigators (institutional data and additional data from NIH RePORT) | ||||||
| Lee and Bozeman [38] | Number of peer-reviewed journal papers (as a measure of individual productivity) | 443 science faculty affiliated with NSF or DOE research centers at US universities | Count of peer-reviewed journal papers from 2001-2003 obtained from SCI-Expanded through the ISI Web of Science. Authors identified by matching name, department, and institution found on respondent’s CV | ||||
| Fractional count of co-authored peer-reviewed journal papers (divided by number of co-authors) (as a measure of individual productivity) | Count of co-authored peer-reviewed journal papers divided by number of co-authors from 2001 to 2003 obtained from SCI-Expanded through the ISI Web of Science. Authors identified by matching name, department, and institution found on respondent’s CV (an illustrative computation sketch follows the table notes) ||||||
| Lee [46] | Number of invention disclosures | 427 university faculty – scientists and engineers from research-intensive universities on the NSF list of top 100 research universities | Self-reported number on one-time anonymous survey ||||
| Number of patents obtained | Self-reported number on one-time anonymous survey | ||||||
| Number of patents pending | Self-reported number on one-time anonymous survey | ||||||
| Lööf and Broström [47] | Income from new or improved products introduced during 1998-2000 as a proportion of sales income in year 2000 | 2071 manufacturing and business firms in Sweden that have collaborations with universities – data from Community Innovation Survey III in Sweden 1998-2000 | Survey data reported by firms; income from new or improved products introduced during 1998-2000 as a proportion of sales income in year 2000 | ||||
| Number of patent applications by industry partners in 2000 | Survey data reported by firms; count of patent applications by industry partners in 2000 | ||||||
| Luke et al. [48] | Number of grant submissions over 4-year period | 1272 research members of one institutional NIH CTSA center | Counts of new extramural submissions over 4-year period as maintained in university database, including federal, state, local, and foundation grants, contracts, programs, and sub-agreements, excluding renewals, resubmissions, etc. | ||||
| Number of publications over 4-year period | Counts of publications over 5-year period based on bibliometric data obtained from Elsevier Scopus | ||||||
| Mâsse et al. [19] | Number of submitted and published articles and abstracts | 216 research faculty, staff, and trainees from NCI TTURC | Counts of submitted and published articles and abstracts (to date) reported in written survey by participants (research faculty, staff, and trainees) in Year 3 of center | ||||
| Petersen [49] | Normalized number of publications per year | 473 pairs of research collaborators with publications indexed in Thomson Reuters Web of Knowledge (spanning 15,000 career years, 94,000 publications, and 166,000 collaborators) | Publication counts aggregated over relevant time periods, normalized by the baseline average calculated over the period of analysis (see the illustrative sketch following the table notes) ||||
| Normalized number of citations per year | Citation count in a given census year converted to a normalized z score (to correct for older publications that have more time to accrue citations than newer publications) | ||||||
| Philbin [35] | Number of publications and conference proceedings (as indicator of technology knowledge sharing and improvement) | None – measure proposed based on lit review and interviews with 32 university and industry representatives involved in research collaborations | Counts of publications in scientific journals and peer-reviewed conference proceedings (no suggestion regarding what data source would be used to ascertain counts) | NA | NA | ||
| Quality of research publication as measured by a citation index (as indicator of technology knowledge sharing and improvement) | Citation index value for each publication (exact citation index not specified) | ||||||
| Number of students associated with collaboration (as indicator of technology knowledge sharing and improvement) | Count of students, including postgraduate masters and PhD levels, associated with the collaboration ||||||
| Financial value of projects according to sponsor and sector (as indicator of project and business knowledge sharing and improvement) | US dollar calculation of financial value of projects according to sponsor and sector, including measures for growth and decline and market share (no calculation details provided) | ||||||
| Third-party recognition of collaboration results (e.g., awards) (as indicator of technology sustainability of collaboration) | Level of third-party recognition of collaboration results (e.g., number of awards) (no specifics regarding measurement other than count of awards) | ||||||
| Number of university students recruited as new staff into the company (as indicator of social sustainability of collaboration) | Number of students from the university recruited as new staff into the company (no specifics provided) | ||||||
| Completion of project milestones or deliverables (as indicator of projects and business knowledge sharing and improvement) | Description of completion of project milestones or deliverables that were achieved according to time, cost, and quality requirements (no specifics provided) | ||||||
| Number of staff exchanges and student placements (as indicator of social knowledge sharing and improvement) | Counts of staff exchanges and student placements (no specifics provided) ||||||
| Attendance at key events (as indicator of social knowledge sharing and improvement) | Percentage attendance at key events, such as customer and milestone reviews and invited lectures (no specifics as to what the denominator would be) | ||||||
| Value of follow-on work and spin-off projects that have arisen as a consequence of initial funding (as indicator of project and business sustainability of collaboration) | Value of follow-on work and “spin-off” projects that have arisen as a consequence of initial funding (no specifics of how value would be quantified) | ||||||
| Value of intellectual property including patents and license agreements arising from the collaboration (as indicator of project and business sustainability of collaboration) | Value of intellectual property including patents and license agreements arising from the collaboration (no specifics regarding how value is quantified) | ||||||
| Long-term return on investment accrued from research investment (as indicator of project and business sustainability of collaboration) | Level of long-term return on investment accrued from research investment (no specifics provided) | ||||||
| Efficiency of contract management (as indicator of projects and business knowledge sharing and improvement) | Measure of the efficiency of contract management (e.g., submission of invoices) (no specifics provided) | ||||||
| Extent of adoption of research results in new products and services developed by the company (as indicator of technology sustainability of collaboration) | Extent of adoption of research results in new products and services by the company (no specifics provided) | ||||||
| Stvilia et al. [50] | Number of publications | 89 scientific teams conducting experiments at the NHMFL between 2005 and 2008 | Counts of publications from a list of publications between 2005 and 2009 downloaded from the NHMFL website | NR | NR ||
| Trochim et al. [17] | Number of citations: total, self-adjusted, and expected | 216 research faculty, staff, and trainees from NCI TTURC | Bibliometric analysis of publications resulting from TTURC research and citing TTURC grant. Analysis produces total, self-adjusted, and expected citation counts ||||
| Wang and Hicks [51] | Average number of citations for new or repeated co-author teams | 43,996 publications from 1310 US scientists funded by NSF | Average number of forward citations (over 5-year period post publication) per paper for new and repeated (within last 3 years) co-author team papers based on lifetime publication data from Thomson Reuters Web of Science ||||
| Wuchty et al. [8] | Number of citations for each paper/patent | 19.9 million papers in the ISI Web of Science database and 2.1 million patents (all US patents registered since 1975) | Count of all research articles in ISI Web of Science database published since 1944; count of all US registered patents since 1975 | ||||
| Quality indicators of counted products | | | | | | | |
| Lee et al. [52] | Novelty of research | 1493 research-active faculty in science, engineering, and social sciences with publications included in Thomson Reuters Web of Science with at least two authors from the same institution on the paper | Formula cited in Lee et al. (2014) paper uses two steps: (1) calculate the commonness of co-cited journal pairs for the whole Web of Science database; (2) calculate the novelty of papers based on their references for the sampled papers (and taking only the 10th percentile) | NA | NA | NR | NR |
| Impact of research based on citation percentiles | High impact defined as being in the top 1% of most cited papers in that Web of Science field in that year | ||||||
| Trochim et al. [17] | Journal impact factor for each publication | Research faculty, staff, and trainees at NCI TTURC | Bibliometric analysis of journal impact factor for each publication resulting from TTURC research and citing TTURC grant; defined as the average number of citations to articles published in the journal in the previous 2 years ||||
| Journal performance indicator for each publication | Bibliometric analysis of journal performance indicator for each publication resulting from TTURC research and citing TTURC grant; defined as average number of citations to date for all publications in a journal in a particular year ||||||
| Field performance indicator for each publication | Bibliometric analysis of field performance indicator for each publication resulting from TTURC research and citing TTURC grant; defined as journal performance indicator for all journals in a field | ||||||
| 5-year journal IF for each publication | Bibliometric analysis of 5-year journal IF for each publication resulting from TTURC research and citing TTURC grant; defined as average number of citations to publications over a 5-year period ||||||
| Wuchty et al. [8] | RTI with and without self-citations | 19.9 million papers in the ISI Web of Science database and 2.1 million patents (all US patents registered since 1975) | RTI = mean number of citations received by team-authored work divided by the mean number of citations received by solo-authored work (>1 = team produced more highly cited papers than sole authors; <1 = vice versa; if = 1, no difference between sole and team authors); an illustrative sketch follows the table notes ||||
| Self-reported perceptions of outcomes | | | | | | | |
| Ameredes et al. [45] | Perceived competency (confidence) in NIH CTSA-recommended translational research competencies | 32 early career scholars at one NIH CTSA institution | Each of 99 items (reflecting 15 competencies) rated in written survey by participants; final score used in analysis was an average of items for each competency sub-scale | 99 | 6-point scale ranging from 0 to 5; specific anchors not provided; higher scores indicated greater confidence | NR | Construct validity: Principal components analysis |
| Greene et al. [37] | Perceived impact (construct unclear because only sample items were provided and scales were not constructed); six sample questions appear to measure perceived impact on health plan, research organization, and individual | Investigators and project staff from the HMO Cancer Research Network over a 5-year period | Measured using structured questions with Likert-type responses included in annual survey sent to all consortium sites/members | Six sample questions only | 4-point Likert scale ranging from “agree” to “disagree” (with a “can’t evaluate” option) | NR |
| Hager et al. [53] | Perceived research self-efficacy (skill) | Six interprofessional faculty fellows (dentists, pharmacists, physicians) | Research self-efficacy scale developed by Bieschke, Bishop and Garcia [54] | Not specified | 100-point response scale; no anchors provided | ||
| Hall et al. [12] | Investigators’ perceptions of center as a whole, as well as how they feel as a member of center in first year of center | 56 investigators and staff from four NCI TREC Centers | Semantic-differential/impression Scale: Ratings on a 7-point continuum on word/phrase pairs such as conflicted – harmonious; not supportive – supportive; scientifically fragmented – scientifically integrated | Not specified | 7-point continuum; no anchors provided | Cronbach’s alpha = 0.98 |
| Completing Deliverables Scale: Investigators’ expectations for their projects’ meeting projected year 1 deliverables | Not specified how final score computed | One item for each project | 5-point Likert scale ranging from “highly unlikely” to “highly likely”; each project rated separately | NA | Convergent validity: Inverse correlation between duration of involvement in transdisciplinary projects at their center and researchers’ confidence in meeting year 1 deliverables | ||
| Hall et al. [12]a | Collaborative Productivity Scale: Perception of collaborative productivity within center, including productivity of scientific meetings, centers’ overall productivity | Rate the collaboration within your center: Productivity of collaborative meetings, overall productivity of center (rated on 5-point scale “very poor” to “excellent”); in general, collaboration has improved your research productivity (5-point scale from “strongly disagree” to “strongly agree”). Unclear whether three items were summed, summed and averaged, or if some other calculation was used to determine final scale value | Unclear; appears to be three items | 5-point Likert scale ranging from “very poor” to “excellent”. Also asked to respond to a statement about collaboration and research productivity, rating on a 5-point scale from “strongly disagree” to “strongly agree” with central “neither” response option | Cronbach’s alpha = 0.95 | Convergent validity: the better the perceived collaborative productivity, the better the collaboration satisfaction, more confidence in completion of deliverables, more perceived institutional resources for collaboration, better impressions, better interpersonal collaborations | |
| Hall et al. [12]b | Cross-Disciplinary Collaboration Activities Scale: Perceived frequency of engagement in collaborative activities outside of one’s primary field | Please assess the frequency with which you typically engage in each of the activities listed below (e.g., read journals or publications outside of your primary field) | 9 | 7-point Likert scale ranging from “never” to “weekly” | Cronbach’s alpha = 0.81 | Convergent validity: Higher frequency of cross-disciplinary collaborative activities correlated with stronger multidisciplinary and interdisciplinary/transdisciplinary research orientation | |
| Hanel and St-Pierre [55] | Originality of Innovation: Firm representatives’ perceptions/ratings of most important innovation in terms of originality | 5944 manufacturing provincial enterprises included in the Statistics Canada Survey of Innovation 1999 that reported research and development collaborative arrangements with universities to develop new or significantly improved products or manufacturing processes during the previous 3 years (1997–1999) | Firms asked if most important innovation was a “world first,” a “Canadian first,” or a “firm first” | NA | NA | NR | NR |
| Mâsse et al. [19]a | Methods Index: Perceptions of new methods created (in general), specifically development or refinement of methods for gathering data | 216 research faculty, staff, and trainees from NCI TTURC | Average of seven items – each item rated in written survey by participants (research faculty, staff, and trainees) | 7 | 4-point Likert scale ranging from “no progress” to “excellent progress”; also option of “does not apply” | Convergent validity: Correlations with satisfaction with collaboration, impact of collaboration, trust and respect, and transdisciplinary integration | |
| Science and Models Index: Perceptions of new science and models of tobacco use; to include understanding multiple determinants of the stages of nicotine addiction | Average of 17 items; each item rated in written survey by participants (research faculty, staff, and trainees) | 17 | |||||
| Improved Interventions Index: Perceptions of improved interventions developed (in general – most items not specific to tobacco use); specifically progress in pharmacologic interventions | Average of 12 items; each item rated in written survey by participants (research faculty, staff, and trainees) | 12 | |||||
| Oetzel et al. [23]c | Partnership Synergy: Partner’s ability to develop goals, recognize challenges, respond to needs, and work together. Domain: Intervention/Research | 138 PIs/PDs and 312 academic or community partners from 294 CBPR projects with US federal funding in 2009 | Each item rated in written survey by participants. Final score used in analysis was an average of five items | 5 | 5-point Likert scale: 1 = not at all; 2 = very little; 3 = somewhat; 4 = mostly; 5 = to a great extent | Cronbach’s alpha = 0.90 | Construct validity: Confirmatory factor analysis. Convergent validity: Correlations with structural/individual dynamics scales, relational dynamics scales, and other outcome variables |
| Systems and Capacity Changes: Partner Capacity Building. Develops the skills to benefit individual members. Domain: Outcomes | Each item rated in written survey by participants. Final score used in analysis was an average of three items | 4 | Cronbach’s alpha = 0.80 |||||
| Systems and Capacity Changes: Agency Capacity Building. Develops the reputation and the skills of agencies involved in the partnership. Domain: Outcomes | Each item rated in written survey by participants. Final score used in analysis was an average of three items | 4 | Cronbach’s alpha = 0.87 |||||
| Systems and Capacity Changes: Changes in Power Relations. Degree to which power and capacity have been developed in the community members. Domain: Outcomes | Each item rated in written survey by participants. Final score used in analysis was an average of five items | 5 | 5-point Likert scale: 1 = strongly disagree; 2 = disagree; 3 = neither agree nor disagree; 4 = agree; 5 = strongly agree | Cronbach’s alpha = 0.81 ||||
| Systems and Capacity Changes: Sustainability of Partnership/Project. Likelihood of the project and partnership continuing beyond the funding period. Domain: Outcomes | Each item rated in written survey by participants. Final score used in analysis was an average of three items | 3 | Cronbach’s alpha = 0.71 |||||
| Health Outcomes: Community Transformation. Policy changes and community improvement. Domain: Outcomes | Each item rated in written survey by participants. Final score used in analysis was an average of four items | 7 | 5-point Likert scale: 1 = not at all; 2 = very little; 3 = somewhat; 4 = mostly; 5 = to a great extent | Cronbach’s alpha = 0.79 ||||
| Health Outcomes: Community Health Improvement. Improvement of health for the community as a result of the project | Single item rated in written survey by participants: Overall, how much did your research project improve the health of the community? | 1 | 5-point Likert scale: 1 = not at all; 2 = a little; 3 = somewhat; 4 = quite a bit; 5 = a lot | NA |||
| Philbin [35] | Satisfaction: Perception of collaborators’ satisfaction (as indicator of social knowledge sharing and improvement) | None – measure proposed based on literature review and interviews with 32 university and industry representatives involved in research collaborations | Satisfaction of students, academic staff, and industrial contacts involved in the collaboration (no specifics provided) | NA | NA | NA | NA |
| Value of Technology Improvement Delivered: Company’s perception of value of technology improvements delivered (as indicator of technology sustainability of collaboration) | Survey of company representatives to measure the value of technology improvements delivered associated with the research collaboration (no specifics provided) | ||||||
| University and company staffs’ perceptions of incorporation of knowledge developed into continuing professional development (as indicator of technology sustainability of collaboration) | Measurement of how knowledge developed is being incorporated into continuing professional development for both university and company staff involved (no specifics provided) ||||||
| Perceptions of relevance of research to company’s business objectives (as indicator of projects and business knowledge sharing and improvement) | Measure of relevance of research to company’s business objectives (no specifics provided) | ||||||
| University and company’s perceptions of the percentage alignment of research to organizational strategies (as indicator of project and business sustainability of collaboration) | Percentage alignment of research to organizational strategy, for both university and company perspectives (no specifics provided) | ||||||
| Perceptions of the extent of personal relationships between company and university resulting from the collaboration (as indicator of social sustainability of collaboration) | Numerical measure for the extent of personal relationships between company and university resulting from the collaboration (no specifics provided) | ||||||
| Perceptions of the level of interactions between senior levels of the collaborators, especially at company board level and senior academic faculty level (as indicator of social sustainability of collaboration) | Level of interactions between senior levels of the collaborators, especially at company board level and senior academic faculty level (no specifics provided) | ||||||
| Perceptions of the level of influence by university faculty on company’s corporate strategy (as indicator of social sustainability of collaboration) | Level of influence by university faculty on company’s corporate strategy (no specifics provided) | ||||||
| Trochim et al. [17]d | Methods Progress Scale: Perceptions of progress on development of methods in last 12 months (intermediate-term outcome) | 216 research faculty, staff, and trainees from NCI TTURC | Self-report survey items administered annually | 7 | 4-point Likert scale: 1 = no progress; 2 = some progress; 3 = good progress; 4 = excellent progress | NR | NR |
| Science and Models Scale: Perceptions of progress on development of science and models in last 12 months (intermediate-term outcome) | 17 ||||||
| Progress on Development of Interventions Index: Perceptions of progress on development of new and improved interventions in last 12 months (intermediate-term outcome) | 12 ||||||
| Policy Impact Index: Perceptions of progress on policy outcomes in last 12 months (long-term outcome) | 4 | Yes/no | |||||
| Translation to Practice Index: Perceptions of progress on translation into practice outcomes in last 12 months (long-term outcome) | 9 | ||||||
| Health Outcomes Impact Scale: Perceptions of optimism regarding positive health outcomes from center research within next 5 years (long-term outcome) | 6 | 5-point Likert scale: 1 = not at all optimistic; 2 = somewhat optimistic; 3 = moderately optimistic; 4 = very optimistic; 5 = extremely optimistic ||||
| Peer review perceptions of outcomes | | | | | | | |
| Hall et al. [12]e | Written Products Protocol: Cross-disciplinary collaboration and productivity | 21 center developmental project proposals from four NCI TREC centers | Unclear how final score calculated | 37-item protocol used to evaluate proposals. Dimensions of cross-disciplinarity assessed: disciplines represented, levels of analysis, type of cross-disciplinary integration, scope of transdisciplinary integration, and an overall assessment of the general scope of each proposal | Items describing proposal with various response formats; one item – rate whether “unidisciplinary”, “multidisciplinary”, “interdisciplinary” or “transdisciplinary” proposal; two items re: transdisciplinary integration and scope of proposal using 10-point Likert scale ranging from “none” to “substantial” | Inter-rater reliability of r = 0.24–0.69. Highest inter-rater reliability was for experimental types (0.69), number of analytic levels (0.59), disciplines (0.59), and scope (0.52). Lowest inter-rater reliability was for type of cross-disciplinary integration (0.24) | Convergent validity: The higher the number of disciplines in a proposal, the broader its integrative score and the larger its number of analytic levels. The higher the type of disciplinarity, the broader its overall scope |
| Philbin [35] | Knowledge Improvement Index: Level of knowledge improvement (as indicator of technology knowledge sharing and improvement) | None – measure proposed based on literature review and interviews with 32 university and industry representatives involved in research collaborations | Independent numerical rating of the level of knowledge improvement (unit of analysis not specified; no specifics provided) | NA | NA | NA | NA |
| Trochim et al. [17]d | Peer review of progress on development of methods in last 12 months (intermediate-term outcome) | 216 research faculty, staff, and trainees from NCI TTURC | Peer review of Subproject Annual Progress Report Summaries from seven centers for progress on outcome; two randomly assigned reviewers for each | 1 | 5-point scale; anchors not specified | >80% agreement by both raters with no more than 1-point difference. Also, both Kendall’s tau b and Spearman’s rho demonstrated positive and significant agreement | NR |
| Peer review of progress on development of science and models in last 12 months (intermediate-term outcome) | Peer review of Subproject Annual Progress Report Summaries from seven centers for progress on outcome; two randomly assigned reviewers for each | 1 |||||
| Peer review of progress on development of interventions in last 12 months (intermediate-term outcome) | Peer review of Subproject Annual Progress Report Summaries from seven centers for progress on outcome; two randomly assigned reviewers for each | 1 |||||
| Qualitative descriptions of outcomes | | | | | | | |
| Armstrong and Jackson-Smith [56] | Improved interdisciplinary understanding for: individuals, research team, and university/systematic (Integrative Capacity) | 24 team members: 11 academic researchers; 6 non-academic team members (from related organizations and companies); 7 graduate students | 30- to 60-minute semi-structured interviews with qualitative analysis of interview transcripts | NA |||
| Team capacity to work in integrated manner, for: individuals, research team, and university/systematic (Integrative Capacity) | 30- to 60-minute semi-structured interviews with qualitative analysis of interview transcripts | ||||||
| Development of research plan for: individuals, research team, and university/systematic (Integrative Capacity) | 30- to 60-minute semi-structured interviews with qualitative analysis of interview transcripts ||||||
| Hager et al. [53] | Perceptions of impact of faculty development collaborative research fellowship | Six interprofessional faculty fellows (dentists, pharmacists, physicians) | Qualitative observations made and recorded after each seminar or learning session and analyzed for themes at end of fellowship | ||||
| Stokols et al. [25] | Transdisciplinary Conceptual Integration; e.g., a transdisciplinary economic model (to assess the costs of smoking), new research proposal development by transdisciplinary teams, new directions for transdisciplinary collaborations | NIH TTURC investigators at three centers | Descriptions provided by TTURC investigators through open-ended “periodic interviews” from 1999 to 2004 | ||||
| Vogel et al. [57] | Perceived impacts of transdisciplinary team science | 31 investigators, staff, and trainees from the NCI TREC centers | 15-question in-depth semi-structured interviews (interview guide provided) ||||
| Health indicators/outcomes | | | | | | | |
| Aguilar-Gaxiola et al. [58] | 60 specific categories of community health indicators (e.g., life expectancy; preventable hospitalizations) within an organizing structure of nine determinants of health (e.g., general health status) | 21 health indicator projects | Not provided in this article | NA | NA | ||
NA, not applicable; NR, not reported; NIH, National Institutes of Health; CTSA, Clinical and Translational Science Award; NSF, National Science Foundation; ISI, Institute for Scientific Information; DOE, US Department of Energy; SCI, Science Citation Index; CV, curriculum vitae; NCI, National Cancer Institute; IF, impact factor; TTURC, Transdisciplinary Tobacco Use Research Centers; NHMFL, National High Magnetic Field Laboratory; RTI, relative team impact; TREC, Transdisciplinary Research on Energetics and Cancer; PI, principal investigator; PD, project director; CBPR, community-based participatory research.
a Details obtained by cross-referencing article (TTURC Researcher Survey 2002) from https://cctst.uc.edu/sites/default/files/cis/survey-TTURC_research.pdf [44].
b Details obtained by cross-referencing article (TREC Baseline survey) from https://www.teamsciencetoolkit.cancer.gov/Public/TSResourceMeasure.aspx?tid=2%26rid=36 [42].
c Original instrument available at http://cpr.unm.edu/research-projects/cbpr-project/index.html (scroll to 2: Quantitative Measures – “Key Informant” and “Community Engagement” survey instruments). Developmental work on measures from Oetzel et al. (2015) continues in an NIH NINR R01 (Wallerstein [PI], 2015–2020, “Engage for Equity” study; see http://cpr.unm.edu/research-projects/cbpr-project/cbpr-e2.html).
d Details obtained by cross-referencing article (TTURC Researcher Survey 2002) from https://cctst.uc.edu/sites/default/files/cis/survey-TTURC_research.pdf [44] and from Kane and Trochim [59].
e Details obtained by cross-referencing article (NCI TREC Written Products Protocol 2006-09-27) from https://www.teamsciencetoolkit.cancer.gov/Public/TSResourceMeasure.aspx?tid=2%26rid=646 [43].
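The count-based measures above are straightforward to operationalize. As one example, the whole and fractional publication counts used by Lee and Bozeman [38] amount to simple arithmetic: each paper counts once toward the whole count and 1/(number of co-authors) toward the fractional count. The sketch below is illustrative only; the input format and function name are assumptions, not taken from the original study.

```python
# Illustrative sketch of whole vs. fractional publication counting
# (cf. Lee and Bozeman [38]); the input format is an assumption.

def publication_counts(papers):
    """papers: list of co-author counts, one entry per peer-reviewed paper."""
    whole = len(papers)                        # each paper counted once
    fractional = sum(1.0 / n for n in papers)  # each paper weighted by 1/co-authors
    return whole, fractional

# Example: three papers with 1, 3, and 4 authors -> whole = 3, fractional ≈ 1.58
print(publication_counts([1, 3, 4]))
```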
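Petersen [49] describes two normalizations: yearly publication counts divided by a baseline average over the period of analysis, and citation counts converted to z-scores within a census year so that older papers' longer citation windows do not inflate comparisons. The sketch below is one plausible reading of those steps under assumed input formats; the source may apply additional adjustments (e.g., log-transformed citations).

```python
# Plausible sketch of the two normalizations described for Petersen [49];
# the exact procedure in the source may differ.
import statistics

def normalized_publication_rate(counts_per_year):
    """Divide each year's publication count by the mean count over the analysis period."""
    baseline = statistics.mean(counts_per_year)
    return [c / baseline for c in counts_per_year]

def citation_z_scores(citation_counts_in_census_year):
    """z-score citation counts within a single census year to offset citation-age effects."""
    mu = statistics.mean(citation_counts_in_census_year)
    sigma = statistics.stdev(citation_counts_in_census_year)
    return [(c - mu) / sigma for c in citation_counts_in_census_year]

# Example: publication counts [2, 4, 6] per year -> normalized rates [0.5, 1.0, 1.5]
print(normalized_publication_rate([2, 4, 6]))
```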
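The relative team impact (RTI) reported by Wuchty et al. [8] is the mean citation count of team-authored work divided by the mean citation count of solo-authored work; computing it with or without self-citations is a matter of which citation counts are supplied. A minimal sketch under an assumed input format:

```python
# Minimal sketch of relative team impact (RTI) as defined by Wuchty et al. [8];
# inputs are lists of per-paper citation counts (assumed format).

def relative_team_impact(team_citations, solo_citations):
    """RTI > 1: team-authored work is more highly cited on average; RTI < 1: solo work is."""
    mean_team = sum(team_citations) / len(team_citations)
    mean_solo = sum(solo_citations) / len(solo_citations)
    return mean_team / mean_solo

# Example: team papers cited [10, 6, 8], solo papers cited [4, 2] -> RTI = 8 / 3 ≈ 2.67
print(relative_team_impact([10, 6, 8], [4, 2]))
```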