Author manuscript; available in PMC: 2019 Aug 1.
Published in final edited form as: Acad Med. 2018 Aug;93(8):1162–1170. doi: 10.1097/ACM.0000000000002119

Charting the Publication and Citation Impact of the NIH Clinical and Translational Science Awards (CTSA) Program From 2006 Through 2016

Nicole Llewellyn 1, Dorothy R Carter 2, Latrice Rollins 3, Eric J Nehl 4
PMCID: PMC6028299  NIHMSID: NIHMS931220  PMID: 29298181

Abstract

Purpose

The authors evaluated publication and citation patterns for articles supported by Clinical and Translational Science Awards (CTSA) hub investment over the first decade of the CTSA program. The aim was to elucidate a pivotal step in the translational process by providing an account of how time, hub maturity, and hub attributes were related to productivity and influence in the academic literature.

Method

In early 2017, the authors collected bibliometric data from PubMed, Web of Science InCites, and NIH iCite for articles citing any CTSA hub grants published from hub inception through 2016. They compiled data on publication and citation rates, and indices of relative citation impact aggregated by hub funding year cohort. They compared hub-level bibliometric activity by multi- versus single-institution structure and total monetary award sums, compiled from NIH RePORTER.

Results

From 2006–2016, CTSA hubs supported over 66,000 publications, with publication rates accelerating as hubs matured. These publications accumulated over 1.2 million citations, with some articles cited over 1,000 times. Indices of relative citation impact indicated that CTSA-supported publications were cited more than twice as often as expected for articles of their publication years and disciplines. Multi-institutional hubs and those awarded higher grant sums exhibited significantly higher publication and citation activity.

Conclusions

The CTSA program is yielding a robust and growing body of influential research findings with consistently high indices of relative citation impact. Preliminary evidence suggests multi-institutional collaborations and more monetary resources are associated with elevated bibliometric activity, and therefore, may be worth their investment.


Since 2006, the National Center for Research Resources and, later, the National Center for Advancing Translational Sciences of the National Institutes of Health (NIH) have contributed approximately $4.5 billion toward establishing a U.S. network of 64 Clinical and Translational Science Awards (CTSA) “hubs” (i.e., organizations created at medical research institutions to serve as centers for clinical and translational science).1,2,3 Within the hubs, scientists, clinicians, educators, research administrators, and community members have pooled their expertise toward the common pursuit of translational science, or the translation of basic science discoveries into medical treatments and health practice. Given these substantial investments, it is necessary to regularly evaluate the impact of the CTSA program on clinical and translational science. Quantification of the full spectrum of translation, from laboratory research to bedside treatment, is elusive. However, bibliometrics, the study of publication and citation activity, provides one way of understanding a pivotal step in the translational process. Publication data describe the dissemination of research results, including what information was made accessible; citation data describe the utilization and influence of those research results across the academic literature. Although bibliometric analysis does not provide a full picture of all scientific communication necessary for translation, the majority of the biomedical knowledge and discoveries that lay the foundation for eventual clinical practice will pass through the academic literature in some form. Thus, examining bibliometric patterns of how research findings are documented and shared in the literature is one method of charting the progress of translational science.

In this report, our goal is to provide a quantitative, longitudinally informed description of publication and citation patterns for articles resulting from CTSA hub investment over the first 10 years of the program. We offer a summary of aggregate bibliometric information across the 64 CTSA hubs and illustrate longitudinal trends in publication rate and scientific impact. Further, we examine two broad hub attributes as they relate to bibliometric activity: multi-institutional status and total monetary award amount. Our approach is intended to provide a comprehensive account of the dissemination of CTSA-supported research overall, across time, and across different types of hubs, and to help explain the extent to which CTSA research has shaped the academic literature.

Citations: Influence on the Scientific Literature

Measuring citation activity around a set of publications is important for understanding the reach and influence of that research. Indeed, this was put forward as a viable common metric for assessing products of the CTSA program.4 CTSA program–wide bibliometric analyses were reported at earlier stages of the program,1,5,6 but the value of past citation analyses was limited by the inability to compare results across publication years and fields. In this study, we employed cutting-edge analytic techniques to normalize results within year of publication and within well-defined areas of research. These innovative methodologies will inform our knowledge of the scope of research supported by the CTSA program from 2006 through 2016, allowing for a consistent understanding of dissemination efforts across hubs of different ages, structures, and disciplinary focuses.

Longitudinal Patterns of Bibliometric Activity

Assessing longitudinal patterns of bibliometric activity confers a valuable understanding of how trajectories manifest over time and affords comparisons across hubs. The 64 CTSA hubs were founded at different points over the 10-year study period, and therefore represent different stages of organizational development. Hubs founded in more recent years are still building their programs and forming early libraries of supported publications. Hubs founded in the earlier years have completed entire funding cycles and may now be operating under renewals or expansions. Mature hubs are expected to have achieved some of their goals for operational capacity and to have amassed long-running collections of supported publications.

Evaluating the different funding cohorts of hubs separately provides a nuanced picture of how hub maturity affects indices of bibliometric activity. Moreover, assessing trajectories of publication activity and citation accumulation by cohort illustrates the pace and momentum with which findings are disseminated. Therefore, we present the data by funding year cohort to provide information beyond overall patterns and allow inference about the long-term value of investment in the CTSA program.

CTSA Hub Attributes: Multi-Institutional Collaboration and Award Amount

A key imperative of the CTSA program is to promote scientific collaboration as a driver of high-quality translational science.1 Uniting professionals and stakeholders from different backgrounds is crucial for accelerating the progress of the field.7,8,9 Rubio4 proposed institutional collaboration as an important area for cross-hub measurement and comparison (i.e., CTSA Common Metric). Hubs have freedom in how they are constructed, and they are varied in their composition. Some have organized themselves as multi-institutional consortiums of two or more geographically linked institutions with complementary strengths and resources. Some of the multi-institutional consortiums reflect collaborations between research-intensive universities and minority-serving institutions charged with addressing health disparities.10 Multi-institutional consortiums may have advantages such as bringing together professionals with different perspectives, methods, and resources, and facilitating collaboration across disciplinary silos. However, there are also coordination costs and monetary costs associated with managing and administering complex hubs that span different institutions.7,11,12 Therefore, in addition to examining overall bibliometric patterns, we compared publication and citation accumulation across hubs that are structured differently with regard to inter-institutional collaboration.

Further, financial resources may enhance a hub’s ability to support operations that promote bibliometric productivity and may enable multi-institutional hubs to overcome coordination challenges. Thus, as a first step in assessing the relative value of large, resource-rich, and multi-institutional CTSA hubs, we examined the associations among multi-institutional status, total monetary award amount, and bibliometric return on investment. We hypothesized that investment in multi-institutional collaborations and larger monetary investment would each be predictive of greater publication output and citation influence, after accounting for hub maturity.

Method

Publication data collection

In January 2017, we compiled CTSA grant numbers using the NIH RePORTER (Research Portfolio Online Reporting Tools) system.3 Queries included all past and present UL1 awards (main hub cooperative award), KL2 and TL1 awards (associated training awards), and supplemental awards funded by the National Center for Research Resources or National Center for Advancing Translational Sciences. Although most publications supported by a CTSA hub would be expected to cite the hub’s main UL1 award, we wished to be comprehensive by incorporating in our searches all CTSA funding mechanisms, including the smaller KL2 and TL1 awards and supplemental awards. We linked grant numbers to their corresponding hubs and then conducted PubMed13 search queries, retrieving lists of publications that had formally cited each hub grant as of that time. These lists included ePubs indexed ahead of print and publications that were or were not indexed in PubMed Central. (NIH-funded publications are required to be publicly accessible via PubMed Central, but recent publications may not yet be indexed there and older publications may predate the requirement.14) From these lists, we removed 158 (< 1%) publications that were published before the first year of the cited grant, as we assumed the grant citation to be inaccurate. These publications’ grant citations may be attributable to General Clinical Research Center (GCRC) grants,2 which were precursors to the CTSA program but fall outside the scope of our project, which centers on the impact of the CTSA program.
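As an illustration of this retrieval step, the sketch below queries PubMed’s public E-utilities interface for articles that formally cite a given grant number. This is our reconstruction, not the authors’ script; the grant number shown is the UL1 award from this paper’s funding statement, and batch handling, API keys, and rate limiting are omitted.

```python
# Sketch: list PMIDs of PubMed articles that formally cite a grant number,
# via the NCBI E-utilities esearch endpoint. Illustrative only; the paper
# does not publish its exact queries.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pmids_for_grant(grant_number: str) -> list[str]:
    """Return PMIDs whose grant-support field matches grant_number."""
    params = {
        "db": "pubmed",
        "term": f"{grant_number}[Grant Number]",
        "retmode": "json",
        "retmax": 10000,  # esearch returns at most 10,000 IDs per request
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

print(len(pmids_for_grant("UL1TR000454")))  # UL1 award from the funding note
```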

Next, we used the PubMed IDs (PMIDs) of the identified publications to query and retrieve citation information in Clarivate Analytics’ Web of Science15 (WoS) and the associated InCites16 application. The WoS datasets did not include all publications found in PubMed, often because these were too recently published (ePubs ahead of print, or publications not yet imported into WoS’s database, which updates at varying rates depending on the journal) or were from journals not indexed by WoS. InCites is updated quarterly from WoS data and includes still fewer publications than WoS. On average, 87% of PMIDs identified in PubMed for a hub were matched in InCites. We generated InCites citation information for each of the 64 CTSA hubs, yielding datasets that included the following for each matched publication as of the time of data retrieval: PMID, reference information, number of citations, Category Normalized Citation Impact (CNCI), Journal Impact Factor (JIF), and JIF Percentile.

Citation impact metrics

The CNCI, a recently developed proprietary metric from InCites, is an adjusted index of citation impact, normalized to publication year and research category.17 The CNCI score reflects the ratio of the observed number of citations attributed to an article to the expected number of citations for a typical article of that research area and publication year. A score of 3, for instance, means that an article was cited 3 times as frequently as the average, or 3 times what would be expected for a similar article from that year and discipline. CNCI scores were available for all PMIDs indexed in InCites.
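In symbols, and using notation of our own rather than Clarivate’s, the CNCI of article i can be written as:

```latex
\mathrm{CNCI}_i = \frac{c_i}{e_{f(i),\,y(i)}}
```

where $c_i$ is the number of citations article $i$ has received and $e_{f(i),y(i)}$ is the mean citation count of articles in the same research field $f(i)$ and publication year $y(i)$. An article cited 30 times against a field-year benchmark of 10 citations would thus score a CNCI of 3.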

The JIF, another metric from InCites, is an unadjusted measure of typical citation rates for the journals in which articles are published.17 For example, a JIF of 5 means that articles published in that journal in the past two years were cited, on average, 5 times in the metric year. The JIF Percentile reflects the percentile ranking of each journal by field of research, serving as an adjusted index of journal-level impact within a given discipline. JIFs and JIF Percentiles were, on average, available for 98% of PMIDs indexed for a given hub in InCites.
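As a reference point, the standard two-year JIF of journal J in metric year y is computed as (notation ours):

```latex
\mathrm{JIF}_{J,y} = \frac{C_y(J,\,y-1) + C_y(J,\,y-2)}{N_{y-1}(J) + N_{y-2}(J)}
```

where $C_y(J, y-k)$ is the number of citations received in year $y$ by items journal $J$ published in year $y-k$, and $N_{y-k}(J)$ is the number of citable items $J$ published in that year. A journal whose 200 citable items from the two prior years drew 1,000 citations in the metric year would have a JIF of 5.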

In the interest of robustness and consilience, in February 2017 we carried out a complementary data collection, using the previously identified PMIDs to query the NIH Office of Portfolio Analysis iCite application18 and retrieve citation information. An alternative to the WoS InCites application, NIH’s iCite yields citation information compared against a somewhat different, although overlapping, reference group (NIH-funded research rather than WoS-indexed research). iCite aims to index all NIH-funded articles, and thus, all CTSA-supported PMIDs found in PubMed. This means that even newer articles (e.g., ePubs ahead of print) and articles from smaller journals are generally found in iCite, making it more comprehensive than InCites with regard to raw publication and citation counts. For our sample, on average, 99% of PMIDs found in PubMed for a given hub were matched in iCite. However, iCite provides less information about each publication than InCites, which offers additional citation impact metrics (e.g., CNCI and JIF). Therefore, we selected InCites for our primary analyses, but report iCite data for validation purposes.
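For readers who want to reproduce this step, iCite exposes a public REST API. The sketch below is our own; the endpoint and field names (citation_count, relative_citation_ratio) are assumptions drawn from the public API documentation and should be verified against its current version.

```python
# Sketch: retrieve citation counts and RCR scores for a batch of PMIDs from
# the NIH iCite API. Endpoint and field names are assumptions based on the
# public API documentation; verify before relying on them.
import requests

def icite_records(pmids: list[int]) -> list[dict]:
    resp = requests.get(
        "https://icite.od.nih.gov/api/pubs",
        params={"pmids": ",".join(map(str, pmids))},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]

for rec in icite_records([27599104]):  # example PMID
    print(rec["pmid"], rec["citation_count"], rec["relative_citation_ratio"])
```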

The iCite datasets for each CTSA hub included the following for each matched publication as of the time of data retrieval: PMID, reference information, number of times cited, and the Relative Citation Ratio (RCR).19 The RCR is a field-normalized metric, similar to the CNCI; it approximates the citation impact of an article relative to similar NIH-funded papers. The RCR score reflects a ratio of the observed number of citations to the expected number of citations for articles within a particular co-citation network. RCR data are only available for PMIDs that are at least one calendar year old. On average, RCR scores were available for 80% of PMIDs for a given hub, fewer than for the CNCI.

Hub attributes

We assigned two broad attributes to each CTSA hub to evaluate bibliometric activity: multi-institutional status and total monetary award amount.

Multi-institutional status

We labelled a hub as multi-institutional if it self-identified in its grant abstract or hub website20 as a partnership between two or more academic/research institutions not otherwise formally linked. Although most CTSA hubs have affiliations and collaborations with area medical/community organizations and intra-institutional bodies, for the purpose of this analysis we considered a hub to be multi-institutional if (1) it officially joins multiple distinct research institutions, such as multiple universities, medical or municipal research centers, or separate research institutes; (2) it has leadership or principal investigators from more than one institution; or (3) it is part of a Research Centers at Minority Institutions–sponsored partnership.10 We did not include hubs comprising two or more campuses of an already-connected state university system or an existing university/medical center partnership.

Total monetary award amount

We calculated total monetary award amount for each hub by summing UL1, KL2, TL1, and supplemental award amounts (drawn from NIH RePORTER) across all years of operations. Award amounts roughly correspond to the operating sizes of hubs, currently classified by the NIH as small, medium, and large, with funds allocated accordingly (< $4.5 million, $4.5–$6 million, and > $6 million annual direct costs, respectively).21
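Operationally this is a simple aggregation. The sketch below shows one way to compute it from a RePORTER-style export; the file name and column names (hub, award_type, total_cost) are hypothetical.

```python
# Sketch: sum UL1, KL2, TL1, and supplemental award dollars per hub from a
# RePORTER-style export. File name and column names are hypothetical.
import pandas as pd

awards = pd.read_csv("reporter_export.csv")
ctsa = awards[awards["award_type"].isin(["UL1", "KL2", "TL1", "SUPPLEMENT"])]
total_by_hub = ctsa.groupby("hub")["total_cost"].sum()  # all years of operations
print(total_by_hub.sort_values(ascending=False).head())
```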

Data analysis

First, to provide a quantitative summary of bibliometric patterns across CTSA hubs, we used SPSS version 24 (IBM SPSS Inc., Armonk, New York) to calculate descriptive and longitudinal statistics for the WoS InCites and NIH iCite datasets. We calculated each hub’s citations per publication by dividing the citation count in InCites or iCite by the publication count in PubMed. For example, if our PubMed search identified 500 publications that cited a given hub’s grant numbers and InCites documented 1,000 citations, then we would calculate 1,000 citations/500 publications, or 2 citations per publication. This method effectively assumes 0 citations for any publications indexed in PubMed that were not also indexed in InCites. Although such publications are likely to have few citations, it is unlikely that they have none; therefore, our method yielded a conservative estimate of the true citations/publications ratio.
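To make the conservatism concrete, the sketch below contrasts the paper’s denominator (all PubMed publications) with a matched-only denominator, using the worked numbers from the text and the reported average InCites match rate of 87%.

```python
# Worked example: conservative vs. matched-only citations per publication.
# Publications found in PubMed but not matched in InCites count as 0 citations
# under the paper's method, which shrinks the ratio.
pubmed_pubs = 500          # publications citing the hub's grants (PubMed)
incites_citations = 1000   # citations to the matched subset (InCites)
matched_pubs = round(pubmed_pubs * 0.87)  # ~87% average match rate -> 435

conservative = incites_citations / pubmed_pubs   # 2.0, as in the text
matched_only = incites_citations / matched_pubs  # ~2.3, if unmatched excluded
print(conservative, round(matched_only, 2))
```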

To examine the relationships among hub attributes and bibliometric outcomes, we carried out first-order bivariate correlations. To examine the unique contributions of multi-institutional status and total monetary award amount to bibliometric activity, while controlling for the age of the hub (using year funded), we conducted path analyses of the hypothesized associations using AMOS version 24 (IBM SPSS Inc., Chicago, Illinois). To determine the path model fit, we examined the χ2/df ratio, comparative fit index (CFI), incremental fit index (IFI), root mean square error of approximation (RMSEA), Akaike information criterion (AIC), and Browne-Cudeck criterion (BCC). Good model fit is reflected by χ2/df ratios < 3,22 fit indices > .90,22,23 RMSEA values ≤ .08, and the lowest AIC/BCC values among models tested.24 Details of specific analyses are provided in the Results section.
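The fit statistics themselves come from AMOS, but the screening rules can be expressed compactly. The sketch below is ours; it checks the cited thresholds and recomputes RMSEA from the chi-square via the standard formula sqrt(max(χ² − df, 0) / (df(N − 1))).

```python
# Sketch: screen SEM fit statistics against the thresholds cited in the text.
# RMSEA is recomputed from chi-square, degrees of freedom, and sample size N.
import math

def fit_checks(chi2: float, df: int, n: int, cfi: float, ifi: float) -> dict:
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    return {
        "chi2/df < 3": chi2 / df < 3,
        "CFI > .90": cfi > 0.90,
        "IFI > .90": ifi > 0.90,
        "RMSEA": round(rmsea, 3),
        "RMSEA <= .08": round(rmsea, 2) <= 0.08,  # at reported precision
    }

# Values reported for the final path model (N = 64 hubs) in the Results:
print(fit_checks(chi2=5.66, df=4, n=64, cfi=0.99, ifi=0.99))
```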

Results

Overall CTSA program bibliometrics

Table 1 summarizes hub-level descriptive statistics for all 64 CTSA hubs, stratified by funding year cohort, for total number of publications and citations, citations per publication, CNCI score, JIF, and JIF Percentile. During 2012–2016, one or no new hubs were funded each year; therefore, Table 1 presents data collapsed across these funding years, which represent nascent hubs still in their first cycle of funding. Table 1 also includes descriptive statistics based on the NIH iCite dataset, including citations per publication and RCR, as validating support for the data drawn from the WoS InCites dataset. Hub means in Table 1 were calculated as an unweighted average of the values for each individual hub in the indicated funding year cohort.

Table 1.

Hub-Level Descriptive Statistics by Funding Year Cohort for Publications Supported by the 64 CTSA Hubs Through 2016^a

PubMed and Web of Science InCites data

| Year funded | No. of hubs | Publication count, total no. (PubMed) | Publication count, hub mean (PubMed) | Citation count, total no. (InCites) | Citation count, hub mean (InCites) | Citations per publication, hub mean^b | CNCI score, hub mean^c | JIF, hub mean | JIF Percentile, hub mean |
|---|---|---|---|---|---|---|---|---|---|
| 2006 | 12 | 18,860 | 1,572 | 358,386 | 29,866 | 19.19 | 2.21 | 5.87 | 77.48 |
| 2007 | 12 | 19,520 | 1,627 | 328,692 | 27,391 | 16.41 | 2.03 | 5.61 | 76.42 |
| 2008 | 14 | 12,849 | 918 | 189,699 | 13,550 | 15.44 | 2.01 | 5.74 | 76.32 |
| 2009 | 8 | 7,602 | 950 | 86,648 | 10,831 | 12.39 | 2.01 | 5.02 | 74.14 |
| 2010 | 9 | 3,822 | 425 | 32,302 | 3,589 | 8.27 | 1.76 | 4.73 | 73.33 |
| 2011 | 5 | 3,775 | 755 | 39,217 | 7,843 | 7.90 | 1.72 | 4.70 | 72.38 |
| 2012–16 | 4 | 291 | 73 | 2,754 | 689 | 7.33 | 3.45 | 6.24 | 78.59 |
| All years (SD) [range] | 64 | 66,719 | 1,043 (815) [16–3,877] | 1,037,698 | 16,214 (16,243) [3–87,652] | 13.84 (6.37) [0.19–34.82] | 2.08 (.86) [0.55–7.94] | 5.46 (1.08) [3.83–9.54] | 75.70 (3.12) [66.37–84.51] |

NIH iCite data

| Year funded | Citation count, total no. (iCite) | Citation count, hub mean (iCite) | Citations per publication, hub mean^b | RCR score, hub mean^c |
|---|---|---|---|---|
| 2006 | 417,689 | 34,807 | 22.15 | 2.19 |
| 2007 | 385,993 | 32,166 | 19.51 | 2.04 |
| 2008 | 221,490 | 15,821 | 18.12 | 2.11 |
| 2009 | 114,347 | 14,293 | 15.37 | 2.06 |
| 2010 | 40,574 | 4,508 | 10.44 | 1.75 |
| 2011 | 52,179 | 10,436 | 10.09 | 1.64 |
| 2012–16 | 4,151 | 1,038 | 10.86 | 4.18 |
| All years (SD) [range] | 1,236,423 | 19,319 (19,094) [5–101,987] | 16.63 (7.13) [0.31–38.38] | 2.16 (1.12) [0.33–10.15] |

Abbreviations: CTSA indicates Clinical and Translational Science Awards; CNCI, Category Normalized Citation Impact; JIF, Journal Impact Factor; NIH, National Institutes of Health; RCR, Relative Citation Ratio.

^a Data drawn from PubMed13 and the Web of Science InCites database16 in January 2017 and the NIH iCite database18 in February 2017. Higher numbers of citations in the iCite data reflect the availability of more recent publication and citation data. JIFs and JIF Percentiles reflect indices as of the time of data collection in January 2017.

^b Citations per publication is calculated as a ratio of the number of citations indexed in the indicated data source divided by the number of publications indexed in PubMed for a given hub.

^c Relative citation impact indices, calculated as the ratio of the actual rate of citation to the expected rate of citation based upon year of publication and research area.

Results revealed that from 2006 through 2016, more than 66,000 publications cited at least one of the 64 CTSA hubs as having contributed to the research. On average, each hub supported over 1,000 publications and more than 100 publications per year, with wide variability across hubs of different sizes and funding year cohorts. As of the end of 2016, CTSA-supported publications had been cited more than 1.2 million times, with an average of over 2,000 citations per hub per year. On average, publications from a given hub were cited more than 16 times apiece, with extremely high variability (range: 0–3,119 citations; 15 hubs supported publications with > 1,000 citations). On average, total numbers of publications and citations exhibited a downward trend from those CTSAs founded in 2006 to those founded later (2012–2016), which is to be expected as publications and citations accumulate over time.

The overall mean CNCI score across the 64 CTSA hubs was 2.08, indicating that these publications were cited more than twice as often as comparable articles from the same year and research category. The CNCI results were supported by converging RCR data: the overall mean RCR score of 2.16 similarly indicated these articles were cited more than twice as often as comparable NIH-funded papers. The variability of hub mean CNCI scores was high; the newest hubs (2011–2016 cohorts) were more susceptible to misleadingly low or high means due to small sample sizes, but established hubs (2006–2010 cohorts) all had mean CNCI scores well above the global average of 1.0.17

It is evident from a comparison of the WoS InCites and NIH iCite data that the differing methodologies behind these applications yield consistent results. This finding is supported by past research showing that CNCI and RCR scores yield consistent values for relative citation impact.25 In this study, hub mean CNCI scores and RCR scores were strongly and significantly correlated with one another (r = .97, P < .001), despite different calculation algorithms and slightly different publication samples. Total number of citations from the iCite database tended to be higher, in part because iCite indexes almost all NIH-funded PMIDs. Also, the iCite data were collected at a slightly later date than the InCites data, allowing more citations to accumulate and likely accounting for some of the difference.

As of data collection, CTSA-supported articles had been published in over 3,300 journals indexed in WoS, covering 186 WoS-designated research areas. The overall mean JIF across hubs was 5.46, meaning that the journals in which the research was published received an average of 5.46 citations per article per year (over the prior two years). There was relatively low variability in mean JIF across hub cohorts, especially among the more established hubs. Because it is difficult to infer relative impact of journals across widely varying research fields, we also analyzed the rank percentiles of JIFs within their respective fields. The mean JIF Percentile across all CTSA-supported publications was 75.70%, indicating that the research was published in journals that ranked, on average, in the top quartile in their field for citation influence.

Trajectories of publication and citation accumulation

To understand the effects of elapsed time on bibliometric trajectories, we charted the longitudinal accumulation of publications and citations for all CTSA hubs. Given the consistency between the WoS InCites and NIH iCite results reported above, and because InCites provides more metrics per publication, we used the InCites dataset to assess longitudinal patterns. The number of PubMed-indexed publications, citations per publication, and mean CNCI score were broken down by publication year and then aggregated by funding year cohort.

Figure 1 presents longitudinal results by cohort for the established (2006–2010) cohorts that had completed entire funding cycles, illustrating how the maturity of a hub affects not only the number of publications but also the rate at which they accumulate. We excluded the newer cohorts due to insufficient publication data. The data depicted start in the year subsequent to the initial funding year (e.g., 2007 for the 2006 cohort) and end in 2015, so as to represent only full calendar years of publication information. Most hubs were funded part-way into their first year, and some 2016 publications were not yet indexed in PubMed or WoS at the time of data collection. The cumulative distribution across time reveals an accelerating growth rate, such that each year’s publication output surpasses the previous year’s, with older cohorts exhibiting steeper curves as they grow stronger and more established. This is consistent with preliminary results from an earlier evaluation of the first five years of CTSA-supported publications,6 and furthers the understanding that the longer CTSA hubs are in operation, the more productive they become.

Figure 1.

Cumulative number of publications, summed within individual, established CTSA hub funding year cohorts (2006–2010) and accrued across the indicated range of years (2007–2015). Publication data were drawn from PubMed in January 2017. Data depicted start in the year subsequent to the initial funding year (e.g., 2007 for the 2006 cohort) and end in 2015, so as to represent only full calendar years of publication information. (Most hubs were funded partway into their first year, and some 2016 publications were not yet indexed in PubMed at the time of the study in early 2017.) Abbreviation: CTSA indicates Clinical and Translational Science Awards.
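A cumulative curve of this kind is straightforward to reproduce. The sketch below assumes a hypothetical publication-level table (columns hub_cohort and pub_year) rather than the authors’ actual dataset.

```python
# Sketch: cumulative publication counts per funding-year cohort, in the style
# of Figure 1. Input file and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

pubs = pd.read_csv("ctsa_publications.csv")  # one row per publication
counts = (
    pubs[pubs["pub_year"].between(2007, 2015)]   # full calendar years only
    .groupby(["hub_cohort", "pub_year"])
    .size()
    .unstack("hub_cohort", fill_value=0)
    .cumsum()                                    # accumulate across years
)
counts.plot()  # one accelerating curve per funding-year cohort
plt.xlabel("Publication year")
plt.ylabel("Cumulative publications")
plt.show()
```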

Because cumulative citation counts are dependent upon the number of publications available, we analyzed citations per publication over time, rather than total numbers of citations over time. We calculated citations per publication pooled across all cohorts because values did not vary by cohort apart from the starting year of each cohort’s oldest publications. Citations per publication rose over the years. The most recent articles, published in 2015, averaged 5 citations per article at a given hub after the first year. The earliest articles, published in 2007 (and corresponding to the oldest hubs), averaged more than 65 citations per article at a given hub after nine years (data not shown). Citations thus accumulated notably faster than expected: the average InCites Category Expected Citation Rate for this dataset is approximately 28 citations after nine years.17 Additionally, the increase over time was fairly steady with no plateau, indicating that citation rates were not declining even nine years after publication.

We also assessed the relationships between publication year and CNCI score within the established (2006–2010) cohorts and found the regression slope effect sizes to be extremely small and most often not statistically significant (Bs ranged from −.07 to .05), indicating that CNCI scores did not vary substantially across time. This was not surprising as this metric intentionally adjusts for publication year.

Predicting productivity: The roles of multi-institutional collaboration and award amount

Table 2 summarizes first-order correlations among multi-institutional status (coded as 1 = multi-institutional, 0 = single institution), total monetary award amount (sum), year funded, and bibliometric outcomes (total number of publications, total number of citations, and mean CNCI score).

Table 2.

Matrix of Intercorrelations Among Hub Attributes and Bibliometric Activity for All 64 CTSA Hubs, 2006–2016^a

| | Year funded | Multi-institutional status | Total monetary award amount | Total no. of publications | Total no. of citations | Mean CNCI score |
|---|---|---|---|---|---|---|
| Year funded | — | −.06 | −.60^d | −.54^d | −.55^d | .07 |
| Multi-institutional status | | — | .18 | .35^c | .27^b | −.15 |
| Total monetary award amount | | | — | .72^d | .76^d | .03 |
| Total no. of publications | | | | — | .94^d | −.05 |
| Total no. of citations | | | | | — | .05 |
| Mean CNCI score | | | | | | — |

Abbreviations: CTSA indicates Clinical and Translational Science Awards; CNCI, Category Normalized Citation Impact; NIH, National Institutes of Health.

^a Data drawn from NIH RePORTER,3 PubMed,13 the NIH iCite application,18 and the Web of Science InCites application.16 Single-institution hubs (n = 40) had a mean of 827 publications and 15,431 citations; multi-institutional hubs (n = 24) had a mean of 1,402 publications and 25,800 citations.

^b P < .05. ^c P < .01. ^d P < .001.

The single-institution hubs (n = 40) had a mean of 827 publications and 15,431 citations; the multi-institutional hubs (n = 24) had a mean of 1,402 publications and 25,800 citations. Multi-institutional status was statistically significantly associated with the total number of publications (P < .01) and citations (P < .05), but not with year funded or total monetary award amount. Total monetary award amount was strongly and statistically significantly associated with the total number of publications (P < .001) and citations (P < .001), as well as year funded (P < .001). None of the variables tested were associated with mean CNCI score. Because citations depend on publications, the two were strongly associated with one another (P < .001).
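With a hub-level data frame in hand, a matrix like Table 2 reduces to a one-line Pearson correlation (a point-biserial correlation for the 0/1 multi-institutional flag is equivalent to Pearson on the dummy code). The sketch below is ours; file and column names are hypothetical.

```python
# Sketch: reproduce a Table 2-style intercorrelation matrix from hub-level
# data. File and column names are hypothetical.
import pandas as pd

hubs = pd.read_csv("hub_attributes.csv")  # 64 rows, one per CTSA hub
cols = ["year_funded", "multi_institutional", "award_total",
        "n_publications", "n_citations", "mean_cnci"]
print(hubs[cols].corr().round(2))  # Pearson r; binary flag coded 0/1
```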

Figure 2 depicts the path model schematically, with arrows indicating directional relationships and unstandardized path coefficients representing the relationship effect sizes. The model included paths reflecting hypothesized associations among multi-institutional status, total monetary award amount (in millions), total number of publications, total number of citations, and year funded. Path analyses go beyond correlations by using regressions to include hub attributes as simultaneous predictors of publication and citation activity, and testing whether unique associations hold while controlling for one another. We removed paths from year funded to multi-institutional status, total number of publications, and total number of citations, and between multi-institutional status and total monetary award amount due to non-significance, which improved the model fit. This final model showed a strong fit to the data: χ2(4) = 5.66, not significant; χ2/df = 1.42; CFI = .99; IFI = .99; RMSEA = .08; minimal AIC/BCC. Year funded was statistically significantly associated with higher total monetary award amounts (P < .001), which were, in turn, associated with higher total publications (P < .001) and citations (P < .001). Multi-institutional status was statistically significantly associated with more publications (P < .01) and marginally significantly associated with more citations (P < .1).

Figure 2.

Unstandardized path coefficients and levels of significance for the hypothesized path model of CTSA hub multi-institutional status and total monetary award amount (in millions of U.S. dollars) predicting total numbers of publications and citations, adjusting for the year the CTSA hub was originally funded (2006 to 2016), for all 64 CTSA hubs. Arrows indicate directional relationships. Nonsignificant paths, which were removed from the model, are indicated by dotted lines. Abbreviation: CTSA indicates Clinical and Translational Science Awards.

^a P < .1. ^b P < .01. ^c P < .001.

Discussion

This study sought to characterize the bibliometric impact of the CTSA program, an ambitious undertaking to organize and focus resources on the acceleration of translational discoveries. From 2006 to 2016, the program expanded from 12 initial hubs to 64 hubs, supporting tens of thousands of publications that have been cited more than one million times and counting. Overall mean CNCI and RCR scores greater than 2 show that this citation record is more than double the expected rate of citation, implying that CTSA-sponsored research is firmly above average in terms of quality and impact. Mean JIF indices show that these articles tend to be published in journals that are above average in terms of research influence, and the large variety of journals publishing these articles speaks to the breadth of content areas covered by this research.

Longitudinal growth trends for the earliest funding year cohorts signify strong publication productivity and momentum. Newer cohorts appear to be following similar trajectories, suggesting that they may follow in the footsteps of their predecessors. The rising rate of citations per publication over time, seen for even the oldest publications, foreshadows the likely trajectories of citation accumulation for newer articles. The steady increase in citations per publication does not show evidence of levelling off, indicating that publications have not yet reached their peak impact. This pattern of growth concurs with and extends research conducted at earlier stages of the CTSA program.6,26 In addition, the longitudinal stability of the CNCI scores speaks to the validity of this metric as time independent, as well as to the consistency with which CTSA-supported research reaches exceptional levels of influence.

Our results offer preliminary evidence that hubs constituting multi-institutional partnerships and those allocated more monetary resources tend to support especially large quantities of publications and citations. We found that multi-institutional status and total monetary award amount independently predict greater return on investment in terms of bibliometric activity, and these effects hold beyond the effect of the maturity of the hub.

Limitations and future directions

One limitation of this large-scale analysis is our initial step of compiling lists of publications to be attributed to CTSA hubs. Certainly, not every publication resulting from CTSA support is indexed in PubMed, which aims to index all NIH-funded research. If authors cite CTSA grants when publishing an article, then the article is expected to be indexed in PubMed; however, the requirement to cite funding sources is difficult to enforce and the possibility of errors and omissions is inherent.27 Adopting processes such as the Vanderbilt Institute for Clinical and Translational Research’s resource and publication tracking system28 may aid in maximizing funding source citations and achieving the most accurate picture of bibliometric output. Nonetheless, we believe that the results based on the lists we generated using PubMed represent the best feasible approximation of what publications can be reasonably and formally claimed by hubs, especially since current NIH guidelines29 require publications to cite their funding sources in order to be reported as products.

Another data collection limitation is that, on average, 13% of a given hub’s publications found in PubMed were not indexed in the WoS InCites application, which we used for the citation analysis. However, we believe this attrition had minimal meaningful impact on the citation results because a large proportion of unindexed publications were expected to be newly published articles (e.g., ePub ahead of print) or articles published in narrowly focused journals, and therefore unlikely to have large numbers of citations. We took this into account when calculating citations per publication by assuming 0 citations for publications not indexed in WoS InCites rather than excluding them from the analysis. Therefore, our results likely reflect underestimates of total citations and citations per publication, especially for the most recent publication years. The NIH iCite application is still in development, but we anticipate that it will eventually be a better source than InCites for raw citation information for CTSA-supported research, because iCite endeavors to index all NIH-funded research with little loss of data.

Another limitation concerns the limited scope of bibliometrics with regard to the spectrum of translational science activities. Although the study of publication and citation data examines a key rung on the ladder between conducting basic research and translating findings to the clinic and the community, bibliometrics leaves a great deal of the process unmeasured.27 Further, it is not assured that all CTSA-grant-citing publications will necessarily be relevant to translation. Future efforts to connect bibliometric activity to more advanced stages of translation, such as patenting, licensing, and adoption of methods, drugs, and devices, could say more about how much progress is being made by the CTSA program. Classifying CTSA-grant-supported publications according to their phase of the translational spectrum may be one avenue for providing a more nuanced understanding of the research areas that have seen the most progress or need additional attention to optimally foster translation. This endeavor would be work-intensive and subjective, but some efforts have been devoted toward automating such a classification process.30 Although analysis of publication output describes a precursor to the translation process, the study of citation activity, especially over significant amounts of time, adds a valuable understanding of the influence and use of those publications—perhaps hinting at how those findings will drive translational science forward. Qualitative analysis of publications—including title and abstract content; the number, order, and contributions of authors; and cross-disciplinary citation31—may be some promising ways of adding value to bibliometric data.

Our subjective assignment of multi-institutional status is another limitation of this study. Hubs have significant autonomy in how they are structured, making them as varied as the regions and institutions they represent. Thus, it is not straightforward to classify hubs as single institution versus multi-institutional. We decided on our multi-institutional status criteria with the intent of recognizing hubs that identify themselves as consortia of institutions that would not have been brought together if not for the existence of the CTSA program. This is a reasonable condition, but it could also be argued that the value of utilizing resources across a single-institution hub—that is, a large and powerful university or already-linked institutions—should not be underestimated. Ultimately, the distinction we made among CTSA hubs was subjective and indicative of a broad difference between hubs that may be somewhat wider and more varied in emphasis and hubs that may be smaller or more focused. Notably, the hubs we classified as multi-institutional showed significantly higher rates of publication and citation activity, even though they were not any older or more heavily funded than single-institution hubs. Therefore, we do not believe multi-institutional hubs’ higher rates of bibliometric activity are attributable to larger operating size.

However, the factors that contribute to greater productivity in multi-institutional CTSA collaborations remain to be determined. It could be that these hubs promote network ties across investigators at different institutions, allowing them to work in concert to produce more products than they otherwise would,32 or that investigators work independently but their access to shared resources across institutions boosts their productivity.7,8,9 Alternatively, multi-institutional hubs may support a larger number of investigators, which may account for larger numbers of publications and citations. Investigator number was outside the scope of this study, as investigator affiliation is inconsistently defined across differently tracked and structured hubs. Our results should be taken as preliminary evidence that multi-institutional partnerships tend to support more bibliometric products and therefore may be worthy of the sometimes greater investments in time, resources, and coordination costs they entail.11 More work is needed, including a thorough qualitative and network analysis of the different collaborative structures represented by CTSA hubs,33,34,35 in order to draw firmer conclusions.

Apart from analyzing publications by hub composition or phase of translation, another potentially informative way of categorizing articles is by the type/amount of CTSA program support received. We did find a positive relationship between total monetary award amount and bibliometric output, but further exploration linking the resource inputs (i.e., which sub-award or CTSA service) to bibliometric outputs would provide a better understanding of what results from different components of the CTSA program. It is likely that there would be significant overlap of support from different areas, making such an analysis complex but informative. Indeed, one recent study found differences between work funded through clinical resources versus training or pilot awards.25 It would be interesting to assess whether research supported by multiple grant mechanisms, or even multiple hubs, tends to be more impactful than research receiving less support, and to perform an economic analysis weighing different amounts and allocations of costs against benefits returned.

Conclusion

This evaluation serves to describe the degree and character of the publication and citation impact of research supported by CTSA resources over the first decade of the CTSA program. This report delivers an account of how over one million citations have been distributed across cohorts and across time for the more than 60,000 publications to which the CTSA program has contributed. From 2006–2016, research supported by CTSA hubs resulted in a robust catalog of publications spanning a wide breadth of subject areas and receiving more than double the expected citations per article. Findings from this research have been shared and referenced at consistently superior rates, suggesting exceptional importance and sustained impact within their respective fields. We hope that this progress made in shaping the academic literature will lay a strong foundation for furthering the field of translational science and allowing for the practical application of findings.

Acknowledgments

The authors wish to thank Kimberly Powell, MLIS, Emory Life Sciences Informationist, for her invaluable assistance in collecting and interpreting publication and citation data. In addition, the authors would like to thank John Hanfelt, PhD, Elizabeth Pittman Thompson, and Andrew West, MBA, MHA, from Emory University for their much appreciated feedback on this report.

Funding/Support: This research was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1TR000454. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Footnotes

Other disclosures: None reported.

Ethical approval: Reported as not applicable.

Contributor Information

Nicole Llewellyn, Manager of research projects, Evaluation and Continuous Improvement Program, Georgia Clinical & Translational Science Alliance, Emory University School of Medicine, Atlanta, Georgia; ORCID: https://orcid.org/0000-0003-1267-2720.

Dorothy R. Carter, Assistant professor, Department of Psychology, University of Georgia, Athens, Georgia, and member of the Evaluation and Continuous Improvement Program, Georgia Clinical & Translational Science Alliance, Emory University School of Medicine, Atlanta, Georgia.

Latrice Rollins, Assistant director of evaluation and institutional assessment, Prevention Research Center, Morehouse School of Medicine, and member of the Evaluation and Continuous Improvement Program, Georgia Clinical & Translational Science Alliance, Emory University School of Medicine, Atlanta, Georgia.

Eric J. Nehl, Assistant research professor, Emory University Rollins School of Public Health, and director, Evaluation and Continuous Improvement Program, Georgia Clinical & Translational Science Alliance, Emory University School of Medicine, Atlanta, Georgia.

References

1. Institute of Medicine. The CTSA Program at NIH: Opportunities for Advancing Clinical and Translational Research. Washington, DC: National Academies Press; 2013.
2. Califf RM, Berglund L. Linking scientific discovery and better health for the nation: The first three years of the NIH’s Clinical and Translational Science Awards. Academic Medicine. 2010;85(3):457–462. doi: 10.1097/ACM.0b013e3181ccb74d.
3. National Institutes of Health. Research portfolio online reporting tools. NIH RePORTER. https://projectreporter.nih.gov/reporter.cfm. Accessed January 31, 2017.
4. Rubio DM. Common metrics to assess the efficiency of clinical research. Evaluation & the Health Professions. 2013;36(4):432–446. doi: 10.1177/0163278713499586.
5. Frechtling J, Raue K, Michie J, Miyaoka A, Spiegelman M. The CTSA National Evaluation Final Report. Rockville, MD: Westat; 2012.
6. Steketee M, Frechtling J, Cross D, Schnell J. Final Report on CTSA-Supported Publications: 2006–2011. Rockville, MD: Westat; 2012.
7. Hara N, Solomon P, Kim SL, Sonnenwald DH. An emerging view of scientific collaboration: Scientists’ perspectives on collaboration and factors that impact collaboration. Journal of the American Society for Information Science and Technology. 2003;54(10):952–965.
8. Shi Q, Xu B, Xu X, Xiao Y, Wang W, Wang H. Diversity of social ties in scientific collaboration networks. Physica A: Statistical Mechanics and Its Applications. 2011;390(23–24):4627–4635.
9. Sosa ME. Where do creative interactions come from? The role of tie content and social networks. Organization Science. 2011;22(1):1–21.
10. Ofili EO, Fair A, Norris K, et al. Models of interinstitutional partnerships between research intensive universities and minority serving institutions (MSI) across the Clinical Translational Science Award (CTSA) consortium. Clinical and Translational Science. 2013;6(6):435–443. doi: 10.1111/cts.12118.
11. Bikard M, Murray F, Gans JS. Exploring trade-offs in the organization of scientific work: Collaboration and scientific reward. Management Science. 2015;61(7):1473–1495.
12. Cummings JN, Kiesler S. Coordination costs and project outcomes in multi-university collaborations. Research Policy. 2007;36(10):1620–1634.
13. US National Library of Medicine. PubMed. https://www.ncbi.nlm.nih.gov/pubmed. Accessed January 31, 2017.
14. National Institutes of Health. Revised policy on enhancing public access to archived publications resulting from NIH-funded research (Notice Number: NOT-OD-08-033). https://grants.nih.gov/grants/guide/notice-files/NOT-OD-08-033.html. Revised 2013. Accessed November 16, 2017.
15. Clarivate Analytics. Web of Science. https://webofknowledge.com/. Accessed January 31, 2017.
16. Clarivate Analytics. InCites. https://incites.thomsonreuters.com/. Accessed January 31, 2017.
17. Thomson Reuters. InCites Handbook II. http://ipscience-help.thomsonreuters.com/inCites2Live/indicatorsGroup/aboutHandbook.html. Revised 2014. Accessed January 31, 2017.
18. National Institutes of Health Office of Portfolio Analysis. iCite. https://icite.od.nih.gov. Accessed February 15, 2017.
19. Hutchins BI, Yuan X, Anderson JM, Santangelo GM. Relative Citation Ratio (RCR): A new metric that uses citation rates to measure influence at the article level. PLoS Biology. 2016;14(9):e1002541. doi: 10.1371/journal.pbio.1002541.
20. National Center for Advancing Translational Sciences. CTSA Program Hubs. https://ncats.nih.gov/ctsa/about/hubs. Revised 2017. Accessed December 12, 2017.
21. National Institutes of Health. Clinical and Translational Science Award U54 Program Announcement (PAR) Number: PAR-15-304. https://grants.nih.gov/grants/guide/pa-files/PAR-15-304.html. Published 2015. Accessed July 10, 2017.
22. Kline RB. Principles and Practice of Structural Equation Modeling. New York: Guilford Press; 1998.
23. Bentler PM. Comparative fit indexes in structural models. Psychological Bulletin. 1990;107(2):238–246. doi: 10.1037/0033-2909.107.2.238.
24. Browne MW, Cudeck R. Alternative ways of assessing model fit. Sociological Methods & Research. 1992;21(2):230–258.
25. Schneider M, Kane C, Rainwater J, et al. Feasibility of common bibliometrics in evaluating translational science. Journal of Clinical and Translational Science. 2017;1(1):45–52. doi: 10.1017/cts.2016.8.
26. Zhang Y, Wang L, Diao T. The quantitative evaluation of the Clinical and Translational Science Awards (CTSA) program based on science mapping and scientometric analysis. Clinical and Translational Science. 2013;6(6):452–457. doi: 10.1111/cts.12078.
27. Sibbald SL, MacGregor JC, Surmacz M, Wathen CN. Into the gray: A modified approach to citation analysis to better understand research impact. Journal of the Medical Library Association. 2015;103(1):49–54. doi: 10.3163/1536-5050.103.1.010.
28. Harris PA, Kirby J, Swafford JA, et al. Tackling the “so what” problem in scientific research: A systems-based approach to resource and publication tracking. Academic Medicine. 2015;90(8):1043–1050. doi: 10.1097/ACM.0000000000000732.
29. National Center for Advancing Translational Sciences. Research Performance Progress Report: Guidelines for 2017. Clinical and Translational Science Award (CTSA) Specific Instructions. https://ncats.nih.gov/files/CTSA-RPPR-instructions-2017.pdf. Published December 2016. Accessed February 15, 2017.
30. Surkis A, Hogle JA, DiazGranados D, et al. Classifying publications from the Clinical and Translational Science Award program along the translational research spectrum: A machine learning approach. Journal of Translational Medicine. 2016;14(1):235. doi: 10.1186/s12967-016-0992-8.
31. Long JC, Hibbert P, Braithwaite J. Structuring successful collaboration: A longitudinal social network analysis of a translational research network. Implementation Science. 2016;11:19. doi: 10.1186/s13012-016-0381-y.
32. Nagarajan R, Peterson CA, Lowe JS, Wyatt SW, Tracy TS, Kern PA. Social network analysis to assess the impact of the CTSA on biomedical research grant collaboration. Clinical and Translational Science. 2015;8(2):150–154. doi: 10.1111/cts.12247.
33. Bian J, Xie M, Topaloglu U, Hudson T, Eswaran H, Hogan W. Social network analysis of biomedical research collaboration networks in a CTSA institution. Journal of Biomedical Informatics. 2014;52:130–140. doi: 10.1016/j.jbi.2014.01.015.
34. Hughes ME, Peeler J, Hogenesch JB. Network dynamics to evaluate performance of an academic institution. Science Translational Medicine. 2010;2(53):53ps49. doi: 10.1126/scitranslmed.3001580.
35. Okamoto J. Scientific collaboration and team science: A social network analysis of the centers for population health and health disparities. Translational Behavioral Medicine. 2015;5(1):12–23. doi: 10.1007/s13142-014-0280-1.
