Author manuscript; available in PMC: 2014 Nov 13.
Published in final edited form as: Eval Health Prof. 2013 Sep 30;37(1):50–70. doi: 10.1177/0163278713506112

The Translational Research Impact Scale: Development, Construct Validity, and Reliability Testing

Allard E Dembe 1, Michele S Lynch 1, P Cristian Gugiu 1, Rebecca D Jackson 1
PMCID: PMC4230009  NIHMSID: NIHMS600690  PMID: 24085789

Abstract

Increasing emphasis is being placed on measuring return on research investment and determining the true impacts of biomedical research on medical practice and population health. This article describes initial progress on the development of a new standardized tool for identifying and measuring impacts across research sites. The Translational Research Impact Scale (TRIS) is intended to provide a systematic approach to assessing impact levels using a set of 72 impact indicators organized into three broad research impact domains and nine subdomains. A validation process was conducted with input from a panel of 31 experts in translational research, who met to define and standardize the measurement of research impacts using the TRIS. Testing was performed to estimate the reliability of the experts' ratings; reliability was high (ranging from .75 to .94) in all of the domains and most of the subdomains. A weighting process assigned weights to the individual indicators so that composite scores can be derived.

Keywords: translational research, impact, measurement, validity, reliability, CTSA

Introduction

How can the true impact of biomedical research best be assessed? Investigators and research institutions often assess impact by looking at easily determinable measures, such as the publication of articles in peer-reviewed journals, the impact factors of those journals, success in acquiring research grants, and the awarding of patents for novel inventions. In translational research, the ability to move drug discovery and basic research findings into animal or human trials might be considered a significant impact in a field. Similarly, the adoption of trial results into general medical practice might be considered a significant impact resulting from the research performed.

However, there are currently no standardized or systematic measurement techniques available for gauging the overall impact of biomedical and translational research. Many traditional measures of research impact, such as publication rates, impact factors, and patent awards, are relatively narrow in scope, and do not adequately measure the intermediate or long-term outcomes of the research enterprise, such as adoption of new diagnostic or therapeutic practices, changes in public policy, or improvements in population health.

The need to determine the true value of research has become more acute as government and private research institutes strive to justify expenditures and document tangible outcomes from research programs. The director of the National Institutes of Health (NIH) has estimated that NIH-sponsored biomedical research produces a two to one economic return on research investment (Collins, 2012). The recent establishment of NIH’s “Impact of Research” website underscores the importance that agency attaches to measuring the ultimate benefits of biomedical research. According to that website, NIH’s US$4 billion investment in the Human Genome Project has produced US$796 billion in economic growth between 2000 and 2010—a 141-fold return on investment (National Institutes of Health, 2012).

The need to develop uniform methods for measuring research impact has taken on additional importance as the result of NIH’s establishment of the Clinical and Translational Science Award (CTSA) program in 2006. Part of the rationale behind development of the CTSA program was to better understand and measure the contributions of translational research at academic research institutions. Various definitions of “translational research” have been proposed encompassing the continuum from translating basic science discovery derived from the bench all the way to translating findings to the community and modifying public policy. Some taxonomies classify the range of translational research into a four-phase model (T1 to T4; e.g., Khoury et al., 2007) while others adopt a two-phase (T1 to T2; e.g., Sung et al., 2003) or three-phase (T1 to T3) classification scheme (e.g., Dougherty & Conway, 2008; Westfall, Mold, & Fagnan, 2007). According to a three-phase framework developed by members of the Association for Clinical Research Training, translational research comprises three components including T1 research that expedites the movement from basic research and patient-oriented research to new scientific understanding and standards of care; T2 research that facilitates the movement from patient-oriented research and population-based research to better patient outcomes, implementation of best practices, and improved health status in communities; and T3 research promoting interaction between laboratory-based research and population-based research to stimulate scientific understanding of human health and disease (Rubio et al., 2010).

From the inception of the CTSA program, NIH has emphasized the need to systematically evaluate the extent to which these efforts are resulting in demonstrable benefits for individuals and communities. Indeed, NIH requires all CTSA programs to establish a process to measure program outcomes and document accomplishments.

Despite these ongoing efforts to assess the short- and long-term consequences of translational research, there are obstacles impeding such programs from comprehensively assessing their impacts. Glasgow (2009) has identified a variety of barriers to assessing the impacts of translational research programs, for example, the complexity of considering multilevel effects that span diverse research settings and investigators from a variety of disciplines. Quinlan, Kane, and Trochim (2008) point to difficulties in measuring translational research outcomes related to the need to collect data from diverse sources in formats that may be incompatible with one another.

Moreover, there are currently no consistent standards or uniform measurement protocols specifying exactly what outcomes or impacts ought to be measured. As a result, some research organizations might place comparatively greater emphasis on measuring overt indicators of work output, such as grant awards, publications, patents, and number of investigator trainees. Others might focus more strongly on process and efficiency measures such as the time needed to obtain institutional review board (IRB) approvals or to recruit and schedule clinical trial participants. Likewise, some research sites may consider it important to evaluate structural factors, such as the ability to form interdisciplinary translational research project teams or increased communications between research units. Ultimately, research sponsors will want to see that the investment in translational research programs helps to achieve the longer term goals of improving medical practice and population health.

This article describes the Translational Research Impact Scale (TRIS), a standardized measurement tool designed to provide practical, objective identification and measurement of the overall impact of translational research activities conducted in a research environment. It is intended to be used to measure the diversity of short- and long-term impacts that result from translational research activities. The ultimate goal underlying TRIS development is the potential for greater capacity to identify and document significant achievements at institutions conducting translational research, along with the ability to uniformly measure trends over time, and potentially across research sites, using a standardized impact assessment scale. This report describes the initial development and conceptual framework of the TRIS, a construct validation process undertaken by an expert panel that involved collecting evidence for validity based on test content, and reliability testing of the resulting measurement scale.

Conceptual Model

TRIS derives from a logic model originally developed by the W. K. Kellogg Foundation (2004a, 2004b). The basic framework for logic models developed by the Kellogg Foundation has been utilized in a variety of contexts including health care, educational programs, international aid, energy development, and scientific research. Several studies have applied logic models to clinical and biomedical research (Hayes, Parchman, & Howard, 2011; Kagan, Kane, Quinlan, Rosas, & Trochim, 2009; Sanders, Robinson, Forster, Plax, & Brosco, 2005).

Common to these logic models is the assumption of a relationship between program inputs and activities and subsequent outputs and tangible outcomes, including short-term, intermediate, and long-term impacts. Applying the Kellogg Foundation’s basic model to the translational research enterprise, inputs potentially include such things as laboratories, investigators, information systems, cell lines, funding, and other basic resources required for translational research. According to the model, program activities encompass the planning and conducting of translational research studies, animal and human preclinical investigations, the recruitment and training of investigators, IRB submissions, and other functions related to translational research. Outputs are the direct results of those activities, such as the number of translational science training programs held, genomic assays run, clinical trial subjects recruited, grant submissions made, or relevant articles published. Outcomes include the specific changes that have occurred in processes, knowledge, treatments, methods, techniques, medical practice, or other indicators of tangible changes made, related to achieving the goals of translational research. Under this organizational scheme, impacts are the ultimate effects (e.g., on society, communities, patients, providers, etc.) resulting from the translational research process. Contextual factors can also be considered in the logic model, such as ambient macroeconomic conditions, NIH policies, national health care reform legislation, and so on.
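
As a rough illustration of how these categories fit together, the sketch below represents the logic-model components as a simple data structure. The example entries are drawn from the paragraph above; the representation itself (a plain Python dictionary) is only an assumption for illustration.

```python
# A minimal sketch of the logic-model categories described above, represented as a
# plain dictionary. The category names follow the Kellogg-style framework; the
# example entries are illustrative items taken from the text, not an exhaustive list.
logic_model = {
    "inputs": ["laboratories", "investigators", "information systems", "cell lines", "funding"],
    "activities": ["planning and conducting studies", "preclinical investigations",
                   "recruitment and training of investigators", "IRB submissions"],
    "outputs": ["training programs held", "genomic assays run", "trial subjects recruited",
                "grant submissions", "articles published"],
    "outcomes": ["changes in processes, knowledge, treatments, methods, or medical practice"],
    "impacts": ["ultimate effects on society, communities, patients, and providers"],
    "contextual_factors": ["macroeconomic conditions", "NIH policies", "health care reform legislation"],
}

# Print the causal chain from inputs to impacts.
for stage in ["inputs", "activities", "outputs", "outcomes", "impacts"]:
    print(f"{stage}: " + "; ".join(logic_model[stage]))
```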

Some short-term outcomes (e.g., improvements in the research process) may occur relatively quickly (e.g., immediately or within a couple years), while others might take several years (e.g., 2–5 years) to achieve. Long-term outcomes (e.g., changes in medical practice, guideline development, and drug approval) might take 5–10 years or more. Demonstrable impacts on individuals and communities brought about by translational research (such as improved health status, disease rates, health care cost improvements) might not be manifest until a considerable time passes.

A diagram portraying a simplified logic model containing examples of relevant components that may be particularly germane to identifying and measuring the impacts achieved through translational research programs can be accessed at http://cph.osu.edu/sites/default/files/docs/TRIS_Logic_Model.pdf

Method

Literature Review

A systematic review of existing research literature was conducted to help identify indicators of translational research impact. Google Scholar and PubMed databases were searched using the following terms: translational research, clinical and translational research, biomedical, measure, measurement, outcome, impact, research impact, and interdisciplinary. Articles were restricted to those published between January 1, 1998, and December 31, 2012, to help ensure relevancy to the contemporary understanding of translational research. We excluded articles that assessed research impact solely on the basis of bibliographic considerations, for example, the impact factor of published research articles (e.g., Rosas, Kagan, Schouten, Slack, & Trochim, 2011; Sypsa & Hatzakis, 2009).

Nearly 100 articles matched one or more of these criteria. Two investigators reviewed each article and compiled a subset of 26 articles that were appraised as focusing most directly on the measurement of research impact in the biomedical and translational sciences. A listing of these articles is provided in Table 1. Some of the articles involved empirical studies estimating research impact (e.g., RAND Europe, 2006), while other articles described general approaches and conceptual models relevant to the topic (e.g., Weiss, 2007). Among those 26 articles, 79 potential indicators and measures of research impact were reported.

Table 1.

Articles Mentioning Potential Translational Research Impact Indicators.

Article citation | Indicator numbers
1. Aries and Sclar (1998) | 70, 71
2. Australian Research Council (2008) | 27, 31, 32, 36, 37, 38
3. Donaldson, Rutledge, and Ashley (2004) | 48, 54, 55
4. Dougherty and Conway (2008) | 25, 46, 54, 55, 57, 60, 69
5. Grant, Cottrell, Cluzeau, and Fawcett (2000) | 54, 57
6. Hanney, Grant, Wooding, and Buxton (2004) | 5, 29, 38, 49, 52, 57, 70
7. Haynes and Haines (1998) | 50, 57, 67
8. Heller and de Melo-Martin (2009) | 1, 2, 5, 6, 7, 11, 13, 16, 18, 19, 47, 73, 74
9. Kalucy, McIntyre, Jackson-Bowers, and Reed (2009) | 29, 31, 37, 40, 42, 67, 70
10. Kessler and Glasgow (2011) | 18, 21, 22, 60
11. Kuruvilla, Mays, Pleasant, and Walt (2006) | 1, 3, 5, 11, 14, 15, 17, 21, 23, 24, 32, 34, 35, 38, 39, 40, 41, 42, 47, 50, 51, 52, 54, 56, 57, 63, 66, 68, 69, 70, 73, 75, 77, 78, 79
12. Lavis, Ross, McLeod, and Gildiner (2003) | 26, 27, 28, 29, 30, 31, 32, 33, 34
13. Lewison (2003) | 26, 32, 35, 37, 39, 40, 44, 52, 55, 56, 57, 62
14. Mankoff, Brander, Ferrone, and Marincola (2004) | 4, 16, 18, 28, 31, 47, 49, 52, 58
15. Nathan (2002) | 11, 12
16. Pang et al. (2003) | 24, 31, 35, 40, 47, 57, 67, 68
17. Pober, Neuhauser, and Pober (2001) | 8, 9, 12, 13, 14, 18, 19, 43, 47, 49, 52, 78
18. RAND Europe (2006) | 36, 38, 70
19. Sarli, Dubinsky, and Holmes (2010) | Extensive list (>40 indicators)
20. Sung et al. (2003) | 2, 7, 8, 13, 17, 18, 49, 52
21. Trochim, Kane, Graham, and Pincus (2011) | 29, 30, 47, 57
22. Trochim, Marcus, Masse, Moser, and Weld (2008) | 1, 3, 4, 5, 12, 22, 25, 44
23. Weiss (2007) | 1, 10, 12, 21, 22, 27, 35, 36, 38, 40, 42, 47, 55, 61, 63, 69, 70, 76, 77, 78
24. Westfall, Mold, and Fagnan (2007) | 20, 47, 54
25. Woolf (2008) | 30, 34, 43, 44, 45, 56, 59, 72, 74
26. Zerhouni (2007) | 33, 43, 58, 73

Each potential indicator was categorized into one of the three broad research impact domains (research-related impacts, translational impacts, and societal impacts) divided into nine more specific subdomains (Table 2). Identification of these domains and subdomains was influenced heavily by the health research impact framework developed by Kuruvilla, Mays, Pleasant, and Walt (2006) and similar efforts from the Becker Library at Washington University (Sarli, Dubinsky, & Holmes, 2010).
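
The sketch below encodes this organizational scheme (three domains, nine subdomains, and the 79 candidate indicator numbers listed in Table 2) as a simple mapping. The variable name and the use of Python ranges are assumptions for illustration.

```python
# Organizational structure of the TRIS candidate indicators (see Tables 2 and 3):
# three domains, nine subdomains, and the 1-79 indicator numbers assigned to each.
TRIS_STRUCTURE = {
    "Research-related impacts": {
        "Research direction and resources (RDR)": range(1, 7),
        "Research management and conduct (RMC)": range(7, 21),
        "Research methods (RM)": range(21, 26),
        "Research results (RR)": range(26, 35),
        "Research dissemination (RD)": range(35, 43),
    },
    "Translational impacts": {
        "Translational impacts (TI)": range(43, 61),
    },
    "Societal impacts": {
        "Policy development (PD)": range(61, 68),
        "Community improvement (CI)": range(68, 76),
        "Consumer resources and behavior (CRB)": range(76, 80),
    },
}

# Sanity check: the nine subdomains together cover indicators 1-79 exactly once.
all_numbers = sorted(i for domain in TRIS_STRUCTURE.values() for rng in domain.values() for i in rng)
assert all_numbers == list(range(1, 80))
```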

Table 2.

Translational Research Impact Scale: Domains, Subdomains, and Impact Indicators.

Impact indicator number and brief description of impact indicator

Domain 1. Research-related impacts
Subdomain 1: Research direction and resources (RDR)
  RDR 1: Research needs are identified, such as gaps in research, unanswered questions, or identification of new areas for investigation
  RDR 2: New methods are developed for collecting and storing data and/or building new database systems
  RDR 3: New research methods or techniques are developed or extended
  (RDR 4)a: Translational research concepts, definitions, and terminology are better defined or elucidated (e.g., the nomenclature of T1–T4 to describe phases of translational research)
  RDR 5: Effective new research networks and collaborations are formed
  RDR 6: Qualified research personnel are recruited
Subdomain 2: Research management and conduct (RMC)
  RMC 7: Improved methods for recruitment of study participants are developed and/or implemented
  RMC 8: IRB processes are improved (e.g., shortening the time to approval, ease of completion, and appropriateness of decisions)
  RMC 9: New research projects and teams are formed
  RMC 10: Translational research studies are successfully completed
  (RMC 11)a: Improved retention and continuity of research teams is achieved
  RMC 12: Improvements in the quality or quantity of teamwork across disciplines occur
  RMC 13: Process efficiencies in the conduct of research are achieved
  RMC 14: Researchers or staff are recognized for leadership in the field
  RMC 15: A particular translational study or the research unit receives an award
  RMC 16: Internal communication among individual researchers and among research units is improved
  RMC 17: More of the unit’s researchers and staff serve on regional or national research organizations (e.g., NIH study sections)
  RMC 18: Significant positive changes in the research environment and culture occur
  RMC 19: Barriers in the research process are identified and strategies for overcoming those barriers are developed
  RMC 20: There is an increased emphasis on conducting practice-based research (in “real-world” clinical settings)
Subdomain 3: Research methods (RM)
  RM 21: New analytical techniques are developed
  RM 22: New or improved mixed-methods approaches are used
  RM 23: New or improved methods for conducting studies spanning multiple disciplines are established
  RM 24: New methods of synthesizing results from varying disciplines are implemented
  RM 25: New ways of measuring outcomes from translational research studies are adopted
Subdomain 4: Research results (RR)
  RR 26: The number and rate of grant submissions among translational researchers increase
  RR 27: The number and rate of grant awards among translational researchers increase
  RR 28: Novel or innovative discoveries are made
  RR 29: New scientific knowledge and/or techniques result
  RR 30: Efficacy of a new treatment (or a new application of an established treatment) is demonstrated
  RR 31: New medical or research devices or products are developed
  RR 32: Patents for a biomedical device or product are obtained
  RR 33: New biomarkers to accelerate delivery of health care are identified and/or validated
  RR 34: A discovery (e.g., drug, device, or measurement instrument) is brought to market
Subdomain 5: Research dissemination (RD)
  RD 35: The number of publications authored or coauthored by translational researchers increases
  RD 36: The average impact factor of journal articles authored or coauthored by translational researchers increases
  RD 37: The participation of translational researchers in making presentations at national conferences increases
  RD 38: The total number of citations for articles authored or coauthored by the unit’s researchers increases
  RD 39: Media coverage for articles or projects involving the unit’s researchers increases
  RD 40: There is greater diffusion of knowledge and techniques in a broad context (e.g., in community-based health care practice)
  (RD 41)a: More of the unit’s researchers serve as journal editors or members of journal editorial boards
  RD 42: Scientific advances by a unit’s researchers (e.g., study results) reach a broader array of audiences
Domain 2. Translational impacts (TI)
Subdomain 6: Translational impacts (TI)
  TI 43: Findings and discoveries from bench science are incorporated into studies involving animals or humans
  TI 44: Findings and discoveries from clinical trials are incorporated into clinical guidelines, or otherwise accepted as good medical practice
  TI 45: Improvements in the delivery of effective and efficient health care services are made
  TI 46: The effectiveness of different treatment and intervention choices is determined in a manner that is useful for clinicians
  TI 47: Improvements result in better quality of patient care
  TI 48: The incidence of medical errors decreases
  TI 49: Increased training in translational research methods occurs among health care providers and support personnel
  TI 50: Health information technologies are enhanced
  TI 51: Techniques are put into place that effectively constrain costs and enhance cost effectiveness
  TI 52: Translational research training development increases
  TI 53: New prevention techniques are incorporated into medical practice
  TI 54: Medical practice becomes more evidence based
  TI 55: Patient outcomes improve
  TI 56: Positive health behaviors expand (within particular populations)
  TI 57: Clinical guidelines are developed and promulgated
  TI 58: Advancements in personalized health care are made based upon genetic sequencing
  TI 59: The patient-clinician relationship is demonstrably strengthened
  TI 60: The translation of research findings into clinical practice and outcomes accelerates
Domain 3. Societal impacts (SI)
Subdomain 7: Policy development (PD)
  (PD 61)a: Policies and procedures for protection of human subjects are strengthened
  (PD 62)a: There is improved regulation of new technologies and devices
  PD 63: Clinical and translational science is conducted consistent with recognized ethical principles
  PD 64: Community-based health programs are developed
  (PD 65)a: There are improved reimbursement practices and policies for providers
  PD 66: Self-efficacy and empowerment among health care consumers increase
  PD 67: Research findings help to inform the decision-making process and policy development
Subdomain 8: Community improvement (CI)
  CI 68: Improvements are made in health literacy of consumers and specific populations
  CI 69: The health status of consumers and specific populations is improved
  CI 70: Economic benefits are obtained (e.g., greater employment opportunities, medical cost savings)
  (CI 71)a: Job growth and development expand in a particular area or region
  CI 72: Effective public health initiatives are established in a particular area or region
  CI 73: Disparities in health and the provision of health care are reduced
  CI 74: Organizational coordination raises awareness and stimulates enhanced community programming concerning health issues
  CI 75: New knowledge regarding sustainable development helps to promote and protect the health of communities
Subdomain 9: Consumer resources and behavior (CRB)
  CRB 76: Consumer understanding and support of translational research increase
  CRB 77: Consumers feel more empowered about health issues and their ability to effect change
  CRB 78: There is improved communication and understanding about health risks among consumers
  CRB 79: Health education and health literacy expand

Note. IRB = institutional review board; NIH = National Institutes of Health.

a Indicators not included in the final indicator set.

Validation Process

An initial set of 79 potential research impact indicators was identified through the systematic literature review, as described above. The aim of the expert panel process was to help confirm the selection of the indicator set as appropriate markers of translational research impact. A solicitation for participation in an expert panel was sent to 72 senior researchers in the field of clinical and translational research, representing a variety of perspectives along the scientific translational continuum from T1 to T4 research. All panel members were associated with one research university that is funded by a CTSA from NIH. Thirty-one investigators agreed to participate in the validation process. Information regarding the characteristics of the expert panel members can be accessed at http://cph.osu.edu/sites/default/files/docs/TRIS_Expert_Panel_Members.pdf

Several group meetings were held at which the panel members were presented with each of the 79 indicators and asked to assess the extent to which they agreed that each indicator appropriately measured the target construct (i.e., the impact of translational research). Panelists were instructed to provide their answers independently of one another. Responses were recorded using a 5-point ordinal scale (strongly agree, agree, neutral, disagree, strongly disagree, with 1 = strongly agree and 5 = strongly disagree). A statistical summary of the means and standard deviations of the panel members’ responses was compiled. We established a decision criterion under which more than 50% of the expert panel participants must have responded either strongly agree or agree in order for an indicator to be retained in the final indicator set.
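
A minimal sketch of this retention rule is shown below, assuming the ratings are held in a panelists-by-indicators array; the function and variable names are hypothetical.

```python
import numpy as np

def retained_indicators(ratings: np.ndarray) -> list[int]:
    """Return 1-based indicator numbers kept under the majority-agreement rule.

    ratings: hypothetical array of shape (n_panelists, n_indicators) holding
    responses on the 5-point scale (1 = strongly agree ... 5 = strongly disagree).
    An indicator is retained if more than 50% of panelists answered 1 or 2.
    """
    agreement_share = (ratings <= 2).mean(axis=0)   # proportion agreeing, per indicator
    return [i + 1 for i, share in enumerate(agreement_share) if share > 0.5]

# Illustrative use with simulated data sized like the TRIS panel (31 raters, 79 indicators).
rng = np.random.default_rng(0)
simulated = rng.integers(1, 6, size=(31, 79))
kept = retained_indicators(simulated)
```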

Additionally, in order to determine item weights, each panel member was asked to compare the importance of a particular item relative to the others, also on a 5-point ordinal scale (much more important than others, more important than others, equally important as others, less important than others, and not important, with 1 = much more important than others and 5 = not important). Item weights were determined only for the indicators that met the retention criterion specified above. The weight assigned to each indicator was calculated from how far the mean rating for that indicator deviated from the mean rating across all indicators on the weighting scale. For example, if the mean of a particular indicator was 2.2 and the aggregated mean weighting for all indicators was 2.5, then the weight assigned to that indicator was [(2.5 - 2.2)/2.5] + 1 = 1.12.
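
The sketch below implements this weighting rule as described, assuming the importance ratings are stored in a panelists-by-items array; taking the grand mean as the mean of the item means is one reasonable reading of "the mean for all indicators aggregated on the weighting scale."

```python
import numpy as np

def item_weights(importance: np.ndarray) -> np.ndarray:
    """Compute TRIS-style item weights from importance ratings.

    importance: hypothetical array of shape (n_panelists, n_items) with ratings
    on the 1-5 importance scale (1 = much more important than others).
    weight_i = (grand_mean - mean_i) / grand_mean + 1, so items rated as more
    important than average (lower mean) receive weights greater than 1.
    """
    item_means = importance.mean(axis=0)
    grand_mean = item_means.mean()
    return (grand_mean - item_means) / grand_mean + 1.0

# Worked example matching the text: an item mean of 2.2 against a grand mean of 2.5
# yields a weight of (2.5 - 2.2) / 2.5 + 1 = 1.12.
```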

Reliability Determination

The reliability of these responses was estimated using a variety of approaches, including calculation of coefficient α (Cronbach, 1951), ordinal α and ordinal ω (Gugiu, Coryn, & Applegate, 2010; Zumbo, Gadermann, & Zeisser, 2007), and the nonparametric bootstrap split-half reliability technique (Gugiu, 2011). Reliability estimates were computed for both the initial construct validation process and the item weighting process. Several reliability estimation techniques were employed to ensure that results were generally consistent across testing methods. Ordinal α and ordinal ω are appropriate in this context, owing to our use of ordinal scales. However, those methods assume that bivariate normality exists between each pair of items. Because testing this assumption is not easy and requires further assumptions, a nonparametric reliability estimator (the bootstrap/Spearman test) was employed along with Cronbach’s α (typically used for continuous data) as additional comparisons. Findings among the various techniques were quite similar. The reliability calculations were performed using SAS statistical software, version 9.3.
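
For concreteness, the sketch below shows two of these estimators applied to a panelists-by-items rating matrix: Cronbach's α via the standard formula, and a simple bootstrap split-half estimate using a Spearman correlation with a Spearman-Brown correction. It is only an illustration of the general techniques named above, not the exact procedures of the cited sources, and the ordinal α and ω estimators (which require polychoric correlations) are omitted.

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(x: np.ndarray) -> float:
    """Cronbach's alpha for a (raters x items) matrix, using the standard formula."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def bootstrap_split_half(x: np.ndarray, n_boot: int = 1000, seed: int = 0) -> float:
    """Illustrative bootstrap split-half reliability with a Spearman correlation.

    Each iteration resamples raters with replacement, randomly splits the items
    into two halves, correlates the half scores (Spearman), and applies the
    Spearman-Brown correction; the estimates are then averaged.
    """
    rng = np.random.default_rng(seed)
    n, k = x.shape
    estimates = []
    for _ in range(n_boot):
        sample = x[rng.integers(0, n, size=n)]           # resample raters
        cols = rng.permutation(k)
        half_a = sample[:, cols[: k // 2]].sum(axis=1)
        half_b = sample[:, cols[k // 2:]].sum(axis=1)
        r, _ = spearmanr(half_a, half_b)
        estimates.append(2 * r / (1 + r))                # Spearman-Brown step-up
    return float(np.mean(estimates))
```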

Results

Overall, the average rating per indicator was 1.98 on the ordinal scale (with 1.0 being the strongest level of agreement). Of the 79 proposed indicators, 7 (#4, 11, 41, 61, 62, 65, and 71) failed to meet the inclusion criterion and were thus dropped from further consideration as suitable indicators of translational research impact. The average item weighting value among indicators was 2.59 on the 5-point ordinal scale comparing perceived differences in importance among items.

For both the construct validation and item weighting processes, the degree of reliability for expert panel agreement was generally high, with estimates within the three domains ranging from .87 (based on the Bootstrap/Spearman test) to .94 (based on the Ordinal ω test) for the construct validation process (Table 3), and between .88 (based on the Cronbach’s α test) and .96 (based on the Ordinal ω test) for the item weighting process (Table 4). Estimates of reliability for the degree of expert panel agreement in the nine subdomains for the construct validation process all exceeded .66 (based on the Bootstrap/Spearman test) and were greater than .75 in five of those subdomains. Similar results were observed in the reliability estimates of expert panel agreement for the item weighting process, with estimates greater than .71 in all subdomains except the Research Direction and Resources subdomain, which had reliability estimates ranging from .52 to .63, depending on the test used.

Table 3.

Reliability Estimates for Item Inclusion Process.

Variables (items) | Cronbach’s α | Ordinal α | Ordinal ω | Bootstrap (Spearman)
Domain 1. Research-related impacts (RRI), Q1–Q42 (42 items) | .9212 | .9410 | .9442 | .9245
  Research direction and resources (RDR), Q1–Q6 (6) | .7898 | .8367 | .8485 | .7890
  Research management and conduct (RMC), Q7–Q20 (14) | .8469 | .8858 | .8915 | .8549
  Research methods (RM), Q21–Q25 (5) | .7494 | .8135 | .8302 | .7810
  Research results (RR), Q26–Q34 (9) | .6680 | .7681 | .7913 | .6844
  Research dissemination (RD), Q35–Q42 (8) | .6746 | .7245 | .7444 | .7253
Domain 2. Translational impacts, Q43–Q60 (18 items) | .8861 | .9191 | .9276 | .9004
  Translational impacts (TI), Q43–Q60 (18) | .8861 | .9191 | .9276 | .9004
Domain 3. Societal impacts, Q61–Q79 (19 items) | .9136 | .9317 | .9397 | .8695
  Policy development (PD), Q61–Q67 (7) | .6841 | .7372 | .7400 | .6613
  Community improvement (CI), Q68–Q75 (8) | .9048 | .9297 | .9343 | .8884
  Consumer resources and behavior (CRB), Q76–Q79 (4) | .7509 | .8056 | .8337 | .6814
Overall, Q1–Q79 (79 items) | .9545 | .9659 | .9671 | .9442

Table 4.

Reliability Estimates for Item Weighting Process.

Variables (items) | Cronbach’s α | Ordinal α | Ordinal ω | Bootstrap (Spearman)
Domain 1. Research-related impacts (RRI), Q1–Q42 (42 items) | .9129 | .9247 | .9248 | .9020
  Research direction and resources (RDR), Q1–Q6 (6) | .5478 | .5645 | .6317 | .5170
  Research management and conduct (RMC), Q7–Q20 (14) | .8480 | .8715 | .8757 | .8531
  Research methods (RM), Q21–Q25 (5) | .8404 | .8889 | .8997 | .8008
  Research results (RR), Q26–Q34 (9) | .7251 | .7742 | .7705 | .7106
  Research dissemination (RD), Q35–Q42 (8) | .7808 | .8138 | .8439 | .7870
Domain 2. Translational impacts, Q43–Q60 (18 items) | .8830 | .9130 | .9250 | .8976
  Translational impacts (TI), Q43–Q60 (18) | .8830 | .9130 | .9250 | .8976
Domain 3. Societal impacts, Q61–Q79 (19 items) | .9416 | .9531 | .9559 | .9272
  Policy development (PD), Q61–Q67 (7) | .8350 | .8590 | .8704 | .7879
  Community improvement (CI), Q68–Q75 (8) | .9140 | .9366 | .9404 | .9093
  Consumer resources and behavior (CRB), Q76–Q79 (4) | .8609 | .8991 | .9098 | .8379
Overall, Q1–Q79 (79 items) | .9542 | .9623 | .9644 | .9595

Discussion

This study represents perhaps the first attempt to create a uniform standardized scale for measuring the impacts of translational research. As pressure mounts to justify and document research expenditures, it becomes increasingly important to use validated and customized metrics that capture the distinctive features of translational research. TRIS potentially allows for benchmarking and comparison of impacts among research institutions.

Concentrating on the development of weighted scales based on a uniform set of indicators provides a specific measurement approach that has been lacking in other attempts to measure and quantify translational research impacts. At this writing, the closest effort in this direction is the April 2012 CTSA National Evaluation Report prepared by Westat Inc. for NIH (Frechtling, Raue, Michie, Miyaoka, & Spiegelman, 2012). That evaluation used an assortment of inputs, including surveys of investigators, analysis of publications data, reviews of site annual reports, and field visits, to provide an overall assessment of progress by CTSA programs. The evaluation reported on quantifiable indicators of CTSA site processes and results but did not include any aggregate measure or score of CTSA performance. Importantly, the goal and orientation of that project are conceptually distinct from our attempt to create a scale by which to measure the ultimate impacts of translational research. While some domains are common to both projects (e.g., publications, grants, extent of collaboration, use of research resources), our development of the TRIS focuses directly on scoring the attainment of impact in various areas to calculate domain and composite scores that can be used as a basis for comparing performance and the ultimate impact of research efforts across research sites.

Our development of TRIS was motivated primarily by a desire to develop a uniform method for assessing translational research impacts across CTSA sites. However, we attempted to structure the measurement tool in a way that potentially could be adopted within many kinds of research organizations, including commercial biomedical research facilities, academic institutions without CTSA programs, and other appropriate settings.

The development of TRIS and the associated construct validity and reliability testing were based on a methodical analytical process consisting of a systematic literature review and expert panel rating to develop and refine the measurement tool. By contrast, the Westat report was broadly based but not designed to produce a particular set of quantified research impact indicators. That report does not specify the basis for selecting particular evaluation items, how many or what specific types of evaluation metrics were used, or how the separate results obtained from the evaluation process could be combined into a synthetic whole that could be used for comparisons and benchmarking among sites.

Challenges, Limitations, and Future Research

This study focuses on the initial development of TRIS, the selection of appropriate indicators for denoting construct validity, reliability assessment, and item weighting. These achievements represent the initial progress that has been made toward the larger goal of actually putting the TRIS into practice. However, more work is needed to reach that goal. Next steps will include further testing of the tool’s psychometric properties and the beginning of field testing. Field testing will be important in determining the ability to use available data to measure levels of attainment for each impact indicator. Also, the appropriate time frames for measuring impact need to be determined. For example, some impacts might be measurable in a relatively brief period of time (e.g., 1 or 2 years), while others might require a longer time frame to measure true impact in a particular area. To facilitate the generalizability of the measurements, it will be necessary to extend field testing and operationalizing of the TRIS scale across research sites that differ in their relative focus on T1 to T4 research activities.

Additionally, the measurement process for each indicator will need to be operationalized to yield a measure of impact attainment. One way of developing that system, for example, would involve assigning possible impact attainment criteria for each indicator, allowing the existing level of impact attainment to be classified into one of several values (e.g., on a Likert-type scale). Field studies will be helpful in determining the feasibility and cost of compiling those measurements. Unless the impacts can be measured and quantified easily, TRIS may fail to be used by research institutions. The next phase of this project, which will develop the requisite measurement processes, is expected to be completed by the end of 2014. Other CTSA sites have been approached about the possibility of collaborating in data collection and field testing of the instrument.
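
As a hedged sketch of where this is headed, the example below combines hypothetical attainment ratings with the item weights to produce a weighted subdomain, domain, or composite score. The Likert-style attainment scale and the weighted-mean aggregation are assumptions for illustration, not the finalized TRIS scoring algorithm.

```python
import numpy as np

def weighted_score(attainment: dict[int, float], weights: dict[int, float],
                   indicator_numbers: list[int]) -> float:
    """Weighted mean attainment over a set of indicator numbers.

    attainment: hypothetical Likert-style attainment ratings keyed by indicator number.
    weights: item weights from the expert-panel weighting process, keyed the same way.
    Indicators without an attainment rating are skipped.
    """
    nums = [i for i in indicator_numbers if i in attainment]
    if not nums:
        return float("nan")
    w = np.array([weights[i] for i in nums])
    a = np.array([attainment[i] for i in nums])
    return float((w * a).sum() / w.sum())

# Example (hypothetical data): a Translational impacts domain score over indicators 43-60,
# then a composite score over all retained indicators.
# domain_score = weighted_score(attainment, weights, list(range(43, 61)))
# composite_score = weighted_score(attainment, weights, retained_indicator_numbers)
```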

Our eventual goal is to develop an aggregate composite score for measuring research impact attainment across sites. In subsequent studies, we intend to explore the internal validity of the instrument; because of the small sample size, factor analysis was not a viable option for this pilot study. Once TRIS’ internal validity and reliability have been satisfactorily addressed, we will then investigate its criterion validity, using empirical data on impact attainment at various sites.

Based on our inclusion criteria and the ratings of the expert panelists, 7 of the 79 indicators originally identified (#4, 11, 41, 61, 62, 65, and 71) were dropped from the final indicator set. Four of the deleted items focused on policy development and societal impacts, which some panelists may have perceived as less discrete or measurable for assessing research impact. The item (#4) regarding clarifying the terminology used to describe the T1–T4 translational spectrum also was dropped, perhaps indicating some continuing confusion about that concept among researchers.

We believe that developing and testing a uniform process for measuring translational research impact is essential to the success of the CTSA program and, ultimately, to the broader translational research enterprise in the U.S. NIH and other sponsors of research need to see demonstrable results from research expenditures and tangible improvements in population and community health. TRIS is being developed as a standardized instrument to provide uniform impact metrics that allow for comparing results across sites and documenting accomplishments in a consistent way. Our progress to date sets the foundation for further refinement of the tool and field testing with empirical data to establish criterion validity and measurement feasibility.

Acknowledgments

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The project described was supported by Award Number 8UL1TR000090 from the National Center for Advancing Translational Sciences. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center for Advancing Translational Sciences or the National Institutes of Health.

Footnotes

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

  1. Aries N, Sclar E. The economic impact of biomedical research: A case study of voluntary institutions in the New York metropolitan region. Journal of Health Politics, Policy and Law. 1998;23:175–193. doi: 10.1215/03616878-23-1-175. [DOI] [PubMed] [Google Scholar]
  2. Australian Research Council. Excellence in research for Australia (ERA) initiative. Canberra, Australia: Author; 2008. Retrieved from http://www.arc.gov.au/pdf/ERA_ConsultationPaper.pdf. [Google Scholar]
  3. Collins FS. Protecting the future of U.S. biomedical research. 2012. Retrieved December 7, 2012, from http://www.nih.gov/about/director/12072012_statement_biomedicalresearch.htm.
  4. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334. [Google Scholar]
  5. Donaldson NE, Rutledge DN, Ashley J. Outcomes of adoption: Measuring evidence uptake by individuals and organizations. Worldviews on Evidence-Based Nursing. 2004;S1:S41–S51. doi: 10.1111/j.1524-475X.2004.04048.x. [DOI] [PubMed] [Google Scholar]
  6. Dougherty D, Conway P. The “3T’s” road map to transform U.S. health care: The “how” of high-quality care. Journal of American Medical Association. 2008;299:2319–2321. doi: 10.1001/jama.299.19.2319. [DOI] [PubMed] [Google Scholar]
  7. Frechtling J, Raue K, Michie J, Miyaoka A, Spiegelman M. The CTSA national evaluation final report. 2012. Retrieved April 3, 2012, from https://www.ctsacentral.org/sites/default/files/files/CTSANationalEval_FinalReport_20120416.pdf.
  8. Glasgow RE. Critical measurement issues in translational research. Research on Social Work Practice. 2009;19:560–568. [Google Scholar]
  9. Grant J, Cottrell R, Cluzeau F, Fawcett G. Evaluating ‘payback’ on biomedical research from papers cited in clinical guidelines: Applied bibliometric study. British Medical Journal. 2000;320:1107–1111. doi: 10.1136/bmj.320.7242.1107. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Gugiu PC. Summative confidence (Unpublished doctoral dissertation) Kalamazoo, MI: Western Michigan University; 2011. [Google Scholar]
  11. Gugiu PC, Coryn CLS, Applegate EB. Structure and measurement properties of the patient assessment of chronic illness care (PACIC) instrument. Journal of Evaluation in Clinical Practice. 2010;16:509–516. doi: 10.1111/j.1365-2753.2009.01151.x. [DOI] [PubMed] [Google Scholar]
  12. Hanney SR, Grant J, Wooding S, Buxton MJ. Proposed methods for reviewing the outcomes of health research: The impact of funding by the UK’s ‘Arthritis Research Campaign.’. Health Research Policy and Systems. 2004;2:4. doi: 10.1186/1478-4505-2-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Hayes H, Parchman ML, Howard RA. Logic model framework for evaluation and planning in a primary care practice-based research network (PBRN) Journal of the American Board of Family Medicine. 2011;24:576–582. doi: 10.3122/jabfm.2011.05.110043. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Haynes B, Haines A. Barriers and bridges to evidence based clinical practice. British Medical Journal. 1998;317:273–276. doi: 10.1136/bmj.317.7153.273. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Heller C, de Melo-Martin I. Clinical and translational science awards: Can they increase the efficiency and speed of clinical and translational research? Academic Medicine. 2009;84:424–432. doi: 10.1097/ACM.0b013e31819a7d81. [DOI] [PubMed] [Google Scholar]
  16. Kagan JM, Kane M, Quinlan KM, Rosas S, Trochim WM. Developing a conceptual framework for an evaluation system for the NIAID HPV/AIDS clinical trials networks. Health Research Policy and Systems. 2009;7:1–16. doi: 10.1186/1478-4505-7-12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Kalucy L, McIntyre E, Jackson-Bowers E, Reed R. Exploring the impact of primary health care research projects using the Payback Framework. Health Research Policy and Systems. 2009;7:11. doi: 10.1186/1478-4505-7-11. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Kessler R, Glasgow RE. A proposal to speed translation of healthcare research into practice; dramatic change is needed. American Journal of Preventive Medicine. 2011;40:637–644. doi: 10.1016/j.amepre.2011.02.023. [DOI] [PubMed] [Google Scholar]
  19. Khoury ML, Gwinn M, Yoon PW, Dowling N, Moore CA, Bradley L. The continuum of translation research in genomic medicine: How can we accelerate the appropriate integration of human genome discoveries into health care and disease prevention? Genetics in Medicine. 2007;9:665–674. doi: 10.1097/GIM.0b013e31815699d0. [DOI] [PubMed] [Google Scholar]
  20. Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: A research impact framework. BMC Health Services Research. 2006;6:134. doi: 10.1186/1472-6963-6-134. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Lavis J, Ross S, McLeod C, Gildiner A. Measuring the impact of health research. Journal of Health Services Research & Policy. 2003;8:165–170. doi: 10.1258/135581903322029520. [DOI] [PubMed] [Google Scholar]
  22. Lewison G. Beyond outputs: New measures of biomedical research impact. Aslib Proceedings: New Information Perspectives. 2003;55:32–42. [Google Scholar]
  23. Mankoff S, Brander C, Ferrone S, Marincola F. Commentary: Lost in translation: Obstacles to translational medicine. Journal of Translational Medicine. 2004;2:14. doi: 10.1186/1479-5876-2-14. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Nathan D. Commentary: Careers in translational clinical research—Historical perspectives, future challenges. The Journal of the American Medical Association. 2002;287:2424–2427. doi: 10.1001/jama.287.18.2424. [DOI] [PubMed] [Google Scholar]
  25. National Institutes of Health. Our economy. 2012 Retrieved from http://www.nih.gov/about/impact/economy.htm.
  26. Pang T, Sadana R, Hanney S, Bhutta Z, Hyder A, Simon J. Knowledge for better health—A conceptual framework and foundation for health research systems. Bulletin of the World Health Organization. 2003;81:815–820. [PMC free article] [PubMed] [Google Scholar]
  27. Pober JS, Neuhauser CS, Pober JM. Obstacles facing translational research in academic medical centers. The Federation of American Societies for Experimental Biology (FASEB) Journal. 2001;15:2303–2313. doi: 10.1096/fj.01-0540lsf. [DOI] [PubMed] [Google Scholar]
  28. Quinlan KM, Kane M, Trochim WM. Evaluation of large research initiatives: Outcomes, challenges, and methodological considerations. New Directions for Evaluation. 2008;118:61–72. [Google Scholar]
  29. RAND Europe. Measuring the benefits from research. Cambridge, England: Author; 2006. Retrieved from http://www.rand.org/Pubs/researchbriefs/2007/RAND-RB9202.pdf. [Google Scholar]
  30. Rosas SR, Kagan JM, Schouten JT, Slack PA, Trochim WMK. Evaluating research and impact: A bibliometric analysis of research by the NIH/NIAID HIV/AIDS clinical trials networks. PLoS ONE. 2011;6:e17428. doi: 10.1371/journal.pone.0017428. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Rubio DG, Schoenbaum EE, Lee LS, Schteingart DE, Marantz PR, Anderson KE, Esposito K. Defining translational research: Implications for training. Academic Medicine. 2010;85:470–475. doi: 10.1097/ACM.0b013e3181ccd618. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Sanders LM, Robinson TN, Forster LQ, Plax K, Brosco JP. Evidence-based community pediatrics: Building a bridge from bedside to neighborhood. Pediatrics. 2005;115:1142–1147. doi: 10.1542/peds.2004-2825H. [DOI] [PubMed] [Google Scholar]
  33. Sarli C, Dubinsky EK, Holmes K. Beyond citation analysis: A model for assessment of research impact. Journal of the Medical Library Association. 2010;98:17–23. doi: 10.3163/1536-5050.98.1.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Sypsa V, Hatzakis A. Assessing the impact of biomedical research in academic institutions of disparate sizes. BMC Medical Research Methodology. 2009;9:33. doi: 10.1186/1471-2288-9-33. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Sung NS, Crowley WF, Genel M, Salber P, Sandy L, Sherwood LM, Rimoin D. Central challenges facing the national clinical research enterprise. The Journal of the American Medical Association. 2003;289:1278–1287. doi: 10.1001/jama.289.10.1278. [DOI] [PubMed] [Google Scholar]
  36. Trochim W, Kane C, Graham M, Pincus H. Evaluating translational research: A process marker model. Clinical and Translational Science. 2011;4:153–162. doi: 10.1111/j.1752-8062.2011.00291.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Trochim WM, Marcus SE, Masse LC, Moser RP, Weld PC. The evaluation of large research initiatives: A participatory integrative mixed-methods approach. American Journal of Evaluation. 2008;29:8–28. [Google Scholar]
  38. Weiss A. Measuring the impact of medical research: Moving from outputs to outcomes. American Journal of Psychiatry. 2007;164:206–214. doi: 10.1176/ajp.2007.164.2.206. [DOI] [PubMed] [Google Scholar]
  39. Westfall J, Mold J, Fagnan L. Practice-based research—’Blue highways’ on the NIH roadmap. The Journal of the American Medical Association. 2007;297:403–406. doi: 10.1001/jama.297.4.403. [DOI] [PubMed] [Google Scholar]
  40. W. K. Kellogg Foundation. Logic model development guide. Battle Creek, MI: Author; 2004a. [Google Scholar]
  41. W. K. Kellogg Foundation. Evaluation handbook. Battle Creek, MI: Author; 2004b. [Google Scholar]
  42. Woolf S. The meaning of translational research and why it matters. The Journal of the American Medical Association. 2008;299:211–213. doi: 10.1001/jama.2007.26. [DOI] [PubMed] [Google Scholar]
  43. Zerhouni EA. Translational research: Moving discovery to practice. Clinical Pharmacology & Therapeutics. 2007;81:126–128. doi: 10.1038/sj.clpt.6100029. [DOI] [PubMed] [Google Scholar]
  44. Zumbo BD, Gadermann A, Zeisser C. Ordinal versions of coefficients alpha and theta for Likert rating scales. Journal of Modern Applied Statistical Methods. 2007;6:21–29. [Google Scholar]
