Author manuscript; available in PMC: 2016 Oct 1. Published in final edited form as: Acad Med. 2015 Oct;90(10):1302–1308. doi: 10.1097/ACM.0000000000000759

Evaluating Academic Scientists Collaborating in Team-Based Research: A Proposed Framework

Madhu Mazumdar 1, Shari Messinger 2, Dianne M Finkelstein 3, Judith D Goldberg 4, Christopher J Lindsell 5, Sally C Morton 6, Brad H Pollock 7, Mohammad H Rahbar 8, Leah J Welty 9, Robert A Parker 10; for the Biostatistics, Epidemiology, and Research Design (BERD) Key Function Committee of the Clinical and Translational Science Awards (CTSA) Consortium
PMCID: PMC4653084  NIHMSID: NIHMS702575  PMID: 25993282

Abstract

Criteria for evaluating faculty are traditionally based on a triad of scholarship, teaching, and service. Research scholarship is often measured by first or senior authorship on peer-reviewed scientific publications and being principal investigator on extramural grants. Yet scientific innovation increasingly requires collective rather than individual creativity, which traditional measures of achievement were not designed to capture and, thus, devalue. The authors propose a simple, flexible framework for evaluating team scientists that includes both quantitative and qualitative assessments. An approach for documenting contributions of team scientists in team-based scholarship, non-traditional education, and specialized service activities is also outlined. While biostatisticians are used for illustration, the approach is generalizable to team scientists in other disciplines.


The authors offer three key recommendations to members of institutional promotion committees, department chairs, and others evaluating team scientists. First, contributions to team-based scholarship and specialized contributions to education and service need to be assessed and given appropriate and substantial weight. Second, evaluations must be founded upon well-articulated criteria for assessing the stature and accomplishments of team scientists. Finally, mechanisms for collecting evaluative data must be developed and implemented at the institutional level. Without these three essentials, contributions of team scientists will continue to be undervalued in the academic environment.

Traditional criteria for evaluation of academic faculty evolved at a time when research was typically conducted by individuals or small groups. The complexity of today’s research often requires large interdisciplinary teams whose members possess complementary yet critical skills. The National Institutes of Health and other major research agencies promote team science,1 which often results in research projects having hundreds of contributors.2 Those responsible for judging academic productivity, including department chairs, institutional promotion committees, provosts, and deans, must learn how to evaluate performance in this increasingly complex framework. The classical metrics of being principal investigator (PI) on a grant or first/senior author on a paper can substantially devalue the contributions of a team scientist.3 New approaches have been suggested,4 and a quarter of academic health centers (AHCs) have revised their promotion guidelines to include emphasis on interdisciplinary team science.5 Yet AHCs lag in implementing solutions.6 We are aware of just one institution, Arizona State University, that has detailed faculty evaluation criteria for the interdisciplinary scientist. This system involves quantitative scoring and weighting across multiple domains.4,7 It took years to develop and required a new informatics tool for data collection; it demands knowledge of the field and is specific to the Art, Media, and Engineering School.8 Elsewhere, promotion decisions still rely heavily on qualitative recommendations in letters from internal and external evaluators.

The growing importance of collaboration in clinical and translational science, involving disciplines such as biostatistics, biomedical informatics, biochemistry, biophysics, bioengineering, and health economics, drives the need for more systematic evaluation criteria relevant to today’s scientific architecture. In general, team scientists bring specific skills and perspectives to projects and are often the only team members with their specialist expertise. Without such expertise, project teams would struggle with partial or suboptimal solutions. Because team scientists often engage in numerous collaborations, they may need to possess both a generalist’s breadth and a specialist’s knowledge. For example, when collaborative biostatisticians have an understanding of the underlying medical or biological domain, they can use their statistical expertise and creativity to select, apply, or even develop statistical techniques that fit the problem at hand. Although team scientists may not be lead authors or PIs, their contributions are critical to a project’s success.

Team scientists’ other scholarly activities may also differ from the conventional activities assessed by promotion committees. For example, team scientists frequently mentor or teach one-on-one, and provide ad hoc training and targeted lectures as required to build a collaborative program. Service can differ as well: whereas traditional scientists typically review a limited number of protocols within their own specialty areas, a team scientist with a specialized skill, such as a biostatistician or bioinformatician, may be called on to review every protocol that comes before a review committee, across a range of disciplines. Administrative roles, such as directing a scientific core facility (i.e., recruiting, training, supervising, evaluating, and retaining master’s- and junior doctoral-level faculty and staff), also demonstrate scholarship and leadership.

The need to better understand how to evaluate team scientists parallels the evolving recognition of clinician-educators in the educational mission of AHCs, with some institutions developing tracks and assessment criteria for such individuals.9,10 Because team scientists come from many disciplines, including the clinical, basic, and data sciences, the same criteria cannot apply to all of them.

The purpose of this article is therefore to propose a framework for developing evaluation criteria that can be applied broadly to appointment, promotion, and tenure decisions. We focus primarily on team scientists’ contributions to collaborative research, although we do discuss the non-traditional contributions of team scientists to education, service, and leadership. We emphasize that evidence of contributions should span the duration of a team scientist’s career, and that a representative sample of their collaborations be summarized to reflect the impact the team scientist has had on collaborative research. Our discussion uses the example of the biostatistician, which is perhaps one of the more common interdisciplinary team science roles; but the paradigm is generalizable to other team scientists. Although we focus on promotion, our remarks also pertain to appointment criteria. In this new era of team science, an appointment letter for a team scientist at any level should outline expectations for collaboration, criteria for success, and resources provided to achieve these goals.11

Considerations for Evaluation and Promotion

It is typical for academic institutions to have clear guidelines for evaluating academic career progression, such as the pathway from instructor to professor. Traditional pathways include explicit expectations that are difficult to achieve for all but lead or senior authors on publications or the PIs on grants. This presents a barrier to career development for team scientists. To enfranchise the team scientist, traditional criteria need to evolve.

Common to nearly all evaluation guidelines is the requirement to demonstrate evidence of independent contributions to a field, recognition of those contributions by leaders in the field, and ultimately leadership in the field. For a team scientist, independence and leadership are not necessarily demonstrated by being the PI; instead, they may be demonstrated by being the subject matter expert responsible for a disciplinary contribution to the overall project. Evaluation of team scientists therefore requires assessment of intellectual contributions to collaborative efforts as middle authors.12 As institutions develop guidelines reflecting the role of the team scientist, they should provide concrete examples of the kinds of milestones important for career progression within a team scientist’s pathway. Several institutions have provided biostatisticians with guidelines on how to obtain credit for grant funding as a co-investigator or for authorship on a publication for which the biostatistician is neither the lead nor the senior author.13,14

Suggested Criteria for Evaluating Team Scientists

Beyond an annotated curriculum vitae, the ubiquitous evaluation tool in academic medicine is the letter of reference. Whether internal or external, such letters provide an important delivery mechanism for evaluative commentary. They also provide careful and systematic documentation of the impact of a particular project on the field as well as the impact the team scientist has had on the program of collaborative research. We assert that while internal evaluators are primed to assess the impact of an individual’s contribution to a project, external references will frequently be possible because collaborative research often spans multiple institutions; in such cases, external evaluators should always be sought. When relying on letters of recommendation, there is a potential for a quid pro quo, with team members systematically supporting each other. To avoid this, we suggest that recommendations be sought from individuals on the team who would not benefit from such an arrangement. Such evaluators would typically be those above the rank of the person being evaluated, and ideally those already at the highest academic rank or tenured at their own institution. Keeping evaluations anonymous may also reduce the potential for biased evaluations.

Beyond letters of recommendation, we propose that it is paramount for a team scientist to clearly articulate his or her contributions to the overall success and impact of collaborative research. Accompanying the reference letters should be a clear personal statement, describing contributions in the typical triad of scholarship, teaching, and service. In the following exposition, we describe the range of activities that might be relevant to document and highlight for a biostatistician as an illustration, and how a department chair might obtain data on the scientific impact of an individual.

Academic scholarship

A biostatistician’s role in research encompasses four distinct phases of a project: design, implementation, analysis, and reporting. In Chart 1 (with four parts corresponding to these phases, adapted from our prior work),15 we offer a mechanism for classifying contributions. The evaluator is asked to classify the contribution to an activity as major, moderate, or minor by checking “yes” and to provide a qualitative statement describing the contribution. If applied consistently across multiple research projects, this approach could be used to develop a strong evaluation framework that provides quantifiable evidence of contribution and supportive text for a chair’s letter. We recommend that department chairs and/or evaluation committees collect such information from the PIs of major collaborative projects periodically throughout a team scientist’s career at the institution.

Chart 1.

Recommended Format for Assessment of Scientific Contributions for a Biostatisticiana

Type of activity | Check if yes | Sample comments
Design activities
Major: Substantive input into the overall design of the research protocol or grant application.
  • Convinced me that the aims did not hold together as a package, so we revised the aims along the lines suggested.

  • Changed the basic design of the study, reducing many of the feasibility concerns we had.

Moderate: Writing one or more specialist sections of the research protocol or grant application.
  • Wrote statistical material (sample size) and analysis plans.

  • Material was well integrated into the specific aims.

Minor: Overall critical review to sharpen a research protocol and grant application before submission, without major substantive changes.
  • Reviewed the whole grant we had prepared and suggested ways to make the grant more compelling.

  • Pointed out that some of the material was confusing and suggested ways to fix this and made the steps more actionable.

Implementation activities
Major: Regular (ongoing) participation in study meetings with the study team, including the principal investigator.
  • Provided input and comments on a range of issues including recruitment problems during the start-up phase, and a sudden increase in dropouts after one of the coordinators quit.

  • Worked with us through four substantive protocol amendments during the course of the study.

Moderate: Implementation of data collection, data management, and quality control activities.
  • Supervised the staff doing data entry and data management, and those that created our study database.

  • Prompted by the statistician’s query, the data management staff identified some issues while the study was on-going, leading to retraining all the coordinators.

Minor: Advising only on specific issues when requested by the principal investigator.
  • Gave us good advice when we were having recruitment issues.

  • Discussed problems maintaining high continuation rate.

Analysis activities
Major: Planning and directing the analyses.
  • Developed statistical analysis plan, supervised staff doing the analysis, planned subsequent analyses based on the results we had.

  • Developed the analysis plan for several spin-off papers from the project.

Moderate: Preparing written material summarizing the results of the analyses and/or preparing formal reports.
  • We received a very clear and comprehensive summary of the results, so it was straightforward to write the paper.

  • The summary prepared included some basic tables which we adopted for the manuscript.

Minor: Performing the analyses.b
  • Performed the analyses needed for the manuscript during our meeting.

  • Emailed us the analyses we had requested.

Manuscript reporting activities
Major: Substantive input into the overall organization and presentation of the manuscript.
  • Prepared the first draft of the results section, prepared the statistical methods section, dealt with reviewer issues about the analysis.

  • Convinced us to include specific supplementary tables which the reviewer commented as a strength of the paper.

Moderate: Preparation of statistical material for manuscript.
  • Wrote the statistical methods section.

  • Suggested ways to improve the presentation of results, including changing several of the primary tables.

Moderate: Assistance with preparation of rebuttal and resubmission.
  • After rejection, advised us on additional analyses and helped us prepare a submission to the journal, which published the paper.b

  • Helped us to respond to reviewer comments about the analysis which we had done.

Minor: Critical review of the manuscript.
  • Reviewed the final manuscript, helped clarify some of the results and the wording of the conclusions.

a Adapted in part from Parker RA, Berman NG. Criteria for authorship for statisticians in medical papers. Stat Med. 1998;17:2289-99.

b The extent of the analysis could raise this to a major contribution.
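For institutions that want to capture Chart 1 assessments electronically rather than on paper, each completed form could be stored as a small structured record. The following is a minimal sketch in Python; the ContributionAssessment class, its field names, and the validation rules are illustrative assumptions of ours, not part of Chart 1 itself.

```python
from dataclasses import dataclass, field
from typing import List

# Phases and levels follow Chart 1; everything else is illustrative.
PHASES = ("design", "implementation", "analysis", "reporting")
LEVELS = ("major", "moderate", "minor")

@dataclass
class ContributionAssessment:
    """One evaluator's Chart 1 assessment of a team scientist on one project."""
    project: str          # project or grant identifier
    evaluator: str        # usually the PI or lead author
    phase: str            # one of PHASES
    level: str            # one of LEVELS
    comments: List[str] = field(default_factory=list)  # qualitative statements

    def __post_init__(self) -> None:
        if self.phase not in PHASES:
            raise ValueError(f"unknown phase: {self.phase}")
        if self.level not in LEVELS:
            raise ValueError(f"unknown level: {self.level}")

# Example record, mirroring a sample comment from Chart 1.
example = ContributionAssessment(
    project="R01 cohort study",
    evaluator="PI, Department of Medicine",
    phase="design",
    level="major",
    comments=["Changed the basic design of the study, reducing feasibility concerns."],
)
```

A chair's office could then filter such records by phase or level when assembling the quantifiable evidence and supportive text described above.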

Contribution to publications

The intellectual contribution of a team scientist may be integral to the scientific findings, but the team scientist is typically neither the first nor the senior author on resulting publications.13 Metrics for the team scientist should include credit for contributions to papers at appropriate levels (Chart 1, manuscript reporting activities and relevant material from design, implementation, and analysis activities). When assessing a biostatistician’s contribution to a publication, evaluators should appropriately weigh various aspects of the study. We suggest that “major” intellectual contributions at two or more stages—for example in the design and analysis of a study—should clearly be counted as consistent with first or senior author contributions. In the biostatistician’s case, we believe that planning and directing the analysis with subsequent contributions to manuscript preparation is evidence of independent contribution to a field of research.

Some interpretation of the contribution is inevitable. For example, when a biostatistician is brought into a project to assist only with a rebuttal to peer review, the perceived contribution may be minor, but the intellectual contribution may be essential for the manuscript to be successful. It is imperative that any quantitative expression of contribution be accompanied by supporting evidence in the form of qualitative statements from both the external evaluator and the faculty member being evaluated.

Contribution to grantsmanship

Team scientists are essential contributors to extramurally funded research, but they are rarely named as PIs. A team scientist’s contributions must be evaluated based on the independence of their involvement and the extent of scientific leadership required for their contributions. Typical activities in grant preparation for a team scientist include preparation of one or more specialist sections of the grant application (Chart 1, design activities). Substantive input to the overall design of the study implies a major scientific contribution over and above that of the team scientist’s specialty, and provides evidence of scientific leadership in the research project as well as technical excellence in the team scientist’s specialty. We note, however, that while the team scientist’s individual contribution might be outstanding, if the grant fails to be funded, the team scientist will be penalized similarly to the PI. We recommend that special efforts be made to collect evaluation material describing the team scientist’s contribution to the application regardless of whether it is funded.

Contribution to programs of research

Many collaborative research programs span an extensive period of time. It is not unusual for a single project to last many years; some have lasted for decades. Thus, a team scientist may join an ongoing project, such as an epidemiological cohort study, successfully guide the study for multiple years, but transition into another role before the study is closed. It is important, therefore, to include contributions to study implementation in a team scientist’s portfolio (Chart 1, implementation activities). Although not typically recognized as an intellectual contribution, keeping a study running well and consistent with its design is an essential contribution. We believe that such contributions should be considered supplementary to the more traditional contributions. Moreover, a team scientist can demonstrate scientific leadership if it becomes necessary to modify or amend the project during the course of its implementation to ensure scientific integrity.

Ways to obtain evaluations

For team scientists working with multiple investigators across many programs, we recommend that evaluation metrics such as those in Chart 1 describing the contribution to a grant application be collected when the grant is submitted. Comments about the success or impact of the team scientist’s role should also be abstracted from the grant reviews. We recommend this information be obtained either from the PIs of a representative sample of grants, or systematically for all larger grants, depending on the team scientist’s focus. Evaluation of contributions to publications, particularly analysis and reporting as described in Chart 1, should be obtained from the lead or senior author when manuscripts are accepted for publication. For team scientists contributing to large projects, we recommend that evaluative data be collected at least annually from the lead investigator.
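As a purely illustrative convenience, the collection points described above could be encoded as a simple lookup so that requests for evaluations go out automatically; the event names and wording below are our own assumptions, not institutional requirements.

```python
# Illustrative trigger points for requesting evaluations (names are ours).
EVALUATION_TRIGGERS = {
    "grant_submitted": "ask the PI for a Chart 1 rating of design contributions; "
                       "abstract relevant reviewer comments once reviews arrive",
    "manuscript_accepted": "ask the lead or senior author to rate analysis and "
                           "reporting contributions",
    "annual_review": "ask lead investigators of large ongoing projects to rate "
                     "implementation contributions",
}

def evaluation_request(event: str) -> str:
    """Return the evaluation request associated with a collection event."""
    return EVALUATION_TRIGGERS.get(event, "no evaluation scheduled for this event")

print(evaluation_request("manuscript_accepted"))
```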

Using evaluations to assess academic scholarship

We assert that there is no absolute criterion for judging scholarship. If a team scientist’s contributions are considered major by collaborators across a number of evaluators and a number of years, this is substantive evidence of scientific excellence. Conversely, if collaborators almost never consider contributions to be more than moderate, and most are considered minor, then we would hesitate to recommend promotion. When a team scientist is neither outstanding nor poorly performing, we suggest weighing the evidence based on the commentary and its sources. A contribution listed as minor but noted to have changed the course of the research may be weighted more heavily than a major contribution that appears to be routine. Similarly, comments from scientists with a long history of collaborative research may be more meaningful than comments from junior researchers, because senior collaborators have a broader base of experience against which to judge the team scientist.
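To illustrate how such ratings might be tallied before the qualitative commentary is weighed, the sketch below condenses a series of major/moderate/minor ratings into a rough signal; the 50% cut points and the wording of the output are our own assumptions, not thresholds proposed in this article.

```python
from collections import Counter
from typing import Iterable

def summarize_ratings(levels: Iterable[str]) -> str:
    """Condense a series of major/moderate/minor ratings into a rough signal.

    This is only a starting point for discussion; the accompanying commentary,
    as the text emphasizes, can outweigh the raw counts.
    """
    counts = Counter(levels)
    total = sum(counts.values()) or 1
    if counts["major"] / total >= 0.5:
        return "consistent major contributions: strong evidence of excellence"
    if counts["minor"] / total >= 0.5 and counts["major"] == 0:
        return "mostly minor contributions: promotion case needs more support"
    return "mixed record: weigh the commentary and its sources"

# Example: ratings gathered from several PIs over several years.
print(summarize_ratings(["major", "major", "moderate", "minor", "major"]))
```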

Teaching

Many team scientists do not teach formal courses with accompanying student evaluations. Instead, ad hoc lectures, workshops, and training programs are the norm. Regular participation in clinical journal clubs, in scientific review committees, and in research projects all provide nontraditional opportunities for team scientists to educate others.16 In addition, team mentoring typically engages team scientists as secondary mentors, a contribution that is essential for the development of the next generation of investigators.

Chart 2 lists some of the teaching opportunities for a team scientist. Included is our assessment of whether contributions might be gauged as major, moderate, or minor, and what additional information would aid evaluation. We recommend that the evaluator contact the activity leader (e.g., the coordinator of a seminar series or journal club in which a team scientist has participated) for information about how the scientist contributed to teaching and mentoring. Useful questions include: Did participants feel that they learned something from the team scientist? Would you want the team scientist to participate again? To complement this feedback, we suggest that mentees’ primary mentors be contacted for their appraisal of the degree and effect of the secondary mentor’s contributions.

Chart 2.

Recommended Format for Assessment of Teaching and Service Activities for a Biostatistician

Type of activity | Check if yes | Additional information needed
Teaching activities
Major: Course organizer of a formal course (including CME courses) spanning a full semester. Additional information needed: title of course, syllabus, number of lectures, contact hours, formal course assessment.
Minor to major: Specific lectures as part of a formal course. Additional information needed: title of course, number of lectures, contact hours, appraisal from course director.
  Major: Multiple lectures adding up to at least 24 contact hours
  Moderate: 4–24 contact hours
  Minor: < 4 contact hours
Minor to major: Ad hoc lectures for a special audience. Additional information needed: specialty of the audience, title of course, number of lectures, contact hours, appraisal from the individual requesting the lectures. Importance of the activity depends on the number of lectures prepared specifically for the group.
  Major: Multiple lectures adding up to at least 24 contact hours
  Moderate: 4–24 contact hours
  Minor: < 4 contact hours
Minor to moderate: Participation as biostatistician in a regularly scheduled clinical journal club or meeting. Additional information needed: specialty of the group, number of sessions attended, appraisal from the leader of the club/meeting about impact.
  Moderate: 12 or more meetings/year
  Minor: < 12 meetings/year
Minor to major: Mentoring as part of a formal grant (e.g., K award). Additional information needed: mentee name, number of meetings in the past year, appraisal from the primary mentor.
  Major: Moderate contact with multiple (>3) mentees
  Moderate: 16 or more contact hours/year
  Minor: < 16 contact hours/year
Minor to moderate: Training during statistical collaboration. Additional information needed: project, number of sessions, areas discussed.
  Moderate: Significant teaching of statistical methods with a long-term collaborator (appraisal from collaborator desirable)
  Minor: Training during ad hoc collaboration
Service activities
Minor to major: Service on a legally required institutional committee (e.g., institutional review board, institutional animal care and use committee, radiation safety) or a national review committee (e.g., National Institutes of Health study section). Additional information needed: details of the activity determine the level of involvement and should be obtained from the chair of the committee.
  Major: Chair/vice-chair/primary statistical reviewer on all grants
  Moderate: Statistical reviewer (responsibility shared with other statisticians)
  Minor: Ad hoc reviewer of <10% of grants submitted to the committee
Minor to major: Service on a degree-granting committee (e.g., thesis defense committee)
  Major: Primary advisor
  Moderate: Co-advisor, sole statistician on committee
  Minor: Other supporting role
Minor to major: Service on an oversight committee (e.g., data safety monitoring board)
  Major: Chair/vice-chair/primary statistician
  Moderate: Secondary statistician as reviewer
  Minor: Ad hoc member of the committee
Minor to major: Other institutional committees (e.g., search committee, curriculum committee). Additional information needed: details of the activity determine the level of importance and should be obtained from the chair of the committee.a
Minor to major: Service to professional associations. Additional information needed: details of the activity determine the level of importance and should be obtained from the president of the association.a
Minor to major: Editorial service. Additional information needed: details of the activity determine the level of importance and should be obtained from the journal editor.a
a Importance depends on the role within the activity (member with or without voting rights, leadership role with formal designation, etc.), the importance of the activity itself, and the time commitment of the individual.
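Where Chart 2 rates teaching activities by contact hours, the cut points can be applied mechanically; the helper below is a minimal sketch assuming the 24- and 4-hour thresholds shown in the chart, with the function name being our own.

```python
def rate_lecture_activity(contact_hours: float) -> str:
    """Map annual lecture contact hours to Chart 2's major/moderate/minor bands."""
    if contact_hours >= 24:
        return "major"
    if contact_hours >= 4:
        return "moderate"
    return "minor"

assert rate_lecture_activity(30) == "major"
assert rate_lecture_activity(10) == "moderate"
assert rate_lecture_activity(2) == "minor"
```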

Service

Chart 2 lists different service activities that a team scientist might perform. Because of the dearth of team scientists engaged in collaborative research, their expertise may be in high demand and so service on scientific review and oversight committees can place a large burden on the team scientist. This is particularly true for biostatisticians who are increasingly called on to evaluate study design, analysis plans, and data management for all studies under review. We suggest that the evaluator contact the review committee chair to obtain specific details regarding the extent of the team scientist’s involvement and influence on committee decisions.

Leadership

Leadership is demonstrated by the influence the team scientist has with collaborators. Influence should manifest as major contributions in the design, conduct, analysis, or reporting of a study. Leadership can also manifest in teaching and in service. For example, when a team scientist commonly influences committee deliberations and decisions, it is an indication of being not only an effective reviewer but also an effective communicator and scientific thought leader. Similarly, a team scientist who is able to teach scientists outside his or her specialty about important topics relevant to the research program demonstrates leadership. We note that team scientists often lead core facilities that support multiple research programs. In evaluating a team scientist’s leadership, we contend that directing such a core may be considered equivalent to directing an investigator-initiated independent research award.

Discussion

In 2012, the Association of American Medical Colleges surveyed 126 AHCs about their faculty personnel policies.17 While tenure and promotion guidelines have been revised to include emphasis on interdisciplinary team science (in ~25% of AHCs), to broaden the definition of scholarship (in ~32% of AHCs), and to increase the relative weight of research (in ~13% of AHCs) and teaching (in ~22% of AHCs), evaluation of individual contributions to team science remains a challenging undertaking, especially when an individual’s role is to contribute expertise as a collaborator.

The evaluation of team scientists must keep pace with the rapidly evolving complexity of research programs. Beyond the traditional counts of publications, grants, courses, and committees, evaluations should include:

  • descriptions of contributions to manuscripts beyond authorship position;

  • a synthesis of reviews of grant proposals, specifically the parts where the team scientist’s contribution is mentioned;

  • specific comments from lead investigators about an individual’s contributions to papers and grants;

  • information from atypical sources about educational activities, such as training grant PIs, journal club leaders, and seminar series directors; and

  • assessments of service activities that recognize the level and importance of the specified activity.

The purpose of these evaluations is not to determine how much time an individual spends on an activity, but how much impact, independence, and leadership the team scientist demonstrates. Although a team scientist may spend hundreds of hours reading protocols for a review committee, if the reviews are routinely ignored by the rest of the committee, then the team scientist is not being effective. Demonstrating leadership and being able to work independently within the team setting are essential for career advancement.

It may be argued that our recommendations are relevant beyond the team scientist. While true, evaluating the impact, independence, and leadership of team scientists requires special attention. Internal and external reviewers responsible for evaluating a team scientist need specific direction for assigning value to a variety of contributions, and for weighting them according to the team scientist’s defined role. Because few individuals possess the full range of expertise and skills required to conduct clinical and translational research in today’s environment, team science is becoming the norm. For team science to flourish, institutions must acknowledge the stature and professional accomplishments of all contributors to the team effort. Institutions must create a clear process for academic career progression that recognizes these contributions and commit to a consistent framework for evaluation. Our suggestions provide a flexible framework for gathering the relevant quantitative and qualitative information that is needed to support the evaluation of a team scientist.

Acknowledgments

The authors wish to thank all the members of the Biostatistics, Epidemiology, and Research Design (BERD) Key Function Committee of the Clinical and Translational Science Awards (CTSA) Consortium. They are especially grateful to the following individuals, who provided comment and edits on drafts of this manuscript: Arlene Ash, PhD (University of Massachusetts Medical School), Rickey Carter, PhD (Mayo Clinic), Elizabeth Delong, PhD (Duke University School of Medicine), Erin Fox, PhD (University of Texas Health Science Center at Houston), Patrick Heagerty, PhD (University of Washington School of Public Health), Elizabeth Kopras (University of Cincinnati), Maurizio Macaluso, PhD (University of Cincinnati), Matthew S. Mayo, PhD (University of Kansas Medical Center), Robert Oster, PhD (University of Alabama School of Medicine), Paul J. Nietert, PhD (Medical University of South Carolina), Sowmya Rao, PhD (University of Massachusetts Medical School), Nawar Shara, PhD (Georgetown-Howard Universities), Heidi Spratt, PhD (University of Texas Medical Branch, Galveston), Yu Chang, PhD (Vanderbilt University Medical School), Arthur E. Blank, PhD (Albert Einstein College of Medicine), and Tim Carey, MD, MPH (University of North Carolina School of Medicine, Chapel Hill).

The authors thank the National Institutes of Health (NIH), and in particular, Laura Lee Johnson, PhD, of the National Center for Complementary and Alternative Medicine. The authors also thank Sarah A. Bunton, PhD, of the Association of American Medical Colleges, and Thanassis Rikakis, PhD, of the Carnegie Mellon University.

Funding/Support: This project was funded in whole or in part with federal funds from the National Center for Advancing Translational Sciences, the NIH, through the CTSA Program, part of the Roadmap Initiative, Re-Engineering the Clinical Research Enterprise. The article was approved by the CTSA Consortium Publications Committee.

NIH CTSA funding was awarded to the Weill Cornell Medical College (2UL1 TR000457-06), the University of Miami School of Medicine (UL1TR000460), the University of Massachusetts Medical School (UL1RR031982), Massachusetts General Hospital, Harvard Medical School (UL1 TR001102), the University of Texas Health Science Center at Houston (UL1TR000371), New York University School of Medicine (UL1TR000038), the University of Cincinnati Medical School (UL1 TR000077), the University of Pittsburgh Medical School (UL1RR024153, UL1TR000005), the University of Texas Health Science Center at San Antonio (UL1 TR000149, previously UL1 RR025767), Georgetown-Howard Universities Center for Clinical and Translational Science (Grant # UL1TR000101, previously UL1RR031975), the University of Texas Medical Branch, Galveston (UL1TR000071), Northwestern University Feinberg School of Medicine (ULT1R000150), and the University of Michigan Medical School (UL1TR000433).

Footnotes

Other disclosures: None reported.

Ethical approval: Reported as not applicable.

Contributor Information

Madhu Mazumdar, Division of Biostatistics and Epidemiology, Department of Public Health, Weill Cornell Medical College, and director, Research Design and Biostatistics Core, Clinical and Translational Sciences Center, New York, New York, at the time this article was written. She is director, Institute of Healthcare Delivery Science, Mount Sinai Health System, New York, New York.

Shari Messinger, Division of Biostatistics, Department of Public Health Sciences, University of Miami Miller School of Medicine, and director, Research Design and Biostatistics Core, Miami Clinical and Translational Sciences Institute, Miami, Florida.

Dianne M. Finkelstein, Harvard Medical School, and Department of Biostatistics, Harvard School of Public Health; and chief of biostatistics unit, Massachusetts General Hospital, Boston, Massachusetts.

Judith D. Goldberg, Departments of Population Health and Environmental Medicine, New York University School of Medicine, and director, Study Design, Biostatistics, and Clinical Research Ethics Program, NYU-HHC Clinical and Translational Science Institute, New York, New York.

Christopher J. Lindsell, Department of Emergency Medicine, associate dean for clinical research, College of Medicine, and co-director, Biostatistics, Epidemiology, and Research Design, Center for Clinical and Translational Science and Training, University of Cincinnati, Cincinnati, Ohio.

Sally C. Morton, Department of Biostatistics, Graduate School of Public Health, and director, Comparative Effectiveness Research Core, Clinical and Translational Sciences Institute, University of Pittsburgh, Pittsburgh, Pennsylvania.

Brad H. Pollock, Division of Clinical and Translational Sciences, Department of Internal Medicine; professor, Biostatistics, Epidemiology, and Research Design; and director, Informatics, Institute for the Integration of Medicine and Science, University of Texas Health Science Center, San Antonio, Texas, at the time this article was written. He is currently professor and chair, Department of Public Health Sciences, School of Medicine, University of California at Davis, Sacramento, California.

Mohammad H. Rahbar, Division of Clinical and Translational Sciences, Department of Internal Medicine; professor, Division of Epidemiology, Human Genetics and Environmental Sciences, University of Texas School of Public Health at Houston; and director, Biostatistics, Epidemiology, and Research Design, Center for Clinical and Translational Sciences, the University of Texas Health Science Center at Houston, Houston, Texas.

Leah J. Welty, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois.

Robert A. Parker, Biostatistics, Department of Biostatistics, University of Michigan; and Director, Biostatistics, Epidemiology, and Research Design core, Michigan Institute for Clinical and Health Research, at the time this article was written. He is currently director of Biometry, Medical Practice Evaluation Center, Massachusetts General Hospital, and associate professor of medicine (biostatistics), Harvard Medical School, Boston, Massachusetts.

References
