Journal of General Internal Medicine
. 2007 Oct 6;23(2):122–128. doi: 10.1007/s11606-007-0397-8

Effectiveness of a 1-Year Resident Training Program in Clinical Research: A Controlled Before-and-After Study

Bernd Löwe 1,2, Mechthild Hartmann 2, Beate Wild 2, Christoph Nikendei 2, Kurt Kroenke 3, Dorothea Niehoff 2, Peter Henningsen 4, Stephan Zipfel 5, Wolfgang Herzog 2
PMCID: PMC2359160  PMID: 17922168

Abstract

Background

To increase the number of clinician scientists and to improve research skills, a number of clinical research training programs have been recently established. However, controlled studies assessing their effectiveness are lacking.

Objective

To investigate the effectiveness of a 1-year resident training program in clinical research.

Design

Controlled before-and-after study. The training program included a weekly class in clinical research methods, completion of a research project, and mentorship.

Participants

Intervention subjects were 15 residents participating in the 1-year training program in clinical research. Control subjects were 22 residents not participating in the training program.

Measurements and Main Results

Assessments were performed at the beginning and end of the program. Outcomes included methodological research knowledge (multiple-choice progress test), self-assessed research competence, progress on publications and grant applications, and evaluation of the program using quantitative and qualitative methods.

Results

Intervention subjects and controls were well matched with respect to research experience (5.1 ± 2.2 vs 5.6 ± 5.8 years; p = .69). Methodological knowledge improved significantly more in the intervention group compared to the control group (effect size = 2.5; p < .001). Similarly, self-assessed research competence increased significantly more in the intervention group (effect size = 1.1; p = .01). At the end of the program, significantly more intervention subjects compared to controls were currently writing journal articles (87% vs 36%; p = .003). The intervention subjects evaluated the training program as highly valuable for becoming independent researchers.

Conclusions

A 1-year training program in clinical research can substantially increase research knowledge and productivity. The program design makes it feasible to implement in other academic settings.

KEY WORDS: medical education, curriculum, clinical research, evaluation studies, research training

INTRODUCTION

Clinical research is vital to ensure continuing advances in health care. However, in the United States as well as in Europe, medical education provides only limited training in clinical research, and a decline in the number of physician-scientists has been noted for at least 2 decades.1–7 Therefore, the National Institutes of Health has recently made substantial investments in training programs and career development awards for junior investigators.3,8 A wide range of clinical research programs is now accessible,9–11 including scaled-down programs feasible in settings where funding is not available.

A systematic review of resident research curricula showed some encouraging results regarding increased research productivity of the participants.12 However, in the 41 studies reviewed, evaluation methods were often rudimentary, only 5 studies reported pre-post intervention testing of learners’ knowledge, and no curriculum was evaluated as a prospective controlled study.12

A recent study in the German health care system confirmed that subjective research skills, objective research knowledge, and research productivity are relatively low among postgraduate physicians and psychologists.13 To improve this situation, a resident training program in clinical research was launched at the University Medical Center Heidelberg in 2005. It was established during residency to allow an early initiation into research training and to better integrate clinical and research training years.14 Given that clinical research requires the expertise of many kinds of investigators,8 the training program included both physicians and psychologists. A 1-year period was chosen to provide rigorous research training without excessive length.2

This study investigates the effectiveness of the structured 1-year resident training program in clinical research using a controlled before-and-after study design. In addition to the primary outcomes of research knowledge and productivity, participant evaluation of the training program was assessed.

METHODS

Subjects and Study Design

A 1-year training program in clinical research was conducted in the Department of Psychosomatic and General Internal Medicine at the University of Heidelberg from October 2005 to October 2006. To assess its effectiveness, we compared program participants to a control group from 2 similar university departments of Psychosomatic Medicine at the University of Tübingen and the Technical University of Munich. To achieve comparability among the study centers, we chose 3 medical schools with top rankings in research and teaching.15,16 In the largest German university ranking, which includes all 35 German medical schools, the medical faculties of Heidelberg, Munich, and Tübingen rank #2, #1, and #6, respectively.17 The medical schools of Heidelberg, Munich, and Tübingen are also similar in the number of students in the clinical section of their medical education (1,586, 1,337, and 1,522, respectively), the number of full professors for the clinical specialties (75, 76, and 72, respectively), annual research funding per full professor (438,000€, 458,000€, and 484,000€, respectively), and annual publications per full professor (19.8, 17.5, and 22.1, respectively).15 The study subjects included residents in internal medicine, psychotherapy and psychosomatics, psychiatry, and psychology. Residencies for both physicians and psychologists in Germany include an internship and last 5 to 6 years in total. The career paths for residents at the 3 participating universities are identical, and all 3 departments have a major focus on clinical research. Residents at university departments in Germany are expected to perform research, but the high priority of patient care constrains research productivity.

At the time of this study, structured training programs for clinical research were not provided at either of the control sites. At all 3 sites, the inclusion criteria were identical: residency in medicine or psychology, active participation in a research project, and informed consent. The only exclusion criterion was insufficient knowledge of the German language. Demographic characteristics and study outcomes were assessed at the beginning of the program (baseline) and 1 week after the end of the program (1-year follow-up). Participation in the program was free of charge. All residents from the Department of Psychosomatic and General Internal Medicine at the University of Heidelberg working on a research project were expected to participate in the training program and to complete both assessments. Correspondingly, all residents working on a research project in the corresponding departments in Munich and Tübingen were expected to complete both assessments.

Training Program in Clinical Research

The training program consisted of 3 elements shown to be associated with successful research activity:12 a) provision of methodological research knowledge within the scope of a “Clinical Research Methods” course, b) mentorship by an experienced researcher, and c) work on an individual research project. The “Clinical Research Methods” course was conducted once a week, with a total of 33 lessons lasting 90 minutes each. A total of 20 lecturers taught the lessons. The structure and content of the course are detailed in Table 1. The mentors were encouraged to follow rules for effective mentorship,18,19 but no additional time for mentoring was available. The training program was not supported by extramural funding and was conducted as “in-service training”. The participants had no extra time for research. The structure of the Heidelberg curriculum is comparable to other training programs in clinical research, such as the “Clinical Investigator Training Enhancement (CITE) Program” of the Regenstrief Institute, Indiana University, Indianapolis (http://www.regenstrief.org/training/research), but the dose of the individual training elements is considerably lower because of the necessary integration into a busy work setting.

Table 1.

Clinical Research Methods Course

Individual lessons (90 minutes each) Sequence
Basics of clinical studies (6 sessions)
 Introduction to clinical research 1.
 Anatomy and physiology of clinical research 2.
 Efficient electronic literature search 3.
 Psychometric principles 15.
 Randomization and blinding 24.
 Clinical research ethics/Filing applications for the institutional review board 29.
Design of clinical studies (8 sessions)
 Descriptive studies 5.
 Cohort studies 8.
 Economic analyses 14.
 Qualitative research design 17.
 Introduction to clinical intervention studies 19.
 Psychotherapeutic process research 25.
 Pharmaceutical studies 26.
 Experimental design 31.
Interpretation of clinical studies (4 sessions)
 Causality in clinical studies/prognostic studies 9.
 Which therapy is best? Interpretation of controlled clinical studies 11.
 Using conjoint analysis to understand patient preferences 23.
 Meta-analysis and review 33.
Biostatistics (9 sessions)
 Basic concepts of statistics 4.
 Diagnostic tests 6.
 χ2 test 7.
 Hypothesis testing (t test, nonparametric procedures) 10.
 Correlation, investigator agreement, and reliability 13.
 Analysis of variance (ANOVA) 16.
 Case-control studies/logistic regression analysis 21.
 Linear regression analysis 22.
 The critical question: estimating sample size and power 30.
Communication of study results (4 sessions)
 Presentation of scientific results: abstract, paper, poster 18.
 CONSORT statement 20.
 How to write successful research applications 27.
 How to write original articles for peer-reviewed journals 28.
Practical sessions (2 sessions)
 Questions from ongoing projects 12.
 Qualitative research life: focus group 32.

Measures

We used several different assessment methods to cover the different dimensions of research knowledge and research productivity and to compensate for potential limitations of any 1 method.20 Methodological research knowledge was considered the primary outcome because knowledge of a domain has proven to be the single best determinant of expertise.21,22 Cognitive knowledge is best assessed using written tests, and multiple-choice questions have especially high reliability.21–24 Therefore, we used multiple-choice questions to assess methodological research knowledge according to Miller’s pyramid of competence.25 The multiple-choice items, with 3 different question formats (single best answer, pick-n, and K-PRIM), were constructed by the lecturers of the “Clinical Research Methods” course in accordance with multiple-choice item-writing principles.26,27 Pretesting ensured the quality of the multiple-choice questions, and a panel of 4 experts reviewed the items. A total of 28 questions covering “biostatistical knowledge” (15 items) and “basics of clinical research” (13 items) fulfilled the quality standards and were included in the multiple-choice test. To evaluate participants’ learning progress, the test was administered twice in identical format according to progress-testing principles.28 Internal consistency, assessed at follow-up, was good for the total scale (α = 0.84) and acceptable for the 2 subscales (biostatistical knowledge, α = 0.81; basics of clinical research, α = 0.60).
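The internal-consistency coefficients reported here are Cronbach’s α, computed as k/(k−1) · (1 − Σ item variances / variance of the sum score). As a hedged illustration only (the item-score matrix below is synthetic, not the study’s actual response data), the computation can be sketched as:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic example: 5 items driven by one common factor plus noise,
# so a high alpha is expected (hypothetical data, for illustration only)
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
items = base + 0.3 * rng.normal(size=(100, 5))
alpha = cronbach_alpha(items)
```

With strongly intercorrelated items such as these, α approaches 1; with uncorrelated items it falls toward 0 (and can even be negative), which is why the subscale value of 0.60 above is characterized only as acceptable.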

Self-assessed research competence was measured with 12 items covering “interpretation of clinical studies”, “designing clinical studies”, “biostatistical competence”, and “presentation of study results” (3 items each, e.g., item 7: “I feel competent in writing original journal articles”). Six-point Likert scales were used to assess agreement with the items with higher scores indicating higher self-assessed research competence. Internal consistency of the total self-assessed research competence scale (α = 0.95) and the 4 subscales (α = 0.93, α = 0.82, α = 0.91, α = 0.85, respectively) was very good.

The number of original publications, reviews or meta-analyses, book articles, and grant proposals was assessed at the beginning and at the end of the program, differentiating between first- and co-authorship. An independent literature search on all participants, using the databases PUBMED, Web of Science, SCOPUS, PSYNDEX, and PSYCHLIT, confirmed that the subjects’ self-reports were valid. Given that the 1-year period of the training program was considered too short to expect a substantial effect in terms of published articles, we additionally assessed whether subjects had presented research results at scientific meetings during the training program and whether they were currently writing original journal articles and book articles at the end of the program.

The quality of the program was investigated by an evaluation of each of the 33 individual “Clinical Research Methods” lessons directly after the respective lesson, using 5 questions regarding relevance of content, didactic quality, commitment of the lecturer, increase in knowledge, and overall evaluation of the lesson. The quality of the whole training program was assessed at the end of the program using 5 questions regarding quality of content, lecturers, teaching methodology, utility for one’s own career development, and relevance to one’s own research activity. Six-point Likert scales were used for these purposes. Participants were also asked if they would participate again in a similar program. Finally, a focus group was conducted at the end of the program to assess more thoroughly the participants’ perceptions of the program and potential opportunities for improvement.29,30

Statistical Analysis

Program effectiveness was tested by comparing change scores of the continuous outcomes using analysis of covariance (ANCOVA) according to the general linear model, controlling for the potential confounders of gender, age, and dissertation status. Change scores were analyzed because this allows progress testing28,31 independent of baseline level and because this method is especially efficient when baseline and follow-up data are highly intercorrelated.32,33 For comparisons of the number of publications and grant proposals, the statistical assumptions for analysis of variance were not met because the counts were generally low and the distributions were skewed. Therefore, we performed χ2 tests or Fisher’s exact tests (if cell sizes <5) for these outcomes.
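The categorical comparisons described above can be reproduced from the published counts. A minimal sketch using scipy (the 2×2 table is taken from Table 4’s “currently writing journal article” row: 13 of 15 intervention vs 8 of 22 control subjects; this is an illustration of the tests used, not the authors’ original SAS code):

```python
from scipy.stats import chi2_contingency, fisher_exact

# Rows: intervention / control; columns: writing an article / not writing
table = [[13, 2], [8, 14]]

# Fisher's exact test: the paper's fallback whenever a cell count is < 5
odds_ratio, p_fisher = fisher_exact(table)

# Chi-square test (Yates continuity correction applied by default for 2x2)
chi2, p_chi2, dof, expected = chi2_contingency(table)
```

Both tests reject the null hypothesis of equal proportions at the .05 level for this row, consistent with the significant group difference reported in the Results.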

Given that the sample size in the intervention group was fixed by the size of the training program class, power analysis was performed for an estimated sample of 15 intervention subjects and 21 control subjects at the end of the program. Given a 0.05 level of significance (two-sided) and a power of 0.80, this sample size was sufficient to detect large differences of approximately 1.0 standard deviation between the groups.34 Group differences in continuous outcomes were also transformed to effect sizes (difference between mean scores divided by the standard deviation of the control group).35 Missing values were extremely rare (<1%). Analyses included data only for subjects who completed both baseline and follow-up assessments. Imputation for loss to follow-up using the last observation carried forward did not significantly change the results. To minimize bias, all multiple-choice questions were scored by a computer program. Statistical analyses were performed using SAS (Version 9.1; SAS Institute, Cary, NC, USA).
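The effect-size convention (difference in mean change scores divided by the control group’s standard deviation) can be checked against the methodological-knowledge totals later reported in Table 3. Note that the rounded published values give ≈2.57, slightly above the reported 2.5, which was presumably computed from unrounded data:

```python
# Effect size = (mean change, intervention - mean change, control) / SD of control
# Rounded values as published in Table 3 (total methodological knowledge score)
mean_change_intervention = 8.4  # additional correct answers, intervention group
mean_change_control = 1.2       # additional correct answers, control group
sd_control = 2.8                # SD of change scores in the control group

effect_size = (mean_change_intervention - mean_change_control) / sd_control
```

By the conventional benchmarks for standardized differences, anything above 0.8 counts as a large effect, so a value near 2.5 represents a very large gain in tested knowledge.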

RESULTS

Characteristics of Intervention and Control Subjects

Of the 20 residents who registered for the training program, 2 were not admitted because they were not able to participate regularly in the clinical research methods class. Eighteen subjects met the inclusion criteria and started the training program. During the course of the program, 2 subjects were not able to continue because of a change in their professional employment, and 1 subject quit because of time conflicts. Thus, the 1-year training program was completed by 15 of 18 subjects (83%). All intervention subjects participated in the baseline and follow-up assessments. In the control group, the baseline participation rate was 81% (26 of 32) and the follow-up rate was 85% (22 of 26). Table 2 shows baseline demographic characteristics, methodological research knowledge, self-assessed research competence, publications, and grant proposals of subjects who completed both baseline and follow-up assessments. The groups did not differ significantly with respect to age, years of research experience, university degree, dissertation status, self-assessed research competence, completed publications, and grant applications. However, there were fewer women in the intervention group compared to the control group, and intervention subjects achieved higher baseline test scores regarding research knowledge.

Table 2.

Characteristics of Intervention and Control Subjects at Baseline

  Intervention group (n = 15) Control group (n = 22) Group differences P value*
Female, n (%) 8 (53.3) 19 (86.4) .02
Mean age, yr (SD) 31.6 (4.5) 35.5 (7.6) .06
Mean research experience, yr (SD) 5.1 (2.2) 5.6 (5.8) .69
Dissertation completed, n (%) 7 (46.7) 7 (31.8) .36
University degree, n (%) .46
 Physician 10 (66.7) 12 (54.5)
 Psychologist 5 (33.3) 10 (45.5)
Mean methodological research knowledge (SD)
 Total test score (0–28) 11.6 (2.1) 8.9 (3.2) .008
 Subscale “Biostatistical Knowledge” (0–15) 5.5 (1.6) 4.9 (2.4) .35
 Subscale “Basics of Clinical Research” (0 – 13) 6.1 (1.1) 4.0 (1.6) <.001
Mean self-assessment of research competence (SD)
 Total research competence scale score (1–6) 3.2 (0.8) 2.8 (0.9) .25
 Subscale “Interpretation of Clinical Studies” (1–6) 3.1 (0.9) 2.5 (1.0) .11
 Subscale “Designing Clinical Studies” (1–6) 3.3 (1.0) 2.9 (1.0) .18
 Subscale “Biostatistical Competence” (1–6) 2.1 (0.9) 2.1 (1.0) .83
 Subscale “Presentation of Study Results” (1–6) 3.6 (1.2) 3.3 (1.2) .51
Completed publications, n (%)
 ≥ 1 Original paper in first authorship 5 (33.3) 5 (22.7) .48
 ≥ 1 Original paper in co-authorship 10 (66.7) 11 (50.0) .32
 ≥ 1 Review or meta-analysis in first authorship 2 (13.3) 2 (9.1) 1.0
 ≥ 1 Review or meta-analysis in co-authorship 1 (6.7) 2 (9.1) 1.0
 ≥ 1 Book article in first authorship 5 (33.3) 7 (31.8) .92
 ≥ 1 Book article in co-authorship 4 (26.7) 6 (27.3) 1.0
Applications for funding, n (%)
 ≥ 1 Grant proposal submitted 7 (46.7) 5 (22.7) .13
 ≥ 1 Grant proposal accepted for funding 3 (20.0) 3 (13.6) .67

*T tests were used for continuous data. χ2 test and Fisher’s Exact Test (if cell sizes <5) were used for categorical data.

Change in Methodological Research Knowledge and Self-assessed Research Competence

Table 3 shows the change in methodological research knowledge and change in self-assessed research competence for intervention subjects and controls. For both outcomes, intervention subjects experienced greater improvement than did controls. As expected, baseline and follow-up scores were highly intercorrelated, both for methodological research knowledge (total score, r = .58; subscale scores, r = .84 and r = .62; all p < .001) and for self-assessed research competence (total score, r = .78; subscale scores, r = .72, r = .81, r = .43, and r = .83, respectively; all p < .01). This supported the statistical comparisons of change scores.32 The methodological research knowledge total score improved by a mean of 8.4 (SD = 2.9) additional correct answers in the intervention group compared to 1.2 (SD = 2.8) additional correct answers in the control group (effect size = 2.5; p < .001). With respect to self-assessed research competence, intervention subjects had significantly greater improvements in the total scale score as well as the subscales “interpretation of clinical studies” and “biostatistical competence”.

Table 3.

Change in Methodological Research Knowledge and Self-Assessed Research Competence After 1 Year

Outcome Intervention group (n = 15) Control group (n = 22) Group differences
Effect size* P value†
Mean change in methodological research knowledge (SD)‡
 Total test score 8.4 (2.9) 1.2 (2.8) 2.5 <.001
 Subscale “Biostatistical Knowledge” 5.9 (2.2) 0.3 (2.1) 2.7 <.001
 Subscale “Basics of Clinical Research” 2.5 (1.5) 1.0 (1.8) 0.8 .08
Mean change in self-assessment of research competence (SD)‡
 Total research competence scale score 0.4 (0.9) −0.4 (0.7) 1.1 .01
 Subscale “Interpretation of Clinical Studies” 1.8 (1.6) 0.7 (1.0) 1.0 .04
 Subscale “Designing Clinical Studies” 0.9 (1.1) 0.4 (1.0) 0.5 .85
 Subscale “Biostatistical Competence” 0.4 (1.2) −0.7 (1.0) 1.1 .03
 Subscale “Presentation of Study Results” 2.0 (1.6) 1.1 (0.9) 0.9 .07

*Effect size is the difference between mean scores divided by standard deviation of control group.

†ANCOVA according to the General Linear Model, adjusted for gender, age, and dissertation status (df = 4, 32)

‡Change score is 1-year follow-up score minus baseline score

Publications and Grant Proposals During Time of Training Program

Outcomes regarding actual research activity during the year of the training program are summarized in Table 4. Significantly more intervention subjects compared to controls were currently working on a journal article (87% vs 36%, p = .003), had presented scientific results at a research meeting (80% vs 41%, p = .04), and had completed at least 1 original paper as co-author (60% vs 18%, p = .01) during the time of the training program. However, the groups did not differ significantly with respect to completed original articles as first author, reviews, meta-analyses, or book articles. With respect to grant applications, intervention subjects had completed significantly more applications for funding as co-investigators compared to controls (47% vs 5%, p = .004), but not as principal investigators. Finally, intervention subjects had significantly more grant applications accepted for funding (33% vs 0%, p = .007).

Table 4.

Publications and Grant Proposals During 1 Year of Training Program

Outcome Intervention group (n = 15) Control group (n = 22) Group differences P value*
Currently writing journal article, n (%) 13 (86.7) 8 (36.4) .003
Currently writing book article, n (%) 1 (6.7) 3 (13.6) .63
Presentation at scientific meeting during last year, n (%) 12 (80.0) 9 (40.9) .04
Completed publications during last year, n (%)
 ≥ 1 Original paper in first authorship 7 (46.7) 5 (22.7) .13
 ≥ 1 Original paper in co-authorship 9 (60.0) 4 (18.2) .01
 ≥ 1 Review or meta-analysis in first authorship 0 (0.0) 2 (9.1) .50
 ≥ 1 Review or meta-analysis in co-authorship 0 (0.0) 2 (9.1) .50
 ≥ 1 Book article in first authorship 4 (26.7) 4 (18.2) .69
 ≥ 1 Book article in co-authorship 3 (20.0) 1 (4.6) .28
Applications for funding during last year, n (%)
 ≥ 1 Grant proposal submitted in first authorship 1 (6.7) 0 (0.0) .41
 ≥ 1 Grant proposal submitted in co-authorship 7 (46.7) 1 (4.6) .004

*For group comparisons, χ2 test or Fisher’s Exact Test (if cell sizes <5) were used.

Evaluation of Training Program

The 33 individual sessions of the “Clinical Research Methods” course received an excellent evaluation with a mean evaluation score between “very good” and “good” (Table 5). Similarly, the training program as a whole received high evaluation scores. Fourteen of the 15 participants (93%) indicated they would participate again in a similar program.

Table 5.

Evaluation of Clinical Research Course by Intervention Group

  N Evaluation* M (SD)
Mean evaluation of 33 individual course lessons†
 Relevance of content 408 1.4 (0.6)
 Didactic quality 409 1.8 (0.9)
 Commitment of lecturer 409 1.3 (0.6)
 Increase in knowledge 388 2.1 (0.9)
 Overall evaluation of lesson 408 1.7 (0.8)
Evaluation of total program at end of program
 Quality of content 15 1.5 (0.5)
 Lecturers 15 1.4 (0.5)
 Teaching methodology 15 1.8 (0.6)
 Utility for own career 15 2.0 (1.0)
 Utility for own research activity 15 1.8 (1.0)

*Evaluation was measured on a 1 to 6 scale: 1 = “very good”; 2 = “good”; 3 = “moderate”; 4 = “sufficient”; 5 = “poor”; 6 = “inadequate”

†Evaluations of individual lessons were made at the end of each lesson.

Content analysis of the focus group, in which 13 of the 15 (87%) intervention subjects participated, revealed that the participants evaluated the training program as thorough and highly valuable for becoming an independent researcher as well as for their own career development. As an additional benefit, the participants noted that the program generated a positive research culture and enhanced research as a priority alongside clinical work and teaching. With respect to mentorship, the intervention subjects noted that contact time with the mentor was generally low and not higher than before the program. In addition, the participants felt somewhat overloaded by the time requirements of the program because their clinical workload was not reduced.

DISCUSSION

This study has several major findings. First, the structured 1-year training program led not only to improved research knowledge and self-assessed research competence; it was also accompanied by increased research productivity. Second, enhanced research activity consisted of more presentations at scientific meetings, increased writing activity, and co-authorship on original papers and grant applications. This suggests that, whereas the intervention subjects made great progress, 1 year was, as expected, insufficient to produce independent investigators. Third, the integration of such a training program into residencies with demanding clinical workloads is feasible. Despite competing demands, the participating residents highly valued the training program. Our controlled before-and-after study design substantiates the effectiveness of similar training programs from earlier studies using cross-sectional or pre-post uncontrolled intervention designs.12 Given that residents in medical specialties have passed through the same medical education system and given that an earlier study did not find differences in research output between physicians and psychologists,13 it is likely that our results also apply to other medical specialties and health care professions.

With the provision of methodological research knowledge, mentorship by an experienced researcher, and work on an individual research project, the program included 3 elements established as key factors for successful research training.12 Nevertheless, owing to the lack of extramural funding, other important factors such as protected time for research and extra funding for travel costs to scientific meetings36,37 were not provided. As evidenced by the focus group at the end of the training program, the most frequently encountered obstacle was lack of time. In fact, insufficient time is probably the most common obstacle to completing research in general.38,39 Even without protected time, however, the program was moderately effective. In particular, this type of program may be more generalizable to institutions not having substantial federal support for research training. Suggestions for maximizing limited research time39 that were provided at the beginning of the program might have helped the participants to structure their time for completing their research projects. Most importantly, the program made research a priority, generated a positive research culture, helped in understanding strengths and weaknesses of clinical research studies, and encouraged networking among the participants. In addition to the structured provision of research knowledge, these factors might have motivated the participants of the training program to successfully complete their research projects.

Our study has several limitations. First, our sample size was limited by the size of the training program. Second, the study was controlled but participants were not randomized. Whereas we were able to adjust for some group differences, the possibility of unmeasured confounders remains. Third, the intervention group had somewhat higher scores in the multiple-choice test regarding methodological research knowledge at the beginning of the program. However, our outcome was change in methodological research knowledge,32,33 and the increase of research knowledge was much greater in the intervention subjects compared to the controls. Finally, whereas the knowledge provided in the Clinical Research Class was identical for all participants, the intensity of mentorship varied depending on the individual mentor–trainee dyad. However, given that the focus group revealed that the time with the mentor was not substantially increased by the program, differences in mentorship between the study sites are unlikely to be a major factor accounting for the study findings.

In summary, a comprehensive 1-year resident training program can increase research knowledge, self-assessed research competence, and research productivity. Such a program might be advantageous for other academic institutions with minimal external funding for clinical research training. Nevertheless, insufficient time appears to be a key obstacle to completing research projects38 and to taking on larger roles as first authors and principal investigators. Therefore, obtaining funding for at least some protected research time should remain a priority. In the meantime, structured training programs in clinical research can be beneficial despite resource constraints.

Acknowledgment

We thank the intervention and control subjects who made this study possible. The first author gratefully acknowledges the opportunity to participate in the “Clinical Investigator Training Enhancement (CITE) Program” of the Regenstrief Institute, Indiana University, Indianapolis, during a research fellowship funded by the Max-Kade-Foundation, New York, and the German Research Foundation (DFG) in 2003/2004.

Conflict of Interest Statement None disclosed.

References

1. Wyngaarden JB. The clinical investigator as an endangered species. N Engl J Med. 1979;301:1254–9.
2. Schrier RW. Ensuring the survival of the clinician-scientist. Acad Med. 1997;72:589–94.
3. Nathan DG. Careers in translational clinical research—historical perspectives, future challenges. JAMA. 2002;287:2424–7.
4. Nathan DG. The several Cs of translational clinical research. J Clin Invest. 2005;115:795–7.
5. Association of American Medical Colleges (AAMC). Promoting translational and clinical science: the critical role of medical schools and teaching hospitals. Report of the AAMC's Task Force II on Clinical Research. Washington, DC: Association of American Medical Colleges; 2006.
6. Phillipson EA. Is it the clinician-scientist or clinical research that is the endangered species? Clin Invest Med. 2002;25:23–5.
7. Khadaroo RG, Rotstein OD. Are clinician-scientists an endangered species? Barriers to clinician-scientist training. Clin Invest Med. 2002;25:260–1.
8. Sung NS, Crowley WF, Genel M, et al. Central challenges facing the national clinical research enterprise. JAMA. 2003;289:1278–87.
9. Solomon SS, Tom SC, Pichert J, Wasserman D, Powers AC. Impact of medical student research in the development of physician-scientists. J Investig Med. 2003;51:149–56.
10. Gallin EK, Le Blancq SM. Launching a new fellowship for medical students: the first years of the Doris Duke Clinical Research Fellowship Program. J Investig Med. 2005;53:73–81.
11. Mark AL, Kelch RP. Clinician scientist training program: a proposal for training medical students in clinical research. J Investig Med. 2001;49:486–90.
12. Hebert RS, Levine RB, Smith CG, Wright SM. A systematic review of resident research curricula. Acad Med. 2003;78:61–8.
13. Hartmann M, Wild B, Herzog W, et al. Der klinische Forscher in der psychosozialen Medizin: Status, Kompetenzen und Leistungen [Working as a clinician-scientist in psychosocial medicine: status, skills and research productivity]. Psychother Psychosom Med Psychol. 2007 (in press).
14. Phillipson EA. Clinical/research residency programs for the clinician scientist. Clin Invest Med. 1997;20:259–60.
15. Centre for Higher Education (CHE) and DIE ZEIT. CHE/DIE ZEIT university ranking. Available at: http://www.daad.de/deutschland/hochschulen/hochschulranking/06544.en.html. Accessed July 27, 2007.
16. Deutsche Forschungsgemeinschaft. DFG Förder-Ranking Medizin 2003 [German Research Foundation: research funding ranking]. Available at: http://www.dfg.de/ranking/archiv/ranking2003/institutionen/Wc879d5f312ae.html. Accessed July 6, 2007.
17. Focus Online. Focus Uni-Ranking 2007. Available at: http://www.focus.de/wissen/campus/hochschulen. Accessed July 27, 2007.
18. Healy C, Welchert A. Mentoring relations: a definition to advance research and practice. Educ Res. 1990;19:17–21.
19. Ramani S, Gruppen L, Kachur EK. Twelve tips for developing effective mentors. Med Teach. 2006;28:404–8.
20. Epstein RM. Assessment in medical education. N Engl J Med. 2007;356:387–96.
21. McCoubrie P. Improving the fairness of multiple-choice questions: a literature review. Med Teach. 2004;26:709–12.
22. Glaser R. Education and thinking: the role of knowledge. Am Psychol. 1984;39:193–202.
23. Wass V, Van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. Lancet. 2001;357:945–9.
24. Downing S. Assessment of knowledge with written test formats. In: Norman G, Van der Vleuten C, Newble D, eds. International Handbook of Research in Medical Education, Vol. 2. Dordrecht: Kluwer; 2002:647–72.
25. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65:S63–7.
26. Stagnaro-Green AS, Downing SM. Use of flawed multiple-choice items by the New England Journal of Medicine for continuing medical education. Med Teach. 2006;28:566–8.
27. Haladyna T. Developing and validating multiple choice test items. 3rd ed. Mahwah, NJ: Lawrence Erlbaum Associates; 2002.
28. Verhoeven BH, Snellen-Balendong HA, Hay IT, et al. The versatility of progress testing assessed in an international context: a start for benchmarking global standardization? Med Teach. 2005;27:514–20.
29. Barbour RS. Making sense of focus groups. Med Educ. 2005;39:742–50.
30. Hauer KE, Teherani A, Dechet A, Aagaard EM. Medical students' perceptions of mentoring: a focus-group analysis. Med Teach. 2005;27:732–4.
31. Rademakers J, Ten Cate TJ, Bar PR. Progress testing with short answer questions. Med Teach. 2005;27:578–82.
32. Vickers AJ. The use of percentage change from baseline as an outcome in a controlled trial is statistically inefficient: a simulation study. BMC Med Res Methodol. 2001;1:6.
33. Vickers AJ. Analysis of variance is easily misapplied in the analysis of randomized trials: a critique and discussion of alternative statistical approaches. Psychosom Med. 2005;67:652–5.
34. Hulley SB, Cummings SR, Browner WS, Grady D, Hearst N, Newman TB. Designing clinical research: an epidemiologic approach. 2nd ed. Philadelphia: Lippincott; 2001.
35. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
36. DeHaven MJ, Wilson GR, O'Connor-Kettlestrings P. Creating a research culture: what we can learn from residencies that are successful in research. Fam Med. 1998;30:501–7.
37. Schultz HJ. Research during internal medicine residency training: meeting the challenge of the Residency Review Committee. Ann Intern Med. 1996;124:340–2.
38. Gill S, Levin A, Djurdjev O, Yoshida EM. Obstacles to residents' conducting research and predictors of publication. Acad Med. 2001;76:477.
39. Kroenke K. Conducting research as a busy clinician-teacher or trainee. Starting blocks, hurdles, and finish lines. J Gen Intern Med. 1996;11:360–5.
