Abstract
Purpose
The University of Puerto Rico (UPR), Medical Sciences Campus (MSC) post-doctoral Master of Science in Clinical and Translational Research (MSc) program aims to train Hispanic post-doctoral candidates to advance their careers and become successful clinical and translational researchers geared to help eliminate health disparities. Its curriculum highlights the use of technology and online resources to maximize the use of time and effort. As part of its assessment efforts, the program’s Evaluation Committee leads an annual activity, the Evaluation Retreat (ER), to evaluate the program’s curriculum, research component, and mentoring experience from the Scholars’ perspective. Results are used by the program’s Executive Committee for further planning and improvement. This analysis presents the most relevant results from these activities.
Design Methods
Data collection (from the last 5 years) includes quantitative (online surveys) and qualitative (a group meeting with Scholars) approaches. Questionnaires ask Scholars to rate specific features of the program’s research component, mentoring experience, and curriculum. They also include questions about the program in general (major strengths, challenges, and recommendations for improvement). During the group meeting, Scholars discuss these results and present a consensus in a plenary session. Quantitative data are managed and analyzed using the statistical software SPSS. Qualitative data are examined using content analysis.
Results
Scholars identified the following as program strengths: networking opportunities (local and with U.S. experts), the diversity of peers and faculty, faculty support, technical and audiovisual support, physical facilities and resources, guest speakers and consultants, and the quality of the curriculum. Challenges vary as cohorts change, but time limitations and the need for technical/statistical support are always highlighted. Recommendations for improvement emphasize the need for a greater pool of experienced mentors and more hands-on approaches to address particular skills such as manuscript development, institutional and federal guidelines for proposal submission, and research project management.
Discussion
Evaluation Retreats provide valuable input for improving a program geared to develop competent clinical researchers. Findings evidence the program’s commitment to providing the foundation for an enhanced mass of clinical researchers.
Keywords: Assessment, research methods
1 INTRODUCTION
The University of Puerto Rico post-doctoral Master of Science in Clinical Research (MSC) is the only multidisciplinary, NIH-supported academic degree program that offers training and career development in clinical and translational research in Puerto Rico. It aims to train Hispanic post-doctoral candidates to advance their careers and become successful clinical and translational researchers geared to help eliminate health disparities.
1.1 Program description
The post-doctoral Master in Clinical Science (MSC) program was established in 2002 and has evolved continuously, building a well-structured curriculum with a strong research component based on an innovative mentor–mentee design. It is based on a two-year competency-based curriculum that consists of didactic instruction combining traditional classroom (evening) and online instruction, Friday seminars, a mentoring system, and experiential learning through the design and implementation of a research project. The curriculum highlights the use of technology and online resources to maximize the use of time and effort. In 2010–2011 the degree curriculum was revised using, as the standard, the NIH-defined competencies required of a master’s-level clinical and translational researcher for basic knowledge, skills, and attributes, which resulted in the re-envisioned Master of Science in Clinical and Translational Research (MCTR) program [1]. The research sponsored by the MCTR program addresses minority health and health disparities research through basic, clinical, and behavioral approaches. The curriculum is built on the following competencies, with translational research incorporated into each graduate’s profile:
- Be able to develop and implement ethnically and culturally appropriate clinical and translational research aimed at reducing health disparities in Hispanic populations;
- Conduct ethically responsible clinical and translational research;
- Build and lead effective collaborative networks in one’s area(s) of clinical and translational research interest;
- Communicate effectively orally and in writing;
- Be able to work collaboratively, interdependently, and effectively with individuals from other disciplines on the clinical and translational research team; and
- Become a lifelong self-sufficient learner.
The MCTR program aspires to train investigators who will be able to lead and expand clinical and translational research in Puerto Rico, targeting specific health conditions of high priority to the Puerto Rican Hispanic population and following a multidisciplinary approach [2]. This promotes more interaction among disciplines, enhances a team approach to problem solving, and fosters interdisciplinary collaboration. For the research component, Scholars develop an original research project that is expected to be completed by the end of their second year. Its development has been described in detail elsewhere [3].
1.2 Overview of the Evaluation plan
Since its establishment, evaluation has been an essential element of the program’s development and improvement. Evaluation plans have been updated as the program develops and have guided all assessment activities, meeting institutional and Scholars’ needs [4]. Essentially, the Evaluation Plan includes three broad dimensions: program structure, program processes, and Scholars’ outcomes. It includes plans for monitoring the progress of Scholars and making corrections to improve the quality and effectiveness of each candidate’s experiences, plans for evaluating research progress, and plans for terminating candidates for lack of performance. Qualitative and quantitative methods are used to evaluate the achievement of the program’s aims. The evaluation plan is monitored by an Evaluation Committee (EC) and implemented by the program’s Executive Committee. The Evaluation Committee assists the Executive Committee with the systematic collection, analysis, and reporting of detailed information about the implementation, monitoring, and outcomes of the overall program. The Evaluation Committee meets periodically as needed to plan and develop data-gathering and evaluation tools and processes, in addition to monitoring the implementation of the evaluation plan.
The Evaluation Committee has developed and implemented evaluation strategies and tools using valid and reliable processes. Results of evaluation activities are presented and discussed regularly with members of the Executive Committee for implementation of changes or modifications as needed. The program’s process evaluation comprises assessment of all training activities, including course evaluations, research seminar evaluations, and an annual Evaluation Retreat. Evaluation forms and methods are designed by the Evaluation Committee and appraised by the Executive Committee prior to implementation. Course evaluations are distributed after the Scholars complete each course. The faculty and the academic coordinator receive a report of the evaluation results to modify and improve the course when needed. The course evaluation form assesses the appropriateness of the course objectives, content, and training components such as written materials, textbooks, and exams. The evaluations also assess whether or not the courses fulfilled their stated objectives. The Research Liaison, the Academic Coordinator, and the Scholar’s Research Committee monitor the progress of each Scholar’s research project to assess the adequacy of the project’s research question and methods.
The Annual Evaluation Retreat is planned by the Evaluation Committee and addresses those program issues that the Executive Committee needs to emphasize or improve. Qualitative (discussion groups and open-ended self-administered questionnaires) and quantitative (surveys) approaches are used during this activity. The Annual Evaluation Retreat also provides the opportunity to assess some of the outcome measures designed to evidence accomplishment of the competencies and training objectives. This is done using self-administered forms developed by the Evaluation Committee.
All of these evaluation activities lead to a continuous pathway of quality assessment and improvement in which evaluation and feedback to those being evaluated take place, a course of action is employed to ensure that the recommendations are put into operation, and progress is appraised again. This process is expected to ensure that deficiencies are identified and addressed and that administrative and training approaches are modified to maximize impact.
In terms of outcomes, the program is evaluated according to established benchmarks, with corresponding criteria on diversity recruitment, Scholars’ competencies, and attainment of program goals.
1.2.1 Annual Evaluation Retreat
The Annual Evaluation Retreat aims to review the program as a whole and to make recommendations about it. Prior to the retreat, two electronic surveys are sent to Scholars: the first evaluates the program’s research component, mentorship experience, and curriculum sequence; the second assesses self-perceived attainment of the program’s competencies. This report includes a summary of results from the first survey and from the Evaluation Retreat discussions for the last five years.
During the Annual Evaluation Retreat, Scholars have the opportunity to individually complete a short form in which they identify the program’s overall strengths and challenges and offer recommendations for improvement. After a few minutes, the Evaluation Coordinator and/or an Evaluation Committee member facilitates a group discussion aimed at reaching consensus among Scholars in the three areas discussed: the program’s strengths, challenges, and recommendations. Scholars are in charge of the discussion and of writing the results on large sheets of paper provided for that purpose. Results from this consensus and from the surveys are summarized by the Evaluation Coordinator in a report presented to and discussed with the program’s Executive Committee. All data are presented confidentially and without any possible identifier to guarantee the anonymity of participants.
Results from previous Evaluation Retreats have helped to identify specific Scholars’ concerns about the curriculum content and sequence, the research component, and mentoring-related experiences. These concerns have been discussed by the program leadership, and action plans have been developed and implemented. The current curriculum sequence and the structure of the research experience are the result of several modifications, implemented with the Scholars’ feedback in mind, to improve the Scholars’ achievement of goals.
This report presents a summary of the most relevant results from the Annual Evaluation Retreats from the last five years.
2 METHODS
Data collection (from the last 5 years) includes quantitative (online surveys) and qualitative (a group meeting with Scholars) approaches [5]. Response rates for the electronic survey ranged from 50% to 100%, for an overall response rate of 72% (see Table 1). The questionnaires ask Scholars to rate specific features of the program’s research component, mentoring experience, and curriculum. They also include questions about the program in general (major strengths, challenges, and recommendations for improvement). During the Annual Evaluation Retreat group meeting, Scholars discuss these results and present a consensus in a plenary session. Participation rates ranged from 50% to 100%, for an overall rate of 89% (see Table 1).
Table 1.
Summary of Response Rates and Evaluation Retreats’ Participation Rate by Scholars’ Cohort
| Cohort | Total Number of Scholars | Number of electronic survey respondents | Response rate (%) | Number of Evaluation Retreat participants | Participation rate (%) |
|---|---|---|---|---|---|
| 1 | 6 | 6 | 100 | 6 | 100 |
| 2 | 8 | 5 | 63 | 8 | 100 |
| 3 | 8 | 4 | 50 | 4 | 50 |
| 4 | 7 | 4 | 57 | 7 | 100 |
| 5 | 9 | 8 | 89 | 9 | 100 |
| Total | 38 | 27 | 72 | 34 | 89 |
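As an illustration of the arithmetic behind Table 1, the short sketch below recomputes each cohort’s response and participation rates from the counts reported above. The cohort counts are taken directly from Table 1; the script itself (its structure and variable names) is only an assumed, hypothetical example, not a tool used by the program.

```python
# Illustrative sketch only: recomputing the per-cohort rates reported in Table 1.
# The counts come from Table 1; the code is a hypothetical example, not the
# Evaluation Committee's actual tooling. Table 1 reports rates rounded to
# whole percentages.
cohorts = {
    # cohort: (total Scholars, electronic survey respondents, retreat participants)
    1: (6, 6, 6),
    2: (8, 5, 8),
    3: (8, 4, 4),
    4: (7, 4, 7),
    5: (9, 8, 9),
}

for cohort, (total, respondents, participants) in cohorts.items():
    response_rate = 100 * respondents / total        # e.g., 5/8 -> 62.5%
    participation_rate = 100 * participants / total  # e.g., 8/8 -> 100%
    print(f"Cohort {cohort}: response {response_rate:.1f}%, "
          f"participation {participation_rate:.1f}%")
```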
The electronic survey consists of a self-administered questionnaire developed by the Evaluation Committee and used in previous Evaluation Retreats. SurveyMonkey© [6] was the electronic platform used to create the questionnaire. The survey included closed- and open-ended questions. The first section of the questionnaire assessed Scholars’ opinions about the research component. It included a closed-ended question asking them to rate nine specific features of the program’s research component, two questions about their mentoring experience, and one item about the Friday research seminars. They were asked to answer on a Likert-type scale ranging from 0 = Not applicable; 1 = Very Poor; 2 = Poor; 3 = Fair; 4 = Good; to 5 = Excellent. The section also included seven open-ended questions assessing the following: whether or not the research question of their current project was related to the research topic submitted during the admission process (letter of intent); how the seminar in which they presented the research question and the one in which they presented the research plan helped them develop their research project; what they would consider the major strengths and challenges of the program’s research component; and how they would like to see the research experience enhanced. The second section asked about the mentoring experience in more detail. It included an assessment of the mentor’s role and performance, measured by a closed-ended question asking whether or not they agreed with 19 statements, answered on a Likert-type scale ranging from 0 = Not applicable; 1 = Strongly disagree; 2 = Disagree; 3 = Agree; to 4 = Strongly agree. It also included three questions asking how much time per month they spend working with their mentors and how that time is spent, and a request for suggestions/recommendations to enhance the mentoring experience. A third section asked about the curriculum sequence and whether or not the scheduling of the “Clinical Research Protocol Development” course is appropriate, and requested suggestions for curriculum improvement. The final section asked them to identify the program’s overall strengths and challenges and to provide recommendations to improve the program.
Responses from the survey were downloaded, managed, and analyzed using the statistical software SPSS© [7]. Statistical data analysis consisted of exploratory descriptive analyses such as percentages, frequencies, and measures of central tendency (mean, median, and mode) and dispersion (standard deviation). Qualitative data from the survey and from the Evaluation Retreat discussion were examined using content analysis [8].
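For readers without access to SPSS, the sketch below illustrates the kind of exploratory descriptive summary described above (frequencies, percentages, central tendency, and dispersion) for a single Likert-type item, using Python with pandas. The item name and the ratings are made-up placeholders rather than the program’s actual data, and the code is only an assumed equivalent of the SPSS analysis, not the authors’ implementation.

```python
# Minimal sketch, assuming made-up ratings: descriptive statistics for one
# Likert-type item (0 = Not applicable, 1 = Very Poor ... 5 = Excellent),
# analogous to the SPSS analysis described above.
import pandas as pd

ratings = pd.Series([5, 4, 4, 3, 5, 2, 4], name="overall_research_experience")

valid = ratings[ratings > 0]                      # exclude "Not applicable"
freq = valid.value_counts().sort_index()          # frequency per rating value
pct = (100 * freq / len(valid)).round(1)          # percent distribution

print(pd.DataFrame({"n": freq, "percent": pct}))
print("mean:", round(valid.mean(), 2),
      "median:", valid.median(),
      "mode:", valid.mode().tolist(),
      "sd:", round(valid.std(), 2))
```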
3 RESULTS
Scholars identified the following as program strengths: networking opportunities (local and with U.S. experts), the diversity of peers and faculty, faculty support, technical and audiovisual support, physical facilities and resources, guest speakers and consultants, and the quality of the curriculum. Challenges vary as cohorts change, but time limitations and the need for technical/statistical support are always highlighted. Recommendations for improvement emphasize the need for a greater pool of experienced mentors and more hands-on approaches to address particular skills such as manuscript development, institutional and federal guidelines for proposal submission, and research project management.
3.1 Curriculum
The proportion of Scholars who agreed with the curriculum sequence varied by cohort. This question was not included in the cohort 1 survey. The most noticeable difference was for cohort 2, where most Scholars reported that they did not agree with the sequence in which the courses were offered at the time (see Fig. 1). That year, the curriculum was revised to incorporate academic experiences addressing translational competencies. Scholars recommended some changes, mostly in courses such as Biostatistics and Scientific Communication, in addition to more hands-on experiences. Results from the following cohorts show a more balanced proportion of agreement, evidencing progress in curriculum development as the Executive Committee considers the Scholars’ opinions.
Fig. 1.
Percent Distribution of Scholars by Whether or Not Respondents Are in Agreement with the Curriculum Sequence, by Cohort (n = 17)
3.2 Research component assessment
Fig. 2 shows a summary of the ratings of the research component elements that were assessed. The ratings vary from “very poor” to “excellent”. A large percentage of respondents (85%) rated the overall research experience as good or excellent. Notably, the “Process of reporting the evaluation results of their performance during the seminars” was the item that 68% of respondents rated from “fair” to “very poor”. Elements rated as good to excellent by at least 50% of Scholars included the following: “Technical support provided for the development of the research project” and “Guidelines provided for the development of the research project”.
Fig. 2.
Percent Distribution of All Respondents by Their Ratings to Selected Research Component Elements
3.3 Mentoring Experience
Table 2 shows a summary of Scholars’ responses to the mentoring experience assessment by cohort. The percentages shown are of those who agreed or strongly agreed with each statement. In general, the results show that most Scholars had a satisfactory experience. Scholars agreed that their mentors were accessible and adequately met most of the qualities and tasks assessed by the questionnaire. Some items scored below 70%, indicating room for improvement that varies by cohort. A lower proportion of Scholars from cohort 1 reported satisfaction with the following elements: their mentors guiding them to perform the future role of a clinical researcher (60%) and helping them put theory into practice (40%). Another mentoring experience element with a low percentage of Scholars agreeing with the statement in two cohorts (1 and 3) was “My mentor helps me understand the rationale behind the way I practice” (see Table 2).
Table 2.
Percent Distribution of Scholar Reporting Satisfaction with Elements of the Mentoring Experience
| Mentoring Experience Elements | Cohort 1 % (n) | Cohort 2 % (n) | Cohort 3 % (n) | Cohort 4 % (n) | Cohort 5 % (n) |
|---|---|---|---|---|---|
| a. My mentor has been accessible. | 80 (4) | 100 (5) | 100 (4) | 75 (3) | 88 (7) |
| b. My mentor facilitated an active learning process. | 80 (4) | 100 (5) | 100 (4) | 75 (3) | 88 (7) |
| c. Communication with my mentor has been agile and effective. | 80 (4) | 100 (5) | 75 (3) | 75 (3) | 88 (7) |
| d. My mentor demonstrates to be prepared for each meeting. | 80 (4) | 100 (5) | 100 (4) | 75 (3) | 88 (7) |
| e. My mentor helps me to identify my specific needs. | 80 (4) | 100 (5) | 75 (3) | 75 (3) | 88 (7) |
| f. My mentor contributes in meeting my needs. | 80 (4) | 100 (5) | 100 (4) | 75 (3) | 88 (7) |
| g. My mentor manages time efficiently. | 80 (4) | 100 (5) | 100 (4) | 75 (3) | 88 (7) |
| h. My mentor is keen to teach. | 80 (4) | 100 (5) | 75 (3) | 75 (3) | 88 (7) |
| i. My mentor gives guidance whenever necessary. | 80 (4) | 100 (5) | 100 (4) | 75 (3) | 88 (7) |
| j. My mentor guides me to perform the future role of a clinical researcher. | 60 (3) | 100 (5) | 75 (3) | 75 (3) | 88 (7) |
| k. My mentor helps me put theory into practice. | 40 (2) | 100 (5) | 75 (3) | 75 (3) | 88 (7) |
| l. My mentor provides me with feedback on my performance. | 80 (4) | 100 (5) | 75 (3) | 75 (3) | 88 (7) |
| m. My mentor stimulates me to think critically. | 80 (4) | 100 (5) | 100 (4) | 75 (3) | 88 (7) |
| n. My mentor helps me understand the rationale behind the way I practice. | 60 (3) | 80 (4) | 50 (2) | 75 (3) | 88 (7) |
| o. My mentor suggests alternative ways of performing a task. | 80 (4) | 100 (5) | 100 (4) | 75 (3) | 88 (7) |
| p. My mentor points out my mistakes in my performance without embarrassing me. | 80 (4) | 100 (5) | 75 (3) | 75 (3) | 88 (7) |
| q. My mentor provides me chances to express my opinion on my performance. | 80 (4) | 100 (5) | 75 (3) | 75 (3) | 88 (7) |
| r. My mentor recommends sources of relevant references to me. | 80 (4) | 100 (5) | 75 (3) | 75 (3) | 88 (7) |
| s. I fulfilled the agreements established with my mentor. | 80 (4) | 80 (4) | 100 (4) | 75 (3) | 88 (7) |
| Total number of respondents | 100 (5) | 100 (5) | 100 (4) | 100 (4) | 100 (8) |
3.4 Overall Program Assessment
During the Annual Evaluation Retreat, Scholars are given the opportunity to identify and reach consensus on the program’s main strengths and challenges and to provide recommendations to address those challenges. Table 3 presents a summary of the issues mentioned by at least three cohorts. Scholars recognized as program strengths its focus on research (National Institutes of Health, or NIH, oriented); the access to good mentors, external resources, consultants, and guest speakers; the support received from faculty and from administrative, technical, and audiovisual staff; the quality of its faculty; and the diversity of the Scholars, which fosters a multidisciplinary team environment. They identified specific challenges such as poor communication among faculty coordinating courses; what they consider a heavy course load, which limits the time to efficiently develop their research protocol; and limited opportunities for hands-on experiences to apply what has been taught. They also perceived uncertainty about the decision-making process, faculty expectations, and the roles of faculty, mentors, and the Research Committee regarding research protocol approval. Recommendations for improvement emphasized the need for a greater pool of experienced mentors and more hands-on approaches to address particular skills such as manuscript development, institutional and federal guidelines for proposal submission, and research project management, among other cohort-specific recommendations.
Table 3.
Summary of Consensus on Program’s Strengths and Challenges by Cohort
| Cohort 1 | Cohort 2 | Cohort 3 | Cohort 4 | Cohort 5 |
|---|---|---|---|---|
| PROGRAM’S STRENGTHS | | | | |
| Focus on research | Focus on research | Focus on research | Focus on research | |
| Great external resources, mentors, consultants and guest speakers and networking opportunities | Great external resources, mentors, consultants and guest speakers and networking opportunities | Great external resources, mentors, consultants and guest speakers and networking opportunities | Great external resources, mentors, consultants and guest speakers and networking opportunities | |
| Good faculty, administrative, technical and audiovisual support | Good faculty, administrative, technical and audiovisual support | Good faculty, administrative, technical and audiovisual support | Good faculty, administrative, technical and audiovisual support | |
| Competent Faculty | Competent Faculty | Competent Faculty | | |
| Diverse and multidisciplinary background of Scholars | Diverse and multidisciplinary background of Scholars | Diverse and multidisciplinary background of Scholars | | |
| Protected time | Protected time | Protected time | | |
| PROGRAM’S CHALLENGES | | | | |
| Poor communication among faculty coordinating courses | Poor communication among faculty coordinating courses | Poor communication among faculty coordinating courses | Poor communication among faculty coordinating courses | |
| Lack of balance between time for courses and protocol development | Lack of balance between time for courses and protocol development | Lack of balance between time for courses and protocol development | | |
| Time management | Time management | Time management | | |
| Limited opportunities to apply what is taught/more hands-on approaches | Limited opportunities to apply what is taught/more hands-on approaches | Limited opportunities to apply what is taught/more hands-on approaches | | |
| Uncertainty in the decision-making process/expectations about research protocol approval | Uncertainty in the decision-making process/expectations about research protocol approval | Uncertainty in the decision-making process/expectations about research protocol approval | | |
4 DISCUSSION
Evaluation Retreats provide valuable input for improving a program geared to develop competent clinical researchers. Response was higher for the open-discussion portion of the Annual Evaluation Retreat than for the electronic survey. The open discussion provides the opportunity to identify issues not reported when completing the electronic survey. Moreover, it gives the opportunity to reach consensus among Scholars, identifying the concerns and opinions of the group in addition to individual ideas. These data, including Scholars’ recommendations, are used to develop an annual action plan that specifically addresses the challenges identified and tests strategies over time, further refining the program in light of the Scholars’ perspectives and experiences. Over time, the curriculum has been revised, research component guidelines and processes have been developed, and several approaches have been incorporated to increase the pool of potential mentors, among other changes. However, a detailed description of these changes is beyond the scope of this report. The results presented are specific to this program and are not intended to be generalizable to similar programs. The Annual Evaluation Retreat has been a formative strategy that has helped the program’s leaders develop a curriculum, research component, and mentoring experiences that will yield successful clinical researchers. Findings evidence the program’s commitment to providing the foundation for an enhanced mass of clinical researchers.
Acknowledgments
Grant Support:
The project described was supported by The National Institutes of Health: Award Number HCTRECD R25MD007607 from the National Institute on Minority Health and Health Disparities. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
References
- 1. Westfall JM, Mold J, Fagnan L. Practice-based research – “blue highways” on the NIH roadmap. JAMA. 2007;297(4):403–406. doi: 10.1001/jama.297.4.403.
- 2. Estape ES, Mays MH, Harrigan R, Mayberry R. Incorporating translational research with clinical research to increase effectiveness in healthcare for better health. Clinical and Translational Medicine. 2014;3:20. doi: 10.1186/2001-1326-3-20.
- 3. Estape ES, Segarra B, Baez A, Huertas A, Diaz C, Frontera WR. Shaping a new generation of Hispanic clinical and translational researchers addressing minority health and health disparities. P R Health Sci J. 2011;4:167–75.
- 4. Rubio DM, Schoenbaum EE, Lee LS, Schteingart DE, Marantz PR, Anderson KE, Platt LD, Baez A, Esposito K. Defining translational research: implications for training. Acad Med. 2010;85:470–475. doi: 10.1097/ACM.0b013e3181ccd618.
- 5. Grembowski D. The Practice of Health Program Evaluation. Sage Publications, Inc; 2001.
- 6. SurveyMonkey Inc. Palo Alto, California, USA. www.surveymonkey.com.
- 7. SPSS Inc. SPSS Statistics for Windows, Version 17.0. Chicago: SPSS Inc; 2008.
- 8. McDavid JC, Hawthorne LR. Program Evaluation and Performance Measurement: An Introduction to Practice. Sage Publications, Inc; 2006.


