Journal of Clinical and Translational Science. 2019 Jul 24; 3(2–3): 75–81. doi: 10.1017/cts.2019.381

Indices of clinical research coordinators’ competence

Carlton A Hornung 1,2, Phillip A Ianni 3, Carolynn T Jones 4, Elias M Samuels 3, Vicki L Ellingrod 3,5; for the DIAMOND Investigators

Abstract

Introduction:

There is a clear need to educate and train the clinical research workforce to conduct scientifically sound clinical research. Meeting this need requires tools both to assess an individual’s preparedness to function effectively in the clinical research enterprise and to evaluate the quality and effectiveness of programs designed to educate and train clinical research professionals. Here we report the development and validation of a competency self-assessment entitled the Competency Index for Clinical Research Professionals, version II (CICRP-II).

Methods:

CICRP-II was developed using data collected from clinical research coordinators (CRCs) participating in the “Development, Implementation and Assessment of Novel Training In Domain-Based Competencies” (DIAMOND) project at four clinical and translational science award (CTSA) hubs and partnering institutions.

Results:

An exploratory factor analysis (EFA) identified a two-factor structure: the first factor measures self-reported competence to perform Routine clinical research functions (e.g., good clinical practice regulations (GCPs)), while the second factor measures competence to perform Advanced clinical functions (e.g., global regulatory affairs). We demonstrate between-groups validity by comparing CRCs working in different research settings.

Discussion:

The excellent psychometric properties of CICRP-II, its ability to distinguish experienced CRCs at research-intensive CTSA hubs from CRCs working in less research-intensive community-based sites, and the simplicity of its alternative scoring methods make it a valuable tool both for gauging an individual’s perceived preparedness to function in the role of CRC and for evaluating the effectiveness of clinical research education and training programs.

Key words: Core competency, clinical research professional, clinical research coordinator, assessment tool, exploratory factor analysis

Introduction

The timely and successful translation of pharmaceuticals and medical devices into clinical applications to improve human health requires a well-prepared and competent workforce of clinical research professionals that includes principal investigators, research coordinators, monitors, administrators, regulatory affairs experts, informaticians, data managers, statisticians, and others. Appropriate training and mastery of the competencies characterizing each role in the research process are essential for the efficient conduct of clinical and translational research [1–3]. Accordingly, there is a critical need for tools to assess an individual’s preparedness to execute his or her role in the research process; tools to assess an individual’s need for continuing education and training; and tools to evaluate the quality of education and training programs that prepare individuals to work in the clinical research enterprise.

Several steps have been taken to identify the core competencies that define the clinical research profession. An initial step was undertaken by the Joint Task Force (JTF) on the Harmonization of Competencies for the Clinical Research Profession [4–7]. The JTF was composed of key stakeholders in the clinical research enterprise, including representatives from academic institutions, the pharmaceutical industry, and clinical research professional organizations such as the Association of Clinical Research Professionals (ACRP), the Society of Clinical Research Associates (SoCRA), and the Consortium of Academic Programs in Clinical Research (CoAPCR). The JTF identified eight distinct theoretical domains comprising 51 core competencies that characterize the clinical research process. The eight competency domains are:

  • Scientific concepts and research design (SC): Knowledge of scientific concepts related to the design and analysis of clinical trials.

  • Ethical and participant safety considerations (EP): Care of patients, aspects of human subject protection and safety in the conduct of a clinical trial.

  • Medicines development and regulation (MD): Knowledge of how drugs, devices, and biologicals are developed and regulated.

  • Clinical trial operations (CT): Study management, GCP compliance, safety management, and handling of investigational product.

  • Study and site management (SS): Site and study operations.

  • Data management and informatics (DM): How data are acquired and managed during a clinical trial.

  • Leadership and professionalism (LP): The principles and practices of leadership and professionalism in clinical research.

  • Communication and teamwork (TW): All elements of communication within the site and between the site and sponsor. Teamwork skills necessary for conducting a clinical trial.

The JTF’s efforts were expanded by work supported by the National Center for Advancing Translational Sciences (NCATS) at the National Institutes of Health (NIH). In 2015, NCATS supported the implementation of the Enhancing Clinical Research Professionals’ Training and Qualification (ECRPTQ) project [8]. Subsequently, NCATS supported the initiation of the Development, Implementation and Assessment of Novel Training in Domain-Based Competencies (DIAMOND) project in 2017. The primary aim of the DIAMOND project was the creation of a federated database structured around the ECRPTQ competency framework to curate information about research training opportunities for clinical research professionals working throughout the CTSA consortium. The DIAMOND investigators’ second aim was the development and validation of competency-based tools designed to assess the ability of clinical research professionals to perform their roles in the clinical research enterprise and to evaluate the need for and quality of clinical research education and training programs.

A competency-based assessment inventory for principal investigators and physician scientists, the Clinical Research Appraisal Inventory (CRAI) [9], was developed by Mullikin and colleagues. While the CRAI has undergone several modifications [10–12], it has become the standard for assessing the self-perceived competency of principal investigators (PIs) to conduct clinical trials. However, some of the competencies included in the inventory are not functions that would ordinarily be performed by the other members of the clinical research team, who play essentially supporting roles (e.g., research managers, regulatory affairs specialists, data managers) in the research process. Accordingly, in our view, the CRAI is not optimal for assessing the preparedness or training needs of non-PI team members or for assessing the utility of the educational programs that prepare them to carry out the functions defined in a research protocol. Until now, no tool comparable to the CRAI was available to assess the competence or training needs of support personnel.

To address the need for such a tool, DIAMOND investigators collaborated with representatives of CoAPCR (CAH and CTJ) to analyze survey data collected by the JTF and create a tool to assess the preparedness of those playing supportive roles in the clinical research process. In 2014, the JTF surveyed over 2000 clinical research professionals working in all roles and across a wide spectrum of clinical research settings (e.g., medical centers, contract research organizations [CROs], private research settings, community hospitals) around the world to assess their self-perceived competence to perform the functions defined by the 51 core competencies. Details of the methods and results have been published elsewhere [1]. Briefly, exploratory factor analysis (EFA) of data obtained from clinical research professionals employed in the USA or Canada identified 20 core competencies that formed five factors: a scale measuring self-perceived competence to perform “General” research functions (e.g., GCPs) and four subscales reflecting specialized research functions: “Ethics and Patient Safety,” “Medicines Development,” “Data Management,” and “Scientific Concepts.” Together, the General Index and the four specialized indices make up the first version of the Competency Index for Clinical Research Professionals (CICRP-I). The five measures were highly correlated and had high face validity with reasonable psychometric properties. Most importantly, scores on the General Index, the Ethics and Patient Safety subscale, and the Medicines Development subscale differed significantly (p < 0.05) among those who reported their role to be research coordinator, administrator, regulatory affairs specialist, or data manager [1].

These findings suggested a remaining need for an assessment tool specific to the role of clinical research coordinator (CRC), comparable to the CRAI created for the role of principal investigator. We reasoned that, in the routine performance of their role, CRCs carry out a wide range of functions in the clinical research process and therefore must be prepared to perform the activities described by core competencies in every domain of clinical research identified by the JTF. In contrast, other professionals in the research process perform essentially supporting roles (e.g., data managers, regulatory affairs specialists) and therefore need expertise in one or more competency domains related to their specialized area of responsibility, but not command of the broad array of core competencies required of the CRC. We also reasoned that CRCs working at the DIAMOND CTSA hubs are likely to have greater experience coordinating the most complicated trials and protocols across all phases of clinical and translational research than CRCs who work at less research-intensive sites, such as community-based settings or private physicians’ offices, where they often double as care providers. Here we report the development of a tool to assess the self-perceived competence of CRCs and validate that tool by testing the hypothesis that CRCs working at CTSA hubs will demonstrate greater self-perceived competence than CRCs who function in less research-intensive settings.

Methods

Online surveys were administered utilizing existing email groups of clinical research professionals working at the DIAMOND CTSA hubs and their partnering hospitals: University of Michigan, Ohio State University and Nationwide Children’s Hospital, University of Rochester, and Tufts University and Tufts Medical Center. Each hub used a standardized approach to recruitment, with recruitment language and the use of list-servs common to all sites. All four of these universities carry a Carnegie I classification as high research activity institutions, and we believe they are representative of the other research-intensive CTSA settings in the United States. This research was determined to be exempt by the institutional review boards (IRBs) at all sites.

The survey solicited information on demographic characteristics, education, and current and previous work experience in various clinical research roles and settings. The 95 respondents who reported that they were currently working as a CRC and had at least 1 year of CRC experience are the subjects of this analysis. These CRCs indicated how confident they felt performing the functions defined by the 20 CICRP core competencies and their desire for additional training in each of the eight JTF competency domains. In scoring themselves on the core competencies, the DIAMOND CRC respondents used an 11-point format (i.e., 0 = “Not at all Confident” to 10 = “Completely Confident”), a scoring method similar to that used in the CRAI. These data therefore provide a unique opportunity to develop an assessment tool for CRCs based on their self-perceived competence for work at research-intensive CTSA institutions.

An EFA with principal axis extraction and promax rotation (kappa = 4) was performed using SPSS 25. We determined the number of factors using a scree plot showing the eigenvalue associated with each factor (Fig. 1). The scree plot failed to provide a clear indication of the number of factors but suggested a range from two to four. We excluded the 4-factor model because the fourth factor’s eigenvalue was less than 1, a conventional criterion for factor retention. We examined the pattern matrices of the 2- and 3-factor models and rejected the 3-factor model because its factors were not coherent: the three factors did not clearly pertain to distinct competency domains or research functions, and a number of items loaded on more than one factor.

Fig. 1. Scree plot of the eigenvalues. Abbreviation: CICRP-II, Competency Index for Clinical Research Professionals, version II.
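To make the model-selection step concrete, the following is a minimal sketch of how the extraction, scree plot, and candidate pattern matrices might be reproduced in Python with the open-source factor_analyzer package (the analysis reported here was run in SPSS 25). The DataFrame `items`, holding the 95 respondents’ 0–10 ratings on the 20 core competencies, is a hypothetical stand-in, as the DIAMOND data are not distributed with the article.

```python
# Sketch only: `items` is a hypothetical pandas DataFrame of shape (95, 20),
# one column per CICRP core competency, each rated 0-10.
import matplotlib.pyplot as plt
from factor_analyzer import FactorAnalyzer

# Scree plot of eigenvalues (cf. Fig. 1) to bracket the plausible factor range.
fa0 = FactorAnalyzer(rotation=None, method="principal")
fa0.fit(items)
eigenvalues, _ = fa0.get_eigenvalues()
plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, "o-")
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.show()

# Principal axis extraction with promax rotation; factor_analyzer's promax
# uses power (kappa) = 4 by default, matching the setting reported above.
for k in (2, 3):
    fa = FactorAnalyzer(n_factors=k, method="principal", rotation="promax")
    fa.fit(items)
    print(f"{k}-factor pattern matrix:")
    print(fa.loadings_.round(3))
```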

In the 2-factor model, 19 of the 20 core competencies exhibited a high loading on either the first or the second factor, while a single item had a low loading on the second factor and a somewhat higher loading on the first. Nonetheless, we included that item on factor 2 so that each factor was defined by 10 core competencies (as opposed to 11 on factor 1 and 9 on factor 2). We calculated Cronbach’s Alpha to gauge the impact on reliability of including the questionable item on the second factor (i.e., a 10 and 10 item solution) versus on the first factor (i.e., an 11 and 9 item solution): the 10 and 10 solution increased Cronbach’s Alpha for factor 1 by 0.005 and decreased it for factor 2 by 0.004. These differences were judged inconsequential, particularly given the practical benefits of having two factors of equal length that are easy to compare directly. Most importantly, in the 10 and 10 item solution each of the resulting factors was clearly defined by a different set of core competencies. The first factor was defined by core competencies that pertain to Routine functions (e.g., good clinical practice) carried out by CRCs in their everyday professional activities, while the second factor was clearly defined by core competencies that pertain to more Advanced and specialized regulatory functions performed by CRCs. Table 1 shows the mean, standard deviation, and factor loading for each item.
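The reliability comparison described above amounts to computing Cronbach’s Alpha under the two candidate item allocations. Below is a minimal sketch, assuming a DataFrame `ratings` of the 0–10 item responses; `routine_10` (the 10 final factor-1 items) and `advanced_9` (the 9 unambiguous factor-2 items) are hypothetical column-name lists, with "SC3" as the questionable item.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Allocate the questionable item (SC3) to factor 2 (10 + 10 split) or to
# factor 1 (11 + 9 split) and compare the resulting reliabilities.
splits = {
    "10 + 10": (routine_10, advanced_9 + ["SC3"]),
    "11 + 9": (routine_10 + ["SC3"], advanced_9),
}
for label, (factor1, factor2) in splits.items():
    print(label,
          round(cronbach_alpha(ratings[factor1]), 3),
          round(cronbach_alpha(ratings[factor2]), 3))
```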

Table 1.

Twenty CICRP items administered to DIAMOND CTSA sites (N = 95)

| CICRP 20 items* | Factor loading | Mean (SD) |

Routine (Factor 1)
| EP3: Apply relevant national and international principles of human subject protections and privacy throughout all stages of a clinical study. | 0.938 | 8.15 (1.85) |
| LP4: Describe the impact of diversity and demonstrate cultural competency in the design and conduct of clinical research. | 0.839 | 7.61 (2.34) |
| DM5: Describe and develop processes for data quality assurance. | 0.788 | 7.34 (2.19) |
| EP5: Describe the ethical issues involved when dealing with vulnerable populations and the need for additional safeguards. | 0.767 | 8.25 (1.88) |
| LP3: Identify and apply the professional guidelines and codes of ethics related to the conduct of clinical research. | 0.726 | 8.20 (1.91) |
| DM3: Describe and assess best practices and the importance of informatics for standardizing data collection, capture, management, analysis, and reporting. | 0.669 | 7.54 (2.11) |
| SS3: Recognize the management and training approaches to mitigate risk to improve clinical study conduct. | 0.616 | 7.03 (2.26) |
| SC5: Critically analyze clinical and translational study results. | 0.543 | 5.66 (2.79) |
| CT6: Differentiate the types of adverse events (AEs) that may occur during clinical studies, explain the identification process for AEs, and describe the reporting requirements to IRBs/IECs, sponsors, and regulatory authorities. | 0.519 | 8.13 (1.94) |
| EP1: Differentiate between standard of care and clinical study activities. | 0.472 | 8.57 (2.03) |

Advanced (Factor 2)
| MD3: Explain the investigational products development process and the activities that integrate commercial realities into the life cycle management of medical products. | 0.965 | 5.03 (2.85) |
| MD5: Describe the specific processes and phases that must be followed in order for the regulatory authority to approve the marketing authorization for a medical product. | 0.863 | 5.04 (3.08) |
| CT8: Describe the reporting requirements of global regulatory bodies relating to clinical study conduct. | 0.826 | 5.32 (2.87) |
| MD4: Summarize the legislative and regulatory framework that supports the development and registration of investigational products and ensures their safety, efficacy, and quality. | 0.761 | 4.64 (2.79) |
| MD2: Describe the roles and responsibilities of the various institutions participating in the investigational product development process. | 0.738 | 6.06 (2.67) |
| CT4: Compare and contrast the regulations and guidelines of global regulatory bodies relating to the conduct of clinical studies. | 0.683 | 4.88 (2.92) |
| SS5: Identify the legal and regulatory responsibilities, issues, liabilities, and accountabilities that are involved in the conduct of clinical studies. | 0.613 | 6.78 (2.53) |
| CT9: Describe the role and process for monitoring a study. | 0.456 | 7.60 (2.10) |
| EP2: Define the concepts of clinical equipoise and therapeutic misconception as they relate to the conduct of clinical studies. | 0.401 | 4.63 (3.12) |
| SC3: Explain the elements of clinical and translational study design.** | 0.297 | 6.91 (2.50) |
* Each competency statement is preceded by an abbreviation of one of the eight ECRPTQ core competency domains and a number that indicates the original number for that competency statement. Abbreviations: CICRP, Competency Index for Clinical Research Professionals; CT, clinical trial operations; CTSA, clinical and translational science award; DM, data management and informatics; EP, ethical and participant safety considerations; LP, leadership and professionalism; MD, medicines development and regulation; SC, scientific concepts and research design; SS, study and site management.

** SC3 had a higher loading on Factor 1 (0.454) but was included in Factor 2 to equalize the number of items in each factor. This had inconsequential effects on scale reliability.

Scoring the CICRP-II

We created three alternative methods for scoring both Routine Competencies (Factor 1) and Advanced Competencies (Factor 2). The first is the factor regression score, obtained by multiplying the respondent’s 0–10 self-rating by the item’s factor regression coefficient and summing across all 20 competency items for each factor. Factor regression scores have a mean of zero and unit variance, with approximately 95% of cases having scores between ±1.96. While factor regression scores are the most precise, they are tedious to calculate, not easily applied in practice, and can be problematic to compare across populations.

The second scoring method sums a respondent’s 0–10 self-rating of competence across the 10 core competencies defining Routine Competencies and the 10 items defining Advanced Competencies. This method uses only the 10 core competency items for each factor and is therefore easier to score but less precise than the factor regression method. The summed score for each factor has a potential range from 0 to 100 with higher scores indicating higher self-competency ratings.

The third scoring method dichotomizes responses to the 20 competencies, with ratings of 0–5 collapsed to indicate “Not Competent” (scored 0) and ratings of 6–10 collapsed to indicate “Competent” (scored 1). A count score for the Routine and Advanced factors is created by simply counting the number of items on each factor for which a respondent claimed competence. The count scores are the easiest to calculate, with each factor having a potential range from 0 to 10. (It should be noted that the cut point between 5 and 6 used to define “competency” is arbitrary, and users may prefer a lower or higher point to define competence in their application.)
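As an illustration, the three scoring methods can be expressed in a few lines of Python. This is a sketch under stated assumptions: `ratings` (respondents by 20 items, rated 0–10), the 20 × 2 factor score coefficient matrix `coef` produced by the EFA, and the item lists `routine_items` and `advanced_items` are hypothetical names, not artifacts distributed with the paper.

```python
# Assumed inputs: `ratings` is a pandas DataFrame (n respondents x 20 items),
# `coef` is the 20 x 2 factor score coefficient matrix from the EFA, and
# `routine_items` / `advanced_items` list the 10 column names per factor.

# 1. Factor regression scores: SPSS applies the coefficients to standardized
#    items, giving scores with mean 0 and variance near 1.
z = (ratings - ratings.mean()) / ratings.std(ddof=1)
regression_scores = z.to_numpy() @ coef        # columns: Routine, Advanced

# 2. Summed scores: raw 0-10 ratings added over each factor's 10 items (0-100).
routine_sum = ratings[routine_items].sum(axis=1)
advanced_sum = ratings[advanced_items].sum(axis=1)

# 3. Count scores: dichotomize at the (arbitrary) 5/6 cut point, then count
#    the items on which each respondent claims competence (0-10 per factor).
competent = (ratings >= 6).astype(int)
routine_count = competent[routine_items].sum(axis=1)
advanced_count = competent[advanced_items].sum(axis=1)
```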

Table 2 gives the correlations between the scales under the three alternative scoring systems. The correlation between the two factors ranges from 0.627 to 0.688, depending on the scoring method. The two factors thus share less than 50% of their variance, indicating that, although closely related, they are distinct. Factor regression scores for each factor are highly correlated with their respective summed scores (0.991 and 0.986), which means that summed scores can be used in place of the more complicated factor regression method with essentially no loss of precision. Further, both factor regression and summed scores are correlated at 0.899 or greater with their respective count scores, indicating that the count scoring method can also be used with very little loss of precision relative to factor regression scores.

Table 2.

Correlations between factors with alternative scoring methods (DIAMOND Data; N = 95)

| | Routine Regression | Advanced Regression | Routine Sum | Advanced Sum | Routine Count | Advanced Count |
| Routine Regression | 1.000 | 0.687 | 0.991 | 0.695 | 0.899 | 0.663 |
| Advanced Regression | | 1.000 | 0.688 | 0.986 | 0.601 | 0.921 |
| Routine Sum | | | 1.000 | 0.688 | 0.901 | 0.660 |
| Advanced Sum | | | | 1.000 | 0.609 | 0.931 |
| Routine Count | | | | | 1.000 | 0.627 |
| Advanced Count | | | | | | 1.000 |

Table 3 presents the statistical properties of each method for scoring the Routine and Advanced factors. The regression scores for both Routine and Advanced Competencies have zero means and variances near 1.0. Further, as one might expect, both the summed and count scores for Routine Competency were higher than the comparable scores for Advanced Competency, suggesting that even experienced CRCs at research-intensive CTSAs feel more competent dealing with Routine clinical research functions than with more esoteric, Advanced functions. Internal consistency reliability was high, with Cronbach’s Alpha values of 0.913 (Routine) and 0.911 (Advanced) for the summed scoring method and 0.856 (Routine) and 0.862 (Advanced) for the count scoring method.

Table 3.

Statistical characteristics with alternative scoring methods (DIAMOND Data; N = 95)

| | Routine Regression | Advanced Regression | Routine Sum | Advanced Sum | Routine Count | Advanced Count |
| Mean | 0.000 | 0.000 | 76.47 | 56.89 | 8.41 | 5.76 |
| Median | 0.177 | −0.034 | 79.00 | 56.00 | 9.00 | 6.00 |
| Standard deviation | 0.97 | 0.97 | 16.08 | 20.66 | 2.32 | 3.16 |
| Minimum | −3.84 | −2.22 | 12.00 | 11.00 | 0.00 | 0.00 |
| Maximum | 1.37 | 1.90 | 100.00 | 100.00 | 10.00 | 10.00 |
| Cronbach’s Alpha | | | 0.913 | 0.911 | 0.856 | 0.862 |

Validation of the CICRP-II

Our approach to validity and validation testing is consistent with those of Sullivan [13] and Kane [14]. Kane proposes an argument-based approach to validity that requires that claims of validity be judged according to the structure and plausibility of the validity argument. To advance a compelling argument about the validity of CICRP-II while adhering to standard practices in clinical and translational research, this study focuses primarily on known-groups validity (also known as between-groups validity) [15]. While other well-known types of validity tests can be equally rigorous, the narrow scope and focus of the present study prevented further tests, such as tests of predictive validity against long-term outcomes, from being conducted. These and other limitations are detailed later in this work.

To assess between-groups validity, we tested the hypothesis that CRCs at the DIAMOND sites would report higher self-perceived competence to perform both Routine and Advanced clinical research functions than CRCs working outside research-intensive CTSA sites. We used data collected by similar survey research methods from two populations of clinical research professionals. The DIAMOND survey collected data from clinical research professionals at the four CTSA hubs; the 95 respondents who said they were currently working as a CRC and had 1 year or more of experience in that role constitute the sample of DIAMOND CRCs. The JTF surveyed clinical research professionals working in various research settings across the USA and Canada; the 81 respondents who said they were working as a CRC in one or another of these settings constitute the JTF CRC sample for this analysis.

Table 4 presents characteristics of the DIAMOND CRC and JTF CRC samples. Two-thirds of the JTF CRC respondents reported having a bachelor’s degree or less, with 28.4% reporting a master’s degree and less than 5% a doctorate; by comparison, just over half of the DIAMOND CRCs had a bachelor’s degree or less, 35.8% a master’s, and 10.5% a doctorate. While educational attainment did not differ significantly between the JTF and DIAMOND samples, 22% of DIAMOND CRCs held academic credentials (i.e., post-bachelor’s certificates and master’s degrees) specifically in clinical research compared to only 7% of CRCs who responded to the JTF survey (p = 0.012). There were also significant differences between JTF and DIAMOND CRC respondents in their years of clinical research experience (p = 0.035): some 37% of JTF respondents had 11 or more years of experience in clinical research compared to only 19% of DIAMOND respondents. Further, there were notable differences in professional society membership. The professional organization of choice among CRCs responding to the JTF survey was ACRP, whereas CRCs at the CTSA hubs were more likely to be members of SoCRA; over 80% of members of each society had passed their society’s certification examination.

Table 4.

Characteristics of CRCs in the JTF and DIAMOND survey data

| Characteristic | JTF, N (%) | DIAMOND, N (%) | p |
| Education | | | |
|   ≤ Bachelor’s | 54 (66.7) | 51 (53.7) | |
|   Master’s | 23 (28.4) | 34 (35.8) | |
|   Doctorate | 4 (4.9) | 10 (10.5) | 0.158 |
| Clinical research degree (yes) | 6 (7.4) | 21 (22.3) | 0.012 |
| Years of experience | | | |
|   < 2 | 16 (19.8) | 15 (15.8) | |
|   2–5 | 21 (25.9) | 43 (45.3) | |
|   6–10 | 14 (17.3) | 19 (20.0) | |
|   11–20 | 21 (25.9) | 13 (13.7) | |
|   > 20 | 9 (11.1) | 5 (5.3) | 0.035 |
| Professional membership | | | |
|   ACRP | 29 (35.8) | 13 (13.7) | <0.001 |
|    Certified | 26 (32.1 / 89.7) | 11 (11.6 / 84.7) | |
|   SoCRA | 16 (19.8) | 30 (31.6) | 0.075 |
|    Certified | 13 (16.0 / 81.2) | 27 (28.4 / 90.0) | |

For certified members, the first percentage is of the full sample and the second is of that society’s members.

Abbreviations: ACRP, Association of Clinical Research Professionals; CRC, clinical research coordinators; JTF, Joint Task Force; SoCRA, Society of Clinical Research Associates.
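The p values in Table 4 reflect comparisons of categorical distributions across the two samples. For example, the years-of-experience comparison can be reproduced with a standard chi-square test of independence; the sketch below applies SciPy to the counts in Table 4 (the choice of SciPy here is illustrative, as the article does not state which software produced these tests).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Years-of-experience counts from Table 4.
# Rows: JTF, DIAMOND; columns: <2, 2-5, 6-10, 11-20, >20 years.
experience = np.array([[16, 21, 14, 21, 9],
                       [15, 43, 19, 13, 5]])
chi2, p, dof, expected = chi2_contingency(experience)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# chi-square ~ 10.3 on 4 df, p ~ 0.035, consistent with the value in Table 4.
```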

Results

We calculated scores for the CICRP-I General Index and its four subscales, as well as scores on the CICRP-II Routine and Advanced factors, for the CRC respondents to the JTF survey (N = 81) and the CRC respondents to the DIAMOND survey (N = 95). We used the simplified count scoring method rather than the arithmetically cumbersome factor regression or summed scoring methods because the JTF data were available only as dichotomous responses to the core competencies. The results are shown in Table 5.

Table 5.

Self-assessed competency of CRCs on CICRP-I and CICRP-II [JTF (N = 81) and DIAMOND (N = 95) surveys]

| Scale | Data set | Mean | Standard deviation | p |
| CICRP-I factors | | | | |
|   General Clinical Research | JTF | 5.65 | 3.02 | |
| | DIAMOND | 7.38 | 2.46 | <0.001 |
|   Medicines Development | JTF | 2.52 | 1.72 | |
| | DIAMOND | 2.81 | 1.74 | NS |
|   Ethics and Patient Safety | JTF | 3.89 | 1.22 | |
| | DIAMOND | 4.49 | 1.09 | <0.001 |
|   Data Management | JTF | 3.17 | 1.66 | |
| | DIAMOND | 3.96 | 1.32 | <0.001 |
|   Scientific Concepts | JTF | 2.04 | 1.67 | |
| | DIAMOND | 3.27 | 1.50 | <0.001 |
| CICRP-II factors | | | | |
|   Routine Functions | JTF | 6.54 | 2.60 | |
| | DIAMOND | 8.41 | 2.32 | <0.001 |
|   Advanced Functions | JTF | 4.58 | 2.96 | |
| | DIAMOND | 5.76 | 3.16 | <0.001 |

Abbreviations: CICRP, Competency Index for Clinical Research Professionals; CRC, clinical research coordinators; JTF, Joint Task Force.

CRCs from the DIAMOND CTSA hubs scored much higher than the JTF CRCs on the General Index and all four subscales of CICRP-I and on both the Routine and Advanced factors of CICRP-II. The differences were statistically significant beyond the 0.001 level for all comparisons except the CICRP-I Medicines Development subscale. These large and consistent differences between CRCs participating in the JTF survey and those participating in the DIAMOND project occur on both CICRP-I and CICRP-II even though the JTF respondents had significantly more years of experience as clinical research professionals. These differences in self-perceived competence justify the decision to base CICRP-II only on experienced CRC respondents at research-intensive sites and clearly confirm its between-groups validity.
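The article does not name the test statistic behind the p values in Table 5; a two-sample comparison of count scores, for example with Welch’s unequal-variance t-test, is one conventional choice. A minimal sketch with hypothetical score arrays:

```python
from scipy.stats import ttest_ind

# `jtf_routine` and `diamond_routine` are hypothetical arrays of Routine count
# scores (0-10) for the JTF (N = 81) and DIAMOND (N = 95) CRC samples.
t_stat, p_value = ttest_ind(diamond_routine, jtf_routine, equal_var=False)
print(f"Routine functions: t = {t_stat:.2f}, p = {p_value:.4f}")
```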

Discussion

It is widely recognized that the clinical research enterprise requires a well-trained workforce competent to execute increasingly complex drug and device development protocols. It is equally important to understand the different settings in which trials are conducted and that, in all of these settings, the CRC is key: their role is central to the increasingly high-stakes and complicated conduct of clinical research. Competency-based training is therefore critical to meeting the task demands placed on CRCs. Preparing and retaining all types of clinical research professionals is made more difficult by frequent staff turnover and limited opportunities for advancement, particularly at academic health centers [16–18].

Because CICRP-I was created from self-reported competency data collected from individuals working in all roles and across all settings in the clinical research enterprise, it is a tool with general applicability for assessing preparedness and training needs across roles [19]. Its four subscales provide a degree of specificity in assessing competence and training needs, with subscale scores used to identify an individual’s needs, whether related to GCPs, ethics and patient safety, data management, or regulatory affairs pertaining to medicine and device development. In contrast, CICRP-II was created to measure the self-perceived competence of experienced CRCs at research-intensive sites and is therefore the “gold standard” for assessing preparation and training needs related to Routine and/or Advanced research functions. CICRP-II can serve as a high-precision tool for research (factor regression scores), while both CICRP-I and CICRP-II are easy-to-use tools for self-assessment (summed or count scores) and quick-to-score tools for informal evaluation in human resource offices. Both the CICRP-I and CICRP-II indices and directions for scoring are available on the DIAMOND Portal at: https://clic-ctsa.org/diamond.

Either tool can provide data on an individual’s self-perceived preparedness for principal investigators or project managers to use when assessing whether an individual has the qualifications for a specific role on a research team and, if not, what additional training would be necessary. Similarly, funding agencies, CROs, site management organizations (SMOs), and even IRBs could use CICRP data along with other assessment tools (e.g., the CRAI) as an indicator of the overall readiness of a research team and of whether the team includes individuals with competency across all domains of the research process. Finally, both measures could be recommended by professional organizations (e.g., ACRP and SoCRA) as self-assessment tools to help individuals gauge their readiness to sit for certification exams.

Assessing the competence of an individual to work in the clinical research enterprise through certification exams is necessary but addresses only one aspect of assuring a competent workforce. There is also a need to evaluate and accredit the education and training programs that purport to prepare individuals for clinical research work. There is currently no formal mechanism to assess the quality of the many training programs offered by various vendors, both online and on-site; most are subject to limited evaluation, and it is essential that assessment tools and procedures be developed to evaluate their quality and effectiveness. The CoAPCR, in collaboration with the pharmaceutical industry and professional organizations including ACRP and SoCRA, has in place a Commission on Accreditation of Allied Health Education Programs (CAAHEP)-approved process for evaluating and accrediting degree-granting academic programs. Currently, many CoAPCR member schools are utilizing the CICRP indices to evaluate their programs, and several have begun the formal accreditation process.

While CICRP-I and CICRP-II were developed to further the professionalization of the clinical research workforce, we do not recommend that either be used as the sole measure of an individual’s competence to function in the clinical research enterprise or as the sole measure of the quality or effectiveness of an education or training program. Self-assessments of competence are often prone to overconfidence, which can produce biased estimates of one’s actual level of competence [20,21]. The ultimate judgment of the CICRP-I and CICRP-II tools will depend on their correlation with yet-to-be-developed objective measures of performance in the clinical research enterprise.

Limitations

There were notable differences between the JTF and DIAMOND surveys. The JTF survey included clinical research professionals working in academic health centers, some of whom may have been employed at a CTSA or another research-intensive institution. If there were respondents from CTSAs in the JTF survey, their presence would tend to reduce the magnitude of the observed differences in perceived competence between JTF and DIAMOND CRCs. In other words, the differences we found across both CICRP-I and CICRP-II are likely to be conservative.

There were also differences in the data collection methods, particularly with respect to the core competencies. First, the JTF survey asked respondents to rate their self-perceived competence on 51 core competencies using a four-point scale that was dichotomized into “not competent” and “competent”; factor analysis identified 20 core competencies forming five factors. The DIAMOND survey asked respondents to rate their self-perceived competence on these 20 core competencies on an 11-point scale, from which factor analytic methods yielded two factors. Second, the wording of the core competencies specified by the JTF has undergone wordsmithing with input from several stakeholders beginning with the ECRPTQ project. The modifications were minor changes in grammar, simplification of sentence structure, or word changes such as replacing “clinical trial” with “clinical research” to make an item more broadly applicable. Regardless of how minor, some changes may have created non-negligible differences in the interpretation of an item and thus changes in the factor structure. Additional data, including data being collected at CoAPCR institutions as well as from the clinical research workforce, are necessary to confirm the value of the CICRP indices for assessing the preparedness of the clinical research workforce.

Acknowledgments

The authors acknowledge several individuals who made substantial contributions to this work. Professor Susan Murphy, ScD, OTR and Mary Burgess provided editorial guidance, and the Development, Implementation and Assessment of Novel Training in Domain-Based Competencies (DIAMOND) project team provided the material support for the research, including REDCap support and statistical assistance. This work was supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through award number U01TR002013.

Footnotes

Financial Support

Supported by NCATS U01TR002013 “Development, Implementation and Assessment of Novel Training in Domain-Based Competencies (DIAMOND)”; and in part by Ohio State University UL1TR002733, the Michigan Institute for Clinical and Health Research (MICHR) UL1TR002240, Tufts Clinical and Translational Science Institute (CTSI) UL1TR002544, and the University of Rochester’s Clinical and Translational Science Institute UL1TR000042. In addition, support was received from the Enhancing Clinical Research Professionals’ Training and Qualification (ECRPTQ) project, UL1TR000433.

Disclosures

There are no conflicts of interest.

References

  • 1. Hornung CA, et al. Competency indices to assess the knowledge, skills and abilities of clinical research professionals. International Journal of Clinical Trials 2018; 5(1): 46–53.
  • 2. Institute of Medicine. Transforming Clinical Research in the United States. Washington, DC: National Academies Press, 2010.
  • 3. Califf R, et al. Appendix D: Discussion paper – The clinical trials enterprise in the United States: a call for disruptive innovation. In: Institute of Medicine, ed. Envisioning a Transformed Clinical Trials Enterprise in the United States: Establishing an Agenda for 2020. Washington, DC: National Academies Press; 2012.
  • 4. Jones CT, et al. Defining competencies in clinical research: issues in clinical research education. Research Practitioner 2012; 13(3): 99–107.
  • 5. Sonstein S, et al. Global self-assessment of competencies, role relevance, and training needs among clinical research professionals. Clinical Researcher 2016; 30(6): 38–45. doi: 10.14524/CR-16-0016
  • 6. Sonstein SA, et al. Moving from compliance to competency: a harmonized core competency framework for the clinical research professional. Clinical Researcher 2014; 28(3): 17–23.
  • 7. Joint Task Force for Clinical Trial Competency. Core Competency Framework, version 2.0; 2017. Retrieved from http://clinicaltrialcompetency.org.
  • 8. Shanley T. Enhancing clinical research professionals’ training and qualifications. Retrieved from http://www.ctsa-gcp.org/. Accessed February 12, 2016.
  • 9. Mullikin E, Bakken LL, Betz NE. Assessing research self-efficacy in physician scientists: the Clinical Research Appraisal Inventory. Journal of Career Assessment 2007; 15(3): 367–387.
  • 10. Lipira L, et al. Evaluation of clinical research training programs using the Clinical Research Appraisal Inventory. Clinical and Translational Science 2010; 3(5): 243–248.
  • 11. Robinson G, et al. A shortened version of the Clinical Research Appraisal Inventory: CRAI-12. Academic Medicine 2013; 88(9): 1340–1345.
  • 12. Eller L, Lev EL, Bakken LL. Development and testing of the Clinical Research Appraisal Inventory-Short Form. Journal of Nursing Measurement 2014; 22(1): 106–119.
  • 13. Sullivan GM. A primer on the validity of assessment instruments. Journal of Graduate Medical Education 2011; 3(2): 119–120.
  • 14. Kane MT. An argument-based approach to validity. Psychological Bulletin 1992; 112(3): 527–535.
  • 15. DeVellis RF. Scale Development: Theory and Applications. 2nd ed. Newbury Park, CA: Sage Publications, 2003.
  • 16. Snyder D, et al. Retooling institutional support infrastructure for clinical research. Contemporary Clinical Trials 2016; 48: 139–145.
  • 17. Causey M. Professional pathways boost staff retention in clinical research settings. ACRP Blog; 2017. Retrieved from www.acrpnet.org/2017/04/24/professional-pathways-boost-staff-retention-clinical-research-settings/
  • 18. Applied Clinical Trials. Clinical trials talent survey report; 2018. Retrieved from http://www.appliedclinicaltrialsonline.com/node/351341/done?sid=15167
  • 19. Bandura A. A guide for constructing self-efficacy scales. In: Pajares F, Urdan T, eds. Self-Efficacy Beliefs of Adolescents. Greenwich, CT: Information Age Publishing; 2006: 307–337.
  • 20. Bjork RA. Assessing our own competence: heuristics and illusions. In: Gopher D, Koriat A, eds. Attention and Performance XVII. Cognitive Regulation of Performance: Interaction of Theory and Application. Cambridge, MA: MIT Press; 1998: 435–459.
  • 21. Dunning D, Heath C, Suls JM. Flawed self-assessment: implications for health, education, and the workplace. Psychological Science in the Public Interest 2004; 5(3): 69–106.
