Clinical and Translational Science. 2023 Oct 12;16(12):2530–2542. doi: 10.1111/cts.13645

uConsent: Addressing the gap in measuring understanding of informed consent in clinical research

Richard F Ittenbach 1, J William Gaynor 2, Jenny M Dorich 3,4, Nancy B Burnham 2, Guixia Huang 1, Madisen T Harvey 2, Jeremy J Corsmo 5
PMCID: PMC10719467  PMID: 37828723

Abstract

The purpose of this study was to establish the technical merit, feasibility, and generalizability of a new measure of understanding of informed consent for use with clinical research participants. A total of 109 teens/young adults at a large, pediatric medical center completed the consenting process of a hypothetical biobanking study. Data were analyzed using a combination of classical and modern theory analytic methods to produce a final set of 19 items referred to as the uConsent scale. A requirement of the scale was that each item mapped directly onto one or more of the Basic Elements of Informed Consent from the 2018 Final Rule. Descriptive statistics were computed for each item as well as for the scale as a whole. Partial credit (Rasch) logistic modeling was then used to generate difficulty/endorsability estimates for each item. The final, 19‐item uConsent scale was derived using inferential methods to yield a set of items that spanned difficulty levels (−3.02 to 3.10 logits), with a range of point‐measure correlations (0.12 to 0.50), within‐range item‐ and model‐fit statistics, and varying item types mapped to both Bloom's Taxonomy of Learning and the required regulatory components of the 2018 Final Rule. The median coverage rate for the uConsent scale was 95% for the 25 randomly selected studies from ClinicalTrials.gov. The uConsent scale may be used as an effective measure of informed consent when measuring and documenting participant understanding in clinical research studies today.


Study Highlights.

  • WHAT IS THE CURRENT KNOWLEDGE ON THE TOPIC?

There is general consensus in the field that informed consenting practices are flawed and in desperate need of revision.

  • WHAT QUESTION DID THIS STUDY ADDRESS?

Can the practice of informed consent in clinical research be improved through the use of a validated, rigorously derived, and generalizable measure of understanding?

  • WHAT DOES THIS STUDY ADD TO OUR KNOWLEDGE?

Our paper provides evidence to the clinical research community that it is indeed possible to measure and evaluate one's understanding of key components of a research study, using an instrument based on educational theory and premised upon the required regulatory components. This instrument may be used across a wide range of clinical research studies as demonstrated by our analysis of randomly selected consent forms from ClinicalTrials.gov.

  • HOW MIGHT THIS CHANGE CLINICAL PHARMACOLOGY OR TRANSLATIONAL SCIENCE?

Better informed research participants will translate to more committed participants with better compliance, reduced potential for harm, shorter completion timelines, and increased trust in science.

INTRODUCTION

Regulatory agencies, federal advisory boards, and other authoritative bodies have recommended an overhaul of the current informed consent process used in clinical research. Agencies have suggested that informed consent documents and the processes they represent are overly formalistic, unnecessarily complex, and designed to protect institutions more so than the participants they were intended to serve. 1 , 2 , 3 , 4 After 40 years of implementation, the US Food and Drug Administration (FDA) guidance, and multiple systematic reviews of the literature, there is general consensus that consenting practices used today are flawed and in desperate need of revision. 5 , 6 , 7 , 8 , 9

Whereas “understanding,” or the ability to comprehend key components of a research study, has always been an important part of the informed consent process, it is now a required component of the Basic Elements of Informed Consent, 2018 Final Rule (45CFR46.116(a)(5)(i‐ii)), which is designed to strengthen research participants' ability to make informed decisions about enrolling in clinical research. 10 Ironically, seven separate systematic reviews of the literature and a consensus paper from the Clinical Trials Transformation Initiative (CTTI) have concluded that research participants simply do not understand key components of the informed consent process—and that there is no “gold standard” for evaluating understanding. 5 , 6 , 7 , 9 , 11 , 12 , 13 , 14 Some adults do not even read entire portions of the consent document, a phenomenon that has persisted for decades. 15 , 16 Such findings underscore the need for a more modern, rigorously derived measure of understanding to assure and document participant understanding in the critically important informed consent process. 4 , 17 , 18 , 19

Historically, the practice of informed consent has emphasized “informing” participants rather than assuring their “understanding.” Unfortunately, asking individuals to participate in research studies without understanding what is expected of them not only violates a fundamental tenet of human subjects’ research, 15 but introduces barriers that undermine the integrity of the research process, including diminished trust in science and the investigative team, increased potential for harm, poor research participant compliance with study requirements, and delayed timelines and budgets. Ironically, although quizzes and tests were recommended in the original Belmont Report, 20 and the aforementioned reviews of the literature have documented their use over the years, 5 , 7 , 9 , 13 , 14 , 20 the barometer for evaluating understanding has rested on rather primitive strategies involving a single question (“Do you understand…?”), ad hoc quizzes created by study staff, or parroting back phrases as in the “teach‐back” method, 21 , 22 , 23 rather than on a psychometrically sound measure of understanding. Evidence to date suggests that assessment of research participants' understanding of informed consent can be improved, 4 , 24 and that methods used to assess understanding have not always reflected important changes in regulatory requirements, instructional methodology, or scale development techniques.

A measure of understanding, no matter how concise, well composed, or psychometrically strong, should never be presumed to take the place of a discussion between a research participant and a study team member responsible for consent. But a psychometrically valid measure of understanding can be a strong ally in the process. It can help document areas of specific concern for follow‐up as well as stimulate additional discussion regarding research participation. Therefore, the purpose of this study was threefold: design and construct a measure of understanding that addresses the required Basic Elements of Informed Consent with items of varying levels of difficulty (aim 1), field test and document the integrity of the measure using rigorous methods of measure and scale development (aim 2), and demonstrate the generalizability of our measure to other research studies registered in ClinicalTrials.gov (aim 3).

METHODS

Sample

Participants consisted of 109 teens and young adults, 18–24 years of age, recruited at a single pediatric medical center. A recruiting flyer was distributed by email through the institution's clinical trials office to the institution's 17,000 employees, requesting volunteers willing to participate in a study to help researchers develop a tool to better explain research studies to teens and young adults. Participants received a $20 Amazon gift card for their time.

Older teens and young adults were selected as the target group for this study because of the team's willingness to engage a relatively underserved segment of the general population: those transitioning from adolescence to young adulthood, and moving from dependence to independence. This age group has historically been viewed as difficult to engage and recruit, despite being on the cusp of responsibility for their own decision making and health care, and despite having a capacity for understanding and decision making comparable to that of the larger adult population. 25

Procedures

The current study had three distinct components: item generation (aim 1), measure development (aim 2), and generalizability to other clinical research studies (aim 3). The study was approved by the primary institution's institutional review board (IRB), and reporting was done in accordance with Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines for observational studies. 26

Item generation (aim 1)

An actual, active informed consent document used for participation in a research biorepository in the cardiac center at the Children's Hospital of Philadelphia served as the basis for this study. The consent document was first highlighted and annotated with respect to seven of the nine required elements of informed consent. Alternative procedures and compensation/treatment for injuries were not included, as these consent elements have qualifiers on whether they are required to be included in the consent form, and therefore are not present in many consent documents. An item bank of 91 informed consent questions (hereafter referred to as “items”) was created by two members of the study team (authors R.F.I. and J.J.C.) in accord with the 2014 Educational and Psychological Standards for Testing. 27 The set of 91 items was then reviewed, modified, and refined for content and grammar by the broader team.

All items were required to map onto information in the biobank consent document, the required basic elements of informed consent, and Bloom's Taxonomy of Learning. Bloom's Taxonomy is a well‐documented framework for organizing test items across a range of educational programs. 28 , 29 , 30 , 31 , 32 Of the 91 items in the initial item pool, 44 were selected for inclusion in the experimental uConsent scale, covering the seven selected required basic elements of informed consent, distributed across Bloom's five categories (knowledge recall, simple understanding, application, analysis/synthesis, and evaluation), and further stratified into either factual or conceptual domains. The 44‐item form was then vetted with a review panel prior to implementation. The panel included two young adults in the target age range and three professionals trained in education, diversity and inclusion, and measure development, respectively, who provided feedback and recommendations regarding reading level, appropriateness, bias, cultural sensitivity, and psychometric integrity.

Measure development (aim 2)

The experimental set of 44 items was converted to an electronic data capture system using REDCap software and administered to participants remotely. Included was an introductory screen with questions to verify eligibility criteria, a brief demographic questionnaire, the Quality of Informed Consent (QuIC) 33 scale for validation, and contact information for distribution of a $20 gift card in return for time spent completing the study. Administration time was 20–30 min.

Items were evaluated using a combination of classical (correlation‐based) and modern theory generalized linear (Rasch) methods. Items were evaluated for smoothness, modality, sufficiency of cell counts, directionality, unique/shared variance, point‐measure (item‐total) correlations, and likelihood for endorsability (difficulty). 34 , 35 A Rasch Partial Credit model, using joint maximum likelihood estimation methods, was deemed most appropriate because it places all responses on a common, linearly derived scale, handles varying response options, provides strong estimates of precision and model fit, and rank‐orders items with respect to endorsability (difficulty) among participants. 36 Preliminary estimates of criterion‐related validity were computed using a series of Spearman rho correlations: (a) among items; (b) between items and the uConsent total score; and (c) between the uConsent total score and the QuIC total score, a frequently used instrument for measuring understanding in the field of clinical research today. Aim 2 data were analyzed using Winsteps version 5.4.3 and SAS version 9.4.
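The Spearman rho step above can be sketched in a few lines of code. This is a minimal, self-contained illustration of the rank-correlation computation, not the authors' SAS/Winsteps workflow; the score vectors at the bottom are hypothetical placeholders, not the study's data.

```python
# Spearman rho = Pearson correlation of rank-transformed scores.
# Implemented from scratch so no statistics package is required.

def rank(values):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j across any run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Rank both vectors, then compute their Pearson correlation."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical uConsent and QuIC totals for five participants.
uconsent = [28, 31, 34, 25, 37]
quic = [55, 62, 58, 49, 66]
rho = spearman_rho(uconsent, quic)
```

The same function applies unchanged to item-level vectors (step a), item-vs-total pairs (step b), and the two total scores (step c).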

Generalizability to other clinical research studies (aim 3)

To demonstrate that our new uConsent scale could be used with other clinical research studies, 25 research studies with corresponding consent forms were randomly selected from the ClinicalTrials.gov database and analyzed using qualitative content analysis. 37 , 38 ClinicalTrials.gov is a web‐based resource of registered, global clinical research studies maintained by the National Library of Medicine that is available for use by investigators, potential research participants, and the general public worldwide. It was leveraged as a resource in this study to take advantage of the “consent form posting requirement” in the 2018 Final Rule (Ss 46.102(b), 46.116(h)). For purposes of this study, and to generate a plausible population of potential studies to which our uConsent scale may be applied and tested, studies meeting the following conditions in ClinicalTrials.gov were identified and downloaded for sampling: (a) an interventional study, (b) adults 18 to 64 years of age, (c) studies beginning after January 1, 2019 (coinciding with 2018 Final Rule), and (d) inclusion of an informed consent document. This process resulted in 224 research studies.

Prior to sampling selected cases, however, and to get a sense of the types and breadth of studies contained in the ClinicalTrials.gov database initiated after the mandatory compliance date of the 2018 Final Rule, all 224 submissions were then coded into one of five broad thematic areas and tabulated: chronic illness (18%), cancer‐related illnesses (21%), acute illness/injury (29%), mental health and dependency disorders (11%), and all others (21%). Because of the potential for submissions to vary from year to year, variability in coding among raters, relative balance among categories, and the need to assure a sufficiently broad sample from which to generalize, a stratified random sampling approach was used for sample selection. Five studies from each of the aforementioned thematic areas were selected for a total of 25 studies for analysis. Two raters (authors J.J.C. and J.M.D.) reviewed and coded all 25 consent forms to estimate uConsent's coverage of the sample using content analytic methods. Prior to coding, the 19 items in the final uConsent instrument were reviewed to remove study and disease‐specific references. Text in any item that included a study or disease‐specific reference was replaced with a “fill in the blank” field, in order to allow the uConsent to be easily applied to any study or disease condition. For example, (I3) “… the likelihood that the study will help scientists understand ‘heart disease’” was updated as follows: “… the likelihood that the study will help scientists understand (insert name of disease/condition under study).” To assure a consistent scoring algorithm across forms and raters, two training sets were drawn and used: four practice consent documents to orient our raters, and five test documents to verify congruence prior to formal analysis. Inter‐rater congruence was computed to be 96% between the two raters. All data were analyzed using SAS version 9.4 and MaxQDA version 2022.0.0.
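The stratified draw and the inter-rater congruence check described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the five thematic strata and the 5-per-stratum draw come from the text, but the pool of studies and the rater codes below are hypothetical, and the authors' actual tooling (SAS, MaxQDA) is not reproduced.

```python
import random

def stratified_sample(studies, per_stratum, seed=0):
    """Draw a fixed number of studies at random from each thematic stratum."""
    rng = random.Random(seed)
    sample = []
    for theme in sorted({s["theme"] for s in studies}):
        pool = [s for s in studies if s["theme"] == theme]
        sample.extend(rng.sample(pool, per_stratum))  # without replacement
    return sample

def percent_agreement(codes_a, codes_b):
    """Simple inter-rater congruence: percentage of matching codes."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)

# Hypothetical pool of 224 eligible studies spread across the five themes.
themes = ["chronic", "cancer", "acute", "mental health", "other"]
pool = [{"id": i, "theme": themes[i % 5]} for i in range(224)]
selected = stratified_sample(pool, per_stratum=5)  # 5 x 5 = 25 studies
```

Percent agreement is the simplest congruence statistic; a chance-corrected coefficient such as Cohen's kappa would be a natural alternative when category base rates are skewed.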

RESULTS

The sample of teens/young adults averaged 21.5 ± 1.6 (mean ± SD) years of age, ranging from 18.1 to 24.3 years. Thirteen (12%) participants reported a 12th grade education or less, whereas 61 (56%) were currently in or had completed some college/technical school; 33 (30%) had a college degree, and two (2%) had a post‐graduate degree. Nearly two‐thirds (62%) reported an annual income of less than $50 k/year (6% reported more than $100 k/year). The sample was largely female (78%) and 91% White, 3% Black, 3% Native American/Asian, and 3% more than one race (95% non‐Hispanic; see Table 1).

TABLE 1.

The uConsent participant demographics (N = 109).

Variable f (%) n Mean (SD) Minimum, maximum
Participant age 104 21.5 (1.6) 18.1, 24.3
uConsent total score 109 31.0 (4.1) 19.0, 40.0
QuIC 109 60.2 (7.7) 31.0, 70.0
Participant's education
Less than high school graduate 3 (2.8)
High school graduate 10 (9.2)
Partial college/technical school 61 (56.0)
College graduate 33 (30.3)
Post graduate degree 2 (1.8)
Ethnicity
Hispanic/Latino 4 (3.7)
Not Hispanic/Latino 104 (95.4)
Prefer not to answer 1 (0.9)
Annual income
Less than $25k 41 (38.0)
$25k to $49.9k 26 (24.1)
$50k to $74.9k 11 (10.2)
$75k to $99.9k 4 (3.7)
$100k to $149.9k 2 (1.8)
$150k or more 5 (4.6)
Prefer not to answer 19 (17.6)
Race
American Indian/Alaskan Native 1 (0.9)
Asian 2 (1.8)
Black/African American 3 (2.8)
More than one race 3 (2.8)
White 99 (90.8)
Prefer not to answer 1 (0.9)
Sex
Female 85 (78.0)
Male 23 (21.1)
Prefer not to answer 1 (0.9)

Abbreviation: QuIC, Quality of Informed Consent.

Measure development

Criteria for inclusion in the 19‐item uConsent scale included: (a) range of low (easy) to high (hard) Rasch logistic values to account for a range of ability levels; (b) well‐fitting items with respect to scores from the Rasch model (±2σ); (c) range of point‐measure (item‐total) correlations to reflect a combination of both unique and shared variance; (d) unimodal, monotonic, and ordered response categories; (e) sufficient number of responses per category for modeling and analysis; (f) inclusion of multiple item types (true/false [T/F], multiple choice, and short answer); (g) representation across Bloom's taxonomy and required elements of informed consent; and (h) coefficient alpha at or above α Coef = 0.60. Having items that spanned Bloom's five categories of assessment was deemed essential to reflect increasingly deeper levels of understanding of the content covered by the various uConsent items.
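Criterion (h) above, the coefficient alpha threshold, can be sketched directly from its definition: alpha compares the sum of the item variances with the variance of the total score. The small score matrix below is hypothetical, not the study's data.

```python
# Cronbach's coefficient alpha from a participants-by-items score matrix:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))

def coefficient_alpha(scores):
    """scores: list of participants, each a list of k item scores."""
    k = len(scores[0])
    n = len(scores)

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 4-participant, 2-item matrix; a real run would use the
# 109-participant, 19-item uConsent data.
scores = [[1, 0], [0, 1], [2, 2], [1, 1]]
alpha = coefficient_alpha(scores)
meets_criterion = alpha >= 0.60  # inclusion criterion (h)
```

When every participant answers all items identically up to a constant shift, the items are perfectly consistent and alpha reaches 1.0.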

Of the 44 items making up the experimental item set, 19 were selected for the final scale. The uConsent scale had an overall mean of 31.0 (SD = 4.1), with scores ranging from a low of 19 to a high of 40 with a coefficient alpha of α Coef = 0.60 across all 19 items. Measures of both skewness (γ 3 = −0.40) and kurtosis (γ 4 = −0.03) demonstrated a well‐behaved distribution amenable for parametric analysis. Item level means, standard deviations (SDs), and medians (ranges) are provided in Table 2.

TABLE 2.

The uConsent item‐level statistics by level of difficulty/endorsability (N = 109).

Item description (N) JMLE difficulty (SE) Descriptive statistics a Fit statistics Point‐measure correlation Item type Regulatory requirement Bloom's taxonomy
Mean (SD) Mdn (Rng) InFit OutFit
Specific purpose (I5) 3.10 (0.43) 1.06 (0.23) 1 (1–2) 0.22 −0.07 0.12 Binary Benefits Recall (C)
Study purpose (I1) 1.90 (0.27) 1.16 (0.36) 1 (1–2) 0.06 −0.21 0.24 Binary Purpose Recall (F)
Learn something (I6) 1.57 (0.25) 1.20 (0.40) 1 (1–2) 0.08 −0.21 0.26 Binary Benefits Understanding (C)
Main goal of study (I3) 0.84 (0.21) 1.33 (0.47) 1 (1–2) 0.76 0.83 0.21 Binary Purpose Recall (C)
Privacy (I18) 0.81 (0.14) 0.61 (0.77) 0 (0–2) 0.81 1.23 0.35 Binary Confidentiality Analysis/Synthesis (F)
Specific purpose (I19) 0.69 (0.16) 0.77 (0.65) 1 (0–2) −0.19 −0.33 0.42 Short Answer Purpose Evaluation (F)
Associated risks (I9) 0.46 (0.20) 2.91 (0.50) 3 (2–4) 0.59 0.42 0.21 Multiple Choice Risks Application (C)
Identification (I11) 0.44 (0.12) 1.73 (0.97) 1 (1–3) −0.01 0.72 0.48 Multiple Choice Confidentiality Application (C)
Purpose of IRB (I17) 0.20 (0.20) 0.47 (0.50) 0 (0–1) −2.51 −2.45 0.50 Short Answer Who to Contact Understanding (C)
Risk description (I13) 0.10 (0.14) 2.68 (0.79) 3 (1–4) 0.28 −0.08 0.44 Multiple Choice Risks Application (F)
Study procedures (I2) 0.08 (0.20) 1.50 (0.50) 1 (1–2) −2.98 −2.73 0.53 Binary Purpose Understanding (C)
Confidentiality (I15) −0.06 (0.15) 1.05 (0.71) 1 (0–2) −0.60 −0.68 0.48 Short Answer Confidentiality Evaluation (C)
Share information (I10) −0.19 (0.13) 2.47 (0.85) 2 (1–4) 0.58 0.92 0.40 Multiple Choice Re‐Identification Analysis/Synthesis (C)
Serious risks (I14) −0.20 (0.15) 1.11 (0.72) 1 (0–2) −0.13 –0.14 0.43 Short Answer Risks Evaluation (C)
Volunteering (I16) −1.12 (0.16) 1.55 (0.67) 2 (0–2) −0.52 −0.36 0.44 Short Answer Voluntariness Analysis/Synthesis (C)
Withdrawal (I12) −1.15 (0.13) 3.70 (0.91) 4 (1–4) 0.89 0.84 0.41 Multiple Choice Who to Contact Understanding (F)
Participation (I4) −1.88 (0.28) 1.85 (0.36) 2 (1–2) 0.22 0.44 0.20 Binary Voluntariness Evaluation (C)
Identifiability (I8) −2.57 (0.36) 1.92 (0.28) 2 (1–2) 0.33 0.11 0.13 Binary Re‐Identification Understanding (F)
Legal protection (I7) −3.02 (0.43) 1.94 (0.23) 2 (1–2) 0.18 −0.12 0.18 Binary Confidentiality Application (F)

Note: N = 19 items, with each item number listed immediately after a brief item descriptor (In).

Abbreviations: C, conceptual; F, factual; JMLE, joint maximum likelihood estimation; IRB, institutional review board; Mdn, median; Rng, range; SE, Standard Error.

a Empirically derived z‐scores.

The 19 items ranged in difficulty/endorsability from a low of −3.02 (I7, Is the hospital legally required to protect your health information?) to a high of 3.10 (I5, Information from this study could help scientists better understand the healthcare needs of children, teens, and young adults with heart disease; see Table 2). Relatively difficult items included: “The purpose of this study is to help scientists better understand the causes and potential treatments for…” (I1, 1.90) and “You could learn something new about your own health…” (I6, 1.57). Simpler questions included items such as: “You may participate in this study without fully understanding it” (I4, −1.88) and “If you wish to withdraw from this study, whom would you contact?” (I12, −1.15). Items falling in the center of the distribution suggest a balanced likelihood of endorsing a specific response for the average participant; these included items addressing confidentiality (I15, −0.06), study procedures (I2, 0.08), and sharing of information with others (I10, −0.19). See Table 2 for a more complete list of Rasch difficulty/endorsability values.

Point‐measure (item‐total) correlations using Rasch logit scores ranged from ρ = 0.12 for specific purpose (I5) to ρ = 0.50 for the purpose of the IRB (I17). Two items, purpose of the IRB (I17) and description of study procedures (I2), yielded larger fit statistics, yet the fit statistics did not deviate markedly from the standard ±2σ limit; hence, these items were retained for scale coherence. With respect to criterion‐related validity, the uConsent total score correlated r S = −0.10 (p = 0.29) with the QuIC score (mean = 60.2, SD = 7.7), suggesting no statistically significant relationship between the two measures. Pairwise correlations among the uConsent items ranged from a low of r S = 0.00 (I1, I19) to a high of r S = 0.45 (I1, I3), suggesting items with unique as well as shared variance.

Generalizability for clinical research

Twenty‐five informed consent documents were randomly selected from the global ClinicalTrials.gov database, reviewed, and evaluated to determine how well the consent documents mapped onto uConsent's 19 items. The sample of documents drawn represented a broad range of studies in terms of sample size (N = 14 to 2500, median = 50 subjects) and study duration (1.5 weeks to 5 years, median = 16 weeks). There was also a wide range of sponsors including academic (e.g., University of Alabama at Birmingham, University of Minnesota, and University of Pennsylvania), federal (e.g., National Institute of Allergy and Infectious Diseases, and the National Cancer Institute), industry (e.g., Amgen, Sanofi, and Cellular Sciences), charitable foundation (e.g., Andrews Research Foundation, Aultman Health Foundation, and Gates Foundation), and health department (Hennepin County, MN) organizations, as well as many representing a combination of sponsors (e.g., Vanderbilt University Medical Center and AbbVie; Cedars‐Sinai Hospital and Alexion; and University of Utah and Juvenile Diabetes Research Foundation).

All 25 studies were interventional in nature with 15 of the 25 (60%) reporting randomization of subjects and/or treatments to subjects. Not surprisingly, the sample represented a broad range of clinical conditions, from chronic (e.g., Alzheimer's disease, bladder control, and heart failure) to cancer (e.g., breast, multiple myeloma, and gastric), to acute infection (e.g., coronavirus disease 2019 (COVID‐19) booster efficacy, intervention, and long COVID), to mental health (e.g., depression in bipolar disorder, opioid use disorder, and post‐traumatic stress disorder), and other (e.g., pre‐eclampsia in pregnancy, knee osteoarthritis, and T1 diabetes).

The median coverage rate for the intact set of 19 items was 95% across forms and ranged from a low of 68% to a high of 100%. The median coverage rate for an item when examined across all 25 consent documents was 100% and ranged from 64% (item 8) to 100% (11 of the 25; see Table 3). Coverage across all five of the thematic research areas was comparable with only the mental health studies having modestly more variability than the others.
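The two coverage statistics reported above (within-form coverage and per-item coverage across forms) both derive from the same binary items-by-forms map. The sketch below shows that computation on a hypothetical 4-item, 5-form matrix, far smaller than the study's 19 x 25 grid in Table 3.

```python
from statistics import median

def coverage_rates(mapped):
    """mapped[i][j] = 1 if uConsent item i maps onto consent form j.
    Returns (within-form coverage %, per-item coverage % across forms)."""
    n_items, n_forms = len(mapped), len(mapped[0])
    per_form = [100.0 * sum(mapped[i][j] for i in range(n_items)) / n_items
                for j in range(n_forms)]
    per_item = [100.0 * sum(row) / n_forms for row in mapped]
    return per_form, per_item

# Hypothetical map: rows = items, columns = consent forms.
mapped = [
    [1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1],
    [1, 1, 1, 1, 0],
    [1, 0, 1, 1, 1],
]
per_form, per_item = coverage_rates(mapped)
form_median = median(per_form)  # the study reports this summary per form
```

With the study's real 19 x 25 matrix, `form_median` would correspond to the reported 95% within-form figure and `median(per_item)` to the 100% per-item figure.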

TABLE 3.

The uConsent item coverage when mapping to the 25 informed consent documents sampled from ClinicalTrials.gov (N = 25).

uConsent item Informed consent document by medical condition
Chronic Oncologic Acute Mental health Other %Cov
A B C D E F G H I J K L M N O P Q R S T U V W X Y
Purpose
1. The purpose of this study is to help scientists better understand the causes and potential treatments for people with congenital heart disease 100
2. The study team has a complete list of genetic tests that will be carried out with your samples 100
3. The main goal of this study is to help scientists better understand genetic conditions that affect children, teens, and young adults 100
19. What is the purpose of a “biorepository”? 100
Risks
9. The following risks could be associated with this study 100
13. Which of the following best describes the risks in this study? 100
14. If a risk were presented to you as ‘serious,’ what would that mean to you? ✗ ✗ ✗ ✗ ✗ ✗ ✗ 72
Benefits
5. Information from this study could help scientists better understand the healthcare needs of children, teens, and young adults with heart disease ✗ ✗ ✗ ✗ ✗ 80
6. You could learn something new about your own health from the genetic test results from this study 100
Confidentiality
7. The hospital is not required by law to protect your health information ✗ ✗ 92
11. Is it possible that someone could identify you from your participation in the study? 100
15. What does “loss of confidentiality” mean? ✗ 96
18. What is the difference between “privacy” and “confidentiality”? 100
Who to contact
12. If you wish to withdraw from this study, whom would you contact? 100
17. What is the purpose of an Institutional Review Board? ✗ ✗ ✗ ✗ 84
Voluntariness
4. You may participate in this study without fully understanding its risks or purpose ✗ ✗ 92
16. What does it mean to “volunteer” for a research study? 100
Re‐identification in future research
8. It is possible that the study team will include your first name or your date of birth in reports or publications that come from this study ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗ 64
10. What is the primary reason we cannot share results of this study with you? ✗ 96
Within‐form coverage 95 100 100 84 100 100 100 95 84 89 95 100 100 84 95 95 95 68 89 100 95 84 95 95 100

Note: A–Y denotes 25 consent forms from ClinicalTrials.gov; an item does (•) or does not (✗) map to a form; %cov = item coverage across forms (median 100%); uConsent within‐form coverage = 95%.

DISCUSSION

Informed consent remains a required, but poorly understood component of the clinical research process. The process and the documents that are used in support of it are overly formalistic, unnecessarily complex, and designed to protect institutions more so than the participants they were intended to serve. 1 , 2 , 3 , 4 Consenting practices used today are flawed and in desperate need of revision, 5 , 6 , 7 , 8 , 9 with many research participants failing to read entire portions of a consent document, even for the most serious of studies. 15 , 16 Yet, the consent process continues to be a cornerstone of ethical research involving human subjects (2018 Final Rule, Ss 46.102(b), 46.116(h)).

Notable features of the new uConsent scale include: an item set that covers seven of the nine required basic elements of informed consent, a sixth grade reading level, an item set based on educational and psychological theory and practice (items ranging from easy to hard, varying item types to offset guessing/response patterns, and presence of a cohesive scale), and, importantly, items that may be broadly applied to most clinical research studies.

The fact that the uConsent scale has the aforementioned characteristics puts it ahead of other approaches currently in use. As multiple systematic reviews of the literature and the CTTI summary report have indicated, there is currently no gold standard when it comes to scientifically acceptable approaches to assessing one's understanding of informed consent. 5 , 6 , 7 , 9 Current practices for evaluating understanding among study participants rest on ad hoc approaches by staff not trained in assessing understanding, whether in general or within specific segments of the population. As noted previously, questions and questionnaires have been among the historical ad hoc approaches used to estimate a research participant's level of understanding, but with varying degrees of success. 5 , 7 , 9 , 13 , 14 , 23 With real‐time scoring, a further benefit of the uConsent scale is that research staff will have immediate access to item‐level information regarding areas of weakness within a consent document, and specific information regarding areas of regulatory importance with which a person may be having trouble.

With additional information comes an added responsibility to act. The uConsent scale is not designed to be a one‐and‐done measure of understanding, nor is it intended to establish an absolute pass/fail threshold to allow participation in a research study. Rather, the uConsent scale is intended to highlight opportunities for additional and, if needed, in‐depth dialog between a research participant and research staff by identifying areas of weakness in a person's level of understanding. Implicit in this dialog is a deep and unwavering commitment to a research participant's autonomy in the consent process—by ensuring that each participant understands what is expected of them and what risks they are undertaking by volunteering to be a part of the research process. However, even with the inclusion of better instruments like the uConsent scale, and more informative dialog, there will undoubtedly be times when an individual's responses indicate that they simply do not understand the required components of the research, irrespective of the amount of time spent covering additional information. This is not a weakness of the scale or the process, but exactly what the instrument is designed to do. Having the ability to distinguish potential participants who understand from those who do not understand is important to the research process and very much in keeping with the tenets of informed consent.

As such, an individual's results on the uConsent scale are not intended to be used in a vacuum by the research team in a consenting process. However, there are likely to be times when the uConsent scale suggests that an individual, or group of individuals, is not achieving a sufficient level of understanding of the required components of a given study. This is important information that should be used to inform the study team and perhaps even influence the conduct of the study. Specifically, such results may suggest isolated or systematic deficiencies in the consenting process or documents. Such results may also suggest a need to improve the expertise of those charged with obtaining consent or that a particular study population is in need of additional protections in the research process. Finally, although controversial, it is also important to note that participation in clinical research should not be considered analogous to receiving clinical/medical care and therefore it is an acceptable part of the clinical research process that not every study is right for every person. 24 Such information is not only reasonable but also justifiable in light of the appropriate ethical and regulatory burden placed on the clinical research community to obtain true “informed” consent.

To date, attempts at evaluating participants' understanding of informed consent have given only passing attention to scale development and the rigorous application of learning theory needed for this important prerequisite of a clinical research study. Strategies thus far have generally relied upon a single subjective question such as "Do you understand…?" or "Do you have any questions about…?" or upon coordinators asking potential participants to restate what they have heard (cf. the Teach‐Back method). Although good in principle, these strategies are often ad hoc, with evaluation left to staff who are generally not trained in assessment or in the acquisition of complex material. Finally, self‐generated quizzes are frequently used; these may appear to represent good instructional practice but are likely to lack any basis in assessment‐related theory or practice. Such quizzes most often emphasize recall of facts rather than the more difficult and discriminating components of a consenting process, often lack systematic coverage of key regulatory components, and are not likely to be targeted to the needs of a specific population. Quizzes created by research staff are therefore not likely to deliver on their intended purpose: to distinguish those who "understand" from those who do not.

Of the nine basic elements of informed consent stipulated by the 2018 Final Rule, three often receive greater attention than others: what the study is about, what risks participants are taking on by participating in a study, and how well their personal information is being protected. And so it is with the uConsent scale. Of the 19 items on the scale, 11 are devoted to these three major areas. Care was taken to make certain that the three areas had multiple items devoted to them, that they were spread across Bloom's five categories, and that a range of item types was used (Yes/No, multiple choice, and short answer; see Table 2). Importantly, even the facts that are requested are not stand‐alone facts for purposes of recall, but rather opportunities for knowledge recall tied to the key ideas of the consent.

Finally, studies can be internally valid but lack generalizability to real‐world conditions. For this reason, it was important to demonstrate the extent to which the uConsent scale would map onto other informed consent forms, and, in so doing, offer the scientific community a scale that could be used with other research studies. Results of our aim 3 analyses suggest that a generalized version of the uConsent scale mapped, on average, onto 95% of the 25 randomly selected studies and each item mapped, on average, onto 100% of the forms drawn from ClinicalTrials.gov.
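The two coverage metrics described above can be read as the two marginal summaries of a binary item‐by‐form mapping matrix: per‐form coverage (what fraction of the scale's items a given consent form addresses) and per‐item coverage (what fraction of the forms a given item maps onto). The sketch below illustrates this distinction with a hypothetical matrix; the data, variable names, and coverage values are purely illustrative and are not the study's actual mapping results.

```python
from statistics import median

# Hypothetical binary mapping: mapping[i][j] = 1 if the content of
# uConsent item i appears in consent form j. Illustrative data only.
n_items, n_forms = 19, 25
mapping = [[0 if (i * n_forms + j) % 12 == 0 else 1 for j in range(n_forms)]
           for i in range(n_items)]

# Per-form coverage: fraction of the 19 items each form covers
# (analogous to "the scale mapped onto X% of a given study").
form_coverage = [sum(mapping[i][j] for i in range(n_items)) / n_items
                 for j in range(n_forms)]

# Per-item coverage: fraction of the 25 forms each item maps onto
# (analogous to "each item mapped onto X% of the forms").
item_coverage = [sum(row) / n_forms for row in mapping]

print(f"median form coverage: {median(form_coverage):.0%}")
print(f"median item coverage: {median(item_coverage):.0%}")
```

Summarizing the same matrix along both axes is what allows a scale-level statement ("the scale covered a median of 95% of each form") and an item-level statement ("most items mapped onto all forms") to coexist, as in the aim 3 results.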

As positive as these metrics are, there were four items for which mapping onto content was more variable: I14 (meaning of severe risk), I5 (potential of information from this study to help scientists), I17 (purpose of the IRB), and I8 (inclusion of identifying information in publications). One interpretation of the lower coverage rates is that these four items address concepts that are not required basic elements of informed consent under the federal regulations; such concepts would not be expected to appear in every consent form, and the related uConsent items would therefore not necessarily map onto them.

Closer examination of the aforementioned items suggests that they reflect an important depth of understanding for a truly "informed" consenting process. The goal of the uConsent scale and the evaluation strategy used here is not simply to measure a participant's ability to restate facts from a consent form but, rather, to rigorously measure a research participant's understanding of both the required elements of informed consent and the concepts deemed critical to making a truly informed decision about participation in a study. To achieve this goal, in the spirit of the 2018 Final Rule's mandate that informed consent facilitate understanding of a given study, the uConsent scale was designed to measure "understanding" across a range of concepts, item types, difficulty levels, and regulatory requirements using rigorous measurement and scale development methods.

Although the current study has a number of obvious strengths, it also has some notable limitations. First, and most notably, the study was conducted at a single medical center; the COVID‐19 pandemic completely disrupted recruiting and forced a re‐evaluation of the original design. It is possible that comparable studies conducted in other settings would yield different results. Second, the sample was primarily female, White, and pursuing higher education, reflective of the hospital and university setting in which the study was conducted. Samples with different profiles may yield different results. Third, no single instrument, no matter how strong or well‐validated, should ever be the sole determinant of a person's access to a given research study. Rather, instruments such as the one studied here should be one piece of information to help guide meaningful discussions regarding participation in a study. Finally, the use of a hypothetical study for aim 2, although intended to represent real‐world conditions, will never carry the same weight for participants as an actual decision by young adults about whether to enroll in a research study.

Several recommendations are offered to move this area of human subjects research forward. First, investigators are encouraged to continue finding ways to strengthen and streamline the measurement of understanding within the clinical research process through better questions, techniques, and strategies that generalize across populations differing in age, culture, nationality, and education. Second, researchers are encouraged to consider prioritizing the required regulatory components to identify those viewed as most important, valuable, and informative to participants, thereby streamlining the assessment process in alignment with the "Key Information" concept of the Revised Common Rule. 10 Finally, investigators should continue striving to integrate assessment questions seamlessly into the consent process, so that the focus of evaluation remains on what clinical research participants know, what they need to know, and how best to facilitate their participation in the research process.

AUTHOR CONTRIBUTIONS

R.F.I., J.J.C., and J.M.D. wrote the manuscript. R.F.I., J.J.C., J.M.D., G.H., J.W.G., N.B.B., and M.T.H. designed the research. R.F.I., J.J.C., J.M.D., G.H., J.W.G., N.B.B., and M.T.H. performed the research. R.F.I., G.H., and J.M.D. analyzed the data.

FUNDING INFORMATION

This project was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health (NIH), under Award Number UL1TR001425. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

CONFLICT OF INTEREST STATEMENT

The authors declared no competing interests for this work.

Supporting information

Appendix S1

Appendix S2

ACKNOWLEDGMENTS

The authors offer a special note of thanks to Ms. Laura Fosset for her invaluable help with the ClinicalTrials.gov database and to Ms. Christine Alvarez for her help and guidance with the REDCap database. We also wish to thank Dr. Maurizio Macaluso for his insight and inspirational suggestion regarding the direction of Aim 3. [Correction added on 21 October 2023, after first online publication: The misspelling name of Christina has been corrected to Christine.]

Ittenbach RF, Gaynor JW, Dorich JM, et al. uConsent: Addressing the gap in measuring understanding of informed consent in clinical research. Clin Transl Sci. 2023;16:2530‐2542. doi: 10.1111/cts.13645

REFERENCES

  • 1. Dickert NW, Eyal N, Goldkind SF, et al. Reframing consent for clinical research: a function‐based approach. Am J Bioeth. 2017;17(12):3‐11.
  • 2. Hallinan ZP, Forrest A, Uhlenbrauck G, Young S, McKinney R Jr. Barriers to change in the informed consent process: a systematic literature review. IRB: Ethics & Human Research. 2016;38(3):1‐10.
  • 3. Lidz CW. Informed consent: a critical part of modern medical research. Am J Med Sci. 2011;342(4):273‐275.
  • 4. Grady C. Enduring and emerging challenges of informed consent. N Engl J Med. 2015;372(9):855‐862.
  • 5. Flory J, Emanuel E. Interventions to improve research participants' understanding in informed consent for research: a systematic literature review. JAMA. 2004;292(13):1593‐1601.
  • 6. Lorell BH, Mikita JS, Anderson A, Hallinan ZP, Forrest A. Informed consent in clinical research: consensus recommendations for reform identified by an expert interview panel. Clin Trials. 2015;12(6):692‐695.
  • 7. Nishimura A, Carey J, Erwin PJ, Tilburt JC, Murad MH, McCormick JB. Improving understanding in the research informed consent process: a systematic review of 54 interventions tested in randomized control trials. BMC Med Ethics. 2013;14(1):1‐15.
  • 8. US Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Office of Good Clinical Practice, Center for Biologics Evaluation and Research, Center for Devices and Radiological Health. Use of Electronic Informed Consent in Clinical Investigations: Questions and Answers; Guidance for Industry (Draft Guidance). Silver Spring, MD: Center for Drug Evaluation and Research; 2015. Accessed September 28, 2023. http://www.fda.gov/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/default.htm
  • 9. Montalvo W, Larson E. Participant comprehension of research for which they volunteer: a systematic review. J Nurs Scholarsh. 2014;46(6):423‐431.
  • 10. US Department of Health and Human Services. Federal Policy for the Protection of Human Subjects ('Common Rule'). Fed Regist. 2017;82(12):7149‐7274.
  • 11. Gillies K, Duthie A, Cotton S, Campbell MK. Patient reported measures of informed consent for clinical trials: a systematic review. PLoS One. 2018;13(6):e0199775.
  • 12. Tamariz L, Palacio A, Robert M, Marcus EN. Improving the informed consent process for research subjects with low literacy: a systematic review. J Gen Intern Med. 2013;28(1):121‐126.
  • 13. Glaser J, Nouri S, Fernandez A, et al. Interventions to improve patient comprehension in informed consent for medical and surgical procedures: an updated systematic review. Med Decis Making. 2020;40(2):119‐143.
  • 14. Tam NT, Huy NT, Thoa LTB, et al. Participants' understanding of informed consent in clinical trials over three decades: systematic review and meta‐analysis. Bull World Health Organ. 2015;93:186‐198H.
  • 15. Ittenbach RF, Senft EC, Huang G, Corsmo JJ, Sieber JE. Readability and understanding of informed consent among participants with low incomes: a preliminary report. J Empir Res Hum Res Ethics. 2015;10(5):444‐448.
  • 16. Sharp SM. Consent documents for oncology trials: does anybody read these things? Am J Clin Oncol. 2004;27(6):570‐575.
  • 17. Tait AR, Voepel‐Lewis T, Malviya S. Do they understand (part I)?: parental consent for children participating in clinical anesthesia and surgery research. Anesthesiology. 2003;98(3):603‐608.
  • 18. Ondrusek N, Abramovitch R, Pencharz P, Koren G. Empirical examination of the ability of children to consent to clinical research. J Med Ethics. 1998;24(3):158‐165.
  • 19. Unguru Y, Sill AM, Kamani N. The experiences of children enrolled in pediatric oncology research: implications for assent. Pediatrics. 2010;125(4):e876‐e883.
  • 20. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. ERIC Clearinghouse; 1979.
  • 21. Kripalani S, Bengtzen R, Henderson LE, Jacobson TA. Clinical research in low‐literacy populations: using teach‐back to assess comprehension of informed consent and privacy information. IRB. 2008;30(2):13‐19.
  • 22. Lorenzen B, Melby CE, Earles B. Using principles of health literacy to enhance the informed consent process. AORN J. 2008;88(1):23‐29.
  • 23. Bankert EA. Assessing informed consent communication. In: Bankert EA, Gordon BG, Hurley EA, Shriver SP, eds. Institutional Review Board: Management and Function (3/e). Jones & Bartlett Learning; 2022:317‐322.
  • 24. Grady C. Improving informed consent. In: Bankert EA, Gordon BG, Hurley EA, Shriver SP, eds. Institutional Review Board: Management and Function (3/e). Jones & Bartlett Learning; 2022.
  • 25. Ittenbach RF, Corsmo JJ, Miller RV, Korbee LL. Older teens' understanding and perceptions of risks in studies with genetic testing: a pilot study. AJOB Empir Bioeth. 2019;10(3):173‐181.
  • 26. Simera I, Altman DG, Moher D, Schulz KF, Hoey J. The EQUATOR network: facilitating transparent and accurate reporting of health research. Serials: J Ser Commun. 2008;21:183‐187.
  • 27. American Educational Research Association (AERA), American Psychological Association (APA), National Council on Measurement in Education (NCME). Standards for Educational and Psychological Testing. American Educational Research Association; 2014.
  • 28. Adams NE. Bloom's taxonomy of cognitive learning objectives. J Med Libr Assoc. 2015;103(3):152‐153.
  • 29. Anderson LW, Krathwohl DR, Bloom BS. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. Allyn & Bacon; 2001.
  • 30. Bloom BS, Committee of College and University Examiners. Taxonomy of Educational Objectives. Longmans, Green; 1964.
  • 31. Krathwohl DR. A revision of Bloom's taxonomy: an overview. Theory Pract. 2002;41(4):212‐218.
  • 32. Armstrong P. Bloom's Taxonomy. Vanderbilt University Center for Teaching; 2016.
  • 33. Joffe S, Cook EF, Cleary PD, Clark JW, Weeks JC. Quality of Informed Consent: a new measure of understanding among research subjects. J Natl Cancer Inst. 2001;93(2):139‐147.
  • 34. Boone WJ, Staver JR, Yale MS. Rasch Analysis in the Human Sciences. Springer; 2013.
  • 35. Bond TG, Fox CM. Applying the Rasch Model: Fundamental Measurement in the Human Sciences. Psychology Press; 2013.
  • 36. Masters GN. Partial credit model. In: van der Linden WJ, ed. Handbook of Item Response Theory (Vol 1): Models. Chapman and Hall/CRC; 2016:109‐126.
  • 37. Graneheim UH, Lindgren B‐M, Lundman B. Methodological challenges in qualitative content analysis: a discussion paper. Nurse Educ Today. 2017;56:29‐34.
  • 38. Miller FA, Alvarado K. Incorporating documents into qualitative nursing research. J Nurs Scholarsh. 2005;37(4):348‐353.
