Author manuscript; available in PMC: 2022 Feb 1.
Published in final edited form as: J Empir Res Hum Res Ethics. 2021 Aug 19;17(1-2):177–192. doi: 10.1177/15562646211037546

Understanding the use of optimal formatting and plain language when presenting key information in clinical trials

Erin D Solomon 1, Jessica Mozersky 1, Matthew Wroblewski 1, Kari Baldwin 1, Meredith Parsons 1, Melody Goodman 2, James M DuBois 1
PMCID: PMC8712347  NIHMSID: NIHMS1726503  PMID: 34410175

Abstract

Recent revisions to the Common Rule require that consent documents begin with a focused presentation of the study’s key information, organized to facilitate understanding. We surveyed 1284 researchers working with older adults or individuals with Alzheimer’s disease, supplemented with 60 qualitative interviews, to understand current use of, and barriers to using, evidence-based formatting and plain language in key information sections. Researchers reported using formatting in 42% of their key information sections and plain language in 63%. Perceived barriers included lack of knowledge, the IRB, other research team members, and the burden associated with implementation. Education and training are needed to increase adoption of these practices.

Keywords: informed consent, key information, formatting, plain language, evidence-based health communication, research ethics, implementation science


Informed consent is fundamental to ensuring participant autonomy and respect for individual choices about research participation (Faden & Beauchamp, 1986). However, consent forms are often long, complex, and difficult for research participants to comprehend (Beardsley, Jefford, & Mileshkin, 2007; M. Jefford et al., 2005; Joffe, Cook, Cleary, Clark, & Weeks, 2001; Montalvo & Larson, 2014). Consent form templates have consistently been shown to exceed their own suggested readability standards (Foe & Larson, 2016; Larson, Foe, & Lally, 2015). For example, one study found that consent forms at major U.S. medical schools exceeded their own readability standards by an average of 2.2 grade levels (Paasche-Orlow et al., 2013). Compounding this problem is that up to 43% of U.S. adults have basic or below basic literacy skills (Berkman, Sheridan, Donahue, Halpern, & Crotty, 2011; Kutner, Greenberg, & Baer, 2005). Writing in a manner that facilitates understanding is particularly important for consent forms, as they are key tools used in obtaining informed consent from research participants. Using a quantitative survey and qualitative interviews, the current research sought to establish how frequently researchers report using evidence-based health communication practices in consent documents, what predicts the use of these practices, and barriers to implementing the practices.

Background

Recent revisions to the Federal Policy for the Protection of Human Subjects, or Common Rule, require changes to consent documents to make them more understandable (Department of Health and Human Services, 2018). Informed consent documents must now begin with a “concise and focused presentation” of the key information a participant would require to make a decision regarding participation (Department of Health and Human Services, 2018). The key information should be organized “in a way that facilitates comprehension” (Department of Health and Human Services, 2018). However, the regulations do not specify how this is to be accomplished, although official guidance from the Secretary’s Advisory Committee on Human Research Protections is forthcoming (2018).

Additionally, the recent NIH Inclusion Across the Lifespan policy now mandates that older adults (i.e., anyone over the age of 65) be included in research unless there is a scientific or ethical reason to exclude them (National Institutes of Health, 2019). This policy change addresses the longstanding exclusion of older adults from clinical research, driven in part by their higher risk of cognitive impairments and the resulting challenges with informed consent (Plassman et al., 2008; Prusaczyk, Cherney, Carpenter, & DuBois, 2017). As more older adults are included in research, it will become even more important to implement consent practices that facilitate comprehension and enable research participation.

Several health communication practices have been shown to improve understanding and appreciation of consent documents among research participants (Agre et al., 2003; Flory & Emanuel, 2004; Holmes-Rovner et al., 2005; Iltis et al., 2013; Michael Jefford & Moore, 2008; Kim & Kim, 2015; Nishimura et al., 2013; Rubright et al., 2010). Two such practices include optimizing document formatting and using plain language.

Formatting for increased understanding considers the structure and design of the document, and its intended purpose (Plain Language Association International, 2008; The Plain Language Action and Information Network, 2011). This often includes maximizing white space (i.e., the area of a page without text or images), as well as using bullet points, headings, and large font (Plain Language Association International, 2008; The Plain Language Action and Information Network).

Writing with plain language involves using simple words and phrases, and keeping sentences and paragraphs short (Directorate-General for Translation, 2011; Michael Jefford & Moore, 2008; Kim & Kim, 2015; Plain Language Association International, 2008; The Plain Language Action and Information Network, 2011). Plain language produces documents in which the intended reader can easily find what they need and use the information (Plain Language Association International, 2008). Writers should consider the audience and purpose of the document, and choose the simplest word(s) that convey the intended meaning to that audience (Directorate-General for Translation, 2011; Plain Language Association International, 2008; The Plain Language Action and Information Network, 2011). Technical jargon should be avoided, or explained in simpler terms (Directorate-General for Translation, 2011; Plain Language Association International, 2008).

Using formatting and plain language is preferred by participants, and has been shown to improve understanding (Holmes-Rovner et al., 2005; Kim & Kim, 2015). The practices are particularly helpful for participants with low reading comprehension levels (Campbell, Goldman, Boccia, & Skinner, 2004; Davis, Berkel, Holcombe, Pramanik, & Divers, 1998). For example, for individuals with reading comprehension skills below the 8th grade level, consent documents using simpler language, headings, and graphics were shown to be more effective at improving understanding when compared to a standard consent form (Campbell et al., 2004).

Evidence suggests formatting and plain language are not widely used (Paasche-Orlow et al., 2013). A recent study analyzing IRB key information templates after implementation of the revised Common Rule found that the majority (59%) contained no guidance on formatting or plain language (Mozersky, Wroblewski, Solomon, & DuBois, 2020). Regarding formatting, only 17% provided guidance on font size, 9% suggested the use of bullet points, and only 7% addressed the use of white space (Mozersky et al., 2020). Regarding plain language, 41% provided guidance and 28% suggested a specific reading level or range (Mozersky et al., 2020). Another study demonstrated that researchers opted to use simplified consent forms when the templates were provided by the IRB (Larson, Teller, Aguirre, Jackson, & Meyer, 2017), suggesting that IRB endorsement or provision of simplified templates increases researchers’ willingness to use these forms.

Given that IRBs do not appear to be providing adequate guidance on how to achieve a “concise and focused presentation” of key information that “facilitates comprehension” (Mozersky et al., 2020), and tend to provide consent form templates above their own recommended reading level (Paasche-Orlow et al., 2013), it is unknown to what extent researchers are adopting formatting and plain language practices in light of the revised regulations. Thus, this study aims to determine how widely these practices have been adopted by researchers, and what the barriers to adoption might be. That is, our study focuses on how to facilitate communication of key information, rather than what information should be communicated within the key information. We focus on research with older adults or Alzheimer’s disease patients, given that these individuals are at increased risk of cognitive impairments and have historically been excluded from general clinical trials (Prusaczyk et al., 2017; Taylor, DeMers, Vig, & Borson, 2012). NIH policy now mandates inclusion of older adults, and ensuring comprehension of informed consent is critical to this endeavor. We also focus on consent in clinical trials: because clinical trials are frequently complex and greater than minimal risk, it is essential for consent documents to clearly communicate these complexities and risks.

The Current Research

The data reported here were collected as part of a larger implementation science project (NIA R01AG058254) that aims to facilitate adoption of evidence-based informed consent practices among researchers in the U.S. The project utilizes the Consolidated Framework for Implementation Research (CFIR) to explore the barriers and facilitators of adopting evidence-based consent procedures (Damschroder et al., 2009). CFIR comprises five domains (Damschroder et al., 2009). These are 1) characteristics of individuals who are targeted to adopt new practices (Graham & Logan, 2004; Pettigrew & Whipp, 1992), 2) their outer setting (e.g., Office for Human Research Protections regulations or study sponsor rules; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Mendel, Meredith, Schoenbaum, Sherbourne, & Wells, 2008; Pettigrew & Whipp, 1992; Stetler, 2001), 3) inner setting (e.g., their local IRB policies or training opportunities; Kilbourne, Neumann, Pincus, Bauer, & Stall, 2007; Pettigrew & Whipp, 1992; Stetler, 2001), 4) intervention characteristics (e.g., whether new practices are promoted repeatedly and whether evidence is offered to support adopting them; Pettigrew & Whipp, 1992); and 5) implementation process (e.g., the processes of formatting documents or writing in plain language; Kitson, Ahmed, Harvey, Seers, & Thompson, 1996; Pettigrew & Whipp, 1992). The data from this study will inform a trial that ultimately seeks to increase adoption (i.e., implementation) of these practices. To our knowledge, this will be the first implementation science trial conducted within the domain of research ethics and informed consent.

In this phase of our study, we sought to answer three broad questions pertaining to the use of evidence-based health communication practices in the newly required key information sections.

1. How widespread is the use of evidence-based health communication strategies, such as formatting and plain language, in key information sections?

We sought to understand the current rates of adoption of these practices. We focused our data collection on key information sections because that is the part of the consent document that is clearly specified by the regulations to be organized in a way that “facilitates comprehension.”

2. What modifiable factors are associated with adoption of formatting or plain language in key information sections?

Determining what modifiable factors are associated with adoption (e.g., attitudes) will yield important information about what interventions might increase researchers’ adoption of the practices. We defined a modifiable factor as any variable that could be targeted for change in our implementation study. For example, we considered attitudes to be modifiable but work setting to be unmodifiable.

3. What are the perceived barriers to adopting formatting and plain language in key information sections?

Understanding the perceived barriers to adoption, as identified by both researchers and IRB members, will help determine the strategies we will use to increase adoption rates among researchers in the upcoming implementation trial.

Method

To investigate our research questions, we collected both quantitative survey data and qualitative interview data. Using this mixed-method approach yields a more complete picture of implementation and barriers to implementation of formatting and plain language consent practices (Creswell, 2014). We used the CFIR model to guide the development of both our survey and interviews. Specifically, of the five CFIR domains, three pertain to the current study, because they focus on barriers to adopting evidence-based practices: characteristics of individuals (e.g., attitudes, confidence), outer setting (e.g., sponsors, national policies), and inner setting (e.g., IRB requirements).1 This research was approved by the Washington University in St. Louis IRB (#201807033 and 201909154).

Quantitative Survey

Survey development.

The data reported here was part of a larger survey focused on evidence-based informed consent practices, of which formatting and plain language were two of the practices included in the survey. Item writing was conducted by a team of PhD-level experts in the fields of research ethics, bioethics, and survey design (e.g., DuBois & Antes, 2018; DuBois, Chibnall, & Gibbs, 2015; DuBois, Chibnall, Tait, et al., 2015; English, Antes, Baldwin, & DuBois, 2017). The research team drafted and iteratively revised the first pool of survey items as a group, with expert input from Principal Investigators (PIs), Clinical Research Coordinators (CRCs), and IRB members. We modified some items to create a PI version and CRC version where relevant (for instance, PIs were asked “How many clinical trials did you conduct over the past 12 months where you were PI?” whereas CRCs were asked “How many clinical trials did you support over the past 12 months?”). We also made use of skip logic, to ensure participants only viewed the version of the questions relevant to their role as a CRC or a PI, and to limit survey burden whenever possible.

We conducted cognitive interviews to examine survey items for clarity and face validity with individuals with expertise in informed consent regulations, conducting consent procedures, and/or designing consent protocols (N = 8). Interviewees reviewed all survey items. They were asked if any items were unclear or would be difficult to answer, and to suggest a rephrasing of any items they found problematic. Items were revised to improve clarity and reduce overall length and burden of the survey.

Measures.

Adoption.

We presented a short description of the new key information requirement to ensure all participants were aware of its existence and to educate those who were not. We first asked participants how many consent forms they had submitted to their IRB over the past year that included the newly mandated key information section. This provided us with a denominator for calculating rates of adoption. We then asked how many of these key information sections they had personally reformatted to use at least 12-point font and increased white space, or personally rewritten in plain language. Thus, adoption was calculated by dividing the number of reformatted or rewritten key information sections by the total number of key information sections submitted to the IRB over the past year. Because the formatting practice was assessed with two items (font and white space), we first divided responses to both the font and white space items by 2 and then summed the halved scores before calculating the formatting adoption rate.
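The adoption-rate arithmetic described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual analysis code; the function and variable names are our own.

```python
def adoption_rates(submitted, font_changed, white_changed, plain_changed):
    """Per-participant adoption rates (illustrative sketch).

    submitted: key information sections submitted to the IRB in the past year
    font_changed / white_changed: sections personally reformatted for font / white space
    plain_changed: sections personally rewritten in plain language
    Returns (formatting_rate, plain_language_rate) as percentages.
    """
    if submitted == 0:
        return None, None  # no opportunity to adopt, so no rate is defined
    # The two formatting items are each halved, then combined, per the text above
    formatting = (font_changed / 2 + white_changed / 2) / submitted * 100
    plain = plain_changed / submitted * 100
    return formatting, plain

# Example: 4 sections submitted; 2 re-fonted, 3 with added white space, 3 rewritten
fmt_rate, plain_rate = adoption_rates(4, 2, 3, 3)  # → 62.5, 75.0
```

Halving the two formatting items before combining them keeps the formatting rate on the same 0–100% scale as the single-item plain language rate.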

Reasons for non-adoption.

Participants who reported not using a practice were presented with a list of options as to why they did not use the practice (e.g., “I did not think this practice was important” or “Someone else on my research team made this change”).

Change already made.

Two of the reasons for non-adoption, “the template provided already used this practice,” and, “someone else on my research team made this change,” indicated that a participant did not have an opportunity to adopt that practice because adoption had already occurred. Therefore, if a participant endorsed either of those response options, we considered them to have already made the change toward adopting the practice. For each practice, the data were coded as “1” if the change had already been made and “0” if it had not been made.

Barriers.

All participants, regardless of whether they had reported using the practice, were asked if anyone might prevent them from implementing the practices. If they responded “yes,” they were presented with a list of options (i.e., “IRB,” “sponsor,” “participants,” “research team members,” and “other”). Responses were coded as “1” if they endorsed the response option and “0” if they did not.

Confidence in resources.

We measured participants’ confidence in implementing the practices using one question, asked for each of formatting and plain language: “How confident are you that you have the resources you need to use these practices well?” (1 = “not at all confident”, 5 = “extremely confident”).

Positive attitudes.

We measured attitudes toward formatting and plain language using two questions each: “As a whole, how useful do you think such practices are in improving research participants’ understanding of consent information?” (1 = “not at all useful”, 5 = “extremely useful”) and “How interested are you in improving your use of these practices?” (1 = “not at all interested”, 5 = “extremely interested”). Positive attitude scores were calculated by summing the usefulness and interest items for each practice; scores ranged from 2 to 10. Cronbach’s alpha for the two items was .69.

Marlowe-Crowne Social Desirability Scale.

The short form of the Marlowe-Crowne Social Desirability Scale consists of 13 true/false items measuring how concerned a person is with social approval (Crowne & Marlowe, 1960). The scale is often used in survey research to account for individual differences in socially desirable responding. It was included here because this is a self-report study: researchers may want to report using evidence-based health communication practices because such practices are desirable and beneficial to their work. Participants read the 13 items concerning personal attitudes and traits and indicated whether each statement was true or false as it pertained to them (1 = “true”, 0 = “false”). An overall score, ranging from 0 to 13, was calculated by summing the 13 items. The scale has a reported KR-20 of .88 (Crowne & Marlowe, 1960); in the current sample, the KR-20 was .67.

Demographics.

Demographic questions included gender, age, race, and education. We asked additional questions about their work and the trials they worked on.

Recruitment and Procedure.

We utilized non-probability, criterion-based sampling. We targeted researchers whose participants have cognitive impairments or include older adults, as these researchers may be especially interested in improving consent understanding. We required a large sample size to fulfill the needs of our larger randomized implementation science trial project, and conducted a power analysis to determine the sample size. We targeted only researchers working in the U.S. because regulatory environments vary considerably across nations; controlling for these variations would require significantly larger sample sizes.

First, we created a recruitment database by querying the Aggregate Analysis of ClinicalTrials.gov (AACT) database, which houses publicly available information about clinical studies (Clinical Trials Transformation Initiative, 2016). The database included 20,613 researchers working on interventional clinical trials in the United States focused on Alzheimer’s disease (527) or involving participants age 65 or older (20,086). All prospective participants were sent a recruitment email, containing a link to the online Qualtrics survey.

In addition to recruiting from the AACT database, recruitment messages were posted to the Association of Clinical Research Professionals (ACRP) social media groups (i.e., Facebook and LinkedIn) and sent in two recruitment emails to 9,774 of ACRP’s members with a link to our online Qualtrics survey.

Informed consent was obtained online prior to completing the survey. We screened participants to verify that they were a CRC or PI, working in the United States, and expected to be involved in at least one new clinical intervention trial that would open within the next 18 months. Participants received a $20 Amazon eGift Card for completing the survey.

Participants.

There were 1284 participants, of whom 18% were PIs (N = 232) and 82% were CRCs (N = 1052). Demographic characteristics of the sample can be found in Table 1.

Table 1.

Demographic Characteristics of Quantitative Survey and Qualitative Interview Samples

Variable | Quantitative Survey (%) | Qualitative Interviews: PI (%) | CRC (%) | IRB (%)
Gender
 Female 77 65 85 55
 Male 22 35 15 45
 Other <1 0 0 0
 Prefer not to answer 1 0 0 0
Age
 Below 30 17 0 35 0
 30–39 33 25 45 15
 40–49 26 30 10 25
 50 or more 24 45 10 60
Race/ethnicity*
 American Indian/ Alaska Native 1 0 0 0
 Asian 9 20 5 5
 Black/African American 5 0 5 0
 Hispanic or Latino 9 5 5 0
 Native Hawaiian/ Pacific Islander <1 0 0 0
 White 83 75 90 90
 More than one race 3 0 0 0
 Prefer not to answer 2 15 5 5
Education
 High School Diploma or GED 3 0 0 0
 Associate’s Degree 6 0 0 0
 Bachelor’s Degree 38 0 35 0
 Master’s Degree 31 0 50 15
 Doctoral Degree 20 95 10 80
 Other 2 5 0 5
Trial types*
 Drug 76 45 65 85
 Device 48 5 20 90
 Behavioral 31 60 70 90
 Biologics 25 10 20 65
 Surgical 24 0 15 75
Funding sources*
 Federal agencies 65 80 80 100
 Private foundations 36 50 35 85
 Industry 75 65 50 85
 Other 9 5 10 20
≥1 clinical trial open to older adults 99
≥1 clinical trial involving participants with cognitive impairments 34

Note. Quantitative survey sample N = 1284 (232 PIs and 1052 CRCs). Qualitative interview sample N = 60 (20 PIs, 20 CRCs, and 20 IRB members).

*

Participants could select more than one response.

We arrived at 1284 participants after removing respondents who screened out (N = 614), worked outside the United States (N = 4), or completed the survey multiple times (N = 27). Respondents who completed the survey in under five minutes or who provided impossible responses to more than one of the consent practice adoption items (e.g., claiming to have personally changed more key information sections to increase white space than they had personally submitted to an IRB) were also removed (N = 67; Leiner, 2019).

Qualitative Interviews

Recruitment and Procedure.

We conducted semi-structured telephone interviews with PIs, CRCs, and IRB members. IRB members were included because they are an integral aspect of the ethical conduct of research and all consent materials have to be approved by the IRB. Thus, their knowledge and feedback on the practices were a particularly valuable addition to the study. The interviews were conducted prior to administering the quantitative survey, and provide a more complete understanding of the barriers to adoption than can be provided by quantitative data alone (Creswell, 2014).

Recruitment databases for each group were built using publicly available contact information, supplemented by snowball recruitment. PIs and CRCs were identified through trial listings on ClinicalTrials.gov. We used advanced search criteria to ensure that all researchers conducted interventional clinical trials with older adults. We used purposive sampling to ensure the sample represented researchers conducting trials with Alzheimer’s disease patients (CRCs 80%, PIs 75%). IRB members were identified through the websites of the 32 U.S. institutions with an NIA Alzheimer’s Disease Research Center (ADRC) (National Institute on Aging, 2019) and through member institutions of the Association of American Medical Colleges (AAMC) (Association of American Medical Colleges, 2020). To participate, IRB members needed to be voting members of their IRB and to have reviewed at least one clinical trial protocol involving older adults or individuals with cognitive impairments in the past year.

All stakeholder groups were recruited via email invitation. Participants provided informed consent, then completed an online demographic survey prior to the one-hour, semi-structured telephone interview. Participants received a $40 Amazon eGift Card for participating. All interviews were audio recorded and professionally transcribed.

Materials.

Similar to the survey, the interview questions reported here were part of a larger interview on several evidence-based informed consent practices. The interview questions were also developed using the CFIR model as a framework. Each interview followed a similar format. Participants were first asked about their current informed consent practices, followed by questions about their attitudes and views towards evidence-based consent practices. Questions were slightly adapted depending on whether the participant had ever used the practice or not.

The interviews explored participant views towards consent aids (Nishimura et al., 2013; Rubright et al., 2010), rather than formatting and plain language in key information, because the study was initiated prior to implementation of the revised Common Rule (January 20th, 2019; see Figure 1). The underlying rationale for consent aids and key information sections is very similar: both are designed to provide a short, clear, concise explanation of the key aspects of a study to help a participant decide about participation. Because the revised Common Rule was implemented during the study, interview guides for IRB members were adapted after that date to include questions regarding the revised Common Rule. Twelve (60%) of the IRB member participants were asked these new questions.

Figure 1.

Figure 1

Study timeline.

Participants.

Participants were PIs (N = 20), CRCs (N = 20), and IRB members (N = 20). Demographic characteristics of the sample can be found in Table 1.

Data Analysis

Quantitative survey.

We conducted the survey data analysis using SPSS version 26 and Stata 16. For Research Question 1, we calculated mean percentage adoption rates. For Research Question 2, we used regression analyses to determine whether variables that could be influenced by an intervention (e.g., attitudes) were associated with adoption of the practices. Specifically, we conducted hierarchical regressions, statistically controlling for social desirability and for lack of opportunity to adopt by entering the Marlowe-Crowne Social Desirability Scale and the “change already made” variable into block 1 of the regressions. We then entered the variables that could potentially be modified or influenced by an intervention into block 2: barriers, confidence in resources, and positive attitudes. To restrict the number of variables in the model to a reasonable number, and given our focus on informing implementation efforts, we did not include variables that we are unable to influence, such as funding source. For Research Question 3, we tallied the number and percentage of participants indicating various types of barriers in both the quantitative survey data and the qualitative interview data.
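The blockwise logic of these hierarchical regressions can be sketched in NumPy. The simulated data and variable names below are hypothetical stand-ins for the survey measures (the actual estimation was done in SPSS/Stata); the sketch only illustrates how adding block 2 predictors on top of block 1 controls changes adjusted R².

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical variables standing in for the survey measures
social_desirability = rng.normal(size=n)                         # block 1 control
change_already_made = rng.integers(0, 2, size=n).astype(float)   # block 1 control
positive_attitudes = rng.normal(size=n)                          # block 2 predictor
adoption = (0.5 * change_already_made + 0.4 * positive_attitudes
            + rng.normal(scale=0.5, size=n))                     # simulated outcome

def adj_r2(X, y):
    """Fit OLS with an intercept and return adjusted R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
    k = X1.shape[1] - 1  # predictors, excluding intercept
    return 1 - (1 - r2) * (len(y) - 1) / (len(y) - k - 1)

# Block 1: controls only; Block 2: controls plus a modifiable predictor
block1 = adj_r2(np.column_stack([social_desirability, change_already_made]), adoption)
block2 = adj_r2(np.column_stack([social_desirability, change_already_made,
                                 positive_attitudes]), adoption)
```

Comparing `block2` to `block1` shows the incremental variance explained by the modifiable predictors after the controls are accounted for, which is the purpose of the blockwise design.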

Qualitative interviews.

Qualitative interview transcripts were uploaded to Dedoose, a qualitative data analysis software. The first author (ES) led codebook development with input from the interviewers (KB and MP) and other team members (JM and JD). We used a mixture of inductive and deductive coding, using CFIR to guide our coding approach (Saldaña, 2016). Each stakeholder group was assigned one gold standard coder, and coders (ES and KB) were trained on the codebook. Coders were required to attain a Cohen’s kappa score at or above .80 before coding the data. Cohen’s kappa was calculated a second time mid-way through coding to prevent drift. During coding, the coders met weekly to resolve questions and revised the codebook accordingly.
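Cohen's kappa, the chance-corrected agreement statistic behind the .80 reliability threshold mentioned above, can be computed as follows. The example codes are invented for illustration and do not come from the study's codebook.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' nominal codes (illustrative sketch)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of items on which the coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under chance, from each coder's marginal code frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codes applied by two coders to six transcript excerpts
a = ["barrier", "attitude", "barrier", "other", "barrier", "attitude"]
b = ["barrier", "attitude", "barrier", "barrier", "barrier", "attitude"]
kappa = cohens_kappa(a, b)  # → 0.7, below the .80 threshold
```

A kappa of .80 or above is a common benchmark for strong inter-coder agreement; recomputing it mid-way through coding, as described above, guards against coder drift.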

Results

Research Question 1: Current Adoption Rates

The mean formatting adoption rate was 42.07% (SD = 39.18%), indicating that, on average, participants used the formatting practices in less than half of the key information sections they had submitted in the past 12 months. The mean adoption rate for plain language was 63.16% (SD = 41.94%), indicating that, on average, participants used plain language in the majority of their key information sections over the past 12 months. Means and standard deviations for study variables can be found in Table 2.

Table 2.

Means and Standard Deviations of Quantitative Survey Sample

Variable M SD Range
Key information sections submitted in past year 5.01 7.88 0–90
Key information sections changed (font) 2.58 5.44 0–50
Key information sections changed (white space) 2.89 5.06 0–50
Key information sections changed (plain language) 4.04 6.48 0–80
Formatting adoption rate 42.07% 39.18% 0–100%
Plain language adoption rate 63.16% 41.94% 0–100%
Confidence in resources (formatting) 3.70 .93 1–5
Confidence in resources (plain language) 3.79 .94 1–5
Positive attitudes (formatting) 7.16 1.75 2–10
Positive attitudes (plain language) 8.01 1.61 2–10
Marlowe-Crowne Social Desirability 9.19 2.47 1–13

Note. N = 1284. Mean percentage for the adoption rate variables is the average percent of key information sections participants changed to use the practice, out of all the key information sections they submitted to the IRB in the past year.

Research Question 2: Factors associated with adoption

Next, we utilized survey data to determine whether any of the variables that could be influenced by an intervention (e.g., attitudes) were associated with adoption of the practices.2 The overall regression model with formatting adoption as the dependent variable was significant, adj. R2 = .29, p < .01 (see Table 3). Not seeing the IRB as a barrier and having a more positive attitude toward formatting were both statistically significant predictors of formatting adoption. This is a mixture of CFIR inner (IRB) and individual (attitudes) domains. For plain language adoption, the overall model was also significant, adj. R2 = .54, p < .01 (see Table 4). Not seeing the sponsor as a barrier, having a more positive attitude toward plain language, and having more confidence in resources were all significant predictors of plain language adoption. This is a mixture of CFIR outer (sponsor) and individual domains (attitudes and confidence).

Table 3.

Regression Analyses Predicting Formatting Adoption using Quantitative Survey Sample

Block Variable B β t p F df adj. R2
1 Overall model <.01 191.64 2, 1004 .28
Marlowe-Crowne <.01 .03 1.11 .27
Change already made −.52 −.53 −19.57 <.01
2 Overall model <.01 52.30 8, 998 .29
Marlowe-Crowne <.01 .02 .77 .44
Change already made −.51 −.51 −19.15 <.01
IRB as barrier −.09 −.07 −2.40 .02
Sponsor as barrier .05 .03 1.06 .29
Team members as barrier .04 .02 .73 .47
Participants as barrier .13 .04 1.35 .18
Positive attitudes .03 .13 4.12 <.01
Confidence in resources .01 .02 .57 .57

Note. The dependent variable was formatting adoption. Bolded variables were significant predictors of formatting adoption.

Table 4.

Regression Analyses Predicting Plain Language Adoption Quantitative Survey Sample

Block Variable B β t p F df adj. R2
1 Overall model <.01 532.06 2, 948 .53
Marlowe-Crowne −.01 −.05 −2.20 .03
Change already made −.78 −.72 −32.41 <.01
2 Overall model <.01 143.06 8, 942 .54
Marlowe-Crowne −.01 −.06 −2.86 <.01
Change already made −.76 −.71 −32.29 <.01
IRB as barrier −.03 −.02 −.73 .46
Sponsor as barrier −.10 −.06 −2.50 .01
Team members as barrier −.05 −.02 −1.02 .31
Participants as barrier −.04 −.01 −.32 .75
Positive attitudes .02 .08 3.30 <.01
Confidence in resources .03 .06 2.40 .02

Note. The dependent variable was plain language adoption. Bolded variables were significant predictors of plain language adoption.

Research Question 3: Reasons for Not Adopting and Barriers to Adoption

Quantitative survey.

We first examined participants’ survey responses about why they had not changed their key information. Only participants indicating they had never used formatting (n = 636) or plain language (n = 224) responded to these items, because all other participants had already implemented the practice. As seen in Table 5, two of the most frequently endorsed response options were “the template provided already used this formatting practice” and “someone else on my research team made this change.” These responses reflect a lack of opportunity to adopt the practices rather than barriers to implementation, and were used to create the “change already made” variable used in the regression analyses.

Table 5.

Reasons Practices Were Not Adopted by Quantitative Survey Participants

Formatting
Plain Language
Reasons for non-adoption N % of question respondents % of sample N % of question respondents % of sample
The template provided already used this practice* 363 57.1 28.3 127 56.7 9.9
Someone else on my research team already made this change* 80 12.6 6.2 59 26.3 4.6
I was unaware of this practice (CFIR individual) 221 34.8 17.2 12 5.4 .9
I did not think this formatting practice was important (CFIR individual) 106 16.7 8.3 5 2.2 .4
I do not believe the IRB would allow this (CFIR inner) 38 6.0 3.0 11 4.9 .9
I do not have time to make optional changes to study materials (CFIR individual) 8 1.3 .6 7 3.1 .6
I did not want to risk a delay in IRB review time (CFIR inner) 8 1.3 .6 5 2.2 .4
I’m not sure how to do this (CFIR individual) 5 .8 .4 4 1.8 .3
Other 57 9.0 4.4 21 9.4 1.6

Note. Only participants reporting not using the practice answered the question. 636 did not use formatting (i.e., they reported not using one or both of these practices), while 224 did not use plain language. Percentages of question responders are the percentage of participants who selected that response option out of those that answered the question. Percentages of sample are percentage who selected that response option of the total number of participants in the quantitative survey sample (N = 1284).

*

Response option that comprises the change already made variable and does not represent a CFIR domain. Participants could select all response options that applied.

Among the remaining response options, the most common reasons for not adopting formatting were “I was unaware of this practice” and “I did not think this practice was important.” These fall under the CFIR individual domain, indicating that additional education may be needed. Regarding plain language, none of the remaining reasons for not adopting were endorsed frequently, suggesting that researchers see few barriers to adopting plain language.

We next examined survey items regarding barriers. All participants were asked, “Do you think anyone might try to prevent you from using this practice?” If they responded “yes,” they selected from a list of options regarding who might prevent them from using the practice. As seen in Table 6, the IRB was the entity most frequently cited as potentially preventing participants from using the formatting practices, followed by sponsors and research team members. Similarly, the IRB was the entity most frequently cited as potentially preventing researchers from using plain language, followed by sponsors and research team members. This is a mixture of CFIR inner (IRB and research team members) and outer (sponsors) domains.

Table 6.

Barriers to Adoption of Practices Indicated by Quantitative Survey Participants.

Formatting
Plain Language
Barrier N % of question respondents % of sample N % of question respondents % of sample
IRB (CFIR inner) 135 66.1 10.5 81 51.3 6.3
Sponsor (CFIR outer) 93 45.6 7.3 89 56.3 6.9
Research team (CFIR inner) 57 27.9 4.4 41 26.0 3.2
Participants (CFIR inner) 15 7.4 1.2 8 5.1 .6
Other 21 10.3 1.6 18 11.4 1.4

Note. Only participants reporting that someone might try to prevent them from using the practice answered the question: 204 for formatting, and 158 for plain language. Percentages of question responders are the percentage of participants who selected that response option of those that answered the question. Percentages of sample are percentage who selected that response option of the total number of participants in the quantitative survey sample (N = 1284). Participants could select all that applied.

Qualitative interviews.

Qualitative data were analyzed according to CFIR to determine barriers; here we report the major themes arising in the three relevant CFIR domains: individual characteristics, inner setting barriers, and outer setting barriers.

Individual characteristics.

All PIs, all CRCs, and the majority of IRB members reported positive or supportive opinions about facilitating comprehension of informed consent through formatting, plain language, or other devices such as consent aids (see Table 7). Individuals agreed that improvements of this nature would facilitate comprehension and help participants understand the key aspects of a study before agreeing to take part. Yet while only 3 PIs and 4 CRCs reported actually using a consent aid, participants recognized the importance of having a simplified document to facilitate comprehension. Participants also expressed several concerns about consent aids (12 PIs, 11 CRCs, 8 IRB members), such as being overly transparent about the study design (e.g., making a placebo or non-intervention arm too obvious), uncertainty about the impact on comprehension, and the potential to cause more confusion for some participants. IRB members expressed the fewest concerns.

Table 7.

Barriers to Adoption of Practices Indicated by Qualitative Interview Participants.

CFIR Domain N of PI transcript N of CRC transcripts N of IRB member transcripts Representative Quotes
Individual Characteristics 20 20 17
Statements of support 20 18 14 “I think the simpler that we can make it definitely the better it will be both for researchers and for participants.” (CRC, male, age 20–29, 2 years as CRC)

“I think it makes it a whole lot more accessible. It makes the research feel less daunting, and then conveys information in a way that I think would be really much more easily processed.” (PI, female, age 40–49, 9 years as PI)

“Making the dense information that’s usually in a consent form just easily understandable to participants because they may miss a lot of the information that’s more in jargon, and even if we write it in a grade level or way that people can still understand it.” (PI, female, age 30–39, 9 years as PI)
Statements of concern 12 11 8 “Sometimes, I think we always have good intentions, when we create things like this that should be helpful, but sometimes, it may—I don’t know, I’d have—I think it’d have to be case-by-case, but I think sometimes we, in our attempts to make things better, sometimes it makes it more complicated, but it sounds like a good idea.” (CRC, female, age 40–49, 21 years as CRC)

“As a matter of fact, I think it might be too much information for them, given their cognitive decline. I think for people who are at early stages, it may be able to be more beneficial, but for people I work with who already have a diagnosis of Alzheimer’s dementia, it might be overwhelming.” (PI, female, age 40–49, 14 years as PI)
“I think it could potentially enhance it and make them much clearer about what it would mean to be part of the study. It also could—it could unintentionally misrepresent the study, either what their role is, or potentially give too much information about if it’s a randomized trial. It’ll clue them in to sort of what arm, it might change some of the blinding, I guess, for them.” (PI, female, age 40–49, 2 years as PI)
Lack of Knowledge/Need for Attitudes to Change 9 5 10 “I think barriers and challenges would be there might be disagreement as to what were the most important items to simplify and put on the form. There would be education of researchers and IRB members that would be required.” (IRB member, female, age 50–59, 2 years on IRB)

“I think changing the status quo is always gonna be problematic…I wonder if lawyers are gonna be okay with it. Is it gonna have to go through legal review for passing muster for all the legalese that gets put into consent documents, and what added layers of roadblocks would there be?” (PI, female, 40–49, 9 years as PI)

“I don’t know, because I do think there’s patients who want the level of detail that’s provided in the consent form, and a summary page may be less effective than just providing the full consent form. At the same time, I do also think that, especially you talk about people with memory impairments, having the option of something shorter with bullet points could be more useful. I don’t know. I feel like it would depend on the person you’re working with.” (CRC, female, age 30–39, 5 years as CRC)

“Anything new like this, we’d want to make sure that it has evidence behind it.” (PI, male, age 30–39, 8 years as PI)
Inner Barriers 18 14 16
Burden 9 10 12 “We have two clinical research coordinators, and they only have so many hours in the day to do everything. Time and cost are definitely the big things.” (PI, male, 30–39, 5 years as PI)

“I just have to say that it takes a lot of time. It takes a lot of—in this instance, it was my time, but any team that’s going to do this, it’s going to take some extra time. It’s like doing another document that’s gonna be submitted for approval.” (CRC, female, age 60 or older, 20 years as CRC)

“It’s quite a burden for sites to have to do that and do it well. Then, of course, it has to go through the process of being approved by the sponsor and then back to the site and then to the IRB, and the IRB has stipulations. I’m just saying that that whole process, it’s very important, there’s no doubt, but adding in another—any other document is just, in some ways, adding one more thing.” (CRC, female, age 60 or older, 20 years as CRC)
IRB 9 4 0 “I would be [interested in using the practice] if our IRB would condone the use. I think that that would just—they would not have a clue with what to do with it.” (PI, female, age 60 or older, 27 years as PI)

“I think our IRB won’t understand it, and they would—I’m not sure. I think it would take some explaining to the IRB.” (PI, female, age 50–59, 6 years as PI)
Research team 2 5 9 “Change is hard. People have been using the same documents and the same templates for years and adapting them and updating them, especially when they’ve been researching in the same field for a long time. So I think, for those people, it’s going to be particularly challenging.” (IRB member, female, age 30–39, 5 years on IRB)

“I would imagine the first barrier would be the PI or the researcher conducting the study is not familiar with it. Many times, getting a suggestion from the IRB is often met with initial resistance.” (IRB member, female, age 50–59, 12 years on IRB)

Note. Total N = 60 (20 PIs, 20 CRCs, 20 IRB members). CFIR outer setting barriers were not included in the table because they were rarely mentioned by qualitative interview participants.

We identified “lack of knowledge/need for attitudes to change” as a barrier among PIs (N = 9) and IRB (N = 10) members, although less so among CRCs (N = 5). Here individuals described the historical tendency to do things a certain way, a need for consensus on best practices, or simply not knowing if the practice would help, as barriers to change.

Outer setting barriers.

The outer setting was rarely identified as a barrier, and only arose in 2 PI, 5 CRC, and 5 IRB interviews. Outer setting barriers included sponsors, or problems with accessibility for those with vision impairment or low English proficiency. Because outer setting barriers were so rarely mentioned, they are not included in Table 7.

Inner setting barriers.

The most frequently cited inner barrier among PIs (N = 9), CRCs (N = 10), and IRB members (N = 12) was burden, which primarily consisted of concerns about the time and cost of creating or modifying materials. Notably, PIs also tended to identify IRBs as a barrier (N = 9), which was less of a concern for CRCs (N = 4). In our interviews with IRB members, it is notable that their second biggest barrier after burden was the research team (N = 9). Other inner barriers included challenges in determining which content to include and the potential over-simplification of the information.

Discussion

The data presented here show that PIs and CRCs conducting research in the U.S. with either cognitively impaired adults or older adults had moderate levels of adoption of formatting and plain language in their key information sections. Participants reported using the formatting practices in fewer than half of their key information documents (42%) and plain language in a majority of their key information documents (63%). This is notable, given that past research suggested formatting and plain language were not commonly used in consent documents (Mozersky et al., 2020; Paasche-Orlow et al., 2013).

When examining factors associated with adoption of the practices, we focused on factors that were modifiable and could potentially be targeted by an intervention to increase adoption. Of these, having a more positive attitude toward the practice consistently predicted adoption of both practices. This is perhaps unsurprising, given that attitudes often predict behavior (Ajzen, 1991), and it suggests that interventions aimed at changing attitudes toward formatting and plain language may increase use of the practices. Additionally, not seeing the IRB as a barrier was a significant predictor of formatting adoption; this finding was corroborated by the qualitative interviews. Given that a majority of IRB members in our qualitative interviews were supportive of the practice, an intervention could educate researchers that IRBs are likely to allow edits to formatting, even when IRBs resist other changes to their consent templates. Not seeing the sponsor as a barrier was a predictor of plain language adoption, suggesting that researchers may need resources to help justify formatting or plain language changes to sponsors. Last, being confident that they had the resources needed to implement the practice was a significant predictor of plain language adoption. Thus, an intervention providing resources and tools for implementing plain language may yield higher adoption levels.

The most commonly cited reason for not adopting the practices was that the template provided already used the practice. This finding was somewhat surprising, given that previous research indicated relatively few key information templates provide guidance on the use of formatting or plain language (Mozersky et al., 2020). Researchers may have assumed that their provided template met best practices, even though the majority of IRB-provided templates do not (Mozersky et al., 2020). Additionally, changes to the Common Rule, which went into effect approximately 10 months prior to our survey, may have led to some changes in practices. The regulations require a key information section that is organized “in a way that facilitates comprehension” (Department of Health and Human Services, 2018). Our survey focused on this section because we believe it is the section IRBs would be most amenable to seeing edited for readability and clarity. Some of our outlier responses may be explained with reference to this new requirement: some respondents claimed to have submitted 90 consent forms with key information over the past year. This is plausible if, for example, a CRC works for a highly active clinical research organization that required all active trials to add a key information section. These numbers will likely drop in the future after trials have been brought into compliance. However, we question whether participants said they used formatting and plain language practices simply because they tried to follow the new regulatory requirement that key information be presented “in a way that facilitates comprehension.” In the absence of federal guidance on how to implement the updated consent form requirements, institutions developed their own guidance, and a review of these documents found that many templates (59%) fail to recommend or require the practices (Mozersky et al., 2020).

Other commonly cited reasons for not adopting the practices included being unaware of the practice and not believing the practice was important, consistent with the qualitative results showing that lack of knowledge about the practices was common. Each of these reasons points to the need for additional education and training for researchers on the importance of using evidence-based health communication practices and how they can improve participant understanding (Directorate-General for Translation, 2011; Holmes-Rovner et al., 2005; Kim & Kim, 2015; The Plain Language Action and Information Network, 2011). Using the practices is perhaps particularly important in the current study sample, given that all the study participants conduct research with participants who have cognitive impairments or who are at increased risk of cognitive impairment due to their age (65+) (Plassman et al., 2008; Prusaczyk et al., 2017).

Participants identified several perceived barriers to adopting the practices. Notably, researchers commonly reported the IRB as a barrier in both the survey and interview data. Researchers appear to believe both that IRB templates tend to already use the practices and that the IRB would be a barrier if they tried to adopt the practices. Interestingly, the qualitative interview data suggested that IRB members were supportive of both practices, even though researchers perceived that IRBs would not be. Conversely, the IRB members interviewed viewed researchers as a barrier to using the practices, and suggested that IRBs will often approve consent documents that use these practices, as long as the documents also meet compliance criteria. This mutual misperception between researchers and IRB members should be targeted in future education or training in order to increase the use of these practices.

Another barrier identified in the qualitative interviews was the burden associated with adopting the practices. Participants reported that it would take time and money to make changes of this nature to their consent processes. In contrast, quantitative survey participants rarely endorsed “I do not have time to make optional changes to study materials,” the survey equivalent of burden. Perhaps the more in-depth nature of the interviews allowed researchers to think more broadly about the difficulties of implementing the practices, leading them to report burden as a barrier.

Most barriers to adoption fell under the CFIR individual and inner setting domains, with only one outer setting aspect cited. We found that attitudes toward the practices, and lack of awareness and lack of knowledge of the practices contributed to whether researchers had adopted the practices or saw barriers to adoption. There were also numerous aspects of the inner setting that contributed to whether they had adopted the practices or saw barriers to adoption, including the IRB, their research team members, and the burden of implementing the practices. Finally, the only part of the outer setting that was of note was that sponsors were indicated to be a potential barrier to adopting the practices.

It is important to note, though, that few of these reasons for not adopting and barriers to adoption were endorsed by a majority of the survey or interview participants. Given that participants tended to have positive attitudes toward the practices, it seems that researchers like the practices and want to use them. The fact that there are relatively few barriers suggests that adoption rates for the practices could likely be increased further. This could be accomplished by educating researchers who were unaware of the practices or unaware of how useful the practices are, and by providing evidence that IRBs welcome the use of these practices.

Best Practices

Our entire project is focused on adopting best practices for informed consent in research. We found that attitudes toward the evidence-based health communication practices predicted using the practices. Additionally, for one of the practices, we found that being confident they had the resources needed to implement the practice predicted using the practice. These findings suggest that an intervention aimed at improving attitudes, confidence, and availability of resources to help with implementation may yield additional usage of the practices. Having more resources to help implement the practices will also lower the burden associated with their implementation, which was one of the barriers to implementation reported by researchers. Importantly, developing an intervention of this nature is the next step of our overall project.

Furthermore, the evidence-based health communication practices examined here would be beneficial for any participant population or any type of trial, and they are relatively easy to implement. Our larger project focuses on helping researchers implement the practices, concentrating on researchers working with older adults or individuals with Alzheimer’s disease because these populations are at increased risk of cognitive impairment and because NIH policy now mandates inclusion of older adults in research. Enhancing the informed consent process by using evidence-based communication practices is essential to ethically enrolling these participants in trials.

Research Agenda

Future research should examine actual key information sections to determine how frequently the practices are used. Objective evidence of the extent to which formatting and plain language are used in key information may yield different results than self-reported data. One barrier to examining actual key information sections is that they can be difficult to collect: the requirement is new, and study materials are often not made publicly available (for example, materials from industry-funded trials can be proprietary). However, collecting a large sample of key information sections may soon be feasible because the revised Common Rule requires investigators to post study consent forms publicly (Department of Health and Human Services, 2018). Investigators are only required to post one version of their consent form, and only after the study has ended. Therefore, it would not be possible to examine key information in a “before” and “after” format to detect any changes toward using evidence-based health communication practices over the course of a trial (which often lasts several years).
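One component of such an objective audit could be automated readability scoring of collected key information sections, for example with the widely used Flesch-Kincaid grade level. The sketch below is purely illustrative and is not the instrument used in this study: the syllable counter is a crude vowel-group heuristic, and the function names are our own.

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels (including y)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level using the standard published constants:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Short, plain sentences score at a low grade level.
plain = "You can stop at any time. There is no cost to you."
print(round(flesch_kincaid_grade(plain), 1))
```

Scoring a corpus of posted consent forms this way would give an objective complement to the self-report data, though production analyses would want a validated syllable dictionary rather than this heuristic.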

Educational Implications

Our data suggest that researchers need further education and training on evidence-based communication practices. Education or training from IRBs may be of particular value, given that researchers see IRBs as a barrier to using the practices even though the IRB members we interviewed supported them. IRBs could communicate their support for the practices through educational trainings or by directly requiring use of the practices in their key information guidance. This would bring key information documents into better compliance with the requirement that they be organized in a way that facilitates comprehension (Department of Health and Human Services, 2018).

Our upcoming implementation trial will address these issues, with the aim of increasing the use of the practices. Given the importance of templates in guiding researchers’ actions (Larson et al., 2017), our trial will provide templates for formatting a key information document and using plain language, along with language researchers can use to justify the practices in their IRB applications. We also plan to ease the burden associated with adopting the practices by providing easy-to-use tools and resources to help researchers implement them.

Limitations

One limitation of the current research is that it consisted of self-report data. Thus, it is possible that participants’ use of the practices was over-reported, especially compared with previous literature suggesting low use of evidence-based health communication practices (Mozersky et al., 2020; Paasche-Orlow et al., 2013). Additionally, it is possible that formatting and plain language were used in key information sections only to a small extent; participants could have reported using a practice even if its use was minimal throughout the document. Thus, there could be room for improvement and education among researchers, even though adoption rates are already moderate.

Part of this research (the majority of the qualitative interviews) was conducted prior to the transition to the revised Common Rule in January 2019, which created the key information requirement. Because of this, the qualitative interviews focused on consent aids rather than key information. Consent aids and key information are similar in that they are both designed to provide a short, clear, concise explanation of the key aspects of a study to help a participant decide about participation. However, it’s possible that the interview data on consent aids does not wholly apply to key information sections.

Last, our study sample consisted of researchers working with older adults or individuals with Alzheimer’s disease, and their support and overall adoption levels might reflect the specific populations they work with. We do not know whether researchers working with other populations have similar levels of adoption of evidence-based health communication strategies in their key information sections, have similar predictors of adoption, or perceive similar barriers to adoption. Future research should explore the use of evidence-based communication strategies with other populations.

Conclusion

The current research provides a benchmark for how widespread the use of evidence-based health communication practices is in key information sections. Given that the recent revisions to the Common Rule require that key information be organized to facilitate comprehension (Department of Health and Human Services, 2018), we should expect to see an increase in the use of evidence-based health communication strategies. Future research should re-examine use of these practices to determine whether adherence to the regulation has increased, especially if further guidance is provided by the Secretary’s Advisory Committee on Human Research Protections (SACHRP).

Acknowledgments

Source of support: NIA R01AG058254, NCATS UL1TR002345

Footnotes

1

The remaining two CFIR domains, intervention and process, were also represented in our survey and interviews. These data were used to guide the development of our trial rather than to understand adoption of the practices, and thus, are not presented here.

2

We also conducted regression models that included unmodifiable variables. The findings for the modifiable variables remained largely the same regardless of whether unmodifiable variables were included in the regression models.

References

1. Agre P, Campbell FA, Goldman BD, Kass NE, Boccia ML, McCullough LB, . . . Wirshing D (2003). Improving Informed Consent: The Medium is Not the Message. IRB: Ethics & Human Research, Suppl 25(5), S11–S19.
2. Ajzen I (1991). The Theory of Planned Behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211.
3. Association of American Medical Colleges. (2020). AAMC Medical School Members. Retrieved from https://members.aamc.org/eweb/DynamicPage.aspx?site=AAMC&webcode=AAMCOrgSearchResult&orgtype=Medical%20School
4. Beardsley E, Jefford M, & Mileshkin L (2007). Longer Consent Forms for Clinical Trials Compromise Patient Understanding: So Why are They Lengthening? Journal of Clinical Oncology, 25(9), e13–14. doi: 10.1200/JCO.2006.10.3341
5. Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, & Crotty K (2011). Low Health Literacy and Health Outcomes: An Updated Systematic Review. Annals of Internal Medicine, 155(2), 97–107. doi: 10.7326/0003-4819-155-2-201107190-00005
6. Campbell FA, Goldman BD, Boccia ML, & Skinner M (2004). The effect of format modifications and reading comprehension on recall of informed consent information by low-income parents: a comparison of print, video, and computer-based presentations. Patient Education and Counseling, 53(2), 205–216. doi: 10.1016/S0738-3991(03)00162-9
7. Clinical Trials Transformation Initiative. (2016). Improving Public Access to Aggregate Content of ClinicalTrials.gov. Retrieved from https://aact.ctti-clinicaltrials.org/
8. Creswell JW (2014). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches (4th ed.). Thousand Oaks: SAGE Publications.
9. Crowne DP, & Marlowe D (1960). A New Scale of Social Desirability Independent of Psychopathology. Journal of Consulting Psychology, 24(4), 349.
10. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, & Lowery JC (2009). Fostering Implementation of Health Services Research Findings into Practice: A Consolidated Framework for Advancing Implementation Science. Implementation Science, 4, 15p.
11. Davis TC, Berkel HJ, Holcombe RF, Pramanik S, & Divers SG (1998). Informed Consent for Clinical Trials: a Comparative Study of Standard Versus Simplified Forms. JNCI: Journal of the National Cancer Institute, 90(9), 668–674. doi: 10.1093/jnci/90.9.668
12. Department of Health and Human Services. (2018). Federal Policy for the Protection of Human Subjects (45 CFR 46). Retrieved from https://www.ecfr.gov/cgi-bin/retrieveECFR?gp=&SID=83cd09e1c0f5c6937cd9d7513160fc3f&pitd=20180719&n=pt45.1.46&r=PART&ty=HTML
13. Directorate-General for Translation. (2011). How to Write Clearly. doi: 10.2782/29211
14. DuBois JM, & Antes AL (2018). Five dimensions of research ethics: A stakeholder framework for creating a climate of research integrity. Academic Medicine, 93(4), 550–555. doi: 10.1097/ACM.0000000000001966
15. DuBois JM, Chibnall JT, & Gibbs JC (2015). Compliance disengagement in research: Development and validation of a new measure. Science and Engineering Ethics, 22(4), 965. doi: 10.1007/s11948-015-9681-x
16. DuBois JM, Chibnall JT, Tait RC, Vander Wal JS, Baldwin KA, Antes AL, & Mumford MD (2015). Professional Decision-Making in Research (PDR): The validity of a new measure. Science and Engineering Ethics, 22(2), 391–416. doi: 10.1007/s11948-015-9667-8
17. English T, Antes AL, Baldwin KA, & DuBois JM (2017). Development and preliminary validation of a new measure of values in scientific work. Science and Engineering Ethics. doi: 10.1007/s11948-017-9896-0
18. Faden RR, & Beauchamp TL (1986). A History and Theory of Informed Consent. New York: Oxford University Press.
19. Fixsen D, Naoom SF, Blase KA, Friedman RM, & Wallace F (2005). Implementation Research: A Synthesis of the Literature. Retrieved from http://ctndisseminationlibrary.org/PDF/nirnmonograph.pdf
20. Flory J, & Emanuel EJ (2004). Interventions to Improve Research Participants’ Understanding in Informed Consent for Research: A Systematic Review. Journal of the American Medical Association, 292(13), 1593–1601. doi: 10.1001/jama.292.13.1593
21. Foe G, & Larson EL (2016). Reading Level and Comprehension of Research Consent Forms: An Integrative Review. Journal of Empirical Research on Human Research Ethics, 11(1), 31–46. doi: 10.1177/1556264616637483
22. Graham ID, & Logan J (2004). Innovations in Knowledge Transfer and Continuity of Care. Canadian Journal of Nursing Research, 36(2), 89–103.
23. Holmes-Rovner M, Stableford S, Fagerlin A, Wei JT, Dunn RL, Ohene-Frempong J, . . . Rovner DR (2005). Evidence-based patient choice: a prostate cancer decision aid in plain language. BMC Medical Informatics and Decision Making, 5(1), 16. doi: 10.1186/1472-6947-5-16
24. Iltis AS, Misra S, Dunn LB, Brown GK, Campbell A, Earll SA, . . . DuBois JM (2013). Addressing Risks to Advance Mental Health Research. JAMA Psychiatry, 70(12), 1363–1371. doi: 10.1001/jamapsychiatry.2013.2105
25. Jefford M, Mileshkin L, Raunow H, O’Kane C, Cavicchiolo T, Brasier H, . . . Reynolds J (2005). Satisfaction with the decision to participate in cancer clinical trials (CCT) is high, but understanding is a problem. Journal of Clinical Oncology, 23(16_suppl), 6067–6067. doi: 10.1200/jco.2005.23.16_suppl.6067
  26. Jefford M, & Moore R (2008). Improvement of informed consent and the quality of consent documents. The Lancet Oncology, 9(5), 485–493. doi: 10.1016/S1470-2045(08)70128-1 [DOI] [PubMed] [Google Scholar]
  27. Joffe S, Cook EF, Cleary PD, Clark JW, & Weeks JC (2001). Quality of informed consent in cancer clinical trials: a cross-sectional survey. The Lancet, 358(9295), 1772–1777. doi: 10.1016/S0140-6736(01)06805-2 [DOI] [PubMed] [Google Scholar]
  28. Kilbourne AM, Neumann MS, Pincus HA, Bauer MS, & Stall R (2007). Implementing Evidence-based Interventions in Health Care: Application of the Replicating Effective Programs Framework. Implementation Science, 2(1), 42. doi: 10.1186/1748-5908-2-42 [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Kim EJ, & Kim SH (2015). Simplification Improves Understanding of Informed Consent Information in Clinical Trials Regardless of Health Literacy Level. Clinical Trials, 12(3), 232–236. doi: 10.1177/1740774515571139 [DOI] [PubMed] [Google Scholar]
  30. Kitson A, Ahmed LB, Harvey G, Seers K, & Thompson DR (1996). From Research to Practice: One Organizational Model for Promoting Research-based Practice. J Adv Nurs, 23(3), 430–440. doi: 10.1111/j.1365-2648.1996.tb00003.x [DOI] [PubMed] [Google Scholar]
  31. Kutner M, Greenberg E, & Baer J (2005). National Assessment of Adult Literacy (NAAL). Retrieved from https://nces.ed.gov/NAAL/PDF/2006470.PDF
  32. Larson EL, Foe G, & Lally R (2015). Reading Level and Length of Written Research Consent Forms. Clin Transl Sci, 8(4), 355–356. doi: 10.1111/cts.12253 [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Larson EL, Teller A, Aguirre AN, Jackson J, & Meyer DD (2017). Assessing Usefulness and Researcher Satisfaction with Consent Form Templates. Journal of Clinical and Translational Science, 1(4), 256–259. doi: 10.1017/cts.2017.296 [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Leiner DJ (2019). Too Fast, Too Straight, Too Weird: Non-Reactive Indicators for Meaningless Data in Internet Surveys. Survey Research Methods, 13(3), 229–248. doi: 10.18148/srm/2019.v13i3.7403 [DOI] [Google Scholar]
  35. Mendel P, Meredith LS, Schoenbaum M, Sherbourne CD, & Wells KB (2008). Interventions in Organizational and Community Context: A Framework for Building Evidence on Dissemination and Implementation in Health Services Research. Administration and Policy in Mental Health, 35(1–2), 21–37. doi: 10.1007/s10488-007-0144-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Montalvo W, & Larson E (2014). Participant Comprehension of Research for Which They Volunteer: A Systematic Review. J Nurs Scholarsh, 46(6), 423–431. doi: 10.1111/jnu.12097 [DOI] [PubMed] [Google Scholar]
  37. Mozersky J, Wroblewski MP, Solomon ED, & DuBois JM (2020). How are US institutions implementing the new key information requirement? Journal of Clinical and Translational Science, 1–5. doi: 10.1017/cts.2020.1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. National Institute on Aging. (2019). Alzheimer’s Disease Research Centers. Retrieved from https://www.nia.nih.gov/health/alzheimers-disease-research-centers
  39. National Institutes of Health. (2019). Inclusion across the lifespan. Retrieved from https://grants.nih.gov/policy/inclusion/lifespan.htm
  40. Nishimura A, Carey J, Erwin PJ, Tilburt JC, Murad MH, & McCormick JB (2013). Improving Understanding in the Research Informed Consent Process: A Systematic Review of 54 Interventions Tested in Randomized Control Trials. BMC Medical Ethics, 14, 28. doi: 10.1186/1472-6939-14-28 [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Paasche-Orlow MK, Brancati FL, Taylor HA, Jain S, Pandit A, & Wolf M (2013). Readability of Consent Form Templates: A Second Look. IRB: a Review of Human Subjects Research, 35(4), 12–19. [PubMed] [Google Scholar]
  42. Pettigrew A, & Whipp R (1992). Managing Change and Corporate Performance. In European Industrial Restructuring in the 1990s. London: Palgrave Macmillan. [Google Scholar]
  43. Plain Language Association International. (2008). PLAIN. Retrieved from plainlanguagenetwork.org [Google Scholar]
  44. Plassman BL, Langa KM, Fisher GG, Heeringa SG, Weir DR, Ofstedal MB, . . . Wallace RB (2008). Prevalence of cognitive impairment without dementia in the United States. Annals of Internal Medicine, 148(6), 427–434. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2670458/ [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Prusaczyk B, Cherney SM, Carpenter CR, & DuBois JM (2017). Informed Consent to Research with Cognitively Impaired Adults: Transdisciplinary Challenges and Opportunities. Clinical Gerontologist, 40(1), 63–73. doi: 10.1080/07317115.2016.1201714 [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Rubright J, Sankar P, Casarett DJ, Gur R, Xie SX, & Karlawish JH (2010). A Memory and Organizational Aid Improves Alzheimer Disease Research Consent Capacity: Results of a Randomized, Controlled Trial. American Journal of Geriatric Psychiarty, 18(12), 1124–1132. doi: 10.1097/JGP.0b013e3181dd1c3b [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Saldaña J (2016). The Coding Manual for Qualitative Researchers (3 ed.). Thousand Oaks, CA: Sage Publications Ltd. [Google Scholar]
  48. Stetler CB (2001). Updating the Stetler Model of Research Utilization to Facilitate Evidence-Based Practice. Nurs Outlook, 49(6), 272–279. doi: 10.1067/mno.2001.120517 [DOI] [PubMed] [Google Scholar]
  49. Taylor JS, DeMers SM, Vig EK, & Borson S (2012). The disappearing subject: exclusion of people with cognitive impairment and dementia from geriatrics research. Journal of the American Geriatric Society, 60(3), 413–419. doi: 10.1111/j.1532-5415.2011.03847.x [DOI] [PubMed] [Google Scholar]
  50. The Plain Language Action and Information Network. (2011). Federal Plain Language Guidelines. Retrieved from https://www.plainlanguage.gov/guidelines/
  51. The Secretary’s Advisory Committee on Human Research Protections. (2018). New “Key Information” Informed Consent Requirements. Retrieved from https://www.hhs.gov/ohrp/sachrp-committee/recommendations/attachment-c-november-13-2018/index.html