Abstract
Minority populations with health disparities are underrepresented in research designed to address those disparities. One way to improve minority representation is to use community-based participatory methods to overcome barriers to research participation, beginning with the informed consent process. Relevant barriers to participation include lack of individual or community awareness or acceptance of research processes and purposes. These barriers are associated with limited health literacy. To inform recommendations for an improved consent process, we examined 97 consent documents and 10 associated Institutional Review Board websites to determine their health literacy demands and degree of adherence to principles of community-based research. We assessed the reading level of consent documents and obtained global measures of their health literacy demand by using the Suitability and Comprehensibility Assessment of Materials instrument. Although these documents were deemed suitable as medical forms, their readability levels were inappropriate, and they were unsuitable for educating potential participants about research purposes. We also assessed consent forms and Institutional Review Board policies for endorsement of community-based participatory principles, finding that very few acknowledged or adhered to such principles. To improve comprehension of consent documents, we recommend restructuring them as educational materials that adhere to current health literacy guidelines.
Keywords: informed consent, health literacy, institutional review board, community-based participatory research, healthcare disparities
Even though members of racial and ethnic minorities experience an unequal burden of many common illnesses, including cancer, cardiovascular disease, and diabetes, these groups remain underrepresented in research designed to prevent and treat such illnesses (Chen, Lara, Dang, Paterniti, & Kelly, 2014; Hawk et al., 2014). The inadequate participation of women and minorities in clinical research prompted the National Institutes of Health Revitalization Act of 1993 (P.L. 103–43) and its update in 2001, which offered guidelines for the inclusion of these demographic categories. Nevertheless, the representation of racial and ethnic minorities in research funded by the National Institutes of Health still falls short of proportionality (Chen et al., 2014; M. E. Ford et al., 2013; Hawk et al., 2014).
Several studies have sought strategies to increase research participation by underrepresented groups (M. E. Ford et al., 2013; George, Duran, & Norris, 2014; Schmotzer, 2012). Many of the barriers and facilitators identified in these studies are consistent with a conceptual model developed to understand how to improve the recruitment of minority women in research (Brown, Long, Gould, Weitz, & Milliken, 2000). This model identifies three factors governing decisions to participate: individual-level awareness of the purpose, processes, and importance of research; community-level acceptance of research; and individual and community perceptions of the ease of access to research activities. Health literacy is implicated in all three factors, especially during the process of obtaining consent for research participation. The authors advised investigators to communicate more effectively with potential participants, including the provision of understandable consent forms (Brown et al., 2000).
Although not intentionally deceptive, consent forms often use complex language and are designed primarily to document agreement to participate rather than to ensure that participants understand the proposed research (Grady, 2015; Lorenzen, Melby, & Earles, 2008). Indeed, previous studies report that clinical trial participants might have limited understanding of the experimental aspect of such efforts (Corbie-Smith, Thomas, Williams, & Moody-Ayers, 1999; Flory & Emanuel, 2004; Nishimura et al., 2013; Putnam et al., 2015; Tattersall, 2001; Trantham et al., 2015). Without clear communication about the purpose, process, and benefits of specific studies, community members might not be aware of their importance. In addition, the lack of readily understandable consent forms directly affects their access to research (Brown et al., 2000). Even though Institutional Review Boards (IRBs) play an important role in setting standards for informed consent, they often approve documents that do not conform to their own readability guidelines (Paasche-Orlow, Taylor, & Brancati, 2003).
Reading skills are a component of health literacy as well as of literacy per se. Limited literacy is most prevalent in racial and ethnic minorities, older and less educated adults of all races, and people with poorer self-rated health (Goodman et al., 2013; Kirsch, Jungeblut, Jenkins, & Kolstad, 2002). According to the Program for the International Assessment of Adult Competencies in 2012, literacy skills in the general U.S. population have not improved since the 1992 National Adult Literacy Survey (Goodman et al., 2013; Kirsch et al., 2002; Kutner, Greenberg, Jin, & Paulsen, 2006). In the most recent assessment, one in six adults had low literacy skills.
People with limited health literacy have less health knowledge and experience poorer health outcomes than their more health-literate counterparts (Aboumatar, Carson, Beach, Roter, & Cooper, 2013; Margolis, Hampton, Hoffstad, Malay, & Thom, 2015; Moser et al., 2015; Quinlan et al., 2013). Accordingly, the need for clear communication is recognized by investigators working with members of vulnerable populations (Sudore et al., 2006), who cite misperceptions of the purpose and procedures of clinical trials as a key barrier to research participation (Braunstein, Sherber, Schulman, Ding, & Powe, 2008; Evans, Lewis, & Hudson, 2012; J. G. Ford et al., 2005; George et al., 2014). This and similar barriers are more prevalent in underserved communities. Incorporating community-based participatory research (CBPR) methods in recruitment, informed consent, and retention strategies can help to overcome these barriers and enhance minority access to, awareness of, and especially acceptability of research participation (De las Nueces, Hacker, DiGirolamo, & Hicks, 2012; Paskett et al., 2008; Seifer, Michaels, & Collins, 2010; Simonds, Wallerstein, Duran, & Villegas, 2013). A CBPR approach advises that consent forms be approved by relevant community leaders, that they articulate how research data will be used and who can access it, and that they describe the community-level risks and benefits of proposed studies (Flicker, Travers, Guta, McDonald, & Meagher, 2007). However, few university IRBs adhere to these guidelines (Flicker et al., 2007).
Our study had four objectives: 1) to evaluate the health literacy demands of informed consent documents (ICDs) by scoring their suitability for the intended audiences, 2) to compare these suitability scores with the level of health literacy recommended by IRB policies, 3) to examine the extent to which the ICDs incorporated relevant CBPR principles, and 4) to compare each ICD's level of CBPR incorporation with the guidance offered by its respective IRB.
Methods
To inform recommendations for an improved consent process, we examined ICDs created for studies conducted under the auspices of the Centers for Population Health and Health Disparities initiative and approved by their IRBs. Ten Centers encompassing 12 academic institutions were funded by the National Cancer Institute and the National Heart, Lung, and Blood Institute from May 2010 through April 2015. All Centers used CBPR methods and focused exclusively on underserved US populations, which are typically characterized by low socioeconomic status, low educational attainment, multiple clinical and psychosocial comorbidities, and poor health literacy (National Center for Education Statistics, 2003, 2013; Schillinger et al., 2002). The Centers’ service populations included Latinos/Hispanics, American Indians/Alaska Natives, African Americans, and residents of Appalachia. We reviewed their ICDs, as well as the templates on which they were based, in terms of their health literacy demands and their correspondence with CBPR principles. We also reviewed the IRB policies governing each Center’s ICDs.
Setting and Document Collection
In 2013, we contacted all 10 Principal Investigators of the Centers for Population Health and Health Disparities and requested the ICDs for all studies described in their original applications. These included letters, brochures, information sheets, and informed consent forms. The latter included consent forms created for interviews, focus groups, medical record review, collection of biospecimens, and combinations of procedures.
Informed Consent Documents (ICDs)
We categorized ICDs by type of material and evaluated each one for health literacy attributes and alignment with CBPR principles, as described below.
Readability.
The Simple Measure of Gobbledygook (SMOG) is a widely endorsed instrument that offers a readability formula recommended for use in healthcare (Wang, Miller, Schmitt, & Wen, 2013). This validated, reliable measure is calculated by counting the number of words with 3 or more syllables in three sets of 10 consecutive sentences selected from the beginning, middle, and end of the text. The count is then converted to an approximate grade level by using the validated SMOG conversion table.
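For illustration, a minimal sketch of this calculation in Python follows. It substitutes the commonly used regression form of the SMOG conversion for the lookup table, and a naive vowel-group heuristic for syllable counting; both substitutions are assumptions, since formal SMOG scoring counts syllables by pronunciation and reads the grade from the published conversion table.

```python
import math
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of consecutive vowels (y included).
    # Formal SMOG scoring counts syllables by pronunciation.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(sentences: list[str]) -> float:
    # 'sentences' should hold the 30-sentence sample: 10 consecutive
    # sentences each from the beginning, middle, and end of the text.
    polysyllables = sum(
        1
        for sentence in sentences
        for word in re.findall(r"[A-Za-z']+", sentence)
        if count_syllables(word) >= 3
    )
    # Regression form of the SMOG conversion, normalized to 30 sentences;
    # the study used the published conversion table instead.
    return 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291
```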
Suitability and comprehensibility.
The Suitability and Comprehensibility Assessment of Materials (SAM+CAM) is a validated, reliable tool to assess text-based materials for use by people with low health literacy. It has been successfully applied to ICDs (Helitzer, Hollis, Cotner, & Oestreicher, 2009). It scores materials as 0 (not suitable), 1 (adequate), 2 (superior), or “not applicable” on each of 22 variables in 6 categories: content, literacy demand, numeracy, graphics, layout/typography, and learning stimulation/motivation. The final SAM+CAM score is calculated by dividing the total number of points scored by the total number of possible points to yield a percentage (Helitzer et al., 2009). We applied the SAM+CAM as its creators recommend for health system materials, excluding the sixth category (learning stimulation/motivation) as well as two variables in the remaining categories that are not applicable to ICDs (summary/review and illustrations). These exclusions resulted in the following 5 categories, which collectively encompass 13 variables (a sketch of the scoring arithmetic follows this list):
1. Content
refers to the presence of explicit information about the purpose of the study and the behaviors desired of participants.
2. Literacy demand
assesses vocabulary, phrasing, and logical organization.
3. Numeracy
refers to the use of numbers, fractions, percentages, and calculations.
4. Graphics
refers to the inclusion and design of illustrations, tables, and graphs. SAM+CAM uses two variables for this category: appropriateness of illustrations and clarity of tables and charts. We excluded the Illustration variable and focused on evaluating tables and charts as suggested by Helitzer et al. (2009). Tables and charts are assessed on ease of understanding and inclusion of explanatory captions or “how to” instructions.
The creators of SAM+CAM recommend using the validated PMOSE/IKIRSCH document readability formula (Mosenthal & Kirsch, 1998) to score the complexity of tables and charts. The PMOSE/IKIRSCH score is then factored into the SAM+CAM score. For PMOSE/IKIRSCH, the complexity of a table is defined by two factors: structure and density. These refer to the number of “labels” or headings used for columns and rows (structure) and the number of “items” or individual data cells (density). Higher scores denote higher literacy demands, with scores grouped into four categories: “very low” (4 to 8 years of schooling); “low” (8 to 12 years); “moderate” (12 to 14 years); and “very high” (16 years or post-bachelor’s degree). A sketch of the structure and density counts also follows the category list below.
5. Layout/typography
refers to the spatial and visual organization of text. Higher SAM+CAM scores are given to documents that use wide margins and ample white space to minimize density, topic headings to organize content, and appropriate font sizes and styles to enhance readability.
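To make the scoring arithmetic concrete, the following is a minimal sketch of the SAM+CAM percentage calculation and the suitability bands used in this study. The variable names and ratings are illustrative, not drawn from any actual ICD, and the handling of “not applicable” ratings (excluding them from the possible-point total) reflects our reading of the SAM+CAM procedure.

```python
# Sketch of the SAM+CAM percentage: total points scored divided by total
# possible points, with "not applicable" variables (None) excluded.
def sam_cam_percentage(ratings: dict[str, int | None]) -> float:
    applicable = [r for r in ratings.values() if r is not None]
    return 100 * sum(applicable) / (2 * len(applicable))  # max 2 points each

def suitability_band(score: float) -> str:
    # Bands used in this study: 0-39% not suitable, 40-69% adequate,
    # 70-100% superior.
    if score < 40:
        return "not suitable"
    return "adequate" if score < 70 else "superior"

ratings = {"purpose": 2, "writing_style": 1, "number_presentation": 2,
           "tables_charts": None, "layout": 1, "typography": 2}
score = sam_cam_percentage(ratings)
print(f"{score:.0f}% -> {suitability_band(score)}")  # 80% -> superior
```

For the graphics category, a loose sketch of the two PMOSE/IKIRSCH inputs appears below, assuming that the first row of a simple table holds column labels and the first cell of each remaining row holds a row label. The mapping from these counts to the grade bands comes from the published formula (Mosenthal & Kirsch, 1998) and is not reproduced here.

```python
# Approximate the two PMOSE/IKIRSCH complexity inputs for a simple table.
def pmose_ikirsch_inputs(table: list[list[str]]) -> tuple[int, int]:
    header, *body = table
    structure = len(header) + len(body)           # column labels + row labels
    density = sum(len(row) - 1 for row in body)   # data cells minus row labels
    return structure, density

schedule = [
    ["Visit", "What happens", "Time needed"],
    ["Visit 1", "Survey and blood draw", "60 minutes"],
    ["Visit 2", "Follow-up interview", "30 minutes"],
]
print(pmose_ikirsch_inputs(schedule))  # (5, 4): 5 labels, 4 data cells
```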
Alignment with CBPR principles.
Using a published statement of CBPR principles (Israel, Schulz, Parker, & Becker, 1998), we developed a checklist of criteria to determine whether the ICDs addressed community-level principles by offering specific kinds of information, as follows: 1) definition of the community, 2) a requirement for community-level approval, 3) a statement that the community has control over research data, 4) process for communities to withdraw from the study, and 5) a statement of community-level risks and benefits.
Scoring.
The first author trained two research assistants to apply SMOG, SAM+CAM, PMOSE/IKIRSCH, and the checklist of CBPR principles. To ensure accuracy, both research assistants consulted with the first author as they worked. Each independently applied these assessments to score the ICDs, and the two then discussed and resolved any discrepancies to reach inter-rater consensus.
IRB Website Analysis
Codebook and review process.
We developed a codebook to assess whether the institutional websites that provided IRB policies included recommendations consistent with our five health literacy categories or mentioned relevant CBPR principles. Then, in May and June 2014, we visited each site and evaluated the IRB policies. Two sites were excluded because they were restricted to internal users. Starting with each participating institution’s main IRB website, the two research assistants independently timed how long it took to access instructions for consent forms. Then, they spent a maximum of 20 minutes extracting sample text that illustrated codebook categories. Three areas were evaluated: consent form templates, health literacy criteria, and CBPR principles.
We calculated SMOG scores for consent form templates because investigators use them to develop their own consent forms. Thus, templates with high literacy demands are likely to yield consent forms with similarly high demands. The SMOG score served as a proxy indicator of the corresponding IRB’s attention to literacy demands. Then, we searched the text at each website for key words related to our five health literacy categories. We noted all references to each category and recorded IRB recommendations verbatim. Finally, we assessed whether each website mentioned CBPR or a contact person for CBPR projects and whether it addressed five community-level principles: 1) definition of the community, 2) a requirement for community-level approval, 3) a statement that the community has control over research data, 4) a process for communities to withdraw from the study, and 5) a statement of community-level risks and benefits.
Comparisons between IRB Websites and ICDs.
We used SPSS 24.0 (IBM Corp., 2016) to calculate Spearman correlations between the number of criteria mentioned by each Center’s IRB website and the mean SAM+CAM scores for ICDs produced by that Center.
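For illustration, the same correlation can be computed outside SPSS. The sketch below is an assumed equivalent using Python’s scipy.stats; the two lists mirror the bottom rows of Table 2 (total SAM+CAM criteria mentioned on each IRB website, and the mean SAM+CAM score for the corresponding ICDs).

```python
# Spearman rank correlation between website criteria counts and mean
# SAM+CAM scores, computed with scipy in place of SPSS 24.0.
from scipy.stats import spearmanr

criteria_mentioned = [17, 1, 17, 6, 7, 16, 8, 9, 5, 7]
mean_sam_cam_score = [1, 1, 2, 2, 2, 2, 2, 2, 2, 2]

rho, p = spearmanr(criteria_mentioned, mean_sam_cam_score)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```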
Results
Sample Description
We collected 107 ICDs used by 32 research studies at 10 Centers. After excluding 10 ICDs in the form of scripts intended to be read aloud, our final sample comprised 97 ICDs, as detailed in Table 1. The largest single category of ICDs comprised consent forms for adult research participants (n = 49). Other categories comprised assent forms for adult and child participants and consent and assent forms for clinic staff and providers. Also included in our ICD sample were 20 addenda in the form of brochures, letters, and flyers.
Table 1.
SAM+CAM and SMOG scores for informed consent materials by type of form
| Type of Material | N (%) | Adequate,^a N (%) | Superior,^a N (%) | Mean SAM+CAM Score (SD) | SMOG Readability Score (SD) |
|---|---|---|---|---|---|
| **Adult Participant Consent Forms** | | | | | |
| Multiple consent^b | 9 (9) | 0 | 9 (100) | 80 (6) | 12 (2) |
| Intervention^c | 22 (23) | 3 (14) | 19 (86) | 80 (8) | 12 (2) |
| Survey | 3 (3) | 0 | 3 (100) | 74 (3) | 12 (0.6) |
| Interview | 4 (4) | 0 | 4 (100) | 78 (7) | 10 (2) |
| Focus group | 5 (5) | 0 | 5 (100) | 75 (4) | 12 (1) |
| Audio-recording of clinic visit | 1 (1) | 1 (100) | 0 | 40 | 16 |
| Biospecimen | 2 (2) | 1 (50) | 1 (50) | 74 (8) | 13 (2) |
| Innovative method | 3 (3) | 1 (33) | 2 (67) | 75 (5) | 12 (1) |
| Subtotal | 49 (50) | 6 (12) | 43 (88) | 78 (9) | 12 (2) |
| **Adult Participant Assent Forms** | | | | | |
| Survey | 1 (1) | 1 (100) | 0 | 68 | 12 |
| Interview (community leader) | 1 (1) | 1 (100) | 0 | 59 | 13 |
| Focus group | 1 (1) | 1 (100) | 0 | 68 | 13 |
| Multiple assent^b | 4 (4) | 0 | 4 (100) | 76 (2) | 12 (0.5) |
| Intervention | 2 (2) | 2 (100) | 0 | 55 (0.3) | 14 (3) |
| Subtotal | 9 (9) | 5 (56) | 4 (44) | 68 (9) | 13 (1) |
| **Staff or Provider Consent Forms** | | | | | |
| Survey | 1 (1) | 1 (100) | 0 | 68 | 10 |
| Interview | 4 (4) | 1 (25) | 3 (75) | 74 (6) | 11 (2) |
| Focus group | 1 (1) | 1 (100) | 0 | 68 | 11 |
| Intervention | 1 (1) | 0 | 1 (100) | 73 | 11 |
| Audio-recording of clinic visit | 2 (2) | 1 (50) | 1 (50) | 71 (3) | 12 (1) |
| Subtotal | 9 (9) | 4 (44) | 5 (56) | 72 (4) | 11 (2) |
| **Staff or Provider Assent Forms** | | | | | |
| Survey | 4 (4) | 1 (25) | 3 (75) | 75 (5) | 12 (0.5) |
| Interview | 1 (1) | 0 | 1 (100) | 73 | 12 |
| Subtotal | 5 (5) | 1 (20) | 4 (80) | 75 (4) | 12 (2) |
| **Child Assent** | | | | | |
| Focus group | 2 (2) | 1 (50) | 1 (50) | 71 (10) | 11 (3) |
| Intervention | 3 (3) | 3 (100) | 0 | 67 (3) | 10 (0.6) |
| Subtotal | 5 (5) | 4 (80) | 1 (20) | 68 (6) | 10 (2) |
| **Addenda**^d | 20 (21) | 17 (85) | 3 (15) | 61 (11) | 13 (2) |
| **Total** | 97 (100) | 37 (38) | 60 (62) | 72 (11) | 12 (2) |
Abbreviations: SAM+CAM = the Suitability and Comprehensibility Assessment of Materials, SD = standard deviation, SMOG = Simple Measure of Gobbledygook
Notes:
a. The SAM+CAM percentage scores were calculated by dividing the sum of ratings by the total possible ratings, and are grouped as follows: 0–39% = not suitable; 40–69% = adequate; and 70–100% = superior. “Not suitable” means not suitable for an audience with low health literacy. None of the materials we reviewed received a SAM+CAM score of “not suitable.”
b. These forms asked people to consent to more than one study procedure: for example, an interview, a survey, and access to medical records.
c. Intervention studies typically included an educational session and some other variable (such as a special diet for participants) manipulated by the investigators. These studies often required multiple procedures, such as a survey, an interview, and a blood draw. The consent forms often mentioned randomization.
d. These included HIPAA forms, flyers, brochures, and other materials used to enhance the consent process.
Scoring Informed Consent Documents (ICDs)
Readability.
The mean SMOG score for all ICDs was grade 12 (standard deviation 1.9, range 8–16). Across types of ICDs, mean SMOG scores ranged from grade 9.7 through grade 16 (Table 1). Across Centers, mean SMOG scores ranged from grade 10.1 through grade 13.7 (Table 2).
Table 2.
Comparison of SAM+CAM criteria for Institutional Review Board websites to SAM+CAM scores for informed consent documents
| Institution Identification Letter | A | B | C | D | E | F^a | G, H^b | I, J^b | K^a | L |
|---|---|---|---|---|---|---|---|---|---|---|
| Number of ICDs | 14 | 3 | 9 | 8 | 5 | 13 | 6 | 7 | 11 | 21 |
| SMOG Score for IRB ICD Templates | 15 | 14 | 14 | 15 | 12 | 12 | 11 | 12 | 11 | 12 |
| Mean SMOG for ICDs | 14 | 12 | 11 | 13 | 11 | 11 | 12 | 13 | 10 | 12 |
| Readability Guidance | No | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes |
| Mean SAM+CAM Content Score^c | 1 | 1 | 2 | 2 | 1 | 1 | 2 | 1 | 1 | 2 |
| Number of Content Criteria Mentioned (N = 1) | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Mean SAM+CAM Literacy Demand Score^c | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 1 |
| Number of Literacy Criteria Mentioned (N = 5) | 5 | 0 | 4 | 4 | 3 | 4 | 2 | 5 | 3 | 0 |
| Mean SAM+CAM Numeracy Score^c | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 2 | 2 |
| Number of Numeracy Criteria Mentioned (N = 2) | 0 | 0 | 2 | 1 | 2 | 2 | 2 | 1 | 0 | 0 |
| Mean SAM+CAM Graphics Score^c | 1 | n/a | n/a | 2 | n/a | 2 | 1 | 2 | 2 | 2 |
| Number of Graphics Criteria Mentioned (N = 1) | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 |
| Mean SAM+CAM Layout/Typography Score^c | 2 | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 2 | 1 |
| Number of Layout/Typography Criteria Mentioned (N = 7) | 6 | 0 | 3 | 3 | 0 | 7 | 2 | 1 | 0 | 4 |
| Mean SAM+CAM Score for ICDs^c | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| Total Number of SAM+CAM Criteria Mentioned on IRB Website (N = 17) | 17 | 1 | 17 | 6 | 7 | 16 | 8 | 9 | 5 | 7 |
Abbreviations: ICD = informed consent document; IRB = Institutional Review Board; SAM+CAM = Suitability and Comprehensibility Assessment of Materials; SMOG = Simple Measure of Gobbledygook
Notes:
a. Website was not publicly accessible.
b. Two institutions (with 2 IRBs) under one Center for Population Health and Health Disparities.
c. SAM+CAM mean scores were calculated for each criterion and are grouped as follows: 0 = not suitable; 1 = adequate; and 2 = superior. “Not suitable” means not suitable for an audience with low health literacy. None of the materials received a SAM+CAM score of “not suitable.”
Suitability and comprehensibility.
The frequency of SAM+CAM scores for each type of ICD is shown in Table 1. SAM+CAM scores by Center are shown in Table 2. No ICDs were scored as “not suitable,” while 37 (38%) were scored as “adequate” and 60 (62%) as “superior.” Individual components of the SAM+CAM score revealed wider variability (Table 3).
Table 3.
SAM+CAM scores by category
| SAM+CAM Category | Materials Assessed, N (%) | Not Suitable, N (%) | Adequate, N (%) | Superior, N (%) | Mean SAM+CAM Score (SD) |
|---|---|---|---|---|---|
| Content (purpose, summary, credibility, desired reader behavior) | 97 (100) | 17 (17.5) | 10 (10.3) | 70 (72.2) | 71.7 (30.5) |
| Literacy Demand (writing style, vocabulary helpers, confusion reducers, context, scope, length) | 97 (100) | 0 | 28 (28.9) | 69 (71.1) | 70.8 (13.0) |
| Numeracy (number presentation, calculation) | 85 (87.6) | 0 | 7 (8.2) | 78 (91.8) | 95.6 (14.0) |
| Graphics (clarity of tables, charts, and graphs) | 36 (37.1) | 4 (11.1) | 7 (19.4) | 25 (69.4) | 80.7 (32.1) |
| Layout/Typography (layout, typography, sub-headings, organizers) | 97 (100) | 3 (3.1) | 64 (66.0) | 30 (30.9) | 67.0 (15.4) |

Abbreviations: SAM+CAM = Suitability and Comprehensibility Assessment of Materials; SD = standard deviation
1. Content.
In this category, a high score requires an explicit statement of the purpose of the research and a clear description of the actions and behaviors desired of participants. Most ICDs (72%) fulfilled these criteria, consistent with the fact that such information is a standard requirement for ICDs. However, 18% of ICDs were scored as “not suitable,” typically because they were addenda that requested additional consent. For example, one addendum asked participants for permission to contact their pharmacist to obtain information about prescription medications. These ICDs were often short and lacked an explicit statement of purpose.
2. Literacy demand.
In this category, 29% were scored as “adequate” and 71% as “superior.” ICDs that included “confusion reducers” (explanatory expressions and analogies, avoidance of ambiguous pronouns) and “vocabulary helpers” (plain-language terms and examples) tended to have better scores. Overall, 53% of ICDs used and defined at least one specialist term, and 82% used at least one specialist term that was not defined. Among defined terms were confidentiality, IRB, BMI, HIPAA, and randomized; among undefined terms were risk, catalyst, brachial artery, and anthropometry.
3. Numeracy.
Of the 85 ICDs that used numbers, 92% were scored as “superior” for numeracy and 8% as “adequate.” ICDs typically included numbers to express time, the count of research participants, and the dollar amount of incentives. Even though one ICD provided an example of a simple calculation, it still received a “superior” score. All “adequate” scores were due to the use of fractions or percentages that increased numeracy demand.
4. Graphics.
ICDs scored as “superior” made limited use of tables and charts. “Superior” scores were also associated with relatively simple tables and charts (i.e., those with low PMOSE/IKIRSCH scores) that included explanatory captions. Five ICDs used tables to describe risks or study procedures, and two used flow charts to describe procedures. All these were assessed as appropriate for people with low educational levels. One ICD requested dates and locations of pharmacy use, requiring “moderate” education (i.e., college level). Another ICD included tables that were blank and required participants to provide information, such as their medications and the name of their pharmacy. These tables resulted in a level of complexity that reduced the overall SAM+CAM score to “adequate.” We also scored 27 ICDs that required participants to provide other types of information. For example, one ICD stated, “I allow you to store any leftover blood and urine for future research,” followed by checkboxes and a space for the date. Because this information was required for study purposes, and because the format was scored as “very easy to understand” by PMOSE/IKIRSCH, the corresponding SAM+CAM score was “superior.”
5. Layout/typography.
In this category, 3% of ICDs were scored as “not suitable,” 66% as “adequate,” and 31% as “superior.” Low scores typically resulted from overly dense text, while higher scores were associated with font sizes of at least 12 points, avoidance of narrow font styles, and the use of subheadings and lists to improve document organization.
CBPR Principles
Six ICDs (6% of the total), provided by four Centers, defined the study community. However, only two ICDs referred to potential community-level benefits, and only the ICDs for a single Center studying American Indian/Alaska Native health mentioned community-level approval. None of the ICDs discussed community-level risks or set forth a process for community withdrawal from research participation.
IRB Website Review
SAM+CAM.
For the 10 publicly available websites, research assistants needed a mean of 58 seconds to access IRB instructions (range 30–120 seconds). Eight sites recommended a reading level of grade 8 for ICDs (see Table 4). All provided sample consent form text, for which the mean reading level across sites was grade 12 (range 11–13). One site provided guidance on all the health literacy variables that we assessed, and one provided guidance on reading level only.
Table 4.
Health literacy criteria addressed by Institutional Review Board websites
| Health Literacy Criteria Provided | Sites, N | Example Text from IRB Websites |
|---|---|---|
| **Reading Level** | | |
| Guidance on grade reading level | 8 | Consent documents should be written at an 8th grade reading level or less for the average adult population. |
| **Content** | | |
| Guidance on the purpose of the document | 8 | Start with an introductory sentence describing the primary purpose of the research as stated in the protocol: State what the study is designed to discover or establish. |
| **Literacy Demand** | | |
| Active, direct writing style | 5 | Whenever possible use active voice and break up the text into short straightforward sections. |
| Personal, conversational writing style | 3 | Use words familiar to the audience. Write consent form in conversational style, as if you were speaking to the reader. |
| Common, explicit words that are clear and specific in meaning | 7 | Lay language should be used. Avoid technical or professional language used in grant submissions or with peers. |
| Simple sentences | 8 | Use short, simple and direct sentences. Use short sentences and limit paragraphs to one main idea. Average sentence length of 15 words or less. |
| Explain or clarify difficult words | 7 | Define terms which might not be familiar to the average person the first time they are mentioned. Avoid research and medical jargon whenever possible. If you must use a complicated term, define it in plain language and provide an example, an analogy, or a visual aid. Scientific, technical, and medical terms must be defined or explained in lay terms. |
| **Numeracy** | | |
| Use of numbers or numerical terms | 4 | Define terms or use lay terms. Include definitions for specific research design features (e.g., double-blind, randomization, placebo-controlled, dose escalation) if these will help participants understand the study. |
| Terms should be defined | 6 | When describing randomization for 2 groups use, “like the flip of a coin”; for more than 2 groups, use “like drawing numbers from a hat.” |
| **Graphics** | | |
| Use of charts, graphs, or tables | 5 | Use photos, graphics or tables if they will help clarify procedures. Use diagrams as helpful additions to narrative. |
| **Layout** | | |
| General layout and organization | 5 | Leave a 1-inch margin around the entire document. Use subheadings, bulleted lists, tables, flow charts, etc. to improve communication and readability. |
| Adequate white space | 2 | Layout balances white space with words and graphics. |
| Visual cueing devices | 3 | Underline, bold, or boxes (rather than all caps or italics) to give emphasis. |
| Size of font | 6 | 12 point at least, and consider larger given audience. Easy to read. |
| Type of font | 3 | Use black Arial or similar font, preferably 12-point size, or larger when appropriate for the study population. |
| Use of headings | 4 | Titles, subtitles, and other headers help to clarify organization of text. Section headings should be in question format. |
CBPR principles.
None of the IRB websites mentioned CBPR principles in their policies or guidance on ICDs.
Comparison of IRB website criteria and ICDs by Center.
We found no correlation between mean SMOG scores for ICDs and SMOG scores for the corresponding IRB templates (Table 2). However, mean SMOG scores for ICDs from Centers whose IRBs offered readability guidelines were lower, and thus more readable, than those from Centers without such guidance (grade 12 vs. grade 13.5, p < 0.05). In addition, Centers whose IRBs did not offer guidelines regarding a statement of the purpose of research had significantly lower SAM+CAM scores in the content category.
Discussion
We examined the health literacy demands and adherence to CBPR principles of 97 ICDs created by the Centers for Population Health and Health Disparities. We also reviewed the IRB policies that governed these documents, as stated on websites associated with the same Centers. Although most IRB policies recommended a reading level of grade 8, in agreement with specialists in health literacy (Doak, Doak, & Root, 1996), all ICDs in our sample exceeded that level. Likewise, the consent form templates provided by IRBs imposed a remarkably high literacy demand, with an average reading level of grade 12. These findings surprised us, because all the Centers we examined conduct community-based research with underserved populations. We expected their ICDs to be more readily comprehensible than those designed for less diverse populations, which have been the focus of previous work. Nevertheless, many studies have reported that ICDs often exceed the grade 8 reading level, regardless of their priority populations (Kass, Chaisson, Taylor, & Lohse, 2011; Paasche-Orlow et al., 2003; Terblanche & Burgess, 2010; Terranova et al., 2012).
Our use of SAM+CAM enabled us to conduct a more comprehensive assessment of health literacy than most prior studies, which have focused solely on readability. Despite the relatively high reading levels of the ICDs we analyzed, all received a SAM+CAM score of “adequate” or “superior.” While this discrepancy might seem counterintuitive, SAM+CAM and SMOG measure different aspects of health literacy demand. SMOG limits its focus to readability by using a simple formula involving sentence length and polysyllabic word count; SAM+CAM provides a global measure of health literacy that goes well beyond readability.
Most ICDs in our study were drafted with straightforward sentences in the active voice. According to SAM+CAM, however, their use of vocabulary helpers and confusion reducers was suboptimal, even though several IRB websites provided links to glossaries recommending substitutions for difficult words. Accordingly, these ICDs received suboptimal scores for literacy demand. Regarding layout and typography, most ICDs were scored as “adequate” rather than “superior” because of overly dense text with limited white space. Nevertheless, most used bold fonts, bullets, and headings, which can help to simplify and organize text.
We followed recommendations by the creators of SAM+CAM to treat ICDs differently from health education materials. Because ICDs do not typically include a summary, illustrations, or learning stimulation, we excluded those variables in calculating SAM+CAM. However, several studies on the acceptability of ICDs have found that pictures make consent forms more engaging (Campbell, Goldman, Boccia, & Skinner, 2004; Institute of Medicine, 2015; Murphy, O’Keefe, & Kaufman, 1999).
The SAM+CAM scores we calculated must be interpreted in light of the intended purposes of ICDs. The materials in our study were generally easy for people with low literacy to understand, making them suitable for documenting agreement to participate in research. However, if we assume that ICDs should also educate participants about a study, it becomes appropriate to apply the SAM+CAM categories for scoring educational materials. In that case, most of our SAM+CAM scores would have been reduced by 10%–22%, and none of the ICDs would be considered “superior.”
Both CBPR and health literacy have emerged as important factors in the conduct of research with vulnerable populations, especially in facilitating research participation (George et al., 2014). However, instead of following CBPR guidelines to foreground community empowerment, academic IRBs continue to adhere to a biomedical model of individual risks and benefits (Flicker et al., 2007; Malone, Yerger, McGruder, & Froelicher, 2006). Their ICDs are designed to protect research institutions rather than communities. As CBPR gains recognition, institutions are advised to heed its ethical requirements in conducting research with underserved and low-literate populations. Consistent with our own findings, a recent assessment noted that few university IRBs require ICDs to describe community risks and benefits, community consent, or information about community-level control over data (Flicker et al., 2007). Although IRBs are not mandated to protect communities and should not expand their role without expertise in CBPR, some researchers have recommended that CBPR projects be approved not only by IRB representatives but also by community members.
We find it troubling that consent forms designed for community-based research with vulnerable populations received inappropriate SMOG scores and offered little acknowledgment of CBPR principles. Nevertheless, our evaluation revealed that ICDs accurately reflect the templates provided by IRBs at each Center. IRBs currently offer limited guidance for addressing health literacy and community-level ethics in the informed consent process. Even the US Office for Human Research Protections, which registers IRBs, does not provide explicit recommendations on these issues. Furthermore, despite the ready availability of models for improving informed consent, such models are not systematically recommended by IRBs.
After we completed our website assessments, two important resources for improving ICDs were published: a summary of a workshop on informed consent and health literacy sponsored by the Institute of Medicine, and a toolkit for developing ICDs from the Agency for Healthcare Research and Quality (AHRQ, 2015; Institute of Medicine, 2015). Recommendations in the toolkit encompass several SAM+CAM variables, such as writing short, simple sentences in the active voice; using headings, large fonts, and wide margins; and adding pictures to enhance engagement. Other recommended health literacy practices might also improve the consent process. For example, the teach-back method, whereby participants confirm their comprehension by explaining procedures in their own words, is often cited as a strategy to improve health literacy (DeWalt et al., 2011). This strategy has been helpful in the context of surgical consent documents, making it a promising approach for future exploration (Abrams & Earles, 2007; Lorenzen et al., 2008; Miller, Abrams, Earles, Phillips, & McCleeary, 2011).
Despite these recommendations, incorporating principles of health literacy and CBPR in the consent process remains challenging, in part because ICDs are currently designed as legal rather than educational documents. Our results demonstrate that the standard model for ICDs, involving pages of densely packed text without illustrations, does not promote reading comprehension. Community members have expressed preferences for illustrations and for the incorporation of community voices in consent documents (Institute of Medicine, 2015), although the effectiveness of such features and the appropriate method for implementing them require further research (Nishimura et al., 2013). Future studies should examine the effectiveness of treating ICDs as health education materials and of drafting them accordingly.
To our knowledge, this was the first study to evaluate IRB policies in terms of their adherence to health literacy principles. A key strength of our approach is that we went beyond readability to examine features, such as layout and graphics, that also affect comprehension of ICDs. We also note certain limitations. First, ICDs are only one aspect of the consent process. We did not evaluate other interactions between investigators and potential research participants, such as providing a clear oral description of the proposed research, in language understandable to members of the study community, before offering any ICDs. Second, we analyzed only English-language documents, even though some studies used only Spanish-language materials in the field. Third, we limited our sample to studies supported by the Centers for Population Health and Health Disparities. Nonetheless, these Centers were specifically funded to address the needs of underserved communities and thus might be expected to optimize ICDs for use in those communities. Finally, the Centers included in our study may have used different readability tests for their documents, some of which generate lower grade-level scores; tools for assessing reading level can vary by as much as 5 grade levels on the same text (Wang et al., 2013). We used the SMOG formula because it has proved to be the most reliable of these tools (Wang et al., 2013). Applying a standardized, valid, and reliable tool such as the SMOG to all ICDs across IRBs would allow researchers to evaluate their text and modify it to meet the appropriate grade-level readability.
We acknowledge the challenges experienced by investigators who need to reconcile IRB requirements with the expectations of their community partners, and we do not recommend imposing any additional barriers in this process. Research methods and processes can be very complex, making it challenging for researchers to simplify reading levels of consent forms without misleading participants. We simply advise a shift away from perceptions of ICDs as tools intended primarily to document participant consent. We encourage researchers and institutions to regard them instead as a way to educate potential study participants about the purposes, processes, and benefits of research.
Acknowledgements
We acknowledge Partnerships for Native Health at the University of Washington for their support. This project was funded through a Diversity Supplement to grant U54 CA153498 (PIs: Buchwald & Henderson) sponsored by the National Cancer Institute. We acknowledge Raymond M. Harris, PhD, for editing assistance and Christine Martin, Sydney Switzer, Miranda Margetts, and Mattea Grant for assistance in analyzing printed documents and institutional websites.
References
- Aboumatar HJ, Carson KA, Beach MC, Roter DL, & Cooper LA (2013). The impact of health literacy on desire for participation in healthcare, medical visit communication, and patient reported outcomes among patients with hypertension. J Gen Intern Med, 28(11), 1469–1476. doi: 10.1007/s11606-013-2466-5
- Abrams MA, & Earles B (2007). Developing an informed consent process with patient understanding in mind. N C Med J, 68(5), 352–355.
- AHRQ. (2015). The AHRQ informed consent and authorization toolkit for minimal risk research. Retrieved from http://www.ahrq.gov/funding/policies/informedconsent/
- Braunstein JB, Sherber NS, Schulman SP, Ding EL, & Powe NR (2008). Race, medical researcher distrust, perceived harm, and willingness to participate in cardiovascular prevention trials. Medicine (Baltimore), 87(1), 1–9. doi: 10.1097/MD.0b013e3181625d78
- Brown BA, Long HL, Gould H, Weitz T, & Milliken N (2000). A conceptual model for the recruitment of diverse women into research studies. J Womens Health Gend Based Med, 9(6), 625–632. doi: 10.1089/15246090050118152
- Campbell FA, Goldman BD, Boccia ML, & Skinner M (2004). The effect of format modifications and reading comprehension on recall of informed consent information by low-income parents: a comparison of print, video, and computer-based presentations. Patient Educ Couns, 53(2), 205–216. doi: 10.1016/S0738-3991(03)00162-9
- Chen MS Jr., Lara PN, Dang JH, Paterniti DA, & Kelly K (2014). Twenty years post-NIH Revitalization Act: enhancing minority participation in clinical trials (EMPaCT): laying the groundwork for improving minority clinical trial accrual: renewing the case for enhancing minority participation in cancer clinical trials. Cancer, 120 Suppl 7, 1091–1096. doi: 10.1002/cncr.28575
- Corbie-Smith G, Thomas SB, Williams MV, & Moody-Ayers S (1999). Attitudes and beliefs of African Americans toward participation in medical research. J Gen Intern Med, 14(9), 537–546.
- De las Nueces D, Hacker K, DiGirolamo A, & Hicks LS (2012). A systematic review of community-based participatory research to enhance clinical trials in racial and ethnic minority groups. Health Serv Res, 47(3 Pt 2), 1363–1386. doi: 10.1111/j.1475-6773.2012.01386.x
- DeWalt DA, Broucksou KA, Hawk V, Brach C, Hink A, Rudd R, & Callahan L (2011). Developing and testing the health literacy universal precautions toolkit. Nurs Outlook, 59(2), 85–94. doi: 10.1016/j.outlook.2010.12.002
- Doak CC, Doak LG, & Root JH (1996). Teaching patients with low literacy skills. Retrieved from http://www.hsph.harvard.edu/healthliteracy/resources/teaching-patients-with-low-literacy-skills/
- Evans KR, Lewis MJ, & Hudson SV (2012). The role of health literacy on African American and Hispanic/Latino perspectives on cancer clinical trials. J Cancer Educ, 27(2), 299–305. doi: 10.1007/s13187-011-0300-5
- Flicker S, Travers R, Guta A, McDonald S, & Meagher A (2007). Ethical dilemmas in community-based participatory research: recommendations for institutional review boards. J Urban Health, 84(4), 478–493. doi: 10.1007/s11524-007-9165-7
- Flory J, & Emanuel E (2004). Interventions to improve research participants’ understanding in informed consent for research: a systematic review. JAMA, 292(13), 1593–1601. doi: 10.1001/jama.292.13.1593
- Ford JG, Howerton MW, Bolen S, Gary TL, Lai GY, Tilburt J, . . . Bass EB (2005). Knowledge and access to information on recruitment of underrepresented populations to cancer clinical trials (Prepared by the Johns Hopkins University Evidence-based Practice Center under Contract No. 290-02-0018; AHRQ Publication No. 05-E019-1). Rockville, MD: Agency for Healthcare Research and Quality.
- Ford ME, Siminoff LA, Pickelsimer E, Mainous AG, Smith DW, Diaz VA, . . . Tilley BC (2013). Unequal burden of disease, unequal participation in clinical trials: solutions from African American and Latino community members. Health Soc Work, 38(1), 29–38.
- George S, Duran N, & Norris K (2014). A systematic review of barriers and facilitators to minority research participation among African Americans, Latinos, Asian Americans, and Pacific Islanders. Am J Public Health, 104(2), e16–31. doi: 10.2105/AJPH.2013.301706
- Goodman M, Finnegan R, Mohadjer L, Krenzke T, Hogan J, Owen E, & Provasnik S (2013). Literacy, numeracy, and problem solving in technology-rich environments among U.S. adults: Results from the Program for the International Assessment of Adult Competencies 2012. First look (NCES 2014-008). Retrieved from https://nces.ed.gov/pubs2014/2014008.pdf
- Grady C (2015). Enduring and emerging challenges of informed consent. N Engl J Med, 372(9), 855–862. doi: 10.1056/NEJMra1411250
- Hawk ET, Habermann EB, Ford JG, Wenzel JA, Brahmer JR, Chen MS Jr., . . . Vickers SM (2014). Five National Cancer Institute-designated cancer centers’ data collection on racial/ethnic minority participation in therapeutic trials: a current view and opportunities for improvement. Cancer, 120 Suppl 7, 1113–1121. doi: 10.1002/cncr.28571
- Helitzer D, Hollis C, Cotner J, & Oestreicher N (2009). Health literacy demands of written health information materials: an assessment of cervical cancer prevention materials. Cancer Control, 16(1), 70–78.
- IBM Corp. (2016). IBM SPSS Statistics for Windows, Version 24.0. Armonk, NY: IBM Corp.
- Institute of Medicine. (2015). Informed consent and health literacy: Workshop summary. Washington, DC: The National Academies Press.
- Israel BA, Schulz AJ, Parker EA, & Becker AB (1998). Review of community-based research: assessing partnership approaches to improve public health. Annu Rev Public Health, 19, 173–202. doi: 10.1146/annurev.publhealth.19.1.173
- Kass NE, Chaisson L, Taylor HA, & Lohse J (2011). Length and complexity of US and international HIV consent forms from federal HIV network trials. J Gen Intern Med, 26(11), 1324–1328. doi: 10.1007/s11606-011-1778-6
- Kirsch IS, Jungeblut A, Jenkins L, & Kolstad A (2002). Adult literacy in America: A first look at the findings of the National Adult Literacy Survey. Washington, DC: National Center for Education Statistics.
- Kutner M, Greenberg E, Jin Y, & Paulsen C (2006). The health literacy of America’s adults: Results from the 2003 National Assessment of Adult Literacy (NCES 2006-483). Washington, DC: National Center for Education Statistics.
- Lorenzen B, Melby CE, & Earles B (2008). Using principles of health literacy to enhance the informed consent process. AORN J, 88(1), 23–29.
- Malone RE, Yerger VB, McGruder C, & Froelicher E (2006). “It’s like Tuskegee in reverse”: a case study of ethical tensions in institutional review board review of community-based participatory research. Am J Public Health, 96(11), 1914–1919. doi: 10.2105/AJPH.2005.082172
- Margolis DJ, Hampton M, Hoffstad O, Malay DS, & Thom S (2015). Health literacy and diabetic foot ulcer healing. Wound Repair Regen, 23(3), 299–301. doi: 10.1111/wrr.12311
- Miller MJ, Abrams MA, Earles B, Phillips K, & McCleeary EM (2011). Improving patient-provider communication for patients having surgery: patient perceptions of a revised health literacy-based consent process. J Patient Saf, 7(1), 30–38. doi: 10.1097/PTS.0b013e31820cd632
- Mosenthal PB, & Kirsch IS (1998). A new measure for assessing document complexity: The PMOSE/IKIRSCH document readability formula. Journal of Adolescent & Adult Literacy, 41(8), 638–657.
- Moser DK, Robinson S, Biddle MJ, Pelter MM, Nesbitt TS, Southard J, . . . Dracup K (2015). Health literacy predicts morbidity and mortality in rural patients with heart failure. J Card Fail, 21(8), 612–618. doi: 10.1016/j.cardfail.2015.04.004
- Murphy DA, O’Keefe ZH, & Kaufman AH (1999). Improving comprehension and recall of information for an HIV vaccine trial among women at risk for HIV: reading level simplification and inclusion of pictures to illustrate key concepts. AIDS Educ Prev, 11(5), 389–399.
- National Center for Education Statistics. (2003). National Assessment of Adult Literacy. Retrieved from http://nces.ed.gov/naal/kf_demographics.asp
- National Center for Education Statistics. (2013). Program for the International Assessment of Adult Competencies. Retrieved from http://nces.ed.gov/surveys/piaac/
- Nishimura A, Carey J, Erwin PJ, Tilburt JC, Murad MH, & McCormick JB (2013). Improving understanding in the research informed consent process: a systematic review of 54 interventions tested in randomized control trials. BMC Med Ethics, 14, 28. doi: 10.1186/1472-6939-14-28
- Paasche-Orlow MK, Taylor HA, & Brancati FL (2003). Readability standards for informed-consent forms as compared with actual readability. N Engl J Med, 348(8), 721–726. doi: 10.1056/NEJMsa021212
- Paskett ED, Reeves KW, McLaughlin JM, Katz ML, McAlearney AS, Ruffin MT, . . . Gehlert S (2008). Recruitment of minority and underserved populations in the United States: the Centers for Population Health and Health Disparities experience. Contemp Clin Trials, 29(6), 847–861. doi: 10.1016/j.cct.2008.07.006
- Putnam LR, Chang CM, Rogers NB, Podolnick JM, Sakhuja S, Matusczcak M, . . . Tsao K (2015). Adherence to surgical antibiotic prophylaxis remains a challenge despite multifaceted interventions. Surgery, 158(2), 413–419. doi: 10.1016/j.surg.2015.04.013
- Quinlan P, Price KO, Magid SK, Lyman S, Mandl LA, & Stone PW (2013). The relationship among health literacy, health knowledge, and adherence to treatment in patients with rheumatoid arthritis. HSS J, 9(1), 42–49. doi: 10.1007/s11420-012-9308-6
- Schillinger D, Grumbach K, Piette J, Wang F, Osmond D, Daher C, . . . Bindman AB (2002). Association of health literacy with diabetes outcomes. JAMA, 288(4), 475–482.
- Schmotzer GL (2012). Barriers and facilitators to participation of minorities in clinical trials. Ethn Dis, 22(2), 226–230.
- Seifer SD, Michaels M, & Collins S (2010). Applying community-based participatory research principles and approaches in clinical trials: forging a new model for cancer clinical research. Prog Community Health Partnersh, 4(1), 37–46. doi: 10.1353/cpr.0.0103
- Simonds VW, Wallerstein N, Duran B, & Villegas M (2013). Community-based participatory research: its role in future cancer research and public health practice. Prev Chronic Dis, 10, E78. doi: 10.5888/pcd10.120205
- Sudore RL, Landefeld CS, Williams BA, Barnes DE, Lindquist K, & Schillinger D (2006). Use of a modified informed consent process among vulnerable patients: a descriptive study. J Gen Intern Med, 21(8), 867–873. doi: 10.1111/j.1525-1497.2006.00535.x
- Tattersall MH (2001). Examining informed consent to cancer clinical trials. Lancet, 358(9295), 1742–1743.
- Terblanche M, & Burgess L (2010). Examining the readability of patient-informed consent forms. Open Access Journal of Clinical Trials, 2, 157–162.
- Terranova G, Ferro M, Carpeggiani C, Recchia V, Braga L, Semelka RC, & Picano E (2012). Low quality and lack of clarity of current informed consent forms in cardiology: how to improve them. JACC Cardiovasc Imaging, 5(6), 649–655. doi: 10.1016/j.jcmg.2012.03.007
- Trantham LC, Carpenter WR, DiMartino LD, White B, Green M, Teal R, . . . Godley PA (2015). Perceptions of cancer clinical research among African American men in North Carolina. J Natl Med Assoc, 107(1), 33–41. doi: 10.1016/S0027-9684(15)30007-9
- Wang LW, Miller MJ, Schmitt MR, & Wen FK (2013). Assessing readability formula differences with written health information materials: application, results, and recommendations. Res Social Adm Pharm, 9(5), 503–516. doi: 10.1016/j.sapharm.2012.05.009