Abstract
Patient materials are often written above the reading level of most adults. Tool 11 of the Health Literacy Universal Precautions Toolkit (“Design Easy-to-Read Material”) provides guidance on ensuring that written patient materials are easy to understand. As part of a pragmatic demonstration of the Toolkit, we examined how four primary care practices implemented Tool 11 and whether written materials improved as a result. We conducted interviews to learn about practices' implementation activities and assessed the readability, understandability, and actionability of patient education materials collected during pre- and postimplementation site visits. Interview data indicated that practices followed many action steps recommended in Tool 11, including training staff, assessing readability, and developing or revising materials, typically focusing on brief documents such as patient letters and information sheets. Many of the revised and newly developed documents had reading levels appropriate for most patients and—in the case of revised documents—better readability than the original materials. In contrast, the readability, understandability, and actionability of lengthier patient education materials were poor and did not improve over the 6-month implementation period. Findings guided revisions to Tool 11 and highlighted the importance of engaging multiple stakeholders in improving the quality of patient materials.
Health literacy plays a critical role in comprehension of written health-related materials. Yet numerous studies show that the reading level of patient materials often exceeds the reading skills of many adults. It is estimated that the average U.S. adult can comprehend text written at the eighth- to ninth-grade level (Doak, Doak, & Root, 1996; Institute of Medicine Committee on Health Literacy, 2004; National Work Group on Literacy and Health, 1998), although literacy skills are substantially lower among older and low-income adults (Doak et al., 1996; Kutner, Greenberg, Jin, & Paulsen, 2006; Weiss et al., 1994). In contrast, patient materials are often written at or above the 10th-grade level (Aliu & Chung, 2010; Helitzer, Hollis, Cotner, & Oestreicher, 2009; Kaphingst, Zanfini, & Emmons, 2006; Vallance, Taylor, & Lavallee, 2008; Wallace, Turner, Ballard, Keenum, & Weiss, 2005). These high reading levels, in addition to other features that can make documents difficult to understand (e.g., the use of medical terms), render many patient materials unusable for millions of Americans.
The Agency for Healthcare Research and Quality developed the Health Literacy Universal Precautions Toolkit to support primary care practices in their efforts to improve communication with and support for patients of all health literacy levels (DeWalt et al., 2010). As providers cannot always tell which patients have difficulty understanding health information (Bass, Wilson, Griffith, & Barnett, 2002; Lindau et al., 2002; Powell & Kripalani, 2005; Rogers, Wallace, & Weiss, 2006), the Toolkit recommends that practices implement “health literacy universal precautions by making systematic, practice-wide changes to simplify communication and reduce the complexity of health care for all patients.” The first edition of the Toolkit contained 20 tools, each of which provided evidence-based suggestions and guidance on assessing and improving health literacy-related features of the practice environment.
Tool 11 (“Design Easy-to-Read Material”) was developed to educate practice staff on methods for evaluating and creating appropriate patient materials. Key recommendations provided in Tool 11 include (a) training staff in the evaluation and development of easy-to-read materials, (b) assessing the reading level of written documents, (c) ensuring that materials are written at or below the sixth-grade level, (d) following guidelines for clear communication (e.g., avoiding medical terms, including ample white space), and (e) obtaining patient feedback. In addition, the Tool contains several easy-to-read sample documents that practices can adopt or tailor for use in their facilities.
In 2013–2014, we conducted the Demonstration of the Health Literacy Universal Precautions Toolkit (Mabachi et al., 2016). As conceptualized in the health literate care model (Koh, Brach, Harris, & Parchman, 2013), the Demonstration tested the proposition that primary care practices could use the Toolkit to implement a systems approach to improving communication with and support for patients. The Demonstration examined the process and impact of Toolkit implementation in 12 primary care practices and identified needed revisions to each tool. The objective of the current analysis was to investigate use of Tool 11 in the four Demonstration practices that implemented this Tool. Specifically, we sought to (a) understand the strategies practices used in implementing Tool 11 and (b) assess whether use of the Tool resulted in higher quality patient materials.
Methods
Qualitative and quantitative methods were used to achieve our two study objectives. To understand how Tool 11 was implemented, we conducted and analyzed multiple qualitative interviews with each practice, focusing on the action steps practices employed in using the Tool. To assess document quality, we (a) examined the readability of patient materials developed or revised by practices as part of their implementation efforts and (b) conducted a comprehensive assessment of patient education materials collected from participating practices prior to and after Tool implementation. Statistical analyses were conducted to determine whether the quality of this latter set of materials improved between the pre- and postimplementation assessments.
Practice Recruitment
To recruit practices for the Demonstration, we sent e-mails inviting participation to the members of two practice-based research networks (i.e., American Academy of Family Physicians [AAFP] National Research Network and Shared Networks of Colorado Ambulatory Practices & Partners) and to the American College of Physicians. Together, these e-mails reached approximately 1,500 members. Recruitment information also was published on Facebook as well as in AAFP's newsletter and daily e-mail updates.
Of the 66 practices that expressed interest in joining the Demonstration, 19 were identified as high-priority practices to target for participation. High-priority practices were those identified as having the potential to contribute to the diversity of the overall practice sample (e.g., practice type, size, location) and those that had a patient population expected to experience a high rate of limited health literacy. The prevalence of limited health literacy was estimated using the Prevalence Calculator developed as part of Pfizer's Clear Health Communication Initiative.
Screening phone calls were conducted with each high-priority practice to obtain additional information about practices' interest in and preparedness for Demonstration participation. In selecting the final sample, we sought to include practices that had experience conducting quality improvement initiatives and that would provide diversity across the sample on a number of key practice characteristics, including geographic region (e.g., West Coast), practice type (e.g., family medicine), and practice size, as well as patient characteristics (e.g., race, ethnicity, preferred language, age, insurance status). Of the 12 practices selected for participation, four implemented Tool 11. The characteristics of these practices are presented in Table 1.
Table 1. Characteristics of demonstration practices.
| Practice | Type | Region | Location | Patients served | Medicaid (%) | Black (%) | White (%) | Hispanic (%) | Age ≥65 years (%) | Non-English-speaking (%) | Limited health literacy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | GIM/residency | West Coast | Urban | 3,336 | 30 | NA | NA | 4 | 23 | 4 | 33 |
| 2 | FM | Midwest | Suburban | 7,219 | 20 | 5 | 75 | 5 | 20 | 20 | 26 |
| 3 | FM/PCMH/residency | Midwest | Peri-urban | 5,000 | 15 | 5 | 92 | 3 | 20 | 5 | 31 |
| 4 | GIM | South | Suburban | 1,500–2,000 | 70 | 60 | 35 | 5 | 30 | 10 | 42 |
Note. GIM = general internal medicine; FM = family medicine; PCMH = patient-centered medical home; NA = not available.
Administrative Approval
Approval to conduct the Demonstration was provided by the institutional review boards (IRBs) of the AAFP and the University of Colorado Denver. Three of the four practices implementing Tool 11 had affiliated IRBs that reviewed and approved their participation. The remaining practice was covered under the AAFP IRB. Data collection was approved by the U.S. Office of Management and Budget.
Tool Implementation
Practices implemented Tool 11 over a 6-month period. Because the Toolkit is available on the Internet for any medical practice to download and use, our goal was to observe what practices would be able to accomplish when they implemented Tool 11 without additional guidance and support. Thus, in this real-world study, we gave practices complete freedom to choose their implementation strategies and provided only minimal support for their activities.
Qualitative Analysis of the Implementation Process
To learn about practices' implementation strategies, we conducted two types of semi-structured interviews. First, at four time points during the implementation period, we conducted process evaluation interviews by telephone with each practice. Second, at the conclusion of the implementation period, we made site visits to each practice, during which we held three in-depth interviews with practice staff. Both telephone and in-person interviews followed systematic interview protocols and focused on eliciting information regarding the planning and implementation of Tool-related activities. Interviews were conducted by one of six members of the research team, all with prior experience and/or training in qualitative data collection.
All interviews were recorded and transcribed verbatim. Consistent with established qualitative methodology, the analysis of interview data was a continuous, iterative process beginning with initial data collection and continuing throughout and beyond the data generation period (Graneheim & Lundman, 2004; Hsieh & Shannon, 2005; Stemler, 2001). To achieve immersion, two members of the study team independently read transcripts multiple times. To emphasize respondent perspectives and deemphasize team speculations, they established an initial code list using an emergent rather than an a priori approach (Crabtree & Miller, 1999; Stemler, 2001). After initial codes were established, the resulting set of codes was applied to the transcripts and code categories were developed. The two qualitative analysts communicated regularly to discuss emergent new codes, themes, and patterns; confirm intercoder reliability; and triangulate data (Breitmayer, Ayres, & Knafl, 1993; Charmaz, 2006; Teddlie & Tashakkori, 2009). During the analytic process, particular attention was paid to identifying common implementation strategies, barriers to and facilitators of implementation, and strengths/weaknesses of the Toolkit itself.
Assessment of Document Quality
Readability of New and Revised Materials
During postimplementation site visits, we collected copies of written patient materials that were developed or revised during the implementation period. Because most of these materials were extremely brief, nonnarrative, and not geared toward patient education, they were not included in the comprehensive assessment of document quality described below. However, to assess practices' success in developing appropriate materials, when possible, we evaluated the readability of these documents using the Flesch-Kincaid Readability Test (described below).
Comprehensive Assessment of Document Quality Over Time
To evaluate whether written patient materials improved following implementation of Tool 11, we conducted a comprehensive assessment of patient education materials used at the four practices before and after the implementation period.
Selection of Materials for Review
During pre- and postimplementation site visits, the research team requested copies of written materials used with patients at each practice (e.g., intake forms, patient letters, education materials). Because measures of readability are designed to be used with continuous text, and guidelines for clear communication are particularly relevant to educational materials (Doak et al., 1996), we focused our evaluation on patient education materials that included ≥500 words of narrative text.
With the goal of reviewing a similar set of materials across practices, we targeted documents focusing on three commonly represented topics: medications, diabetes, and cardiovascular health. When a single practice had more than one document on a given topic, we prioritized materials that were authored by the clinic or its health system. When a practice did not have a document in a targeted topic area, we selected another patient education document of the appropriate length.
At the postimplementation time point, the study team again collected written patient materials and determined whether the documents selected at preimplementation remained in use. Documents still in use were carried forward into the postimplementation evaluation; when a previously selected document was no longer available, a new document was selected in its place.
At the conclusion of this process, we had selected three documents for evaluation from each practice for both the pre- and postimplementation time points. Thus, the review included a total of 12 documents from the preimplementation time point and 12 documents from postimplementation. Seven of the documents selected at the preimplementation time point remained in use at postimplementation and were included in the review at each time point. Across time points, 42% of materials focused on medications, 29% on diabetes, 21% on cardiovascular health, and 8% on other topics.
Evaluation Methods
The readability, understandability, and actionability of each selected document were assessed. Readability represents the difficulty of reading a given document and is typically computed based on the length of sentences and/or words in a text. Because readability scores can vary based on measurement approach, we used three well-established measures: the Simple Measure of Gobbledygook, the Fry Formula, and the Flesch-Kincaid Readability Test (Aliu & Chung, 2010; Fry, 1977; McLaughlin, 1969). These measures each produce a grade-level score indicating the grade at which a person would have to read to comprehend a given document. For example, a document with a Flesch-Kincaid grade level of 8.3 would be comprehensible to most individuals able to read at the eighth-grade level. Online tools were used to compute readability measures (http://www.readabilityformulas.com).
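Of the three measures, Flesch-Kincaid and the Simple Measure of Gobbledygook are closed formulas (the Fry Formula is graph based, plotting average sentence and syllable counts per 100 words). As an illustration only, a minimal Python sketch of the two published formulas appears below; the syllable counter is a rough vowel-group heuristic rather than the dictionary-based counters that dedicated readability tools use, so its scores will differ slightly from theirs.

```python
import re

def count_syllables(word):
    # Rough heuristic: one syllable per vowel group, with a silent-e
    # adjustment; dedicated readability tools use more accurate counters.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text):
    # Published formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

def smog_grade(text):
    # Commonly used form of McLaughlin's (1969) formula:
    # 1.0430 * sqrt(polysyllables * 30/sentences) + 3.1291
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * (polysyllables * 30 / len(sentences)) ** 0.5 + 3.1291
```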
To prepare materials for readability assessment, we selected a sample of text approximately 500 words in length from the beginning of each document. As recommended in previous studies (Aliu & Chung, 2010; Friedman & Hoffman-Goetz, 2006), these samples were altered in the following ways to ensure accurate readability estimates: (a) tables, diagrams, illustrations, and captions were omitted; (b) text was condensed into a single paragraph; (c) decimals, abbreviations, dashes, colons, semicolons, parentheses, bullets, and slashes were removed; and (d) copyright notices, disclaimers, and author names and information were omitted. In addition, section headings and bullet points were included as their own sentences.
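A minimal sketch of this normalization, assuming the nonnarrative elements in steps (a) and (d) have already been cut by hand and that section headings and bullet points have been given terminal periods so they count as sentences:

```python
import re

def prepare_sample(text, max_words=500):
    # Condense roughly 500 words from the start of the document into a
    # single paragraph (tables, captions, disclaimers, and abbreviations
    # are assumed to have been removed by hand already).
    sample = " ".join(text.split()[:max_words])
    # Strip punctuation that readability formulas misread as sentence
    # breaks: dashes, colons, semicolons, parentheses, bullets, slashes.
    sample = re.sub(r"[–—:;()/•\-]", " ", sample)
    sample = re.sub(r"(\d)\.(\d)", r"\1\2", sample)  # remove decimal points
    return re.sub(r"\s{2,}", " ", sample).strip()
```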
Readability is just one factor that influences comprehension of written information. Numerous other features of a document, such as word choice, organization, and layout, can affect patient understanding. For this reason, we also evaluated the understandability and actionability of selected materials using the Patient Education Materials Assessment Tool (PEMAT), a validated method for evaluating patient education materials (Shoemaker, Wolf, & Brach, 2014a, 2014b). A document is considered to be understandable if patients "can process and explain key messages" (Shoemaker et al., 2014b, p. 1). Materials are deemed actionable when patients can identify the self-care actions they can take based on the information provided. A total of 24 PEMAT items are used to assess the understandability and actionability of printed documents. Each item reflects a recommendation for developing documents that are easy to understand or act on. Overall understandability and actionability scores represent the percentage of recommendations to which a document adheres.
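Because each subscale score is simply the percentage of applicable recommendations a document meets, the calculation is straightforward. A sketch, assuming items are rated 1 (agree), 0 (disagree), or None (not applicable), with nonapplicable items excluded from the denominator:

```python
def pemat_subscale_score(ratings):
    """Percentage of applicable PEMAT items a document satisfies.

    `ratings` maps item labels to 1 (agree), 0 (disagree), or
    None (not applicable); nonapplicable items are excluded.
    """
    applicable = [r for r in ratings.values() if r is not None]
    return 100.0 * sum(applicable) / len(applicable)

# Example: a document meeting 7 of 10 applicable understandability items
understandability = pemat_subscale_score(
    {f"item_{i}": r for i, r in enumerate([1, 1, 0, 1, None, 1, 0, 1, 1, 0, 1])}
)
print(f"{understandability:.0f}%")  # -> 70%
```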
Statistical Analysis
Readability and PEMAT scores for the 24 documents selected for pre/post assessment were analyzed using SAS Version 9.4 (SAS Institute, Cary, NC). Means and standard deviations were computed at each time point to examine average readability and PEMAT scores of the selected patient materials. Analyses of variance controlling for practice were used to assess change over time in readability, understandability, and actionability.
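The original analyses were run in SAS; for illustration, a rough Python equivalent of the model (hypothetical scores, with a statsmodels ANOVA of score on time point including practice as a factor) might look like this:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per document, with its readability (or PEMAT)
# score, assessment time point, and the practice it was collected from.
docs = pd.DataFrame({
    "score":    [7.1, 8.3, 6.0, 7.4, 6.8, 7.9, 5.9, 7.0],
    "time":     ["pre", "pre", "pre", "pre", "post", "post", "post", "post"],
    "practice": [1, 2, 3, 4, 1, 2, 3, 4],
})

# Test for change over time while controlling for practice.
model = smf.ols("score ~ C(time) + C(practice)", data=docs).fit()
print(sm.stats.anova_lm(model, typ=2))
```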
Results
Qualitative Findings Related to the Implementation Process
Over the 6-month implementation period, practices implemented several key action steps identified in Tool 11 (see Table 2). All four practices implemented the Tool's first recommendation, providing staff training on methods for producing easy-to-read written materials. Practices also used resources and guidance in the Tool to assess the readability of patient materials and to revise and/or develop new written documents.
Table 2. Activities accomplished over the 6-month implementation period.
| Practice | Implementation activities | Flesch-Kincaid grade level |
|---|---|---|
| Practice 1 | Revised 10 patient form letters (e.g., mammogram reminder letter), assessing the readability of both the original and final versions of each document | |
| | • Original letters | M = 8.6, range = 5.1–10.3 |
| | • Final letters | M = 6.2, range = 3.0–9.9 |
| | Created a new patient letter (time for a checkup) | 5.1 |
| | Developed a short patient information sheet comparing fecal immunochemical testing to colonoscopy | 5.1 |
| Practice 2 | Revised two lengthy patient education brochures | |
| | • Understanding Diabetes | 4.8 |
| | • Guide to Warfarin Therapy | 6.0 |
| | Developed a chart explaining common lab tests | 4.3 |
| | Revised the patient appointment card | Could not assess |
| Practice 3 | Assessed readability and developed a catalog of 69 patient materials used by the practice | |
| | Developed three short patient information sheets | |
| | • Know Your A1c (identified current and goal values) | 11.2 |
| | • What's Your Blood Pressure (identified current and goal values) | 8.5 |
| | • Colon Cancer Screening (testing information, recommended provider list) | 9.2 |
| | Developed a medication adherence survey | 6.2 |
| Practice 4 | Adopted five easy-to-read patient forms provided in the Tool | |
| | Revised two of these forms to suit their needs | |
| | • Action Plan Form | 4.4 |
| | • Release of Medical Information Form | 6.8 |
In implementing Tool 11, Practice 3 conducted a readability assessment of all patient materials used in its facility. The practice developed a catalog of 69 documents, identifying their source (e.g., American Medical Association) and location in the clinic and providing information about each document's readability. At the conclusion of the implementation period, practice staff reported that they were about to begin the process of using their readability data to identify materials in need of revision or replacement.
Three of the practices developed or revised patient materials as part of their implementation efforts. Practices 1, 2, and 3 developed brief informational handouts for their patients (e.g., a 1-page information sheet providing patients with their current and target blood pressure values). One practice (Practice 2) also targeted large-scale patient education materials, revising two lengthy documents, including a 20-page booklet on diabetes. Practice 1 focused its efforts mainly on revising patient letters, and in the process, implemented several guidelines for clear communication identified in Tool 11. The practice made letters shorter, formatted text in short paragraphs separated by ample white space, assessed readability both before and after revision, and sought to obtain feedback on each letter from five patients.
During postimplementation interviews, two practices highlighted the influence that being part of a health system had on their implementation experiences. Practice 2 noted that being part of a system was an advantage. Through the system, the practice had access to “professional book editors and designers who have ongoing training in health literacy” as well as document templates created by the system to support the development of appropriate patient materials. These templates were reportedly “designed to optimize readability” and were “evidence-based, taking into account font, type size, white space, line length, visual contrast, etc.”
Staff from Practice 1, in contrast, reported that affiliation with a health system was an obstacle to its implementation process, restricting what documents it could target in its efforts. “We couldn't just pick the standard forms that we use… if we change those, they have to be a system-wide change.” Approval to make document changes was not easy to obtain. “We had to make a request to our organization to update [those forms], but that's going to take at least a year to accomplish.” There was also a perceived lack of attention to health literacy at the system level: “We don't have final say on the content” of some documents, and the people who do “don't necessarily have health literacy as a primary consideration.” As a result, the practice chose to focus its efforts on forms and letters that its own staff had developed and could revise without system-level approval.
Practice 4 took a different approach to implementing Tool 11. This practice adopted five patient documents highlighted in Tool 11 as health literacy–friendly resources. For example, the practice replaced its existing records release form with the Release of Medical Information Form provided in Tool 11. For two documents, minor changes, such as adding the practice's logo, were made.
Findings Related to Document Quality
Readability of New and Revised Materials
Table 2 presents Flesch-Kincaid readability scores for materials that practices created or revised as part of their implementation efforts. For the majority of materials, readability scores met the Tool 11 recommendation that documents be written at the sixth-grade level or below. For Practices 2 and 4, all materials met this threshold. Practice 3, in contrast, was unsuccessful in meeting this recommendation for three of the four materials it developed. It is noteworthy that by replacing its existing records release form with the Release of Medical Information Form provided in Tool 11, Practice 4 reduced that document's reading level from 16.3 to 6.8.
Patient letters revised by Practice 1 showed consistent improvement in readability. Readability scores dropped from an average of 8.6 at the preimplementation time point to an average of 6.2 at postimplementation. Whereas nine of the 10 letters revised had readability scores of seventh grade or higher at preimplementation, only three remained above the recommended level at postimplementation.
Comprehensive Assessment of Document Quality Over Time
Table 3 presents information about the sources and quality of patient education materials selected for assessment. Although 29% of the materials had been developed by the practices or their health systems, the majority had been created by external organizations, most commonly pharmaceutical companies and for-profit publishers of patient education content.
Table 3. Assessment of document quality over time (N = 24 documents).
| Selected materials: Source, readability, and PEMAT scores | Preimplementation | Postimplementation | Overall N (%) | p |
|---|---|---|---|---|
| Author, N (%) | | | | |
| Practice or health system | 3 (25%) | 4 (33%) | 7 (29%) | |
| Pharmaceutical company | 3 (25%) | 2 (17%) | 5 (21%) | |
| For-profit medical content publisher (e.g., PatientPoint) | 3 (25%) | 4 (33%) | 7 (29%) | |
| Nonprofit organization (e.g., American Cancer Society) | 1 (8%) | 1 (8%) | 2 (8%) | |
| Professional association (e.g., American College of Physicians) | 1 (8%) | 0 (0%) | 1 (4%) | |
| Government agency (e.g., state health department) | 1 (8%) | 1 (8%) | 2 (8%) | |
| Readability, mean (SD) | | | | |
| Flesch-Kincaid | 7.4 (2.4) | 7.2 (2.2) | | >.05 |
| Fry Formula | 9.4 (2.9) | 9.4 (3.0) | | >.05 |
| SMOG | 7.6 (1.8) | 7.6 (1.6) | | >.05 |
| PEMAT, mean (SD) | | | | |
| Understandability | 50.1% (22.6) | 56.9% (18.8) | | >.05 |
| Actionability | 45.3% (17.5) | 51.8% (18.3) | | >.05 |
Note. PEMAT = Patient Education Materials Assessment Tool; SMOG = Simple Measure of Gobbledygook.
Readability results indicated that materials had an average reading level of seventh to ninth grade, both before and after implementation. Reading level varied widely across materials, with a range from the fourth- to the 15th-grade level. There were no significant changes over time in document readability (ps > .05).
Across time points, average understandability and actionability scores ranged from 45% to 57%. These results suggest that only about half of the criteria outlined in the PEMAT were met by the materials evaluated. Across the documents, PEMAT subscale scores varied widely, with a range from 6% to 83%. Although subscale scores increased over time, these changes were not significant (ps > .05).
Discussion
The aims of this study were to understand how practices used Tool 11 and whether the quality of patient materials improved following implementation. Results indicated that practices implemented key action steps recommended in the Tool, including conducting staff training, assessing the readability of patient materials, following recommendations related to document content and formatting, and obtaining patient feedback. Although most materials developed or revised during the implementation period showed acceptable levels of readability, our comprehensive assessment of document quality showed no evidence of improvement in readability, understandability, or actionability over time.
The longer, narrative patient education materials selected for our comprehensive assessment did not improve over the course of the project because practices rarely targeted such documents in their implementation efforts. Typically, practices focused on brief materials that were under their local control, such as patient letters and short information sheets. With the exception of one practice—which assessed the readability of all patient materials—practices did not evaluate or otherwise address the quality of documents produced by external organizations (e.g., pharmaceutical companies). As the majority of materials made available to patients were developed by such entities, failing to target these documents left unaddressed a large portion of the practices' patient materials, including those selected for inclusion in our comprehensive assessment of document quality.
Affiliation with a health system was an important driver of decisions regarding which materials to target for review and revision. For one practice, barriers to obtaining system-level approval of document refinements led the practice to focus its efforts on small, locally controlled materials. For another practice, being part of a health system afforded additional resources, allowing the practice to target large-scale, comprehensive educational materials.
The short timeframe of the implementation period also may have affected the decisions practices made. Knowing that they had only 6 months for their implementation efforts, practices may have chosen to focus their work on short, simple documents rather than lengthier patient education materials. In the case of Practice 3, the implementation period was long enough to conduct a thorough readability assessment of all patient materials but not long enough to revise or replace materials that performed poorly.
Our findings provide insight into the process of improving patient education materials in primary care settings. First, it is clear that systematically reviewing and revising/replacing patient materials is a lengthy process. To ensure that patient materials are appropriate, practices must be able to commit resources to this effort over a long period of time.
Second, health systems have an important opportunity to influence the availability of easy-to-understand materials in primary care practices. Not only can health systems aid in practices' efforts to improve patient materials, but they are also in a position to efficiently produce and share materials that will be easy for patients to understand. To play this important role, health systems must commit to following recommended practices for producing comprehensible patient materials.
Third, because practices typically failed to target documents produced by external parties, they missed important opportunities to improve their patient materials. Although organizations producing written content for patients should follow accepted guidelines for clear communication, our comprehensive assessment of document quality—which focused heavily on externally produced materials—suggested that this is not always the case. It is critical that practices assess the appropriateness of materials obtained from external sources and that organizations producing patient education content follow recommendations for developing easy-to-understand materials.
Fourth, all parties engaged in the development of patient materials must go beyond readability in evaluating the appropriateness of their written documents. Our results suggested that not only was the reading level of selected materials too high but understandability and actionability were poor. In producing patient materials, developers must address reading level as well as other features that can affect comprehension of written documents.
Study Limitations
Several limitations of our study methods should be considered when interpreting the results. First, our examination of Tool 11 implementation was based on a small sample of practices. Thus, our results may not generalize to all primary care practices. Second, the short implementation period may have altered practices' implementation decisions—as noted previously—and constrained the amount they were able to accomplish. A longer study might show more substantial improvements in patient materials. Third, in our comprehensive assessment of document quality, we focused on documents for which standard assessment tools would be most appropriate (i.e., narrative patient education materials of ≥500 words). Because participating practices targeted mainly very brief, nonnarrative, and/or noneducational documents, their work to improve patient materials was not captured by our comprehensive assessment methods.
Conclusions
This analysis provides insight into the use of the Health Literacy Universal Precautions Toolkit for the improvement of written patient materials. In this real-world demonstration, practices followed many of the action steps recommended in Tool 11. However, their efforts focused mainly on small documents under their local control, leaving larger and externally produced patient materials largely unaddressed. Study results highlight the importance of engaging all developers of patient-focused documents in the effort to improve the quality of these materials. Findings from this study guided revisions to Tool 11, which is included in the second edition of the Agency for Healthcare Research and Quality Health Literacy Universal Precautions Toolkit (Brega et al., 2015).
Acknowledgments
Funding: Support for this work was provided by a contract from the Agency for Healthcare Research and Quality (HHSA290200710008, David R. West).
References
- Aliu O, Chung KC. Readability of ASPS and ASAPS educational Web sites: An analysis of consumer impact. Plastic and Reconstructive Surgery. 2010;125(4):1271–1278. doi:10.1097/prs.0b013e3181d0ab9e.
- Bass PF 3rd, Wilson JF, Griffith CH, Barnett DR. Residents' ability to identify patients with poor literacy skills. Academic Medicine. 2002;77(10):1039–1041. doi:10.1097/00001888-200210000-00021.
- Brega AG, Barnard J, Mabachi NM, Weiss BD, DeWalt DA, Brach C, … West DR. AHRQ Health Literacy Universal Precautions Toolkit. 2nd ed. Rockville, MD: Agency for Healthcare Research and Quality; 2015.
- Breitmayer BJ, Ayres L, Knafl KA. Triangulation in qualitative research: Evaluation of completeness and confirmation purposes. Image. 1993;25(3):237–243. doi:10.1111/j.1547-5069.1993.tb00788.x.
- Charmaz K. Constructing grounded theory. London, England: Sage; 2006.
- Crabtree BF, Miller WL, editors. Doing qualitative research. 2nd ed. Thousand Oaks, CA: Sage; 1999.
- DeWalt DA, Callahan LF, Hawk VH, Broucksou KA, Hink A, Rudd R, Brach C. Health Literacy Universal Precautions Toolkit (AHRQ Publication No. 10-0046-EF). Rockville, MD: Agency for Healthcare Research and Quality; 2010.
- Doak CC, Doak LG, Root JH. Teaching patients with low literacy skills. 2nd ed. Philadelphia, PA: Lippincott; 1996.
- Friedman DB, Hoffman-Goetz L. A systematic review of readability and comprehension instruments used for print and Web-based cancer information. Health Education and Behavior. 2006;33(3):352–373. doi:10.1177/1090198105277329.
- Fry E. Elementary reading instruction. New York, NY: McGraw-Hill; 1977.
- Graneheim UH, Lundman B. Qualitative content analysis in nursing research: Concepts, procedures and measures to achieve trustworthiness. Nurse Education Today. 2004;24(2):105–112. doi:10.1016/j.nedt.2003.10.001.
- Helitzer D, Hollis C, Cotner J, Oestreicher N. Health literacy demands of written health information materials: An assessment of cervical cancer prevention materials. Cancer Control. 2009;16(1):70–78. doi:10.1177/107327480901600111.
- Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qualitative Health Research. 2005;15(9):1277–1288. doi:10.1177/1049732305276687.
- Institute of Medicine Committee on Health Literacy. Health literacy: A prescription to end confusion. Washington, DC: National Academies Press; 2004.
- Kaphingst KA, Zanfini CJ, Emmons KM. Accessibility of Web sites containing colorectal cancer information to adults with limited literacy (United States). Cancer Causes and Control. 2006;17(2):147–151. doi:10.1007/s10552-005-5116-3.
- Koh HK, Brach C, Harris LM, Parchman ML. A proposed ‘health literate care model’ would constitute a systems approach to improving patients' engagement in care. Health Affairs. 2013;32(2):357–367. doi:10.1377/hlthaff.2012.1205.
- Kutner M, Greenberg E, Jin Y, Paulsen C. The health literacy of America's adults: Results from the 2003 National Assessment of Adult Literacy (NCES 2006-483). Washington, DC: National Center for Education Statistics; 2006.
- Lindau ST, Tomori C, Lyons T, Langseth T, Bennett CL, Garcia PT. The association of health literacy with cervical cancer prevention knowledge and health behaviors in a multiethnic cohort of women. American Journal of Obstetrics & Gynecology. 2002;186(5):938–943. doi:10.1067/mob.2002.122091.
- Mabachi NM, Cifuentes M, Barnard J, Brega AG, Albright K, Weiss BD, … West DR. Demonstration of the Health Literacy Universal Precautions Toolkit: Lessons for quality improvement. Journal of Ambulatory Care Management. 2016;39(1). doi:10.1097/JAC.0000000000000102.
- McLaughlin GH. SMOG grading: A new readability formula. Journal of Reading. 1969;12(8):639–646.
- National Work Group on Literacy and Health. Communicating with patients who have limited literacy skills. Report of the National Work Group on Literacy and Health. Journal of Family Practice. 1998;46(2):168–176.
- Powell CK, Kripalani S. Brief report: Resident recognition of low literacy as a risk factor in hospital readmission. Journal of General Internal Medicine. 2005;20(11):1042–1044. doi:10.1111/j.1525-1497.2005.0220.x.
- Rogers ES, Wallace LS, Weiss BD. Misperceptions of medical understanding in low-literacy patients: Implications for cancer prevention. Cancer Control. 2006;13(3):225–229. doi:10.1177/107327480601300311.
- Shoemaker SJ, Wolf MS, Brach C. Development of the Patient Education Materials Assessment Tool (PEMAT): A new measure of understandability and actionability for print and audiovisual patient information. Patient Education and Counseling. 2014a;96:395–403. doi:10.1016/j.pec.2014.05.027.
- Shoemaker SJ, Wolf MS, Brach C. The Patient Education Materials Assessment Tool (PEMAT) and user's guide (Version 1.0). Rockville, MD: Agency for Healthcare Research and Quality; 2014b.
- Stemler S. An overview of content analysis. Practical Assessment, Research & Evaluation. 2001;7(17). Retrieved from http://pareonline.net/getvn.asp?v=7&n=17
- Teddlie C, Tashakkori A. Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Thousand Oaks, CA: Sage; 2009.
- Vallance JK, Taylor LM, Lavallee C. Suitability and readability assessment of educational print resources related to physical activity: Implications and recommendations for practice. Patient Education and Counseling. 2008;72(2):342–349. doi:10.1016/j.pec.2008.03.010.
- Wallace LS, Turner LW, Ballard JE, Keenum AJ, Weiss BD. Evaluation of Web-based osteoporosis educational materials. Journal of Women's Health. 2005;14(10):936–945. doi:10.1089/jwh.2005.14.936.
- Weiss BD, Blanchard JS, McGee DL, Hart G, Warren B, Burgoon M, Smith KJ. Illiteracy among Medicaid recipients and its relationship to health care costs. Journal of Health Care for the Poor and Underserved. 1994;5(2):99–111. doi:10.1353/hpu.2010.0272.
