Abstract
Limited health literacy is associated with worse health outcomes. It is standard practice in many primary care clinics to provide patients with written patient education materials (PEM), which often come directly from an electronic health record (EHR). We compared the health literacy of patients in a primary care residency clinic with the grade-level readability of EHR-derived PEM. We assessed health literacy using the Rapid Estimate of Adult Literacy in Medicine-Short Form (REALM-SF), and determined grade-level readability of the PEM distributed for the five most common clinical diagnoses using the Simple Measure of Gobbledygook (SMOG) and Flesch-Kincaid metrics. Among 175 participants, health literacy was ≥9th grade for 76 patients (43.4%), 7th to 8th grade for 66 patients (37.7%), and ≤6th grade for 30 patients (17.1%). By SMOG, average readability was grade 9.7 for standard PEM and grade 7.25 for easy-to-read PEM. These findings suggest a discrepancy between the health literacy of most surveyed patients and standard PEM readability. Despite national guidelines encouraging clinicians to provide PEM at an appropriate reading level, our results indicate that PEM from the EHR may not be readable for many patients. [Health Literacy Research and Practice. 2017;1(4):e203–e207.]
Health literacy is an important predictor of health status (Berkman, Sheridan, Donahue, Halpern, & Crotty, 2011). Low health literacy is associated with decreased ability to take medications properly, increased emergency department visits, more hospitalizations, and increased health care costs (Berkman et al., 2011; Eichler, Wieser, & Brügger, 2009; Haun et al., 2015). More than one-third of Americans have only basic or below basic health literacy (Kutner, Greenberg, Jin, & Paulsen, 2006). Although 14% of the overall population has below basic health literacy, this proportion roughly doubles in Medicare and Medicaid populations, at 27% and 30%, respectively (Kutner et al., 2006). Health literacy is therefore a particularly important subject for residency clinics, where a larger share of patients is typically covered by Medicare or Medicaid.
Health care policymakers have stressed the importance of decreasing the discrepancy between the readability of patient education materials (PEM) and the reading level at which many Americans function (Brach et al., 2012; Koh & Rudd, 2015). The Joint Commission now requires that PEM written at the 5th grade level be provided as part of the health care facility accreditation process (The Joint Commission, 2010). The Agency for Healthcare Research and Quality (AHRQ) advocates for universal precautions for health literacy by recommending that physicians assume their patients have a lower level of health literacy (DeWalt et al., 2010).
This study compared the health literacy of patients in an urban residency primary care clinic to the readability of PEM provided by the hospital electronic health record (EHR), a commonly used source of PEM in clinics. Readability of PEM may frequently be above recommended levels (Stossel, Segar, Gliatto, Fallar, & Karani, 2012; Wilson, 2009). We examined whether the advertised grade level of PEM available from the hospital EHR is accurate, and whether the actual grade level is above the health literacy of the clinic patients.
Methods
All eligible English-speaking patients seen at Yale-New Haven Hospital Saint Raphael's Campus Adult Primary Care Continuity Clinic between December 3 and December 17, 2015, were invited to participate. Patients were ineligible if they did not speak or read English, or if they had a physical or cognitive disability that prevented them from reading or speaking (e.g., blindness or profound intellectual disability). After arriving, each patient met with a team member who explained the study and obtained verbal consent. Participants then completed a basic demographics questionnaire. Trained examiners subsequently assessed patients' health literacy following standardized administration guidelines. Researchers de-identified patient information using pre-assigned alphanumeric identification codes. The study protocol and all study materials were approved by the Yale Human Subjects Committee (Institutional Review Board).
Rapid Estimate of Adult Literacy in Medicine (REALM) is one of the most commonly used and well-validated literacy assessments in the medical setting (Altin, Finke, Kautz-Freimuth, & Stock, 2014). REALM-Short Form (REALM-SF) was derived from REALM, correlates highly in validation samples (r = 0.94), and was developed in a diverse patient cohort that may closely reflect the urban residency clinic population (Arozullah et al., 2007). REALM-SF categorizes health literacy as low (≤6th grade), marginal (7th–8th grade), or adequate (≥9th grade).
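The REALM-SF score-to-band mapping described above reduces to a simple lookup. The sketch below is illustrative only; the function name and label strings are our own, not part of the instrument, and the raw-score cut points are those reported in Table 1:

```python
def realm_sf_category(raw_score):
    """Map a REALM-SF raw score (0-7) to a health literacy band.

    Cut points follow Table 1: 0-3 = low, 4-6 = marginal, 7 = adequate.
    """
    if raw_score <= 3:
        return "low (<=6th grade)"
    if raw_score <= 6:
        return "marginal (7th-8th grade)"
    return "adequate (>=9th grade)"
```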
Readability can be measured by several tools. The Centers for Medicare and Medicaid Services (CMS) recommends the Simple Measure of Gobbledygook (SMOG) grade, which applies an objective formula based on the number of polysyllabic words (Centers for Medicare and Medicaid Services, 2010). The Flesch-Kincaid grade, the most widely used readability tool (Albright et al., 1996), determines grade-level readability with a formula based on the numbers of syllables, words, and sentences. The Flesch-Kincaid grade was calculated using the readability statistics feature built into Microsoft Word. SMOG scoring was performed manually by study authors (O.E.I., E.L., C.P., S.S.) who were trained in applying the formula.
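For reference, the two published formulas can be sketched in a few lines of Python. The syllable counter below is a naive vowel-group heuristic (an assumption for illustration; Microsoft Word and manual SMOG scoring count syllables more carefully), so outputs will only approximate published scores:

```python
import math
import re

def count_syllables(word):
    # Naive heuristic: count runs of consecutive vowels (illustrative only;
    # real readability tools use more careful syllabification).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text):
    # SMOG grade = 1.0430 * sqrt(polysyllabic words * (30 / sentences)) + 3.1291
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllabic = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllabic * (30 / len(sentences))) + 3.1291

def flesch_kincaid_grade(text):
    # FK grade = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)
```

Because SMOG counts only polysyllabic words while Flesch-Kincaid averages all syllables, the two formulas can legitimately assign different grades to the same passage, as our results show.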
Readability assessments using SMOG and Flesch-Kincaid were performed on PEM provided by Elsevier's ExitCare (a vendor of PEM that services our institution), which is integrated into the Epic EHR used at Yale-New Haven Hospital. We determined the five most common clinic appointment diagnoses from November 2014 to October 2015. Team members assessed the readability of the two available versions of PEM: “standard” and “easy-to-read.” Both standard and easy-to-read versions of PEM were available for 4 of the 5 most common diagnoses.
Results
Of the 291 patients scheduled for continuity clinic visits during the study period, 213 arrived, of whom 186 were eligible and 175 participated. A majority of participants were women (62%), and 70% were younger than age 60 years. Racial/ethnic composition was 55% Black, 25% White, 15% Hispanic, and 5% other (i.e., Asian, Native American, and people who did not self-identify a specific race/ethnicity). When asked whether they read PEM, 144 patients (76.4%) said yes, whereas 41 (23.6%) said no. As assessed by REALM-SF, health literacy levels were ≥9th grade-level for 43.4% of patients, 7th to 8th grade-level for 37.7% of patients, and ≤6th grade-level for 17.1% of patients (Table 1). The five most common clinic diagnoses were, in order, hypertension, diabetes mellitus, hyperlipidemia, back pain, and depression. By SMOG, the average readability grade-levels of their PEM were 9.7 for standard and 7.25 for easy-to-read PEM (Table 2). The Flesch-Kincaid assessment graded readability for standard and easy-to-read PEM at lower grade-levels (6.98 and 4.5, respectively) compared to SMOG.
Table 1.

| REALM-SF Score Grade (Raw Score) | Number of Patients (%) |
|---|---|
| ≤6th grade (0–3) | 30 (17.1) |
| 7th–8th grade (4–6) | 66 (37.7) |
| ≥9th grade (7) | 76 (43.4) |
| No answer | 3 (1.7) |
| Total | 175 (100) |

Note. REALM-SF = Rapid Estimate of Adult Literacy in Medicine-Short Form.
Table 2.

| PEM | SMOG, Standard | SMOG, Easy-to-Read | Flesch-Kincaid, Standard | Flesch-Kincaid, Easy-to-Read |
|---|---|---|---|---|
| Depression | 11.25 | 7.5 | 9 | 4.9 |
| Type 2 diabetes | 10.5 | 8.25 | 7.65 | 5.18 |
| Hyperlipidemia | 9.5 | - | 6.95 | - |
| Hypertension | 9 | 7.25 | 6.35 | 4.85 |
| Back pain | 8.25 | 6 | 4.96 | 3.05 |
| Average | 9.70 | 7.25 | 6.98 | 4.50 |

Note. PEM = patient education materials; SMOG = Simple Measure of Gobbledygook.
Discussion
Most patients in this study (54.8%) had low or marginal health literacy, reading at or below an 8th grade-level as determined by REALM-SF. Standard PEM are therefore written at an inappropriately high level for more than one-half of the clinic population. Although easy-to-read PEM are written at a middle school level, this is still potentially too high for almost 20% of patients. This is particularly important because three-quarters of physicians nationwide routinely distribute PEM (Carrier & Reschovsky, 2009). Of note, despite having low literacy, 76.4% of study participants endorsed reading PEM. Clinicians may fail to provide appropriate health education when using PEM beyond their patients' literacy level, even though many patients regularly attempt to use such materials.
The discrepancy between the SMOG and Flesch-Kincaid grades highlights the difficulty of interpreting such readability assessments. However, we believe more emphasis should be placed on SMOG because it is recommended by CMS, is calibrated to expected 100% comprehension within grade-level, and has previously been described as a more appropriate metric for health literature (Fitzsimmons, Michael, Hulley, & Scott, 2010; Wang, Miller, Schmitt, & Wen, 2013). This more conservative standard is reasonable because incomplete comprehension could lead to vastly different health care decision-making, and it is consistent with the "Health Literacy Universal Precautions" approach advocated by the AHRQ (DeWalt et al., 2010).
Elsevier's ExitCare advertises that its standard PEM are written at 5th to 8th grade-levels, and that easy-to-read PEM are written at the 4th grade-level or below. This study demonstrates that readability of PEM provided in our EHR is in line with the reported readability by Flesch-Kincaid, but is at a higher grade-level when assessed by SMOG. This finding raises the concern that these PEM do not meet Joint Commission (2010) recommendations for health information materials to be available at a 5th grade-level. Several interventions may improve this situation. Vendors of PEM should assess readability with SMOG to help optimize patient understanding. It is essential to have easy-to-read options available for all patients. Finally, renaming levels of PEM (i.e., from “easy-to-read” and “standard” to “standard” and “advanced,” respectively) may eliminate stigma and thereby facilitate appropriate PEM distribution.
Study Limitations
Limitations of this study include exclusion bias, as non-English speakers and patients with significant disabilities impairing sight or speech were excluded. The percentage of patients with low or marginal English health literacy was therefore likely underestimated. Given the demographics of our patient population, these findings may only be generalizable to other urban clinics that serve a large proportion of Medicaid and Medicare patients. Despite a high participation rate among eligible patients (94.1%), selection bias must be considered: in our qualitative data, some people who declined to participate cited poor literacy as their reason for refusal. Finally, because this study was not blinded, experimenter bias is a possibility; all listed researchers engaged in both data collection and analysis.
The inconsistency between SMOG and Flesch-Kincaid reveals the challenge of standardizing readability assessments. Although both metrics rely on syllable counts, the number of syllables does not always correlate directly with complexity. If a polysyllabic word is defined with easy-to-understand language, it may no longer represent complex terminology; additionally, some polysyllabic medical terms may be unavoidable. Furthermore, because we did not measure patient comprehension, we cannot comment explicitly on patients' understanding of PEM. Additional tools that assess comprehension could provide insight into whether PEM are written at an appropriate level.
Conclusions
Our study identifies actionable areas of improvement in the delivery of PEM for health care providers and PEM creators. Ensuring that PEM are available at an appropriate reading level for the health literacy of the clinic population is essential. As a universal precaution, selecting easy-to-read PEM for a general patient population may help maximize readability, and ideally comprehension. Targeted interventions that account for the health literacy level in a patient population may improve doctor-patient communication, patient satisfaction, and health outcomes.
References
- Albright, J., de Guzman, C., Acebo, P., Paiva, D., Faulkner, M., & Swanson, J. (1996). Readability of patient education materials: Implications for clinical practice. Applied Nursing Research, 9(3), 139–143. 10.1016/S0897-1897(96)80254-0
- Altin, S. V., Finke, I., Kautz-Freimuth, S., & Stock, S. (2014). The evolution of health literacy assessment tools: A systematic review. BMC Public Health, 14, 1207. 10.1186/1471-2458-14-1207
- Arozullah, A. M., Yarnold, P. R., Bennett, C. L., Soltysik, R. C., Wolf, M. S., Ferreira, R. M., & Davis, T. (2007). Development and validation of a short-form, rapid estimate of adult literacy in medicine. Medical Care, 45(11), 1026–1033. 10.1097/MLR.0b013e3180616c1b
- Berkman, N. D., Sheridan, S. L., Donahue, K. E., Halpern, D. J., & Crotty, K. (2011). Low health literacy and health outcomes: An updated systematic review. Annals of Internal Medicine, 155(2), 97–107. 10.7326/0003-4819-155-2-201107190-00005
- Brach, C., Dreyer, B., Schyve, P., Hernandez, L. M., Baur, C., Lemerise, A. J., & Parker, R. (2012). Attributes of a health literate organization. Retrieved from The Joint Commission website: https://www.jointcommission.org/assets/1/6/10attributes.pdf
- Carrier, E., & Reschovsky, J. D. (2009, December). Expectations outpace reality: Physicians' use of care management tools for patients with chronic disease conditions (Issue Brief No. 129). Washington, DC: Center for Studying Health System Change.
- Centers for Medicare and Medicaid Services. (2010). Toolkit for making written material clear and effective, part 7: Using readability formulas. Retrieved from https://www.cms.gov/Outreach-and-Education/Outreach/WrittenMaterialsToolkit/Downloads/ToolkitPart07.pdf
- DeWalt, D. A., Callahan, L., Hawk, V. H., Broucksou, K. A., Hink, A., Rudd, R., & Brach, C. (2010). Health literacy universal precautions toolkit. Retrieved from Agency for Healthcare Research and Quality website: https://www.ahrq.gov/sites/default/files/wysiwyg/professionals/quality-patient-safety/quality-resources/tools/literacy-toolkit/healthliteracytoolkit.pdf
- Eichler, K., Wieser, S., & Brügger, U. (2009). The costs of limited health literacy: A systematic review. International Journal of Public Health, 54(5), 313–324. 10.1007/s00038-009-0058-2
- Fitzsimmons, P. R., Michael, B. D., Hulley, J. L., & Scott, G. O. (2010). A readability assessment of online Parkinson's disease information. The Journal of the Royal College of Physicians of Edinburgh, 40(4), 292–296. 10.4997/JRCPE.2010.401
- Haun, J. N., Patel, N. R., French, D. D., Campbell, R. R., Bradham, D. D., & Lapcevic, W. A. (2015). Association between health literacy and medical care costs in an integrated healthcare system: A regional population based study. BMC Health Services Research, 15(1), 249. 10.1186/s12913-015-0887-z
- The Joint Commission. (2010). Advancing effective communication, cultural competence, and patient- and family-centered care: A roadmap for hospitals. Retrieved from http://www.jointcommission.org/assets/1/6/ARoadmapforHospitalsfinalversion727.pdf
- Koh, H. K., & Rudd, R. E. (2015). The arc of health literacy. The Journal of the American Medical Association, 314(12), 1225–1226. 10.1001/jama.2015.9978
- Kutner, M., Greenberg, E., Jin, Y., & Paulsen, C. (2006). The health literacy of America's adults: Results from the 2003 National Assessment of Adult Literacy. Retrieved from http://nces.ed.gov/pubs2006/2006483.pdf
- Stossel, L. M., Segar, N., Gliatto, P., Fallar, R., & Karani, R. (2012). Readability of patient education materials available at the point of care. Journal of General Internal Medicine, 27(9), 1165–1170. 10.1007/s11606-012-2046-0
- Wang, L. W., Miller, M. J., Schmitt, M. R., & Wen, F. K. (2013). Assessing readability formula differences with written health information materials: Application, results, and recommendations. Research in Social and Administrative Pharmacy, 9(5), 503–516. 10.1016/j.sapharm.2012.05.009
- Wilson, M. (2009). Readability and patient education materials used for low-income populations. Clinical Nurse Specialist, 23(1), 33–40. 10.1097/01.NUR.0000343079.50214.31