Journal of Patient Experience. 2021 Mar 3;8:2374373521998847. doi: 10.1177/2374373521998847

Readability of Patient Education Materials From High-Impact Medical Journals: A 20-Year Analysis

Michael K Rooney, Gaia Santiago, Subha Perni, David P Horowitz, Anne R McCall, Andrew J Einstein, Reshma Jagsi, Daniel W Golden
PMCID: PMC8205335  PMID: 34179407

Abstract

Comprehensive patient education is necessary for shared decision-making. While patient–provider conversations primarily drive patient education, patients also use published materials to enhance their understanding. In this investigation, we evaluated the readability of 2585 patient education materials published in high-impact medical journals from 1998 to 2018 and compared our findings to readability recommendations from national groups. For all materials, mean readability grade levels ranged from 11.2 to 13.8 by various metrics. Fifty-four (2.1%) materials met the American Medical Association recommendation of sixth grade reading level, and 215 (8.2%) met the National Institutes of Health recommendation of eighth grade level. When stratified by journal and material type, general medical education materials from Annals of Internal Medicine were the most readable (P < .001), with 79.8% meeting the eighth grade level. Readability did not differ significantly over time. Efforts to standardize publication practice with the incorporation of readability evaluation during the review process may improve patients’ understanding of their disease processes and treatment options.

Keywords: patient education, patient engagement, medical decision making, health literacy

Introduction

Health literacy has been shown to strongly correlate with patient outcomes (1,2). Low health literacy is associated with more hospitalizations, greater use of emergency care, and lower receipt of preventative care measures such as mammograms and influenza vaccines (3). Although the mechanisms underpinning these complex relationships remain incompletely understood, it has been suggested that a combination of patient-specific and systemic factors contribute to well-documented disparities among patients with varying levels of health literacy (4). One proposed systemic factor is the presence of communication barriers between health care consumers and medical professionals, which can disproportionately affect patients with low health literacy (5,6). Patients who are unable to fully comprehend medical information may be at a disadvantage as active participants in their own health care decisions, potentially compromising their access to the highest quality of care (7). Unfortunately, overcoming obstacles of communication can be increasingly difficult as the breadth and complexity of medical care continues to grow (8). It is therefore paramount that action be taken to improve patient–provider communication practices on a large scale.

While direct interfacing with medical professionals is often the most important source of health care education for patients, high-quality written educational material can serve as a valuable adjunct (9). As such, the creation of accurate and understandable written patient education materials represents an opportunity to promote informed medical decision-making. Although the design of effective written educational materials relies upon myriad factors such as content and organization, the readability, defined as the “ease of understanding or comprehension due to the style of writing” also contributes to patient comprehension (10). The American Medical Association (AMA) and National Institutes of Health (NIH) recommend that patient materials be written at the sixth and eighth grade reading level, respectively (11,12).

The aim of this investigation is to evaluate whether patient education materials from widely circulated, high-impact medical journals are written in a way that is understandable for the general public. We hypothesized that such materials would be written at reading levels above national recommendations and therefore may be suboptimal as tools to educate diverse patient populations.

Materials and Methods

Journal Query and Material Identification

The publication history and article submission guidelines of high-impact medical journals were searched to identify peer-reviewed patient education materials with potential for broad readership. High-impact journals were defined as those having an impact factor of 10.0 or higher according to the 2017 Journal Citation Report (JCR; Clarivate Analytics). Results were filtered to include only journals falling under 60 predetermined categories in the JCR database (Supplemental Table 1) with potential medical relevance, determined upon manual review of all listed categories. For each journal, the entire publication history was searched to identify any patient education materials. Only documents published on or before December 31, 2018 that clearly designated patients or laypersons as the target audience were included. Because data collection was initiated in May 2019, this end date was selected to ensure that all time series data reflected full calendar year patterns. For each material, if an electronic version was available, the main text was copied directly into Word (Microsoft) for subsequent analysis. In cases where an electronic version was not readily available, materials were downloaded in PDF format and text was extracted to Word using Acrobat Reader (Adobe) and formatted for analysis.

Identified materials were further categorized as “general education” materials or “research lay summaries” according to their intended scope. For example, some journals publish materials designed to explain general medical conditions, therapies, or concepts to patients (eg, hypertension) and others publish materials designed to explain important research findings to a layperson audience (eg, describing results from a clinical trial). All materials were labeled accordingly and analyzed separately.

Readability Analysis

Readability analysis was conducted using Readability Studio 2012 (Oleander Software). Indices included in this study were Degrees of Reading Power—Grade Equivalent (DRP-GE) (13); Flesch-Kincaid (FK) readability test (14); Ford, Caylor, Sticht (FORCAST) index (15); Fry score (16); Gunning Fog index (17); Raygor estimate (18); and Simple Measure of Gobbledygook (SMOG) grade (19). These metrics were chosen because they are well-validated and report in grade-level equivalents, therefore allowing for meaningful comparisons across scores. Readability indices are calculated based on textual parameters such as average sentence length, use of polysyllabic words, and frequency of difficult or uncommon words. Individual calculation formulae for each of the included metrics are provided in Supplemental Table 2.

Because every metric is calculated differently, each may be prone to specific biases. For example, the FK score, which is among the most commonly used due to ease of calculation, may underestimate difficulty of materials that contain unfamiliar but short words, as the FK score is dependent only on average sentence and word length. It is therefore often recommended, as was performed in the present study, to use multiple readability metrics in order to minimize the bias of any individual metric on overall results and data interpretation (20).
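To illustrate how such surface features drive the scores, the FK grade can be sketched in a few lines of Python. This is a minimal sketch, not the validated implementation used by Readability Studio; in particular, the vowel-group syllable counter is a rough heuristic.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, dropping one for a silent
    trailing 'e'. Real readability software uses dictionary lookups."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """FK grade = 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Because only sentence length and syllables per word enter the formula, a short but unfamiliar medical term contributes no more to the score than a short common word, which is exactly the underestimation bias described above.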

Statistical Analysis

One-sample t tests were used to compare readability scores of the sample population to national recommendations and 1-way analysis of variance was used to compare readability scores across categorical variables. Chi-square testing was used to compare proportions of materials meeting recommended readability grade levels from various national groups. For this analysis, the most permissive (ie, lowest) score for each material was used and scores were rounded to the nearest whole number grade level. For example, a reported grade level of 8.4 was considered to meet the eighth grade recommendation, while a reported grade level of 8.5 was not. Statistical analyses were performed with R 3.5.1 (R Foundation).
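The most-permissive-score rule above can be made concrete with a short sketch. The function name and the half-up rounding are our assumptions about the procedure; note that Python's built-in round() uses half-to-even and would incorrectly treat 8.5 as meeting the eighth grade level.

```python
import math

def meets_grade_level(scores, target):
    """True if the lowest (most permissive) readability score, rounded
    half-up to the nearest whole grade level, is at or below the target."""
    rounded = math.floor(min(scores) + 0.5)  # half-up, unlike round()
    return rounded <= target
```

Under this rule, a material scoring [8.4, 12.1, 13.0] across metrics meets the eighth grade recommendation, while one scoring [8.5, 12.1] does not, matching the worked example in the text.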

Results

Two thousand five hundred eighty-five patient education materials from 10 high-impact medical journals were identified. Six of the included journals belonged to the Journal of the American Medical Association (JAMA) Network (JAMA, JAMA Cardiology, JAMA Dermatology, JAMA Internal Medicine, JAMA Oncology, and JAMA Pediatrics). The others were the American Journal of Respiratory and Critical Care Medicine (AJRCC), Circulation, Annals of the Rheumatic Diseases (ARD), and Annals of Internal Medicine (AIM). The majority of journals published general education materials for patients; only ARD and AIM published research lay summaries. AIM was the only journal to publish both types of materials.

Mean readability grade levels for materials are summarized in Table 1, with results described individually by the included indices. Readability grade levels were above national recommendations for the majority of journals, regardless of metric. In aggregate, materials were written at a mean readability grade level of 11.2 to 13.8 according to the 7 included metrics. AIM general education materials had the lowest mean readability level by all indices, with means ranging from grade level 6.8 to 10.5 across the various scores. However, AIM research lay summaries were written at mean reading grade levels ranging from 11.2 to 13.7 by the included metrics.

Table 1.

Mean Readability Grade Level of Patient Education Materials From High-Impact Medical Journals According to 7 Readability Indices.

Readability grade level (mean [SD])
Material type Journal name Sample size (n = 2585) DRP-GE Flesch-Kincaid FORCAST Fry Gunning Fog Raygor estimate SMOG
General education AIM (Gen Ed) 94 7.6 [2.6] 7.2 [1.8] 10.5 [1.1] 6.9 [1.5] 9.1 [2.0] 6.8 [2.1] 9.8 [1.3]
AJRCC 85 10.7 [2.4] 9.7 [1.5] 10.6 [0.7] 11.2 [2.8] 11.4 [1.7] 10.4 [2.4] 11.9 [1.3]
Circulation 131 14.6 [2.1] 13.3 [3.4] 11.2 [3.4] 15.2 [2.0] 15.0 [1.8] 14.4 [2.5] 14.9 [1.5]
JAMA 750 13.6 [2.4] 12.2 [1.9] 11.5 [0.7] 14.8 [2.5] 13.8 [2.1] 14.0 [2.8] 13.9 [1.6]
JAMA Cardiology 8 15.7 [2.3] 13.4 [2.5] 11.7 [0.7] 15.5 [2.4] 15.2 [2.7] 15.8 [2.4] 15.0 [2.3]
JAMA Dermatology 24 11.2 [2.0] 10.5 [1.4] 11.0 [0.7] 11.5 [2.3] 12.4 [2.0] 10.9 [1.6] 12.5 [1.4]
JAMA IM 2 9.4 9.2 10.7 10 11.3 8 11.7
JAMA Oncology 30 14.0 [2.0] 12.2 [1.9] 11.5 [0.7] 13.9 [2.5] 13.9 [2.3] 13.7 [2.5] 13.8 [1.6]
JAMA Pediatrics 89 11.7 [2.4] 10.9 [1.5] 10.9 [0.7] 12.2 [2.6] 12.4 [1.9] 11.9 [2.6] 12.8 [1.5]
Research lay summaries AIM (Research) 1229 12.9 [2.0] 11.2 [1.3] 11.2 [0.6] 13.7 [2.5] 12.7 [1.5] 12.3 [2.3] 13.0 [1.1]
ARD 143 12.8 [1.8] 11.3 [1.2] 11.0 [0.5] 13.0 [2.1] 12.9 [1.4] 12.2 [2.1] 13.1 [1.1]

Abbreviations: AIM (Gen Ed), Annals of Internal Medicine, General Patient Education; AIM (Research), Annals of Internal Medicine, Research Article Lay Summaries; AJRCC, American Journal of Respiratory and Critical Care Medicine; ARD, Annals of the Rheumatic Diseases; DRP-GE, Degrees of Reading Power—Grade Equivalent; FORCAST, Ford, Caylor, Sticht Index; JAMA, Journal of the American Medical Association; JAMA IM, Journal of the American Medical Association, Internal Medicine; SMOG, Simple Measure of Gobbledygook.

The proportions of materials meeting national recommendations are summarized in Figure 1. Further journal-specific information is provided in Supplemental Table 3. The majority of materials failed to meet either the recommended sixth or eighth grade level. In total, 54 (2.1%) of 2585 met the sixth grade level and 215 (8.2%) met the eighth grade level. When analyzed separately by journal and material type, AIM general education materials had the greatest proportion of materials meeting either recommendation (P < .001), with 42.6% meeting the AMA-recommended sixth grade level and 79.8% meeting the NIH-recommended eighth grade level.

Figure 1.

Proportion of materials meeting national readability recommendations. AIM (Gen Ed) indicates Annals of Internal Medicine, General Patient Education; AIM (Research), Annals of Internal Medicine, Research Article Lay Summaries; AJRCC, American Journal of Respiratory and Critical Care Medicine; ARD, Annals of the Rheumatic Diseases; JAMA, Journal of the American Medical Association; JAMA IM, Journal of the American Medical Association, Internal Medicine. *Small sample size (n = 2).

Figure 2 describes the readability grade levels of published materials over time from 1998 through 2018, again stratified by journal. Individual points represent the readability grade level for materials published in each year, calculated as the mean score of all included readability metrics. Publication dates of the AIM general education materials were not readily available so these materials were excluded from Figure 2. Regardless of journal or material type, readability scores were consistently well above recommendations throughout the analyzed time period, without significant changes over time.

Figure 2.

Mean readability grade levels of materials over time, stratified by journal. The AMA recommends that patient materials be written at the sixth grade level (red dotted line); the NIH recommends the eighth grade level (blue dotted line). AIM (Gen Ed) indicates Annals of Internal Medicine, general patient education; AIM (Research), Annals of Internal Medicine, Research Article Lay Summaries; AJRCC, American Journal of Respiratory and Critical Care Medicine; ARD, Annals of the Rheumatic Diseases; JAMA, Journal of the American Medical Association; JAMA IM, Journal of the American Medical Association, Internal Medicine; NIH, National Institutes of Health.

Raygor distributions describing the readability of materials are presented in Figure 3, with all materials from the JAMA network combined in panel F. The Raygor estimate is calculated by plotting the number of long words (defined as 6 or more characters) per 100 words on the x-axis and the number of sentences per 100 words on the y-axis. The location of the resulting point on the Raygor grid provides an estimate of the readability of that particular text. Use of short, simple sentences and avoidance of long words therefore translates into a lower grade level score. The Raygor estimate distributions in Figure 3 again demonstrate that the majority of materials are written at inappropriately high reading levels and fail to meet readability recommendations. Furthermore, the majority of points fall below the central demarcation in each Raygor grid, suggesting that use of long words, more than long sentences, is driving increased scores by this metric. Raygor estimates of readability, along with all other metrics used in the current study, have been widely applied in the setting of patient education and in other clinical scenarios such as evaluation of consent forms or templates for recording patient-reported outcomes (21-24).
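The two coordinates plotted on the Raygor grid can be sketched directly from the definition above. This is a simplified sketch (the function name is ours, and published Raygor procedure samples fixed 100-word passages rather than scaling the whole text).

```python
import re

def raygor_coordinates(text):
    """Coordinates plotted on the Raygor grid:
    x = long words (6 or more letters) per 100 words,
    y = sentences per 100 words."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    scale = 100 / len(words)
    long_words = sum(1 for w in words if len(w) >= 6)
    return long_words * scale, len(sentences) * scale
```

A text whose point sits far to the right (many long words) but low on the y-axis (few, long sentences) lands in the high-grade region of the grid, which is the pattern observed for most materials in Figure 3.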

Figure 3.

Readability of materials as measured by the Raygor estimate. (A) American Journal of Respiratory and Critical Care Medicine; (B) Annals of Internal Medicine (General Education); (C) Annals of Internal Medicine (Research Lay Summaries); (D) Annals of the Rheumatic Diseases; (E) Circulation; (F) Journal of the American Medical Association Network, including all subspecialty journals. Raygor estimates with high percentages of long words or short sentences are considered invalid.

Discussion

In this investigation evaluating the readability of peer-reviewed patient education materials published in high-impact medical journals, we found that the vast majority are written at reading levels far above national recommendations of sixth and eighth grade. Other than AIM general education materials, which encouragingly had a large proportion (79.8%) meeting the NIH target of an eighth grade level, this trend was apparent in all journals and spanned the entire study period from 1998 through 2018. These results reveal an important opportunity to enhance patient education and communication practices on a large scale. Modification of publication procedures by medical journals, for example, through the introduction of readability requirements for patient materials, might lead to meaningful improvements in how patients, particularly those with low health literacy, understand and make medical decisions within an increasingly complex health care system.

Comprehensive and accurate education of patients is a fundamental prerequisite for medical decision-making. Traditional paternalistic models of decision-making held physicians as the sole authoritative party responsible for all aspects of every medical interaction, from diagnosis to treatment choice (25). Under this model, there was minimal need for patient education, as physicians made most decisions (26). However, the shared decision-making model has more recently emerged as the preferred approach for achieving high-quality patient-centered care (27,28). The principles of shared decision-making propose that patients and physicians participate equally in medical decisions, weighing available evidence to choose optimal treatment paths that incorporate patients’ personal values and beliefs (29). Therefore, it is necessary that medical knowledge be communicated in an easily understandable manner so that patients can become active participants who are empowered as fully informed directors of their own care.

Patient education can be delivered through a variety of mechanisms, perhaps most commonly in the form of direct interface with providers. However, written educational material has been shown to enhance patient understanding of medical conditions and possible treatments (30). Written materials can be used as a framework to guide shared decision-making conversations to ensure that all aspects of a particular treatment decision are discussed (31). Furthermore, written materials provide patients and their families a reliable reference after leaving a health care visit. Prior research suggests that use of written materials in conjunction with other forms of patient education can lead to significant improvements in long-term retention of knowledge (32).

This investigation provides evidence that only a minority of currently available patient education materials from widely circulated medical journals are written in a way that may be accessible to many health care consumers. The average United States adult reads at the eighth grade level and the average Medicaid enrollee reads at the fifth grade level (33,34). However, the mean reading level of the 2585 identified materials was grade level 11.2 to 13.8 by the various indices. Even according to the most forgiving readability metric, only 8.2% of the identified materials met the NIH-recommended eighth grade level and 2.1% met the more aggressive recommendation of the sixth grade level.

When analyzed separately by journal and material type, a high reading level was noted for all materials other than the general education materials from AIM, of which nearly 80% were at or below an eighth grade level. In contrast, research lay summaries published in AIM were written at a mean grade level of 11.2 to 13.7, with only 4.7% of materials meeting the eighth grade recommendation. These differences might exist for several reasons. First, even within a single journal or journal network, different publication requirements may exist for different types of patient education materials. For instance, if readability evaluation is required for all general education materials prior to publication but is not a routine requirement for research lay summaries, then significant differences are likely to arise. Second, the nature of the medical concepts themselves may bias the readability of education materials. As compared to research topics, general education materials might more often explain concepts, for example, obesity or stroke, that are easily described with lay terms. Lastly, differences may exist in the scope of research and general education materials, which might affect the flexibility of language that can be used for each type of text. As an example, a document describing a scientific study might require the use of complex terms in order to communicate the message of that study without compromising accuracy. By contrast, a material describing a general medical topic might be more flexible in its use of language, allowing authors more opportunity to write at reading levels consistent with national recommendations. Because readability is universally important for effective written communication, these observed differences, regardless of the precise causal mechanism, should encourage targeted efforts to evaluate and improve patient materials that may be at higher risk of poor readability. For example, as illustrated by our analysis, careful consideration of readability should be given to materials describing complex topics such as research studies or other scientific subjects.

Increasing the accessibility of widely circulated patient education materials represents a promising opportunity to improve patient outcomes (2). As described previously, low health literacy is disproportionately prevalent in patients with lower levels of educational attainment and is associated with lower rates of adoption of basic health-promotion behaviors such as taking medications as directed, which translate into worse outcomes overall (35,36). One central mechanism to explain these associations is the inability of patients with limited health literacy to fully comprehend medical information that is communicated either verbally or through written text. Efforts to strengthen communication through the development of easily readable health education resources would therefore increase the ability of patients to gain and retain medical knowledge. Improvements in comprehension would promote patient autonomy and encourage active patient participation in medical decisions according to the shared decision-making model.

Although designing easily readable materials without compromising accuracy or breadth of content is challenging, this study provides strategies to guide the development of effective educational texts. Simply promoting awareness of readability as an important contributor to patient understanding of materials is essential. Our results suggest that many journals are not routinely evaluating the readability of patient materials prior to publication. Modification of peer-review or editorial practices to require readability assessment may lead to improvements in the proportions of materials meeting AMA and NIH readability recommendations. However, awareness alone is not sufficient; for authors unfamiliar with principles of readability, it can be difficult to describe complex medical concepts in an easily understandable manner.

The majority of readability metrics are dependent on parameters such as average sentence length, word length, and use of difficult or uncommon terms (Supplemental Table 2). The use of short words and sentences will generally translate into improved readability (20). Our Raygor score analysis (Figure 3) shows that the majority of materials fall below the central demarcation line along the Raygor grid, suggesting that frequent use of complex or long words is driving worse readability scores more often than long sentences. This likely reflects the inherent bias of medical education materials, which often require the use of terminology that may be uncommon or foreign to the lay audience. If possible, intentional efforts to minimize unnecessary use of complex terms, possibly through incorporation of medical abbreviations, could help mitigate this bias. However, abbreviations themselves should be employed with caution, as they may actually increase difficulty of comprehension for readers unfamiliar with medical terminology despite an apparent improvement in readability scores. In some instances, it may simply not be possible to write a comprehensive and accurate description of a medical concept without using certain requisite terms. In such cases, it may be best practice to target the less aggressive eighth grade reading level suggested by the NIH. For authors interested in evaluating readability of text, free online calculators are available (https://readable.com, https://readabilityformulas.com), and some word-processing programs have built-in readability testing capability. However, for the most comprehensive readability evaluations or for analysis of large cohorts of materials, we suggest more rigorous software packages.
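The dominance of polysyllabic words can be seen directly in the SMOG formula, which counts only three-or-more-syllable words. A minimal sketch follows; the vowel-group syllable counter is a crude stand-in for the dictionary-based counting used by dedicated software.

```python
import math
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups in the word."""
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def smog_grade(text: str) -> float:
    """SMOG grade = 1.0430 * sqrt(polysyllables * 30/sentences) + 3.1291,
    where polysyllables are words of 3 or more syllables."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    poly = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(poly * 30 / len(sentences)) + 3.1291
```

A passage with no three-syllable words scores the formula's floor of roughly grade 3.1; each additional polysyllabic term raises the grade, which is why substituting shorter words (or abbreviations) lowers the score even when it does not genuinely ease comprehension.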

Limitations

Although this study evaluates readability of patient education materials from a variety of high-impact medical journals and includes a broad study period, it is nonetheless limited by factors related to the experimental design. First, the goal of the investigation was to identify and analyze peer-reviewed materials with potentially wide readership. To objectively identify such materials, we searched only medical journals with an impact factor of 10.0 or higher. While impact factor may be a useful and reproducible metric, it is possible that many patient education materials with potentially large readership were not included. Second, the target audience of a published material (eg, whether it is intended to educate laypersons or medical professionals) may not be explicitly stated, and therefore, decisions regarding material inclusion may vary across reviewers. Third, although each journal’s publication history was reviewed in its entirety, it is possible that some materials in a given journal which would have met inclusion criteria were not included. However, because this study draws from a large sample (n = 2585) of materials, it is unlikely that any missed documents would have led to meaningful differences in the study conclusions.

Additionally, appropriate readability levels alone do not guarantee comprehension of materials. Many other facets of material design, such as layout, font size, and use of graphics, also contribute to the effectiveness of written educational text (24). Evaluation of these components was not attempted in this study. Further, although improvement in readability of materials would theoretically improve education for diverse populations, it is possible that certain individuals may have limited access to these published materials in the first place. Unequal access to education resources could therefore introduce or exacerbate disparities across populations, and efforts to improve readability might not affect comprehension for all patients equally. Lastly, though the included readability metrics have been externally validated, they are not necessarily dependent on direct input from end users (eg, patients). Ideally, the effectiveness of materials would be evaluated by individuals from the target population; however, such evaluation may often not be feasible and therefore surrogate markers such as readability metrics may be required. Nonetheless, this is an important consideration for interpretation and contextualization of this study.

Conclusion

This investigation analyzes the readability of peer-reviewed patient education materials from high-impact medical journals and demonstrates that only a small minority of these materials are written at a grade level appropriate for the general population. Promisingly, general education materials published in AIM had a large proportion of materials (79.8%) meeting this goal. These results suggest that significant differences in review and publication processes exist across medical journals, with some appearing to emphasize readability of patient education materials more than others. This investigation highlights an important opportunity to enhance the large-scale education of patients through improvements in the readability and patient-centered design of written educational material. Efforts to address this need may allow patients from diverse backgrounds and varying levels of educational attainment to better participate as autonomous decision makers in an increasingly complex medical system.

Supplemental Material

Supplemental Material, sj-pdf-1-jpx-10.1177_2374373521998847 - Readability of Patient Education Materials From High-Impact Medical Journals: A 20-Year Analysis


Author Biographies

Michael K Rooney is currently an intern in the combined preliminary internal medicine program at the University of Texas at Houston and MD Anderson Cancer Center. He will complete his residency training in radiation oncology at MD Anderson.

Gaia Santiago is a medical student at the University of Illinois at Chicago College of Medicine.

Subha Perni is a resident in the Harvard Radiation Oncology Program.

David P Horowitz is an assistant professor of Radiation Oncology at New York-Presbyterian/Columbia University Irving Medical Center. He serves as Director of Medical Student Education as well as Associate Program Director for the department’s residency program. His clinical focus is gastrointestinal cancer, breast cancer, and lymphoma.

Anne R McCall is an associate professor of Radiation Oncology at the University of Chicago and specializes in the treatment of breast cancer, gynecologic cancer, and lymphoma. She also serves as the Medical Director of Radiation Oncology for the University of Chicago Comprehensive Cancer Center at Silver Cross Hospital.

Andrew J Einstein is an associate professor of Medicine at Columbia University Irving Medical Center and New York-Presbyterian Hospital where he serves as Director of Nuclear Cardiology, Cardiac CT, and Cardiac MRI, and Director of the Advanced Cardiac Imaging Fellowship. He is the chair-elect of the Academic Cardiology Section of the American College of Cardiology, and a member of the board of directors of the American Society of Nuclear Cardiology and the Cardiovascular Council of the Society of Nuclear Medicine and Molecular Imaging. He serves as a member of the Congressionally-chartered National Council on Radiation Protection and Measurements, and served as a voting member of the Food and Drug Administration’s Medical Imaging Drugs Advisory Committee.

Reshma Jagsi is newman family professor and deputy chair in the Department of Radiation Oncology and Director of the Center for Bioethics and Social Sciences in Medicine at the University of Michigan. Dr Jagsi’s research focuses on improving the quality of care received by patients with breast cancer, both by advancing the ways in which breast cancer is treated with radiotherapy and by advancing the understanding of patient decision-making, cost, and access to appropriate care. Her research also considers issues of bioethics and gender equity in academic medicine. She has been elected to the American Society of Clinical Investigation and elected fellow of ASTRO, ASCO, AAWR, and the Hastings Center.

Daniel W Golden is assistant professor of Radiation and Cellular Oncology at the University of Chicago. His research focuses on medical education for trainees and patients. He serves as his department’s medical student clerkship director and associate residency program director. He is the founder and Chair of the Leadership Committee for the Radiation Oncology Education Collaborative Study Group, an international collaborative group aimed at developing and disseminating radiation oncology educational curricula.

Footnotes

Authors’ Note: Ethics approval is not applicable for this article. It does not contain any studies with human or animal subjects. As there are no human subjects, informed consent is not applicable.

Declaration of Conflicting Interests: The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article. All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr. Horowitz reported receiving consulting fees and travel reimbursement from Carl Zeiss and consulting fees from Champions Oncology. Dr. Einstein reported receiving grant funding for unrelated research from the National Heart, Lung, and Blood Institute, the National Cancer Institute, the International Atomic Energy Agency, Canon Medical Systems, Roche Medical Systems, and W. L. Gore & Associates; he has received consulting fees from GE Healthcare and W. L. Gore & Associates. Dr. Jagsi has stock options as compensation for her advisory board role in Equity Quotient, a company that evaluates culture in health care companies; she has received personal fees from Amgen and Vizient and grants for unrelated work from the National Institutes of Health, the Doris Duke Foundation, the Greenwall Foundation, the Komen Foundation, and Blue Cross Blue Shield of Michigan for the Michigan Radiation Oncology Quality Consortium. She has a contract to conduct an investigator-initiated study with Genentech. She has served as an expert witness for Sherinian and Hasso and Dressman Benzinger LaVelle. She is an uncompensated founding member of TIME’S UP Healthcare and a member of the Board of Directors of ASCO. Dr. Golden reported receiving grant funding from the National Institutes of Health, Radiation Oncology Institute, and Bucksbaum Institute for Clinical Excellence. He is manager of RadOncQuestions LLC and HemOncReview LLC. No other disclosures were reported.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iD: Michael K Rooney, MD https://orcid.org/0000-0002-2860-4653

Supplemental Material: Supplemental material for this article is available online.

References

  • 1. Wolf MS, Gazmararian JA, Baker DW. Health literacy and functional health status among older adults. Arch Intern Med. 2005;165:1946. [DOI] [PubMed] [Google Scholar]
  • 2. Dewalt DA, Berkman ND, Sheridan S, Lohr KN, Pignone MP. Literacy and health outcomes: a systematic review of the literature. J Gen Intern Med. 2004;19:1228–1239. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Crotty K. Low health literacy and health outcomes: an updated systematic review. Ann Intern Med. 2011;155:97. [DOI] [PubMed] [Google Scholar]
  • 4. Paasche-Orlow MK, Wolf MS. The causal pathways linking health literacy to health outcomes. Am J Health Behav. 2007;31:S19–26. [DOI] [PubMed] [Google Scholar]
  • 5. Seurer AC, Vogt HB. Low health literacy: a barrier to effective patient care. S D Med. 2013;66:51, 53–57. [PubMed] [Google Scholar]
  • 6. Wynia MK, Osborn CY. Health literacy and communication quality in health care organizations. J Health Commun. 2010;15:102–115. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Institute of Medicine (US) Committee on Health Literacy. Health Literacy: A Prescription to End Confusion. Washington (DC): National Academies Press (US). 2004. Accessed March 2, 2020. http://www.ncbi.nlm.nih.gov/books/NBK216032/ [PubMed]
  • 8. World Health Organization. Increasing complexity of medical technology and consequences for training and outcome of care: background paper 4, August 2010. 2010. Accessed March 2, 2020. https://apps.who.int/iris/handle/10665/70455
  • 9. Funnell MM, Donnelly MB, Anderson RM, Johnson PD, Oh MS. Perceived effectiveness, cost, and availability of patient education methods and materials. Diabetes Educ. 1992;18:139–145. [DOI] [PubMed] [Google Scholar]
  • 10. Dubay WH. The Principles of Readability. Impact Information; 2014. [Google Scholar]
  • 11. Weiss BD. Health literacy: a manual for clinicians. American Medical Association Foundation and American Medical Association; 2003. [Google Scholar]
  • 12. National Institutes of Health. How to write easy-to-read health materials.
  • 13. Kibby MW. Test review: the Degrees of Reading Power. J Read. 1981;24:416–427. [Google Scholar]
  • 14. Flesch R. A new readability yardstick. J Appl Psychol. 1948;32:221–233. [DOI] [PubMed] [Google Scholar]
  • 15. Caylor JS, Sticht TG, Fox LC, Ford JP. Methodologies for determining reading requirements of military occupational specialties. 1973. Accessed September 1, 2018. https://eric.ed.gov/?id=ED074343
  • 16. Fry E. A readability formula that saves time. J Read. 1968;11:513–578. [Google Scholar]
  • 17. Gunning R. The technique of clear writing. McGraw-Hill; 1968. [Google Scholar]
  • 18. Raygor AL. The Raygor readability estimate: a quick and easy way to determine difficulty. In: Pearson PD, ed. Reading: Theory, Research, and Practice. National Reading Conference; 1977:259–263. [Google Scholar]
  • 19. McLaughlin GH. SMOG grading: a new readability formula. J Read. 1969;12:639–646. [Google Scholar]
  • 20. Zhou S, Jeong H, Green PA. How consistent are the best-known readability equations in estimating the readability of design standards? IEEE Trans Profess Commun. 2017;60:97–111. [Google Scholar]
  • 21. Perni S, Rooney MK, Horowitz DP, Golden DW, McCall AR, Einstein AJ, et al. Assessment of use, specificity, and readability of written clinical informed consent forms for patients with cancer undergoing radiotherapy. JAMA Oncol. 2019;5:e190260. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Papadakos JK, Charow RC, Papadakos CJ, Moody LJ, Giuliani ME. Evaluating cancer patient–reported outcome measures: readability and implications for clinical use. Cancer. 2019;125:1350–1356. [DOI] [PubMed] [Google Scholar]
  • 23. Rooney MK, Sachdev S, Byun J, Jagsi R, Golden DW. Readability of patient education materials in radiation oncology—are we improving? Pract Radiat Oncol. 2019;9:435–440. [DOI] [PubMed] [Google Scholar]
  • 24. Friedman DB, Hoffman-Goetz L. A systematic review of readability and comprehension instruments used for print and web-based cancer information. Health Educ Behav. 2006;33:352–373. [DOI] [PubMed] [Google Scholar]
  • 25. Taylor K. Paternalism, participation and partnership—the evolution of patient centeredness in the consultation. Patient Educ Couns. 2009;74:150–155. [DOI] [PubMed] [Google Scholar]
  • 26. Hoving C, Visser A, Mullen PD, van den Borne B. A history of patient education by health professionals in Europe and North America: from authority to shared decision making education. Patient Educ Couns. 2010;78:275–281. [DOI] [PubMed] [Google Scholar]
  • 27. Barry MJ, Edgman-Levitan S. Shared decision making—the pinnacle of patient-centered care. N Engl J Med. 2012;366:780–781. [DOI] [PubMed] [Google Scholar]
  • 28. Whitney SN. A new model of medical decisions: exploring the limits of shared decision making. Med Decis Making. 2003;23:275–280. [DOI] [PubMed] [Google Scholar]
  • 29. Elwyn G, Frosch D, Thomson R, Joseph-Williams N, Lloyd A, Kinnersley P, et al. Shared decision making: a model for clinical practice. J Gen Intern Med. 2012;27:1361–1367. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. Friedman AJ, Cosby R, Boyko S, Hatton-Bauer J, Turnbull G. Effective teaching strategies and methods of delivery for patient education: a systematic review and practice guideline recommendations. J Cancer Educ. 2011;26:12–21. [DOI] [PubMed] [Google Scholar]
  • 31. Arya R, Ichikawa T, Callender B, Schultz O, DePablo M, Novak K, et al. Communicating the external beam radiation experience (CEBRE): perceived benefits of a graphic narrative patient education tool. Pract Radiat Oncol. Epub ahead of print 11 September 2019. doi:10.1016/j.prro.2019.09.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32. Wilson EAH, Park DC, Curtis LM, Cameron KA, Clayman ML, Makoul G, et al. Media and memory: the efficacy of video and print materials for promoting patient education about asthma. Patient Educ Couns. 2010;80:393–398. [DOI] [PubMed] [Google Scholar]
  • 33. Davis TC, Wolf MS. Health literacy: implications for family medicine. Fam Med. 2004;36:595–598. [PubMed] [Google Scholar]
  • 34. Weiss BD, Blanchard JS, McGee DL, Hart G, Warren B, Burgoon M, et al. Illiteracy among Medicaid recipients and its relationship to health care costs. J Health Care Poor Underserved. 1994;5:99–111. [DOI] [PubMed] [Google Scholar]
  • 35. Friis K, Lasgaard M, Osborne RH, Maindal HT. Gaps in understanding health and engagement with healthcare providers across common long-term conditions: a population survey of health literacy in 29,473 Danish citizens. BMJ Open. 2016;6:e009627. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Williams MV, Parker RM, Baker DW, Parikh NS, Pitkin K, Coates WC, et al. Inadequate functional health literacy among patients at two public hospitals. JAMA. 1995;274:1677–1682. [PubMed] [Google Scholar]

Supplementary Materials

Supplemental Material, sj-pdf-1-jpx-10.1177_2374373521998847 - Readability of Patient Education Materials From High-Impact Medical Journals: A 20-Year Analysis


Articles from Journal of Patient Experience are provided here courtesy of SAGE Publications