Abstract
Objectives
To assess readability and understandability of online materials for vocal cord leukoplakia.
Study Design
Review of online materials.
Setting
Academic medical center.
Methods
A Google search of “vocal cord leukoplakia” was performed, and the first 50 websites were considered for analysis. Readability was measured by the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), and Simple Measure of Gobbledygook (SMOG). Understandability and actionability were assessed by 2 independent reviewers with the PEMAT-P (Patient Education Materials Assessment Tool for Printable Materials). Unpaired t tests compared scores between sites aimed at physicians and those aimed at patients, and Cohen’s kappa was calculated to measure interrater reliability.
Results
Twenty-two websites (17 patient oriented, 5 physician oriented) met inclusion criteria. For the entire cohort, FRES, FKGL, and SMOG scores (mean ± SD) were 36.90 ± 20.65, 12.96 ± 3.28, and 15.65 ± 3.57, respectively, indicating that materials were difficult to read, at a >12th-grade level. PEMAT-P understandability and actionability scores were 73.65% ± 7.05% and 13.63% ± 22.47%. Patient-oriented sites were significantly more readable than physician-oriented sites (P < .02 for each of the FRES, FKGL, and SMOG comparisons); there were no differences in understandability or actionability scores between these categories of sites.
Conclusion
Online materials for vocal cord leukoplakia are written at a level more advanced than what is recommended for patient education materials. Awareness of the current ways that these online materials are failing our patients may lead to improved education materials in the future.
Keywords: vocal cord leukoplakia, readability, Patient Education Materials Assessment Tool for Printable Materials, understandability, Flesch Reading Ease Score, Flesch-Kincaid Grade Level
The use of search engines is a common first step for patients seeking medical advice. If search results are not understandable, the patient’s ability to make informed choices is impaired, and outcomes can suffer through delayed care or reduced adherence to care plans. Unfortunately, many resources are written at a level that may not be understandable to most patients. The American Medical Association (AMA) and the National Institutes of Health (NIH) recommend that patient education resources be written at a sixth-grade level.1 In light of this recommendation, many publications have evaluated the readability and quality of online medical education materials in disciplines including urology,2 plastic surgery,3 cardiology,4 ophthalmology,5 rheumatology,6 and otolaryngology.7
These analyses are increasingly important in certain fields where the speed of novel technological developments overtakes what can typically be found in traditional print resources or when there is confusion concerning evaluation and treatment of a particular condition. Recent readability assessments within otolaryngology have focused on dysphagia,8 in-office vocal fold injections,9 and oropharyngeal cancer.10 Regarding vocal cord leukoplakia, there remain active discussions within the medical community on malignant potential and desired degree of surgical care (biopsy vs complete microflap removal), as well as the role of nonoperative management, use of angiolytic lasers, and potential for office-based treatment.11-14 Correlations between pathologic classification and biologic behavior have been historically poor such that the World Health Organization recently simplified recommended pathologic categorization for vocal fold dysplasia.15 Given the variations in care that patients may receive depending on the management approach recommended by the otolaryngologist, it is important for patients to have access to readable and understandable online materials to learn about available treatment options and participate in informed decision making. To our knowledge, no such readability analysis has been performed on patient education materials for vocal cord leukoplakia. Our aim in this study is to assess the readability and quality of online materials for vocal cord leukoplakia.
Methods
This review of online education materials was not considered human subjects research and was thus exempt from full review by the Johns Hopkins Medicine Institutional Review Board. A Google search was conducted with the search term vocal cord leukoplakia on March 8, 2021. As in other readability studies, Google was selected as it accounts for >70% of all internet searches, making it the most popular internet search engine.16,17
Websites that were advertisements, contained broken links, were not written in English, or were primarily images rather than text were excluded from analysis. Message boards, pages with <30 sentences of text, and image/video-based pages were also excluded. In addition, results that linked to academic research papers were excluded from analysis, as these were not original web content but instead links to published written materials. The top 50 search results identified in this search strategy were considered for review, as the quality of information is thought to decline after the top 50 results.18 Search results that did not meet the criteria were excluded from analysis, as described in other readability studies.19,20
Once identified, the websites were designated as being oriented toward a patient or professional audience, as described in other readability studies within otolaryngology.8,9 Patient-targeted sources were overtly written to address patient audiences in language without technical medical jargon and/or were from medical clinics or hospital centers advertising services to patients. Professional-targeted sources were overtly written to educate and communicate with health care professionals and were often websites hosted by professional societies or online texts. For instance, professional-targeted sources contained information describing how to diagnose leukoplakia with videostroboscopy and discussed technical aspects of the surgical treatment of leukoplakia. The text of each identified site was archived for analysis, and the website address (uniform resource locator [URL]) and access date were recorded.
The readability of the websites was analyzed with the Flesch Reading Ease Score (FRES), the Flesch-Kincaid Grade Level (FKGL) readability test, and the Simple Measure of Gobbledygook (SMOG) Readability Formula, all of which were calculated with an online calculator.21 These readability metrics use a combination of word count, sentence number, and syllables to derive readability scores.
A lower FRES corresponds to lower readability. Scores generally fall between 0 and 100. The highest possible score (easiest readability) is 121.22, achieved only if every sentence consists of a single 1-syllable word; it is also possible to generate negative scores with words of many syllables.22 For context, the Harvard Law Review has a general readability score in the low 30s. Texts with scores between 90 and 100 are considered “very easy” to understand and are thought to be easily understood by a fifth-grade reader. Texts with lower FRES values are progressively harder to understand.21
In contrast, with the FKGL and SMOG formulas, a lower score indicates easier readability. The result of the FKGL corresponds with a US grade level; for example, a result of 9.3 indicates a ninth-grade reading level. The SMOG formula calculates the number of polysyllabic words and converts this to a corresponding level of education needed to understand a piece of writing.21
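The three metrics combine sentence length and syllable counts in different ways. As a minimal sketch, the standard published formulas (Flesch reading ease, Flesch-Kincaid grade level, and McLaughlin's SMOG grade) can be computed directly from word, sentence, and syllable counts; note that counting syllables from raw text is itself nontrivial and is assumed done here.

```python
import math

def fres(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease Score: higher = easier to read."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def fkgl(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: result maps to a US school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog(polysyllables: int, sentences: int) -> float:
    """SMOG grade: based on words of 3+ syllables, normalized to 30 sentences."""
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

# A text of nothing but single-syllable one-word sentences hits the
# FRES ceiling of 121.22 mentioned above.
print(round(fres(words=10, sentences=10, syllables=10), 2))  # → 121.22
```

The ceiling of 121.22 follows directly from the constants: 206.835 − 1.015(1) − 84.6(1).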
Understandability and actionability of each website were evaluated with the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P). The PEMAT-P is a validated 24-point measure developed by Shoemaker et al23 for the Agency for Healthcare Research and Quality to evaluate understandability and actionability of patient education materials through criteria such as content, word choice, use of numbers, organization, layout/design, and use of visual aids. According to the PEMAT-P, education materials are deemed understandable when readers of diverse backgrounds and varying levels of health literacy can process and explain key messages, and materials are deemed actionable when consumers of diverse backgrounds and varying levels of health literacy can identify what they can do based on the information presented.23 Higher PEMAT-P scores correspond to more easily understood materials. Whereas the FRES, FKGL, and SMOG are calculated objectively, PEMAT-P scoring is subjective: various features of a website that contribute to understandability are graded as present or absent by reviewers. Because of the inherently subjective nature of the PEMAT-P scoring methodology, PEMAT-P scores were independently calculated by 2 blinded reviewers (M.S. and G.E.S.). If reviewer scores differed by >10 points for a particular website, discrepancies were reviewed to resolve any inadvertent errors in scoring. Interrater reliability was assessed with a Cohen’s kappa calculation in Microsoft Excel.
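Cohen's kappa corrects the raw proportion of rater agreement for the agreement expected by chance, κ = (p_o − p_e)/(1 − p_e). A minimal sketch follows; the item ratings shown are hypothetical illustrations, not the study's data.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement: chance that both raters pick the same label,
    # given each rater's marginal label frequencies.
    labels = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical present/absent PEMAT-P item ratings from two reviewers.
a = [1, 1, 0, 0, 1, 0]
b = [1, 0, 0, 0, 1, 1]
print(round(cohens_kappa(a, b), 3))  # → 0.333
```

With 4 of 6 matches (p_o = 0.667) and chance agreement p_e = 0.5, κ works out to exactly 1/3; perfect agreement yields κ = 1.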
Each website was analyzed for HONcode certification. The HON Foundation (Health on the Net) is a nonprofit organization that strives to identify high-quality online health information through its HONcode certification process. The foundation is supported by the Economic and Social Council of the United Nations. Websites can apply for certification and are then evaluated against the foundation’s 8 key principles to determine eligibility: authority, complementarity, confidentiality, attribution, justifiability, transparency, financial disclosure, and advertising.24 An HONcode toolbar was installed on the research team’s Google Chrome web browser, which automatically indicated whether each website possessed HONcode certification. Microsoft Excel was used to perform unpaired 2-sample t tests to compare PEMAT-P scores from websites with and without HONcode certification and to evaluate statistical significance in differences in PEMAT-P, FRES, FKGL, and SMOG scores between the patient- and physician-targeted websites. An a priori alpha of <.05 was set as the threshold for statistical significance. Correlations between understandability and the FRES, FKGL, and SMOG were also calculated.
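Excel's unpaired 2-sample t test comes in equal- and unequal-variance forms; since group sizes here differ markedly (17 vs 5), a Welch (unequal-variance) sketch of the statistic is shown below. This is an illustrative stdlib-only version with hypothetical inputs; the p-value lookup against the t distribution is omitted to keep it self-contained.

```python
from statistics import mean, variance

def welch_t(x, y):
    """Welch's unpaired two-sample t statistic and degrees of freedom.

    The p-value would come from the t distribution with df degrees of
    freedom; that lookup is omitted here to stay stdlib-only.
    """
    vx, vy = variance(x) / len(x), variance(y) / len(y)
    t = (mean(x) - mean(y)) / (vx + vy) ** 0.5
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
    return t, df

# Hypothetical per-site scores for two groups of websites.
t, df = welch_t([1, 2, 3, 4, 5], [3, 4, 5, 6, 7])
print(round(t, 2), round(df, 1))  # → -2.0 8.0
```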
Results
Of the 50 websites reviewed, 28 were eliminated because they were online versions of published journal research articles (n = 25) or lacked sufficient text (n = 3). There were 22 sites included in the final analysis: 5 physician- and 17 patient-oriented sites. None of the websites analyzed had a reading level of the sixth grade or lower, as recommended by the AMA and the NIH (Table 1). The FRES, FKGL, and SMOG scores (mean ± SD) for the entire cohort of 22 websites were 36.90 ± 20.65, 12.96 ± 3.28, and 15.65 ± 3.57, respectively. These scores fall in the “very confusing” range overall, with an expectation that readers would need a 12th- to 16th-grade education to comprehend the materials. The PEMAT-P understandability score was 73.65% ± 7.05% and the actionability score was 13.63% ± 22.47% (Table 2). Cohen’s kappa, calculated to determine interrater reliability, was 0.89 (95% CI, 0.85-0.94), indicating an almost perfect degree of interrater agreement in assignment of PEMAT-P scores per the standards of Landis and Koch.25
Table 1.
Search Results for Vocal Cord Leukoplakia Performed on March 8, 2021.
Abbreviations: FKGL, Flesch-Kincaid Grade Level; FRES, Flesch Reading Ease Score; SMOG, Simple Measure of Gobbledygook.
Table 2.
Comparison of Results for Patient- vs Physician-Targeted Websites.
| Score, mean ± SD | Total (n = 22) | Patient oriented (n = 17) | Physician oriented (n = 5) | P value^a |
|---|---|---|---|---|
| FRES | 36.90 ± 20.65 | 43.87 ± 13.54 | 13.18 ± 22.92 | .0011 |
| FKGL | 12.96 ± 3.28 | 12.05 ± 2.64 | 16.06 ± 3.36 | .0105 |
| SMOG | 15.65 ± 3.57 | 14.54 ± 2.64 | 19.42 ± 3.74 | .0035 |
| Understandability, % | 73.65 ± 7.05 | 75.05 ± 7.27 | 68.89 ± 3.07 | .0833 |
| Actionability, % | 13.63 ± 22.47 | 14.11 ± 21.98 | 12.00 ± 24.00 | .855 |
Abbreviations: FKGL, Flesch-Kincaid Grade Level; FRES, Flesch Reading Ease Score; SMOG, Simple Measure of Gobbledygook.
^a Patient vs physician.
Comparison of patient- and physician-oriented sites on readability, understandability, and actionability measures is shown in Table 2. Patient-oriented sites were significantly more readable than physician-oriented sites, with P < .02 for comparisons across the FRES, FKGL, and SMOG measures, but overall scores for patient-directed websites still indicate difficult readability at a 12th-grade level. PEMAT-P scores were not statistically different between patient- and physician-oriented sites.
Of the 22 websites included for analysis, only 6 (27.3%) were HONcode verified. Of these 6 sites, 5 were patient targeted and 1 was physician targeted. Readability, understandability, and actionability scores for HONcode-verified versus nonverified sites are shown in Table 3; there was no difference in scores when analyzed by HONcode status.
Table 3.
Results for HONcode-Verified Sites.
| HONcode, mean ± SD | Verified (n = 6) | Nonverified (n = 16) | P value |
|---|---|---|---|
| FRES | 42.72 ± 24.15 | 34.71 ± 18.72 | .4177 |
| FKGL | 11.92 ± 3.77 | 13.35 ± 2.99 | .3610 |
| SMOG | 14.50 ± 4.86 | 16.09 ± 2.83 | .3478 |
| Understandability, % | 74.44 ± 5.27 | 73.36 ± 7.59 | .7527 |
| Actionability, % | 16.67 ± 24.27 | 12.00 ± 19.91 | .6488 |
Abbreviations: FKGL, Flesch-Kincaid Grade Level; FRES, Flesch Reading Ease Score; SMOG, Simple Measure of Gobbledygook.
Correlations between readability grade level and PEMAT-P understandability were calculated (Figures 1-3). There was a moderately negative correlation between FKGL and understandability (r = −0.35) as well as between SMOG and understandability (r = −0.34). A moderately positive correlation was found between FRES reading ease and understandability (r = 0.37). Because lower FKGL and SMOG scores, but higher FRES scores, indicate easier readability, these correlations all point the same way: easier-to-read materials were associated with modest improvements in understandability.
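The correlations above are Pearson coefficients; a minimal sketch follows, with hypothetical per-site values (not the study's data) chosen so that grade level and understandability move in opposite directions.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-site FKGL vs PEMAT-P understandability values:
# as grade level rises, understandability falls (negative r).
fkgl_scores = [10, 12, 14, 16, 18]
understandability = [80, 78, 73, 70, 66]
print(f"{pearson_r(fkgl_scores, understandability):.2f}")  # → -0.99
```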
Figure 1.

Correlation between PEMAT-P understandability score and FKGL. FKGL, Flesch-Kincaid Grade Level; PEMAT-P, Patient Education Materials Assessment Tool for Printable Materials.
Figure 2.

Correlation between PEMAT-P understandability score and SMOG. PEMAT-P, Patient Education Materials Assessment Tool for Printable Materials; SMOG, Simple Measure of Gobbledygook.
Figure 3.

Correlation between PEMAT-P understandability score and FRES. FRES, Flesch Reading Ease Score; PEMAT-P, Patient Education Materials Assessment Tool for Printable Materials.
Discussion
Over 70% of adults seek health-related information online.26,27 Unfortunately, most websites containing patient education material are written at a reading level beyond what is easily understood by most patients. In this study, the 22 sites with online education materials related to vocal cord leukoplakia were written at readability levels above those recommended by the AMA/NIH. These results are consistent with many other readability studies that have been published within otolaryngology7,8,20,28-33 and other specialties.34-43
Similarly, PEMAT-P understandability and actionability scores were low. The understandability score of 73.65% ± 7.05% in this study is comparable to, though slightly higher than, scores in similar studies on other topics, which ranged from 62.8% to 66.0%.44-46 The actionability score in the current study of 13.63% ± 22.47% is comparable to scores in the literature.37,44,45 Although a high PEMAT-P score is better than a low score, there are no established guidelines for a PEMAT-P target to help guide writing. The low actionability score indicates that websites related to vocal cord leukoplakia did not adequately outline discrete steps that a patient could take in evaluation or management of the condition. To improve understandability and actionability, these websites might benefit from clear organization and discrete lists of actionable items to communicate next steps in care to patients.
Additional analysis compared patient- and physician-oriented websites. Although both categories had reading levels beyond what is recommended for online content, the patient-oriented websites were more readable than the physician-oriented sites based on FRES, FKGL, and SMOG scores. These results are consistent with other studies within otolaryngology.8,9,44 These results demonstrate an awareness that patient-oriented sites should be written in a way that is more easily read and interpreted. That these scores still fall short of readability standards suggests that even more deliberate care needs to be taken in creation of these online materials.
There were no significant differences in PEMAT-P scores of understandability or actionability between physician- and patient-oriented websites. Many other readability analyses within otolaryngology that used the PEMAT-P did not compare scores between patient- and physician-oriented sites. One study47 did compare PEMAT-P results by authorship type and did not find differences in PEMAT-P scores among the groups (academic institutions, government agencies, websites from private practices, neutral web-based sites, and organizations such as nonprofits). Some studies have compared DISCERN scores (another measure to ascertain quality of websites) between patient and physician sources.8,9 These studies found a difference in DISCERN scores between patient- and physician-oriented websites in materials about in-office vocal fold injection9 but not about swallowing difficulties.8 One study29 focused on nasal septoplasty and found that patient education materials originating from academic institutions had significantly higher scores in some DISCERN criteria than those originating from private clinics. Data from the current study support that there is no difference in understandability and actionability between sites about vocal cord leukoplakia directed at patients and those directed at physicians. Differences in PEMAT-P and DISCERN scores for these categories of websites may instead depend on whether the topic is a procedure rather than a disease state: procedural topics such as vocal fold injection or nasal septoplasty might offer more opportunity to present discrete step-by-step instructions to a physician audience than websites about a condition or complaint. More research will be needed to explore this hypothesis.
This study found that 27.3% of included websites were HONcode verified, which is comparable to rates of 8% to 40% reported in other readability studies in the otolaryngology literature.10,47 Although HONcode has been in existence since the early days of the internet, participation in HONcode is voluntary, and participating websites must pay for certification. This model may limit participation and account for the relatively low rate of HONcode verification.
Interestingly, our study did not find a higher level of readability, understandability, or actionability in websites that were HONcode verified. A study on readability in online materials regarding laryngeal cancer likewise found no difference in readability between HONcode- and non-HONcode–verified sites.33 HONcode is meant to address the reliability and credibility of information but is not focused on readability or understandability. HONcode also does not rate the quality of information provided on a website, though it does define rules meant to hold website developers to basic ethical standards in the presentation of information and to help ensure that readers always know the source and purpose of the data they are reading.
As mentioned earlier, there was a moderate correlation in this study between easier readability, as measured by FRES, FKGL, and SMOG, and improved understandability, as measured by the PEMAT-P. Other studies20,28 have performed similar analyses and found similar correlations, although interestingly a readability study on spasmodic dysphonia47 found no correlation between FKGL and understandability. It is uncertain why some topics demonstrate a correlation between easier readability and improved understandability and others do not, especially as understandability under PEMAT-P scoring depends more on formatting and structure than on the length of words or sentences. However, the modest size of the correlation suggests that any attempt to improve websites cannot focus only on readability; goals should independently encompass improving understandability and actionability as well.
There are some limitations in the present study, which are inherent to all studies that evaluate readability and understandability of patient education materials. The readability formulas utilized were designed to analyze narrative texts rather than medical literature. Consequently, they were not intended to measure the readability of medical jargon, which can be more complicated in content than other narratives despite similar syllable counts or word length. Along this line of reasoning, the FRES, SMOG, and FKGL do not take into account shorter words that are of a higher reading level or are more difficult to understand. Although a limitation, this actually serves to reinforce that the majority of sources are too complex. Last, cohesion between sentences is an important factor in readability that is not captured by these formulas.
Conversely, it is possible that the readability formulas could overstate complexity of the websites. For instance, the term leukoplakia contains 5 syllables, so the repeated use of this word on a website could lead to a higher level of complexity as calculated with the readability formulas. The term otolaryngology is similarly polysyllabic and may create an increase in syllable-per-word calculations as compared with ear, nose, and throat.
To test this, all instances of the word leukoplakia in search result 12 were replaced with the word plaque; the FRES, FKGL, and SMOG scores changed from 25.9 to 33.1, from 13.5 to 12.5, and from 11.4 to 10.8, respectively. This change had a fairly modest impact, but it does suggest that at least a portion of the poor readability scores may relate to the length of medical terminology.
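The mechanism behind this substitution effect can be sketched with the Flesch formula directly. The word, sentence, and syllable counts below are illustrative assumptions, not measurements from search result 12: swapping a 5-syllable term for a 1-syllable one lowers the syllables-per-word ratio while leaving word and sentence counts unchanged, so the FRES rises.

```python
def fres(words, sentences, syllables):
    """Flesch Reading Ease Score: higher = easier to read."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Illustrative counts (not from the study): a 100-word, 10-sentence page
# that uses a 5-syllable term ("leukoplakia") 5 times.
before = fres(words=100, sentences=10, syllables=170)
# Replacing each use with a 1-syllable word ("plaque") drops 4 syllables
# per occurrence; word and sentence counts are unchanged.
after = fres(words=100, sentences=10, syllables=170 - 5 * 4)
print(f"{before:.2f} -> {after:.2f}")
```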
The PEMAT-P tool has some limitations. It was designed to allow a layperson to evaluate the quality of health literature, but it does not assess the scientific accuracy of specialist information. This article did not assess the accuracy of information in the websites, although clearly that is an important issue as well. Additionally, our protocol was limited to websites written in English and did not analyze videos. Websites that are written in different languages and video materials might be of different quality and readability but are beyond the scope of this study.
Conclusion
Websites on vocal cord leukoplakia are written at a level beyond what is recommended by the AMA and the NIH. None of the websites analyzed in this study met the recommendation of a sixth-grade reading level, and in aggregate these websites were above a 12th-grade reading level. Patient-targeted websites were written at a less advanced reading level than professional-targeted sites, but they were not significantly more understandable or actionable. Many patients go online to seek medical knowledge. Written materials are a valuable supplement to verbal communication, and information found online can reinforce topics discussed during face-to-face visits and improve overall understanding of a condition or proposed procedure. It is important that these online education materials be made more readable, understandable, and actionable to help direct appropriate patient-centered care.
Author Contributions
Matthew Shneyderman, conception and design of the work, acquisition of data, analysis; drafting of the manuscript; final approval of the submitted manuscript; agreement to be accountable for all aspects of the work; Grace E. Snow, conception and design of the work, acquisition of data, analysis; revision of the manuscript; final approval of the submitted manuscript; agreement to be accountable for all aspects of the work; Ruth Davis, conception and design of the work, acquisition of data; revision of the manuscript; final approval of the submitted manuscript; agreement to be accountable for all aspects of the work; Simon Best, analysis; revision of the manuscript; final approval of the submitted manuscript; agreement to be accountable for all aspects of the work; Lee M. Akst, conception and design of the work, acquisition of data, analysis; drafting and revision of the manuscript; final approval of the submitted manuscript; agreement to be accountable for all aspects of the work.
Disclosures
Competing interests: None.
Sponsorships: None.
Funding source: None.
References
- 1. Weiss B. Health Literacy: A Manual for Clinicians. American Medical Association Foundation and American Medical Association; 2007.
- 2. Gaines T, Malik RD. Readability of pelvic floor dysfunction questionnaires. Neurourol Urodyn. 2020;39(2):813-818. doi:10.1002/nau.24286
- 3. Alwani MM, Campiti VJ, Bandali EH, Nesemeier BR, Ting JY, Shipchandler TZ. Evaluation of the quality of printed online education materials in cosmetic facial plastic surgery. Facial Plast Surg Aesthet Med. 2020;22(4):255-261. doi:10.1089/fpsam.2019.0013
- 4. Arslan D, Sami Tutar M, Kozanhan B, Bagci Z. The quality, understandability, readability, and popularity of online educational materials for heart murmur. Cardiol Young. 2020;30(3):328-336. doi:10.1017/S104795111900307X
- 5. Patel AJ, Kloosterboer A, Yannuzzi NA, Venkateswaran N, Sridhar J. Evaluation of the content, quality, and readability of patient accessible online resources regarding cataracts. Semin Ophthalmol. Published online February 26, 2021. doi:10.1080/08820538.2021.1893758
- 6. Haque A, Cox M, Sandler RD, Hughes M. A systematic review of internet-based information on dermatomyositis and polymyositis. Int J Rheum Dis. 2020;23(12):1613-1618. doi:10.1111/1756-185X.13929
- 7. Wong K, Levi JR. Readability trends of online information by the American Academy of Otolaryngology–Head and Neck Surgery Foundation. Otolaryngol Head Neck Surg. 2017;156(1):96-102. doi:10.1177/0194599816674711
- 8. O’Connell Ferster AP, Hu A. Evaluating the quality and readability of Internet information sources regarding the treatment of swallowing disorders. Ear Nose Throat J. 2017;96(3):128-138. doi:10.1177/014556131709600312
- 9. Yi GS, Hu A. Quality and readability of online information on in-office vocal fold injections. Ann Otol Rhinol Laryngol. 2020;129(3):294-300. doi:10.1177/0003489419887406
- 10. Schwarzbach HL, Mady LJ, Kaffenberger TM, Duvvuri U, Jabbour N. Quality and readability assessment of websites on human papillomavirus and oropharyngeal cancer. Laryngoscope. 2021;131(1):87-94. doi:10.1002/lary.28670
- 11. Park JC, Altman KW, Prasad VMN, Broadhurst M, Akst LM. Laryngeal leukoplakia: state of the art review. Otolaryngol Head Neck Surg. Published online November 10, 2020. doi:10.1177/0194599820965910
- 12. Kim CM, Chhetri DK. Triological best practice: when is surgical intervention indicated for vocal fold leukoplakia? Laryngoscope. 2020;130(6):1362-1363. doi:10.1002/lary.28527
- 13. Chen M, Cheng L, Li C-J, Chen J, Shu Y-L, Wu H-T. Nonsurgical treatment for vocal fold leukoplakia: an analysis of 178 cases. Biomed Res Int. 2017;2017:6958250. doi:10.1155/2017/6958250
- 14. Parker NP. Vocal fold leukoplakia: incidence, management, and prevention. Curr Opin Otolaryngol Head Neck Surg. 2017;25(6):464-468. doi:10.1097/MOO.0000000000000406
- 15. Hellquist H, Ferlito A, Mäkitie AA, et al. Developing classifications of laryngeal dysplasia: the historical basis. Adv Ther. 2020;37:2667-2677. doi:10.1007/s12325-020-01348-4
- 16. Davies D. The 7 most popular search engines in the world: SEO 101. Published January 18, 2018. Accessed January 20, 2021. https://www.searchenginejournal.com/seo-101/meet-search-engines/
- 17. Alwani MM, Campa KA, Svenstrup TJ, Bandali EH, Anthony BP. An appraisal of printed online education materials on spasmodic dysphonia. J Voice. Published online December 26, 2019. doi:10.1016/j.jvoice.2019.11.023
- 18. Eysenbach G, Powell J, Kuss O, Sa E-R. Empirical studies assessing the quality of health information for consumers on the world wide web: a systematic review. JAMA. 2002;287(20):2691-2700. doi:10.1001/jama.287.20.2691
- 19. Ting K, Hu A. Evaluating the quality and readability of thyroplasty information on the Internet. J Voice. 2014;28(3):378-381. doi:10.1016/j.jvoice.2013.10.011
- 20. Balakrishnan V, Chandy Z, Hseih A, Bui T-L, Verma SP. Readability and understandability of online vocal cord paralysis materials. Otolaryngol Head Neck Surg. 2016;154(3):460-464. doi:10.1177/0194599815626146
- 21. The Flesch grade level readability formula. Readability Formulas. Accessed December 2020. https://readabilityformulas.com/flesch-grade-level-readability-formula.php
- 22. Flesch-Kincaid readability tests. Wikipedia. Accessed December 2020. https://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_readability_tests
- 23. Shoemaker SJ, Wolf MS, Brach C. Development of the Patient Education Materials Assessment Tool (PEMAT): a new measure of understandability and actionability for print and audiovisual patient information. Patient Educ Couns. 2014;96(3):395-403. doi:10.1016/j.pec.2014.05.027
- 24. Health on the Net. Accessed December 2020. https://www.hon.ch/en/
- 25. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159-174.
- 26. Fox S, Duggan M. Health online 2013. Accessed January 20, 2021. https://www.pewresearch.org/internet/2013/01/15/health-online-2013/
- 27. Steehler KR, Steehler MK, Pierce ML, Harley EH. Social media’s role in otolaryngology–head and neck surgery: informing clinicians, empowering patients. Otolaryngol Head Neck Surg. 2013;149(4):521-524. doi:10.1177/0194599813501463
- 28. Wong K, Gilad A, Cohen MB, Kirke DN, Jalisi SM. Patient education materials assessment tool for laryngectomy health information. Head Neck. 2017;39(11):2256-2263. doi:10.1002/hed.24891
- 29. Grose EM, Holmes CP, Aravinthan KA, Wu V, Lee JM. Readability and quality assessment of internet-based patient education materials related to nasal septoplasty. J Otolaryngol Head Neck Surg. 2021;50(1):16. doi:10.1186/s40463-021-00507-z
- 30. Patel CR, Sanghvi S, Cherla DV, Baredes S, Eloy JA. Readability assessment of internet-based patient education materials related to parathyroid surgery. Ann Otol Rhinol Laryngol. 2015;124(7):523-527. doi:10.1177/0003489414567938
- 31. Wong K, Levi JR. Partial tonsillectomy. Ann Otol Rhinol Laryngol. 2017;126(3):192-198. doi:10.1177/0003489416681583
- 32. Xie DX, Wang RY, Chinnadurai S. Readability of online patient education materials for velopharyngeal insufficiency. Int J Pediatr Otorhinolaryngol. 2018;104:113-119. doi:10.1016/j.ijporl.2017.09.016
- 33. Narwani V, Nalamada K, Lee M, Kothari P, Lakhani R. Readability and quality assessment of internet-based patient education materials related to laryngeal cancer. Head Neck. 2016;38(4):601-605. doi:10.1002/hed.23939
- 34. Powell LE, Andersen ES, Pozez AL. Assessing readability of patient education materials on breast reconstruction by major US academic hospitals as compared with nonacademic sites. Ann Plast Surg. Published online November 20, 2020. doi:10.1097/SAP.0000000000002575
- 35. Sharma AN, Martin B, Shive M, Zachary CB. The readability of online patient information about laser resurfacing therapy. Dermatol Online J. 2020;26(4):13030/qt5t9882ct.
- 36. Weiss KD, Vargas CR, Ho OA, Chuang DJ, Weiss J, Lee BT. Readability analysis of online resources related to lung cancer. J Surg Res. 2016;206(1):90-97. doi:10.1016/j.jss.2016.07.018
- 37. Siddiqui E, Shah AM, Sambol J, Waller AH. Readability assessment of online patient education materials on atrial fibrillation. Cureus. 2020;12(9):e10397. doi:10.7759/cureus.10397
- 38. Fortuna J, Riddering A, Shuster L, Lopez-Jeng C. Assessment of online patient education materials designed for people with age-related macular degeneration. BMC Ophthalmol. 2020;20(1):391. doi:10.1186/s12886-020-01664-x
- 39. Kalavar M, Hubschman S, Hudson J, Kuriyan AE, Sridhar J. Evaluation of available online information regarding treatment for vitreous floaters. Semin Ophthalmol. 2021;36(1-2):58-63. doi:10.1080/08820538.2021.1887898
- 40. Yılmaz FH, Tutar MS, Arslan D, Çeri A. Readability, understandability, and quality of retinopathy of prematurity information on the web. Birth Defects Res. Published online February 17, 2021. doi:10.1002/bdr2.1883
- 41. Nickles MA, Ramani SL, Tegtmeyer K, Zhao J, Lio PA. Readability of online patient education materials for juvenile dermatomyositis. Pediatr Dermatol. Published online January 15, 2021. doi:10.1111/pde.14513
- 42. Abdouh I, Porter S, Fedele S, Elgendy N, Ni Riordain R. Web-based information on the treatment of the mouth in systemic sclerosis. BMC Rheumatol. 2020;4(1):61. doi:10.1186/s41927-020-00160-5
- 43. Huang G, Fang CH, Agarwal N, Bhagat N, Eloy JA, Langer PD. Assessment of online patient education materials from major ophthalmologic associations. JAMA Ophthalmol. 2015;133(4):449-454. doi:10.1001/jamaophthalmol.2014.6104
- 44. Chen LW, Harris VC, Jia JL, Xie DX, Tufano RP, Russell JO. Search trends and quality of online resources regarding thyroidectomy. Otolaryngol Head Neck Surg. Published online November 3, 2020. doi:10.1177/0194599820969154
- 45. Barbarite E, Shaye D, Oyer S, Lee LN. Quality assessment of online patient information for cosmetic botulinum toxin. Aesthet Surg J. 2020;40(11):NP636-NP642. doi:10.1093/asj/sjaa168
- 46. Lee SE, Brown WC, Gelpi MW, et al. Understood? Evaluating the readability and understandability of intranasal corticosteroid delivery instructions. Int Forum Allergy Rhinol. 2020;10(6):773-778. doi:10.1002/alr.22550
- 47. Alwani MM, Campa KA, Svenstrup TJ, Bandali EH, Anthony BP. An appraisal of printed online education materials on spasmodic dysphonia. J Voice. Published online December 26, 2019. doi:10.1016/j.jvoice.2019.11.023
