What is known about this subject in regard to women and their families?
Lichen sclerosus is a chronic vulvar skin condition that predominantly affects postmenopausal women and is often underdiagnosed or misdiagnosed, leading to delays in treatment.
Due to the sensitive nature of the disease, patients may be more likely to turn to online resources rather than seek immediate medical attention. However, little is known about the quality of online patient education materials for lichen sclerosus.
Clear, accessible online patient education materials improve health literacy and health outcomes and may prompt patients to seek medical attention sooner.
What is new from this article as messages for women and their families?
Most online patient education materials about lichen sclerosus are written above the recommended reading level and lack understandability and actionability.
A few resources, including those from the International Society for the Study of Vulvovaginal Disease and the National Health Service, had more favorable reading grade level scores and offered higher-quality information. These may be valuable resources for clinicians seeking to recommend accessible and reliable online health information for patients with lichen sclerosus.
Future efforts in the development of patient education materials should prioritize improving readability, understandability, and actionability for vulvar lichen sclerosus.
Introduction
Patients frequently seek health information through online patient education materials (PEMs), particularly for stigmatized conditions such as vulvar lichen sclerosus (LS). LS requires early recognition and intervention to prevent scarring and malignancy, yet has a 4-year diagnostic delay.1 While PEMs can improve health literacy and care-seeking, their effectiveness depends on readability, clarity, and quality. Dermatology PEMs frequently exceed the American Medical Association’s recommended sixth-grade reading level.2,3 This study evaluates the readability, understandability, actionability, and overall quality of online PEMs related to LS.
Methods
In September 2023, Google searches for “lichen sclerosus” and “vulvar lichen sclerosus” were conducted in Incognito mode. The first 30 results for each query were evaluated for the following inclusion criteria: English-language, publicly available, patient-directed content, and specifically addressing LS.
Readability was assessed using multiple formulas (eg, Flesch–Kincaid and Gunning Fog) as recommended by the Centers for Medicare and Medicaid Services (Table 1).4 Essential medical terms (eg, “clobetasol” and “dermatologist”) were excluded to avoid artificially inflating grade levels (Fig. 1). Understandability and actionability were independently evaluated by 2 raters using the validated Patient Education Materials Assessment Tool (PEMAT), with ≥70% considered adequate.5 Interrater reliability was calculated using intraclass correlation coefficients. Overall quality was assessed using the Journal of the American Medical Association Benchmark Criteria, which evaluate credibility across 4 domains: authorship, attribution, disclosure, and currency. Correlations between readability and understandability were analyzed using Pearson or Spearman coefficients, depending on data distribution, with Holm correction applied to control for multiple comparisons.
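For readers unfamiliar with how these readability formulas work, the two Flesch–Kincaid metrics can be sketched in a few lines of Python. The syllable counter below is a rough vowel-group heuristic for illustration only; dedicated readability software uses more careful syllable counts, so this sketch shows the structure of the formulas rather than reproducing the study's scoring pipeline.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups; dedicated tools use
    # dictionaries and more careful rules.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # drop a typical silent final "e"
    return max(n, 1)

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)  # average words per sentence
    spw = syllables / len(words)       # average syllables per word
    return {
        # Flesch reading ease: 0-100 scale, higher = easier to read
        "reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid grade level: estimated US school grade
        "grade_level": 0.39 * wps + 11.8 * spw - 15.59,
    }
```

Both metrics are driven by the same two ratios (sentence length and word length), which is why removing unavoidable polysyllabic medical terms before scoring, as done here, prevents them from inflating the estimated grade level.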
Table 1.
Vulvar lichen sclerosus educational website characteristics (n = 27)
| Website characteristics | n (%) |
|---|---|
| Author name provided | 8 (29.6) |
| Author degree (n = 8): MD or DO/other/unknown | 4 (50.0)/1 (12.5)/2 (25.0) |
| Author is a dermatologist (n = 7) | 4 (50.0) |
| Year written/modified was noted | 16 (59.3) |
| Material updated within the last year (n = 16) | 7 (43.8) |
| Ads on website | 5 (18.5) |
| Commercial bias in material^a | 3 (11.1) |
| Overall website readability and quality scores^b | Mean ± standard deviation (range) |
| Flesch–Kincaid reading ease | 58.9 ± 10.2 (40-76) |
| Flesch–Kincaid grade level | 8.50 ± 2.11 (5.1-12.6) |
| FORCAST | 10.9 ± 0.62 (9.7-12.2) |
| Gunning Fog score | 10.5 ± 2.15 (7.4-15.0) |
| Simple Measure of Gobbledygook (SMOG) index | 11.2 ± 1.70 (8.8-14.7) |
| Coleman–Liau index | 10.0 ± 1.72 (7.2-13.4) |
| Automated Readability index | 8.25 ± 2.25 (4.4-13.0) |
| Average Understandability^c | 52.8 ± 15.0 (17.3-76.0) |
| Average Actionability^c | 38.5 ± 23.7 (0.0-80.0) |
| JAMA benchmark total criteria score^d | 1.93 ± 0.383 (0-4) |
^a Materials were considered commercially biased if they advertised a particular product or service within the main text of the educational material.
^b Reading ease was graded on a scale of 0 to 100, with scores closer to 100 indicating easier-to-read materials. Flesch–Kincaid grade level, FORCAST, Gunning Fog score, SMOG index, Coleman–Liau index, and the Automated Readability index provide an estimated grade level.
^c The Patient Education Materials Assessment Tool (PEMAT) was used to determine the understandability and actionability of a material. The PEMAT grades materials on a scale of 0 to 100, with scores closer to 100 indicating higher understandability or actionability.
^d The Journal of the American Medical Association (JAMA) Benchmark Test uses 4 criteria (authorship, attribution, currency, and disclosure) to grade the overall quality of the material. Materials that meet all 4 criteria are rated as 4, while those that meet none are rated as 0.
Fig. 1.
Online health resources were assessed for readability using a variety of formulas. To minimize score inflation, unavoidable polysyllabic medical terms (eg, dermatologist and clobetasol) were removed and readability was reassessed. This figure depicts the estimated mean readability scores before and after word omission.
Results
Of the 60 webpages reviewed, 27 met the inclusion criteria. Only 33% met the recommended sixth-grade reading level by ≥1 readability formula; 66% required a high school level or higher, and 25.9% reached college level. Only 22% met the PEMAT threshold for understandability, and only 1 was deemed actionable or satisfied all Journal of the American Medical Association benchmark criteria. Interrater reliability for PEMAT scoring was excellent (intraclass correlation coefficient = 0.936 for understandability; 0.962 for actionability). A significant negative correlation was found between readability and understandability across all readability formulas (r = −0.593 to −0.701, P < .008). Full scoring details are available in Supplementary Table 1, http://links.lww.com/IJWD/A84.
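The correlation analysis behind these results can be illustrated with a short Python sketch using scipy's `spearmanr` and statsmodels' Holm correction. The per-site values below are synthetic placeholders generated for demonstration, not the study data; only the structure of the analysis (one correlation test per readability formula, family-wise error controlled with Holm's step-down procedure) follows the Methods.

```python
# Illustration of the analysis structure: Spearman correlations
# between per-site readability grade levels and PEMAT
# understandability, with Holm correction across formulas.
# All data here are synthetic, NOT the study's measurements.
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_sites = 27  # number of included webpages
understandability = rng.uniform(17, 76, n_sites)

# One simulated grade-level score per readability formula; higher
# grade levels are constructed to track lower understandability.
formulas = ["Flesch-Kincaid", "Gunning Fog", "SMOG", "Coleman-Liau"]
r_values, p_values = [], []
for _ in formulas:
    grade = 15 - 0.08 * understandability + rng.normal(0, 1, n_sites)
    r, p = spearmanr(grade, understandability)
    r_values.append(r)
    p_values.append(p)

# Holm step-down correction controls the family-wise error rate
# across the multiple formula-level tests.
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for f, r, p in zip(formulas, r_values, p_adj):
    print(f"{f}: r = {r:.2f}, Holm-adjusted p = {p:.4f}")
```

A negative `r` here, as in the study, indicates that sites written at higher grade levels tend to score lower on understandability.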
Discussion
Online PEMs for LS showed significant deficiencies in readability, understandability, and actionability. Most exceeded recommended reading levels and lacked features that support patient comprehension, such as step-by-step instructions and supporting graphics. The inverse correlation between reading level and understandability suggests that complex language may further hinder comprehension, a concerning issue given the sensitive nature of LS and the potential reluctance to seek care.1
A few materials (Cedars-Sinai, MyHealth Alberta, and the National Health Service) demonstrated better readability and may serve as models for future PEM development. High-quality PEMs stated a purpose, used plain language, and presented information logically. They offered actionable guidance, and purposeful visuals reinforced key messages. These findings provide practical guidance for clinicians aiming to improve their own PEMs.
Study limitations include the cross-sectional design, restriction to English-language materials, and potential geographic bias despite the Incognito search mode. Future efforts should focus on developing evidence-based, patient-centered educational resources to better support individuals affected by LS.
Conflicts of interest
None.
Funding
None.
Study approval
Approved as nonregulated research by the UT Southwestern IRB.
Author contributions
JF had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. MM conceived and designed the study. JF and HRC performed data acquisition, analysis, and interpretation. All authors contributed to the drafting and revision of the manuscript.
Supplementary data
Supplementary material associated with this article can be found at http://links.lww.com/IJWD/A84.
References
- 1. Rivera S, Dehner K, Flood A, Dykstra C, Mauskar MM, DeMaria AL. Adverse healthcare experiences are correlated with increased time to diagnosis in women with vulvar inflammatory dermatoses: a retrospective cohort survey. Br J Dermatol 2024;190:761–2.
- 2. Yousif R, Zheng DX, Chang IA, Wong C, Trinidad J, Carr DR. Readability of online patient educational materials for transgender dermatologic care. J Am Acad Dermatol 2022;87:922–4.
- 3. Villa NM, Shih T, Rick JW, Shi VY, Hsiao JL. Pyoderma gangrenosum: readability and quality of online health resources. Int J Womens Dermatol 2021;7:850–2.
- 4. McGee J. Using readability formulas: a cautionary note. U.S. Department of Health and Human Services, Centers for Medicare and Medicaid Services; 2010.
- 5. Shoemaker SJ, Wolf MS, Brach C. Development of the Patient Education Materials Assessment Tool (PEMAT): a new measure of understandability and actionability for print and audiovisual patient information. Patient Educ Couns 2014;96:395–403.

