Author manuscript; available in PMC: 2018 Mar 1.
Published in final edited form as: J Cardiovasc Nurs. 2017 Mar-Apr;32(2):156–164. doi: 10.1097/JCN.0000000000000324

Quality and Health Literacy Demand of Online Heart Failure Information

Maan Isabella Cajita 1, Tamar Rodney 1, Jingzhi Xu 1, Melissa Hladek 1, Hae-Ra Han 1
PMCID: PMC5010526  NIHMSID: NIHMS744380  PMID: 26938508

Abstract

Background

The ubiquity of the Internet is changing the way people obtain their health information. While there is an abundance of heart failure information online, the quality and health literacy demand of this information remain largely unknown.

Objective

The purpose of this study was to evaluate the quality and health literacy demand (readability, understandability, and actionability) of the heart failure information found online.

Methods

Google, Yahoo, Bing, Ask.com, and DuckDuckGo were searched for relevant heart failure websites. Two independent raters then assessed the quality and health literacy demand of the included websites. The quality of the heart failure information was assessed using the DISCERN instrument. Readability was assessed using seven established readability tests. Finally, understandability and actionability were assessed using the Patient Education Materials Assessment Tool for Print Materials (PEMAT-P).

Results

A total of 46 websites were included in this analysis. The overall mean quality rating was 46.0 ± 8.9 and the mean readability score was 12.6 grade reading level. The overall mean understandability score was 56.3% ± 16.2. Finally, the overall mean actionability score was 34.7% ± 28.7.

Conclusions

The heart failure information found online was of fair quality but required a relatively high health literacy level. Web content authors need to consider not just the quality but also the health literacy demand of the information on their websites. This is especially important considering that low health literacy is likely prevalent among their usual audience.

Keywords: health literacy, heart failure, Internet, eHealth, patient education

Background

The universal popularity of the Internet is changing the way people obtain their health information. By providing easy access to a wealth of health information, the Internet is fast becoming an indispensable medium for the delivery of patient education. An estimated 59% of all adult Americans search for health information online; among them, 48% reported doing so on behalf of another person, 36% searched for health information for themselves, and 11% went online to search for health information both for themselves and for other people.1 Regardless of the intended consumer of the health information, it is clear that more and more people are turning to the Internet for their health information needs. While this growing engagement with online health information is promising, the quality and health literacy demand of the information found online could potentially limit this endeavor.

Health literacy is commonly defined as the “degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions.”2 It is estimated that 36% of adult Americans have low health literacy.3 Similarly, among people with heart failure, approximately 39% have low health literacy.4 In heart failure, low health literacy has been associated with lower heart failure knowledge,5,6 poor self-care behaviors,7 poor medication adherence,8 and increased risk for rehospitalization.7

One way to mitigate the ill effects of low health literacy is to ensure that patient education materials are easy to obtain, process, understand, and apply; in other words, the material’s health literacy demand should not exceed the information consumer’s health literacy skills. Health literacy demand is defined as “the complexity and difficulty of a [health-related] stimulus.”9 The health-related stimulus can be in the form of a written document (e.g., a brochure or medication label) or verbal communication (e.g., patient–doctor communication). While the Internet provides easy access to health information, there is no guarantee that this information is easy to understand and act on; this underscores the need to evaluate the health literacy demand of the information found online.

While several studies have assessed the readability of disease-related (peripheral neuropathy, chronic kidney disease, rheumatic diseases, and stroke) and medical procedure-related (spine surgery, interventional radiology, colorectal cancer screening, and liposuction) information found online,10–17 none have assessed heart failure-related information. Additionally, the majority of these studies assessed only the readability of the online health information, which is just one component of health literacy demand.

The purpose of this study was to evaluate the quality and health literacy demand (readability, understandability, and actionability) of the heart failure information found online. The specific objectives of this study were: (1) to describe the quality and health literacy demand of the heart failure information found online; (2) to compare the quality and health literacy demand of the heart failure information among the different types of websites (government-sponsored vs. hospital-affiliated vs. other); and (3) to compare the quality and health literacy demand of the heart failure information found in websites ranked highest in terms of traffic share (top 10) vs. those ranked lowest (bottom 10).

Methods

Data Collection

The top five search engines in terms of global traffic share (Google, Yahoo, Bing, Ask.com, and DuckDuckGo)18 were used to find websites containing heart failure information using the search term heart failure. The website search and subsequent data harvest were performed between May 28 and June 3, 2015. Considering traffic share, only websites found in the first 5 pages of search results were assessed for inclusion; it has been shown that 98% of total Internet search traffic is limited to the first 5 pages.19 Websites were included if they provided information on heart failure overview, symptoms, and treatment (e.g., lifestyle modifications, medical/surgical treatment). Websites were excluded if they were advertisements, research articles, or news articles. Figure 1 depicts the website search process. Data on heart failure overview, symptoms, and treatments were copied from the websites onto Word documents for the readability tests. Finally, the websites’ respective ranks were determined according to the amount of traffic they receive. Information on the websites’ traffic estimates was obtained from Alexa.com, which provides global traffic ranks based on a website’s daily average of unique visitors and number of page views over the past 3 months.

Figure 1. Website Search Process

Measures

The quality of the heart failure information found in the websites was assessed using the 15-item DISCERN instrument,20 which has demonstrated validity and reliability.21–24 The DISCERN instrument is divided into two sections that assess the reliability of the overall information provided and the quality of the information on treatment choices. Each item is scored on a 5-point scale (1, No; 2/3/4, Partially; 5, Yes), for a total score range of 15–75. An overall score of 15–26 is deemed ‘very poor’ quality, 27–38 ‘poor’ quality, 39–50 ‘fair’ quality, 51–62 ‘good’ quality, and 63–75 ‘excellent’ quality.20 Two independent reviewers (MC and TR) assessed the quality of the included websites. There was moderate inter-rater agreement using the DISCERN tool (κ = 0.44).25
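
To make the scoring scheme concrete, the following minimal sketch (illustrative only, not the authors' code) totals a rater's 15 DISCERN item ratings and maps the sum to the quality bands defined by Charnock et al.20

```python
def discern_quality(item_ratings):
    """Sum 15 DISCERN item ratings (each 1-5) and map the total
    to the quality bands of Charnock et al. (1999)."""
    assert len(item_ratings) == 15
    assert all(1 <= r <= 5 for r in item_ratings)
    total = sum(item_ratings)  # possible range: 15-75
    bands = [(26, "very poor"), (38, "poor"), (50, "fair"),
             (62, "good"), (75, "excellent")]
    band = next(label for cutoff, label in bands if total <= cutoff)
    return total, band

# Hypothetical ratings for one website, mostly 'partially' (2-4)
print(discern_quality([3, 3, 4, 2, 3, 3, 3, 2, 4, 3, 2, 2, 2, 4, 3]))
# -> (43, 'fair')
```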

Health literacy demand was measured according to its three essential components: readability, understandability, and actionability. Given the variability in readability assessments, seven established readability tests were used to obtain average readability levels, namely: the Automated Readability Index,26 Coleman-Liau Index,27 Flesch-Kincaid Grade Level,28 Flesch Reading Ease,28 Fry Readability Formula,29 Gunning Fog Index,30 and Simplified Measure of Gobbledygook (SMOG).31 These tests examine different reading elements (i.e., characters per sentence, syllables per word, words per sentence, sentences per passage, etc.) in different combinations; hence, they were chosen to increase the validity of the readability results. Table 1 shows the formulae of the individual readability tests. Readability tests were performed using the standard edition of Readability Studio v.2012 (Oleander Software, Ltd, Vandalia, Ohio).

Table 1.

Readability Tests Formulae

Automated Readability Index: 4.71(characters/words) + 0.5(words/sentences) − 21.43

Coleman-Liau Index: 0.0588L − 0.296S − 15.8, where L is the average number of letters per 100 words and S is the average number of sentences per 100 words

Flesch-Kincaid Grade Level: 0.39(total words/total sentences) + 11.8(total syllables/total words) − 15.59

Flesch Reading Ease: 206.835 − 1.015(total words/total sentences) − 84.6(total syllables/total words)

Fry Readability Formula:
  1. Extract a 100-word passage from the selection
  2. Count the number of sentences in each passage (count half a sentence as 0.5)
  3. Count the number of syllables in each passage
  4. Find the point on the Fry chart (3 samples recommended for best results)

Gunning-Fog Index: 0.4[(words/sentences) + 100(complex words/words)]

SMOG Index: 1.043 × √(C × (30/S)) + 3.1291, where C is the number of words with ≥3 syllables and S is the number of sentences
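
As an illustration, the sketch below implements three of the formulae in Table 1 from raw text counts. It is a minimal example under the assumption that word, sentence, syllable, and polysyllable counts have already been extracted; it is not the Readability Studio implementation.

```python
import math

def flesch_kincaid_grade(words, sentences, syllables):
    # Flesch-Kincaid Grade Level (Table 1)
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def flesch_reading_ease(words, sentences, syllables):
    # Flesch Reading Ease: higher scores indicate easier text
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def smog_grade(polysyllables, sentences):
    # SMOG: polysyllables = number of words with >= 3 syllables
    return 1.043 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

# Hypothetical counts for a 100-word passage with 6 sentences,
# 160 syllables, and 12 polysyllabic words
print(round(flesch_kincaid_grade(100, 6, 160), 1))  # 9.8 (grade level)
print(round(flesch_reading_ease(100, 6, 160), 1))   # 54.6 (ease score)
print(round(smog_grade(12, 6), 1))                  # 11.2 (grade level)
```

Averaging several such grade-level estimates, as was done here across seven tests, smooths out the idiosyncrasies of any single formula.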

Understandability and actionability were measured using the Patient Education Materials Assessment Tool for Print Materials (PEMAT-P),32 which has demonstrated validity and reliability.32,33 PEMAT-P is composed of 17 Understandability items and 7 Actionability items. Each item is scored on a binary scale (1, Agree; 0, Disagree), with items 6, 8, 9, 11, 12, 16–19, and 25 having a not applicable (N/A) option. The total Understandability and Actionability scores were determined by summing the points, dividing the sum by the total possible points (excluding items scored as N/A), and multiplying the result by 100 (score range: 0%–100%). Two independent reviewers assessed the Understandability (MC and JX) and the Actionability (MC and MH) of the included websites. There was moderate inter-rater agreement (κ = 0.49) for PEMAT-Understandability and substantial inter-rater agreement (κ = 0.62) for PEMAT-Actionability.25 Discrepancies in the ratings were discussed between the rater pairs and then reconciled.
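
The PEMAT-P percentage calculation can be sketched as follows (a hypothetical illustration of the scoring rule above, not the official PEMAT software):

```python
def pemat_score(item_ratings):
    """Score a PEMAT-P subscale. Ratings are 1 (Agree), 0 (Disagree),
    or None (N/A, allowed only for designated items). Returns the
    percentage of points earned out of applicable points."""
    applicable = [r for r in item_ratings if r is not None]
    return 100 * sum(applicable) / len(applicable)

# Hypothetical ratings for the 17 Understandability items,
# with two items scored N/A
understandability = [1, 1, 0, 1, None, 1, 1, 0, 1, 1, 0, None, 1, 1, 0, 1, 1]
print(round(pemat_score(understandability), 1))  # 73.3
```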

Analysis

Descriptive statistics are reported as means and standard deviations. The Shapiro-Wilk test was performed to determine whether the mean quality ratings, readability levels, understandability scores, and actionability scores were normally distributed. One-way ANOVA with Bonferroni correction was used to compare the mean quality ratings and mean understandability scores among the three types of websites. Meanwhile, the Kruskal-Wallis test was used to compare the mean readability levels and mean actionability scores among the three types of websites. A two-sample t test with equal variances was used to compare the mean quality ratings and mean understandability scores between the top- and bottom-ranked websites. Finally, to compare the mean actionability scores and mean readability levels between the top- and bottom-ranked websites, a two-sample t test with unequal variances and the Mann-Whitney U test were used, respectively. The κ statistic was used to determine inter-rater agreement. Statistical analyses were performed using Stata 13 (StataCorp LP, College Station, Texas, USA).
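
Although the analyses were run in Stata, the same test battery can be sketched in Python with SciPy and scikit-learn; the arrays below are synthetic placeholders, not the study's data:

```python
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Synthetic DISCERN quality ratings grouped by website type
gov = rng.normal(51, 6, 6)
hosp = rng.normal(42, 8, 17)
other = rng.normal(48, 9, 23)

# Normality check, as with the Shapiro-Wilk test in the paper
print(stats.shapiro(np.concatenate([gov, hosp, other])))

# One-way ANOVA across the three website types (normally distributed
# outcomes); pairwise post-hoc comparisons would use a
# Bonferroni-adjusted alpha
print(stats.f_oneway(gov, hosp, other))

# Kruskal-Wallis test for non-normally distributed outcomes
print(stats.kruskal(gov, hosp, other))

# Top- vs. bottom-ranked comparisons
top = rng.normal(52, 5, 10)
bottom = rng.normal(41, 9, 10)
print(stats.ttest_ind(top, bottom, equal_var=True))   # equal variances
print(stats.ttest_ind(top, bottom, equal_var=False))  # unequal (Welch)
print(stats.mannwhitneyu(top, bottom))

# Inter-rater agreement (kappa) for two raters' binary PEMAT ratings
rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(cohen_kappa_score(rater1, rater2))
```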

Results

Quality and Health Literacy Demand

A total of 46 websites were included in this analysis (Table 2). Among the 46 websites, 6 were classified as government-sponsored, 17 were hospital-affiliated, and the remaining 23 were classified as ‘other’ for the purpose of this study. The overall mean DISCERN quality rating was 46.0 ± 8.9. Among the 46 websites, 24% were rated as having poor quality, 46% were of fair quality, and 30% were of good quality.

Table 2.

Characteristics of Heart Failure Websites

Website, Type (G/H/O), DISCERN Quality Rating, PEMAT U%, PEMAT A%, Mean Reading Level
mayoclinic.org H 56 58.3 60 11.5
emedicine.medscape.com O 53 28.6 0 18.5
heartfailure.org O 60 62.5 66.7 11.2
nhlbi.nih.gov G 49 75 100 9.2
familydoctor.org O 46 61.5 60 9.4
en.wikipedia.org O 52 30.8 0 18
nytimes.com O 56 38.5 40 9.7
nlm.nih.gov G 56 80 40 10.1
ncbi.nlm.nih.gov G 49 61.5 40 9.1
medicinenet.com O 47 76.9 40 9.6
cdc.gov G 43 62.5 0 11.4
healthline.com O 43 69.2 40 12.4
nhs.uk G 59 50 40 11.9
wexnermedical.osu.edu H 35 61.5 60 17.2
en.academic.ru O 60 23.1 20 16.8
patient.co.uk O 53 68.8 60 10.1
medicalnewstoday.com O 48 53.8 40 12
emedicinehealth.com O 49 53.8 20 13.8
uptodate.com O 57 53.8 60 10.6
my.clevelandclinic.org H 55 85.7 66.7 10.8
urmc.rochester.edu H 39 61.5 0 10.6
heart.org O 58 81.3 100 10.4
hopkinsmedicine.org H 50 58.3 33.3 13.1
drweil.com O 29 53.8 40 14.2
merckmanuals.com O 42 38.5 0 17.3
rightdiagnosis.com O 37 46.2 0 13.9
umm.edu H 44 33.3 0 14.2
medtronic.com O 37 50 40 13.3
ghc.org H 37 84.6 40 9
healthcommunities.com O 55 46.2 40 14.1
ucsfhealth.org H 40 41.7 20 11.7
uchospitals.edu H 47 46.2 0 15.8
cedars-sinai.edu H 36 69.2 0 12.6
emoryhealthcare.org H 47 41.7 83.3 12.9
uchealth.staywellsolutionsonline.com H 47 75 0 11.6
pdrhealth.com O 34 31.3 60 11.3
texasheart.org H 47 57.1 33.3 9.8
healthtalk.org O 57 75 0 11.9
health.sjm.com O 48 62.5 40 11.2
nihseniorhealth.gov G 48 87.5 66.7 9.7
mclaren.org H 27 41.7 0 15.6
utswmedicine.com H 33 58.3 20 14
barnesjewish.org H 41 45.5 0 17.9
bettermedicine.com O 37 53.8 60 11.8
winthrop.org H 30 40 0 13.4
stlukeshouston.com H 41 53.8 60 16.1

Note. Top 10 website (per search traffic); Bottom 10 website. G – Government, H – Hospital, O – Other, U – Understandability, A – Actionability, ARI – Automated Reading Index, CLI – Coleman-Liau Index, FK – Flesch-Kincaid Grade Level, FRE – Flesch Reading Ease, GF – Gunning-Fog Index

Mean quality ratings for each of the DISCERN items varied greatly (Figure 2). On average, the websites consistently rated higher on 3 items, namely: [item 1] having clear aims, [item 2] achieving those aims, and [item 14] making it clear that there is more than one treatment option. Conversely, the websites scored poorly on the following 5 items: [item 4] providing clear sources of information, [item 8] identifying areas of uncertainty, [item 11] describing the risks associated with each treatment, [item 12] describing what would happen if no treatment were used, and [item 13] describing how the treatment choices affect overall quality of life.

Figure 2. Mean DISCERN Quality Ratings

The websites had readability scores that ranged from a 9th grade to more than an 18th grade (graduate school) reading level, with the overall mean readability score being a 12.6 ± 2.7 grade reading level. The mean scores for the individual readability tests were as follows: 12.8 (Automated Readability Index), 13.1 (Coleman-Liau Index), 11.6 (Flesch-Kincaid), 45.3 (Flesch Reading Ease), 12.8 (Fry), 12.4 (Gunning-Fog Index), and 13.2 (SMOG).

The overall mean PEMAT-Understandability score was 56.3% ± 16.2. There was significant variation in the mean scores for each of the PEMAT-Understandability items (Figure 3). The websites scored consistently higher on 3 items, namely: [item 1] makes its purpose completely evident, [item 7] does not expect the user to perform calculations, and [item 10] presents the information in a logical sequence. Conversely, the websites scored lower on the following 3 items: [item 11] provides a summary, [item 15] uses visual aids, and [item 17] provides clear titles or captions for its visual aids. The overall mean PEMAT-Actionability score was 34.7% ± 28.7. The mean scores for each of the PEMAT-Actionability items also varied (Figure 4). Most of the websites identified at least one action that the user can take [item 20] and, to a lesser extent, addressed the user directly when describing the actions [item 21]. However, the majority of the websites did not use visual aids to make it easier for the user to follow the instructions.

Figure 3. Mean PEMAT-Understandability Scores

Figure 4. Mean PEMAT-Actionability Scores

Comparison by Type

The quality of heart failure information significantly differed among the three types of websites (F=3.35, P=0.04). Government-sponsored websites had the highest mean quality rating (μ=50.7 ± 5.8), followed by the ‘other’ group (μ=47.7 ± 9.0), with the hospital-affiliated group having the lowest mean quality rating (μ=41.9 ± 8.3). After Bonferroni correction for multiple comparisons, however, the pairwise differences were no longer significant.

The readability scores also significantly differed among the three types of websites (χ2=7.4, P=0.025). Government-sponsored websites had the lowest mean reading level (μ=10.2 ± 1.2), followed by the ‘other’ group (μ=12.7 ± 2.8), with the hospital-affiliated group having the highest reading level (μ=13.4 ± 2.5).

Government-sponsored websites had the highest mean Understandability score (μ=69.4% ± 13.8), followed by hospital-affiliated websites (μ=56.0% ± 15.6), with the ‘other’ group having the lowest Understandability score (μ=53.1% ± 16.1). However, these differences failed to reach statistical significance (F=2.6, P=0.09).

Finally, government-sponsored websites also had the highest Actionability score among the three types of websites (μ=47.8% ± 33.3), followed by the ‘other’ group (μ=35.9% ± 27.4), with the hospital-affiliated websites having the lowest mean Actionability score (μ=28.0% ± 29.0). However, these differences failed to reach statistical significance (χ2=2.1, P=0.36).

Comparison by Rank

The top-ranked websites (1–10) had a significantly higher quality score (μ=52.4 ± 4.6) compared with the bottom-ranked (37–46) websites (μ=40.9 ± 9.4) (t=3.49, P=0.003). Similarly, the top-ranked websites also had a lower reading level (μ=11.6 ± 3.4) compared with the bottom-ranked websites (μ=13.1 ± 2.8). However, the difference in readability was not statistically significant (z=1.7, P=0.10).

The top-ranked websites (μ=57.4% ± 18.7) had a similar Understandability score to the bottom-ranked websites (μ=57.5% ± 14.8) (t=−0.01, P=0.99). On the other hand, the top-ranked websites had a higher Actionability score (μ=44.7% ± 30.0) compared with the bottom-ranked websites (μ=28.0% ± 27.7); however, this difference was not statistically significant (t=1.29, P=0.21).

Discussion

Similar to other health-related website analysis studies,17,23,24 we found that the mean readability level of the 46 heart failure websites (12+ grade reading level) was well above the recommended readability level of 7th grade or below.34 It should be noted that the two highest reading levels belonged to websites that do not necessarily cater to a lay audience; however, we decided to include these websites because they appeared on the first page of the search results and were thus easily accessible to the lay consumer. Furthermore, these websites did not explicitly state that they were intended for a professional audience, and they had topic headings similar to those in the more “patient-oriented” websites. A likely reason for the higher reading level of these websites is the limited use of everyday language. A closer inspection of the Readability Studio output revealed that the majority of these websites used difficult, polysyllabic words and long sentences. Another factor could be the complexity of heart failure itself and the difficulty of thoroughly explaining the condition and its treatments in a succinct manner. One solution could be to replace lengthy text with a video that clearly illustrates or explains the more complex content. Unfortunately, the majority of the websites did not take advantage of this medium, which could potentially promote understandability.

The overall mean ‘fair’ quality rating (μ=46.0 ± 8.9) for the websites was similar to that found in another study that used the DISCERN instrument to assess the quality of online health-related information.24 One reason for the less-than-‘good’ quality rating was the lack of citations and corresponding references, which makes it hard for the reader to confirm the reliability of the information provided. Additionally, the majority of the websites failed to refer to areas of uncertainty (i.e., differences in expert opinion concerning treatment choices). Furthermore, the majority of the websites did not describe the risks associated with each treatment option or what would happen if no treatment were undertaken. Finally, only a few of the websites described how each treatment affects overall quality of life or provided support for shared decision-making (i.e., suggestions for things to discuss with healthcare providers). Given the impact of heart failure and its treatments on a person’s quality of life35 and the known benefits of shared decision-making,36 heart failure patients could greatly benefit from websites that explain how each treatment option affects quality of life, information they could then use to guide their decisions. Furthermore, websites that provide tools (e.g., a list of questions to ask the doctor) could help empower patients to actively engage in shared decision-making. The quality of these heart failure websites could therefore be greatly enhanced if these shortcomings were addressed.

The overall mean Understandability and Actionability scores for the 46 heart failure websites were considerably lower than those reported in another study33 that evaluated surgical site infection websites using the same assessment tool, PEMAT (56.3% vs. 75% for understandability and 34.7% vs. 49% for actionability, respectively). As previously mentioned, the complexity of heart failure and its associated treatments could have contributed to the lower Understandability score. Furthermore, a closer look at the itemized breakdown of the Understandability score revealed that the websites fell short of the recommended use of everyday language and sparing use of medical jargon. Additionally, the websites that provided lengthy information failed to provide a quick summary of the content. The lack of visual aids, and of clear titles or captions for the visual aids that were used, also lowered the overall mean Understandability score. Likewise, the main reason for the low Actionability score was the lack of tangible tools (e.g., menu planners, checklists) that could help the user take action. Offering a relevant tool could enhance consumers’ ability to act by giving them something tangible that empowers them to apply the information they have just learned.

Finally, comparing the three types of heart failure websites, government-sponsored websites were found to have significantly better quality ratings and lower reading levels. A possible explanation for this finding is the enactment of the Plain Writing Act of 2010, which requires government agencies to use clear communication that the public can understand and use (PlainLanguage.gov). The websites ranked highest in terms of traffic share were not significantly better than the bottom-ranked websites except in overall mean quality rating. Perhaps the better-quality information provided by these websites contributed to their larger traffic share.

This study is not without limitations. First, the websites included in this analysis were all written in English, with the majority produced in the USA and UK; hence, the findings of this study may not be generalizable to heart failure websites written in other languages. Second, given the evolving nature of the Internet, it is quite possible that changes have been made to some of the websites since data collection, making our findings less current. Despite these limitations, the use of validated instruments and independent raters greatly enhanced the rigor of this study.

Conclusion

Even though the heart failure websites included in this analysis were overall of fair quality, none of them met the recommended readability level. Furthermore, they had low mean Understandability and mean Actionability scores, which means that the average health literacy demand of these websites most likely exceeds the health literacy level of their target audience. Web content authors need to consider not just the quality but also the health literacy demand of the information on their websites, especially considering that low health literacy is likely prevalent among their usual audience. Furthermore, clinicians looking for suitable websites to recommend to their heart failure patients can direct them to government-sponsored websites (e.g., nhlbi.nih.gov, nlm.nih.gov), given their better quality and lower health literacy demand. Finally, given the paucity of this type of web analysis, further research is needed to determine the quality and health literacy demand of other health-related online information, particularly for other chronic diseases (e.g., diabetes, chronic lung disease, atrial fibrillation).

What’s New?

  • The heart failure information currently available online is of fair quality but imposes a considerable health literacy demand on the consumer due to its high reading level, poor understandability, and limited actionability.

  • Website developers can make their site content more accessible to those with low health literacy by using everyday language, defining medical terms and using them sparingly, providing a summary (e.g., bulleted key points), using visual aids with clear titles/captions, providing a tangible tool (e.g., menu planner, exercise plan), and breaking down instructions step-by-step.

  • Clinicians looking for suitable websites to recommend to their heart failure patients could direct them to government-sponsored heart failure websites, which tended to provide better quality information that had lower health literacy demand.

Acknowledgments

Funding:

Maan Isabella Cajita, Jingzhi Xu, and Melissa Hladek are supported by a predoctoral fellowship in Interdisciplinary Training in Cardiovascular Health Research (NIH/NINR T32 NR012704).

Footnotes

Conflict of Interest: none declared

References

1. Pew Research Center. Health topics. 2011. Available at: http://www.pewinternet.org/2011/02/01/health-topics-2/. Accessed April 18, 2015.
2. Nielsen-Bohlman L, Panzer AM, Kindig DA, editors. Health Literacy: A Prescription to End Confusion. Washington, DC: Institute of Medicine, The National Academies Press; 2004.
3. Kutner M, Greenberg E, Jin Y, Paulsen C. The Health Literacy of America's Adults: Results From the 2003 National Assessment of Adult Literacy (NCES 2006–483). Washington, DC: US Department of Education, National Center for Education Statistics; 2006. Available at: http://nces.ed.gov/pubs2006/2006483.pdf.
4. Cajita MI, Cajita TR, Han HR. Health literacy and heart failure: a systematic review. J Cardiovasc Nurs. 2015. doi: 10.1097/JCN.0000000000000229.
5. Chen AMH, Yehle KS, Albert NM, et al. Relationships between health literacy and heart failure knowledge, self-efficacy, and self-care adherence. Res Social Adm Pharm. 2014;10(2):378–386. doi: 10.1016/j.sapharm.2013.07.001.
6. Dennison CR, McEntee ML, Samuel L, et al. Adequate health literacy is associated with higher heart failure knowledge and self care confidence in hospitalized patients. J Cardiovasc Nurs. 2011;26(5):359–367. doi: 10.1097/JCN.0b013e3181f16f88.
7. Wu JR, Holmes GM, DeWalt DA, et al. Low literacy is associated with increased risk of hospitalization and death among individuals with heart failure. J Gen Intern Med. 2013;28(9):1174–1180. doi: 10.1007/s11606-013-2394-4.
8. Noureldin M, Plake KS, Morrow DG, Tu W, Wu J, Murray MD. Effect of health literacy on drug adherence in patients with heart failure. Pharmacotherapy. 2012;32(9):819–826. doi: 10.1002/j.1875-9114.2012.01109.x.
9. Squiers L, Peinado S, Berkman N, Boudewyns V, McCormack L. The health literacy skills framework. J Health Commun. 2012;17(Suppl 3):30–54. doi: 10.1080/10810730.2012.713442.
10. Hansberry DR, Suresh R, Agarwal N, Heary RF, Goldstein IM. Quality assessment of online patient education resources for peripheral neuropathy. J Peripher Nerv Syst. 2013;18(1):44–47. doi: 10.1111/jns5.12006.
11. Morony S, Flynn M, McCaffery KJ, Jansen J, Webster AC. Readability of written materials for CKD patients: a systematic review. Am J Kidney Dis. 2015;65(6):842–850. doi: 10.1053/j.ajkd.2014.11.025.
12. Rhee RL, Von Feldt JM, Schumacher HR, Merkel PA. Readability and suitability assessment of patient education materials in rheumatic diseases. Arthritis Care Res (Hoboken). 2013;65(10):1702–1706. doi: 10.1002/acr.22046.
13. Sharma N, Tridimas A, Fitzsimmons PR. A readability assessment of online stroke information. J Stroke Cerebrovasc Dis. 2014;23(6):1362–1367. doi: 10.1016/j.jstrokecerebrovasdis.2013.11.017.
14. Agarwal N, Feghhi D, Gupta R, et al. A comparative analysis of minimally invasive and open spine surgery patient education resources. J Neurosurg Spine. 2014;21:468–474. doi: 10.3171/2014.5.SPINE13600.
15. McEnteggart GE, Naeem M, Skierkowski D, Baird GL, Ahn SH, Soares G. Readability of online patient education materials related to IR. J Vasc Interv Radiol. 2015:1–5. doi: 10.1016/j.jvir.2015.03.019.
16. Tian C, Champlin S, Mackert M, Lazard A, Agrawal D. Readability, suitability, and health content assessment of web-based patient education materials on colorectal cancer screening. Gastrointest Endosc. 2014;80(2):284.e2–290.e2. doi: 10.1016/j.gie.2014.01.034.
17. Vargas CR, Ricci JA, Chuang DJ, Lee BT. Online patient resources for liposuction. Ann Plast Surg. 2015. doi: 10.1097/SAP.0000000000000438.
18. Alexa.com. Top search engines. Available at: http://www.alexa.com/topsites/category/Computers/Internet/Searching/Search_Engines. Accessed May 28, 2015.
19. Chitika. Chitika Insights: the value of Google result positioning. 2013. Available at: http://cdn2.hubspot.net/hub/239330/file-61331237-pdf/ChitikaInsights-ValueofGoogleResultsPositioning.pdf. Accessed May 28, 2015.
20. Charnock D, Shepperd S, Needham G, Gann R. DISCERN: an instrument for judging the quality of written consumer health information on treatment choices. J Epidemiol Community Health. 1999;53:105–111. doi: 10.1136/jech.53.2.105.
21. Kaicker J, Debono VB, Dang W, Buckley N, Thabane L. Assessment of the quality and variability of health information on chronic pain websites using the DISCERN instrument. BMC Med. 2010;8:59. doi: 10.1186/1741-7015-8-59.
22. Rees CE, Ford JE, Sheard CE. Evaluating the reliability of DISCERN: a tool for assessing the quality of written patient information on treatment choices. Patient Educ Couns. 2002;47:273–275. doi: 10.1016/S0738-3991(01)00225-7.
23. Sobota A, Ozakinci G. The quality and readability of online consumer information about gynecologic cancer. Int J Gynecol Cancer. 2015;25(3):537–541. doi: 10.1097/IGC.0000000000000362.
24. Lam CG, Roter DL, Cohen KJ. Survey of quality, readability, and social reach of websites on osteosarcoma in adolescents. Patient Educ Couns. 2013;90(1):82–87. doi: 10.1016/j.pec.2012.08.006.
25. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–174.
26. Smith E, Senter R. Automated Readability Index. Aerosp Med Res Lab. 1967:1–14.
27. Coleman M, Liau T. A computer readability formula designed for machine scoring. J Appl Psychol. 1975;60:283–284.
28. Flesch R. A new readability yardstick. J Appl Psychol. 1948;32:221–233. doi: 10.1037/h0057532.
29. Fry E. A readability formula that saves time. J Read. 1968;11:513–516.
30. Gunning R. The Technique of Clear Writing. Michigan: McGraw-Hill; 1968.
31. McLaughlin H. SMOG grading: a new readability formula. J Read. 1969;12(8):639–646.
32. Shoemaker SJ, Wolf MS, Brach C. Development of the Patient Education Materials Assessment Tool (PEMAT): a new measure of understandability and actionability for print and audiovisual patient information. Patient Educ Couns. 2014;96(3):395–403. doi: 10.1016/j.pec.2014.05.027.
33. Zellmer C, Zimdars P, Parker S, Safdar N. Evaluating the usefulness of patient education materials on surgical site infection: a systematic assessment. Am J Infect Control. 2015;43(2):167–168. doi: 10.1016/j.ajic.2014.10.020.
34. U.S. National Library of Medicine. How to write easy-to-read health materials. 2013.
35. Heo S, Lennie TA, Okoli C, Moser DK. Quality of life in patients with heart failure: ask the patients. Heart Lung. 2009;38(2):100–108. doi: 10.1016/j.hrtlng.2008.04.002.
36. Shay LA, Lafata JE. Where is the evidence? A systematic review of shared decision making and patient outcomes. Med Decis Mak. 2014;35(1):114–131. doi: 10.1177/0272989X14551638.
