Author manuscript; available in PMC: 2017 Sep 26.
Published in final edited form as: J Health Commun. 2016;21(Suppl 2):105–108. doi: 10.1080/10810730.2016.1193919

What patient characteristics influence nurses’ assessment of health literacy?

Kathryn Goggins 1,2,3, Kenneth A Wallston 1,3,4, Lorraine Mion 4, Courtney Cawthon 1,4, Sunil Kripalani 1,2,3,5
PMCID: PMC5078982  NIHMSID: NIHMS821165  PMID: 27668543

Abstract

Overestimation of patients’ health literacy skills is common among nurses and physicians. At Vanderbilt University Hospital (VUH), nurses routinely ask patients the three Brief Health Literacy Screening (BHLS) questions. Data from two studies that recruited patients at VUH, the Health Literacy Screening (HEALS) study and the Vanderbilt Inpatient Cohort Study (VICS), were analyzed to compare the BHLS score recorded by nurses during clinical care with the score recorded by trained research assistants (RAs) during the same hospitalization. Logistic regression models determined which patient characteristics were associated with nurses documenting higher health literacy scores than RAs. Overall, the majority (60%) of health literacy scores were accurate, though nurses recorded meaningfully higher health literacy scores in 28.4% of HEALS patients and 35.6% of VICS patients. In the HEALS cohort, patients who were male and had less education were more likely to have higher health literacy scores recorded by nurses (OR=1.93, 95% CI=1.24-3.00 and OR=0.80, 95% CI=0.74-0.88, respectively). In the VICS cohort, patients who were older, male, and had less education were more likely to have higher health literacy scores recorded by nurses (OR=1.01, 95% CI=1.003-1.02; OR=1.49, 95% CI=1.20-1.84; and OR=0.87, 95% CI=0.83-0.90, respectively). These findings suggest that health literacy scores recorded by nurses for male patients and patients with less education could be overestimated. Thus, healthcare professionals should be aware of this tendency and should verify the results of routine health literacy screening tests, especially in certain patient groups.

Keywords: Health literacy, Screening, Research methods

Introduction

Healthcare professionals commonly overestimate patients’ level of health literacy (Bass, Wilson, Griffith, & Barnett, 2002; Dickens, Lambert, Cromwell, & Piano, 2013; Kelly & Haidet, 2007). The ramifications can be serious and include providing patient education that is poorly matched to patients’ needs (Kelly & Haidet, 2007). Inaccurate assessment may reflect shortcomings of current screening tools, but patient characteristics may also influence assessments (Mancuso, 2009).

Since 2010, nurses at Vanderbilt University Hospital (VUH) have administered a health literacy measure as part of the routine admission process. Trained research assistants (RAs) have also administered the same measure to a subset of patients participating in research studies; therefore, we are uniquely positioned to compare the two. In this analysis, we sought to investigate whether selected patient demographic factors are associated with higher health literacy scores reported by nurses as compared to those assessed by trained RAs.

Methods

Setting and Sample

VUH is a large tertiary care facility that receives approximately 40,000 admissions per year. Adult hospitalized patients were recruited into two research studies at VUH: the Health Literacy Screening (HEALS) study and the Vanderbilt Inpatient Cohort Study (VICS). HEALS recruited adult patients with hypertension from 2010 to 2012, and VICS recruited adult patients with acute coronary syndrome (ACS) and/or acute decompensated heart failure (ADHF) from 2011 to 2015. Patients meeting eligibility criteria were enrolled into the studies and completed interviews with RAs while hospitalized. Both studies were approved by Vanderbilt University's Institutional Review Board. Methods of these studies are reported elsewhere (Meyers et al., 2014; Wallston et al., 2014).

Health Literacy Assessment

Baseline interviews for both studies included the collection of basic demographic information, as well as completion of the Brief Health Literacy Screen (BHLS) (Chew, Bradley, & Boyko, 2004). The BHLS is a validated, 3-item health literacy screening tool that asks patients to report their level of confidence filling out medical forms, assistance needed to read hospital materials, and difficulty understanding written medical information. Each question has Likert-type response options and is scored from 1 to 5. The items are summed to yield a total score ranging from 3 to 15, with higher scores indicating higher health literacy. At VUH, nurses are educated on health literacy and routinely administer the BHLS questions to patients as part of their clinical duties during the hospital admission process (Cawthon, Mion, Willens, Roumie, & Kripalani, 2014). All BHLS scores are stored electronically in patients’ health records.
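The BHLS scoring rule described above can be sketched as follows. This is an illustrative implementation only; the function name and argument names are our own, not part of the BHLS instrument.

```python
# Illustrative sketch of BHLS scoring as described in the text:
# three items, each answered on a 1-5 Likert-type scale, summed
# to a total of 3-15, with higher scores indicating higher health literacy.

def bhls_total(confidence: int, help_needed: int, difficulty: int) -> int:
    """Sum the three BHLS item scores (each scored 1 to 5)."""
    items = (confidence, help_needed, difficulty)
    if not all(1 <= x <= 5 for x in items):
        raise ValueError("each BHLS item must be scored from 1 to 5")
    return sum(items)

print(bhls_total(4, 5, 3))  # -> 12
print(bhls_total(1, 1, 1))  # -> 3 (lowest possible score)
```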

Statistical Analysis

We compared the BHLS score that the nurse collected during admission to the BHLS score collected later at the bedside by trained RAs for research purposes during the same hospitalization, using the latter as the reference standard. We dichotomized patients for analysis: patients whose nurse-recorded health literacy score was approximately the same as the RA-recorded score were compared to patients whose nurse-recorded score was meaningfully higher than the RA-recorded score. Based on the number of BHLS questions, the response options, and the standard deviation of the difference between the nurse and RA scores (3.3 in both studies), we defined a meaningfully higher score as a nurse-recorded score 3.0 points or more above the RA assessment. We ran separate logistic regression models for each study to determine which patient demographic factors (age, gender, race, education) were associated with meaningfully higher health literacy scores reported by nurses.
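The dichotomization rule above amounts to a simple threshold on the nurse–RA difference; a minimal sketch (function and constant names are ours, for illustration only):

```python
# Sketch of the classification rule described in the text: a nurse-recorded
# BHLS score 3.0 or more points above the RA reference score is flagged as
# a meaningfully higher (overestimated) assessment. The 3.0-point threshold
# reflects roughly one SD of the nurse-RA difference (3.3 in both studies).

MEANINGFUL_DIFFERENCE = 3.0

def nurse_overestimated(nurse_score: int, ra_score: int) -> bool:
    """True when the nurse score exceeds the RA score by 3 or more points."""
    return (nurse_score - ra_score) >= MEANINGFUL_DIFFERENCE

print(nurse_overestimated(12, 8))  # -> True  (difference of 4)
print(nurse_overestimated(10, 9))  # -> False (difference of 1)
```

Patients with nurse-recorded scores meaningfully *lower* than the RA score were excluded from the analyses, so the outcome is binary (overestimated vs. approximately concordant).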

Patients for whom the nurse recorded a health literacy score that was meaningfully lower than the score recorded by the RA (essentially, under-reported health literacy) were not included in the analyses reported in this paper.

Results

This analysis included 498 patients enrolled in HEALS and 1,819 patients enrolled in VICS. Across both studies, the average patient age was 59.3 years (standard deviation [SD] = 13.1), and 55.5% of patients were male. Over 78% of patients were white, and the average educational attainment was 13.6 years (SD = 2.9) (Table 1). Nurses’ health literacy assessments were accurate for the majority of patients (60.0% across both studies). However, nurses tended to report higher health literacy scores than trained research staff, by an average of 1.5 points.

Table 1.

Patient characteristics from HEALS and VICS studies.

                      HEALS^c (n=498)   VICS^d (n=1,819)   Total (n=2,317)
Age, years^a          56.1 ± 14.1       60.1 ± 12.7        59.3 ± 13.1
Gender, Male^b        238 (47.8)        1049 (57.7)        1287 (55.5)
Race, White^b         318 (64.0)        1500 (82.7)        1818 (78.5)
Education, years^a    14.0 ± 3.0        13.5 ± 2.9         13.6 ± 2.9

^a M ± SD
^b N (%)
^c Health Literacy Screening (HEALS) study
^d Vanderbilt Inpatient Cohort Study (VICS)

In HEALS, 124 patients (24.9%) were categorized as having meaningfully higher nurse-reported health literacy scores (i.e., 3 or more points higher than the score recorded by the RA). Logistic regression performed on the HEALS data showed that patients who were male (Odds Ratio (OR)=1.93, 95% Confidence Interval (CI)=1.24-3.00) or less educated (OR=0.80, 95% CI=0.74-0.88) had significantly greater odds of having higher health literacy scores reported by nurses (Figure 1). Race and age were not associated with higher health literacy scores reported by nurses in the HEALS sample.

Figure 1.

Odds of nurse overestimation of patients’ health literacy, by patient characteristics.

In VICS, 593 patients (32.6%) were categorized as having meaningfully higher nurse-reported health literacy scores. Logistic regression performed on the VICS data revealed that patients who were older (OR=1.01, 95% CI=1.003-1.02), male (OR=1.49, 95% CI=1.20-1.84), or less educated (OR=0.87, 95% CI=0.83-0.90) had significantly greater odds of having higher BHLS scores reported by nurses (Figure 1). As in the HEALS sample, race was not associated with the likelihood of having higher health literacy scores reported by nurses in the VICS sample.

Discussion

Hospital nurses are the primary healthcare professionals responsible for assessing patients’ learning needs and providing appropriate education. Using a brief structured assessment of health literacy in the course of their clinical care, nurses recorded accurate health literacy values for the majority of patients. However, in both studies, the scores recorded by nurses during routine admission interviews overestimated the health literacy of more than one fourth of patients, relative to assessments made later at the bedside by trained RAs using the same tool. This overestimation occurred primarily among male patients and those with less formal schooling. To our knowledge, this is the first study to assess potential biases in health literacy scores collected in routine practice.

Overestimation of patients’ health literacy levels may have adverse consequences by leading healthcare professionals to provide information that exceeds patients’ skills and comprehension. Indeed, previous research and clinical experience show this is often the case, and patients commonly leave healthcare encounters with a poor understanding of what has been discussed (IOM, 2004). In this study, we were unable to assess how the level of information and other communication patterns may have differed with accurate vs overestimation of patients’ health literacy levels, but this would be an interesting topic for future research.

It is uncertain how or why demographic characteristics would influence nurses’ assessments of health literacy level, especially among those with less formal education. The BHLS is a subjective assessment of health literacy, in that it involves patients’ self-assessment of their abilities in different healthcare contexts. It may be that during the admission process men tend to present themselves as more competent than they actually feel, but once they are seen by the RAs in their hospital room those same men are more willing to admit to having shortcomings in their understanding of health information. Regarding education level, it is possible that those with more formal education tend to have more stable (and high) BHLS scores, whereas those with less education have more fluctuation in their self-assessment, which presents here as overestimation in the nurses’ documentation. In the future, follow-up interviews of nurses and patients conducted shortly after the health literacy assessment could shed more light on reasons for this variance. In the meantime, it is prudent to verify the results of health literacy screening by reviewing responses with patients during a subsequent encounter, particularly among groups prone to overestimation initially. Clinicians should also be attentive to other possible indicators of low health literacy, such as difficulty recalling medications, asking few questions, or having difficulty teaching back information (Kripalani & Weiss, 2006).

It is not necessary to choose between health literacy screening and a universal precautions approach. Indeed, it is crucial to communicate effectively with all patients, and it is also important to recognize that some patients may need additional assistance understanding information or added time for questions. Health literacy screening tools that are easily administered in the clinical setting are useful for identifying these patients. Moreover, some patient education interventions are resource-intensive and should be delivered only to higher-risk patients, or have been shown to provide the most benefit to patients with low health literacy. For example, we recently demonstrated that an educational intervention aimed at improving outcomes after hospital discharge was effective for patients with inadequate health literacy, but not for patients with adequate health literacy (Bell et al., 2016). Thus, using a screening tool such as the BHLS to identify these patients could enable health systems to better target resources. The overestimation phenomenon identified in our study applies mostly to patients who would have been classified as having “adequate health literacy” on the basis of the nurses’ screening assessment during intake, but who may still need additional assistance and resources.

Limitations of this study include its performance at a single university hospital. However, because our hospital has employed routine health literacy assessment since 2010, it is uniquely positioned for this research. Another limitation, as mentioned above, is that we did not determine how the accuracy of health literacy scores was related to subsequent patient education or comprehension. Finally, this study treats health literacy assessments done by trained research staff as the reference standard. It is possible that these values were sometimes over- or underestimations of patients’ true health literacy skills.

Conclusion

While most nurse-recorded values were accurate, a significant proportion of patients had their health literacy overestimated during the admission process. Thus, healthcare professionals should verify the results of routine health literacy screening tests, especially in certain patient groups.

Acknowledgements

We acknowledge the following additional members of the VICS research team who contributed to the study design or conduct: Susan P. Bell, MD, MSCI; Olivia Busing; Catherine Couey; Katharine M. Donato, PhD; Vanessa Fuentes; Frank E. Harrell, PhD; Blake Hendrickson; Cardella Leak, MPH; Daniel Lewis; Abby G. Meyers, MD; Monika Rizk; Hannah Rosenberg; Russell L. Rothman, MD, MPP; John F. Schnelle, PhD; Eduard E. Vasilevskis, MD, MPH; Kelly H.S. Wright, MA.

Supported by awards R01 HL109388 (Kripalani), R21 HL096581 (Kripalani), the Vanderbilt University Innovation and Discovery in Engineering and Science (IDEAS) Program Grant Award, and UL1 RR024975-01 (CTSA). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Heart, Lung, and Blood Institute or the National Institutes of Health. The authors’ funding sources did not participate in the planning, collection, analysis or interpretation of data or in the decision to submit for publication.

References

1. Bass PF 3rd, Wilson JF, Griffith CH, Barnett DR. Residents’ ability to identify patients with poor literacy skills. Acad Med. 2002;77(10):1039–1041. doi: 10.1097/00001888-200210000-00021.
2. Bell SP, Schnipper JL, Goggins K, Bian A, Shintani A, Roumie CL; Pharmacist Intervention for Low Literacy in Cardiovascular Disease Study Group. Effect of pharmacist counseling intervention on health care utilization following hospital discharge: a randomized control trial. J Gen Intern Med. 2016. doi: 10.1007/s11606-016-3596-3.
3. Cawthon C, Mion LC, Willens DE, Roumie CL, Kripalani S. Implementing routine health literacy assessment in hospital and primary care patients. Jt Comm J Qual Patient Saf. 2014;40(2):68–76. doi: 10.1016/s1553-7250(14)40008-4.
4. Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med. 2004;36(8):588–594.
5. Dickens C, Lambert BL, Cromwell T, Piano MR. Nurse overestimation of patients’ health literacy. J Health Commun. 2013;18(Suppl 1):62–69. doi: 10.1080/10810730.2013.825670.
6. Institute of Medicine (IOM). Health Literacy: A Prescription to End Confusion. Washington (DC): The National Academies Press; 2004.
7. Kelly PA, Haidet P. Physician overestimation of patient literacy: a potential source of health care disparities. Patient Educ Couns. 2007;66(1):119–122. doi: 10.1016/j.pec.2006.10.007.
8. Kripalani S, Weiss BD. Teaching about health literacy and clear communication. J Gen Intern Med. 2006;21(8):888–890. doi: 10.1111/j.1525-1497.2006.00543.x.
9. Mancuso JM. Assessment and measurement of health literacy: an integrative review of the literature. Nurs Health Sci. 2009;11(1):77–89. doi: 10.1111/j.1442-2018.2008.00408.x.
10. Meyers AG, Salanitro A, Wallston KA, Cawthon C, Vasilevskis EE, Goggins KM, Kripalani S. Determinants of health after hospital discharge: rationale and design of the Vanderbilt Inpatient Cohort Study (VICS). BMC Health Serv Res. 2014;14:10. doi: 10.1186/1472-6963-14-10.
11. Wallston KA, Cawthon C, McNaughton CD, Rothman RL, Osborn CY, Kripalani S. Psychometric properties of the brief health literacy screen in clinical practice. J Gen Intern Med. 2014;29(1):119–126. doi: 10.1007/s11606-013-2568-0.
