Author manuscript; available in PMC: 2016 Sep 1.
Published in final edited form as: J Am Board Fam Med. 2015 Sep-Oct;28(5):584–594. doi: 10.3122/jabfm.2015.05.150037

Do Subjective Measures Improve the Ability to Identify Limited Health Literacy in a Clinical Setting?

Melody S Goodman 1, Richard T Griffey 1, Christopher R Carpenter 1, Melvin Blanchard 1, Kimberly A Kaphingst 1
PMCID: PMC4987705  NIHMSID: NIHMS807726  PMID: 26355130

Abstract

Background

Existing health literacy assessments developed for research purposes have constraints that limit their utility for clinical practice, including time requirements and administration protocols. The Brief Health Literacy Screen (BHLS) consists of 3 self-administered Single-Item Literacy Screener (SILS) questions and obviates these clinical barriers. We assessed whether the addition of SILS items or the BHLS to patient demographics readily available in ambulatory clinical settings reaching underserved patients improves the ability to identify limited health literacy.

Methods

We analyzed data from 2 cross-sectional convenience samples of patients from an urban academic emergency department (n = 425) and a primary care clinic (n = 486) in St. Louis, Missouri. Across samples, health literacy was assessed using the Rapid Estimate of Adult Literacy in Medicine-Revised (REALM-R), Newest Vital Sign (NVS), and the BHLS. Our analytic sample consisted of 911 adult patients, who were primarily female (62%), black (66%), and had at least a high school education (82%); 456 were randomly assigned to the estimation sample and 455 to the validation sample.

Results

The analysis showed that the best REALM-R estimation model contained age, sex, education, race, and 1 SILS item (difficulty understanding written information). In validation analysis this model had a sensitivity of 62%, specificity of 81%, a positive likelihood ratio (LR+) of 3.26, and a negative likelihood ratio (LR−) of 0.47; there was a 28% misclassification rate. The best NVS estimation model contained the BHLS, age, sex, education, and race; this model had a sensitivity of 77%, specificity of 72%, LR+ of 2.75, LR− of 0.32, and a misclassification rate of 25%.

Conclusions

Findings suggest that the BHLS and SILS items improve the ability to identify patients with limited health literacy compared with demographic predictors alone. However, despite being easier to administer in clinical settings, subjective estimates of health literacy have misclassification rates &gt;20% and do not replace objective measures; universal precautions should be used with all patients.

Keywords: Biostatistics, Health Literacy


Health literacy, often defined as the degree to which individuals can obtain, process, and understand basic health information and services needed to make appropriate health decisions,1 is a critical predictor of health knowledge, health outcomes, and health care utilization.1,2 Limited health literacy has been associated with a higher rate of hospitalization,3–6 lower use of preventive services,5 and less effective management of chronic conditions.7 The translation of health literacy measurement beyond the research environment to clinical settings in order to help target potential interventions has been hampered by tools that require administration by staff and face other barriers to completion.8–10 For example, the Short Test of Functional Health Literacy in Adults (S-TOFHLA) is timed and can take up to 7 minutes to complete, increasing the potential for interruptions that could affect performance.11

When considering implementation of health literacy assessments in overcrowded and understaffed medical settings, researchers must consider the trade-offs between instrument complexity, patient acceptability, and diagnostic accuracy.12,13 If found to be brief, accurate, and reliable, health literacy screening instruments could be converted to iPad/kiosk applications that patients could complete while awaiting care, as has been done for dementia,14 vision,15 and substance abuse.16 The Brief Health Literacy Screen (BHLS) contains 3 Single Item Literacy Screener (SILS) items: self-administered, brief, subjective questions through which patients report their perceived health literacy skills, avoiding some of the barriers presented by objective screening tools. The diagnostic accuracy and validity of the SILS relative to the Rapid Estimate of Adult Literacy in Medicine (REALM) and Newest Vital Sign (NVS) have been previously reported.11,17–19

In prior research, the BHLS has been validated to detect limited health literacy using the S-TOFHLA as the criterion standard in a study of 332 white veterans (area under the receiver operating characteristics curve [AUROC], 0.76–0.87).18 The BHLS was subsequently validated in a large Veterans Administration patient population (n = 1796) of mostly older white men with at least a high school education. The “confident with forms” item performed the best, and the ability to identify patients with limited health literacy varied based on the reference standard (AUROC, 0.74 for S-TOFHLA, 0.84 for REALM). In a subsequent study, Wallace et al20 evaluated the 3 SILS items using the REALM as the criterion standard in a population (n = 305) consisting of predominantly white women with a mean age of 49.5 years. “Confident with forms” was superior to the other questions and demographic information (sex, age, race, educational attainment, health insurance). The ability to identify limited health literacy (AUROC, 0.82) on REALM was similar to that determined by Chew et al.17

In several clinical studies, associations have been found between SILS and various health outcomes.2,21–24 Limited health literacy measured using SILS has been shown to be associated with discontinuation of antidepressant medication among patients with type 2 diabetes,25 perception of low coordination of care and low satisfaction among women with breast cancer,26 health care discrimination among diabetics,27 increased risk of hospital admissions,5 decreased knowledge of chronic disease among hypertensive and diabetic patients,3 poorer physical and mental health among older adults,28 and poorer outcomes among diabetic patients.21 In addition, the BHLS has been validated for use in clinical settings when administered by nurses during patient intake.24

However, age, race, and education, which can be readily collected in clinical settings, were found to be significant predictors of health literacy in a systematic review of 85 studies.29 Therefore, there is a need to examine the ability of SILS items and the BHLS, in addition to demographic factors, to identify patients with limited health literacy.20 We quantitatively assessed whether the addition of each SILS item or the BHLS improves the ability to identify patients with limited health literacy compared with patient demographic information. We hypothesized that a combination of the SILS and demographic characteristics improves the ability to identify patients with limited health literacy, compared with standard sociodemographic variables, in clinical settings where administration of objective health literacy assessments is not feasible.

Methods

Settings and Participants

We analyzed data from 2 cross-sectional convenience samples of patients from an emergency department (ED) (n = 425) and primary care clinic (n = 486) affiliated with an urban academic medical center in St. Louis, Missouri. Using SAS statistical software (SAS Institute, Cary, NC), half of the participants from each sample were randomly assigned to the estimation data set, and the remaining observations were combined to form the validation data set.
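The per-sample half-split can be illustrated outside SAS; in this minimal Python sketch (function and variable names are ours, and the seed is hypothetical), each clinical sample is shuffled and halved separately before the halves are pooled:

```python
import random

def half_split(ids, seed=42):
    """Randomly assign half of a sample to estimation, the rest to validation."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)   # fixed seed only for reproducibility
    half = len(ids) // 2
    return ids[:half], ids[half:]

# Split each clinical sample separately, then pool the halves,
# mirroring the study's per-sample random assignment.
ed_est, ed_val = half_split(range(425))      # emergency department sample
pcc_est, pcc_val = half_split(range(486))    # primary care clinic sample
estimation = ed_est + pcc_est
validation = ed_val + pcc_val
```

Note that with an odd-sized sample, floor division leaves the extra observation in one half; any such convention yields the roughly equal halves the study reports.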

Emergency Department

Trained research assistants recruited patients between March 1, 2011, and February 29, 2012, from an urban academic ED. Patients aged ≥18 years were identified for enrollment by review of the electronic medical record dashboard. Exclusion criteria included undue patient distress as judged by the attending physician, altered mental status, aphasia, mental handicap, previously diagnosed dementia or insurmountable communication barrier as judged by family or the screener, non-English-speaking, sexual assault victims, acute psychiatric illness, or corrected visual acuity worse than 20/100 using both eyes. This study was approved by the hospital institutional review board. Research assistants administered health literacy assessments to all eligible and consenting patients and recorded their responses. Demographic data were collected during the interview and from the electronic medical record. De-identified age, race, and sex data were recorded for patients declining to participate. A total of 588 patients were approached; 139 (24%) refused, 9 were excluded, and 446 (76%) were enrolled. Enrolled patients' age, sex, and race did not significantly differ from patients who refused to participate or from the ED patient population.11,30,31

Primary Care Clinic

Participants were recruited between July 2013 and April 2014 from the primary care clinic (PCC) of the same large, urban academic medical center. Patients in the waiting rooms of the PCC were approached by trained data collectors and asked to complete a survey in English. Inclusion criteria were that participants be at least 18 years old, a patient at the PCC, and speak English. Participants were asked to complete a self-administered written questionnaire and a verbally administered survey component. The latter component assessed health literacy with the REALM, Revised (REALM-R) and NVS and was administered by a trained data collector, who recorded responses. All participants completed a verbal consent process and signed a written consent form before completing the survey. This study was approved by the Human Research Protection Office at Washington University School of Medicine.

Approximately 26% (n = 1111) of those approached were ineligible to participate in the study because they were not patients, did not speak English, or had previously taken the survey. Among eligible participants, 44% (n = 1380) agreed to participate in the study and gave consent to trained data collectors. Of the 1380 patients who gave consent, 975 (71%) completed the written survey. Among those with complete written surveys, 602 (60%) completed the verbally administered component. Survey respondents were generally similar to the underlying primary care clinic patient population with respect to sex, age, and race.

For inclusion in this analysis, participants must have completed all 3 health literacy assessments (ie, the REALM-R, NVS, and BHLS) and have demographic data (age, sex, race, education). Because of the small number of patients in the “other race” category for both the ED (n = 11) and PCC (n = 27) samples, we limited analysis to patients whose self-reported race was white or black and who met all inclusion criteria (n = 425 for the ED and n = 486 for the PCC).

Health Literacy Assessments

REALM, Revised

The REALM-R is a health literacy assessment (word recognition test) in which participants are asked to pronounce 11 common medical terms: fat, flu, pill, allergic, jaundice, anemia, fatigue, directed, colitis, constipation, and osteoporosis. The first 3 words are included to reduce test anxiety and are therefore not scored as part of the REALM-R. A trained REALM-R administrator scores the pronunciation (correct/incorrect) of each of the remaining 8 words, resulting in 8 possible points.8 Using standard scoring, we dichotomized the REALM-R score into limited health literacy (scores 0 to 6) and adequate health literacy (scores >6).32

Newest Vital Sign

The NVS is a verbally administered, 6-item measure that asks about information contained in a standard food nutrition label, which requires reading comprehension and numeracy skills.33 Participants received an NVS score ranging from 0 to 6 based on the number of correct answers. Scores from 0 to 1 reflect a high likelihood of limited health literacy; 2 to 3, a possibility of limited health literacy; and 4 to 6, adequate health literacy.33 For analysis, NVS was dichotomized as limited health literacy (scores 0–3) and adequate health literacy (scores 4–6).
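Both dichotomizations reduce to simple cutoff rules; as a minimal sketch (the study's scoring was done in SAS, and these function names are ours):

```python
def realm_r_category(score):
    """Dichotomize a REALM-R score (8 scored words): 0-6 limited, >6 adequate."""
    if not 0 <= score <= 8:
        raise ValueError("REALM-R score must be between 0 and 8")
    return "limited" if score <= 6 else "adequate"

def nvs_category(score):
    """Dichotomize an NVS score (0-6 correct): 0-3 limited, 4-6 adequate."""
    if not 0 <= score <= 6:
        raise ValueError("NVS score must be between 0 and 6")
    return "limited" if score <= 3 else "adequate"
```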

Brief Health Literacy Screen

Participants were administered 3 written SILS items, which were measured on 5-point Likert scales that assess self-reported health literacy skills: “How often do you have problems learning about your medical condition because of difficulty understanding written information?” (1 = always, 2 = often, 3 = sometimes, 4 = rarely, 5 = never); “How confident are you filling out medical forms by yourself?” (1 = not at all, 2 = a little bit, 3 = somewhat, 4 = quite a bit, 5 = extremely confident); and “How often do you have someone help you read hospital materials?” (1 = always, 2 = often, 3 = sometimes, 4 = rarely, 5 = never). In the estimation models, these questions were dichotomized into limited health literacy (responses <4) or adequate health literacy (responses ≥4) as individual predictors and continuously as a BHLS sum score, based on prior studies.17,18,34
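Because all 3 items are oriented so that a higher response indicates better self-reported skill, the item-level dichotomization and the BHLS sum score are straightforward; a sketch (function names are ours):

```python
def sils_limited(response):
    """Flag one SILS item response (coded 1-5) as suggesting limited health literacy (<4)."""
    if response not in (1, 2, 3, 4, 5):
        raise ValueError("SILS responses are coded 1-5")
    return response < 4

def bhls_score(written_info, forms_confidence, help_reading):
    """BHLS sum score (range 3-15): the sum of the 3 SILS item responses."""
    responses = (written_info, forms_confidence, help_reading)
    for r in responses:
        if r not in (1, 2, 3, 4, 5):
            raise ValueError("SILS responses are coded 1-5")
    return sum(responses)
```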

Statistical Analysis

Sample characteristics for the overall combined sample (N = 911) and the estimation (n = 456) and validation (n = 455) samples were examined to ensure no demographic differences between samples. Five estimation models for 2 validated objective health literacy measures (REALM-R, NVS) were compared. We started with a base multivariable logistic regression model consisting of patient demographic information: age (continuous); sex (female, male); race (white, black); and education (less than high school, high school diploma or equivalent degree, more than high school). Categorical variables were modeled using indicators, with male as the reference for sex, white as the reference for race, and high school (the middle category) as the reference level of education. Each SILS item was examined individually by adding it to the base model one at a time; these models were compared with a model that includes the full BHLS sum score. To select a final estimation model we used 3 goodness-of-fit criteria: rescaled R2, Akaike information criterion (AIC), and AUROC. R2 and AUROC values closer to 1 and smaller AIC values indicate better fit. Data were analyzed using SAS software version 9.4 (SAS Institute, Cary, NC); statistical significance was assessed at P &lt; .05.
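To make the model-comparison step concrete, here is an illustrative sketch on synthetic data (the study used SAS 9.4; this pure-NumPy version with our own function names fits a logistic model by Newton-Raphson and compares AIC and AUROC for competing predictor sets):

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Fit logistic regression by Newton-Raphson; return coefficients and log-likelihood."""
    X = np.column_stack([np.ones(len(X)), X])        # prepend an intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])       # observed information (Hessian)
        beta += np.linalg.solve(H, X.T @ (y - p))    # Newton step
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return beta, loglik

def aic(loglik, n_params):
    """Akaike information criterion: smaller values indicate better fit."""
    return 2 * n_params - 2 * loglik

def auroc(y, scores):
    """AUROC: probability a random positive outranks a random negative (ties count half)."""
    pos, neg = scores[y == 1], scores[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Synthetic example: one informative predictor with true log-odds 0.5 + 1.5x.
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x[:, 0])))).astype(float)
beta, ll = fit_logit(x, y)
odds_ratio = np.exp(beta[1])   # per-unit odds ratio for the predictor
```

Comparing aic(ll, 2) for this model against the same fit on an uninformative predictor reproduces the selection logic of the study's tables: the informative model yields the smaller AIC and larger AUROC.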

Based on the best estimation model, we estimated the probability of limited health literacy for each participant in the validation sample. The limited health literacy cutoff was determined by the lowest misclassification rate to establish an ideal trade-off between sensitivity and specificity. We examined the discrimination (ability to distinguish patients with limited health literacy from those with adequate health literacy) of the final estimation model and the cutoff selected by examining concordance (sensitivity, specificity) using a 2 × 2 table, kappa statistic (and 95% confidence interval [CI]), and misclassification rate. The kappa statistic measures interrater agreement; we examined the agreement between the estimation models and validated objective health literacy assessments (REALM-R, NVS) for determining patients with limited health literacy.35,36 We assessed this model as a diagnostic test for limited health literacy by calculating positive and negative likelihood ratios (LR+ and LR−, respectively).
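All of the validation metrics derive from the 2 × 2 table of model classification against the reference standard; as a sketch in Python (function and key names are ours), checked against the REALM-R counts later reported in Table 4:

```python
def diagnostics(tp, fn, fp, tn):
    """Screening metrics from a 2x2 table: tp/fn are reference-limited patients the
    model did/did not flag; fp/tn are reference-adequate patients flagged/not flagged."""
    n = tp + fn + fp + tn
    sens = tp / (tp + fn)                      # sensitivity
    spec = tn / (tn + fp)                      # specificity
    p_obs = (tp + tn) / n                      # observed agreement
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    return {
        "sensitivity": sens,
        "specificity": spec,
        "lr_positive": sens / (1 - spec),      # LR+
        "lr_negative": (1 - sens) / spec,      # LR-
        "misclassification": (fn + fp) / n,
        "kappa": (p_obs - p_exp) / (1 - p_exp),
    }

# REALM-R vs the "difficulty with written information" model (counts from Table 4):
d = diagnostics(tp=132, fn=81, fp=47, tn=195)
```

This reproduces the reported sensitivity (0.62), specificity (0.81), kappa (0.43), and 28.1% misclassification rate; likelihood ratios computed from the unrounded proportions can differ slightly from values computed after rounding.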

Results

The analytic sample consisted of 911 patients; the majority were women (62%), black (66%), and had at least a high school education (83%). Patient age ranged from 18 to 94 years, with an average age of 49 years (standard deviation, 14 years). The majority of patients were assessed as having adequate health literacy based on the REALM-R (54%) but limited health literacy according to the NVS (63%). The majority (72%) reported "rarely" or "never" having difficulty understanding written information. More than half of the patients reported being "extremely" or "quite a bit" confident (62%) when filling out medical forms. A majority (74%) stated that they "rarely" or "never" have someone help them read hospital materials. Half of this sample was randomly assigned to the estimation sample (n = 456) and the other half to the validation sample (n = 455); there were no significant differences in sex, education, age, race, and health literacy as assessed by the REALM-R, NVS, BHLS, or SILS between the estimation and validation samples based on the 2-sample test for proportions (sex, education, race, REALM-R, NVS, SILS) and 2-sample t test (BHLS, age) (see Table 1).

Table 1.

Demographic Characteristics of Participants in the Overall, Estimation, and Validation Samples

Each row lists n and % for the Overall (N = 911), Estimation (n = 456), and Validation (n = 455) samples, followed by the P value.*
Sex
 Male 353 38.7 174 38.2 179 39.3 .83*
 Female 558 62.3 282 61.8 276 60.7 .79*
Education
 <High school 156 17.1 78 17.1 78 17.1 1.00*
 High school 395 43.4 201 44.0 194 42.6 .78*
 >High school 360 39.5 177 38.8 183 40.2 .79*
Race
 White 308 33.8 151 33.1 157 34.5 .80*
 Black 603 66.2 305 66.9 298 65.5 .72*
Difficulty with written information
 Always/often/sometimes 251 27.6 151 20.4 100 22.0 .79*
 Rarely/never 660 72.4 305 79.6 355 78.0 .60*
Confidence in filling out medical forms
 Not at all/a little bit/somewhat 348 38.2 179 39.3 169 37.1 .67*
 Quite a bit/extremely confident 563 61.8 277 60.8 286 62.9 .61*
Help reading hospital material
 Always/often/sometimes 233 25.6 123 27.0 110 24.2 .63*
 Rarely/never 678 74.4 333 73.0 345 75.8 .40*
Rapid Estimation of Adult Literacy in Medicine, Revised
 Limited health literacy 418 45.9 205 45.0 213 46.8 .71*
 Adequate health literacy 493 54.1 251 55.0 242 53.2 .69*
Newest Vital Sign
 Limited health literacy 578 63.4 292 64.0 286 62.9 .79*
 Adequate health literacy 333 36.6 164 36.0 169 37.1 .83*
Brief Health Literacy Screen score, mean (SD) 12.1 (2.8) 12.1 (2.7) 12.1 (2.8) .78†
Age (years), mean (SD) 48.5 (14.0) 48.5 (14.0) 48.4 (14.1) .88†

Data are n (%) unless otherwise indicated.

* Two-sample test for proportions.
† Two-sample t test.

SD, standard deviation.

REALM-R Estimation

Table 2 presents the model results and goodness-of-fit statistics for 5 REALM-R estimation models. All demographic predictors, with the exception of age, were statistically significant in the base model that contained demographic predictors only (age, education, sex, and race); the goodness-of-fit statistics suggested a model with fair estimation ability (R2 = 0.34; AIC = 505; AUROC = 0.79). Addition of the “difficulty with written information” SILS created a model that identified limited health literacy (R2 = 0.38; AIC = 491; AUROC = 0.81) better than the base model. Models containing the 2 other SILS items, or the BHLS, did not identify patients with limited health literacy as well.

Table 2.

Logistic Regression Models Estimating Limited Health Literacy against the Rapid Estimate of Adult Literacy in Medicine, Revised

Each predictor row lists the odds ratio (OR), 95% CI,* and P value for 5 models, left to right: demographics only; demographics plus the "difficulty with written information" SILS item; plus the "confidence filling out medical forms" SILS item; plus the "help reading hospital materials" SILS item; and plus the BHLS sum score.
Age 0.99 0.98 1.01 .34 0.99 0.97 1.01 .21 0.99 0.98 1.01 .21 0.99 0.98 1.01 .32 0.99 0.97 1.01 .19
Sex (reference = male)
 Female 0.45 0.29 0.71 <.01 0.45 0.28 0.72 <.01 0.45 0.28 0.70 <.01 0.47 0.30 0.74 <.01 0.45 0.28 0.71 <.01
Race (reference = white)
 Black 8.25 4.89 13.90 <.01 8.76 5.10 15.06 <.01 8.10 4.80 13.67 <.01 8.25 4.87 14.00 <.01 8.21 4.83 13.96 <.01
Education (reference = high school)
 <High School 2.57 1.36 4.84 <.01 1.97 1.02 3.79 <.01 2.51 1.33 4.74 <.01 2.22 1.17 4.24 .02 2.18 1.14 4.15 .02
 >High School 0.35 0.22 0.56 <.01 0.37 0.23 0.61 <.01 0.37 0.23 0.59 <.01 0.35 0.21 0.56 <.01 0.37 0.23 0.60 <.01
Difficulty with written information (SILS) (reference = rarely/never)
 Always/often/Sometimes 3.14 1.75 5.65 <.01
Confidence filling out medical forms (SILS) (reference = quite a bit/extremely confident)
 Not at all/a little bit/somewhat 1.35 0.87 2.12 .19
Help reading hospital materials (SILS) (reference = rarely/never)
 Always/often/sometimes 1.94 1.18 3.18 .01
BHLS 0.88 0.81 0.96 <.01
Goodness of fit statistics
R2 0.342 0.375 0.346 0.357 0.361
AIC 505.02 491.46 505.27 500.08 498.07
AUROC 0.794 0.812 0.800 0.803 0.809
* For 95% confidence interval (CI) values, the lower limit is on the left and the upper limit is on the right.

AIC, Akaike information criterion; AUROC, area under the receiver operator characteristics curve; BHLS, Brief Health Literacy Screen; OR, odds ratio; SILS, Single Item Literacy Screener.

NVS Estimation

All demographic predictors, except sex, were statistically significant in the base model that contained demographic predictors only; the goodness-of-fit statistics suggested a model with fair estimation ability (R2 = 0.20; AIC = 535; AUROC = 0.73). Addition of the “difficulty with written information” SILS with demographics identified patients with limited health literacy (R2 = 0.23; AIC = 525; AUROC = 0.75) better than the demographics-only model. Models containing the 2 other SILS items did not identify patients with limited health literacy as well. The full BHLS model had slightly better estimation (R2 = 0.24; AIC = 524; AUROC = 0.75) than the 1 SILS item model (Table 3).

Table 3.

Logistic Regression Models Estimating Limited Health Literacy against the Newest Vital Sign

Each predictor row lists the odds ratio (OR), 95% CI,* and P value for 5 models, left to right: demographics only; demographics plus the "difficulty with written information" SILS item; plus the "confidence filling out medical forms" SILS item; plus the "help reading hospital materials" SILS item; and plus the BHLS sum score.
Age 1.02 1.01 1.04 <.01 1.02 1.01 1.04 .01 1.02 1.01 1.04 <.01 1.02 1.01 1.04 <.01 1.02 1.01 1.04 .01
Sex (reference = male)
 Female 0.93 0.60 1.44 .75 0.95 0.61 1.48 .83 0.92 0.60 1.43 .72 0.96 0.62 1.49 .85 0.94 0.60 1.46 .79
Race (reference = white)
 Black 3.51 2.27 5.43 <.01 3.50 2.25 5.45 <.01 3.41 2.20 5.28 <.01 3.41 2.20 5.29 <.01 3.33 2.14 5.18 <.01
Education (reference = high school)
 &lt;High school 2.06 1.03 4.13 .04 1.68 0.82 3.44 .15 1.97 0.98 3.97 .06 1.84 0.97 3.73 .06 1.75 0.86 3.57 .12
 >High school 0.40 0.25 0.62 <.01 0.43 0.27 0.67 <.01 0.42 0.27 0.67 <.01 0.41 0.26 0.64 <.01 0.44 0.28 0.67 <.01
Difficulty with written information (SILS) (reference = rarely/never)
 Always/often/sometimes 2.86 1.50 5.46 <.01
Confidence filling out medical forms (SILS) (reference = quite a bit/extremely confident)
 Not at all/a little bit/somewhat 1.61 1.03 2.52 .04
Help reading hospital materials (SILS) (reference = rarely/never)
 Always/often/sometimes 1.80 1.08 3.00 .02
BHLS 0.85 0.78 0.94 &lt;.01
Goodness of fit statistics
R2 0.203 0.232 0.214 0.216 0.235
AIC 534.75 525.27 532.29 531.46 524.15
AUROC 0.734 0.749 0.738 0.741 0.749
* For 95% confidence interval (CI) values, the lower limit is on the left and the upper limit is on the right.

AIC, Akaike information criterion; AUROC, area under the receiver operator characteristics curve; BHLS, Brief Health Literacy Screen; OR, odds ratio; SILS, Single Item Literacy Screener.

Validation

Using model coefficients and lowest misclassification cutoffs, the validation sample was used to compare the estimation of limited health literacy by the models with the “difficulty with written information” SILS and the BHLS with both objective health literacy assessments (REALM-R, NVS).

Difficulty Understanding Written Information (SILS)

The addition of the "difficulty with written information" SILS item to demographic information (age, sex, race, education) identified limited health literacy on the REALM-R with a sensitivity of 62%, specificity of 81%, a 28% misclassification rate, and a moderate kappa statistic of 0.43 (95% CI, 0.35–0.51).37 The likelihood ratio of a positive test result (LR+) is 3.26, and the likelihood ratio of a negative test (LR−) is 0.47; this model slightly underestimates (39%) limited health literacy in the sample (Table 4). This model showed greater sensitivity (82%) and lower specificity (68%) in estimating the NVS, attenuating the LR+ (2.56) and improving the LR− (0.26). The NVS estimation model also had a lower misclassification rate (24%) and a slight increase in the kappa statistic to 0.49 (95% CI, 0.41–0.57); this model estimates limited health literacy among 63% of the sample.

Table 4.

Comparison of Single-Item Literacy Screener/Brief Health Literacy Screen (SILS/BHLS) Model Identification of Limited Health Literacy With Objective Health Literacy Measures (Rapid Estimate of Adult Literacy in Medicine, Revised, and Newest Vital Sign*)

Models* Limited Health Literacy n (%) Adequate Health Literacy n (%) Kappa 95% CI Misclassified (%) Sensitivity Specificity Positive Likelihood Ratio Negative Likelihood Ratio
Difficulty with written information (SILS) and demographics model *
 REALM-R limited health literacy 132 62.0 81 38.0 0.43 0.35–0.51 28.1 0.62 0.81 3.26 0.47
 REALM-R adequate health literacy 47 19.4 195 80.6
 NVS limited health literacy 233 81.5 53 18.5 0.49 0.41–0.57 23.7 0.82 0.68 2.56 0.26
 NVS adequate health literacy 55 32.5 114 67.5
Brief Health Literacy Screen and demographics model *
 REALM-R limited health literacy 171 80.3 42 19.7 0.42 0.34–0.50 29.5 0.80 0.62 2.11 0.32
 REALM-R adequate health literacy 92 38.0 150 62.0
 NVS limited health literacy 221 77.3 65 22.7 0.48 0.40–0.56 24.8 0.77 0.72 2.75 0.32
 NVS adequate health literacy 48 28.4 121 71.6
* Models control for age, sex, race, and education.

NVS, Newest Vital Sign; REALM-R, Rapid Estimate of Adult Literacy in Medicine, Revised.

Brief Health Literacy Screen

The addition of the BHLS to demographic information estimated limited health literacy on the REALM-R with a sensitivity of 80%, specificity of 62%, an LR+ of 2.11, an LR− of 0.32, a 30% misclassification rate, and a moderate kappa statistic of 0.42 (95% CI, 0.34–0.50).37 This model estimates 58% limited health literacy in the sample, overestimating limited health literacy (Table 4). The BHLS estimation model had slightly lower sensitivity (77%) and higher specificity (72%) for estimating the NVS, improving the LR+ (2.75) and preserving the LR− (0.32). The NVS estimation model also had a lower misclassification rate (25%) and a slight increase in the kappa statistic to 0.48 (95% CI, 0.40–0.56). This model estimates limited health literacy among 59% of the sample, underestimating limited health literacy (Table 4).

Discussion

The utility of SILS items and the BHLS in clinical practice has been demonstrated;18,38,39 we extend this work to examine predictive ability compared with and combined with demographic characteristics that can be easily collected in clinical settings. Age, sex, race, education, and 1 SILS item (difficulty understanding written information) were found to be predictors of limited health literacy; combined, they yielded the best estimation model for limited health literacy measured by the REALM-R and NVS. This model identified patients with limited health literacy better than demographic factors alone. We posit that differences between the results of our analyses and previous studies could be attributed to sample demographics and analysis techniques. Our sample included only English speakers and was predominantly nonwhite (69% black). We used regression analytic approaches and assessed 2 objective measures of health literacy (REALM-R and NVS), as well as multiple predictors of limited health literacy, in both ED and primary care settings. Most previous studies have examined only 1 objective measure of health literacy among patients in only 1 clinical setting (primary care), and did not report likelihood ratios to facilitate the clinical interpretation of these health literacy screening test results.19,38,40 The extension of this work to the ED has important implications because the majority of rural EDs are staffed by family medicine physicians.41–43

BHLS estimation models have slightly higher misclassification rates than the “difficulty understanding written information” SILS estimation models for both REALM-R and NVS, suggesting that the use of 1 SILS item in addition to demographic information can improve the ability to identify limited health literacy in fast-paced clinical settings serving medically underserved populations. Despite being easier to administer in clinical settings, however, SILS subjective measures of health literacy have misclassification rates of >20% when used in addition to known demographic predictors and do not replace objective measures.

Our study has several limitations that should be considered when interpreting the findings. This is a convenience sample of English-speaking ED and primary care patients at a single urban academic medical center, and analysis was limited to black and white respondents because of the small number of patients from other racial/ethnic groups, limiting the generalizability of findings to other populations. As with most health literacy measures, SILS items do not assess oral communication, listening, writing,44 or visual literacy,45 and do not consider age, sex, language, culture, education, health condition, and health care settings.46 While we did see variability in health literacy, most of the sample had at least a high school education, and we excluded those with visual impairments from our study because the health literacy measures are not validated for this population.

While the NVS can be performed in &lt;3 minutes,11,40 it still requires staff to administer the test, and so it is not feasible in many clinical settings. There has been some work to examine the feasibility of a self-administered NVS, but the instrument has yet to be validated.47 In this study we validated our limited health literacy estimation model against 2 validated objective health literacy measures (REALM-R and NVS).

Conclusions

Our findings endorse the utility of 1 SILS question combined with demographics to identify patients with limited health literacy in fast-paced clinical settings, rather than objective assessments that may not be feasible. Future research is needed to refine these models, identify predictors that decrease misclassification rates, and examine the validity of this approach in other populations. It is important to note that, given the high misclassification rates, universal precautions should be considered for use in all patients.48,49

Acknowledgments

Funding: MSG is supported by the Barnes-Jewish Hospital Foundation; Siteman Cancer Center; National Institutes of Health (NIH); National Cancer Institute (grant nos. P30 CA91842, U54CA153460, R01 CA168608, 3U54CA153460-03S1, U54 CA155496); the Patient-Centered Outcomes Research Institute (grant ID 4586); Department of Defense (grant no. W81XWH-14-1-0503); and the Washington University School of Medicine (WUSM) and WUSM Faculty Diversity Scholars Program. RTG is supported by an institutional KM1 Comparative Effectiveness Award (no. KM1CA156708) through the National Cancer Institute (NCI) at the NIH; and the Clinical and Translational Science Award (CTSA) program of the National Center for Research Resources and the National Center for Advancing Translational Sciences at the NIH (grant nos. UL1 RR024992, KL2 RR024994, TL1 RR024995). RTG is also supported through the Emergency Medicine Foundation/Emergency Medicine Patient Safety Foundation Patient Safety Fellowship. CRC is supported by the National Institute on Aging (grant no. 1U13AG048721-01) and by the Washington University Emergency Care Research Core, which receives funding from the Barnes-Jewish Hospital Foundation. KAK is supported by the Washington University School of Medicine; Barnes-Jewish Hospital Foundation; Siteman Cancer Center; the National Cancer Institute (grant nos. R01 CA168608, 3U54CA153460-03S1, and P50CA95815); the National Institute of Diabetes and Digestive and Kidney Diseases (grant P30 DK092950); the Agency for Healthcare Research and Quality (grant no. R21 HS020309); and the Centers for Disease Control and Prevention (grant no. U58 DP0003435).

The authors acknowledge the assistance of our research and screening staff: Lucy D'Agostino McGowan, William D. MacMillan, Renee Gennarelli, Meng-Ru Cheng, Sarah Lyons, Nhi Nguyen, Ralph O'Neil, Emma Dwyer, Ian Ferguson, Mallory Jorif, Matthew Kemperwas, Jasmine Lewis, Darain Mitchell, Margaret Lin, Andrew Melson, and John Schneider.

Footnotes

Conflict of interest: none declared.

Publisher's Disclaimer: The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the supporting societies and foundations or the funding agencies.

References

1. Nielsen-Bohlman L, Panzer AM, Kindig DA, editors; Institute of Medicine, Committee on Health Literacy. Health literacy: a prescription to end confusion. Washington, DC: National Academies Press; 2004.
2. Berkman N, Pignone MP, DeWalt D, Sheridan S. Health literacy: impact on health outcomes. Rockville, MD: Agency for Healthcare Research and Quality; 2004.
3. Williams MV, Baker DW, Parker RM, Nurss JR. Relationship of functional health literacy to patients' knowledge of their chronic disease: a study of patients with hypertension and diabetes. Arch Intern Med. 1998;158:166–72. doi: 10.1001/archinte.158.2.166.
4. Baker DW, Parker RM, Williams MV, et al. The health care experiences of patients with low literacy. Arch Fam Med. 1996;5:329–34. doi: 10.1001/archfami.5.6.329.
5. Baker DW, Parker RM, Williams MV, Clark WS. Health literacy and the risk of hospital admission. J Gen Intern Med. 1998;13:791–8. doi: 10.1046/j.1525-1497.1998.00242.x.
6. Baker DW, Gazmararian JA, Williams MV, et al. Functional health literacy and the risk of hospital admission among Medicare managed care enrollees. Am J Public Health. 2002;92:1278–83. doi: 10.2105/ajph.92.8.1278.
7. Gazmararian JA, Williams MV, Peel J, Baker DW. Health literacy and knowledge of chronic disease. Patient Educ Couns. 2003;51:267–75. doi: 10.1016/s0738-3991(02)00239-2.
8. Baker DW. The meaning and the measure of health literacy. J Gen Intern Med. 2006;21:878–83. doi: 10.1111/j.1525-1497.2006.00540.x.
9. Mancuso JM. Assessment and measurement of health literacy: an integrative review of the literature. Nurs Health Sci. 2009;11:77–89. doi: 10.1111/j.1442-2018.2008.00408.x.
10. Brez SM, Taylor M. Assessing literacy for patient teaching: perspectives of adults with low literacy skills. J Adv Nurs. 1997;25:1040–7. doi: 10.1046/j.1365-2648.1997.19970251040.x.
11. Carpenter CR, Kaphingst KA, Goodman MS, Lin MJ, Melson AT, Griffey RT. Feasibility and diagnostic accuracy of brief health literacy and numeracy screening instruments in an urban emergency department. Acad Emerg Med. 2014;21:137–46. doi: 10.1111/acem.12315.
12. Brownson RC, Jacobs JA, Tabak RG, Hoehner CM, Stamatakis KA. Designing for dissemination among public health researchers: findings from a national survey in the United States. Am J Public Health. 2013;103:1693–9. doi: 10.2105/AJPH.2012.301165.
13. Neta G, Glasgow RE, Carpenter CR, et al. A framework for enhancing the value of research for dissemination and implementation. Am J Public Health. 2015;105:49–57. doi: 10.2105/AJPH.2014.302206.
14. Onoda K, Yamaguchi S. Revision of the Cognitive Assessment for Dementia, iPad Version (CADi2). PLoS One. 2014;9:e109931. doi: 10.1371/journal.pone.0109931.
15. Tahir HJ, Murray IJ, Parry NRA, Aslam TM. Optimisation and assessment of three modern touch screen tablet computers for clinical vision testing. PLoS One. 2014;9:e95074. doi: 10.1371/journal.pone.0095074.
16. Kelly SM, Gryczynski J, Mitchell SG, Kirk A, O'Grady KE, Schwartz RP. Validity of brief screening instrument for adolescent tobacco, alcohol, and drug use. Pediatrics. 2014;133:819–26. doi: 10.1542/peds.2013-2346.
17. Chew LD, Griffin JM, Partin MR, et al. Validation of screening questions for limited health literacy in a large VA outpatient population. J Gen Intern Med. 2008;23:561–6. doi: 10.1007/s11606-008-0520-5.
18. Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med. 2004;36:588–94.
19. Stagliano V, Wallace LS. Brief health literacy screening items predict newest vital sign scores. J Am Board Fam Med. 2013;26:558–65. doi: 10.3122/jabfm.2013.05.130096.
20. Wallace LS, Rogers ES, Roskos SE, Holiday DB, Weiss BD. Brief report: screening items to identify patients with limited health literacy skills. J Gen Intern Med. 2006;21:874–7. doi: 10.1111/j.1525-1497.2006.00532.x.
21. Schillinger D, Grumbach K, Piette J, et al. Association of health literacy with diabetes outcomes. JAMA. 2002;288:475–82. doi: 10.1001/jama.288.4.475.
22. Schillinger D, Barton LR, Karter AJ, Wang F, Adler N. Does literacy mediate the relationship between education and health outcomes? A study of a low-income population with diabetes. Public Health Rep. 2006;121:245–54. doi: 10.1177/003335490612100305.
23. Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Crotty K. Low health literacy and health outcomes: an updated systematic review. Ann Intern Med. 2011;155:97–107. doi: 10.7326/0003-4819-155-2-201107190-00005.
24. Wallston KA, Cawthon C, McNaughton CD, Rothman RL, Osborn CY, Kripalani S. Psychometric properties of the brief health literacy screen in clinical practice. J Gen Intern Med. 2014;29:119–26. doi: 10.1007/s11606-013-2568-0.
25. Bauer AM, Schillinger D, Parker MM, et al. Health literacy and antidepressant medication adherence among adults with diabetes: the Diabetes Study of Northern California (DISTANCE). J Gen Intern Med. 2013;28:1181–7. doi: 10.1007/s11606-013-2402-8.
26. Hawley ST, Janz NK, Lillie SE, et al. Perceptions of care coordination in a population-based sample of diverse breast cancer patients. Patient Educ Couns. 2010;81(Suppl):S34–40. doi: 10.1016/j.pec.2010.08.009.
27. Lyles CR, Karter AJ, Young BA, et al. Correlates of patient-reported racial/ethnic health care discrimination in the Diabetes Study of Northern California (DISTANCE). J Health Care Poor Underserved. 2011;22:211–25. doi: 10.1353/hpu.2011.0033.
28. Wolf MS, Gazmararian JA, Baker DW. Health literacy and functional health status among older adults. Arch Intern Med. 2005;165:1946–52. doi: 10.1001/archinte.165.17.1946.
29. Paasche-Orlow MK, Parker RM, Gazmararian JA, Nielsen-Bohlman LT, Rudd RR. The prevalence of limited health literacy. J Gen Intern Med. 2005;20:175–84. doi: 10.1111/j.1525-1497.2005.40245.x.
30. Kaphingst KA, Goodman MS, Macmillan WD, Carpenter CR, Griffey RT. Effect of cognitive dysfunction on the relationship between age and health literacy. Patient Educ Couns. 2014;95:218–25. doi: 10.1016/j.pec.2014.02.005.
31. Griffey RT, Melson AT, Lin MJ, Carpenter CR, Goodman MS, Kaphingst KA. Does numeracy correlate with measures of health literacy in the emergency department? Acad Emerg Med. 2014;21:147–53. doi: 10.1111/acem.12310.
32. Bass PF, Wilson JF, Griffith CH. A shortened instrument for literacy screening. J Gen Intern Med. 2003;18:1036–8. doi: 10.1111/j.1525-1497.2003.10651.x.
33. Weiss BD, Mays MZ, Martz W, et al. Quick assessment of literacy in primary care: the newest vital sign. Ann Fam Med. 2005;3:514–22. doi: 10.1370/afm.405.
34. McNaughton C, Wallston KA, Rothman RL, Marcovitz DE, Storrow AB. Short, subjective measures of numeracy and general health literacy in an adult emergency department. Acad Emerg Med. 2011;18:1148–55. doi: 10.1111/j.1553-2712.2011.01210.x.
35. Bossuyt PM, Reitsma JB, Bruns DE, et al. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Ann Intern Med. 2003;138:W1–12. doi: 10.7326/0003-4819-138-1-200301070-00012-w1.
36. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37:360–3.
37. Tremblay M. Beyond mythology. CMAJ. 2005;173:15. doi: 10.1503/cmaj.1050033.
38. Schwartz KL, Bartoces M, Campbell-Voytal K, et al. Estimating health literacy in family medicine clinics in metropolitan Detroit: a MetroNet study. J Am Board Fam Med. 2013;26:566–70. doi: 10.3122/jabfm.2013.05.130052.
39. Morris NS, MacLean CD, Chew LD, Littenberg B. The Single Item Literacy Screener: evaluation of a brief instrument to identify limited reading ability. BMC Fam Pract. 2006;7:21. doi: 10.1186/1471-2296-7-21.
40. Shah LC, West P, Bremmeyr K, Savoy-Moore RT. Health literacy instrument in family medicine: the "newest vital sign" ease of use and correlates. J Am Board Fam Med. 2010;23:195–203. doi: 10.3122/jabfm.2010.02.070278.
41. Groth H, House H, Overton R, DeRoo E. Board-certified emergency physicians comprise a minority of the emergency department workforce in Iowa. West J Emerg Med. 2013;14:186–90. doi: 10.5811/westjem.2012.8.12783.
42. Peterson LE, Dodoo M, Bennett KJ, Bazemore A, Phillips RL. Coverage in rural emergency departments. J Rural Health. 2008;24:183–8. doi: 10.1111/j.1748-0361.2008.00156.x.
43. McGirr J, Williams JM, Prescott JE. Physicians in rural West Virginia emergency departments: residency training and board certification status. Acad Emerg Med. 1998;5:333–6. doi: 10.1111/j.1553-2712.1998.tb02715.x.
44. Priorities for action: outcomes from the National Symposium on Health Literacy. Ottawa, Ontario: Canadian Public Health Association; 2008.
45. Entwistle V, Williams B. Health literacy: the need to consider images as well as words. Health Expect. 2008;11:99–101. doi: 10.1111/j.1369-7625.2008.00509.x.
46. Rootman I, Ronson B. Literacy and health research in Canada: where have we been and where should we go? Can J Public Health. 2005;96(Suppl 2):S62–77. doi: 10.1007/BF03403703.
47. Welch VL, VanGeest JB, Caskey R. Time, costs, and clinical utilization of screening for health literacy: a case study using the Newest Vital Sign (NVS) instrument. J Am Board Fam Med. 2011;24:281–9. doi: 10.3122/jabfm.2011.03.100212.
48. Brown DR, Ludwig R, Buck GA, Durham D, Shumard T, Graham SS. Health literacy: universal precautions needed. J Allied Health. 2004;33:150–5.
49. DeWalt DA, Broucksou KA, Hawk V, et al. Developing and testing the health literacy universal precautions toolkit. Nurs Outlook. 2011;59:85–94. doi: 10.1016/j.outlook.2010.12.002.