ABSTRACT
BACKGROUND
The three-item Brief Health Literacy Screen (BHLS) has been validated when administered by research personnel, but not when administered by clinical personnel during routine practice.
OBJECTIVE
As part of the Health Literacy Screening (HEALS) study, we evaluated psychometric properties of the BHLS to validate its administration by clinical nurses in both clinic and hospital settings.
PARTICIPANTS
Beginning in October 2010, nurses at an academic medical center's hospital and affiliated clinics have administered the BHLS during patient intake and recorded responses in the electronic health record.
MEASURES
Trained research assistants (RAs) administered the short Test of Functional Health Literacy in Adults (S-TOFHLA) and re-administered the BHLS to convenience samples of hospital and clinic patients. Analyses included tests of internal consistency reliability, inter-administrator reliability, and concurrent validity by comparing the nurse-administered versus RA-administered BHLS scores (BHLS-RN and BHLS-RA, respectively) to the S-TOFHLA.
KEY RESULTS
Cronbach’s alpha for the BHLS-RN was 0.80 among hospital patients (N = 498) and 0.76 among clinic patients (N = 295), indicating high internal consistency reliability. The intraclass correlation between the BHLS-RN and BHLS-RA was 0.77 (95 % CI 0.71–0.82) among clinic patients and 0.49 (95 % CI 0.40–0.58) among hospital patients. BHLS-RN scores correlated significantly with BHLS-RA scores (r = 0.33 among hospital patients; r = 0.62 among clinic patients) and with S-TOFHLA scores (r = 0.35 among both hospital and clinic patients), providing evidence of inter-administrator reliability and concurrent validity. In regression models, BHLS-RN scores were significant predictors of S-TOFHLA scores after adjustment for age, education, gender, and race. The area under the receiver operating characteristic curve for the BHLS-RN to predict adequate health literacy on the S-TOFHLA was 0.71 in the hospital and 0.76 in the clinic.
CONCLUSIONS
The BHLS, administered by nurses during routine clinical care, demonstrates adequate reliability and validity to be used as a health literacy measure.
KEY WORDS: health literacy, hospital medicine, primary care, nurses
INTRODUCTION
Over one-third of adults in the U.S. have limited health literacy.1,2 Limited health literacy is associated with difficulty understanding one’s health condition, less adherence to self-care behaviors, poorer health status, increased mortality, and higher health care costs.3–5 Nevertheless, most healthcare providers are unaware of their patients’ health literacy,6–8 and most health systems have difficulty accommodating the needs of low health literacy patients.9,10
In 2004, the Institute of Medicine report on health literacy stated that “[h]ealth literacy assessment should be part of healthcare information systems and quality data collection…[and] the Joint Commission on Accreditation of Healthcare Organizations should clearly incorporate health literacy into their accreditation standards.”1 Although the clinical screening of literacy has been debated,11 measuring the health literacy skills of populations seeking care, and even those of individual patients, may enable health care facilities to improve care delivery.12 Moreover, the availability of valid health literacy data on large populations of patients would serve as a valuable research tool.1
Although multiple instruments have been developed to measure health literacy, many are resource intensive, and most require between 3 and 12 min for administration,13–16 making them impractical in busy clinical environments. A more concise measure, the Brief Health Literacy Screen (BHLS), takes approximately 1 min to complete and has been validated when administered by trained research assistants (RAs), using the short Test of Functional Health Literacy in Adults (S-TOFHLA) and Rapid Estimate of Adult Literacy in Medicine (REALM) as reference standards.17–21 However, the BHLS has not been validated when administered by clinical personnel during the course of routine clinical care.
In this manuscript, we evaluate the validity of nurse administration of the BHLS in real-world clinical settings. As part of the Health Literacy Screening (HEALS) study, a large cohort study evaluating the relationships among health literacy, patient demographic characteristics, and clinical outcomes, the BHLS was integrated into the nursing intake assessments performed for adults at a large university hospital and in three affiliated primary care clinics. The aim of this study was to evaluate the psychometric properties of the BHLS when administered by clinical staff in inpatient and outpatient settings.
METHODS
Setting and Population
The HEALS study was conducted at Vanderbilt University Medical Center in Nashville, Tennessee. The university hospital receives more than 40,000 admissions annually and has approximately 5,000 nurses. The three selected primary care clinics have approximately 50,000 visits annually and 60 nursing staff. The medical center has a robust electronic health record (EHR) that is shared across the hospital and clinics and which allows real-time data entry during nursing intake assessments. In both the hospital and clinic settings, nurses record patients’ demographic and social information during the intake process as part of their routine clinical duties.
Procedures
In October 2010, the BHLS was integrated into the Adult Nursing Admission History, the nurse intake EHR form for the university hospital. Nurses generally complete this form within 8 hours of a patient’s hospitalization. The BHLS was integrated into the clinic intake form of the EHR at two primary care sites in November 2010 and a third in May 2011. This form is completed by nursing staff at the beginning of each outpatient visit. The full details of EHR integration and clinical implementation are summarized elsewhere.22
Adult patients who were administered the BHLS during nurse intake in either the hospital or clinics between November 2010 and April 2012 were eligible to be included in the present analysis. Of these patients, an RA approached and consented a convenience sample of adults who spoke English and were available and willing to complete the Short Test of Functional Health Literacy in Adults (S-TOFHLA) and repeat the BHLS (BHLS-RA) for comparison with the nurse-administered BHLS (BHLS-RN). Hospitalized patients were approached within the first 3 days of hospitalization; clinic patients were approached in their exam room immediately following their clinic visit. Exclusion criteria consisted of incomplete basic demographic information recorded in the EHR, acute illness preventing participation in the interview, impaired cognition, vision or hearing, and previous participation in the study. Because a separate analysis was planned on the association of health literacy with blood pressure control, hospitalized patients were excluded from RA testing if they did not have a diagnosis of hypertension, had been transferred to Vanderbilt from an outside facility, or were not planning to be followed at a Vanderbilt outpatient clinic. Among enrolled patients, RAs also collected patient-reported race and years of education. Age and gender were collected from the EHR. Study data were collected and managed using REDCap (Research Electronic Data Capture),23 a secure, web-based application. Participants provided written informed consent. The Vanderbilt institutional review board (IRB) approved all study procedures.
Health Literacy Measures
The BHLS has been validated for use in outpatient and emergency department settings.17–21,24 It consists of three items on a 5-point response scale, read aloud to participants (Table 1). After reverse-scoring the item addressing confidence with forms, responses to the three items are summed; scores range between 3 and 15, with higher scores indicating higher subjective health literacy.
Table 1.
Items and Response Options for the Brief Health Literacy Screen
| Item | Response options |
|---|---|
| (1) How confident are you filling out medical forms by yourself? | Extremely, Quite a bit, Somewhat, A little bit, Not at all |
| (2) How often do you have someone help you read hospital materials? | All of the time, Most of the time, Some of the time, A little of the time, None of the time |
| (3) How often do you have problems learning about your medical condition because of difficulty understanding written information? | All of the time, Most of the time, Some of the time, A little of the time, None of the time |
The S-TOFHLA consists of two prose passages and has a time limit of 7 min. It categorizes health literacy skills as inadequate (0–16), marginal (17–22), or adequate (23–36).15,16
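To make the scoring rules concrete, the following minimal sketch (in Python; not part of the study's procedures) implements the BHLS total and the S-TOFHLA categories described above. It assumes each BHLS item is coded 1 to 5 in the order the response options appear in Table 1, so that only the confidence item is reversed before summing; the study's actual EHR coding was not specified and may differ.

```python
# Illustrative scoring sketch, not the study's production EHR logic.
# Assumed coding: each BHLS item is recorded 1-5 in the order listed in
# Table 1, so "Extremely" / "All of the time" = 1 and
# "Not at all" / "None of the time" = 5.

def score_bhls(confidence: int, help_reading: int, problems_understanding: int) -> int:
    """Sum the three BHLS items after reverse-scoring the confidence item.

    Under the assumed coding, the two frequency items already point toward
    higher literacy (5 = "None of the time"), while the confidence item
    points the other way, so it is reversed (6 - response). Totals range
    from 3 to 15; higher scores indicate higher subjective health literacy.
    """
    for response in (confidence, help_reading, problems_understanding):
        if response not in range(1, 6):
            raise ValueError("Each BHLS response must be an integer from 1 to 5")
    return (6 - confidence) + help_reading + problems_understanding


def categorize_stofhla(score: int) -> str:
    """Map an S-TOFHLA raw score (0-36) to the categories used in the paper."""
    if not 0 <= score <= 36:
        raise ValueError("S-TOFHLA scores range from 0 to 36")
    if score <= 16:
        return "inadequate"
    if score <= 22:
        return "marginal"
    return "adequate"


# Example: extremely confident with forms (coded 1, reversed to 5), needs help
# a little of the time (4), never has problems understanding (5) -> total 14.
print(score_bhls(confidence=1, help_reading=4, problems_understanding=5))  # 14
print(categorize_stofhla(25))  # "adequate"
```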
Statistical Analysis
Data were analyzed separately according to care setting (hospital or clinics) and according to who administered the survey (nursing staff [BHLS-RN] or research assistant [BHLS-RA]).
Using SPSS version 20, descriptive statistics, including means and standard deviations, medians and interquartile ranges, and skewness, were computed for the BHLS-RA, BHLS-RN, and S-TOFHLA. The S-TOFHLA was used as a reference standard because it has been well validated, though no gold standard for health literacy exists. Cronbach’s alpha was used to assess the internal consistency reliability of the BHLS-RN and BHLS-RA. Intraclass correlations (ICCs) were used to assess inter-administrator reliability (nurses vs. RAs), and Pearson product–moment correlations were computed to quantify the associations among the BHLS-RN, BHLS-RA, and the reference standard S-TOFHLA scores. The bootstrap procedure within the SPSS correlation program was used to generate 95 % confidence intervals (CIs) for the correlation coefficients. Mann–Whitney U tests were used to compare health literacy scores of hospitalized patients to those of clinic patients. Within each patient group (hospital or clinics), Wilcoxon signed-rank tests were used to examine differences between BHLS-RN and BHLS-RA scores.
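For readers who wish to reproduce these reliability and correlation analyses outside SPSS, the sketch below computes Cronbach’s alpha, a bootstrapped 95 % CI for a Pearson correlation, and a Wilcoxon signed-rank comparison of the two BHLS administrations. The input file and column names are hypothetical; this illustrates the general approach, not the study’s actual code.

```python
# Illustrative re-analysis sketch in Python (the study used SPSS).
import numpy as np
import pandas as pd
from scipy import stats


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


def bootstrap_pearson_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for Pearson's r, mirroring SPSS's bootstrap option."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    rs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        rs.append(stats.pearsonr(x[idx], y[idx])[0])
    return np.percentile(rs, [100 * alpha / 2, 100 * (1 - alpha / 2)])


# One row per patient: the three nurse-administered BHLS items, the BHLS-RN
# and BHLS-RA totals, and the S-TOFHLA score (hypothetical columns and file).
df = pd.read_csv("heals_clinic_sample.csv")

alpha_rn = cronbach_alpha(df[["bhls1_rn", "bhls2_rn", "bhls3_rn"]])
r, p = stats.pearsonr(df["bhls_rn_total"], df["stofhla"])
ci_low, ci_high = bootstrap_pearson_ci(df["bhls_rn_total"], df["stofhla"])
# SciPy reports the signed-rank statistic rather than the Z approximation SPSS prints.
w_stat, p_wilcoxon = stats.wilcoxon(df["bhls_rn_total"], df["bhls_ra_total"])

print(f"Cronbach's alpha (BHLS-RN): {alpha_rn:.2f}")
print(f"r(BHLS-RN, S-TOFHLA) = {r:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p:.3g}")
print(f"Wilcoxon signed-rank statistic = {w_stat:.1f}, p = {p_wilcoxon:.3g}")
```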
As a more stringent test of concurrent validity, two-step multiple linear regressions (also referred to as hierarchical linear regressions in the behavioral sciences) were used to regress the S-TOFHLA scores on a priori selected covariates (age, gender, race, education) in Step 1, followed by BHLS-RN or BHLS-RA scores in Step 2. The Chow test was used to examine differences in R2 among the regression models.25 Finally, receiver operating characteristic (ROC) analyses were used to compute the area under the ROC curve (AUROC) for the BHLS-RN and BHLS-RA against the reference category of “adequate health literacy,” defined as an S-TOFHLA score ≥ 23. ROC curves illustrate the sensitivity and specificity of a predictive tool across the range of possible cutoff values. The AUROC summarizes overall predictive strength and may range from 0.5 (chance) to 1.0 (perfect performance). Values ≥ 0.7 are considered acceptable, and ≥ 0.8 excellent.26
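The two-step regression and ROC analysis can be outlined in the same illustrative fashion. The sketch below (again with hypothetical column names, and using statsmodels and scikit-learn rather than SPSS) fits the Step 1 and Step 2 models, reports the change in R2, and computes the AUROC for discriminating adequate health literacy (S-TOFHLA ≥ 23); it does not implement the Chow test.

```python
# Sketch of the two-step (hierarchical) regression and ROC analysis.
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

df = pd.read_csv("heals_hospital_sample.csv")  # hypothetical file

covariates = ["age", "female", "white", "education_years"]  # hypothetical 0/1 dummies for female, white

# Step 1: regress S-TOFHLA on demographics only.
X1 = sm.add_constant(df[covariates])
step1 = sm.OLS(df["stofhla"], X1).fit()

# Step 2: add the nurse-administered BHLS total and note the change in R^2.
X2 = sm.add_constant(df[covariates + ["bhls_rn_total"]])
step2 = sm.OLS(df["stofhla"], X2).fit()
delta_r2 = step2.rsquared - step1.rsquared
print(f"R2 step 1 = {step1.rsquared:.2f}, step 2 = {step2.rsquared:.2f}, delta = {delta_r2:.2f}")

# AUROC: how well the continuous BHLS-RN score discriminates "adequate"
# health literacy (S-TOFHLA >= 23) from marginal/inadequate.
adequate = (df["stofhla"] >= 23).astype(int)
auroc = roc_auc_score(adequate, df["bhls_rn_total"])
print(f"AUROC (BHLS-RN vs. adequate S-TOFHLA) = {auroc:.2f}")
```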
RESULTS
Descriptive Statistics and Internal Consistency Reliability
Between March 2011 and February 2012, research assistants enrolled 500 hospital and 300 clinic patients and administered the BHLS and S-TOFHLA (Fig. 1). The enrollment rate among eligible patients who were offered participation was 87.7 % in the hospital and 98.4 % in the clinics. Due to missing data, two hospital patients and five clinic patients were excluded from analyses. Patient characteristics are shown in Table 2. Compared to patients in the clinic sample, patients in the hospital sample were, on average, older, less well educated, more likely to be male, more likely to be Black, and less likely to have missing data on race.
Figure 1.
Selection of study samples. Abbreviations: N number; BHLS-RN nurse-administered Brief Health Literacy Screen; BHLS-RA research assistant-administered Brief Health Literacy Screen; S-TOFHLA Short test of functional health literacy in adults.
Table 2.
Demographic Characteristics of Study Subjects, by Sample
| | Hospital sample (N = 498) | Clinic sample (N = 295) |
|---|---|---|
| Age, mean (SD), years | 56.1 (14.1) | 53.5 (16.1) |
| Education, mean (SD), years | 14.0 (3.0) | 15.6 (3.2) |
| Female gender, N (%) | 259 (52.0) | 197 (66.8) |
| Race, N (%) | ||
| White | 329 (66.1) | 202 (68.5) |
| Black | 164 (32.9) | 56 (19.0) |
| Other race or missing* | 5 (1.0) | 37 (12.5) |
*Response options for patient-reported race included: White, Black or African American, Asian, Native Hawaiian or Other Pacific Islander, American Indian or Alaska Native, Other, Don’t know/Not sure, Refused
Table 3 presents descriptive statistics for the health literacy measures, as well as Cronbach’s alphas for the BHLS-RN and BHLS-RA. In both care settings, all measures of health literacy were significantly negatively skewed. Scores on each of the health literacy measures were higher in the clinic sample than in the hospital sample (p < 0.001 for all). In both settings, BHLS scores were higher when assessed by nurses than by RAs. In the hospital sample, the BHLS-RN scores averaged 0.92 points higher than the BHLS-RA scores (95 % CI 0.63–1.20; Wilcoxon Z = 5.88; p < 0.0001). In the clinic sample, the mean difference in BHLS scores between nurses and RAs was 0.19 points (95 % CI 0.00–0.38; Wilcoxon Z = 2.03; p = 0.043).
Table 3.
Descriptive Statistics for Multiple Measures of Health Literacy, by Sample
| | Hospital sample | Clinic sample |
|---|---|---|
| BHLS – RN; Mean (SD) | 13.1 (2.7) | 13.9 (1.8) |
| Median (IQR) | 15 (3) | 15 (2) |
| Cronbach’s alpha | 0.80 | 0.76 |
| N | 498 | 295 |
| BHLS – RA; Mean (SD) | 12.1 (3.0) | 13.7 (1.9) |
| Median (IQR) | 13 (5) | 14 (2) |
| Cronbach’s alpha | 0.79 | 0.71 |
| N | 498 | 295 |
| S-TOFHLA; Mean (SD) | 29.2 (8.1) | 33.0 (5.6) |
| Median (IQR) | 33 (11) | 35 (2) |
| N | 486 | 295 |
BHLS-RN nurse-administered brief health literacy screen; IQR interquartile range; BHLS-RA research assistant-administered brief health literacy screen; S-TOFHLA short test of functional health literacy in adults
Inter-administrator Reliability
The intra-class correlation between BHLS-RN and BHLS-RA scores for the clinic sample (ICC = 0.77; 95 % CI 0.71–0.82) was higher than that for the hospital sample (ICC = 0.49; 95 % CI 0.40–0.58).
Concurrent Validity
In both the hospital and clinic settings, BHLS scores obtained by the nurses and research assistants correlated significantly with the reference standard S-TOFHLA (p < 0.001 for all; Table 4). The BHLS-RA had a higher correlation with the S-TOFHLA than did the BHLS-RN, but within settings, the CIs overlapped one another.
Table 4.
Pearson Product–Moment Correlations (and 95 % CIs) Between the S-TOFHLA and Each of the BHLS Scores, by Sample
| | Hospital sample | Clinic sample |
|---|---|---|
| S-TOFHLA with BHLS-RN | 0.35 (0.27–0.43) | 0.35 (0.19–0.50) |
| S-TOFHLA with BHLS-RA | 0.48 (0.40–0.55) | 0.42 (0.29–0.55) |
For all values, p < 0.001
BHLS-RN nurse-administered brief health literacy screen; BHLS-RA research assistant-administered brief health literacy screen; S-TOFHLA short test of functional health literacy in adults
A more stringent test of validity is whether the relationship between the BHLS and S-TOFHLA still holds when controlling for demographic variables such as age, gender, race, and education, which have been associated with other measures of health literacy in the literature. Table 5 presents the results of four two-step multiple regression models in which the criterion variable—the S-TOFHLA score—was first regressed on age, gender, race, and education (in Step 1), and then (in Step 2) either the BHLS-RN or BHLS-RA was allowed to enter the equation. There was a statistically significant change in R2 for Step 2 in all four models, ranging from 3 % to 9 %. The changes in R2 for Step 2 appeared to be higher in the hospital sample than in the clinic sample, and when the BHLS was RA-administered as opposed to nurse-administered, but follow-up analyses using the Chow test showed that these differences across sites and administrators were not significant.
Table 5.
Two-Step Multiple Regression Analyses Predicting S-TOFHLA Scores by Demographic Covariates and BHLS Scores, by Sample
| | Hospital sample | | Clinic sample | |
|---|---|---|---|---|
| | BHLS-RN | BHLS-RA | BHLS-RN | BHLS-RA |
| ∆ R2 for Step 1 | 0.26 | 0.26 | 0.34 | 0.34 |
| ∆ R2 for Step 2 | 0.04 | 0.09 | 0.03 | 0.06 |
| Total R2 | 0.30 | 0.35 | 0.37 | 0.40 |
| Standardized regression coefficients (β) (at Step 2) | | | | |
| Age (years) | −0.33 | −0.32 | −0.38 | −0.38 |
| Education (years) | 0.28 | 0.18 | 0.20 | 0.17 |
| Gender (Female) | 0.14 | 0.08† | 0.12† | 0.11† |
| Race (White) | 0.23 | 0.22 | 0.24 | 0.23 |
| BHLS | 0.20 | 0.35 | 0.19 | 0.27 |
For all values, p < 0.001 except where otherwise noted; † p < 0.05
Step 1: Demographics entered as a block; Step 2: BHLS entered, as well as demographics
S-TOFHLA short test of functional health literacy in adults; BHLS brief health literacy screen; BHLS-RN nurse-administered brief health literacy screen; BHLS-RA research assistant-administered brief health literacy screen; ∆ R2, change in R2
Table 5 also shows that, after the BHLS entered the equation in Step 2, all four covariates contributed unique variance to the S-TOFHLA scores, with (younger) age making the greatest independent contribution among the covariates. It is also noteworthy that, at the end of Step 1, the combination of the four demographic variables explained between 24.6 % and 33.6 % of the variance in S-TOFHLA scores, with the proportion somewhat higher in the clinic sample than in the hospital sample.
Areas under the Receiver Operating Characteristic Curve
Figure 2 shows the ROC curves for the BHLS vs. being classified as having “adequate health literacy” on the S-TOFHLA, by sample (hospital vs. clinic) and by administrator (nurse vs. RA). The corresponding AUROC values and 95 % CIs were: (a) hospital, BHLS-RN = 0.71 (0.65–0.77); (b) clinic, BHLS-RN = 0.76 (0.64–0.87); (c) hospital, BHLS-RA = 0.75 (0.70–0.80); and (d) clinic, BHLS-RA = 0.84 (0.76–0.92).
Figure 2.
Area under the Receiver-operating characteristic curve for the Brief Health Literacy Screen vs. S-TOFHLA. BHLS administered (a) to hospitalized patients by nurses, (b) to clinic patients by nurses, (c) to hospitalized patients by research assistants, and (d) to clinic patients by research assistants. Abbreviations: BHLS-RN nurse-administered Brief Health Literacy Screen; BHLS-RA research assistant-administered Brief Health Literacy Screen; S-TOFHLA Short test of functional health literacy in adults.
DISCUSSION
We studied the reliability and validity of a brief health literacy measure that was incorporated into clinical practice in both the hospital and clinic settings. To our knowledge, this is the first study to evaluate the administration of a health literacy test by nursing staff during clinical care. Overall, our results show that the nurse-administered BHLS has adequate internal consistency reliability and evidence of concurrent validity, demonstrated by (1) its significant correlation with S-TOFHLA scores, (2) high AUROC for indicating adequate vs. marginal or inadequate health literacy, and (3) independent contribution to predicting S-TOFHLA scores after controlling for demographic characteristics. This was also true of the RA-administered BHLS, which performed slightly, but not significantly, better than the nurse-administered BHLS. These results support the reliability and validity of a brief assessment of health literacy administered by nursing staff during routine clinical care in both inpatient and outpatient settings.
The average BHLS scores of both the hospital and clinic samples were high, especially when the measure was administered by nurses, for which the median BHLS value was 15, the highest possible score. This is not surprising given the educational attainment of the patients we studied; for example, patients in the three primary care clinics had completed an average of 2.6 years of education beyond high school. Also, having more years of education was consistently associated with higher health literacy, as were younger age and White race. In fact, gender was the only demographic covariate we studied that was not strongly associated with health literacy, although in adjusted analyses, women had higher health literacy scores than men.
The BHLS-RN and BHLS-RA correlated more highly in the clinic setting, compared to the hospital. One possible reason is that the two measures were administered to patients during the same clinic visit, sometimes as little as 20 min apart, while there was a much longer time interval (e.g., 24–72 hours) between the two BHLS administrations for hospitalized patients. The closer in time two measures are administered, the higher we might expect the ICC to be.27 This might also help explain why the BHLS-RA and S-TOFHLA correlations were somewhat higher than the BHLS-RN and S-TOFHLA correlations; the RAs administered both health literacy measures during the same interview, while that was not the case for the BHLS-RN and the S-TOFHLA.
The AUROCs for the BHLS-RN in the hospital and clinic were similar to, though slightly lower than, the AUROCs for the BHLS-RA in those settings. Comparable AUROC values have been previously reported for the BHLS in an Emergency Department setting (0.76)24 and in a surgical clinic (0.86).19 The slightly higher AUROCs observed in the clinic and with RA-administration also could be due in part to the timing of administration. However, it is also possible that nurses’ assessment of health literacy as part of their clinical duties may not be quite as accurate as assessment by dedicated research staff. Nevertheless, the advantages afforded by nurse administration, principally the possibility of large-scale data collection, may outweigh this slight decrease in accuracy. The HEALS project, for example, has enabled measurement of health literacy on approximately 32,000 hospital patients and 20,000 clinic patients annually since November 2010 and remains ongoing.22
Although most prior research on brief health literacy measures has focused on the performance of individual items,28 we chose to treat the BHLS as a three-item instrument for several reasons. First, internal consistency reliability was high, supporting the notion that the items measure a single construct. Second, a three-item instrument scored from 3 to 15 may provide greater discrimination than a single-item measure with only 5 levels. Third, prior research has shown that the combination of the three items has slightly better accuracy in the prediction of health literacy levels than any item alone.19 Finally, using all three items does not substantially increase administration time compared to a single item.
One of the unique contributions of this study is demonstration that the BHLS, whether administered by RAs or nurses, contributed unique variance to the prediction of health literacy skills after adjustment for demographic characteristics. This is important because age, gender, race, and education are readily available, and one could argue that a measure of health literacy is only valuable if it has additional predictive value beyond those demographic characteristics.29
This study has several limitations. First, we lack detailed information on the fidelity with which nurses administered the BHLS. Compared to BHLS-RA scores, the BHLS-RN scores were higher overall and had a somewhat weaker association with the reference standard, S-TOFHLA. This could be due in part to the timing of administration, as the BHLS-RA and S-TOFHLA were given together, whereas the BHLS-RN had been given approximately 20 min before in the clinic, or 1–3 days before in the hospital. It is possible that hospital patients could have responded differently to the BHLS administered by the RA, given that they had more health care experiences after they were first administered the BHLS by a nurse, which could have changed their perception of their abilities. We do not expect that patients in the clinic experienced the same effect, owing to closer timing. However, it is also possible that nurses in the busy clinical environment administered the BHLS differently than intended (e.g., they may have asked one item and inferred the others, or they may have provided patients with different response options than specified by the instrument).
Second, because in this study the BHLS-RN was always administered prior to the BHLS-RA, we cannot separate the effect of the administrator (nurse or RA) from a possible order effect. It is plausible that the patients responded differently to the questions asked by the RAs simply because they had already been asked those questions by the nurses and, therefore, may have been sensitized to their own health literacy, or the lack thereof.
Third, as is the case with all previous studies in this area, our argument for the validity of the BHLS rests mainly on the association of BHLS scores with an imperfect reference standard, in this case the S-TOFHLA. Though the S-TOFHLA is used in many research studies and has been shown to be a valid indicator of health literacy,30 it does not measure the full complement of skills thought to comprise health literacy. A stronger test of the validity of the BHLS will involve its association with clinical outcomes; that work is underway. Fourth, generalizability may be limited, given that this work was performed at a single university hospital and its affiliated primary care clinics, where patients' educational attainment was relatively high. The BHLS may perform differently in other settings or populations.
Despite these limitations, our findings support the use of the BHLS administered by nurses in clinical practice as a valid measure of health literacy. This approach is consistent with the Institute of Medicine’s recommendations that health literacy measures be incorporated into EHRs.1 The data from the nurses’ health literacy assessment are stored in the medical center’s Enterprise Data Warehouse and are being used for research that seeks to better understand the influence of low health literacy on patients’ participation in care, as well as their clinical outcomes.
Acknowledgements
Supported by R21 HL096581 and the Vanderbilt University Innovation and Discovery in Engineering And Science (IDEAS) Program Grant Award (Dr. Kripalani). Dr. Osborn is supported by a Career Development Award (K01 DK087894). Dr. McNaughton is supported by the Vanderbilt Emergency Medicine Research Training Program (K12 HL109019).
Conflict of Interest
The authors declare that they do not have any conflicts of interest.
REFERENCES
- 1. Institute of Medicine. Health Literacy: A Prescription to End Confusion. Washington, DC: National Academies Press; 2004.
- 2. Kutner M, Greenberg E, Jin Y, Boyle B, Hsu Y, Dunleavy E. The Health Literacy of America’s Adults: Results From the 2003 National Assessment of Adult Literacy (NCES 2006–483). Washington, DC: National Center for Education Statistics; 2006.
- 3. Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Crotty K. Low health literacy and health outcomes: an updated systematic review. Ann Intern Med. 2011;155:97–107. doi: 10.7326/0003-4819-155-2-201107190-00005.
- 4. Howard DH, Gazmararian J, Parker RM. The impact of low health literacy on the medical costs of Medicare managed care enrollees. Am J Med. 2005;118:371–377. doi: 10.1016/j.amjmed.2005.01.010.
- 5. Eichler K, Wieser S, Brugger U. The costs of limited health literacy: a systematic review. Int J Public Health. 2009;54:313–324. doi: 10.1007/s00038-009-0058-2.
- 6. Kelly PA, Haidet P. Physician overestimation of patient literacy: a potential source of health care disparities. Patient Educ Couns. 2007;66:119–122. doi: 10.1016/j.pec.2006.10.007.
- 7. Powell CK, Kripalani S. Resident recognition of low literacy as a risk factor in hospital readmission. J Gen Intern Med. 2005;20:1042–1044. doi: 10.1007/s11606-005-0246-6.
- 8. Bass PF 3rd, Wilson JF, Griffith CH, Barnett DR. Residents’ ability to identify patients with poor literacy skills. Acad Med. 2002;77:1039–1041. doi: 10.1097/00001888-200210000-00021.
- 9. Sudore RL, Mehta KM, Simonsick EM, et al. Limited literacy in older people and disparities in health and healthcare access. J Am Geriatr Soc. 2006;54:770–776. doi: 10.1111/j.1532-5415.2006.00691.x.
- 10. Paasche-Orlow MK, Schillinger D, Greene SM, Wagner EH. How health care systems can begin to address the challenge of limited literacy. J Gen Intern Med. 2006;21:884–887. doi: 10.1111/j.1525-1497.2006.00544.x.
- 11. Paasche-Orlow MK, Wolf MS. Evidence does not support clinical screening of literacy. J Gen Intern Med. 2008;23:100–102. doi: 10.1007/s11606-007-0447-2.
- 12. The Joint Commission. “What Did the Doctor Say?”: Improving Health Literacy to Protect Patient Safety. Available at http://www.jointcommission.org/What_Did_the_Doctor_Say/. Accessed June 10, 2013.
- 13. Davis TC, Crouch MA, Long SW, et al. Rapid assessment of literacy levels of adult primary care patients. Fam Med. 1991;23:433–435.
- 14. Davis TC, Long SW, Jackson RH, et al. Rapid estimate of adult literacy in medicine: a shortened screening instrument. Fam Med. 1993;25:391–395.
- 15. Baker DW, Williams MV, Parker RM, Gazmararian JA, Nurss J. Development of a brief test to measure functional health literacy. Patient Educ Couns. 1999;38:33–42. doi: 10.1016/S0738-3991(98)00116-5.
- 16. Parker RM, Baker DW, Williams MV, Nurss JR. The test of functional health literacy in adults: a new instrument for measuring patients’ literacy skills. J Gen Intern Med. 1995;10:537–541. doi: 10.1007/BF02640361.
- 17. Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med. 2004;36:588–594.
- 18. Chew LD, Griffin JM, Partin MR, et al. Validation of screening questions for limited health literacy in a large VA outpatient population. J Gen Intern Med. 2008;23:561–566. doi: 10.1007/s11606-008-0520-5.
- 19. Wallace LS, Cassada DC, Rogers ES, et al. Can screening items identify surgery patients at risk of limited health literacy? J Surg Res. 2007;140:208–213. doi: 10.1016/j.jss.2007.01.029.
- 20. Wallace LS, Rogers ES, Roskos SE, Holiday DB, Weiss BD. Screening items to identify patients with limited health literacy skills. J Gen Intern Med. 2006;21:874–877. doi: 10.1111/j.1525-1497.2006.00532.x.
- 21. Sarkar U, Schillinger D, Lopez A, Sudore R. Validation of self-reported health literacy questions among diverse English and Spanish-speaking populations. J Gen Intern Med. 2011;26:265–271. doi: 10.1007/s11606-010-1552-1.
- 22. Cawthon C, Mion LC, Willens DE, Roumie CL, Kripalani S. Implementation of routine health literacy assessment in hospital and primary care patients. Jt Comm J Qual Patient Saf. 2014 (in press).
- 23. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)–a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377–381. doi: 10.1016/j.jbi.2008.08.010.
- 24. McNaughton C, Wallston KA, Rothman RL, Marcovitz DE, Storrow AB. Short, subjective measures of numeracy and general health literacy in an adult emergency department. Acad Emerg Med. 2011;18:1148–1155. doi: 10.1111/j.1553-2712.2011.01210.x.
- 25. Chow GC. Tests of equality between sets of coefficients in two linear regressions. Econometrica. 1960;28:591–605. doi: 10.2307/1910133.
- 26. Hosmer DW, Lemeshow S. Applied Logistic Regression. 2nd ed. New York: John Wiley & Sons; 2000.
- 27. DeVellis RF. Scale Development: Theory and Applications. 2003.
- 28. Powers BJ, Trinh JV, Bosworth HB. Can this patient read and understand written health information? JAMA. 2010;304:76–84. doi: 10.1001/jama.2010.896.
- 29. Baker DW. The meaning and the measure of health literacy. J Gen Intern Med. 2006;21:878–883. doi: 10.1111/j.1525-1497.2006.00540.x.
- 30. Davis TC, Kennen EM, Gazmararian JA, Williams MV. Literacy testing in health care research. In: Schwartzberg JG, VanGeest JB, Wang CC, editors. Understanding Health Literacy. Chicago: American Medical Association; 2005. pp. 157–179.


