Annals of Family Medicine. 2009 Jan;7(1):24–31. doi: 10.1370/afm.919

Screening Questions to Predict Limited Health Literacy: A Cross-Sectional Study of Patients With Diabetes Mellitus

Kelly Marvin Jeppesen 1, James D Coyle 2, William F Miser 1
PMCID: PMC2625834  PMID: 19139446

Abstract

PURPOSE Limited health literacy is increasingly recognized as a barrier to receiving adequate health care. Identifying patients at risk of poor health outcomes secondary to limited health literacy is currently the responsibility of clinicians. Our objective was to identify which screening questions and demographics independently predict limited health literacy and could thus help clinicians individualize their patient education.

METHODS Between August 2006 and July 2007, we asked 225 patients being treated for diabetes at an academic primary care office several questions regarding their reading ability as part of a larger study (57% response rate). We built a logistic regression model predicting limited health literacy to determine the independent predictive properties of these questions and demographic variables. Patients were classified as having limited health literacy if they had a Short Test of Functional Health Literacy in Adults (S-TOFHLA) score of less than 23. The potential predictors evaluated were self-rated reading ability, highest education level attained, Single-Item Literacy Screener (SILS) result, patients’ reading enjoyment, age, sex, and race.

RESULTS Overall, 15.1% of the patients had limited health literacy. In the final model, 5 of the potential predictors were independently associated with increased odds of having limited health literacy. Specifically, patients were more likely to have limited health literacy if they had a poorer self-rated reading ability (odds ratio [OR] per point increase in the model = 3.37; 95% confidence interval [CI], 1.71–6.63), more frequently needed help reading written health materials (assessed by the SILS) (OR = 2.03; 95% CI, 1.26–3.26), had a lower education level (OR = 1.89; 95% CI, 1.12–3.18), were male (OR = 4.46; 95% CI, 1.53–12.99), and were of nonwhite race (OR = 3.73; 95% CI, 1.04–13.40). These associations were not confounded by age. The area under the receiver operating characteristic curve was 0.9212.

CONCLUSIONS Self-rated reading ability, SILS result, highest education level attained, sex, and race independently predict whether a patient has limited health literacy. Clinicians should be aware of these associations and ask questions to identify patients at risk. We propose an “SOS” mnemonic based on these findings to help clinicians wishing to individualize patient education.

Keywords: Literacy, educational status, mass screening, diabetes mellitus, psychometrics, primary health care

INTRODUCTION

Health literacy is defined as “the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions.”1 Limited health literacy, which refers to marginal health literacy, inadequate health literacy, or both depending on study definitions, has been independently associated with several undesirable health-related outcomes. Compared with individuals with adequate health literacy, those with limited health literacy have poorer understanding of their chronic diseases,2–5 physicians’ instructions,6–8 and health-related Web sites9; poorer disease management skills2; higher levels of disease indicators10,11; and worse self-reported health.12,13 Limited health literacy is also associated with less use of certain preventive services,14,15 increased hospitalizations,12,16 and increased health care costs.17 Not all of these associations have been found with perfect consistency, however.18–20 We refer the interested reader to larger literature reviews and summary analyses for a more comprehensive look at the effects of literacy on health-related outcomes.21,22

Interventions exist to aid persons with limited health literacy. Simplifying instruction forms is an effective means of ensuring better comprehension for entire patient populations.23–25 Patients’ health care teams may be of assistance by providing simplified education and ensuring that patients understand and retain what is being said.26–28 Although the use of such strategies should be encouraged for helping all patients, those with limited health literacy may receive a particularly strong benefit.27

Aggressive educational interventions have shown some additional benefit in outcomes among patients with limited health literacy,5,27 and at least 8 clinical trials of intensive behavioral interventions that may help this population have recently finished or are under way.29 Because most limited health literacy–focused interventions are only investigational, Paasche-Orlow and Wolf30 recently recommended against universal screening for health literacy difficulties. Instead, they recommend that clinicians take responsibility for assessing how well their individual patients understand health information.

Clinicians need to be able to recognize the “symptoms” of limited health literacy and other barriers to acquiring information. Unfortunately, physicians are sometimes poor estimators of literacy level, and the prevalence of limited health literacy varies widely between populations.31,32 Knowing what questions to ask about a patient’s learning styles would thus be helpful for clinicians seeking to personalize their patient education.

Recent studies have suggested that clinicians could use a few questions to identify patients with limited health literacy.33–36 Results of one of these studies were further refined to produce the Single-Item Literacy Screener (SILS), a 1-question test for adequate literacy.37 But these studies have not shown whether such questions are superior to other proxies, such as highest education level attained or self-rated reading ability, which they were intended to replace. Highest education level attained and self-rated reading ability, although not perfect predictors of literacy, may still be strongly associated with literacy level and would therefore merit study as risk factors for a potential literacy problem.38,39

The purpose of this study was to identify questions that could best indicate to a clinician that a patient may have low or marginal health literacy. Our hypothesis was that short screening questions and demographic information would help predict a patient’s literacy status. We also hoped to discover which questions are superior for predicting limited health literacy, and which predict it independently of the other questions and demographic risk factors.

METHODS

Data Collection

The Ohio State University Biomedical Institutional Review Board approved this study, which was conducted at the university’s Rardin Family Practice Center in Columbus, Ohio. At this practice site, 11 faculty family physicians and 15 family practice residents provide primary care to more than 9,100 individuals from the local community. Of this patient population, 60% are women, 53% are white, 38% are black, and more than 700 individuals have diabetes mellitus (95% have type 2).

Data for this study were collected as part of a larger diabetes cohort study intended to identify associations between health literacy test scores and diabetes outcomes. Patients with a prior diagnostic code for diabetes were recruited either in person at the office or by telephone a few days before a visit. Identified patients were asked to participate in a reading and diabetes knowledge survey. Recruitment and interviews for this study took place between August 2006 and July 2007.

After obtaining informed consent, trained research assistants interviewed participants, asking them for their age, sex, race, and highest education level completed. Participants were then asked 3 questions related to reading ability: (1) “How would you rate your ability to read?” (self-rated reading ability); (2) “On a scale of 1 to 10, where 1 is ‘not at all’ and 10 is ‘a great deal,’ how much do you like reading?” (reading enjoyment); and (3) “How often do you need to have someone help you when you read instructions, pamphlets, or other written material from your doctor or pharmacy?” (SILS).37 We selected these 3 questions for analysis because each was thought to capture some different aspect of what may predict an individual’s literacy level. A Likert scale was used to code answers to the first and last items.

Participants were then screened for visual acuity using a floating E eye chart. Those with sufficient visual acuity were administered the Short Test of Functional Health Literacy in Adults (S-TOFHLA).40 The S-TOFHLA is a 36-item multiple-choice test of reading comprehension that can be completed in 7 minutes. It is composed of 2 passages that have several missing words. Participants are asked to select the words that best fit into the passages given the context of surrounding words. This test is a health literacy metric that has good correlation with the full TOFHLA from which it was derived and has become the more popular of the 2 tests. After the interview, participants were awarded $15 grocery store gift cards for their participation.

For our analysis, we excluded participants if their visual acuity was less than 20/50 corrected, if they could not communicate with research staff in English, or if they had an obvious cognitive impairment that would interfere with testing (such as known dementia or mental retardation).

Analysis

We analyzed the data using binomial logistic regression analysis. An S-TOFHLA score of less than 23 (a generally accepted cut point) was regarded as a positive outcome, indicating limited health literacy. Limited health literacy thus included both marginal health literacy (a score of 17–22) and inadequate health literacy (a score of ≤16).

Covariates included for analysis were sex, race, age, highest education level attained, and responses to the self-rated reading ability, reading enjoyment, and SILS questions. If the patient reported more than 1 race, their race was coded as “other” for analysis. Answer options for the questions, as well as the codes applied for all of the variables, are found in Table 1. For health literacy level, derived from the S-TOFHLA score, an adequate level was coded as 0, whereas marginal and inadequate levels were both coded as 1. In the regression analysis, for ordinal variables, we treated each difference between adjacent response options as a 1-point increase. Patients’ enjoyment of reading was coded using the original scale from 1 to 10.
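The coding scheme described above can be sketched as follows. The category labels and point values mirror Table 1 and the S-TOFHLA cut point given earlier, but the dictionary and function names are hypothetical illustrations, not the study's actual analysis code.

```python
# Point values per Table 1 (reference category = 0). Names are paraphrased
# from the questionnaire; these mappings are an illustrative sketch.
EDUCATION_CODES = {
    "Master's degree or greater": -4,
    "Bachelor's degree": -3,
    "Associate's degree": -2,
    "Some college": -1,
    "12th grade (GED or equivalent)": 0,
    "11th grade or less": 1,
    "6th grade or less": 2,
}

SILS_CODES = {"Never": 0, "Rarely": 1, "Sometimes": 2, "Often": 3, "Always": 4}

def literacy_outcome(s_tofhla_score: int) -> int:
    """Outcome coding: 1 = limited health literacy (S-TOFHLA < 23),
    covering both marginal (17-22) and inadequate (<=16) levels;
    0 = adequate (>= 23)."""
    return 1 if s_tofhla_score < 23 else 0
```

Note that education is coded so that 12th grade is the reference (0) and each step away from it changes the code by 1, which is why the model's education odds ratio is interpreted per 1-point (per-level) increase.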

Table 1.

Distributions of Variables Used to Predict Limited Health Literacy and Results of Bivariate Analyses (N = 225)

Variable Response Options Codea No. (%)b of Patients Crude ORc P Valuec
Highest education level attained Master’s degree or greater −4 20 (8.9) d <.001
Bachelor’s degree (≥4 years of college) −3 25 (11.1) 0.18
Associate’s degree (2 years of college) −2 22 (9.8) 0.21
Some college −1 57 (25.3) 0.25
12th grade (GED or equivalent) 0 65 (28.9) (ref)
11th grade or less 1 32 (14.2) 3.44
6th grade or less 2 4 (1.8) 13.25
Self-rated reading ability Excellent or very good 0 112 (49.8) (ref) <.001
Good 1 71 (31.6) 6.66
Okay 2 35 (15.6) 21.47
Poor 3 7 (3.1) d
Terrible or very poor 4 0 (0)
Reading enjoyment 1–10 Natural 8 (6–10)e .01
Single-Item Literacy Screener37 Never 0 140 (62.2) (ref) <.001
Rarely 1 47 (20.9) 3.91
Sometimes 2 22 (9.8) 4.85
Often 3 10 (4.4) 24.75
Always 4 6 (2.7) d
Sex Male 1 71 (31.6) 2.20 .04
Race White 99 (44.0) (ref)
Black 101 (44.9) 4.83 <.001
Other 25 (11.1) 2.95 .11
Age Years Natural 53.76 (12.8)f .01g

OR=odds ratio; ref=reference group; GED=general equivalency diploma.

a Represents point values used in the logistic model.

b Unless otherwise noted.

c Crude odds ratios and P values are of association with limited health literacy. Exact tests were used for categorical variables; rank-sum test was used for ordinal variables.

d Odds ratios for some variable values are not reported because the contingency table contained a zero cell.

e Median (interquartile range).

f Mean (SD).

g Age data were normally distributed. P value was determined with a t test.

We examined the distribution of answers for each variable, and performed analyses for crude associations with limited health literacy using exact tests for categorical independent variables and t tests or Wilcoxon rank-sum tests for continuous and ordinal variables. Each variable was then tested as a lone independent variable in a logistic regression model predicting limited health literacy. From these variables, we chose an initial univariate model using the lowest Akaike Information Criterion as a guideline for model superiority. This process was repeated, adding 1 variable at a time to build the multivariate model until there was a rise in the Akaike Information Criterion value for all new models being tested. Although a recent analysis by Vittinghoff and McCulloch41 suggests that the current data set could reliably support a model with up to 6 variables, we checked the stability of the model at each step of this process.

We checked for confounding by demographic variables and plausible interactions by comparison with the main effects model. We considered interactions to be significant at an individual type I error rate of .05, and confounding to be substantial if parameter coefficients changed by greater than 15%. Linearity in the logit was assessed using the fractional polynomial method for determining acceptable transforms.42 Goodness of fit was assessed using the Hosmer-Lemeshow test. The area under the receiver operating characteristic (ROC) curve was also calculated. To detect the presence of outliers, we obtained the Δ-β (change in coefficient) and Δ χ2 (change in significance) for each subject and plotted these values against predicted probability in the model. We considered variables included in the final model to be significant at an individual type I error rate of .05. Wald and likelihood ratio statistics were compared for model sensitivity and stability, but only results of Wald statistics are reported. All analyses were run using Stata SE, version 9.2 (Stata Corp, College Station, Texas).
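The step-forward stopping rule described above (add variables one at a time, stop when the Akaike Information Criterion rises for all new candidate models) can be sketched as follows. The `loglik_of` callback and the toy log-likelihood values are hypothetical illustrations, not values from the study.

```python
def aic(n_params: int, log_likelihood: float) -> float:
    """Akaike Information Criterion: AIC = 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

def forward_select(candidates, loglik_of):
    """Forward selection guided by AIC.

    candidates: list of variable names.
    loglik_of(subset): fitted log-likelihood of a logistic model
    containing the given variables (plus an intercept).
    """
    chosen = []
    best_aic = aic(1, loglik_of(chosen))  # intercept-only model: 1 parameter
    while True:
        trials = [(aic(len(chosen) + 2, loglik_of(chosen + [v])), v)
                  for v in candidates if v not in chosen]
        if not trials:
            break
        trial_aic, best_var = min(trials)
        if trial_aic >= best_aic:  # AIC rose for all new models: stop
            break
        best_aic, chosen = trial_aic, chosen + [best_var]
    return chosen

# Illustrative log-likelihoods (higher = better fit); variable names
# abbreviate two of the study's predictors purely for the example:
def toy_loglik(subset):
    ll = {frozenset(): -95.0,
          frozenset({"selfrated"}): -60.0,
          frozenset({"sils"}): -75.0,
          frozenset({"selfrated", "sils"}): -58.0}
    return ll[frozenset(subset)]
```

With these toy values, `forward_select(["selfrated", "sils"], toy_loglik)` first picks the strongest single predictor and then adds the second because doing so still lowers the AIC.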

RESULTS

Of the 396 patients invited to participate, 225 (57%) completed the interview and were included in analyses. Of the 171 who were not included, 13 (3% of those invited to participate) were found to have vision less acute than 20/50 during the interview, 99 (25%) declined to participate, and 59 (15%) were excluded before the interview (6 reported being legally blind, 14 were no longer patients at the office where the study was being conducted, 27 did not speak sufficient English, 4 had considerable dementia, 5 had considerable mental retardation, 2 refused to sign the Health Insurance Portability and Accountability Act waiver for the larger study, and 1 had gestational diabetes only). Of the 99 patients who declined to participate, 54 had demographic information available. Compared with the 238 patients who participated (regardless of visual acuity), these nonparticipants did not differ significantly by race (P = .85) or sex (P = .42); however, participants were on average 6.2 years younger (P = .002). This difference was attenuated but remained significant if age in the medical chart was used instead of self-reported age (difference of means: 5.0 years, P = .01). One patient failed to indicate how much he enjoyed reading, and this value was imputed as the median of the sample (8 out of 10). Otherwise, the data set used was complete. Distributions of participants’ responses and P values of crude associations of the variables studied with limited health literacy are shown in Table 1.

On the basis of S-TOFHLA scores, 15.1% of participants had limited health literacy (14, or 6.2%, had marginal health literacy and 20, or 8.9%, had inadequate health literacy). The final model for predicting limited health literacy is given in Table 2, and its development and properties are described below. Lower self-rated reading ability, lower educational attainment, and more frequent need for help with written health materials were all independently associated with limited health literacy. Male sex and nonwhite race were independently associated with this outcome as well.

Table 2.

Final Logistic Regression Model for Predicting Limited Health Literacy

Variablea Adjusted ORb (95% CI) Coefficient (SE) P Value of H0: Coefficient=0 P Value of H0: Linear in Logitc
Self-rated reading abilityd 3.37 (1.71–6.63) 1.22 (0.35) <.001 .82
SILS resultd 2.03 (1.26–3.26) 0.71 (0.24) .003 .32
Highest education leveld 1.89 (1.12–3.18) 0.64 (0.27) .02 .94
Male sex 4.46 (1.53–12.99) 1.50 (0.55) .006
Nonwhite race 3.73 (1.04–13.40) 1.32 (0.65) .04
Constant –4.94 (0.86) <.001

CI=confidence interval; H0=null hypothesis; OR=odds ratio; SILS=Single-Item Literacy Screener.

a Listed in order of introduction into the model during the step-forward process.

b Odds ratio after adjustment for the other variables in the model.

c Results of fractional polynomial method.

d Odds ratios and coefficients reported are per unit increase, according to the code in Table 1.
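As a worked illustration, the coefficients in Table 2 combine into a predicted probability through the standard logistic formula. The coefficients and intercept below are taken directly from Table 2; the example patient is hypothetical, and the variable names are ours.

```python
import math

# Coefficients from Table 2 (per unit increase in the Table 1 codes).
COEF = {"self_rated_reading": 1.22, "sils": 0.71, "education": 0.64,
        "male": 1.50, "nonwhite": 1.32}
INTERCEPT = -4.94

def predicted_probability(patient: dict) -> float:
    """Probability of limited health literacy from the logistic model:
    p = 1 / (1 + exp(-(intercept + sum of coefficient * code)))."""
    logit = INTERCEPT + sum(COEF[k] * patient[k] for k in COEF)
    return 1 / (1 + math.exp(-logit))

# Hypothetical patient: rates reading "good" (code 1), "rarely" needs help
# with written health materials (SILS code 1), highest education 11th grade
# or less (code 1), male, white:
p = predicted_probability({"self_rated_reading": 1, "sils": 1,
                           "education": 1, "male": 1, "nonwhite": 0})
```

Exponentiating a coefficient recovers the adjusted odds ratio in Table 2 up to rounding (e.g., exp(1.22) is approximately 3.39, versus the reported 3.37 from the unrounded coefficient). The resulting probability could then be compared against a cutoff from Table 3.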

The main effects model included the same variables as the final model. We excluded reading enjoyment from the model because its association with adequate literacy switched from positive to negative after adjusting for the other screening questions. This finding indicated that although reading enjoyment accounted for some residual variability, leaving it in the model would falsely amplify the effect of other variables. We combined black race and other race after fitting the model because their coefficients were very similar and did not warrant the addition of a second categorical variable for race. Four plausible interactions were tested: sex with self-rated reading ability, sex with SILS result, sex with race, and race with self-rated reading ability. None of these interactions were significant at the predefined level, and no confounding by age was observed. Multicollinearity was not a problem; the maximum variance inflation factor among explanatory variables was 1.36 (mean, 1.22). All ordinal variables were linear in the logit (Table 2), and the Hosmer-Lemeshow test yielded a P value of .19, indicating no significant lack of fit.

We noted that 2 participants had outlying results. These participants’ records were rechecked for accuracy, and the interviewer was asked to verify that the data were correct. We determined that these participants were representative of the participants generally, so we did not drop any of the outlying observations. This model had excellent discriminatory performance, with an area under the ROC curve of 0.9212 (Figure 1). Sensitivities, specificities, and likelihood ratios of the model at various cutoffs are given in Table 3. Depending on the probability cutoff value chosen, sensitivity of the model could range from 100% down to 49%, with a corresponding rise in specificity from 50% to 98%.

Figure 1.

Receiver operating characteristic curve of the final model. Area under the curve = 0.9212.

Table 3.

Performance of the Model in Predicting Limited Health Literacy at Varying Cutoffs of Predicted Probability (N = 225)

Probability Cutoff Sensitivity, % Specificity, % Positive Likelihood Ratioa Negative Likelihood Ratiob
0.025 100 49 1.95 0
0.05 97 68 3.04 0.04
0.075 88 71 3.06 0.16
0.1 85 77 3.79 0.19
0.2 76 86 5.62 0.27
0.3 68 95 12.92 0.34
0.4 62 96 16.85 0.40
0.5 50 98 23.88 0.51

a The increase in likelihood of having limited health literacy if subject is found to have a positive result (an individual probability equal to or greater than the given cutoff).

b The decrease in likelihood of having limited health literacy if a subject is found to have a negative result (an individual probability less than the given cutoff).
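The likelihood ratios in Table 3 follow from the standard definitions, sketched below. Small discrepancies from the tabulated values (e.g., 3.70 versus the reported 3.79 at the 0.1 cutoff) reflect rounding of the reported sensitivities and specificities; the table was presumably computed from unrounded values.

```python
def likelihood_ratios(sensitivity: float, specificity: float):
    """Standard diagnostic-test formulas:
    LR+ = sensitivity / (1 - specificity)
    LR- = (1 - sensitivity) / specificity
    """
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# At the 0.1 probability cutoff in Table 3 (sensitivity 85%, specificity 77%):
lr_pos, lr_neg = likelihood_ratios(0.85, 0.77)
```

A positive likelihood ratio near 4, as here, roughly quadruples the odds of limited health literacy relative to the 15.1% pretest prevalence in this sample.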

DISCUSSION

A strength of this study is that it is clinically based and provides health care workers with tools for better understanding their patients’ learning styles. Whereas most literacy screening tools can determine whether or not a patient has limited health literacy, they do little to define etiologies or direct management. Asking specific questions about how an individual understands health information will better elucidate interventions that can be used. For example, asking about self-rated reading ability may reveal a known diagnosis of dyslexia, which could subsequently be managed by teaching with pictures or reading aloud to patients. A positive SILS result may indicate that a patient should have a family member present during education sessions. At the very least, asking these questions can prompt a discussion about an individual’s learning styles and coping methods. It will also prevent clinicians from being ignorant of problems associated with limited health literacy, so that when more effective interventions become available, clinicians will understand their usefulness and be ready to use them.

Another strength of this study is that it combines several different screening questions and demographic information into 1 predictive model. Of note, self-rated reading ability was the single most reliable predictor of limited health literacy of the predictors tested; thus, although it is known to be biased, it is still the best stand-alone question we have for determining literacy level. That being said, the fact that self-rated reading ability, SILS result, highest education level attained, sex, and race all remained significant predictors of limited health literacy even after adjustment for one another indicates that each contributes independent predictive information, and that using all of these questions and demographics together is superior to using any one of them alone.

Recommendation

We recommend that clinicians who wish to screen for limited health literacy ask about self-rated reading ability and highest education level attained, and use the SILS as part of a thorough social history. To help clinicians remember what specific questions to ask and which answers may be considered suspicious for a health literacy problem, we propose the mnemonic “SOS” (Table 4). According to this mnemonic, an educational attainment of high school or less, a self-rated reading ability of “okay” or worse, and asking for help with reading health materials at least “sometimes” are all associated with a higher likelihood of limited health literacy. Clinicians can use this mnemonic as a framework for discussing their patients’ potential barriers to learning.

Table 4.

SOS Mnemonic for Screening Patients for Limited Health Literacy

Question Topic Mnemonic Letter Category Thresholda
Educational attainment S The person’s Schooling is … Sub-Secondary.
Self-rated reading ability O The person’s Opinion of his or her reading ability is that … … he or she is Only an Okay reader.
Help needed when readingb S When the person reads health-related materials, Support is … Sometimes Solicited.

a Answers that may indicate a problem with health literacy.

b Single-Item Literacy Screener.37

Limitations

This study has several limitations, foremost of which is that the participants interviewed were all being treated for diabetes at a single academic family practice center. It is also unknown why patients who participated in this study were generally younger than patients who refused to participate. The generalizability of the results presented here should be tested in a more representative sample of patients nationwide. Also, because the patients knew they were going to be given reading tests, they may have been less likely to attempt to conceal a reading problem. This awareness would effectively remove a reporting bias that might be present outside of the study population. Our model should be validated to determine how well these results apply to other populations or persons who are present in offices for reasons other than literacy testing.

Another limitation of this study was the use of the S-TOFHLA as the reference standard for health literacy. Although the S-TOFHLA correlates well with the full version of the TOFHLA and the Rapid Estimate of Adult Literacy in Medicine (REALM), none of these is a comprehensive measure of health literacy as defined earlier.1 In fact, no simple reading test could realistically be a complete measure of the common definition of health literacy, which includes such skills as Internet or visual literacy.

For some of the questions used in this study, patient embarrassment may be of concern.7,43 We believe that the literacy questions are far less intrusive than common health-related questions concerning drug abuse or sexual behaviors, however, and may likewise provide information necessary for improving patient care. The general consensus among our research assistants was that these questions did not cause embarrassment. Other studies also indicate that patients are willing to divulge their literacy status and consider it to be important clinical information.44,45 Of course, as with any condition discovered in a physician’s office, limited health literacy should be kept confidential.

Self-rated reading ability, highest education level attained, and the SILS result can each provide clinicians with valuable information about a patient’s learning needs. Clinicians are advised to be aware of these associations and know what questions can help them identify patients who may need assistance with navigating the health care system or understanding health-related materials.

Acknowledgments

The authors would like to thank Andrea Jeppesen, Benjamin Hull, Stephen Wilkes, Mark Stevens, Charlie Pizanis, Maggie Ryan, Alan Sanderson, Ryan Hackett, David Williams, Julia Budde, Pooja Lahoti, Patricia S. Hatton, Philip Binkley, Joan Allen, and the very helpful staff at The Ohio State University Rardin Family Practice Center.

Conflicts of interest: Dr Miser is on the Speaker’s Bureau for Pfizer Corporation. Mr Jeppesen and Dr Coyle have no conflicts of interest to declare.

Funding support: This work was funded by The Ohio State University Crisafi-Monte Primary Care Cardiopulmonary Endowment, and by grant T32RR023260 from the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH).

Disclaimer: This publication was made possible by The Ohio State University Crisafi-Monte Primary Care Cardiopulmonary Endowment and the NIH through the NIH Roadmap for Clinical Research. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of Crisafi-Monte or NIH.

REFERENCES

1. Selden CR, Zorn M, Ratzan S, Parker RM, compilers. Health Literacy [bibliography online]. Bethesda, MD: National Library of Medicine; 2000. Current Bibliographies in Medicine No. 2000-1. http://www.nlm.nih.gov/archive//20061214/pubs/cbm/hliteracy.html. Accessed Jul 8, 2008.
2. Williams MV, Baker DW, Honig EG, Lee TM, Nowlan A. Inadequate literacy is a barrier to asthma knowledge and self-care. Chest. 1998;114(4):1008–1015.
3. Gazmararian JA, Williams MV, Peel J, Baker DW. Health literacy and knowledge of chronic disease. Patient Educ Couns. 2003;51(3):267–275.
4. Williams MV, Baker DW, Parker RM, Nurss JR. Relationship of functional health literacy to patients’ knowledge of their chronic disease. A study of patients with hypertension and diabetes. Arch Intern Med. 1998;158(2):166–172.
5. Paasche-Orlow MK, Riekert KA, Bilderback A, et al. Tailored education may reduce health literacy disparities in asthma self-management. Am J Respir Crit Care Med. 2005;172(8):980–986.
6. Spandorfer JM, Karras DJ, Hughes LA, Caputo C. Comprehension of discharge instructions by patients in an urban emergency department. Ann Emerg Med. 1995;25(1):71–74.
7. Baker DW, Parker RM, Williams MV, et al. The health care experience of patients with low literacy. Arch Fam Med. 1996;5(6):329–334.
8. Persell SD, Osborn CY, Richard R, Skripkauskas S, Wolf MS. Limited health literacy is a barrier to medication reconciliation in ambulatory care. J Gen Intern Med. 2007;22(11):1523–1526.
9. Birru MS, Monaco VM, Charles L, et al. Internet usage by low-literacy adults seeking health information: an observational analysis. J Med Internet Res. 2004;6(3):e25.
10. Schillinger D, Grumbach K, Piette J, et al. Association of health literacy with diabetes outcomes. JAMA. 2002;288(4):475–482.
11. Lincoln A, Paasche-Orlow MK, Cheng DM, et al. Impact of health literacy on depressive symptoms and mental health-related quality of life among adults with addiction. J Gen Intern Med. 2006;21(8):818–822.
12. Baker DW, Parker RM, Williams MV, Clark WS, Nurss J. The relationship of patient reading ability to self-reported health and use of health services. Am J Public Health. 1997;87(6):1027–1030.
13. Weiss BD, Hart G, McGee DL, D’Estelle S. Health status of illiterate adults: relation between literacy and health status among persons with low literacy skills. J Am Board Fam Pract. 1992;5(3):257–264.
14. Scott TL, Gazmararian JA, Williams MV, Baker DW. Health literacy and preventive health care use among Medicare enrollees in a managed care organization. Med Care. 2002;40(5):395–404.
15. Endres LK, Sharp LK, Haney E, Dooley SL. Health literacy and pregnancy preparedness in pregestational diabetes. Diabetes Care. 2004;27(2):331–334.
16. Baker DW, Gazmararian JA, Williams MV, et al. Functional health literacy and the risk of hospital admission among Medicare managed care enrollees. Am J Public Health. 2002;92(8):1278–1283.
17. Weiss BD, Palmer R. Relationship between health care costs and very low literacy skills in a medically needy and indigent Medicaid population. J Am Board Fam Pract. 2004;17(1):44–47.
18. Morris NS, MacLean CD, Littenberg B. Literacy and health outcomes: a cross-sectional study in 1002 adults with diabetes. BMC Fam Pract. 2006;7:49.
19. Miller DP Jr, Brownlee CD, McCoy TP, Pignone MP. The effect of health literacy on knowledge and receipt of colorectal cancer screening: a survey study. BMC Fam Pract. 2007;8:16.
20. Moon RY, Cheng TL, Patel KM, Baumhaft K, Scheidt PC. Parental literacy level and understanding of medical information. Pediatrics. 1998;102(2):e25.
21. Dewalt DA, Berkman ND, Sheridan S, Lohr KN, Pignone MP. Literacy and health outcomes: a systematic review of the literature. J Gen Intern Med. 2004;19(12):1228–1239.
22. Health literacy: report of the Council on Scientific Affairs, American Medical Association. JAMA. 1999;281(6):552–557.
23. Davis TC, Fredrickson DD, Arnold C, Murphy PW, Herbst M, Bocchini JA. A polio immunization pamphlet with increased appeal and simplified language does not improve comprehension to an acceptable level. Patient Educ Couns. 1998;33(1):25–37.
24. Davis TC, Bocchini JA Jr, Fredrickson D, et al. Parent comprehension of polio vaccine information pamphlets. Pediatrics. 1996;97(6 Pt 1):804–810.
25. Jolly BT, Scott JL, Sanford SM. Simplification of emergency department discharge instructions improves patient comprehension. Ann Emerg Med. 1995;26(4):443–446.
26. Safeer RS, Keenan J. Health literacy: the gap between physicians and patients. Am Fam Physician. 2005;72(3):463–468.
27. Rothman RL, DeWalt DA, Malone R, et al. Influence of patient literacy on the effectiveness of a primary care-based diabetes disease management program. JAMA. 2004;292(14):1711–1716.
28. Schillinger D, Piette J, Grumbach K, et al. Closing the loop: physician communication with diabetic patients who have low health literacy. Arch Intern Med. 2003;163(1):83–90.
29. United States Registry of Clinical Trials. Literacy. Bethesda, MD: National Institutes of Health; 2008. http://www.clinicaltrials.gov/ct2/results?term=literacy. Accessed Jul 8, 2008.
30. Paasche-Orlow MK, Wolf MS. Evidence does not support clinical screening of literacy. J Gen Intern Med. 2008;23(1):100–102.
31. Bass PF III, Wilson JF, Griffith CH, Barnett DR. Residents’ ability to identify patients with poor literacy skills. Acad Med. 2002;77(10):1039–1041.
32. Paasche-Orlow MK, Parker RM, Gazmararian JA, Nielsen-Bohlman LT, Rudd RR. The prevalence of limited health literacy. J Gen Intern Med. 2005;20(2):175–184.
33. Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med. 2004;36(8):588–594.
34. Bennett IM, Robbins S, Al-Shamali N, Haecker T. Screening for low literacy among adult caregivers of pediatric patients. Fam Med. 2003;35(8):585–590.
35. Chew LD, Griffin JM, Partin MR, et al. Validation of screening questions for limited health literacy in a large VA outpatient population. J Gen Intern Med. 2008;23(5):561–566.
  • 36.Wallace LS, Rogers ES, Roskos SE, Holiday DB, Weiss BD. Brief report: screening items to identify patients with limited health literacy skills. J Gen Intern Med. 2006;21(8):874–877. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Morris NS, MacLean CD, Chew LD, Littenberg B. The Single Item Literacy Screener: evaluation of a brief instrument to identify limited reading ability. BMC Fam Pract. 2006;7:21. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Kirsch IS, Jungeblut A, Jenkins L, Kolstad A. Adult Literacy in America: A First Look at the Finding of the National Adult Literacy Survey. Washington, DC: National Center for Education Statistics, US Department of Education; 2002. Document NCES 1993-275. http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=93275. Accessed Jul 8, 2008.
  • 39.Jackson RH, Davis TC, Bairnsfather LE, George RB, Crouch MA, Gault H. Patient reading ability: an overlooked problem in health care. South Med J. 1991;84(10):1172–1175. [DOI] [PubMed] [Google Scholar]
  • 40.Baker DW, Williams MV, Parker RM, Gazmararian JA, Nurss J. Development of a brief test to measure functional health literacy. Patient Educ Couns. 1999;38(1):33–42. [DOI] [PubMed] [Google Scholar]
  • 41.Vittinghoff E, McCulloch CE. Relaxing the rule of ten events per variable in logistic and Cox regression. Am J Epidemiol. 2007;165(6):710–718. [DOI] [PubMed] [Google Scholar]
  • 42.Hosmer DW, Lemeshow S. Applied Logistic Regression. 2nd ed. New York, NY: John Wiley & Sons, Inc; 2000.
  • 43.Parikh NS, Parker RM, Nurss JR, Baker DW, Williams MV. Shame and health literacy: the unspoken connection. Patient Educ Couns. 1996;27(1):33–39. [DOI] [PubMed] [Google Scholar]
  • 44.Ryan JG, Leguen F, Weiss BD, et al. Will patients agree to have their literacy skills assessed in clinical practice? Health Educ Res. 2007;23(4):603–611. [DOI] [PubMed] [Google Scholar]
  • 45.Seligman HK, Wang FF, Palacios JL, et al. Physician notification of their diabetes patients’ limited health literacy. A randomized, controlled trial. J Gen Intern Med. 2005;20(11):1001–1007. [DOI] [PMC free article] [PubMed] [Google Scholar]
