Author manuscript; available in PMC: 2013 Jun 1.
Published in final edited form as: J Biomed Inform. 2012 Apr 12;45(3):598–607. doi: 10.1016/j.jbi.2012.04.001

Health Literacy Screening Instruments for eHealth Applications: A Systematic Review

Sarah A Collins 1,2,3, Leanne M Currie 4, Suzanne Bakken 5,6, David K Vawdrey 5, Patricia W Stone 6
PMCID: PMC3371171  NIHMSID: NIHMS370331  PMID: 22521719

Abstract

Objective

To systematically review current health literacy (HL) instruments for use in consumer-facing and mobile health information technology screening and evaluation tools.

Design

The PubMed, OVID, Google Scholar, Cochrane Library, and Science Citation Index databases were searched for health literacy assessment instruments using the terms “health”, “literacy”, “computer-based,” and “psychometrics”. All instruments identified by this method were critically appraised according to their reported psychometric properties and clinical feasibility.

Results

Eleven different health literacy instruments were found. Screening questions, such as asking a patient about his/her need for assistance in navigating health information, were evaluated in 7 different studies and are promising for use as a valid, reliable, and feasible computer-based approach to identify patients who struggle with low health literacy. However, there was a lack of consistency in the types of screening questions proposed. There is also a lack of information regarding the psychometric properties of computer-based health literacy instruments.

Limitations

Only English language health literacy assessment instruments were reviewed and analyzed.

Conclusions

Current health literacy screening tools demonstrate varying benefits depending on the context of their use. In many cases, it seems that a single screening question may be a reliable, valid, and feasible means for establishing health literacy. A combination of screening questions that assess health literacy and technological literacy may enable tailoring eHealth applications to user needs. Further research should determine the best screening question(s) and the best synthesis of various instruments’ content and methodologies for computer-based health literacy screening and assessment.

Keywords: Health Literacy, systematic review

I. INTRODUCTION

Challenges with low health literacy have grown with the increasing expectations for consumers to engage with interactive health information technologies, which are frequently referred to as “eHealth” applications.1,2 Improved consumer health literacy is necessary for effective self-management of chronic diseases using online patient portals and health management programs that rely on mobile devices.1,3,4 A meta-analysis of data from two large-scale surveys of adult literacy found associations between at-risk groups, low general literacy skills, and low health-related literacy skills at a population level.5 Inadequate health literacy inflates health care inefficiencies; low-literacy individuals use fewer preventive services and less health information technology, and they have higher rates of emergency department utilization, poorer overall health status, and greater risk of death.6–10 Therefore, policy efforts to increase health system efficiencies, decrease costs, improve outcomes, and decrease disparities may be limited if patients’ health literacy skills, particularly their ability to interact with health information technology, are not addressed.2,9–11 To address health literacy in today’s technology-rich health care environment, there is a strong need for validated, computer-based tools to assess health literacy. Therefore, this systematic review assesses the psychometric properties of health literacy instruments for adaptation to computer-based administration.

II. BACKGROUND

Clinicians consistently overestimate patients’ health literacy and provide patients with overly complex information.15,16 Health literacy comprises numeracy (numerical literacy), oral literacy, print literacy, and cultural and conceptual knowledge, and is defined by the Institute of Medicine as “the degree to which individuals can obtain, process, and understand the basic health information and services they need to make appropriate health decisions.”15 Additionally, literacy skills are content and context specific; therefore, in addition to patients with low literacy levels, patients with otherwise high literacy skills or level of education may also have difficulty comprehending health information.7,15,17,18 Health literacy is necessary for patients to manage administrative tasks (e.g., scheduling appointments, filling out insurance forms, understanding consent forms) as well as clinical tasks (e.g., explaining a medical history to a doctor or understanding and following instructions for a procedure or post-operative care).6 Increasingly, these are tasks that patients may be expected to perform independently through a patient portal or mobile health application.19,20 Moreover, health information searches via cellular phones and the development of mobile health applications that utilize short message services have great potential to reach users with limited health care access and limited literacy skills.12 However, tailoring eHealth applications based on user competence is necessary to prevent exacerbation of health disparities and to maximize the benefit to patients with all levels of health literacy skills.2,10,11,13 In response to the challenges of accessing and effectively using health information technology, the concept of eHealth literacy has been developed.
eHealth literacy is “the ability to seek, find, understand, and appraise health information from electronic sources and apply the knowledge gained to addressing or solving a health problem”.21 eHealth literacy is composed of two types of skills: general skills and specific skills.11 General skills include traditional literacy (reading, writing, and numeracy), media literacy (media analysis skills), and information literacy (information seeking and understanding). Specific skills include computer literacy (IT skills), health literacy (health knowledge comprehension), and science literacy (science process and outcome).11 To support eHealth literacy and the tailoring of eHealth applications to appropriate health literacy levels, we need reliable, valid, and feasible instruments for computer-based health literacy screening.14,15 To understand an instrument’s ability to reliably measure the concept of health literacy, an analysis of the instrument’s psychometric properties should be completed. Psychometric testing results should inform the selection and recommendation of a tool for widespread use in patient care.

Reliable and valid health literacy instruments were developed in the early 1990s and continue to be used in research today.17,22 However, given the limited time available during a patient-provider interaction, standardized measurement tools are not routinely used to assess individuals’ health literacy or health literacy across patient populations within our health care system.10 A lack of standardized measurement tools severely limits the ability to compare health literacy initiatives.23 Increased public access to electronic health information and a shift toward more consumer-driven content, such as the website www.patientslikeme.com, require communication technologies that support the patient as a partner and consumer at the center of the health care system by providing information access and assistance when and where it is needed.11,24,25 This systemic change may be supported by computer-based health literacy screening to identify patients who may have unmet needs or difficulty navigating our technologically sophisticated, information-dense health care system, followed by computer-adaptive testing and tailoring of information readability during electronic health communications.15 Therefore, the aim of this systematic review was to analyze health literacy screening instruments that may be used as computer-based screening tools.

III. METHODS

A. Health Literacy Instrument Retrieval

We conducted a systematic review of English-language general health literacy measurement instruments to determine the state of the science of clinically applicable, valid, reliable, and feasible health literacy screening tools for use as computer-based screening and evaluation tools in a variety of settings and populations. Inclusion criteria were: all instruments that measured general health literacy in English-speaking populations. Exclusion criteria were health literacy screening tools targeted only at a specific disease population or diagnosis (e.g., cancer or diabetes) and tools in other languages. We acknowledge that translation to languages beyond English, particularly Spanish, is a needed focus area for both verbal and written patient education and hence warrants language-appropriate, valid, and reliable health literacy screening tools. However, merely translating a previously validated English tool is insufficient due to differences between the languages26; therefore, this review focuses only on English-language health literacy instruments, and a separate review of health literacy screening tools in Spanish and other languages is recommended. Similarly, our search revealed that the health literacy literature is a rich area that includes many studies beyond the assessment of patients’ health literacy, such as the development of content with appropriate levels of health literacy for patients, but these types of studies were beyond the scope of this review.

An initial search was performed to identify the current gold standards for assessing the health literacy of patients. This combined search of “health”, “literacy”, “computer-based,” and “psychometrics” in OVID’s Health and Psychosocial Instruments database retrieved 28 citations. From these 28 citations, 2 instruments were identified: the Test of Functional Health Literacy in Adults (TOFHLA) and the Rapid Estimate of Adult Literacy in Medicine (REALM). PubMed, OVID, Cochrane Library, Google Scholar, and Science Citation Index searches revealed that the TOFHLA and the REALM have been extensively tested and are the current criterion measures for comparison during the development of new instruments. We then combined all variations of the terms “health”, “literacy”, “TOFHLA”, and “REALM” to search PubMed, OVID (Medline and CINAHL), Cochrane Library, and Google Scholar to ensure the retrieval of a comprehensive list of health literacy tools available to clinicians. To ensure the comprehensiveness of our systematic review, we gathered all instruments developed since the early 1990s, when the TOFHLA and REALM were developed. We utilized the “cited-by” tools within PubMed, OVID, and Science Citation Index to identify all articles that cited the TOFHLA and REALM publications from the 1990s. Furthermore, we received input from colleagues familiar with the health literacy literature and searched the reference lists of all retrieved articles to ensure the completeness and adequacy of our search. Our analysis of the content of all articles, their reference lists, and the articles that cited them confirmed the comprehensiveness of the key words and search techniques used.

B. Critical Appraisal of Identified Instruments

All retrieved health literacy instruments were critically appraised for validity and feasibility as screening tools according to their reported psychometric properties, potential for use in the general population, and administration information. Given the prevalence and use of the TOFHLA and REALM throughout the literature as current gold standards, this systematic review first includes a brief overview of the established reliability and validity of the TOFHLA and the REALM. Next, the other identified instruments are critically appraised according to the following criteria:

  1. The constructs that the instrument measures

  2. Criterion related validity assessed by the correlation coefficients of the instrument with the TOFHLA, the REALM, or both

  3. Reported psychometric properties, such as Cronbach’s Alpha (to assess internal consistency) and Receiver Operating Curves (ROC) (to assess sensitivity and specificity)

  4. The extensiveness of testing in diverse populations as evidence of the instrument’s appropriateness for use as a screening tool in the general population

  5. The reported ability of the instrument to detect marginal health literacy, defined as higher than inadequate health literacy but lower than adequate health literacy

  6. The number of questions included in the instrument

  7. The administration requirements, such as requirements for score calculations

  8. The administration time

  9. The cost and method of obtaining the instrument

These factors were seen as contributing to a health literacy assessment instrument’s potential for use as a computer-based screening tool in the clinical and mobile health setting within the United States.27

IV. RESULTS

Table 1 provides an overview of the health literacy measurement tools we retrieved. These include the two gold standards, the TOFHLA (including pilot testing of a computer-based version) and the REALM, as well as the Newest Vital Sign (NVS) and the eHealth Literacy Scale (eHEALS). Additionally, we found a set of Health Literacy Screening Question Methodologies (HLSQM), which, as a group, we will refer to as HLSQMs despite the fact that they are independent question sets developed by separate research teams. The TOFHLA, the REALM, and the HLSQMs each have shortened versions: the Shortened Test of Functional Health Literacy in Adults (S-TOFHLA); the Shortened Rapid Estimate of Adult Literacy in Medicine (REALM-R); and four versions of HLSQMs comprising one or three questions.

Table 1.

Psychometric properties of health literacy assessment instruments

| Instrument | Constructs measured | Cronbach’s alpha | Correlation with TOFHLA | Correlation with REALM | Correlation with WRAT-R | AUROC for inadequate HL (95% CI) | AUROC for marginal HL (95% CI) | Cost |
|---|---|---|---|---|---|---|---|---|
| REALM [28,29] | Medical word recognition | 0.96 [28] | NR | NA | 0.88 [29] | NR | NR | varies |
| TOFHLA [22] | Cloze-type comprehension | 0.98 | NA | 0.84 | 0.74 | NR | NR | $70 |
| S-TOFHLA [17] | Cloze-type comprehension | 0.68 (section 1); 0.97 (section 2); 0.96; 0.95 | NR | 0.80 | NR | NR | NR | $70 |
| REALM-R [30] | Medical word recognition | 0.91 | NR | NR | 0.64 | NR | NR | varies |
| Newest Vital Sign (NVS) [31,32] | Reading and comprehension of a nutrition label | 0.81; 0.71 [31]; 0.76 [32] | 0.61 [31]; 0.59 [32] | 0.41 [31] | NR | 0.71 (0.68–0.77) [31]; 0.73 (0.70–0.78) [31]; 0.88 (0.84–0.93) [32] | NR | Free |
| HLSQM [33] | Screening question (a) | NR | NR | See AUROC | NR | 0.82 (0.77–0.86) | 0.79 (0.74–0.83) | Free |
| HLSQM [34,35] | Screening questions (b) | NR | See AUROC | NR | NR | 0.87 (0.78–0.96) [34]; 0.74 (0.69–0.79) [35] | 0.68 (0.60–0.77) [34]; 0.72 (0.69–0.76) [35] | Free |
| HLSQM [36,37] | Screening question (c) | NR | OR ($) = 2.03 (1.26–3.26) [37] | NR | NR | 0.78 (0.73–0.83) [36] | 0.73 (0.69–0.78) [36] | Free |
| HLSQM [38] | Screening questions (d) | NR | NR | See AUROC | NR | 0.76 (0.66–0.86) | NR | Free |
| HLSQM [39] | Screening questions (e) | NR | NR | NR | See AUROC | 0.63 (0.53–0.72) | NR | Free |
| eHEALS | Self-report health literacy | 0.88 | NR | NR | NR | NR | NR | Free |

Screening questions:

a. “How confident are you filling out medical forms by yourself?”

b. “How often do you have someone help you read hospital materials?”; “How confident are you filling out medical forms by yourself?”; “How often do you have problems learning about your medical condition because of difficulty understanding written information?”

c. “How often do you need to have someone help you when you read instructions, pamphlets, or other written material from your doctor or pharmacy?”

d. “How many years of school have you completed?”; “Is your child’s other parent living with you now?”; “Do you ever read books for fun?”

e. “What is your last grade of school completed?”; “What is your age?”; “How often do you read books?”; “What is your reading preference: I’d like to read the questions myself, or I’d like the questions read to me?”

NR = Not Reported; NA = Not Applicable.

($) Odds ratio for limited health literacy; all values significant at the .001 level.

A. TOFHLA

The TOFHLA was developed to measure health information comprehension and not simply the ability to read and correctly pronounce words as measured by other instruments.22 The TOFHLA characterizes patients as having adequate, marginal, or inadequate health literacy; “limited” health literacy refers to patients characterized as having either marginal or inadequate health literacy skills. The TOFHLA is administered by clinicians and uses actual materials that patients encounter, such as pill bottles and appointment slips.17 The TOFHLA consists of 17 numeracy items and 3 prose passages and takes up to 22 minutes to administer.

In 1995, to establish reliability and validity, the TOFHLA, the Wide Range Achievement Test-Revised (WRAT-R), and the Rapid Estimate of Adult Literacy in Medicine (REALM) were administered at an urban public hospital in Atlanta, Georgia, which primarily cares for indigent African-American residents of its surrounding counties.17 The TOFHLA correlated well with the WRAT-R and the REALM (r = 0.74 and 0.84, respectively).22 This study concluded that the TOFHLA is a valid and reliable indicator of a patient’s ability to read health-related materials.22 Since the early 1990s the TOFHLA has been used extensively for research in different patient populations and as a “gold-standard” comparison measurement to assess the reliability and concurrent criterion-related validity of new tools developed to assess health literacy.26,32

The TOFHLA was adapted into an interactive computer-based health literacy assessment tool called the Talking Touchscreen and tested in 2009 by the same investigators who originally developed the TOFHLA.40 Questions from the TOFHLA were tailored to what could be measured via an interactive “Talking Touchscreen” tool that allows the patient to either read questions displayed on the screen or hear the questions read aloud. Development included an iterative user interface design process based on user feedback. Pilot testing of 98 items was conducted with 97 English-speaking participants (67% female and 59.8% African American) from two primary care clinics and two community organizations in the Chicago metropolitan area. Participants were at least 21 years of age, with sufficient vision, hearing, cognitive function, and manual dexterity to allow for ideal interaction with the touchscreen. However, 72 of the 97 participants completed the items by paper-and-pencil self-administration and only 25 participants completed the items by self-administration on the Talking Touchscreen. Twenty-four of the 98 items were dropped from the item pool because of low adjusted point-biserial correlations (<0.20), redundancy, or because they were too easy (all participants answered them correctly). An additional 40 English items were written using the same procedures. The best 90 items were selected based on the pilot data results and internal and external expert review for content coverage and relevance.40

The pilot testing of the Talking Touchscreen highlighted problems in the translation of the TOFHLA, a previously validated instrument, to an interactive computer-based format. The evaluation of the Talking Touchscreen showed that many items were answered correctly by a high proportion of participants, thereby providing little discriminating information for individuals reading above a fourth-grade level, and some items showed measurement bias between Hispanics and non-Hispanics.40 Qualitative analysis indicated that participants found the Talking Touchscreen acceptable and easy to use. Therefore, this pilot testing provided empirical data that the psychometric properties of a paper-based validated instrument may differ when that tool is translated for computer-based administration.

B. REALM

The REALM, developed in the early 1990s to measure the ability of patients to pronounce health-related words, has also been used extensively to assess patients’ health literacy and to assess the concurrent criterion-related validity of other health literacy tools.17 The REALM is the most commonly used health literacy assessment tool in the clinical setting.28 The REALM does not measure understanding, but serves as a screening instrument for clinicians by providing a reading-grade estimate for patients who read below a ninth-grade level.29 Limited health literacy is defined as a score at or below the 6th-grade reading level (REALM score = 0 to 44), marginal health literacy as a score at the 7th- to 8th-grade reading level (REALM score = 45 to 60), and adequate health literacy as a score at or above the 9th-grade reading level (REALM score = 61 to 66). The REALM can be administered in under 3 minutes with minimal training.29
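The score bands above amount to a simple lookup; as a minimal sketch (the function name is ours, not part of the REALM materials):

```python
def realm_category(score):
    """Map a raw REALM score (number of the 66 words read correctly)
    to the grade-level bands described in the text."""
    if not 0 <= score <= 66:
        raise ValueError("REALM scores range from 0 to 66")
    if score <= 44:       # at or below 6th-grade reading level
        return "inadequate"
    if score <= 60:       # 7th- to 8th-grade reading level
        return "marginal"
    return "adequate"     # 9th-grade reading level or above
```

A score of 45 falls in the marginal band; 61 is the threshold for adequate health literacy.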

Extensive testing and factor analysis led to the development of the popularized 66 item REALM, shortened from the original 125-word version. This version, which is still used in practice, demonstrated good concurrent criterion related validity with standard literacy measurements (correlation coefficients: Peabody Individual Achievement Test-Revised, r = 0.97, p<0.001; Slosson Oral Reading Test-Revised, r = 0.96, p<0.001; Wide Range Achievement Test-Revised, r = 0.88, p<0.001) and test-retest reliability performed on the same population 2 weeks apart (r = 0.99, P < .001).29

Initially, the REALM was tested in predominantly African American populations. However, in 2004, Shea et al. tested the REALM on a convenience sample of 1,610 African Americans and Caucasians while controlling for gender, age, and education level.28 The Cronbach alpha for the total group was 0.96; however, the mean score of 55.7 for African Americans was significantly lower than the mean score of 61 for Caucasian patients (p<0.0001, ES=.49).28

C. S-TOFHLA

An abbreviated version of the Test of Functional Health Literacy in Adults (S-TOFHLA) was adapted from the original TOFHLA with the aim of providing clinicians with a shorter test that is still a practical, reliable, and valid means of measuring patients’ ability to read and understand the materials they commonly encounter in the health care setting.

The S-TOFHLA, reduced to 4 numeracy items and 2 reading comprehension passages, which are at the 4th and 10th grade reading level, is a general measure of health literacy that is professionally administered in up to 12 minutes. The S-TOFHLA and the TOFHLA can be purchased together online for 60 dollars at www.peppercornbooks.com.

The same research team that developed the TOFHLA in 1995 (Baker, Williams, Parker, Gazmararian, and Nurss) adapted the S-TOFHLA in 1999 and tested the instrument at the same Atlanta, Georgia hospital.17 A convenience sample of 211 patients was used to test the instrument with exclusion criteria of: age less than 18 years; unintelligible speech; overt psychiatric illness; lack of cooperation; native language other than English; and being too ill to participate.

Demonstrating good internal consistency (reliability), Cronbach’s alpha was 0.68 for the 4 numeracy items and 0.97 for the items in the reading comprehension section. The correlation between the numeracy score and the reading comprehension score was moderately high (r = 0.60). The correlation between the S-TOFHLA and the REALM was 0.80 (p<0.001), with the numeracy and reading comprehension sections at 0.61 (p<0.001) and 0.81 (p<0.001), respectively. The S-TOFHLA demonstrated only a slightly lower correlation with the REALM (r = 0.80) than the TOFHLA demonstrated (r = 0.84) in the original study.17 The S-TOFHLA, like the TOFHLA, requires the administrator of the test to calculate the patient’s score using a simple formula.

The S-TOFHLA has been utilized in a variety of English-speaking populations in the U.S., from young adults to the elderly, including: adults with diabetes and heart failure in Vermont41, urban-living Medicare patients42, and Caucasian surgical patients43. Currently, the S-TOFHLA uses only the reading comprehension section; with the elimination of the numeracy section, the S-TOFHLA takes between 7 and 8 minutes to administer.44 This change was made based on the differences between the psychometric properties of the numeracy section and the reading comprehension section (Cronbach’s alpha = 0.68 and 0.97, respectively; r = 0.61 and 0.80, respectively).

D. REALM-R

The REALM has recently been shortened from 66 items to 8 items (REALM-R) and can be administered within 2 minutes.30 The developers tested the psychometric properties of the REALM-R and reported a Spearman rank correlation between the REALM-R and the WRAT-R of 0.64; however, this pilot testing was performed in a homogeneous, fairly well-educated population.30 Therefore, further testing is needed to determine the discriminant validity of the REALM-R. The REALM and REALM-R sample kits, which include patient word lists and scoring cards, are available for purchase from the developer Terry C. Davis, PhD, LSU Medical Center, 1501 Kings Highway, Shreveport, LA 71130-3932, tdavis1@lsuhsc.edu.

E. NVS

The NVS is a six-item assessment of a patient’s reading and comprehension of an ice cream nutrition label. This instrument, whose research and development was funded by Pfizer Pharmaceuticals, can be accessed online free of charge and requires a maximum administration time of 6.2 minutes (average of 2.9 minutes).32 In 2005, the NVS was tested on a convenience sample of 250 English-speaking and 250 Spanish-speaking patients from university-based primary care clinics in Tucson, AZ.32 The reported Cronbach’s alpha for the English version was 0.76, and the Pearson correlation coefficient with the TOFHLA was 0.59 (p<0.001). Receiver Operating Characteristic (ROC) curves, which indicate the overall performance of a screening tool by plotting sensitivity versus (1-specificity), were reported.34 The area under the ROC curve (AUROC), using the TOFHLA as the gold standard, was 0.88 (95% CI 0.84–0.93; p<0.001).
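To illustrate what the AUROC values quoted throughout this review summarize: the AUROC equals the probability that a randomly chosen low-literacy patient receives a more "positive" screening score than a randomly chosen adequate-literacy patient, and each point on the ROC curve is one cutoff's (sensitivity, 1-specificity) pair. The sketch below uses invented scores, not data from the studies reviewed:

```python
def auroc(pos_scores, neg_scores):
    """AUROC via its Mann-Whitney interpretation: the probability that a
    randomly chosen positive case (e.g., inadequate health literacy) scores
    higher on the screener than a randomly chosen negative case; ties
    count as one half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

def sens_spec(pos_scores, neg_scores, cutoff):
    """Sensitivity and specificity when scores >= cutoff are flagged positive."""
    tp = sum(1 for s in pos_scores if s >= cutoff)
    tn = sum(1 for s in neg_scores if s < cutoff)
    return tp / len(pos_scores), tn / len(neg_scores)

# Hypothetical screener scores (higher = more likely low literacy):
inadequate = [5, 4, 4, 3, 5]
adequate = [1, 2, 2, 3, 1]
print(auroc(inadequate, adequate))         # 0.98: near-perfect separation
print(sens_spec(inadequate, adequate, 4))  # one point on the ROC curve
```

An AUROC of 0.5 corresponds to a screener no better than chance; the 0.7-0.9 values reported for the instruments in Table 1 indicate moderate-to-good discrimination.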

Osborn et al., in 2007, conducted two psychometric studies of the NVS whose results were published together.31 Both studies assessed similar populations (70–74% female; 61% African American) at an urban hospital clinic and a non-profit, federally funded health clinic; however, the first study compared the NVS to the REALM and the second study compared the NVS to the S-TOFHLA.31 The Cronbach’s alphas for studies 1 and 2 were 0.81 and 0.71, respectively. When compared to the REALM, Pearson’s correlation coefficient was 0.41 (p<0.001) and the AUROC was 0.71 (95% CI 0.68–0.77) for limited health literacy. When compared to the S-TOFHLA, Pearson’s correlation coefficient was 0.61 and the AUROC for detecting inadequate health literacy (a measure comparable to the REALM’s limited health literacy measurement) was 0.73 (95% CI 0.70–0.78).31

Another study compared results of the NVS to known trends of health literacy in various suburban, urban, and rural primary care settings.45 The instrument was administered to 1,014 adults and middle- and high-school athletes (response rate, 97.5%). On average, the administration time for the NVS was 2.63 minutes. Adequate health literacy was demonstrated by 48.1% of the adults and 59.7% of the middle- and high-school athletes.45 The authors concluded that the NVS provided results comparable to more extensive literacy tests; however, the results were compared only to known trends, not to results from the administration of a gold-standard health literacy assessment instrument within the same sample population.

F. HLSQMs

To identify clinically useful questions that avoid the pitfalls of currently validated health literacy instruments, such as long administration times or the potential embarrassment of patients, Chew et al. developed questions based on six themes identified in a qualitative study of patients with limited health literacy and previous health literacy studies.34 Chew et al. applied methodologies appropriate for stigmatized behavior to determine the best wording of questions, such as asking “how often” patients had a problem rather than asking “if” they had a problem.34

Sixteen questions were verbally administered to 332 English-speaking male patients presenting to a Veterans Affairs (VA) preoperative clinic in Seattle.34 Concurrent administration of the S-TOFHLA was used to generate ROC curves by categorizing patients into three mutually exclusive groups: inadequate, marginal, or adequate health literacy.34 Subjects were asked to respond to each question by answering (1) Always, (2) Often, (3) Sometimes, (4) Occasionally, or (5) Never.

Of the sixteen questions, the area under the ROC curve indicated that three of the questions effectively detected inadequate health literacy and to a lesser extent detected marginal health literacy as measured by the S-TOFHLA.34 The first question, “How often do you have someone help you read hospital materials?” had the highest AUROC (0.87; 95% CI = 0.78 – 0.96); the second question, “How confident are you filling out medical forms by yourself?” had the second highest AUROC (0.80; 95% CI = 0.67 – 0.93); the third question, “How often do you have problems learning about your medical condition because of difficulty understanding written information?” had a slightly lower AUROC than the second question (0.76; 95% CI = 0.62 – 0.90).34 In a follow-up study, Chew et al. assessed the same three questions for a sample of 1,796 veterans who received primary care services at 1 of 4 large VA medical centers.35 The AUROCs of the three questions for detecting inadequate health literacy, as measured by the S-TOFHLA and the REALM, ranged from 0.66 to 0.74. The question “How confident are you filling out forms by yourself?” performed significantly better than the two other questions (p<0.05) with an AUROC of 0.74 (95% CI = 0.69 – 0.79) and 0.84 (95% CI = 0.79 – 0.89), as measured by the S-TOFHLA and REALM, respectively. No combination of these three questions significantly increased the AUROC in detecting “inadequate or marginal” health literacy above the single question “How confident are you filling out forms by yourself?” However, the surveys were administered by telephone interview and the non-responders were significantly more likely to be older, less educated, and with a household income less than $20,000.35

The limitation of this instrument is that the performance of the screening questions was weaker for detecting patients with either inadequate or marginal health literacy than it was for detecting only inadequate health literacy. The AUROC for detecting inadequate or marginal health literacy for questions 1 to 3 were 0.68, 0.66, and 0.60, respectively.34 In the 2008 follow-up study the AUROC for detecting inadequate or marginal health literacy for questions 1 to 3 were 0.71–0.72, 0.62–0.63, and 0.63, respectively.35 The Likert scale scoring system of these screening questions allows for the cutoff point to be determined based on prevalence of limited health literacy in the population and an acceptable trade-off between the sensitivity and specificity of the tool.34
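The cutoff trade-off described above can be sketched with a short example. The response counts here are synthetic, invented purely for illustration; only the 1-5 Likert scale is taken from the studies:

```python
def screen_performance(limited, adequate, cutoff):
    """Sensitivity and specificity when Likert responses <= cutoff
    (1 = Always ... 5 = Never) are flagged as limited health literacy.
    `limited` and `adequate` map each response value to a patient count."""
    tp = sum(n for resp, n in limited.items() if resp <= cutoff)
    fp = sum(n for resp, n in adequate.items() if resp <= cutoff)
    sensitivity = tp / sum(limited.values())
    specificity = 1 - fp / sum(adequate.values())
    return sensitivity, specificity

# Synthetic counts for 100 limited- and 100 adequate-literacy patients:
limited = {1: 30, 2: 25, 3: 20, 4: 15, 5: 10}
adequate = {1: 5, 2: 10, 3: 20, 4: 30, 5: 35}
for cutoff in (1, 2, 3, 4):
    sens, spec = screen_performance(limited, adequate, cutoff)
    print(f"flag responses <= {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Raising the cutoff flags more patients, trading specificity for sensitivity; which point on that curve is "best" depends on the prevalence of limited health literacy in the population and on the cost of a missed case versus a false alarm, as the studies note.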

The three questions identified by Chew et al. in 2004 were evaluated in 2006 by Wallace et al. in a demographically different patient population and with a different reference standard, the REALM.33 The sample consisted of 305 predominantly female, Caucasian, Medicare patients presenting to a university-based clinic in Tennessee.33 In this validation study, question 2 had a significantly higher AUROC for detecting inadequate health literacy (0.82; 95% CI 0.77–0.86) and for detecting limited or marginal health literacy (0.79; 95% CI 0.74–0.83) than the other two questions (p<0.01).33 Both Chew et al.34 and Wallace et al.33 found that combinations of questions were no more effective in identifying those with limited or marginal health literacy than was one question; however, the most effective question differed in the two studies.

In 2004, Chew et al.34 found the question, “How often do you have someone help you read hospital materials?” to be the most effective. However, the 2008 follow-up study by Chew et al. and the 2006 study by Wallace et al. found the question, “How confident are you filling out medical forms by yourself?” to be the most effective. Additionally, Wallace et al. found a higher AUROC for detecting marginal health literacy (AUROC = 0.79) than Chew et al. did in both 2004 (AUROC = 0.68) and 2008 (AUROC = 0.72).33–35

Morris et al., in 2006, tested the independently developed Single Item Literacy Screener (SILS) question: “How often do you need to have someone help you when you read instructions, pamphlets, or other written material from your doctor or pharmacy?”36 This question differs only slightly from the most effective screening question for detecting inadequate health literacy in Chew et al.’s 2004 study, in that it specifies the types of health-related materials a person may need help reading. Nine hundred ninety-nine patients at primary care practices in Vermont were given the SILS and the S-TOFHLA. The AUROCs were 0.78 (95% CI 0.73–0.83) for inadequate health literacy and 0.73 (95% CI 0.69–0.78) for marginal and inadequate health literacy.36 This AUROC for detecting marginal and inadequate health literacy is higher than the corresponding AUROC that Chew et al. found for any of their three effective questions.

Bennett et al., in 2003, tested a fourth set of screening questions on 98 poor, urban-dwelling African American women, evaluating the questions’ ability to detect limited health literacy as compared with the REALM.38

The questions are modeled after the four-item CAGE questionnaire used to identify adults at risk for alcohol abuse or dependence: (1) “How many years of school have you completed?” (2) “Is your child’s other parent living with you now?” and (3) “Do you ever read books for fun?”.38 These three screening questions differ from those in the previous three studies assessing screening questions, yet the AUROC remains high at 0.76 (p<0.001; 95% CI = 0.66–0.86) for detecting limited (inadequate) health literacy.38 Based on the sensitivity (0.84; 95% CI = 0.67–0.95), specificity (0.53; 95% CI = 0.40–0.63), positive predictive value (PPV = 0.49; 95% CI = 0.35–0.63), and negative predictive value (NPV = 0.86; 95% CI = 0.71–0.96), Bennett et al. recommend a cutoff of two positive screening responses as the minimum needed to identify an adult at risk for low literacy.38 This cutoff has a sensitivity of 0.84 for detecting adults with low literacy; however, 46% of positive screens would be false positives (patients incorrectly placed in the low-literacy risk group despite having a higher level of literacy).38 This study was limited to a demographically homogeneous population. The prevalence of low health literacy in the tested population, which determines the PPV and NPV, is consistent with national rates for poor minority populations; in a sample with a lower prevalence of limited literacy skills, these values would not be as strong.38 Given further psychometric testing in different settings, this instrument may prove to be a useful screening tool, not a diagnostic tool of illiteracy, to aid in identifying adults at increased risk of low health literacy skills.38
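Bennett et al.’s caveat about prevalence follows directly from Bayes’ rule: PPV and NPV depend on the base rate of low literacy in the screened population, while sensitivity and specificity do not. A sketch using their reported sensitivity (0.84) and specificity (0.53), with the prevalence values chosen purely for illustration:

```python
# PPV and NPV derived from sensitivity, specificity, and prevalence.
def predictive_values(sens, spec, prev):
    tp = sens * prev                 # true positives per screened patient
    fp = (1 - spec) * (1 - prev)     # false positives
    fn = (1 - sens) * prev           # false negatives
    tn = spec * (1 - prev)           # true negatives
    return tp / (tp + fp), tn / (tn + fn)  # (PPV, NPV)

sens, spec = 0.84, 0.53  # values reported by Bennett et al.
for prev in (0.40, 0.20, 0.05):  # illustrative prevalences of low literacy
    ppv, npv = predictive_values(sens, spec, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.2f}, NPV {npv:.2f}")
```

Running this shows the PPV falling sharply as prevalence drops, which is exactly why the instrument’s reported predictive values would weaken in populations with better literacy skills.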

Lobach et al., in 2003, tested questions similar to Bennett et al.’s.39 However, Lobach et al. tested the questions in a computer-based setting for the purpose of determining the appropriate reading level at which a computer-based tool should be set for a particular patient.39 With the help of a literacy expert, they based their questions on the National Adult Literacy Survey.39 The developers tested their initial sample of questions in an exploratory laboratory study of 100 subjects using the TOWRE (Test of Word Reading Efficiency, PRO-ED, Austin, Texas) as a gold-standard reference.39 To increase the probability of including subjects with limited literacy, subjects were recruited at two clinic sites and through community organizations that offered reading instruction to low-literacy individuals.39 The four questions found to be predictive of reading literacy were included in a confirmatory evaluation study of a second convenience sample of 78 subjects.39 These four questions were evaluated against the WRAT3 (Wide Range Achievement Test Version 3) to determine an AUROC for the combined questions. The four questions were: (1) “What is your last grade of school completed?”; (2) “What is your age?”; (3) “How often do you read books?”; (4) “What is your reading preference: I’d like to read the questions myself, or I’d like the questions read to me?”. Lobach et al. found that the calculated scores differentiated between high- and low-literacy subjects at a highly statistically significant level (p=0.0025).39 The developers reported an AUROC of 0.63 (95% CI: 0.53–0.72) in detecting high versus low literacy levels.

Jeppesen et al., in 2009, as part of a larger study, surveyed 225 diabetic patients at an academic primary care clinic (57% response rate).37 In addition to reporting their age, sex, race, and highest education level completed, participants were asked: 1) “How would you rate your ability to read?” (5-point Likert scale); 2) “On a scale of 1 to 10, where 1 is ‘not at all’ and 10 is ‘a great deal,’ how much do you like reading?”; and, finally, the SILS question, 3) “How often do you need to have someone help you when you read instructions, pamphlets, or other written material from your doctor or pharmacy?” (5-point Likert scale). The predictive model found that participants with limited health literacy, as measured by the S-TOFHLA, were more likely to self-report a lower ability to read (OR = 3.37; 95% CI = 1.71 – 6.63), to need help reading written health materials more frequently (OR = 2.03; 95% CI = 1.26 – 3.26), to have a lower education level (OR = 1.89; 95% CI = 1.12 – 3.18), to be male (OR = 4.46; 95% CI = 1.53 – 12.99), and to be of nonwhite race (OR = 3.73; 95% CI = 1.04 – 13.40).

One study found that, despite an adequate sample size (N = 140, power = 80%), a screening question was insufficient to identify a statistically significant correlation between the patient’s response and the patient’s level of health literacy and comprehension.46 A subset of the TOFHLA, comprehension passage ‘B’, which discusses ‘Medicaid Rights and Responsibilities’ and uses a modified Cloze procedure, was used as the ‘gold standard’ measurement of health literacy. The Cloze procedure assesses understanding of context and vocabulary by removing portions of text from a narrative passage and asking the participant to replace the missing words. Comprehension was measured by a comprehension index of comparative health plan quality information that was developed by the authors and has not yet been validated. The screening question, measured on a 4-point Likert scale, was: “How confident are you filling out medical forms by yourself?” The results indicated a weak, statistically non-significant positive correlation between the screening question and the TOFHLA (r=0.15, p=0.09) and no relationship between the screening question and the comprehension index.46 However, it is not known whether use of the validated S-TOFHLA, as opposed to a subset of the TOFHLA, or use of a validated comprehension index, would have yielded the same conclusion that there is no statistically significant relationship. Of note, this study was published in the journal Family Medicine in the ‘Letters to the editor’ section, which publishes abstracts of original research of 600 words or less.
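The Cloze procedure described above can be illustrated mechanically: delete every nth word of a passage and score the respondent on the words supplied. The passage and deletion interval below are invented for illustration; the TOFHLA’s actual passages, intervals, and multiple-choice response format differ.

```python
# Build a simple Cloze item by replacing every nth word with a blank,
# returning the gapped passage and the list of deleted words (the key).
def make_cloze(passage, n=5):
    words = passage.split()
    answers = []
    for i in range(n - 1, len(words), n):  # every nth word, 1-indexed
        answers.append(words[i])
        words[i] = "_____"
    return " ".join(words), answers

passage = ("Your doctor may send you to a specialist if your test "
           "results show that you need more care")
item, answers = make_cloze(passage, n=5)
print(item)
print(answers)
```

Scoring a respondent is then a matter of comparing the words they supply against the answer key, which is what makes the format attractive for automated, computer-based administration.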

G. eHEALS

No identified studies described validated computer-based health literacy screening instruments. However, the eHealth Literacy Scale (eHEALS) is a reliable measure of patients’ perceived skills at finding and using electronic health information.47 Norman et al. defined eHealth literacy as a combination of traditional, health, information, scientific, media, and computer literacy.47 The purpose of this assessment is therefore to assist clinicians in determining the appropriateness of ‘prescribing’ eHealth, such as directing a patient to MedlinePlus® to read about diabetes management or to use a personal health record. eHEALS consists of 8 statements of an individual’s perception of his or her eHealth literacy, measured on a 5-point Likert scale, and was validated in a youth population as part of a single-session, randomized intervention trial evaluating Web-based eHealth programs. These statements were: 1) I know how to find helpful health resources on the Internet; 2) I know how to use the Internet to answer my health questions; 3) I know what health resources are available on the Internet; 4) I know where to find helpful health resources on the Internet; 5) I know how to use the health information I find on the Internet to help me; 6) I have the skills I need to evaluate the health resources I find on the Internet; 7) I can tell high quality from low quality health resources on the Internet; 8) I feel confident in using information from the Internet to make health decisions.47 Psychometric testing was performed on 664 participants (55.7% male) aged 13 to 21 at four time points over 6 months. Cronbach’s alpha was 0.88, with modest test-retest stability from baseline to 6-month follow-up. Norman et al. concluded that eHEALS may be a useful clinical tool for identifying patients who may or may not benefit from referral to an eHealth intervention or resource; however, further research needs to examine the applicability of the eHEALS to other populations and settings.47
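The internal-consistency figure reported for eHEALS (Cronbach’s alpha = 0.88) is computed from the item variances and the variance of each respondent’s total score. A sketch with fabricated 8-item, 5-point Likert responses, not the actual eHEALS data:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
def cronbach_alpha(responses):
    """responses: one list of k item scores per respondent."""
    k = len(responses[0])
    items = list(zip(*responses))          # transpose to per-item columns

    def var(xs):                           # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(r) for r in responses]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Fabricated responses to 8 Likert items from 5 respondents
responses = [
    [4, 4, 5, 4, 4, 3, 4, 4],
    [2, 2, 1, 2, 3, 2, 2, 1],
    [5, 5, 4, 5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3, 3, 2, 3],
    [1, 2, 2, 1, 1, 2, 1, 2],
]
print(round(cronbach_alpha(responses), 2))
```

When items move together across respondents, as in this fabricated sample, the total-score variance dominates the item variances and alpha approaches 1; uncorrelated items drive it toward 0.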

V. DISCUSSION

The TOFHLA and the REALM have been extensively tested and validated and remain the gold standard for health literacy testing, yet newer instruments have shorter administration times and are therefore more useful in the clinical setting. HLSQMs and eHEALS questions offer great potential for computer-based health literacy screening and should be the focus of computer-based screening tools. However, further validation is needed to ensure that the psychometric properties of screening questions tested with a paper-based instrument hold up when the questions are adapted to a computer-based administration environment. For example, in one of the studies reviewed, the psychometric properties of the TOFHLA did not remain stable when it was administered via the interactive computer-based “Talking Touchscreen”.40 In contrast, Sinadinovic et al. found that the psychometric properties of a paper-based alcohol and drug use screening tool remained stable when it was adapted to a computer-based format that included an interactive voice response system.48 These differing findings indicate the need for further research. At this point, analysis of the two studies suggests that the difference may be due to the type of questions included in the screening tool. The computer-based Talking Touchscreen used the TOFHLA, which measures comprehension using a Cloze question format.40 These questions may be inherently different from the questions used by Sinadinovic et al., who were screening for alcohol and drug use. Interestingly, HLSQMs were developed using a methodology appropriate for screening stigmatized behavior, such as alcohol abuse, to determine the best wording of questions, for example asking “how often” respondents had a problem rather than “if” they had a problem.34 Therefore, it is possible that questions similar to the HLSQMs are readily adaptable to an interactive computer-based format while comprehension Cloze questions are not. Further research is needed to investigate this possibility.

Evidence suggests that screening patients at all levels of health literacy during routine health care visits does not decrease patient satisfaction.49 In addition to being validated, a screening tool must be easy and quick to administer, inexpensive, and have a low false negative rate (i.e., high sensitivity) to ensure a high level of detection.27 A high false positive rate, an inherent risk of a more sensitive test, will make the tool inefficient, waste resources, and deter clinicians from using it; therefore, a balance must be found between the specificity and the sensitivity of the screening tool.27 Traditionally, administration time is the criterion that must be sufficiently met before a tool’s other psychometric properties are critiqued for screening in the clinical setting. However, administering screening tools via a computer-based format changes these criteria. A short administration time remains important, although, given that a clinician may not be responsible for directly administering the tool, the acceptable cutoff for administration time is likely more flexible.

A health literacy screening tool could be administered in a written or verbal format through computer-based applications, leveraging touch-screen functionality or interactive voice response systems where applicable48; these administration capabilities may increase the feasibility of implementing certain screening tools. For example, when the S-TOFHLA was developed in 1999, it appeared to have a significantly decreased administration time, which suggested the potential to increase the frequency of evaluating a patient’s health literacy at the point of care and to allow just-in-time, patient-tailored education by clinicians. Based on its reported psychometric properties and the extensiveness of its testing, the S-TOFHLA appears to be the best available health literacy assessment tool, yet as a paper-based assessment it may not be feasible as a clinically administered screening tool. The S-TOFHLA continues to be used in research settings but has not been widely adopted in clinical care, possibly because its 7–8 minute administration time is still long for the clinical care setting.44 Further testing is warranted to assess the S-TOFHLA’s feasibility for screening through a patient portal or mobile health application account. The 3-minute REALM test remains the most widely used screening tool in the clinical setting, yet the validity of the tool among African American and Caucasian populations has been questioned in recent years.28 Therefore, while the REALM currently remains a gold standard for comparison with newer instruments, further research is needed to assess the validity of the REALM, and the REALM-R, across different populations, racial or otherwise. Moreover, the REALM, which assesses the patient’s proper pronunciation of medical words, in all likelihood is not easily adaptable to a computer-based format.

The NVS reports an average administration time of 2.9 minutes, one of the lowest reported for a health literacy screening tool; however, it is not known whether a clinician or computer application asking 1–3 HLSQMs would achieve a clinically significant further reduction in administration time. The AUROCs for the NVS are comparable to those reported for the HLSQMs (see table 1). However, no AUROCs were reported for detecting marginal health literacy using the NVS, a population also at risk for the consequences of low health literacy. At this time no recommendation can be made for widespread use of the NVS; yet, given its promising psychometric properties, further research is warranted. The NVS uses a nutrition label to assess health literacy. Psychometric analysis of a different research tool that specifically measures nutritional literacy found a difference between the constructs of health literacy, as measured by the S-TOFHLA, and nutritional literacy.50 In that study, overweight and obese patients scored lower on nutritional literacy but the same as other patients on health literacy as measured by the S-TOFHLA.50 Future work should determine the role of general health literacy measures versus disease- or domain-specific literacy measures for different patient populations.

The HLSQMs presented in this review offer great potential as a new methodology for a health literacy screening tool. Asking a patient 1 to 3 multiple choice questions appears to incur minimal administration time regardless of the mode of administration. Differences in the AUROCs between Chew et al.’s study and Wallace et al.’s study may be due to the different reference standards (S-TOFHLA versus REALM) used to evaluate the effectiveness of the questions.33,34 Additional differences among the AUROCs for all of the HLSQMs are likely due to the inconsistencies that still exist among the questions (see table 1). Interestingly, many of these questions were developed independently, and in many cases the studies did not reference each other. These questions require further testing to determine consensus on the best phrasing and the ideal question or combination of questions. Yet the AUROCs reported for both inadequate and marginal health literacy detection, despite a lack of consistency in the questions, demonstrate a potentially ideal methodological approach for a health literacy screening tool (see table 1). In 2010, Daniel et al. did not detect a statistically significant relationship between the screening questions and a sub-set of the TOFHLA or the comprehension index they developed.46 However, validated measurements were not used, or were only partially used, and the study was published only in abstract format; therefore, further research is needed to corroborate these findings. Given the state of the evidence, the adaptation and validation of a computer-based tool that uses HLSQMs with an audio-visual feature (similar to interacting with an automated attendant during a customer service phone call) may be a timely intervention.

Additionally, a computer-based health literacy screening tool may be appropriate to administer pre-appointment in the waiting room of a provider’s office or clinic. This would be a good use of waiting time and is likely a time when patients are motivated to seek health information.11 Furthermore, the internet has the potential to transform the clinician–patient relationship.25 Expert consensus posits that the concept of health literacy should be expanded to include characteristics of the health professional, in addition to the patient, that affect the relationship and the effectiveness of communication between the health professional and the patient.23 Computer-based health screening offers an opportunity for a greater role for informatics interventions focused on tailoring the delivery of health care information to meet the patient’s and the provider’s health literacy needs.

Paasche-Orlow and Wolf (2007) argued that no evidence supports the routine screening of health literacy, citing a lack of known effective interventions for patients with low health literacy and the need for clinicians to communicate health care information clearly, without jargon, to all patients regardless of literacy level.44 Paasche-Orlow and Wolf, concerned that screening may negatively impact patient care by stigmatizing and labeling patients, did not acknowledge the difference between general literacy and health literacy, nor did they provide strong evidence to support their argument that screening may cause patient harm.44 Previous research has shown that patients are accepting of health literacy screening.49 Furthermore, a recent study found that response rates were higher with internet-based than with paper-based administration when screening for the stigmatized behavior of problematic alcohol and drug use.48

However, caution regarding the implementation of routine health literacy screening is warranted at this time. The screening tools discussed in this review need further research. Additionally, to justify the usefulness of screening in the clinical setting, distinct interventions tied to screening results must be available for clinicians to provide.44 The purpose of the eHEALS tool addresses some of these concerns. Patients are increasingly consumers of electronic health information through unassisted internet searches and may be directed to specific websites by clinicians. Therefore, the increasing use of eHealth interventions by clinicians, including telehealth and personal health records, will necessitate screening a patient’s eHealth literacy.47 However, it is important to distinguish between eHealth literacy and health literacy. eHealth literacy is not a static, objective assessment; eHealth literacy levels will constantly be in flux as technology changes.11 Therefore, incorporating a validated assessment of health literacy into eHealth assessments and tools is necessary to track the different types of literacy. Distinguishing these validated and reliable measures within a standard eHealth assessment will provide evidence indicating whether interventions and tailored technologies should be targeted at low health literacy, low technological literacy, or a combination of the two.

With further research, we see the potential for health literacy screening at the point of a patient’s initial access to eHealth and mobile health applications, with the desired intervention of building applications tailored to an individual’s health literacy needs. Furthermore, health literacy screening, made feasible on a large scale through computer-administered modalities, will inform clinicians and researchers about the literacy levels of patient populations in order to foster the development and appropriate use of patient education materials and target known needs within a community. Additionally, health literacy screening should inform clinicians about individual patients who are likely to need extra teaching or who may not benefit from eHealth solutions without assistance. These patients are not only at risk of not understanding their health information during the visit at the time of screening, but will continue to be at risk as they attempt to navigate the health care system and health information technology in the future. Cost-effective interventions targeted at low health literacy individuals may include low-technology interventions, such as follow-up telephone calls, adult health education classes, or the provision of standardized dosing instruments to prevent medication administration errors at home,51 as well as high-technology interventions such as computer-based adaptive learning.

VI. CONCLUSION

At this time, a screening tool that can be used across different socioeconomic and cultural populations is still needed. Current health literacy screening tools demonstrate different beneficial properties depending on the context of use. Additionally, evidence-based interventions to tailor electronic health information will be necessary to support individuals who are screened in the eHealth setting and identified as having low health literacy skills.

We conclude that HLSQMs are a potentially effective and feasible methodology to detect patients at risk for inadequate and marginal health literacy. Additionally, the administration of these questions may be readily adapted to a computer-based format through audio-visual interaction. Considerable work has been performed to develop valid and reliable paper-based health literacy instruments; it is time to leverage and transition this work for validation in the setting of computer-based administration. We recommend that health literacy instrument development and refinement focus on a combination of validated and reliable HLSQM and eHEALS questions for the purpose of developing a computer-based instrument to screen individuals accessing eHealth applications. Further research should determine the best HLSQM screening question(s) and the best synthesis of eHEALS methodologies for computer-based health literacy screening and assessment. The continued research of valid, reliable, and feasible computer-based screening instruments and evidence-based interventions should provide clinicians and patients with the tools to decrease health disparities attributed to low health literacy.

Highlights.

There are a variety of validated paper-based health literacy screening instruments.

There is a lack of information regarding the psychometric properties of computer-based health literacy instruments.

Screening questions may be a valid and feasible approach to health literacy screening for use in computer-based tools.

Further research should determine the best combination of screening question(s) for computer-based tools.

ACKNOWLEDGMENTS

This project was supported by National Institute for Nursing Research T32NR007969 and P30 NR010677. Dr. Collins is supported by T15 LM 007079.

REFERENCES

  • 1.Parker RM. Health literacy: a challenge for American patients and their health care providers. Health Promot International. 2000;15:277–283. [Google Scholar]
  • 2.Kahn JS, Aulakh V, Bosworth A. What it takes: characteristics of the ideal personal health record. Health Aff (Millwood) 2009;28:369–376. doi: 10.1377/hlthaff.28.2.369. [DOI] [PubMed] [Google Scholar]
  • 3.Editorial. The health illiteracy problem in the USA. Lancet. 2009;374:2028. doi: 10.1016/S0140-6736(09)62137-1. [DOI] [PubMed] [Google Scholar]
  • 4.Baer J, Kutner M, Sabatini J. Basic Reading Skills and the Literacy of America’s Least Literate Adults: Results from the 2003 National Assessment of Adult Literacy (NAAL) Supplemental Studies. Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education; 2009. [Google Scholar]
  • 5.Rudd R, Kirsch I, Yamamoto K. Literacy and Health in America. Princeton, NJ: Educational Testing Service; 2004. [Google Scholar]
  • 6.Weiss BD. Health literacy: an important issue for communicating health information to patients. Zhonghua Yi Xue Za Zhi (Taipei) 2001;64:603–608. [PubMed] [Google Scholar]
  • 7.Baker DW, Wolf MS, Feinglass J, Thompson JA, Gazmararian JA, Huang J. Health literacy and mortality among elderly persons. Arch Intern Med. 2007;167:1503–1509. doi: 10.1001/archinte.167.14.1503. [DOI] [PubMed] [Google Scholar]
  • 8.Herndon J, Chaney M, Carden D. Health Literacy and Emergency Department Outcomes: A Systematic Review. Ann Emerg Med. 2010 doi: 10.1016/j.annemergmed.2010.08.035. [DOI] [PubMed] [Google Scholar]
  • 9.Sarkar U, Karter A, Liu J, et al. The literacy divide: health literacy and the use of an internet-based patient portal in an integrated health system-results from the diabetes study of northern California (DISTANCE) J Health Commun. 2010;15(Suppl 2):183–196. doi: 10.1080/10810730.2010.499988. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Isham G. Opportunity at the Intersection of Quality Improvement, Disparities Reduction, and Health Literacy. In: IoMURoH, eds., editor. Implementation IoMUFotSoHCQIa, Disparities IoMURoH, Literacy. Washington, DC: National Academies Press; 2009. Toward Health Equity and Patient-Centeredness: Integrating Health Literacy, Disparities Reduction, and Quality Improvement: Workshop Summary. [Google Scholar]
  • 11.Hernandez L. Health Literacy, eHealth, and Communication: Putting the Consumer First: Workshop Summary. Washington, DC: Institute of Medicine; 2009. [PubMed] [Google Scholar]
  • 12.S. F. Mobile Health 2010. Washington, DC: 2010. [Google Scholar]
  • 13.Ahern D, Kreslake J, Phalen J. What is eHealth (6): perspectives on the evolution of eHealth research. J Med Internet Res. 2006;8:e4. doi: 10.2196/jmir.8.1.e4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Miller EA, West DM. Where's the revolution? Digital technology and health care in the internet age. J Health Polit Policy Law. 2009;34:261–284. doi: 10.1215/03616878-2008-046. [DOI] [PubMed] [Google Scholar]
  • 15.Nielsen-Bohlman L, Panzer AM, Kindig DA. In: Health Literacy, A Prescription to End Confusion. Medicine Io, ed., editor. Washington, DC: National Academies Press; 2004. [PubMed] [Google Scholar]
  • 16.Bass PF, Wilson JF, Griffith CH, Barnett DR. Residents' ability to identify patients with poor literacy skills. Acad Med. 2002;77:1039–1041. doi: 10.1097/00001888-200210000-00021. [DOI] [PubMed] [Google Scholar]
  • 17.Baker DW, Williams MV, Parker RM, Gazmararian JA, Nurss J. Development of a brief test to measure functional health literacy. Patient Education and Counseling. 1999;38:33–42. doi: 10.1016/s0738-3991(98)00116-5. [DOI] [PubMed] [Google Scholar]
  • 18.Baker DW, Johnson JT, Velli SA, Wiley C. Congruence between education and reading levels of older persons. Psychiatric Services. 1996;47:194–196. doi: 10.1176/ps.47.2.194. [DOI] [PubMed] [Google Scholar]
  • 19.Tang PC, Lansky D. The missing link: bridging the patient-provider health information gap. Health Aff (Millwood) 2005;24:1290–1295. doi: 10.1377/hlthaff.24.5.1290. [DOI] [PubMed] [Google Scholar]
  • 20.Bates DW, Bitton A. The future of health information technology in the patient-centered medical home. Health Aff (Millwood) 2010;29:614–621. doi: 10.1377/hlthaff.2010.0007. [DOI] [PubMed] [Google Scholar]
  • 21.Norman C, Skinner H. eHealth Literacy: Essential Skills for Consumer Health in a Networked World. J Med Internet Res. 2006;8:e9. doi: 10.2196/jmir.8.2.e9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Parker RM, Baker DW, Williams MV, Nurss JR. The Test of Functional Health Literacy in Adults - A New Instrument for Measuring Patients Literacy Skills. Journal of General Internal Medicine. 1995;10:537–541. doi: 10.1007/BF02640361. [DOI] [PubMed] [Google Scholar]
  • 23.Pleasant A, McKinney J. Coming to consensus on health literacy measurement: an online discussion and consensus-gauging process. Nurs Outlook. 2011;59:95.e1–106.e1. doi: 10.1016/j.outlook.2010.12.006. [DOI] [PubMed] [Google Scholar]
  • 24.Kaplan B, Brennan PF. Consumer informatics supporting patients as co-producers of quality. J Am Med Inform Assoc. 2001;8:309–316. doi: 10.1136/jamia.2001.0080309. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.McCray AT. Promoting health literacy. J Am Med Inform Assoc. 2005;12:152–163. doi: 10.1197/jamia.M1687. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Lee SY, Bender DE, Ruiz RE, YI C, Lee SD, et al. Development of an easy-to-use spanish health literacy test. Health Services Research. 2006;41:1392–1412. doi: 10.1111/j.1475-6773.2006.00532.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
27. Jull A. Evaluation of studies of assessment and screening tools, and diagnostic tests. Evidence-Based Nursing. 2002;5:68–72. doi: 10.1136/ebn.5.3.68.
28. Shea JA, Beers BB, McDonald VJ, Quistberg DA, Ravenell KL, Asch DA. Assessing Health Literacy in African American and Caucasian Adults: Disparities in Rapid Estimate of Adult Literacy in Medicine (REALM) Scores. Fam Med. 2004;36:575–581.
29. Davis TC, Long SW, Jackson KM, et al. Rapid estimate of adult literacy in medicine: a shortened screening instrument. Fam Med. 1993;25:391–395.
30. Bass PF, Wilson JF, Griffith CH. A shortened instrument for literacy screening. J Gen Intern Med. 2003;18:1036–1038. doi: 10.1111/j.1525-1497.2003.10651.x.
31. Osborn CY, Weiss BD, Davis TC, et al. Measuring Adult Literacy in Health Care: Performance of the Newest Vital Sign. Am J Health Behav. 2007;31:S36–S46. doi: 10.5555/ajhb.2007.31.supp.S36.
32. Weiss BD, Mays MZ, Martz W, et al. Quick assessment of literacy in primary care: The Newest Vital Sign. Annals of Family Medicine. 2005;3:514–522. doi: 10.1370/afm.405.
33. Wallace LS, Rogers ES, Roskos SE, Holiday DB, Weiss BD. Brief report: Screening items to identify patients with limited health literacy skills. J Gen Intern Med. 2006;21:874–877. doi: 10.1111/j.1525-1497.2006.00532.x.
34. Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Family Medicine. 2004;36:588–594.
35. Chew LD, Griffin JM, Partin MR, et al. Validation of screening questions for limited health literacy in a large VA outpatient population. J Gen Intern Med. 2008;23:561–566. doi: 10.1007/s11606-008-0520-5.
36. Morris NS, MacLean CD, Chew LD, Littenberg B. The Single Item Literacy Screener: Evaluation of a brief instrument to identify limited reading ability. BMC Family Practice. 2006;7:21. doi: 10.1186/1471-2296-7-21.
37. Jeppesen KM, Coyle JD, Miser WF. Screening questions to predict limited health literacy: a cross-sectional study of patients with diabetes mellitus. Ann Fam Med. 2009;7:24–31. doi: 10.1370/afm.919.
38. Bennett IM, Robbins S, Al-Shamali N, Haecker T. Screening for low literacy among adult caregivers of pediatric patients. Fam Med. 2003;35:585–590.
39. Lobach DF, Hasselblad V, Wildemuth BM. Evaluation of a tool to categorize patients by reading literacy and computer skill to facilitate the computer-administered patient interview. AMIA Annu Symp Proc. 2003:391–395.
40. Yost KJ, Webster K, Baker DW, Choi SW, Bode RK, Hahn EA. Bilingual health literacy assessment using the Talking Touchscreen/la Pantalla Parlanchina: Development and pilot testing. Patient Educ Couns. 2009;75:295–301. doi: 10.1016/j.pec.2009.02.020.
41. Laramee AS, Morris N, Littenberg B. Relationship of literacy and heart failure in adults with diabetes. BMC Health Services Research. 2007;7:98. doi: 10.1186/1472-6963-7-98.
42. Baker DW, Wolf MS, Feinglass J, Thompson JA, Gazmararian JA, Huang J. Health literacy and mortality among elderly persons. Archives of Internal Medicine. 2007;167:1503–1509. doi: 10.1001/archinte.167.14.1503.
43. Wallace LS, Cassada DC, Rogers ES, et al. Can screening items identify surgery patients at risk of limited health literacy? Journal of Surgical Research. 2007;140:208–213. doi: 10.1016/j.jss.2007.01.029.
44. Paasche-Orlow MK, Wolf MS. Evidence does not support clinical screening of literacy. J Gen Intern Med. 2007. doi: 10.1007/s11606-007-0447-2.
45. Shah LC, West P, Bremmeyr K, Savoy-Moore RT. Health literacy instrument in family medicine: the "Newest Vital Sign" ease of use and correlates. J Am Board Fam Med. 2010;23:195–203. doi: 10.3122/jabfm.2010.02.070278.
46. Daniel D, Greene J, Peters E. Screening question to identify patients with limited health literacy not enough. Fam Med. 2010;42:7–8.
47. Norman CD, Skinner HA. eHEALS: The eHealth Literacy Scale. Journal of Medical Internet Research. 2006;8:e27. doi: 10.2196/jmir.8.4.e27.
48. Sinadinovic K, Wennberg P, Berman AH. Population screening of risky alcohol and drug use via Internet and Interactive Voice Response (IVR): A feasibility and psychometric study in a random sample. Drug Alcohol Depend. 2010. doi: 10.1016/j.drugalcdep.2010.09.004.
49. Ryan JG, Leguen F, Weiss BD, et al. Will patients agree to have their literacy skills assessed in clinical practice? Health Educ Res. 2007. doi: 10.1093/her/cym051. In press.
50. Diamond JJ. Development of a reliable and construct valid measure of nutritional literacy in adults. Nutr J. 2007;6:5. doi: 10.1186/1475-2891-6-5.
51. Yin HS, Dreyer BP, Foltin G, van Schaick L, Mendelsohn AL. Association of low caregiver health literacy with reported use of nonstandardized dosing instruments and lack of knowledge of weight-based dosing. Ambul Pediatr. 2007;7:292–298. doi: 10.1016/j.ambp.2007.04.004.