Author manuscript; available in PMC: 2019 Jun 17.
Published in final edited form as: Female Pelvic Med Reconstr Surg. 2019 Mar-Apr;25(2):145–148. doi: 10.1097/SPV.0000000000000672

Considering Low Health Literacy: How Do the Pelvic Floor Distress Inventory—Short Form 20 and Pelvic Floor Impact Questionnaire—Short Form 7 Measure Up?

Jordan Spencer*, Kristie Hadden, Heidi Brown, Sallie S Oliphant*
PMCID: PMC6572727  NIHMSID: NIHMS1033959  PMID: 30807417

Abstract

Objective

The objective of this study was to evaluate the readability and understandability of 2 commonly used pelvic floor disorder questionnaires, Pelvic Floor Distress Inventory—Short Form 20 (PFDI-20) and Pelvic Floor Impact Questionnaire—Short Form 7 (PFIQ-7), in a low health literacy patient population.

Methods

Flesch-Kincaid, SMOG, Fry, and FORCAST readability assessment tools were used to assign US grade levels to each questionnaire (PFDI-20, PFIQ-7). Two health literacy experts used PEMAT and ELF-Q tools to determine understandability, organization, content, and quality of each form. A focus group of women with low health literacy used Stop Light Coding and a facilitator-prompted discussion to further evaluate understandability and critique the forms.

Results

The PFIQ-7 required higher reading ability compared with PFDI-20 (ninth to 11th vs sixth to eighth mean grade level equivalents). Expert and focus group reviews identified concerns regarding purpose, formatting, and word choice in both forms. Focus group participants recommended assistance with questionnaire completion from clinical staff and gave mean overall ratings of 5.4 (on a 1–10 scale, worst to best) for the PFDI-20 and 8.0 for the PFIQ-7.

Conclusions

Knowledge of potential barriers to understanding and completion may improve utilization of and accuracy of patient responses to PFDI-20 and PFIQ-7 in women with low health literacy.

Keywords: health literacy, PFDI-20, PFIQ-7, prolapse, incontinence


The World Health Organization defines health literacy as “the cognitive and social skills which determine the motivation and ability of individuals to gain access to, understand, and use information in ways which promote and maintain good health.”1 An individual’s health literacy status directly impacts her ability to navigate and interface with the increasingly complex systems of modern health care delivery.2 Approximately one third of the US population has basic or below basic health literacy skills, and only 12% possess proficient health literacy.3 Studies have shown that lower health literacy is associated with poor health outcomes and poor compliance with care plans.4,5

The National Assessment of Adult Literacy translated national survey results into an average reading level of eighth grade or below for US adults, so the National Academy of Sciences and the National Institutes of Health recommend that patient materials be written at or below this reading level.6,7 Readability, usability, and effectiveness of print materials for patients are also affected by formatting, organization, and use of plain language.8 Print materials are commonly used in health care delivery to gather health information, to quantify symptoms and quality of life impact, and to provide health education and instruction.

In urogynecology, validated questionnaires are used in both clinical care and research to characterize pelvic floor disorder symptoms and health burden and to define changes in symptom burden over time. Previous work has shown that only half of the commonly used pelvic floor symptom questionnaires are written below the eighth grade reading level.9 Two common quality of life measures specific to pelvic floor disorders are the Pelvic Floor Distress Inventory—Short Form 20 (PFDI-20) and the Pelvic Floor Impact Questionnaire—Short Form 7 (PFIQ-7).10,11 These questionnaires are widely used internationally and have been translated and validated in multiple languages.12,13 Despite widespread incorporation of these measures, their use in women with low health literacy has not been directly assessed. Thus, the primary aim of our study was to evaluate the readability and understandability of the PFDI-20 and PFIQ-7 in female patients with low health literacy.

MATERIALS AND METHODS

We undertook a 3-part approach to investigate the use of the PFDI-20 and PFIQ-7 in women with low health literacy including (1) formal readability assessment using validated tools, (2) evaluation for readability and understandability by health literacy expert consensus, and (3) field testing for understandability using a focus group of women with low health literacy. The research team then synthesized findings from these 3 investigations to create a summary of recommendations to optimize readability and understandability of these validated instruments among urogynecology patients with low health literacy. Our institutional review board reviewed this project and deemed it exempt.

Formal Readability Assessment

We assessed readability of both pelvic floor disorder questionnaires (PFIQ-7 and PFDI-20) using 4 validated assessment tools: Flesch-Kincaid Grade Level Calculator, SMOG Readability Formula (SMOG), Fry Graph Readability Calculator, and FORCAST Readability Formulas. Flesch-Kincaid, SMOG, and Fry assess narrative text including headers, questions, and response options; FORCAST is the only readability test not designed for running narrative, making it ideal for evaluation of questionnaires and forms. Flesch-Kincaid estimates grade level based on the average number of words per sentence and average number of syllables per word in a 150 to 600 word sample.14 SMOG uses the number of words with 3 or more syllables in three 10-sentence passages.15 Fry includes the number of sentences and syllables within three 100-word passages.16 FORCAST takes into account the number of single-syllable words in a 150-word passage.17 Each tool assigned a US grade level to the PFIQ-7 and PFDI-20, generating estimates for the average adult reading level required to comprehend each form. Mean grade level scores were calculated for the 3 tools that assessed narrative for each questionnaire. FORCAST scores were not included in this calculation because this tool is designed to evaluate questionnaires whereas Flesch-Kincaid, SMOG, and Fry evaluate narrative text.
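The published formulas behind these three grade-level tools are simple arithmetic on word, sentence, and syllable counts, and can be sketched directly. The Python below is an illustration only: the vowel-group syllable counter is a crude heuristic (the validated calculators use pronunciation dictionaries and sampling rules), so its scores will not exactly match the tools used in this study.

```python
import math
import re


def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, dropping one for a silent final 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)


def _words(text: str) -> list:
    return re.findall(r"[A-Za-z]+", text)


def _sentences(text: str) -> list:
    return [s for s in re.split(r"[.!?]+", text) if s.strip()]


def flesch_kincaid_grade(text: str) -> float:
    """Grade = 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    w, s = _words(text), _sentences(text)
    syllables = sum(count_syllables(x) for x in w)
    return 0.39 * (len(w) / len(s)) + 11.8 * (syllables / len(w)) - 15.59


def smog_grade(text: str) -> float:
    """Grade = 1.043*sqrt(polysyllabic words * 30/sentences) + 3.1291."""
    poly = sum(1 for x in _words(text) if count_syllables(x) >= 3)
    return 1.043 * math.sqrt(poly * 30 / len(_sentences(text))) + 3.1291


def forcast_grade(sample: str) -> float:
    """Grade = 20 - N/10, where N is the single-syllable word count in a 150-word sample."""
    mono = sum(1 for x in _words(sample)[:150] if count_syllables(x) == 1)
    return 20 - mono / 10
```

Note the shared intuition: longer sentences and more polysyllabic words push the estimated grade up, while FORCAST instead rewards a high density of short words, which is why it suits non-narrative forms.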

Expert Assessment for Readability and Understandability

Two health literacy experts at the University of Arkansas for Medical Sciences Center for Health Literacy independently assessed both questionnaires using a standardized protocol and then reached consensus on scores using the Patient Education Materials Assessment Tool for Printable Materials (PEMAT) and the Evaluative Linguistic Framework for Questionnaires (ELF-Q).18,19 The PEMAT explores the understandability and actionability of a document (ie, the ease with which patients would be able to understand and act on the presented information). The ELF-Q assesses the quality of self-reported health questionnaires by evaluating the organization, formatting, and content. Given that the PFDI-20 and PFIQ-7 are used to gather information rather than provide patient instruction, only the understandability items in PEMAT were used in our assessment.

Focus Group Assessment for Understandability

An audio recorded focus group with a target sample of 8 to 10 women20,21 was led by 2 trained facilitators and attended by one notetaker. Female participants were selected for invitation to include those with low health literacy and age ranges reflective of the institution’s urogynecology population. Demographic data and health literacy status were collected on all participants. Four participants had been previously identified as having low health literacy using a validated single-item screening question,22 and the others were screened using the Newest Vital Sign, a validated health literacy screening instrument that assesses both literacy and numeracy.23 Facilitators trained in the Stop Light Coding method led the focus group.24 Stop Light Coding elicits a qualitative assessment of print material by asking participants to read materials along with the facilitator and color code the written documents so that words or phrases that are hard to understand are marked with a red pen, whereas those that are easy to understand are marked with a green pen. Phrases or words that are understandable but unnecessarily complicated are marked with a yellow highlighter. Participants were given paper copies of the PFDI-20 and PFIQ-7 and instructed to individually evaluate each questionnaire using Stop Light Coding. All participants coded the same materials at the same time. Group discussion created a final consensus to capture the group’s feedback on both questionnaires.

The facilitators then led an open discussion using open-ended prompts developed a priori to assess the participants’ impressions of each form’s organization, readability, and understandability. Each participant was asked to provide her subjective rating for each questionnaire using a Likert scale of 1 to 10, where 1 was worst and 10 was best. Finally, participants were invited to make recommendations to improve each questionnaire without actually changing the content or format. The notetaker recorded observations of participant behavior and comments. Each participant received a US $25 gift card as compensation for participating in the focus group.

Integrated Synthesis

The research team incorporated summary findings from each of the 3 investigations described previously to create recommendations informed by these diverse perspectives. Data from formal readability testing and expert consensus regarding readability and understandability were incorporated with focus group audio recording and notes. Recommendations to improve document readability and understandability were generated and refined through team discussion using an iterative approach.

RESULTS

Formal Readability Assessment

The results of the readability assessment are summarized in Table 1. Grade level scores vary owing to the differences in readability formulas, but overall, a higher reading level is required to interpret the PFIQ-7 compared with the PFDI-20: 11th versus sixth grade for the mean of the narrative scores (Flesch-Kincaid, SMOG, and Fry) and ninth versus eighth for the overall questionnaire score (FORCAST).

TABLE 1.

Grade Level Scores for the PFDI-20 and PFIQ-7 Based on Formal Readability Assessment

                            Narrative                              Entire Form
Assessment Tool   Flesch-Kincaid   SMOG   Fry    Mean    FORCAST   Overall Range
PFDI-20           5.7              6.5    5      5.7     8.1       5th to 8th
PFIQ-7            10.6             9.5    15+    11.7    9.44      9th to 15th+

Scores reflect mean US grade level equivalent.

Expert Assessment for Readability and Understandability

Expert consensus for the PFDI-20 understandability based on PEMAT noted the following concerns: purpose of form not clear from instructions provided; use of complex medical jargon may be difficult for patients to understand; and 2-part formatting of questions could be hard to follow and complete correctly. Using PEMAT to assess the PFIQ-7, the experts concluded again that the purpose for completion was not clear and that the grid layout of questions and symptom types might be confusing. They praised the minimal use of medical jargon but expressed concern that patients could have difficulty distinguishing between “somewhat” and “moderately” on the Likert scale.

Using the ELF-Q tool for both the PFDI-20 and PFIQ-7, the experts concluded that the forms would benefit from a clearly stated purpose and more instructions to ensure correct completion. For example, in the PFDI-20, participants may not intuitively understand that the Likert scale is meant to indicate severity of bother rather than frequency of symptom occurrence. They expressed concern regarding formatting and flow of items as they perceived that patients might become confused about which pelvic floor condition they were to consider for each question. For example, in the PFDI-20 questions 4 to 7, question 4 asks about defecation, questions 5 and 6 ask about urination, and question 7 goes back to defecation.

Focus Group Assessment for Understandability

A total of 9 English-speaking women with a median age of 53 years (range, 34–81 years), all of whom self-identified as African American, participated in the focus group. Only 1 had previously received care for a pelvic floor disorder. Eighty-nine percent (8/9) of participants had low health literacy, although 78% had attended at least some college and 22% were college graduates.

For the PFDI-20, participants felt both the purpose and the instructions for completion were clear but had trouble understanding many of the terms used, such as pressure, heaviness, dullness, lose gas, and bulge. Some participants misinterpreted the Likert scale as referencing frequency of symptom occurrence rather than level of bother. They disliked the form’s length and format and preferred a table format more similar to the PFIQ-7. The average participant rating for the PFDI-20 was 5.4 of 10, where 1 was worst and 10 was best. For the PFIQ-7, participants understood the majority of the questions and liked the table format. One participant struggled with recalling the primary question prompt throughout completion, answering how well she could do each activity instead of how much each group of symptoms affected her ability to do the activity in question. The average participant rating for the PFIQ-7 was 8.0 of 10.

For both questionnaires, participants pointed out that a 3-month retrospective time frame of symptoms was challenging. They also recommended the use of large font to improve visualization for older patients. Their consensus was that a staff member should personally provide more detailed instructions for completion than the standard text instructions provide and should directly assist patients with completion. When asked about preferences for paper versus electronic or tablet completion, participants uniformly expressed concern that patients would have difficulty completing these forms using an electronic device. Interestingly, however, all participants expressed comfort with use of electronic devices or tablets themselves and did not feel they, personally, would have any difficulty completing these forms electronically.

Integrated Synthesis

Although the PFIQ-7 required a higher reading level than the PFDI-20 according to all 4 validated readability assessment tools, both health literacy experts and focus group participants with low health literacy identified more issues with the PFDI-20 than the PFIQ-7. Health literacy experts and focus group participants identified confusing terms in the PFDI-20 and expressed concern that its question structure was confusing because it asks about level of bother rather than frequency of symptom occurrence. Focus group participants with low health literacy preferred the table format of the PFIQ-7 to the physical layout of the PFDI-20 and rated the PFIQ-7 much more highly (8.0 vs 5.4 on a 1–10 scale where 10 is best).

Recommendations to improve document readability and understandability among women with low health literacy are outlined in Table 2. Emphasizing certain words in the instructions by using bold, italicized, or underlined fonts may improve comprehension of instructions. Similarly, patient understanding may be optimized by offering assistance from clinical staff to help patients define confusing terms or complete questionnaires correctly.

TABLE 2.

Recommendations for Improving Understanding of the PFDI-20 and PFIQ-7 Among Women With Low Health Literacy

Purpose Emphasize instructions:
 > PFIQ-7: degree to which the symptom affects the ability to do each activity (not frequency of activity)
 > PFDI-20: degree to which the symptom bothers patient (not frequency of symptom occurrence)
Emphasize that responses will supplement discussions with provider (clinical applications).
Emphasize that symptoms should be reported over last 3 mo.
Language Invite participant to seek clarification for any unclear terms or phrases.
Layout and administration Use forms with 12 point or larger font.
Consider providing example forms to demonstrate appropriate completion.
Offer options for assisted completion (allow participants to select):
 > Reading aloud to participant with participant viewing and completing her own copy.
 > Reading aloud to participant with participant viewing answer choices only with assisted form completion.
 > Reading aloud to participant and assisting with form completion.
Offer tablet/electronic completion.

DISCUSSION

Our thorough evaluation of the PFDI-20 and the PFIQ-7 revealed significant limitations for their use in a population of women with low health literacy. Both questionnaires require reading levels higher than that recommended by the National Institutes of Health for patient written materials, with the PFIQ-7 requiring a high school reading level. Both health literacy experts and women with low health literacy identified concerns regarding communication of purpose, organization, and language for both the PFDI-20 and the PFIQ-7. Women with low health literacy recommended that clinical or research staff assist patients or research participants with form completion to overcome these potential sources for confusion. We recognize that assistance with questionnaire completion is not feasible in many settings where these questionnaires are used but should be considered when available, particularly in populations with lower health literacy.

Multiple socioeconomic factors contribute to health literacy, including financial status, race, educational attainment, and age, and health literacy is an important mediator of health disparities.25–27 The majority of focus group participants in our study had attended at least some college but had low health literacy, underscoring the difference between formal education and health literacy. We hope that awareness of the potential barriers to understanding and completion of the PFDI-20 and PFIQ-7, even among women with some college education, will prompt clinicians and researchers to consider the role of health literacy in our delivery of care and conduct of research. We have emphasized the importance of patient-reported outcomes and subjective measures of symptom improvement in our clinical trials for decades, but the questionnaires we use may not be appropriate for most of our patients.

To minimize the potential for health disparities in urogynecology clinical care and research, we suggest tailoring questionnaires and patient education materials to reach patients of both high and low health literacy. Adding a statement inviting patients or participants to ask a staff member about any confusing words or instructions will not compromise the validity of the instruments that serve as a cornerstone of our treatment evaluation, and may improve understandability for all of our patients. Similarly, emphasizing instructions through bold, italicized, or underlined text may improve comprehension.

A strength of our study is our 3-pronged approach, assessing the PFDI-20 and PFIQ-7 using readability formulas, standardized understandability assessment by health literacy experts, and evaluation by women with low health literacy. A potential weakness of our study is that we did not have a focus group assessment using a high health literacy group; thus, we cannot make comparisons between those with high versus low health literacy. However, because we do not routinely assess health literacy in clinical or research interactions, it is important that we improve questionnaire administration for women of all literacy levels. Our study is limited because we explored only 2 of the many commonly used pelvic floor symptom questionnaires. Future work should address the use of additional measures in women with inadequate health literacy, with particular focus on those measures captured in large-scale registries, such as the American Urogynecologic Society’s Urogynecology Quality Measure. Another potential weakness of our study is that all of our focus group participants were African American, which could limit the generalizability of our findings. This could, however, be seen as a strength because African American women are often underrepresented in clinical research.

Recognition of the limitations of health symptom questionnaires, particularly for those with low health literacy, is essential to accurate data collection. We anticipate that improved patient understanding of these questionnaires will allow more accurate and meaningful data collection for clinical and research applications.

ACKNOWLEDGMENT

The authors thank the Center for Health Literacy, University of Arkansas for Medical Sciences.

Footnotes

The authors have declared they have no conflicts of interest.

REFERENCES

  • 1.World Health Organization (7th Global Conference on Health Promotion, 2009). Track 2: Health Literacy and Health Behaviour. Available at: http://www.who.int/healthpromotion/conferences/7gchp/track2/en/2. Accessed July 19, 2018.
  • 2.Nutbeam D Health literacy as a public health goal: a challenge for contemporary health education and communication strategies into the 21st century. Health Promot Int 2000;15(3):259–267. [Google Scholar]
  • 3.Kutner M, Greenberg E, Jin Y, et al. The Health Literacy of America’s Adults: Results From the 2003 National Assessment of Adult Literacy (NCES 2006–83). U.S. Department of Education. Washington, DC: National Center for Education Statistics; 2006. [Google Scholar]
  • 4.Berkman ND, Sheridan SL, Donahue KE, et al. Low health literacy and health outcomes: an updated systematic review. Ann Intern Med 2011;155(2):97–107. [DOI] [PubMed] [Google Scholar]
  • 5.Hälleberg Nyman M, Nilsson U, Dahlberg K, et al. Association between functional health literacy and postoperative recovery, health care contacts, and health-related quality of life among patients undergoing day surgery: secondary analysis of a randomized clinical trial. JAMA Surg 2018;153(8):738–745. doi: 10.1001/jamasurg.2018.0672. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.National Institutes of Health (2017). How to Write Easy-to-Read Health Materials: Medlineplus. Available at: https://medlineplus.gov/etr.html. Accessed March 23, 2018.
  • 7.Institute of Medicine (US) Committee on Health Literacy Neilsen-Bohlman L, Panzer A, Kindig D. Health Literacy: A Prescription to End Confusion. Washington, DC: National Academies Press; 2004. [PubMed] [Google Scholar]
  • 8.Centers for Medicare and Medicaid Services. Toolkit for Making Written Material Clear and Effective. Available at: https://www.cms.gov/Outreach-and-Education/Outreach/WrittenMaterialsToolkit/Downloads/ToolkitPart03.pdf. Accessed August 6, 2018.
  • 9.Alas AN, Bergman J, Dunivan GC, et al. Readability of common health-related quality-of-life instruments in female pelvic medicine. Female Pelvic Med Reconstr Surg 2013;19(5):293–297. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Barber MD, Kuchibhatla MN, Pieper DF, et al. Psychometric evaluation of 2 comprehensive condition-specific quality of life instruments for women with pelvic floor disorders. Am J Obstet Gynecol 2001;185(6):1366–1395. [DOI] [PubMed] [Google Scholar]
  • 11.Barber MD, Walters MD, Bump RC. Short forms of two condition-specific quality-of-life questionnaires for women with pelvic floor disorders (PFDI-20 and PFIQ-7). Am J Obstet Gynecol 2005;193(1):103–113. [DOI] [PubMed] [Google Scholar]
  • 12.Barber MD, Walters MD, Cundiff GW, et al. Responsiveness of the pelvic floor distress inventory (PFDI) and pelvic floor impact questionnaire (PFIQ) in women undergoing vaginal surgery and pessary treatment for pelvic organ prolapse. Am J Obstet Gynecol 2006;194(5):492–498. [DOI] [PubMed] [Google Scholar]
  • 13.Sánchez Sánchez B, Torres Lacomba M, Navarro Brazález B, et al. Responsiveness of the Spanish pelvic floor distress inventory and pelvic floor impact questionnaires short forms (PFDI-20 and PFIQ-7) in women with pelvic floor disorders. Eur J Obstet Gynecol Reprod Biol 2015;190: 20–25. [DOI] [PubMed] [Google Scholar]
  • 14.Flesch R A new readability yardstick. J Appl Psychol 1948;32(3): 221–233. [DOI] [PubMed] [Google Scholar]
  • 15.McLaughlin H SMOG grading — a new readability formula. Journal of Reading 1969;639–646. [Google Scholar]
  • 16.Fry E. Fry’s readability graph: clarifications, validity, and extension to level 17. J Read 1977;21(3):242–252. [Google Scholar]
  • 17.Caylor JS, Sticht TG, Ford JP. The FORCAST readability formula. Literacy Discussion. International Institute for Adult Literacy: UNESCO; 1973. [Google Scholar]
  • 18.Agency for Healthcare Research and Quality. The Patient Education Materials Assessment Tool (PEMAT) and User’s Guide. 2017. Available at: http://www.ahrq.gov/professionals/prevention-chronic-care/improve/self-mgmt/pemat/index.html. Accessed August 12, 2018.
  • 19.Clerehan R, Guillemin F, Epstein J, et al. Using the evaluative linguistic framework for questionnaires to assess comprehensibility of self-report health questionnaires. Value Health 2016;19(4):335–342. [DOI] [PubMed] [Google Scholar]
  • 20.Nyumba T, Wilson K, Derrick C, et al. The use of focus group discussion methodology: insights from two decades of application in conservation. Methods Ecol Evol 2018;9:20–32. [Google Scholar]
  • 21.Bender D, Ewbank D. The focus group as a tool for health research: issues in design and analysis. Health Trans Review 1994;4(1):63–80. [PubMed] [Google Scholar]
  • 22.Chew LD, Griffin JM, Partin MR, et al. Validation of screening questions for limited health literacy in a large VA outpatient population. J Gen Intern Med 2008;23(5):561–566. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Pfizer. The Newest Vital Sign. Available at: https://www.pfizer.com/health/literacy/public-policy-researchers/nvs-toolkit. Accessed August 12, 2018.
  • 24.Hadden K The stoplight method: a qualitative approach for health literacy research. Health Literacy Research and Practice 2017;1(2):e18–e22. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Rikard RV, Thompson MS, McKinney J, et al. Examining health literacy disparities in the United States: a third look at the National Assessment of Adult Literacy (NAAL). BMC Public Health 2016; 16(1):975. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Howard DH, Sentell T, Gazmararian JA. Impact of health literacy on socioeconomic and racial differences in health in an elderly population. J Gen Intern Med 2006;21(8):857–861. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Ayotte BJ, Allaire JC, Bosworth H. The associations of patient demographic characteristics and health information recall: the mediating role of health literacy. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2009;16(4):419–432. [DOI] [PubMed] [Google Scholar]
