Abstract
Objective:
Natural disasters are becoming increasingly common, but it is unclear whether families can comprehend and use available resources to prepare for such emergencies. The objective of this study was to evaluate the literacy demands of risk communication materials on natural disasters for U.S. families with children.
Methods:
In January 2018, we assessed 386 online self-directed learning resources related to emergency preparedness for natural disasters using five literacy assessment tools. Assessment scores were compared by information source, audience type, and disaster type.
Results:
One in three websites represented government institutions, and nearly three-quarters were written for a general audience. Nearly one in five websites did not specify a disaster type. Assessment scores suggest a mismatch between the general population’s literacy levels and the literacy demands of materials in the areas of readability, complexity, suitability, web usability, and overall audience-appropriateness. Materials required more years of education than the grade level recommended by prominent health organizations. Resources for caregivers of children generally and of children with special health care needs had lower literacy demands than materials overall on most assessment tools.
Conclusions:
Risk communication and public health agencies could better align the literacy demands of emergency preparedness materials with the literacy capabilities of the general public.
Keywords: Disaster Literacy, Natural Disasters, Risk Communication, Emergency Preparedness, Children
Natural disasters such as hurricanes, floods, and wildfires are increasing in frequency and intensity, highlighting the need for advance planning by communities.1,2 Although many government, nonprofit, and private actors are involved in emergency preparedness planning, it is the preparedness knowledge and skills of individual families that form the foundation of a prepared community.3 The role of families and caregivers has garnered particular interest in the emergency management sector, as they are essential for preventing adverse disaster-related consequences among the children who depend on them.4,5
Children are often disproportionately impacted by public health emergencies due to their unique physiological, social, and physical characteristics (e.g., faster respiratory rate).4,6 Further, children with special healthcare needs (CSHCN) may face additional difficulty in disasters, as they often require specialized equipment, medications, or other considerations distinct from typically developing peers.7 To make informed decisions about child health and safety, parents and other primary caregivers often turn to the internet as a resource.8 Likewise, community emergency planners and health systems use various risk communication channels to educate families about public health threats and harm mitigation strategies. Ensuring that families are able to understand available resources so that they can take action is paramount, particularly as public health emergencies can overwhelm jurisdictions’ standard communications systems.9
Recent evaluations demonstrate a mismatch between the literacy demands (i.e., cognitive skills necessary to process, understand, and act upon provided content10) of available materials and the literacy levels of vulnerable populations, such as low-income people of color;11 pregnant and postpartum women;12 and people who are deaf or hard-of-hearing.13 These populations may not be able to access standard resources offered in emergency preparedness, response, and recovery. They might also face barriers to appropriate action (e.g., developing a family communication plan or emergency kit) even when information about hazards is made available.9 Recent evidence suggests that caregivers can prepare for emergencies only if they both perceive threat and have the self-efficacy to use available resources to meet situational demands.14 Therefore, it is critical for risk communication agencies to pay attention to more than the content of disseminated materials. Addressing factors that determine individuals’ ability to comprehend and effectively use information, such as literacy demands, might help ameliorate disparities in disaster-related morbidity and mortality.
Unfortunately, evaluation efforts have often focused on only a single dimension of the cognitive demands involved in literacy (e.g., readability); not been specific to families or children; or used non-systematic methods to retrieve resources. Further, these assessments typically focus on government resources, even though families draw upon information from a range of trusted sources, including professional (e.g., American Academy of Pediatrics) and non-profit organizations.8 In recognition of these gaps, we conducted a comprehensive evaluation of the online information environment for personal- or family-level preparedness. Enhanced understanding of the literacy requirements of these sources could guide the development of appropriate messaging prior to crises, help communities tailor risk communication interventions to better reach target audiences, and ultimately keep families safe during and after disasters.9,15
Methods
We conducted a cross-sectional evaluation of web-based risk communication materials using standard search engines that parents or caregivers in the U.S. would likely use to search for resources. For the purposes of this study, we defined “parents or caregivers” as the person(s) with primary responsibility for the day-to-day care and well-being of the child; this does not include professionals such as educators. We limited our focus to emergency preparedness for natural disasters, given the increasing frequency of these crises.
To address limitations of previous research11–13 and a lack of consensus on how to comprehensively assess literacy demands,10,16 we used several evaluation instruments. We selected tools informed by five dimensions articulated in Brown et al.’s (2014)17 disaster literacy framework to assess literacy demands: readability, complexity, suitability, web usability, and overall audience appropriateness. This approach leverages the unique strengths of each tool while offsetting their respective limitations, consistent with similar studies.18
Sampling Procedure
In January 2018, we performed searches on the three most frequently used search engines, as identified via SimilarWeb.com’s (New York, NY) Top Websites Ranking function: Google (Mountain View, CA), Bing (Redmond, WA), and Yahoo! (Sunnyvale, CA). These search engines represent more than 95% of all searches conducted across the globe.18 To ensure comprehensiveness, we conducted (1) a general search for emergency preparedness, and (2) targeted searches for specific natural disaster event types. We refined our search by including terms for children and families and using search strings for the 10 most common natural disasters in the U.S. For each string, the disaster type query (e.g., “hurricane”) was coupled with a query for preparedness (i.e., “hurricane” AND “preparedness”), along with common synonyms (Table 1).
Table 1.
Concept | Search string a |
---|---|
General Search | |
Emergency Preparedness | “Emergency Preparedness” |
Disaster Preparedness | “Disaster Preparedness” |
Emergency Readiness | “Emergency Readiness” |
Disaster Readiness | “Disaster Readiness” |
Children’s Emergency Preparedness | “Emergency Preparedness” AND (Child* OR Famil*) |
Children’s Disaster Preparedness | “Disaster Preparedness” AND (Child* OR Famil*) |
Targeted Searches | |
Blizzards | (Blizzard* OR (Ice Storm*) OR (Snow Storm*) OR (Winter Storm*)) AND “Preparedness” |
Droughts | Drought* AND “Preparedness” |
Earthquakes | (Earthquake* OR Quake* OR Tremor* OR Temblor*) AND “Preparedness” |
Floods | Flood* AND “Preparedness” |
Heat Waves | (Heat Wave*) AND “Preparedness” |
Hurricanes | (Hurrican* OR Typhoon* OR (Tropical Storm*)) AND “Preparedness” |
Thunderstorms | (Thunderstorm* OR (Electrical Storm*) OR (Lightning Storm*) OR Thundershower*) AND “Preparedness” |
Tornadoes | (Tornado* OR Twister* OR Cyclon*) AND “Preparedness” |
Volcanic Eruptions | (Volcan* OR Erupt*) AND “Preparedness” |
Wildfires | (Wildfire* OR (Brush Fire*) OR (Forest Fire*)) AND “Preparedness” |
a Identical search strings were applied to Google.com, Yahoo.com, and Bing.com.
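As an illustration of the Table 1 logic (not the authors’ code), the following sketch assembles each targeted search string by joining a disaster type’s synonyms with OR and pairing the result with the preparedness concept; the dictionary contents follow Table 1, and the helper name is our own.

```python
# Hypothetical sketch: building Table 1's targeted search strings.
SYNONYMS = {
    "Blizzards": ["Blizzard*", "(Ice Storm*)", "(Snow Storm*)", "(Winter Storm*)"],
    "Hurricanes": ["Hurrican*", "Typhoon*", "(Tropical Storm*)"],
    "Tornadoes": ["Tornado*", "Twister*", "Cyclon*"],
    # ...remaining disaster types as listed in Table 1
}

def build_query(terms):
    """Join synonyms with OR, then couple with the preparedness concept."""
    return f'({" OR ".join(terms)}) AND "Preparedness"'

for disaster, terms in SYNONYMS.items():
    print(f"{disaster}: {build_query(terms)}")
```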
Inclusion Screening
For each search engine, we examined the first 30 results of each search, as internet users infrequently visit websites beyond the 30th result.19,a We excluded websites that predominantly or solely contained links to other websites, but considered websites linked from these pages for exclusion screening and subsequent coding. This yielded 1,430 websites. We did not use search customization features, such as settings that match websites to the searcher’s geographic region, and we excluded advertisements in search results.
Two authors ([Author initials, blinded]) randomly divided the list and screened each website against exclusion criteria in four categories (access barriers, wrong resource type, limited or wrong audience, or other issues) informed by previous studies.18,20 We focused on free, English-language self-directed learning resources targeted to the general public in the U.S. (Table 2). We included only websites that received ≥1 visit in the month before the evaluation (based on web traffic data from SimilarWeb.com). This process resulted in 386 unduplicated websites (27% of the original sample).
Table 2.
Exclusion Reasons | # of websites excluded | % |
---|---|---|
Access Barriers | ||
Broken link | 26 | 2.5 |
Non-English | 0 | 0.0 |
Financial cost to access | 2 | 0.2 |
Login required | 5 | 0.5 |
Wrong Resource Type | ||
Research article | 30 | 2.9 |
News article | 32 | 3.1 |
Curriculum | 7 | 0.7 |
Social media websites | 4 | 0.4 |
Podcasts/audio/video | 19 | 1.8 |
All or predominantly links to other pages | 138 | 13.2 |
Not a self-directed learning resource | 110 | 10.5 |
Limited or Wrong Audience | ||
Explicitly non-U.S. | 6 | 0.6 |
Researcher | 4 | 0.4 |
Clinician/provider/healthcare organization | 5 | 0.5 |
Prenatal/pregnant only | 0 | 0.0 |
Government document not intended for personal/family preparedness | 82 | 7.9 |
Retail website | 10 | 1.0 |
Other Issues | ||
Duplicate | 558 | 53.4 |
Infographic or PDF that could not be assessed | 6 | 0.6 |
Total | 1,044 | 100.0 |
Measures
Five literacy assessment tools were used to examine distinct domains of literacy demands: (1) Flesch-Kincaid Grade Level21 for readability, (2) Peter Mosenthal and Irwin Kirsch (PMOSE/IKIRSCH) formula22 for complexity, (3) Suitability Assessment of Materials (SAM) instrument23 for suitability, (4) National Library of Medicine and National Institute on Aging (NLM/NIA) Web Usability guidelines24 for web usability, and (5) Centers for Disease Control and Prevention (CDC) Clear Communication Index (CCI)25 for overall audience appropriateness (see Table 3 for details on each tool’s scoring range and protocol). For each communication material we also recorded information source type, audience type, and natural disaster type (see Supplemental Table 1 for protocol).
Table 3.
Assessment Tool | Construct | Scoring | Reference |
---|---|---|---|
Flesch-Kincaid Grade Level | Readability, or years of education needed (measured by grade-level) to understand a document based on sentence and word length. | Scores range from 0.0 to 12.0 representing school grade year. For example, a score of 9.3 signifies that the average person who completed the ninth grade would be able to read the material. | Kincaid et al. (1975) |
Peter Mosenthal and Irwin Kirsch (PMOSE/IKIRSCH) Formula | Complexity, based on the structure, density, and dependency of information in print materials. | Scores range from 1.0 to 5.0 as a sum of the three criteria, and are interpreted using a chart to determine complexity level (very low, low, moderate, high, very high); proficiency level (1, 2, 3, 4, or 5); and grade/schooling (grade 4 or equivalent to <8 years of schooling, grade 8 or equivalent to a high school degree, grade 12 or equivalent to some education after high school, 15 years of schooling or equivalent to a college degree, or 16 years of schooling or equivalent to a post-college degree). | Mosenthal and Kirsch (1998) |
Suitability Assessment of Materials (SAM) | Suitability, or how understandable materials are based on how easy it is to comprehend information. | The tool rates factors as superior, adequate, or not suitable in six areas (content, literacy demand, graphics, layout and type, learning stimulation and motivation, and cultural appropriateness). Scores range from 0.0 to 100.0 and the total score is interpreted as superior (70-100%), adequate/below average (40-69%), or not suitable (0-39%). | Doak, Doak, and Root (1996) |
National Library of Medicine / National Institute on Aging (NLM/NIA) Web Usability Guidelines | Web usability, or the text readability, information presentation, and ease-of-navigation of websites. | Scores range from 0.0 to 25.0, and generate a percentage for the total number of items fulfilled out of a checklist of 25 items. Items capture domains including typeface/weight, text spacing, use of color, simplicity, information presentation, organization, webpage ease-of-navigation, menus, and scrolling. | Hodes and Lindberg (2002) |
Clear Communication Index (CCI) | Overall audience appropriateness in consideration of additional dimensions such as clarity and understandability, beyond the basic characteristics that simple readability formulas capture. | The score divides the points that the material earned by the total number possible to generate a score that can be interpreted in a binary fashion: excellent (90 or above) or needs work (89 or lower). Items capture domains including main message/call to action, information design, language, state of the science, behavioral recommendations, numbers, and risk. | Baur and Prue (2014) |
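For reference, the Flesch-Kincaid Grade Level summarized in Table 3 is a fixed linear function of average sentence length and average word length in syllables (Kincaid et al., 1975):

$$\text{FKGL} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59$$

For example, a passage averaging 15 words per sentence and 1.5 syllables per word scores 0.39(15) + 11.8(1.5) − 15.59 ≈ 8.0, roughly an 8th-grade reading level.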
In certain instances, we needed to make modifications to the assessment tools’ criteria, or further operationalize items within tools in order to standardize the coding protocol across coders. We limited such changes to only those that were truly necessary; for example, the SAM includes a criterion for glossy vs. matte-based paper, which is not pertinent to viewing resources on screens and was thus not included as an item when calculating scores for that tool. These decisions were applied in a manner consistent with previous literature,13,26 and are available in Supplemental Table 2.
Literacy Assessment
[Author initials, blinded] and [Author initials, blinded] independently coded a randomly selected 10% of websites, and inter-rater reliability was calculated via percent agreement and Scott’s Pi27 for every item on each assessment instrument. Scott’s Pi was deemed an appropriate measure as it accounts for expected vs. observed agreement in the coding of nominal data by two coders. These scores were continuously monitored, with coders discussing discrepancies throughout. Data extraction began only after scores were deemed satisfactory (95% agreement on average, corresponding to a Scott’s Pi of 0.85). [Author initials, blinded] and [Author initials, blinded] then divided the list of websites in half and coded every included website with each assessment tool using a coding form matrix (available upon request).
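For two coders assigning nominal codes, Scott’s Pi compares observed agreement with the agreement expected from the pooled category distribution. A minimal sketch (not the authors’ code; the example ratings are hypothetical):

```python
# Scott's Pi for two coders on a nominal scale: (Ao - Ae) / (1 - Ae),
# where Ae is computed from category proportions pooled across both coders.
from collections import Counter

def scotts_pi(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    pooled = Counter(coder_a) + Counter(coder_b)      # joint category counts
    expected = sum((count / (2 * n)) ** 2 for count in pooled.values())
    return (observed - expected) / (1 - expected)

# e.g., two coders applying a binary assessment item to six websites:
print(round(scotts_pi([1, 1, 0, 1, 0, 1], [1, 1, 0, 0, 0, 1]), 2))  # 0.66
```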
We only considered text that was clearly a component of the page’s content, effectively excluding irrelevant text concerning website navigation (e.g., “return to top”), advertisements, or embedded social media content. If a website contained gadgets that segmented or hid portions of text to facilitate user comprehension (e.g., collapsible modules, section tabs) coders included this content in its entirety, so long as the user was not directed to a new URL when expanding sections. We excluded materials that were solely images without usable text (e.g., infographics).
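Our text-selection rules were applied by human coders, but for readers who wish to approximate them programmatically, a hypothetical sketch (the tag list and function name are our own, not part of the study protocol) might strip navigation and advertising scaffolding before scoring:

```python
# Hypothetical sketch: isolate page content, dropping non-content scaffolding.
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def extract_content_text(html):
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["nav", "header", "footer", "aside", "script", "style"]):
        tag.decompose()                      # remove navigation/ad elements
    return " ".join(soup.get_text(separator=" ").split())
```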
Data Analysis
After determining via quantile-quantile plots that the dataset was non-normally distributed, we tabulated descriptive statistics and used non-parametric tests of association. We tested whether online information literacy assessment scores differed by source, targeted audience, and type of natural disaster covered, using Kruskal-Wallis and Wilcoxon rank-sum tests. To account for multiple comparisons, we controlled all tests for false discovery rates using Benjamini and Hochberg’s procedure.28 Differences were deemed significant at p<0.05 after these adjustments. We used Stata 14.1 (StataCorp, College Station, TX) for all analyses. As this was an evaluation of publicly available websites, human subjects approval was not necessary.
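Benjamini and Hochberg’s step-up procedure28 rejects the hypotheses with the k smallest p-values, where k is the largest rank satisfying p(k) ≤ (k/m)·q for m tests at false discovery rate q. A minimal sketch with hypothetical p-values (our analyses used Stata, not this code):

```python
# Benjamini-Hochberg step-up procedure at false discovery rate q.
def benjamini_hochberg(pvals, q=0.05):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])   # indices by ascending p
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank                    # largest rank meeting the bound
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k_max
    return reject

print(benjamini_hochberg([0.002, 0.013, 0.020, 0.038, 0.210, 0.470]))
# [True, True, True, False, False, False]
```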
Results
Assessment tool scores of included websites overall, by audience type, information source, and disaster type are depicted in Table 4.
Table 4.
Values for each literacy assessment tool are presented as mean (SD).
Characteristics | N | Flesch-Kincaid b | PMOSE/IKIRSCH b | SAM c | NLM/NIA c | CDC CCI c |
---|---|---|---|---|---|---|
Overall | 386 | 9.6 (2.1) | 3.5 (1.0) | 61.6 (12.3) | 16.9 (1.7) | 66.6 (15.3) |
Information Source | ||||||
Government Institution | 119 | 9.9 (2.3) | 3.7 (1.1)† | 60.5 (13.5) | 17.1 (1.6) | 64.3 (16.1) |
News Organization | 41 | 9.1 (2.0) | 3.0 (1.0)** | 60.2 (8.3) | 15.5 (1.2)** | 64.7 (10.4) |
Educational Institution | 33 | 9.2 (0.8) | 3.6 (1.2) | 62.8 (13.2) | 16.1 (1.4)** | 68.8 (12.4) |
Healthcare Institution a | 0 | --- | --- | --- | --- | --- |
Non-Profit Organization | 81 | 9.5 (2.2) | 3.6 (1.1) | 65.0 (11.2)* | 17.1 (1.7) | 66.9 (16.0) |
Private Company | 98 | 9.4 (2.0) | 3.3 (0.6)† | 60.5 (11.6) | 16.8 (1.6) | 69.2 (13.5) |
Other | 14 | 8.6 (2.1) | 3.3 (0.5) | 59.4 (13.8) | 16.8 (2.7) | 67.0 (20.1) |
Audience Type | ||||||
Parents/caregivers of CSHCN | 18 | 8.3 (1.2)* | 5.3 (2.9)** | 68.7 (7.4)* | 16.8 (2.4) | 75.7 (10.0)* |
Parents/caregivers of children (not CSHCN) | 54 | 9.4 (2.3) | 3.7 (0.7) | 70.2 (10.7)** | 17.9 (1.6)** | 73.5 (15.5)** |
Children a | 0 | --- | --- | --- | --- | --- |
General | 272 | 9.5 (2.1) | 3.4 (0.8) | 60.2 (11.9) | 16.8 (1.5) | 65.5 (14.8) |
Other clearly specified audience | 42 | 10.6 (2.1)** | 3.5 (0.8) | 59.2 (13.8) | 16.1 (1.9)** | 63.8 (16.8) |
Disaster Type | ||||||
Blizzards | 25 | 8.9 (1.6) | 3.6 (1.3) | 58.7 (13.7) | 16.7 (1.6) | 71.3 (13.7) |
Droughts | 16 | 10.5 (2.0) | 3.0 (0.0)† | 62.9 (13.5) | 15.5 (1.6)** | 67.3 (18.7) |
Earthquakes | 39 | 8.9 (1.6)† | 3.5 (0.6) | 64.2 (10.5) | 16.7 (1.8) | 66.1 (12.3) |
Floods | 33 | 9.5 (2.0) | 3.9 (1.1)* | 60.8 (10.0) | 16.4 (2.0) | 61.1 (21.4)† |
Heat Waves | 26 | 8.7 (1.4)* | 3.4 (0.6) | 64.7 (11.5) | 17.2 (1.4) | 64.6 (13.7) |
Hurricanes | 31 | 9.5 (2.8) | 3.5 (0.6) | 61.6 (11.1) | 17.5 (1.5)† | 64.6 (13.8) |
Thunderstorms | 33 | 8.9 (1.4)† | 3.3 (0.7) | 58.4 (14.5) | 16.8 (1.3) | 62.0 (9.0) |
Tornadoes | 38 | 9.6 (1.8) | 3.4 (0.8) | 62.0 (14.5) | 17.0 (1.6) | 65.4 (16.5) |
Volcanic Eruptions | 47 | 9.9 (1.8) | 3.2 (0.4)† | 62.5 (9.7) | 16.9 (1.8) | 68.5 (13.8) |
Wildfires | 26 | 9.6 (2.0) | 3.8 (1.1) | 62.4 (17.6) | 16.6 (1.7) | 68.3 (15.9) |
General | 72 | 9.6 (1.3) | 3.2 (0.3)* | 63.1 (8.8) | 16.7 (1.3) | 68.4 (10.2) |
Notes.
CSHCN: Children with Special Health Care Needs. SD: Standard Deviation.
Flesch-Kincaid: Flesch-Kincaid Grade Level,21 Range: 0.0-12.0.
PMOSE/IKIRSCH: Peter Mosenthal and Irwin Kirsch Document Complexity Level,22 Range: 1.0-5.0.
SAM: Suitability Assessment of Materials Score,23 Range: 0.0-100.0.
NLM/NIA: National Library of Medicine/National Institute on Aging Web Usability Guidelines Score,24 Range: 0.0-25.0.
CDC CCI: Centers for Disease Control and Prevention Clear Communication Index Score,25 Range: 0.0-100.0.
* p<0.05, ** p<0.01, † p<0.10.
a Although this category was explicitly indicated in searches, no websites were ultimately included in this category.
b A lower score for this tool is “better” (i.e., indicates lower literacy demands).
c A higher score for this tool is “better” (i.e., indicates lower literacy demands).
Website Characteristics
Of the 386 included websites, government institutions represented the largest group of information sources (119/386, 30.8%). Private companies (98/386, 25.4%) and non-profit organizations (81/386, 21.0%) each comprised more than a fifth of the overall sample. Less common information source types included news organizations (41/386, 10.6%), educational institutions (33/386, 8.5%), and other sources (e.g., individuals’ personal websites; 14/386, 3.6%). No websites classified as healthcare institutions were present in the final sample.
Nearly three-quarters of website content did not specify a clear audience or user (272/386, 70.5%). Websites targeted to caregivers of children outnumbered those targeted to caregivers of CSHCN threefold (54/386 vs. 18/386, respectively). There were no websites for which children (i.e., individuals 0-18 years old) themselves were the target audience. Materials that were targeted to neither caregivers nor children, but did specify a clear target audience (e.g., childcare providers, business owners), made up 10.9% of the sample (42/386).
Most websites specified a natural disaster type; nearly one in five did not (72/386, 18.7%). Among those that focused on a specific type of disaster, volcanic eruptions were the most prevalent (47/386, 12.2%) and droughts the least prevalent (16/386, 4.1%).
Website Literacy Demand Scores
Flesch-Kincaid Grade Level.
The mean Flesch-Kincaid score for resources was grade 9.6 (SD: 2.1), indicating that, on average, available websites are readable by individuals who have completed the 9th grade.21,b There were no significant differences in Flesch-Kincaid scores across categories of information sources. Materials targeted at parents or caregivers of CSHCN scored at a significantly lower reading level than materials overall (8.3 vs. 9.6 grade levels, p=0.013), whereas those intended for other clearly specified audiences scored at a significantly higher reading level (10.6 vs. 9.6, p=0.005). Resources for heat waves were written at a significantly lower reading level (8.7 vs. 9.6, p=0.042) than the average for all materials in the evaluation. There were no other significant differences by disaster type.
PMOSE/IKIRSCH.
The mean score on the PMOSE/IKIRSCH complexity measure for included websites was 3.5 (SD: 1.0), indicating a “moderate” degree of complexity, for which a high school education is necessary to use available materials.22 We observed differences in scores by information source, audience type, and disaster type. News organizations evidenced significantly lower complexity scores than materials overall (3.0 vs. 3.5, p=0.003), whereas materials for caregivers of CSHCN demonstrated significantly higher scores (5.3 vs. 3.5, p<0.001). Materials pertaining to floods also scored higher (3.9 vs. 3.5, p=0.038). Three other disaster type categories (droughts, volcanic eruptions, and general disaster resources) scored lower than the average complexity score for materials overall, but only the scores for general resources reached significance (3.2 vs. 3.5, p=0.016).
SAM.
The suitability of materials, as assessed by the SAM, averaged 61.6 out of 100 (SD: 12.3), which falls into the “adequate or below average” range.23 Across information sources, the only source with a significantly different score than the average was non-profit organizations (65.0 vs. 61.6, p=0.029). Across audience types, materials for parents of children both with and without SHCN had significantly higher suitability scores than materials overall (68.7 and 70.2, respectively, vs. 61.6; p<0.001 and p=0.020). We found no differences in scores across disaster types.
NLM/NIA Guidelines.
The NLM/NIA guidelines indicate how usable materials are with regard to website characteristics. Overall, materials in this evaluation fulfilled an average of 16.9 of the 25 NLM/NIA guidelines (SD: 1.7). Among information sources, news organizations (15.5 vs. 16.9, p<0.001) and educational institutions (16.1 vs. 16.9, p=0.012) fulfilled significantly fewer guidelines. Materials for parents of children (not CSHCN) fulfilled significantly more guidelines (17.9 vs. 16.9, p<0.001), whereas those for other clearly specified audiences fulfilled fewer (16.1 vs. 16.9, p=0.006), compared with the average of materials overall. Drought materials were the only disaster type that met significantly fewer guidelines than materials overall (15.5 vs. 16.9, p=0.002).
CDC CCI.
The CCI provides a multi-faceted indication of overall audience appropriateness. The mean CCI score for all materials was 66.6 (SD: 15.3), which is interpreted as “needs work” (as opposed to an “excellent” score of 90 or above).25 (Information regarding the CDC CCI score range and domains covered is available in Table 3.) We did not observe differences in scores by information source. Materials for parents scored significantly higher than all materials on average – including both parents of CSHCN (75.7 vs. 66.6, p=0.003) and parents of children without special needs (73.5 vs. 66.6, p=0.017). We found no significant differences in scores by disaster type.
Discussion
Communities that are prepared for emergencies begin with families who are prepared for emergencies.4 Increasingly, U.S. families use free information on the internet to understand children’s needs and inform their decisions regarding caregiving concerns, including emergency preparedness.8 We evaluated the literacy demands required for caregivers to comprehend the information contained in nearly 400 publicly available online resources. Findings indicate a mismatch between literacy demands and the general public’s literacy capabilities, suggesting that families may not have the appropriate information to be adequately prepared for natural disasters. As such, risk communication and public health organizations could consider modifications to available resources to address literacy demands in the areas of readability, complexity, suitability, web usability, and overall audience-appropriateness.
Information Sources
Government agencies were the largest single source of available resources, underscoring the prominent role such institutions play as sources of risk communication for the public.15 Notably, less than one-tenth of all resources examined were educational websites. Most children and adolescents spend the majority of their day in schools, which can also serve as family reunification points during emergencies. Thus, educational organizations represent an essential information source in community-level disaster preparedness and response.2,4 Our searches also uncovered no resources from healthcare organizations. This is notable, as pediatricians are credible, well-positioned sources for providing anticipatory guidance around emergency preparedness.2 Although educational and healthcare organizations likely possess risk communication materials that were not captured – or that did not otherwise meet inclusion criteria – our evaluation suggests that disseminating messages to families via these trusted sources remains an important gap in the online information environment.7,29 Emergency preparedness agencies must engage with educational and healthcare partners in order to reach the whole community.5,30
Target Audiences
Nearly three-quarters of materials in the evaluation did not clearly specify a target audience. This may be a concern, as tailoring messages for intended audiences is a core pillar of risk communication.31 We acknowledge that inclusion of search terms tailored to specific audiences might have netted different results; however, this was beyond the scope of our evaluation given our goal to broadly understand literacy demands of materials for families in the U.S.
Additionally, our evaluation highlights a lack of resources to help children directly contribute to family-level preparedness planning. Increasingly, researchers and practitioners have advocated both for expanded consideration of children in preparedness as well as meaningful integration of their perspectives directly into the planning cycle.6,7 One promising resource is CDC’s Ready Wrigley campaign, which provides emergency preparedness information directly to children through media including activity books, checklists, and a mobile application (https://www.cdc.gov/phpr/readywrigley/). Evaluating the impacts of Ready Wrigley and similar materials could help make the case for additional resources that speak directly to young people.
Literacy Demands
For the three assessment tools that offer an ordinal interpretation of continuous scores (PMOSE/IKIRSCH, SAM, CDC CCI), materials on average did not meet the optimal category specified for each respective tool. According to both the Flesch-Kincaid readability and PMOSE/IKIRSCH complexity scores, the average required reading level of assessed materials was higher than the average U.S. adult reading level of eighth grade.23,32 This is also higher than the grade level recommended by prominent health organizations, including the National Institutes of Health33 (“Keep within a range of about a 7th or 8th grade reading level”) and the American Medical Association34 (“Write at or below the 6th grade level”).c Thus, efforts to improve readability and reduce the complexity of extant materials could augment comprehension and appropriate utilization of these materials. Of note, the different grade levels yielded by the Flesch-Kincaid and PMOSE/IKIRSCH measures highlight the value of employing multiple assessment tools as part of a comprehensive communication planning process.36
The suitability of existing resources, measured by the SAM, could also be improved from “adequate/below average” to “superior” through concerted focus on information content, graphics, layout/typography, and learning stimulation. Findings from the NLM/NIA guidelines are difficult to interpret, as there is no established recommendation for what constitutes sufficient web usability; nonetheless, materials on average complied with <70% of the recommendations, suggesting improvements could be made by focusing on the domains targeted by this assessment tool (e.g., ease-of-navigation, information presentation; Table 3). For the CCI, materials scored 66.6 on average, well below the 90-point threshold that the CDC considers adequately audience-appropriate. Fortunately, many resources are available to risk communication and public health specialists to better align materials’ literacy demands with the skill levels of the general public (see 33,35,36).
No clear patterns in literacy assessment scores emerged across information sources or disaster types (Table 4). For instance, although news organizations produced less complex materials according to the PMOSE/IKIRSCH (better), they scored lower on web usability based on the NLM/NIA guidelines (worse). Regarding audience type, materials tailored to a particular audience did appear to fare better than the overall average, compared with those that were not tailored (i.e., general-audience messages). Resources aimed at parents/caregivers of both CSHCN and children generally demonstrated lower literacy demands than overall averages for three of the five assessment tools used. This finding is encouraging, as parents may share materials with their children as part of the family emergency planning process. In addition, parents of CSHCN specifically may require additional information to effectively prepare for their children’s disaster-related needs.7 In contrast, materials aimed at “other” specified audiences performed worse on two of the five tools (Flesch-Kincaid, NLM/NIA). These results are not wholly surprising, as several tools contain items that probe directly about target audiences.
Although we are unaware of any prior evaluation examining this scope of websites with this range of tools, a 2008 study found that online emergency preparedness resources performed worse on the Flesch-Kincaid and SAM measures than our estimates for those tools.26 This difference suggests that the readability and suitability of public materials may have improved in recent years, though we note the two evaluations are not directly comparable due to study design differences.
Strengths and Limitations
This study presents findings buoyed by several methodological strengths. With regard to our search strategy, we applied literature from the field of human-computer interaction19,20 to develop our approach and used a reproducible protocol within three common search engines to capture private and public websites (Table 2). These strategies increase the likelihood that the materials assessed represent a reasonable approximation of what the public would encounter at the time of the study’s writing. We also adjusted for multiple comparisons to minimize the possibility of false positive findings, augmenting our confidence in the differences observed. Further, intentional study design elements including the use of two independent coders, five separate tools, and Brown et al.’s framework afford us a more comprehensive understanding of extant resources’ literacy demands, particularly in the context of this emerging science.
This evaluation is not without limitations. First, we focused solely on natural disasters. Although weather-related events are an increasing cause of morbidity and mortality,1 findings may not generalize to other disaster types (e.g., human-induced disasters). Second, our search strategy may not have adequately captured the universe of resources that individuals might encounter on the internet, such as social media and peer forums.8,37 Relatedly, conducting searches without localization to specific regions could have affected the websites retrieved (e.g., hurricane-related webpages might appear more often if a user entered the affected region or hurricane name; local school district webpages might not emerge without explicit mention of one’s county). Future work could broaden the scope of materials evaluated, target specific geographic areas, or expand upon our search terms to provide more complete insights. Third, although we sought to characterize the landscape of online resources by making comparisons to average scores of assessed materials overall, we did not make direct statistical comparisons to discrete cutoff points because recommended scores are not available for each tool.
Finally, although the use of multiple tools assessing varied dimensions of literacy enhances the validity of our findings, the comparability and psychometric properties of the tools merit further investigation.10,16 For example, one study showed that materials revised via the CCI were perceived to be more comprehensible than materials that were not.31 However, the 90% benchmark for classifying a material as “Excellent” (vs. “Needs Work”) was not developed as a validated cut-off score with an established sensitivity and specificity. Rather, the benchmark was intended as a practical starting point for identifying revisions; the CCI developers recognized that the content of materials is often the product of negotiations between scientists and communicators (C. Baur, e-mail communication, September 2018).
Future Directions
This study lays the foundation for additional work in this area to better prepare families and communities for natural disasters. When developing or evaluating materials, organizations may consider content-analyzing their materials to illuminate other factors that influence audience comprehension, or to assess other determinants of compliance with preventative behaviors (e.g., messaging quality, credibility, accuracy).19,26 Notably, the CDC’s Public Health Emergency Preparedness (PHEP) capability standards were developed to help state, local, tribal, and territorial programs advance preparedness capacity. These standards were recently refined to include elements for vulnerable populations (e.g., children, individuals with disabilities).30 Assessing materials developed by programs receiving PHEP support to meet these priorities could help determine if the needs of children are being effectively integrated for community preparedness and response.
Future efforts might also incorporate the perspectives of families and/or children themselves to elucidate whether findings hold among intended users, using techniques commonly employed in user experience research.13,35,38 This is particularly important because little literature on families’ online information-seeking behaviors around preparedness was available to guide our methodology. As others have argued, although guidelines help identify aspects of materials to improve, their value is diminished without corresponding efforts to test usability with target audiences.12,17 Moving forward, communities’ actions toward improving the pre-disaster information environment can be evaluated for their success in effecting behavior change – both in real time and following actual disasters – using emergent tools (see 39,40).
Conclusions
As parents and caregivers continue to turn to the Web for resources on family emergency preparedness, their ability to apply such information when disaster strikes will depend on how understandable the information is. This evaluation offers a global view of English-language risk communication materials that the U.S. general public would likely encounter when conducting internet-based searches for family-level preparedness as of January 2018. We identified areas that could be addressed to lower the literacy demands of available materials and consequently strengthen their usefulness for families. Achieving national goals of improved health and disaster literacy will require sustained attention to ensuring resources are not only available to the public, but appropriately matched to their literacy capabilities.35
Supplementary Material
Acknowledgments
The authors thank Josephine Lau, Melissa Jennings, and Joshua Petimar for their thoughtful guidance in improving the study design and manuscript; and Cynthia Baur for her insights on the Clear Communication Index. Preliminary findings of this research were presented at the 2018 National Association of County and City Health Officials Preparedness Summit (Atlanta, GA). The findings and conclusions in this article are those of the authors and do not necessarily represent the official position of CDC.
Funding
This research was supported in part by an appointment to the Research Participation Program at the Centers for Disease Control and Prevention administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and CDC (Mr. So and Ms. Franks).
Footnotes
a For the purposes of this study, we assumed that early webpage hits reflected sites that (1) received a high quantity of web traffic, and (2) members of the general public would be most likely to find and select for preparedness planning. However, ranked search engine results reflect a more complex set of factors than website traffic alone. Web developers employ search engine optimization (SEO) strategies to improve web page rankings on search engines (e.g., including relevant keywords in meta-data, strategically hyperlinking text, using descriptive titles and sub-titles on the page).
b In the United States, a 9th grade level corresponds to children who are typically 14-15 years old.
c In the United States, a 6th grade level corresponds to children who are typically 11-12 years old; 7th grade corresponds to children who are typically 12-13 years old; and 8th grade corresponds to children who are typically 13-14 years old.
References
- 1. United Nations Office for Disaster Risk Reduction. Global Assessment Report on Disaster Risk Reduction: Making Development Sustainable: The Future of Disaster Risk Management. Geneva, Switzerland; 2015. https://www.preventionweb.net/english/hyogo/gar/2015/en/gar-pdf/GAR2015_EN.pdf. Accessed January 3, 2018.
- 2. American Academy of Pediatrics Disaster Preparedness Advisory Council, Committee on Pediatric Emergency Medicine. Ensuring the health of children in disasters. Pediatrics. 2015;136(5):e1407–e1417. doi:10.1542/peds.2015-3112.
- 3. U.S. Federal Emergency Management Agency. Preparedness in America: Research Insights to Increase Individual, Organizational, and Community Action. Washington, DC; 2014. https://www.fema.gov/media-library-data/1409000888026-1e8abc820153a6c8cde24ce42c16e857/20140825_Preparedness_in_America_August_2014_Update_508.pdf. Accessed January 24, 2019.
- 4. Dziuban EJ, Peacock G, Frogel M. A Child’s Health Is the Public’s Health: Progress and Gaps in Addressing Pediatric Needs in Public Health Emergencies. Am J Public Health. 2017;107(S2):S134–S137. doi:10.2105/AJPH.2017.303950.
- 5. Chung S, Gardner AH, Schonfeld DJ, et al. Addressing Children’s Needs in Disasters: A Regional Pediatric Tabletop Exercise. Disaster Med Public Health Prep. 2018;12(5):582–586. doi:10.1017/dmp.2017.137.
- 6. Peek L, Abramson DM, Cox RS, Fothergill A, Tobin J. Children and disasters. In: Handbook of Disaster Research. Springer, Cham; 2018:243–262. doi:10.1007/978-3-319-63254-4.
- 7. Hipper TJ, Davis R, Massey PM, et al. The Disaster Information Needs of Families of Children with Special Healthcare Needs: A Scoping Review. Health Secur. 2018. doi:10.1089/hs.2018.0007.
- 8. Plantin L, Daneback K. Parenthood, information and support on the internet. A literature review of research on parents and professionals online. BMC Fam Pract. 2009;10(1):34. doi:10.1186/1471-2296-10-34.
- 9. Savoia E, Lin L, Viswanath K. Communications in public health emergency preparedness: a systematic review of the literature. Biosecur Bioterror. 2013;11(3):170–184. doi:10.1089/bsp.2013.0038.
- 10. Pleasant A, Rudd RE, O’Leary C, et al. Considerations for a New Definition of Health Literacy: Discussion Paper. Washington, DC: National Academy of Medicine; 2016. doi:10.31478/201604a.
- 11. James X, Hawkins A, Rowel R. An Assessment of the Cultural Appropriateness of Emergency Preparedness Communication for Low Income Minorities. J Homel Secur Emerg Manag. 2007;4(3). doi:10.2202/1547-7355.1266.
- 12. McDonough B, Felter E, Downes A, Trauth J. Communicating Public Health Preparedness Information to Pregnant and Postpartum Women: An Assessment of Centers for Disease Control and Prevention Web Pages. Disaster Med Public Health Prep. 2015;9(2):134–137. doi:10.1017/dmp.2015.2.
- 13. Neuhauser L, Ivey SL, Huang D, et al. Availability and Readability of Emergency Preparedness Materials for Deaf and Hard-of-Hearing and Older Adult Populations: Issues and Assessments. PLoS One. 2013;8(2):e55614. doi:10.1371/journal.pone.0055614.
- 14. Ryan MT, Rohrbeck CA, Wirtz PW. The Importance of Self-Efficacy in Parental Emergency Preparedness: A Moderated Mediation Model. Disaster Med Public Health Prep. 2018;12(3):345–351. doi:10.1017/dmp.2017.80.
- 15. Abrams MA, Klass P, Dreyer BP. Health Literacy and Children: Recommendations for Action. Pediatrics. 2009;124(Suppl 3):S327–S331. doi:10.1542/peds.2009-1162I.
- 16. Kaphingst KA, Kreuter MW, Casey C, et al. Health Literacy INDEX: Development, Reliability, and Validity of a New Tool for Evaluating the Health Literacy Demands of Health Information Materials. J Health Commun. 2012;17(sup3):203–221. doi:10.1080/10810730.2012.712612.
- 17. Brown LM, Haun JN, Peterson L. A Proposed Disaster Literacy Model. Disaster Med Public Health Prep. 2014;8(3):267–275. doi:10.1017/dmp.2014.43.
- 18. Whitten P, Nazione S, Lauckner C. Tools for assessing the quality and accessibility of online health information: Initial testing among breast cancer websites. Inform Health Soc Care. 2013;38(4):366–381. doi:10.3109/17538157.2013.812644.
- 19. Morahan-Martin JM. How Internet Users Find, Evaluate, and Use Online Health Information: A Cross-Cultural Review. CyberPsychol Behav. 2004;7(5):497–510. doi:10.1089/cpb.2004.7.497.
- 20. Dhillon G, Coss D, Hackney R. Interpreting the role of disruptive technologies in e-businesses. Logist Inf Manag. 2001;14(1/2):163–171. doi:10.1108/09576050110363167.
- 21. Kincaid JP, Fishburne RP, Rogers RL, Chissom BS. Research Branch Report 8-75: Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel. Millington, TN; 1975. http://www.dtic.mil/dtic/tr/fulltext/u2/a006655.pdf. Accessed March 14, 2018.
- 22. Mosenthal PB, Kirsch IS. A new measure for assessing document complexity: The PMOSE/IKIRSCH document readability formula. J Adolesc Adult Lit. 1998;41(8):638–657. doi:10.2307/40016961.
- 23. Doak CC, Doak LG, Root JH. Teaching Patients with Low Literacy Skills. Philadelphia, PA: J.B. Lippincott; 1996. https://www.popline.org/node/425267. Accessed March 14, 2018.
- 24. Hodes RJ, Lindberg DAB. Making Your Website Senior Friendly: A Checklist. Washington, DC; 2002. https://www.nlm.nih.gov/pubs/checklist.pdf. Accessed March 14, 2018.
- 25. Centers for Disease Control and Prevention. CDC Clear Communication Index: A Tool for Developing and Assessing CDC Public Communication Products: User Guide. Atlanta, GA; 2014. https://www.cdc.gov/ccindex/pdf/clear-communication-user-guide.pdf. Accessed September 21, 2017.
- 26. Friedman DB, Tanwar M, Richter JVE. Evaluation of online disaster and emergency preparedness resources. Prehosp Disaster Med. 2008;23(5):438–446.
- 27. Scott WA. Reliability of Content Analysis: The Case of Nominal Scale Coding. Public Opin Q. 1955;19(3):321–325. doi:10.1086/266577.
- 28. Benjamini Y, Hochberg Y. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J R Stat Soc Ser B. 1995;57:289–300. doi:10.2307/2346101.
- 29. Olympia RP, Rivera R, Heverley S, Anyanwu U, Gregorits M. Natural disasters and mass-casualty events affecting children and families: A description of emergency preparedness and the role of the primary care physician. Clin Pediatr (Phila). 2010;49(7):686–698. doi:10.1177/0009922810364657.
- 30. Centers for Disease Control and Prevention. Public Health Emergency Preparedness and Response Capabilities: National Standards for State, Local, Tribal, and Territorial Public Health. Atlanta, GA; 2018. https://www.cdc.gov/cpr/readiness/00_docs/CDC_PreparednesResponseCapabilities_October2018_Final_508.pdf. Accessed December 11, 2018.
- 31. Baur C, Prue C. The CDC Clear Communication Index Is a New Evidence-Based Tool to Prepare and Review Health Information. Health Promot Pract. 2014;15(5):629–637. doi:10.1177/1524839914538969.
- 32. Kirsch IS, Jungeblut A, Jenkins L, Kolstad A. Adult Literacy in America: A First Look at the Results of the National Adult Literacy Survey. Washington, DC: National Center for Education Statistics; 2002. Report NCES 1993-275.
- 33. National Institutes of Health. How to Write Easy-to-Read Health Materials. https://medlineplus.gov/etr.html. Published 2017.
- 34. Weiss BD. Health Literacy: A Manual for Clinicians. Chicago, IL: American Medical Association Foundation; 2008.
- 35. National Institutes of Health. Making Health Communication Programs Work. Washington, DC; 2013. http://www.cancer.gov/cancertopics/cancerlibrary/pinkbook/page5. Accessed August 2, 2018.
- 36. U.S. Department of Health & Human Services. Quick Guide to Health Literacy. Washington, DC; 2006. https://health.gov/communication/literacy/quickguide/Quickguide.pdf. Accessed August 2, 2018.
- 37. Thomas TL, Schrock C, Friedman DB. Providing Health Consumers with Emergency Information: A Systematic Review of Research Examining Social Media Use During Public Crises. J Consum Health Internet. 2016;20(1-2):19–40. doi:10.1080/15398285.2016.1142927.
- 38. Jaspers MWM. A comparison of usability methods for testing interactive health technologies: Methodological aspects and empirical evidence. Int J Med Inform. 2009;78(5):340–353. doi:10.1016/j.ijmedinf.2008.10.002.
- 39. Savoia E, Lin L, Gamhewage GM. A Conceptual Framework for the Evaluation of Emergency Risk Communications. Am J Public Health. 2017;107(S2):S208–S214. doi:10.2105/AJPH.2017.304040.
- 40. Bergeron CD, Friedman DB. Developing an evaluation tool for disaster risk messages. Disaster Prev Manag. 2015;24(5):570–582. doi:10.1108/DPM-11-2014-0224.