Author manuscript; available in PMC: 2019 May 1.
Published in final edited form as: Br J Health Psychol. 2018 Jan 31;23(2):420–435. doi: 10.1111/bjhp.12297

Education-based disparities in knowledge of novel health risks: The case of knowledge gaps in HIV risk perceptions

Marc T Kiviniemi 1,*, Heather Orom 1, Erika A Waters 2, Megan McKillip 1, Jennifer L Hay 3
PMCID: PMC5882541  NIHMSID: NIHMS939748  PMID: 29388364

Abstract

Objective

Risk perception is a key determinant of preventive health behaviour, but when asked, some individuals indicate they do not know their health risk. Low education is associated with both lack of knowledge about health risk and with the persistence and exacerbation of gaps in knowledge about health issues. This study uses the context of an emerging infectious disease threat to explore the hypothesis that the education-don’t know risk relation results from differences in knowledge about the health issue of interest. Specifically, we examine whether patterns of change over time follow theoretical predictions that disparities in risk knowledge would increase over time in less educated sectors of the population (knowledge gap hypothesis).

Design

Secondary analysis of population-representative behavioural surveillance survey.

Method

We analysed data from the 1993 to 2000 Behavior Risk Factor Surveillance System surveys, which measured education and perceived HIV/AIDS risk in a population sample collected separately in each survey year; don’t know responses were coded.

Results

In each year, individuals with higher education were less likely to respond don’t know. The absolute prevalence of don’t know responding dropped over time; nonetheless, there was an increase over time in the magnitude of the pattern of lower education being associated with greater don’t know responding.

Conclusions

We found support for the knowledge gap hypothesis. Over time, populations with greater education gained more knowledge about their HIV risk than populations with lower education. Results highlight the need to carefully consider health communication strategies to reach and address those individuals with low education and health knowledge.


Although the assumption that people have an established perception of their risk for illnesses is common both to models of health behaviour and to public health communication strategies, a meaningful proportion of the population reports that they ‘don’t know’ their risk of developing a health problem. When asked to estimate their risk for colorectal cancer, 7–9% of individuals respond don’t know even when a don’t know option is not explicitly provided; rates of don’t know responding are as high as 50–70% when a don’t know response option is available (Waters, Hay, Orom, Kiviniemi, & Drake, 2013). Similar patterns for don’t know responses and other indicators of uncertainty have been observed in other health domains (e.g., Bruine de Bruin, Fischhoff, Millstein, & Halpern-Felsher, 2000; Garcés-Palacio & Scarinci, 2012; Lipkus, Rimer, & Strigo, 1996).

Don’t know responding to risk perception questions is related to lower levels of education and to other markers of socioeconomic status (Waters et al., 2013) as well as lower risk factor knowledge, lower numeracy, and lower likelihood of seeking health information (Hay, Orom, Kiviniemi, & Waters, 2015). Fundamentally, differences in education, a key social determinant of health, may underlie differences in knowledge, health literacy, and numeracy that may be contributing to uncertainty about disease risk. Given the importance of perceived risk for engaging in preventive health behaviours (Sheeran, Harris, & Epton, 2014), such relations may partially account for differences by education in disease prevention.

In this article, we test the hypothesis that expressing a lack of knowledge of personal health risks results from underlying differences in education. The context of a previously unknown infectious disease, perceived risk for HIV/AIDS in the early stages of the epidemic, serves as a historical case study to test the relation of education to the development of risk perceptions as information about a health hazard becomes more available to the general public.

Knowledge, knowledge gaps, and don’t know responses to health risk perception questions

Several explanations have been proposed for why people respond ‘don’t know’ when asked to report their perceived risk for an illness. Most relevant to this study is that answering don’t know might reflect a lack of the knowledge necessary to inform the development of risk perceptions (Hay et al., 2015; Waters et al., 2013). The idea that disparities in knowledge about a health threat widen when health communication campaigns are used to raise the public’s awareness of that threat has been referred to as the knowledge gap hypothesis (Tichenor, Donohue, & Olien, 1970; Viswanath & Finnegan, 1996). Tichenor and colleagues posited that individuals with higher socioeconomic status increase their level of knowledge about an issue more quickly than those with lower socioeconomic status. Over time, this leads to a gap in knowledge between those with higher and lower socioeconomic status, and the gap can persist if mass media health communications do not successfully reach those with lower socioeconomic status. A second prediction is that, at any given point in time, the relationship between socioeconomic status and knowledge should be stronger the more mass media attention a topic has received, because access to information, access to media channels, and the ability to understand messages all covary with socioeconomic status (Tichenor et al., 1970).

Evidence for this relation has been found in numerous studies (for a review, see Viswanath & Finnegan, 1996), including multiple investigations in the health domain (e.g., Gaziano & Horowitz, 2001; Niederdeppe, 2008; Viswanath, Kahn, Finnegan, Hertog, & Potter, 1993). Gaps have also been demonstrated for levels of knowledge about HIV/AIDS (Salmon, Wooten, Gentry, Cole, & Kroger, 1996). However, to our knowledge, the existence of a knowledge gap for knowledge of one’s personal risk for a health problem has not been previously examined, nor has the hypothesis that disparities in risk perception knowledge will increase over the course of a public health awareness campaign. The work reported here provides such a test.

Development of HIV/AIDS risk knowledge over time as a case study for studying don’t know responding

Examining the question about knowledge gap patterns in personal health risk knowledge requires meeting specific criteria. First, one must have measurements of perceived risk for a particular health domain collected repeatedly over a period of several years, preferably during the early years of identification and awareness of the health issue. Second, that multiyear period must span a time period when one would reasonably expect the population’s knowledge about the relevant health risk to increase. Third, in addition to measurement of risk perception, one must have information about education levels of respondents.

The historical context and empirical data available for HIV/AIDS in the early years of the epidemic meet each of these requirements. Empirically, from 1993 to 2000, the Centers for Disease Control and Prevention’s Behavioral Risk Factor Surveillance System (BRFSS) survey included a question that asked respondents to indicate their perceived risk for HIV/AIDS infection. A don’t know response was not explicitly provided, but the BRFSS data collection protocol allowed the don’t know response to be recorded if a participant gave it verbally. BRFSS also collected a variety of demographic data, including educational attainment.

Conceptually, the historical context of HIV/AIDS between 1993 and 2000 lends itself to this multiyear examination. HIV/AIDS was an unfamiliar, novel risk when first publicized in the early 1980s (Centers for Disease Control, 1982; Centers for Disease Control and Prevention, 2006; Friedman-Kien et al., 1981). Many public awareness campaigns in the United States did not take place until the late 1980s (e.g., Koop, 1987). Building on these early years of publicity, during the period covered by the BRFSS surveys there was a substantial increase in publicity and public awareness of HIV/AIDS. This increase was marked by increasingly common media portrayals of people with HIV/AIDS in the early to mid-1990s, celebrity disclosures of HIV status, and increasing awareness of heterosexual transmission routes. In addition, the development of testing procedures in the late 1980s increased the availability of testing, and therefore testing rates, in the 1990s; this rise was accompanied by public awareness campaigns about HIV and the need for testing (for more extensive historical background, see Henry J. Kaiser Family Foundation, 2008; U.S. Department of Health & Human Services, 2011).

Thus, HIV/AIDS during this time period provides a valuable ‘case study’ for exploring how education levels relate to developing perceptions of risk for a novel health hazard as public health messaging around that hazard becomes increasingly available. Although the primary rationale for these analyses is examining knowledge gap patterns in perceived risk for health problems, it is also the case that findings from this study are relevant across many contexts as the emergence of novel health risks is a perennial issue for public health communication and behaviour change efforts (e.g., Zika virus and avoiding mosquito contact). In addition, such an examination provides valuable understanding of why disparities in knowledge of and therefore potentially preventive action against novel risks might exist. This understanding has relevance for public health intervention efforts around ongoing emerging health risks.

Current study and hypotheses

The work reported here examines whether and how the relation between education and don’t know responding shifts as public awareness of a health risk increases. In addition, given that the knowledge gap hypothesis leads to the prediction that the relation between education and don’t know responses will become stronger over time, the study characterizes temporal patterns in the interrelations of education and don’t know responding.

If lack of health knowledge underlies don’t know responding, as suggested by prior research (Hay et al., 2015), rates of don’t know responding should decline over the course of the epidemic, given the substantial health education efforts expended to disseminate information about HIV transmission and testing between 1993 and 2000 (Henry J. Kaiser Family Foundation, 2008; U.S. Department of Health & Human Services, 2011). In addition, the knowledge gap hypothesis leads to the prediction that any socioeconomic status gap in knowledge of perceived risk for HIV should grow during the course of information dissemination efforts. Consistent with the statistical patterning of the knowledge gap hypothesis (see Jenssen, 2013; Tichenor et al., 1970), a growth in knowledge gaps over time would be evidenced by systematically stronger associations between education and don’t know responding over the course of the survey years.

Method

BRFSS data collection overview

The Centers for Disease Control and Prevention (CDC) coordinates the BRFSS survey. BRFSS is a yearly, population-representative survey of US adults. The survey is conducted separately by each state with centralized CDC coordination. For each year, each state obtains a new, yearly survey sample using a random digit dialling survey methodology with a multistage cluster sampling design. Given this, when analysed with appropriate survey analysis techniques, BRFSS provides population-representative mean estimates and inferential statistics. Detailed information about the BRFSS study design, sampling methodology, and analytic recommendations is available elsewhere (Centers for Disease Control and Prevention, 2013).

Participants and response rates

The number of respondents per survey year and the response rates for each year are presented in Table 1. Across the 8 years of the survey, participants per year ranged from 102,263 (1993) to 184,450 (2000). Given the state-by-state survey design, BRFSS calculates and reports response rates separately for each state. The median state-level response rate ranged from 48.9% (2000) to 71.4% (1993) (Centers for Disease Control, 1995, 2000).

Table 1.

Number of respondents and median state-level response rate for each survey year

BRFSS year Number of respondents Median state-level response rate (%)
1993 102,263 71.4
1994 105,853 70.0
1995 113,394 68.4
1996 124,085 63.2
1997 135,182 62.1
1998 149,342 59.1
1999 159,989 55.2
2000 184,450 48.9

Measures

Risk perception questions

Respondents reported their perceived risk of becoming infected with HIV. In 1993 and 1994, the question was, ‘What are your chances of getting the AIDS virus?’ From 1995 to 2000, the question was, ‘What are your chances of getting infected with HIV, the virus that causes AIDS?’ Response options for both versions were high, medium, low, or none. Don’t know responses were recorded when participants verbally provided a ‘don’t know’ answer.

Demographics

Participants reported age, gender, race, ethnicity, education, income, and insurance status. The age, education, and income questions were all assessed using categorical responses (see Table 3 for relevant categories).

Table 3.

Multivariable logistic regression results; Odds ratios for the relation of demographic characteristics to don’t know responding in each survey year

Values are odds ratios (95% confidence intervals) from the multivariable model for each survey year; the referent category for each characteristic is shown in parentheses.

Characteristic | 1993 | 1994 | 1995 | 1996 | 1997 | 1998 | 1999 | 2000
Education (referent: Less than 9th grade)
 Some high school | 0.82 (0.57, 1.17) | 0.82 (0.55, 1.22) | 0.77 (0.49, 1.23) | 0.38 (0.25, 0.58) | 0.44 (0.29, 0.66) | 0.46 (0.30, 0.70) | 0.36 (0.24, 0.53) | 0.37 (0.24, 0.58)
 High school graduate | 0.83 (0.38, 0.72) | 0.56 (0.38, 0.80) | 0.63 (0.43, 0.93) | 0.29 (0.20, 0.42) | 0.42 (0.28, 0.66) | 0.27 (0.18, 0.41) | 0.28 (0.20, 0.41) | 0.29 (0.20, 0.42)
 Some college/Tech school | 0.35 (0.24, 0.50) | 0.41 (0.28, 0.60) | 0.42 (0.27, 0.64) | 0.27 (0.17, 0.42) | 0.32 (0.21, 0.48) | 0.23 (0.15, 0.35) | 0.23 (0.16, 0.35) | 0.21 (0.14, 0.31)
 College graduate | 0.25 (0.17, 0.36) | 0.41 (0.27, 0.62) | 0.35 (0.23, 0.54) | 0.31 (0.19, 0.49) | 0.30 (0.18, 0.46) | 0.18 (0.11, 0.29) | 0.24 (0.15, 0.38) | 0.28 (0.17, 0.44)
Race (referent: White)
 Black | 2.27 (1.86, 2.78) | 2.31 (1.82, 2.93) | 2.42 (1.73, 3.38) | 1.86 (1.45, 2.40) | 2.46 (1.91, 3.17) | 2.74 (2.11, 3.56) | 2.01 (1.54, 2.62) | 2.64 (1.95, 3.60)
 Asian/Pacific Islander | 4.64 (3.33, 6.47) | 3.86 (2.64, 5.65) | 6.62 (4.18, 10.47) | 4.08 (2.48, 6.69) | 5.69 (3.80, 8.52) | 7.62 (4.75, 12.2) | 6.92 (4.32, 11.1) | 5.55 (3.61, 8.53)
 Amer. Indian/Alaska Native | 2.06 (1.25, 3.41) | 1.72 (0.93, 3.16) | 1.32 (0.54, 3.21) | 1.19 (0.63, 2.25) | 3.77 (1.80, 7.87) | 1.61 (0.80, 3.26) | 1.32 (0.69, 2.53) | 1.62 (0.91, 2.86)
 Other | 1.77 (1.15, 2.72) | 1.47 (0.87, 2.48) | 2.57 (1.58, 4.20) | 2.82 (1.78, 4.47) | 1.61 (0.97, 2.69) | 1.71 (1.10, 2.68) | 1.73 (1.17, 2.57) | 2.27 (1.47, 3.49)
Ethnicity (referent: Non-Hispanic)
 Hispanic | 1.88 (1.29, 2.74) | 1.25 (0.81, 1.91) | 2.24 (1.55, 3.23) | 1.32 (0.92, 1.90) | 1.70 (1.16, 2.50) | 1.66 (1.19, 2.32) | 1.40 (1.03, 1.88) | 1.22 (0.85, 1.75)
Income (referent: <$10,000)
 $10,000–$14,999 | 1.00 (0.76, 1.32) | 0.81 (0.58, 1.13) | 0.99 (0.59, 1.64) | 0.71 (0.46, 1.09) | 0.86 (0.53, 1.38) | 0.71 (0.42, 1.19) | 0.93 (0.60, 1.45) | 0.78 (0.48, 1.26)
 $15,000–$19,999 | 0.77 (0.57, 1.02) | 1.01 (0.70, 1.47) | 1.01 (0.64, 1.60) | 0.72 (0.47, 1.09) | 0.66 (0.42, 1.02) | 0.69 (0.46, 1.03) | 0.85 (0.54, 1.31) | 0.74 (0.46, 1.18)
 $20,000–$24,999 | 0.70 (0.52, 0.94) | 0.72 (0.51, 1.01) | 0.99 (0.62, 1.58) | 0.59 (0.39, 0.90) | 0.77 (0.52, 1.16) | 0.58 (0.37, 0.92) | 0.81 (0.51, 1.26) | 0.67 (0.43, 1.03)
 $25,000–$34,999 | 0.83 (0.61, 1.13) | 0.81 (0.59, 1.12) | 0.69 (0.43, 1.12) | 0.59 (0.38, 0.92) | 0.77 (0.51, 1.15) | 0.62 (0.41, 0.94) | 0.55 (0.35, 0.85) | 0.50 (0.34, 0.74)
 $35,000–$49,999 | 0.71 (0.53, 0.95) | 0.55 (0.39, 0.79) | 0.75 (0.47, 1.20) | 0.40 (0.26, 0.62) | 0.44 (0.29, 0.67) | 0.41 (0.25, 0.65) | 0.43 (0.27, 0.70) | 0.50 (0.33, 0.77)
 $50,000–$74,999 | 0.85 (0.63, 1.15) | 0.76 (0.52, 1.11) | 0.56 (0.31, 1.02) | 0.37 (0.22, 0.62) | 0.36 (0.22, 0.62) | 0.25 (0.15, 0.42) | 0.38 (0.23, 0.64) | 0.34 (0.22, 0.53)
 $75,000 or more | N/A | 0.47 (0.28, 0.80) | 1.15 (0.60, 2.20) | 0.27 (0.16, 0.45) | 0.33 (0.20, 0.55) | 0.26 (0.15, 0.45) | 0.26 (0.15, 0.44) | 0.28 (0.17, 0.45)
Insurance (referent: No insurance)
 Insurance | 0.93 (0.76, 1.14) | 0.91 (0.70, 1.18) | 1.01 (0.73, 1.39) | 0.97 (0.71, 1.32) | 0.79 (0.60, 1.04) | 1.09 (0.80, 1.48) | 0.78 (0.59, 1.04) | 0.91 (0.67, 1.22)
Age (referent: 18–24)
 25–29 | 0.79 (0.56, 1.10) | 1.34 (0.86, 2.08) | 1.67 (1.00, 2.79) | 0.97 (0.58, 1.61) | 2.11 (1.28, 3.52) | 1.24 (0.73, 2.12) | 0.91 (0.57, 1.45) | 1.19 (0.80, 1.77)
 30–34 | 0.86 (0.61, 1.20) | 1.19 (0.79, 1.80) | 1.13 (0.67, 1.92) | 1.43 (0.90, 2.27) | 2.29 (1.43, 3.67) | 1.38 (0.84, 2.25) | 1.05 (0.66, 1.69) | 1.13 (0.73, 1.76)
 35–39 | 1.06 (0.77, 1.47) | 1.49 (1.00, 2.22) | 1.69 (1.02, 2.79) | 1.26 (0.79, 1.98) | 2.77 (1.75, 4.39) | 1.66 (1.01, 2.72) | 1.16 (0.74, 1.81) | 1.00 (0.66, 1.51)
 40–44 | 1.19 (0.83, 1.70) | 1.42 (0.94, 2.15) | 1.55 (0.94, 2.58) | 1.96 (1.26, 3.06) | 2.35 (1.47, 3.73) | 2.06 (1.24, 3.43) | 1.75 (1.09, 2.82) | 1.46 (0.91, 2.34)
 45–49 | 1.36 (0.96, 1.93) | 1.89 (1.24, 2.88) | 1.78 (1.06, 2.99) | 2.02 (1.27, 3.22) | 3.39 (2.13, 5.41) | 2.16 (1.28, 3.62) | 1.86 (1.14, 3.02) | 1.19 (0.74, 1.91)
 50–54 | 1.24 (0.88, 1.73) | 2.56 (1.66, 3.94) | 2.16 (1.28, 3.63) | 1.83 (1.14, 2.94) | 3.31 (2.07, 5.27) | 2.46 (1.49, 4.07) | 1.40 (0.85, 2.30) | 1.34 (0.89, 2.03)
 55–59 | 1.92 (1.35, 2.72) | 1.98 (1.31, 2.98) | 2.76 (1.66, 4.60) | 1.97 (1.22, 3.15) | 3.74 (2.31, 6.06) | 2.39 (1.42, 4.01) | 1.33 (0.82, 2.15) | 1.87 (1.14, 3.05)
 60–64 | 1.39 (0.99, 1.96) | 2.65 (1.75, 4.03) | 2.08 (1.21, 3.56) | 1.89 (1.15, 3.12) | 4.36 (2.72, 6.98) | 1.82 (1.01, 3.29) | 1.95 (1.15, 3.29) | 1.62 (1.03, 2.54)
Sex (referent: Male)
 Female | 1.15 (0.97, 1.36) | 0.99 (0.82, 1.19) | 0.79 (0.63, 0.98) | 1.04 (0.85, 1.29) | 1.04 (0.84, 1.29) | 0.70 (0.56, 0.88) | 0.89 (0.72, 1.11) | 0.76 (0.61, 0.95)

Note. Odds ratios whose 95% confidence interval excludes 1.00 are statistically significant at p < .05.

Analysis plan

Complex survey analysis procedures in Stata 13 (Stata Corporation, College Station, TX, USA) were used. These analyses account for the BRFSS complex survey design and sampling scheme, allowing us to estimate population-representative descriptive and inferential statistics. The 8 years of survey data were combined for analysis. We followed the recommendations of Korn and Graubard (1999) for combining complex survey datasets and sampling weights for analysing changes in estimates over time.

Survey-weighted descriptive statistics were used to estimate the population prevalence of don’t know responding to the HIV/AIDS perceived risk question in each survey year and to examine don’t know response rates by level of education. Within each survey year, univariable and multivariable logistic regression analyses were used to examine the relation between demographic characteristics and don’t know responding. In each logistic regression analysis, don’t know responding was used as the categorical outcome variable (valid response = 0; don’t know response = 1) and the demographic characteristic(s) served as predictor variables. Because education, the hypothesized mechanism underlying don’t know effects, covaries with other demographics (particularly race and income), we controlled for these other demographic factors in the analyses.
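As an illustration of this analytic setup, the sketch below approximates the survey-weighted prevalence estimates and a single-year multivariable model in Python. It is not the authors' code: the published analyses used Stata 13's complex-survey (svy) procedures, and the file name (brfss_1993_2000.csv) and variable names (dk, wt, educ, and the other covariates) are assumed placeholders rather than actual BRFSS field names.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical pooled analysis file: one row per respondent, with a don't know
# indicator (dk: 1 = don't know, 0 = valid response), categorical demographics,
# survey year, and the BRFSS final sampling weight (wt). All names are placeholders.
df = pd.read_csv("brfss_1993_2000.csv")

# Weighted prevalence of don't know responding in each survey year (cf. Table 2).
prev_by_year = df.groupby("year").apply(
    lambda g: (g["dk"] * g["wt"]).sum() / g["wt"].sum()
)
print(prev_by_year)

# Weight-adjusted multivariable logistic regression for a single survey year (cf. Table 3).
# This approximates point estimates only; design-based standard errors also require
# the BRFSS stratum and primary sampling unit variables (handled by Stata's svy commands).
d93 = df[df["year"] == 1993]
model_1993 = smf.glm(
    "dk ~ C(educ) + C(race) + C(hispanic) + C(income) + C(insured) + C(age_grp) + C(sex)",
    data=d93,
    family=sm.families.Binomial(),
    freq_weights=d93["wt"],
).fit()
print(model_1993.summary())
```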

To examine the change in don’t know responding over time, we estimated a logistic regression model with don’t know responding as the categorical dependent variable and survey year as a continuous predictor variable. To test the knowledge gap hypothesis that the relation of education to don’t know responding would become more pronounced over time, we estimated a logistic regression model with don’t know responding as a categorical dependent variable and education, survey year, and the Education × Survey year interaction as predictors.
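A minimal sketch of these two pooled models follows, again using the assumed data frame and placeholder variable names from the previous snippet and weighted point estimates only; the published analyses relied on Stata's design-based survey estimation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("brfss_1993_2000.csv")  # same assumed pooled file as above

# (1) Overall change in don't know responding over time (survey year as continuous).
trend = smf.glm(
    "dk ~ year", data=df, family=sm.families.Binomial(), freq_weights=df["wt"]
).fit()
print(np.exp(trend.params["year"]))  # odds ratio per additional survey year

# (2) Knowledge gap test: education, survey year, and their interaction.
# The C(educ):year coefficients index whether the education effect strengthens over time.
gap = smf.glm(
    "dk ~ C(educ) * year", data=df, family=sm.families.Binomial(), freq_weights=df["wt"]
).fit()
print(gap.params.filter(like=":year"))
```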

Results

Changes in the prevalence of don’t know responding across survey years

The prevalence of don’t know responding for each year is presented in Table 2. As can be seen in Table 2, there was a decline over time in the prevalence of don’t know responding from the first (1993 prevalence: 2.4%) to the final survey year (2000 prevalence: 1.1%); the odds ratio for the effect of year on the prevalence of don’t know responding, controlling for other demographic characteristics, was 0.95 (95% CI = 0.92, 0.99; p < .05).

Table 2.

Rate of don’t know responding for each survey year and estimated number of don’t know responses in the US adult population

Prevalence of don’t know response by year
Survey year % Don’t know responses Estimated US adult don’t know responders
1993 2.4 5,049,888
1994 1.7 3,606,074
1995 1.6 3,420,480
1996 1.3 2,800,356
1997 1.3 2,824,640
1998 1.1 2,412,674
1999 1.2 2,657,208
2000 1.1 2,469,720

Table 2 also presents the estimated number of don’t know responders, based on the prevalence of don’t know responding and the US Census reported size of the US adult population for each survey year (US Census Bureau, 2001). Although the prevalence dropped in nearly every year, even in the final survey year, approximately 2.5 million US adults would have reported that they did not know their risk for HIV/AIDS infection.
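For concreteness, each estimated count in Table 2 is simply that year's weighted don't know prevalence multiplied by the Census estimate of the US adult population; below is a minimal sketch of that arithmetic, using rounded placeholder population figures rather than the exact Census values the authors used.

```python
# Illustrative arithmetic behind Table 2: estimated don't know responders =
# weighted prevalence of don't know responding x US adult population that year.
# Population figures below are rounded placeholders, not the exact Census
# estimates used in the article (US Census Bureau, 2001).
prevalence = {1993: 0.024, 2000: 0.011}                    # from Table 2
adult_population = {1993: 210_000_000, 2000: 225_000_000}  # assumed round figures

for year, p in prevalence.items():
    estimated = p * adult_population[year]
    print(f"{year}: ~{estimated / 1e6:.1f} million adults answering don't know")
```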

Relation of education levels and don’t know responding

The results of multivariable models for the relation between demographic characteristics and don’t know responding are presented in Table 3 (note: tables with results of univariable models are available on request from the first author; univariable and multivariable models showed very consistent patterns of results). As can be seen in Table 3, education was consistently associated with don’t know responding across all survey years. In each survey year, compared to those who do not have at least a high school education, respondents with higher education levels were significantly less likely to respond don’t know. As described in the analysis plan, given the interrelation of education and race/ethnicity in the United States, we controlled for race/ethnicity in analyses. Relative to White respondents, Black and Asian/Pacific Islander respondents were significantly more likely to answer don’t know in every survey year. In addition, there were years in which age, income, sex, and ethnicity were associated with don’t know responding, although these effects varied by survey year (see Table 3).

The consistency of the relation of education to don’t know responding in each year highlights the robustness of the effect over time. In addition to individually examining the effect in each survey year, we also examined the omnibus effect of education on don’t know responding combining all survey years. That relation was also significant, and all comparisons of increasing levels of education to those with less than high school education were significant, with ORs ranging from 0.40 to 0.25, all ps < .001.

Knowledge gap patterns in don’t know responses

With respect to the knowledge gap hypothesis (Tichenor et al., 1970), we first further explored the gaps in knowledge of HIV risk documented by the significant education effects reported in Table 3 (see above). Table 4 presents the proportion of individuals who answered ‘don’t know’ separately for each of the five levels of the education variable in each survey year.

Table 4.

Percentage of population answering ‘don’t know’ by level of education

Education level (%)        1993  1994  1995  1996  1997  1998  1999  2000
 Less than 9th grade       8.16  4.52  5.53  7.32  5.98  5.70  7.25  5.53
 Some high school          4.70  3.55  2.57  2.02  2.24  2.27  1.93  1.66
 High school graduate      2.73  1.75  1.61  1.27  1.37  1.07  1.18  1.16
 Some college/Tech school  1.74  1.08  1.10  0.93  0.90  0.74  0.81  0.65
 College graduate          1.27  1.04  0.99  0.92  0.68  0.57  0.65  0.71

An important pattern to note in Tables 3 and 4 concerns the relation of education to don’t know responding across the survey years. In Table 3, for virtually all categories of education, the relation of education to don’t know responding becomes stronger across survey years, and the change in this magnitude over time is significant, as indexed by the Education × Survey year interaction (interaction effect t = −3.09, p < .01). The pattern of this interaction effect can be seen by examining the ‘some high school’ row (for this row, odds ratios compare the ‘some high school’ group to the ‘less than 9th grade’ referent). In 1993, the OR is 0.82, whereas in 2000, the OR for the same comparison is 0.37; that is, the difference in the likelihood of responding don’t know between respondents with less than a 9th-grade education and those with some high school was larger in 2000 than in 1993. As can be seen in Table 4, all education groups show a decline in don’t know responding over the years of the survey – the rates in 1999 and 2000 are all substantially lower than they were in 1993. Also, within each survey year, the overall pattern of lower don’t know responding rates with higher levels of education is fairly consistent. However, the pace of decline varies substantially as a function of education: in general, the prevalence of don’t know responding declines more rapidly for individuals with higher levels of education. For example, relative to 1993 levels, individuals with less than a 9th-grade education in the year 2000 survey were 68% as likely to respond don’t know. For those with a high school education, however, the likelihood of responding don’t know in 2000 was only 42% of the 1993 likelihood, and the relative likelihood for those with some college was 37%. Separate examination of the change in don’t know responding over time at each level of education is consistent with this description. For those with the lowest level of education, there was no significant change over time in the level of don’t know responding, OR = 0.99 (95% CI = 0.96, 1.04; p = .89). It was only at higher levels of education that there was a significant shift in rates of don’t know responding across years; ORs all <0.90, all significant at p < .001.
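The 'pace of decline' comparisons in the preceding paragraph follow directly from dividing each education group's 2000 rate in Table 4 by its 1993 rate; a minimal sketch of that arithmetic:

```python
# Relative likelihood of answering don't know in 2000 versus 1993, computed
# from the Table 4 percentages (restated here; not re-estimated from raw data).
table4_rates = {
    "Less than 9th grade":      (8.16, 5.53),
    "High school graduate":     (2.73, 1.16),
    "Some college/Tech school": (1.74, 0.65),
}
for group, (rate_1993, rate_2000) in table4_rates.items():
    print(f"{group}: 2000 rate is {rate_2000 / rate_1993:.0%} of the 1993 rate")
```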

Discussion

Our analyses revealed a strong pattern of support for the hypothesis that don’t know responses to risk perception questions reflect the emergence of knowledge gaps over time between those with higher and lower educational levels. Across 8 years of data, education was a consistent and powerful predictor of don’t know responding. The steady decline in don’t know responses over the survey years is parsimoniously explained by increasing knowledge about HIV risk in the population. Moreover, the pattern of temporal effects supports the knowledge gap hypothesis; patterns in don’t know responses to the risk perception questions indicate that disparities in health risk knowledge widened over the course of health campaigns to increase the public’s understanding of HIV/AIDS risk. Also, the strength of the relationship between education and knowledge grew between 1993 and 2000. During this time, information proliferated across communication channels, increasing the volume of information available to the public; if there were education gradients in access to these sources, this would amplify the strength of the association between education and knowledge over time. These effects are especially important to note given that very little research examines temporal trends in risk perception, knowledge, and the relation of other factors to these trends.

Lack of knowledge of risk for a health problem may have considerable negative consequences for public health. The rates of lack of knowledge of HIV/AIDS risk observed in this study equate to between 2.5 million and 5 million US adults not knowing their risk for HIV. Given the relation between risk perception and preventive action, this lack of knowledge is potentially a substantial barrier to risk reduction (Waters, Kiviniemi, Orom, & Hay, 2016) and a possible explanation for the observation that the relation of risk perception to protective behaviour is often not as strong as one would expect (e.g., Brewer, Weinstein, Cuite, & Herrington, 2004; Brewer et al., 2007; Sheeran et al., 2014; Taber & Klein, 2016). The education-based disparities in knowledge of personal risk presented here and elsewhere (Waters et al., 2013) are quite important because the populations who display a lack of risk knowledge are exactly those populations who suffer a greater burden from many chronic and infectious diseases.

As discussed in the introduction, the context of HIV/AIDS between 1993 and 2000 provides an ideal test of the role of education as a driver of novel risk knowledge and knowledge gaps. There have, however, been revolutionary changes in information access, media delivery, the growth of social media, and related phenomena since the time period addressed in these surveys. Recent work on the knowledge gap hypothesis in the current media and information environment suggests that these technological changes have not altered the existence and persistence of knowledge gaps. Indeed, adoption and use of the Internet and related media technologies, and their resultant effects on knowledge, show knowledge gap effects similar to those found in earlier work with more traditional media environments (e.g., Kim, 2008; Neter & Brainin, 2012; Wei & Hindman, 2011), and meta-analytic examinations show that the magnitude of knowledge gap effects is not affected by the year in which a study was conducted (Hwang & Jeong, 2009). These more recent findings suggest that the phenomena demonstrated in the results reported here have ongoing relevance for understanding health knowledge and risk perception even in the current media and technology environment.

Understanding the mechanisms underlying don’t know responding

The results reported here and the existing body of research on correlates of don’t know responses yield insights into the mechanisms underlying don’t know responding to risk perception questions. At first blush, the data reported here could lead to the conclusion that don’t know responses are centrally a phenomenon of education. We would argue that this is true, but as a distal causal factor. Although identifying distal causes is important, we would argue that from the point of view of both understanding and addressing knowledge gaps, it is critical to elucidate the proximal mechanisms, such as health literacy and cultural accessibility of messages, that more directly shape don’t know responses.

Don’t know responses may arise from factors related to lack of health knowledge – possession of knowledge about health issues, the ability and motivation to acquire knowledge when needed to understand a health issue and to form a risk perception, and access to quality health information. Hay et al. (2015) found that individuals who answered don’t know when asked about their risk for cancer had lower levels of knowledge about both cancer risk factors and cancer screening options, in addition to finding that don’t know responses were strongly associated with education level. Consistent with this knowledge-based interpretation of our results, Salmon et al. (1996) reported that there were persistent education-related disparities in correct knowledge of routes of HIV transmission between the years 1987 and 1990 (see also Wanta & Elliott, 1995).

Over and above existing knowledge, don’t know responding is likely affected by differences in motivation and ability to acquire health information when needed. For example, people with lower education and other socioeconomic disparities may be more concerned about reducing more immediate risks such as food insecurity than reducing a more distant risk such as heart disease (Haushofer & Fehr, 2014; Johnson, 1997), leading to a greater likelihood of don’t know responses. Similarly, because poverty has been associated with chronic cognitive load (Mani, Mullainathan, Shafir, & Zhao, 2013) and may therefore lead to fewer cognitive resources available for health information seeking and information processing, ability may also play a role in health information acquisition (Lenzner, Kaczmirek, & Lenzner, 2010; Shoemaker, Eichholz, & Skewes, 2002). Finally, education and other markers of socioeconomic status might covary with the sources, amount, and quality of information available to the individual (Johnson, 1997; Tichenor et al., 1970; Viswanath & Finnegan, 1996).

Interventions to address lack of risk perceptions – cautionary notes

Interventions to address the knowledge deficits that lead to not knowing health risks need to take seriously both the individual-level factors (e.g., motivation, ability, defensive processing) and the broader structural issues (e.g., lack of access to high-quality health information) that lead to such deficits. The knowledge gap pattern found in our analyses necessitates an important cautionary note in considering implications for interventions. A common tendency when lack of knowledge and awareness contributes to public health problems is to invest in public education and publicity campaigns to raise awareness. Such a strategy is not inherently problematic, but it is not in and of itself a sufficient response to the knowledge gaps and the education–don’t know responding relation reported here.

As a response to knowledge gaps, publicity potentially has critical limitations. There is evidence that knowledge gaps can actually be greater for more publicized than for less publicized issues (Gaziano & Horowitz, 2001). Furthermore, knowledge gaps can persist over time for issues where public health educational efforts and media exposure are highly saturated, such as the relation between cigarette smoking and lung cancer (Viswanath et al., 2006). Publicity can widen rather than narrow knowledge gaps if the channels through which messages are disseminated are such that higher socioeconomic status individuals are more likely to be exposed to the message, or if the content of the message is such that individuals with lower levels of education are less able to comprehend and use it (Jenssen, 2013; Viswanath & Finnegan, 1996).

These constraints on the effectiveness of publicity as a means of addressing knowledge deficits create a considerable public health communication challenge, but one which can be addressed. Dealing with individual-level information acquisition and information-seeking skills can aid in overcoming lack of knowledge when information is available. Addressing issues of access to information and of access to channels through which information is disseminated can address the underlying structural issues that create and expand knowledge gaps. Finally, within a particular health issue context and in the context of a particular target community, conducting fine-grained assessments of why there are disparities in access to information, channels, and other communication factors should be a standard part of health communications efforts to address lack of risk perception. For example, the promotora model illustrates that lay health workers who interact closely with a population in need may be particularly effective in increasing health knowledge, access, and healthy behaviour change (Elder, Ayala, Parra-Medina, & Talavera, 2009). The bottom line is that a simple, broad mass media information dissemination approach is unlikely to adequately address the knowledge gaps in risk perception that we document here.

Limitations

As with all secondary analyses, the constructs that can be analysed are limited to those in the survey. BRFSS does not include measures of the intervening mechanisms that we hypothesize to be related to the don’t know response pattern. As discussed above, we believe that the analyses reported here and the past literature provide strong support for these mechanisms. However, given the limitations of the secondary analysis, these conclusions are inferences. Direct examination of such hypothesized mechanisms, such as knowledge of risk factors, would be a powerful direction for future work, as would examination of other possible explanations for the effects (e.g., defensive processing, fatalism). In addition, there are other individual difference variables, such as sexual orientation, that might plausibly have influenced knowledge of HIV risk in these survey years; however, we are limited to examining only those variables included in the BRFSS surveys. Finally, in relation to examining possible mechanisms and covarying influences, the ability to detect such influences depends on the prevalence of don’t know responding and on the statistical power available for a categorical outcome with that prevalence. Although this is not a direct limitation of the results reported here, it is important to note for future work.

In addition, the nature of the BRFSS study design and implementation is that each year’s survey sample is separately drawn from the population using a system that provides a population-representative sample for that year. In the context of our analyses, this provides confidence that the estimates reported for the prevalence of don’t know responding and for the strength of its relation to education in each study year are valid representations of those parameters in the US adult population for that year. However, it does mean that the across-year comparisons are, of necessity, based on those population-level estimates. Although there are well-developed techniques for examining change over time in such survey designs (Korn & Graubard, 1999), it is important to note that the analyses are based on separate and distinct yearly samples and not on a single cohort that is followed over time.

Conclusion

Risk perception and knowledge of one’s risk for a given health problem are dynamic phenomena that can shift over time. The individual and structural forces that impact and shape knowledge of risk are fluid. Understanding and addressing the mechanisms that lead to the education-risk perception relation are critical to addressing public health problems impacted by lack of risk knowledge and, therefore, behavioural action to reduce risk.

Statement of contribution.

What is already known on this subject?

  • A meaningful portion of the population answers ‘don’t know’ when asked to report their risk for health problems, indicating a lack of risk perception in the domain.

  • Previous studies have shown that level of education is associated with don’t know responding – those with lower educational attainment are more likely to respond don’t know.

  • The education-don’t know responding relation suggests that lack of health information and health domain knowledge might be a factor in lacking risk perception, but this mechanism has not been previously tested.

What does this study add?

  • Patterns of changes in don’t know responding over time as population-level knowledge of a health risk increases are consistent with the health information/health knowledge hypothesis outlined above.

  • As population knowledge of HIV/AIDS risk in the United States increased over time (indicated by declining overall rates of don’t know responses), the relation of education level to don’t know responding actually became stronger.

  • The pattern of change over time is the classic ‘knowledge gap hypothesis’ pattern, which has not been previously demonstrated for knowledge of personal health risk. The knowledge gap response pattern supports the health information/health knowledge hypothesis.

References

  1. Brewer NT, Chapman GB, Gibbons FX, Gerrard M, McCaul KD, Weinstein ND. Meta-analysis of the relationship between risk perception and health behavior: The example of vaccination. Health Psychology. 2007;26:136–145. https://doi.org/10.1037/0278-6133.26.2.136
  2. Brewer NT, Weinstein ND, Cuite CL, Herrington JE Jr. Risk perceptions and their relation to risk behavior. Annals of Behavioral Medicine. 2004;27:125–130. https://doi.org/10.1207/s15324796abm2702_7
  3. Bruine de Bruin W, Fischhoff B, Millstein SG, Halpern-Felsher BL. Verbal and numerical expressions of probability: ‘It’s a fifty-fifty chance’. Organizational Behavior and Human Decision Processes. 2000;81(1):115–131. https://doi.org/10.1006/obhd.1999.2868
  4. Centers for Disease Control. Current trends update on Acquired Immune Deficiency Syndrome (AIDS)–United States. Morbidity and Mortality Weekly Report. 1982;24:31–37.
  5. Centers for Disease Control. 1995 BRFSS summary quality control report. Atlanta, GA: 1995. Retrieved from http://www.cdc.gov/brfss/annual_data/1995/1995SummaryDataQualityReport.pdf
  6. Centers for Disease Control. 2000 Behavioral Risk Factor Surveillance System summary data quality report. Atlanta, GA: 2000. Retrieved from http://www.cdc.gov/brfss/annual_data/2000/2000SummaryDataQualityReport.pdf
  7. Centers for Disease Control and Prevention. Twenty-five years of HIV/AIDS–United States, 1981–2006. MMWR Morbidity and Mortality Weekly Report. 2006;55:585.
  8. Centers for Disease Control and Prevention. The BRFSS data user guide. Atlanta, GA: 2013. Retrieved from http://www.cdc.gov/brfss/data_documentation/PDF/UserguideJune2013.pdf
  9. Elder JP, Ayala GX, Parra-Medina D, Talavera GA. Health communication in the Latino community: Issues and approaches. Annual Review of Public Health. 2009;30:227–251. https://doi.org/10.1146/annurev.publhealth.031308.100300
  10. Friedman-Kien A, Laubenstein L, Marmor M, Hymes K, Green J, Ragaz A, Weintraub M. Kaposi's sarcoma and Pneumocystis pneumonia among homosexual men–New York City and California. MMWR Morbidity and Mortality Weekly Report. 1981;30(25):305–308.
  11. Garcés-Palacio IC, Scarinci IC. Factors associated with perceived susceptibility to cervical cancer among Latina immigrants in Alabama. Maternal and Child Health Journal. 2012;16(1):242–248. https://doi.org/10.1007/s10995-010-0737-x
  12. Gaziano C, Horowitz AM. Knowledge gap on cervical, colorectal cancer exists among US women. Newspaper Research Journal. 2001;22(1):12. https://doi.org/10.1177/073953290102200102
  13. Haushofer J, Fehr E. On the psychology of poverty. Science. 2014;344:862–867. https://doi.org/10.1126/science.1232491
  14. Hay JL, Orom H, Kiviniemi MT, Waters EA. “I don’t know” my cancer risk: Exploring deficits in cancer knowledge and information-seeking skills to explain an often-overlooked participant response. Medical Decision Making. 2015;35:436–445. https://doi.org/10.1177/0272989X15572827
  15. Henry J. Kaiser Family Foundation. The global HIV/AIDS epidemic: A timeline of key milestones. 2008. Retrieved from http://kff.org/hivaids/timeline/global-hivaids-timeline/
  16. Hwang Y, Jeong SH. Revisiting the knowledge gap hypothesis: A meta-analysis of thirty-five years of research. Journalism & Mass Communication Quarterly. 2009;86:513–532. https://doi.org/10.1177/107769900908600304
  17. Jenssen AT. Widening or closing the knowledge gap? Nordicom Review. 2013;33(1):19–36.
  18. Johnson JD. Cancer-related information seeking. Cresskill, NJ: Hampton Press; 1997.
  19. Kim SH. Testing the knowledge gap hypothesis in South Korea: Traditional news media, the internet, and political learning. International Journal of Public Opinion Research. 2008;20:193–210. https://doi.org/10.1093/ijpor/edn019
  20. Koop CE. Surgeon General’s report on acquired immune deficiency syndrome. Public Health Reports. 1987;102(1):1.
  21. Korn EL, Graubard BI. Analysis of health surveys. New York, NY: Wiley Interscience; 1999. https://doi.org/10.1002/9781118032619
  22. Lenzner T, Kaczmirek L, Lenzner A. Cognitive burden of survey questions and response times: A psycholinguistic experiment. Applied Cognitive Psychology. 2010;24:1003–1020. https://doi.org/10.1002/acp.1602
  23. Lipkus IM, Rimer BK, Strigo TS. Relationships among objective and subjective risk for breast cancer and mammography stages of change. Cancer Epidemiology, Biomarkers & Prevention. 1996;5:1005–1011.
  24. Mani A, Mullainathan S, Shafir E, Zhao J. Poverty impedes cognitive function. Science. 2013;341:976–980. https://doi.org/10.1126/science.1238041
  25. Neter E, Brainin E. eHealth literacy: Extending the digital divide to the realm of health information. Journal of Medical Internet Research. 2012;14(1):e19. https://doi.org/10.2196/jmir.1619
  26. Niederdeppe J. Beyond knowledge gaps: Examining socioeconomic differences in response to cancer news. Human Communication Research. 2008;34:423–447. https://doi.org/10.1111/j.1468-2958.2008.00327.x
  27. Salmon CT, Wooten K, Gentry E, Cole GE, Kroger F. AIDS knowledge gaps: Results from the first decade of the epidemic and implications for future public information efforts. Journal of Health Communication. 1996;1:141–156. https://doi.org/10.1080/108107396128112
  28. Sheeran P, Harris PR, Epton T. Does heightening risk appraisals change people’s intentions and behavior? A meta-analysis of experimental studies. Psychological Bulletin. 2014;140:511–543. https://doi.org/10.1037/a0033065
  29. Shoemaker PJ, Eichholz M, Skewes EA. Item nonresponse: Distinguishing between don’t know and refuse. International Journal of Public Opinion Research. 2002;14:193–201. https://doi.org/10.1093/ijpor/14.2.193
  30. Taber JM, Klein WMP. The role of conviction in personal disease risk perceptions: What can we learn from research on attitude strength? Social and Personality Psychology Compass. 2016;10:202–218. https://doi.org/10.1111/spc3.12244
  31. Tichenor PJ, Donohue GA, Olien CN. Mass media flow and differential growth in knowledge. Public Opinion Quarterly. 1970;34:159–170. https://doi.org/10.1086/267786
  32. US Census Bureau. Resident population estimates of the United States by age and sex: April 1, 1990 to July 1, 1999. 2001. Retrieved from www.census.gov/population/estimates/nation/intfile2-1.txt
  33. U.S. Department of Health & Human Services. 30 years of HIV/AIDS timeline. 2011. Retrieved from https://www.aids.gov/hiv-aids-basics/hiv-aids-101/aids-timeline/
  34. Viswanath K, Breen N, Meissner H, Moser RP, Hesse B, Steele WR, Rakowski W. Cancer knowledge and disparities in the information age. Journal of Health Communication. 2006;11:1–17. https://doi.org/10.1080/10810730600637426
  35. Viswanath K, Finnegan JR Jr. The knowledge gap hypothesis: Twenty-five years later. Communication Yearbook. 1996;19:187–227.
  36. Viswanath K, Kahn E, Finnegan JR, Hertog J, Potter JD. Motivation and the knowledge gap effects of a campaign to reduce diet-related cancer risk. Communication Research. 1993;20:546–563. https://doi.org/10.1177/009365093020004003
  37. Wanta W, Elliott WR. Did the “magic” work? Knowledge of HIV/AIDS and the knowledge gap hypothesis. Journalism & Mass Communication Quarterly. 1995;72:312–321. https://doi.org/10.1177/107769909507200205
  38. Waters EA, Hay JL, Orom H, Kiviniemi MT, Drake BF. “Don’t know” responses to risk perception measures: Implications for underserved populations. Medical Decision Making. 2013;33:271–281. https://doi.org/10.1177/0272989X12464435
  39. Waters EA, Kiviniemi MT, Orom H, Hay JL. “I don’t know” my cancer risk: Implications for health behavior engagement. Annals of Behavioral Medicine. 2016;50:784–788. https://doi.org/10.1007/s12160-016-9789-5
  40. Wei L, Hindman DB. Does the digital divide matter more? Comparing the effects of new media and old media use on the education-based knowledge gap. Mass Communication and Society. 2011;14:216–235. https://doi.org/10.1080/15205431003642707
