Abstract
Security fatigue has been used to describe users’ weariness with the demands of online security. This study identifies the affective manifestations resulting from decision fatigue and the role such fatigue plays in users’ security decisions.
I think I am desensitized to it—I know bad things can happen. You get this warning that some virus is going to attack your computer, and you get a bunch of emails that say don’t open any emails, blah, blah, blah. I think I don’t pay any attention to those things anymore because it’s in the past. People get weary of being bombarded by “watch out for this or watch out for that”
(participant 101).
Over and over again today, people are bombarded with messages about the dangers lurking on the Internet, about security breaches in major corporations1 and the US government, and about the need to be constantly attentive while online. To combat these dangers, users are forced to update passwords, run antivirus software programs, and accept unwieldy terms of agreement, often without a clear understanding of why and to what end. One result is that people reach a saturation point and become inured to the issue of cybersecurity.2
Here, we argue that people have indeed reached this saturation point, one that results in what Steven Furnell and Kerry-Lynn Thomson call “security fatigue.”3 They propose that “there is a threshold at which it simply gets too hard or burdensome for users to maintain security.”3 When this fatigue happens, people become “desensitized” and “get weary,” as participant 101 from our study notes at the beginning of the article.
We define fatigue as a type of weariness, a reluctance to see or experience any more of something. When these feelings are related to security, we use the term security fatigue. Although other factors, such as vigilance and loss of control, might contribute to security fatigue, this article focuses on the role that decision fatigue plays and the affective manifestations resulting from it. This weariness often manifests as resignation or a loss of control in people’s responses to online security. People are told they need to be constantly on alert, constantly “doing something,” but they are not even sure what that something is or what might happen if they do or do not do it. Security fatigue, and the resignation and loss of control associated with it, certainly present a challenge to efforts aimed at promoting online security and protecting online privacy.
Furnell and Thomson identify security fatigue as a concept related to people’s experiences with online security in the workplace. Our work here examines security fatigue among the general public (average users, not security experts or IT professionals), presenting empirical evidence for its existence in their everyday lives and technology usage. We argue that security fatigue is one cost that users experience when bombarded with security messages, advice, and demands for compliance—and that this cost often results in what security experts might consider less secure online behavior. Such work can help advance our understanding of how the public thinks about and approaches cybersecurity, and can provide us with a better understanding of how to help users be more secure in their online interactions.
Background Literature
Alessandro Acquisti and Jens Grossklags argue that bounded rationality4 limits our ability to acquire and then apply information in the online privacy and security space.5 Several factors can limit our rationality: the amount of information we can process, our minds’ cognitive limitations, the time we have available to make a decision, incomplete information, and systematic psychological deviations from rationality. From this perspective, individuals generally have access to limited information, and even with complete information, are often unable to act in optimal ways given the vast amount of data they need to process. In addition, Acquisti and Grossklags argue that even if individuals had complete information, they might still not make rational decisions because they are “influenced by motivational limitations and misrepresentations of personal utility.”5
Another limit to rationality related to online privacy and security is what Adam Beautement, M. Angela Sasse, and Mike Wonham call the “compliance budget.”6 They argue that employees engage in a cost-benefit analysis in which they weigh the advantages and disadvantages of compliance. Once this budget is exhausted, people stop complying or find ways to work around compliance, even as they are confronted with additional security policies and requirements. This work clearly addresses security behaviors in organizations, where workers are expected to comply with workplace policies and practices related to online security. Because there are no clear security policies or best practices for the general public to follow, the compliance budget does not seem to apply in the same way it does in the workplace. Anne Adams and Sasse argue that, in fact, many security policies promote an adversarial relationship with users,7 putting them in what Cormac Herley calls an “impossible compliance regime.”8
All this research deals with limits to the decision-making process. The ability to make decisions, like the compliance budget, is a finite resource. Decision fatigue occurs when individuals are inundated with choices and asked to make more decisions than they can process.9 The result is that they often make irrational tradeoffs, avoid making decisions, and have impaired self-regulation.10 These indicators of decision fatigue are also present in the cybersecurity space. Amos Tversky and Daniel Kahneman argue that when people are fatigued, they fall back on heuristics and cognitive biases when making decisions.11 In our previous work based on this dataset,12 we discovered that participants often relied on multiple mental models that were partially formed and incomplete to help them make sense of and negotiate their experiences with online privacy and security. These mental models reflect the use of heuristics and cognitive biases previously identified by Tversky and Kahneman. We see security fatigue as yet another influence on the decision-making process: a subset of decision fatigue, but in a different domain.
Methods
Data for this article are part of a larger qualitative study related to average users’ (not computer experts’) perceptions and beliefs about cybersecurity and online privacy. Data collection took place from January to March 2011, and included 40 semistructured interviews (see Table 1 for participant demographics). Participants came from urban, rural, and suburban areas and from a range of employment settings.
Table 1. Demographic data about study participants.
| Demographic | Washington DC metropolitan area | Central Pennsylvania |
|---|---|---|
| Men | 11 | 7 |
| Women | 16 | 6 |
| Age | 21–59 | 21–60+ |
| Participant numbers | 101–127 | 201–212 |
The semistructured protocol asked questions about online activities, computer security, security icons and tools, and security terminology. Participants also answered demographic questions and provided a self-assessment of their computer knowledge. We piloted the protocol with a small group of participants to assess the questions’ face validity and language appropriateness, adjusting the instrument based on feedback from the pilot. When all interviews were completed, one of the researchers on the team transcribed them, along with field notes from the interviews. The question categories that the protocol used to elicit participant knowledge, behaviors, and emotions related to online activity and cybersecurity were as follows:
- list of, frequency of, reasons for, security needs of, security tools for, and feelings about online activities
- definition of, knowledge about, reasons for, levels of, training in, use of, and feelings about computer security
- identification of, knowledge about, beliefs about, use of, and feelings about security icons and tools
- familiarity with, knowledge about, and understanding of security terminology
Data analysis began with the development of an a priori code list (informed by the literature and our knowledge of the field) constructed by the research team. We operationalized all codes and then each researcher worked with a subset of interviews to determine inter-coder reliability. We met regularly as a team to discuss codes and their application and to revise the code list as needed. Once we reached agreement on the codes and their operationalization, we continued to code interviews independently, with each interview being coded by at least two researchers. We continued to meet to discuss the coding process until we reached saturation—the point at which no new properties or dimensions emerged from the coding process.13 At this point, we shifted from coding to analysis, discussing relationships in the data and among the codes. We wrote memos individually and shared ideas related to our interpretation of the data and codes. This iterative and recursive analytic process provided opportunities for interdisciplinary discussions and alternative explanations—especially important in this multidisciplinary team-based study.
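Our inter-coder reliability was established through discussion and consensus rather than a statistical coefficient. As a purely illustrative aside, agreement between two coders over a shared set of excerpts is often quantified with Cohen’s kappa; the minimal sketch below, using hypothetical codes and excerpt counts, shows how such a check could be run.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels over the same excerpts."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of excerpts both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum((counts_a[label] / n) * (counts_b[label] / n)
                   for label in set(coder_a) | set(coder_b))
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two researchers to ten interview excerpts.
coder_1 = ["fatigue", "risk", "fatigue", "control", "fatigue",
           "risk", "risk", "fatigue", "control", "fatigue"]
coder_2 = ["fatigue", "risk", "risk", "control", "fatigue",
           "risk", "fatigue", "fatigue", "control", "fatigue"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # kappa = 0.68
```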
Although the interviews did not specifically address security fatigue, we began to notice many instances in which fatigue surfaced as participants discussed their perceptions and beliefs about online privacy and security. When we recoded the data for security fatigue, it surfaced in 25 out of 40 interviews, making it one of the most consistent codes across the dataset. We then refined our analysis and examined the data for contributing factors, symptoms, and outcomes of fatigue. When compiled, there were more than eight single-spaced pages of data related to security fatigue. Fatigue permeated the data and tinged it with a tremendous negativity, often expressed as resignation or loss of control. We explore these responses next, as well as how security fatigue might in fact be the result of a “bad cost-benefit tradeoff to users.”8
Results
Adopting security advice is an ongoing cost that users experience and that contributes to security fatigue. The result is that users are likely to ignore such advice and engage in behaviors that experts believe put them at risk. From the perspective of a cybersecurity expert, this behavior is often seen as irrational. Yet we argue that when examined from the lens of security fatigue, these behaviors (and the beliefs that drive them) make much more sense. Users are tired of being overwhelmed by the need to be constantly on alert, tired of all the measures they are asked to adopt to keep themselves safe, and tired of trying to understand the ins and outs of online security. All of this leads to security fatigue, which causes a sense of resignation and a loss of control. Our data clearly demonstrate the manifestations of security fatigue as a specific example of decision fatigue,10 including
- avoiding unnecessary decisions,
- choosing the easiest available option,
- making decisions driven by immediate motivations,
- choosing to use a simplified algorithm,
- behaving impulsively, and
- feeling resignation and a loss of control.
Security fatigue, like decision fatigue, occurs when individuals are asked to make more decisions than they can process, depleting their resources and resulting in the behaviors and emotions just listed. In previous work, we discussed how these were linked to the public’s use of multiple mental models that are incomplete and often contradictory.12 In the following sections, we present evidence of security fatigue in the general public, using direct quotes from our participants rather than our synthesis of them, because such quotes are critical in qualitative research for understanding the data and their implications.
I Get Tired Just Thinking about It
More than half of our participants alluded to fatigue in one way or another during the course of their interview, even though fatigue was not a direct part of the interview protocol. The quote from participant 101 used at the beginning of this article highlights this idea succinctly: “People get weary of being bombarded by ‘watch out for this or watch out for that.’” A little further on in the interview, he states, “it also bothers me when I have to go through more additional security measures to access my things, or get locked out of my own account because I forgot [and] I accidentally typed in my password incorrectly.” When discussing security, he uses words such as “irritating,” “annoying,” and “frustrating,” which all contribute to an overarching sense of the fatigue he experiences in the online environment. Other participants articulate similar positions, whether speaking about passwords, antivirus software, or security in general:
I get tired of remembering my username and passwords
(participant 204).
If you give me too many more blocks, I am going to be turned off. My [XXX] site, first it gives me a login, then it gives me a site key I have to recognize, and then it gives me a password. So that is enough, don’t ask me anything else
(participant 109).
[Security] seems to be a bit cumbersome, just something else to have and keep up with
(participant 117).
There is the firewalls, and Norton, and there is this and antivirus, and run your checkup, and so many things that you can do, I just get overwhelmed
(participant 108).
Participants are “tired,” “turned off,” and “overwhelmed” by it all. This overarching fatigue factors into their cost-benefit analysis, and the result is that many reject security advice or practices that they realize might protect them more, in part because they are driven by immediate motivations—usually related to the completion of their primary task. For example, participant 101 recognizes that some of his behaviors are not what they should be (“I am lazy about my passwords, and I use the same ones; I know they should have random numbers and letters”), but chooses them anyway, because following the security advice “just makes things more difficult.” For these participants, security is cumbersome, difficult, and altogether overwhelming, and they choose to follow practices that make things easier and less complicated. In many ways, they are making what seem to be irrational tradeoffs,10 although Herley would argue that, in fact, the tradeoffs make sense economically in terms of users’ cost-benefit analyses.8
In addition, when people are fatigued, they are prone to fall back on heuristics and cognitive biases when making decisions.11 Our data show that, based on their experience, users have several cognitive biases that result from security fatigue:
- they personally are not at risk (they have nothing of value that someone else would want);
- someone else is responsible for security, and if targeted, they will be protected (not their responsibility); and
- no security measure that they put in place will really make a difference (large corporations and the government cannot even protect themselves, so how can any one individual?).
When participants expressed these beliefs, it was often with a sense of resignation or a sense that they had no control over the situation.
Why Would I Be Targeted?
Our data show participants often feel they are not personally at risk—they are not important enough for anyone to care about their information. There is an edge of frustration to these data, as if participants are tired of hearing about having to protect themselves when they do not feel at risk. The frustrated tone, minimization of risk, and devaluing of information are evident in the following participant comments:
I don’t see what of value I have on there that would make a difference
(participant 110).
It doesn’t appear to me that it poses such a huge security risk. I don’t work for the state department, and I am not sending sensitive information in an email. So, if you want to steal the message about [how] I made blueberry muffins over the weekend, then go ahead and steal that
(participant 108).
If someone needs to hack into my emails to read stuff, they have problems. They need more important things to do
(participant 119).
I am not working for the government like you are, where everything is top secret and important. If [my data] is stolen or hacked into, no big deal
(participant 112).
This often results in avoiding decisions related to security or in choosing the easiest available option.
In addition, participants who have not experienced a problem themselves, or who do not know others who have experienced any security issues, are prone to ignore advice in spite of recognizing that threats exist, as in the following quote:
I know that the risk exists, and my security can be compromised, and my information can be stolen. But I don’t hear it happening often to my peers. I haven’t heard horror stories of anyone getting my email
(participant 101).
Again, participants are likely to avoid making decisions related to security because they do not feel at risk.
Whose Job Is It, Anyway?
Many participants relegate online security to another source, whether that is their bank, the store or site with which they are interacting, or someone with more expertise. This represents a form of decision avoidance, another characteristic of decision fatigue.10 The following participant comments clearly articulate that they are relying on others to take care of them:
There seems to be so many updates. I would like to think that Norton is supposed to be handling that security. I don’t want to do all those updates
(participant 112).
It is up to the banks to make sure they protect your information
(participant 115).
[Security is] something that I rely on somebody else to take care of
(participant 206).
For these participants, security is not really something they want to do, they feel comfortable doing, or for which they feel responsible. Instead, it is something to be relegated to others who are more capable or more responsible. Many participants seem resigned to the idea that cybersecurity is something they are not going to understand and that if it were up to them, they would not be secure. Having another entity responsible for their online security makes them feel safer, even though they recognize it as a loss of control.
Will It Really Matter?
Participants often articulate a position that no matter what they do, it will never be enough, so why do it? In part, this stems from the fact that they do not see the direct benefit from the behavior. In addition, many participants recognize that major institutions (banks, stores, and the government) have experienced security breaches. If these large, wealthy organizations cannot protect themselves online, how can any one individual be expected to do so? Again, participant 101 provides insight here:
If I took all the security measures possible, and I made my password d3121, unlike scissors90, is it going to make all that difference? I don’t have to be vigilant all the time. If it is going to happen, it is going to happen
(participant 101).
There is a sense of fatalism here, as if something will eventually happen no matter what a person does. The following three participants clearly expressed this same sense:
If someone wants to get into my account, they are going to get into my account
(participant 116).
It is not the end of the world. If something happens it is going to happen
(participant 119).
I haven’t kept up with the latest and greatest software applications for antivirus or antispyware. I know there are risks, and there are ways to prevent some of the risks. I don’t know that I feel completely comfortable that all of the risks are completely avoidable or that everything can be blocked. You hear that things—I don’t even know how real or threatening threats are. Don’t open this, don’t open that, there is a new worm or Trojan horse. There is a lot of information and there may be a lot of misinformation. And I tried—it is all behind me, and I cannot ever secure my computer. There is a lot to keep up with
(participant 117).
In the physical world, we lock our doors to protect ourselves, and we know it works because we know when someone has broken into our space. However, with cybersecurity, we might not know when the attempts have been made and thwarted, perhaps leading to the idea of misinformation as expressed by participant 117 in the previous quote.
Users feel inundated by the tasks, choices, and work that security decisions require of them and are unsure that compliance actually makes them any more secure. Whatever they do, it is never quite enough. Herley agrees, noting that “users ignore new advice for several reasons. First, they are overwhelmed. Given the sheer volume of advice offered, no user has any real prospect of keeping up.”8 Things are constantly changing, both in terms of the security advice they receive and the tactics used by those who want to violate their security. As one participant put it, “I think I am probably two weeks behind those who are out there to try and break into computers, forever two weeks behind” (participant 107).
While users’ cybersecurity behavior is often portrayed as irrational, in fact it might be quite rational and reflect an astute cost-benefit analysis that results in users choosing to ignore “complex security advice that promises little and delivers less.”8 We argue that users experience a sense of security fatigue that also contributes to this cost-benefit analysis and reinforces their ideas about the lack of benefit for following security advice. From this perspective, we in the IT community need to rethink the way we currently conceptualize the public’s relationship to cybersecurity. Current mental models that position cybersecurity as something that is not worth the effort will be challenging, if not impossible, to change. Yet, as IT professionals, it is our responsibility to take up this challenge and work to alleviate the security fatigue users experience.
Our data provide evidence for three specific ways to minimize security fatigue and help users adopt more secure online practices: limit the decisions users have to make related to security; make it easy for users to do the right thing related to security; and provide consistency (whenever possible) in the decisions users need to make. For example, consider a work environment that offers different ways for users to log into the system. The first is the traditional username and password. The second is a personal identity verification (PIV) card. The card is easier for the user and more secure than the alternative. Thus, the PIV option should show up as the default login option; it should not be difficult to access. This is but one example of a way to alleviate decisions that cause security fatigue and make it easier for the user to do the right thing.
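To make the “secure default” idea concrete, here is a minimal sketch of how a login screen might pick which method to present first. The method names, ranking scale, and tie-breaking rule are illustrative assumptions, not a prescribed implementation; the point is simply that the most secure, least burdensome option should surface without requiring an extra decision from the user.

```python
from dataclasses import dataclass

@dataclass
class LoginMethod:
    name: str
    security_rank: int  # higher means more secure (illustrative scale)
    user_effort: int    # higher means more burdensome (illustrative scale)

# Hypothetical methods for a workplace system like the one described above.
METHODS = [
    LoginMethod("username/password", security_rank=1, user_effort=3),
    LoginMethod("PIV card", security_rank=3, user_effort=1),
]

def default_login_method(methods):
    """Surface the most secure method first, breaking ties by least effort,
    so doing the right thing requires no extra decision from the user."""
    return max(methods, key=lambda m: (m.security_rank, -m.user_effort))

print(default_login_method(METHODS).name)  # -> PIV card
```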
As we design security solutions, we must be conscious of those areas that cause users to experience fatigue, so they do not become resigned and complacent or feel a loss of control related to their online security. We must also continue to investigate users’ beliefs, knowledge, and use of cybersecurity advice and the factors, such as security fatigue, that inform them, so we can ultimately provide more benefit and less cost for adopting cybersecurity advice that will keep users safer online.
Biographies
Brian Stanton is a cognitive scientist in the Visualization and Usability Group at the US National Institute of Standards and Technology. He works on the Common Industry Format project developing usability standards and investigates usability and security issues ranging from password rules and analysis to privacy concerns. Stanton has also worked on biometric projects for the US Department of Homeland Security and the Federal Bureau of Investigation’s Hostage Rescue Team, and with latent fingerprint examiners. He received an MS in cognitive psychology from Rensselaer Polytechnic Institute. Contact him at brian.stanton@nist.gov.
Mary F. Theofanos is a computer scientist with the US National Institute of Standards and Technology’s Materials Measurement Laboratory. She performs research on usability and human factors of systems. Theofanos is the principal architect of the Usability and Security Program evaluating the human factors and usability of cybersecurity and biometric systems, and the convener of the ISO SC7/Working Group 28 on usability standards. She received an MS in computer science from the University of Virginia. Contact her at mary.theofanos@nist.gov.
Sandra Spickard Prettyman is an independent research consultant. She specializes in qualitative research methods, providing expertise in designing and implementing rigorous qualitative research projects. Prettyman was a professor at the University of Akron, where she taught doctoral courses in research methods and courses in social and philosophical foundations of education for graduate and undergraduate students. Contact her at sspretty50@icloud.com.
Susanne Furman is a cognitive scientist in the US National Institute of Standards and Technology’s Visualization and Usability Group. She works on and investigates usability for both cybersecurity and biometric devices for agencies such as the US Department of Homeland Security and the Federal Bureau of Investigation. Furman has worked at the US Department of Health and Human Services, and ran its usability program. She has a PhD in applied experimental psychology human factors from George Mason University. Contact her at susanne.furman@nist.gov.
References
- 1. McGraw G., “Security Fatigue? Shift Your Paradigm,” Computer, vol. 47, no. 3, 2014, pp. 81–83.
- 2. Viseu A., Clement A., and Aspinall J., “Situating Privacy Online: Complex Perceptions and Everyday Practices,” Information, Communication, and Society, vol. 7, 2004, pp. 92–114.
- 3. Furnell S. and Thomson K.L., “Recognising and Addressing ‘Security Fatigue,’” Computer Fraud and Security, Nov. 2009, pp. 7–11.
- 4. Simon H.A., “Theories of Bounded Rationality,” Decision and Organization, McGuire C.B. and Radner R., eds., North-Holland Publishing, 1972, pp. 161–176.
- 5. Acquisti A. and Grossklags J., “Privacy and Rationality in Individual Decision Making,” IEEE Security & Privacy, vol. 3, no. 1, 2005, pp. 26–33.
- 6. Beautement A., Sasse M.A., and Wonham M., “The Compliance Budget: Managing Security Behaviour in Organisations,” Proc. 2008 Workshop on New Security Paradigms, 2008, pp. 47–58.
- 7. Adams A. and Sasse M.A., “Users Are Not the Enemy,” Comm. ACM, vol. 42, no. 12, 1999, pp. 40–46.
- 8. Herley C., “So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users,” Proc. 2009 Workshop on New Security Paradigms, 2009, pp. 133–144.
- 9. Vohs K.D. et al., “Making Choices Impairs Subsequent Self-Control: A Limited-Resource Account of Decision Making, Self-Regulation, and Active Initiative,” J. Personality and Social Psychology, vol. 94, no. 5, 2008, pp. 883–898.
- 10. Oto B. and Limmer D., “When Thinking Is Hard: Managing Decision Fatigue,” EMS World, vol. 41, no. 5, 2012, pp. 46–50.
- 11. Tversky A. and Kahneman D., “Availability: A Heuristic for Judging Frequency and Probability,” Cognitive Psychology, vol. 5, no. 2, 1973, pp. 207–232.
- 12. Prettyman S.S. et al., “Privacy and Security in the Brave New World: The Use of Multiple Mental Models,” Human Aspects of Information Security, Privacy, and Trust, Springer, 2015, pp. 260–270.
- 13. Charmaz K., Constructing Grounded Theory: A Practical Guide through Qualitative Research, Sage Publications, 2006.
