The Psych of Sec
I recently gave this presentation at BSidesCT and have found that SlideShare does not agree with my sense of graphic design; besides, a slide deck alone often doesn't tell the full story of a presentation. So I am going to add the commentary I gave in person here and give you all a better picture of what was talked about.
Computer security starts and ends with people. People create the hardware, software, and processes, and people operate the Internet of Things. We are the reason we have these security problems, and we are also the reason things don't get done right or get abused. Our species, the tool user, has created a set of tools that outstrips our capacity to comprehend them en masse, let alone operate them securely as a whole. If you remember one thing from this talk, let it be this: we are the reason we can't have nice things, as they say today. We are the beginning and the end of the problem, and we must address it smartly to overcome it.
First, though, we will start with the biological makeup of the brain that causes the dissonance we are seeing today within the security community at large. The organic brain is at the root of many of our problems around security. We have a lump of brain matter whose various sections operate in different ways, and much of the time those sections are why we do not handle security tasks very well and why we are predisposed to certain types of failures. These predispositions can be overcome, but we have to work at improving our abilities of cognition as well as deal with the host of emotional and social issues that stem from our brains and our societal makeup.
Our brains are a wonder, and yet they are the product of an evolution that did not include computers from the start. The brain has some limitations where cognition is concerned, and you can see this in things like our inability to remember hard passwords, and in our long-term memory and learning processes. We have simply created a tool (the computer) that outstrips our capacity to retain and manipulate information, and so we have created shortcuts to ease our brain's burden. Unfortunately, at the same time, we have opened ourselves up to more insecurity because of the very tools we created to ease that workload.
Keeping all of the above in mind, let's look at the two primary actors in my presentation: the parts of the brain that come to bear on the issue. The first is the amygdala. The amygdala (shown above) is the part of the brain that deals with emotion as well as fight-or-flight responses. It is the more reactionary part and plays a key role in our ability to react to stimuli, as in needing to say, "That's a tiger and it's about to eat me. RUN!" The amygdala also flags emotionally charged events so that they get consolidated into long-term memory in another part of the brain. Overall, the amygdala is the reactive, knee-jerk section of the brain, while the next part I will cover is the more reasoned one, a part almost diametrically opposed to the amygdala: the prefrontal cortex.
While the amygdala was great for our ancestors on the great savannah, and still functions well for immediate threat responses, it is a rather poor organ for information security today. Where the amygdala deals with imminent threats to our lives, the prefrontal cortex (henceforth PFC) deals with more abstract things such as long-term threats and other kinds of reasoning. While the amygdala is freaking out at every little sound, the PFC says, "Wait, we've heard that before. It's a cat, so calm down."
Herein lies one of the primary reasons infosec today has so many issues. The brain, while really good at certain things that served us well in the past, is on average not well suited to long-term threats because of this dichotomy between the PFC and the amygdala. One of the primary functions of the amygdala is to take really bad things and ensure that the rest of the brain registers that they were bad and remembers them long term. An example of this would be 9/11. We all pretty much remember where we were and what we were doing when it happened. That is the amygdala processing something horrible into long-term memory because it was scary.
Now ponder your everyday computer security problems. Are they life or death? On average they are not, and so, without the huge scare factor, the memory engram isn't created quickly, if at all, because the PFC rationalizes that this is nothing to really fear and is less important than the other tasks it is being hit with through all the stimuli it gets daily. So you see that our physical makeup within the brain creates a certain cognitive dissonance toward the long-term and abstract security concepts we face every day in INFOSEC.
Cognitive bias is a factor that comes from the aforementioned fight between the amygdala and the PFC. Bias is a large part of why we fail so much in security on the people side of the equation, and it is systemic to the entirety of the problem. From the structural issues I just described, we get biases like "it won't happen to me." Unless a user has been hit hard with real effects from a security incident, the life-or-death kind, the brain on average just does not process it as a high priority to store in long-term memory, and that is a problem where we are concerned.
So unless the problems are fight-or-flight and life-or-death, we tend to get these biases of "it can't happen" or "it won't happen because it hasn't already happened." We are poor at looking at statistics and relating them to the probability that we will be victims of the same attack. This too is part of the brain's way of coping with day-to-day life. If we all feared going out for a walk because we thought we'd be attacked by a bear, then no one would go anywhere. It's the brain's ability to rationalize and normalize all of this that allows us to live our lives. It's a thin line, really, but it is one we have to address in security to maximize our ability to protect our data and perhaps our way of life, if you believe the hype.
How often have you heard users complain that passwords are too complex to remember? How many times have you heard those same users complain that, just as they have gotten to the point where they can remember a password, they are being forced to change it? These aren't just users being lazy. These are people who, like all of us, have the same brain makeup that inherently makes us poor at remembering these things.
As I said earlier, the brain does not do a great job with long-term threats versus fight-or-flight, and the same goes for what we might call memory leaks. Unless you train your brain, or you are a savant, the average person is not going to remember a long, non-standard, multi-character-class password. It's just a function of the brain. So we will have people who take the shortcut of writing it down, and as tool users we really should just use a password safe on our smartphones, right? Well, then you have ANOTHER password to remember!
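One way to square memorability with strength is a randomly generated passphrase: a handful of common words is far easier for a brain to hold than a string of symbols, yet the math can still work out. A minimal sketch in Python (the tiny word list here is purely illustrative; a real Diceware-style list has 7,776 words):

```python
import math
import secrets

# Illustrative word list only; a real Diceware list has 7,776 entries.
WORDS = ["correct", "horse", "battery", "staple", "tiger", "savannah",
         "cortex", "amygdala", "engram", "heuristic", "mores", "petri"]

def passphrase(n_words=4, wordlist=WORDS):
    """Pick n_words uniformly at random and join them with spaces."""
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

def entropy_bits(n_words, wordlist_size):
    """Entropy of a uniformly chosen passphrase, in bits."""
    return n_words * math.log2(wordlist_size)

print(passphrase())
# Four words from a full 7,776-word list:
print(f"{entropy_bits(4, 7776):.1f} bits")  # ≈ 51.7 bits
```

The point is that the brain-friendly form (four words) and the machine-friendly property (entropy) are not in conflict, so long as the words are chosen randomly rather than by the user.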
The brain loves shortcuts and heuristics, and changing passwords constantly while making them difficult is anathema to the way the brain operates. The same goes for HCIs (human-computer interfaces) in general. Take a serious look at Windows and you will see just how poorly it is designed for doing things with the operating system effectively. There are too many flaming hoops for common users and their brains to jump through, so they just go with what works until it breaks. They don't get under the hood, because once you do that, you are on the road to becoming a specialist.
HCIs should be designed more simply, to let users follow a process and really be able to handle their systems. Apple does this pretty well on the OS side, but remember that even they have this issue, because the technology is really too complex to simplify everything into usable bytes for the average end user to truly own their system. It's not that these users are lazy per se, but really, do you have to be an engineer to set up a firewall?
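It doesn't have to be that way. As one example of what a lower-friction interface can look like (assuming an Ubuntu host with `ufw`, the "Uncomplicated Firewall", installed and root access), a sane default host firewall reduces to a few readable commands:

```shell
# Deny all inbound traffic by default, allow all outbound.
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Punch holes only for the services actually in use.
sudo ufw allow ssh        # TCP 22
sudo ufw allow 443/tcp    # HTTPS

# Turn it on and review the result.
sudo ufw enable
sudo ufw status verbose
```

The underlying packet filter is no less complex; the interface simply matches the way a non-specialist thinks about the task, which is exactly the HCI point above.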
Now that we have talked about brain makeup and cognition issues, let's talk about the emotional and psychological issues at play here. The brain works the way it works physically, but all of that comprises a whole with a life of its own, and that whole lives in the psychological realm. Why we react at a base level is one thing, but beyond the organic, we will all respond differently depending on our psychological makeup as well.
First off, though, let's put this on the table: security is a "feeling." Think about that, from the actual word and its definition to its implications. Outside the abstract idea, our relationship with this notion is emotional and rooted in the brain. From soup to nuts, from the creation of systems to the abuse of them, we all go along with the feeling of security at the core of how we react, or don't, to situations.
So once again, harkening back to the earlier slides, take into account how we are wired in our daily dealings with INFOSEC.
Building on the psychology of individual human beings, we next have to move outward to how collectives of humans work together where security is concerned. We have individual behaviors, but when we get into groups, all kinds of dynamics come up through the social aspect of society. There are many unwritten rules within our societies, and on average we are all beholden to them. If you go outside the norms, there are usually punitive actions taken against you, and this is a factor in how we react to things.
A key for me, though, is to look at how this structure plays out in behavior that can be, and is, abused all the time, as well as how it might be leveraged or changed to better serve security. In the case on this slide, authority figures, we have a common chink in the armor that social engineers use to trick people through phishing and other attacks, where data is handed over by a user afraid to rock the boat. Our social natures are the very thing that is so helpful to the smart adversary, because on average we are all going to react much the same way.
Now look at those social behaviors in the petri dish that is the corporation today: a collective amalgam of how we are wired, interacting with our social mores, in tandem with the corporate demands put on us all. Businesses, which funnily enough are now (tenuously) considered persons or entities under the law, often make some stunningly counter-intuitive decisions, and many times this counter-intuitive behavior directly affects the security of a company. A primary driver may be the perception of "productivity": people who feel pressed to be productive will bypass security altogether in order to appear productive.
Another factor I have often found is that a company will have a large body of security policies but no enforcement of them at all. In the collective unconscious, this means the policies are not important and there is no real negative consequence for not following them. This is a cognitive dissonance that adds to our problems in trying to secure things. If nothing bad happens when people ignore the playbook, how do you get across to them that it is important to actually do these things? One has to look at the social structures in companies today, as well as the social animals that run them. If you do not look at this aspect of security, you will be doomed to repeat the failures we see every day.
The adversaries out there also have their own psyches, social structures, and all the same issues we have, in their personal spheres, but not where attacking us is concerned. The smart adversary is going to use our psychology and social norms against us to get what they want. What's more, they are not bound by the mores and rules of our own societal and corporate structures, and this is a key fact. We need to take this into account when we talk about how we secure our networks and our data, because those defenses are all rules-based, and when rules don't matter to the adversary, well, they are pretty useless, aren't they?
I think we in security need to take a good look at how our societies run, at our psychologies, and at our biases, to get a better handle on how we might effect better security with them in mind. The converse is to use the adversary's rule-less model against them as well. By this I don't necessarily mean hacking back. What I mean is studying the adversary, their habits and their social dynamics, and using that intelligence against them. How? Build better security here with that data, and perhaps, with deeper knowledge of how they operate, stop them cold to start with... but that is another presentation down the line, I suppose.
On the other side of this fence is the defender class. Defenders have to work within the rules of the companies they work for and the social structures they live in, and overall must act within the bounds laid upon them. This is a real issue for many defenders as they watch attacks happen that might have been stopped had there been a real pentest in the past, one that allowed a no-holds-barred approach. Alternatively, the defender may be frustrated by the rules themselves, because those who make the rules do not comprehend the security issues in the first place, no matter how one tries to enlighten them.
All too often today I hear people talk about users as just dull-witted and unwilling to do what is right. I say that sure, there is some of that, but you have to understand why they are that way innately, as well as the pressures on them that make them disregard things as they do. This is not binary, and as much as many people in this field would love it to be, it is a much more complex and abstract issue than that. So the next time you are frustrated by obstructive behavior around security, from the user level up to the corporate level, take a step back and ponder this: how can you make a change here by looking at behavior and understanding its rudiments?
To sum up: there is a lot of talk about the ROI of security measures like awareness training. Some say it is useless, but I say it is not. In fact, I would say that the current model of security awareness (i.e., once a year by PowerPoint) is not enough. The reality is that people learn by repetition. This is why we teach children times tables in school, and why children innately want to be read the same story over and over and over again. Our brains form long-term memory and learning through repetition. So yes, I would say our current model of awareness is useless, because we are not really teaching anyone anything when we do it only once a year instead of repeatedly.
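If you wanted to build a repetition-based awareness program, one starting point is a spaced schedule: short refreshers at widening intervals rather than one annual deck. A minimal sketch in Python (the doubling intervals here are an illustrative assumption, not a validated curriculum):

```python
from datetime import date, timedelta

def review_schedule(start, sessions=5, first_interval_days=1):
    """Return review dates at doubling intervals (1, 2, 4, 8, ... days).

    Each refresher lands roughly twice as far out as the last, the
    classic spaced-repetition shape, instead of one session per year.
    """
    dates = []
    interval = first_interval_days
    current = start
    for _ in range(sessions):
        current = current + timedelta(days=interval)
        dates.append(current)
        interval *= 2
    return dates

for d in review_schedule(date(2025, 1, 1)):
    print(d)  # 2025-01-02, 2025-01-04, 2025-01-08, 2025-01-16, 2025-02-01
```

Five short touches inside a month does more for long-term memory than one PowerPoint marathon, for exactly the brain-makeup reasons covered earlier.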
I also think we need to take a long, hard look at the rather simplistic ideal that technology is the panacea for all our ills. The FireEye technology did not fail in the Target hack. What failed were the people, and the organization's mores about reporting and reacting. Often the implementation of security products is also the problem: they are rolled out incompletely and never monitored. This is an organic issue, not a technological one, and we need to hold ourselves accountable to that fact. From design to implementation and management, 99% of the time we are the organic failure that causes a breach and the loss of data.
Face that fact and do something about it. Don’t just buy another blinky light product. Do the hard work and work with the users.