Great is the Enemy of Good
by guest writer Keith Maxon
This article examines the practice of altering an organization’s security strategy based on the mere possibility of a specific type of security event, rather than on its probability. After defining and discussing this issue, the article concludes with some actionable steps for addressing these situations.
While reviewing some interesting reading material recently, I saw something that really made me stop and think. It stated that facial recognition systems were flawed because there was a possibility (emphasis on possibility) that someone with similar facial features might mistakenly gain the same access as an authorized user. I was struck by how silly this premise seemed. Think about it: because there is an infinitesimally remote chance of a false positive, this technology is flawed? On that basis, the article in question suggested reevaluating the use of this type of security control.
What was also striking is that the article did not even come from a vendor attempting to sell a competing product, or one that corrected the inherent “flaws” of the technology. Instead, it was presented solely to illustrate that the technology was less than perfect, in a manner that could elicit fear or doubt in its readers.
What an informative piece of literature. This article had indirectly taught me a great deal, not about biometrics as a technology, but about the security profession in general. It eloquently illustrated how we in the profession rely upon the concept of “possibility” as the overriding consideration, rather than the far more useful “probability” associated with a given situation.
What do I mean by this? Many people believe that the possibility of an event occurring should automatically prompt action to prevent it from happening, regardless of the probability of that event. Following that line of logic: there is a possibility that the Earth will be struck by a meteor the size of Australia, extinguishing all life on the planet. Therefore, we should all start living underground…just in case. Anyone who suggested such a plan would be labeled a “crackpot” and rightfully ignored. Yet in the security profession, this type of strategy is accepted as “proactive.”
If you carry this theory a little further, every time a new element (a security control) is introduced into an environment, there is a new set of possibilities to address (“Oh no,” says Chicken Little). These possibilities must be addressed in turn, creating a rather expensive cascading spiral. The really clever part is the way in which security professionals obscure this behavior: we blame it on the ever-changing nature of threats rather than on our own approach. Regardless of how we frame the argument, the net result is that those who fund security eventually stop listening. So, what should you do about it?
During your security efforts, focus on the following four items to keep your security objectives realistic and your efforts well directed.
1. Realistically evaluate the amount of time you are spending on “low” probability issues. No behavior can be modified without self-awareness of it. Ask yourself whether you are falling victim to this behavior and, just as important, whether you are then passing it on to others via doomsday reports. Frustration is an excellent indicator: the feeling that those around you are not listening to your warnings. If you are always pushing the boulder uphill whenever you sell a security initiative, that may be a sign. If you are struggling to make a case for a control or policy, maybe there is a reason.
2. Evaluate whether you are responding to risk items logically or emotionally. Why do we respond so strongly to fear and not to logic? I think this goes back to possibility versus probability. When we think about what is possible, the results scare us into acting, often regardless of how likely the event actually is. When a light is shone on something otherwise innocuous, it suddenly seems threatening merely because it is possible. The problem with this approach is that when you constantly cry wolf, people stop listening. It is important to communicate what can happen as well as what is likely to happen. The final two items below will serve as tools to help you with this.
3. Factor in real attack vectors. Make sure that you are not inventing attack vectors with the intent of scaring people into action. For example, I am reminded of a discussion I had with some colleagues about when to encrypt data in transit. Within the solution, if the proper controls were in place and the network was trusted, encryption would not be necessary, no matter what the data was (there was obviously more that helped satisfy this criterion, but that may be a discussion for another day). One of the solution team members argued for encryption despite the fact that the possibility of data compromise within the solution design was effectively non-existent. When we asked why, he said, “Because it does not feel right.” The point of this example is that you should be careful about where you choose to focus your efforts. There is plenty of work for all of us to do; there is no need to grasp at straws.
4. Consider using risk rating methodologies that factor in probability. FAIR <http://www.riskmanagementinsight.com/> is a good example. FAIR has its drawbacks, as most risk rating methodologies do, but it provides excellent vocabulary for defining security terms and for selling risk based on probability.
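To make the possibility-versus-probability argument concrete, here is a minimal sketch of probability-weighted risk scoring, loosely in the spirit of frameworks like FAIR (event frequency times loss magnitude). The function name, scenarios, and numbers are hypothetical illustrations, not FAIR’s official taxonomy or calibrated data.

```python
# A minimal sketch of probability-weighted risk scoring.
# Assumption: we approximate risk as annualized loss expectancy (ALE),
# i.e. how often an event occurs per year times what it costs per event.

def annualized_loss_expectancy(events_per_year, loss_per_event):
    """Expected annual loss for a given scenario."""
    return events_per_year * loss_per_event

# Two hypothetical scenarios: a high-probability nuisance versus a
# remote-possibility catastrophe (numbers are made up for illustration).
phishing = annualized_loss_expectancy(12.0, 5_000)            # roughly monthly, modest cost
biometric_spoof = annualized_loss_expectancy(0.001, 1_000_000)  # very rare, severe cost

# Ranking by expected loss (probability included) rather than by
# worst-case impact alone points the budget at phishing first.
print(f"phishing ALE: ${phishing:,.0f}")            # $60,000
print(f"biometric spoof ALE: ${biometric_spoof:,.0f}")  # $1,000
```

Judged by worst-case impact alone, the biometric spoof looks terrifying; weighted by probability, it is the smaller problem, which is exactly the shift in emphasis this article argues for.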
In conclusion, take time to first measure each situation in order to establish your level of emphasis, and then respond in a logical manner. Taking this approach will save you countless hours and will improve your ability to build consensus among your stakeholders.