Why security exceptions shouldn't exist





There's something that happens in pentests more often than any pentester would like to admit: security exceptions, findings in a security assessment that get marked as "no need to fix", usually by the larger organization's security operation. In this post I'm going to talk about why the philosophy behind this idea is fundamentally broken and why it will not benefit any org that enforces such a policy in this way.





Also, I know some people I work with or have worked with may be reading this, and our business being what it is, I need to declare that the sentiments expressed here are general ones, not aimed at any of the clients I've worked with before. I work for very important orgs, and that means not only protecting their info in my public communication but also making sure I constantly make clear how seriously I take keeping their data protected.

Security exceptions can rile penetration testers up because there are times when they are trying to stretch certain bugs beyond their real impact and exploitability. These usually include potentially cool ideas like clickjacking (which is pretty bad to be honest, most people just don't know this), login/logout Cross-Site Request Forgery attacks and things of that nature. At times they are reported in contexts where they would not affect any meaningful security attribute of a system, and they might not need to be fixed. I'm not talking about those bugs. I'm talking about things that cause actual security impact but are argued as "no need to fix" because someone else doesn't consider the risk they expose "enough" risk to close, e.g. it's possible to connect to a TLS server using TLS_NULL (a NULL-encryption cipher suite), but it's okay because we also offer TLS_AES_* or whatever.

Any pentester worth their salt will tell you immediately that enabling TLS_NULL means you do NOT have a verifiable secure channel, and "verification" is the basis of labeling anything as secure. In crypto we produce security proofs: a mathematical argument for why a system lives up to its security claims. What this means is that in every instantiation of a system meant to protect secrets, that proof is surely also strictly required!
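To make the TLS_NULL point concrete, here is a minimal sketch of the kind of check a tester runs: offer the server only NULL-encryption cipher suites and see whether it completes the handshake. This is my own illustration, not any client's configuration; the host name is made up, and it assumes your local OpenSSL build still ships the NULL suites so the client can offer them. If the handshake succeeds, your "secure" channel is perfectly happy to run with no encryption at all.

```python
import socket
import ssl

def server_accepts_null_cipher(host, port=443, timeout=5):
    """Attempt a TLS handshake offering ONLY NULL-encryption cipher suites.

    Returns the negotiated cipher tuple if the server accepts one,
    or None if the server refuses the handshake.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False                     # we only care about cipher negotiation here
    ctx.verify_mode = ssl.CERT_NONE
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2   # NULL suites only exist pre-TLS 1.3
    # "eNULL" is OpenSSL's selector for NULL-encryption suites; SECLEVEL=0 lets a
    # modern client offer them. Some builds strip them out entirely, in which case
    # set_ciphers() raises and we simply can't test this way.
    try:
        ctx.set_ciphers("eNULL:@SECLEVEL=0")
    except ssl.SSLError as err:
        raise RuntimeError("local OpenSSL has no NULL ciphers to offer") from err
    try:
        with socket.create_connection((host, port), timeout=timeout) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                return tls.cipher()                # e.g. ('NULL-SHA256', 'TLSv1.2', 0)
    except (ssl.SSLError, OSError):
        return None                                # handshake refused: NULL suites rejected

if __name__ == "__main__":
    # "appserver.example.internal" is a made-up host, purely for illustration.
    result = server_accepts_null_cipher("appserver.example.internal")
    if result:
        print("Server accepted a NULL cipher suite:", result)
    else:
        print("Server rejected the NULL-only handshake")
```

If that function ever returns a cipher tuple, no amount of "but we also support TLS_AES_*" changes the fact that an attacker can simply pick the suite with no encryption.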

Quick side rant (obviously this is just my coloring of the picture, other colors may work too!): I think this happens because some folks think of proofs and theorems as things drawn for, and living in, some magical "theoretical realm" where practicality has no purchase. This is basically the same as the "mind-body" problem intellectuals rooted out long ago. Turn that on the idea that there is a "theoretical realm" for ideas and you realize there is no such thing: since an idea must interact physically in some context, it must be physical in some context too.

It is a perception of reality, not an actual reality. The truth is, theorems are drawn from a few selected observations, so that they apply to all things that satisfy AT LEAST those few properties, and though reasoning based only on those things seems to describe an impractical world, it is actually the most NATURAL way to talk about many practical things in one go. If you think about it, even in imagining the theoretical world we start from practical concepts; this means it is the practical that gave birth to the theoretical, and so "theory" is just a special form of "practice" then! - Rant over.


But marking findings as security "exceptions" after a security assessment creates a mechanism that actually breaks the intended purpose of the assessment itself. I will break down why this is so in the next few paragraphs.

Exceptions to security mean less security


A security assessment exists to expose the security problems an application suffers from, which means that if something forms part of a security report, it causes security problems. Problems that can be abused to break the confidentiality, integrity and all those other nice things we like to enjoy in our most trusted software. So what does it mean to have done a security test and have some of the findings marked as "no need to fix"? It means the purpose of your assessment is always potentially moot. As long as someone has the power to "quiet" certain security bugs and approve a release, the organization risks releasing software with known security bugs, which means that, in practice, a security assessment can at times provide no meaningful impact on security.

I'm sorry for this very atheist statement, but something I like to say about situations like this is:
"That's about as useful as praying" #noOffenceToThoseWhoStillPray

And therein lies the risk: producing the illusion that things are ALWAYS more secure after a security assessment. It remains just an illusion of MORE security as long as security exceptions exist. And if you have been in security long enough you know: illusions of security are the main propagators of the most devastating security bugs! e.g. heap overflows cannot be exploited so we don't need to fix them [insert SOLAR DESIGNER HERE], or XSS cannot cause real problems so we don't need to fix it [INSERT RANDOM PERSON FROM 0xA list here], or the government can't listen to everything we say so not using encryption is fine [INSERT SNOWDEN HERE]. This seems to be the hardest lesson to teach as a security consultant: unvetted assumptions about security protections ARE NEVER A GOOD IDEA. Produce PROOF! Rely on that!

If your idea is that a security assessment should aid in producing software with no security bugs, or at least ALWAYS BETTER security, why is it possible for software to be released with security bugs AFTER a security assessment? What was the point of the assessment then? Why not just hand the application to whoever makes the call on neutering bugs and have them pre-determine which bugs they would prefer to have reported? I throw no shade at the folks who make this call, they are wonderful folks! All I'm saying here is that it is unfair to have them make such calls on technical security, especially if they are not intimately involved with the actual testing, OR willing to take the heat for exploits targeting the exceptions.


Security exceptions expose a lot about an org 


It is easy to detect which bugs an org has a policy of not fixing, and if that is easy, it is also easy to predict how they will be exploited in the future. Let's say, for instance, we notice that software company X always has a certain vulnerable cipher suite enabled across multiple products that use TLS. You can bet your left nut (or ovary) that this bug will exist inside the corp as well! And so this security exception thing has created a way to practically determine bugs INSIDE an org by observing the kind of security they release in their products. Isn't it wonderful that security policy can break security? Brilliant!
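If you want a picture of what that observation looks like in practice, here is a rough sketch (made-up host names, reusing the server_accepts_null_cipher() helper from the earlier sketch) that sweeps a vendor's public endpoints for the same exception. If the same NULL-cipher misconfiguration shows up on every product, it's a fairly safe bet that the policy, and the bug, lives inside the org too.

```python
# Hypothetical sweep, reusing the server_accepts_null_cipher() helper
# sketched earlier in this post. Host names are made up for illustration.
HOSTS = [
    "shop.vendor-x.example",
    "api.vendor-x.example",
    "vpn.vendor-x.example",
]

for host in HOSTS:
    try:
        accepted = server_accepts_null_cipher(host)
    except RuntimeError as err:
        print(f"{host:<25} could not test ({err})")
        continue
    verdict = f"ACCEPTS NULL cipher {accepted[0]}" if accepted else "rejects NULL ciphers"
    print(f"{host:<25} {verdict}")
```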

Conclusion


My closing thought here (and this thought is still working through my machinery) is that we shouldn't have security exceptions, we should have functional exceptions, i.e. turn off a function if we cannot turn on all the security!

Instead of turning off "security" to achieve functionality, we should turn off the functionality we cannot secure and keep the security on!

To put it better, perhaps:

If the idea is that applications operate more securely after an assessment, then the only way to maintain that is to turn off the things in the app we cannot operate securely, since only then will the assessment ALWAYS produce MORE security.
