[InfoSec Rant] "Unspecifying" vulnerabilities is a vulnerability for vulnerability specification.

There is a practice in the information security world whereby vendors issuing statements about vulnerabilities reported to them can withhold as much information as they like, reducing what is meant to be the helpful identification and declaration of software errata to yet another place for companies to save face. It is literally like someone writing a book and lying about the things they got wrong so the book keeps selling - given the strong language parallels I can make here, this analogy is quite applicable! They are essentially capitalizing not only on the software but also on the errata of their software. Which is to say, they make money from making mistakes in the very way they have essentially declared they will make money, i.e. "We said we would sell you this wonderful software, but it turns out it is completely broken and possibly doesn't do anything we initially promised it does; so in order to preserve our right to say it does the stuff we initially promised, we are not really gonna tell you why the software we sell is potentially not the software we sold you at all." Here's an extract from one such very helpful vendor report:

Multiple privilege escalations in kernel in Intel Trusted Execution Engine Firmware 3.0 allows unauthorized process to access privileged content via unspecified vector.



If you don't know what this means, basically it says (read critically): "The most trusted, most critical portion of the software we provide has a vulnerability we will not tell you about." What you must also realize is that this is but one juicy example from a years-long tradition of declaring critical vulns that affect users this way. According to another (possibly badly done) piece of statistical analysis, I would say there is a noticeable, significant rise in "unidentifiable" bugs in the global aggregation of bug types as the years go on and vendors become more aggressive about reporting this way; see here:

It can be seen here that there is a definite slow increase in the "other" (green) category; its prominence in the years before 2008 is essentially evidence of the categorization's somewhat "archaic" beginnings. The steady, and in 2017 massive, increase in these bugs suggests that we might soon return to this archaic beginning of vulnerability classification, perhaps?


(Figure from https://nvd.nist.gov/vuln/visualizations/cwe-over-time)
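If you want to poke at this claim yourself (and you should, given I just called my own analysis possibly badly done), here is a minimal sketch of how one might tally "unspecified" CVEs per year. It assumes you have downloaded NVD's yearly 1.1 JSON feeds (nvdcve-1.1-<year>.json, from https://nvd.nist.gov/vuln/data-feeds) into the working directory; the file names and year range are illustrative, and NVD-CWE-Other / NVD-CWE-noinfo are NVD's own catch-all categories for entries without a concrete CWE:

```python
# Minimal sketch: counts, per year, how many CVEs in NVD's 1.1 JSON feeds
# carry a concrete CWE id versus NVD's catch-all "we can't say" buckets.
import json
from collections import Counter

UNSPECIFIED = {"NVD-CWE-Other", "NVD-CWE-noinfo"}  # NVD's catch-all categories

for year in range(2008, 2018):
    with open(f"nvdcve-1.1-{year}.json") as f:
        items = json.load(f)["CVE_Items"]
    tally = Counter()
    for item in items:
        # Collect every CWE id attached to this CVE entry.
        cwes = {d["value"]
                for ptd in item["cve"]["problemtype"]["problemtype_data"]
                for d in ptd["description"]}
        # An empty set (no CWE given at all) or only catch-all ids
        # counts as "unspecified".
        if cwes <= UNSPECIFIED:
            tally["unspecified"] += 1
        else:
            tally["classified"] += 1
    total = sum(tally.values())
    print(f"{year}: {tally['unspecified']}/{total} "
          f"({tally['unspecified'] / total:.1%}) unspecified")
```

Obviously this conflates NVD's triage with vendor behaviour (NVD assigns the CWE, not the vendor), so treat any trend it shows as suggestive rather than proof.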

Here's the Rub


There is no dishonor in making money from software, but there is certainly dishonor in lying to people. There is certainly dishonor in lying to people so you can continue lying to them. There is certainly, then, even further dishonor in making sure you can make money from this. According to my critical analysis (and these are my opinions and rantings alone), this is what is happening in the software industry today. And I must now proudly claim these mad rantings as mine, so my friends cannot suffer from the onslaught I face, lol - the Socratic death that haunts brutally honest speakers like me. I see it will happen to me; I see it happen, stronger and stronger in metaphor, over and over again; caused not by any grand aggression of mine other than probing and trying to humbly coax out objective truths. So I am ready for you to ridicule me, as long as I can ridicule your non-ridicule of my logic ;)

There are real tragedies in the fallout of this practice; essentially:

  • InfoSec Blackhole Theory*: By denying the information security community information about how and why the software breaks, vendors leave that community powerless to conduct accurate and objective research that warns about and prevents these vulnerabilities, potentially ensuring that others also suffer from them. Essentially, from the perspective of the user and how it affects them, this means that today software companies are allowed to both:
    • decide what a user runs; and 
    • decide whether a user can know whether what the user runs is actually provably detrimental to their health and privacy. 
  • Vendor Star Collapse Theory*: The vendors (who engage in this dark practice) are themselves left to ever-increasingly hide details of their own software, to ensure no logical inferences can be made about what truly causes certain flaws and mistakes. They need to keep this vuln secret; all of the efforts to keep this vuln secret must then also be kept secret; and so must all of the code behaviors that could eventually cause that same vuln, since they would divulge information about how well the org can track its own issues, etc. OR they hypocritically decide, according to the survival of their businesses, when is the right time for users to know about certain risks and when is not (again, this idea of capitalizing on both the software's good behaviour and its behaviour that proves it is absolutely not good at all).

*(just arbitrary names for reference; stick to the story lol)

Are there some real examples of this happening, or is this just grand theory? Well, I have been seeing some hilarious things happen (trust me, even people inside these companies must find them hilarious):


  1. Self-restricting effect of proprietary software [MS01,MS03,MS04]: Microsoft needed to patch a vulnerability and had to rely on reverse engineering a non-source version of their own software to do it - (in my view) suffering directly from their own proprietary restrictions, nullifying their benefit and ultimately leaving the users to suffer from this as well. So here we have a clear example, in my view, that software being proprietary doesn't actually benefit anyone: not the vendor, not the user, not even the programmers making the software. Kernel developers today are essentially people who beg and hope for driver code (or for successful reverse engineering thereof) so they can abstract that already potentially erroneous abstraction into software hosted inside the user-land portions of the OS. Or they are open source developers developing on open source hardware - not begging or hoping for anything in their calculations; so accurate are they that some of them are scientists of computing and physics and medicine and many other fields. This is reflected strongly in recent results showing the top 500 supercomputers all run Linux-based kernels. Some might argue that this was some kind of desperate effort made for some other reason - which ultimately would also be Microsoft's fault (this is how I think counter-rhetoric here must work, or else be completely nonsensical, and maybe, for flair or flavor, morbidly nihilistic). But the painful truth here is that, because of Microsoft's own actions, they were almost as powerless as their own users to fix the software they sell them.
  2. Intel's recent Management Engine bugs [INT01,INT02,INT03]: ironically (according to what I've read and heard), this was an open source kernel suffering from an implementation flaw (because that is the real honesty of open source software: you suffer from your implementation of the design, not from the design only). What is telling about this example is that Intel clearly didn't declare enough of the vuln affecting an open kernel (I see rants on Twitter from prominent security folks on this theme) - essentially trying to cocoon an open kernel inside proprietary practices, which only makes things worse and renders Intel ever more untrustworthy in the practice. This, I think, demonstrates how a proprietary organization can obnoxiously and completely nonsensically dismantle its own advantages even when some of its software is open source, because the practices that support the software are not also open source in merit. This, of course, in the minds of some thinkers, means "oh look, they ran open and still got stuff wrong", not "wow, look: they didn't implement that open kernel properly; now they must lie aggressively about what went wrong, because people will immediately figure out what nonsense they were doing".

Conclusion

Running open source software alleviates the burden on organizations to maintain secrecy around their mistakes, since they are selling an open design of an implementation that users are responsible for - due to the way open source licensing works (essentially), users are as much the programmers as the programmers. What's left, then, is getting it running, and running smoothly, which is where the business really "lies" and can be isolated and operated in a focused and effective way - not dedicating effort to keeping secret what cannot actually be kept secret for very long (we see vendors aggressively try to hide this stuff, but the hacker industry time and time again finds out about it; the MS bug was 17 years old!). Running completely closed immediately means that you must suffer from your own closedness; your programmers cannot benefit from entire communities of developers already passionately dedicated to helping them solve and implement solutions, purely for the sake of making good working software for the world to use. In ultimate conclusion, I would say that, as far as I can see, it only makes more security sense, from an organizational perspective, to contribute to, internally run, and sell open source software.

References


