Toward a critical phenomenology of closed source security

My claim here is that closed sourcing imposes a limit on everyone's view of the software, and therefore on any fair determination of its properties. Put differently: even if closed-source software achieves ANY given property, it does so while maintaining a practice that prevents anyone from fairly establishing that the property was provably achieved at all. The fact that companies distributing closed source can perform a convincing pantomime, persuading users and developers that security has been achieved at some point (in a way sincerely apologetic to the subjective domain of the user), does not outweigh two things for me: the lack of actual evidence for any such claim (there is no source code to serve as proof, for a start), and the constant, almost publicly accepted failure of their security efforts (the Mac root bug, Oracle's notoriously poor patching history, and so on). These failures also play out in ways users can never fully experience fairly; we see this trust exploited when companies like Uber stay quiet about data breaches, or when Facebook sells data in ways clearly distanced from users' experience of the platform.

My position also rests firmly on the requirement that "security", as a word, has a realistic means of establishing itself fairly to users. This dimension of subjectively experiencing the proof of security is inescapable, and it is crucial in any working security mechanism, from a small lock to an entire governmental organization. So, as a first move in detailing my argument, I open with a deconstruction of security and show that actually achieving security in software takes much more than a couple of blinking lights and convincing billboards: it takes transparency.

The Semiotics of Software Security

How do people actually experience security in software? Must there not be something beyond icons that grounds their subjective perception of something as secure? What allows us to label things as secure at all? We will probably never have a definitive, solid set of words that describes the "security" of all things from a point external to everything it could describe, as a pure adjective. The concept seems to live in a tension between collapsing into a flat symbol (an empty, inanimate shell of the concept) and distancing itself from that shell (providing some embodiment beyond it), kept "gapped", in a semiotic sense, by constant assault on its claim to the label. So, ironically, it is our ability to constantly disprove security that actually maintains our definitions of it.

A lock doesn't work because it is purely a symbol of a lock (Moxie will probably agree with me). It works because its design lends itself to a physical orchestration of its security properties: you want a lock to actually lock things, and that is why you use it as a lock. It is also why you use things as locks that are not called locks, for instance an elastic band or a configuration of voltages on a circuit board. These different lock-effect-inducing "configurations" lock things by rendering a clear demonstration, across many facets, that they can actually perform a locking of something. The demonstration is important here, and it is equally important that users have a means to view or orchestrate this proof. As much as the demonstration must happen, to experience any sense of security you must also have a fair register that it actually happened, one potentially encompassing many facets as well.

This demonstration must provide a clear and fair presentation of what actually happens in the code, otherwise it fails to demonstrate any useful effect. We see why this matters in things like boot loader glitching attacks: in some of these, the attacker trips the boot loader, by fault injection, into a state in which it verifies the wrong boot image and trusts it. The demonstration happens exactly as it always does; it just verifies the wrong thing. That is a clear signal that we need the reality of the code to be represented IN the demonstration directly, and nothing else, not someone's hopes and dreams about what the code could be if you wished upon a star or something. So beyond the demonstration of security, the demonstration must register some identification with the real code, or it fails entirely as a proof. Fundamentally, we need transparency of the code to prove anything happened in the software at all.
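To make that failure mode concrete, here is a minimal C sketch of one way such a glitch can play out. All names in it (image_slot, verify_signature, jump_to, selected) are hypothetical; it illustrates the class of attack under simplified assumptions, not the code of any real boot ROM.

```c
/*
 * Minimal sketch of a glitchable boot path (hypothetical names throughout).
 * The point: the verification runs exactly as written, but a fault can make
 * it attest to the wrong image.
 */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    const uint8_t *data;
    uint32_t       len;
} image_slot;

/* Assume these are implemented elsewhere in the (closed) firmware. */
extern bool verify_signature(const image_slot *img);
extern void jump_to(const image_slot *img);

void boot_selected(image_slot slots[], volatile uint32_t selected)
{
    /* The demonstration happens here: a signature check on one slot. */
    if (!verify_signature(&slots[selected])) {
        for (;;) { /* refuse to boot */ }
    }

    /*
     * But if a voltage or clock glitch corrupts `selected` between the
     * check and the jump, the image that was verified is not the image
     * that gets executed. The proof occurred; it just no longer
     * identifies with the code that actually runs.
     */
    jump_to(&slots[selected]);
}
```

The specific code is beside the point; what matters is that without being able to see something like this, watching the happy path tells you nothing about what the demonstration was actually attesting to.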

So here I will draw a simple conclusion: labeling something as secure (calling software secure because it is) is driven by the ability to conduct a proof, or by some means of assuming a proof has happened. And no pretense or assumption that these proofs suffice as actual security holds unless the experience of this proof can be fairly confirmed. The proof must actually happen, or you will always be abused by those who spot the gap between your ability to prove your security stance and your ability to be convinced that you have proved it.

How does closed sourcing affect security?


We often like to lock the effects of limiting the ability to view and contest code out of the dimension of the user entirely. But this is of course painfully reductionist. When companies gloss over objections to closed sourcing software, the dismissal is usually built on an analysis lacking any real depth about the actual effects of the practice. The fair question is rarely asked of how developers, and the organization at large, themselves suffer from closed sourcing software, even in the minimal sense of security.

On one hand, developers can be pushed into illogically motivated support for security solutions over and over again, only to find they have once more suffered from their symbol-only decision making. Solutions are sometimes trusted and forced on developers when there was no proof that any of them worked, nor any logical means to demonstrate it to the developers, or even to teach them how to fairly demonstrate such a proof of correctness. Claims to a product's security properties are locked securely inside legal agreements, and this often suffices to please non-technical folks (which is all very nice), but not actually being able to fairly introspect the software you buy means that both you and the developers building the product you are buying could be deeply mistaken about what you bought!


Conclusion


Of course the question you must ask is: what then of the users? If it is absolutely clear that closed sourcing and the lack of code openness leaves developers failing miserably to engineer software, how do users fairly use software when it is closed? And how much punishment do they suffer for not understanding something they are not allowed to understand in the first place? (This plays out in user-blame theorizing: users choose bad passwords, users click on PDFs, and so on, when all this wonderful blame rests squarely on the design of the system, not on those forced to use it!) It may well be that user-blame is only one stage in a chain of people blaming each other because none of them are allowed to understand the code their decisions depend on.

*From a certain view, one could reduce almost all of the machinations of security problems to either the exploitation of a failure of ontological classification, or the induction of a state in which successful ontological classification becomes impossible.

Annex note: if we are critical of bug counting as a metric for pitting closed against open, there is something to learn from the boot loader glitching example:

What this example means is that ALL security, if it is EVER achieved, is by extension achieved on a fair representation of what the code actually is; otherwise it proves nothing. So even if you find a bug in a closed-source piece of software, claiming the find as support for a position of CLOSED security is wrong! What you provide in such a case is evidence that opening up the code actually benefits closed-source security. If that were not so, you would have to be able to fish out bugs without ever coming close to anything that lets you establish a causal analysis based on the code's behavior (since you claim that its being closed is what helped you understand it); you would have to convince me that the bug should be fixed on the strength of domains other than code behavior, like scrying bowls or summoning spirits or something. So every time you count bugs across ontological domains like closed vs open, realize that both tallies actually belong under OPEN, since in both cases you only know, and can fairly attest, that the code caused the issue because of your ability to view it or prove knowledge of it.
