Public Disclosure Shaming SO HOT RIGHT NOW


Obviously I'm going to employ that very popular Zoolander meme, because I think InfoSec (not exempt, unfortunately, from its vulnerability to groupthink hypnosis) is becoming this meme.


Critically speaking:
The amazing culture that has taken over what seems to be a large section of the InfoSec community is to shame and lambast people who publicly report bugs. This is done with the notion that exposing potential attackers to knowledge of the bug somehow makes matters worse (if I understand it correctly).

Couple interesting questions:

  • Will lambasting and shaming cause more people to make us aware of the bugs?
  • Does it really make things worse for users?
  • How much worse is this for users? Can we reason out the weight of that worse-ness for users?
  • Is it always, always better to report only to the vendor?
  • Is every bug, when reported publicly, immediately worse in effect before the vendor responds?
Now that last question is the clincher for me, so I'll start with it: "Is every bug, when reported publicly, immediately worse in effect before the vendor responds?" I would (tentatively now, with very humble postulations) say no! Not every bug, when put through the imaginary public-disclosure function, is mapped into the range of [bug that is now worse]. I can say that because the bug and its worse-ness surely depend heavily on how many people are affected. Another name for this worse-ness is Impact.

What is vulnerability impact? Does it not depend on the people impacted? 


I obviously need a formal enough description of this Impact thing. So here is what seems a sensible way to calculate impact:
(word maths :)

impact = how much damage = the magnitude of the damage. 
=> magnitude is a scaling factor, so we need to ensure the scaling reflects how much will be affected; it's a kind of counting of all the instances of the bad thing in every place it will happen.
=> accounting for all the bad things means knowing everywhere it will happen
=> knowing where it will happen implies being able to calculate the probability of it happening per given place it is likely to happen

so then:

impact = the sum of how likely it is to happen per area it will likely happen in (we want to account for each single place it could affect us) = (probability of it happening) x (area of effect), if I postulate correctly.

Area of effect here is also a function of the number of people. The CVSS score, I think, can then be explained as an attempt to calculate a total of effects on security properties, but it leaves open how this further scales per person, per the probability of that person (who relies on those security properties) being affected (probably a good idea to add that, lol). Most likely that is because this information isn't always available, ironically due to vendors sometimes withholding it.
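
To make the word maths a bit more concrete, here is a minimal Python sketch of that postulate. Every name and number in it is my own illustrative assumption (it is not CVSS or any real scoring system): impact is the sum, over every place the bug could bite, of the probability it bites there times the damage it does there.

    # Toy model of the word maths above: impact as an expected value.
    # All names and numbers are illustrative assumptions, not a real metric.

    def expected_impact(places):
        """places: list of (probability the bug is triggered there, damage if it is).
        Impact = the sum over every place it could happen of probability x damage."""
        return sum(probability * damage for probability, damage in places)

    # Catastrophic bug, zero deployments: nothing to sum over, so impact is 0,
    # no matter how loudly or how often anyone talks about it.
    print(expected_impact([]))  # 0

    # Mild bug, three deployments with slightly different odds of being hit:
    print(expected_impact([(0.01, 5), (0.02, 5), (0.005, 5)]))  # 0.175

Note that the number of deployments shows up as the number of terms in the sum; that is exactly the area-of-effect factor, and it is the limit the two thought experiments below poke at.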

What causes the "impactness" of impact? 


To see this, let's play out a thought experiment to establish some axioms here, i.e. what frame are we working in, and then what are the rules for being in this frame once we know its boundaries? How does impact behave as a function of people and security properties, if it existed in such a way?

So essentially, what happens when you have a bug that affects no one at all? As in:

Theoretically you have a bug that is very, very dangerous (it will immediately post your private key to your worst enemy), but no one is running the software anywhere in the universe and never will, ever. Nothing you say about it anywhere will cause it to be exploited; this is essentially what that means. Here, reporting it, even if you do so 1000 times every second, produces no worse-ness or impact for the user.

Why is that? Simply because impact requires users to be running the vulnerable software. So then, what if you had a billion users that were affected? Potentially more impact, no? Does that not mean impact is strongly dependent on how many people run the software? To cover the other extreme of this argument:

What happens when you have a mostly harmless bug that, say, flips a bit in a computer and that's all it does; a random bit gets flipped somewhere in the kernel?

If you have one person to exploit this with, probabilistically it won't have much effect, will it? On average it will only hit average parts of the kernel, wherever the bit flip happens to land, no? Would it not be even more effective (in a Monte Carlo integration kind of style) if you had potentially billions of people to test the bit flip on? Well, a probability theorist would say: "as long as you have a certain number of people, you can bet very easily it will have some regular function of damage, regardless of precisely where the bit flip lands."

Essentially saying: enough samples will, by the law of large numbers, produce a probability curve that effectively has some magnitude in every area of the kernel; some areas have low weighting, but all of them scale according to the number of people you can exploit. This means, immediately, again: the number of people you can exploit (or be aware of exploiting) is directly proportional to impact.
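
A tiny simulation shows the law-of-large-numbers behaviour being described. The kernel "regions", damage weights and victim counts below are all made-up numbers, purely to show the shape of the argument: with one victim, the damage of a random bit flip is wildly variable; with many victims, the per-victim damage settles near the average of the weights, so the total just tracks how many people you can exploit.

    import random

    # Made-up kernel regions a random bit flip could land in, with made-up
    # damage weights. Only the shape of the argument matters, not the numbers.
    REGIONS = [("padding", 0.0), ("logging", 0.1), ("scheduler", 5.0),
               ("filesystem", 20.0), ("credentials", 100.0)]

    def flip_damage():
        """Damage from one bit flip landing uniformly at random in some region."""
        _, damage = random.choice(REGIONS)
        return damage

    def total_damage(victims):
        """Flip one random bit on each exploitable machine and add up the damage."""
        return sum(flip_damage() for _ in range(victims))

    random.seed(0)
    for victims in (1, 100, 10_000, 1_000_000):
        total = total_damage(victims)
        print(f"{victims:>9} victims: total ~{total:.0f}, per victim ~{total / victims:.2f}")

With one victim you might score 0 or 100; with a million, the per-victim figure sits near the average of the region weights (about 25 here), so the total comes out roughly proportional to the victim count no matter where each individual flip lands.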

Conclusion


So the humble conclusion I can draw now is that you need a lot of people and a weak bug to do a lot of damage, OR an incredibly high-damage bug and few people. But we can see that in both cases it only gets worse, and the impact gets bigger, with a heavy weighting on how many people can be targeted.
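
To put made-up numbers on that: a weak bug worth 1 unit of damage across a billion machines comes out to a billion units of expected damage, and a devastating bug worth a million units of damage on only a thousand machines comes out to the same billion; in both cases the thing that sets the scale is how many machines are running the software.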

We can now take this result and layer it over the situation we are analyzing here. A researcher has one bug, let's say; what is the impact of that bug? Well, as we know, it must be some function of how many people run the software! Did the researcher contribute to that function? I.e., is the researcher the reason many people run software that can be affected by this bug? The answer is no. If the researcher were solely responsible for the impact (as we have postulated it to work), then he would need to first get everyone to run the software, then expose the bug to the bad folks to exploit.

Of course, blaming the researcher for the bug makes even less sense when you realize the impact of the bug also depends on what kind of bug it is and what kind of context it is abused in. Again, this is completely dependent on the vendor! The researcher is really just gambling that the software will suffer from a given bug. If the researcher gambles, what is the vendor doing? Not gambling? Well, that would mean they determine under strict calculation, ALL THE TIME, what bugs will be there for the researcher to discover, not so? And if they are not not-gambling, lol, then they are gambling! They are gambling with your security! Also not good for the vendor to blame the researcher :)

So, as the impact is directly proportional to the number of people running the vulnerable software, the blame must obviously lie with whoever is responsible for causing the magnitude of the bug, and here that is the vendor again! You can look at the software vendor as the massive force that ensured there were enough people to suffer from the proverbial kernel bit flip, so much so that it renders it dangerous!

Either way, I don't think it makes any sense to blame researchers for talking about bugs, when clearly the impact of the bug is entirely the fault of the vendor, not the researcher. Without the vendor's very important contribution to the bug, the researcher would just be someone making claims about software that aren't true!
