Recall that in Authorization is a Puzzle, Authentication is a Mystery, I made the case that logic should operate differently when you have good information (puzzles) than when you have uncertain information (mysteries). I think decoupling risk from uncertainty leads to more effective security architectures, and in fact a Dhandho Infosec approach means spending more time on risk and less on uncertainty. The former leads to more effective actions, whereas the latter often devolves into spin cycles and counting the number of SQL injection attacks that can fit on the head of a pin.
Our global financial system has become so staggeringly complex and opaque that we’ve moved from a world of risk to a world of uncertainty. In a world of risk, we can judge dangers and opportunities by using the best evidence at hand to estimate the probability of a particular outcome. But in a world of uncertainty, we can’t estimate probabilities, because we don’t have any clear basis for making such a judgment. In fact, we might not even know what the possible outcomes are. Surprises keep coming out of the blue, because we’re fundamentally ignorant of our own ignorance. We’re surrounded by unknown unknowns.
Some smart risk people like Alex Hutton disagree with my suggested prioritization of highly effective/low risk over high risk/low effectiveness, but however you choose to prioritize, I think we can all agree that it's important to understand which problem set you are dealing with and to bring different tools to bear depending on the problem space context.
It's not like we are sitting on a pile of highly effective tools for high risk problems. What we are sitting on is:
1. Low effectiveness tools for high risk areas
2. Highly effective tools for low risk areas
3. And metric tons of low-effectiveness tools for low risk areas (see here)
So you really do have to choose. I have had this discussion about static analysis tools with several consultants; the security consultants bashed static analysis tools as weak and demonstrated how many more bugs they can find in a manual review. These are excellent security consultants, and if you could afford to have them review every line of your code, I would recommend it.
But here is the thing: you can't afford that, and even if you had the $, it slows down the process if you are rolling lots of code. Static analysis tools don't find everything, and a lot of what they find may not be the highest risk, the way, say, an architectural flaw is (note - I am *not* saying that software security is a low risk area). But the point is they find it; it's automated and repeatable, the tools keep getting better, and you can actually *solve* the problems they find. We would be in a much better place as an industry if we used tools to find and solve as many puzzles as possible and spent less time in uncertainty spin cycles.
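To make the "automated, repeatable" point concrete, here is a toy sketch (not any real product, just an illustration of the idea) of a puzzle-solving check: a few lines of Python that walk a parsed syntax tree and flag SQL queries built with string formatting and passed to an `execute()` call - the classic SQL injection smell. The sample code and the heuristic are both invented for illustration.

```python
import ast

# Hypothetical sample code under review: line 2 builds the query with
# % formatting (the SQLi smell); line 3 uses a parameterized query.
SAMPLE = '''
cursor.execute("SELECT * FROM users WHERE id = %s" % user_id)
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
'''

def find_risky_execute_calls(source: str) -> list[int]:
    """Return line numbers of execute() calls whose first argument is
    built via % formatting or + concatenation - a SQL injection smell."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and isinstance(node.args[0], ast.BinOp)
                and isinstance(node.args[0].op, (ast.Mod, ast.Add))):
            findings.append(node.lineno)
    return findings

print(find_risky_execute_calls(SAMPLE))  # → [2]
```

Crude as it is, this check runs the same way on every commit, never gets tired, and every finding is a puzzle you can actually close out - which is exactly the trade the consultants' manual review can't make at scale.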