There is a burgeoning issue in Infosec processes, a fatal flaw that increases downside risk to companies, and it sits at the intersection of assumptions and security products.
Robin Roberts, former head of R&D at NSA, said, "The problem is you are dealing with systems that have all sorts of assumptions built in, but you cannot query the system about its assumptions; it does not know what they are."
Andy Steingruebl and I tackled some of the security aspects here in "Software Assumptions Lead to Preventable Errors," where we discuss making implicit assumptions about expectations explicit, both at design time and at run time.
What happens in most Infosec teams is basically this:
Compliance Spreadsheet -> Security Product -> Check Box
That is fine as far as it goes; everyone needs to pass their audits, and I get that you have to measure processes. But there is a fatal flaw here: the quality of the security product itself. Veracode's analysis showed that security products are more likely to have vulnerabilities than any other category of application. Worse than travel apps, worse than HR.
The fact that we are putting out fires with gasoline does not get nearly the attention it deserves; it should be a top-of-mind issue. It's not just that these are vulns, or that we are marinating in vulns. It's that vulns in a security product impact the very foundation of the security architecture.
Simply put, security products should be rigorously tested, more so than the applications and systems around them. Why? Because it's their job to defend them, and they are frequently leveraged across applications. Leverage drives down cost, but it increases correlated downside risk.
Pretty much everyone does some code and host scanning. Basic security testing is table stakes for applications. In reality, not only do most organizations not scrutinize security products to the same degree as webapps, they do not even scan them at all!
This has huge knock-on effects, because it attacks the security architecture itself. As Matt Bishop says, "The structure of security is built upon assumptions."
Instead of only finding holes in others' code, security teams should work like Charles Darwin. Darwin had an average IQ, but he became a world-changing scientist because he wore an intellectual hair shirt: he was always trying to disprove his own ideas, rigorously challenging his own thinking.
How can we put such an idea into practice in Infosec? We spend so much time on Infosec's problems, but Infosec has some strengths too, one of which is testing. Security teams can excel at testing as much as any other area of software engineering.
Gauntlt and Security Monkey tests are a great way to verify each and every assumption in your standards. I used to say that if you do not have a readily available implementation to solve something, do not put it in a standard or policy. Now I say, if you do not have a readily available test to verify that it works, remove it as well.
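To make that concrete, here is a minimal sketch in Python of what an executable standard can look like. The clause being tested ("session cookies must be marked Secure and HttpOnly") and the URL are hypothetical stand-ins; the point is that the clause earns its place in the standard only because a test can verify it.

```python
# Minimal sketch: turning a written standard clause into an executable test.
# Hypothetical clause: "All session cookies must be marked Secure and
# HttpOnly." The URL is a placeholder for your own login endpoint.
import urllib.request

def check_cookie_flags(url: str) -> list[str]:
    """Return a list of violations found in Set-Cookie headers at `url`."""
    violations = []
    with urllib.request.urlopen(url) as resp:
        for header, value in resp.getheaders():
            if header.lower() != "set-cookie":
                continue
            flags = value.lower()
            if "secure" not in flags:
                violations.append(f"missing Secure: {value}")
            if "httponly" not in flags:
                violations.append(f"missing HttpOnly: {value}")
    return violations

if __name__ == "__main__":
    for v in check_cookie_flags("https://example.com/login"):
        print("FAIL:", v)
```

A check like this runs in the build pipeline, so the standard is verified on every deploy instead of once a year at audit time.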
Take POODLE and BEAST and the litany of TLS/SSL issues. Could Infosec teams have predicted them all? I doubt it. But that is not a reason not to test the protocols themselves. There are excellent, freely available tools like SSLyze and SSL Labs to uncover these. Companies should have a test suite that checks for known issues - poor cert validation, hostname mismatch, and so on.
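Here is a minimal sketch of what the cert-validation part of such a suite might look like, using only Python's standard library. The badssl.com hosts are public endpoints that serve deliberately broken certificates; that choice is an assumption, and any set of known-bad hosts would do. SSLyze covers far more; this only shows the shape of the test.

```python
# Minimal sketch: verify that TLS validation actually rejects known-bad
# certificates, rather than assuming it does.
import socket
import ssl

BAD_HOSTS = [
    "expired.badssl.com",      # expired certificate
    "wrong.host.badssl.com",   # hostname mismatch
    "self-signed.badssl.com",  # untrusted issuer
]

def connection_is_rejected(host: str, port: int = 443) -> bool:
    """True if the default SSLContext refuses the handshake, as it should."""
    ctx = ssl.create_default_context()  # cert + hostname checks enabled
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return False  # handshake succeeded: validation is broken
    except ssl.SSLCertVerificationError:
        return True

for host in BAD_HOSTS:
    status = "PASS" if connection_is_rejected(host) else "FAIL"
    print(f"{status}: {host}")
```

Note that the test asserts a failure: the assumption under test is that the validation machinery says no when it should.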
This exact same approach should be applied across the security foundation. Does the RBAC work as expected? Do you have separation-of-duty test cases? Does strong auth work, or can it be bypassed? Does the logging and monitoring system actually recognize attacks? The way out is to extend assurance techniques to the security foundation itself. As Brian Snow said, the worst position is not to be insecure; it's to think we are secure and act accordingly, when in fact we are not secure.
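As one example, here is a minimal separation-of-duty check in Python. The roles, permissions, and toxic pairs are hypothetical placeholders for an export from whatever access-control product you actually run; the same pattern extends to RBAC and strong-auth assumptions.

```python
# Minimal sketch: assumption-checking tests for an RBAC policy.
# All role, user, and permission names here are hypothetical.

ROLE_PERMISSIONS = {
    "clerk":   {"create_payment"},
    "manager": {"approve_payment"},
    "auditor": {"read_audit_log"},
}

USER_ROLES = {
    "alice": {"clerk"},
    "bob":   {"manager"},
    "carol": {"clerk", "manager"},  # violates separation of duty
}

# Pairs of permissions no single user may hold together.
TOXIC_PAIRS = [("create_payment", "approve_payment")]

def user_permissions(user: str) -> set[str]:
    """Resolve a user's effective permissions through their roles."""
    perms: set[str] = set()
    for role in USER_ROLES.get(user, set()):
        perms |= ROLE_PERMISSIONS.get(role, set())
    return perms

def test_separation_of_duty() -> None:
    for user in USER_ROLES:
        perms = user_permissions(user)
        for a, b in TOXIC_PAIRS:
            assert not (a in perms and b in perms), (
                f"{user} can both {a} and {b}"
            )

if __name__ == "__main__":
    test_separation_of_duty()
```

Run as-is, the test fails on carol, which is the point: "no one can both create and approve a payment" is now a checked assertion instead of an unexamined assumption.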