As a general question: we have way more security metrics than we did, say, 5 years ago, but do we have the right kind of metrics?
Wade Baker and the Verizon DBIR team performed an exceptionally useful service when they launched the DBIR. It was among the first rocks tumbling in the avalanche that de-FUD-ified the infosec industry (ok, mostly de-FUD-ified). Before the DBIR, breaches were discussed in whispers and in "if you knew what I know" kinds of BS hallway conversations. Afterwards, they were discussed openly and in the spirit of what we can learn. This was a massive change, and I am sure not an easy one, especially at the outset, and they have continued to improve the report every year. Kudos.
Having said that, it's time for the industry to move on from data breaches as the central organizing construct. We'd all love to see a John Snow emerge, but unfortunately our Ghost Map does not have clusters around pump handles. The pump handle is the internet, and if you are connected to it and you have something someone wants, then you are going to have to deal with breaches.
Data breach analysis is threat-centric, and threat-centric thinking is only useful for wowing noobs at conferences or getting funding in DC. What's needed instead are metrics that help us build and deploy more robust applications and make systems more operationally resilient. In short, we need to learn from engineering.
What difficulties do bridge engineers face, and how might we learn from them? In the wake of the recent Washington bridge failure, Carl Bialik examines conflicting reports about the so-called sufficiency ratings of a bridge and the combination of quantitative and qualitative measures behind them.
“It is not an exact science,” said Charles Roeder, professor of structural engineering and mechanics at the University of Washington, Seattle. “The probability of failure depends upon imponderable factors. I don’t think anyone knew that this truck would hit the Skagit River Bridge. Even if by some miracle someone knew that this specific truck would hit this specific bridge, they would not know that the bridge would collapse.”
Sufficiency ratings take into account many characteristics of bridges, including ones not directly related to their chance of failure. But “removal of a single truss member of a truss bridge can bring the full bridge down,” said Robert K. Dowell, Director, Structural Engineering Laboratory at San Diego State University. Even bridges with redundant features, meaning there are backups if one fails, “can collapse if one member is removed, as the redistribution of loads may overload other members.”
Bridges are rated on about 100 data points and 20 factors. What would a sufficiency rating look like for software systems? I do not think it looks precisely like the OWASP Top Ten or the SANS Top 20. Could we get to something like how safes are rated (time and tools)? How are threats, vulnerabilities, and countermeasures weighted? Where does data classification fit? Importantly, is the industry itself even able to deploy actionable improvements based on such a rating?
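To make the bridge analogy concrete, here is a minimal sketch of what a sufficiency-style rating for a software system could look like: a weighted sum over scored factors. Every factor name, weight, and score below is invented for illustration; the open question in the paragraph above is precisely which factors and weights the industry could agree on.

```python
# Hypothetical "sufficiency rating" for a software system, loosely modeled
# on bridge ratings (many factors rolled into one weighted score).
# Factor names, weights, and scores are all assumptions for illustration.

FACTORS = {
    # factor: (weight, score 0-100, higher is better)
    "attack_surface":        (0.25, 40),
    "vulnerability_density": (0.25, 55),
    "patch_latency":         (0.20, 70),
    "data_classification":   (0.15, 60),
    "control_efficacy":      (0.15, 50),
}

def sufficiency_rating(factors):
    """Weighted average of factor scores; weights must sum to 1."""
    total_weight = sum(w for w, _ in factors.values())
    assert abs(total_weight - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * s for w, s in factors.values())

print(f"rating: {sufficiency_rating(FACTORS):.1f} / 100")
```

The hard part, of course, is not the arithmetic but justifying the weights, which is the bridge engineers' problem too.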
Can we use metrics to answer even basic design questions? I like to use "if you have $100 to spend" kinds of design questions. Suppose you have $100 to spend and a large number of low-assurance endpoints. Should you spend it upgrading them all to a medium level of assurance? Or should you keep them all at a low assurance level, buy a high-assurance gateway/proxy, and run them all through that? Or what middle road would you take? This scenario makes you look at failure modes in addition to the security aspects. It's just one scenario, but these are basic questions we have to answer every day, and the current data doesn't help us yet.
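The "$100 to spend" question can be framed as an expected-loss comparison. The sketch below assumes made-up compromise probabilities, loss figures, and a gateway block rate; with real data in those slots, this is the kind of calculation the metrics should let us do.

```python
# Hypothetical expected-loss comparison for the "$100 to spend" scenario.
# All probabilities, losses, and effectiveness numbers are invented;
# the point is the shape of the analysis, not the figures.

N_ENDPOINTS = 50
LOSS_PER_COMPROMISE = 10_000  # assumed loss per breached endpoint

# Assumed annual compromise probability per endpoint, by assurance level
P_LOW, P_MEDIUM = 0.08, 0.03
# Assumed fraction of attacks a high-assurance gateway/proxy blocks
GATEWAY_BLOCK_RATE = 0.70

def expected_loss(p_compromise):
    return N_ENDPOINTS * p_compromise * LOSS_PER_COMPROMISE

option_a = expected_loss(P_MEDIUM)                          # upgrade all endpoints
option_b = expected_loss(P_LOW * (1 - GATEWAY_BLOCK_RATE))  # keep low, add gateway

print(f"Upgrade all to medium: expected annual loss ${option_a:,.0f}")
print(f"Low + gateway:         expected annual loss ${option_b:,.0f}")
# Note this ignores the gateway as a single point of failure, which is
# exactly the failure-mode question the scenario asks you to weigh.
```

With these particular made-up numbers the gateway wins, but flip the block rate or the per-endpoint probabilities and the answer flips too, which is why the question needs real measurement rather than intuition.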
A lot of this will just take time, but there are some structural things that can begin to be improved already. I think static analysis and black-box vulnerability scanners have a major role to play here from at least two perspectives: one, reporting the attack surface and vulnerability density for systems (this exists on some level today), and two, measuring the efficacy of controls. On the second point there is a long way to go; the security industry is so immature at this point that many of the countermeasures appear to actually make things worse. We cannot make real progress in infosec without better measures for both, and these are our best opportunities for improvement over the medium to long term.
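On the first of those two perspectives, vulnerability density is one of the few metrics we can compute today. A minimal sketch, assuming scanner findings and line counts are already in hand (the finding data and components below are made up):

```python
# Hypothetical vulnerability-density calculation from scanner output.
# Findings and line counts are invented; a real static-analysis or
# black-box scanning tool would supply them.
from collections import Counter

findings = [
    {"component": "auth",    "severity": "high"},
    {"component": "auth",    "severity": "low"},
    {"component": "billing", "severity": "high"},
]
kloc = {"auth": 12.0, "billing": 30.0}  # thousands of lines of code

counts = Counter(f["component"] for f in findings)
density = {c: counts[c] / kloc[c] for c in kloc}  # findings per KLOC

for component, d in sorted(density.items()):
    print(f"{component}: {d:.3f} findings/KLOC")
```

Density normalizes finding counts by code size so components of different sizes can be compared; the second perspective, control efficacy, has no equally simple formula yet, which is the gap the post is pointing at.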
Bayes.
Posted by: Alex | June 01, 2013 at 10:39 AM