We just completed the software security metrics track at MetriCon 1.0. The slide decks from the track are here. We followed Steve Bellovin's talk, which recapitulated a number of issues he described in his IEEE S&P article on what he terms the infeasibility of security metrics. I will blog more about this in the future; his IEEE S&P article followed my article, Introduction to Identity Risk Metrics, in the same issue. Steve is looking for a way to use metrics to measure the overall strength of a system, which we do not have yet. But that does not mean metrics are useless for security or for risk. At a more granular level, we can use metrics to illuminate parts of the security equation, like availability, integrity, authentication, and so on, even if we do not yet have the uber-metric that rolls all of these up. There is also another underused pattern: fast and cheap metrics. Would you rather spend a billion dollars to build a rocket and launch it to a planet, or fling $1,000 sensor disks at the planet and see if some gas or chemical exists?
The first talk in the track was by Brian Chess, who discussed his and Katrina Tsipenyuk's work on using Fortify to generate software metrics over time. Their proposed metric looks at false positives and false negatives and yields a scoring trend. The main thing I liked was how they showed two distinct views over the same raw data (effectively scoring) for auditors and for developers, since each group cares about different things.
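To make the two-view idea concrete, here is a minimal sketch of how successive scan results might be rolled into a trend, with a precision-style view for auditors and a recall-style view for developers. This is not Fortify's actual scoring; all names and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    """One static-analysis scan: issue counts and audit outcomes (hypothetical)."""
    flagged: int    # issues the tool reported
    confirmed: int  # flagged issues auditors confirmed as real (true positives)
    missed: int     # known issues the tool failed to flag (false negatives)

def auditor_view(s: ScanResult) -> float:
    """Auditors care about precision: how much of the flagged output is real."""
    return s.confirmed / s.flagged if s.flagged else 0.0

def developer_view(s: ScanResult) -> float:
    """Developers care about recall: how many of the real defects get caught."""
    total_real = s.confirmed + s.missed
    return s.confirmed / total_real if total_real else 0.0

# A scoring trend across three scans of the same code base.
history = [ScanResult(120, 40, 10), ScanResult(90, 45, 6), ScanResult(70, 50, 3)]
for i, scan in enumerate(history, 1):
    print(f"scan {i}: precision={auditor_view(scan):.2f} recall={developer_view(scan):.2f}")
```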
Then we had Pratyusa Manadhata, who explored his work on attack surface metrics. His attack surface metric decouples the measurement along three dimensions: channels, data, and methods. This yields a many-sided view and is particularly useful for technologies like web services, where services are decoupled both logically and at runtime.
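As a rough illustration of the decoupling, here is a sketch that keeps the three dimensions as a triple rather than collapsing them into one number. The weights and resources are invented for the example, not Manadhata's actual calibration.

```python
# Per-resource weights, e.g. a damage-potential / attacker-effort ratio.
def contribution(weights: list[float]) -> float:
    return sum(weights)

# Hypothetical resources of a small web service.
methods  = [2.0, 1.5, 1.5]   # externally callable operations (entry/exit points)
channels = [3.0, 1.0]        # e.g. an HTTP listener and an admin socket
data     = [2.5, 0.5, 0.5]   # persistent items readable or writable from outside

# The attack surface stays a triple: one figure per dimension.
attack_surface = (contribution(methods), contribution(channels), contribution(data))
print(attack_surface)  # (5.0, 4.0, 3.5)
```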
Jeremy "Just Do It" Epstein talked about a vastly underdiscussed area - Good Enough Metrics. His presentation contains a list of specific examples that are cheap and easy to get today that you can use to get started with software security metrics. He differentiated between standardized and non standard risks, and he has an interesting formula that has a union of security/insecurity x popularity x ubiquity.
Thomas Heyman and Christophe Huygens gave an interesting presentation on early work on using patterns for richer metrics. What is valuable about this approach is that patterns allow a metric to capture contextual structure and behavior, which is critical to security.
Lastly, Pravir Chandra examined the intersection of remediation and complexity using field experience from Secure Software. In other words, can one correlate from a set of vulns and bugs back to the complexity of the code under development?
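As a sketch of what that correlation question looks like in practice, here is a Pearson coefficient over per-module complexity scores and vulnerability counts. The data is made up; the real analysis would use actual field measurements.

```python
import math

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical field data: cyclomatic complexity vs. vulns found per module.
complexity = [4, 12, 7, 25, 9, 31]
vulns      = [0,  3, 1,  7, 2,  9]
print(f"r = {pearson(complexity, vulns):.2f}")  # strongly positive for this data
```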