1 Raindrop

Gunnar Peterson's loosely coupled thoughts on distributed systems, security, and software that runs on them.

Recent Posts

  • Security Champions Guide to Web Application Security
  • Security > 140 Conversation with Pamela Dingle on Identity
  • 6 Things I Learned from Robert Garigue
  • The Curious Case of API Security
  • Security Capability Engineering
  • Ought implies can
  • Security > 140 Chat with T. Rob Wyatt on MQ and Middleware Security
  • Privilege User Management Bubble?
  • The part where security products solve the problem
  • Four Often Overlooked Factors to Give Your Security Team a Fighting Chance

Blogroll

  • Adding Simplicity - An Engineering Mantra
  • Adventures of an Eternal Optimist
  • Andy Steingruebl
  • Andy Thurai
  • Anton Chuvakin
  • Beyond the Beyond
  • cat slave diary
  • Ceci n'est pas un Bob
  • ConnectID
  • Cryptosmith
  • Emergent Chaos: Musings from Adam Shostack on security, privacy, and economics
  • Enterprise Integration Patterns: Gregor's Ramblings
  • Financial Cryptography
  • infosec daily: blogs
  • Jack Daniel
  • James Kobielus
  • James McGovern
  • John Hagel
  • Justice League [Cigital]
  • Kim Cameron's Identity Weblog
  • Krypted - Charles Edge's Notes from the Field
  • Lenny Zeltser
  • Light Blue Touchpaper
  • Mark O'Neill
  • Off by On
  • ongoing
  • Patrick Harding
  • Perilocity
  • Pushing String
  • Rational Survivability
  • rdist: setuid just for you
  • RedMonk
  • RiskAnalys.is
  • Rudy Rucker
  • Software For All Seasons
  • Spire Security Viewpoint
  • TaoSecurity
  • The New School of Information Security
  • Windley's Technometria
  • zenpundit

Metricon 4.0 - The Importance of Context

The CFP for Metricon 4.0 says you have about a month to get your submission in if you want to present. As usual, Metricon 4.0 is co-located with Usenix Security, this time in Montreal, eh?


MetriCon 4.0 is intended as a forum for lively, practical discussion in the area of security metrics. It is a forum for quantifiable approaches and results to problems afflicting information security today, with a bias towards practical, specific approaches that demonstrate the value of security metrics with respect to a security-related goal.


One thing I do when designing security services is to ask, when designing say authZ: how would I measure the success and/or failure of this service? It's not always possible, but the discipline of looking at the service objectively has proved a useful exercise. For more info on how to participate in Metricon, check the CFP.

April 28, 2009 in Metricon, Security, Security Metrics | Permalink | Comments (0)

MetriCon 2.0 : Oh Well a Touch of Gray Kinda Suits You Anyway

MetriCon 2.0 happened on Tuesday in Boston, featuring a lot of collaborative, constructive dialog (the talks were 15 min. each with 15 min. of open discussion). Slides are here. Lots of good feedback from participants.

An interesting thing happened in the morning. Fredrick DeQuan Lee from Fortify showed some examples of security metrics in practice, including some findings from their review of open source projects. Using Fortify's tools, they scanned the projects Azureus, Blojsom, and Groovy on Rails weekly, over 74 million lines of code scanned each week. To rank the findings, they developed a rating system like the Morningstar rating system for mutual funds. The ratings are:

1 Star: Absence of Remote and/or Setuid Vulnerabilities
2 Stars: Absence of Obvious Reliability Issues
3 Stars: Follow Best Practices
4 Stars: Documented Secure Development Process
5 Stars: Passed Independent Security Review

So first off, I am a big fan of maturity continuums in security. They eliminate the black/white boolean "you are 100% compliant with our ivory tower policy (which no one is) or you are forever broken" view of the world. Maturity continuums also give a way to make incremental progress over time without every project having to cram every security feature in before going live.

Next, the Fortify star system sets a harsh bar for even attaining one star (more on this later); on the plus side, 1 star and 2 stars should be measurable quantitatively. As you move up, the stars become fuzzier and more qualitative. I don't have a big issue with this because, as with stocks, we can use separate criteria for assessing penny stocks than for assessing 3M or Walmart. When you are in the realm of debating whether to invest in 3M or in Walmart, it is a different discussion. Would that we had more of these types of discussions in security!
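To make the cumulative nature of such a rating concrete, here is a minimal sketch. The criterion names are my paraphrase of the star descriptions above, and the boolean-checklist input is hypothetical, not Fortify's actual tooling:

```python
# Hypothetical sketch of a cumulative, Morningstar-style star rating:
# each star requires meeting that level's criterion AND every level below it.

CRITERIA = [
    ("no_remote_or_setuid_vulns", 1),
    ("no_obvious_reliability_issues", 2),
    ("follows_best_practices", 3),
    ("documented_secure_dev_process", 4),
    ("passed_independent_review", 5),
]

def star_rating(assessment: dict) -> int:
    """Return 0-5 stars; stop at the first criterion not met."""
    stars = 0
    for criterion, level in CRITERIA:
        if assessment.get(criterion, False):
            stars = level
        else:
            break
    return stars
```

Note that a project meeting a higher criterion but failing a lower one still scores at the lower bar, which is exactly why, as discussed below, getting even one star can be so hard.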

After the Fortify presentation, Jeremiah Grossman showed the data his team collected, which showed that 70% of the websites they assessed had serious vulnerabilities (XSS, information leakage, etc.). Interestingly, they also correlated the vulns across industry segments, which showed marked differences in security profile based on sector. Good job, retail sector! Retail came in with the lowest percentage of vulns, which could indicate that the higher spending on security we see in Financial Services is not paying dividends.
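The sector comparison itself is simple arithmetic: the share of assessed sites with at least one serious vuln, grouped by industry. A sketch, with an invented input format (this is an illustration, not WhiteHat's actual pipeline):

```python
from collections import defaultdict

def vuln_rate_by_sector(assessments):
    """Fraction of assessed sites with at least one serious vuln, per sector.

    `assessments` is an iterable of (sector, has_serious_vuln) pairs,
    one per assessed site.
    """
    totals = defaultdict(int)
    vulnerable = defaultdict(int)
    for sector, has_vuln in assessments:
        totals[sector] += 1
        if has_vuln:
            vulnerable[sector] += 1
    return {s: vulnerable[s] / totals[s] for s in totals}
```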

At any rate, in the Q&A portion someone asked Jeremiah whether *any* of the hundreds of sites included in his survey could gain even one star in the aforementioned Fortify ranking system. No way.

Is this a problem? Is Fortify's system broken? I don't think so. I think we need maturity continuums because just getting to one star is hard enough, and if we say you have 5 stars or you're broken, no one will get there. Evah. I don't think we need one uber ranking system either; Morningstar is just one of hundreds (thousands?) in the financial world. We also need different criteria for assessing penny stocks (Jeremiah's) from those we use to measure lower risk, more mature stocks (Fortify's). For web apps today, the bad news is that we have a metric ton of penny stocks.

August 09, 2007 in Metricon, Security, Security Metrics | Permalink | Comments (2)

William Gibson Thinks You Should Go to Metricon

Ok, he did not explicitly say that about Metricon. But, in his last book "Pattern Recognition", he did underscore our current information security dilemma:

“We have no future because our present is too volatile. We have only risk management. The spinning of the given moment's scenarios. Pattern recognition...”

Risk is the price we pay to move forward, innovate, and grow the enterprise. Risk management is different from the standard "it's perfect or it's broken, and I can't help you unless you follow everything in the 137-page policy manual" mentality that most IT security groups attempt to govern by.

Stepping into a risk management mindset means accepting that there are going to be tradeoffs; the question then becomes how to reason about those tradeoffs. Today we mainly use axioms and "best" practices, for example gems like inside firewall = good, outside firewall = bad. (I am three years behind on patches on the Oracle system that has all our customer data, but it's inside the firewall.)

In trying to find better ways to reason about the tradeoffs and measure the efficacy over time, look to security metrics to objectively illustrate your system's capabilities. Use numbers and measurements to add weight to and to challenge existing axioms to see if the assumptions that were made actually hold up.

This is what we'll explore in the collaborative workshop Metricon, held in conjunction with Usenix security conference in Boston, August 7. There is limited time to register, so if you are interested in attending, do it soon.

(Since I took Mr. Gibson's name in vain, I should mention he has a new book coming out - "Spook Country" which I am excited to read)

July 16, 2007 in Metricon, Security, Security Metrics | Permalink | Comments (0)

Metricon 2.0 Draws Near

By my calculation, we are 4 weeks away from Metricon 2.0, August 7 in Boston (with Usenix Security). Registration and further conference details are available here.

The format for Metricon is an open, collaborative discussion. The sessions were planned with lots of time for Metricon attendees to challenge speakers, explore ideas, and work through issues. When you put out a CFP, you never know what you'll get back, but I was really impressed by the quality of the presentations, and I think the agenda below shows we can expect a really high rate of information and conceptual transfer 4 weeks from now. (getting geeked up...)

Metrics can get a bad rap, and they certainly can be misused, but I think the main problem we are trying to solve is to find places where we can design and operate systems using objective measurements as decision support rather than solely relying on axioms and best practices.

The day starts with a debate “Do Metrics Matter?” between Pro: Andrew Jaquith (Yankee Group) and Con: Mike Rothman (SecurityIncite)

The talks that are currently set are:

"Security Meta Metrics--Measuring Agility, Learning, and Unintended Consequence"
Russell Cameron Thomas (Meritology)

"Security Metrics in Practice: Development of a Security Metric System to Rate Enterprise Software"
Frederick Lee and Brian Chess (Fortify)

"A Software Security Risk Classification System"
Eric Dalci and Robert Hines (Cigital)

"Web Application Security Metrics"
Jeremiah Grossman (WhiteHat Security)

"Operational Security Risk Metrics: Definitions, Calculations, and Visualizations", Brian Laing, Mike Lloyd, and Alain Mayer (Redseal Systems)

"Metrics for Network Security Using Attack Graphs: A Position Paper", Anoop Singhal (NIST), Lingyu Wang and Sushil Jajodia (Center for Secure Information Systems, George Mason University)

"Software Security Weakness Scoring"
Chris Wysopal (Veracode)

"Developing secure applications with metrics in mind"
Thomas Heyman, Christophe Huygens, and Wouter Joosen (K.U.Leuven)

"Correlating Automated Static Analysis Alert Density to Reported Vulnerabilities in Sendmail"
Michael Gegick and Laurie Williams (North Carolina State University)

There is a practitioner panel moderated by Becky Bace.

And finally at the end of the day we have planned a "Stump the Chumps" session where security metricians spin the hamster wheel of pain.

Again each session is designed with a maximum amount of time to collaborate, explore ideas, and debate with presenters.

July 09, 2007 in Metricon, Security, Security Metrics | Permalink | Comments (0)

Choosing the Right Security Metric

Juice Analytics has an interesting post on Choosing the Right Metric. They describe the four dimensions of a good metric: Actionable (you know what to do if it goes up/down/flat), Common interpretation (different stakeholders agree upon definition), Accessible, creditable data, and Transparent, simple calculation (nothing up my sleeve).

This highlights several issues with security metrics. Making security metrics actionable is a challenge - if failed authentications go down is the access control too permissive? Or is it working well?

Common interpretation - also a security bugaboo. This presupposes that stakeholders at various levels have an idea what the security team's value proposition for the organization is to begin with. Compliance checkbox Olympics notwithstanding I'll wager this is not widely agreed upon.

**

Metricon 2.0 is in one month in Boston along with Usenix Security.

July 06, 2007 in Metricon, Security, Security Metrics | Permalink | Comments (1)

Metricon 2.0: For a Few Metrics More

The Metricon 2.0 agenda is now online for the day before Usenix Security in Boston in August. We will work on moving infosec towards a more scientific approach rather than an accumulating set of axioms. The format is a collaborative workshop with shorter presentations and more open discussion. Read the summary from the Metricon 1.0 event here.

The day starts with a debate “Do Metrics Matter?” between Pro: Andrew Jaquith (Yankee Group) and Con: Mike Rothman (SecurityIncite)

The talks that are currently set are:

"Security Meta Metrics--Measuring Agility, Learning, and Unintended Consequence"
Russell Cameron Thomas (Meritology)

"Security Metrics in Practice: Development of a Security Metric System to Rate Enterprise Software"
Frederick Lee and Brian Chess (Fortify)

"A Software Security Risk Classification System"
Eric Dalci and Robert Hines (Cigital)

"Web Application Security Metrics"
Jeremiah Grossman (WhiteHat Security)

"Operational Security Risk Metrics: Definitions, Calculations, and Visualizations", Brian Laing, Mike Lloyd, and Alain Mayer (Redseal Systems)

"Metrics for Network Security Using Attack Graphs: A Position Paper", Anoop Singhal (NIST), Lingyu Wang and Sushil Jajodia (Center for Secure Information Systems, George Mason University)

"Software Security Weakness Scoring"
Chris Wysopal (Veracode)

"Developing secure applications with metrics in mind"
Thomas Heyman, Christophe Huygens, and Wouter Joosen (K.U.Leuven)

"Correlating Automated Static Analysis Alert Density to Reported Vulnerabilities in Sendmail"
Michael Gegick and Laurie Williams (North Carolina State University)

There is a practitioner panel moderated by Becky Bace.

And finally at the end of the day we have planned a "Stump the Chumps" session where security metricians spin the hamster wheel of pain.

June 15, 2007 in Metricon, Security, Security Metrics | Permalink | Comments (0)

MetriCon 2.0

MetriCon 2.0 is fast approaching...to get a flavor for what this collaborative workshop is about, see SecurityMetrics.org and the MetriCon 1.0 digest.

It is August 7 in Boston, during Usenix Security. There are still a few days to notify the committee if you are interested in participating.

May 09, 2007 in Metricon, Security, Security Metrics | Permalink | Comments (0)

Second Workshop on Security Metrics (MetriCon 2.0)

Second Workshop on Security Metrics (MetriCon 2.0)

August 7, 2007 Boston, MA

Overview

Do you cringe at the subjectivity applied to security in every manner? If so, MetriCon 2.0 may be your antidote: a chance to change security from an artistic "matter of opinion" into an objective, quantifiable science. The time for adjectives and adverbs has gone; the time for hard facts and data has come.

MetriCon 2.0 is intended as a forum for lively, practical discussion in the area of security metrics. It is a forum for quantifiable approaches and results to problems afflicting information security today, with a bias towards practical, specific implementations. Topics and presentations will be selected for their potential to stimulate discussion in the Workshop.

MetriCon 2.0 will be a one-day event, Tuesday, August 7, 2007, co-located with the 16th USENIX Security Symposium in Boston, MA, USA (http://www.usenix.org/events/sec07/). It will begin first thing in the morning, with meals taken in the meeting room, and extend into the evening. Attendance will be by invitation and limited to 60 participants. All participants will be expected to "come with findings" and be willing to address the group in some fashion, formally or not. Preference will be given to authors of position papers/presentations who have actual work in progress.

Each presenter will have 10-15 minutes to present his or her idea, followed by 15-20 minutes of discussion with the workshop participants. Panels and groups of related presentations may be proposed to present different approaches to selected topics, and will be steered by what sorts of proposals come in response to this Call.


Goals and Topics

The goal of the workshop is to stimulate discussion of and thinking about security metrics and to do so in ways that lead to realistic, early results of lasting value. Potential attendees are invited to submit position papers to be shared with all. Such position papers are expected to address security metrics in one of the following categories:

Benchmarking
Empirical Studies
Metrics Definitions
Financial Planning
Security/Risk Modeling
Tools, Technologies, Tips, and Tricks
Visualization

Practical implementations, real world case studies, and detailed models will be preferred over broader models or general ideas.


How to Participate

Submit a short position paper or description of work done/ongoing. Your submission must be no longer than five (5) paragraphs or presentation slides. Author names and affiliations should appear first in/on the submission. Submissions may be in PDF, PowerPoint, HTML, or plaintext email and must be submitted to MetriCon AT securitymetrics.org.

Presenters will be notified of acceptance by June 22, 2007 and expected to provide materials for distribution by July 22, 2007. All slides and position papers will be made available to participants at the workshop. No formal proceedings are intended. Plagiarism constitutes dishonesty. The organizers of this Workshop as well as USENIX prohibit these practices and will take appropriate action if dishonesty of this sort is found. Submission of recent, previously published work as well as simultaneous submissions to multiple venues is acceptable but please so indicate in your proposal.


Location
MetriCon 2.0 will be co-located with the 16th USENIX Security Symposium (Security ’07). (http://www.usenix.org/events/sec07/)


Cost
$200 all-inclusive of meeting space, materials preparation, and meals for the day.

Important Dates
Requests to participate: by May 11, 2007
Notification of acceptance: by June 22, 2007
Materials for distribution: by July 22, 2007


Workshop Organizers
Fred Cohen, Fred Cohen & Associates
Jeremy Epstein, webMethods
Dan Geer, Geer Risk Services
Andrew Jaquith, Yankee Group
Elizabeth Nichols, ClearPoint Metrics, Co-Chair
Gunnar Peterson, Arctec Group, Co-Chair
Russell Cameron Thomas, Meritology

April 09, 2007 in Metricon, Security, Security Metrics | Permalink | Comments (0)

MetriCon 1.0 Digest

Dan Geer posted a digest of MetriCon 1.0. It is a great read, almost as good as attending. The slides are here. I blogged some additional thoughts here [1, 2, 3]. As the track chair for software security metrics, I had to follow Steve Bellovin saying that we do not have system security metrics that are usable and that such metrics are infeasible. My point was that "system" is way too all-encompassing for where we are right now, and we need to begin by decomposing things into smaller pieces. A number of the presentations in the software security metrics track did just that, with attack surface metrics at a channel, method, and data level, and pattern-based metrics, for example.

The second point is that the word "security" is particularly harmful in the metrics space, at least for where we are right now as an industry. We need to be more granular and focus on measuring what we can today. Let's say that security means the union of confidentiality, integrity, and availability. If you are like most enterprises, you can't measure *all* of them today (much less join them all together in some meaningful way), but does that mean you shouldn't measure what you can? Every enterprise I see has at least some availability metrics. Granted, confidentiality and integrity may be much harder, but there are at least some starting points; authN, authZ, and identity and access management systems, for example, are rich sources of potential metrics.
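As a concrete starting point, the simplest availability metric is just the fraction of periodic health checks that succeed over a window. This sketch assumes boolean probe results, an illustration rather than any particular monitoring product:

```python
def availability(probes):
    """Fraction of health-check probes that succeeded (1.0 = fully available).

    `probes` is a list of booleans, one per periodic health check over
    the measurement window. An empty window reports 0.0 rather than
    guessing.
    """
    if not probes:
        return 0.0
    return sum(probes) / len(probes)
```

For example, 9,995 successful checks out of 10,000 yields 99.95% availability, the kind of number most operations teams already track, which is exactly why availability is the easiest of the three to start measuring.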

The conference is 1.0 for a reason; it is where the industry is right now. But I never saw a software product that was perfect in version 1.0, and we shouldn't expect security metrics to be perfect right out of the chute either.

September 14, 2006 in Metricon, Security, Security Metrics | Permalink | Comments (0)

MetriCon software security metrics track

We just completed the software security metrics track at MetriCon 1.0. The slide decks from the track are here. We followed Steve Bellovin's talk, which recapitulated a number of issues he described in his IEEE S&P article on what he terms the infeasibility of security metrics. I will blog more about this in the future; the IEEE S&P article followed my article on Introduction to Identity Risk Metrics in the same issue. Steve is looking for a way to use metrics to measure the strength of the system, which we do not have yet. But that does not mean metrics are useless for security or for risk. At a more granular level, we can use metrics to illustrate parts of the security equation, like availability, integrity, authentication, and so on, even if we do not have the uber way to roll all these things up yet. Also, there is another pattern that is not much used: fast and cheap metrics. Would you rather spend a billion to build a rocket to launch at a planet, or just take $1,000 sensor disks and fling them at the planet to see if some gas or chemical exists?

The first talk in the track was Brian Chess, who discussed his and Katrina Tsipenyuk's work on using Fortify to generate software metrics over time. Their proposed metric looks at false positives and false negatives and gives a scoring trend. The main thing I liked was how they showed two distinct views over their raw datasets (effectively scoring), one for auditors and one for developers, where each group cares about different things.

Then we had Pratyusa Manadhata, who presented his work on attack surface metrics. In this case, the attack surface metric decouples security metrics for channels, data, and methods. This yields a many-sided view and is particularly useful for technologies like web services, where these services are decoupled logically and at runtime.

Jeremy "Just Do It" Epstein talked about a vastly underdiscussed area: Good Enough Metrics. His presentation contains a list of specific examples that are cheap and easy to get today, which you can use to get started with software security metrics. He differentiated between standardized and non-standard risks, and he has an interesting formula that combines security/insecurity x popularity x ubiquity.
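I'm paraphrasing the formula from the talk; a sketch along those lines might look like the following, where the normalization to [0, 1] and the straight multiplication are my own illustrative choices, not Jeremy's exact definition:

```python
def good_enough_risk_score(insecurity, popularity, ubiquity):
    """Crude multiplicative composite in the spirit of a 'good enough' metric.

    Each factor is normalized to [0, 1]; higher means riskier, more popular,
    or more widely deployed. The product rewards cheap, observable inputs
    over precise ones.
    """
    for factor in (insecurity, popularity, ubiquity):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("factors must be normalized to [0, 1]")
    return insecurity * popularity * ubiquity
```

The multiplicative form has one nice property: a component that is either very secure, very obscure, or barely deployed drives the whole score toward zero, which matches the intuition that all three conditions must hold for real exposure.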

Thomas Heyman and Christophe Huygens gave an interesting presentation on early work on using patterns for richer metrics. What is so valuable about this approach is that patterns allow the metric to capture contextual structure and behavior, which is so critical to security.

Lastly, Pravir Chandra examined the intersection of remediation and complexity using field experience from Secure Software. In other words, can one correlate a set of vulns and bugs back to the complexity of the code under development?

August 01, 2006 in Metricon, Security, Security Metrics, Software Architecture | Permalink | Comments (0)


SOS: Service Oriented Security

  • The Curious Case of API Security
  • Getting OWASP Top Ten Right with Dynamic Authorization
  • Top 10 API Security Considerations
  • Mobile AppSec Triathlon
  • Measure Your Margin of Safety
  • Top 10 Security Considerations for Internet of Things
  • Security Checklists
  • Cloud Security: The Federated Identity Factor
  • Dark Reading IAM
  • API Gateway Security
  • Directions in Incident Detection and Response
  • Security > 140
  • Open Group Security Architecture
  • Reference Monitor for the Internet of Things
  • Don't Trust. And Verify.

Archives

  • November 2015
  • October 2015
  • September 2015
  • August 2015
  • July 2015
  • June 2015
  • May 2015
  • April 2015
  • March 2015
  • February 2015
