1 Raindrop

Gunnar Peterson's loosely coupled thoughts on distributed systems, security, and software that runs on them.

Recent Posts

  • Security Champions Guide to Web Application Security
  • Security > 140 Conversation with Pamela Dingle on Identity
  • 6 Things I Learned from Robert Garigue
  • The Curious Case of API Security
  • Security Capability Engineering
  • Ought implies can
  • Security > 140 Chat with T. Rob Wyatt on MQ and Middleware Security
  • Privilege User Management Bubble?
  • The part where security products solve the problem
  • Four Often Overlooked Factors to Give Your Security Team a Fighting Chance

Blogroll

  • Adding Simplicity - An Engineering Mantra
  • Adventures of an Eternal Optimist
  • Andy Steingruebl
  • Andy Thurai
  • Anton Chuvakin
  • Beyond the Beyond
  • cat slave diary
  • Ceci n'est pas un Bob
  • ConnectID
  • Cryptosmith
  • Emergent Chaos: Musings from Adam Shostack on security, privacy, and economics
  • Enterprise Integration Patterns: Gregor's Ramblings
  • Financial Cryptography
  • infosec daily: blogs
  • Jack Daniel
  • James Kobielus
  • James McGovern
  • John Hagel
  • Justice League [Cigital]
  • Kim Cameron's Identity Weblog
  • Krypted - Charles Edge's Notes from the Field
  • Lenny Zeltser
  • Light Blue Touchpaper
  • Mark O'Neill
  • Off by On
  • ongoing
  • Patrick Harding
  • Perilocity
  • Pushing String
  • Rational Survivability
  • rdist: setuid just for you
  • RedMonk
  • RiskAnalys.is
  • Rudy Rucker
  • Software For All Seasons
  • Spire Security Viewpoint
  • TaoSecurity
  • The New School of Information Security
  • Windley's Technometria
  • zenpundit

MetriCon 3.0

Along with OWASP's AppSec conferences, MetriCon is at the top of my list of conferences. MetriCon brings together people with varied backgrounds and a common interest in making security more objective and measurable. This year's conference chair is Dan Geer, and the agenda and speakers look like the best yet. MetriCon 3.0 is July 29 in San Jose, co-located with the Usenix Security conference.

July 03, 2008 in Security, Security Metrics | Permalink | Comments (1)

Can you hear me now?

Verizon released a very interesting Data Breach report that analyzes over 500 forensic reports from their investigations over a number of years. It is great work by Verizon to gather this data and to publish it. As a consultant I go into lots of companies that could learn a lot just by being more open and talking through issues with peers in other companies. It would be great to see other companies follow Verizon's lead.


I suggest you read their report, and I would like to add a little color to their findings from the perspective of the swamp I spend most of my time in - Web services security. Granted it is just one report, but the data run counter to a lot of conventional security "wisdom":

Who is behind data breaches? 

73% resulted from external sources
18% were caused by insiders 
39% implicated business partners 
30% involved multiple parties


The internal/external divide is pretty silly these days, as is companies' reciting "inside the firewall and outside the firewall." I spend most of my time hooking things up together precisely _so_ they interoperate remotely; the firewall is a speed bump at best. At any rate, external sources are a primary concern in Web services security, because - hey look, our Web service front end just made your Mainframe/AS400/Unix DB/CICS/whatever accessible remotely. This is great from a functionality standpoint, but the issue is that these back end systems were never designed with anything remotely resembling an Internet threat model. Additionally, the Verizon team's findings around business partners and multiple parties strike at the heart of a number of popular misconceptions in Web services security - "well, it's just B2B and it's behind a firewall."


How do breaches occur?

62% were attributed to a significant error
59% resulted from hacking and intrusions
31% incorporated malicious code
22% exploited a vulnerability
15% were due to physical threats


A couple of things to note here - malicious code, in my opinion, is likely to be the biggest problem in Web services security going forward. There is a large gap waiting to be exploited here: you have no control over the other end of the pipe, plus a massive attack surface; the only thing lacking is the attacker's ability to find and exploit it, which I strongly suspect is just a matter of time. With regard to hacking and intrusions, we have the remote, passive nature of security in the Web services world to blame. Paraphrasing Jeff Williams, the problem is that an attacker can just try an attack; if it doesn't work, try again, and again, and so on. This is partially because of the loosely coupled nature of the systems, but it is also because commonly used information security protocols have diverged from reality - they are modeled with an object-centric mentality, where you "own" the object you are protecting and can afford to put passive controls around it.

What commonalities exist? 


66% involved data the victim did not know was on the system
75% of breaches were not discovered by the victim
83% of attacks were not highly difficult
85% of breaches were the result of opportunistic attacks
87% were considered avoidable through reasonable controls

Many of the attacks against Web services are not difficult; in my training class we'll typically execute 8-10 different attacks in a two day period. But the big one from this list is the first one - the amazing amount of attack surface offered up by Web services. Brad Hill has done a good job articulating these issues in SOAP/XML/WS-*, but at an enterprise it's even bigger than those standards - the thing is, we use Web services to make stuff interoperate, to make stuff reusable, and to virtualize endpoints. Great stuff if what you want to do is decentralize your business, but this creates oceans of space for attackers to roam. When you look beyond the Visio and IDE view of Web services and get to the runtime, there is an amazing amount of detritus left behind by all these layers.



June 27, 2008 in Security, Security Metrics | Permalink | Comments (0)

MetriCon 3.0

MetriCon 3.0 — Third Workshop on Security Metrics

Tuesday, 29 July 2008, San Jose, California

___________________________________________________________________

8:45am: Welcome words / housekeeping details - Dan Geer

___________________________________________________________________

9:00am-10:30am - Models proposed and derived

• Thomas Heyman & Christophe Huygens: "Using Model Checkers to Elicit Security Metrics"
• Adam O’Donnell: "Games, Metrics, and Emergent Threats"
• Fred Cohen: "Bringing Clarity to Security Decision Making Using Qualitative Metrics in 2 Dimensions"

Discussants: Lloyd Ellam & Elizabeth Nichols

___________________________________________________________________

10:45am-12:15pm - Tools and their application

• Yolanta Beresnevichiene: "Metrics Driving Security Analytics"
• Alain Mayer: "Security Risk Metrics: The View From the Trenches"
• Amrit Williams: "How to Define and Implement Operationally Actionable Security Metrics"

Discussants: Gunnar Peterson & Andrew Jaquith

___________________________________________________________________

12:15pm-1:30pm - In-room lunch, the final 30 minutes jointly from

• Jennifer Bayuk: "Comparing Metrics Designed for Risk-Management with Metrics Designed for Security"

Discussant: Bryan Ware

___________________________________________________________________

1:30pm-3:00pm - Scoring results and methods

• James Walden: "Code Complexity and Static Analysis"
• Karen Scarfone: "Evidence-Based, Good Enough, & Open"
• Arshad Noor: "Identity Protection Factor"

Discussants: Fred Cohen & Dan Conway

___________________________________________________________________

3:15pm-4:45pm - Enterprise plans and lessons learned

• Caroline Wong: "eBay’s Metrics Program"
• Clint Kreitner: "CIS’ Metrics Program"
• Kevin Peuhkurinen: "Great-West’s Metrics Program"

Discussants: Christine Whalley & Dan Geer

___________________________________________________________________

5:00pm-5:45pm - Perimeters are the simplest possible thing to measure, right?

• Sandeep Bhatt: "Metrics-Based Firewall Management"
• Avishai Wool: "Firewall Configuration Errors Revisited"

Discussant: Bob Blakley

___________________________________________________________________

5:45pm-whenever: Minimalist closing remarks - Dan Geer

Drinks & dinner in room, and whatever happens next — which it is hoped includes lessons learned, volunteers for further episodes of MetriCon, ideas on how we can best further support ourselves jointly, etc. Perhaps we will have someone stand up and lead such a discussion; consider that part of the program still fluid.

June 13, 2008 in Security, Security Metrics | Permalink | Comments (0)

Stalking the right software security metric

Zach Gemignani from Juice Analytics posits the following rules for Choosing the Right Metric:

[Figure: Juice Analytics' framework for choosing the right metric]

One of the best tools for security metricians is static analysis; let's see how these tools stack up against the four dimensions.

Actionable - Static analysis findings are actionable because the tools prescribe remediations for the security vulnerabilities they find.

Common interpretation - Generally this is the hardest thing to get "out of the box"; common interpretation for security metrics usually requires mapping to policy, architecture, and/or standards that are agreed on.

Accessible, credible data - Static analysis conducted against an objective, customizable set of rules provides a good way to both see the rule and verify its logic.

Transparent, simple calculation - At MetriCon 2.0, Fredrick DeQuan Lee from Fortify showed a nice, simple calculation for grading applications, based on the Morningstar model for rating mutual funds:

1 Star: Absence of Remote and/or Setuid Vulnerabilities

2 Stars: Absence of Obvious Reliability Issues

3 Stars: Follow Best Practices

4 Stars: Documented Secure Development Process

5 Stars: Passed Independent Security Review

I am a big fan of maturity continuums such as this (if you can't get one star, there is not a lot we can do for you), because they give you a fixed point and something to shoot for as you improve. This is just one example, but I think static analysis tools are the best security metrics tool we have in software security.
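To make the continuum concrete, here is a minimal sketch of how scan output could be mapped onto such a star rating. The finding categories and process flags are hypothetical stand-ins of my own, not Fortify's actual scoring rules.

```python
# Minimal sketch: map scan results and process attributes onto a
# Morningstar-style maturity rating. Category names and flags below are
# hypothetical illustrations, not Fortify's actual rules.

def star_rating(finding_categories, process):
    """finding_categories: categories reported by a static analysis scan.
    process: dict of process-level attributes gathered outside the scanner."""
    stars = 0
    if not ({"remote_exploit", "setuid"} & set(finding_categories)):
        stars = 1                                   # no remote and/or setuid vulns
        if "reliability" not in finding_categories:
            stars = 2                               # no obvious reliability issues
            if process.get("follows_best_practices"):
                stars = 3                           # follows best practices
                if process.get("documented_sdl"):
                    stars = 4                       # documented secure development process
                    if process.get("passed_independent_review"):
                        stars = 5                   # passed independent security review
    return stars

# A project with a clean scan and best practices in place, but no documented
# secure development process yet, rates three stars.
print(star_rating([], {"follows_best_practices": True}))  # 3
```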

Got ideas for the "right" security metric? MetriCon 3.0 is coming up soon!

May 06, 2008 in Security, Security Metrics | Permalink | Comments (0)

MetriCon 3.0 CFP

Call for Participation

MetriCon 3.0
Third Workshop on Security Metrics
Tuesday, 29 July 2008, San Jose, California

Overview

Security metrics -- an idea whose time has come. No matter whether you read the technical or the business press, there is a desire for converting security from a world of adjectives to a world of numbers. The question is, of course, how exactly to do that. The advantage of starting early is, as ever, harder problems but a clearer field though it is very nearly too late to start early. MetriCon is where hard progress is made and harder problems brought forward.

The MetriCon Workshops offer lively, practical discussion in the area of security metrics. It is a, if not the, forum for quantifiable approaches and results to problems afflicting information security today, with a bias towards practical, specific implementations. Topics and presentations will be selected for their potential to stimulate discussion in the Workshop. Past events are detailed here and here; see, especially, the meeting Digests on those pages.

MetriCon 3.0 will be a one-day event, Tuesday, July 29, 2008, in San Jose, California, USA. The Workshop begins first thing in the morning, meals are taken in the meeting room, and work/discussion extends into the evening. As this is a workshop, attendance is by invitation (and limited to 60 participants). Participants are expected to "come with findings," to "come with problems," or, better still, both. Participants should be willing to discuss what they have and need, i.e., to address the group in some fashion, formally or not. Preference will naturally be given to the authors of position papers/presentations who have actual work in progress.

Presenters will each have a short 10-15 minutes to present his or her idea, followed by another 10-15 minutes of discussion. If you would like to propose a panel or a group of related presentations on different approaches to the same problem, then please do so. Also, consistent with a Workshop format, the Program Committee will be steered by what sorts of proposals come in response to this Call.

Goals and Topics

Our goal is to stimulate discussion of, and thinking about, security metrics and to do so in ways that lead to realistic, early results of lasting value. Potential attendees are invited to submit position papers to be shared with all, with or without discussion on the day of the Workshop. Such position papers are expected to address security metrics in one of the following categories:

  • Benchmarking of security technologies
  • Empirical studies in specific subject matter areas
  • Financial planning
  • Long-term trend analysis and forecasts
  • Metrics definitions that can be operationalized
  • Security and risk modeling including calibrations
  • Tools, technologies, tips, and tricks
  • Visualization methods both for insight and lay audiences
  • Data and analyses emerging from ongoing metrics efforts
  • Other novel areas where security metrics may apply

Practical implementations, real world case studies, and detailed models will be preferred over broader models or general ideas.

How to Participate

Submit a short position paper or description of work done or ongoing. Your submission must be brief -- no longer than five (5) paragraphs or presentation slides. Author names and affiliations should appear first in or on the submission. Submissions may be in PDF, PowerPoint, HTML, or plaintext email and must be submitted to metricon3 AT securitymetrics.org. These requests to participate are due no later than noon GMT, Monday, May 12, 2008 (a hard deadline).

The Program Committee will invite both attendees and presenters. Participants of either sort will be notified of acceptance quickly -- by June 2, 2008. Presenters who want hardcopy materials to be distributed at the Workshop must provide originals of those materials to the Program Committee by July 21, 2008. All slides, position papers, and what-not will be made available to all participants at the Workshop. No formal academic proceedings are intended, but a digest of the meeting will be prepared and distributed to participants and the general public. (Digests for previous MetriCon meetings are on the past event pages mentioned above.) Plagiarism is dishonest, and the organizers of this Workshop will take appropriate action if dishonesty of this sort is found. Submission of recent, previously published work as well as simultaneous submissions to multiple venues is entirely acceptable, but only if you disclose this in your proposal.

Location

MetriCon 3.0 will be co-located with the 17th USENIX Security Symposium at the Fairmont Hotel in San Jose, California.

Cost

$225 all-inclusive of meeting space, materials preparation, and meals for the day.

Important Dates

Requests to participate: by May 12, 2008
Notification of acceptance: by June 2, 2008
Materials for distribution: by July 21, 2008

Workshop Organizers

Dan Geer, Geer Risk Services, Chair
Bob Blakley, The Burton Group
Fred Cohen, Fred Cohen & Associates & California Sciences Institute
Dan Conway, Indiana University
Lloyd Ellam, Iceberg Networks
Andrew Jaquith, The Yankee Group
Elizabeth Nichols, PlexLogic
Gunnar Peterson, Arctec Group
Bryan Ware, Digital Sandbox
Christine Whalley, Pfizer

May 05, 2008 in Security, Security Metrics | Permalink | Comments (0)

Web services security metrics

Here is a list of measurements we built today in my SOA, Web Services Security class (a minimal sketch of computing a couple of them follows the list):

Metric: Number of Web services vulns
Source: scanner
Expression: H/M/L

Metric: Application issues - exposure issues
Source: *.WAF logs
Expression: Security events vs. number of requests

Metric: Anomalous behavior
Source: application logs
Expression: known good vs anomalies

Metric: Policy Compliance
Source: SOAP Sniffer, Encryption Scheme, Keys
Expression: number of policy events - success/fail

Metric: AuthN strength
Source: SOAP Analyzer
Expression: Policy compliance for service request authN

Metric: AuthZ
Source: Design or logs or access Policy Enforcement Point
Expression: Success/fail based on policy

Metric: Unsuccessful authN
Source: Log files, *.AccessMgmt
Expression: Pct of failed requests

Metric: Input validation errors - XSD validation
Source: JAXB schema validation, *.XSG
Expression: Number of fields vs. validation failures

Metric: XDoS
Source: Synthetic transaction monitor
Expression: Availability, and uptime

Metric: R/R
Source: App logs or gateway
Expression: Number of inbound requests vs. number of responses


Metric: Usage patterns
Source: App logs or gateway
Expression: Value metric based on usage

Metric: XSS, SQL Injection, XML Injection
Source: IDS, IPS
Expression: Time of attacks, before or after business hours

Metric: authN vs. un-authN attacks
Source: IDS, IPS
Expression: reverse engineer based on resource - success/fail

Metric: Access logs, servlets - learn about app
Source: proxy, app logs
Expression: known and unknown resource deltas

Metric: success/unsuccessful cross domain authN
Source: access control container
Expression: measure by services
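As promised, here is a minimal sketch of computing two of these (unsuccessful authN and R/R) from logs. The record format ("event" and "outcome" fields) is a hypothetical stand-in for whatever your gateway or access management product actually emits.

```python
# Minimal sketch of computing two of the class metrics from application or
# gateway logs. The record format ("event"/"outcome" fields) is a hypothetical
# stand-in for whatever your gateway or access management product emits.

def failed_authn_pct(records):
    """Unsuccessful authN: percentage of authentication requests that failed."""
    authn = [r for r in records if r["event"] == "authn"]
    if not authn:
        return 0.0
    failed = sum(1 for r in authn if r["outcome"] == "fail")
    return 100.0 * failed / len(authn)

def request_response_gap(records):
    """R/R: inbound requests vs. responses; a persistent gap points to dropped
    or swallowed messages worth a closer look."""
    requests = sum(1 for r in records if r["event"] == "request")
    responses = sum(1 for r in records if r["event"] == "response")
    return requests - responses

sample = [
    {"event": "authn", "outcome": "ok"},
    {"event": "authn", "outcome": "fail"},
    {"event": "request", "outcome": "ok"},
    {"event": "response", "outcome": "ok"},
]
print(failed_authn_pct(sample))      # 50.0
print(request_response_gap(sample))  # 0
```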


November 13, 2007 in Security, Security Metrics | Permalink | Comments (3)

Network Security Budget Cruft - Why you are probably spending waaayyy too much on network security

A while back, Dan Geer posed the following questions:

  • How secure am I?
  • Am I better than this time last year?
  • Am I spending the right amount of $$?
  • How do I compare to my peers?
  • What risk transfer options do I have?

Dan asserted, and I agree, that these are perfectly reasonable questions for senior management to ask, and that virtually any part of a business can provide some enlightenment on them; the exception is infosec, which has virtually no way to answer any of them today.

So anyway, following up on Mike Rothman's tip on surviving budget season, let's drill down on the question - am I spending the right amount of $? And let's examine whether, for the $ I have, I am spending it on the right things.

Let's assume you have a fictional infosec budget of $100 - where should you focus your spend? One good thing about budget numbers is that they are generally readily available. This is an exercise I have done for a number of clients; it can be done by an outside consultant, even in a very large organization, in about two weeks, and an employee who knows where to look can probably get it done in half that.

One thing I learned from Pete Lindstrom is that an asset can be valued as being worth "no less than what you pay to develop, own, and operate it." Hopefully it is worth more (if you like profits), but it is worth at least what you paid for it. This is the floor.

Now to apply the budget to layers that are useful to security, we will break up the overall IT budget into Network spend (what do you spend to operate your network), Host spend (sys admin, OS, licenses, and so on), Application spend (what do you spend on app dev, app servers, and so on), and Data spend (DBAs, database licenses, and so on). Let's assume ABC Ice Cream Co spends the following:

IT Budget

  • Network 2,000,000
  • Host 8,000,000
  • Applications 32,000,000
  • Data 12,000,000

It sure looks to me like the business values - apps, data, hosts, and network - in that order. Again these are big numbers, but big companies are good at some things - one of these things is assigning spend and cost centers, so for decision support purposes you can find "good enough" numbers in a relatively short amount of time. Now let's look at the same categories for IT security spend - network security (firewall, IDS, and so on), host security (VM, hardening, and so on), app security (static analysis, SDLC, web services security, and so on), and data security (xml security, data encryption, backups, and so on):

IT Security Budget

  • Network 750,000
  • Host 400,000
  • Applications 250,000
  • Data 100,000

It looks to me like IT security thinks the most important areas are - network, host, apps, and data. We can compare these two budget priorities thusly:

[Chart: IT budget vs. IT security budget by layer]

Now there are a couple of possible takeaways here. One is that the People's Republic of IT Security is just waaaayyyy smarter than the business folks; if we just gave IT Security control over all business strategy, the stock price would go right to $120. Another view is that IT Security is completely out of alignment with how and where the business invests its dollars. Run the numbers using the above breakdowns on your own organization and see what you come up with. These are fictional, but I bet the priorities are pretty similar in your shop.
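To make "running the numbers" concrete, here is a minimal sketch that compares how IT and IT security allocate spend across the layers, and what a security budget rebalanced toward the IT allocation would look like, using the fictional ABC Ice Cream Co figures above.

```python
# Minimal sketch: compare IT vs. IT security spend shares by layer, and show a
# security budget rebalanced in proportion to the IT allocation. Figures are
# the fictional ABC Ice Cream Co numbers from above.

it_spend = {"Network": 2_000_000, "Host": 8_000_000,
            "Applications": 32_000_000, "Data": 12_000_000}
security_spend = {"Network": 750_000, "Host": 400_000,
                  "Applications": 250_000, "Data": 100_000}

it_total = sum(it_spend.values())
security_total = sum(security_spend.values())

for layer in it_spend:
    it_share = it_spend[layer] / it_total
    sec_share = security_spend[layer] / security_total
    rebalanced = security_total * it_share  # security $ aligned to IT priorities
    print(f"{layer:12} IT {it_share:6.1%}  Security {sec_share:6.1%}  "
          f"rebalanced ${rebalanced:,.0f}")
```

On these numbers the network layer soaks up 50% of the security budget while accounting for under 4% of overall IT spend - which is exactly the cruft this post is about.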

Now I am not in any way suggesting that IT Security just parrot back and copy the budget percentage spends, but what I am saying is that 1) there should be some alignment of priorities and 2) the alignment should be the starting point of IT Security investment instead of "hey we have all these network security licenses/people/devices". The starting point is aligning security investment with the business and assets, not investing in network security because that was a good idea in 1997 and hey that's how we've always done it - doing so is pure budget cruft.

So if we rebalance the IT Security spend we can arrive at something that reflects IT Security's competencies and aligns better with what the business values.

[Chart: rebalanced IT security budget]

Obviously taking into account the business' priorities adds additional constraints, but delivering in the face of constraints is what separates engineers from apes.

Update: Mark Curphey takes a look at the budget issue from another perspective.

Update 2: Interview on out of control IT Security budgets

October 03, 2007 in Security, Security Metrics | Permalink | Comments (5)

MetriCon 2.0: Oh Well a Touch of Gray Kinda Suits You Anyway

MetriCon 2.0 happened on Tuesday in Boston, featuring a lot of collaborative, constructive dialog (the talks were 15 min. each with 15 min. of open discussion). Slides are here. Lots of good feedback from participants.

An interesting thing happened in the morning. Fredrick DeQuan Lee from Fortify showed some examples of security metrics in practice, including some findings from their review of open source projects. Using Fortify's tools they scanned the Azureus, Blojsom, and Groovy on Rails projects weekly - over 74 million lines of code scanned each week. To rank the findings they developed a rating system like the Morningstar rating system for mutual funds. The ratings are:

1 Star: Absence of Remote and/or Setuid Vulnerabilities
2 Stars: Absence of Obvious Reliability Issues
3 Stars: Follow Best Practices
4 Stars: Documented Secure Development Process
5 Stars: Passed Independent Security Review

So first off, I am a big fan of maturity continuums in security. They eliminate the black/white boolean "you are 100% compliant with our ivory tower policy (which no one is) or you are forever broken" view of the world. Maturity continuums also give a way to make incremental progress over time without having to have every project cram every security feature in before going live.

Next, the Fortify star system sets a harsh bar for even attaining one star (more on this later); on the plus side, 1 star and 2 stars should be measurable quantitatively. As you move up, the stars become fuzzier and more qualitative. I don't have a big issue with this because, as with stocks, we can use separate criteria for assessing penny stocks than for assessing 3M or Walmart. When you are in the realm of debating whether to invest in 3M or in Walmart, it is a different discussion. Would that we had more of these types of discussions in security!

After the Fortify presentation, Jeremiah Grossman showed the data his team collected; that data showed 70% of the websites they assessed had serious vulnerabilities (XSS, information leakage, etc.). Interestingly, they also correlated the vulns across industry segments, which showed marked differences in security profile based on sector. Good job, retail sector! Retail came in with the lowest percentage of vulns, which could indicate that the higher spending on security we see in Fin Svcs is not paying dividends.

At any rate, in the Q&A portion someone asked Jeremiah if *any* of the hundreds of sites included in his survey could gain even one star in the aforementioned Fortify ranking system. No way.

Is this a problem? Is Fortify's system broken? I don't think so. I think we need maturity continuums because just getting to one star is hard enough, and if we say you have 5 stars or you're broken, no one will get there. Evah. I don't think we need one uber ranking system either; Morningstar is just one of hundreds (thousands?) in the financial world. We also need different criteria for assessing penny stocks (Jeremiah's) from those we use to measure lower risk, more mature stocks (Fortify). For web apps today, the bad news is that we have a metric ton of penny stocks.

August 09, 2007 in Metricon, Security, Security Metrics | Permalink | Comments (2)

Chicken Soup for the CISO's Soul

Andrew Jaquith's book Security Metrics - Replacing Fear, Uncertainty and Doubt is all killer, no filler. Jaquith provides new directions in a field, information security, that sorely needs them. In a sea of infosec books this one stands out - a fresh approach to an important yet misunderstood topic; a focus on how to communicate, which is a key to success; and the use of numbers to amplify the decision support process.

Simply put, Security Metrics is a cookbook of ideas: you can pick up any chapter, read it, and get actionable ideas on how to improve decision making in your security organization. The book begins by neatly encapsulating the flailing efforts seen in many enterprise infosec groups, which Jaquith dubs the "Hamster Wheel of Pain", aka ignorance is bliss. Set against this all too common problem statement are security metrics, which Jaquith proposes as the way to measure whether your security is getting better.

There is, of course, more than one way to approach security measurement. Jaquith looks at two - Measurers and Modelers. Measurers look at empirical data, correlation, essential practices, economic spending, and before and after views. Modelers are more concerned with risk equations, loss expectancy, attack surfaces, and why questions. Most of the book is focused on a measurer's approach, so we don't get to see a grand overarching model. On the plus side we do get lots of metrics recipes that can be plugged in and used in a real world infosec program.

Probably the best chapter for the uninitiated is chapter 2, Defining a Good Security Metric, which summarizes these rules for good security metrics - consistently measured, cheap to gather, expressed as a cardinal number, and expressed using at least one unit of measure. The chapter is equally useful in describing what metrics are not, explicitly excluding infosec sacred cows like ISO 17799 audit metrics and Annual Loss Expectancy. If you are going to send a message to the rest of the herd, you have to be prepared to shoot some of the lead buffalo. Thank you, Mr. Jaquith.
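As a rough illustration (my own shorthand, not the book's), a metric that follows those rules can be captured as a small record that always carries a cardinal value, a unit, a source, and a fixed measurement method:

```python
# Rough illustration of Jaquith's criteria as a data structure: a good metric
# is consistently measured, cheap to gather, a cardinal number, and carries a
# unit of measure. Field names here are my own shorthand, not from the book.

from dataclasses import dataclass

@dataclass
class SecurityMetric:
    name: str      # what is being measured
    value: float   # cardinal number, not a rating or a traffic light
    unit: str      # at least one unit of measure
    source: str    # where it is gathered, ideally cheaply and automatically
    method: str    # how it is computed, so it can be measured consistently

patch_latency = SecurityMetric(
    name="Mean patch latency",
    value=21.0,
    unit="days",
    source="patch management system export",
    method="mean of (install date - vendor release date) across hosts",
)
print(patch_latency)
```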

    Chapters 3 & 4 are where the cookbook comes together with a large number of detailed metrics recipes for measuring aspects of network security, host security, application security and so on. This is the "take this back to your desk and start working on this part" stuff. Chapter 5 presents a good overview of measurement analysis techniques so that you can better understand that which you just gathered. Useful again, because we are now in the realm of using numbers to better understand security instead of mere axiom.

    The last part of the book is very important for enterprise infosec because it deals with scorecards and visualization, my partner Pat Christiansen likes to say the architecture is 50% technical ability and 50% communication. These chapters provide some Tufte-esque approaches to communicating the findings to different security stakeholders types with ideas for facilitating communication up, down, and across the organization.

    This is really a good book for anyone in IT to demystify the fud-laden world of IT security. If you work in security it is a must read. If you manage a security group, I recommend buying a copy for everyone on your staff, wait 2-4 weeks, and come back ask where the heck are all the decision support metrics?

August 03, 2007 in Books, Security, Security Metrics | Permalink | Comments (0)

William Gibson Thinks You Should Go to Metricon

Ok, he did not explicitly say that about Metricon. But, in his last book "Pattern Recognition", he did underscore our current information security dilemma:

“We have no future because our present is too volatile. We have only risk management. The spinning of the given moment's scenarios. Pattern recognition...”

Risk is the price we pay to move forward, innovate, and grow the enterprise. Risk management is different from the standard "it's perfect or it's broken and I can't help you unless you follow everything in the 137 page policy manual" mentality that most IT security groups attempt to govern by.

Stepping into a risk management mindset means accepting that there are going to be tradeoffs; the question then becomes how to reason about those tradeoffs. Well, today we mainly use axioms and "best" practices - for example, gems like inside firewall = good, outside firewall = bad. (I am three years behind on patches on the Oracle system that has all our customer data, but it's inside the firewall.)

In trying to find better ways to reason about the tradeoffs and measure the efficacy over time, look to security metrics to objectively illustrate your system's capabilities. Use numbers and measurements to add weight to, and to challenge, existing axioms to see if the assumptions that were made actually hold up.

This is what we'll explore in the collaborative workshop Metricon, held in conjunction with the Usenix Security conference in Boston, August 7. There is limited time to register, so if you are interested in attending, do it soon.

(Since I took Mr. Gibson's name in vain, I should mention he has a new book coming out - "Spook Country" - which I am excited to read.)

July 16, 2007 in Metricon, Security, Security Metrics | Permalink | Comments (0)


SOS: Service Oriented Security

  • The Curious Case of API Security
  • Getting OWASP Top Ten Right with Dynamic Authorization
  • Top 10 API Security Considerations
  • Mobile AppSec Triathlon
  • Measure Your Margin of Safety
  • Top 10 Security Considerations for Internet of Things
  • Security Checklists
  • Cloud Security: The Federated Identity Factor
  • Dark Reading IAM
  • API Gateway Security
  • Directions in Incident Detection and Response
  • Security > 140
  • Open Group Security Architecture
  • Reference Monitor for the Internet of Things
  • Don't Trust. And Verify.

Archives

  • November 2015
  • October 2015
  • September 2015
  • August 2015
  • July 2015
  • June 2015
  • May 2015
  • April 2015
  • March 2015
  • February 2015
